StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

Problem

  • existing models are both inefficient and ineffective at multi-domain image-to-image translation: learning all mappings among k domains requires training k(k−1) separate generators
  • they are incapable of jointly training on domains from different datasets

New method

  • StarGAN, a novel and scalable approach that performs image-to-image translation for multiple domains using only a single model
  • adding a mask vector to the domain label enables joint training between domains of different datasets

Star Generative Adversarial Networks

1. Multi-Domain Image-to-Image Translation

| notation | meaning |
| --- | --- |
| x | input image |
| y | output image |
| c | target domain label |
| c' | original domain label |
| Dsrc(x) | a probability distribution over sources given by D |
| Dcls(c'\|x) | a probability distribution over domain labels computed by D |
| λ_{cls} | hyper-parameter that controls the relative importance of the domain classification loss |
| λ_{rec} | hyper-parameter that controls the relative importance of the reconstruction loss |
| m | a mask vector |
| [·] | concatenation |
| c_i | a vector for the labels of the i-th dataset |
| \hat{x} | sampled uniformly along a straight line between a pair of real and generated images |
| λ_{gp} | hyper-parameter that controls the gradient penalty |
  • Goal: to train a single generator G that learns mappings among multiple domains
  • train G to translate an input image x into an output image y conditioned on the target domain label c, G(x, c) → y
  • the discriminator produces probability distributions over both sources and domain labels, D : x → {Dsrc(x), Dcls(x)}, which allows a single discriminator to control multiple domains

Adversarial Loss

\mathcal{L}_{adv} = \mathbb{E}_x [\log D_{src}(x)] + \mathbb{E}_{x,c}[\log (1- D_{src}(G(x, c)))]\tag{1}

Dsrc(x) is a probability distribution over sources given by D. The generator G tries to minimize this objective, while the discriminator D tries to maximize it.
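As a sanity check, Eq. (1) can be evaluated with scalar stand-ins for the batch expectations; the probability values below are hypothetical:

```python
import math

def adversarial_loss(d_real, d_fake):
    """Eq. (1): E[log D_src(x)] + E[log(1 - D_src(G(x, c)))].

    d_real: D_src(x), the probability D assigns to a real image.
    d_fake: D_src(G(x, c)), the probability D assigns to a generated image.
    Scalar stand-ins for the expectations over a batch.
    """
    return math.log(d_real) + math.log(1.0 - d_fake)

# D maximizes this objective while G minimizes it: a confident discriminator
# (d_real high, d_fake low) pushes the value toward 0 from below.
good_d = adversarial_loss(0.9, 0.1)  # confident D, value near 0
poor_d = adversarial_loss(0.5, 0.5)  # undecided D, more negative value
```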

Domain Classification Loss

  • add an auxiliary classifier on top of D and impose the domain classification loss when optimizing both D and G
  • decompose the objective into two terms: a domain classification loss of real images used to optimize D, and a domain classification loss of fake images used to optimize G
    \mathcal{L}_{cls}^r = \mathbb{E}_{x,c'}[-\log D_{cls}(c'|x)]\tag{2}
    \mathcal{L}_{cls}^f = \mathbb{E}_{x,c}[-\log D_{cls}(c|G(x,c))]\tag{3}
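A minimal sketch of Eqs. (2) and (3) with a softmax classifier over hypothetical domain logits (the real model applies this to image batches rather than a single logit vector):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def domain_cls_loss(logits, target):
    """-log D_cls(target | x), with D_cls given by a softmax over domain logits."""
    return -math.log(softmax(logits)[target])

# Hypothetical 3-domain logits from the auxiliary classifier:
loss_correct = domain_cls_loss([4.0, 0.5, 0.1], target=0)  # confident and right: small loss
loss_wrong   = domain_cls_loss([4.0, 0.5, 0.1], target=2)  # mass on the wrong domain: large loss
```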

Reconstruction Loss

  • problem: minimizing the losses (Eqs. (1) and (3)) does not guarantee that translated images preserve the content of their input images while changing only the domain-related part of the inputs
  • method: apply a cycle consistency loss to the generator
    \mathcal{L}_{rec} = \mathbb{E}_{x,c,c'}[||x-G(G(x,c), c')||_1]\tag{4}
    G takes the translated image G(x, c) and the original domain label c' as input and tries to reconstruct the original image x. The L1 norm is adopted as the reconstruction loss.
    Note that a single generator is used twice: first to translate an original image into an image in the target domain, and then to reconstruct the original image from the translated image.
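The reconstruction term reduces to a mean absolute difference between the input and its two-pass reconstruction; a toy sketch on a few hypothetical "pixel" values:

```python
def l1_reconstruction(x, x_rec):
    """Cycle consistency loss: mean absolute difference between the input
    image x and its reconstruction G(G(x, c), c'), here as flat pixel lists."""
    assert len(x) == len(x_rec)
    return sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)

x     = [0.2, 0.5, 0.9]    # toy "pixels" of the input image
x_rec = [0.25, 0.45, 0.9]  # hypothetical reconstruction after two passes of G
loss = l1_reconstruction(x, x_rec)
```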

Full Objective

\mathcal{L}_D = -\mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^r\tag{5}
\mathcal{L}_G = \mathcal{L}_{adv}+\lambda_{cls}\mathcal{L}_{cls}^f+\lambda_{rec}\mathcal{L}_{rec}\tag{6}

We use λ_{cls} = 1 and λ_{rec} = 10 in all of our experiments.
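Combining the terms into L_D and L_G with the stated weights, using hypothetical per-batch loss values:

```python
def full_objectives(l_adv, l_cls_real, l_cls_fake, l_rec,
                    lambda_cls=1.0, lambda_rec=10.0):
    """Full objectives: L_D = -L_adv + λ_cls * L_cls^r and
    L_G = L_adv + λ_cls * L_cls^f + λ_rec * L_rec."""
    l_d = -l_adv + lambda_cls * l_cls_real
    l_g = l_adv + lambda_cls * l_cls_fake + lambda_rec * l_rec
    return l_d, l_g

# Hypothetical loss values for one batch; note λ_rec = 10 makes even a small
# reconstruction error a dominant term in the generator objective.
l_d, l_g = full_objectives(l_adv=-0.3, l_cls_real=0.2, l_cls_fake=0.4, l_rec=0.05)
```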

2. Training with Multiple Datasets

  • Problem: the complete information on the label vector c' is required when reconstructing the input image x from the translated image G(x, c), but each dataset supplies only part of the labels (e.g. CelebA provides facial attributes but no expression labels)

Mask Vector

  • introduce a mask vector m that allows StarGAN to ignore unspecified labels and focus on the explicitly known label provided by a particular dataset
  • use an n-dimensional one-hot vector to represent m, with n being the number of datasets. In addition, define a unified version of the label as a vector

\tilde{c} = [c_1, c_2, \ldots, c_n, m]

For the remaining n-1 unknown labels we simply assign zero values.
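A minimal sketch of building \tilde{c}, assuming a two-dataset setup with 5 attribute labels (CelebA-style) and 8 expression labels (RaFD-style); the exact dimensions are illustrative:

```python
def unified_label(labels, dataset_idx, dims):
    """Build the unified label c_tilde = [c_1, ..., c_n, m].

    labels:      the known label vector for the sample's own dataset
    dataset_idx: which of the n datasets the sample comes from
    dims:        label dimensionality of each dataset, e.g. [5, 8]
    Unknown labels are filled with zeros; m is a one-hot vector over datasets.
    """
    parts = []
    for i, d in enumerate(dims):
        parts.extend(labels if i == dataset_idx else [0] * d)
    mask = [1 if i == dataset_idx else 0 for i in range(len(dims))]
    return parts + mask

# A sample from dataset 0 with 5 binary attribute labels; the 8 labels of
# dataset 1 are unknown and set to zero, and m = [1, 0] marks the source.
c_tilde = unified_label([1, 0, 0, 1, 1], dataset_idx=0, dims=[5, 8])
```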

Training Strategy

  • use the domain label \tilde{c} as input to the generator
  • the generator learns to ignore the unspecified labels, which are zero vectors, and focus on the explicitly given label
  • extend the auxiliary classifier of the discriminator to generate probability distributions over labels for all datasets
  • train the model in a multi-task learning setting, where the discriminator minimizes only the classification error associated with the known label
  • Under these settings, by alternating between CelebA and RaFD the discriminator learns all of the discriminative features for both datasets, and the generator learns to control all the labels in both datasets.
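The masking idea can be sketched as selecting only the classifier head of the sample's own dataset when computing the loss; the head sizes and logits below are hypothetical, and a softmax head stands in for whatever classifier each dataset actually uses:

```python
import math

def masked_cls_loss(logits_by_dataset, dataset_idx, target):
    """Multi-task classification: only the classifier head belonging to the
    sample's own dataset contributes to the loss; other heads are ignored."""
    logits = logits_by_dataset[dataset_idx]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[target] / sum(exps))

# Hypothetical heads: a 3-label head for one dataset, a 2-label head for
# another. A sample from dataset 1 with true label 1 touches only head 1:
loss = masked_cls_loss([[2.0, 0.1, 0.3], [0.0, 1.0]], dataset_idx=1, target=1)
```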

Implementation

Improved GAN Training

  • replace Eq. (1) with the Wasserstein GAN objective with gradient penalty, defined as

\mathcal{L}_{adv} = \mathbb{E}_x[D_{src}(x)]-\mathbb{E}_{x,c}[D_{src}(G(x,c))]-\lambda_{gp}\mathbb{E}_{\hat{x}}[(||\nabla_{\hat{x}}D_{src}(\hat{x})||_2-1)^2]\tag{7}

where \hat{x} is sampled uniformly along a straight line between a pair of a real image and a generated image. We use λ_{gp} = 10 for all experiments.
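Computing ∇_x̂ D_src(x̂) requires autograd in practice; as a 1-D sketch where the critic is linear, D_src(x) = w·x, the gradient norm is |w| everywhere and the penalty has the closed form λ_gp(|w| − 1)²:

```python
import random

def wgan_gp_loss(d_src, d_grad_norm_at, x_real, x_fake, lambda_gp=10.0):
    """Toy WGAN-GP objective for a 1-D critic.

    d_src:          the critic D_src as a plain function of x
    d_grad_norm_at: ||∇_x D_src(x)|| as a function of x (autograd in practice)
    """
    alpha = random.random()
    x_hat = alpha * x_real + (1 - alpha) * x_fake  # uniform on the line between them
    penalty = lambda_gp * (d_grad_norm_at(x_hat) - 1.0) ** 2
    return d_src(x_real) - d_src(x_fake) - penalty

# A linear critic with slope 1.5 has gradient norm 1.5 everywhere, so the
# result is deterministic despite the random interpolation point.
w = 1.5
loss = wgan_gp_loss(lambda x: w * x, lambda x: abs(w), x_real=1.0, x_fake=0.2)
```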

Network Architecture

  • generator network composed of two convolutional layers with stride two for downsampling, six residual blocks, and two transposed convolutional layers with stride two for upsampling
  • use instance normalization for the generator but no normalization for
    the discriminator.
  • leverage PatchGANs for the discriminator network, which classifies whether local image patches are real or fake.
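The resolution bookkeeping implied by this layout can be traced for a 128×128 input, assuming the stride-2 layers exactly halve or double the spatial size (padding details omitted):

```python
def generator_sizes(h=128):
    """Trace the spatial size through the generator: two stride-2 convs for
    downsampling, six size-preserving residual blocks, and two stride-2
    transposed convs that restore the input resolution."""
    sizes = [h]
    for _ in range(2):   # downsampling convolutions halve the size
        h //= 2
        sizes.append(h)
    for _ in range(6):   # residual blocks keep the resolution
        sizes.append(h)
    for _ in range(2):   # transposed convolutions double the size back
        h *= 2
        sizes.append(h)
    return sizes

sizes = generator_sizes(128)  # 128 → 64 → 32 → ... → 64 → 128
```

The input and output resolutions match, which is what lets the same generator be applied twice for the cycle reconstruction.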