Hung-yi Lee GAN Study Notes (1)

GAN Lecture 1


Yann LeCun: Adversarial training is the coolest thing since sliced bread.

Outline

  • Basic Idea of GAN
  • GAN as Structured Learning
  • Can Generator Learn by Itself?
  • Can Discriminator Generate?
  • A Little Bit Theory

Basic Idea of GAN

Generation

Generator: it is a neural network or a function.

input vector: each dimension of the input vector represents some characteristic of the output.

Discriminator

Discriminator: it is a neural network or a function.

output scalar: a larger value means the input looks real; a smaller value means it looks fake.
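The two roles above can be sketched as plain functions. This is a minimal NumPy sketch with arbitrary toy weights (a real generator and discriminator are deep networks, and these names and shapes are illustrative assumptions, not the lecture's actual models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: maps a low-dimensional input vector z to a generated object x.
W_g = rng.normal(size=(4, 8))   # toy weights; a real generator is a deep network

def generator(z):
    return np.tanh(z @ W_g)     # outputs in [-1, 1], like a normalized image

# Discriminator: maps an object x to a scalar; larger means "more real-looking".
w_d = rng.normal(size=8)

def discriminator(x):
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))   # sigmoid squashes score into (0, 1)

z = rng.normal(size=4)          # each dimension encodes some characteristic
x_fake = generator(z)
print(x_fake.shape)             # (8,)
print(0.0 < float(discriminator(x_fake)) < 1.0)   # True
```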

GAN Algorithm


  • Initialize generator and discriminator

  • In each training iteration:

    Step 1: fix generator G, and update discriminator D

    Step 2: fix discriminator D, and update generator G


Initialize \theta_d for D and \theta_g for G

  • In each training iteration:

    • Sample m examples \{ x^1, x^2, \dots, x^m \} from the database
    • Sample m noise samples \{ z^1, z^2, \dots, z^m \} from a distribution
    • Obtain generated data \{ \widetilde{x}^1, \widetilde{x}^2, \dots, \widetilde{x}^m \}, \, \widetilde{x}^i=G(z^i)
    • Update discriminator parameters \theta_d to maximize
      • \widetilde{V}=\frac{1}{m}\sum^m_{i=1}\log D(x^i)+\frac{1}{m}\sum^m_{i=1}\log(1-D(\widetilde{x}^i))
      • \theta_d \gets \theta_d+\eta\nabla_{\theta_d}\widetilde{V}(\theta_d)

    Learning D

    • Sample m noise samples \{ z^1, z^2, \dots, z^m \} from a distribution
    • Update generator parameters \theta_g to maximize
      • \widetilde{V}=\frac{1}{m}\sum_{i=1}^m\log\big(D(G(z^i))\big)
      • \theta_g\gets\theta_g+\eta\nabla_{\theta_g}\widetilde{V}(\theta_g)

    Learning G
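The alternating two-step algorithm above can be sketched end to end on a toy 1-D problem. This is a dependency-free sketch, assuming a linear generator, a logistic discriminator, and finite-difference gradients in place of backpropagation; everything here (`theta_g`, `theta_d`, the toy data distribution) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: real data ~ N(3, 0.5); generator is x = theta_g[0] + theta_g[1] * z.
theta_g = np.array([0.0, 1.0])
theta_d = np.array([0.0, 0.0])   # D(x) = sigmoid(theta_d[0] * x + theta_d[1])

def D(x, td):
    return 1.0 / (1.0 + np.exp(-(td[0] * x + td[1])))

def G(z, tg):
    return tg[0] + tg[1] * z

def num_grad(f, theta, eps=1e-4):
    # finite-difference gradient, standing in for backprop in this sketch
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta); d[i] = eps
        g[i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return g

m, eta = 64, 0.05
for it in range(300):
    x = rng.normal(3.0, 0.5, size=m)   # sample m real examples
    z = rng.normal(size=m)             # sample m noise samples
    x_tilde = G(z, theta_g)            # generated data

    # Step 1: fix G, update D by gradient ascent on V~
    V_d = lambda td: (np.mean(np.log(D(x, td) + 1e-8))
                      + np.mean(np.log(1 - D(x_tilde, td) + 1e-8)))
    theta_d = theta_d + eta * num_grad(V_d, theta_d)

    # Step 2: fix D, update G by gradient ascent on mean log D(G(z))
    z = rng.normal(size=m)
    V_g = lambda tg: np.mean(np.log(D(G(z, tg), theta_d) + 1e-8))
    theta_g = theta_g + eta * num_grad(V_g, theta_g)

print(round(float(theta_g[0]), 2))   # generator mean drifts toward the real mean 3.0
```

The `1e-8` inside each `log` is a numerical-stability guard, not part of the objective.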

GAN as Structured Learning

Structured Learning Approach

  • Bottom Up: Learn to generate the object at the component level

    Drawback: lacks a view of the big picture (components are generated without global coherence)

  • Top Down: Evaluating the whole object, and find the best one

\text{Bottom Up} + \text{Top Down} \Rightarrow \text{GAN}

Can Generator Learn by Itself?

If the auto-encoder technique is used as the generator, a deeper network structure is required.
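The auto-encoder idea can be sketched structurally: an encoder compresses an object to a code, a decoder reconstructs it, and the decoder alone can then act as a generator by decoding a random code. This NumPy sketch uses untrained toy weights purely to show the two paths (training is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Auto-encoder sketch: encoder compresses x (dim 8) to a code c (dim 2),
# decoder maps the code back. Weights are untrained toys.
W_enc = rng.normal(size=(8, 2))
W_dec = rng.normal(size=(2, 8))

def encode(x):
    return np.tanh(x @ W_enc)

def decode(c):
    return np.tanh(c @ W_dec)

x = rng.normal(size=8)
recon = decode(encode(x))            # training minimizes ||x - recon||^2
sample = decode(rng.normal(size=2))  # generator path: decode a random code
print(recon.shape, sample.shape)     # (8,) (8,)
```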

Can Discriminator Generate?

Discriminator -- Training

  • General Algorithm
    • Given a set of positive examples, randomly generate a set of negative examples.
  • In each iteration
    • Learn a discriminator D that can discriminate positive and negative examples.
    • Generate negative examples by discriminator D: \widetilde{x}=\arg\max_{x\in X}D(x)
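The arg max step above can be sketched when the candidate space X is small enough to enumerate. This toy sketch assumes 3-bit "objects" and an arbitrary fixed discriminator; for real objects such as images the search is intractable, which is exactly the weakness noted below:

```python
import numpy as np

# Toy discriminator over 3-bit objects; the weights are arbitrary assumptions.
w = np.array([1.0, -2.0, 0.5])

def D(x):
    return 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid score

# Enumerate every candidate in X (all 3-bit vectors) and take the arg max.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
x_tilde = X[np.argmax([D(x) for x in X])]   # negative example for next iteration
print(x_tilde)   # -> [1. 0. 1.]  (maximizes 1*a - 2*b + 0.5*c)
```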

Generator vs. Discriminator

  • Generator
    • Pros:
      • Easy to generate even with deep model.
    • Cons:
      • Only imitates the appearance of the training data.
      • Hard to learn the correlation between components.

  • Discriminator
    • Pros:
      • Considering the big picture.
    • Cons:
      • Generation is not always feasible, especially when your model is deep.
      • How to do negative sampling?

Discriminator + Generator -- Training

  • General Algorithm
    • Given a set of positive examples, randomly generate a set of negative examples.
  • In each iteration
    • Learn a discriminator D that can discriminate positive and negative examples.
    • Generate negative examples by discriminator D: \widetilde{x}=\arg\max_{x\in X}D(x); in a GAN, the generator G takes over this intractable \arg\max step and directly outputs \widetilde{x}=G(z).