TensorFlow Notes 4.2: Learning Rate

Concepts

Learning rate

learning_rate determines the size of each parameter update. If the learning rate is too large, the parameters being optimized oscillate around the minimum and fail to converge; if it is too small, convergence is very slow.
During training, parameters are updated in the direction of gradient descent on the loss function.
The update rule is:

w_{n+1} = w_n − learning_rate × ∇

where ∇ is the gradient of the loss function with respect to w.
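To make the effect of the learning rate concrete, here is a minimal pure-Python sketch (not from the original note) of this update rule applied to the loss (w + 1)^2 used later in this note, whose gradient is 2(w + 1):

```python
def gradient_descent(w, learning_rate, steps):
    """Repeatedly apply w <- w - learning_rate * gradient for loss = (w + 1)**2."""
    for _ in range(steps):
        grad = 2 * (w + 1)              # d/dw (w + 1)^2
        w = w - learning_rate * grad
    return w

# A moderate learning rate converges toward the minimum at w = -1;
# a tiny one barely moves from the start; an overly large one diverges.
print(gradient_descent(5.0, 0.2, 40))     # close to -1
print(gradient_descent(5.0, 0.0001, 40))  # still near the starting point
print(gradient_descent(5.0, 1.1, 40))     # oscillates with growing magnitude
```

This matches the two failure modes described above: too small never arrives, too large never settles.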

Setting the learning rate

As noted above, an overly large learning rate makes the parameters oscillate around the minimum without converging, while an overly small one makes convergence slow.

Exponentially decayed learning rate: the learning rate is updated dynamically as the number of training steps grows.

It is computed as:

learning_rate = LEARNING_RATE_BASE × LEARNING_RATE_DECAY ^ (global_step / LEARNING_RATE_STEP)

Expressed with TensorFlow functions:

global_step = tf.Variable(0, trainable=False)

learning_rate = tf.train.exponential_decay(
    LEARNING_RATE_BASE,
    global_step,
    LEARNING_RATE_STEP,
    LEARNING_RATE_DECAY,
    staircase=True/False)

Here LEARNING_RATE_BASE is the initial learning rate and LEARNING_RATE_DECAY is the decay rate. global_step counts the training steps taken so far and is marked non-trainable. LEARNING_RATE_STEP, the decay interval, is usually set to the total number of samples divided by the number fed in per batch, so the learning rate is updated once per pass over the data. With staircase=True, global_step / LEARNING_RATE_STEP is truncated to an integer and the learning rate decays in a staircase pattern; with staircase=False, the learning rate follows a smooth decay curve.
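The decay formula itself can be sketched in plain Python (a hand-rolled illustration of the math, not the TensorFlow implementation) to show the staircase versus smooth behavior:

```python
LEARNING_RATE_BASE = 0.1    # initial learning rate
LEARNING_RATE_DECAY = 0.99  # decay rate
LEARNING_RATE_STEP = 10     # decay interval, e.g. total samples / BATCH_SIZE

def exponential_decay(global_step, staircase):
    exponent = global_step / LEARNING_RATE_STEP
    if staircase:
        exponent = int(exponent)  # truncate -> stepwise (staircase) decay
    return LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** exponent

# staircase=True: the rate stays constant within each interval of 10 steps.
print(exponential_decay(5, True), exponential_decay(15, True))
# staircase=False: the rate shrinks a little at every step.
print(exponential_decay(5, False))
```

With staircase=True the rate only drops when a full interval has elapsed, which is why steps 0–9 all share the base rate 0.1 in this sketch.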

Code

#coding:utf-8
#Let the loss function be loss = (w+1)^2, with w initialized to the constant 5.
#Backpropagation finds the optimal w, i.e. the w that minimizes loss.
#With an exponentially decayed learning rate, descent is fast in the early iterations,
#so good convergence is reached within fewer training steps.

import tensorflow as tf

LEARN_RATE_BASE = 0.1 #initial learning rate
LEARN_RATE_DECAY = 0.99 #learning-rate decay rate
LEARN_RATE_STEP = 1 #how many rounds of BATCH_SIZE before the learning rate updates; usually total samples / BATCH_SIZE

#Counter for how many rounds of BATCH_SIZE have run; initialized to 0, marked non-trainable.
global_step = tf.Variable(0, trainable=False)
#Define the exponentially decayed learning rate
learning_rate = tf.train.exponential_decay(LEARN_RATE_BASE, global_step, LEARN_RATE_STEP, LEARN_RATE_DECAY, staircase=True)
#Define the parameter to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
#Define the loss function loss
loss = tf.square(w+1)
#Define the backpropagation method
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
#Create a session and train for 40 steps
with tf.Session() as sess:
    init_op=tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %s steps: global step is %f, w is %f, learning rate is %f, loss is %f\n" % (i, global_step_val, w_val, learning_rate_val, loss_val))

Output

After 0 steps: global step is 1.000000, w is 3.800000, learning rate is 0.099000, loss is 23.040001

After 1 steps: global step is 2.000000, w is 2.849600, learning rate is 0.098010, loss is 14.819419

After 2 steps: global step is 3.000000, w is 2.095001, learning rate is 0.097030, loss is 9.579033

After 3 steps: global step is 4.000000, w is 1.494386, learning rate is 0.096060, loss is 6.221961

After 4 steps: global step is 5.000000, w is 1.015167, learning rate is 0.095099, loss is 4.060896

After 5 steps: global step is 6.000000, w is 0.631886, learning rate is 0.094148, loss is 2.663051

After 6 steps: global step is 7.000000, w is 0.324608, learning rate is 0.093207, loss is 1.754587

After 7 steps: global step is 8.000000, w is 0.077684, learning rate is 0.092274, loss is 1.161403

After 8 steps: global step is 9.000000, w is -0.121202, learning rate is 0.091352, loss is 0.772287

After 9 steps: global step is 10.000000, w is -0.281761, learning rate is 0.090438, loss is 0.515867

After 10 steps: global step is 11.000000, w is -0.411674, learning rate is 0.089534, loss is 0.346128

After 11 steps: global step is 12.000000, w is -0.517024, learning rate is 0.088638, loss is 0.233266

After 12 steps: global step is 13.000000, w is -0.602644, learning rate is 0.087752, loss is 0.157891

After 13 steps: global step is 14.000000, w is -0.672382, learning rate is 0.086875, loss is 0.107334

After 14 steps: global step is 15.000000, w is -0.729305, learning rate is 0.086006, loss is 0.073276

After 15 steps: global step is 16.000000, w is -0.775868, learning rate is 0.085146, loss is 0.050235

After 16 steps: global step is 17.000000, w is -0.814036, learning rate is 0.084294, loss is 0.034583

After 17 steps: global step is 18.000000, w is -0.845387, learning rate is 0.083451, loss is 0.023905

After 18 steps: global step is 19.000000, w is -0.871193, learning rate is 0.082617, loss is 0.016591

After 19 steps: global step is 20.000000, w is -0.892476, learning rate is 0.081791, loss is 0.011561

After 20 steps: global step is 21.000000, w is -0.910065, learning rate is 0.080973, loss is 0.008088

After 21 steps: global step is 22.000000, w is -0.924629, learning rate is 0.080163, loss is 0.005681

After 22 steps: global step is 23.000000, w is -0.936713, learning rate is 0.079361, loss is 0.004005

After 23 steps: global step is 24.000000, w is -0.946758, learning rate is 0.078568, loss is 0.002835

After 24 steps: global step is 25.000000, w is -0.955125, learning rate is 0.077782, loss is 0.002014

After 25 steps: global step is 26.000000, w is -0.962106, learning rate is 0.077004, loss is 0.001436

After 26 steps: global step is 27.000000, w is -0.967942, learning rate is 0.076234, loss is 0.001028

After 27 steps: global step is 28.000000, w is -0.972830, learning rate is 0.075472, loss is 0.000738

After 28 steps: global step is 29.000000, w is -0.976931, learning rate is 0.074717, loss is 0.000532

After 29 steps: global step is 30.000000, w is -0.980378, learning rate is 0.073970, loss is 0.000385

After 30 steps: global step is 31.000000, w is -0.983281, learning rate is 0.073230, loss is 0.000280

After 31 steps: global step is 32.000000, w is -0.985730, learning rate is 0.072498, loss is 0.000204

After 32 steps: global step is 33.000000, w is -0.987799, learning rate is 0.071773, loss is 0.000149

After 33 steps: global step is 34.000000, w is -0.989550, learning rate is 0.071055, loss is 0.000109

After 34 steps: global step is 35.000000, w is -0.991035, learning rate is 0.070345, loss is 0.000080

After 35 steps: global step is 36.000000, w is -0.992297, learning rate is 0.069641, loss is 0.000059

After 36 steps: global step is 37.000000, w is -0.993369, learning rate is 0.068945, loss is 0.000044

After 37 steps: global step is 38.000000, w is -0.994284, learning rate is 0.068255, loss is 0.000033

After 38 steps: global step is 39.000000, w is -0.995064, learning rate is 0.067573, loss is 0.000024

After 39 steps: global step is 40.000000, w is -0.995731, learning rate is 0.066897, loss is 0.000018
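Because LEARN_RATE_STEP is 1 and staircase=True, the learning rate used at step i is simply 0.1 × 0.99^i, so the run above can be reproduced without TensorFlow (a sanity-check sketch, not part of the original note):

```python
LEARN_RATE_BASE, LEARN_RATE_DECAY = 0.1, 0.99
w = 5.0
for i in range(40):
    lr = LEARN_RATE_BASE * LEARN_RATE_DECAY ** i  # learning rate at this update
    w -= lr * 2 * (w + 1)                         # gradient of (w + 1)^2 is 2(w + 1)
    print("After %d steps: w is %f, learning rate is %f, loss is %f"
          % (i, w, lr, (w + 1) ** 2))
```

The first iterations should reproduce the output above (w = 3.8 at step 0, 2.8496 at step 1), ending near w = -0.995731.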