Adding L1 or L2 regularization terms to various neural network layers
Weights W:
W = tf.get_variable('W', shape=[self.EMBEDDING_SIZE, self.EMBEDDING_DIM], regularizer=tf.contrib.layers.l1_regularizer(0.01))
W = tf.get_variable('W', shape=[self.EMBEDDING_SIZE, self.EMBEDDING_DIM], regularizer=tf.contrib.layers.l2_regularizer(0.01))
Bias b:
b = tf.get_variable('b', shape=[self.TAG_SIZE], dtype=tf.float32, regularizer=tf.contrib.layers.l1_regularizer(0.01))
b = tf.get_variable('b', shape=[self.TAG_SIZE], dtype=tf.float32, regularizer=tf.contrib.layers.l2_regularizer(0.01))
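For reference, here is a minimal self-contained sketch of this variable-level approach; the 300/100/10 dimensions are made-up stand-ins for self.EMBEDDING_SIZE, self.EMBEDDING_DIM, and self.TAG_SIZE. The regularizer passed to tf.get_variable registers its penalty in the tf.GraphKeys.REGULARIZATION_LOSSES collection, which is what the loss step further below relies on:

import tensorflow as tf

# Hypothetical dimensions, for illustration only.
EMBEDDING_SIZE, EMBEDDING_DIM, TAG_SIZE = 300, 100, 10

W = tf.get_variable('W', shape=[EMBEDDING_SIZE, EMBEDDING_DIM],
                    regularizer=tf.contrib.layers.l2_regularizer(0.01))
b = tf.get_variable('b', shape=[TAG_SIZE], dtype=tf.float32,
                    regularizer=tf.contrib.layers.l2_regularizer(0.01))

# Each regularized variable contributes one scalar penalty to this collection.
reg_terms = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
print(reg_terms)  # two tensors: one for W, one for b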
Fully connected (dense) layer:
tf.layers.dense(inputs, units, activity_regularizer=tf.contrib.layers.l1_regularizer(0.01))
tf.layers.dense(inputs, units, activity_regularizer=tf.contrib.layers.l2_regularizer(0.01))
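A minimal sketch of the dense case; the [None, 128] input and the 10 output units are illustrative assumptions. The activity regularizer's penalty is likewise collected into tf.GraphKeys.REGULARIZATION_LOSSES:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 128])  # hypothetical feature batch
logits = tf.layers.dense(inputs, 10,
                         activity_regularizer=tf.contrib.layers.l2_regularizer(0.01))

# The penalty on the layer's activations shows up here.
reg_terms = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)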
Convolutional layer:
conv = tf.layers.conv1d(embedding_inputs, 1024, 5, name='conv', activity_regularizer=tf.contrib.layers.l1_regularizer(0.01))
conv = tf.layers.conv1d(embedding_inputs, 1024, 5, name='conv', activity_regularizer=tf.contrib.layers.l2_regularizer(0.01))
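A sketch of the convolutional case, assuming embedding_inputs is the usual [batch, sequence_length, embedding_dim] tensor coming out of an embedding lookup; the vocabulary size, sequence length, and embedding dimension below are made up:

import tensorflow as tf

word_ids = tf.placeholder(tf.int32, [None, 50])        # hypothetical [batch, seq_len]
embedding = tf.get_variable('embedding', [5000, 128])  # hypothetical vocab x dim
embedding_inputs = tf.nn.embedding_lookup(embedding, word_ids)  # [batch, 50, 128]

# 1024 filters of width 5, with an L2 activity regularizer as above.
conv = tf.layers.conv1d(embedding_inputs, 1024, 5, name='conv',
                        activity_regularizer=tf.contrib.layers.l2_regularizer(0.01))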
For all of the layers above, the regularization loss must be added when computing the final loss:
regularizer_loss = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
loss += regularizer_loss
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
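Putting those pieces together, a minimal end-to-end sketch; the placeholder shapes, the cross-entropy task loss, and the 0.001 learning rate are assumptions for illustration:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 128])  # hypothetical features
labels = tf.placeholder(tf.int32, [None])         # hypothetical class labels
logits = tf.layers.dense(inputs, 10,
                         activity_regularizer=tf.contrib.layers.l2_regularizer(0.01))

# Task loss plus the regularization losses gathered from the graph.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
regularizer_loss = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
loss += regularizer_loss

learning_rate = 0.001  # assumed value
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)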
Recurrent (e.g. LSTM) layers:
tv = tf.trainable_variables()
regularization_cost = 0.001 * tf.reduce_sum([tf.nn.l2_loss(v) for v in tv])
cost = original_cost_function + regularization_cost
train_op = tf.train.AdamOptimizer(0.01).minimize(cost)
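A sketch of this "penalize every trainable variable" approach with an actual LSTM, using the TF 1.x tf.nn.rnn_cell API; the input shapes, the 64-unit cell, and the squared-error task loss are illustrative assumptions standing in for original_cost_function:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 50, 128])  # hypothetical [batch, time, features]
targets = tf.placeholder(tf.float32, [None, 64])      # hypothetical targets

cell = tf.nn.rnn_cell.BasicLSTMCell(64)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
original_cost_function = tf.reduce_mean(tf.square(outputs[:, -1, :] - targets))

# L2 penalty over every trainable variable, LSTM kernels and biases included.
tv = tf.trainable_variables()
regularization_cost = 0.001 * tf.reduce_sum([tf.nn.l2_loss(v) for v in tv])
cost = original_cost_function + regularization_cost
train_op = tf.train.AdamOptimizer(0.01).minimize(cost)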
A more general approach:
First, define the regularizer:
regularizer = tf.contrib.layers.l1_regularizer(0.01)
Then choose which variables the regularizer is applied to (apply_regularization expects the actual variable objects, not their names):
target_vars = [v for v in tf.trainable_variables() if any(k in v.name for k in ('W', 'b', 'conv', 'LSTM'))]
tf.contrib.layers.apply_regularization(regularizer, target_vars)
Finally, as above, add the regularization loss to the loss:
regularizer_loss=tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
loss += regularizer_loss
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)
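For completeness, a self-contained sketch of this general recipe; the variable names, shapes, and task loss are assumptions for illustration. apply_regularization also adds the summed penalty to tf.GraphKeys.REGULARIZATION_LOSSES, which is why the collection lookup below picks it up:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 128])  # hypothetical input
labels = tf.placeholder(tf.int32, [None])

W = tf.get_variable('W', shape=[128, 10])
b = tf.get_variable('b', shape=[10])
logits = tf.matmul(inputs, W) + b

# 1. Define the regularizer.
regularizer = tf.contrib.layers.l1_regularizer(0.01)

# 2. Apply it to the chosen variables (actual variable objects, not names).
tf.contrib.layers.apply_regularization(regularizer, [W, b])

# 3. Add the regularization loss to the task loss and optimize.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
regularizer_loss = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
loss += regularizer_loss
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)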
If you want to reduce overfitting more aggressively, increase the regularization coefficient (the penalty coefficient).