While experimenting with eager execution, I suddenly ran into a case where tape.gradient() kept returning None, no matter what I tried.
The code at the time looked like this:
def total_loss(pred, images, labels, bboxes, landmarks):
    """
    Return
    --------------------
    (total_loss, cls_loss, bbox_loss, landmark_loss)
    """
    c_loss = cls_loss(pred[0], labels)
    b_loss = bbox_loss(pred[1], bboxes, labels)
    l_loss = landmark_loss(pred[2], landmarks, labels)
    return c_loss + 0.5 * b_loss + 0.5 * l_loss  # , c_loss, b_loss, l_loss

def grad(model, images, labels, bboxes, landmarks):
    pred = model(images)
    with tf.GradientTape() as tape:
        loss_value = total_loss(pred, images, labels, bboxes, landmarks)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
After several attempts, I finally found that if I change the code to the following, the gradients are computed correctly:
def total_loss(model, images, labels, bboxes, landmarks):
    """
    Return
    --------------------
    (total_loss, cls_loss, bbox_loss, landmark_loss)
    """
    pred = model(images)
    c_loss = cls_loss(pred[0], labels)
    b_loss = bbox_loss(pred[1], bboxes, labels)
    l_loss = landmark_loss(pred[2], landmarks, labels)
    return c_loss + 0.5 * b_loss + 0.5 * l_loss  # , c_loss, b_loss, l_loss

def grad(model, images, labels, bboxes, landmarks):
    with tf.GradientTape() as tape:
        # must execute model(x) in the context of tf.GradientTape()
        loss_value = total_loss(model, images, labels, bboxes, landmarks)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
Do you see the only difference? That is the root cause: the model's output must be computed inside the tf.GradientTape() context. The tape only records operations executed within its context, so if the forward pass happens before the tape is opened, there is nothing for it to differentiate and tape.gradient() returns None.
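To see the rule in isolation, here is a minimal, self-contained example; the variable names are illustrative and not from the original project:

import tensorflow as tf

w = tf.Variable(3.0)

# Case 1: the computation happens BEFORE the tape is opened,
# so nothing is recorded and the gradient comes back as None.
y_outside = w * w
with tf.GradientTape() as tape:
    pass
print(tape.gradient(y_outside, w))   # -> None

# Case 2: the same computation inside the tape context is recorded,
# so the gradient d(w*w)/dw = 2*w = 6.0 is returned.
with tf.GradientTape() as tape:
    y_inside = w * w
print(tape.gradient(y_inside, w))    # -> tf.Tensor(6.0, ...)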
There is a question about the same issue on StackOverflow, with a very good explanation.
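For completeness, here is a sketch of how the corrected grad() might be used in a training step; the optimizer choice, learning rate, and the dataset and model variables are assumptions for illustration, not part of the original post:

import tensorflow as tf

# Hypothetical optimizer; any tf.keras optimizer would work the same way.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

for images, labels, bboxes, landmarks in dataset:  # hypothetical tf.data.Dataset
    loss_value, grads = grad(model, images, labels, bboxes, landmarks)
    # Apply the gradients to the same variables they were computed for.
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print("loss:", float(loss_value))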