## Learn a Little Every Week -- Python Deep Learning
## Network Fine-Tuning with Python and TensorFlow, P117
1. Add a custom network on top of the already-trained base network:
Conv2D, MaxPooling2D... + Flatten, Dense (the part you add yourself, which is the part that gets trained)
code:
from tensorflow.keras import models, layers
from tensorflow.keras.applications import VGG16

# conv_base is the pre-trained base network; VGG16 without its classifier head is assumed here as an example
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
model = models.Sequential()
model.add(conv_base)                              # the pre-trained convolution/pooling layers
model.add(layers.Flatten())                       # flatten the previous layer's output to 1D
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))  # single output: a probability between 0 and 1
2. Freeze the base network
code:
conv_base.trainable = False  # set the base network's trainable attribute to False (freeze its weights)
                             # note: this only takes effect once the model is (re)compiled
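A quick way to confirm the freeze took effect is to compare the number of trainable weight tensors before and after setting the flag (a small sketch; the exact counts depend on the base network and the added layers):
code:
print('trainable weight tensors before freezing:', len(model.trainable_weights))
conv_base.trainable = False
print('trainable weight tensors after freezing:', len(model.trainable_weights))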
3. Train the newly added part of the network
code:
# train_generator / validation_generator are assumed to be Keras data generators,
# e.g. built with ImageDataGenerator(...).flow_from_directory(...)
model.compile(loss='binary_crossentropy',   # binary classification, matching the sigmoid output above
              optimizer='rmsprop',
              metrics=['acc'])
history = model.fit_generator(   # deprecated in newer Keras; model.fit accepts generators directly
    train_generator,
    steps_per_epoch=100,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=50)
4. Unfreeze some layers of the base network (working backwards from the last layers)
This is done by setting layer.trainable = True on the chosen layers, as sketched below.
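A minimal sketch of the unfreezing step, assuming a VGG16-style conv_base; the layer name 'block5_conv1' is only an example cut-off point and depends on the actual base network:
code:
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':   # example: unfreeze everything from this layer onwards
        set_trainable = True
    layer.trainable = set_trainable    # earlier layers stay frozen, later layers become trainable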
5. Retrain the whole network (see the sketch below)
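For the retraining pass the model has to be recompiled so the new trainable settings take effect, and a very small learning rate is commonly used so the pre-trained weights are only adjusted slightly. A sketch, where the learning rate and step counts are assumptions:
code:
from tensorflow.keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-5),  # very small learning rate
              metrics=['acc'])
history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=50)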
# inspect the model architecture
model.summary()
## Visualizing Convolutional Neural Networks
# numpy.expand_dims(a, axis) inserts a new axis of length 1 at the position given by the axis argument
import numpy as np

test = np.array([[1, 2, 3], [4, 5, 6]])
print(test.shape)                  # (2, 3)
x = np.expand_dims(test, axis=0)
print(x.shape)                     # (1, 2, 3)
y = np.expand_dims(test, axis=1)
print(y.shape)                     # (2, 1, 3)
z = np.expand_dims(test, axis=2)
print(z.shape)                     # (2, 3, 1)
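In the CNN visualization workflow this is typically how a single image tensor gets a leading batch dimension before being fed to a Keras model; the image path and target size below are hypothetical examples:
code:
import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img('cat.jpg', target_size=(150, 150))  # hypothetical image path and size
img_tensor = image.img_to_array(img)                      # shape (150, 150, 3)
img_tensor = np.expand_dims(img_tensor, axis=0)           # shape (1, 150, 150, 3): a batch of one
img_tensor /= 255.                                        # scale pixel values to [0, 1]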