Forward propagation
Fairly straightforward: each layer is just vector dot products, i.e. weighted sums of the inputs (a matrix multiplication).
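To make "weighted sum" concrete, here is a minimal NumPy sketch of the same two-layer forward pass that the reproduction below builds in TensorFlow (the variable names are my own, not from the original notes):

import numpy as np

x = np.array([[0.7, 0.5]])   # one sample with 2 features
w1 = np.random.randn(2, 3)   # hidden-layer weights
w2 = np.random.randn(3, 1)   # output-layer weights

hidden = x @ w1       # each entry is a dot product, i.e. a weighted sum of the inputs
output = hidden @ w2  # another weighted sum
print(output.shape)   # (1, 1)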
Backpropagation
Intuitively it feels a bit like negative feedback.
Formal explanation: the method computes the gradient of the loss function with respect to every weight in the network. The gradient is fed back to the optimization method, which uses it to update the weights so as to minimize the loss.
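To see what "the gradient is fed back to the optimizer" means, here is a toy one-variable gradient-descent sketch (my own illustration, not from the notes): the model is y = w * x with squared loss, so dL/dw follows from the chain rule.

# model: y = w * x,  loss: L = (y - y_true)**2
x, y_true = 2.0, 3.0
w = 0.0    # initial weight
lr = 0.1   # learning rate

for step in range(20):
    y = w * x
    grad = 2 * (y - y_true) * x   # dL/dw via the chain rule (backprop in miniature)
    w -= lr * grad                # feedback step: move the weight against the gradient

print(w)   # converges toward y_true / x = 1.5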
The standard recipe for building a network (important)
- import + constant definitions; generate the dataset
- forward propagation: define the inputs, parameters, and outputs
  - tf.matmul(a,b)
  - w1=
  - w2=
- backward propagation: define the loss function and the backpropagation method
  - loss=
  - train_step=
- create a session and train (the full example is reproduced below)
Reproduction
Some parts I still don't fully understand, so I copied them down as-is.
#coding:utf-8

#1 import + set constants
import tensorflow as tf
import numpy as np

SEED = 23455   # fixed seed so the generated dataset is reproducible
SIZE = 8       # batch size
rdm = np.random.RandomState(SEED)
X = rdm.rand(32, 2)                          # 32 samples, 2 features each
Y = [[int(x0 + x1 > 1)] for (x0, x1) in X]   # label is 1 when the two features sum to more than 1

#2 forward propagation
x = tf.placeholder(tf.float32, shape=(None, 2))    # input features
y_ = tf.placeholder(tf.float32, shape=(None, 1))   # ground-truth labels

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))   # 2 -> 3 hidden layer
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))   # 3 -> 1 output layer

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

#3 backward propagation
loss_mse = tf.reduce_mean(tf.square(y_ - y))                                # mean squared error
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)   # gradient descent on the MSE

#4 session + training
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)

    STEPS = 4000
    for i in range(STEPS):
        start = (i * SIZE) % 32   # cycle through the 32 samples in batches of SIZE
        end = start + SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 1000 == 0:
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y})
            print("After %d training steps, loss_mse on all data is %g" % (i, total_loss))

    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
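One caveat: this is TensorFlow 1.x style code (placeholders, sessions, tf.train optimizers). Under TensorFlow 2 it should still run if the import is swapped for the v1 compatibility module, e.g.:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # restores placeholders, Session, and the tf.train optimizers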