Deep Learning (3): Generalized Gradient Descent (Learning Multiple Weights at a Time)

1 Gradient descent learning with multiple inputs

1. Gradient descent also works with multiple inputs. In this example, the three weights share a single output node, so they also share a single delta; but because each weight sees a different input, each weight gets its own weight_delta.


#1 An empty neural network
def w_sum(a,b):
    assert (len(a) == len(b))
    output = 0
    for i in range(len(a)):
        output += (a[i] * b[i])
    return output

weights = [0.1,0.2,-0.1]

def neural_network(input,weights):
    pred = w_sum(input,weights)
    return pred

#2 Make a prediction, then compute the error and delta
toes = [8.5,9.5,9.9,9.0]
wlrec = [0.65,0.8,0.8,0.9]
nfans = [1.2,1.3,0.5,1.0]

win_or_lose_binary = [1,1,0,1]
true = win_or_lose_binary[0]

#3 Compute each weight_delta
def ele_mul(number,vector):
    output = [0,0,0]
    assert(len(output) == len(vector))
    for i in range(len(vector)):
        output[i] += number * vector[i]
    return output

input = [toes[0],wlrec[0],nfans[0]]

pred = neural_network(input,weights)

error = (pred - true)**2

delta = pred - true

weight_deltas = ele_mul(delta,input) # the three weights share one output node, so they share one delta

#4 Update the weights
alpha = 0.01

for i in range(len(weights)):
    weights[i] -= weight_deltas[i] * alpha

pred = neural_network(input,weights)

print("Weights:" + str(weights))
print("Weight_delta:" + str(weight_deltas))
print("pred:" + str(pred))
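The code above takes a single gradient step. Repeating the same predict / compare / learn cycle drives the error toward zero; here is a minimal sketch using the same data and starting weights (the 4-iteration count is arbitrary):

```python
def w_sum(a, b):
    assert len(a) == len(b)
    return sum(a[i] * b[i] for i in range(len(a)))

weights = [0.1, 0.2, -0.1]
input = [8.5, 0.65, 1.2]   # toes[0], wlrec[0], nfans[0]
true = 1                   # win_or_lose_binary[0]
alpha = 0.01

for iteration in range(4):
    pred = w_sum(input, weights)
    delta = pred - true
    error = delta ** 2
    # all three weights share delta; each gradient is delta scaled by its own input
    for i in range(len(weights)):
        weights[i] -= alpha * delta * input[i]
    print("iter", iteration, "error", error)  # error shrinks every iteration
```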


2. You can plot three separate error-versus-weight curves, one per weight. The slope of each curve reflects that weight's weight_delta. Because weight a receives the largest input (toes = 8.5), weight a's curve is the steepest.
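This can be checked numerically: since all three weights share delta, each weight_delta is just delta scaled by that weight's input, so the largest input (toes) yields the steepest slope. A quick sketch with the values from the code above:

```python
input = [8.5, 0.65, 1.2]   # toes, wlrec, nfans for game 0
delta = -0.14              # the shared delta (pred - true) from the code above

weight_deltas = [delta * x for x in input]
# the toes weight sees the largest input, so its slope is the steepest
steepest = max(range(len(input)), key=lambda i: abs(weight_deltas[i]))
print(steepest)  # 0, the toes weight
```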

3. Most of the learning (weight change) happens on the weight with the largest input value, because that weight changes the slope most dramatically. Given this disparity in slopes, alpha has to be set fairly low; set it too high (e.g. changing alpha in the code above from 0.01 to 0.1) and training diverges.
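Both behaviors are easy to reproduce. For a single linear node, each update multiplies delta by (1 - alpha * Σ input²); here Σ input² ≈ 74.1, so alpha = 0.01 gives a factor of about 0.26 (convergence) while alpha = 0.1 gives about -6.4 (divergence). A minimal sketch:

```python
def train(alpha, steps=5):
    weights = [0.1, 0.2, -0.1]
    input = [8.5, 0.65, 1.2]
    true = 1
    errors = []
    for _ in range(steps):
        pred = sum(x * w for x, w in zip(input, weights))
        delta = pred - true
        errors.append(delta ** 2)
        for i in range(len(weights)):
            weights[i] -= alpha * delta * input[i]
    return errors

small = train(0.01)   # error shrinks each step
large = train(0.1)    # error explodes: each step overshoots and flips sign
print(small[-1], large[-1])
```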

4. Freezing one weight: if weight a is frozen while weights b and c train to convergence (error 0), then unfreezing and training a does nothing: a will not move, because error is already 0, so delta is 0, and weight_delta = delta * input is 0. This reveals a potentially harmful property of neural networks: weight a may correspond to important input data with a decisive influence on the prediction, but if during training the network stumbles on a way to predict accurately without it, weight a will no longer affect the prediction at all.

def neural_network(input,weights):
    out = 0
    for i in range(len(input)):
        out += (input[i] * weights[i])
    return out

def ele_mul(scalar,vector):
    out = [0,0,0]
    for i in range(len(vector)):
        out[i] += scalar * vector[i]
    return out

toes = [8.5,9.5,9.9,9.0]
wlrec = [0.65,0.8,0.8,0.9]
nfans = [1.2,1.3,0.5,1.0]

win_or_lose_binary = [1,1,0,1]
true = win_or_lose_binary[0]

alpha = 0.3
weights = [0.1,0.2,-0.1]

input = [toes[0],wlrec[0],nfans[0]]

for iter in range(3):
    pred = neural_network(input,weights)
    error = (pred - true)**2
    delta = pred - true
    weight_deltas = ele_mul(delta,input)
    # freeze weights[0] by zeroing its gradient
    weight_deltas[0] = 0

    print('Iteration:' + str(iter+1))
    print('Pred:' + str(pred))
    print('Error:' + str(error))
    print('Delta:' + str(delta))
    print('Weights:' + str(weights))
    print('Weights_deltas:' + str(weight_deltas))
    print('')

    for i in range(len(weights)):
        weights[i] -= alpha * weight_deltas[i]
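The claim in point 4 can be made concrete: train only weights b and c to convergence, then look at the gradient weight a would receive. It is delta * input, and delta is already ~0 (this is my sketch, not the book's code):

```python
weights = [0.1, 0.2, -0.1]
input = [8.5, 0.65, 1.2]   # toes, wlrec, nfans for game 0
true = 1
alpha = 0.3

# train only weights[1] and weights[2]; weights[0] stays frozen at 0.1
for _ in range(50):
    pred = sum(x * w for x, w in zip(input, weights))
    delta = pred - true
    weights[1] -= alpha * delta * input[1]
    weights[2] -= alpha * delta * input[2]

# "unfreeze" weights[0]: the gradient it would receive is already ~0
delta = sum(x * w for x, w in zip(input, weights)) - true
print(abs(delta * input[0]))  # effectively zero, so weights[0] cannot move
```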

2 Gradient descent learning with multiple outputs

1. A neural network can also make multiple predictions from a single input. In this case the weight_deltas share the same input node, but each has its own output node and hence its own delta.

#1 An empty neural network
weights = [0.3,0.2,0.9]

def ele_mul(scalar,vector):
    out = [0,0,0]
    for i in range(len(vector)):
        out[i] += scalar * vector[i]
    return out

def neural_network(input,weights):
    pred = ele_mul(input,weights)
    return pred

#2 Predict
wlrec = [0.65,1.0,1.0,0.9]

hurt = [0.1,0.0,0.0,0.1]
win = [1,1,0,1]
sad = [0.1,0.0,0.1,0.2]

input = wlrec[0]
true = [hurt[0],win[0],sad[0]]

pred = neural_network(input,weights)

error = [0,0,0]
delta = [0,0,0]

for i in range(len(true)):
    error[i] = (pred[i] - true[i])**2
    delta[i] = pred[i] - true[i]

#3 Compare: compute each weight_delta
def scalar_ele_mul(number,vector):
    output = [0,0,0]
    assert(len(output) == len(vector))
    for i in range(len(vector)):
        output[i] = number * vector[i]
    return output

weight_deltas = scalar_ele_mul(input,delta)

alpha = 0.1

#4 Learn: update the weights
for i in range(len(weights)):
    weights[i] -= (weight_deltas[i] * alpha)


print('Weights:' + str(weights))
print('Weights_deltas:' + str(weight_deltas))

3 Gradient descent with multiple inputs and outputs

1. Gradient descent generalizes to arbitrarily large networks.


# Multiple inputs, multiple outputs
import numpy as np

#1 An empty neural network
# Each column of weights corresponds to one input: average toes, win rate, fan count
# Each row corresponds to one prediction: hurt, win, sad
weights = [[0.1,0.1,-0.3],
           [0.1,0.2,0.0],
           [0.0,1.3,0.1]]

def w_sum(a,b):
    assert (len(a) == len(b))
    output = 0
    for i in range(len(a)):
        output += (a[i] * b[i])
    return output

def vect_mat_mul(vect,matrix):
    assert(len(vect) == len(matrix))
    output = [0,0,0]
    for i in range(len(vect)):
        output[i] = w_sum(vect,matrix[i])
    return output

def neural_network(input,weights):
    pred = vect_mat_mul(input,weights)
    return pred

#2 Predict

toes = np.array([8.5,9.5,9.9,9.0])
wlrec = np.array([0.65,0.8,0.8,0.9])
nfans = np.array([1.2,1.3,0.5,1.0])

hurt = [0.1,0.0,0.0,0.1]
win = [1,1,0,1]
sad = [0.1,0.0,0.1,0.2]

alpha = 0.1

input = [toes[0],wlrec[0],nfans[0]]
true = [hurt[0],win[0],sad[0]]

pred = neural_network(input,weights)

error = [0,0,0]
delta = [0,0,0]

for i in range(len(true)):
    error[i] = (pred[i] - true[i])**2
    delta[i] = pred[i] - true[i]
print('ori pred: ' + str(pred))
print('delta: ' + str(delta))

#3 Compare: compute each weight_delta and apply it to the corresponding weight
def outer_prod(a,b):
    out = np.zeros((len(a),len(b)))
    for i in range(len(a)):
        for j in range(len(b)):
            out[i][j] = a[i]*b[j]
    return out
weight_deltas = outer_prod(delta,input) # weight_deltas[i][j] = delta[i] * input[j], matching weights[i][j]

#4 Update the weights
for i in range(len(weights)):
    for j in range(len(weights[0])):
        weights[i][j] -= alpha * weight_deltas[i][j]

pred = neural_network(input,weights) # re-predict with the updated weights
print('Weights:' + str(np.array(weights)))
print('Weights_deltas:' + str(weight_deltas))
print('pred:' + str(pred))
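Since NumPy is already imported, the whole predict / compare / learn step can be vectorized: np.outer(delta, input) lines up delta[i] * input[j] with weights[i][j] under the row-per-output layout above. This rewrite is my own sketch, not the book's code (note the smaller alpha = 0.01, since iterating with alpha = 0.1 diverges for inputs this large):

```python
import numpy as np

# rows = outputs (hurt, win, sad); columns = inputs (toes, wlrec, nfans)
weights = np.array([[0.1, 0.1, -0.3],
                    [0.1, 0.2,  0.0],
                    [0.0, 1.3,  0.1]])
input = np.array([8.5, 0.65, 1.2])
true = np.array([0.1, 1.0, 0.1])
alpha = 0.01

for _ in range(10):
    pred = weights.dot(input)                  # one prediction per output row
    delta = pred - true
    weights -= alpha * np.outer(delta, input)  # delta[i] * input[j] -> weights[i][j]

final_error = ((weights.dot(input) - true) ** 2).sum()
print(final_error)  # tiny: all three outputs have converged
```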

4 References

Grokking Deep Learning (《深度学习图解》)
