I. RNN-related functions in PyTorch
1. For a simple RNN structure, there are two ways to call it:
- 1.1 torch.nn.RNN(): takes a whole input sequence; the initial hidden state defaults to all zeros, but you can also pass in your own.
- Parameters:
- input_size: number of features of the input x
- hidden_size: number of features of the hidden state
- num_layers: number of RNN layers
- nonlinearity: which non-linear activation to use, 'tanh' or 'relu'; default is 'tanh'
- bias: if False, the layer does not use the bias weights b_ih and b_hh; default is True
- batch_first: if True, the input and output tensors have shape [batch_size, time_step, feature]; default is False, i.e. [time_step, batch_size, feature] (see the shape-check sketch after the code example below)
- dropout: if non-zero, adds a dropout layer on the outputs of every layer except the last one
- bidirectional: if True, the RNN is bidirectional; default is False
- RNN inputs: (input, h_0)
- input shape: [time_step, batch_size, feature]
- h_0 shape: [num_layers*num_directions, batch_size, hidden_size]  # num_directions is determined by the bidirectional flag
- RNN outputs: (output, h_n)
- output shape: [time_step, batch_size, hidden_size*num_directions]
- h_n shape: [num_layers*num_directions, batch_size, hidden_size]
- For each element of the input sequence, each layer computes: h_t = tanh(w_ih * x_t + b_ih + w_hh * h_(t-1) + b_hh)
- RNN model parameters:
- w_ih: input-hidden weights of layer k, shape [hidden_size, input_size] for the first layer
- w_hh: hidden-hidden weights of layer k, shape [hidden_size, hidden_size]
- b_ih: input-hidden bias of layer k, shape [hidden_size]
- b_hh: hidden-hidden bias of layer k, shape [hidden_size]
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np

rnn = nn.RNN(10, 20, 1)          # input_size=10, hidden_size=20, num_layers=1
inputs = torch.randn(5, 3, 10)   # [time_step, batch_size, feature]
h0 = torch.randn(1, 3, 20)       # [num_layers*num_directions, batch_size, hidden_size]
output, hn = rnn(inputs, h0)
print("RNN output:\n", output)
for param in list(rnn.parameters()):
    print("{} shape is: {}".format(param, param.size()))
```
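The shapes above depend on the batch_first flag. As a quick shape check, here is a minimal sketch (my own example, not from the original post); note that h_n keeps the [num_layers*num_directions, batch_size, hidden_size] layout even when batch_first=True:

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed example): with batch_first=True the RNN expects and returns
# [batch_size, time_step, feature]-shaped tensors for input and output.
rnn_bf = nn.RNN(10, 20, 1, batch_first=True)
x = torch.randn(3, 5, 10)   # [batch_size, time_step, feature]
out, h_n = rnn_bf(x)        # h_0 omitted: defaults to all zeros
print(out.shape)            # torch.Size([3, 5, 20])
print(h_n.shape)            # torch.Size([1, 3, 20]); h_n is not batch-first
```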
- 1.2 torch.nn.RNNCell(): takes only a single time step of the sequence, and the hidden state must be passed in explicitly (the sketch after the code example below checks this step-by-step unrolling against nn.RNN).
- Parameters:
- input_size: number of input features
- hidden_size: number of features of the hidden state
- bias: default is True; if False, the cell does not use bias terms
- nonlinearity: which non-linear activation to use, 'tanh' or 'relu'; default is 'tanh'
- RNNCell inputs: (input, hidden)
- input shape:[batch_size, input_size]
- hidden shape:[batch_size, hidden_size]
- RNNCell output: h
- h shape: [batch_size, hidden_size], the hidden state for the next time step
- For a single input x and hidden state h, the cell computes: h' = tanh(w_ih * x + b_ih + w_hh * h + b_hh)
- RNNCell model parameters:
- w_ih: input-hidden weights, shape [hidden_size, input_size]
- w_hh: hidden-hidden weights, shape [hidden_size, hidden_size]
- b_ih: input-hidden bias, shape [hidden_size]
- b_hh: hidden-hidden bias, shape [hidden_size]
```python
rnncell = nn.RNNCell(input_size=10, hidden_size=20)
inputs = torch.randn(6, 3, 10)   # 6 is the number of loop iterations below
h = torch.randn(3, 20)           # [batch_size, hidden_size]
output = []
for i in range(6):
    h = rnncell(inputs[i], h)
    output.append(h)
print("RNNCell output:\n", output)
for param in list(rnncell.parameters()):
    print("{} shape is: {}".format(param, param.size()))
```
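To make the relation between nn.RNNCell and nn.RNN concrete, here is a small check (my own sketch, not from the original post): if the cell is given the same weights as a single-layer nn.RNN, unrolling it step by step reproduces the full-sequence output.

```python
import torch
import torch.nn as nn

# Sketch: copy the layer-0 parameters of a single-layer nn.RNN into an RNNCell and
# verify that manual unrolling matches the full-sequence call.
rnn = nn.RNN(10, 20, 1)
cell = nn.RNNCell(10, 20)
cell.weight_ih.data.copy_(rnn.weight_ih_l0.data)
cell.weight_hh.data.copy_(rnn.weight_hh_l0.data)
cell.bias_ih.data.copy_(rnn.bias_ih_l0.data)
cell.bias_hh.data.copy_(rnn.bias_hh_l0.data)

x = torch.randn(5, 3, 10)        # [time_step, batch_size, feature]
h = torch.zeros(3, 20)           # nn.RNN also defaults to an all-zero h_0
steps = []
for t in range(5):
    h = cell(x[t], h)
    steps.append(h)
manual_out = torch.stack(steps)  # [time_step, batch_size, hidden_size]

full_out, _ = rnn(x)             # no h_0 passed: defaults to zeros
print(torch.allclose(manual_out, full_out, atol=1e-6))   # expected: True
```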
2. For a simple LSTM structure, there are two ways to call it:
- 2.1 torch.nn.LSTM()
- Parameters:
- input_size: dimension of the input features
- hidden_size: dimension of the hidden state
- num_layers: number of layers
- bias: if False, the layer does not use the bias weights; default is True
- batch_first: if True, the input and output tensors have shape [batch_size, time_step, feature]; default is False, i.e. [time_step, batch_size, feature]
- dropout: if non-zero, adds a dropout layer on the outputs of every layer except the last one
- bidirectional: if True, the LSTM is bidirectional; default is False
- LSTM inputs: (input, (h_0, c_0))
- input shape:(seq_len, batch_size, input_size)
- h_0 shape:(num_layers*num_directions, batch_size, hidden_size)
- c_0 shape:(num_layers*num_directions, batch_size, hidden_size)
- LSTM outputs: (output, (h_n, c_n)) (the relation between output and h_n is checked in the sketch after the code example below)
- output shape:[seq_len, batch_size, hidden_size * num_directions]
- h_n shape: [num_layers * num_directions, batch_size, hidden_size]
- c_n shape: [num_layers * num_directions, batch_size, hidden_size]
- LSTM model parameters:
- w_ih: input-hidden weights (w_ii, w_if, w_ig, w_io), shape [4*hidden_size, input_size] for the first layer
- w_hh: hidden-hidden weights (w_hi, w_hf, w_hg, w_ho), shape [4*hidden_size, hidden_size]
- b_ih: input-hidden bias (b_ii, b_if, b_ig, b_io), shape [4*hidden_size]
- b_hh: hidden-hidden bias (b_hi, b_hf, b_hg, b_ho), shape [4*hidden_size]
```python
lstm = nn.LSTM(10, 20, 2)        # input_size=10, hidden_size=20, num_layers=2
inputs = torch.randn(5, 3, 10)   # [seq_len, batch_size, input_size]
h0 = torch.randn(2, 3, 20)       # [num_layers*num_directions, batch_size, hidden_size]
c0 = torch.randn(2, 3, 20)
output, (h_n, c_n) = lstm(inputs, (h0, c0))
print("output:\n", output)
print("h_n:\n", h_n)
```
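As a sanity check of the output shapes (my own sketch, not from the original post): for a unidirectional LSTM, output holds the top-layer hidden state at every time step, so its last time step equals the top layer of h_n.

```python
import torch
import torch.nn as nn

# Sketch: output[-1] (last time step, top layer) should equal h_n[-1] (final hidden
# state of the top layer) for a unidirectional LSTM.
lstm = nn.LSTM(10, 20, 2)
inputs = torch.randn(5, 3, 10)
output, (h_n, c_n) = lstm(inputs)           # (h_0, c_0) omitted: default to zeros
print(output.shape)                         # torch.Size([5, 3, 20])
print(h_n.shape, c_n.shape)                 # torch.Size([2, 3, 20]) each
print(torch.allclose(output[-1], h_n[-1]))  # expected: True
```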
- 2.2 torch.nn.LSTMCell()
- Parameters:
- input_size: number of input features
- hidden_size: number of features of the hidden state
- bias: default is True; if False, the cell does not use bias terms
- LSTMCell inputs: (input, (h_0, c_0))
- input: (batch_size, input_size)
- h_0: (batch_size, hidden_size)
- c_0: (batch_size, hidden_size)
- LSTMCell outputs: (h_1, c_1)
- h_1:(batch_size, hidden_size)
- c_1:(batch_size,hidden_size)
- LSTMCell model parameters:
- w_ih: input-hidden weights (w_ii, w_if, w_ig, w_io), shape [4*hidden_size, input_size]
- w_hh: hidden-hidden weights (w_hi, w_hf, w_hg, w_ho), shape [4*hidden_size, hidden_size]
- b_ih: input-hidden bias (b_ii, b_if, b_ig, b_io), shape [4*hidden_size]
- b_hh: hidden-hidden bias (b_hi, b_hf, b_hg, b_ho), shape [4*hidden_size]
```python
lstm_cell = nn.LSTMCell(10, 20)  # input_size=10, hidden_size=20
inputs = torch.randn(6, 3, 10)   # 6 time steps, looped over below
h_x = torch.randn(3, 20)         # [batch_size, hidden_size]
c_x = torch.randn(3, 20)
output = []
for i in range(6):
    h_x, c_x = lstm_cell(inputs[i], (h_x, c_x))
    output.append(h_x)
print("output:\n", output)
```
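The loop above leaves output as a Python list of per-step hidden states. A common follow-up (my own sketch, not from the original post) is to stack them into a single tensor with the same layout as nn.LSTM's output:

```python
# Sketch: stack the per-step hidden states collected in `output` above into a single
# [seq_len, batch_size, hidden_size] tensor.
seq_output = torch.stack(output)
print(seq_output.shape)   # torch.Size([6, 3, 20])
```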
3. For a simple GRU structure, there are two ways to call it:
- 3.1 torch.nn.GRU()
- Parameters:
- input_size: dimension of the input features
- hidden_size: dimension of the hidden state
- num_layers: number of layers
- bias: if False, the layer does not use the bias weights; default is True
- batch_first: if True, the input and output tensors have shape [batch_size, time_step, feature]; default is False, i.e. [time_step, batch_size, feature]
- dropout: if non-zero, adds a dropout layer on the outputs of every layer except the last one
- bidirectional: if True, the GRU is bidirectional; default is False (a bidirectional example appears in the sketch after the code example below)
- GRU inputs: (input, h_0)
- input shape:[seq_len, batch_size, input_size]
- h_0 shape:[num_layers*num_directions, batch_size, hidden_size]
- GRU outputs: (output, h_n)
- output shape: [seq_len, batch_size, hidden_size*num_directions]
- h_n shape: [num_layers*num_directions, batch_size, hidden_size]
- GRU model parameters:
- weight_ih_l[k]: learnable input-hidden weights of layer k (w_ir, w_iz, w_in), shape [3*hidden_size, input_size] for the first layer
- weight_hh_l[k]: learnable hidden-hidden weights of layer k (w_hr, w_hz, w_hn), shape [3*hidden_size, hidden_size]
- bias_ih_l[k]: learnable input-hidden bias of layer k (b_ir, b_iz, b_in), shape [3*hidden_size]
- bias_hh_l[k]: learnable hidden-hidden bias of layer k (b_hr, b_hz, b_hn), shape [3*hidden_size]
```python
gru = nn.GRU(10, 20, 2)          # input_size=10, hidden_size=20, num_layers=2
inputs = torch.randn(5, 3, 10)   # [seq_len, batch_size, input_size]
h0 = torch.randn(2, 3, 20)       # [num_layers*num_directions, batch_size, hidden_size]
output, h_n = gru(inputs, h0)    # gru(...) returns an (output, h_n) tuple
print("output:\n", output)
```
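To make the num_directions factor in the shapes above concrete, here is a bidirectional variant (my own sketch, not from the original post):

```python
import torch
import torch.nn as nn

# Sketch: a 2-layer bidirectional GRU; num_directions = 2.
bigru = nn.GRU(10, 20, num_layers=2, bidirectional=True)
x = torch.randn(5, 3, 10)
out, h_n = bigru(x)
print(out.shape)    # torch.Size([5, 3, 40]) -> hidden_size * num_directions
print(h_n.shape)    # torch.Size([4, 3, 20]) -> num_layers * num_directions
# h_n can be reshaped to separate the layer and direction axes:
print(h_n.view(2, 2, 3, 20).shape)   # [num_layers, num_directions, batch_size, hidden_size]
```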
- 3.2 nn.GRUCell()
- Parameters:
- input_size: number of input features
- hidden_size: number of features of the hidden state
- bias: default is True; if False, the cell does not use bias terms
- GRUCell inputs: (input, hidden)
- input shape: (batch_size, input_size)
- hidden shape: (batch_size, hidden_size)
- GRUCell output: h
- h shape:[batch_size, hidden_size]
- GRUCell model parameters (their shapes are printed in the sketch after the code example below):
- w_ih: input-hidden weights, shape [3*hidden_size, input_size]
- w_hh: hidden-hidden weights, shape [3*hidden_size, hidden_size]
- b_ih: input-hidden bias, shape [3*hidden_size]
- b_hh: hidden-hidden bias, shape [3*hidden_size]
```python
gru_cell = nn.GRUCell(10, 20)    # note: nn.GRUCell, not nn.RNNCell
inputs = torch.randn(6, 3, 10)   # 6 time steps, looped over below
hx = torch.randn(3, 20)          # [batch_size, hidden_size]
output = []
for i in range(6):
    hx = gru_cell(inputs[i], hx)
    output.append(hx)
print("output:\n", output)
```
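To confirm the parameter shapes listed above (the three gates are concatenated along the first dimension), a quick sketch (my own example) that simply prints them:

```python
import torch.nn as nn

# Sketch: print the GRUCell parameter names and shapes.
for name, param in nn.GRUCell(10, 20).named_parameters():
    print(name, tuple(param.shape))
# expected output:
# weight_ih (60, 10)
# weight_hh (60, 20)
# bias_ih (60,)
# bias_hh (60,)
```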
II. Word embedding models
- In PyTorch, the nn.Embedding layer is used for word embeddings. Its first argument is the total number of words in the vocabulary, and its second argument is the dimension of the word vectors, typically 100 to 500.
```python
embedding = nn.Embedding(10, 3)                           # vocabulary size 10, embedding dimension 3
input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])    # two sequences of 4 word indices each
output = embedding(input)
print(output.size())                                      # torch.Size([2, 4, 3])
```
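A typical use of the embedding layer is to feed its output into one of the recurrent layers above. A minimal sketch (my own example, with assumed toy sizes):

```python
import torch
import torch.nn as nn

# Sketch: word indices -> nn.Embedding -> nn.LSTM (batch_first, since the index
# tensor is [batch_size, seq_len]). All sizes here are assumed toy values.
vocab_size, embed_dim, hidden_size = 10, 3, 8
embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_size, batch_first=True)

tokens = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])   # [batch_size=2, seq_len=4]
embedded = embedding(tokens)                              # [2, 4, 3]
output, (h_n, c_n) = lstm(embedded)
print(output.shape)   # torch.Size([2, 4, 8])
print(h_n.shape)      # torch.Size([1, 2, 8])
```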