In the previous post we already obtained the data and did some simple preprocessing.
Our next task is to get to know the chatbot's model code.
We already know that seq2seq is made up of two RNNs: an encoder and a decoder.
So let's write a seq2seq in PyTorch.
Since I'm not especially familiar with seq2seq code myself, I chose to borrow someone else's:
https://ptorch.com/news/137.html
There we can find the author's project; open model.py
and you can see their seq2seq model code!
Here is that author's code (please contact me if this infringes):
import torch
import torch.nn as nn
import torch.nn.functional as F

USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")


class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size, embedding, n_layers=1, dropout=0):
        super(EncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.embedding = embedding
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
                          dropout=(0 if n_layers == 1 else dropout), bidirectional=True)

    def forward(self, input_seq, input_lengths, hidden=None):
        embedded = self.embedding(input_seq)
        packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
        outputs, hidden = self.gru(packed, hidden)  # output: (seq_len, batch, hidden*n_dir)
        outputs, _ = torch.nn.utils.rnn.pad_packed_sequence(outputs)
        outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]  # Sum bidirectional outputs (1, batch, hidden)
        return outputs, hidden


class Attn(nn.Module):
    def __init__(self, method, hidden_size):
        super(Attn, self).__init__()
        self.method = method
        self.hidden_size = hidden_size
        if self.method == 'general':
            self.attn = nn.Linear(self.hidden_size, hidden_size)
        elif self.method == 'concat':
            self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
            self.v = nn.Parameter(torch.FloatTensor(1, hidden_size))

    def forward(self, hidden, encoder_outputs):
        # hidden [1, 64, 512], encoder_outputs [14, 64, 512]
        max_len = encoder_outputs.size(0)
        batch_size = encoder_outputs.size(1)
        # Create variable to store attention energies
        attn_energies = torch.zeros(batch_size, max_len)  # B x S
        attn_energies = attn_energies.to(device)
        # For each batch of encoder outputs
        for b in range(batch_size):
            # Calculate energy for each encoder output
            for i in range(max_len):
                attn_energies[b, i] = self.score(hidden[:, b], encoder_outputs[i, b].unsqueeze(0))
        # Normalize energies to weights in range 0 to 1, resize to 1 x B x S
        return F.softmax(attn_energies, dim=1).unsqueeze(1)

    def score(self, hidden, encoder_output):
        # hidden [1, 512], encoder_output [1, 512]
        if self.method == 'dot':
            energy = hidden.squeeze(0).dot(encoder_output.squeeze(0))
            return energy
        elif self.method == 'general':
            energy = self.attn(encoder_output)
            energy = hidden.squeeze(0).dot(energy.squeeze(0))
            return energy
        elif self.method == 'concat':
            energy = self.attn(torch.cat((hidden, encoder_output), 1))
            energy = self.v.squeeze(0).dot(energy.squeeze(0))
            return energy


class LuongAttnDecoderRNN(nn.Module):
    def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1):
        super(LuongAttnDecoderRNN, self).__init__()
        # Keep for reference
        self.attn_model = attn_model
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.n_layers = n_layers
        self.dropout = dropout
        # Define layers
        self.embedding = embedding
        self.embedding_dropout = nn.Dropout(dropout)
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout))
        self.concat = nn.Linear(hidden_size * 2, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)
        # Choose attention model
        if attn_model != 'none':
            self.attn = Attn(attn_model, hidden_size)

    def forward(self, input_seq, last_hidden, encoder_outputs):
        # Note: we run this one step at a time
        # Get the embedding of the current input word (last output word)
        embedded = self.embedding(input_seq)
        embedded = self.embedding_dropout(embedded)  # [1, 64, 512]
        if embedded.size(0) != 1:
            raise ValueError('Decoder input sequence length should be 1')
        # Get current hidden state from input word and last hidden state
        rnn_output, hidden = self.gru(embedded, last_hidden)
        # Calculate attention from current RNN state and all encoder outputs;
        # apply to encoder outputs to get weighted average
        attn_weights = self.attn(rnn_output, encoder_outputs)  # [64, 1, 14]
        # encoder_outputs [14, 64, 512]
        context = attn_weights.bmm(encoder_outputs.transpose(0, 1))  # [64, 1, 512]
        # Attentional vector using the RNN hidden state and context vector
        # concatenated together (Luong eq. 5)
        rnn_output = rnn_output.squeeze(0)  # [64, 512]
        context = context.squeeze(1)  # [64, 512]
        concat_input = torch.cat((rnn_output, context), 1)  # [64, 1024]
        concat_output = torch.tanh(self.concat(concat_input))  # [64, 512]
        # Finally predict next token (Luong eq. 6, without softmax)
        output = self.out(concat_output)  # [64, output_size]
        # Return final output, hidden state, and attention weights (for visualization)
        return output, hidden, attn_weights
First, the author creates an EncoderRNN, i.e. the encoder.
Next comes Attn, which is short for the attention mechanism.
Finally there is LuongAttnDecoderRNN, the decoder.
That, roughly, is the author's model framework.
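To make the framework concrete, here is a minimal sketch of how the three classes might be wired together for a single decoding step. The toy hyperparameters, the shared nn.Embedding, the 'dot' attention choice and the SOS token id of 1 are all assumptions picked just for illustration; this is not the author's actual training script.

# Minimal sketch of wiring the pieces together (toy sizes, not the author's training code).
vocab_size, hidden_size, batch_size, max_len = 1000, 512, 64, 14

embedding = nn.Embedding(vocab_size, hidden_size)
encoder = EncoderRNN(vocab_size, hidden_size, embedding, n_layers=2, dropout=0.1).to(device)
decoder = LuongAttnDecoderRNN('dot', embedding, hidden_size, vocab_size, n_layers=2, dropout=0.1).to(device)

# Fake padded batch: (max_len, batch) word indices; lengths must be sorted longest-first.
input_seq = torch.randint(1, vocab_size, (max_len, batch_size)).to(device)
input_lengths = torch.full((batch_size,), max_len, dtype=torch.long)  # kept on CPU for packing

encoder_outputs, encoder_hidden = encoder(input_seq, input_lengths)

# Initialize the decoder hidden state from a slice of the encoder's final hidden state
# (one common choice), then decode a single step from an assumed SOS token id of 1.
decoder_hidden = encoder_hidden[:decoder.n_layers]
decoder_input = torch.ones(1, batch_size, dtype=torch.long, device=device)
decoder_output, decoder_hidden, attn_weights = decoder(decoder_input, decoder_hidden, encoder_outputs)
print(decoder_output.shape)  # torch.Size([64, 1000]) -- one score per vocabulary word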
Having seen the rough framework, let's take a closer look.
First up is EncoderRNN; the RNN it uses is a GRU.
Its forward takes input_seq, input_lengths, and hidden as parameters; hidden defaults to None when not supplied.
Now, a question for the class: what is input_lengths for?
Very good, you're all clever and already figured it out!
That's right: input_seq is the input sentences, converted into numbers of course (which means we still need a step that maps words to numeric indices), and input_lengths holds the actual length of each sentence in input_seq.
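For example, a tiny batch might look like this; the word indices are made up and 0 is used as the padding index (the actual word-to-index mapping is built in the data-processing step):

# Hypothetical batch of 3 sentences, already mapped to indices and padded with 0.
# Shape is (max_len, batch) because the GRU here is not batch_first.
input_seq = torch.tensor([[ 5,  9,  2],
                          [12,  4,  7],
                          [ 3,  8,  0],
                          [ 6,  0,  0]])   # (4, 3)
input_lengths = torch.tensor([4, 3, 2])    # true length of each sentence, longest first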
Next, forward actually operates on these inputs.
First input_seq goes through the embedding layer, producing embedded.
Then embedded is processed by
torch.nn.utils.rnn.pack_padded_sequence,
with input_lengths passed in as a required argument to help it do its job.
But what does this thing actually do? From its long name you can probably guess ('pack padded sequence'), but in the spirit of inquiry let's look it up.
A typical explanation found online reads roughly:
'pack' here is best understood as 'compress': it compresses a padded variable-length sequence (padding introduces redundancy, so we squeeze it out).
As this is only the second step of the walkthrough we won't dig any deeper here; the method will become clear by the end.
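Still, a quick sketch shows what packing actually does, continuing the toy batch above (the embedding size of 6 is arbitrary):

from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

embedding = nn.Embedding(20, 6, padding_idx=0)   # toy vocabulary of 20 words, embedding dim 6
embedded = embedding(input_seq)                  # (4, 3, 6)

packed = pack_padded_sequence(embedded, input_lengths)
print(packed.data.shape)    # torch.Size([9, 6]): only the 4 + 3 + 2 real time steps are kept
print(packed.batch_sizes)   # tensor([3, 3, 2, 1]): how many sequences are still active per step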
Next we see that the data packed by
torch.nn.utils.rnn.pack_padded_sequence
is taken and fed into the GRU for training,
with hidden passed to the GRU as a parameter.
torch.nn.utils.rnn.pad_packed_sequence
then pads the packed sequence back out; the padded positions are initialized to 0.
The returned tensor has size T×B×*, where T is the length of the longest sequence and B is the batch size; if batch_first=True, the result is B×T×* instead.
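Continuing the toy example, feeding the packed batch through a small bidirectional GRU and unpacking it again looks roughly like this:

gru = nn.GRU(6, 6, bidirectional=True)    # toy GRU matching the embedding dim of 6
packed_outputs, hidden = gru(packed)

outputs, lengths = pad_packed_sequence(packed_outputs)
print(outputs.shape)   # torch.Size([4, 3, 12]): back to T x B x (hidden * num_directions), zero-padded
print(lengths)         # tensor([4, 3, 2]): the original lengths come back as well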
Finally,
outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]  # Sum bidirectional outputs (1, batch, hidden)
Running the comment through Baidu Translate: "sum of the bidirectional outputs (1, batch, hidden)".
Question: is the first outputs slice the same as the second outputs slice being added to it?
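As a hint: the last dimension of a bidirectional GRU's output is the forward-direction output and the backward-direction output concatenated, so the two slices being added are generally not the same. A quick check with the toy tensors above:

half = gru.hidden_size                           # 6 in the toy example
forward_half = outputs[:, :, :half]              # outputs of the forward pass
backward_half = outputs[:, :, half:]             # outputs of the backward pass
print(torch.equal(forward_half, backward_half))  # almost certainly False: different directions
summed = forward_half + backward_half            # (4, 3, 6): what EncoderRNN actually returns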