Transformer blog series notes
- A full Transformer walkthrough, covering Seq2Seq, attention, self-attention, multi-headed attention, positional encoding, residuals, the final linear and softmax layer, the loss function, greedy decoding, and beam search (a minimal self-attention sketch follows below).
https://jalammar.github.io/illustrated-transformer/
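As a quick companion while reading, here is a minimal sketch of the scaled dot-product self-attention the post illustrates. It uses NumPy only; the shapes, weight matrices, and function names are chosen for illustration and are not taken from the post.

```python
# Minimal scaled dot-product self-attention sketch (illustrative names/shapes).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot products between queries and keys
    weights = softmax(scores, axis=-1)       # each row: attention distribution over positions
    return weights @ V                       # weighted sum of value vectors

# Tiny usage example with random inputs (dimensions are arbitrary).
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended vector per input position
```

Multi-headed attention, as the post describes, runs several such attention functions in parallel with separate projections and concatenates the results.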