Deep contextualized word representations
Peters et al., 2018
- ELMo (Embeddings from Language Models) learns a linear combination of the vectors stacked above each input word for each end task, which markedly improves performance over using only the top LSTM layer.
- Higher-level LSTM states capture context-dependent aspects of word meaning.
- Lower-level states capture basic aspects of syntax.
- Unlike traditional word embeddings, ELMo word representations are functions of the entire input sentence [Devlin et al., 2018].
2. Bidirectional language models (biLM)
Given a sequence of $N$ tokens $(t_1, t_2, \ldots, t_N)$:
- A forward language model computes the probability of the sequence by modeling the probability of token $t_k$ given the history $(t_1, \ldots, t_{k-1})$:
  $$p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, t_2, \ldots, t_{k-1})$$
- A backward language model computes the probability of token $t_k$ given the future context $(t_{k+1}, \ldots, t_N)$:
  $$p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N)$$
- A biLM combines a forward and a backward LM and jointly maximizes the log likelihood of both directions:
  $$\sum_{k=1}^{N} \Big( \log p(t_k \mid t_1, \ldots, t_{k-1};\ \Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s) + \log p(t_k \mid t_{k+1}, \ldots, t_N;\ \Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s) \Big)$$
  where the forward and backward LMs share the parameters of the token representation layer ($\Theta_x$) and the Softmax layer ($\Theta_s$), but keep separate LSTM parameters ($\overrightarrow{\Theta}_{LSTM}$, $\overleftarrow{\Theta}_{LSTM}$).
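The shared/separate parameterization can be illustrated with a rough PyTorch-style sketch (all class and variable names below are assumptions for illustration, not the authors' code): the token embedding and Softmax projection are tied across directions, while each direction has its own LSTM, and the loss sums the forward and backward log likelihoods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLM(nn.Module):
    """Sketch of a biLM: shared embedding/Softmax, separate directional LSTMs."""
    def __init__(self, vocab_size, dim=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)                        # shared token layer (Theta_x)
        self.fwd_lstm = nn.LSTM(dim, dim, num_layers, batch_first=True)   # forward-only parameters
        self.bwd_lstm = nn.LSTM(dim, dim, num_layers, batch_first=True)   # backward-only parameters
        self.softmax = nn.Linear(dim, vocab_size)                         # shared Softmax layer (Theta_s)

    def forward(self, tokens):
        # tokens: (batch, N) integer ids; assumes no padding for simplicity
        x = self.embed(tokens)
        h_fwd, _ = self.fwd_lstm(x)             # state at position k has seen t_1..t_k
        h_bwd, _ = self.bwd_lstm(x.flip(1))     # run over the reversed sequence
        h_bwd = h_bwd.flip(1)                   # state at position k has seen t_k..t_N
        # Forward direction: the state at position k-1 predicts token k.
        fwd_logits = self.softmax(h_fwd[:, :-1])
        fwd_loss = F.cross_entropy(fwd_logits.reshape(-1, fwd_logits.size(-1)),
                                   tokens[:, 1:].reshape(-1))
        # Backward direction: the state at position k+1 predicts token k.
        bwd_logits = self.softmax(h_bwd[:, 1:])
        bwd_loss = F.cross_entropy(bwd_logits.reshape(-1, bwd_logits.size(-1)),
                                   tokens[:, :-1].reshape(-1))
        return fwd_loss + bwd_loss              # jointly maximize both log likelihoods
```

In the actual biLM, the token layer is a character-level CNN rather than a plain word embedding, and residual connections are added between LSTM layers; the sketch keeps a word embedding only for brevity.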
3. ELMo
- ELMo is a task-specific combination of the intermediate layer representations in the biLM.
- For each token $t_k$, an $L$-layer biLM computes a set of $2L + 1$ representations:
  $$R_k = \{\, \mathbf{x}_k^{LM},\ \overrightarrow{\mathbf{h}}_{k,j}^{LM},\ \overleftarrow{\mathbf{h}}_{k,j}^{LM} \mid j = 1, \ldots, L \,\} = \{\, \mathbf{h}_{k,j}^{LM} \mid j = 0, \ldots, L \,\}$$
  - $\mathbf{h}_{k,0}^{LM} = \mathbf{x}_k^{LM}$ is the token layer.
  - $\mathbf{h}_{k,j}^{LM} = [\overrightarrow{\mathbf{h}}_{k,j}^{LM};\ \overleftarrow{\mathbf{h}}_{k,j}^{LM}]$ concatenates the outputs of the $j$-th forward and backward LSTM layers at position $k$.
- For use in a downstream model, ELMo collapses all layers in $R_k$ into a single vector $\mathbf{ELMo}_k = E(R_k; \Theta_e)$.
  - In the simplest case, ELMo just selects the top layer, $E(R_k) = \mathbf{h}_{k,L}^{LM}$.
- More generally, ELMo computes a task-specific weighting of all biLM layers:
  $$\mathbf{ELMo}_k^{task} = E(R_k; \Theta^{task}) = \gamma^{task} \sum_{j=0}^{L} s_j^{task}\, \mathbf{h}_{k,j}^{LM}$$
  where $s^{task}$ are softmax-normalized weights and the scalar parameter $\gamma^{task}$ allows the task model to scale the entire ELMo vector.
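A minimal sketch of this weighting is below, assuming the biLM layer outputs are stacked into a single tensor; the class and parameter names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TaskWeightedELMo(nn.Module):
    """Sketch: softmax-normalized layer weights s_j plus a scalar gamma."""
    def __init__(self, num_layers):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(num_layers))   # s_j^{task}, before softmax
        self.gamma = nn.Parameter(torch.ones(()))        # gamma^{task}

    def forward(self, layer_outputs):
        # layer_outputs: (L + 1, batch, seq_len, dim), stacking the token layer
        # and the L biLSTM layers of the (frozen) biLM.
        weights = torch.softmax(self.s, dim=0)                         # normalize s_j
        mixed = torch.einsum('l,lbsd->bsd', weights, layer_outputs)    # weighted sum over layers
        return self.gamma * mixed                                      # ELMo_k^{task}
```

Freezing the biLM and learning only `s` and `gamma` per task mirrors how ELMo is plugged into a downstream model as a feature extractor.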
References
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.