Note 2: ELMo

Deep contextualized word representations

Peters et al., 2018

  1. ELMo (Embeddings from Language Models) learns a task-specific linear combination of the vectors stacked above each input word, which markedly improves performance over using only the top LSTM layer.
    • Higher-level LSTM states capture context-dependent aspects of word meaning.
    • Lower-level states capture basic aspects of syntax.
    • Unlike traditional word embeddings, ELMo word representations are functions of the entire input sentence.


      [Figure from Devlin et al. 2019]

2. Bidirectional language models (biLM)

Given a sequence of N tokens [t_1, t_2, \ldots, t_N],

  • A forward language model computes the probability of the sequence by modeling the probability of token t_k given the history [t_1,\ldots,t_{k-1}]:
    p(t_1, \ldots, t_N)=\prod_{k=1}^{N}{p(t_k|t_1, \ldots, t_{k-1})}
  • A backward language model runs over the sequence in reverse, modeling the probability of token t_k given the future context [t_{k+1},\ldots,t_N]:
    p(t_1, \ldots, t_N)=\prod_{k=1}^{N}{p(t_k|t_{k+1}, \ldots, t_N)}
  • A biLM combines both a forward and backward LM and jointly maximizes the log likelihoods of both directions:
    \sum_{k=1}^{N}{\log{p(t_k|t_1,\ldots,t_{k-1};\Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s)} \\+ \log{p(t_k|t_{k+1},\ldots, t_N;\Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s)}}
    where the forward and backward LMs share the parameters of the token representation layer \Theta_x and the Softmax layer \Theta_s, but keep separate LSTM parameters \overrightarrow{\Theta}_{LSTM} and \overleftarrow{\Theta}_{LSTM} (a toy numerical sketch of this objective follows).
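
    A minimal numerical sketch of the joint objective (not the authors' code; the probabilities below are made-up placeholders for what the two LMs would output):

    import numpy as np

    # Assume a forward and a backward LM have each produced per-token
    # conditional probabilities for a toy sequence [t_1, ..., t_4].
    p_forward = np.array([0.20, 0.35, 0.10, 0.25])   # p(t_k | t_1, ..., t_{k-1})
    p_backward = np.array([0.15, 0.30, 0.12, 0.40])  # p(t_k | t_{k+1}, ..., t_N)

    # The biLM jointly maximizes the sum of the forward and backward
    # log likelihoods over \Theta_x, the two LSTMs, and \Theta_s.
    joint_log_likelihood = np.sum(np.log(p_forward) + np.log(p_backward))
    print(joint_log_likelihood)  # scalar objective to be maximized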

3. ELMo

  • ELMo is a task-specific combination of the intermediate layer representations in the biLM.
  • For each token t_k, an L-layer biLM computes a set of 2L+1 representations
    R_k=\{x_k^{LM}, \overrightarrow{h}_{k,j}^{LM}, \overleftarrow{h}_{k,j}^{LM}|j=1,\ldots,L\}=\{h_{k,j}^{LM}|j=0,\ldots,L\}
    • h_{k,0}^{LM}=x_k^{LM} is the token layer.
    • h_{k,j}^{LM}=[\overrightarrow{h}_{k,j}^{LM}; \overleftarrow{h}_{k,j}^{LM}] concatenates the outputs of the j-th forward and backward LSTM layers at position k.
  • For use in a downstream model, ELMo collapses all layers in R_k into a single vector.
    • In the simplest case, ELMo just selects the top layer: E(R_k)=h_{k,L}^{LM}.
    • More generally, ELMo computes a task-specific weighting of all biLM layers (a toy sketch follows this list):
      {ELMo}_k^{task}=E(R_k; \Theta^{task})=\gamma^{task}\sum_{j=0}^{L}{s_j^{task}h_{k,j}^{LM}}
      where s^{task} are softmax-normalized weights and the scalar parameter \gamma^{task} allows the task model to scale the entire ELMo vector.
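
      A minimal sketch of this layer combination for a single token t_k, assuming L=2 biLM layers and 8-dimensional layer vectors; the activations and weights below are placeholders, not values from the paper:

      import numpy as np

      L, dim = 2, 8
      h = np.random.randn(L + 1, dim)          # rows: h_{k,0} (token layer), h_{k,1}, h_{k,2}

      s_raw = np.array([0.1, 0.5, 0.4])        # learned task-specific weights (pre-softmax)
      s = np.exp(s_raw) / np.exp(s_raw).sum()  # softmax-normalized s_j^{task}
      gamma = 1.5                              # task-specific scale gamma^{task}

      # ELMo_k^{task} = gamma^{task} * sum_j s_j^{task} * h_{k,j}^{LM}
      elmo_k = gamma * (s[:, None] * h).sum(axis=0)
      print(elmo_k.shape)                      # (8,) -- one vector fed to the task model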

Reference

Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
