Do Transformer Modifications Transfer Across Implementations and Applications?
https://arxiv.org/abs/2102.11972
Submitted on 23 Feb 2021 (v1), last revised 10 Sep 2021 (this version, v2)
Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, Colin Raffel
The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.
苏老师 (Teacher Su): Besides that article, RMS Norm was also used by Google in T5, and the paper "Do Transformer Modifications Transfer Across Implementations and Applications?" ran fairly thorough comparative experiments that showed RMS Norm's advantages. Seen in this light, RMS Norm may well replace Layer Normalization and become standard in Transformers.
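To make the comparison concrete, here is a minimal NumPy sketch of the two normalizations being discussed. The function names, tensor shapes, and the eps value are illustrative assumptions, not taken from the paper; the formulas follow the standard LayerNorm and RMS Norm definitions (RMS Norm drops the mean-centering and the bias term, rescaling only by the root mean square).

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-6):
    # LayerNorm: center by the mean, then scale by the standard deviation.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gain + bias

def rms_norm(x, gain, eps=1e-6):
    # RMS Norm: no centering and no bias; rescale by the root mean square only.
    rms = np.sqrt(np.mean(np.square(x), axis=-1, keepdims=True) + eps)
    return x / rms * gain

# Toy usage (hypothetical shapes): a batch of 2 hidden states of width 8.
x = np.random.randn(2, 8).astype(np.float32)
gain = np.ones(8, dtype=np.float32)
bias = np.zeros(8, dtype=np.float32)
print(layer_norm(x, gain, bias).shape, rms_norm(x, gain).shape)
```

The practical appeal is that RMS Norm removes the mean and bias computations while, per the comparisons cited above, matching or exceeding LayerNorm's quality in Transformer models.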