Link of the Paper: https://arxiv.org/abs/1706.03762

Motivation:

  • The inherently sequential nature of Recurrent Models precludes parallelization within training examples.
  • Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences. In all but a few cases, however, such attention mechanisms are used in conjunction with a recurrent network.

Innovation:

  • The Transformer is the first sequence transduction model relying entirely on self-attention to compute representations of its input and output, without using sequence-aligned RNNs or convolutions. The Transformer follows the overall architecture of stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

    • Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. The authors employ a residual connection around each of the two sub-layers, followed by layer normalization. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512. (A minimal sketch of this residual sub-layer wrapper follows the decoder description below.)
    • Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, they employ residual connections around each of the sub-layers, followed by layer normalization. They also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.
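
To make the shared sub-layer pattern concrete, here is a minimal NumPy sketch (my own illustration, not the authors' code) of the LayerNorm(x + Sublayer(x)) wrapper used around every encoder and decoder sub-layer; the learned gain/bias of layer normalization is omitted for brevity, and d_model = 512 follows the paper.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position (last axis) to zero mean and unit variance.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # Output of each sub-layer: LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))

# x: (sequence_length, d_model) activations for one example.
x = np.random.randn(10, 512)
out = residual_sublayer(x, lambda h: h @ (np.random.randn(512, 512) * 0.01))
print(out.shape)  # (10, 512)
```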

  • Scaled Dot-Product Attention and Multi-Head Attention. The Transformer uses multi-head attention in three different ways:

    • In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [ Google’s neural machine translation system: Bridging the gap between human and machine translation. ].
    • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. [ More about Attention Definition ]
    • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. They need to prevent leftward information flow in the decoder to preserve the auto-regressive property. They implement this inside of scaled dot-product attention by masking out (setting to -∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

In terms of encoder-decoder attention, the query Q usually comes from the hidden state of the decoder, whereas the key K comes from the hidden state of the encoder; each key's corresponding value V is scaled by a normalized weight that represents how much attention that key receives.    -- The Transformer - Attention is all you need.
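
As a concrete illustration of the computation described above, below is a minimal NumPy sketch (not the paper's code) of scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V, including the optional mask that sets illegal (future) positions to -∞ before the softmax, as used in the decoder self-attention. Multi-head attention simply runs several such attention functions in parallel on learned projections of Q, K, V and concatenates the results.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # compatibility of each query with each key
    if mask is not None:
        scores = np.where(mask, scores, -np.inf)  # block illegal (e.g. future) positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted sum of the values

# Decoder self-attention: a lower-triangular mask keeps position i from attending to positions > i.
n, d = 5, 64
x = np.random.randn(n, d)
causal_mask = np.tril(np.ones((n, n), dtype=bool))
out = scaled_dot_product_attention(x, x, x, mask=causal_mask)
print(out.shape)  # (5, 64)
```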


  • Positional Encoding: Add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. In this work, they use sine and cosine functions of different frequencies (a short sketch follows the formulas below):

    • PE(pos, 2i) = sin( pos / 10000^(2i/d_model) )
    • PE(pos, 2i+1) = cos( pos / 10000^(2i/d_model) )
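
A minimal NumPy sketch (my own illustration, not the paper's code) of these sinusoidal encodings; each dimension pair (2i, 2i+1) shares the frequency 1 / 10000^(2i/d_model), and the resulting matrix is added element-wise to the input embeddings.

```python
import numpy as np

def positional_encoding(max_len, d_model=512):
    # PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]      # (1, d_model/2), the even indices 2i
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

pe = positional_encoding(max_len=50)
print(pe.shape)  # (50, 512); added to the (50, 512) input embeddings
```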

Improvement:

  • Position-wise Feed-Forward Networks: In addition to attention sub-layers, each of the layers in their encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. FFN(x) = max(0, xW1 + b1)W2 + b2.
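
A minimal NumPy sketch of this position-wise FFN (assumed toy weights, not the authors' code); per the paper the inner dimension is d_ff = 2048 and the input/output dimension is d_model = 512, and the same two linear transformations are applied independently at every position.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2, applied to each position separately and identically.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff, seq_len = 512, 2048, 10
x = np.random.randn(seq_len, d_model)
W1, b1 = np.random.randn(d_model, d_ff) * 0.01, np.zeros(d_ff)
W2, b2 = np.random.randn(d_ff, d_model) * 0.01, np.zeros(d_model)
print(position_wise_ffn(x, W1, b1, W2, b2).shape)  # (10, 512)
```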

General Points:

  • An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
  • Why Self-Attention: the authors compare self-attention to recurrent and convolutional layers on three criteria:
    • The total computational complexity per layer.
    • The amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.
    • The path length between long-range dependencies in the network.
