Link of the Paper: https://arxiv.org/abs/1706.03762

Motivation:

  • The inherently sequential nature of recurrent models precludes parallelization within training examples, which becomes critical at longer sequence lengths.
  • Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences. In all but a few cases, however, such attention mechanisms are used in conjunction with a recurrent network.

Innovation:

  • The Transformer is the first sequence transduction model relying entirely on self-attention to compute representations of its input and output, without using sequence-aligned RNNs or convolutions. It follows the overall architecture of stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

    • Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. The authors employ a residual connection around each of the two sub-layers, followed by layer normalization. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.
    • Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, they employ residual connections around each of the sub-layers, followed by layer normalization. They also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i. (A minimal sketch of the residual sub-layer pattern follows this list.)
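The shared residual-plus-normalization pattern can be made concrete with a short sketch. This is a minimal NumPy illustration, not the authors' code: `layer_norm` omits the learnable gain and bias of a full LayerNorm, and `self_attn` / `feed_forward` are placeholders for the real sub-layer functions.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's feature vector to zero mean and unit variance.
    # (A full LayerNorm also has learnable gain and bias, omitted here.)
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # LayerNorm(x + Sublayer(x)): the wrapper applied around every
    # attention and feed-forward sub-layer in the encoder and decoder.
    return layer_norm(x + sublayer(x))

def encoder(x, self_attn, feed_forward, n_layers=6):
    # A stack of N = 6 identical layers, each with two sub-layers.
    for _ in range(n_layers):
        x = residual_sublayer(x, self_attn)
        x = residual_sublayer(x, feed_forward)
    return x
```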

  • Scaled Dot-Product Attention and Multi-Head Attention. The Transformer uses multi-head attention in three different ways:

    • In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [ Google’s neural machine translation system: Bridging the gap between human and machine translation. ].
    • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. [ More about Attention Definition ]
    • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. They need to prevent leftward information flow in the decoder to preserve the auto-regressive property. They implement this inside of scaled dot-product attention by masking out (setting to -∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2 and the sketch after this list.
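Below is a minimal NumPy sketch of scaled dot-product attention with an optional causal mask. The shapes and variable names are my own, but the masking follows the paper's description of setting illegal connections to -∞ before the softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=False):
    # Q: (len_q, d_k), K: (len_k, d_k), V: (len_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # compatibility of each query with each key
    if causal:
        # Decoder self-attention: position i must not attend to positions j > i,
        # so set those scores to -inf before the softmax.
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    weights = softmax(scores, axis=-1)  # normalized attention weights
    return weights @ V                  # output: weighted sum of the values
```

With `causal=False` the same function serves encoder self-attention and encoder-decoder attention; only decoder self-attention needs the mask.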

In terms of encoder-decoder attention, the query Q is usually the hidden state of the decoder, while the key K is the hidden state of the encoder; the compatibility of Q with each K, normalized by the softmax, is the weight that determines how much attention the corresponding value V receives.    -- The Transformer - Attention is all you need.

    

  • Positional Encoding: Since the model contains no recurrence and no convolution, the authors inject information about token order by adding "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. In this work, they use sine and cosine functions of different frequencies (a short sketch follows the formulas below):

    • PE(pos, 2i) = sin( pos / 10000^(2i/d_model) )
    • PE(pos, 2i+1) = cos( pos / 10000^(2i/d_model) )
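A short NumPy sketch of these sinusoidal encodings; `max_len` is a hypothetical sequence length, and d_model = 512 as in the paper.

```python
import numpy as np

def positional_encoding(max_len, d_model=512):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    two_i = np.arange(0, d_model, 2)[None, :]  # even dimension indices 2i
    angle = pos / np.power(10000.0, two_i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)  # even dimensions
    pe[:, 1::2] = np.cos(angle)  # odd dimensions
    return pe
```

Each dimension of the encoding corresponds to a sinusoid, with wavelengths forming a geometric progression from 2π to 10000 · 2π.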

Improvement:

  • Position-wise Feed-Forward Networks: In addition to attention sub-layers, each of the layers in their encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between: FFN(x) = max(0, xW1 + b1)W2 + b2. The input and output have dimension d_model = 512, and the inner layer has dimension d_ff = 2048 (a short sketch follows).
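As a minimal sketch (not the authors' code), the FFN is just two matrix multiplications with a ReLU in between, applied identically at every position; the shapes below assume the paper's d_model = 512 and d_ff = 2048.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2, applied at each position separately.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

# Hypothetical shapes: x (seq_len, 512), W1 (512, 2048), b1 (2048,),
# W2 (2048, 512), b2 (512,).
```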

General Points:

  • An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
  • Why Self-Attention: the authors compare self-attention to recurrent and convolutional layers on three criteria:
    • The total computational complexity per layer.
    • The amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.
    • The path length between long-range dependencies in the network; the shorter these paths, the easier it is to learn such dependencies.
