Attention in Long Short-Term Memory Recurrent Neural Networks

by Jason Brownlee on June 30, 2017 in Deep Learning
 

The Encoder-Decoder architecture is popular because it has demonstrated state-of-the-art results across a range of domains.

A limitation of the architecture is that it encodes the input sequence to a fixed-length internal representation. This imposes limits on the length of input sequences that can be reasonably learned and results in worse performance for very long input sequences.

In this post, you will discover the attention mechanism for recurrent neural networks that seeks to overcome this limitation.

After reading this post, you will know:

  • The limitation of the encoder-decoder architecture and its fixed-length internal representation.
  • How the attention mechanism overcomes this limitation by allowing the network to learn where to pay attention in the input sequence for each item in the output sequence.
  • 5 applications of the attention mechanism with recurrent neural networks in domains such as text translation, speech recognition, and more.

Let’s get started.

Photo by Jonas Schleske, some rights reserved.

Problem With Long Sequences

The encoder-decoder recurrent neural network is an architecture where one set of LSTMs learns to encode input sequences into a fixed-length internal representation, and a second set of LSTMs reads the internal representation and decodes it into an output sequence.
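As a rough sketch of what such a model can look like in code, the snippet below wires an encoder LSTM to a decoder LSTM through nothing but the encoder's final states, using the Keras functional API; the token counts and hidden size are made-up placeholders, and data preparation, training, and inference are omitted.

```python
from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Illustrative sizes only (not from the original post).
num_encoder_tokens = 71
num_decoder_tokens = 93
latent_dim = 256

# Encoder: read the input sequence and keep only its final internal state.
# This pair of state vectors is the fixed-length internal representation.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: initialised with the encoder's final state, produce the output sequence.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True)
decoder_outputs = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```

Everything the decoder sees about the input has to pass through the two fixed-size state vectors, which is exactly the bottleneck discussed next.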

This architecture has shown state-of-the-art results on difficult sequence prediction problems like text translation and quickly became the dominant approach.

For example, see:

  • Sequence to Sequence Learning with Neural Networks, 2014
  • Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, 2014

The encoder-decoder architecture still achieves excellent results on a wide range of problems. Nevertheless, it suffers from the constraint that all input sequences are forced to be encoded to a fixed-length internal vector.

This is believed to limit the performance of these networks, especially when considering long input sequences, such as very long sentences in text translation problems.

A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus.

— Dzmitry Bahdanau, et al., Neural machine translation by jointly learning to align and translate, 2015

Attention within Sequences

Attention is the idea of freeing the encoder-decoder architecture from the fixed-length internal representation.

This is achieved by keeping the intermediate outputs from the encoder LSTM from each step of the input sequence and training the model to learn to pay selective attention to these inputs and relate them to items in the output sequence.

Put another way, each item in the output sequence is conditioned on selected items from the input sequence.
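To make the idea concrete, here is a minimal NumPy sketch of a single decoding step with additive (Bahdanau-style) attention; the encoder outputs, decoder state, and weight matrices are random placeholders rather than a trained model, and the dimensions are invented for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

T, d = 6, 8                 # 6 input timesteps, hidden size 8 (illustrative)
rng = np.random.RandomState(0)

H = rng.randn(T, d)         # intermediate encoder outputs, one per input step
s = rng.randn(d)            # current decoder state while generating one output item
W_h = rng.randn(d, d)       # learned projection of the encoder outputs
W_s = rng.randn(d, d)       # learned projection of the decoder state
v = rng.randn(d)            # learned scoring vector

# Score every encoder output against the decoder state, then normalise.
scores = np.tanh(H @ W_h + s @ W_s) @ v   # shape (T,)
weights = softmax(scores)                  # attention paid to each input step
context = weights @ H                      # weighted sum of encoder outputs, shape (d,)

print(weights.round(3), context.shape)
```

The context vector is recomputed for every output item, so the model is no longer forced to funnel the whole input through one fixed-length vector.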

Each time the proposed model generates a word in a translation, it (soft-)searches for a set of positions in a source sentence where the most relevant information is concentrated. The model then predicts a target word based on the context vectors associated with these source positions and all the previous generated target words.

… it encodes the input sentence into a sequence of vectors and chooses a subset of these vectors adaptively while decoding the translation. This frees a neural translation model from having to squash all the information of a source sentence, regardless of its length, into a fixed-length vector.

— Dzmitry Bahdanau, et al., Neural machine translation by jointly learning to align and translate, 2015

This increases the computational burden of the model, but results in a more targeted and better-performing model.

In addition, the model is also able to show how attention is paid to the input sequence when predicting the output sequence. This can help in understanding and diagnosing exactly what the model is considering and to what degree for specific input-output pairs.

The proposed approach provides an intuitive way to inspect the (soft-)alignment between the words in a generated translation and those in a source sentence. This is done by visualizing the annotation weights… Each row of a matrix in each plot indicates the weights associated with the annotations. From this we see which positions in the source sentence were considered more important when generating the target word.

— Dzmitry Bahdanau, et al., Neural machine translation by jointly learning to align and translate, 2015
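A rough sketch of such a plot, using entirely made-up attention weights and words rather than output from a real model, might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up annotation weights: rows are generated (output) words,
# columns are source (input) words.
source = ["le", "chat", "noir", "dort"]
target = ["the", "black", "cat", "sleeps"]
weights = np.array([[0.85, 0.05, 0.05, 0.05],
                    [0.05, 0.10, 0.80, 0.05],
                    [0.05, 0.80, 0.10, 0.05],
                    [0.05, 0.05, 0.05, 0.85]])

fig, ax = plt.subplots()
ax.imshow(weights, cmap='gray')
ax.set_xticks(range(len(source)))
ax.set_xticklabels(source)
ax.set_yticks(range(len(target)))
ax.set_yticklabels(target)
ax.set_xlabel("source sentence")
ax.set_ylabel("generated translation")
plt.show()
```

Bright cells mark the input words the model attended to most when emitting each output word; this is the kind of diagnostic plot reproduced in the application examples later in this post.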

Problem with Large Images

Convolutional neural networks applied to computer vision problems also suffer from similar limitations, where it can be difficult to learn models on very large images.

As a result, a series of glimpses can be taken of a large image to formulate an approximate impression of the image before making a prediction.

One important property of human perception is that one does not tend to process a whole scene in its entirety at once. Instead humans focus attention selectively on parts of the visual space to acquire information when and where it is needed, and combine information from different fixations over time to build up an internal representation of the scene, guiding future eye movements and decision making.

— Recurrent Models of Visual Attention, 2014

These glimpse-based modifications may also be considered attention, but they are not covered in this post; for an example, see the Recurrent Models of Visual Attention paper quoted above.

5 Examples of Attention in Sequence Prediction

This section provides some specific examples of how attention is used for sequence prediction with recurrent neural networks.

1. Attention in Text Translation

The motivating example mentioned above is text translation.

Given a French sentence as an input sequence, translate it and output an English sentence. Attention is used to focus on specific words in the input sequence for each word in the output sequence.

We extended the basic encoder–decoder by letting a model (soft-)search for a set of input words, or their annotations computed by an encoder, when generating each target word. This frees the model from having to encode a whole source sentence into a fixed-length vector, and also lets the model focus only on information relevant to the generation of the next target word.

— Dzmitry Bahdanau, et al., Neural machine translation by jointly learning to align and translate, 2015

Attentional Interpretation of French to English Translation
Taken from Dzmitry Bahdanau, et al., Neural machine translation by jointly learning to align and translate, 2015

2. Attention in Image Descriptions

Unlike the glimpse approach, the sequence-based attention mechanism can be applied to computer vision problems to show how a convolutional neural network can pay attention to parts of an image while outputting a sequence, such as a caption.

Given an image as input, output an English description of the image. Attention is used to focus on different parts of the image for each word in the output sequence.

We propose an attention based approach that gives state of the art performance on three benchmark datasets … We also show how the learned attention can be exploited to give more interpretability into the models generation process, and demonstrate that the learned alignments correspond very well to human intuition.

— Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2016

Attentional Interpretation of Output Words to Specific Regions on the Input Images
Taken from Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2016

3. Attention in Entailment

Given a premise scenario and a hypothesis about the scenario in English, output whether the premise contradicts, is unrelated to, or entails the hypothesis.

For example:

  • premise: “A wedding party taking pictures”
  • hypothesis: “Someone got married”

Attention is used to relate each word in the hypothesis to words in the premise, and vice versa.

We present a neural model based on LSTMs that reads two sentences in one go to determine entailment, as opposed to mapping each sentence independently into a semantic space. We extend this model with a neural word-by-word attention mechanism to encourage reasoning over entailments of pairs of words and phrases. … An extension with word-by-word neural attention surpasses this strong benchmark LSTM result by 2.6 percentage points, setting a new state-of-the-art accuracy…

— Reasoning about Entailment with Neural Attention, 2016

Attentional Interpretation of Premise Words to Hypothesis Words 
Taken from Reasoning about Entailment with Neural Attention, 2016

4. Attention in Speech Recognition

Given an input sequence of English speech snippets, output a sequence of phonemes.

Attention is used to relate each phoneme in the output sequence to specific frames of audio in the input sequence.

… a novel end-to-end trainable speech recognition architecture based on a hybrid attention mechanism which combines both content and location information in order to select the next position in the input sequence for decoding. One desirable property of the proposed model is that it can recognize utterances much longer than the ones it was trained on.

— Attention-Based Models for Speech Recognition, 2015.

Attentional Interpretation of Output Phoneme Location to Input Frames of Audio
Taken from Attention-Based Models for Speech Recognition, 2015

5. Attention in Text Summarization

Given an input sequence of an English article, output a sequence of English words that summarize the input.

Attention is used to relate each word in the output summary to specific words in the input document.

… a neural attention-based model for abstractive summarization, based on recent developments in neural machine translation. We combine this probabilistic model with a generation algorithm which produces accurate abstractive summaries.

— A Neural Attention Model for Abstractive Sentence Summarization, 2015

Attentional Interpretation of Words in the Input Document to the Output Summary
Taken from A Neural Attention Model for Abstractive Sentence Summarization, 2015.

Further Reading

This section provides additional resources if you would like to learn more about adding attention to LSTMs.

At the time of writing, Keras does not offer attention out of the box, but a few third-party implementations are available.
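As a very rough illustration of what such third-party layers typically do, the sketch below defines a custom Keras layer that pools LSTM timestep outputs with learned attention weights; the layer name, design, and toy data are illustrative assumptions, not taken from any particular library.

```python
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import LSTM, Dense, Layer

class AttentionPooling(Layer):
    """Collapse (batch, timesteps, features) to (batch, features) with learned weights."""

    def build(self, input_shape):
        self.w = self.add_weight(name="att_w",
                                 shape=(int(input_shape[-1]), 1),
                                 initializer="glorot_uniform",
                                 trainable=True)
        super(AttentionPooling, self).build(input_shape)

    def call(self, inputs):
        # Score each timestep, normalise the scores, take the weighted sum.
        scores = K.squeeze(K.tanh(K.dot(inputs, self.w)), axis=-1)   # (batch, timesteps)
        weights = K.expand_dims(K.softmax(scores), axis=-1)          # (batch, timesteps, 1)
        return K.sum(inputs * weights, axis=1)                       # (batch, features)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[-1])

# Toy usage on random data, just to show the wiring.
model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(10, 8)),
    AttentionPooling(),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(np.random.rand(4, 10, 8), np.random.randint(0, 2, size=(4, 1)),
          epochs=1, verbose=0)
```

Note that this sketch attends over the encoder outputs only once per sequence; the translation-style attention described above recomputes the weights for every output step.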

Do you know of some good resources on attention in recurrent neural networks?
Let me know in the comments.

Summary

In this post, you discovered the attention mechanism for sequence prediction problems with LSTM recurrent neural networks.

Specifically, you learned:

  • That the encoder-decoder architecture for recurrent neural networks uses a fixed-length internal representation that imposes a constraint that limits learning very long sequences.
  • That attention overcomes the limitation in the encoder-decoder architecture by allowing the network to learn where to pay attention in the input for each item in the output sequence.
  • That the approach has been used across different types of sequence prediction problems, including text translation, speech recognition, and more.

Do you have any questions about attention in recurrent neural networks?
Ask your questions in the comments below and I will do my best to answer.
