Review: Conditional LMs

Note that in the encoder, we reverse the input sequence fed to the RNN, and this performs well.

Then the decoder network (also an RNN) uses the beam-search algorithm to generate the target sentence word by word.

The above network is a translation model, but it still needs improvement.

An essential part of the model is the [Attention mechanism].

Conditional LMs with Attention

First, let's talk about the [condition].

In the last blog, we compressed a lot of information into a single finite-sized vector and used it as the condition. That is to say, in the decoder, at each step we use this vector as the condition to predict the next word.

But is this really the right thing to do?

An obvious problem is that a finite-sized vector cannot hold all the information, since the input sentence can be arbitrarily long. And gradients have a long way to travel, so even LSTMs can forget!

In the translation task, we can solve this problem as follows:

Represent the source sentence as a matrix whose size varies with the sentence length.

Then generate the target sentence from that matrix. (The condition vector is computed from the matrix.)

So how do we build this matrix?

The simplest way to do this is [With Concatenation].

We already know that words can be represented by embeddings such as Word2Vec, and all the embeddings have the same size d. For a sentence of n words, we can simply put the word embeddings side by side as columns, so the matrix has size d × n, where n is the length of the sentence. That's a really easy solution, but it is useful.
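Here is a minimal sketch of this idea in Python; the toy embedding table and sizes are made up for illustration:

```python
import numpy as np

d = 4  # embedding dimension (toy size)
# a hypothetical embedding table: word -> d-dimensional vector
emb = {w: np.random.randn(d) for w in ["the", "cat", "sat"]}

sentence = ["the", "cat", "sat"]
# stack each word's embedding as a column: F has shape (d, n)
F = np.stack([emb[w] for w in sentence], axis=1)
print(F.shape)  # (4, 3) -- the matrix grows with the sentence length n
```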

Another solution, proposed by Gehring et al. (2016, FAIR), is [With Convolutional Nets].

That is, we form the concatenated embedding matrix as above, and then apply a CNN with several filters to it, finally producing a new matrix that represents the information. In my opinion, this is a bit like extracting high-level features in image processing.
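A rough sketch of this idea (not Gehring et al.'s exact architecture; the layer sizes here are invented):

```python
import torch
import torch.nn as nn

d, n = 8, 5               # embedding dim and sentence length (toy sizes)
F = torch.randn(1, d, n)  # (batch, channels=d, length=n): the embedding matrix

# one 1-D convolution sliding over word positions; real models stack several
conv = nn.Conv1d(in_channels=d, out_channels=d, kernel_size=3, padding=1)
F_new = torch.tanh(conv(F))  # still (1, d, n): a new matrix of higher-level features
print(F_new.shape)
```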

The most important method is [using Bidirectional RNNs].

In one direction, we run an RNN over the embeddings and get n hidden states, where n is the length of the sentence.

In the other direction, we run another RNN over the reversed input and again get n hidden states.

We pair up the forward and backward hidden states at each position (2n states in total) to build the condition matrix.
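A minimal sketch with PyTorch's built-in bidirectional LSTM (toy sizes, not any particular paper's setup):

```python
import torch
import torch.nn as nn

d, n = 8, 5
x = torch.randn(1, n, d)  # (batch, seq_len, embedding_dim)

# bidirectional=True runs one RNN forward and one backward, and
# concatenates their hidden states at each position
rnn = nn.LSTM(input_size=d, hidden_size=d, batch_first=True, bidirectional=True)
out, _ = rnn(x)           # (1, n, 2*d)

F = out.squeeze(0).T      # condition matrix F: (2*d, n)
print(F.shape)
```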

There are other approaches still to be explored.

So, on to the important part: how to use the [Attention model] to generate the condition vector from the condition matrix F.

First, consider the decoder RNN:

We have a start hidden state and generate the next hidden state from the input x; we also still need a condition vector.

Suppose we also have an attention vector a. We can then generate the condition vector like this:

c = Fa, where F is the condition matrix and a is the attention vector. This can be understood as weighting the columns of the condition matrix so that we pay more attention to certain parts of the source sentence.

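In code, this weighted sum is just a matrix-vector product (toy numbers):

```python
import numpy as np

F = np.random.randn(6, 4)           # condition matrix: 4 source positions
a = np.array([0.1, 0.7, 0.1, 0.1])  # attention vector, sums to 1
c = F @ a                           # condition vector: mostly "looks at" position 2
print(c.shape)                      # (6,)
```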

So how do we generate the attention vector?

That is, how do we compute a.

We can do this with the following method:

At time t we know the hidden state h_{t-1}, and we apply a linear transformation to it to get a vector r = V h_{t-1}, where V is a learned parameter. Then we take the dot product of r with every column of the source matrix F to compute the attention energies u = F^T r. Finally, we obtain the attention vector a by applying a softmax, which exponentiates the energies and normalizes them to sum to 1.
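A sketch of this dot-product attention step (the shapes are made up):

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())  # subtract the max for numerical stability
    return e / e.sum()

dF, dh, n = 6, 8, 4
F = np.random.randn(dF, n)    # source matrix, one column per word
h_prev = np.random.randn(dh)  # decoder hidden state at time t-1
V = np.random.randn(dF, dh)   # learned projection

r = V @ h_prev                # project the decoder state into F's space
u = F.T @ r                   # attention energies: one score per column
a = softmax(u)                # attention vector, sums to 1
c = F @ a                     # condition vector for time t
```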

This is a simplified version of Bahdanau et al.'s solution.

A more complex way to generate the attention vector is the [Nonlinear Attention-Energy Model].

Given r as above (r = V h_{t-1}), we compute the energies u = v^T tanh(WF + r) and then a = softmax(u), where v, W, and V are learned parameters. How much the r term actually helps has not been verified.
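The same setup with the nonlinear energy (again just a sketch; the shapes are invented):

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

k, dF, dh, n = 5, 6, 8, 4
F = np.random.randn(dF, n)
h_prev = np.random.randn(dh)
V = np.random.randn(k, dh)  # projects the decoder state
W = np.random.randn(k, dF)  # projects the source columns
v = np.random.randn(k)      # final scoring vector

r = V @ h_prev
u = v @ np.tanh(W @ F + r[:, None])  # r is broadcast-added to every column
a = softmax(u)                       # nonlinear attention weights
```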

Summary

Putting it all together, we get what is called the conditional LM with attention.
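To make the whole loop concrete, here is one decoder step of a conditional LM with attention, written as a plain vanilla-RNN sketch with invented shapes, not anyone's exact model:

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

dF, dh, dx, vocab, n = 6, 8, 5, 10, 4
F = np.random.randn(dF, n)      # encoder output (condition matrix)
h = np.random.randn(dh)         # decoder hidden state
x = np.random.randn(dx)         # embedding of the previous output word
V = np.random.randn(dF, dh)     # attention projection
Wh = np.random.randn(dh, dh)    # recurrence weights
Wx = np.random.randn(dh, dx)
Wc = np.random.randn(dh, dF)
Wo = np.random.randn(vocab, dh) # output projection

a = softmax(F.T @ (V @ h))             # attend using the current state
c = F @ a                              # condition vector
h = np.tanh(Wh @ h + Wx @ x + Wc @ c)  # RNN update, conditioned on c
p = softmax(Wo @ h)                    # distribution over the next word
```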

 

Attention in machine translation.

Adding attention to the seq2seq translation model yields +11 BLEU.

An improvement in the computation:

Note the difference from the model above, but whether it is useful is not certain.

 

About Gradients

We train with ordinary gradient descent. Soft attention is just matrix products and a softmax, so the whole model is differentiable end to end and gradients flow back through the attention weights into the encoder.

 

Comprehension

Cho’s question: does a translator read and memorize the input sentence/document and then generate the output?

• Compressing the entire input sentence into a vector basically says “memorize the sentence”

• Common-sense experience says translators refer back and forth to the input (also backed up by eye-tracking studies).

 

Image caption generation with attention: brief introduction

The main idea: we encode the picture into a matrix F, use it to compute attention, and finally use the attention to generate the caption.

Generate matrix F:
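A common trick (as in Xu et al.'s "Show, Attend and Tell") is to take a convolutional feature map and flatten its spatial grid, so each image region becomes one column of F. A sketch with made-up sizes:

```python
import torch

# pretend this feature map came out of a CNN: (batch, channels, height, width)
feat = torch.randn(1, 512, 14, 14)

C, H, W = feat.shape[1:]
F = feat.view(C, H * W)  # F: (512, 196), one column per image region
# the decoder then attends over the 196 regions exactly as over source words
print(F.shape)
```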

Attention “weights” (a) are computed using exactly the same technique as discussed above.

Other techniques: stochastic hard attention (sample a column from the matrix F instead of taking a weighted average over it) and learned hard attention. To be honest, I don't know much about these.
