【NLP】Recurrent Neural Network and Language Models
0. Overview
What is a language model?
Language modelling is a time-series prediction problem.
A language model assigns a probability to a sequence of words, and the probabilities of all possible sequences sum to one.
Many Natural Language Processing tasks can be structured as (conditional) language modelling.
Such as Translation:
P(certain Chinese text | given English text)
Note that the probability of a whole sequence decomposes by the chain rule (repeated application of the product rule), as shown below.
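In symbols, the standard chain-rule factorisation:

p(w_1, w_2, \dots, w_N) = \prod_{i=1}^{N} p(w_i \mid w_1, \dots, w_{i-1})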
How to evaluate a Language Model?
It is typically measured with cross entropy (or the closely related perplexity) on held-out text.
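Concretely, for a held-out sequence of N words the per-word cross entropy, and the corresponding perplexity, are:

H = -\frac{1}{N} \sum_{i=1}^{N} \log_2 p(w_i \mid w_1, \dots, w_{i-1}), \qquad \mathrm{PPL} = 2^{H}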

Three data sets:
1 Penn Treebank: www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
2 Billion Word Corpus: code.google.com/p/1-billion-word-language-modeling-benchmark/
3 WikiText datasets: Pointer Sentinel Mixture Models. Merity et al., arXiv 2016
Overview: three approaches to building language models:
Count-based n-gram models: approximate the history of observed words with just the previous n words.
Neural n-gram models: embed the same fixed n-gram history in a continuous space, and thus better capture correlations between histories.
Recurrent Neural Networks: drop the fixed n-gram history and compress the entire history into a fixed-length vector, enabling long-range correlations to be captured.
1. N-Gram models:
Assumptions:
Only the previous history matters.
Only the previous k-1 words are included in the history.
This gives a kth-order Markov model.
2-gram language model:

The conditioning context, wi−1, is called the history
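In symbols, the 2-gram approximation keeps only the immediately preceding word as context:

p(w_1, w_2, \dots, w_N) \approx p(w_1) \prod_{i=2}^{N} p(w_i \mid w_{i-1})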
Estimating the probabilities:
(For example: 3-gram)
(Count how often w1, w2, w3 appear together in the corpus, and normalise by the count of the history w1, w2.)
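In symbols, this is the maximum-likelihood estimate from counts:

p(w_3 \mid w_1, w_2) = \frac{\mathrm{count}(w_1, w_2, w_3)}{\mathrm{count}(w_1, w_2)}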
Interpolated Back-Off:
Some phrases never occur in the training corpus, so their estimated probability is zero. To avoid this, we use interpolated back-off: interpolate the lower-order k-gram models (k = n-1, n-2, ..., 1) with the n-gram model.
A simple approach:
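For a trigram model, a sketch of simple linear interpolation (the lambda weights sum to one and are tuned on held-out data):

p_{\mathrm{interp}}(w_n \mid w_{n-2}, w_{n-1}) = \lambda_3\, p(w_n \mid w_{n-2}, w_{n-1}) + \lambda_2\, p(w_n \mid w_{n-1}) + \lambda_1\, p(w_n), \qquad \lambda_1 + \lambda_2 + \lambda_3 = 1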

Summary for n-gram:
Good: easy to train. Fast.
Bad: large n-grams are sparse; it is hard to capture long-range dependencies; correlations between similar word distributions cannot be captured; morphological regularities are not shared (running – jumping).
2. Neural N-Gram Language Models
Use a feed-forward network like:

A trigram (3-gram) Neural Network Language Model, for example:


The wi are one-hot vectors and the pi are output distributions; both have dimension |V| (the number of words in the vocabulary).
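One common formulation of the layers in this figure (a sketch; g is a nonlinearity such as tanh, and V, W, b, c are learned parameters):

h = g(V [w_{i-2}; w_{i-1}] + c)
\hat{p}_i = \mathrm{softmax}(W h + b)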

(A sample: the detailed computation graph.)

Define the loss: cross entropy:
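With a one-hot target w_i and a predicted distribution \hat{p}_i, the objective over N training tokens is the average cross entropy:

F = -\frac{1}{N} \sum_{i=1}^{N} w_i^{\top} \log \hat{p}_i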

Training: use Gradient Descent

And a sample of training:
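As an illustration only (this is not the original figure), here is a minimal PyTorch sketch of a trigram neural LM trained with cross entropy and SGD; the vocabulary size, hidden size, and random toy batch are made up for the example.

import torch
import torch.nn as nn

# Hypothetical sizes for illustration.
V, H = 1000, 128          # vocabulary size, hidden size

class TrigramNLM(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)   # replaces the one-hot vector times V multiply
        self.hidden = nn.Linear(2 * hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, w_prev2, w_prev1):
        # Concatenate the embeddings of the two history words.
        x = torch.cat([self.embed(w_prev2), self.embed(w_prev1)], dim=-1)
        h = torch.tanh(self.hidden(x))
        return self.out(h)                                   # unnormalised scores; softmax is inside the loss

model = TrigramNLM(V, H)
loss_fn = nn.CrossEntropyLoss()                              # cross entropy over the |V| classes
optim = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy batch: (w_{i-2}, w_{i-1}) -> w_i, indices drawn at random just to show the shapes.
w2, w1, target = (torch.randint(0, V, (32,)) for _ in range(3))
for step in range(100):
    optim.zero_grad()
    loss = loss_fn(model(w2, w1), target)                    # gradient descent on the cross-entropy loss
    loss.backward()
    optim.step()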

Comparison with count-based n-gram LMs:
Good: better performance on unseen n-grams, though poorer on seen n-grams (one solution: add direct (linear) n-gram features); uses less memory than count-based n-gram models.
Bad: the number of parameters in the model scales with the n-gram size, and there is still a limit on the longest dependency that can be captured.
3. Recurrent Neural Network LM
That is, we use a recurrent neural network to build our LM.
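A standard (Elman-style) formulation of the recurrence, where x_n is the embedded input word at step n and the entire history is compressed into the hidden state h_n:

h_n = g(V [x_n; h_{n-1}] + c)
\hat{p}_n = \mathrm{softmax}(W h_n + b)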



Model and training:

Algorithm: Back-Propagation Through Time (BPTT)
Note:

Note that full BPTT back-propagates through the entire history, so the cost of computing the gradient grows with the sequence length. The improved algorithm is:
Algorithm: Truncated Back-Propagation Through Time (TBPTT)
So the computation graph looks like this:

So the training process and the gradient descent updates:
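A minimal PyTorch sketch of training an RNN LM with truncated BPTT; the detach() call is what cuts the gradient at the truncation boundary, and the sizes and toy data stream are invented for illustration.

import torch
import torch.nn as nn

V, E, H = 1000, 64, 128                  # hypothetical vocab / embedding / hidden sizes
BPTT_LEN = 35                            # truncation length

embed = nn.Embedding(V, E)
rnn = nn.RNN(E, H, batch_first=True)     # plain Elman RNN
out = nn.Linear(H, V)
params = list(embed.parameters()) + list(rnn.parameters()) + list(out.parameters())
optim = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy corpus: one long stream of random token ids, just to show the mechanics.
stream = torch.randint(0, V, (1, 10_000))
hidden = torch.zeros(1, 1, H)

for start in range(0, stream.size(1) - BPTT_LEN - 1, BPTT_LEN):
    inputs = stream[:, start:start + BPTT_LEN]
    targets = stream[:, start + 1:start + BPTT_LEN + 1]

    hidden = hidden.detach()             # truncate: no gradient flows into earlier chunks
    optim.zero_grad()
    output, hidden = rnn(embed(inputs), hidden)
    loss = loss_fn(out(output).reshape(-1, V), targets.reshape(-1))
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, 1.0)   # common trick against exploding gradients
    optim.step()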

Summary of the Recurrent NN LMs:
Good:
RNNs can represent unbounded dependencies, unlike models with a fixed n-gram order.
RNNs compress histories of words into a fixed size hidden vector.
The number of parameters does not grow with the length of dependencies captured, but they do grow with the amount of information stored in the hidden layer.
Bad:
RNNs are hard to train and often fail to discover long-range dependencies present in the data (this motivates the LSTM unit).
Increasing the size of the hidden layer, and thus memory, increases the computation and memory quadratically.
Mostly trained with Maximum Likelihood based objectives which do not encode the expected frequencies of words a priori.
Some recommended blog posts:
Andrej Karpathy: The Unreasonable Effectiveness of Recurrent Neural Networks (karpathy.github.io/2015/05/21/rnn-effectiveness/)
Yoav Goldberg: The unreasonable effectiveness of Character-level Language Models (nbviewer.jupyter.org/gist/yoavg/d76121dfde2618422139)
Stephen Merity: Explaining and illustrating orthogonal initialization for recurrent neural networks (smerity.com/articles/2016/orthogonal_init.html)