0. Overview

What is a language model?

A time series prediction problem.

It assigns a probability to a sequence of words, and the probabilities of all possible sequences sum to one.

Many Natural Language Processing tasks can be structured as (conditional) language modelling.

Such as Translation:

P(Chinese text | English text)

Note that this probability decomposes via the chain rule (repeated application of Bayes' formula).
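Concretely, the chain rule decomposes the joint probability of a word sequence into a product of conditional probabilities:

P(w_1, w_2, \dots, w_N) = \prod_{n=1}^{N} P(w_n \mid w_1, \dots, w_{n-1})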

How to evaluate a Language Model?

Measured with cross-entropy (equivalently, perplexity).
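For reference (standard definitions rather than anything specific to these notes), the per-word cross-entropy on a held-out text and the equivalent perplexity are:

H = -\frac{1}{N} \sum_{n=1}^{N} \log_2 P(w_n \mid w_1, \dots, w_{n-1}), \qquad \mathrm{perplexity} = 2^{H}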

Three data sets:

1 Penn Treebank: www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz

2 Billion Word Corpus: code.google.com/p/1-billion-word-language-modeling-benchmark/

3 WikiText datasets: Pointer Sentinel Mixture Models. Merity et al., arXiv 2016

Overview: three approaches to building language models:

Count-based n-gram models: approximate the history of observed words with just the previous n-1 words.

Neural n-gram models: embed the same fixed n-gram history in a continuous space and thus better capture correlations between histories.

Recurrent Neural Networks: we drop the fixed n-gram history and compress the entire history into a fixed-length vector, enabling long-range correlations to be captured.

 

1. N-Gram models:

Assumptions:

Only the previous history matters.

Only the previous k-1 words are included in the history.

This gives a kth order Markov model.
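Under these assumptions the full history is approximated by the previous k-1 words:

P(w_n \mid w_1, \dots, w_{n-1}) \approx P(w_n \mid w_{n-k+1}, \dots, w_{n-1})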

2-gram language model:

The conditioning context, w_{i-1}, is called the history.
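So a 2-gram model factorises the probability of a sequence as:

P(w_1, \dots, w_N) \approx P(w_1) \prod_{i=2}^{N} P(w_i \mid w_{i-1})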

Estimating probabilities:

For example, for a 3-gram the maximum-likelihood estimate is

p(w3 | w1, w2) = count(w1, w2, w3) / count(w1, w2)

where count(·) is the number of times the word sequence appears in the corpus.
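A minimal sketch of count-based estimation (illustrative only: the toy corpus, tokenisation, and names here are assumptions):

```python
from collections import Counter

corpus = "the cat sat on the mat . the cat ate".split()  # toy corpus (assumption)

# Collect trigram and bigram counts from the token stream.
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
bigrams = Counter(zip(corpus, corpus[1:]))

def p_trigram(w1, w2, w3):
    """Maximum-likelihood estimate p(w3 | w1, w2) = count(w1,w2,w3) / count(w1,w2)."""
    if bigrams[(w1, w2)] == 0:
        return 0.0  # unseen history: exactly the sparsity problem that back-off addresses
    return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)]

print(p_trigram("the", "cat", "sat"))  # 0.5 on this toy corpus
```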

Interpolated Back-Off:

That is, some phrases never appear in the training corpus, so their estimated probability would be zero. To avoid this, we use interpolated back-off: interpolate lower-order k-gram models (k = n-1, n-2, ..., 1) with the n-gram model, backing off to shorter histories when the longer ones are unseen.

A simple approach:
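One common form is simple linear interpolation (a sketch; the weights λ would be tuned on held-out data and must sum to one):

P_{\mathrm{interp}}(w_3 \mid w_1, w_2) = \lambda_3 P(w_3 \mid w_1, w_2) + \lambda_2 P(w_3 \mid w_2) + \lambda_1 P(w_3), \qquad \lambda_1 + \lambda_2 + \lambda_3 = 1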

Summary for n-gram:

Good: easy to train. Fast.

Bad: Large n-grams are sparse. Hard to capture long-range dependencies. Cannot capture correlations between similar word distributions. Cannot share statistics across morphologically related words (running vs. jumping).

2. Neural N-Gram Language Models

Use a feed-forward network:

For example, a trigram (3-gram) neural network language model:


The inputs w_i are one-hot vectors and the outputs p_i are probability distributions; both have dimension |V| (the number of words in the vocabulary).

(an example: the detailed computation graph)

Define the loss as the cross-entropy: F = -(1/N) Σ_i log p_i[w_i], the average negative log-probability assigned to each target word w_i.

Training: use Gradient Descent

And a sample of training:
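A minimal PyTorch-style sketch of such a trigram neural LM and one gradient-descent step (illustrative only: the embedding size, hidden size, and toy batch here are assumptions, not the lecture's setup):

```python
import torch
import torch.nn as nn

class TrigramNNLM(nn.Module):
    """p(w_i | w_{i-2}, w_{i-1}) via embed -> concat -> tanh -> softmax."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(2 * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, w1, w2):
        # w1, w2: LongTensors of word indices, shape (batch,)
        x = torch.cat([self.embed(w1), self.embed(w2)], dim=-1)
        h = torch.tanh(self.hidden(x))
        return self.out(h)  # unnormalised scores over the vocabulary

vocab_size = 10_000                      # assumption
model = TrigramNNLM(vocab_size)
loss_fn = nn.CrossEntropyLoss()          # cross-entropy against the target word
optim = torch.optim.SGD(model.parameters(), lr=0.1)

# One gradient-descent step on a random toy batch (stand-in for real data).
w1, w2, target = (torch.randint(0, vocab_size, (32,)) for _ in range(3))
loss = loss_fn(model(w1, w2), target)
optim.zero_grad()
loss.backward()
optim.step()
```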

Comparison with count-based n-gram LMs:

Good: Better performance on unseen n-grams, though poorer on seen n-grams (solution: add direct (linear) n-gram features). Uses less memory than count-based n-gram models.

Bad: The number of parameters in the model scales with the n-gram size. There is still a limit on the longest dependency that can be captured.

3. Recurrent Neural Network LM

That is, we use a recurrent neural network to build the LM.

Model and training:
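In the usual formulation (a sketch with generic parameter names, which may differ from the lecture's notation), the hidden state and output at step n are:

h_n = g(V [x_n; h_{n-1}] + c), \qquad \hat{y}_n = \mathrm{softmax}(W h_n + b)

where x_n is the embedding of word w_n, [·;·] denotes concatenation, g is a nonlinearity such as tanh, and the loss is again the cross-entropy of ŷ_n against the next word.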

Algorithm: Back-Propagation Through Time (BPTT)

Note:

Note that with full BPTT the gradient at each step depends on the entire preceding history, which becomes expensive for long sequences. So the improved algorithm is:

Algorithm: Truncated Back-Propagation Through Time (TBPTT)

So the computation graph looks like this:

So the training process runs gradient descent chunk by chunk:
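A rough PyTorch-style sketch of truncated BPTT (illustrative only; the model, toy token stream, and truncation length bptt_len are assumptions). The key step is detaching the hidden state between chunks, so gradients only flow through the most recent bptt_len steps:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim, bptt_len = 10_000, 64, 128, 35   # assumptions

embed = nn.Embedding(vocab_size, embed_dim)
rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
out = nn.Linear(hidden_dim, vocab_size)
loss_fn = nn.CrossEntropyLoss()
params = list(embed.parameters()) + list(rnn.parameters()) + list(out.parameters())
optim = torch.optim.SGD(params, lr=0.1)

tokens = torch.randint(0, vocab_size, (1, 10_000))   # toy token stream (stand-in for a corpus)
hidden = torch.zeros(1, 1, hidden_dim)               # initial hidden state

for start in range(0, tokens.size(1) - bptt_len - 1, bptt_len):
    inputs = tokens[:, start : start + bptt_len]
    targets = tokens[:, start + 1 : start + bptt_len + 1]   # predict the next word

    hidden = hidden.detach()              # truncate: no gradient into earlier chunks
    output, hidden = rnn(embed(inputs), hidden)
    loss = loss_fn(out(output).reshape(-1, vocab_size), targets.reshape(-1))

    optim.zero_grad()
    loss.backward()
    optim.step()
```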

Summary of the Recurrent NN LMs:

Good:

RNNs can represent unbounded dependencies, unlike models with a fixed n-gram order.

RNNs compress histories of words into a fixed size hidden vector.

The number of parameters does not grow with the length of the dependencies captured, but it does grow with the amount of information stored in the hidden layer.

Bad:

RNNs are hard to train and often will not discover long-range dependencies present in the data (this motivates LSTM units).

Increasing the size of the hidden layer, and thus memory, increases the computation and memory quadratically.

Mostly trained with Maximum Likelihood based objectives which do not encode the expected frequencies of words a priori.

Some recommended blogs:

Andrej Karpathy: The Unreasonable Effectiveness of Recurrent Neural Networks karpathy.github.io/2015/05/21/rnn-effectiveness/

Yoav Goldberg: The unreasonable effectiveness of Character-level Language Models nbviewer.jupyter.org/gist/yoavg/d76121dfde2618422139

Stephen Merity: Explaining and illustrating orthogonal initialization for recurrent neural networks. smerity.com/articles/2016/orthogonal_init.html
