[MT] Oxford's Machine Translation Course
Preamble
This repository contains the lecture slides and course description for the Deep Natural Language Processing course offered in Hilary Term 2017 at the University of Oxford.
This is an advanced course on natural language processing. Automatically processing natural language inputs and producing language outputs is a key component of Artificial General Intelligence. The ambiguities and noise inherent in human communication render traditional symbolic AI techniques ineffective for representing and analysing language data. Recently, statistical techniques based on neural networks have achieved a number of remarkable successes in natural language processing, leading to a great deal of commercial and academic interest in the field.
This is an applied course focussing on recent advances in analysing and generating speech and text using recurrent neural networks. We introduce the mathematical definitions of the relevant machine learning models and derive their associated optimisation algorithms. The course covers a range of applications of neural networks in NLP including analysing latent dimensions in text, transcribing speech to text, translating between languages, and answering questions. These topics are organised into three high level themes forming a progression from understanding the use of neural networks for sequential language modelling, to understanding their use as conditional language models for transduction tasks, and finally to approaches employing these techniques in combination with other mechanisms for advanced applications. Throughout the course the practical implementation of such models on CPU and GPU hardware is also discussed.
This course is organised by Phil Blunsom and delivered in partnership with the DeepMind Natural Language Research Group.
Lecturers
- Phil Blunsom (Oxford University and DeepMind)
- Chris Dyer (Carnegie Mellon University and DeepMind)
- Edward Grefenstette (DeepMind)
- Karl Moritz Hermann (DeepMind)
- Andrew Senior (DeepMind)
- Wang Ling (DeepMind)
- Jeremy Appleyard (NVIDIA)
TAs
- Yannis Assael
- Yishu Miao
- Brendan Shillingford
- Jan Buys
Timetable
Practicals
- Group 1 - Monday, 9:00-11:00 (Weeks 2-8), 60.05 Thom Building
- Group 2 - Friday, 16:00-18:00 (Weeks 2-8), Room 379
- Practical 1: word2vec
- Practical 2: text classification
- Practical 3: recurrent neural networks for text classification and language modelling
- Practical 4: open practical
Lectures
Public Lectures are held in Lecture Theatre 1 of the Maths Institute, on Tuesdays and Thursdays, 16:00-18:00 (Hilary Term Weeks 1, 3-8).
Lecture Materials
1. Lecture 1a - Introduction [Phil Blunsom]
This lecture introduces the course and motivates why it is interesting to study language processing using Deep Learning techniques.
2. Lecture 1b - Deep Neural Networks Are Our Friends [Wang Ling]
This lecture revises basic machine learning concepts that students should know before embarking on this course.
3. Lecture 2a - Word Level Semantics [Ed Grefenstette]
Words are the core meaning-bearing units in language. Representing and learning the meanings of words is a fundamental task in NLP, and in this lecture the concept of a word embedding is introduced as a practical and scalable solution.
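As a rough, non-authoritative illustration of what a trained embedding table gives you, the sketch below uses a handful of made-up 4-dimensional vectors (in practice these would be learned with a skip-gram or similar model over a large corpus) and ranks words by cosine similarity:

```python
import numpy as np

# Toy embedding table: in practice these vectors would be learned with
# word2vec/skip-gram; the 4-dimensional values here are invented for illustration.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.7, 0.2]),
    "queen": np.array([0.8, 0.9, 0.7, 0.2]),
    "man":   np.array([0.6, 0.1, 0.2, 0.1]),
    "woman": np.array([0.6, 0.9, 0.2, 0.1]),
    "apple": np.array([0.0, 0.2, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(word, k=3):
    """Rank all other words by cosine similarity to `word`."""
    query = embeddings[word]
    scores = {w: cosine(query, v) for w, v in embeddings.items() if w != word}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(nearest("king"))
```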
Reading
Embeddings Basics
- Firth, John R. "A synopsis of linguistic theory, 1930-1955." (1957): 1-32.
- Curran, James Richard. "From distributional to semantic similarity." (2004).
- Collobert, Ronan, et al. "Natural language processing (almost) from scratch." Journal of Machine Learning Research 12. Aug (2011): 2493-2537.
- Mikolov, Tomas, et al. "Distributed representations of words and phrases and their compositionality." Advances in neural information processing systems. 2013.
Datasets and Visualisation
- Finkelstein, Lev, et al. "Placing search in context: The concept revisited." Proceedings of the 10th international conference on World Wide Web. ACM, 2001.
- Hill, Felix, Roi Reichart, and Anna Korhonen. "SimLex-999: Evaluating semantic models with (genuine) similarity estimation." Computational Linguistics (2016).
- Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of Machine Learning Research 9.Nov (2008): 2579-2605.
Blog posts
- Deep Learning, NLP, and Representations, Christopher Olah.
- Visualizing Top Tweeps with t-SNE, in Javascript, Andrej Karpathy.
Further Reading
- Hermann, Karl Moritz, and Phil Blunsom. "Multilingual models for compositional distributed semantics." arXiv preprint arXiv:1404.4641 (2014).
- Levy, Omer, and Yoav Goldberg. "Neural word embedding as implicit matrix factorization." Advances in neural information processing systems. 2014.
- Levy, Omer, Yoav Goldberg, and Ido Dagan. "Improving distributional similarity with lessons learned from word embeddings." Transactions of the Association for Computational Linguistics 3 (2015): 211-225.
- Ling, Wang, et al. "Two/Too Simple Adaptations of Word2Vec for Syntax Problems." HLT-NAACL. 2015.
4. Lecture 2b - Overview of the Practicals [Chris Dyer]
This lecture motivates the practical segment of the course.
5. Lecture 3 - Language Modelling and RNNs Part 1 [Phil Blunsom]
Language modelling is an important task of great practical use in many NLP applications. This lecture introduces language modelling, including traditional n-gram based approaches and more contemporary neural approaches. In particular, the popular Recurrent Neural Network (RNN) language model is introduced and its basic training and evaluation algorithms described.
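For concreteness, here is a minimal sketch of the forward pass of a vanilla RNN language model in numpy, with randomly initialised (untrained) parameters, showing the recurrent state update, the softmax over the vocabulary, and how per-word negative log-likelihood turns into perplexity; the sizes and the toy token sequence are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim, hidden_dim = 10, 8, 16

# Randomly initialised parameters; a real model learns these by
# backpropagation through time over a large corpus.
E   = rng.normal(0, 0.1, (vocab_size, embed_dim))      # word embeddings
W_x = rng.normal(0, 0.1, (hidden_dim, embed_dim))      # input-to-hidden
W_h = rng.normal(0, 0.1, (hidden_dim, hidden_dim))     # hidden-to-hidden
W_o = rng.normal(0, 0.1, (vocab_size, hidden_dim))     # hidden-to-output

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rnn_lm_nll(token_ids):
    """Average negative log-likelihood of a token sequence under the RNN LM."""
    h = np.zeros(hidden_dim)
    nll = 0.0
    for prev, nxt in zip(token_ids[:-1], token_ids[1:]):
        h = np.tanh(W_x @ E[prev] + W_h @ h)            # recurrent state update
        p = softmax(W_o @ h)                            # distribution over the next word
        nll -= np.log(p[nxt])
    return nll / (len(token_ids) - 1)

sequence = [1, 4, 2, 7, 3]                              # toy token ids
print("perplexity:", np.exp(rnn_lm_nll(sequence)))      # the evaluation metric from the lecture
```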
Reading
Textbook
Blogs
- The Unreasonable Effectiveness of Recurrent Neural Networks, Andrej Karpathy.
- The unreasonable effectiveness of Character-level Language Models, Yoav Goldberg.
- Explaining and illustrating orthogonal initialization for recurrent neural networks, Stephen Merity.
6. Lecture 4 - Language Modelling and RNNs Part 2 [Phil Blunsom]
This lecture continues on from the previous one and considers some of the issues involved in producing an effective implementation of an RNN language model. The vanishing and exploding gradient problem is described and architectural solutions, such as Long Short-Term Memory (LSTM), are introduced.
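As a hedged sketch of the architecture discussed here, the following numpy code runs one step of a standard LSTM cell with random placeholder weights; the additive update of the cell state `c` is the part that helps gradients survive over long spans:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 8, 16

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate over the concatenated [input, previous hidden] vector.
# Values are random placeholders; a trained model learns them.
W = {g: rng.normal(0, 0.1, (hidden_dim, input_dim + hidden_dim)) for g in "ifoc"}
b = {g: np.zeros(hidden_dim) for g in "ifoc"}

def lstm_step(x, h_prev, c_prev):
    """One step of a standard LSTM cell."""
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["i"] @ z + b["i"])          # input gate
    f = sigmoid(W["f"] @ z + b["f"])          # forget gate
    o = sigmoid(W["o"] @ z + b["o"])          # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate cell update
    c = f * c_prev + i * c_tilde              # additive update: gradients flow through c
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
for t in range(5):
    h, c = lstm_step(rng.normal(size=input_dim), h, c)
print(h[:4])
```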
Reading
Textbook
Vanishing gradients, LSTMs etc.
- On the difficulty of training recurrent neural networks. Pascanu et al., ICML 2013.
- Long Short-Term Memory. Hochreiter and Schmidhuber, Neural Computation 1997.
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Cho et al., EMNLP 2014.
- Blog: Understanding LSTM Networks, Christopher Olah.
Dealing with large vocabularies
- A scalable hierarchical distributed language model. Mnih and Hinton, NIPS 2009.
- A fast and simple algorithm for training neural probabilistic language models. Mnih and Teh, ICML 2012.
- On Using Very Large Target Vocabulary for Neural Machine Translation. Jean et al., ACL 2015.
- Exploring the Limits of Language Modeling. Jozefowicz et al., arXiv 2016.
- Efficient softmax approximation for GPUs. Grave et al., arXiv 2016.
- Notes on Noise Contrastive Estimation and Negative Sampling. Dyer, arXiv 2014.
- Pragmatic Neural Language Modelling in Machine Translation. Baltescu and Blunsom, NAACL 2015.
Regularisation and dropout
- A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. Gal and Ghahramani, NIPS 2016.
- Blog: Uncertainty in Deep Learning, Yarin Gal.
Other stuff
- Recurrent Highway Networks. Zilly et al., arXiv 2016.
- Capacity and Trainability in Recurrent Neural Networks. Collins et al., arXiv 2016.
7. Lecture 5 - Text Classification [Karl Moritz Hermann]
This lecture discusses text classification, beginning with basic classifiers, such as Naive Bayes, and progressing through to RNNs and Convolutional Networks.
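To make the starting point concrete, here is a minimal bag-of-words Naive Bayes classifier with add-one smoothing on an invented two-class sentiment example; it is a sketch of the baseline technique, not code from the lecture:

```python
import math
from collections import Counter, defaultdict

# Toy training data (invented for illustration): binary sentiment classification.
train = [
    ("pos", "a delightful and moving film"),
    ("pos", "great acting and a moving story"),
    ("neg", "a dull and predictable plot"),
    ("neg", "dull acting and a boring story"),
]

class_counts = Counter(label for label, _ in train)
word_counts = defaultdict(Counter)
vocab = set()
for label, text in train:
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def predict(text):
    """Pick the class maximising log P(class) + sum_w log P(w | class)."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for w in text.split():
            # Add-one (Laplace) smoothing so unseen words do not zero out the score.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("a moving and great film"))   # expected: pos
```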
Reading
- Recurrent Convolutional Neural Networks for Text Classification. Lai et al. AAAI 2015.
- A Convolutional Neural Network for Modelling Sentences, Kalchbrenner et al. ACL 2014.
- Semantic compositionality through recursive matrix-vector spaces, Socher et al. EMNLP 2012.
- Blog: Understanding Convolutional Neural Networks for NLP, Denny Britz.
- Thesis: Distributional Representations for Compositional Semantics, Hermann (2014).
8. Lecture 6 - Deep NLP on Nvidia GPUs [Jeremy Appleyard]
This lecture introduces Graphics Processing Units (GPUs) as an alternative to CPUs for executing Deep Learning algorithms. The strengths and weaknesses of GPUs are discussed, as well as the importance of understanding how memory bandwidth and computation impact throughput for RNNs.
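The following back-of-the-envelope calculation (all hardware figures are assumptions, not the specification of any particular GPU) illustrates the bandwidth-versus-compute trade-off for the matrix-vector products in one LSTM step at a small batch size:

```python
# Estimate whether an LSTM step is compute- or memory-bound.
# All hardware numbers below are assumptions for illustration only.
hidden = 1024          # hidden units
batch = 4              # sequences processed in parallel
bytes_per_float = 4

# The four gates multiply a (4*hidden x 2*hidden) weight matrix by the batch.
flops = 2 * (4 * hidden) * (2 * hidden) * batch               # multiply-adds
weight_bytes = (4 * hidden) * (2 * hidden) * bytes_per_float  # weights read once per step

peak_flops = 10e12         # assumed peak arithmetic throughput (FLOP/s)
peak_bandwidth = 500e9     # assumed memory bandwidth (bytes/s)

compute_time = flops / peak_flops
memory_time = weight_bytes / peak_bandwidth
print(f"compute-limited time: {compute_time * 1e6:.1f} us")
print(f"memory-limited time:  {memory_time * 1e6:.1f} us")
# With a small batch the weight traffic dominates, which is why techniques
# such as persistent RNNs keep the recurrent weights on-chip.
```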
Reading
- Optimizing Performance of Recurrent Neural Networks on GPUs. Appleyard et al., arXiv 2016.
- Persistent RNNs: Stashing Recurrent Weights On-Chip, Diamos et al., ICML 2016
- Efficient softmax approximation for GPUs. Grave et al., arXiv 2016.
9. Lecture 7 - Conditional Language Models [Chris Dyer]
In this lecture we extend the concept of language modelling to incorporate prior information. By conditioning an RNN language model on an input representation we can generate contextually relevant language. This very general idea can be applied to transduce sequences into new sequences for tasks such as translation and summarisation, or images into captions describing their content.
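A minimal numpy sketch of this encoder-decoder idea, with random untrained weights and a toy vocabulary, is shown below: the source sequence is compressed into a single vector that initialises the decoder, which then generates output tokens greedily. The symbol names (`BOS`, `EOS`, the weight matrices) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 12, 16
E     = rng.normal(0, 0.1, (vocab, dim))        # shared toy embeddings
W_enc = rng.normal(0, 0.1, (dim, 2 * dim))      # encoder RNN weights
W_dec = rng.normal(0, 0.1, (dim, 2 * dim))      # decoder RNN weights
W_out = rng.normal(0, 0.1, (vocab, dim))        # decoder output projection
BOS, EOS = 0, 1                                 # hypothetical special tokens

def step(W, x, h):
    return np.tanh(W @ np.concatenate([x, h]))

def encode(src_ids):
    """Compress the source sequence into a single conditioning vector."""
    h = np.zeros(dim)
    for t in src_ids:
        h = step(W_enc, E[t], h)
    return h

def greedy_decode(src_ids, max_len=10):
    """Generate output tokens conditioned on the encoded source."""
    h = encode(src_ids)                          # decoder state initialised from the encoder
    token, out = BOS, []
    for _ in range(max_len):
        h = step(W_dec, E[token], h)
        token = int(np.argmax(W_out @ h))        # greedy choice of the next word
        if token == EOS:
            break
        out.append(token)
    return out

print(greedy_decode([3, 5, 7, 2]))
```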
Reading
- Recurrent Continuous Translation Models. Kalchbrenner and Blunsom, EMNLP 2013
- Sequence to Sequence Learning with Neural Networks. Sutskever et al., NIPS 2014
- Multimodal Neural Language Models. Kiros et al., ICML 2014
- Show and Tell: A Neural Image Caption Generator. Vinyals et al., CVPR 2015
10. Lecture 8 - Generating Language with Attention [Chris Dyer]
This lecture introduces one of the most important and influential mechanisms employed in Deep Neural Networks: Attention. Attention augments recurrent networks with the ability to condition on specific parts of the input and is key to achieving high performance in tasks such as Machine Translation and Image Captioning.
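As a sketch of the mechanism, the numpy snippet below computes additive (Bahdanau-style) attention with random placeholder parameters: each encoder state is scored against the current decoder state, the scores are normalised with a softmax, and the context vector is their weighted sum:

```python
import numpy as np

rng = np.random.default_rng(0)
enc_dim, dec_dim, att_dim, src_len = 16, 16, 12, 5

# Random placeholders; in a real model these are learned jointly with the translator.
W_enc = rng.normal(0, 0.1, (att_dim, enc_dim))
W_dec = rng.normal(0, 0.1, (att_dim, dec_dim))
v     = rng.normal(0, 0.1, att_dim)

encoder_states = rng.normal(size=(src_len, enc_dim))   # one vector per source word
decoder_state  = rng.normal(size=dec_dim)               # current decoder hidden state

# Additive attention scores: v^T tanh(W_enc h_j + W_dec s_t) for each source position j.
scores = np.array([v @ np.tanh(W_enc @ h + W_dec @ decoder_state)
                   for h in encoder_states])
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                # softmax over source positions

context = weights @ encoder_states                      # weighted sum of encoder states
print("attention weights:", np.round(weights, 3))
print("context vector shape:", context.shape)
```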
Reading
- Neural Machine Translation by Jointly Learning to Align and Translate. Bahdanau et al., ICLR 2015
- Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Xu et al., ICML 2015
- Incorporating structural alignment biases into an attentional neural translation model. Cohn et al., NAACL 2016
- BLEU: a Method for Automatic Evaluation of Machine Translation. Papineni et al, ACL 2002
11. Lecture 9 - Speech Recognition (ASR) [Andrew Senior]
Automatic Speech Recognition (ASR) is the task of transducing raw audio signals of spoken language into text transcriptions. This talk covers the history of ASR models, from Gaussian Mixtures to attention-augmented RNNs, the basic linguistics of speech, and the various input and output representations frequently employed.
12. Lecture 10 - Text to Speech (TTS) [Andrew Senior]
This lecture introduces algorithms for converting written language into spoken language (Text to Speech). TTS is the inverse process to ASR, but there are some important differences in the models applied. Here we review traditional TTS models, and then cover more recent neural approaches such as DeepMind's WaveNet model.
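To give a feel for the mechanism behind WaveNet, the short calculation below shows how stacking dilated causal convolutions makes the receptive field grow roughly exponentially with depth; the filter size and dilation schedule are illustrative assumptions rather than the published configuration:

```python
# Receptive-field arithmetic for stacked dilated causal convolutions, the
# core mechanism in WaveNet. Filter size and dilation schedule below are
# assumptions chosen for illustration.
filter_size = 2
dilations = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512] * 3   # three stacked blocks

receptive_field = 1
for d in dilations:
    # Each layer extends the receptive field by (filter_size - 1) * dilation samples.
    receptive_field += (filter_size - 1) * d

print("layers:", len(dilations))
print("receptive field (samples):", receptive_field)
print("receptive field at 16 kHz (ms):", 1000 * receptive_field / 16000)
```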
13. Lecture 11 - (Coming Soon) Question Answering [Karl Moritz Hermann]
[slides]
[video]
14. Lecture 12 - (Coming Soon) Memory [Ed Grefenstette]
[slides]
[video]
Piazza
We will be using Piazza to facilitate class discussion during the course. Rather than emailing questions directly, I encourage you to post your questions on Piazza to be answered by your fellow students, instructors, and lecturers. However, please do note that all the lecturers for this course are volunteering their time and may not always be available to give a response.
Find our class page at: https://piazza.com/ox.ac.uk/winter2017/dnlpht2017/home
Assessment
The primary assessment for this course will be a take-home assignment issued at the end of the term. This assignment will ask questions drawing on the concepts and models discussed in the course, as well as from selected research publications. The nature of the questions will include analysing mathematical descriptions of models and proposing extensions, improvements, or evaluations to such models. The assignment may also ask students to read specific research publications and discuss their proposed algorithms in the context of the course. In answering questions students will be expected to both present coherent written arguments and use appropriate mathematical formulae, and possibly pseudo-code, to illustrate answers.
The practical component of the course will be assessed in the usual way.
Acknowledgements
This course would not have been possible without the support of DeepMind, The University of Oxford Department of Computer Science, Nvidia, and the generous donation of GPU resources from Microsoft Azure.