Sequence models are central to NLP: they are models where there is some sort of dependence through time between your inputs. The classical example of a sequence model is the Hidden Markov Model for part-of-speech tagging. Another example is the conditional random field.

LSTMs in Pytorch

Pytorch’s LSTM expects all of its inputs to be 3D tensors. The semantics of the axes of these tensors is important. The first axis is the sequence itself, the second indexes instances in the mini-batch, and the third indexes elements of the input. We haven’t discussed mini-batching, so let’s just ignore that and assume we will always have just 1 dimension on the second axis. If we want to run the sequence model over the sentence “The cow jumped”, our input should be a tensor of shape (3, 1, input dimension): one row vector per word stacked along the sequence axis, with a single instance on the mini-batch axis.
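
For example, here is a minimal sketch of building such an input. The q_* vectors are random placeholders introduced for illustration, standing in for whatever 3-dimensional word representations you use:

import torch

# Placeholder 3-dimensional vectors for each word (not trained embeddings).
q_the, q_cow, q_jumped = torch.randn(3), torch.randn(3), torch.randn(3)

# Stack the word vectors along the sequence axis and add a mini-batch axis of
# size 1, giving shape (sequence length, batch size, input dim) = (3, 1, 3).
sentence = torch.stack([q_the, q_cow, q_jumped]).view(3, 1, -1)
print(sentence.shape)  # torch.Size([3, 1, 3])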

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)]  # make a sequence of length 5

# initialize the hidden state.
hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))

# Step through the sequence one element at a time;
# after each step, hidden contains the hidden state.
for i in inputs:
    out, hidden = lstm(i.view(1, 1, -1), hidden)

# Alternatively, we can process the entire sequence at once.
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3))  # clean out the hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
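
As a quick sanity check (a sketch added here, not part of the original snippet), the shapes can be inspected directly: out holds one hidden state per element of the sequence, while hidden is a tuple of the final hidden state and cell state.

print(out.shape)        # torch.Size([5, 1, 3]): one hidden state per sequence element
print(hidden[0].shape)  # torch.Size([1, 1, 3]): final hidden state h_n
print(hidden[1].shape)  # torch.Size([1, 1, 3]): final cell state c_n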

Example: An LSTM for Part-of-Speech Tagging

We will embed each word, run the embeddings through an LSTM, and map each hidden state to tag scores with a linear layer followed by a log softmax. First, prepare the data:

def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)

training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}

# These will usually be more like 32 or 64 dimensional.
# We will keep them small, so we can see how the weights change as we train.
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
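
To see what prepare_sequence produces, here is a small check (a sketch; the indices follow the word_to_ix dictionary built above, where words are numbered in order of first appearance):

print(prepare_sequence("The dog ate the apple".split(), word_to_ix))
# tensor([0, 1, 2, 3, 4])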

Create the model:

class LSTMTagger(nn.Module):

    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim

        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)

        # The LSTM takes word embeddings as inputs, and outputs hidden states
        # with dimensionality hidden_dim.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)

        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
        self.hidden = self.init_hidden()

    def init_hidden(self):
        # Before we've done anything, we don't have any hidden state.
        # Refer to the Pytorch documentation to see exactly
        # why they have this dimensionality.
        # The axes semantics are (num_layers, minibatch_size, hidden_dim)
        return (torch.zeros(1, 1, self.hidden_dim),
                torch.zeros(1, 1, self.hidden_dim))

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, self.hidden = self.lstm(
            embeds.view(len(sentence), 1, -1), self.hidden)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores
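
As an aside, here is a variant sketch (not the version used below): recent Pytorch releases default the initial hidden and cell states of nn.LSTM to zeros when none are passed, so forward can also be written without storing self.hidden on the module.

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        # No explicit hidden state: nn.LSTM starts from zeros by default.
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        return F.log_softmax(tag_space, dim=1)

The explicit init_hidden() is kept above so the training loop can reset the state between sentences.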

Train the model:

model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training
# Note that element i,j of the output is the score for tag j for word i.
# Here we don't need to train, so the code is wrapped in torch.no_grad()
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)
    print(tag_scores)

for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Also, we need to clear out the hidden state of the LSTM,
        # detaching it from its history on the last instance.
        model.hidden = model.init_hidden()

        # Step 2. Get our inputs ready for the network, that is, turn them into
        # Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)

        # Step 3. Run our forward pass.
        tag_scores = model(sentence_in)

        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(tag_scores, targets)
        loss.backward()
        optimizer.step()

# See what the scores are after training
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)

    # The sentence is "the dog ate the apple". i,j corresponds to score for tag j
    # for word i. The predicted tag is the maximum scoring tag.
    # Here, we can see the predicted sequence below is 0 1 2 0 1
    # since 0 is index of the maximum value of row 1,
    # 1 is the index of maximum value of row 2, etc.
    # Which is DET NOUN VERB DET NOUN, the correct sequence!
    print(tag_scores)
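
To turn the log-probabilities into tag names, here is a small follow-up sketch; ix_to_tag is a helper introduced here for illustration, inverting the tag_to_ix mapping defined earlier:

ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}
predicted_indices = torch.argmax(tag_scores, dim=1)
print([ix_to_tag[i.item()] for i in predicted_indices])
# With the toy data above, this should print ['DET', 'NN', 'V', 'DET', 'NN']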
