pytorch --- word2vec implementation -- 《Efficient Estimation of Word Representations in Vector Space》
The paper is Mikolov et al., 《Efficient Estimation of Word Representations in Vector Space》.
Paper link: https://arxiv.org/abs/1301.3781
The paper introduces two model architectures (CBOW and Skip-gram); the underlying theory is not explained here...
Below is a skim through, with comments, of the skip-gram code from https://github.com/graykode/nlp-tutorial:
# -*- coding: utf-8 -*-
# @time : 2019/11/9 12:53
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt

dtype = torch.FloatTensor

# Toy corpus of short sentences
sentences = ["i like dog", "i like cat", "i like animal",
             "dog cat animal", "apple cat dog like", "dog fish milk like",
             "dog cat eyes like", "i like apple", "apple i hate",
             "apple i movie book music like", "cat dog hate", "cat dog like"]

word_sequence = " ".join(sentences).split()
word_list = " ".join(sentences).split()
word_list = list(set(word_list))
word_dict = {w: i for i, w in enumerate(word_list)}

# Word2Vec parameters
batch_size = 20
embedding_size = 2  # 2-dimensional embeddings so they can be plotted directly
voc_size = len(word_list)

# Generate `size` samples: each input is the target word as a one-hot vector,
# each label is the index of a context word (an integer, not one-hot).
def random_batch(data, size):
    random_inputs = []
    random_labels = []
    random_index = np.random.choice(range(len(data)), size, replace=False)

    for i in random_index:
        random_inputs.append(np.eye(voc_size)[data[i][0]])  # target word, one-hot
        random_labels.append(data[i][1])                    # context word index

    return random_inputs, random_labels

# Make skip-grams with a window size of one
skip_grams = []
# Starting from the 2nd word of word_sequence (index 1), pair each target word
# with its left neighbor (index i-1) and its right neighbor (index i+1),
# i.e. append [target, left] and [target, right] to skip_grams.
for i in range(1, len(word_sequence) - 1):
    target = word_dict[word_sequence[i]]
    context = [word_dict[word_sequence[i - 1]], word_dict[word_sequence[i + 1]]]
    for w in context:
        skip_grams.append([target, w])
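To see what the training data actually looks like, here is a small inspection snippet (my addition, not part of the repo's code; the names t, c, inputs, labels are hypothetical):

# Sanity check: word_sequence starts "i like dog i like cat ...", so the first
# pairs should be (like, i), (like, dog), (dog, like), (dog, i).
for t, c in skip_grams[:4]:
    print(word_list[t], '->', word_list[c])

inputs, labels = random_batch(skip_grams, 4)
print(np.array(inputs).shape)  # (4, voc_size): one one-hot row per sample
print(labels)                  # four context-word indices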
# Model
class Word2Vec(nn.Module):
    def __init__(self):
        super(Word2Vec, self).__init__()
        # W and WT are two independent weight matrices, not transposes of each other.
        # Note: .type(dtype) must be applied inside nn.Parameter, otherwise the
        # attribute would be a plain Tensor and not registered as a parameter.
        self.W = nn.Parameter((-2 * torch.rand(voc_size, embedding_size) + 1).type(dtype))   # voc_size -> embedding_size
        self.WT = nn.Parameter((-2 * torch.rand(embedding_size, voc_size) + 1).type(dtype))  # embedding_size -> voc_size

    def forward(self, X):
        # X : [batch_size, voc_size]
        hidden_layer = torch.matmul(X, self.W)              # [batch_size, embedding_size]
        output_layer = torch.matmul(hidden_layer, self.WT)  # [batch_size, voc_size]
        return output_layer

model = Word2Vec()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
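The two matmuls are just bias-free linear layers; the following restatement with nn.Linear (a sketch of my own, not the repo's code; Word2VecLinear is a hypothetical name) may make the structure clearer. Note that nn.Linear stores its weight transposed, so word i's embedding would sit in in_proj.weight[:, i]:

# Equivalent formulation using nn.Linear(bias=False); shown for clarity only,
# the matmul version above is what actually gets trained.
class Word2VecLinear(nn.Module):
    def __init__(self):
        super(Word2VecLinear, self).__init__()
        self.in_proj = nn.Linear(voc_size, embedding_size, bias=False)   # weight: [embedding_size, voc_size]
        self.out_proj = nn.Linear(embedding_size, voc_size, bias=False)  # weight: [voc_size, embedding_size]

    def forward(self, X):
        return self.out_proj(self.in_proj(X))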
# Training
for epoch in range(5000):
    input_batch, target_batch = random_batch(skip_grams, batch_size)
    # (Variable is a no-op wrapper in modern PyTorch; kept from the original code)
    input_batch = Variable(torch.Tensor(input_batch))
    target_batch = Variable(torch.LongTensor(target_batch))

    optimizer.zero_grad()
    output = model(input_batch)

    # output : [batch_size, voc_size], target_batch : [batch_size]
    # (LongTensor of class indices, not one-hot)
    loss = criterion(output, target_batch)
    if (epoch + 1) % 1000 == 0:
        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss.item()))

    loss.backward()
    optimizer.step()
# Why row i of W is the embedding of word i:
# the input is [batch_size, voc_size] (a word is a one-hot vector of length voc_size)
# and W is [voc_size, embedding_size], so multiplying one word by W, e.g.
#   [1, 0, 0] * [[w1, w4],
#                [w2, w5],
#                [w3, w6]]
# simply selects the first row of W, giving [w1, w4].
# That is, word i's embedding vector is (W[i][0], W[i][1]).
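This row-selection claim is easy to verify directly (a tiny demo of my own, with hypothetical names W_demo and one_hot):

# Multiplying a one-hot vector by W selects the corresponding row of W.
W_demo = torch.rand(3, 2)                           # pretend voc_size=3, embedding_size=2
one_hot = torch.tensor([1.0, 0.0, 0.0])
print(torch.allclose(one_hot @ W_demo, W_demo[0]))  # True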
W, WT = model.parameters()
for i, label in enumerate(word_list):
    x, y = float(W[i][0]), float(W[i][1])
    plt.scatter(x, y)
    plt.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                 ha='right', va='bottom')
plt.show()
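Besides the 2-D scatter plot, another quick way to eyeball the learned vectors is cosine similarity between words; a minimal sketch (my addition, reusing the trained W from above; emb and sims are hypothetical names):

# Nearest neighbors of "dog" by cosine similarity over the learned embeddings.
import torch.nn.functional as F

emb = W.detach()                                        # [voc_size, embedding_size]
idx = word_dict['dog']
sims = F.cosine_similarity(emb[idx].unsqueeze(0), emb)  # similarity to every word
for j in sims.argsort(descending=True)[:4]:
    print(word_list[int(j)], float(sims[j]))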