The paper is "Efficient Estimation of Word Representations in Vector Space" by Mikolov et al.

Paper link: https://arxiv.org/pdf/1301.3781.pdf

The paper introduces two model architectures (CBOW and Skip-gram); the underlying theory is not explained here...

Below is a skim through the skip-gram code from https://github.com/graykode/nlp-tutorial, with comments:

# -*- coding: utf-8 -*-
# @time : 2019/11/9 12:53
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt

dtype = torch.FloatTensor

# 3-word sentences (a toy corpus)
sentences = ["i like dog", "i like cat", "i like animal",
             "dog cat animal", "apple cat dog like", "dog fish milk like",
             "dog cat eyes like", "i like apple", "apple i hate",
             "apple i movie book music like", "cat dog hate", "cat dog like"]

word_sequence = " ".join(sentences).split()
word_list = " ".join(sentences).split()
word_list = list(set(word_list))
word_dict = {w: i for i, w in enumerate(word_list)}
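
# Illustration (my addition, not in the original code; indices depend on
# set() ordering, so treat them as examples):
#   word_sequence is the flat token stream, e.g. ['i', 'like', 'dog', 'i', 'like', 'cat', ...]
#   word_dict maps each unique word to an index, e.g. {'dog': 0, 'i': 1, 'like': 2, ...}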
# Word2Vec parameters
batch_size = 20
embedding_size = 2  # 2 dims so the learned embeddings can be plotted directly
voc_size = len(word_list)
# Draw batch_size training pairs at random; each input is a one-hot vector
# for the target (center) word, each label is the index of a context word.
def random_batch(data, size):
    random_inputs = []
    random_labels = []
    random_index = np.random.choice(range(len(data)), size, replace=False)

    for i in random_index:
        random_inputs.append(np.eye(voc_size)[data[i][0]])  # target word, one-hot
        random_labels.append(data[i][1])                    # context word, index

    return random_inputs, random_labels
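
# Quick illustration of the np.eye one-hot trick above (hypothetical sizes):
# np.eye(voc_size) is the identity matrix, so row k is the one-hot vector for
# word index k; e.g. with voc_size = 4, np.eye(4)[2] -> array([0., 0., 1., 0.]).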
# Build skip-grams with a window of size 1: starting from the 2nd token
# (i = 1), each center word at i predicts its neighbors at i - 1 and i + 1,
# so the pairs [i, i - 1] and [i, i + 1] are appended to skip_grams.
skip_grams = []
for i in range(1, len(word_sequence) - 1):
    target = word_dict[word_sequence[i]]
    context = [word_dict[word_sequence[i - 1]], word_dict[word_sequence[i + 1]]]
    for w in context:
        skip_grams.append([target, w])
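
# Worked example (illustrative): for the fragment ['i', 'like', 'dog'], the
# center word 'like' (i = 1) yields the two training pairs
#   [word_dict['like'], word_dict['i']] and [word_dict['like'], word_dict['dog']],
# i.e. 'like' is the input and each neighbor is a label.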
# Model
class Word2Vec(nn.Module):
    def __init__(self):
        super(Word2Vec, self).__init__()
        # W and WT are two independent weight matrices, NOT transposes of each other.
        # (.type(dtype) must go inside nn.Parameter: applied outside, it would
        # return a plain tensor and W/WT would not be registered as parameters.)
        self.W = nn.Parameter((-2 * torch.rand(voc_size, embedding_size) + 1).type(dtype))   # voc_size -> embedding_size
        self.WT = nn.Parameter((-2 * torch.rand(embedding_size, voc_size) + 1).type(dtype))  # embedding_size -> voc_size

    def forward(self, X):
        # X : [batch_size, voc_size]
        hidden_layer = torch.matmul(X, self.W)              # [batch_size, embedding_size]
        output_layer = torch.matmul(hidden_layer, self.WT)  # [batch_size, voc_size]
        return output_layer

model = Word2Vec()
criterion = nn.CrossEntropyLoss()
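
# Side note (a sketch, not part of the tutorial): since X is one-hot, X @ W is
# just a row lookup, so an equivalent index-based model could be written as
#   emb = nn.Embedding(voc_size, embedding_size)           # plays the role of W
#   out = nn.Linear(embedding_size, voc_size, bias=False)  # plays the role of WT
# and fed word indices directly instead of one-hot vectors.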
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training
for epoch in range(5000):
    input_batch, target_batch = random_batch(skip_grams, batch_size)
    input_batch = Variable(torch.Tensor(input_batch))
    target_batch = Variable(torch.LongTensor(target_batch))

    optimizer.zero_grad()
    output = model(input_batch)

    # output : [batch_size, voc_size], target_batch : [batch_size] (LongTensor, not one-hot)
    loss = criterion(output, target_batch)

    if (epoch + 1) % 1000 == 0:
        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss.item()))

    loss.backward()
    optimizer.step()

# Why row i of W is the embedding of word i:
# the input is [batch_size, voc_size] (each word is a one-hot vector of
# length voc_size) and W is [voc_size, embedding_size], so for a single word
#   [1, 0, 0] @ [[w1, w4],
#                [w2, w5],
#                [w3, w6]] = [w1, w4]
# i.e. the one-hot picks out exactly one row of W, and the embedding of
# word i is (W[i][0], W[i][1]).
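
# Sanity check of the claim above (added for illustration, not in the original):
i = word_dict["dog"]
one_hot = torch.Tensor(np.eye(voc_size)[i])
assert torch.allclose(torch.matmul(one_hot, model.W), model.W[i])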
# Plot each word at its 2-D embedding
W, WT = model.parameters()
for i, label in enumerate(word_list):
    x, y = float(W[i][0]), float(W[i][1])
    plt.scatter(x, y)
    plt.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom')
plt.show()
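
As a quick usage sketch (my addition, not part of the tutorial), the trained rows of W can also be queried directly, e.g. for cosine-similarity nearest neighbors; the helper most_similar below is hypothetical:

import torch.nn.functional as F

W, WT = model.parameters()
emb = W.detach()  # row i is the embedding of word_list[i]

def most_similar(word, topk=3):
    v = emb[word_dict[word]].unsqueeze(0)  # [1, embedding_size]
    sims = F.cosine_similarity(v, emb)     # [voc_size] similarities to every word
    order = sims.argsort(descending=True).tolist()
    return [(word_list[j], float(sims[j])) for j in order[1:topk + 1]]  # skip the word itself

print(most_similar("dog"))  # e.g. words the model places near "dog"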
