Paper Reference: word2vec Parameter Learning Explained

1. One-Word Context Model

In our setting, the vocabulary size is $V$, and the hidden layer size is $N$.

The input $x$ is a one-hot vector: for a given input context word, exactly one of the $V$ units, $\{x_1,\cdots,x_V\}$, is 1, and all other units are 0. For example,

\[x=[0,\cdots,1,\cdots,0]\]

The weights between the input layer and the hidden layer can be represented by a $V \times N$ matrix $W$. Each row of $W$ is the $N$-dimensional vector representation $v_w$ of the associated word of the input layer.

Given a context (a single word), and assuming $x_k=1$ and $x_{k'}=0$ for $k'\neq k$, then

\[h=x^T W=W_{(k,\cdot)}:=v_{w_I}\]

which is just copying the $k$-th row of $W$ to $h$. $v_{w_I}$ is the vector representation of the input word $w_I$. This implies that the link (activation) function of the hidden layer units is simply linear (i.e., directly passing its weighted sum of inputs to the next layer).

From the hidden layer to the output layer, there is a different weight matrix $W'=\{w'_{ij}\}$, which is an $N \times V$ matrix. Using these weights, we can compute a score $u_j$ for each word in the vocabulary,

\[ u_j={v'_{w_j}}^T \cdot h \]

where $v'_{w_j}$ is the $j$-th column of the matrix $W'$. Then we can use the softmax classification model to obtain the posterior distribution over the words, which is a multinomial distribution:

\[p(w_j|w_I)=y_j=\frac{\exp(u_j)}{\sum_{j'=1}^V{\exp(u_{j'})}}\]

where $y_j$ is the output of the $j$-th unit in the output layer.

Finally, we obtain:

\[p(w_j | w_I) = y_j = \frac{\exp\left( {v'_{w_j}}^T v_{w_I}\right)}{\sum_{j'=1}^V{\exp\left( {v'_{w_{j'}}}^T v_{w_I}\right)}}\]

Note that $v_w$ and $v'_w$ are two representations of the word $w$: $v_w$ comes from the rows of $W$, the input $\to$ hidden weight matrix, and $v'_w$ comes from the columns of $W'$, the hidden $\to$ output weight matrix. In the subsequent analysis, we call $v_w$ the “input vector” and $v'_w$ the “output vector” of the word $w$.
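To make the forward pass concrete, here is a minimal NumPy sketch (the sizes, the random seed, and the variable names are illustrative assumptions, not anything from the paper): given the index $k$ of the input word, the hidden layer is simply the $k$-th row of $W$, and the output layer is a softmax over the scores $u_j$.

```python
import numpy as np

V, N = 10, 5                                   # vocabulary size, hidden layer size
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, N))         # input -> hidden weights; rows are input vectors v_w
W_prime = rng.normal(scale=0.1, size=(N, V))   # hidden -> output weights; columns are output vectors v'_w

k = 3                                          # index of the input context word w_I (so x_k = 1)
h = W[k]                                       # hidden layer: a copy of the k-th row of W, i.e. v_{w_I}
u = W_prime.T @ h                              # scores u_j = v'_{w_j}^T h, one per vocabulary word
y = np.exp(u) / np.sum(np.exp(u))              # softmax: posterior p(w_j | w_I)
```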

1.2. Cost Function

Let's derive the weight update equations for this model. Although the actual computation is impractical at scale, we still derive the update equations to gain insight into the original model without any tricks.

The training objective is to maximize the conditional probability of observing the actual output word $w_o$ (denote its index in the output layer by $j^*$) given the input context word $w_I$, with regard to the weights:

\[ \max p(w_o|w_I)=\max y_{j^*}\]

which is equivalent to minimizing the negative log probability:

\[ E=-\log p(w_o|w_I)=-u_{j^*}+\log \sum\limits_{j'=1}^V{\exp(u_{j'})}\]

where $j^*$ is the index of the actual output word in the output layer. Note that this loss function can be understood as a special case of the cross-entropy between two probability distributions, which was discussed in a previous post: Negative log-likelihood function.
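Continuing the sketch above, the loss for one training pair is just the negative log probability of the actual output word (the index j_star below is an assumed example value):

```python
j_star = 7                                    # index j* of the actual output word w_o
E = -u[j_star] + np.log(np.sum(np.exp(u)))    # E = -u_{j*} + log(sum_{j'} exp(u_{j'}))
assert np.isclose(E, -np.log(y[j_star]))      # same value as -log p(w_o | w_I)
```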

1.3. Updating the Hidden $\to$ Output Weights

Let's derive the update equation for the weights between the hidden and output layers, $W'_{N\times V}$.

Take the derivative of $E$ with regard to the $j$-th output unit's net input $u_j$:

\[ \frac{\partial E}{\partial u_j}= y_j-t_j := e_j \]

where

\[t_j = \begin{cases}1, & j=j^* \\ 0, & j \neq j^* \end{cases} \]

i.e., $t_j$ is 1 only when the $j$-th word is the actual output word; otherwise $t_j=0$. Note that this derivative is simply the prediction error $e_j$ of the output layer.

Next we take the derivative with respect to $w'_{ij}$ to obtain the gradient on the hidden $\to$ output weights $W'_{N \times V}$:

\[ \frac{\partial E}{\partial w'_{ij}}= \frac{\partial E}{\partial u_j} \cdot \frac{\partial u_j}{\partial w'_{ij}}= e_j\cdot h_i \]

Therefore, with SGD, we obtain the weight update equation for the hidden $\to$ output weights:

\[ w'_{ij} = w'_{ij} - \alpha \cdot e_j \cdot h_i\]

or in vector notation:

\[ v'_{w_j}=v'_{w_j} - \alpha\cdot e_j \cdot h ~~~~~~j=1,2,\cdots,V\]

where $\alpha$ is the learning rate, $e_j = y_j - t_j$, and $h_i$ is the $i$-th unit in the hidden layer; $v'_{w_j}$ is the output vector of $w_j$.

Note that this update equation implies that we have to go through every word in the vocabulary, check its output probability $y_j$, and compare $y_j$ with its expected output $t_j$ (either 0 or 1).

If $y_j > t_j$ (“overestimating”), we subtract a proportion of the hidden vector $h$ (i.e., $v_{w_I}$) from $v'_{w_j}$, moving $v'_{w_j}$ farther away from $v_{w_I}$; if $y_j < t_j$ (“underestimating”, which can only happen when $t_j=1$, i.e., $w_j=w_o$), we add a proportion of $h$ to $v'_{w_o}$, making $v'_{w_o}$ closer to $v_{w_I}$. If $y_j \approx t_j$, then according to the update equation, little change is made to the weights. Note again that $v_w$ (input vector) and $v'_w$ (output vector) are two different vector representations of the word $w$.
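As a rough sketch of this step (continuing the NumPy example above; the learning rate and the one-hot target are illustrative assumptions), the whole hidden $\to$ output update can be written as a single outer product instead of a per-element loop:

```python
alpha = 0.025                        # learning rate (an example value)
t = np.zeros(V); t[j_star] = 1.0     # target distribution: one-hot at the actual output word
e = y - t                            # prediction errors e_j = y_j - t_j

# dE/dw'_{ij} = e_j * h_i, so the gradient of W' is the outer product of h and e
W_prime -= alpha * np.outer(h, e)    # updates every output vector v'_{w_j}, j = 1..V
```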

1.4. Updating the Input $\to$ Hidden Weights

Having obtained the update equations for $W'$, we now move on to $W$. We take the derivative of $E$ with respect to the output of the hidden layer, obtaining:

\[\frac{\partial E}{\partial h_i}=\sum\limits_{j=1}^V\frac{\partial E}{\partial u_j}\cdot\frac{\partial u_j}{\partial h_i}=\sum\limits_{j=1}^V{e_j\cdot w'_{ij}}:=EH_i\]

where $h_i$ is the output of the $i$-th unit of the hidden layer; $u_j$ is the net input of the $j$-th unit in the output layer; and $e_j=y_j-t_j$ is the prediction error of the $j$-th word in the output layer. $EH$, an $N$-dimensional vector, is the sum of the output vectors of all words in the vocabulary, weighted by their prediction errors.

Next, we take the derivative of $E$ with respect to $W$. First, recall that the hidden layer is a linear function of the input layer:

\[h_i=\sum\limits_{k=1}^V{x_k}\cdot w_{ki}\]

Then we obtain:

\[\frac{\partial E}{\partial w_{ki}}=\frac{\partial E}{\partial h_i} \cdot \frac{\partial h_i}{\partial w_{ki}}=EH_i \cdot x_k\]

\[\frac{\partial E}{\partial W}=x\cdot EH\]

which is a $V\times N$ matrix. Since only one component of $x$ is non-zero, only one row of $\frac{\partial E}{\partial W}$ is non-zero, and the value of that row is $EH$, an $N$-dimensional vector.

We obtain the update equation for $W$ as:

\[v_{w_I}=v_{w_I}-\alpha\cdot EH\]

where $v_{w_I}$ is a row of $W$, the “input vector” of the only context word, and it is the only row of $W$ whose gradient is non-zero. All the other rows of $W$ remain unchanged in this iteration.

Vector $EH$ is the sum of the output vectors of all words in the vocabulary weighted by their prediction errors $e_j=y_j-t_j$; the update therefore adds a portion of every output vector in the vocabulary to the input vector of the context word.
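In the same sketch, this step reads as follows. Note that for strict SGD on a single example, $EH$ should be computed with the $W'$ values from before the update above, so a real implementation would compute $e$ and $EH$ first and then apply both updates:

```python
EH = W_prime @ e        # EH_i = sum_j e_j * w'_{ij}; an N-dimensional vector
W[k] -= alpha * EH      # only the input vector of the context word (row k of W) changes
```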

As we iteratively update the model parameters by going through context-target word pairs generated from a training corpus, the effects on the vectors accumulate. We can imagine that the output vector of a word $w$ is “dragged” back and forth by the input vectors of $w$'s co-occurring neighbors, as if there were physical strings between the vector of $w$ and the vectors of its neighbors. Similarly, an input vector can also be considered as being dragged by many output vectors. This interpretation is reminiscent of gravity, or of a force-directed graph layout. The equilibrium length of each imaginary string is related to the strength of co-occurrence between the associated pair of words, as well as to the learning rate. After many iterations, the relative positions of the input and output vectors will eventually stabilize.

2. Multi-Word Context Model

Now, we move on to the model with a multi-word context setting.

When computing the hidden layer output, the CBOW model computes the mean value of the inputs:

\[h=\frac{1}{C}(x_1+x_2+\cdots+x_C)^T W=\frac{1}{C}\left(v_{w_1}+v_{w_2}+\cdots+v_{w_C}\right)\]

where $C$ is the number of words in the context, $w_1,\cdots,w_C$ are the words in the context, and $v_w$ is the input vector of a word $w$. The loss function is:

\[E = - \log p(w_o | w_{I,1},\cdots,w_{I,C}) = -u_{j^*}+\log\sum\limits_{j'=1}^{V}{\exp(u_{j'})}\]
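For the multi-word context, only the forward pass changes: the hidden layer is the average of the context words' input vectors. A hedged continuation of the sketch, with an assumed list of context word indices:

```python
context = [1, 4, 6, 8]                        # indices of the C context words (example values)
C = len(context)
h = W[context].mean(axis=0)                   # h = (1/C) * (v_{w_1} + ... + v_{w_C})
u = W_prime.T @ h                             # scores, as before
y = np.exp(u) / np.sum(np.exp(u))             # posterior p(w_j | context)
E = -u[j_star] + np.log(np.sum(np.exp(u)))    # same loss form as the one-word model
```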

2.2. Updating the Hidden $\to$ Output Weights

The update equation for the hidden $\to$ output weights stays the same as for the one-word-context model:

\[ v'_{w_j}=v'_{w_j} - \alpha\cdot e_j \cdot h ~~~~~~j=1,2,\cdots,V\]

Note that we need to apply this to every element of the hidden $\to$ output weight matrix for each training instance.

2.3. Updating the Input $\to$ Hidden Weights

The update equation for the input $\to$ hidden weights is similar to Section 1.4, except that now we apply the following equation to every word $w_{I,c}$ in the context:

\[ v_{w_{I,c}}=v_{w_{I,c}} -\frac{1}{C}\cdot \alpha \cdot EH \]

where $v_{w_{I,c}}$ is the input vector of the $c$-th word in the input context; $\alpha$ is the learning rate; and $EH_i=\frac{\partial E}{\partial h_i}$.
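Putting the CBOW backward pass together in the same sketch: the output-vector update is unchanged, and the same $EH$ correction, scaled by $1/C$, is applied to every context word's input vector (here $EH$ is computed before $W'$ is modified, so both updates use gradients from the same forward pass):

```python
e = y - t                             # prediction errors for this CBOW training instance
EH = W_prime @ e                      # computed before W' is updated
W_prime -= alpha * np.outer(h, e)     # hidden -> output update, same form as before
for c in context:
    W[c] -= (alpha / C) * EH          # each context word's input vector moves by EH * (1/C)
```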
