ML | EM
What is EM
The EM algorithm is used to find the maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points.
The motivation is as follows. If we know the value of the parameters $\boldsymbol\theta$, we can usually find the value of the latent variables $\mathbf{Z}$ by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:
1. First, initialize the parameters $\boldsymbol\theta$ to some random values.
2. Compute the best value for $\mathbf{Z}$ given these parameter values.
3. Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$. Parameters associated with a particular value of $\mathbf{Z}$ will use only those data points whose associated latent variable has that value.
4. Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called hard EM. The k-means algorithm is an example of this class of algorithms.
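As a concrete instance of this hard-EM loop, here is a minimal k-means sketch in Python; the data layout (one row per point), the number of clusters `k`, and the stopping rule are assumptions of the example, not anything fixed by the text above.

```python
import numpy as np

def hard_em_kmeans(X, k, n_iter=100, seed=0):
    """Minimal hard EM in the k-means style (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize the parameters (here, the cluster means) at random.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: best value of Z for each point -- index of the nearest center.
        z = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        # Step 3: re-estimate each center from only the points assigned to it.
        new_centers = np.array([X[z == j].mean(axis=0) if (z == j).any() else centers[j]
                                for j in range(k)])
        # Step 4: iterate until the parameters stop changing.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, z
```

Because each step can only decrease the within-cluster squared distance, the loop terminates at a local optimum rather than a global one, which is why random restarts are common in practice.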
However, we can do somewhat better: rather than making a hard choice for $\mathbf{Z}$ given the current parameter values and averaging only over the data points assigned to a particular value of $\mathbf{Z}$, we can determine the probability of each possible value of $\mathbf{Z}$ for each data point, and then use those probabilities to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and it is the variant normally meant by EM.
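A hedged sketch of that soft weighting, assuming (purely for concreteness) that the components are one-dimensional Gaussians: each point receives a responsibility for every value of $\mathbf{Z}$, and the parameter update becomes a weighted average over all points.

```python
import numpy as np
from scipy.stats import norm

def responsibilities(x, weights, means, stds):
    """p(Z = j | x_i): the probability of each value of Z for each data point."""
    # Unnormalized: prior weight of component j times its density at x_i.
    r = weights * norm.pdf(x[:, None], loc=means, scale=stds)
    return r / r.sum(axis=1, keepdims=True)

def soft_mean_update(x, r):
    """Weighted average over the *entire* data set, one mean per value of Z."""
    return (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
```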
Because it can handle missing data and unobserved latent variables, EM has also become a useful tool for pricing and managing the risk of a portfolio.
Algorithm
Given a statistical model consisting of a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data
$L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta) $
However, this quantity is often intractable (e.g. if $\mathbf{Z}$ is a sequence of events, so that the number of values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).
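By contrast, when $\mathbf{Z}$ is a single discrete label per data point (as in a mixture model), the inner sum has only a handful of terms and the marginal log-likelihood is perfectly tractable. A minimal sketch, again assuming a one-dimensional Gaussian mixture:

```python
import numpy as np
from scipy.stats import norm

def marginal_log_likelihood(x, weights, means, stds):
    """log L(theta; X) = sum_i log sum_z p(x_i, z | theta) for a 1-D mixture."""
    # Joint p(x_i, Z=j | theta) = weights[j] * N(x_i; means[j], stds[j]),
    # summed over the k possible values of Z for each point.
    joint = weights * norm.pdf(x[:, None], loc=means, scale=stds)
    return np.log(joint.sum(axis=1)).sum()
```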
The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:
1. Expectation step (E step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ under the current estimate of the parameters $\boldsymbol\theta^{(t)}$:
$Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta;\mathbf{X},\mathbf{Z}) \right] \,$
2. Maximization step (M step): Find the parameters that maximize this quantity:
$\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \, $
Note that in typical models to which EM is applied:
- The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). There may in fact be a vector of observations associated with each data point.
- The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
- The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has a particular value).
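Putting the E and M steps together for a model that fits these notes (continuous scalar observations, one discrete latent label per point, shared mixing weights plus per-component means and variances) gives the following sketch; the 1-D Gaussian mixture, the initialization, and the stopping tolerance are all assumptions of the example.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, k, n_iter=200, tol=1e-8, seed=0):
    """EM for a 1-D Gaussian mixture: the E step computes p(Z | X, theta_t);
    the M step maximizes Q via weighted closed-form updates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    weights = np.full(k, 1.0 / k)                 # shared mixing weights
    means = rng.choice(x, size=k, replace=False)  # per-component parameters
    stds = np.full(k, x.std() + 1e-6)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibilities r[i, j] = p(Z_i = j | x_i, theta_t).
        dens = weights * norm.pdf(x[:, None], loc=means, scale=stds)
        ll = np.log(dens.sum(axis=1)).sum()       # marginal log-likelihood
        r = dens / dens.sum(axis=1, keepdims=True)
        # M step: closed-form argmax of Q for the Gaussian mixture.
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r * x[:, None]).sum(axis=0) / nk
        stds = np.sqrt((r * (x[:, None] - means) ** 2).sum(axis=0) / nk) + 1e-12
        if ll - prev_ll < tol:                    # monotone ascent has stalled
            break
        prev_ll = ll
    return weights, means, stds
```

Each iteration of this loop cannot decrease the marginal log-likelihood, which is the monotonicity property alluded to above; in practice one still restarts from several initializations, since the limit is only a local maximum.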