What's EM

The EM (expectation-maximization) algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points.

The motivation is as follows. If we know the value of the parameters $\boldsymbol\theta$, we can usually find the value of the latent variables $\mathbf{Z}$ by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:

  1. First, initialize the parameters $\boldsymbol\theta$ to some random values.
  2. Compute the best value for $\mathbf{Z}$ given these parameter values.
  3. Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$. Parameters associated with a particular value of $\mathbf{Z}$ will use only those data points whose associated latent variable has that value.
  4. Iterate steps 2 and 3 until convergence.

The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called hard EM. The k-means algorithm is an example of this class of algorithms.
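To make the hard-EM recipe concrete, here is a minimal k-means-style sketch in Python; the 1-D data, the helper name `hard_em_kmeans`, and all parameter choices are illustrative assumptions rather than anything from the text above.

```python
import numpy as np

def hard_em_kmeans(x, k, n_iter=100, seed=0):
    """Hard EM for 1-D clustering in the k-means style: alternate between
    picking the best Z (nearest center) and re-estimating theta (centers)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)        # step 1: initialize theta
    for _ in range(n_iter):
        # step 2: best Z given theta -- assign each point to its nearest center
        z = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # step 3: better theta given Z -- average the points in each group
        new_centers = np.array([x[z == j].mean() if np.any(z == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):             # step 4: converged
            break
        centers = new_centers
    return centers, z

# Toy usage: two well-separated 1-D clusters
x = np.concatenate([np.random.normal(0.0, 1.0, 50), np.random.normal(8.0, 1.0, 50)])
centers, assignments = hard_em_kmeans(x, k=2)
```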

However, we can do somewhat better. Rather than making a hard choice for $\mathbf{Z}$ given the current parameter values and averaging only over the set of data points associated with a particular value of $\mathbf{Z}$, we can determine the probability of each possible value of $\mathbf{Z}$ for each data point, and then use the probabilities associated with a particular value of $\mathbf{Z}$ to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM.
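A single soft-EM update can be sketched as follows for a 1-D Gaussian mixture with unit variances; the model and every name here (`soft_em_step` and so on) are assumptions made for illustration. The E step computes, for each point, the probability of each value of $\mathbf{Z}$ (its "responsibilities"), and the M step re-estimates each parameter as a responsibility-weighted average over all points.

```python
import numpy as np

def soft_em_step(x, means, weights):
    """One soft-EM update for a 1-D Gaussian mixture with unit variances.
    E step: responsibilities r[i, j] = P(Z_i = j | x_i, theta).
    M step: weighted averages over ALL points, not over a hard partition."""
    # E step: mixing weight times component density; the 1/sqrt(2*pi)
    # constant cancels in the per-point normalization, so it is omitted.
    dens = weights[None, :] * np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M step: each mean becomes a responsibility-weighted average of all x
    new_means = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    new_weights = r.mean(axis=0)
    return new_means, new_weights
```

Iterating this update until the means stop changing gives the usual EM fit; the hard-EM variant above is recovered by rounding each row of `r` to 0/1.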

Because it can handle missing data and unobserved (latent) variables, EM has also become a useful tool for pricing and managing the risk of a portfolio.

Algorithm

Given a statistical model consisting of a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data

$L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta) $
However, this quantity is often intractable (e.g. if $\mathbf{Z}$ is a sequence of events, so that the number of values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).
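The blow-up is easy to see by brute force. The toy sketch below, which assumes a small made-up two-state chain model (the matrices `pi`, `A`, `B` are invented for illustration), evaluates the marginal likelihood by enumerating every latent sequence; the number of terms is $K^T$ and doubles with each additional step, which is exactly why the direct sum quickly becomes infeasible.

```python
import numpy as np
from itertools import product

# Toy two-state chain model (all numbers invented for illustration):
# pi = initial state distribution, A = transitions, B = emission probabilities.
K, T = 2, 10
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],
              [0.4, 0.6]])
x = np.random.randint(0, 2, size=T)          # an observed symbol sequence

# p(X | theta) = sum over all K**T latent sequences Z of p(X, Z | theta).
# Here K**T = 2**10 = 1024 terms; each extra step doubles the count.
total = 0.0
for z in product(range(K), repeat=T):
    p = pi[z[0]] * B[z[0], x[0]]
    for t in range(1, T):
        p *= A[z[t - 1], z[t]] * B[z[t], x[t]]
    total += p
print(total)                                  # the marginal likelihood
```

Dynamic programming (e.g. the forward algorithm for hidden Markov models), or EM itself, avoids this enumeration.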

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:

1. Expectation step (E step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ under the current estimate of the parameters $\boldsymbol\theta^{(t)}$:
$Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta;\mathbf{X},\mathbf{Z}) \right] \,$
2. Maximization step (M step): Find the parameter that maximizes this quantity:
$\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \, $
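To see both steps on a worked example, consider the classic two-coin model: one of two biased coins is chosen uniformly at random (the latent $\mathbf{Z}$), tossed $n$ times, and only the number of heads is observed. The sketch below assumes toy data and initial guesses. Because the choice of coin is uniform, the E-step posterior is just the normalized likelihood, and maximizing $Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})$ in the M step reduces to a weighted maximum-likelihood update.

```python
import numpy as np
from scipy.stats import binom

# Toy data (assumed): head counts from 5 runs of n = 10 tosses each;
# which of the two coins produced each run is the latent Z.
heads = np.array([5, 9, 8, 4, 7])
n = 10
theta = np.array([0.6, 0.5])      # initial guesses for P(heads) of coins A, B

for _ in range(50):
    # E step: posterior P(Z_i = coin | x_i, theta^(t)); with a uniform
    # prior over coins this is just the normalized binomial likelihood.
    lik = binom.pmf(heads[:, None], n, theta[None, :])       # shape (5, 2)
    w = lik / lik.sum(axis=1, keepdims=True)
    # M step: maximizing Q(theta | theta^(t)) in closed form:
    # expected heads per coin divided by expected tosses per coin.
    theta = (w * heads[:, None]).sum(axis=0) / (w.sum(axis=0) * n)

print(theta)   # settles near (0.80, 0.52) for this data and initialization
```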
Note that in typical models to which EM is applied:

  • The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). There may in fact be a vector of observations associated with each data point.
  • The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
  • The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e., associated with all data points whose corresponding latent variable has a particular value).
