MLE vs MAP: the connection between Maximum Likelihood and Maximum A Posteriori Estimation
Reference: MLE vs MAP.
Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP) estimation are both methods for estimating a variable in the setting of probability distributions or graphical models. They are similar in that both compute a single point estimate rather than a full distribution.
Those of us who have already indulged in Machine Learning will be familiar with MLE; sometimes we even use it without knowing it. For example, when fitting a Gaussian to a dataset, we immediately take the sample mean and sample variance and use them as the parameters of our Gaussian. This is MLE: if we take the derivative of the Gaussian likelihood with respect to the mean and variance and maximize it (i.e., set the derivative to zero), what we get are exactly the formulas for the sample mean and sample variance. As another example, most of the optimization in Machine Learning and Deep Learning (neural networks, etc.) can be interpreted as MLE. A quick numerical check follows below.
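Here is a minimal sketch (using NumPy; the true parameters 5.0 and 2.0 are made up for illustration) showing that the sample mean and the 1/N sample variance recover the Gaussian's parameters:

```python
import numpy as np

# Draw data from a known Gaussian, then "fit" it by MLE: the
# closed-form solutions are the sample mean and the biased (1/N)
# sample variance.
rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=100_000)

mu_mle = data.mean()                     # maximizes the likelihood in mu
var_mle = ((data - mu_mle) ** 2).mean()  # 1/N, not 1/(N-1)

print(f"MLE mean:     {mu_mle:.3f}  (true: 5.0)")
print(f"MLE variance: {var_mle:.3f}  (true: 4.0)")
```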
Speaking in more abstract terms, say we have a likelihood function P(X|θ). Then the MLE for θ, the parameter we want to infer, is (assuming the data points in X are i.i.d.):

$$\theta_{\mathrm{MLE}} = \operatorname*{argmax}_{\theta} P(X \mid \theta) = \operatorname*{argmax}_{\theta} \prod_i P(x_i \mid \theta)$$
A product of many numbers less than 1 approaches 0 as the number of factors grows, so computing it directly is impractical: it will underflow. Hence, we work in log space instead. Since the logarithm is monotonically increasing, maximizing a function is equivalent to maximizing the log of that function:

$$\theta_{\mathrm{MLE}} = \operatorname*{argmax}_{\theta} \log P(X \mid \theta) = \operatorname*{argmax}_{\theta} \sum_i \log P(x_i \mid \theta)$$
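To see the underflow concretely, here is a tiny sketch (in NumPy; the 2000 factors of 0.5 are an arbitrary illustration):

```python
import numpy as np

# 2000 factors of 0.5: the raw product underflows to exactly 0.0
# in float64, while the sum of logs stays comfortably representable.
probs = np.full(2000, 0.5)
print(np.prod(probs))        # 0.0 -- underflow
print(np.log(probs).sum())   # about -1386.29 -- no problem
```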
To use this framework, we just need to derive the log likelihood of our model, then maximize it with respect to θ using our favorite optimization algorithm, such as gradient descent.
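As a minimal sketch of that recipe (assuming, purely for illustration, a Gaussian likelihood with known unit variance, so only the mean μ is inferred), gradient ascent on the log likelihood recovers the sample mean:

```python
import numpy as np

# Maximize the Gaussian log likelihood in mu by gradient ascent,
# with the variance fixed at 1 for simplicity. The gradient of
# sum_i log N(x_i | mu, 1) w.r.t. mu is sum_i (x_i - mu), which
# vanishes exactly at the sample mean -- the closed-form MLE.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1_000)

mu = 0.0    # initial guess
lr = 1e-4   # learning rate
for _ in range(500):
    grad = np.sum(data - mu)  # derivative of the log likelihood
    mu += lr * grad           # ascent step, since we maximize

print(f"gradient-ascent MLE: {mu:.4f}")
print(f"sample mean:         {data.mean():.4f}")  # should match
```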
At this point, we understand what MLE does. From here, we can draw a parallel to MAP estimation.
MAP usually comes up in a Bayesian setting because, as the name suggests, it works on the posterior distribution, not only the likelihood.
Recall that with Bayes’ rule we can get the posterior as a product of likelihood and prior:

$$P(\theta \mid X) = \frac{P(X \mid \theta)\, P(\theta)}{P(X)} \propto P(X \mid \theta)\, P(\theta)$$
We ignore the normalizing constant P(X) because it does not depend on θ; since we are strictly doing optimization here, proportionality is sufficient.
If we replace the likelihood in the MLE formula above with the posterior, we get:

$$\theta_{\mathrm{MAP}} = \operatorname*{argmax}_{\theta} P(X \mid \theta)\, P(\theta) = \operatorname*{argmax}_{\theta} \sum_i \log P(x_i \mid \theta) + \log P(\theta)$$
Comparing the MLE and MAP equations, the only thing that differs is the inclusion of the prior P(θ) in MAP; otherwise they are identical. What this means is that the likelihood is now weighted by the prior.
Consider what happens if we use the simplest prior in our MAP estimation: a uniform prior. This assigns equal weight to all possible values of θ, so the likelihood is weighted by a constant. Being constant, the prior can be dropped from the MAP objective, as it does not contribute to the maximization.
To be more concrete, say θ can take six possible values. Then our prior P(θ) is 1/6 everywhere in the distribution, and consequently we can drop that constant from the MAP estimation:

$$\theta_{\mathrm{MAP}} = \operatorname*{argmax}_{\theta} \sum_i \log P(x_i \mid \theta) + \log \tfrac{1}{6} = \operatorname*{argmax}_{\theta} \sum_i \log P(x_i \mid \theta) = \theta_{\mathrm{MLE}}$$
We are back at the MLE equation!
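The same conclusion can be checked numerically. In this sketch (a made-up example: θ is a coin's bias restricted to six candidate values, with a uniform prior of 1/6 on each), the MAP and MLE estimates coincide:

```python
import numpy as np

# Six candidate values for theta (a coin's bias) and a few Bernoulli
# observations (1 = heads). The prior is uniform: P(theta) = 1/6.
thetas = np.array([0.1, 0.25, 0.4, 0.55, 0.7, 0.85])
data = np.array([1, 0, 1, 1, 1, 0, 1, 1])

# Log likelihood of the data under each candidate theta.
log_lik = np.array([
    np.sum(data * np.log(t) + (1 - data) * np.log(1 - t))
    for t in thetas
])
log_prior = np.log(np.full(6, 1 / 6))  # the same constant everywhere
log_post = log_lik + log_prior         # unnormalized log posterior

print("MLE:", thetas[np.argmax(log_lik)])   # 0.7
print("MAP:", thetas[np.argmax(log_post)])  # 0.7 -- identical
```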
If we use a different prior, say a Gaussian, the prior is no longer constant: depending on where we are in the distribution, the probability is high or low, never the same everywhere.
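For instance, with a conjugate Gaussian prior N(0, 1) on the mean of a unit-variance Gaussian likelihood (values made up for illustration), the MAP estimate is pulled away from the sample mean toward the prior mean, so MAP and MLE no longer agree:

```python
import numpy as np

# Gaussian likelihood with known variance 1 and a Gaussian prior
# N(0, 1) on the mean mu. The MAP estimate then has a closed form:
# it shrinks the sample mean toward the prior mean 0 by n / (n + 1).
rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=10)
n = len(data)

mu_mle = data.mean()                # the prior plays no role
mu_map = n * data.mean() / (n + 1)  # pulled toward the prior mean 0

print(f"MLE: {mu_mle:.3f}")  # near 2.0
print(f"MAP: {mu_map:.3f}")  # shrunk toward 0 -- MAP != MLE here
```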
We can therefore conclude that MLE is a special case of MAP, where the prior is uniform!