Machine Learning Trick of the Day (2): Gaussian Integral Trick
Today's trick, the Gaussian integral trick, is one that allows us to re-express a (potentially troublesome) function in an alternative form, in particular, as an integral of a Gaussian against another function — integrals against a Gaussian turn out not to be too troublesome and can provide many statistical and computational benefits. One popular setting where we can exploit such an alternative representation is for inference in discrete undirected graphical models (think Boltzmann machines or discrete Markov random fields). In such cases, this trick lets us transform our discrete problem into one that has an underlying continuous (Gaussian) representation, which we can then solve using our other machine learning tricks. But this is part of a more general strategy that is used throughout machine learning, whether in Bayesian posterior analysis, deep learning or kernel machines. This trick has many facets, and this post explores the Gaussian integral trick and its more general form, auxiliary variable augmentation.
[Figure: Gaussian integral trick state expansion.]
Gaussian Integral Trick
The Gaussian integral trick is of a statistical flavour and allows us to turn a function that is exponential in x² (a quadratic inside the exponent) into an integral of an exponential that is linear in x. We do this by augmenting the function with an auxiliary variable and then integrating over this auxiliary variable, hence a form of auxiliary variable augmentation. The simplest form of this trick is to apply the following identity:

$$\exp\left(\frac{x^2}{4a}\right) = \sqrt{\frac{a}{\pi}} \int_{-\infty}^{\infty} \exp\left(-a y^2 + x y\right)\, dy, \qquad a > 0.$$
We can prove this to ourselves by exploiting our knowledge of Gaussian distributions (which the right-hand side looks strikingly similar to) and our ability to complete the square when we see such quadratic forms. Separating out the scaling factor a we get:

$$\sqrt{\frac{a}{\pi}} \int \exp\left(-a\left(y^2 - \frac{x y}{a}\right)\right) dy,$$

which by completing the square becomes:

$$\sqrt{\frac{a}{\pi}}\, \exp\left(\frac{x^2}{4a}\right) \int \exp\left(-a\left(y - \frac{x}{2a}\right)^2\right) dy,$$

where the last integral is solved by matching it to a Gaussian with mean $\mu = \frac{x}{2a}$ and variance $\sigma^2 = \frac{1}{2a}$, which we know has a normalisation of $\sqrt{2\pi\sigma^2} = \sqrt{\pi/a}$, leaving exactly $\exp\left(\frac{x^2}{4a}\right)$; this last step shows how this trick got its name.
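As a quick sanity check, here is a minimal numerical sketch of the identity in the form written above; the particular values of a and x are arbitrary choices for illustration, not anything from the original derivation.

```python
import numpy as np
from scipy.integrate import quad

# Check: exp(x^2 / (4a)) == sqrt(a / pi) * integral of exp(-a y^2 + x y) dy
# The values of a and x are arbitrary; any a > 0 should work.
a, x = 1.3, 0.7

lhs = np.exp(x**2 / (4 * a))
rhs = np.sqrt(a / np.pi) * quad(lambda y: np.exp(-a * y**2 + x * y),
                                -np.inf, np.inf)[0]

print(lhs, rhs)  # both ~1.0988, agreeing to quadrature precision
```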
The 'Gaussian integral trick' was coined and initially described by Hertz et al. [Ch. 10, pg 253] [1], and is closely related to the Hubbard-Stratonovich transform (which provides the augmentation for exp(−x²)).
Transforming Binary MRFs
This trick is also valid in the multivariate case, which is what we will most often be interested in. One good place to see this trick in action is when applied to binary MRFs or Boltzmann machines. For binary random variables x, a binary MRF has the joint probability:

$$p(\mathbf{x}) = \frac{1}{Z} \exp\left(\boldsymbol{\theta}^\top \mathbf{x} + \tfrac{1}{2}\, \mathbf{x}^\top W \mathbf{x}\right),$$
where Z is the normalising constant. The (multivariate) Gaussian integral trick can be applied to the quadratic term in this energy function, allowing for an insightful analysis and an interesting reparameterisation that opens the door to alternative inference methods (a numerical sketch of the multivariate identity follows the list below). For example:
- We can conduct an analysis of Boltzmann machines that, when combined with our earlier trick (trick 1, the replica trick), allows for theoretical predictions about the performance of this model. See:
- Formal Statistical Mechanics of Neural Networks, Section 10.1 (Eq. 10.5), Hertz et al. [1]
- We can use the trick to create a Gaussian augmented space for discrete MRFs to which Hamiltonian Monte Carlo, previously restricted to continuous and differentiable models, can be applied [2][3]. See:
- Continuous Relaxations for Discrete Hamiltonian Monte Carlo, Zhang et al.
- Auxiliary-variable Exact Hamiltonian Monte Carlo Samplers for Binary Distributions. Pakman and Paninski.
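To make the multivariate version concrete, here is a minimal numerical sketch. It rests on the fact that, for a positive definite matrix A, the Gaussian moment generating function gives exp(½ sᵀAs) = E[exp(sᵀx)] with x ~ N(0, A); the coupling matrix W, the binary configuration s and the diagonal shift below are arbitrary illustrative choices, not taken from the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary small coupling matrix W for a 3-node binary MRF (illustrative only).
W = np.array([[ 0.0, 0.8, -0.5],
              [ 0.8, 0.0,  0.3],
              [-0.5, 0.3,  0.0]])

# Shift the diagonal so that A = W + d*I is positive definite.
d = 1.0 - np.linalg.eigvalsh(W).min()
A = W + d * np.eye(3)

s = np.array([1.0, 0.0, 1.0])  # one binary configuration

# The troublesome quadratic term ...
lhs = np.exp(0.5 * s @ A @ s)

# ... rewritten as an average, over Gaussian auxiliaries x ~ N(0, A),
# of a term that is linear in s.
x = rng.multivariate_normal(np.zeros(3), A, size=200_000)
rhs = np.exp(x @ s).mean()

print(lhs, rhs)  # agree up to Monte Carlo error

# For binary s, the diagonal correction 0.5*d*(s's) = 0.5*d*sum(s) is linear in s,
# so it can be absorbed into the bias term of the original MRF.
```

In the constructions used in the papers above, the auxiliary Gaussian is instead coupled to the binary variables (roughly, x | s ~ N(As, A)); expanding the joint distribution shows that the energy is linear in s given x, so the binary variables become conditionally independent, which is what makes the continuous relaxation usable by Hamiltonian Monte Carlo.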
Variable Augmentation
[Figure: Graphical model for a general augmentation.]
This trick is a special case of a more general strategy called variable (or data) augmentation; I prefer variable augmentation to data augmentation [4], since it will not be confused with observed-data preprocessing and manipulation. In this setting, the introduction of auxiliary variables has most often been used to develop better-mixing Markov chain Monte Carlo samplers. This is because, after augmentation, the conditional distributions of the model often have highly convenient and easy-to-sample-from forms.
One recent example of variable augmentation (and one that parallels our initial trick) is the Polya-Gamma variable augmentation. In this case, we can express the sigmoid function that appears when computing the mean of the Bernoulli distribution as:

$$\sigma(\psi) = \frac{e^{\psi}}{1 + e^{\psi}} = \frac{1}{2}\, e^{\psi/2} \int_0^\infty e^{-y \psi^2 / 2}\, p(y)\, dy,$$
where p(y) is the density of a Polya-Gamma PG(1, 0) random variable [5]. This nicely transforms the sigmoid into a Gaussian function of ψ integrated against a Polya-Gamma random variable, giving us a different type of Gaussian integral trick. In fact, similar Gaussian integral tricks abound, and are typically described under the heading of Gaussian scale-mixture distributions.
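A minimal numerical sketch of this identity: rather than sampling Polya-Gamma variables, it uses the known Laplace transform of a PG(1, 0) variable, E[exp(−yt)] = 1/cosh(√(t/2)), so the integral above collapses to a closed form that can be compared against the sigmoid directly.

```python
import numpy as np

# Polya-Gamma check: sigma(psi) = 0.5 * exp(psi/2) * E[exp(-y * psi^2 / 2)],
# and for y ~ PG(1, 0) that expectation equals 1 / cosh(psi / 2).
psi = np.linspace(-4.0, 4.0, 9)

sigmoid = np.exp(psi) / (1.0 + np.exp(psi))
augmented = 0.5 * np.exp(psi / 2.0) / np.cosh(psi / 2.0)

print(np.max(np.abs(sigmoid - augmented)))  # ~1e-16, i.e. exact up to rounding
```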
There are many examples of variable augmentation to be found, especially for binary and categorical distributions. Much guidance is available, and some papers that demonstrate this are:
- Albert and Chib's paper [4] is one of the first in which the concept of data augmentation is clearly established, and the one to which data augmentation is most often attributed. It shows augmentation for binary and categorical variables; a classic paper that everyone should read.
- Polson and Scott [5] introduced the Polya-Gamma augmentation described above, which is amongst the more recent augmentation strategies. This augmentation can be used for more effective Monte Carlo or variational inference.
- Ultimately, finding a good augmentation relies on exploiting known and tractable integrals. As such, there can be a bit of an art to creating such augmentations, which is what van Dyk and Meng discuss.
Summary
The Gaussian integral trick is just one from a large class of variable augmentation strategies that are widely used in statistics and machine learning. They work by introducing auxiliary variables into our problems that induce an alternative representation, and that then give us additional statistical and computational benefits. Such methods lie at the heart of efficient inference algorithms, whether these be Monte Carlo or deterministic approximate inference schemes, making variable augmentation a favourite in our box of machine learning tricks.
Some References
[1] John Hertz, Anders Krogh, Richard G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991
[2] Yichuan Zhang, Zoubin Ghahramani, Amos J. Storkey, Charles A. Sutton, Continuous Relaxations for Discrete Hamiltonian Monte Carlo, Advances in Neural Information Processing Systems, 2012
[3] Ari Pakman, Liam Paninski, Auxiliary-variable Exact Hamiltonian Monte Carlo Samplers for Binary Distributions, Advances in Neural Information Processing Systems, 2013
[4] James H. Albert, Siddhartha Chib, Bayesian Analysis of Binary and Polychotomous Response Data, Journal of the American Statistical Association, 1993
[5] Nicholas G. Polson, James G. Scott, Jesse Windle, Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables, Journal of the American Statistical Association, 2013