Machine Learning Trick of the Day (2): Gaussian Integral Trick

Today's trick, the Gaussian integral trick, is one that allows us to re-express a (potentially troublesome) function in an alternative form, in particular, as an integral of a Gaussian against another function — integrals against a Gaussian turn out not to be too troublesome and can provide many statistical and computational benefits. One popular setting where we can exploit such an alternative representation is for inference in discrete undirected graphical models (think Boltzmann machines or discrete Markov random fields). In such cases, this trick lets us transform our discrete problem into one that has an underlying continuous (Gaussian) representation, which we can then solve using our other machine learning tricks. But this is part of a more general strategy that is used throughout machine learning, whether in Bayesian posterior analysis, deep learning or kernel machines. This trick has many facets, and this post explores the Gaussian integral trick and its more general form, auxiliary variable augmentation.

[Figure: Gaussian integral trick state expansion.]

Gaussian Integral Trick

The Gaussian integral trick is one of a statistical flavour: it allows us to turn a function that is exponential in $x^2$ into an integral over an exponential that is linear in $x$. We do this by augmenting the linear function with auxiliary variables and then integrating over these auxiliary variables, hence a form of auxiliary variable augmentation. The simplest form of this trick is to apply the following identity:

$$\int \exp(-ay^2 + xy)\,dy = \sqrt{\frac{\pi}{a}}\,\exp\left(\frac{x^2}{4a}\right)$$

We can prove this to ourselves by exploiting our knowledge of Gaussian distributions (which this looks strikingly similar to) and our ability to complete the square when we see such quadratic forms. Separating out the scaling factor $a$, we get:

$$\int \exp(-ay^2 + xy)\,dy = \int \exp\left\{-a\left(y^2 - \frac{x}{a}y\right)\right\}dy$$

which, by completing the square, becomes:

$$\int \exp\left\{-a\left(y - \frac{x}{2a}\right)^2 + \frac{x^2}{4a}\right\}dy$$
$$= \exp\left(\frac{x^2}{4a}\right)\int \exp\left\{-a\left(y - \frac{x}{2a}\right)^2\right\}dy$$
$$= \exp\left(\frac{x^2}{4a}\right)\sqrt{\frac{\pi}{a}}$$

where the last integral is solved by matching it to a Gaussian with mean $\mu = \frac{x}{2a}$ and variance $\sigma^2 = \frac{1}{2a}$, which we know has a normalisation of $\sqrt{2\pi\sigma^2}$ — this last step shows how this trick got its name.
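As a quick numerical sanity check, we can compare the two sides of the identity directly. Below is a minimal sketch using numpy and scipy quadrature (the helper names are ours, for illustration only):

```python
import numpy as np
from scipy.integrate import quad

def lhs(a, x):
    # Numerically integrate exp(-a*y^2 + x*y) over the real line.
    integrand = lambda y: np.exp(-a * y**2 + x * y)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

def rhs(a, x):
    # Closed form from the trick: sqrt(pi/a) * exp(x^2 / (4a)).
    return np.sqrt(np.pi / a) * np.exp(x**2 / (4 * a))

for a, x in [(1.0, 0.5), (2.0, -1.0), (0.7, 3.0)]:
    print(lhs(a, x), rhs(a, x))  # the two columns agree to quadrature precision
```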

The 'Gaussian integral trick' was coined and initially described by Hertz et al. [1, Ch. 10, pg. 253], and is closely related to the Hubbard-Stratonovich transform (which provides the augmentation for $\exp(-x^2)$).

Transforming Binary MRFs

This trick is also valid in the multivariate case, which is what we will most often be interested in. One good place to see this trick in action is in its application to binary MRFs or Boltzmann machines. A binary MRF has a joint probability, for binary random variables $\mathbf{x}$:

$$p(\mathbf{x}) = \frac{1}{Z}\exp\left(\boldsymbol{\theta}^\top\mathbf{x} + \mathbf{x}^\top W\mathbf{x}\right)$$

where $Z$ is the normalising constant. The (multivariate) Gaussian integral trick can be applied to the quadratic term in this energy function, allowing for an insightful analysis and an interesting reparameterisation that lets alternative inference methods be used. For example, it underlies continuous relaxations that make discrete models amenable to Hamiltonian Monte Carlo [2], as well as auxiliary-variable exact HMC samplers for binary distributions [3].
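To make the multivariate version concrete, here is a minimal Gibbs-sampler sketch (our own illustrative code, not the algorithms of [2] or [3]). Assuming $W$ is symmetric, we shift the quadratic so that $A = 2W + dI$ is positive definite; since $x_i^2 = x_i$ for binary variables, the shift only moves mass into the linear term. Augmenting with $\mathbf{y}$ such that $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(A\mathbf{x}, A)$ then renders the $x_i$ conditionally independent Bernoulli variables given $\mathbf{y}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def gibbs_binary_mrf(theta, W, n_iters=10_000):
    """Gibbs sampler for p(x) proportional to exp(theta^T x + x^T W x),
    x in {0,1}^n, via the Gaussian integral trick (a sketch; W symmetric)."""
    n = len(theta)
    # Choose d so that A = 2W + d*I is positive definite.
    d = max(0.0, -np.linalg.eigvalsh(2 * W).min()) + 1e-3
    A = 2 * W + d * np.eye(n)
    L = np.linalg.cholesky(A)       # used to draw y ~ N(Ax, A)
    theta_shift = theta - d / 2     # absorbs the (d/2) * sum_i x_i correction
    x = rng.integers(0, 2, size=n).astype(float)
    samples = np.empty((n_iters, n))
    for t in range(n_iters):
        y = A @ x + L @ rng.standard_normal(n)  # y | x ~ N(Ax, A)
        # x_i | y ~ Bernoulli(sigmoid(theta_i - d/2 + y_i)), independently.
        x = (rng.random(n) < sigmoid(theta_shift + y)).astype(float)
        samples[t] = x
    return samples

# Usage on a small model with hypothetical parameters:
theta = np.array([0.5, -0.2, 0.1])
W = np.array([[0.0, 0.3, -0.1],
              [0.3, 0.0, 0.2],
              [-0.1, 0.2, 0.0]])
print(gibbs_binary_mrf(theta, W).mean(axis=0))  # estimated marginals of each x_i
```

Marginalising $\mathbf{y}$ out of the augmented joint recovers $p(\mathbf{x})$ exactly, so the only approximation here is Monte Carlo error.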

Variable Augmentation

[Figure: graphical model for a general augmentation.]

This trick is a special case of a more general strategy called variable (or data) augmentation. I prefer variable augmentation to data augmentation [4], since it will not be confused with the preprocessing and manipulation of observed data. In this setting, the introduction of auxiliary variables has most often been used to develop better-mixing Markov chain Monte Carlo samplers. This is because, after augmentation, the conditional distributions of the model often have highly convenient and easy-to-sample-from forms.

One recent example of variable augmentation (and one that parallels our initial trick) is the Polya-Gamma variable augmentation. In this case, we can express the sigmoid function, which appears when computing the mean of the Bernoulli distribution, as:

$$\sigma(x) = \frac{1}{1+\exp(-x)} = \frac{1}{2}\exp\left(\frac{x}{2}\right)\int_0^\infty \exp\left(-\frac{1}{2}yx^2\right)p(y)\,dy$$

where $p(y)$ is a Polya-Gamma distribution [5]. This nicely transforms the sigmoid into a Gaussian convolution (a Gaussian in $x$ integrated against a Polya-Gamma random variable), and gives us a different type of Gaussian integral trick. In fact, similar Gaussian integral tricks abound and are typically described under the heading of Gaussian scale-mixture distributions.
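We can sanity-check this identity by Monte Carlo. A PG(1, 0) random variable has the infinite-sum representation $y = \frac{1}{2\pi^2}\sum_{k\geq 1} g_k/(k-\frac{1}{2})^2$ with $g_k \sim \text{Exp}(1)$ [5]; the sketch below truncates this sum, so the estimate carries a small (here negligible) bias:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pg1(n_samples, n_terms=200):
    # Approximate PG(1, 0) draws via the truncated infinite-sum representation
    # y = (1 / (2*pi^2)) * sum_k g_k / (k - 1/2)^2, with g_k ~ Exp(1).
    k = np.arange(1, n_terms + 1)
    g = rng.exponential(1.0, size=(n_samples, n_terms))
    return (g / (k - 0.5) ** 2).sum(axis=1) / (2 * np.pi**2)

def sigmoid_via_pg(x, n_samples=200_000):
    # Monte Carlo estimate of (1/2) * exp(x/2) * E[exp(-y * x^2 / 2)].
    y = sample_pg1(n_samples)
    return 0.5 * np.exp(x / 2) * np.exp(-0.5 * y * x**2).mean()

for x in [-2.0, 0.5, 1.5]:
    print(sigmoid_via_pg(x), 1.0 / (1.0 + np.exp(-x)))  # should roughly agree
```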

There are many examples of variable augmentation to be found, especially for binary and categorical distributions, and much guidance is available; two papers that demonstrate the approach are Albert and Chib's augmentation for binary response models [4] and the Polya-Gamma augmentation for logistic models [5].

Summary

The Gaussian integral trick is just one from a large class of variable augmentation strategies that are widely used in statistics and machine learning. They work by introducing auxiliary variables into our problems that induce an alternative representation, and that then give us additional statistical and computational benefits. Such methods lie at the heart of efficient inference algorithms, whether these be Monte Carlo or deterministic approximate inference schemes, making variable augmentation a favourite in our box of machine learning tricks.


Some References
[1] John Hertz, Anders Krogh, Richard G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
[2] Yichuan Zhang, Zoubin Ghahramani, Amos J. Storkey, Charles A. Sutton, Continuous relaxations for discrete Hamiltonian Monte Carlo, Advances in Neural Information Processing Systems, 2012.
[3] Ari Pakman, Liam Paninski, Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions, Advances in Neural Information Processing Systems, 2013.
[4] James H. Albert, Siddhartha Chib, Bayesian analysis of binary and polychotomous response data, Journal of the American Statistical Association, 1993.
[5] Nicholas G. Polson, James G. Scott, Jesse Windle, Bayesian inference for logistic models using Polya-Gamma latent variables, Journal of the American Statistical Association, 2013.
