Machine Learning Trick of the Day (2): Gaussian Integral Trick
Today's trick, the Gaussian integral trick, is one that allows us to re-express a (potentially troublesome) function in an alternative form, in particular, as an integral of a Gaussian against another function — integrals against a Gaussian turn out not to be too troublesome and can provide many statistical and computational benefits. One popular setting where we can exploit such an alternative representation is for inference in discrete undirected graphical models (think Boltzmann machines or discrete Markov random fields). In such cases, this trick lets us transform our discrete problem into one that has an underlying continuous (Gaussian) representation, which we can then solve using our other machine learning tricks. But this is part of a more general strategy that is used throughout machine learning, whether in Bayesian posterior analysis, deep learning or kernel machines. This trick has many facets, and this post explores the Gaussian integral trick and its more general form, auxiliary variable augmentation.
[Figure: Gaussian integral trick state expansion.]
Gaussian Integral Trick
The Gaussian integral trick is one of a statistical flavour: it allows us to turn a function that is exponential in $x^2$ into an integral of an exponential that is linear in $x$. We do this by introducing auxiliary variables and then integrating over them, hence a form of auxiliary variable augmentation. The simplest form of this trick is to apply the following identity (for $a > 0$):

$$\exp\left(\frac{x^2}{4a}\right) = \sqrt{\frac{a}{\pi}} \int \exp\left(-a y^2 + x y\right) dy.$$
We can prove this to ourselves by exploiting our knowledge of Gaussian distributions (which this looks strikingly similar to) and our ability to complete the square when we see such quadratic forms. Separating out the scaling factor $a$ in the integrand, we get:

$$\int \exp\left(-a\left(y^2 - \frac{x}{a} y\right)\right) dy,$$

which by completing the square becomes:

$$\exp\left(\frac{x^2}{4a}\right) \int \exp\left(-a\left(y - \frac{x}{2a}\right)^2\right) dy,$$

where the last integral is solved by matching it to a Gaussian with mean $\mu = \frac{x}{2a}$ and variance $\sigma^2 = \frac{1}{2a}$, which we know has a normalisation of $\sqrt{2\pi\sigma^2} = \sqrt{\pi/a}$ — this last step shows how this trick got its name.
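As a quick sanity check, here is a minimal numerical verification of the identity (my own sketch, not part of the original post), comparing the closed form against quadrature for a few illustrative values of $a$ and $x$:

```python
import numpy as np
from scipy.integrate import quad

def closed_form(x, a):
    # Left-hand side: exp(x^2 / (4a)), an exponential quadratic in x.
    return np.exp(x**2 / (4.0 * a))

def augmented_form(x, a):
    # Right-hand side: sqrt(a / pi) * integral of exp(-a y^2 + x y) dy,
    # an exponential that is linear in x under the integral.
    integral, _ = quad(lambda y: np.exp(-a * y**2 + x * y), -np.inf, np.inf)
    return np.sqrt(a / np.pi) * integral

for a, x in [(0.5, 1.0), (2.0, -1.5), (1.0, 3.0)]:
    print(f"a={a}, x={x}: {closed_form(x, a):.6f} vs {augmented_form(x, a):.6f}")
```

The two columns agree to numerical precision, as the completed-square derivation above predicts.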
The 'Gaussian integral trick' was coined and initially described by Hertz et al. [Ch. 10, pg. 253] [1], and is closely related to the Hubbard-Stratonovich transform (which provides the augmentation for $\exp(-x^2)$).
Transforming Binary MRFs
This trick is also valid in the multivariate case, which is what we will most often be interested in. One good place to see this trick in action is when applied to binary MRFs or Boltzmann machines. Binary MRFs have a joint probability, for binary random variables $\mathbf{x} \in \{0, 1\}^n$, of the standard form:

$$p(\mathbf{x}) = \frac{1}{Z} \exp\left(\boldsymbol{\theta}^\top \mathbf{x} + \tfrac{1}{2}\mathbf{x}^\top W \mathbf{x}\right),$$
where $Z$ is the normalising constant. The (multivariate) Gaussian integral trick can be applied to the quadratic term $\mathbf{x}^\top W \mathbf{x}$ in this energy function, allowing for an insightful analysis and an interesting reparameterisation that lets alternative inference methods be used (a small sampling sketch follows the list below). For example:
- We can conduct an analysis of Boltzmann machines that, when combined with our earlier trick (trick 1, the replica trick), allows for theoretical predictions about the performance of this model. See:
- Formal Statistical Mechanics of Neural Networks, section 10.1 (eq. 10.5), Hertz et al. [1]
- We can use the trick to create a Gaussian augmented space for discrete MRFs to which Hamiltonian Monte Carlo, previously restricted to continuous and differentiable models, can be applied [2][3]. See:
- Continuous Relaxations for Discrete Hamiltonian Monte Carlo, Zhang et al.
- Auxiliary-variable Exact Hamiltonian Monte Carlo Samplers for Binary Distributions. Pakman and Paninski.
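To make the reparameterisation concrete, here is a minimal sketch of the augmented Gibbs sampler (my own illustration under stated assumptions, not code from these papers). Following the construction used by Zhang et al. [2], we shift $W$ by a diagonal matrix $D$ so that $W + D$ is positive definite, draw the auxiliary Gaussian $\mathbf{y} \mid \mathbf{x} \sim \mathcal{N}\left((W + D)\mathbf{x},\, W + D\right)$, and use the fact that, once we condition on $\mathbf{y}$, the quadratic term cancels (since $x_i^2 = x_i$ for binary variables) and each $x_i$ becomes an independent Bernoulli. The model size and parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Boltzmann machine: p(x) ~ exp(theta^T x + 0.5 x^T W x), x in {0,1}^n.
n = 5
A = rng.normal(size=(n, n))
W = (A + A.T) / 2.0
np.fill_diagonal(W, 0.0)               # no self-couplings
theta = rng.normal(size=n)

# Choose D so that M = W + D is positive definite (diagonal dominance suffices).
d = np.abs(W).sum(axis=1) + 1.0
M = W + np.diag(d)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

x = rng.integers(0, 2, size=n).astype(float)
samples = []
for _ in range(5000):
    # Auxiliary Gaussian: y | x ~ N((W + D) x, W + D).
    y = rng.multivariate_normal(M @ x, M)
    # Given y, the quadratic term cancels (x_i^2 = x_i) and the x_i decouple:
    # p(x_i = 1 | y) = sigmoid(theta_i - d_i / 2 + y_i).
    p = sigmoid(theta - d / 2.0 + y)
    x = (rng.random(n) < p).astype(float)
    samples.append(x.copy())

print("estimated marginals p(x_i = 1):", np.asarray(samples[1000:]).mean(axis=0))
```

Marginalising $\mathbf{x}$ analytically instead of sampling it yields a smooth, continuous distribution over $\mathbf{y}$ alone, which is exactly the representation that makes Hamiltonian Monte Carlo applicable to this discrete model [2].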
Variable Augmentation
[Figure: Graphical model for a general augmentation.]
This trick is a special case of a more general strategy called variable (or data) augmentation — I prefer variable augmentation to data augmentation [4], since it will not be confused with observed data preprocessing and manipulation. In this setting, the introduction of auxiliary variables has most often been used to develop better-mixing Markov chain Monte Carlo samplers. This is because, after augmentation, the conditional distributions of the model often have highly convenient and easy-to-sample-from forms.
One recent example of variable augmentation (and one that parallels our initial trick) is the Polya-Gamma variable augmentation. In this case, we can express the sigmoid function that appears when computing the mean of the Bernoulli distribution as:

$$\sigma(\psi) = \frac{e^{\psi}}{1 + e^{\psi}} = \frac{1}{2} e^{\psi/2} \int e^{-\omega \psi^2/2}\, p(\omega)\, d\omega,$$
where $p(\omega)$ is a Polya-Gamma distribution [5]. This nicely transforms the sigmoid into a Gaussian convolution (a Gaussian function of $\psi$ integrated against a Polya-Gamma random variable) — and gives us a different type of Gaussian integral trick. In fact, similar Gaussian integral tricks abound, and are typically described under the heading of Gaussian scale-mixture distributions.
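As an illustration (my own sketch, not from the original post), we can check this identity by Monte Carlo. Approximate $\mathrm{PG}(1, 0)$ draws can be generated from the truncated infinite-sum representation given by Polson et al. [5], $\omega = \frac{1}{2\pi^2} \sum_{k \geq 1} \frac{g_k}{(k - 1/2)^2}$ with $g_k \sim \mathrm{Exp}(1)$; the truncation level below is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pg_1_0(size, truncation=200):
    # Approximate PG(1, 0) draws via the truncated series representation of
    # Polson et al. [5]: omega = (1 / (2 pi^2)) * sum_k g_k / (k - 1/2)^2,
    # with g_k ~ Exp(1). The truncation level is a simplifying assumption.
    k = np.arange(1, truncation + 1)
    g = rng.exponential(size=(size, truncation))
    return (g / (k - 0.5) ** 2).sum(axis=1) / (2.0 * np.pi**2)

def sigmoid(psi):
    return 1.0 / (1.0 + np.exp(-psi))

omega = sample_pg_1_0(200_000)
for psi in [-2.0, 0.5, 3.0]:
    # Monte Carlo estimate of (1/2) e^{psi/2} E[exp(-omega psi^2 / 2)].
    estimate = 0.5 * np.exp(psi / 2.0) * np.mean(np.exp(-omega * psi**2 / 2.0))
    print(f"psi={psi}: sigmoid={sigmoid(psi):.4f}, augmented={estimate:.4f}")
```

The Monte Carlo estimate matches the sigmoid closely, confirming the augmented representation.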
There are many examples of variable augmentation to be found, especially for binary and categorical distributions. Much guidance is available, and some papers that demonstrate this are:
- Albert and Chib's paper [4] is one of the first to clearly establish the concept of data augmentation, and the one to which it is most often attributed. It shows augmentation for binary and categorical variables — a classic paper that everyone should read.
- Polson and Scott introduced the Polya-Gamma augmentation described above, amongst the more recent of augmentation strategies. This augmentation can be used for more effective Monte Carlo or variational inference, e.g., for Bayesian inference in logistic models [5].
- Ultimately, finding a good augmentation relies on exploiting known and tractable integrals. As such, there can be a bit of an art to creating such augmentations, which is what Van Dyk and Meng discuss.
Summary
The Gaussian integral trick is just one from a large class of variable augmentation strategies that are widely used in statistics and machine learning. They work by introducing auxiliary variables into our problems that induce an alternative representation, and that then give us additional statistical and computational benefits. Such methods lie at the heart of efficient inference algorithms, whether these be Monte Carlo or deterministic approximate inference schemes, making variable augmentation a favourite in our box of machine learning tricks.
Some References
[1] John Hertz, Anders Krogh, Richard G. Palmer, Introduction to the Theory of Neural Computation, 1991.
[2] Yichuan Zhang, Zoubin Ghahramani, Amos J. Storkey, Charles A. Sutton, Continuous Relaxations for Discrete Hamiltonian Monte Carlo, Advances in Neural Information Processing Systems, 2012.
[3] Ari Pakman, Liam Paninski, Auxiliary-variable Exact Hamiltonian Monte Carlo Samplers for Binary Distributions, Advances in Neural Information Processing Systems, 2013.
[4] James H. Albert, Siddhartha Chib, Bayesian Analysis of Binary and Polychotomous Response Data, Journal of the American Statistical Association, 1993.
[5] Nicholas G. Polson, James G. Scott, Jesse Windle, Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables, Journal of the American Statistical Association, 2013.