A Statistical View of Deep Learning (III): Memory and Kernels
Memory, the ways in which we remember and recall past experiences and data to reason about future events, is a term used frequently in current literature. All models in machine learning consist of a memory that is central to their usage. We have two principal types of memory mechanism, most often addressed under the types of models they stem from: parametric and non-parametric (but also all the shades of grey in between). Deep networks represent the archetypical parametric model, in which memory is implemented by distilling the statistical properties of observed data into a set of model parameters, or weights. The poster-child for non-parametric models would be kernel machines (and nearest neighbours), which implement their memory mechanism by actually storing all the data explicitly. It is easy to think that these represent fundamentally different ways of reasoning about data, but the reality of how we derive these methods points to far deeper connections and a more fundamental similarity.
Deep networks, kernel methods and Gaussian processes form a continuum of approaches for solving the same problem - in their final form, these approaches might seem very different, but they are fundamentally related, and keeping this in mind can only be useful for future research. This connection is what I explore in this post.
Basis Functions and Neural Networks
All the methods in this post look at regression: learning discriminative or input-output mappings. All such methods extend the humble linear model, where we assume that linear combinations of the input data x, or of transformations of it φ(x), explain the target values y. The φ(x) are basis functions that transform the data into a set of more interesting features. Features such as SIFT for images or MFCCs for audio have been popular in the past - in these cases, we still have a linear regression, since the basis functions are fixed. Neural networks give us the ability to use adaptive basis functions, allowing us to learn the best features from data instead of designing them by hand, and allowing for a non-linear regression.
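The post's original equations were rendered as images and are missing from this copy; as a point of reference, the fixed-basis linear model being described can be written in standard textbook notation as

\[ \mathbb{E}[y \mid \mathbf{x}] = \mathbf{w}^\top \phi(\mathbf{x}) = \sum_{j} w_j\, \phi_j(\mathbf{x}), \]

which is linear in the weights w whether φ is a hand-designed feature extractor (such as SIFT or MFCCs) or simply the identity φ(x) = x.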
A useful probabilistic formulation separates the regression into systematic and random components: the systematic component is a function f we wish to learn, and the targets are noisy realisations of this function. To connect neural networks to the linear model, I'll explicitly separate the last linear layer of the neural network from the layers that appear before it. Thus for an L-layer deep neural network, I'll denote the first L-1 layers by the mapping φ(x; θ) with parameters θ, and the final-layer weights by w; the set of all model parameters is q = {θ, w}.
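As a hedged reconstruction rather than the post's exact (missing) notation, the systematic-plus-random decomposition with the last layer separated out is

\[ f(\mathbf{x}) = \mathbf{w}^\top \phi(\mathbf{x}; \theta), \qquad y_n = f(\mathbf{x}_n) + \epsilon_n, \quad \epsilon_n \sim \mathcal{N}(0, \sigma^2). \]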
Once we have specified our probabilistic model, it implies an objective function for optimising the model parameters: the negative log joint-probability. We can now apply back-propagation and learn all the parameters, performing MAP estimation in the neural network model. Memory in this model is maintained in the parametric modelling framework; we do not save the data but compactly represent it by the parameters of our model. This formulation has many nice properties: we can encode properties of the data into the function f, such as being a 2D image for which convolutions are sensible, and we can choose a stochastic approximation for scalability, performing gradient descent using mini-batches instead of the entire data set. The loss function for the output weights is of particular interest, since it offers us a way to move from neural networks to other types of regression.
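A plausible form of this objective, assuming a Gaussian likelihood and a Gaussian prior on the output weights w (with the noise variance absorbed into a regularisation weight λ, and any regulariser on θ omitted), is

\[ \mathcal{L}(\theta, \mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \big(y_n - \mathbf{w}^\top \phi(\mathbf{x}_n; \theta)\big)^2 + \frac{\lambda}{2}\, \mathbf{w}^\top \mathbf{w}. \]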
Kernel Methods
If you stare a bit longer at this last objective function, especially as formulated by explicitly representing the last linear layer, you'll very quickly be tempted to compute its dual function [1, pp. 293]. We'll do this by first setting the derivative w.r.t. w to zero and solving for it:
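The equation itself is not reproduced in this copy; following the standard dual derivation in Bishop [1, pp. 293], setting the derivative of the loss above with respect to w to zero gives

\[ \mathbf{w} = \Phi^\top \boldsymbol{\alpha}, \qquad \alpha_n = -\tfrac{1}{\lambda}\big(\mathbf{w}^\top \phi(\mathbf{x}_n; \theta) - y_n\big). \]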
We've combined all basis functions/features for the observations into the matrix Φ. By taking this optimal solution for the last-layer weights and substituting it into the loss function, two things emerge: we obtain the dual loss function that is completely rewritten in terms of a new parameter α, and the computation involves the matrix product, or Gram matrix, K = ΦΦᵀ. We can repeat the process and solve the dual loss for the optimal parameter α, and obtain:
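Again following the textbook derivation rather than the post's missing equations, the dual solution is

\[ \boldsymbol{\alpha} = (K + \lambda I_N)^{-1} \mathbf{y}, \qquad K = \Phi\Phi^\top, \quad K_{nm} = \phi(\mathbf{x}_n)^\top \phi(\mathbf{x}_m). \]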
And this is where the kernel machines deviate from neural networks. Since we only need to consider inner products of the features φ(x) (implied by maintaining K), instead of parameterising them using a non-linear mapping given by a deep network, we can use kernel substitution (a.k.a. the kernel trick) and get the same behaviour by choosing an appropriate and rich kernel function k(x, x'). This highlights the deep relationship between deep networks and kernel machines: they are more than simply related, they are duals of each other.
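To make the kernel-substitution idea concrete, here is a minimal NumPy sketch of kernel ridge regression with an RBF kernel. The toy data, the kernel choice, the lengthscale, and the function names are illustrative assumptions of mine, not part of the original post:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

# Toy 1-D regression problem (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)

lam = 0.1                                              # regularisation weight λ
K = rbf_kernel(X, X)                                   # Gram matrix, K = ΦΦᵀ via the kernel trick
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # α = (K + λI)⁻¹ y

# Prediction at new inputs: f(x*) = Σ_n α_n k(x_n, x*)
X_star = np.linspace(-3, 3, 100)[:, None]
f_star = rbf_kernel(X_star, X) @ alpha
```

Note that the model is never expressed through explicit features φ(x); everything is phrased through the kernel evaluations stored in K, which is exactly the non-parametric memory discussed next.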
The memory mechanism has now been completely transformed into a non-parametric one - we explicitly represent all the data points (through the matrix K). The advantage of the kernel approach is that it is often easier to encode properties of the functions we wish to represent, e.g., functions that are up to p-th order differentiable, or periodic functions, but stochastic approximation is now not possible. Predictions for a test point x* can now be written in a few different ways:
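The equivalent forms alluded to here (reconstructed in standard notation, since the original equation images are unavailable) are

\[ f(\mathbf{x}^*) = \mathbf{w}^\top \phi(\mathbf{x}^*) = \mathbf{k}_*^\top (K + \lambda I_N)^{-1}\mathbf{y} = \sum_{n=1}^{N} \alpha_n\, k(\mathbf{x}_n, \mathbf{x}^*), \qquad [\mathbf{k}_*]_n = k(\mathbf{x}_n, \mathbf{x}^*). \]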
The last equality is a form of solution implied by the Representer theorem and shows that we can instead think of a different formulation of our problem: one that directly penalises the function we are trying to estimate, subject to the constraint that the function lies within a Hilbert space (and providing a direct non-parametric view):
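A standard statement of this penalised functional view, with H the reproducing kernel Hilbert space induced by k, is

\[ \min_{f \in \mathcal{H}} \; \sum_{n=1}^{N} \big(y_n - f(\mathbf{x}_n)\big)^2 + \lambda\, \|f\|_{\mathcal{H}}^2, \qquad \text{with minimiser } f(\cdot) = \sum_{n=1}^{N} \alpha_n\, k(\mathbf{x}_n, \cdot). \]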
Gaussian Processes
We can go one step further and obtain not only a MAP estimate of the function f but also its variance. We must now specify a probability model that yields the same loss function as this last objective. This is possible since we now know what a suitable prior over functions is, and the resulting probabilistic model corresponds to Gaussian process (GP) regression [2]:
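One standard way to write the corresponding model (again a reconstruction in textbook notation, not the post's own rendering) is

\[ f \sim \mathcal{GP}\big(0,\, k(\mathbf{x}, \mathbf{x}')\big), \qquad y_n \mid f \sim \mathcal{N}\big(f(\mathbf{x}_n),\, \sigma^2\big). \]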
We can now apply the standard rules for Gaussian conditioning to obtain a mean and variance for the prediction at any test point x*. What we obtain is:
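The standard Gaussian-conditioning result, with σ² playing the role of the regulariser λ above, is

\[ \mathbb{E}[f(\mathbf{x}^*) \mid \mathbf{y}] = \mathbf{k}_*^\top (K + \sigma^2 I_N)^{-1}\mathbf{y}, \qquad \mathbb{V}[f(\mathbf{x}^*) \mid \mathbf{y}] = k(\mathbf{x}^*, \mathbf{x}^*) - \mathbf{k}_*^\top (K + \sigma^2 I_N)^{-1}\mathbf{k}_*. \]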
Conveniently, we obtain the same solution for the mean whether we use the kernel approach or the Gaussian conditioning approach. We now also have a way to compute the variance of the functions of interest, which is useful for many problems (such as active learning and optimistic exploration). Memory in the GP is also of the non-parametric flavour, since our problem is formulated in the same way as the kernel machines. GPs form another nice bridge between kernel methods and neural networks: we can see GPs as derived by Bayesian reasoning in kernel machines (which are themselves dual functions of neural nets), or we can obtain a GP by taking the number of hidden units in a one layer neural network to infinity [3].
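Continuing the earlier NumPy sketch (this reuses rbf_kernel, X, y, K, lam and X_star from above, and identifying the noise variance σ² with λ is my illustrative assumption), the GP posterior mean and variance can be computed as follows; the mean coincides with the kernel ridge prediction f_star:

```python
sigma2 = lam                                     # identify σ² with λ (illustrative choice)
A = K + sigma2 * np.eye(len(X))                  # K + σ²I
K_star = rbf_kernel(X_star, X)                   # cross-covariances k(x*, x_n)

gp_mean = K_star @ np.linalg.solve(A, y)         # identical to f_star above
gp_var = (rbf_kernel(X_star, X_star).diagonal()  # prior variance k(x*, x*)
          - np.einsum('ij,ji->i', K_star, np.linalg.solve(A, K_star.T)))
```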
Summary
Deep neural networks, kernel methods and Gaussian processes are all different ways of solving the same problem - how to learn the best regression functions possible. They are deeply connected: starting from one we can derive any of the other methods, and they expose the many interesting ways in which we can address and combine approaches that are ostensibly in competition. I think such connections are very interesting, and should prove important as we continue to build more powerful and faithful models for regression and classification.
Some References
[1] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[3] Radford M. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1994.