Ridge Regression and Ridge Regression Kernel
Reference:
1. scikit-learn linear_model: Ridge regression
2. Machine Learning for Quantum Mechanics in a Nutshell
3. Sample ridge-path plot code by Fabian Pedregosa (scikit-learn example)
Ridge regression
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\[\min_{w} \left\| Xw-y\right\|_2^{2} + \lambda\left\| w \right\|_2^{2}\]
Here, \(\lambda \ge 0\) is a complexity parameter that controls the amount of shrinkage: the larger the value of \(\lambda\), the greater the amount of shrinkage, and the more robust the coefficients become to collinearity. The figure below shows the relationship between \(\lambda\) and the oscillations of the weights.
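This kind of figure can be reproduced with a short script in the spirit of the ridge-path example from reference 3. The sketch below is minimal; the Hilbert design matrix and the range of \(\lambda\) values are illustrative choices.

```python
# Minimal sketch of the ridge coefficient path: fit Ridge for a range of
# regularization strengths and plot how each coefficient changes.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge

# 10x10 Hilbert matrix: an ill-conditioned design with strongly collinear columns
X = 1.0 / (np.arange(1, 11) + np.arange(0, 10)[:, np.newaxis])
y = np.ones(10)

alphas = np.logspace(-10, -2, 200)          # regularization strengths (lambda)
coefs = []
for a in alphas:
    ridge = Ridge(alpha=a, fit_intercept=False)
    ridge.fit(X, y)
    coefs.append(ridge.coef_)

plt.plot(alphas, coefs)
plt.xscale("log")
plt.xlabel("lambda (alpha in scikit-learn)")
plt.ylabel("coefficient value")
plt.title("Ridge coefficients as a function of the regularization")
plt.show()
```

As \(\lambda\) grows the coefficients shrink smoothly toward zero, while for very small \(\lambda\) they oscillate wildly because the columns of \(X\) are nearly collinear.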

Kernel Ridge Regression Theory
Here \(\tilde{x}\) denotes a test input and \(x_i\) the training inputs; the predictor is expanded over the training data:
\[f(\tilde{x}) = \sum_{i=1}^n \alpha_i k(\tilde{x}, x_i)\]
Although the dimensionality of the \(\mathbf{Hilbert}\) space can be high, the solution lives in the finite span of the projected training data, which enables a finite representation. The corresponding convex optimization problem is:
\[\underset{\alpha \in \mathbb{R}^{n}}{\mathrm{argmin}} \sum_{i=1}^n \left(f(x_i) - y_i\right)^2 + \lambda\left\| f \right\|_{H}^{2}\]
\[\Leftrightarrow \underset{\alpha \in \mathbb{R}^{n}}{\mathrm{argmin}} \left\langle K\alpha-y, K\alpha-y \right\rangle + \lambda \alpha^{T}K\alpha\]
where \(\left\| f \right\|_{H}^{2}\) is the squared norm of \(f\) in the \(\mathbf{Hilbert}\) space, i.e. the complexity of the linear ridge regression model in feature space, and \(K \in \mathbb{R}^{n\times n}\) with \(K_{i, j}=k(x_i, x_j)\) is the kernel matrix between training samples. As before, setting the gradient to 0 yields an analytic solution for the regression coefficients:
\[\nabla_{\alpha}\left(\alpha^{T}K^{2}\alpha - 2\alpha^{T}Ky + y^{T}y + \lambda\alpha^{T}K\alpha\right) = 0 \Leftrightarrow K^{2}\alpha + \lambda K\alpha = Ky\]
\[\Leftrightarrow \alpha=(K+\lambda I)^{-1}y \]
where \(\lambda\) is a hyperparameter determining the strength of the regularization. The norm of the coefficient vector is related to the smoothness of the predictor: a smaller norm corresponds to a smoother, simpler model.
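As a concrete illustration, here is a minimal NumPy sketch of this closed-form solution with a Gaussian kernel; the synthetic \(\cos\) data and the particular \(\lambda\) and \(\sigma\) values are arbitrary choices for illustration, not from the original notes.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all pairs of rows of A and B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))                    # training inputs
y = np.cos(X).ravel()                                   # training targets

lam, sigma = 1e-3, 1.0                                  # regularization and length scale
K = gaussian_kernel(X, X, sigma)                        # n x n kernel matrix
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)    # (K + lambda I) alpha = y

X_new = np.linspace(-3, 3, 200)[:, None]
y_new = gaussian_kernel(X_new, X, sigma) @ alpha        # f(x~) = sum_i alpha_i k(x~, x_i)
```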
The figure below gives an example of a KRR model with a Gaussian kernel and demonstrates the role of the length-scale hyperparameter \(\sigma\). Although \(\sigma\) does not appear in the regularization term, it controls the smoothness of the predictor and therefore effectively regularizes as well.

The figure above shows kernel ridge regression with a Gaussian kernel at different length scales. The target function is \(\cos(x)\) and the KRR models are drawn as dashed lines. With the regularization constant \(\lambda\) set to \(10^{-14}\), a very small \(\sigma\) fits the training set well but gives large errors between training points, while a too-large \(\sigma\) yields a nearly linear model with high error on both the training data and new predictions.
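A sketch of how such a comparison could be reproduced with scikit-learn's KernelRidge follows; the training points and the specific \(\sigma\) values are illustrative guesses. Note that scikit-learn parameterizes the RBF kernel as \(\exp(-\gamma\left\|x-x'\right\|^2)\), so \(\gamma = 1/(2\sigma^2)\).

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X_train = np.sort(rng.uniform(-5, 5, size=(20, 1)), axis=0)
y_train = np.cos(X_train).ravel()
X_plot = np.linspace(-5, 5, 400)[:, None]

lam = 1e-14                                    # very weak explicit regularization
for sigma in (0.1, 1.0, 10.0):                 # too small, moderate, too large
    krr = KernelRidge(alpha=lam, kernel="rbf", gamma=1.0 / (2 * sigma**2))
    krr.fit(X_train, y_train)
    plt.plot(X_plot, krr.predict(X_plot), "--", label=f"sigma = {sigma}")

plt.plot(X_plot, np.cos(X_plot), "k", label="cos(x)")       # target function
plt.scatter(X_train, y_train, c="k", s=15)                  # training points
plt.legend()
plt.show()
```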
From the above we can see that all information about the KRR model is contained in the matrix \(\mathbf K\) of kernel evaluations between training data. Similarly, all information required to predict new inputs \(\tilde{x}\) is contained in the kernel matrix of training versus prediction data.
Kernel Ridge Regression
The regression coefficients \(\boldsymbol\alpha\) are obtained by solving the linear system of equations \((\mathbf K + \lambda \mathbf I)\boldsymbol\alpha=\mathbf y\), where \(\mathbf K + \lambda \mathbf I\) is symmetric and strictly positive definite. To solve this system we can use the Cholesky decomposition \(\mathbf K + \lambda \mathbf I = \mathbf{U}^{T}\mathbf{U}\), where \(\mathbf U\) is upper triangular. We then break the equation \(\mathbf{U}^{T}\mathbf{U}\boldsymbol\alpha=\mathbf y\) into two triangular systems: first \(\mathbf{U}^{T}\boldsymbol\beta=\mathbf y\), then \(\mathbf{U}\boldsymbol\alpha=\boldsymbol\beta\). Since \(\mathbf{U}^{T}\) is lower triangular and \(\mathbf{U}\) is upper triangular, this requires only two straightforward passes over the data, called forward and backward substitution, respectively. For \(\mathbf{U}^{T}\boldsymbol\beta=\mathbf y\), forward substitution proceeds as follows:
\[u_{1,1}\beta_1=y_1 \Leftrightarrow \beta_1=y_1/u_{1,1}\]
\[u_{1,2}\beta_1 + u_{2,2}\beta_2=y_2 \Leftrightarrow \beta_2=(y_2 - u_{1,2}\beta_1)/u_{2,2}\]
\[\cdots\]
\[\sum_{j=1}^{i} u_{j,i}\beta_j=y_i \Leftrightarrow \beta_i=\Big(y_i-\sum_{j=1}^{i-1}u_{j,i}\beta_j\Big)/u_{i,i}\]
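A minimal sketch of this Cholesky-based solve using SciPy's triangular solvers is given below; the helper name krr_solve and the choice of scipy.linalg are my own, not part of the original notes.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def krr_solve(K, y, lam):
    """Solve (K + lam * I) alpha = y via the Cholesky factorization U^T U."""
    A = K + lam * np.eye(len(K))
    U = cholesky(A, lower=False)                      # upper-triangular U with A = U^T U
    beta = solve_triangular(U.T, y, lower=True)       # forward substitution:  U^T beta = y
    return solve_triangular(U, beta, lower=False)     # backward substitution: U alpha = beta
```

The factorization costs \(O(n^3)\) once, after which each pair of triangular solves is only \(O(n^2)\).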
Once the model is trained, predictions can be made: the prediction for a new input \(\mathbf{\tilde{x}}\) is the inner product between the vector of coefficients and the vector of corresponding kernel evaluations.
For a test dataset \(\mathbf{\tilde{X}} \in \mathbb{R}^{m \times d}\) with rows \(\tilde{x}_1,\dots,\tilde{x}_m\), the matrix \(\mathbf{L} \in \mathbb{R}^{n \times m}\) with \(L_{i,j}=k(x_i, \tilde{x}_j)\) is the kernel matrix of training versus prediction inputs, and the predictions are \(\tilde{\mathbf y} = \mathbf{L}^{T}\boldsymbol\alpha\).
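As a small check, under an illustrative setup, the prediction written as \(\mathbf{L}^{T}\boldsymbol\alpha\) can be compared against scikit-learn's KernelRidge, whose fitted dual coefficients play the role of \(\boldsymbol\alpha\); the data and hyperparameters below are arbitrary.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))          # training inputs
y = np.cos(X).ravel()
X_test = rng.uniform(-3, 3, size=(10, 1))     # prediction inputs

gamma, lam = 0.5, 1e-3
krr = KernelRidge(alpha=lam, kernel="rbf", gamma=gamma).fit(X, y)

L = rbf_kernel(X, X_test, gamma=gamma)        # L[i, j] = k(x_i, x~_j)
y_manual = L.T @ krr.dual_coef_               # prediction as inner products with the coefficients
assert np.allclose(y_manual, krr.predict(X_test))
```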
This method has the same order of complexity as Ordinary Least Squares. In the next note I will go through the details of a KRR implementation.