Ridge Regression and Kernel Ridge Regression

References:
1. scikit-learn documentation, linear_model: Ridge regression
2. "Machine Learning for Quantum Mechanics in a Nutshell"
3. Ridge coefficient path example code by Fabian Pedregosa (scikit-learn examples)

Ridge regression

Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\[\min_{w} \left\| Xw-y\right\|_2^{2} + \lambda\left\|w\right\|_2^2\]
Here, \(\lambda \ge 0\) is a complexity parameter that controls the amount of shrinkage: the larger the value of \(\lambda\), the greater the amount of shrinkage, and the coefficients become more robust to collinearity. The figure below shows the relationship between \(\lambda\) and the oscillation of the weights.
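As a concrete illustration, here is a minimal sketch (adapted from Fabian Pedregosa's ridge-path example, reference 3) that computes the ridge coefficients over a grid of \(\lambda\) values on an ill-conditioned Hilbert design matrix; the matrix and the grid are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

# 10x10 Hilbert matrix: a classic, highly collinear design matrix.
X = 1.0 / (np.arange(1, 11) + np.arange(0, 10)[:, np.newaxis])
y = np.ones(10)

# scikit-learn calls the regularization strength "alpha" (our lambda).
lambdas = np.logspace(-10, -2, 200)
coefs = []
for lam in lambdas:
    ridge = Ridge(alpha=lam, fit_intercept=False)
    ridge.fit(X, y)
    coefs.append(ridge.coef_)
# Plotting coefs against lambdas reproduces the figure: small lambda gives
# wildly oscillating weights, large lambda shrinks them toward zero.
```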

Kernel Ridge Regression Theory

Here \(\tilde{x}\) denotes a test input and the \(x_i\) are the training data; the KRR predictor is a kernel expansion over the training set:
\[f(\tilde{x}) = \sum_{i=1}^n \alpha_i\, k(\tilde{x}, x_i)\]
Although the dimensionality of the Hilbert space can be high, the solution lives in the finite span of the projected training data, enabling a finite representation. The corresponding convex optimization problem is:
\[\underset{\alpha \in \mathbb{R}^{n}} {\mathrm{argmin}} \sum_{i=1}^n \left( f(x_i) - y_i \right)^2 + \lambda\left \| f \right \|_{\mathcal{H}}^{2} \]
\[\Leftrightarrow \underset{\alpha \in \mathbb{R}^{n}} {\mathrm{argmin}} \left \langle K\alpha-y,\, K\alpha-y \right \rangle + \lambda\, \alpha^{T}K\alpha\]
where \(\left \| f \right \|_{\mathcal{H}}\) is the norm of \(f\) in the Hilbert space, measuring the complexity of the linear ridge regression model in feature space, and \(K \in \mathbb{R}^{n\times n}\) with \(K_{i, j}=k(x_i, x_j)\) is the kernel matrix between training samples. As before, setting the gradient to 0 yields an analytic solution for the regression coefficients:
\[\nabla_{\alpha}\left( \alpha^{T}K^{2}\alpha - 2\alpha^{T}Ky + y^{T}y + \lambda\alpha^{T}K\alpha \right) = 0 \Leftrightarrow K^{2}\alpha + \lambda K\alpha = Ky\]
\[\Leftrightarrow \alpha=(K+\lambda I)^{-1}y \]
where \(\lambda\) is a hyperparameter determining the strength of regularization. The penalized norm \(\left \| f \right \|_{\mathcal{H}}\) is related to smoothness: smaller norms correspond to smoother, simpler models.
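A minimal NumPy sketch of this closed-form fit, assuming a Gaussian kernel \(k(x, x') = \exp\left(-\left\|x - x'\right\|^2 / 2\sigma^2\right)\) (the kernel used in the figure below); np.linalg.solve stands in here for the Cholesky-based solver described later.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for every row pair of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def krr_fit(X_train, y_train, sigma, lam):
    # Solve (K + lambda*I) alpha = y for the KRR coefficients alpha.
    K = gaussian_kernel(X_train, X_train, sigma)  # n x n kernel matrix
    return np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
```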

The figure below gives an example of a KRR model with a Gaussian kernel that demonstrates the role of the length-scale hyperparameter \(\sigma\). Although \(\sigma\) does not appear directly in the regularization term, it controls the smoothness of the predictor and therefore effectively regularizes the model.

The figure above shows kernel ridge regression with a Gaussian kernel at different length scales. The target function is \(\cos(x)\), and the KRR models are the dashed lines. With the regularization constant \(\lambda\) set to \(10^{-14}\), a very small \(\sigma\) fits the training set well but has very large error between the training points, while a too-large \(\sigma\) yields a nearly linear model with both high training error and high prediction error.
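The following sketch reproduces that experiment, reusing the gaussian_kernel and krr_fit helpers from above; the training grid and the particular \(\sigma\) values are illustrative choices, not necessarily those of the figure.

```python
import numpy as np

X_train = np.linspace(0.0, 2.0 * np.pi, 10).reshape(-1, 1)
y_train = np.cos(X_train).ravel()
X_test = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)

lam = 1e-14                                  # the lambda used in the figure
for sigma in (0.1, 1.0, 10.0):               # too small / moderate / too large
    alpha = krr_fit(X_train, y_train, sigma, lam)
    L = gaussian_kernel(X_train, X_test, sigma)  # training-vs-test kernels
    y_pred = L.T @ alpha                         # the dashed lines
    err = np.max(np.abs(y_pred - np.cos(X_test).ravel()))
    print(f"sigma={sigma}: max in-between error {err:.3f}")
```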

From the above we can see that all the information about a KRR model is contained in the matrix \(\mathbf K\) of kernel evaluations between the training data. Similarly, all the information required to predict new inputs \(\tilde{x}\) is contained in the kernel matrix of the training set versus the prediction data.

Kernel Ridge Regression

The regression coefficients \(\boldsymbol\alpha\) are obtained by solving the linear system of equations \((\mathbf K + \lambda \mathbf I)\boldsymbol\alpha=\mathbf y\), where \(\mathbf K + \lambda \mathbf I\) is symmetric and strictly positive definite. To solve this system we can use the Cholesky decomposition \(\mathbf K + \lambda \mathbf I = \mathbf{U}^{T}\mathbf{U}\), where \(\mathbf U\) is upper triangular. One then breaks the equation \(\mathbf{U}^{T}\mathbf{U}\boldsymbol\alpha=\mathbf y\) into two: first \(\mathbf{U}^{T}\boldsymbol\beta=\mathbf y\), then \(\mathbf{U}\boldsymbol\alpha=\boldsymbol\beta\). Since \(\mathbf{U}^{T}\) is lower triangular and \(\mathbf{U}\) is upper triangular, this requires only two straightforward passes over the data, called forward and backward substitution, respectively. For \(\mathbf{U}^{T}\boldsymbol\beta=\mathbf y\), forward substitution proceeds as follows:
\[(U^{T})_{1,1}\, \beta_1=y_1 \Leftrightarrow \beta_1=y_1/u_{1,1}\]
\[(U^{T})_{2,1}\, \beta_1 + (U^{T})_{2,2}\, \beta_2=y_2 \Leftrightarrow \beta_2=(y_2 - u_{1,2}\beta_1)/u_{2,2}\]
\[\vdots\]
\[\sum_{j=1}^{i} (U^{T})_{i,j}\, \beta_j=y_i \Leftrightarrow \beta_i=\left(y_i-\sum_{j=1}^{i-1}u_{j,i}\beta_j\right)/u_{i,i}\]
using \((U^{T})_{i,j}=u_{j,i}\); backward substitution for \(\mathbf{U}\boldsymbol\alpha=\boldsymbol\beta\) works analogously, from the last row upward.
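A short sketch of this two-pass solve, using SciPy's triangular solvers in place of hand-written substitution loops; K, y, and lam are assumed to be given.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def krr_fit_cholesky(K, y, lam):
    # Factor K + lambda*I = U^T U with U upper triangular.
    U = cholesky(K + lam * np.eye(len(K)), lower=False)
    # Forward substitution: solve U^T beta = y (U^T is lower triangular).
    beta = solve_triangular(U, y, trans='T', lower=False)
    # Backward substitution: solve U alpha = beta.
    return solve_triangular(U, beta, lower=False)
```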
Once the model is trained, predictions can be made: the prediction for a new input \(\tilde{x}\) is the inner product between the vector of coefficients and the vector of corresponding kernel evaluations.

For a test dataset \(\tilde{\mathbf X} \in \mathbb{R}^{n \times d}\) with rows \(\tilde{x}_1,\ldots,\tilde{x}_n\), let \(\mathbf L \in \mathbb{R}^{n \times n}\) be the kernel matrix of training versus prediction inputs, \(L_{i,j}=k(x_i, \tilde{x}_j)\).
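In matrix form, the predictions for all test inputs at once are then
\[\tilde{\mathbf y} = \mathbf{L}^{T}\boldsymbol\alpha, \qquad \tilde{y}_j = \sum_{i=1}^{n} \alpha_i\, k(x_i, \tilde{x}_j)\]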
This method has the same order of complexity as Ordinary Least Squares. In the next note I will cover the details of implementing KRR.
