Ridge Regression and Kernel Ridge Regression

References:
1. scikit-learn documentation, linear_model: Ridge regression
2. M. Rupp, "Machine Learning for Quantum Mechanics in a Nutshell"
3. Ridge path plot example code by Fabian Pedregosa (scikit-learn)

Ridge regression

Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\[\underset{w}{\min} \left\| Xw-y\right\|_2^{2} + \lambda\left\|{w}\right\|_2^2\]
Here, \(\lambda \ge 0\) is a complexity parameter that controls the amount of shrinkage: the larger the value of \(\lambda\), the greater the amount of shrinkage, and thus the coefficients become more robust to collinearity. The figure below shows the relationship between \(\lambda\) and the oscillations of the weights.
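As a quick illustration of this shrinkage effect, the sketch below (loosely following the scikit-learn ridge-path example cited above) fits Ridge models over a range of \(\lambda\) values and plots how the coefficients contract; the Hilbert-matrix design and the exact value range are illustrative assumptions, not part of the text above.

```python
# Sketch: ridge coefficient paths as a function of the regularization strength.
# Assumes scikit-learn and matplotlib; the 10x10 Hilbert-matrix design and the
# alpha range are illustrative choices, not from the text.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge

X = 1.0 / (np.arange(1, 11) + np.arange(0, 10)[:, np.newaxis])  # ill-conditioned design
y = np.ones(10)

alphas = np.logspace(-10, -2, 200)   # lambda values (called alpha in scikit-learn)
coefs = []
for a in alphas:
    model = Ridge(alpha=a, fit_intercept=False)
    model.fit(X, y)
    coefs.append(model.coef_)

plt.plot(alphas, coefs)
plt.xscale("log")
plt.xlabel("lambda (alpha)")
plt.ylabel("ridge coefficients")
plt.title("Coefficient shrinkage as regularization grows")
plt.show()
```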

Kernel Ridge Regression Theory

Let \(\tilde{x}\) denote a test input and \(x_i\) the training inputs. The kernel model predicts:
\[f(\tilde{x}) = \sum_{i=1}^n \alpha_i k(\tilde{x}, x_i)\]
Although the dimensionality of the Hilbert space can be high, the solution lives in the finite span of the projected training data, which enables a finite representation. The corresponding convex optimization problem is:
\[\underset{\alpha \in \mathbb{R}^{n}}{\mathrm{argmin}} \sum_{i=1}^n\left(f(x_i) - y_i\right)^2 + \lambda\left\| f \right\|_{H}^{2}\]
\[\Leftrightarrow \underset{\alpha \in \mathbb{R}^{n}}{\mathrm{argmin}} \left\langle K\alpha-y, K\alpha-y \right\rangle + \lambda \alpha^{T}K\alpha\]
where \(\left\| f \right\|_{H}^{2}\) is the squared norm of \(f\) in the Hilbert space, which measures the complexity of the linear ridge regression model in feature space, and \(K \in \mathbb{R}^{n\times n}\), \(K_{i, j}=k(x_i, x_j)\), is the kernel matrix between training samples. As before, setting the gradient to zero yields an analytic solution for the regression coefficients:
\[\nabla_{\alpha}\left(\alpha^{T}K^{2}\alpha - 2\alpha^{T}Ky + y^{T}y + \lambda\alpha^{T}K\alpha\right) = 0 \Leftrightarrow K^{2}\alpha + \lambda K\alpha = Ky\]
\[\Leftrightarrow \alpha=(K+\lambda I)^{-1}y \]
where \(\lambda\) is a hyperparameter determining the strength of regularization. A smaller norm \(\left\| f \right\|_{H}\) corresponds to a smoother and simpler model.
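A minimal NumPy sketch of this closed-form solution follows, assuming a Gaussian kernel; the toy data and the value of \(\lambda\) are illustrative, not part of the text above.

```python
# Sketch: closed-form KRR coefficients alpha = (K + lambda*I)^{-1} y,
# assuming a Gaussian (RBF) kernel; data and hyperparameters are illustrative.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise squared distances between rows of A and rows of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))                # training inputs
y = np.cos(X[:, 0]) + 0.05 * rng.normal(size=20)    # noisy targets

lam = 1e-6
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # regression coefficients
```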

The figure below gives an example of a KRR model with a Gaussian kernel and demonstrates the role of the length-scale hyperparameter \(\sigma\). Although \(\sigma\) does not appear directly in the regularization term, it controls the smoothness of the predictor and therefore acts as an effective regularizer.

The figure above shows kernel ridge regression with a Gaussian kernel at different length scales. The target function is \(\cos(x)\), and the KRR models are shown as dashed lines. With the regularization constant \(\lambda\) set to \(10^{-14}\), a very small \(\sigma\) fits the training set well but the error between training points is large, while a too-large \(\sigma\) results in a nearly linear model with both high training error and high prediction error.
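The following sketch reproduces the spirit of that experiment with scikit-learn's KernelRidge; the specific training points, the noise-free targets, and the three example length scales are illustrative assumptions.

```python
# Sketch: KRR with a Gaussian (RBF) kernel learning cos(x) at several length scales.
# lambda = 1e-14 follows the description above; the grid, sample count, and the
# sigma values shown are illustrative choices.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.kernel_ridge import KernelRidge

X_train = np.linspace(-4, 4, 10).reshape(-1, 1)
y_train = np.cos(X_train).ravel()
X_plot = np.linspace(-4, 4, 400).reshape(-1, 1)

lam = 1e-14
for sigma in [0.1, 1.0, 10.0]:
    # scikit-learn parameterizes the RBF kernel as exp(-gamma * ||x - x'||^2),
    # so gamma = 1 / (2 * sigma^2).
    model = KernelRidge(alpha=lam, kernel="rbf", gamma=1.0 / (2 * sigma**2))
    model.fit(X_train, y_train)
    plt.plot(X_plot, model.predict(X_plot), "--", label=f"sigma = {sigma}")

plt.plot(X_plot, np.cos(X_plot), label="cos(x)")
plt.scatter(X_train, y_train, c="k", zorder=3)
plt.legend()
plt.show()
```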

From the description above we can see that all information about the KRR model is contained in the matrix \(\mathbf K\) of kernel evaluations between training data. Similarly, all information required to predict new inputs \(\tilde{x}\) is contained in the kernel matrix of the training set versus the prediction data.

Kernel Ridge Regression

The regression coefficients \(\alpha\) are obtained by solving the linear system of equations \((\mathbf K + \lambda \mathbf I)\alpha=\mathbf y\), where \(\mathbf K + \lambda \mathbf I\) is symmetric and strictly positive definite. To solve this system we can use the Cholesky decomposition \(\mathbf K + \lambda \mathbf I = \mathbf{U}^{T}\mathbf{U}\), where \(\mathbf U\) is upper triangular. We then break the equation \(\mathbf{U}^{T}\mathbf{U}\alpha=\mathbf y\) into two equations: the first is \(\mathbf{U}^{T}\beta=\mathbf y\), and the other is \(\mathbf{U}\alpha=\beta\). Since \(\mathbf{U}^{T}\) is lower triangular and \(\mathbf{U}\) is upper triangular, this requires only two straightforward passes over the data, called forward and backward substitution, respectively. For \(\mathbf{U}^{T}\beta=\mathbf y\), forward substitution proceeds as below:
\[(\mathbf{U}^{T})_{1,1}\, \beta_1=y_1 \Leftrightarrow \beta_1=y_1/u_{1,1}\]
\[(\mathbf{U}^{T})_{2,1}\, \beta_1 + (\mathbf{U}^{T})_{2,2}\, \beta_2=y_2 \Leftrightarrow \beta_2=(y_2 - u_{1,2}\beta_1)/u_{2,2}\]
\[\ldots\]
\[\sum_{j=1}^{i} (\mathbf{U}^{T})_{i,j}\,\beta_j=y_i \Leftrightarrow \beta_i=\left(y_i-\sum_{j=1}^{i-1}u_{j,i}\beta_j\right)/u_{i,i}\]
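A minimal sketch of this training step using SciPy's Cholesky factorization and triangular solvers follows; the toy kernel matrix and the value of \(\lambda\) are illustrative assumptions.

```python
# Sketch: solving (K + lambda*I) alpha = y via the Cholesky factorization U^T U,
# then forward substitution (U^T beta = y) and backward substitution (U alpha = beta).
# The random data, Gaussian kernel, and lambda are illustrative, not from the text.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1) / 2.0)  # Gaussian kernel
y = rng.normal(size=30)
lam = 1e-3

U = cholesky(K + lam * np.eye(30), lower=False)   # K + lambda*I = U^T U, U upper triangular
beta = solve_triangular(U.T, y, lower=True)        # forward substitution
alpha = solve_triangular(U, beta, lower=False)     # backward substitution

# Sanity check against a direct solve.
assert np.allclose(alpha, np.linalg.solve(K + lam * np.eye(30), y))
```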
Once the model is trained, predictions can be made: the prediction for a new input \(\mathbf{\tilde{x}}\) is the inner product between the vector of coefficients and the vector of corresponding kernel evaluations.

For a test dataset \(\mathbf{\tilde{X}} \in \mathbb{R}^{m \times d}\) with rows \(\tilde{x}_1,\ldots,\tilde{x}_m\), let \(\mathbf{L} \in \mathbb{R}^{n \times m}\) be the kernel matrix of training versus prediction inputs, \(L_{i,j}=k(x_i, \tilde{x}_j)\). The predictions are then \(\tilde{\mathbf y} = \mathbf{L}^{T}\alpha\).
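As a short sketch of this prediction step (using scikit-learn's rbf_kernel as the kernel function; the data, \(\sigma\), and \(\lambda\) are illustrative assumptions):

```python
# Sketch: predicting at new inputs with y_pred = L^T alpha, where L is the
# training-vs-test kernel matrix. Data, sigma, and lambda are illustrative choices.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(25, 1))
y_train = np.cos(X_train).ravel()
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)

sigma, lam = 1.0, 1e-6
gamma = 1.0 / (2 * sigma**2)                   # rbf_kernel uses exp(-gamma * ||x - x'||^2)

K = rbf_kernel(X_train, X_train, gamma=gamma)  # n x n training kernel matrix
alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

L = rbf_kernel(X_train, X_test, gamma=gamma)   # n x m training-vs-test kernel matrix
y_pred = L.T @ alpha                           # one inner product per test point
```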
This method has the same order of complexity as Ordinary Least Squares. In the next note I will cover the implementation details of KRR.
