Five Ways to Derive the Normal Equation
Introduction:
The Normal Equation is the most fundamental least-squares method. Andrew Ng's course presents the matrix derivation; this article collects several alternative derivations to help Machine Learning students understand the result from multiple angles.
Notations:
RSS (Residual Sum of Squares): the sum of squared residual errors
$\beta$: the column vector of parameters
$X$: the $N \times p$ matrix whose rows are the input sample vectors
$y$: the column vector of labels, i.e. the target vector
Method 1. Vector Projection onto the Column Space
This is the most intuitive interpretation: the optimization of linear regression is equivalent to finding the projection of the vector $y$ onto the column space of $X$. Since the projection is written as $X\beta$, the optimal configuration of $\beta$ is reached when the error vector $y - X\beta$ is orthogonal to the column space of $X$, that is,
$$X^T(y - X\beta) = 0. \tag{1}$$
Solving this gives:
$$\beta = (X^TX)^{-1}X^Ty.$$
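The orthogonality condition can be verified numerically. Below is a minimal NumPy sketch; the data and variable names are illustrative, not from the original post.

```python
import numpy as np

# Illustrative random data: N samples, p features.
rng = np.random.default_rng(0)
N, p = 50, 3
X = rng.normal(size=(N, p))
y = rng.normal(size=N)

# Solve the normal equation X^T X beta = X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ y)

# The residual y - X beta must be orthogonal to every column of X,
# i.e. X^T (y - X beta) = 0 up to floating-point error.
residual = y - X @ beta
print(np.abs(X.T @ residual).max())
```

The printed value is on the order of machine precision, confirming that the fitted residual lies in the orthogonal complement of the column space of $X$.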
Method 2. Direct Matrix Differentiation
Expanding $S(\beta)$ into a simpler form is the most straightforward approach:
$$S(\beta) = (y - X\beta)^T(y - X\beta) = y^Ty - \beta^TX^Ty - y^TX\beta + \beta^TX^TX\beta = y^Ty - 2\beta^TX^Ty + \beta^TX^TX\beta.$$
Differentiating $S(\beta)$ w.r.t. $\beta$:
$$-2y^TX + \beta^T\left(X^TX + (X^TX)^T\right) = -2y^TX + 2\beta^TX^TX = 0,$$
and solving for $\beta$ gives:
$$\beta = (X^TX)^{-1}X^Ty.$$
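As a sanity check on this derivation, the closed-form gradient should vanish at the solution, and no perturbation of the solution should decrease the RSS. A small NumPy sketch with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 40, 3
X = rng.normal(size=(N, p))
y = rng.normal(size=N)

def S(beta):
    """Residual sum of squares S(beta) = (y - X beta)^T (y - X beta)."""
    r = y - X @ beta
    return r @ r

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The row-vector gradient -2 y^T X + 2 beta^T X^T X vanishes at beta_hat ...
grad = -2 * y @ X + 2 * beta_hat @ (X.T @ X)

# ... and random perturbations of beta_hat never decrease the RSS.
no_improvement = all(S(beta_hat + d) >= S(beta_hat)
                     for d in rng.normal(size=(100, p)))
print(np.abs(grad).max(), no_improvement)
```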
Method 3. Matrix Differentiation with Chain-rule
This is the simplest method for a lazy person, as it takes very little effort to reach the solution. The key is to apply the chain-rule:
$$\frac{\partial S(\beta)}{\partial \beta} = \frac{\partial (y - X\beta)^T(y - X\beta)}{\partial (y - X\beta)} \cdot \frac{\partial (y - X\beta)}{\partial \beta} = -2(y - X\beta)^TX = 0,$$
and solving for $\beta$ gives:
$$\beta = (X^TX)^{-1}X^Ty.$$
This method requires an understanding of the matrix derivative of the quadratic form: $$\frac{\partial x^TWx}{\partial x} = x^T(W + W^T).$$
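The quadratic-form identity can be checked numerically against a finite-difference approximation. A minimal sketch (the matrix and vector here are random, for illustration only):

```python
import numpy as np

# Finite-difference check of d(x^T W x)/dx = x^T (W + W^T)
# for a general (non-symmetric) W.
rng = np.random.default_rng(2)
n = 4
W = rng.normal(size=(n, n))
x = rng.normal(size=n)

f = lambda v: v @ W @ v
grad = x @ (W + W.T)  # closed-form row-vector gradient

# Central differences along each coordinate direction.
eps = 1e-6
I = np.eye(n)
fd = np.array([(f(x + eps * I[i]) - f(x - eps * I[i])) / (2 * eps)
               for i in range(n)])
print(np.abs(grad - fd).max())
```

Because $f$ is quadratic, the central difference is exact up to rounding, so the two gradients agree to high precision.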
Method 4. Without Matrix Differentiation
We can rewrite $S(\beta)$ as follows:
$$S(\beta) = \langle \beta, \beta \rangle - 2\left\langle \beta, (X^TX)^{-1}X^Ty \right\rangle + \left\langle (X^TX)^{-1}X^Ty, (X^TX)^{-1}X^Ty \right\rangle + C,$$
where $\langle \cdot, \cdot \rangle$ is the inner product defined by
$$\langle u, v \rangle = u^T(X^TX)v.$$
The idea is to rewrite $S(\beta)$ in the completed-square form $S(\beta) = \langle \beta - \hat\beta, \beta - \hat\beta \rangle + C$ with $\hat\beta = (X^TX)^{-1}X^Ty$, analogous to $(x - a)^2 + b$: the first term is nonnegative and vanishes exactly at $\beta = \hat\beta$, so the minimizer can be read off directly without any differentiation.
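The completed-square identity can also be verified numerically: for any $\beta$, the excess $S(\beta) - S(\hat\beta)$ should equal $(\beta - \hat\beta)^T X^TX (\beta - \hat\beta)$. A sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 30, 4
X = rng.normal(size=(N, p))
y = rng.normal(size=N)

A = X.T @ X
beta_hat = np.linalg.solve(A, X.T @ y)

def S(beta):
    """Residual sum of squares."""
    r = y - X @ beta
    return r @ r

# Completing the square:
#   S(beta) - S(beta_hat) = (beta - beta_hat)^T A (beta - beta_hat),
# i.e. the squared distance to beta_hat in the X^T X inner product.
beta = rng.normal(size=p)
d = beta - beta_hat
lhs = S(beta) - S(beta_hat)
rhs = d @ A @ d
print(lhs, rhs)
```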
Method 5. Statistical Learning Theory
An alternative method to derive the normal equation arises from the statistical learning theory. The aim of this task is to minimize the expected prediction error given by:
$$\mathrm{EPE}(\beta) = \int \left(y - x^T\beta\right)^2 \Pr(dx, dy),$$
where $x$ stands for a column vector of random variables, $y$ denotes the target random variable, and $\beta$ denotes a column vector of parameters (note that these definitions differ from the notation used earlier).
Differentiating EPE(β) w.r.t. β gives:
$$\frac{\partial \mathrm{EPE}(\beta)}{\partial \beta} = \int 2\left(y - x^T\beta\right)(-1)\,x^T \Pr(dx, dy).$$
Before we proceed, let's check the dimensions to make sure the partial derivative is correct. $\mathrm{EPE}$ is the expected error, a scalar ($1 \times 1$). $\beta$ is an $N \times 1$ column vector. Following the Jacobian convention of vector calculus, the partial derivative should take the form
$$\frac{\partial \mathrm{EPE}}{\partial \beta} = \left(\frac{\partial \mathrm{EPE}}{\partial \beta_1}, \frac{\partial \mathrm{EPE}}{\partial \beta_2}, \ldots, \frac{\partial \mathrm{EPE}}{\partial \beta_N}\right),$$
which is a $1 \times N$ row vector. Looking at the right-hand side of the equation above, $2(y - x^T\beta)(-1)$ is a scalar while $x^T$ is a row vector, so the product has the same $1 \times N$ dimension; the partial derivative is therefore dimensionally correct. This derivative also mirrors the relationship between the expected error and the way to adjust the parameters so as to reduce it. To see why, regard $2(y - x^T\beta)(-1)$ as the error incurred by the current parameter configuration $\beta$, and $x^T$ as the values of the input attributes; the derivative then equals the error scaled by each input attribute. Put another way, the error contribution of each parameter $\beta_i$ depends monotonically on both the error term $2(y - x^T\beta)(-1)$ and the component of $x^T$ that multiplies that $\beta_i$.
Now, let's go back to the derivation. Because $2(y - x^T\beta)(-1)$ is a scalar ($1 \times 1$), it equals its own transpose, and we can rewrite:
$$\frac{\partial \mathrm{EPE}(\beta)}{\partial \beta} = \int 2\left(y - x^T\beta\right)^T(-1)\,x^T \Pr(dx, dy).$$
Solving $\frac{\partial \mathrm{EPE}(\beta)}{\partial \beta} = 0$ gives:
$$\begin{aligned}
E\left[y^Tx^T - \beta^Tx x^T\right] &= 0 \\
E\left[\beta^Tx x^T\right] &= E\left[y^Tx^T\right] \\
E\left[x x^T\beta\right] &= E\left[xy\right] \\
\beta &= E\left[x x^T\right]^{-1}E\left[xy\right].
\end{aligned}$$