Multivariate Linear Regression
Multiple Features
Linear regression with multiple variables is also known as "multivariate linear regression".
We now introduce notation for equations where we can have any number of input variables:

$x_j^{(i)}$ = value of feature $j$ in the $i^{th}$ training example
$x^{(i)}$ = the input (features) of the $i^{th}$ training example
$m$ = the number of training examples
$n$ = the number of features

The multivariable form of the hypothesis function accommodating these multiple features is as follows:

$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \cdots + \theta_n x_n$
In order to develop intuition about this function, we can think about θ0 as the basic price of a house, θ1 as the price per square meter, θ2 as the price per floor, etc. x1 will be the number of square meters in the house, x2 the number of floors, etc.
Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:

$h_\theta(x) = \begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_n \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = \theta^T x$
This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.
Remark: Note that for convenience reasons in this course we assume $x_0^{(i)} = 1$ for $(i \in 1, \dots, m)$. This allows us to do matrix operations with theta and x, making the two vectors $\theta$ and $x^{(i)}$ match each other element-wise (that is, have the same number of elements: $n+1$).
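As a quick illustration, here is a minimal NumPy sketch of the vectorized hypothesis for one training example (the numbers and variable names are illustrative, not from the course):

```python
import numpy as np

# One training example with n = 2 features, e.g. [square meters, floors],
# with x_0 = 1 prepended as per the remark above.
x = np.array([1.0, 85.0, 2.0])
theta = np.array([50.0, 0.5, 10.0])  # [basic price, price per sq meter, price per floor]

h = theta @ x  # h_theta(x) = theta^T x
print(h)       # 50 + 0.5*85 + 10*2 = 112.5
```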
Gradient Descent For Multiple Variables
The gradient descent equation itself is generally the same form; we just have to repeat it for our $n$ features:

repeat until convergence: {
$\theta_j := \theta_j - \alpha \dfrac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$ for $j := 0, 1, \dots, n$
}
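As a hedged sketch, this is one way to vectorize that update over an entire design matrix in NumPy (my own translation of the per-feature rule, not code from the course):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for multivariate linear regression.

    X: (m, n+1) design matrix whose first column is all ones (x_0 = 1).
    y: (m,) vector of targets.
    """
    m = len(y)
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        error = X @ theta - y                        # h_theta(x^(i)) - y^(i) for every example
        theta = theta - (alpha / m) * (X.T @ error)  # simultaneous update of all theta_j
    return theta
```

Computing all of theta in one expression preserves the simultaneous-update requirement from the single-variable case.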
Gradient Descent in Practice I - Feature Scaling
Note: [6:20 - The average size of a house is 1000 but 100 is accidentally written instead]
We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:

$-1 \le x_{(i)} \le 1$ or $-0.5 \le x_{(i)} \le 0.5$
These aren't exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.
Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable, resulting in a new average value of just zero. To implement both of these techniques, adjust your input values as shown in this formula:

$x_i := \dfrac{x_i - \mu_i}{s_i}$

where $\mu_i$ is the average of all the values for feature $(i)$ and $s_i$ is the range of values (max − min), or $s_i$ is the standard deviation.
Note that dividing by the range, or dividing by the standard deviation, gives different results. The quizzes in this course use the range; the programming exercises use the standard deviation.
For example, if $x_i$ represents housing prices with a range of 100 to 2000 and a mean value of 1000, then $x_i := \dfrac{price - 1000}{1900}$.
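A minimal sketch of both techniques combined, using illustrative prices chosen so that the mean (1000) and range (1900) match the example above:

```python
import numpy as np

prices = np.array([100.0, 700.0, 1000.0, 1200.0, 2000.0])  # illustrative data

mu = prices.mean()               # average value: 1000
s = prices.max() - prices.min()  # range (max - min): 1900; use prices.std() for the std-dev variant
scaled = (prices - mu) / s       # x_i := (x_i - mu_i) / s_i
print(scaled)                    # values now roughly within [-0.5, 0.5]
```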
Gradient Descent in Practice II - Learning Rate
Note: [5:20 - the x-axis label in the right graph should be θ rather than No. of iterations]
Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.
Automatic convergence test. Declare convergence if J(θ) decreases by less than $E$ in one iteration, where $E$ is some small value such as $10^{-3}$. However, in practice it's difficult to choose this threshold value.
It has been proven that if learning rate α is sufficiently small, then J(θ) will decrease on every iteration.
To summarize:
If α is too small: slow convergence.
If α is too large: may not decrease on every iteration and thus may not converge.
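A sketch combining the debugging plot and the automatic convergence test (the cost function J and the threshold E follow the notes; everything else is illustrative):

```python
import numpy as np

def cost(X, y, theta):
    """Squared-error cost J(theta) = 1/(2m) * sum((h_theta(x) - y)^2)."""
    m = len(y)
    return ((X @ theta - y) ** 2).sum() / (2 * m)

def gradient_descent_debug(X, y, alpha, num_iters=500, E=1e-3):
    m = len(y)
    theta = np.zeros(X.shape[1])
    J_history = [cost(X, y, theta)]
    for _ in range(num_iters):
        theta -= (alpha / m) * (X.T @ (X @ theta - y))
        J_history.append(cost(X, y, theta))
        delta = J_history[-2] - J_history[-1]
        if delta < 0:
            print("J(theta) increased; try a smaller alpha")
        elif delta < E:
            break  # automatic convergence test: J decreased by less than E
    return theta, J_history  # plot J_history against iteration number to debug
```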
Features and Polynomial Regression
We can improve our features and the form of our hypothesis function in a couple different ways.
We can combine multiple features into one. For example, we can combine x1 and x2 into a new feature x3 by taking x1⋅x2.
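For example (a tiny sketch with made-up values):

```python
import numpy as np

x1 = np.array([50.0, 30.0, 20.0])  # illustrative values for feature x1
x2 = np.array([20.0, 40.0, 35.0])  # illustrative values for feature x2
x3 = x1 * x2                       # new combined feature x3 = x1 * x2
```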
Polynomial Regression
Our hypothesis function need not be linear (a straight line) if that does not fit the data well.
We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).
One important thing to keep in mind: if you choose your features this way, then feature scaling becomes very important. For example, if $x_1$ has range 1 to 1000, then the range of $x_1^2$ becomes 1 to 1000000, and that of $x_1^3$ becomes 1 to 1000000000.
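A short sketch of building polynomial features by hand, which also shows why scaling matters (the ranges are the ones from the example above):

```python
import numpy as np

x1 = np.linspace(1.0, 1000.0, 50)  # base feature with range 1 to 1000

# Polynomial features blow up the ranges: x1^2 spans 1..10^6, x1^3 spans 1..10^9.
X_poly = np.column_stack([x1, x1 ** 2, x1 ** 3])

# Scale each column (mean normalization + range scaling) before gradient descent.
mu = X_poly.mean(axis=0)
s = X_poly.max(axis=0) - X_poly.min(axis=0)
X_scaled = (X_poly - mu) / s
```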