Machine Learning Notes 1: Linear Regression with One Variable
Linear Regression with One Variable
Model Representation
Recall that in *regression problems*, we are taking input variables and trying to map the output onto a *continuous* expected result function.
Linear regression with one variable is also known as "univariate linear regression."
Univariate linear regression is used when you want to predict a single output value from a single input value. We're doing supervised learning here, so that means we already have an idea what the input/output cause and effect should be.
The Hypothesis Function
Our hypothesis function has the general form:
$$h_\theta(x) = \theta_0 + \theta_1 x$$
We give $h_\theta$ values for $\theta_0$ and $\theta_1$ to get our output $y$. In other words, we are trying to create a function $h_\theta$ that is able to reliably map our input data (the $x$'s) to our output data (the $y$'s).
Example:
| x (input) | y (output) |
|-----------|------------|
| 0         | 4          |
| 1         | 7          |
| 2         | 7          |
| 3         | 8          |
Now we can make a random guess about our $h_\theta$ function: $\theta_0 = 2$ and $\theta_1 = 2$. The hypothesis function becomes $h_\theta(x) = 2 + 2x$.
So for an input of 1, our hypothesis predicts 4, while the actual output is 7. The prediction is off by 3.
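To make the example concrete, here is a minimal Python sketch (my own illustration, not part of the course) that evaluates this guess against the table above:

```python
# A rough sketch (not from the original notes): evaluating the guessed
# hypothesis h_theta(x) = 2 + 2x against the example training set above.
theta0, theta1 = 2, 2

def h(x):
    """Hypothesis h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

training_set = [(0, 4), (1, 7), (2, 7), (3, 8)]
for x, y in training_set:
    print(f"x={x}: predicted {h(x)}, actual {y}, error {h(x) - y}")
# x=1 yields a prediction of 4 against an actual output of 7, off by 3.
```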
Cost Function
We can measure the accuracy of our hypothesis function by using a cost function. This takes an average (actually a fancier version of an average) of all the results of the hypothesis with inputs from x's compared to the actual output y's.
$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
To break it apart, it is $\frac{1}{2}\bar{x}$, where $\bar{x}$ is the mean of the squares of $h_\theta(x^{(i)}) - y^{(i)}$, or the difference between the predicted value and the actual value.
This function is otherwise called the "squared error function", or mean squared error. The mean is halved $\left(\frac{1}{2m}\right)$ as a convenience for the computation of gradient descent, as the derivative term of the square function will cancel out the $\frac{1}{2}$ term.
Now we are able to concretely measure the accuracy of our predictor function against the correct results we have so that we can predict new results we don't have.
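As an illustration (again my own sketch, not from the course), the cost of the guess $\theta_0 = 2$, $\theta_1 = 2$ on the example data follows directly from the formula:

```python
# Sketch: J(theta0, theta1) = (1/2m) * sum((h(x_i) - y_i)^2)
# evaluated on the example training set.
training_set = [(0, 4), (1, 7), (2, 7), (3, 8)]

def cost(theta0, theta1):
    m = len(training_set)
    squared_errors = ((theta0 + theta1 * x - y) ** 2 for x, y in training_set)
    return sum(squared_errors) / (2 * m)

print(cost(2, 2))  # (4 + 9 + 1 + 0) / (2 * 4) = 1.75
```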
Gradient Descent
So we have our hypothesis function and we have a way of measuring how accurate it is. Now what we need is a way to automatically improve our hypothesis function. That's where gradient descent comes in.
Imagine that we graph our hypothesis function based on its parameters $\theta_0$ and $\theta_1$ (actually we are graphing the cost function over the combinations of parameters). This can be kind of confusing; we are moving up to a higher level of abstraction. We are not graphing $x$ and $y$ itself, but the guesses of our hypothesis function.
We put $\theta_0$ on the $x$ axis and $\theta_1$ on the $z$ axis, with the cost function on the vertical $y$ axis. The points on our graph will be the result of the cost function using our hypothesis with those specific theta parameters.
We will know that we have succeeded when our cost function is at the very bottom of the pits in our graph, i.e. when its value is the minimum.
The way we do this is by taking the derivative of our cost function. The slope of the tangent line at a point is the derivative there, and it gives us a direction to move toward. We step down along that slope in increments whose size is controlled by the parameter $\alpha$, called the learning rate.
The gradient descent equation is:
repeat until convergence:
$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$
for $j = 0$ and $j = 1$.
Intuitively, this could be thought of as:
repeat until convergence:
$$\theta_j := \theta_j - \alpha \,[\text{slope of the tangent, i.e. the derivative}]$$
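As a sketch of the mechanics (my own illustration; it assumes some function that computes the partial derivatives, which for linear regression are derived in the next section), a single update could look like:

```python
# Sketch of one gradient descent step: every theta_j moves against its
# partial derivative, scaled by the learning rate alpha. Computing all
# new values before assigning keeps the update simultaneous.
def gradient_descent_step(thetas, gradients, alpha):
    """thetas[j] is theta_j; gradients[j] is dJ/dtheta_j at the current thetas."""
    return [theta - alpha * grad for theta, grad in zip(thetas, gradients)]
```

The simultaneity matters: all parameters must be updated using derivatives evaluated at the same old point.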
Gradient Descent for Linear Regression
When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. We can substitute our actual cost function and our actual hypothesis function and modify the equation to the following (the derivation of the formulas is beyond the scope of this course, but a really good one can be found here):
repeat until convergence:
$$\begin{aligned} \theta_0 &:= \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) \\ \theta_1 &:= \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)} \right) \end{aligned}$$
where $m$ is the size of the training set, $\theta_0$ is a constant that will be updated simultaneously with $\theta_1$, and $x^{(i)}, y^{(i)}$ are values of the given training set (data).
Note that we have separated out the two cases for $\theta_j$, and that for $\theta_1$ we multiply by $x^{(i)}$ at the end due to the derivative.
The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate.
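Putting the pieces together, here is an end-to-end sketch (my own code; the learning rate and iteration count are arbitrary choices for illustration) of batch gradient descent on the example training set:

```python
# Sketch: batch gradient descent for univariate linear regression,
# applying the two update rules above to the example training set.
training_set = [(0, 4), (1, 7), (2, 7), (3, 8)]
m = len(training_set)

theta0, theta1 = 2.0, 2.0  # start from the earlier random guess
alpha = 0.1                # learning rate (arbitrary illustrative choice)

for _ in range(1000):
    # Prediction errors h_theta(x_i) - y_i under the current parameters.
    errors = [(theta0 + theta1 * x - y, x) for x, y in training_set]
    grad0 = sum(e for e, _ in errors) / m
    grad1 = sum(e * x for e, x in errors) / m
    # Simultaneous update of both parameters.
    theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1

print(theta0, theta1)  # approaches the least-squares fit, about 4.7 and 1.2
```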
What's Next
Instead of using linear regression on just one input variable, we'll generalize and expand our concepts so that we can predict data with multiple input variables. Also, we'll solve for $\theta_0$ and $\theta_1$ exactly without needing an iterative function like gradient descent.