(Original) Stanford Machine Learning (by Andrew NG) --- (week 1) Linear Regression
Andrew NG's Machine Learning course is available at: https://www.coursera.org/course/ml
The Linear Regression part introduces several new terms that will come up frequently in later lessons:
| Cost Function | Linear Regression | Gradient Descent | Normal Equation | Feature Scaling | Mean Normalization |
Model Representation
- m: number of training examples
- x(i): input (features) of ith training example
- xj(i): value of feature j in ith training example
- y(i): “output” variable / “target” variable of ith training example
- n: number of features
- θ: parameters
- Hypothesis: hθ(x) = θ0 + θ1x1 + θ2x2 + … + θnxn
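The hypothesis is just a weighted sum of the features plus an intercept. A minimal pure-Python sketch (the θ and x values below are made up purely for illustration):

```python
# Hypothetical numbers for illustration: a parameter vector theta and one
# feature vector x (without the leading bias term).
theta = [1.0, 2.0, 3.0]   # theta0 (intercept), theta1, theta2
x = [5.0, -1.0]           # features x1, x2

def hypothesis(theta, x):
    """h_theta(x) = theta0 + theta1*x1 + ... + thetan*xn."""
    return theta[0] + sum(t * xj for t, xj in zip(theta[1:], x))

print(hypothesis(theta, x))  # 1 + 2*5 + 3*(-1) = 8.0
```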
Cost Function
IDEA: Choose θ so that hθ(x) is close to y for our training examples (x, y).
A.Linear Regression with One Variable Cost Function
Cost Function:  J(θ0, θ1) = (1/2m) Σi=1..m (hθ(x(i)) − y(i))²
Goal:  minimize J(θ0, θ1) over θ0, θ1
Contour Plot: (figure omitted: J(θ0, θ1) drawn as contours over the θ0–θ1 plane, with the minimum at the center.)
B.Linear Regression with Multiple Variable Cost Function
Cost Function:  J(θ) = (1/2m) Σi=1..m (hθ(x(i)) − y(i))²
Goal:  minθ J(θ)
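Both the one-variable and multi-variable cost functions have the same squared-error form. A pure-Python sketch of computing J(θ), on made-up toy data:

```python
def compute_cost(X, y, theta):
    """J(theta) = 1/(2m) * sum of (h_theta(x^(i)) - y^(i))^2 over all examples.
    X holds feature rows without the leading 1; theta[0] is the intercept."""
    m = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        h = theta[0] + sum(t * xj for t, xj in zip(theta[1:], xi))
        total += (h - yi) ** 2
    return total / (2 * m)

# Toy data where y = 2x exactly, so theta = [0, 2] gives zero cost.
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
print(compute_cost(X, y, [0.0, 2.0]))  # 0.0
print(compute_cost(X, y, [0.0, 0.0]))  # (4 + 16 + 36) / (2*3) ≈ 9.333
```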
Gradient Descent
Outline: start with some θ (e.g. θ = 0), keep changing θ to reduce J(θ), and repeat until we end up at a minimum.
Gradient Descent Algorithm:
repeat until convergence {
    θj := θj − α · (1/m) Σi=1..m (hθ(x(i)) − y(i)) · xj(i)    (update simultaneously for every j = 0, …, n, with x0(i) = 1)
}
The convergence of the iterations might look as follows:
(Figure omitted: a contour plot with the minimum at the center; the blue arc marks one possible convergence path.)
Learning Rate α:
1) If α is too small, gradient descent can be slow to converge;
2) If α is too large, gradient descent may not decrease on every iteration or may not converge;
3) For sufficiently small α , J(θ) should decrease on every iteration;
Choose Learning Rate α: debug by plotting J(θ) against the iteration number, and try values roughly 3× apart, e.g. 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0;
“Batch” Gradient Descent: Each step of gradient descent uses all the training examples;
“Stochastic” gradient descent: Each step of gradient descent uses only one training example.
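The difference is visible in code: a stochastic step touches a single example rather than the whole training set. A minimal sketch with made-up data (one pass over three examples of y = 2x):

```python
def sgd_step(theta, xi, yi, alpha):
    """One stochastic update: uses a single example (xi, yi), not the full set.
    xi includes the leading bias term 1."""
    error = sum(t * xj for t, xj in zip(theta, xi)) - yi
    return [t - alpha * error * xj for t, xj in zip(theta, xi)]

data = [([1.0, 1.0], 2.0), ([1.0, 2.0], 4.0), ([1.0, 3.0], 6.0)]
theta = [0.0, 0.0]
for xi, yi in data:
    theta = sgd_step(theta, xi, yi, alpha=0.1)
print(theta)  # nudged toward [0, 2], one example at a time
```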
Normal Equation
IDEA: Method to solve for θ analytically.
Set ∂J(θ)/∂θj = 0 for every j, then solve for θ:
θ = (XᵀX)⁻¹ Xᵀ y
Restriction: the Normal Equation does not work when (XᵀX) is non-invertible.
PS: XᵀX is invertible when X has full column rank: the feature columns must be linearly independent, and the number of training examples m must be at least the number of features n. Redundant features or m < n are the usual causes of non-invertibility.
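For the two-parameter case, θ = (XᵀX)⁻¹Xᵀy can be written out with the explicit 2×2 inverse. A pure-Python sketch on made-up data (real code would call a linear-algebra routine instead):

```python
def normal_equation_2(X, y):
    """theta = (X'X)^(-1) X'y for a design matrix X with exactly 2 columns,
    using the explicit 2x2 inverse [[d, -b], [-b, a]] / det."""
    a = sum(r[0] * r[0] for r in X)            # X'X entries (symmetric 2x2)
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    p = sum(r[0] * yi for r, yi in zip(X, y))  # X'y entries
    q = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * d - b * b
    if det == 0:
        raise ValueError("X'X is non-invertible (redundant features or too few examples)")
    return [(d * p - b * q) / det, (a * q - b * p) / det]

# Data that fits y = 1 + 2x exactly; first column is the bias term 1.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
y = [1.0, 3.0, 5.0]
print(normal_equation_2(X, y))  # [1.0, 2.0]
```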
Gradient Descent Algorithm VS. Normal Equation
Gradient Descent:
- Need to choose α;
- Needs many iterations;
- Works well even when n is large; (n > 1000 is appropriate)
Normal Equation:
- No need to choose α;
- Don’t need to iterate;
- Need to compute (XᵀX)⁻¹;
- Slow if n is very large. (n < 1000 is OK)
Feature Scaling
IDEA: Make sure features are on a similar scale.
Benefit: fewer iterations are needed, which speeds up convergence.
Example: if we want every feature approximately in the range −1 ≤ xi ≤ 1, values falling within, say, [−3, 3] or [−1/3, 1/3] are still acceptable.
Mean normalization:  xi := (xi − μi) / si, where μi is the mean of feature i and si is its range (max − min) or standard deviation.
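A minimal sketch of mean normalization, using the range as the scale si (the sample values are made up):

```python
def mean_normalize(values):
    """x := (x - mu) / s, with s taken as the range (max - min)."""
    mu = sum(values) / len(values)
    s = max(values) - min(values)
    return [(v - mu) / s for v in values]

sizes = [1000.0, 2000.0, 3000.0]  # e.g. hypothetical house sizes
print(mean_normalize(sizes))  # [-0.5, 0.0, 0.5]
```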
HOMEWORK
Now that the videos are done, it's time for the homework. Below is the core code for the Linear Regression assignment:
1.computeCost.m/computeCostMulti.m
J = 1/(2*m) * sum((theta'*X' - y').^2);   % vectorized squared-error cost
2.gradientDescent.m/gradientDescentMulti.m
h = X*theta - y;    % errors h_theta(x^(i)) - y^(i) for all examples
v = X'*h;           % unscaled gradient: X' * errors
v = v*alpha/m;      % scale by learning rate alpha and 1/m
theta = theta - v;  % simultaneous update of every theta_j