Week 1:

Machine Learning:

  • A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
  • Supervised Learning:We already know what our correct output should look like.
  1. Regression:Try to map input variables to some continuous function.
  2. Classification:Try to map input variables into discrete categories.
  • Unsupervised Learning:We have little or no idea what our results should look like.
  1. Clustering:Find a way to automatically group data into groups that are somehow similar or related by different variables.
  2. Non-clustering:Find structure in a chaotic environment,like the "Cocktail Party Algorithm".

Model Representation:

  • x(i):Input features
  • y(i):Target variable
  • (x(i),y(i)):Training example
  • (x(i),y(i));i=1,...,m:Training set
  • m:Number of training examples
  • h(x):Hypothesis,θ0+θ1x1
Cost Function:
  • It takes the average of the squared differences between the hypothesis's outputs on the x's and the actual outputs y.
  • Algorithm:(see the formula below.The mean is halved (multiplied by 1/2) as a convenience for the computation of gradient descent, since the derivative of the square term cancels the 1/2.)
  • We use contour plots to show how to minimize the cost function.
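The formula image from the original post is missing; for reference, the squared-error cost function from the lectures (with the halved mean mentioned above) is:

$$ J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)})-y^{(i)}\right)^2 $$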
 
Gradient Descent:
  • Helps us estimate the parameters in the hypothesis function.
  • Algorithm:(repeat until convergence;see the update rule below)
  • j=0,1:Feature index number
  • α:Learning rate or the size of each step.If α is too small,gradient descent can be slow.If α is too large,gradient descent can overshoot the minimum.
  • Partial Derivative of J:Direction of each step
  • At each iteration j, one should simultaneously update all of the parameters.
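The update rule the "Algorithm" bullet refers to (the original image is missing) is, in the lecture's notation:

$$ \text{repeat until convergence: } \theta_j := \theta_j - \alpha\,\frac{\partial}{\partial\theta_j}J(\theta_0,\theta_1) \qquad (j=0,1,\ \text{updated simultaneously}) $$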

Gradient Descent For Linear Regression:

  • Algorithm:(see the sketch below)
  • This method looks at every example in the entire training set on every step, and is called batch gradient descent.
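A minimal Octave sketch of batch gradient descent for linear regression, assuming X is an m x (n+1) design matrix whose first column is all ones and y is an m x 1 vector (the function name and arguments are just illustrative):

```octave
% Batch gradient descent: every step looks at all m training examples.
function theta = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y);
  for iter = 1:num_iters
    h = X * theta;                                % hypothesis for all examples
    theta = theta - (alpha / m) * (X' * (h - y)); % simultaneous update of all theta_j
  end
end
```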
Linear Algebra:
  • I learned linear algebra in college, so I will skip this part in my notes.
 
Week 2:
Multiple Features:
  • n:number of features
  • x(i):input of ith training example
  • x(i)j:value of feature j in ith training example
  • hθ(x):θ0x0+θ1x1+θ2x2+θ3x3+⋯+θnxn=θ^Tx(assume x0 = 1)
Gradient Descent for Multiple Variables:
  • Algorithm:(the same update rule as before, repeated for all n+1 parameters)
  • Feature Scaling:
  1. Feature Scaling:Dividing the input values by the range (max - min) of the input variable.Get every feature into approximately a -1 <= xi <= 1 range.
  2. Mean Normalization:Subtracting the average value of an input variable from its values, so that the new average of that variable is zero.
  3. Where μi is the average of all the values for feature i and si is the range of values (max - min), or si is the standard deviation.(See the sketch after this list.)
  • Learning Rate:Make a plot with the number of iterations on the x-axis and J(θ) on the y-axis.If J(θ) ever increases, then you probably need to decrease α.It has been proven that if the learning rate α is sufficiently small, then J(θ) will decrease on every iteration.To choose α, try 0.001, 0.003, 0.01, ...
  • Features and Polynomial Regression:We can improve our features and the form of our hypothesis function in a couple of different ways.
  1. We can combine multiple features into one.For example, we can get a new feature x3 by taking x1 * x2.
  2. We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).
  3. If you choose your features this way, then feature scaling becomes very important.
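A rough Octave sketch of mean normalization combined with feature scaling, as described in the list above (X is assumed to be m x n without the bias column; dividing by the range instead of the standard deviation would also match the notes):

```octave
% Scale each feature to have zero mean and roughly unit spread.
function [X_norm, mu, sigma] = featureNormalize(X)
  mu = mean(X);                 % 1 x n vector of feature averages (mu_i)
  sigma = std(X);               % 1 x n vector of standard deviations (s_i)
  X_norm = (X - mu) ./ sigma;   % broadcasts across the m rows
end
```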
Normal Equation:
  • Formula:θ = (X^TX)^(-1)X^Ty(see the sketch below)
  • Example:
  • There is no need to do feature scaling with the normal equation.
  • If (X^TX) is non-invertible:
  1. Delete redundant features such as x1 = size in feet^2 and x2 = size in m^2.
  2. Delete features to make sure that m > n or use regularization.
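An Octave one-liner for the formula above might look like this (pinv also copes with a non-invertible X^TX):

```octave
theta = pinv(X' * X) * X' * y;   % X is m x (n+1) with a bias column, y is m x 1
```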
Octave:
 
Week 3:
Classification:
  • The classification problem is just like the regression problem, except that the values we now want to predict take on only a small number of discrete values.
  • x(i):Feature
  • y(i):Label for the training example
Logistic Regression:
  • We change the form of our hypothesis to satisfy 0 <= h(x) <= 1 by plugging θ^Tx into the logistic (sigmoid) function.
  • Formula:hθ(x)=g(θ^Tx), where g(z)=1/(1+e^(-z))
  • Decision Boundary:The line that separates the area where y = 0 and where y = 1.It is created by the hypothesis function, i.e. the set where θ^Tx=0.
  • Cost Function:(see the sketch after this section)

We can compress our cost function's two conditional cases into one case:

  • Gradient Descent:The update rule looks identical to the one we used in linear regression, but h(x) is now the sigmoid of θ^Tx.(see the sketch below)
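A sketch of the pieces referred to above, the sigmoid hypothesis, the compressed (single-case) cost, and the gradient, written as one Octave function of the kind fminunc expects (X is m x (n+1) with a bias column, y is an m x 1 vector of 0/1 labels; the function name is illustrative):

```octave
function [J, grad] = costFunction(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-(X * theta)));                       % sigmoid of theta'*x, so 0 <= h <= 1
  J = (1 / m) * (-y' * log(h) - (1 - y)' * log(1 - h));   % compressed cost, one case for y=0 and y=1
  grad = (1 / m) * (X' * (h - y));                        % same form as the linear-regression update
end
```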

Optimization Algorithms:

  • Conjugate gradient
  • BFGS
  • L-BFGS
  • We can write code like the sketch below to use Octave's "fminunc()"
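Following the lecture example, fminunc can be used with the costFunction sketch above (the MaxIter value and variable names are arbitrary):

```octave
options = optimset('GradObj', 'on', 'MaxIter', 400);   % tell fminunc we supply the gradient
initialTheta = zeros(size(X, 2), 1);
[optTheta, functionVal, exitFlag] = ...
    fminunc(@(t) costFunction(t, X, y), initialTheta, options);
```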

Multiclass Classification:

  • Train a logistic regression classifier hθ(x) for each class to predict the probability that y = i.To make a prediction on a new x, pick the class that maximizes hθ(x).(see the sketch below)
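A possible one-vs-all prediction step, assuming all_theta stacks one trained classifier per row (K x (n+1)); pick the class whose hypothesis value is largest:

```octave
probs = 1 ./ (1 + exp(-(X * all_theta')));  % m x K matrix of h_theta(x), one column per class
[~, p] = max(probs, [], 2);                 % p(i) is the predicted class for example i
```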

Overfitting:

  • Even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor.
  • Options to address overfitting:
  1. Reduce the number of features.
  2. Regularization.
  • Regularized Linear Regression:
  1. Cost Function:(lambda is the regularization parameter;see the sketch after this list.)
  2. Gradient Descent:The usual update, plus a (λ/m)θj term for j >= 1 (θ0 is not regularized).
  3. Normal Equation:θ = (X^TX + λ·L)^(-1)X^Ty, where L is the (n+1)×(n+1) identity matrix with its top-left entry set to 0;with λ > 0 this matrix is always invertible.
  • Regularized Logistic Regression:
  1. Cost Function:The logistic cost plus the same (λ/2m)·Σθj² term.
  2. Gradient Descent:The same regularized update, but with h(x) = sigmoid(θ^Tx).
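A sketch of the regularized linear-regression cost and gradient from the list above; θ0 (theta(1) in Octave's 1-based indexing) is not regularized. The regularized logistic version adds the same extra terms to the logistic cost and gradient, with the sigmoid hypothesis in place of X*theta:

```octave
function [J, grad] = linearRegCostFunction(theta, X, y, lambda)
  m = length(y);
  h = X * theta;
  J = (1 / (2 * m)) * sum((h - y) .^ 2) ...
      + (lambda / (2 * m)) * sum(theta(2:end) .^ 2);        % do not penalize theta_0
  grad = (1 / m) * (X' * (h - y));
  grad(2:end) = grad(2:end) + (lambda / m) * theta(2:end);  % regularize j >= 1 only
end
```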
 

Week 4:

Neural Network:Representation:
  • If we had one hidden layer, the network would map the inputs x to hidden activations a(2) and then to the output hθ(x).
  • The values for each of the "activation" nodes are computed from the previous layer's activations and that layer's weight matrix.
  • Each layer gets its own matrix of weights Θ(j):if layer j has sj units and layer j+1 has s(j+1) units, Θ(j) has dimension s(j+1) × (sj + 1).(The '+1' comes from the 'bias nodes':the output of a layer does not include a bias node, while its input does.)
  • Vectorized:z(j+1) = Θ(j)a(j), a(j+1) = g(z(j+1)).(See the forward-propagation sketch after this list.)
  • We can set different theta matrices to construct basic logical operations (such as AND and OR) with a small neural network.
  • We can construct more complex operations (such as XNOR) by adding hidden layers.
  • Multiclass Classification:We use one-vs-all method and let hypothesis function return a vector of values.
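A forward-propagation sketch for one hidden layer, matching the vectorized form above (x is a single example as a column vector; Theta1 and Theta2 are the weight matrices, assumed already trained):

```octave
sigmoid = @(z) 1 ./ (1 + exp(-z));
a1 = [1; x];               % input activations plus the bias unit
z2 = Theta1 * a1;
a2 = [1; sigmoid(z2)];     % hidden-layer activations plus the bias unit
z3 = Theta2 * a2;
h  = sigmoid(z3);          % output h_Theta(x), a K x 1 vector for multiclass
```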
 
Week 5:
Neural Network:Learning:
 
Cost Function:

  • L:Total number of layers in the network
  • Sl:Number of units (not counting bias unit) in layer l
  • K:number of output units/classes
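Using the notation above, the regularized neural-network cost function from the lectures is:

$$ J(\Theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[y_k^{(i)}\log\big((h_\Theta(x^{(i)}))_k\big)+\big(1-y_k^{(i)}\big)\log\big(1-(h_\Theta(x^{(i)}))_k\big)\right]+\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\big(\Theta_{j,i}^{(l)}\big)^2 $$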

Backpropagation Algorithm:

  • "Backpropagation" is neural-network terminology for minimizing our cost function.
  • Algorithm:For t = 1 to m:
  1. Perform forward propagation to compute a(l) for each layer, set the output error δ(L) = a(L) − y(t), back-propagate δ(l) = ((Θ(l))^Tδ(l+1)) .* g'(z(l)) for the earlier layers, and accumulate Δ(l) := Δ(l) + δ(l+1)(a(l))^T;dividing by m (plus the regularization term) gives the partial derivatives.
  • Use code like the sketch below to unroll all the matrices into one long vector, and reshape to get back the original matrices.
  • Gradient Checking:We can numerically approximate the derivative with respect to θj and compare it with the backpropagation result.(see the sketch below)
  • Training:Randomly initialize the weights, implement forward propagation and the cost function, implement backpropagation to get the partial derivatives, use gradient checking to confirm them (then disable it), and minimize J(Θ) with gradient descent or an advanced optimizer.
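The code the notes refer to for unrolling and reshaping, plus the gradient-checking approximation; the 10 x 11 layer sizes are just the lecture's example, and costFunc stands for any handle that returns the cost for an unrolled parameter vector:

```octave
% Unroll the weight matrices into one long vector, and reshape them back.
thetaVector = [Theta1(:); Theta2(:); Theta3(:)];
Theta1 = reshape(thetaVector(1:110),   10, 11);
Theta2 = reshape(thetaVector(111:220), 10, 11);
Theta3 = reshape(thetaVector(221:231), 1, 11);

% Gradient checking: two-sided numerical approximation of dJ/dtheta_j for some index j.
EPSILON = 1e-4;
thetaPlus  = thetaVector;  thetaPlus(j)  = thetaPlus(j)  + EPSILON;
thetaMinus = thetaVector;  thetaMinus(j) = thetaMinus(j) - EPSILON;
gradApprox = (costFunc(thetaPlus) - costFunc(thetaMinus)) / (2 * EPSILON);
```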
Week 6:
Applying Machine Learning:
 
Evaluating a Hypothesis:
  • Set 70% of the data to be the training set and the remaining 30% to be the test set.
  • In order to choose the model (e.g. the degree of polynomial) for your hypothesis, use a cross validation set:split the data into 60% training set, 20% cross validation set and 20% test set.
Bias vs. Variance:
  • High bias is underfitting and high variance is overfitting.Ideally, we need to find a golden mean between these two.
  • High Bias:Both the training error and the cross validation error are high, and they are close to each other.
  • High Variance:The training error is low but the cross validation error is much higher.
  • In order to choose the model and the regularization term λ, try a range of λ values, learn θ for each, pick the λ with the lowest cross validation error, and then check the generalization error on the test set.
  • If a learning algorithm is suffering from high bias, getting more training data will not help much.
  • If a learning algorithm is suffering from high variance, getting more training data is likely to help.
  • A neural network with fewer parameters is prone to underfitting. It is also computationally cheaper.
  • A large neural network with more parameters is prone to overfitting. It is also computationally expensive.
 
Machine Learning System Design:
  • The recommended approach:
  1. Start with a simple algorithm, implement it quickly, and test it early on your cross validation data.
  2. Plot learning curves to decide if more data, more features, etc. are likely to help.
  3. Manually examine the errors on examples in the cross validation set and try to spot a trend where most of the errors were made.
  • It is very important to get error results as a single, numerical value.
  • Precision and recall (defined in the next section) give better error metrics than plain accuracy when the classes are skewed.
Handling Skewed Data:
  • Skewed Classes:The ratio of positive to negative examples is very close to one of two extremes.
  • (y = 1 in the presence of the rare class that we want to detect)
  • Precision Rate:TP / (TP + FP)
  • Recall Rate:TP / (TP + FN)
  • F1 Score:(2 * P * R) / (P + R)
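A small sketch computing these metrics from 0/1 vectors of predictions (pred) and ground-truth labels (yval); both names are just placeholders:

```octave
tp = sum((pred == 1) & (yval == 1));   % true positives
fp = sum((pred == 1) & (yval == 0));   % false positives
fn = sum((pred == 0) & (yval == 1));   % false negatives
precision = tp / (tp + fp);
recall    = tp / (tp + fn);
F1 = (2 * precision * recall) / (precision + recall);
```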
 
Week 7:
Support Vector Machines:
 
Optimization Objective:
  • Because a constant factor doesn't change the value of θ that achieves the minimum, we can multiply the logistic regression objective function by m.
  • We can use either (A + λB) or (CA + B) to control the relative weighting of the two terms;the SVM convention is CA + B, where C plays a role similar to 1/λ.
  • A support vector machine just makes a prediction of y being equal to one or zero, directly.The hypothesis predicts one when θ^Tx >= 0 and zero otherwise.
Large Margin Intuition:
  • With a large C, the SVM decision boundary is chosen to separate the classes with a large margin.
  • That large-margin boundary (the black line in the lecture figure) gives the SVM robustness, because it stays far from both classes.
Kernels:
  • Given (x(i),y(i)), we choose l(i) = x(i) as landmarks, then let f(i) = sim(x,l(i)).
  • We compute new features depending on proximity to landmarks, so our hypothesis becomes θ0 + θ1f1 + θ2f2 + ...;predict y = 1 when this is >= 0.
  • Gaussian Kernel:sim(x,l) = exp(-||x - l||² / (2σ²)).(see the sketch after this list)
  • C and Sigma:Large C (small λ) gives lower bias but higher variance;small C gives higher bias and lower variance.Large σ² makes the features vary more smoothly (higher bias, lower variance);small σ² does the opposite.
  • Do perform feature scaling before using the Gaussian kernel.
  • Linear kernel:meaning no kernel;predict y = 1 when θ^Tx >= 0.
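A sketch of the Gaussian-kernel similarity used for the features f above (x and l are column vectors of the same length; sigma sets how quickly similarity decays with distance):

```octave
function sim = gaussianKernel(x, l, sigma)
  sim = exp(-sum((x - l) .^ 2) / (2 * sigma ^ 2));   % 1 when x == l, falls toward 0 as they separate
end
```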
Week 8:

Unsupervised Learning:

Clustering:

  • We give an unlabeled training set to an algorithm and ask the algorithm to find some structure in the data for us.
  • K-means Algorithm:Repeat a cluster-assignment step and a move-centroid step until convergence.(see the sketch after this list)
  • Cost Function:The average squared distance between each example and the centroid of the cluster it is assigned to (the distortion).
  • Random Initialization:Randomly pick K training examples and set μ1,...,μK equal to these K examples.
  • Elbow Method:Plot the cost J against the number of clusters K and look for an "elbow" where the decrease slows down.
  • Better way to choose the number of clusters is to ask, for what purpose are you running K-means.
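A compact K-means sketch following the notes above: random initialization from K training examples, then alternating cluster-assignment and move-centroid steps (X is m x n; the function name is illustrative and empty clusters are not handled):

```octave
function [idx, centroids] = runKMeans(X, K, max_iters)
  m = size(X, 1);
  idx = zeros(m, 1);
  centroids = X(randperm(m, K), :);        % random initialization from the data
  for iter = 1:max_iters
    for i = 1:m                            % cluster assignment step
      [~, idx(i)] = min(sum((centroids - X(i, :)) .^ 2, 2));
    end
    for k = 1:K                            % move centroid step
      centroids(k, :) = mean(X(idx == k, :), 1);
    end
  end
end
```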
Dimensionality Reduction:
  • Reason:Data compression or speed up our learning algorithm.
  • Visualization:We can use dimensionality reduction to reduce data from high dimensions down to 2 or 3 dimensions,so that we can plot it and understand our data better.
Principal Component Analysis:
  • PCA:Find a lower dimensional surface onto which to project the data, so as to minimize the square distance between each point and the location of where it gets projected.
  • Reduce from 2D to 1D:Find a vector onto which to project the data to minimize the projection error.
  • Reduce from nD to kD:Find k vectors onto which to project the data to minimize the projection error.
  • Data preprocessing:Feature scaling/Mean normalization
  • Algorithm:(see the sketch after this list)
  1. If we want to reduce the data from n dimensions down to k dimensions, we take the first k vectors (columns) of U(n * n) as Ureduce(n * k).
  2. z = Ureduce' * x.
  • Reconstruction from Compressed Representation:Xapprox = Ureduce * z.
  • Applying:First try running your algorithm on the raw data;only implement PCA if that doesn't do what you want.
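A PCA sketch matching the steps above; X is assumed to be m x n and already mean-normalized/feature-scaled, and k is the target dimension:

```octave
m = size(X, 1);
Sigma = (1 / m) * (X' * X);   % n x n covariance matrix
[U, S, V] = svd(Sigma);       % columns of U are the principal directions
Ureduce = U(:, 1:k);          % keep the first k vectors (n x k)
Z = X * Ureduce;              % projected data, m x k (z = Ureduce' * x per example)
X_approx = Z * Ureduce';      % reconstruction from the compressed representation
```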
Week 9:

Anomaly Detection:

Density Estimation:

  • We build a model p(x) of the probability of x;if p(x_test) is less than some threshold ε, we flag it as an anomaly.
  • Gaussian Distribution(Normal Distribution):x ~ N(μ,σ²), with p(x;μ,σ²) = (1/(√(2π)σ))·exp(-(x-μ)²/(2σ²)).
  • Parameter Estimation:μ = (1/m)Σx(i), σ² = (1/m)Σ(x(i)-μ)².
  • Algorithm:Fit μj and σj² for each feature, multiply the per-feature densities to get p(x), and flag an anomaly when p(x) < ε.(see the sketch after this list)
  • Evaluation:Assume we have some labeled data of anomalous and non-anomalous examples.Use a training set (unlabeled, assumed normal examples), a cross validation set and a test set.
  • Anomaly Detection vs. Supervised Learning:Use anomaly detection when positive (anomalous) examples are very rare and future anomalies may look nothing like the ones seen so far;use supervised learning when there are enough positive examples to learn from.
  • Non-gaussian Features:Let xNew = log(x) (log transform), or xNew = x^(0.1).
  • Choose Features:Choose features that might take on unusually large or small values in the event of an anomaly.
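A density-estimation sketch under the independent-Gaussian model above; X is the unlabeled m x n training set, x is a 1 x n example, and epsilon would be chosen using the labeled cross validation set:

```octave
mu = mean(X);                                % 1 x n feature means
sigma2 = var(X, 1);                          % 1 x n variances (1/m normalization)
p = prod(exp(-((x - mu) .^ 2) ./ (2 * sigma2)) ./ sqrt(2 * pi * sigma2));
isAnomaly = (p < epsilon);                   % flag as an anomaly when p(x) is small
```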

Multivariate Gaussian Distribution:

Recommender Systems:

  • n_u = number of users
  • n_m = number of movies
  • r(i,j) = 1 if user j has rated movie i
  • y(i,j) = rating given by user j to movie i (defined only if r(i,j) = 1)
  • theta(j) = parameter vector for user j
  • x(i) = feature vector for movie i

Content Based Recommendations:

  • We assume we have features for different movies.
  • For each user j, learn a parameter vector θ(j).Predict user j's rating of movie i as (θ(j))^Tx(i) stars.
  • Optimization Objective:
  • Gradient Descent:

Collaborative Filtering:

  • We assume that each of our users has told us how much they like the romantic movies and how much they like action packed movies.
  • Optimization Algorithm:
  • Given x and movie ratings can estimate theta.
  • Given theta and movie ratings can estimate x.
  • Optimization Objective:Minimize over x(1),...,x(n_m) and θ(1),...,θ(n_u) simultaneously.(see the sketch below)
  • Mean Normalization:Compute the average rating μi that each movie obtained and subtract it off;a predicted rating then becomes (θ(j))^Tx(i) + μi.
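A vectorized sketch of the collaborative-filtering objective using the notation above (X is n_m x n movie features, Theta is n_u x n user parameters, Y is n_m x n_u ratings, and R marks which ratings exist):

```octave
J = (1 / 2) * sum(sum(((X * Theta' - Y) .^ 2) .* R)) ...
    + (lambda / 2) * (sum(sum(Theta .^ 2)) + sum(sum(X .^ 2)));   % regularize both X and Theta
```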

Week 10:

Large Scale Machine Learning:

Stochastic Gradient Descent:

  • Algorithm:
  1. Randomly shuffle the data set.
  2. For i = 1...m:update θ using only the i-th training example.(see the sketch after this list)
  • SGD will only try to fit one training example at a time. This way we can make progress in gradient descent without having to scan all m training examples first.
  • We will usually take 1-10 passes through data set to get near the global minimum.
  • Convergence:Plot the average cost of the hypothesis applied to every 1000 or so training examples. We can compute and save these costs during the gradient descent iterations.
  • One strategy for trying to actually converge at the global minimum is to slowly decrease α over time.
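A stochastic-gradient-descent sketch for linear regression along the lines above: shuffle, then update θ from one example at a time (X is m x (n+1) with a bias column; num_passes, alpha, and theta are assumed to be set already):

```octave
for pass = 1:num_passes            % usually 1-10 passes over the data set
  for i = randperm(m)              % randomly shuffle the data set
    h = X(i, :) * theta;           % hypothesis on a single example
    theta = theta - alpha * (h - y(i)) * X(i, :)';
  end
end
```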

Mini-Batch Gradient Descent:

  • Use b examples in each iteration.(b = mini-batch size)
  • Algorithm:(see the sketch below)
  • The advantage is that we can use vectorized implementations over the b examples.
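A mini-batch step sketch with b = 10, showing the vectorized update over each batch (assumes m is a multiple of b for simplicity):

```octave
b = 10;                                  % mini-batch size
for i = 1:b:m
  Xb = X(i:i+b-1, :);
  yb = y(i:i+b-1);
  theta = theta - (alpha / b) * (Xb' * (Xb * theta - yb));   % vectorized over the b examples
end
```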

Online Learning:

  • With a continuous stream of users to a website, we can run an endless loop that gets (x,y), where we collect some user actions for the features in x to predict some behavior y.
  • You can update θ for each individual (x,y) pair as you collect them. This way, you can adapt to new pools of users, since you are continuously updating theta.

Map Reduce and Data Parallelism:

  • Many learning algorithms can be expressed as computing sums of functions over the training set.
  • We can divide up batch gradient descent and dispatch the cost function for a subset of the data to many different machines so that we can train our algorithm in parallel.

Week 11:

Photo OCR:

  • Pipeline:
  1. Text detection
  2. Character segmentation
  3. Character classification
  • Use sliding windows and region expansion for text detection and character segmentation.
  • Ceiling Analysis:Decide which stage of the pipeline is most worth improving by feeding each stage perfect (ground-truth) input in turn and measuring how much the overall accuracy improves.

Artificial Data Synthesis:

  • Creating new data from scratch (e.g. rendering characters in random fonts against different backgrounds, as in the lecture example).
  • Taking existing labeled examples and introducing distortions to create extra labeled examples.
