Abstract: This article is the verbatim transcript of Lesson 6, "Model Representation," from Chapter 2, "Linear Regression with One Variable," of Andrew Ng's Machine Learning course. I transcribed it word by word while watching the video so that I can refer back to it later, and I'm sharing it here. If you find any mistakes, corrections are very welcome and sincerely appreciated. I also hope it is helpful for your own studies.

Our first learning algorithm will be linear regression. In this video (article), you'll see what the model looks like. And more importantly, you'll also see what the overall process of supervised learning looks like.

As a motivating example, let's use the problem of predicting housing prices. We're going to use a data set of housing prices from the city of Portland, Oregon. And here I'm gonna plot my data set of a number of houses that were different sizes and that were sold for a range of different prices. Let's say that given this data set, you have a friend that's trying to sell a house, and let's say your friend's house is 1,250 square feet in size, and you want to tell them how much they might be able to sell the house for. Well, one thing you could do is fit a model. Maybe fit a straight line to this data. Looks something like that, and based on that, maybe you could tell your friend that, let's say, maybe he can sell the house for around $220,000. So, this is an example of a supervised learning algorithm. And it's supervised learning because we're given the, quote, "right answer" for each of our examples. Namely, we're told the actual price that each of the houses in our data set was sold for. And moreover, this is an example of a regression problem, where the term regression refers to the fact that we're predicting a real-valued output, namely the price. And just to remind you, the other most common type of supervised learning problem is called the classification problem, where we predict discrete-valued outputs, such as if we are looking at cancer tumors and trying to decide if a tumor is malignant or benign. So that's a zero-one valued discrete output.
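To make the "fit a straight line, then read a prediction off it" idea concrete, here is a minimal Python sketch. The data points are illustrative stand-ins for the Portland data set (only a few values from the lecture's table are known), and the fitted line is simply whatever least-squares fit those points give.

```python
import numpy as np

# A few illustrative (size in square feet, price in $1000s) pairs standing in
# for the Portland housing data set; only some of these values come from the
# lecture's table, the rest are placeholders.
sizes = np.array([852.0, 1416.0, 1534.0, 2104.0])
prices = np.array([178.0, 232.0, 315.0, 460.0])

# Fit a straight line: price ≈ slope * size + intercept.
slope, intercept = np.polyfit(sizes, prices, deg=1)

# Read the prediction for a 1,250 square-foot house off the fitted line.
# (With the full data set the lecture arrives at roughly $220,000; with these
# placeholder points the number will differ somewhat.)
predicted = slope * 1250 + intercept
print(f"Predicted price: about ${predicted * 1000:,.0f}")
```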

More formally, in supervised learning, we have a data set, and this data set is called a training set. So for the housing prices example, we have a training set of different housing prices, and our job is to learn from this data how to predict prices of houses. Let's define some notation that we'll be using throughout this course. We're going to define quite a lot of symbols. It's okay if you don't remember all the symbols right now, but as the course progresses, it will be useful to have a convenient notation. So, I'm gonna use lowercase m throughout this course to denote the number of training examples. So, in this data set, if I have, you know, let's say 47 rows in this table, then I have 47 training examples, and m = 47. Let me use lowercase x to denote the input variables, often also called the features. So that would be the x's here; those would be our input features. And I'm gonna use y to denote my output variable, or the target variable, which I'm going to predict. And so that's the second column here. On notation, I'm going to use (x, y) to denote a single training example. So, a single row in this table corresponds to a single training example. And to refer to a specific training example, I'm going to use the notation (x^(i), y^(i)). And we're going to use this to refer to the i-th training example. So, this superscript i over here, this is not exponentiation, right? The superscript i in parentheses is just an index into my training set, and refers to the i-th row in this table, okay? So, this is not x to the power of i, y to the power of i. Instead, (x^(i), y^(i)) just refers to the i-th row of this table. So, for example, x^(1) refers to the input value for the first training example, so that's 2104. That's the x in the first row. x^(2) will be equal to 1416, right? That's the second x. And y^(1) will be equal to 460. That's the y value for my first training example. That's what that (1) refers to.
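As a small illustration of this notation, the sketch below stores a truncated version of the training set as arrays (only 2104, 1416, and 460 appear in the lecture text; the other values are placeholders) and shows how m, x^(1), x^(2), and y^(1) map onto it.

```python
import numpy as np

# A truncated version of the training set as arrays. Only 2104, 1416, and 460
# appear in the lecture text; the remaining values are placeholders.
X = np.array([2104.0, 1416.0, 1534.0])   # input variable / feature: size in square feet
y = np.array([460.0, 232.0, 315.0])      # output / target variable: price in $1000s

m = len(X)   # m = number of training examples (47 in the full data set, 3 here)

# Python arrays are 0-indexed, so the course's x^(1) is X[0] and y^(1) is y[0].
print(m)       # 3
print(X[0])    # x^(1) = 2104
print(X[1])    # x^(2) = 1416
print(y[0])    # y^(1) = 460
```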

So as mentioned, occasionally I'll ask you a question to let you check your understanding, and in a few seconds a multiple-choice question will pop up in the video. When it does, please use your mouse to select what you think is the right answer: what is defined by the training set? So here's how this supervised learning algorithm works. We saw that with a training set like our training set of housing prices, we feed that to our learning algorithm. It is the job of the learning algorithm to then output a function, which by convention is usually denoted lowercase h, and h stands for hypothesis. And what the job of the hypothesis is, it's a function that takes as input the size of a house, like maybe the size of the new house your friend is trying to sell, so it takes in the value of x, and it tries to output the estimated value of y for the corresponding house. So h is a function that maps from x's to y's. People often ask me, you know, why is this function called a hypothesis. Some of you may know the meaning of the term hypothesis from the dictionary or from science or whatever. It turns out that in machine learning, this is a name that was used in the early days of machine learning and it kinda stuck. Because maybe it's not a great name for this sort of function, for mapping from sizes of houses to price predictions, I think the term hypothesis isn't the best possible name for this, but this is the standard terminology that people use in machine learning. So don't worry too much about why people call it that. When designing a learning algorithm, the next thing we need to decide is how we represent this hypothesis h. For this and the next few videos, our initial choice for representing the hypothesis will be the following. We're going to represent h as follows. And we will write this as:

h_θ(x) = θ_0 + θ_1 x.

And as a shorthand, sometimes instead of writing h_θ(x), I'll just write this as h(x). But more often I'll write it with the θ as a subscript, as over there. And plotting this in the picture, all this means is that we are going to predict that y is a linear function of x. Right, so that's the data set. And what this function is doing is predicting that y is some straight-line function of x. That's h_θ(x) = θ_0 + θ_1 x, okay? And why a linear function? Well, sometimes we'll want to fit more complicated, perhaps non-linear functions as well. But since this linear case is the simple building block, we'll start with this example first of fitting linear functions, and we'll build on this to eventually have more complex models and more complex learning algorithms. Let me also give this particular model a name. This model is called linear regression, or this, for example, is actually linear regression with one variable, with the variable being x. That is, we're predicting all the prices as a function of one variable x. And another name for this model is univariate linear regression. And univariate is just a fancy way of saying one variable. So, that's linear regression. In the next video (article) we'll start to talk about just how we go about implementing this model.
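Putting the representation into code, here is a minimal sketch of this hypothesis in Python. The parameter values θ_0 = 50 and θ_1 = 0.14 are made-up placeholders just to show how the function is used; how to actually choose the parameters from the training set is the subject of the next lessons.

```python
# A minimal sketch of the hypothesis for univariate linear regression:
#     h_θ(x) = θ_0 + θ_1 x
def hypothesis(x, theta0, theta1):
    """Map an input house size x to a predicted price."""
    return theta0 + theta1 * x

# With made-up parameters θ_0 = 50 and θ_1 = 0.14 (placeholders; later videos
# show how to learn them), the predicted price in $1000s for a 1,250
# square-foot house would be:
print(hypothesis(1250, theta0=50.0, theta1=0.14))   # 225.0, i.e. about $225,000
```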

<end>
