Question 1

Consider the problem of predicting how well a student does in their second year of college/university, given how well they did in their first year. Specifically, let x be the number of "A" grades (including A−, A, and A+ grades) that a student receives in their first year of college (freshman year). We would like to predict the value of y, which we define as the number of "A" grades they get in their second year (sophomore year).

Questions 1 through 4 will use the following training set, a small sample of different students' performances. Here each row is one training example. Recall that in linear regression, our hypothesis is hθ(x) = θ_0 + θ_1x, and we use m to denote the number of training examples.

x | y
5 | 4
3 | 4
0 | 1
4 | 3

For the training set given above, what is the value of m? In the box below, please enter your answer (which should be a number between 0 and 10).

Answer

m is the number of training examples. The table above has 4 rows, so m = 4.

4
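
As a quick sanity check, here is a minimal Python sketch (the variable names are illustrative) that stores the training set above and counts the examples:

```python
# Training set from Question 1:
# X = first-year "A" grades, Y = second-year "A" grades.
X = [5, 3, 0, 4]
Y = [4, 4, 1, 3]

m = len(X)  # number of training examples
print(m)    # -> 4
```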

Question 2

Many substances that can burn (such as gasoline and alcohol) have a chemical structure based on carbon atoms; for this reason they are called hydrocarbons. A chemist wants to understand how the number of carbon atoms in a molecule affects how much energy is released when that molecule combusts (meaning that it is burned). The chemist obtains the dataset below. In the column on the right, “kJ/mol” is the unit measuring the amount of energy released.

You would like to use linear regression (hθ(x) = θ_0 + θ_1x) to estimate the amount of energy released (y) as a function of the number of carbon atoms (x). Which of the following do you think will be the values you obtain for θ_0 and θ_1? You should be able to select the right answer without actually implementing linear regression.

Answer

Since the released heat (y) decreases as the number of carbon atoms (x) increases, θ_1 has to be negative. θ_0 functions as the offset: it is the value the hypothesis takes at x = 0, so judging from the table, θ_0 should be higher than −1000, which rules out −1780.0.

  • θ_0 = −1780.0, θ_1 = −530.9
  • θ_0 = −569.6, θ_1 = −530.9 (correct)
  • θ_0 = −1780.0, θ_1 = 530.9
  • θ_0 = −569.6, θ_1 = 530.9

Question Explanation

We can give an approximate estimate of the θ_0 and θ_1 values by observing the trend of the data in the training set. The y values decrease quite regularly as the x values increase, so θ_1 must be negative. θ_0 is the value the hypothesis takes when x equals zero, so it must be greater than y^(1) in order to satisfy the decreasing trend of the data. Among the proposed answers, the only one that meets both conditions is hθ(x) = −569.6 − 530.9x. These considerations are easy to verify by plotting the training data together with the fitted regression line.
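
Although the hydrocarbon table itself is not reproduced above, the fit can be computed by ordinary least squares. Below is a minimal sketch (assuming NumPy; since the chemist's data is unavailable, the Question 1 training set stands in purely to illustrate the computation, so the resulting slope is positive rather than negative):

```python
import numpy as np

# Least-squares fit of h(x) = theta_0 + theta_1 * x.
x = np.array([5.0, 3.0, 0.0, 4.0])  # stand-in data (Question 1), not the hydrocarbon table
y = np.array([4.0, 4.0, 1.0, 3.0])

# Design matrix with a column of ones so that theta_0 acts as the intercept.
A = np.column_stack([np.ones_like(x), x])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(theta)  # [theta_0, theta_1]
```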


Question 3

Suppose we set θ_0 = −1 and θ_1 = 0.5. What is hθ(4)?

Answer

hθ(x) = θ_0 + θ_1x
hθ(x) = -1 + 0.5x
hθ(4) = -1 + 0.5 * 4 = -1 + 2
hθ(4) = 1
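
The same evaluation as a tiny Python check (a minimal sketch; the function name is arbitrary):

```python
def h(x, theta0=-1.0, theta1=0.5):
    """Hypothesis h_theta(x) = theta_0 + theta_1 * x."""
    return theta0 + theta1 * x

print(h(4))  # -> 1.0
```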

Question 4

Let f be some function such that f(θ_0,θ_1) outputs a number. For this problem, f is some arbitrary/unknown smooth function (not necessarily the cost function of linear regression, so f may have local optima). Suppose we use gradient descent to try to minimize f(θ_0,θ_1) as a function of θ_0 and θ_1. Which of the following statements are true? (Check all that apply.)

Answer

  • If θ_0 and θ_1 are initialized so that θ_0=θ_1, then by symmetry (because we do simultaneous updates to the two parameters), after one iteration of gradient descent, we will still have θ_0=θ_1.

    • The updates to θ_0 and θ_1 are different (even though we're doing simultaneous updates), so there's no particular reason to expect them to be the same after one iteration of gradient descent.
  • Setting the learning rate α to be very small is not harmful, and can only speed up the convergence of gradient descent.
    • If the learning rate is small, gradient descent ends up taking an extremely small step on each iteration, so this would actually slow down (rather than speed up) the convergence of the algorithm.
  • If the first few iterations of gradient descent cause f(θ_0,θ_1) to increase rather than decrease, then the most likely cause is that we have set the learning rate α to too large a value.
    • If α were small enough, then gradient descent should always successfully take a tiny step downhill and decrease f(θ_0,θ_1) at least a little bit. If gradient descent instead increases the objective value, that means α is too large (or you have a bug in your code!); see the sketch after this list.
  • If the learning rate is too small, then gradient descent may take a very long time to converge.
    • If the learning rate is small, gradient descent ends up taking an extremely small step on each iteration, and therefore can take a long time to converge.
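
To make the learning-rate statements above concrete, here is a minimal sketch (not the course's code) running gradient descent on the one-parameter function f(θ) = θ², whose gradient is 2θ: a very small α decreases f but only slowly, while a too-large α overshoots and makes f grow:

```python
def gradient_descent(alpha, theta=1.0, iters=5):
    """Minimize f(theta) = theta**2 using its gradient f'(theta) = 2*theta."""
    for i in range(iters):
        theta = theta - alpha * 2 * theta  # gradient-descent update
        print(f"alpha={alpha}: iter {i}: theta={theta:.4f}, f={theta**2:.4f}")

gradient_descent(alpha=0.01)  # tiny steps: f shrinks, but very slowly
gradient_descent(alpha=1.1)   # too large: theta flips sign and f increases each step
```

With α = 1.1 each update multiplies θ by −1.2, so |θ| (and hence f) grows on every iteration: exactly the diverging behavior described above.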

Question 5

Suppose that for some linear regression problem (say, predicting housing prices as in the lecture), we have some training set, and for our training set we managed to find some θ_0, θ_1 such that J(θ_0,θ_1)=0. Which of the statements below must then be true? (Check all that apply.)

Answer

  • For this to be true, we must have θ_0=0 and θ_1=0 so that hθ(x)=0.

    • If J(θ_0,θ_1)=0, that means the line defined by the equation "y = θ_0 + θ_1x" perfectly fits all of our data. There's no particular reason to expect that the values of θ_0 and θ_1 that achieve this are both 0 (unless y^(i)=0 for all of our training examples).
  • Our training set can be fit perfectly by a straight line, i.e., all of our training examples lie perfectly on some straight line.
    • If J(θ_0,θ_1)=0, that means the line defined by the equation "y = θ_0 + θ_1x" perfectly fits all of our data; see the numeric check after this list.
  • For this to be true, we must have y^(i)=0 for every value of i=1,2,…,m.
    • So long as all of our training examples lie on a straight line, we will be able to find θ_0 and θ_1 so that J(θ_0,θ_1)=0. It is not necessary that y^(i)=0 for all of our examples.
  • We can perfectly predict the value of y even for new examples that we have not yet seen (e.g., we can perfectly predict prices of even new houses that we have not yet seen).
    • Even though we can fit our training set perfectly, this does not mean that we'll always make perfect predictions on houses in the future/on houses that we have not yet seen.
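
To see the straight-line condition numerically, here is a minimal sketch (assuming NumPy) of the squared-error cost J(θ_0,θ_1) = (1/2m)·Σ(hθ(x^(i)) − y^(i))², evaluated on illustrative data lying exactly on the line y = 1 + 2x:

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """Squared-error cost J = (1/(2m)) * sum((h(x) - y)**2)."""
    m = len(x)
    h = theta0 + theta1 * x
    return np.sum((h - y) ** 2) / (2 * m)

# Illustrative training set lying exactly on the line y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x

print(cost(1.0, 2.0, x, y))  # -> 0.0: the line fits perfectly, so J = 0
print(cost(0.0, 0.0, x, y))  # -> 10.5: theta_0 = theta_1 = 0 is not required for J = 0
```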
