https://www.coursera.org/learn/machine-learning/exam/7pytE/linear-regression-with-multiple-variables

1.

Suppose m=4 students have taken some class, and the class had a midterm exam and a final exam. You have collected a dataset of their scores on the two exams, which is as follows:

midterm exam    (midterm exam)^2    final exam
89              7921                96
72              5184                74
94              8836                87
69              4761                78

You'd like to use polynomial regression to predict a student's final exam score from their midterm exam score. Concretely, suppose you want to fit a model of the form hθ(x)=θ0+θ1x1+θ2x2, where x1 is the midterm score and x2 is (midterm score)2. Further, you plan to use both feature scaling (dividing by the "max-min", or range, of a feature) and mean normalization.

What is the normalized feature x2(2)? (Hint: midterm = 72, final = 74 is training example 2.) Round your answer to two decimal places and enter it in the text box below.

Answer: -0.37

Mean: (7921 + 5184 + 4761 + 8836) / 4 = 6675.5

Max - Min (range): 8836 - 4761 = 4075

Normalization: x = (x_n - mean) / (max - min)

Training example 2: (5184 - 6675.5) / 4075 ≈ -0.37
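The arithmetic above can be checked with a few lines of Python (a minimal sketch; the variable names are mine, not from the course):

```python
# Mean normalization + range scaling for the x2 = (midterm)^2 feature.
x2 = [7921, 5184, 8836, 4761]  # squared midterm scores of the m = 4 students

mean = sum(x2) / len(x2)          # 6675.5
value_range = max(x2) - min(x2)   # 8836 - 4761 = 4075

# Training example 2 has midterm = 72, so x2 = 72**2 = 5184.
x2_norm = (x2[1] - mean) / value_range
print(round(x2_norm, 2))  # -0.37
```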

2.

You run gradient descent for 15 iterations with α=0.3 and compute J(θ) after each iteration. You find that the value of J(θ) increases over time. Based on this, which of the following conclusions seems most plausible?

α=0.3 is an effective choice of learning rate.

Rather than use the current value of α, it'd be more promising to try a larger value of α (say α=1.0).

Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α=0.1).

Answer: C. Rather than use the current value of α, it'd be more promising to try a smaller value of α (say α=0.1).

If J(θ) increases with every iteration, the learning rate is too large: each update overshoots the minimum, so the cost grows instead of shrinking. A smaller α takes smaller steps and lets J(θ) decrease; a larger α would only make the divergence worse.
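A tiny numerical illustration (my own toy example, not from the quiz): minimizing J(θ) = θ² by gradient descent, once with a learning rate large enough to overshoot and once with a small one.

```python
# Toy illustration: minimize J(theta) = theta^2, whose gradient is 2*theta.
# If alpha is too large, each step overshoots the minimum and J grows.
def run_gd(alpha, theta=1.0, iters=15):
    history = []
    for _ in range(iters):
        theta = theta - alpha * 2 * theta  # gradient descent update
        history.append(theta ** 2)         # J(theta) after this iteration
    return history

diverging = run_gd(alpha=1.5)   # step factor |1 - 2*1.5| = 2 > 1: J increases
converging = run_gd(alpha=0.1)  # step factor |1 - 2*0.1| = 0.8 < 1: J decreases

print(diverging[0] < diverging[-1])    # True: cost grew over the iterations
print(converging[0] > converging[-1])  # True: cost shrank over the iterations
```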

3.

Suppose you have m=23 training examples with n=5 features (excluding the additional all-ones feature for the intercept term, which you should add). The normal equation is θ=(XTX)−1XTy. For the given values of m and n, what are the dimensions of θ, X, and y in this equation?

X is 23×6, y is 23×6, θ is 6×6

X is 23×5, y is 23×1, θ is 5×5

X is 23×6, y is 23×1, θ is 6×1

X is 23×5, y is 23×1, θ is 5×1

Answer: C. X is 23×6, y is 23×1, θ is 6×1

After adding the all-ones intercept column, X has n+1 = 6 columns, y is a single column of m = 23 labels, and θ has n+1 = 6 rows.
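The dimensions can be verified with NumPy (a sketch using made-up random data; only the shapes matter here):

```python
import numpy as np

m, n = 23, 5
rng = np.random.default_rng(0)

# Design matrix with the all-ones intercept column prepended: 23 x 6.
X = np.hstack([np.ones((m, 1)), rng.standard_normal((m, n))])
y = rng.standard_normal((m, 1))  # labels: 23 x 1

# Normal equation theta = (X^T X)^{-1} X^T y, solved without forming the inverse.
theta = np.linalg.solve(X.T @ X, X.T @ y)

print(X.shape, y.shape, theta.shape)  # (23, 6) (23, 1) (6, 1)
```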

4.

Suppose you have a dataset with m=50 examples and n=15 features for each example. You want to use multivariate linear regression to fit the parameters θ to our data. Should you prefer gradient descent or the normal equation?

Gradient descent, since it will always converge to the optimal θ.

The normal equation, since it provides an efficient way to directly find the solution.

Gradient descent, since (XTX)−1 will be very slow to compute in the normal equation.

The normal equation, since gradient descent might be unable to find the optimal θ.

Answer: B. The normal equation, since it provides an efficient way to directly find the solution.

Comparing gradient descent with the normal equation:

Gradient descent requires feature scaling; the normal equation is simple and direct and does not.

The normal equation has high time complexity (inverting XᵀX costs roughly O(n³)), so it suits problems with few features.

When the number of features is below ~100,000: normal equation.
When the number of features is above ~100,000: gradient descent.
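For a small problem like this one (m = 50, n = 15) both methods reach the same θ; the sketch below uses synthetic data, and the learning rate and iteration count are my own choices, not course-specified values:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 15
X = np.hstack([np.ones((m, 1)), rng.standard_normal((m, n))])  # intercept added
true_theta = rng.standard_normal((n + 1, 1))
y = X @ true_theta  # noiseless labels, so the exact theta is recoverable

# Normal equation: direct solution in one step.
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on J(theta) = (1/2m) * ||X theta - y||^2.
theta_gd = np.zeros((n + 1, 1))
alpha = 0.05
for _ in range(5000):
    grad = X.T @ (X @ theta_gd - y) / m
    theta_gd -= alpha * grad

print(np.allclose(theta_ne, theta_gd, atol=1e-3))  # True: both recover theta
```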
 

5.

Which of the following are reasons for using feature scaling?

It speeds up solving for θ using the normal equation.

It prevents the matrix XTX (used in the normal equation) from being non-invertible (singular/degenerate).

It speeds up gradient descent by making it require fewer iterations to get to a good solution.

It is necessary to prevent gradient descent from getting stuck in local optima.

Answer: C. It speeds up gradient descent by making it require fewer iterations to get to a good solution.

The previous question covered the same point: the normal equation does not require feature scaling, which rules out A and B. Feature scaling reduces the number of iterations and so speeds up gradient descent, but it does nothing about local optima (and for linear regression the cost function is convex, so there are none to get stuck in anyway).
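One way to see the effect (my own illustration, not course code): feature scaling shrinks the condition number of XᵀX, which is what makes the cost contours rounder and lets gradient descent converge in fewer iterations. The two features below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 100
size = rng.uniform(500, 3000, (m, 1))   # hypothetical feature on a large scale
rooms = rng.uniform(1, 5, (m, 1))       # hypothetical feature on a small scale
X_raw = np.hstack([size, rooms])

# Mean normalization + range ("max - min") scaling, as in question 1.
X_scaled = (X_raw - X_raw.mean(axis=0)) / (X_raw.max(axis=0) - X_raw.min(axis=0))

cond_raw = np.linalg.cond(X_raw.T @ X_raw)
cond_scaled = np.linalg.cond(X_scaled.T @ X_scaled)
print(cond_scaled < cond_raw)  # True: the scaled problem is far better conditioned
```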
