Loss Function

Exercise 1: complete computeCost.m

function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

J = (1 / (2 * m)) * sum((X * theta - y) .^ 2);

% =========================================================================

end

This is written by directly plugging in the formula:

\[
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( \theta^{T} x^{(i)} - y^{(i)} \right)^{2}
\]
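As a quick sanity check (a minimal sketch with made-up numbers, not part of the assignment), the cost for theta = [0; 0] on a tiny dataset can be verified by hand:

% Toy data: m = 3 examples, intercept column already prepended (hypothetical values)
X = [1 1; 1 2; 1 3];
y = [1; 2; 3];
theta = [0; 0];

J = computeCost(X, y, theta);
% Residuals are -1, -2, -3, so J = (1 + 4 + 9) / (2 * 3) = 2.3333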

Loss function, upgraded: computeCostMulti.m

function J = computeCostMulti(X, y, theta)
%COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
%   J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

J = (1 / (2 * m)) * sum((X * theta - y) .^ 2);

% =========================================================================

end

The implementation is in fact exactly the same…
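That is because X * theta is already dimension-agnostic: the same line works for any number of columns in X. A minimal sketch with two made-up features:

% Two features plus the intercept column (all values hypothetical)
X = [1 2104 3; 1 1600 3; 1 2400 4];
y = [400; 330; 369];
theta = [0; 0.1; 50];

J = computeCostMulti(X, y, theta); % same code path as the single-feature case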

Gradient descent: gradientDescent.m

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %

    temp = zeros(length(theta), 1);
    for i = 1:length(theta)
        temp(i, 1) = theta(i, 1) + alpha * sum((y - X * theta) .* X(:, i)) / m;
    end
    theta = temp;

    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);

end

end
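A usage sketch (the data and hyperparameters below are my own made-up choices, not from the assignment): prepend the intercept column, run the iterations, and check that the recorded cost decreases:

% Hypothetical single-feature training data
x = [1; 2; 3; 4];
y = [2.1; 3.9; 6.2; 7.8];
m = length(y);

X = [ones(m, 1) x];   % add the intercept term
theta = zeros(2, 1);  % initial parameters
alpha = 0.01;         % learning rate
num_iters = 1500;

[theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters);
plot(1:num_iters, J_history); % the cost should decrease monotonically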

Since I wrote the solution in matrix form from the start, the multivariate version turns out to be identical as well.
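In fact the per-parameter loop can be collapsed into a single matrix expression, since X' * (X * theta - y) stacks sum((X * theta - y) .* X(:, i)) for every i. A sketch of the fully vectorized update, equivalent to the loop above:

% Inside the iteration loop, replacing the temp loop entirely:
theta = theta - (alpha / m) * X' * (X * theta - y);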

Gradient descent, multivariate version: gradientDescentMulti.m

function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
%   theta = GRADIENTDESCENTMULTI(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCostMulti) and gradient here.
    %

    temp = zeros(length(theta), 1);
    for i = 1:length(theta)
        temp(i, 1) = theta(i, 1) + alpha * sum((y - X * theta) .* X(:, i)) / m;
    end
    theta = temp;

    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCostMulti(X, y, theta);

end

end
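J_history also makes it easy to compare learning rates. A minimal sketch, assuming a design matrix X (normalized features plus the intercept column) and y are already in scope; the alpha values are only illustrative:

% Compare convergence for a few learning rates
alphas = [0.3, 0.1, 0.03, 0.01];
num_iters = 50;
hold on;
for k = 1:length(alphas)
    [~, J_history] = gradientDescentMulti(X, y, zeros(size(X, 2), 1), alphas(k), num_iters);
    plot(1:num_iters, J_history); % too large an alpha diverges; too small converges slowly
end
hold off;
xlabel('Number of iterations'); ylabel('Cost J');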

Feature scaling: featureNormalize.m

function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X
%   FEATURENORMALIZE(X) returns a normalized version of X where
%   the mean value of each feature is 0 and the standard deviation
%   is 1. This is often a good preprocessing step to do when
%   working with learning algorithms.

% You need to set these values correctly
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));

% ====================== YOUR CODE HERE ======================
% Instructions: First, for each feature dimension, compute the mean
%               of the feature and subtract it from the dataset,
%               storing the mean value in mu. Next, compute the
%               standard deviation of each feature and divide
%               each feature by its standard deviation, storing
%               the standard deviation in sigma.
%
%               Note that X is a matrix where each column is a
%               feature and each row is an example. You need
%               to perform the normalization separately for
%               each feature.
%
% Hint: You might find the 'mean' and 'std' functions useful.
%

mu = mean(X);
sigma = std(X);
for i = 1:size(X, 1)
    X(i, :) = (X(i, :) - mu) ./ sigma;
end
X_norm = X;

% ============================================================

end
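The row loop can be avoided entirely. With implicit expansion (MATLAB R2016b+ and Octave) the whole normalization is one line; bsxfun works on older versions. A sketch:

mu = mean(X);
sigma = std(X);
X_norm = (X - mu) ./ sigma; % implicit expansion over rows
% X_norm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma); % pre-R2016b MATLAB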

The formula used:

\[
X_{\text{norm}} = \frac{X - \mu}{\sigma}
\]

where \(\mu\) is the per-feature mean and \(\sigma\) is the per-feature standard deviation.
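The returned mu and sigma matter again at prediction time: a new example must be scaled with the training statistics before being multiplied by theta. A minimal sketch (the feature values are made up):

% Predict for a hypothetical 1650 sq-ft, 3-bedroom house
x_new = [1650 3];
x_new = (x_new - mu) ./ sigma; % use the *training* mu and sigma
price = [1 x_new] * theta;     % prepend the intercept term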

Normal-equation method: normalEqn.m

function [theta] = normalEqn(X, y)
%NORMALEQN Computes the closed-form solution to linear regression
%   NORMALEQN(X,y) computes the closed-form solution to linear
%   regression using the normal equations.

theta = zeros(size(X, 2), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the code to compute the closed form solution
%               to linear regression and put the result in theta.
%

% ---------------------- Sample Solution ----------------------

theta = pinv(X' * X) * X' * y; % pinv also handles a singular X' * X

% -------------------------------------------------------------

% ============================================================

end
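Unlike gradient descent, the normal equation needs no feature scaling, no learning rate, and no iteration. A usage sketch on the same hypothetical data, with X here being the raw (un-normalized) design matrix including the intercept column:

theta = normalEqn(X, y);

% Predict for the same hypothetical house; no scaling needed this time
price = [1 1650 3] * theta;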
