Machine Learning, Homework 9, Neural Nets
April 15, 2019
Contents
Boston Housing with a Single Layer and R package nnet
    Problem
Digit Recognition with R package h2o
    Problem
Boston Housing with a Single Layer and R package nnet
Let’s do a very simple example with single layer neural nets.
We’ll do the Boston housing data with x = lstat and y = medv so that we have one numeric x and a numeric y.
We’ve used this classic data set a few times so we are very familiar with it.
Let’s get the data, pull off x and y and standardize x.
library(MASS) ## a library of example datasets
attach(Boston)
## standardize lstat
rg = range(Boston$lstat)
lstats = (Boston$lstat-rg[1])/(rg[2]-rg[1])
##make data frame with standardized lstat values sorted for plotting
ddf = data.frame(lstats,medv=Boston$medv)
oo = order(ddf$lstats) #order the data by x, convenient for plotting
ddf = ddf[oo,]
head(ddf)
## lstats medv
## 162 0.000000000 50.0
## 163 0.005242826 50.0
## 41 0.006898455 34.9
## 233 0.020419426 41.7
## 193 0.031456954 36.4
## 205 0.031732892 50.0
And here is the familiar plot:
plot(ddf)
[Figure: scatter plot of medv against the standardized lstats]
Let’s fit a simple neural net with one hidden layer of 5 units (neurons).
library(nnet)
set.seed(14)
nn1 = nnet(medv~lstats,ddf,size=5,decay=.1,linout=T,maxit=1000)
## # weights: 16
## initial value 274435.143486
## iter 10 value 14655.902880
## iter 20 value 13675.210318
## iter 30 value 13618.543249
## iter 40 value 13593.167670
## iter 50 value 13548.561442
## iter 60 value 13545.520754
## iter 70 value 13544.330448
## iter 80 value 13541.583759
## iter 90 value 13540.386199
## iter 100 value 13539.604916
## iter 110 value 13536.860853
## iter 120 value 13535.643158
## iter 130 value 13535.589069
## final value 13535.578458
## converged
summary(nn1)
## a 1-5-1 network with 16 weights
## options were - linear output units decay=0.1
## b->h1 i1->h1
## 1.06 0.69
## b->h2 i1->h2
## 2.38 -38.17
## b->h3 i1->h3
## 2.49 -7.61
## b->h4 i1->h4
## 2.05 0.55
## b->h5 i1->h5
## 2.53 -7.60
## b->o h1->o h2->o h3->o h4->o h5->o
## 4.67 3.64 21.22 9.19 3.48 8.93
Now let’s plot the fit:
yhat1 = predict(nn1,ddf)
plot(ddf)
lines(ddf$lstats,yhat1,lty=1,col="red",lwd=3)
[Figure: scatter plot of medv against lstats with the fitted neural net curve overlaid in red]
Notice that you understand exactly how the single-layer neural net produced this fit!
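In fact, we can reconstruct the fitted curve by hand from the weights printed by summary(nn1): each hidden unit applies a logistic sigmoid to its linear input, and the output unit takes a linear combination of the hidden units (since linout=T). A minimal sketch, with the coefficients copied from the summary above (so the match with predict is only up to the rounding in the printout):

```r
## rebuild the fit by hand from the weights printed by summary(nn1)
## hidden units: logistic sigmoid of (bias + weight*x); output unit: linear
xs = ddf$lstats
z1 = plogis(1.06 +  0.69*xs)
z2 = plogis(2.38 - 38.17*xs)
z3 = plogis(2.49 -  7.61*xs)
z4 = plogis(2.05 +  0.55*xs)
z5 = plogis(2.53 -  7.60*xs)
yhat.byhand = 4.67 + 3.64*z1 + 21.22*z2 + 9.19*z3 + 3.48*z4 + 8.93*z5
## compare with predict(nn1,ddf): should agree up to rounding of the printed weights
```

This makes the point of the simple example concrete: the fit is just a sum of five shifted, scaled sigmoids.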
Now let’s fit the 5 unit neural net for a set of decay values.
Let’s do this in parallel using the R parallel package. This is simple enough that we don’t really need to
speed it up, but we can illustrate the approach. You may want to use it for some of the more complicated
model fits!
library(doParallel) #library for parallel computing
## Loading required package: foreach
## Loading required package: iterators
## Loading required package: parallel
registerDoParallel()
cat("number of workers is: ",getDoParWorkers(),"\n")
## number of workers is: 4
#you could pick the number of workers with:
# registerDoParallel(cores=num) where num is the number of workers.
Now we will use the function foreach to fit neural net models in parallel. First we set up a vector of decay
values to try. Then we use foreach to run the neural net fits. foreach will return a list, with the i-th list
element corresponding to the results obtained in the i-th loop iteration.
decv = c(.5,.1,.01,.005,.0025,.001,.0001,.00001)
#do a parallel loop over decay values
modsL = foreach(i=1:length(decv)) %dopar% {
library(nnet) #each worker process needs the package loaded (required when knitting in R Markdown)
set.seed(5*i) #seed each worker's random number generator
nnfit = nnet(medv~lstats,ddf,size=5,decay=decv[i],linout=T,maxit=10000)
nnfit
}
is.list(modsL)
## [1] TRUE
length(modsL)
## [1] 8
The function foreach will launch a bunch of R processes so things like random number seeds may have to be
reset for each process.
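As an aside, if you want the parallel fits to be exactly reproducible without hand-seeding inside the loop, the doRNG companion package to foreach (an assumption: it is not used in these notes, and must be installed separately) lets one master seed drive an independent random stream per iteration:

```r
## reproducible parallel RNG via doRNG (hypothetical alternative to set.seed(5*i))
library(doRNG)
registerDoRNG(14)  #one master seed; each loop iteration gets its own stream
modsL = foreach(i=1:length(decv)) %dopar% {
  library(nnet)
  nnet(medv~lstats,ddf,size=5,decay=decv[i],linout=T,maxit=10000)
}
```

Rerunning the loop after another registerDoRNG(14) call reproduces the same fits.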
Now we can plot all the fits by looping over the list of models.
plot(ddf)
for(i in 1:length(modsL)) {
yhat = predict(modsL[[i]],ddf)
lines(ddf$lstats,yhat,col=i,lty=i,lwd=2)
}
[Figure: medv against lstats with the eight fits, one per decay value, overlaid]
Problem
Fit the neural net model with size=100 and decay=.001 and plot the fit. How does it look? Try running
the fit at least twice to see that it changes.
Redo the loop over decay values with size=100. How does it look now? Do we need 100 units? Will
decay be more important with 100 units than it was with 5?
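A minimal starting sketch for the first part, reusing the setup from above (the seed is an arbitrary choice of mine; since the starting weights are random, dropping or changing it between runs will give different fits):

```r
## sketch for the problem: one big hidden layer, small decay
set.seed(21)  #arbitrary seed; change or remove it between runs
nn100 = nnet(medv~lstats,ddf,size=100,decay=.001,linout=T,maxit=1000)
plot(ddf)
lines(ddf$lstats,predict(nn100,ddf),col="blue",lwd=3)
```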
Digit Recognition with R package h2o
First, let’s fire up h2o.
print(date())
## [1] "Tue Apr 16 16:22:12 2019"
library(h2o)
##
## ----------------------------------------------------------------------
##
## Your next step is to start H2O:
## > h2o.init()
##
## For H2O package documentation, ask for help:
## > ??h2o
##
## After starting H2O, you can use the Web UI at http://localhost:54321
## For more information visit http://docs.h2o.ai
##
## ----------------------------------------------------------------------
##
## Attaching package: 'h2o'
## The following objects are masked from 'package:stats':
##
## cor, sd, var
## The following objects are masked from 'package:base':
##
## &&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames,
## colnames<-, ifelse, is.character, is.factor, is.numeric, log,
## log10, log1p, log2, round, signif, trunc
h2o.init()
##
## H2O is not running yet, starting it now...
##
## Note: In case of errors look at the following log files:
## /tmp/RtmpzaPmRq/h2o_root_started_from_r.out
## /tmp/RtmpzaPmRq/h2o_root_started_from_r.err
##
##
## Starting H2O JVM and connecting: . Connection successful!
##
## R is connected to the H2O cluster:
## H2O cluster uptime: 1 seconds 203 milliseconds
## H2O cluster timezone: America/Phoenix
## H2O data parsing timezone: UTC
## H2O cluster version: 3.20.0.8
## H2O cluster version age: 6 months and 25 days !!!
## H2O cluster name: H2O_started_from_R_root_jrw534
## H2O cluster total nodes: 1
## H2O cluster total memory: 6.84 GB
## H2O cluster total cores: 8
## H2O cluster allowed cores: 8
## H2O cluster healthy: TRUE
## H2O Connection ip: localhost
## H2O Connection port: 54321
## H2O Connection proxy: NA
## H2O Internal Security: FALSE
## H2O API Extensions: XGBoost, Algos, AutoML, Core V3, Core V4
## R Version: R version 3.5.1 (2018-07-02)
## Warning in h2o.clusterInfo():
## Your H2O cluster version is too old (6 months and 25 days)!
## Please download and install the latest version from http://h2o.ai/download/
Now we can read in the data.
In order to make things run faster I’ll downsample to just ns = 10,000 observations.
train60D = read.csv("http://www.rob-mcculloch.org/data/mnist-train.csv")
train60D$C785 = as.factor(train60D$C785)
n = nrow(train60D)
set.seed(99)
ns = 10000
trainDS = train60D[sample(1:n,ns),]
trainS = as.h2o(trainDS,"trainS")
## |=================================================================| 100%
testD = read.csv("http://www.rob-mcculloch.org/data/mnist-test.csv")
testD$C785 = as.factor(testD$C785)
test = as.h2o(testD,"test")
## |=================================================================| 100%
x=1:784;y=785
print(ls())
## [1] "ddf" "decv" "i" "lstats" "modsL" "n"
## [7] "nn1" "ns" "oo" "rg" "test" "testD"
## [13] "train60D" "trainDS" "trainS" "x" "y" "yhat"
## [19] "yhat1"
print(h2o.ls())
## key
## 1 test
## 2 trainS
Let’s run h2o.deeplearning at settings similar to the ones that were found to work in the lecture notes. I
dropped the layer/node architecture down to (50,50) so it would run faster. On my laptop it took about 90
seconds to run the one below.
I don’t know how long it will take on your machine.
fp = file.path("./files","mDNNdrop")
if(file.exists(fp)) {
mDNNdrop = h2o.loadModel(fp)
} else {
tm = system.time({
mDNNdrop = h2o.deeplearning(x,y,training_frame = trainS,
hidden=c(50,50),
activation="TanhWithDropout",
hidden_dropout_ratios=c(.1,.1),
l1=1e-4,
epochs=2000,
model_id="mDNNdrop",
validation_frame=test)
})
}
## Warning in .h2o.startModelJob(algo, params, h2oRestApiVersion): Dropping bad and constant columns: [C646, C645, C644, C365, C760, C51, C53, C52, C55, C54, C57, C56, C59, C58, C533, C253, C60, C703, C702, C701, C700, C1, C422, C2, C784, C3, C420, C783, C4, C782, C5, C143, C781, C6, C142, C780, C7, C141, C8, C9, C674, C673, C672, C393, C84, C83, C86, C85, C88, C87, C729, C728, C727, C726, C169, C561, C281, C11, C10, C12, C15, C617, C616, C17, C16, C19, C18, C699, C732, C731, C730, C450, C170, C20, C22, C21, C24, C23, C26, C25, C28, C505, C27, C29, C589, C225, C588, C31, C30, C32, C35, C759, C758, C757, C756, C755, C754, C115, C753, C477, C113, C112, C111, C197].
## |=================================================================| 100%
cat("the time is: ",tm,"\n")
## the time is: 0.617 0.005 80.098 0 0
print(h2o.confusionMatrix(mDNNdrop,valid=TRUE))
## Confusion Matrix: Row labels: Actual class; Column labels: Predicted class
## 0 1 2 3 4 5 6 7 8 9 Error Rate
## 0 959 0 2 1 0 9 6 1 2 0 0.0214 = 21 / 980
## 1 0 1111 1 7 0 1 5 3 7 0 0.0211 = 24 / 1,135
## 2 20 3 954 14 7 1 10 7 16 0 0.0756 = 78 / 1,032
## 3 0 1 18 946 0 15 2 16 9 3 0.0634 = 64 / 1,010
## 4 1 0 3 0 938 1 11 4 3 21 0.0448 = 44 / 982
## 5 6 2 5 33 10 775 15 11 29 6 0.1312 = 117 / 892
## 6 10 4 5 1 9 12 911 3 3 0 0.0491 = 47 / 958
## 7 3 5 19 8 8 0 2 964 1 18 0.0623 = 64 / 1,028
## 8 11 6 7 23 12 13 10 11 874 7 0.1027 = 100 / 974
## 9 7 6 3 12 33 8 2 22 4 912 0.0961 = 97 / 1,009
## Totals 1017 1138 1017 1045 1017 835 974 1042 948 967 0.0656 = 656 / 10,000
missclass = h2o.performance(mDNNdrop,valid=TRUE)@metrics$mean_per_class_error
cat("the mean per class error is: ",missclass,"\n")
## the mean per class error is: 0.06676157
## if you like it, keep it
#h2o.saveModel(mDNNdrop,path="./files")
print(date())
## [1] "Tue Apr 16 16:25:01 2019"
Problem
I always used dropout. Is that a good idea? Change the settings to not use dropout. Is it worse or
better? Do a couple of runs.
Look at the help for h2o.deeplearning. Pick another option and try changing it to see if you can improve
the prediction.
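For the first part, a minimal sketch of the no-dropout version: switch the activation from "TanhWithDropout" to plain "Tanh" and drop the hidden_dropout_ratios argument. The model_id below is a made-up name; everything else is copied from the fit above.

```r
## same architecture as mDNNdrop, but without dropout
mDNNnodrop = h2o.deeplearning(x,y,training_frame = trainS,
             hidden=c(50,50),
             activation="Tanh",       #no dropout version of the activation
             l1=1e-4,
             epochs=2000,
             model_id="mDNNnodrop",   #hypothetical name for this sketch
             validation_frame=test)
print(h2o.confusionMatrix(mDNNnodrop,valid=TRUE))
```

Compare the validation confusion matrix and mean per class error with the dropout fit over a couple of runs.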