Introduction

For this first programming assignment you will write three functions that interact with the dataset that accompanies this assignment. The dataset is contained in a zip file, specdata.zip, that you can download from the Coursera web site.

Data

The zip file containing the data can be downloaded here:

The zip file contains 332 comma-separated-value (CSV) files containing pollution monitoring data for fine particulate matter (PM) air pollution at 332 locations in the United States. Each file contains data from a single monitor and the ID number for each monitor is contained in the file name. For example, data for monitor 200 is contained in the file “200.csv”. Each file contains three variables:

  • Date: the date of the observation in YYYY-MM-DD format (year-month-day)
  • sulfate: the level of sulfate PM in the air on that date (measured in micrograms per cubic meter)
  • nitrate: the level of nitrate PM in the air on that date (measured in micrograms per cubic meter)

For this programming assignment you will need to unzip this file and create the directory ‘specdata’. Once you have unzipped the zip file, do not make any modifications to the files in the ‘specdata’ directory. In each file you’ll notice that there are many days where either sulfate or nitrate (or both) are missing (coded as NA). This is common with air pollution monitoring data in the United States.
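
As a quick first look at one of these files, here is a minimal sketch (it assumes the unzipped specdata directory sits in your working directory; monitor 200 is just an example):

dat <- read.csv("specdata/200.csv")
str(dat)                   # Date, sulfate, nitrate
sum(is.na(dat$sulfate))    # how many sulfate values are missing
sum(is.na(dat$nitrate))    # how many nitrate values are missing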

Part 1

Write a function named ‘pollutantmean’ that calculates the mean of a pollutant (sulfate or nitrate) across a specified list of monitors. The function ‘pollutantmean’ takes three arguments: ‘directory’, ‘pollutant’, and ‘id’. Given a vector of monitor ID numbers, ‘pollutantmean’ reads those monitors’ particulate matter data from the directory specified in the ‘directory’ argument and returns the mean of the pollutant across all of the monitors, ignoring any missing values coded as NA. A prototype of the function is as follows:
pollutantmean <- function(directory, pollutant, id = 1:332) {
        ## 'directory' is a character vector of length 1 indicating
        ## the location of the CSV files

        ## 'pollutant' is a character vector of length 1 indicating
        ## the name of the pollutant for which we will calculate the
        ## mean; either "sulfate" or "nitrate".

        ## 'id' is an integer vector indicating the monitor ID numbers
        ## to be used

        ## Return the mean of the pollutant across all monitors listed
        ## in the 'id' vector (ignoring NA values)
        ## NOTE: Do not round the result!
}
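
The “ignoring NA values” requirement comes down to the na.rm argument of mean(). A toy illustration on made-up numbers (not part of the assignment data):

x <- c(1.2, NA, 3.4, NA, 5.6)
mean(x)                 # NA, because missing values propagate by default
mean(x, na.rm = TRUE)   # 3.4, the missing values are dropped first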

You can see some example output from this function. The function that you write should be able to match this output. Please save your code to a file named pollutantmean.R.

Part 2

Write a function that reads a directory full of files and reports the number of completely observed cases in each data file. The function should return a data frame where the first column is the name of the file and the second column is the number of complete cases. A prototype of this function follows

complete <- function(directory, id = 1:332) {
        ## 'directory' is a character vector of length 1 indicating
        ## the location of the CSV files

        ## 'id' is an integer vector indicating the monitor ID numbers
        ## to be used

        ## Return a data frame of the form:
        ## id nobs
        ## 1  117
        ## 2  1041
        ## ...
        ## where 'id' is the monitor ID number and 'nobs' is the
        ## number of complete cases
}
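
The counting itself can be done with complete.cases(). A toy illustration on a made-up data frame (values are not from the dataset):

df <- data.frame(sulfate = c(1.1, NA, 2.2),
                 nitrate = c(0.5, 0.7, NA))
complete.cases(df)        # TRUE FALSE FALSE
sum(complete.cases(df))   # 1, the number of fully observed rows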

You can see some example output from this function. The function that you write should be able to match this output. Please save your code to a file named complete.R. To run the submit script for this part, make sure your working directory has the file complete.R in it.

Part 3

Write a function that takes a directory of data files and a threshold for complete cases and calculates the correlation between sulfate and nitrate for monitor locations where the number of completely observed cases (on all variables) is greater than the threshold. The function should return a vector of correlations for the monitors that meet the threshold requirement. If no monitors meet the threshold requirement, then the function should return a numeric vector of length 0. A prototype of this function follows

corr <- function(directory, threshold = 0) {
        ## 'directory' is a character vector of length 1 indicating
        ## the location of the CSV files

        ## 'threshold' is a numeric vector of length 1 indicating the
        ## number of completely observed observations (on all
        ## variables) required to compute the correlation between
        ## nitrate and sulfate; the default is 0

        ## Return a numeric vector of correlations
        ## NOTE: Do not round the result!
}

For this function you will need to use the ‘cor’ function in R, which calculates the correlation between two vectors. Please read the help page for this function via ‘?cor’ and make sure that you know how to use it.
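A toy illustration of cor() on two short made-up vectors; the use argument controls how pairs containing NAs are handled:

x <- c(1, 2, 3, 4, 5)
y <- c(2, 4, 5, 4, 5)
cor(x, y)                                         # Pearson correlation of the two vectors
cor(x, c(2, NA, 5, 4, 5), use = "complete.obs")   # pairs with an NA are dropped first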
You can see some example output from this function. The function that you write should be able to match this output. Please save your code to a file named corr.R. To run the submit script for this part, make sure your working directory has the file corr.R in it.

--------------------------------------------------------------Answer Section------------------------------------------------------------------------

You can click the link directly to download the file and then unzip it yourself,

or define a custom R get_specdata() function to carry out the steps above.

# Define get_specdata()
get_specdata <- function(dest_file) {
        specdata_url <- "https://storage.googleapis.com/jhu_rprg/specdata.zip"  # URL of the file to download
        download.file(specdata_url, destfile = dest_file)  # download with download.file(); destfile = the destination path (note: '~' here resolves to the R program's working directory)
        unzip(dest_file)                                    # unzip the file into RStudio's working directory
}
get_specdata("~/specdata.zip")

# A get_specdata() that lets you specify the extraction directory
get_specdata <- function(dest_file, ex_dir) {
        specdata_url <- "https://storage.googleapis.com/jhu_rprg/specdata.zip"
        download.file(specdata_url, destfile = dest_file)
        unzip(dest_file, exdir = ex_dir)                    # exdir sets the extraction directory (note: '~' again refers to the R program's working directory)
}
get_specdata("~/specdata.zip", "D:/R/Project")

pollutantmean()

pollutantmean <- function(directory, pollutant, id = 1:332) {
        CSV_files_dir <- list.files(directory, full.names = TRUE)  # list the files in the target directory with their full paths
        dataf <- data.frame()
        for (i in id) {
                dataf <- rbind(dataf, read.csv(CSV_files_dir[i]))  # rbind binds each monitor's data on as new rows
        }
        mean(dataf[, pollutant], na.rm = TRUE)  # mean of the requested column across all rows, ignoring NAs
}

An alternative version for reference

pollutantmean <- function(directory, pollutant, id = 1:332) {
        pollutants = c()                    # empty vector to collect the measurements
        filenames = list.files(directory)   # full.names is not used here, so only the file names are returned
        for (i in id) {
                filepath = paste(directory, "/", filenames[i], sep = "")  # paste the directory and file name into the full path filepath
                data = read.csv(filepath, header = TRUE)                  # read the target file (with its header) into data
                pollutants = c(pollutants, data[, pollutant])             # append this file's measurements to the pollutants vector
        }
        pollutants_mean = mean(pollutants, na.rm = TRUE)  # compute the mean, ignoring NAs, and store it in pollutants_mean
        pollutants_mean                                   # return it
}
Practice
pollutantmean("specdata", "sulfate", 1:10)
[1] 4.064
pollutantmean("specdata", "nitrate", 70:72)
[1] 1.706
pollutantmean("specdata", "sulfate", 34)
[1] 1.477
pollutantmean("specdata", "nitrate")
[1] 1.703

complete()

complete <- function(directory, id = 1:332) {
        CSV_files <- list.files(directory, full.names = TRUE)
        datadf <- data.frame()
        for (i in id) {
                moni_i <- read.csv(CSV_files[i])
                nobs <- sum(complete.cases(moni_i))  # complete.cases() returns a logical vector marking complete rows; sum() counts the TRUEs
                tmpdf <- data.frame(i, nobs)         # store the monitor ID and its count as a data frame
                datadf <- rbind(datadf, tmpdf)       # bind the new data on as a new row
        }
        colnames(datadf) <- c("id", "nobs")          # name the columns
        datadf                                       # return the data frame
}

Output: a data frame

Practice
Check the number of complete observations for the specified monitors
cc <- complete("specdata", c(6, 10, 20, 34, 100, 200, 310))  # cc has two columns: "id" and "nobs"
print(cc$nobs)   # the nobs vector
[1] 228 148 124 165 104 460 232
Check the number of complete observations for a single specified monitor
cc <- complete("specdata", 54)   # cc has two columns: "id" and "nobs"
print(cc$nobs)   # the nobs vector
[1] 219
Randomly sample 10 monitors and check their numbers of complete observations
set.seed(42)
cc <- complete("specdata", 332:1)  # cc has "id" and "nobs" columns; the rows are read in reverse order, which does not matter here
use <- sample(332, 10)             # draw 10 random indices out of 332 into the vector use
print(cc[use, "nobs"])             # the "nobs" values of the rows in use
[1] 711 135 74 445 178 73 49 0 687 237

corr()

corr <- function(directory, threshold = 0) {         # the threshold defaults to 0
        CSV_files <- list.files(directory, full.names = TRUE)
        dat <- vector(mode = "numeric", length = 0)   # create an empty numeric vector
        for (i in 1:length(CSV_files)) {
                moni_i <- read.csv(CSV_files[i])      # no 'id' argument here; iterate over the number of files found
                csum <- sum((!is.na(moni_i$sulfate)) & (!is.na(moni_i$nitrate)))  # count the rows where neither sulfate nor nitrate is NA
                if (csum > threshold) {               # only monitors above the threshold
                        tmp <- moni_i[which(!is.na(moni_i$sulfate)), ]            # keep rows where sulfate is observed
                        submoni_i <- tmp[which(!is.na(tmp$nitrate)), ]            # then keep rows where nitrate is observed
                        dat <- c(dat, cor(submoni_i$sulfate, submoni_i$nitrate))  # append the cor() value to the dat vector
                }
        }
        dat
}

Output: a numeric vector

Practice
Sort the correlations, randomly sample 5 of them, and round to four decimal places
cr <- corr("specdata")
cr <- sort(cr)
set.seed(868)
out <- round(cr[sample(length(cr), 5)], 4)
print(out)
[1] 0.2688 0.1127 -0.0085 0.4586 0.0447
Count the monitors with more than 129 complete observations, then sort their correlations, randomly sample 5, and round to four decimal places
cr <- corr("specdata", 129)
cr <- sort(cr)
n <- length(cr)
set.seed(197)
out <- c(n, round(cr[sample(n, 5)], 4))
print(out)
[1] 243.0000 0.2540 0.0504 -0.1462 -0.1680 0.5969
Count the monitors with more than 2000 complete observations, then take those with more than 1000 complete observations, sort their correlations, and round to four decimal places
cr <- corr("specdata", 2000)
n <- length(cr)
cr <- corr("specdata", 1000)
cr <- sort(cr)
print(c(n, round(cr, 4)))
[1] 0.0000 -0.0190 0.0419 0.1901
