Scraping data from web pages in R with the XML package
In recent years a lot of data has been released publicly in different formats, but sometimes the data we're interested in is still embedded in the HTML of a web page: let's see how to get it.
One of the existing packages for this job is the XML package. It allows us to read and create XML and HTML documents; among its many features there's a function called readHTMLTable() that analyzes the parsed HTML and returns the tables present in the page. The details are available in the package's official documentation.
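If the XML package is not already installed, it can be pulled in from CRAN first (a minimal setup sketch, assuming an internet connection):

# installs the XML package from CRAN (only needed once)
install.packages("XML")
# loads the package into the current session
library(XML)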
Let's start.
Suppose we're interested in the Italian demographic data present in this page http://sdw.ecb.europa.eu/browse.do?node=2120803 from the EU website. We start by loading and parsing the page:
# URL of the page with the demographic data
page <- "http://sdw.ecb.europa.eu/browse.do?node=2120803"
# lets the XML package parse the HTML of the page
parsed <- htmlParse(page)
Now that we have the parsed HTML, we can use the readHTMLTable() function to get back a list with all the tables present in the page; we'll call the function with these parameters:
- parsed: the parsed HTML
- skip.rows: the rows we want to skip (at the beginning of this table there are a couple of rows that contain only formatting elements, no data)
- colClasses: the data type of each column of the table (in our case all the columns hold integer values); the rep() function replicates the "integer" value 31 times, once per column
table <- readHTMLTable(parsed, skip.rows=c(1,3,4,5), colClasses = c(rep("integer", 31)))
As we can see from the page source code, this web page contains six HTML tables; the one that contains the data we're interested in is the fifth, so we extract that one from the list of tables, as a data frame:
values <- as.data.frame(table[5])
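To double-check that we picked the right table, we can inspect the length of the list and peek at the first rows (a quick sanity check, not part of the original script):

length(table) # should print 6, one element per HTML table in the page
head(values)  # shows the first rows of the extracted data frame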
Just for convenience, we rename the columns holding the period and the Italian data:
# renames the columns for the period and Italy
colnames(values)[1] <- 'Period'
colnames(values)[19] <- 'Italy'
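A quick look at the renamed columns confirms the change (an optional check):

colnames(values)[c(1,19)] # expected output: "Period" "Italy"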
The Italian data runs from 1990 to 2014, so we have to subset only those rows and, of course, only the two columns holding the period and the Italian data:
# subsets the data: we are interested only in the first and the 19th column (period and Italian info)
ids <- values[c(1,19)]
# Italy has only 25 years of info, so we cut away the other rows
ids <- as.data.frame(ids[1:25,])
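We can verify the result with str(), which should report a data frame of 25 observations of our two variables (again, just a sanity check):

str(ids) # expected: 'data.frame': 25 obs. of 2 variables (Period, Italy)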
Now we can plot these data calling the plot function with these parameters:
- ids: the data to plot
- xlab: the label of the X axis
- ylab: the label of the Y axis
- main: the title of the plot
- pch: the symbol to draw for every point (19 is a solid circle; see ?points for an overview of the available symbols)
- cex: the size of the symbol
plot(ids, xlab="Year", ylab="Population in thousands", main="Population 1990-2014", pch=19, cex=0.5)
and here is the result:

[Plot: Italian population in thousands, 1990-2014]
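If you want to keep the chart as an image file, base R can redirect the same plot to a PNG device (an optional step, not in the original post; the file name is arbitrary):

# opens a PNG graphics device, draws the same plot into it, then closes the device
png("italy_population.png", width=800, height=600)
plot(ids, xlab="Year", ylab="Population in thousands", main="Population 1990-2014", pch=19, cex=0.5)
dev.off()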
Here's the full code, also available on my GitHub:
library(XML)

# sets the URL
url <- "http://sdw.ecb.europa.eu/browse.do?node=2120803"

# lets the XML library parse the HTML of the page
parsed <- htmlParse(url)

# reads the HTML tables present inside the page, paying attention
# to the data types contained in the HTML table
table <- readHTMLTable(parsed, skip.rows=c(1,3,4,5), colClasses = c(rep("integer", 31)))

# this web page contains six HTML tables, but the one that contains the data
# is the fifth
values <- as.data.frame(table[5])

# renames the columns for the period and Italy
colnames(values)[1] <- 'Period'
colnames(values)[19] <- 'Italy'

# now subsets the data: we are interested only in the first and
# the 19th column (period and Italy info)
ids <- values[c(1,19)]

# Italy has only 25 years of info, so we cut away the other rows
ids <- as.data.frame(ids[1:25,])

# plots the data
plot(ids, xlab="Year", ylab="Population in thousands", main="Population 1990-2014", pch=19, cex=0.5)

from: http://andreaiacono.blogspot.com/2014/01/scraping-data-from-web-pages-in-r-with.html