Sunshine Hotline Q&A Platform (阳光热线问政平台)

URL: http://wz.sun0769.com/index.php/question/questionType?type=4&page=

Fields to scrape: post ID, complaint type, post title, post URL, department, status, netizen (submitter), and time.
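The listing is paginated through the trailing page= parameter, and the spider below advances it by 30 records at a time. A minimal sketch of the URLs this produces (first three pages only):

base_url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
for offset in range(0, 90, 30):  # page offsets 0, 30, 60
    print(base_url + str(offset))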

1. items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class SunwzspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # Fields scraped for each complaint post: ID, complaint type, title,
    # URL, department, status, netizen and time.
    # post ID
    post_id = scrapy.Field()
    # complaint type
    post_type = scrapy.Field()
    # post title
    post_title = scrapy.Field()
    # post URL
    post_url = scrapy.Field()
    # department
    sector = scrapy.Field()
    # status
    post_state = scrapy.Field()
    # netizen (submitter)
    net_friend = scrapy.Field()
    # time
    post_time = scrapy.Field()
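As a quick illustration (not one of the project files), a SunwzspiderItem behaves like a dict once its declared fields are assigned; the values below are hypothetical:

from sunwzSpider.items import SunwzspiderItem

item = SunwzspiderItem()
item["post_id"] = "191166"                      # hypothetical value
item["post_title"] = "example complaint title"  # hypothetical value
print(dict(item))  # {'post_id': '191166', 'post_title': 'example complaint title'}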

2. spiders/sunwz.py

# -*- coding: utf-8 -*-
import scrapy
from sunwzSpider.items import SunwzspiderItem


class SunwzSpider(scrapy.Spider):
    name = 'sunwz'
    allowed_domains = ['wz.sun0769.com']
    url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
    offset = 0
    start_urls = [url + str(offset)]

    def parse(self, response):
        table = response.xpath("//table[@width='98%']")[0]
        trs = table.xpath("./tr")
        # flag: did this page contain post rows (decides whether to crawl the next page)
        next_flag = False
        for tr in trs:
            next_flag = True
            try:
                item = SunwzspiderItem()
                # post ID
                post_id = tr.xpath("./td/text()").extract()[0]
                td2 = tr.xpath("./td")[1]
                # complaint type
                post_type = td2.xpath("./a/text()").extract()[0]
                # post title
                post_title = td2.xpath("./a/text()").extract()[1]
                # post URL
                post_url = td2.xpath("./a/@href").extract()[1]
                # department
                sector = td2.xpath("./a/text()").extract()[2]
                td3 = tr.xpath("./td")[2]
                # status
                post_state = td3.xpath("./span/text()").extract()[0]
                # netizen
                net_friend = tr.xpath("./td/text()").extract()[3]
                # time
                post_time = tr.xpath("./td/text()").extract()[4]

                item["post_id"] = post_id
                item["post_type"] = post_type
                item["post_title"] = post_title
                item["post_url"] = post_url
                item["sector"] = sector
                item["post_state"] = post_state
                item["net_friend"] = net_friend
                item["post_time"] = post_time

                yield item
            except IndexError:
                # header or malformed rows lack the expected cells; skip them
                pass

        # crawl the next page only if this page still contained post rows
        if next_flag:
            self.offset += 30
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
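The XPath expressions in parse() can be sanity-checked before a full crawl by opening one listing page in the Scrapy shell (scrapy shell "<listing URL>"); the same selectors then work on the provided response object. A sketch, assuming the page structure is unchanged:

# inside the Scrapy shell, `response` is the downloaded listing page
table = response.xpath("//table[@width='98%']")[0]
trs = table.xpath("./tr")
print(len(trs))                               # number of rows in the listing table
print(trs[0].xpath("./td/text()").extract())  # raw text cells of the first row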

3. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json


class SunwzspiderPipeline(object):
    def __init__(self):
        # output file for the scraped posts (a JSON array, written as UTF-8)
        self.file = open("阳光问政平台.json", "w", encoding="utf-8")
        self.first_flag = True

    def process_item(self, item, spider):
        # write "[" before the first record and "," before every later one,
        # so the finished file is a valid JSON array
        if self.first_flag:
            self.first_flag = False
            content = "[\n" + json.dumps(dict(item), ensure_ascii=False)
        else:
            content = ",\n" + json.dumps(dict(item), ensure_ascii=False)
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.write("\n]")
        self.file.close()
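The ensure_ascii=False argument is what keeps the Chinese text readable in the output file; a standalone illustration with hypothetical values:

import json

record = {"post_title": "投诉某问题", "post_state": "已回复"}  # hypothetical record
print(json.dumps(record))                      # non-ASCII escaped: {"post_title": "\u6295\u8bc9..."}
print(json.dumps(record, ensure_ascii=False))  # readable UTF-8 output

For a one-off export, Scrapy's built-in feed export (scrapy crawl sunwz -o posts.json) also writes a JSON array without any custom pipeline; the hand-written pipeline above mainly gives control over the file name and encoding.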

4. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for sunwzSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sunwzSpider'

SPIDER_MODULES = ['sunwzSpider.spiders']
NEWSPIDER_MODULE = 'sunwzSpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sunwzSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 2
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36'
    # 'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.SunwzspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sunwzSpider.pipelines.SunwzspiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
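With the four files in place, the spider is started from the project root with scrapy crawl sunwz. Alternatively, a small runner script can launch it from plain Python; a sketch using Scrapy's CrawlerProcess API (the file name run.py is just a suggestion):

# run.py - optional launcher placed next to scrapy.cfg
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from sunwzSpider.spiders.sunwz import SunwzSpider

process = CrawlerProcess(get_project_settings())  # picks up the settings.py above
process.crawl(SunwzSpider)
process.start()  # blocks until the crawl finishes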
