阳光热线问政平台 (Sunshine Hotline government inquiry platform)

URL: http://wz.sun0769.com/index.php/question/questionType?type=4&page=

Fields to scrape: post ID, complaint type, post title, post URL, department, status, user, and time.
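If you are rebuilding this project from scratch, a minimal setup might look like the following; the project name sunwzSpider is the one used throughout this post, and the commands are standard Scrapy CLI usage:

scrapy startproject sunwzSpider
cd sunwzSpider
scrapy genspider sunwz wz.sun0769.com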

1.items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class SunwzspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # post ID, complaint type, post title, post URL, department, status, user, time
    # post ID
    post_id = scrapy.Field()
    # complaint type
    post_type = scrapy.Field()
    # post title
    post_title = scrapy.Field()
    # post URL
    post_url = scrapy.Field()
    # department
    sector = scrapy.Field()
    # status
    post_state = scrapy.Field()
    # user
    net_friend = scrapy.Field()
    # time
    post_time = scrapy.Field()

2.spiders/sunwz.py

# -*- coding: utf-8 -*-
import scrapy
from sunwzSpider.items import SunwzspiderItem


class SunwzSpider(scrapy.Spider):
    name = 'sunwz'
    allowed_domains = ['wz.sun0769.com']
    url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
    offset = 0
    start_urls = [url + str(offset)]

    def parse(self, response):
        table = response.xpath("//table[@width='98%']")[0]
        trs = table.xpath("./tr")
        # flag: whether to crawl the next page
        next_flag = False
        for tr in trs:
            next_flag = True
            try:
                item = SunwzspiderItem()
                # post ID
                post_id = tr.xpath("./td/text()").extract()[0]
                td2 = tr.xpath("./td")[1]
                # complaint type
                post_type = td2.xpath("./a/text()").extract()[0]
                # post title
                post_title = td2.xpath("./a/text()").extract()[1]
                # post URL
                post_url = td2.xpath("./a/@href").extract()[1]
                # department
                sector = td2.xpath("./a/text()").extract()[2]
                td3 = tr.xpath("./td")[2]
                # status
                post_state = td3.xpath("./span/text()").extract()[0]
                # user
                net_friend = tr.xpath("./td/text()").extract()[3]
                # time
                post_time = tr.xpath("./td/text()").extract()[4]

                item["post_id"] = post_id
                item["post_type"] = post_type
                item["post_title"] = post_title
                item["post_url"] = post_url
                item["sector"] = sector
                item["post_state"] = post_state
                item["net_friend"] = net_friend
                item["post_time"] = post_time

                yield item
            except:
                pass

        # decide whether to crawl the next page
        if next_flag:
            self.offset += 30
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
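A quick way to check the XPath expressions above before running the full crawl is Scrapy's interactive shell. The session below is only a sketch; the page layout may have changed since this post was written, so the selectors may need adjusting:

scrapy shell "http://wz.sun0769.com/index.php/question/questionType?type=4&page=0"

# inside the shell:
>>> table = response.xpath("//table[@width='98%']")[0]
>>> table.xpath("./tr")[1].xpath("./td/text()").extract()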

3.pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json


class SunwzspiderPipeline(object):
    def __init__(self):
        self.file = open("阳光问政平台.json", "w", encoding="utf-8")
        self.first_flag = True

    def process_item(self, item, spider):
        if self.first_flag:
            self.first_flag = False
            content = "[\n" + json.dumps(dict(item), ensure_ascii=False)
        else:
            content = ",\n" + json.dumps(dict(item), ensure_ascii=False)
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.write("\n]")
        self.file.close()
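As an aside, if hand-building the JSON array is not a requirement, Scrapy's built-in feed export can write an equivalent JSON file without a custom pipeline. This is only a sketch; the output filename sunwz.json is arbitrary, and FEED_EXPORT_ENCODING keeps Chinese text unescaped in the output:

# settings.py
FEED_EXPORT_ENCODING = 'utf-8'

# command line
scrapy crawl sunwz -o sunwz.json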

4.settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for sunwzSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sunwzSpider'

SPIDER_MODULES = ['sunwzSpider.spiders']
NEWSPIDER_MODULE = 'sunwzSpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sunwzSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 2
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36'
    # 'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.SunwzspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sunwzSpider.pipelines.SunwzspiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
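With the four files in place, start the crawl from the project root; the spider name matches the one defined in spiders/sunwz.py, and the pipeline writes 阳光问政平台.json into the working directory:

scrapy crawl sunwz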
