Notes:

"""
1、用命令创建一个crawlspider的模板:scrapy genspider -t crawl <爬虫名> <all_domain>,也可以手动创建
2、CrawlSpider中不能再有以parse为名字的数据提取方法,这个方法被CrawlSpider用来实现基础url提取等功能
3、一个Rule对象接受很多参数,首先第一个是包含url规则的LinkExtractor对象,
常有的还有callback(制定满足规则的解析函数的字符串)和follow(response中提取的链接是否需要跟进)
4、不指定callback函数的请求下,如果follow为True,满足rule的url还会继续被请求
5、如果多个Rule都满足某一个url,会从rules中选择第一个满足的进行操作
"""

1. Create the project

scrapy startproject zjh
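
If the command succeeds, Scrapy generates the standard project layout (shown here for orientation):

zjh/
    scrapy.cfg
    zjh/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py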

2. Generate the spider

scrapy genspider -t crawl circ bxjg.circ.gov.cn
Unlike the plain genspider command, the -t crawl template option is added.
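
The -t crawl template writes a skeleton roughly like the following into zjh/spiders/circ.py (exact contents vary slightly by Scrapy version; template comments trimmed here):

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CircSpider(CrawlSpider):
    name = 'circ'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        return item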

3. Set the log level and USER_AGENT in the settings file

# -*- coding: utf-8 -*-

# Scrapy settings for zjh project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zjh'

SPIDER_MODULES = ['zjh.spiders']
NEWSPIDER_MODULE = 'zjh.spiders'

LOG_LEVEL = "WARNING"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zjh (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'zjh.middlewares.ZjhSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'zjh.middlewares.ZjhDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'zjh.pipelines.ZjhPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

4. Extract the data in circ.py

# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CircSpider(CrawlSpider):
    name = 'circ'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm']

    # Define the rules for extracting URLs
    rules = (
        # One Rule per pattern; LinkExtractor is the link extractor that pulls URLs from the response.
        # allow: the URLs to extract; they may be incomplete, but CrawlSpider completes them before requesting.
        # callback: the response of each extracted URL is handed to this method.
        # follow: whether the response of the current URL is run through the rules again to extract more URLs.
        # Detail pages: need a callback, no need to follow.
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'), callback='parse_item'),
        # Next page: no callback needed, but follow=True keeps the pagination going.
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/module14430/page\d+\.htm'), follow=True),
    )

    # parse() has special duties in CrawlSpider, so don't define it.
    def parse_item(self, response):
        item = {}
        item["title"] = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", response.body.decode())[0]
        item["publish_date"] = re.findall(r"发布时间:20\d{2}-\d{2}-\d{2}", response.body.decode())[0]
        print(item)
        # You can also build follow-up requests yourself with Request():
        # yield scrapy.Request(
        #     url,
        #     callback=self.parse_detail,
        #     meta={"item": item}
        # )

    def parse_detail(self, response):
        pass
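
Run the spider from the project root as usual; with LOG_LEVEL = "WARNING" the print() output is easy to spot. The items below are illustrative, not captured from a real run:

$ scrapy crawl circ
{'title': '...', 'publish_date': '发布时间:2019-05-05'}
{'title': '...', 'publish_date': '发布时间:2019-04-28'}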

5. Further notes
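
As a pointer for further exploration, LinkExtractor accepts more than just allow. The following sketch shows a few of its other standard keyword arguments; the values here are illustrative, not from the project above.

from scrapy.linkextractors import LinkExtractor

le = LinkExtractor(
    allow=r'/detail/\d+\.htm',                  # regexes the URL must match
    deny=r'/login',                             # regexes the URL must NOT match
    allow_domains=['bxjg.circ.gov.cn'],         # only extract links on these domains
    restrict_xpaths=['//div[@class="list"]'],   # only look for links inside these page regions
)
# le.extract_links(response) returns the matching Link objects from a response.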
