1. Using Redis: worth studying on your own (I am still learning it myself).

https://www.cnblogs.com/ywjfx/p/10262662.html
You can also search for the official Redis documentation.

2. Download and install scrapy-redis:

pip install scrapy-redis
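
pip will also pull in Scrapy and the Redis client as dependencies. If you want to confirm the install, these standard pip/Python commands (nothing specific to scrapy-redis itself) should work:

pip show scrapy-redis
python -c "import scrapy_redis"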

3. Once it is installed you can start using it. Usage is simple: just add the following four settings to settings.py:

####### Redis configuration #######
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"
####### Redis configuration #######

For example, settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for circ project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'circ'

SPIDER_MODULES = ['circ.spiders']
NEWSPIDER_MODULE = 'circ.spiders'

LOG_LEVEL = "WARNING"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

####### Redis configuration #######
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"
####### Redis configuration #######

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'circ.middlewares.CircSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'circ.middlewares.RandomUserAgentMiddleware': 543,
#    'circ.middlewares.CheckUserAgent': 544,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'circ.pipelines.CircPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

USER_AGENTS_LIST = [
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
]
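
The four Redis settings above are all this post relies on. scrapy-redis also ships a couple of optional pieces; the lines below are only a sketch of commonly enabled extras (the class paths come from the scrapy_redis package, the pipeline priority is just a typical value), not something required by the steps here:

# Optional scrapy-redis extras (not required for this tutorial)
# Push scraped items into a Redis list named "<spider>:items"
#ITEM_PIPELINES = {
#    'scrapy_redis.pipelines.RedisPipeline': 300,
#}
# Control how the shared request queue is ordered (priority queue is the usual default)
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'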

4. Nothing else needs to change; just write and run your Scrapy spider as usual. For example:

# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CfSpider(CrawlSpider):
    name = 'cf'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm']

    # Rules that define which URLs get extracted from each response
    rules = (
        # LinkExtractor: extracts URLs matching the pattern
        # callback: the response for each extracted URL is handed to this method
        # follow: whether responses for these URLs are run through the rules again
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'), callback='parse_item'),
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/module14430/page\d+\.htm'), follow=True),
    )

    # parse() has a special role in CrawlSpider, so do not override it
    def parse_item(self, response):
        item = {}
        item["title"] = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", response.body.decode())[0]
        item["publish_date"] = re.findall(r"发布时间:(20\d{2}-\d{2}-\d{2})", response.body.decode())[0]
        # print(item)
        yield item
        # yield scrapy.Request(
        #     url,
        #     callback=self.parse_detail,
        #     meta={"item": item}
        # )

    # def parse_detail(self, response):
    #     item = response.meta["item"]
    #     item["price"] = "///"
    #     yield item
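
With those settings, distribution comes for free: every crawler process pointed at the same REDIS_URL shares one request queue and one dupefilter in Redis, so the same spider can be started in several terminals (or on several machines) and they will split the URLs between them, roughly like this:

scrapy crawl cf     # terminal or machine 1
scrapy crawl cf     # terminal or machine 2, pulls requests from the same Redis queue

Because SCHEDULER_PERSIST = True, the queue and dupefilter keys are kept in Redis after the spiders close, so a later run resumes where the previous one stopped instead of starting over.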

5. Log in to Redis and inspect the keys with redis-cli:

keys *

type "cf:dupefilter"

SMEMBERS "cf:dupefilter"

If these keys contain data, the scrapy-redis setup is working.
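
The same check can also be scripted with the redis-py client; this is just a minimal sketch that assumes the spider is named cf as above:

import redis

# Connect with the same URL configured as REDIS_URL in settings.py
r = redis.Redis.from_url("redis://127.0.0.1:6379")

print(r.keys("*"))              # e.g. [b'cf:dupefilter', b'cf:requests']
print(r.type("cf:dupefilter"))  # b'set' -- the request fingerprints already seen
print(r.scard("cf:dupefilter")) # number of fingerprints stored so far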
