Using scrapy-redis with Python's Scrapy module
1. Using Redis: worth studying on your own (I'm still learning it myself):
https://www.cnblogs.com/ywjfx/p/10262662.html
The official Redis site is also easy to find with a search.
2. Download and install scrapy-redis:
pip install scrapy-redis
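A quick sanity check that the package imports (printing __version__ is an assumption about the package layout; a bare import is enough):

python -c "import scrapy_redis; print(scrapy_redis.__version__)"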
3. Once it is installed, usage is simple: just add the following four settings to settings.py. They swap in a Redis-backed dupefilter and scheduler, keep the request queue and fingerprints in Redis between runs, and point Scrapy at your Redis server:
####### redis config #######
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"
####### redis config #######
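Optionally, scrapy-redis also ships an item pipeline that serializes every scraped item into a Redis list (keyed "<spider>:items" by default), so other processes can consume them. It is not required for the setup above:

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 400,
}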
For example, a full settings.py:
# -*- coding: utf-8 -*-

# Scrapy settings for circ project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'circ'

SPIDER_MODULES = ['circ.spiders']
NEWSPIDER_MODULE = 'circ.spiders'

LOG_LEVEL = "WARNING"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

####### redis config #######
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"
####### redis config #######

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'circ.middlewares.CircSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'circ.middlewares.RandomUserAgentMiddleware': 543,
#    'circ.middlewares.CheckUserAgent': 544,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'circ.pipelines.CircPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

USER_AGENTS_LIST = [
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
]
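The commented-out DOWNLOADER_MIDDLEWARES section references a RandomUserAgentMiddleware and a CheckUserAgent that are not shown here. A minimal sketch of what they might look like in circ/middlewares.py, assuming the USER_AGENTS_LIST above (hypothetical implementations, not the author's code):

# circ/middlewares.py -- hypothetical sketch of the two middlewares
# referenced in the commented-out DOWNLOADER_MIDDLEWARES setting
import random


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # pick a random User-Agent from USER_AGENTS_LIST in settings.py
        ua = random.choice(spider.settings.get("USER_AGENTS_LIST"))
        request.headers["User-Agent"] = ua


class CheckUserAgent(object):
    def process_response(self, request, response, spider):
        # print the User-Agent that was actually sent, for debugging
        print(request.headers["User-Agent"])
        return response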
4. Nothing else needs to change; just write and run your Scrapy spider as usual:
# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CfSpider(CrawlSpider):
    name = 'cf'
    allowed_domains = ['bxjg.circ.gov.cn']
    start_urls = ['http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm']

    # rules defining which URLs get extracted and followed
    rules = (
        # LinkExtractor: extracts URLs matching the pattern
        # callback: the response for each extracted URL is handed to this method
        # follow: whether responses to the extracted URLs are run through the rules again
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/info\d+\.htm'), callback='parse_item'),
        Rule(LinkExtractor(allow=r'/web/site0/tab5240/module14430/page\d+\.htm'), follow=True),
    )

    # parse() is reserved by CrawlSpider, so don't define it; use a custom callback
    def parse_item(self, response):
        item = {}
        item["title"] = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", response.body.decode())[0]
        item["publish_date"] = re.findall(r"发布时间:(20\d{2}-\d{2}-\d{2})", response.body.decode())[0]
        # print(item)
        yield item
        # to enrich the item from a detail page, pass it along via meta:
        # yield scrapy.Request(
        #     url,
        #     callback=self.parse_detail,
        #     meta={"item": item},
        # )

    # def parse_detail(self, response):
    #     item = response.meta["item"]
    #     item["price"] = "///"
    #     yield item
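To make the crawl truly distributed, scrapy-redis also provides spider base classes that pull their start URLs from Redis instead of a hard-coded start_urls list. A minimal sketch using the same settings as above (the class name and redis_key are made up for illustration):

# A RedisSpider pops its start URLs from a Redis list, so several
# crawler processes on different machines can share one queue.
from scrapy_redis.spiders import RedisSpider


class CfRedisSpider(RedisSpider):
    name = 'cf_redis'
    allowed_domains = ['bxjg.circ.gov.cn']
    # the spider blocks on this Redis key, waiting for URLs to be pushed
    redis_key = 'cf_redis:start_urls'

    def parse(self, response):
        # the same extraction logic as parse_item above would go here
        pass

Then seed it from redis-cli:

lpush cf_redis:start_urls http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm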
5. Log in to Redis and inspect the keys with a few commands:

keys *
type "cf:dupefilter"
SMEMBERS "cf:dupefilter"

If data shows up, scrapy-redis is working.
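The same check can be scripted with the redis-py client (assuming it is installed; the key name follows the spider name "cf"):

import redis

r = redis.StrictRedis.from_url("redis://127.0.0.1:6379")
print(r.keys("*"))               # expect b'cf:dupefilter' among the keys
print(r.type("cf:dupefilter"))   # the dupefilter is stored as a set -> b'set'
print(r.scard("cf:dupefilter"))  # number of request fingerprints recorded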
代码亲测可用,似乎不需要“ADD”,如下:form_load段:for (int i = 0; i < 10; i++){btn = new Button();btn.Parent = this ...