Python scrapy module: pipelines
1. Key points
""""
pipelines使用:
1、在spiders里面使用yield生成器
list_li = response.xpath("//div[@class='swiper-wrapper']//li")
#print(list_li)
for li in list_li:
#print(li.extract_first())
item = { }
item["name"] = li.xpath(".//h3/text()").extract_first()
item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
#item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
#print(item)
yield item #将数据传递道pipelines 2、在pipelines中打印item
class MyspiderPipeline(object):
"""
#第一个管道,这个process_item方法名是不能改
"""
def process_item(self, item, spider):
item["hello"] = "world"
print(item)
return item class MyspiderPipeline1(object):
"""
#第二个管道
"""
def process_item(self, item, spider):
print(item)
return item 3、在settings文件添加pipelines的支持
ITEM_PIPELINES = {
#执行顺序为从小到大,即先执行300,然后在301
'myspider.pipelines.MyspiderPipeline': 300,
'myspider.pipelines.MyspiderPipeline1': 301,
}
"""
2. In spider.py, use yield item to pass the data to pipelines.py
JulyeduSpider.py:
# -*- coding: utf-8 -*-
import scrapy
import logging

logger = logging.getLogger(__name__)

class JulyeduSpider(scrapy.Spider):
    name = 'julyedu'
    allowed_domains = ['julyedu.com']
    start_urls = ['http://julyedu.com/']

    # The method name parse must not be changed
    def parse(self, response):
        """
        Crawl the list of mentors on julyedu.com
        :param response:
        :return:
        """
        list_li = response.xpath("//div[@class='swiper-wrapper']//li")
        #print(list_li)
        for li in list_li:
            item = {}  # create a fresh dict per iteration so items don't overwrite each other
            item["name"] = li.xpath(".//h3/text()").extract_first()
            item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
            #item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
            #print(item)
            # Pass the data to the pipelines; yield only accepts Request, BaseItem, dict or None
            logger.warning(item)  # log the item
            yield item
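Because parse may also yield Request objects (one of the four accepted types noted above), follow-up pages can be queued from the same generator. A minimal sketch of what could sit at the end of parse — the next-page XPath is an assumption for illustration, not taken from the real site:

        # Hypothetical pagination: the //li[@class='next'] selector is assumed, not real
        next_href = response.xpath("//li[@class='next']/a/@href").extract_first()
        if next_href is not None:
            # response.urljoin builds an absolute URL; the same parse handles the next page
            yield scrapy.Request(response.urljoin(next_href), callback=self.parse)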
3. Modify pipelines.py, where the item can be processed
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

class MyspiderPipeline(object):
    """
    The first pipeline; the method name process_item must not be changed
    """
    def process_item(self, item, spider):
        """
        Handle data differently depending on the spider
        :param item: the value passed in from the spider
        :param spider: the spider instance that produced the item
        :return:
        """
        if spider.name == "julyedu":
            #print(item)
            return item
        else:
            return item

class MyspiderPipeline1(object):
    """
    The second pipeline
    """
    def process_item(self, item, spider):
        #print(item)
        return item
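In real projects process_item usually persists or filters the data rather than just returning it. A minimal sketch of a third pipeline, assuming items should be appended to a JSON-lines file (the teachers.jsonl filename is made up); open_spider, close_spider and DropItem are standard parts of Scrapy's pipeline API:

import json
from scrapy.exceptions import DropItem

class JsonWriterPipeline(object):
    def open_spider(self, spider):
        # called once when the spider opens
        self.file = open('teachers.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        # called once when the spider closes
        self.file.close()

    def process_item(self, item, spider):
        if not item.get("name"):
            # raising DropItem stops the item from reaching later pipelines
            raise DropItem("missing name")
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

Like the other two pipelines, it only takes effect after being registered in ITEM_PIPELINES.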
4. Add the pipelines configuration to settings.py
# -*- coding: utf-8 -*-

# Scrapy settings for myspider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'myspider'

SPIDER_MODULES = ['myspider.spiders']
NEWSPIDER_MODULE = 'myspider.spiders'

LOG_LEVEL = 'WARNING'  # only log WARNING and above
LOG_FILE = './log.log'  # write the log to a file
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'myspider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # Pipelines run in ascending order of this value: 300 executes before 301
    'myspider.pipelines.MyspiderPipeline': 300,
    'myspider.pipelines.MyspiderPipeline1': 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
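With the spider, pipelines and settings in place, the crawl is started from the project directory with Scrapy's standard CLI, using the spider name defined above:

scrapy crawl julyedu

Items then flow through MyspiderPipeline (300) and MyspiderPipeline1 (301) in that order, and the WARNING-level log lines are written to ./log.log per the LOG_FILE setting.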