1. Key points

""""
pipelines使用:
1、在spiders里面使用yield生成器
list_li = response.xpath("//div[@class='swiper-wrapper']//li")
#print(list_li)
for li in list_li:
#print(li.extract_first())
item = { }
item["name"] = li.xpath(".//h3/text()").extract_first()
item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
#item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
#print(item)
yield item #将数据传递道pipelines 2、在pipelines中打印item
class MyspiderPipeline(object):
"""
#第一个管道,这个process_item方法名是不能改
"""
def process_item(self, item, spider):
item["hello"] = "world"
print(item)
return item class MyspiderPipeline1(object):
"""
#第二个管道
"""
def process_item(self, item, spider):
print(item)
return item 3、在settings文件添加pipelines的支持
ITEM_PIPELINES = {
#执行顺序为从小到大,即先执行300,然后在301
'myspider.pipelines.MyspiderPipeline': 300,
'myspider.pipelines.MyspiderPipeline1': 301,
}
"""

2. In spider.py, hand data to pipelines.py via yield

yield item  # pass the item on to pipelines.py

Code for JulyeduSpider.py:
# -*- coding: utf-8 -*-
import scrapy
import logging

logger = logging.getLogger(__name__)


class JulyeduSpider(scrapy.Spider):
    name = 'julyedu'
    allowed_domains = ['julyedu.com']
    start_urls = ['http://julyedu.com/']

    # The parse method name must not be changed
    def parse(self, response):
        """
        Crawl the instructor list on julyedu.com
        :param response:
        :return:
        """
        list_li = response.xpath("//div[@class='swiper-wrapper']//li")
        #print(list_li)
        for li in list_li:
            item = {}  # build a fresh dict per instructor so yielded items stay independent
            item["name"] = li.xpath(".//h3/text()").extract_first()
            item["content"] = li.xpath(".//p[@class='teacherBrief']/text()").extract_first()
            #item["content"] = li.xpath(".//p[@class='teacherIntroduction']/text()").extract_first()
            #print(item)
            # Pass the data on to the pipelines; yield accepts only four types:
            # Request, BaseItem, dict, and None
            logger.warning(item)  # log the item
            yield item
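Because yield also accepts scrapy.Request, the same parse method can schedule follow-up pages in addition to emitting items. A minimal sketch follows; the detail-page link XPath and the parse_detail callback are illustrative assumptions, not part of the original spider:

    # Hypothetical extension of parse(): follow a per-instructor detail link.
    def parse(self, response):
        for li in response.xpath("//div[@class='swiper-wrapper']//li"):
            detail_url = li.xpath(".//a/@href").extract_first()
            if detail_url:
                # scrapy.Request is one of the four types yield accepts
                yield scrapy.Request(response.urljoin(detail_url),
                                     callback=self.parse_detail)

    def parse_detail(self, response):
        # a dict is another accepted type
        yield {"name": response.xpath("//h1/text()").extract_first()}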

3. Modify pipelines.py to operate on the item

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class MyspiderPipeline(object):
    """
    First pipeline; the process_item method name must not be changed
    """
    def process_item(self, item, spider):
        """
        Handle the data differently depending on which spider sent it
        :param item: the value passed in from the spider
        :param spider: the spider instance that produced the item
        :return:
        """
        if spider.name == "julyedu":
            #print(item)
            return item
        else:
            return item


class MyspiderPipeline1(object):
    """
    Second pipeline
    """
    def process_item(self, item, spider):
        #print(item)
        return item
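In practice a pipeline usually does more than print. Below is a minimal sketch of a third pipeline that writes each item to a JSON-lines file using Scrapy's open_spider/close_spider hooks; the class name JsonWriterPipeline and the filename teachers.jl are assumptions for illustration, and it would need its own entry in ITEM_PIPELINES to run:

    # Sketch only: persist items as JSON lines (file name is an assumption)
    import json


    class JsonWriterPipeline(object):
        def open_spider(self, spider):
            # Called once when the spider opens
            self.file = open('teachers.jl', 'w', encoding='utf-8')

        def close_spider(self, spider):
            # Called once when the spider closes
            self.file.close()

        def process_item(self, item, spider):
            line = json.dumps(dict(item), ensure_ascii=False) + "\n"
            self.file.write(line)
            return item  # return the item so later pipelines still receive it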

4. Add the pipeline configuration to settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for myspider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'myspider'

SPIDER_MODULES = ['myspider.spiders']
NEWSPIDER_MODULE = 'myspider.spiders'

LOG_LEVEL = 'WARNING'   # set the log level
LOG_FILE = './log.log'  # save the log to a file

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'myspider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # Executed in ascending order: 300 runs first, then 301
    'myspider.pipelines.MyspiderPipeline': 300,
    'myspider.pipelines.MyspiderPipeline1': 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
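With the spider, pipelines, and settings in place, the crawl can be started from the project root with the standard Scrapy command:

    scrapy crawl julyedu

Each yielded item then passes through MyspiderPipeline (priority 300) first and MyspiderPipeline1 (priority 301) second. Because LOG_LEVEL is 'WARNING' and LOG_FILE is set, the logger.warning(item) output lands in ./log.log instead of the console.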
