# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class DownloadtestSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
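The spider middleware above only takes effect once it is enabled in settings.py. A minimal sketch, assuming the project is named downloadtest as the class name suggests (543 is the priority the default Scrapy template uses):

# settings.py (sketch)
SPIDER_MIDDLEWARES = {
    'downloadtest.middlewares.DownloadtestSpiderMiddleware': 543,
}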
class DownloadtestDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
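The downloader middlewares are registered the same way, including the two custom classes defined below. A minimal sketch (the priority numbers are illustrative; Scrapy calls process_request() through the chain in ascending priority order and process_response() in descending order):

# settings.py (sketch)
DOWNLOADER_MIDDLEWARES = {
    'downloadtest.middlewares.DownloadtestDownloaderMiddleware': 543,
    'downloadtest.middlewares.RandomUA': 544,
    'downloadtest.middlewares.ProxyMiddleware': 545,
}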
import random


# Set the request headers: attach a random User-Agent to every request
class RandomUA(object):
    def __init__(self):
        self.user_agent = [
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.1 Safari/605.1.16',
            'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0',
            'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)',
        ]

    def process_request(self, request, spider):
        # Pick a random User-Agent for this request
        request.headers['User-Agent'] = random.choice(self.user_agent)

    def process_response(self, request, response, spider):
        # Overwrite the status code so it is easy to see from the spider
        # that this middleware handled the response (demo only)
        response.status = 201
        return response


# Set the proxy: route each request through a proxy picked from a pool
class ProxyMiddleware:
    proxy_list = [
        "http://127.0.0.1:8080",
        # "http://183.91.33.41:80",
        # "http://111.29.3.190:80",
        # "http://124.156.108.71:82"
    ]

    def process_request(self, request, spider):
        # Route this request through a randomly chosen proxy
        ip = random.choice(self.proxy_list)
        request.meta['proxy'] = ip
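A quick way to verify that RandomUA and ProxyMiddleware are actually firing is to log from a spider callback. A minimal sketch (the spider name and the httpbin.org test URL are placeholders, not part of the original project):

# check_spider.py (sketch)
import scrapy

class CheckSpider(scrapy.Spider):
    name = 'check'
    start_urls = ['http://httpbin.org/get']

    def parse(self, response):
        # RandomUA.process_response() rewrote the status code to 201
        self.logger.info('status: %s', response.status)
        # the User-Agent header that was actually sent
        self.logger.info('UA: %s', response.request.headers.get('User-Agent'))
        # the proxy picked by ProxyMiddleware.process_request()
        self.logger.info('proxy: %s', response.meta.get('proxy'))

Note that with proxy_list pointing at http://127.0.0.1:8080, requests only succeed if a proxy is actually listening on that address; replace it with working proxies before running.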
