An Overview of How Scrapy Works
When you run scrapy crawl <spider>, a crawl command object is created. Scrapy executes commands through the execute() function in cmdline.py, which attaches a crawler_process attribute to the command object (cmd.crawler_process = CrawlerProcess(settings)). When CrawlerProcess.crawl() is called, it builds a Crawler via _create_crawler() and then invokes that Crawler's crawl() method.
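For orientation, the same machinery can be driven from a plain script through the public CrawlerProcess API instead of the scrapy crawl command. A minimal sketch, where MySpider and its URL are hypothetical examples rather than anything taken from the source walkthrough below:

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):  # hypothetical spider, used only for illustration
    name = 'myspider'
    start_urls = ['http://example.com']

    def parse(self, response):
        yield {'title': response.css('title::text').get()}

process = CrawlerProcess({'LOG_LEVEL': 'INFO'})
process.crawl(MySpider)  # internally builds a Crawler (via _create_crawler) and calls its crawl()
process.start()          # starts the Twisted reactor and blocks until crawling finishes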
def _create_crawler(self, spidercls):
    if isinstance(spidercls, six.string_types):
        spidercls = self.spider_loader.load(spidercls)
    return Crawler(spidercls, self.settings)  # build a Crawler around the spider class
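When the spider is given by name (as it is from scrapy crawl <name>), spider_loader.load() resolves the name to a class found in the modules listed in SPIDER_MODULES. A minimal sketch, assuming a hypothetical project module myproject.spiders containing a spider named myspider:

from scrapy.settings import Settings
from scrapy.spiderloader import SpiderLoader

settings = Settings({'SPIDER_MODULES': ['myproject.spiders']})  # hypothetical project module
loader = SpiderLoader.from_settings(settings)
spider_cls = loader.load('myspider')  # returns the spider class whose name attribute is 'myspider'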
class Crawler(object):  # wraps one spider class together with its settings, signals, stats and engine
    def __init__(self, spidercls, settings=None):
        if isinstance(settings, dict) or settings is None:
            settings = Settings(settings)
        self.spidercls = spidercls  # the spider class this crawler will run
        self.settings = settings.copy()
        self.spidercls.update_settings(self.settings)
        self.signals = SignalManager(self)
        self.stats = load_object(self.settings['STATS_CLASS'])(self)
        handler = LogCounterHandler(self, level=self.settings.get('LOG_LEVEL'))
        logging.root.addHandler(handler)
        if get_scrapy_root_handler() is not None:
            # scrapy root handler already installed: update it with new settings
            install_scrapy_root_handler(self.settings)
        # lambda is assigned to Crawler attribute because this way it is not
        # garbage collected after leaving __init__ scope
        self.__remove_handler = lambda: logging.root.removeHandler(handler)
        self.signals.connect(self.__remove_handler, signals.engine_stopped)
        lf_cls = load_object(self.settings['LOG_FORMATTER'])
        self.logformatter = lf_cls.from_crawler(self)
        self.extensions = ExtensionManager.from_crawler(self)
        self.settings.freeze()
        self.crawling = False  # whether this crawler is currently crawling
        self.spider = None
        self.engine = None  # the engine, which coordinates the spider, scheduler and downloader
    @property
    def spiders(self):
        if not hasattr(self, '_spiders'):
            warnings.warn("Crawler.spiders is deprecated, use "
                          "CrawlerRunner.spider_loader or instantiate "
                          "scrapy.spiderloader.SpiderLoader with your "
                          "settings.",
                          category=ScrapyDeprecationWarning, stacklevel=2)
            self._spiders = _get_spider_loader(self.settings.frozencopy())
        return self._spiders
    @defer.inlineCallbacks
    def crawl(self, *args, **kwargs):  # calling crawl() creates the spider instance and the engine
        assert not self.crawling, "Crawling already taking place"
        self.crawling = True
        try:
            self.spider = self._create_spider(*args, **kwargs)
            self.engine = self._create_engine()
            start_requests = iter(self.spider.start_requests())
            yield self.engine.open_spider(self.spider, start_requests)  # the engine opens the spider
            yield defer.maybeDeferred(self.engine.start)
        except Exception:
            # In Python 2 reraising an exception after yield discards
            # the original traceback (see http://bugs.python.org/issue7563),
            # so sys.exc_info() workaround is used.
            # This workaround also works in Python 3, but it is not needed,
            # and it is slower, so in Python 3 we use native `raise`.
            if six.PY2:
                exc_info = sys.exc_info()
            self.crawling = False
            if self.engine is not None:
                yield self.engine.close()
            if six.PY2:
                six.reraise(*exc_info)
            raise
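Because crawl() is decorated with defer.inlineCallbacks it returns a Deferred, so the caller (CrawlerProcess, or CrawlerRunner in your own script) simply chains callbacks on it and lets the Twisted reactor drive the work. A minimal sketch using the documented CrawlerRunner API, reusing the hypothetical MySpider from the earlier sketch:

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()
d = runner.crawl(MySpider)           # returns a Deferred that fires when the crawl finishes
d.addBoth(lambda _: reactor.stop())  # stop the reactor on success or failure
reactor.run()                        # blocks here until the crawl is done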
scrapy/core/engine.py
def _next_request_from_scheduler(self, spider):
    slot = self.slot
    request = slot.scheduler.next_request()  # pull the next request from the scheduler
    if not request:
        return
    d = self._download(request, spider)  # hand it to the downloader; returns a Deferred
    d.addBoth(self._handle_downloader_output, request, spider)
    d.addErrback(lambda f: logger.info('Error while handling downloader output',
                                       exc_info=failure_to_exc_info(f),
                                       extra={'spider': spider}))
    d.addBoth(lambda _: slot.remove_request(request))
    d.addErrback(lambda f: logger.info('Error while removing request from slot',
                                       exc_info=failure_to_exc_info(f),
                                       extra={'spider': spider}))
    d.addBoth(lambda _: slot.nextcall.schedule())
    d.addErrback(lambda f: logger.info('Error while scheduling new request',
                                       exc_info=failure_to_exc_info(f),
                                       extra={'spider': spider}))
    return d
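Note the pattern: each addBoth step runs whether the previous step succeeded or failed, and the addErrback that immediately follows it only logs a failure raised by that step, so one broken step cannot silently kill the rest of the chain. A minimal sketch of the same pattern with plain Twisted Deferreds, independent of Scrapy:

from twisted.internet import defer

def handle_output(result):
    print('handling:', result)
    return result

def log_error(failure):
    # fires only if handle_output itself failed; returning None lets the chain continue
    print('error while handling output:', failure.getErrorMessage())

d = defer.Deferred()
d.addBoth(handle_output)              # runs on success or failure of the download
d.addErrback(log_error)               # traps errors raised by handle_output
d.addBoth(lambda _: print('scheduling the next call'))
d.callback('a downloaded response')   # simulate the download finishing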
def _handle_downloader_output(self, response, request, spider):
    assert isinstance(response, (Request, Response, Failure)), response
    # downloader middleware can return requests (for example, redirects)
    if isinstance(response, Request):  # if the result is a Request, schedule it again via crawl()
        self.crawl(response, spider)
        return
    # response is a Response or Failure
    d = self.scraper.enqueue_scrape(response, request, spider)  # let the scraper process the crawl result
    d.addErrback(lambda f: logger.error('Error while enqueuing downloader output',
                                        exc_info=failure_to_exc_info(f),
                                        extra={'spider': spider}))
    return d
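So a Request coming out of the download step goes back through engine.crawl() to the scheduler, while a Response (or Failure) is queued for the scraper, which eventually calls the spider's callback. Seen from the spider, this is why one callback can yield both items and further requests. A minimal sketch with a hypothetical target site and CSS selectors:

import scrapy

class QuotesSpider(scrapy.Spider):  # hypothetical example spider
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {'text': quote.css('span.text::text').get()}      # item: goes to the item pipelines
        next_page = response.css('li.next a::attr(href)').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)   # Request: goes back to the scheduler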
Most of Scrapy's control flow is stitched together with Twisted Deferreds, in particular the inlineCallbacks decorator.
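inlineCallbacks is what lets code like crawl() above read sequentially: each yield suspends the generator until the yielded Deferred fires, and the decorated function itself returns a Deferred. A minimal standalone sketch of the pattern, with the download faked by a timed Deferred:

from twisted.internet import defer, reactor, task

@defer.inlineCallbacks
def fetch_and_measure():
    # pretend this is a download that completes after one second
    body = yield task.deferLater(reactor, 1.0, lambda: 'fake response body')
    print('got:', body)
    defer.returnValue(len(body))  # on Python 3 a plain return statement also works

d = fetch_and_measure()
d.addCallback(lambda length: print('length:', length))
d.addBoth(lambda _: reactor.stop())
reactor.run()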
core/scraper.py
def _scrape2(self, request_result, request, spider):
    """Handle the different cases of request's result being a Response or a
    Failure"""
    if not isinstance(request_result, Failure):
        return self.spidermw.scrape_response(
            self.call_spider, request_result, request, spider)
    else:
        # FIXME: don't ignore errors in spider middleware
        dfd = self.call_spider(request_result, request, spider)  # the download failed: call the spider directly so its errback can run
        return dfd.addErrback(
            self._log_download_errors, request_result, request, spider)

def call_spider(self, result, request, spider):
    result.request = request
    dfd = defer_result(result)
    dfd.addCallbacks(request.callback or spider.parse, request.errback)  # call the request's callback, or the spider's parse() by default
    return dfd.addCallback(iterate_spider_output)
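The expression request.callback or spider.parse is why parse() is only the default: a Request built with an explicit callback (and optionally an errback) routes its response elsewhere. A minimal sketch with hypothetical URLs and method names:

import scrapy

class DetailSpider(scrapy.Spider):  # hypothetical example spider
    name = 'detail'

    def start_requests(self):
        yield scrapy.Request('http://example.com/list',
                             callback=self.parse_list,   # used instead of the default self.parse
                             errback=self.on_error)

    def parse_list(self, response):
        for href in response.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse_detail)

    def parse_detail(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}

    def on_error(self, failure):
        self.logger.error('request failed: %s', failure.value)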