Scrapy Architecture Overview -- Official Documentation
Original article: https://doc.scrapy.org/en/latest/topics/architecture.html
This document describes the architecture of Scrapy and how its components interact.
Overview
The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the red arrows). A brief description of the components is included below with links for more detailed information about them. The data flow is also described below.
Data flow

The data flow in Scrapy is controlled by the execution engine, and goes like this:
1. The Engine gets the initial Requests to crawl from the Spider.
2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
3. The Scheduler returns the next Requests to the Engine.
4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
5. Once the page finishes downloading, the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
8. The Engine sends processed items to the Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
9. The process repeats (from step 1) until there are no more requests from the Scheduler.
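To make the flow concrete, here is a minimal spider sketch (it is not part of the official page; the target site and CSS selectors follow the Scrapy tutorial's example and are assumptions here). Its start_urls supply the initial Requests of step 1, and parse() returns both scraped items and follow-up Requests, as in step 7:
```python
import scrapy


class QuotesSpider(scrapy.Spider):
    """Sketch spider whose output drives the data flow described above."""
    name = "quotes"
    # Step 1: these URLs become the initial Requests the Engine gets from the Spider.
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Step 7: scraped items are returned to the Engine ...
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # ... and new Requests (to follow) go back through the Engine to the Scheduler (step 8).
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```
Run with `scrapy runspider`, the items it yields are handed by the Engine to the Item Pipelines, and the Requests it yields are handed to the Scheduler.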
Components
Scrapy Engine
The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section above for more details.
Scheduler
The Scheduler receives requests from the engine and enqueues them for feeding them later (also to the engine) when the engine requests them.
Downloader
The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.
Spiders
Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them or additional requests to follow. For more information see Spiders.
Item Pipeline
The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
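As an illustration only (the "price" field, the validation rule, and the assumption that items are plain dicts are not from the official page), a pipeline is an ordinary class with a process_item() method; failed items are dropped by raising DropItem:
```python
from scrapy.exceptions import DropItem


class PriceValidationPipeline:
    """Sketch pipeline: cleanses and validates a hypothetical 'price' field."""

    def process_item(self, item, spider):
        price = item.get("price")
        if price is None:
            # Items dropped here never reach later pipelines or storage.
            raise DropItem(f"Missing price in {item!r}")
        item["price"] = float(price)  # simple cleansing step
        return item
```
It would be enabled through the ITEM_PIPELINES setting, e.g. {"myproject.pipelines.PriceValidationPipeline": 300}, where the module path is hypothetical and the number sets the running order.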
Downloader middlewares
Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the Downloader, and responses that pass from Downloader to the Engine.
Use a Downloader middleware if you need to do one of the following:
- process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
- change received response before passing it to a spider;
- send a new Request instead of passing received response to a spider;
- pass response to a spider without fetching a web page;
- silently drop some requests.
For more information see Downloader Middleware.
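A minimal sketch of such a hook (the header name is an assumption; the class would be enabled through the DOWNLOADER_MIDDLEWARES setting):
```python
class CustomHeaderMiddleware:
    """Sketch downloader middleware: touches requests and responses in flight."""

    def process_request(self, request, spider):
        # Runs for each request on its way from the Engine to the Downloader.
        request.headers.setdefault("X-Example-Header", "1")
        return None  # None means: continue processing this request normally

    def process_response(self, request, response, spider):
        # Runs for each response on its way from the Downloader back to the Engine.
        spider.logger.debug("Got %s for %s", response.status, request.url)
        return response
```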
Spider middlewares
Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).
Use a Spider middleware if you need to:
- post-process output of spider callbacks - change/add/remove requests or items;
- post-process start_requests;
- handle spider exceptions;
- call errback instead of callback for some of the requests based on response content.
For more information see Spider Middleware.
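For example, a middleware that post-processes callback output might look like the sketch below (it assumes items are plain dicts with a "title" field; both assumptions are illustrative, and the class would be enabled through the SPIDER_MIDDLEWARES setting):
```python
from scrapy import Request


class RequireTitleMiddleware:
    """Sketch spider middleware: filters the output of spider callbacks."""

    def process_spider_output(self, response, result, spider):
        # 'result' is the iterable of Requests and items the callback returned.
        for element in result:
            if isinstance(element, Request) or element.get("title"):
                yield element  # pass requests and complete items through
            else:
                spider.logger.debug("Dropped item without title from %s", response.url)
```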
Event-driven networking
Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it's implemented using non-blocking (aka asynchronous) code for concurrency.
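In day-to-day use this mostly surfaces through settings that bound how much work the engine keeps in flight at once. A short settings.py sketch (the setting names are standard Scrapy settings; the values are illustrative):
```python
# settings.py (excerpt) - illustrative values
CONCURRENT_REQUESTS = 32            # total requests processed in parallel by the Downloader
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # cap per target domain
DOWNLOAD_DELAY = 0.25               # seconds to wait between requests to the same site
```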