[scrapy] Item and Spider
Items
Item objects are simple containers used to collect the scraped data. They provide a dictionary-like API with a convenient syntax for declaring their available fields.
import scrapy

class Product(scrapy.Item):
    name = scrapy.Field()
    price = scrapy.Field()
    stock = scrapy.Field()
    last_updated = scrapy.Field(serializer=str)
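For illustration, a small usage sketch (the field values are made up, not from the original text); an Item instance behaves much like a dict:

product = Product(name='Desktop PC', price=1000)
print(product['name'])                          # dict-style field access
product['stock'] = 5                            # assign a declared field
print(product.get('last_updated', 'not set'))   # dict-style get with a default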
Extending Items
You can extend Items (to add more fields or to change some metadata for some fields) by declaring a subclass of your original Item.
class DiscountedProduct(Product):
    discount_percent = scrapy.Field(serializer=str)
You can also extend field metadata by using the previous field metadata and appending more values, or changing existing values.
class SpecificProduct(Product):
    name = scrapy.Field(Product.fields['name'], serializer=my_serializer)
Item Objects
1. class scrapy.item.Item([arg])
Return a new Item optionally initialized from the given argument.
The only additional attribute provided by Items is: fields
2. Field objects
class scrapy.item.Field([arg])
The Field class is just an alias to the built-in dict class and doesn't provide any extra functionality or attributes.
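As a quick sketch of what that means for the Product item declared above, the metadata passed to Field() is simply stored as dict entries:

print(Product.fields['last_updated'])    # the metadata dict holding the serializer
print(Product.fields['name'])            # an empty dict - no metadata was declared
serializer = Product.fields['last_updated'].get('serializer')  # read metadata like a dict key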
_______________________________________________________________________________________________________________________________
Built-in spiders reference
Scrapy comes with some useful generic spiders that you can subclass your spiders from. Their aim is to provide convenient functionality for a few common scraping cases.
class scrapy.spider.Spider
This is the simplest spider, and the one from which every other spider must inherit.
Important attributes and methods:
name
A string which defines the name for this spider. It must be unique. This is the most important spider attribute and it is required.
allowed_domains
An optional list of strings containing domains that this spider is allowed to crawl. Requests for URLs not belonging to the domain names specified in this list won't be followed if OffsiteMiddleware is enabled.
start_urls
A list of URLs where the spider will begin to crawl from, when no particular URLs are specified.
start_requests()
This is the method called by Scrapy when the spider is opened for scraping when no particular URLs are specified. If particular URLs are specified, make_requests_from_url() is used instead to create the Requests.
make_requests_from_url(url)
A method that receives a URL and returns a Request object to scrape. Unless overridden, this method returns Requests with the parse() method as their callback function.
parse(response)
The parse method is in charge of processing the response and returning scraped data.
log(message[, level, component])
Log a message.
closed(reason)
Called when the spider closes.
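Putting the attributes above together, here is a minimal sketch of a Spider subclass (the spider name, domain, URLs and extraction logic are placeholders, not taken from any real project):

from scrapy.spider import Spider

class ExampleSpider(Spider):
    name = 'example'                          # required and must be unique
    allowed_domains = ['example.com']         # off-site requests are dropped if OffsiteMiddleware is enabled
    start_urls = ['http://www.example.com/products.html']

    def parse(self, response):
        # parse() is the default callback: process the response and return scraped data
        self.log('Crawled %s' % response.url)
        item = Product()                      # reusing the Product item declared earlier
        item['name'] = response.url           # placeholder; real spiders extract data with selectors
        return item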
class scrapy.contrib.spiders.CrawlSpider
This is the most commonly used spider for crawling regular websites, as it provides a convenient mechanism for following links by defining a set of rules.
In addition to the attributes inherited from Spider, CrawlSpider provides the following attribute.
rules
A list of one or more Rule objects. Each Rule defines a certain behaviour for crawling the site.
About the Rule object:
class scrapy.contrib.spiders.Rule(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None)
link_extractor is a Link Extractor object which defines how links will be extracted from each crawled page.
callback is a callable or a string to be called for each link extracted with the specified link_extractor.
Note: when writing crawl spider rules, avoid using parse as callback, since the CrawlSpider uses the parse method itself to implement its logic.
cb_kwargs is a dict containing the keyword arguments to be passed to the callback function.
follow is a boolean which specifies if links should be followed from each response extracted with this rule. If callback is None, follow defaults to True (i.e. keep following links from those pages), otherwise it defaults to False.
process_request is a callable or a string which will be called with every request extracted by this rule, and must return a request or None.
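A rough sketch of how a CrawlSpider, its rules and a link extractor fit together (the URL patterns and callback name are illustrative placeholders):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class ExampleCrawlSpider(CrawlSpider):
    name = 'example_crawl'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # no callback, so follow defaults to True: keep crawling category pages
        Rule(SgmlLinkExtractor(allow=(r'category\.php',))),
        # item pages go to a custom callback (not named parse, see the note above)
        Rule(SgmlLinkExtractor(allow=(r'item\.php',)), callback='parse_item'),
    )

    def parse_item(self, response):
        item = Product()                 # reusing the Product item from the first section
        item['name'] = response.url      # placeholder extraction
        return item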
------------------------------------------------------------------------------------------------------------------------------------
LinkExtractors are objects whose only purpose is to extract links from web pages (scrapy.http.Response objects).
Scrapy ships with two built-in Link Extractors, and you can also write your own as needed.
All available link extractor classes bundled with Scrapy are provided in the scrapy.contrib.linkextractors module.
SgmlLinkExtractor
class scrapy.contrib.linkextractors.sgml.SgmlLinkExtractor(allow, ...)
The SgmlLinkExtractor extends the base BaseSgmlLinkExtractor by providing additional filters that you can specify to extract links.
allow (a regular expression, or a list of regular expressions): a single regular expression (or a list of regular expressions) that the URLs must match in order to be extracted. If not given, it will match all links.
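A link extractor can also be used on its own inside a spider callback; a small sketch (the regex pattern is illustrative, and response is assumed to be the scrapy.http.Response passed to the callback):

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

# extract only links whose URL matches the given pattern
extractor = SgmlLinkExtractor(allow=(r'/product/\d+',))
links = extractor.extract_links(response)
urls = [link.url for link in links]      # each Link object carries url and text attributes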
概念 链接方式存储 链接方式存储的线性表简称为链表(Linked List). 链表的具体存储表示为: 用一组任意的存储单元来存放线性表的结点(这组存储单元既可以是连续的,也可以是不连续的). 链表中 ...