1. Pause and resume a crawl

Scrapy supports this functionality out of the box by providing the following facilities:

  • a scheduler that persists scheduled requests on disk

  • a duplicates filter that persists visited requests on disk

  • an extension that keeps some spider state (key/value pairs) persistent between batches (a sketch follows below)

Run a crawl with:

scrapy crawl somespider -s JOBDIR=crawls/somespider_dir

Use Ctrl+C to stop the crawl, then resume it later with the same command.
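
As a sketch of the spider-state facility above: the key/value store is exposed as self.state and is persisted under JOBDIR between runs. The spider name, URL, and counter key below are made up for illustration:

import scrapy

class SomeSpider(scrapy.Spider):
    name = 'somespider'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        # self.state exists when the crawl is run with -s JOBDIR=...;
        # it is pickled to disk, so the counter survives Ctrl+C and resume
        self.state['pages_seen'] = self.state.get('pages_seen', 0) + 1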

2. Making a GET request

For example: page A is a list of news items and contains a link to each article.

For every link we issue a request to fetch the article's content.

By setting request.meta you can carry data into the callback function, and receive it there via response.meta.

def parse(self, response):
    newslist = response.xpath('//ul[@class="linkNews"]/li')
    for item in newslist:
        news = News()
        news['title'] = item.xpath('a/text()').extract_first(default='')
        contentUri = item.xpath('a/@href').extract_first(default='')
        request = scrapy.Request(contentUri,
                                 callback=self.getContent_callback,
                                 headers=headers)
        # carry the partially filled item into the callback
        request.meta['item'] = news
        yield request

def getContent_callback(self, response):
    # receive the item passed through request.meta
    news = response.meta['item']
    news['content'] = response.xpath('//article[@class="art_box"]').xpath('string(.)').extract_first(default='').strip()
    yield news

3. The interactive shell

Here you can interactively inspect all kinds of information, such as response.status.

I mainly use it to debug XPath expressions (beware: results obtained in the shell are not always reliable).

PS C:\Users\patrick\Documents\Visual Studio 2017\Projects\ScrapyProjects> scrapy shell --nolog 'http://mil.news.sina.com.cn/2011-03-31/1342640379.html'
[s] Available Scrapy objects:
[s] scrapy scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s] crawler <scrapy.crawler.Crawler object at 0x0000026EA72752B0>
[s] item {}
[s] request <GET http://mil.news.sina.com.cn/2011-03-31/1342640379.html>
[s] response <200 http://mil.news.sina.com.cn/2011-03-31/1342640379.html>
[s] settings <scrapy.settings.Settings object at 0x0000026EA8586940>
[s] spider <DefaultSpider 'default' at 0x26ea884bb38>
[s] Useful shortcuts:
[s] fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s] fetch(req) Fetch a scrapy.Request and update local objects
[s] shelp() Shell help (print this help)
[s] view(response) View response in a browser
In [1]: response.status
Out[1]: 200

Setting custom headers inside the interactive shell:

$ scrapy shell --nolog
...
...
>>> from scrapy import Request
>>> req = Request('https://douban.com', headers={'User-Agent': '...'})
>>> fetch(req)

If you just want to set the user agent:

scrapy shell -s USER_AGENT='useragent' 'https://movie.douban.com'

4. Passing arguments to a spider from the command line

scrapy crawl myspider -a category=electronics

Inside the spider the argument is available directly by name, like category in the code below.

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def __init__(self, category=None, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.example.com/categories/%s' % category]
        # ...

5. Stripping \r\n from page text

Use normalize-space in XPath.

Also, extract_first is a fine thing: it even accepts a default value.

item['content'] = response.xpath('normalize-space(//div[@class="blkContainerSblkCon" and @id="artibody"])').extract_first(default='')
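
A quick sketch of what normalize-space does, using a Selector on a made-up HTML snippet:

from scrapy import Selector

html = '<div>  hello\r\n   world  </div>'
# normalize-space trims leading/trailing whitespace and collapses
# internal runs of whitespace (including \r\n) into single spaces
print(Selector(text=html).xpath('normalize-space(//div)').extract_first())
# -> 'hello world'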

6. Stopping a spider programmatically

The way to do it is to raise the built-in exception CloseSpider.

exception scrapy.exceptions.CloseSpider(reason='cancelled')

This exception can be raised from a spider callback to request the spider to be closed/stopped. Supported arguments:

Parameters: reason (str) – the reason for closing

from scrapy.exceptions import CloseSpider

def parse_page(self, response):
    # use response.text (unicode) rather than response.body (bytes)
    if 'Bandwidth exceeded' in response.text:
        raise CloseSpider('bandwidth_exceeded')

7. [mysql] Incorrect string value: '\xF0\x9F\x8C\xB9' for column 'title' at row 1

Set the charset parameter to utf8mb4 when connecting to the database; the error comes from a 4-byte UTF-8 character (an emoji) that MySQL's plain utf8 cannot store.
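
A minimal sketch, assuming pymysql as the client (host, user, and database names are placeholders; any driver with a charset argument works the same way):

import pymysql

# MySQL's plain utf8 stores at most 3 bytes per character;
# 4-byte characters such as emoji need utf8mb4
conn = pymysql.connect(host='localhost', user='root', password='...',
                       db='news', charset='utf8mb4')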

8. Exported files contain escape sequences instead of Chinese

Append FEED_EXPORT_ENCODING = 'utf-8' to the end of settings.py.
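
With that setting in place, an export run such as the following (the output filename is just an example) writes readable Chinese instead of \uXXXX escapes:

scrapy crawl somespider -o news.json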

9. Some things about Item

>>> import scrapy
>>> class A(scrapy.Item):
... post_id = scrapy.Field()
... user_id = scrapy.Field()
... content = scrapy.Field()
...
>>> type(A)
<class 'scrapy.item.ItemMeta'>

Here post_id and user_id can hold data of any type.

Values can be read just as from a dict.

>>> a = A(post_id = '12312312', user_id = '2342_author_id')
>>> a['post_id']
'12312312'
>>> a['user_id']
'2342_author_id'

If a field has not been assigned, reading it as dic['key'] raises a KeyError; the fix is to use the get method instead.

>>> a.get('content', default = 'empty')
'empty'
>>> a.get('content', 'empty')
'empty'

Checking whether an Item declares a field, and whether the field has been assigned:

>>> 'name' in a   # has 'name' been assigned?
False
>>> 'name' in a.fields # is 'name' a declared field?
False
>>> 'content' in a # has 'content' been assigned?
False
>>> 'content' in a.fields
True

Recommendation: change every dic['key'] into dic.get('key', '').

10. Writing logs to a file

Insert into settings.py (LOG_STDOUT = True additionally redirects standard output, e.g. print calls, into the log):

LOG_STDOUT = True
LOG_FILE = 'scrapy_log.txt'

Or set the log file per run on the command line:

scrapy crawl MyCrawler -s LOG_FILE=/var/log/crawler_mycrawler.log

Reference

  1. Set headers for scrapy shell request

  2. Scrapy 1.5 documentation
