1、Understanding debugging

2、Understanding scrapy shell

Scrapy shell is an interactive terminal. It lets us try out and debug extraction code without starting a spider, and it can also be used to test XPath expressions.

Usage:

scrapy shell https://gosuncn.zhiye.com/social/

response.url: the URL of the current response
response.request.url: the URL of the request that produced the current response
response.headers: the response headers
response.body: the response body, i.e. the HTML, bytes by default
response.request.headers: the request headers of the current response
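
For example, a quick session for this page might look like the sketch below. The XPath selector and the sample output are illustrative assumptions, not verified against the live site:

$ scrapy shell https://gosuncn.zhiye.com/social/
>>> response.url
'https://gosuncn.zhiye.com/social/'
>>> response.status
200
>>> response.headers['Content-Type']
b'text/html; charset=utf-8'
>>> # hypothetical XPath: list the text of every link on the page
>>> response.xpath('//a/text()').extract()
['...', '...']

Exit the session with Ctrl-D (or exit()).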

3、settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for gosuncn project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

# The settings file usually stores global values, such as database
# credentials (username, password, etc.)
BOT_NAME = 'gosuncn'

SPIDER_MODULES = ['gosuncn.spiders']
NEWSPIDER_MODULE = 'gosuncn.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'gosuncn (+http://www.yourdomain.com)'

# Obey robots.txt rules: robots.txt is requested before anything else
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# Download delay: sleep 3 seconds before each request
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# (used together with DOWNLOAD_DELAY)
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default; cookies are carried by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Middlewares
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'gosuncn.middlewares.GosuncnSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'gosuncn.middlewares.GosuncnDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines; the number is a priority: the smaller it is,
# the earlier the pipeline runs
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'gosuncn.pipelines.GosuncnPipeline': 300,
}

LOG_LEVEL = "WARNING"
LOG_FILE = "./log.log"

# Throttle the crawler
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# HTTP cache
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
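
Because settings.py typically stores this kind of global state (database credentials and the like), it is worth knowing how to read those values back at runtime. Below is a minimal sketch using the standard from_crawler hook; the MYSQL_HOST and MYSQL_USER keys are made-up names for the example, not settings that Scrapy defines:

# In settings.py you could add, for example:
#   MYSQL_HOST = '127.0.0.1'
#   MYSQL_USER = 'root'

class ExamplePipeline(object):

    def __init__(self, host, user):
        self.host = host
        self.user = user

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings exposes everything defined in settings.py
        return cls(
            host=crawler.settings.get('MYSQL_HOST'),
            user=crawler.settings.get('MYSQL_USER'),
        )

Inside a spider the same values are available via self.settings.get('MYSQL_HOST').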

4、The open_spider and close_spider functions of pipelines

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import re

from gosuncn.items import GosuncnItem


class GosuncnPipeline(object):

    def open_spider(self, spider):
        """
        Executed once when the spider starts; open the database connection here.
        :param spider:
        :return:
        """
        spider.hello = "open"

    def process_item(self, item, spider):
        if isinstance(item, GosuncnItem):
            item["content"] = self.process_content(item["content"])
            print(item)
        return item

    def process_content(self, content):
        # Strip line breaks and whitespace, then drop the empty strings
        content = [re.sub(r"\r\n|\s", "", i) for i in content]
        content = [i for i in content if len(i) > 0]
        return content

    def close_spider(self, spider):
        """
        Executed once when the spider closes; close the database connection here.
        :param spider:
        :return:
        """
        spider.hello = "close"

# class GosuncnPipeline1(object):
#     def process_item(self, item, spider):
#         if isinstance(item, GosuncnItem):
#             print(item)
#         return item
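
To make the docstrings concrete, here is a minimal sketch of a pipeline whose open_spider/close_spider actually manage a database connection. It assumes MongoDB via pymongo; the URI, database, and collection names are illustrative, not part of the project above:

import pymongo

class MongoPipeline(object):

    def open_spider(self, spider):
        # Runs once at startup: open the connection
        self.client = pymongo.MongoClient("mongodb://localhost:27017")
        self.collection = self.client["gosuncn"]["positions"]

    def process_item(self, item, spider):
        # dict(item) turns a scrapy.Item into a plain dict for insertion
        self.collection.insert_one(dict(item))
        return item

    def close_spider(self, spider):
        # Runs once at shutdown: release the connection
        self.client.close()

Remember to register it in ITEM_PIPELINES, just like GosuncnPipeline above.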
