Day 22 22.1.2: Incremental Crawler - Implementing Scenario 2
Implementing Scenario 2:
Data fingerprint
- Use the detail page's URL as the data fingerprint (see the sketch right below).
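A minimal sketch of the fingerprint check, assuming a local Redis server on 127.0.0.1:6379 (the URL below is a hypothetical example):

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)

detail_url = 'https://sc.chinaz.com/jianli/example.html'  # hypothetical URL
# sadd returns 1 when the member is new, 0 when it is already in the set
if conn.sadd('data_id', detail_url) == 1:
    print('New URL, crawl it')
else:
    print('Already crawled, skip it')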
Create the spider file:
- cd project_name (enter the project directory)
- scrapy genspider <spider name> <start url> (any name you like)
- (e.g. scrapy genspider first www.xxx.com)
- Once created, a .py spider file is generated under the spiders folder (concrete commands for this project are shown below)
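For this project the commands would look like this (assuming the project was created as zlsDemo2Pro, the name used in settings.py below):
- cd zlsDemo2Pro
- scrapy genspider jianli sc.chinaz.com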
Open the spider file:
- open the generated spiders/<spider name>.py in your editor (the file is named after the spider)
Spider file (spiders/jianli.py)
import scrapy
import redis
import random
from ..items import Zlsdemo2ProItem

class JianliSpider(scrapy.Spider):
    name = "jianli"
    # allowed_domains = ["www.xxx.com"]
    start_urls = ["https://sc.chinaz.com/jianli/free.html"]
    # Connect to Redis (the fingerprint store)
    conn = redis.Redis(
        host='127.0.0.1',
        port=6379,
    )

    def parse(self, response):
        div_list = response.xpath('//*[@id="container"]/div')
        for div in div_list:
            title = div.xpath('./p/a/text()').extract_first()
            detail_url = div.xpath('./p/a/@href').extract_first()
            # print(title, detail_url)
            # sadd returns 1 if the URL is new, 0 if it is already in the set
            ex = self.conn.sadd('data_id', detail_url)
            # Build the item object
            item = Zlsdemo2ProItem()
            item['title'] = title  # store the title in the item
            if ex == 1:  # the URL is not yet in the fingerprint set
                print('New data found, crawling......')
                # Request passing: hand the item to parse_detail via meta
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:  # the URL has already been crawled and stored
                print('No new data!')

    def parse_detail(self, response):
        item = response.meta['item']
        download_li = response.xpath('//*[@id="down"]/div[2]/ul/li')
        download_urls = []
        for li in download_li:
            download_url_e = li.xpath('./a/@href').extract_first()
            download_urls.append(download_url_e)
        # Pick one of the mirror download links at random
        download_url = random.choice(download_urls)
        item['download_url'] = download_url
        yield item
items.py
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy

class Zlsdemo2ProItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    download_url = scrapy.Field()
pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# useful for handling different item types with a single interface
import json

from itemadapter import ItemAdapter

class Zlsdemo2ProPipeline:
    def process_item(self, item, spider):
        conn = spider.conn  # reuse the Redis connection created in the spider
        # redis-py cannot store an Item object directly, so serialize it first
        conn.lpush('data1', json.dumps(dict(item)))
        return item
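To check what the pipeline stored, the list can be read back; a quick sketch, assuming the same local Redis instance and the 'data1' key used above:

import json
import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
for raw in conn.lrange('data1', 0, -1):
    item = json.loads(raw)
    print(item['title'], item['download_url'])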
settings.py
# Scrapy settings for zlsDemo2Pro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = "zlsDemo2Pro"
SPIDER_MODULES = ["zlsDemo2Pro.spiders"]
NEWSPIDER_MODULE = "zlsDemo2Pro.spiders"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Only show warnings and errors
LOG_LEVEL = 'WARNING'
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
# "Accept-Language": "en",
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# "zlsDemo2Pro.middlewares.Zlsdemo2ProSpiderMiddleware": 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# "zlsDemo2Pro.middlewares.Zlsdemo2ProDownloaderMiddleware": 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# "scrapy.extensions.telnet.TelnetConsole": None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "zlsDemo2Pro.pipelines.Zlsdemo2ProPipeline": 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"
# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
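Run the spider:
- scrapy crawl jianli (run from the project root; the first run treats every detail URL as new, later runs only fetch resumes whose URLs are not yet in the Redis set)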