Implementation of Scenario 2:

Data fingerprint

  • Simply use the URL of the detail page as the data fingerprint; a minimal sketch of the check is shown below.
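A minimal sketch of the fingerprint check, assuming a local Redis instance and the key name data_id used by the spider below (the URL here is hypothetical): sadd returns 1 only the first time a value is added to the set, so it doubles as the "seen before?" test.

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
detail_url = 'https://sc.chinaz.com/jianli/some-detail-page.html'  # hypothetical detail-page URL
if conn.sadd('data_id', detail_url) == 1:
    print('new fingerprint, crawl this detail page')
else:
    print('fingerprint already recorded, skip it')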

Create the spider file:

  • cd project_name (enter the project directory)
  • scrapy genspider spider_name start_url (any custom name works for the spider)
    • (e.g. scrapy genspider first www.xxx.com; a project-specific sequence is sketched after this list)
  • After creation, a .py spider file is generated under the spiders folder
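For this project the full sequence would look roughly as follows; the project name zlsDemo2Pro and spider name jianli are taken from the code below, while the start domain is only an assumption (start_urls is edited by hand afterwards anyway):

scrapy startproject zlsDemo2Pro
cd zlsDemo2Pro
scrapy genspider jianli sc.chinaz.com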

Open the spider file:

  • the generated file (named after the custom spider name) is located in the spiders folder; edit it as shown below

Spider file

import scrapy
import redis
import random
from ..items import Zlsdemo2ProItem
class JianliSpider(scrapy.Spider):
    name = "jianli"
    # allowed_domains = ["www.xxx.com"]
    start_urls = ["https://sc.chinaz.com/jianli/free.html"]
    # connect to the Redis database
    conn = redis.Redis(
        host='127.0.0.1',
        port=6379,
    )

    def parse(self, response):
        div_list = response.xpath('//*[@id="container"]/div')
        for div in div_list:
            title = div.xpath('./p/a/text()').extract_first()
            detail_url = div.xpath('./p/a/@href').extract_first()
            # print(title, detail_url)
            # sadd returns 1 if the fingerprint (detail URL) is new, 0 if it was already recorded
            ex = self.conn.sadd('data_id', detail_url)
            # build the item object
            item = Zlsdemo2ProItem()
            item['title'] = title  # pass the title to the item
            if ex == 1:  # the scraped data is not yet stored in the database
                print('New data found, collecting......')
                # request meta passing: hand the item (with its title) to parse_detail via meta
                yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})
            else:  # the scraped data is already stored in the database
                print('No new data to collect!')

    def parse_detail(self, response):
        item = response.meta['item']
        download_li = response.xpath('//*[@id="down"]/div[2]/ul/li')
        download_urls = []
        for li in download_li:
            download_url_e = li.xpath('./a/@href').extract_first()
            download_urls.append(download_url_e)
        # pick one of the download mirror links at random
        download_url = random.choice(download_urls)
        item['download_url'] = download_url
        yield item
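With the item pipeline enabled in settings (below) and a local Redis instance running (the spider creates its connection as a class attribute), the spider is started from the project root with the standard Scrapy command:

scrapy crawl jianli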

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class Zlsdemo2ProItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    download_url = scrapy.Field()

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import json

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class Zlsdemo2ProPipeline:
    def process_item(self, item, spider):
        # reuse the Redis connection opened by the spider
        conn = spider.conn
        # redis-py only accepts str/bytes/numbers, so the Item is converted
        # to a dict and serialized to JSON before being pushed
        conn.lpush('data1', json.dumps(dict(item), ensure_ascii=False))
        return item
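A quick way to inspect what has been stored, assuming a local Redis instance, the key names used above (data_id for fingerprints, data1 for items), and that items were pushed as JSON strings as in the pipeline; this is a throwaway sketch, not part of the project code:

import json
import redis

conn = redis.Redis(host='127.0.0.1', port=6379)
print('fingerprints recorded:', conn.scard('data_id'))  # size of the dedup set
for raw in conn.lrange('data1', 0, -1):  # every item pushed by the pipeline
    print(json.loads(raw))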

settings.py

# Scrapy settings for zlsDemo2Pro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "zlsDemo2Pro"

SPIDER_MODULES = ["zlsDemo2Pro.spiders"]
NEWSPIDER_MODULE = "zlsDemo2Pro.spiders"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

LOG_LEVEL = 'ERROR'
# LOG_LEVEL = 'WARNING'

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
# "Accept-Language": "en",
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# "zlsDemo2Pro.middlewares.Zlsdemo2ProSpiderMiddleware": 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# "zlsDemo2Pro.middlewares.Zlsdemo2ProDownloaderMiddleware": 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# "scrapy.extensions.telnet.TelnetConsole": None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "zlsDemo2Pro.pipelines.Zlsdemo2ProPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = "httpcache"
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = "scrapy.extensions.httpcache.FilesystemCacheStorage"

# Set settings whose default value is deprecated to a future-proof value
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
FEED_EXPORT_ENCODING = "utf-8"
