Using Scrapy's CrawlSpider
1. Creating the project
The project here is named scrapyuniversal and is created in the root of the D drive. To create it:
Open cmd, switch to the root of the D drive, and run:
scrapy startproject scrapyuniversal
If the command succeeds, a folder named scrapyuniversal appears in the root of the D drive.
2. Generating the crawl template
Open a command-line window, change into the scrapyuniversal folder just created on the D drive, and run:
scrapy genspider -t crawl china tech.china.com
If this succeeds, a new spider file appears in the spiders directory under scrapyuniversal. We will walk through that file below; the code includes comments.
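For reference, the file that genspider -t crawl generates before any editing looks roughly like the following (the exact template varies slightly by Scrapy version; the allow pattern and the commented extraction lines are the template's placeholders, not project-specific values):

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ChinaSpider(CrawlSpider):
    name = 'china'
    allowed_domains = ['tech.china.com']
    start_urls = ['http://tech.china.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = {}
        #i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        #i['name'] = response.xpath('//div[@id="name"]').extract()
        #i['description'] = response.xpath('//div[@id="description"]').extract()
        return i

The sections below replace these placeholders with the rules and extraction logic for tech.china.com.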
3. Directory structure
scrapyuniversal
│ scrapy.cfg
│ spider.sql
│ start.py
│
└─scrapyuniversal
│ items.py
│ loaders.py
│ middlewares.py
│ pipelines.py
│ settings.py
│ __init__.py
│
├─spiders
│ │ china.py
│ │ __init__.py
│ │
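The tree lists a start.py that the post never shows. Such a launcher is typically just a wrapper around the crawl command so the spider can be started from an IDE; a minimal sketch, with the contents purely assumed:

# start.py -- assumed contents, not shown in the original post;
# runs the china spider without typing the command in a terminal
from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'china'])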
4. china.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from ..items import NewsItem
from ..loaders import ChinaLoader
class ChinaSpider(CrawlSpider):
    name = 'china'
    allowed_domains = ['tech.china.com']
    start_urls = ['http://tech.china.com/articles/']

    # When a link is followed, the response it returns is run through the rules again.
    # The second rule restricts pagination to the first two pages.
    rules = (
        Rule(LinkExtractor(allow=r'article\/.*\.html',
                           restrict_xpaths='//div[@id="left_side"]//div[@class="con_item"]'),
             callback='parse_item'),
        Rule(LinkExtractor(restrict_xpaths="//div[@id='pageStyle']//span[text()<3]")),
    )

    def parse_item(self, response):
        # Plain Item assignment; the resulting output was messy, so ItemLoader is used instead.
        # item = NewsItem()
        # item['title'] = response.xpath("//h1[@id='chan_newsTitle']/text()").extract_first()
        # item['url'] = response.url
        # item['text'] = ''.join(response.xpath("//div[@id='chan_newsDetail']//text()").extract()).strip()
        # # re_first extracts the timestamp with a regular expression
        # item['datetime'] = response.xpath("//div[@id='chan_newsInfo']/text()").re_first(r'(\d+-\d+-\d+\s\d+:\d+:\d+)')
        # item['source'] = response.xpath('//div[@id="chan_newsInfo"]/text()').re_first("来源: (.*)").strip()
        # item['website'] = "中华网"
        # yield item
        loader = ChinaLoader(item=NewsItem(), response=response)
        loader.add_xpath('title', '//h1[@id="chan_newsTitle"]/text()')
        loader.add_value('url', response.url)
        loader.add_xpath('text', '//div[@id="chan_newsDetail"]//text()')
        loader.add_xpath('datetime', '//div[@id="chan_newsInfo"]/text()', re=r'(\d+-\d+-\d+\s\d+:\d+:\d+)')
        loader.add_xpath('source', '//div[@id="chan_newsInfo"]/text()', re='来源:(.*)')
        loader.add_value('website', '中华网')
        yield loader.load_item()
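With the spider in place, it can be run from the project root (the directory containing scrapy.cfg); adding -o writes the loaded items straight to a feed file (the output filename below is just an example):

scrapy crawl china -o china.json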
5. loaders.py
#!/usr/bin/env python
# encoding: utf-8
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, Join, Compose


class NewsLoader(ItemLoader):
    """
    Define a common output processor, TakeFirst.
    TakeFirst: take the first non-empty element of an iterable,
    equivalent to the extract_first() used with the plain Item above.
    """
    default_output_processor = TakeFirst()


class ChinaLoader(NewsLoader):
    """
    Compose's first argument is Join: join the list into a single string.
    Compose's second argument is a lambda that post-processes that string.
    """
    text_out = Compose(Join(), lambda s: s.strip())
    source_out = Compose(Join(), lambda s: s.strip())
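To make the processor chain concrete, here is what Compose(Join(), lambda s: s.strip()) does to a list of extracted strings (a throwaway snippet for illustration, not part of the project):

from scrapy.loader.processors import Join, Compose

clean = Compose(Join(), lambda s: s.strip())
print(clean(['  hello', 'world  ']))   # Join produces '  hello world  ', strip() -> 'hello world'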
6. items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
from scrapy import Field, Item


class NewsItem(Item):
    # Title
    title = Field()
    # Link
    url = Field()
    # Body text
    text = Field()
    # Publication time
    datetime = Field()
    # Source
    source = Field()
    # Site name, assigned the literal value 中华网
    website = Field()
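NewsItem behaves like a dict restricted to the declared fields, which is why the commented-out code in china.py can assign to item['title'] and so on; a quick standalone sketch:

from scrapyuniversal.items import NewsItem

item = NewsItem()
item['title'] = 'example'     # OK: title is a declared Field
# item['author'] = 'x'        # would raise KeyError: author is not a declared field
print(dict(item))             # {'title': 'example'}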
7. Middleware changes: random User-Agent selection
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
import random


class ProcessHeaderMidware():
    """process request: add request info"""

    def __init__(self):
        self.USER_AGENT_LIST = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]

    def process_request(self, request, spider):
        """
        Pick a random entry from the list and use it as the User-Agent.
        """
        ua = random.choice(self.USER_AGENT_LIST)
        spider.logger.info(msg='now entering download midware')
        if ua:
            request.headers['User-Agent'] = ua
            # Add desired logging message here.
            spider.logger.info(u'User-Agent is : {} {}'.format(request.headers.get('User-Agent'), request))
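Hard-coding the list in __init__ works, but a downloader middleware can also pull its configuration from settings via from_crawler. A variant along those lines might look like this (USER_AGENT_LIST as a settings key is an assumption, not something the post defines):

import random


class RandomUserAgentMiddleware:
    """Sketch: downloader middleware that reads its User-Agent pool from settings."""

    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # USER_AGENT_LIST would be a custom list setting in settings.py (assumption)
        return cls(crawler.settings.getlist('USER_AGENT_LIST'))

    def process_request(self, request, spider):
        if self.user_agents:
            request.headers['User-Agent'] = random.choice(self.user_agents)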
8. settings.py
# -*- coding: utf-8 -*-

# Scrapy settings for scrapyuniversal project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'scrapyuniversal'

SPIDER_MODULES = ['scrapyuniversal.spiders']
NEWSPIDER_MODULE = 'scrapyuniversal.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapyuniversal (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'scrapyuniversal.middlewares.ScrapyuniversalSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'scrapyuniversal.middlewares.ScrapyuniversalDownloaderMiddleware': 543,
#}
DOWNLOADER_MIDDLEWARES = {
    'scrapyuniversal.middlewares.ProcessHeaderMidware': 543,
}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'scrapyuniversal.pipelines.ScrapyuniversalPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

HTTP_PROXY = "127.0.0.1:5000"  # replace with the proxy you need
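HTTP_PROXY is defined here, but the post never shows the middleware that consumes it. If one were added, it would typically set request.meta['proxy'] from that setting, along these lines (this middleware is an assumption for illustration, not part of the post's code):

# a hypothetical addition to middlewares.py, not shown in the original project
class HttpProxyFromSettingsMiddleware:
    def __init__(self, proxy):
        self.proxy = proxy

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.get('HTTP_PROXY'))

    def process_request(self, request, spider):
        if self.proxy:
            # Scrapy expects a full URL in request.meta['proxy']
            request.meta['proxy'] = 'http://' + self.proxy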