Python Scrapy Series (Part 2)
1. Create the project (run in cmd)
scrapy startproject xxx
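This scaffolds a new project for you. Roughly (assuming the project is named test_scrapy, which the rest of this post uses), the generated layout looks like:

test_scrapy/
    scrapy.cfg            # deploy configuration
    test_scrapy/          # the project's Python module
        __init__.py
        items.py          # item definitions (step 2)
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings (step 3)
        spiders/          # spiders live here (step 4)
            __init__.py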
2. Write the items.py file
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TestScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    title = scrapy.Field()
    info = scrapy.Field()
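A scrapy.Item behaves like a dict whose keys are restricted to the declared Fields; assigning a key that was never declared raises a KeyError. A quick interactive sketch:

>>> from test_scrapy.items import TestScrapyItem
>>> item = TestScrapyItem(name='Zhang San')
>>> item['title'] = 'Lecturer'
>>> item['age'] = 30
KeyError: 'TestScrapyItem does not support field: age'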
3. Write the settings.py file
# -*- coding: utf-8 -*-

# Scrapy settings for test_scrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'test_scrapy'

SPIDER_MODULES = ['test_scrapy.spiders']
NEWSPIDER_MODULE = 'test_scrapy.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'test_scrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    # 'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'test_scrapy.middlewares.TestScrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'test_scrapy.middlewares.TestScrapyDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'test_scrapy.pipelines.TestScrapyPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
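ITEM_PIPELINES above activates test_scrapy.pipelines.TestScrapyPipeline at priority 300 (lower numbers run first; the valid range is 0-1000), but this post never shows the pipeline itself. A minimal sketch of what pipelines.py could contain — the output filename teacher.json is my assumption for illustration, not from the original project:

# -*- coding: utf-8 -*-
import json


class TestScrapyPipeline(object):
    # Runs once when the spider opens: create the output file
    def open_spider(self, spider):
        self.f = open('teacher.json', 'w', encoding='utf-8')

    # Runs for every item the spider yields: write one JSON line per item
    def process_item(self, item, spider):
        self.f.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    # Runs once when the spider closes: release the file handle
    def close_spider(self, spider):
        self.f.close()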
4. Create the spider (generate a custom spider file from cmd)
scrapy genspider demo www.douyu.com
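genspider takes a spider name and an allowed domain and drops a template into the spiders/ directory. The generated file looks roughly like this:

# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    allowed_domains = ['www.douyu.com']
    start_urls = ['http://www.douyu.com/']

    def parse(self, response):
        pass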
Then fill in the spider logic. The example below is a different spider, itcast, which scrapes a teacher-list page:
import scrapy
from test_scrapy.items import TestScrapyItem


class ItcastSpider(scrapy.Spider):
    # spider name (used by "scrapy crawl")
    name = "itcast"
    # domains the spider is allowed to crawl (domains only, not URLs)
    allowed_domains = ['www.xxxx.cn']
    # URLs the spider starts from
    start_urls = ["http://www.xxxx.cn/channel/teacher.shtml#ajavaee"]

    def parse(self, response):
        # To dump the raw page for inspection:
        # with open("teacher.html", "wb") as f:
        #     f.write(response.body)

        # Use the built-in XPath support to select the root node of every teacher entry
        teacher_list = response.xpath('//div[@class="li_txt"]')

        # Iterate over the node list
        for each in teacher_list:
            # Container for the scraped data
            item = TestScrapyItem()
            # extract() converts the matches to a list of unicode strings;
            # without extract() you get Selector objects instead
            name = each.xpath('./h3/text()').extract()
            title = each.xpath('./h4/text()').extract()
            info = each.xpath('./p/text()').extract()

            item['name'] = name[0]
            item['title'] = title[0]
            item['info'] = info[0]

            yield item
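One caveat with indexing the results: name[0] raises an IndexError whenever the XPath matches nothing for a node. A slightly more defensive variant (not in the original) uses extract_first(), which returns a default value instead of failing:

# extract_first() returns the first match, or the given default when there is none
item['name'] = each.xpath('./h3/text()').extract_first(default='')
item['title'] = each.xpath('./h4/text()').extract_first(default='')
item['info'] = each.xpath('./p/text()').extract_first(default='')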
5. Run
scrapy crawl itcast

The argument is the spider's name attribute ("itcast" in the example above), not the filename.
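If you just want the scraped data in a file and don't need a custom pipeline, Scrapy's built-in feed export can serialize the yielded items directly; for example:

scrapy crawl itcast -o teachers.json
scrapy crawl itcast -o teachers.csv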
OVER!