Python: Scrapy (Part 2)
1. Create the project
scrapy startproject xxx
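For orientation, the generated project looks roughly like this (assuming the project is named test_scrapy, as in the rest of this post; the exact files vary slightly by Scrapy version):

test_scrapy/
    scrapy.cfg            # deploy configuration
    test_scrapy/          # the project's Python module
        __init__.py
        items.py          # item definitions (edited in step 2)
        middlewares.py    # spider/downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings (edited in step 3)
        spiders/          # spiders go here (step 4)
            __init__.py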
2. Write the items file
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TestScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    title = scrapy.Field()
    info = scrapy.Field()
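An Item behaves like a dict with a fixed set of keys, which catches field-name typos early. A quick sketch (the values here are made up):

item = TestScrapyItem(name='Alice')
item['title'] = 'Lecturer'
print(item['name'])      # 'Alice'
item['nickname'] = 'A'   # raises KeyError: field was never declared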
3. Write the settings file
# -*- coding: utf-8 -*-

# Scrapy settings for test_scrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'test_scrapy'

SPIDER_MODULES = ['test_scrapy.spiders']
NEWSPIDER_MODULE = 'test_scrapy.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'test_scrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    # 'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'test_scrapy.middlewares.TestScrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'test_scrapy.middlewares.TestScrapyDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'test_scrapy.pipelines.TestScrapyPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
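Note that ITEM_PIPELINES above enables test_scrapy.pipelines.TestScrapyPipeline, which this post never shows. A minimal sketch of what pipelines.py could contain, writing each item to a JSON-lines file (the filename teachers.jl is an assumption, not from the original project):

import json


class TestScrapyPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('teachers.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # called for every item the spider yields
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()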
4. Go into the spiders directory (generate a spider file from the command line)
scrapy genspider demo www.douyu.com
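genspider writes a skeleton into spiders/demo.py that looks roughly like this (reproduced from the default template; the exact contents vary by Scrapy version):

# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    allowed_domains = ['www.douyu.com']
    start_urls = ['http://www.douyu.com/']

    def parse(self, response):
        pass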
Then fill the skeleton in with the actual spider code (this post's example spider is named itcast and targets a different site):
import scrapy
from test_scrapy.items import TestScrapyItem


class ItcastSpider(scrapy.Spider):
    # the spider's name, used by "scrapy crawl"
    name = "itcast"
    # domains the spider is allowed to crawl (domain only, no scheme)
    allowed_domains = ['www.xxxx.cn']
    # the URL(s) the spider starts from
    start_urls = ["http://www.xxxx.cn/channel/teacher.shtml#ajavaee"]

    def parse(self, response):
        # with open("teacher.html", "wb") as f:   # response.body is bytes
        #     f.write(response.body)

        # use the built-in XPath support to select the root node of each teacher
        teacher_list = response.xpath('//div[@class="li_txt"]')

        # iterate over the node list
        for each in teacher_list:
            # one item per teacher
            item = TestScrapyItem()
            # extract() converts the matched results into a list of unicode
            # strings; without extract() you get selector objects, not text
            name = each.xpath('./h3/text()').extract()
            title = each.xpath('./h4/text()').extract()
            info = each.xpath('./p/text()').extract()

            item['name'] = name[0]
            item['title'] = title[0]
            item['info'] = info[0]

            yield item
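One fragile spot in the code above: name[0] raises IndexError whenever an XPath matches nothing. Selector lists also offer extract_first() (get() in newer Scrapy versions), which returns None or a default instead, e.g.:

item['name'] = each.xpath('./h3/text()').extract_first(default='')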
5. Run
scrapy crawl itcast
(The argument after crawl is the spider's name attribute, not the file name.)
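To save the results without writing a pipeline at all, the crawl command can export items directly with the -o flag; the output format is inferred from the file extension:

scrapy crawl itcast -o teachers.json
scrapy crawl itcast -o teachers.csv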
OVER!