1. Create the project

scrapy startproject xxx
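
Running the command generates a project skeleton. Assuming the project is named test_scrapy, as in the files below, the layout looks like this:

test_scrapy/
    scrapy.cfg            # deploy configuration file
    test_scrapy/          # the project's Python module
        __init__.py
        items.py          # item definitions (step 2)
        middlewares.py    # spider / downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings (step 3)
        spiders/          # spider code goes here (step 4)
            __init__.py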

2. Write the items file

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TestScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    title = scrapy.Field()
    info = scrapy.Field()
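
A declared Item behaves like a dict, but only the declared fields can be set; assigning an undeclared field raises a KeyError. A quick illustration (the values here are hypothetical):

item = TestScrapyItem(name='Tom')
item['title'] = 'lecturer'   # OK: 'title' is declared above
# item['age'] = 30           # KeyError: 'age' is not a declared field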

3. Write the settings file

# -*- coding: utf-8 -*-

# Scrapy settings for test_scrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'test_scrapy'

SPIDER_MODULES = ['test_scrapy.spiders']
NEWSPIDER_MODULE = 'test_scrapy.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'test_scrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
# ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    # 'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'test_scrapy.middlewares.TestScrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'test_scrapy.middlewares.TestScrapyDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'test_scrapy.pipelines.TestScrapyPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
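
ITEM_PIPELINES above enables test_scrapy.pipelines.TestScrapyPipeline, which this post does not show. As a minimal sketch only (what the pipeline does is up to you; the output file items.jl here is just an example), pipelines.py could write each item out as one JSON line:

# -*- coding: utf-8 -*-
import json


class TestScrapyPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('items.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # called for every item the spider yields
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()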

4. Go into the spiders directory (create a custom spider file from the command line)

scrapy genspider demo www.douyu.com
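
genspider creates demo.py under the spiders directory. With the default basic template, the generated skeleton looks roughly like this (exact output varies slightly by Scrapy version):

# -*- coding: utf-8 -*-
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    allowed_domains = ['www.douyu.com']
    start_urls = ['http://www.douyu.com/']

    def parse(self, response):
        pass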

Write the spider code

import scrapy
from test_scrapy.items import TestScrapyItem


class ItcastSpider(scrapy.Spider):
    # spider name
    name = "itcast"
    # domains the spider is allowed to crawl (domains only, no scheme or path)
    allowed_domains = ['www.xxxx.cn']
    # URL(s) the spider starts from
    start_urls = ["http://www.xxxx.cn/channel/teacher.shtml#ajavaee"]

    def parse(self, response):
        # with open("teacher.html", "wb") as f:
        #     f.write(response.body)

        # use the built-in XPath support to select the root node of each teacher
        teacher_list = response.xpath('//div[@class="li_txt"]')

        # iterate over the node list
        for each in teacher_list:
            # container for the scraped data
            item = TestScrapyItem()
            # extract() converts the matched results into a list of unicode strings;
            # without extract() the result is a list of selector objects
            name = each.xpath('./h3/text()').extract()
            title = each.xpath('./h4/text()').extract()
            info = each.xpath('./p/text()').extract()

            item['name'] = name[0]
            item['title'] = title[0]
            item['info'] = info[0]

            yield item
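
Note that name[0] raises an IndexError whenever the XPath matches nothing. extract_first() (or its alias .get() in newer Scrapy versions) returns None, or a default you choose, instead:

item['name'] = each.xpath('./h3/text()').extract_first(default='')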

5. Run the spider

scrapy crawl xxxx
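
Here xxxx is the spider's name attribute, not the file name, so for this example it is scrapy crawl itcast. To save the scraped items directly, Scrapy's feed export flag -o can be added (output file names here are just examples):

scrapy crawl itcast -o teachers.json
scrapy crawl itcast -o teachers.csv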

OVER!
