1. Install Scrapy
    1.   

      conda install scrapy
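
    If Anaconda is not being used, installing with pip works just as well:

      pip install scrapy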
  2. Generate a Scrapy project
    1.   

      scrapy startproject douban
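
    This creates the standard Scrapy project skeleton; the files edited in the later steps live here (newer Scrapy versions also generate a middlewares.py):

      douban/
          scrapy.cfg            # deploy / project configuration
          douban/
              __init__.py
              items.py          # item definitions (see the DoubanItem sketch below)
              pipelines.py      # item pipelines (step 5)
              settings.py       # project settings (step 3)
              spiders/          # spider modules (step 4)
                  __init__.py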
  3. The settings file
    1.   

      # -*- coding: utf-8 -*-
      
      # Scrapy settings for douban project
      #
      # For simplicity, this file contains only settings considered important or
      # commonly used. You can find more settings consulting the documentation:
      #
      # http://doc.scrapy.org/en/latest/topics/settings.html
      # http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
      # http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
      
      BOT_NAME = 'douban'
      
      SPIDER_MODULES = ['douban.spiders']
      NEWSPIDER_MODULE = 'douban.spiders'
      
      # Crawl responsibly by identifying yourself (and your website) on the user-agent
      #USER_AGENT = 'douban (+http://www.yourdomain.com)'
      # USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0'
      USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
      
      # Obey robots.txt rules
      ROBOTSTXT_OBEY = False
      
      # Configure maximum concurrent requests performed by Scrapy (default: 16)
      #CONCURRENT_REQUESTS = 32
      
      # Configure a delay for requests for the same website (default: 0)
      # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
      # See also autothrottle settings and docs
      #DOWNLOAD_DELAY = 3
      # The download delay setting will honor only one of:
      #CONCURRENT_REQUESTS_PER_DOMAIN = 16
      #CONCURRENT_REQUESTS_PER_IP = 16
      
      # Disable cookies (enabled by default)
      #COOKIES_ENABLED = False
      
      # Disable Telnet Console (enabled by default)
      #TELNETCONSOLE_ENABLED = False
      
      # Override the default request headers:
      #DEFAULT_REQUEST_HEADERS = {
      #    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
      #    'Accept-Language': 'en',
      #}
      
      # Enable or disable spider middlewares
      # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
      #SPIDER_MIDDLEWARES = {
      #    'douban.middlewares.DoubanSpiderMiddleware': 543,
      #}
      
      # Enable or disable downloader middlewares
      # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
      #DOWNLOADER_MIDDLEWARES = {
      #    'douban.middlewares.MyCustomDownloaderMiddleware': 543,
      #}
      
      # Enable or disable extensions
      # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
      #EXTENSIONS = {
      #    'scrapy.extensions.telnet.TelnetConsole': None,
      #}
      
      # Configure item pipelines
      # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
      ITEM_PIPELINES = {
          'douban.pipelines.DoubanPipeline': 300,
      }
      
      # Enable and configure the AutoThrottle extension (disabled by default)
      # See http://doc.scrapy.org/en/latest/topics/autothrottle.html
      #AUTOTHROTTLE_ENABLED = True
      # The initial download delay
      #AUTOTHROTTLE_START_DELAY = 5
      # The maximum download delay to be set in case of high latencies
      #AUTOTHROTTLE_MAX_DELAY = 60
      # The average number of requests Scrapy should be sending in parallel to
      # each remote server
      #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
      # Enable showing throttling stats for every response received:
      #AUTOTHROTTLE_DEBUG = False
      
      # Enable and configure HTTP caching (disabled by default)
      # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
      #HTTPCACHE_ENABLED = True
      #HTTPCACHE_EXPIRATION_SECS = 0
      #HTTPCACHE_DIR = 'httpcache'
      #HTTPCACHE_IGNORE_HTTP_CODES = []
      #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
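
    Only three things differ from the generated defaults: a browser USER_AGENT, ROBOTSTXT_OBEY = False so robots.txt is ignored, and the ITEM_PIPELINES entry that enables the pipeline written in step 5.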
  4. Write the spider file, located under spiders/
    1.   

      import scrapy
      
      from douban.items import DoubanItem
      
      
      class MeijuSpider(scrapy.Spider):
          # 'name' (not the class name) is what "scrapy crawl" refers to
          name = 'doubanmovie'
          allowed_domains = ['book.douban.com']
          start_urls = ['https://book.douban.com/tag/%E4%BA%92%E8%81%94%E7%BD%91']
          number = 1  # counts how many listing pages have been parsed
      
          def parse(self, response):
              self.number = self.number + 1
              # each book on the tag page is an <li> under #subject_list
              objs = response.xpath('//*[@id="subject_list"]/ul/li')
              print(len(objs))
              for i in objs:
                  item = DoubanItem()
                  item['name'] = i.xpath('./div[2]/h2/a/text()').extract_first() or ''
                  item['score'] = i.xpath('./div[2]/div[2]/span[2]/text()').extract_first() or ''
                  item['author'] = i.xpath('./div[2]/div[1]/text()').extract_first() or ''
                  item['describe'] = i.xpath('./div[2]/p/text()').extract_first() or ''
                  yield item
              # the span holding the "next page" link sits at a different index
              # on later pages, hence the switch once ten-odd pages have been parsed
              next_page = response.xpath('//*[@id="subject_list"]/div[2]/span[4]/a/@href').extract()
              print(self.number)
              if self.number > 11:
                  next_page = response.xpath('//*[@id="subject_list"]/div[2]/span[5]/a/@href').extract()
              print(next_page)
              if next_page:
                  next_link = next_page[0]
                  print("https://book.douban.com" + next_link)
                  yield scrapy.Request("https://book.douban.com" + next_link, callback=self.parse)
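
    The spider imports DoubanItem from douban/items.py, which the post does not show. A minimal sketch matching the four fields used above would look like this:

      import scrapy
      
      
      class DoubanItem(scrapy.Item):
          # the four fields filled in by the spider
          name = scrapy.Field()
          score = scrapy.Field()
          author = scrapy.Field()
          describe = scrapy.Field()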
  5. Write the pipeline file
    1.   

      # -*- coding: utf-8 -*-
      
      # Define your item pipelines here
      #
      # Don't forget to add your pipeline to the ITEM_PIPELINES setting
      # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
      
      
      class DoubanPipeline(object):
      
          def process_item(self, item, spider):
              # strip newlines/tabs and swap ASCII commas for full-width ones so
              # each item can be written as a single comma-separated line
              def clean(value):
                  return (str(value).replace("\n", "").replace("\t", "")
                          .replace(",", "，").replace("\r", "").strip())
      
              with open('douban.txt', 'a', encoding='utf-8') as file:
                  file.write(clean(item['name']) + ',' + clean(item['score']) + ','
                             + clean(item['author']) + ',' + clean(item['describe']) + "\n")
              # returning the item lets any later pipelines keep processing it
              return item
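
    The pipeline above appends one comma-separated line per item to douban.txt. As an alternative that needs no pipeline code at all, Scrapy's built-in feed export can write the scraped items directly, for example:

      scrapy crawl doubanmovie -o douban.csv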
  6. Run the crawler
    1. Write a start.py file

      from scrapy.cmdline import execute
      execute("scrapy crawl doubanmovie".split())
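
    Running start.py is equivalent to executing "scrapy crawl doubanmovie" from a shell in the project root (the directory containing scrapy.cfg); wrapping it in a script just makes it easy to launch and debug from an IDE.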
    2. Result