I had been studying how to crawl sites protected by slider captchas for a while. The closest workable approach is to simulate human mouse movement and then scrape the pages after logging in. The catch: I could never get hold of the original, un-gapped background image of the captcha, so I couldn't use OpenCV to work out how far the slider needs to move.
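For reference, this is a minimal sketch of that OpenCV idea, assuming you did manage to obtain both the gapped background image and the slider piece (the part I couldn't get); the file names below are hypothetical:

import cv2

def slider_offset(background_path, piece_path):
    # Read both images as grayscale so color differences don't skew the match.
    bg = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)
    piece = cv2.imread(piece_path, cv2.IMREAD_GRAYSCALE)
    # Slide the piece over the background and keep the best-matching position.
    result = cv2.matchTemplate(bg, piece, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc[0]  # x coordinate of the gap = distance to drag

# print(slider_offset('captcha_bg.png', 'captcha_piece.png'))  # hypothetical files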

1. Clever me, I first tried cutting a screenshot of the slider image, cleaning it up in Photoshop, and then comparing the two images (only the final verification step was left).

2. Selenium + Scrapy: for the login step, move the mouse yourself to pass the slider verification; for the pages after login, crawl them as static pages with Scrapy.

Having explained the idea, let's verify its feasibility with a real example, using my own blog as the test subject: "https://i.cnblogs.com".

First, let me show you the end result up front, so you can see I'm not wasting your time.

1. Code and configuration

1.1_items.py

import scrapy

class BokeItem(scrapy.Item):
    name = scrapy.Field()      # post title
    summary = scrapy.Field()   # post summary
    read_num = scrapy.Field()  # read count
    comment = scrapy.Field()   # comment count

1.2_middlewares.py (set the proxy)

class BokeSpiderMiddleware(object):
    # Despite the name, this is used as a downloader middleware: it is
    # registered under DOWNLOADER_MIDDLEWARES in settings.py and attaches a
    # proxy to every outgoing request.
    def __init__(self, ip=''):
        self.ip = ip

    def process_request(self, request, spider):
        print('http://10.240.252.16:911')
        request.meta['proxy'] = 'http://10.240.252.16:911'
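If you have more than one proxy available, a small variation (just a sketch; the pool addresses below are made-up placeholders) is to pick one at random for each request:

import random

PROXY_POOL = [
    'http://10.240.252.16:911',  # placeholder addresses, not real proxies
    'http://10.240.252.17:911',
]

class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        # Attach a randomly chosen proxy to each outgoing request.
        request.meta['proxy'] = random.choice(PROXY_POOL)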

1.3_pipelines.py (store to MongoDB)

import pymongo
from scrapy.item import Item

class BokePipeline(object):
    def process_item(self, item, spider):
        return item

class MongoDBPipeline(object):
    # Store every scraped item in MongoDB.
    @classmethod
    def from_crawler(cls, crawler):
        cls.DB_URL = crawler.settings.get("MONGO_DB_URL", 'mongodb://localhost:27017/')
        cls.DB_NAME = crawler.settings.get("MONGO_DB_NAME", 'scrapy_data')
        return cls()

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.DB_URL)
        self.db = self.client[self.DB_NAME]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # One collection per spider, named after the spider.
        collection = self.db[spider.name]
        post = dict(item) if isinstance(item, Item) else item
        collection.insert_one(post)  # insert() was removed in pymongo 4
        return item

1.4_settings.py (key settings)

import random

BOT_NAME = 'Boke'

SPIDER_MODULES = ['Boke.spiders']
NEWSPIDER_MODULE = 'Boke.spiders'

MONGO_DB_URL = 'mongodb://localhost:27017/'
MONGO_DB_NAME = 'myboke'

USER_AGENT = [  # pool of browser User-Agent strings
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]

FEED_EXPORT_FIELDS = ['name', 'summary', 'read_num', 'comment']  # column order for -o exports

ROBOTSTXT_OBEY = False
CONCURRENT_REQUESTS = 10
DOWNLOAD_DELAY = 0.5
COOKIES_ENABLED = False

# Without explicit headers the crawl failed with:
# Crawled (400) <GET https://www.cnblogs.com/eilinge/> (referer: None)
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': random.choice(USER_AGENT),
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}

DOWNLOADER_MIDDLEWARES = {
    #'Boss.middlewares.BossDownloaderMiddleware': 543,
    # HttpProxyMiddleware picks up the request.meta['proxy'] set in middlewares.py
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 543,
    'Boke.middlewares.BokeSpiderMiddleware': 123,
}

ITEM_PIPELINES = {
    'Boke.pipelines.MongoDBPipeline': 300,
}
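One caveat: random.choice(USER_AGENT) in DEFAULT_REQUEST_HEADERS is evaluated once when the settings load, so every request in a run shares the same User-Agent. If you want a fresh one per request, a small downloader middleware can do it; this is only a sketch (the class name and the priority 400 are my own choices), and it would also need an entry in DOWNLOADER_MIDDLEWARES:

import random

from Boke.settings import USER_AGENT

class RandomUserAgentMiddleware(object):
    # Register as e.g. 'Boke.middlewares.RandomUserAgentMiddleware': 400
    def process_request(self, request, spider):
        # Pick a different User-Agent for every outgoing request.
        request.headers['User-Agent'] = random.choice(USER_AGENT)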

1.5_spiders/boke.py

# -*- coding: utf-8 -*-
import re
import time

import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

from Boke.items import BokeItem

chrome_options = Options()
driver = webdriver.Chrome()

class BokeSpider(scrapy.Spider):
    name = 'boke'
    allowed_domains = ['www.cnblogs.com', 'passport.cnblogs.com']
    start_urls = ['https://passport.cnblogs.com/user/signin']

    def start_requests(self):
        # Open the login page with Selenium and fill in the credentials
        # (replace 'xxx' with your own username and password).
        driver.get(self.start_urls[0])
        time.sleep(3)
        driver.find_element_by_id('input1').send_keys(u'xxx')  # username
        time.sleep(3)
        driver.find_element_by_id('input2').send_keys(u'xxx')  # password
        time.sleep(3)
        driver.find_element_by_id('signin').click()
        # Drag the slider by hand within these 20 seconds to pass verification.
        time.sleep(20)

        # Hand the post-login URL back to Scrapy for normal crawling.
        new_url = driver.current_url
        print(new_url)
        yield scrapy.Request(new_url)

    def parse(self, response):
        sels = response.css('div.day')
        for sel in sels:
            bokeitem = BokeItem()
            bokeitem['name'] = sel.css('div.postTitle>a ::text').extract()[0]
            bokeitem['summary'] = sel.css('div.c_b_p_desc ::text').extract()[0]
            summary = sel.css('div.postDesc ::text').extract()[0]
            # postDesc contains "阅读(123) 评论(4)"; the two bracketed numbers
            # are the read count and the comment count.
            bokeitem['read_num'] = re.findall(r'\((\d{0,5})', summary)[0]
            bokeitem['comment'] = re.findall(r'\((\d{0,5})', summary)[1]
            print(bokeitem)
            yield bokeitem
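Run it with: scrapy crawl boke -o boke.csv. One thing to be aware of: with COOKIES_ENABLED = False the follow-up request carries no login session; the blog list page happens to be publicly readable, which is why this still works. If the target page did require the session, one option (a sketch, not part of the code above) is to copy the Selenium cookies onto the Scrapy request and enable cookies in settings.py:

# Sketch: forward the Selenium session cookies to Scrapy
# (only useful with COOKIES_ENABLED = True in settings.py).
cookies = {c['name']: c['value'] for c in driver.get_cookies()}
yield scrapy.Request(new_url, cookies=cookies)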
