1. Create the project

scrapy startproject jd

2. Generate the spider

scrapy genspider jingdong zhaopin.jd.com

3. Install pymysql

pip install pymysql

4. The settings.py file: mainly global settings, including the database connection information

# -*- coding: utf-8 -*-

# Scrapy settings for jd project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jd (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jd.middlewares.JdSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jd.middlewares.JdDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jd.pipelines.JdPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
# Database host
MYSQL_HOST = 'localhost'
# Database user
MYSQL_USER = 'root'
# Database password
MYSQL_PASSWORD = 'yang156122'
# Database port
MYSQL_PORT = 3306
# Database name
MYSQL_DBNAME = 'test'
# Database charset
MYSQL_CHARSET = 'utf8'
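
The pipeline in step 7 expects a jingdong table to already exist in the test database. A minimal one-off sketch for creating it with pymysql, using the connection settings above (the column names come from the item fields, but the column types and lengths are assumptions, not taken from the original post):

# create_table.py -- run once before starting the crawl
import pymysql

conn = pymysql.connect(host='localhost', user='root', password='yang156122',
                       port=3306, database='test', charset='utf8')
try:
    with conn.cursor() as cursor:
        # Column types are assumed; adjust the lengths to fit the real data.
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS jingdong (
                id INT AUTO_INCREMENT PRIMARY KEY,
                appTime VARCHAR(50),
                applicantErp VARCHAR(100),
                formatPublishTime VARCHAR(50),
                jobType VARCHAR(50),
                positionName VARCHAR(200),
                publishTime DATETIME,
                positionNameOpen VARCHAR(200),
                qualification TEXT
            ) DEFAULT CHARSET=utf8
        """)
    conn.commit()
finally:
    conn.close()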

5. The items.py file: define the item fields (matching the database columns)

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class JdItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    appTime = scrapy.Field()
    applicantErp = scrapy.Field()
    formatPublishTime = scrapy.Field()
    jobType = scrapy.Field()
    positionName = scrapy.Field()
    positionNameOpen = scrapy.Field()
    publishTime = scrapy.Field()
    qualification = scrapy.Field()

6. The jingdong.py file: the spider that fetches the job data

# -*- coding: utf-8 -*-
import scrapy
import logging
import json

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        # the endpoint returns JSON: a list of job dicts
        content = response.body.decode()
        content = json.loads(content)

        ########## strip empty values from the dicts in the list ##########
        for i in range(len(content)):
            # list(content[i].keys()) gets the keys of the current dict
            # for key in list(content[i].keys()):   # content[i] is a dict
            #     if not content[i].get(key):       # look up the value by key
            #         del content[i][key]           # drop empty entries
            yield content[i]

        # for i in range(len(content)):
        #     logging.warning(content[i])

        self.pageNum = self.pageNum + 1
        if self.pageNum <= 355:
            next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(self.pageNum)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
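
With the spider and pipeline in place, the crawl can be started from the project root:

scrapy crawl jingdong

Warnings and errors go to jingdong1.log, as configured by LOG_FILE and LOG_LEVEL in settings.py.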

7. The pipelines.py file: clean and process the scraped data, including writing it to the database

  Compared with the tencent example, the main addition here is the timestamp handling.
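
The publishTime field appears to be a millisecond epoch timestamp, so the pipeline keeps only the first ten digits (the seconds) before formatting it. In isolation the conversion looks like this (the timestamp value below is made up for illustration):

import time

publish_time = 1545724800000                      # hypothetical millisecond timestamp
seconds = int(str(publish_time)[:10])             # keep the first 10 digits = seconds
formatted = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(seconds))
print(formatted)                                  # local-time string, e.g. "2018-12-25 ..."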

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import logging
import time
import copy

from pymysql import cursors
from twisted.enterprise import adbapi


class JdPipeline(object):
    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, loaded only once; initializes the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # cursor type: return rows as dicts
            cursorclass=cursors.DictCursor
        )
        # create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        myItem = {}
        myItem["appTime"] = item["appTime"]
        myItem["applicantErp"] = item["applicantErp"]
        myItem["formatPublishTime"] = item["formatPublishTime"]
        myItem["jobType"] = item["jobType"]
        myItem["positionName"] = item["positionName"]
        # convert the millisecond timestamp to a formatted local-time string
        publishTime = item["publishTime"]
        publishTime = time.localtime(int(str(publishTime)[:10]))
        myItem["publishTime"] = time.strftime("%Y-%m-%d %H:%M:%S", publishTime)
        myItem["positionNameOpen"] = item["positionNameOpen"]
        myItem["qualification"] = item["qualification"]
        logging.warning(item)
        # deep copy the item --- this avoids duplicated rows caused by the
        # asynchronous insert reusing the same object
        asynItem = copy.deepcopy(myItem)
        # hand the SQL work off to the connection pool
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # if the SQL execution raises an error, handle_error() is called automatically
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    # build and run the SQL statement
    def insert_into(self, cursor, item):
        sql = "INSERT INTO jingdong (appTime,applicantErp,formatPublishTime,jobType,positionName,publishTime,positionNameOpen,qualification) " \
              "VALUES ('{}','{}','{}','{}','{}','{}','{}','{}')".format(
                  item['appTime'], item['applicantErp'], item['formatPublishTime'], item['jobType'],
                  item['positionName'], item['publishTime'], item['positionNameOpen'], item['qualification'])
        # execute the SQL statement
        cursor.execute(sql)

    # error handler
    def handle_error(self, failure, item, spider):
        # print the error information
        print("failure", failure)

And that's a perfect finish!
