1. Create the project

scrapy startproject jd

2. Create the spider (genspider takes both a name and a domain)

scrapy genspider jingdong zhaopin.jd.com

3. Install pymysql

pip install pymysql

4. settings.py — defines the global settings, including the database connection details

# -*- coding: utf-8 -*-

# Scrapy settings for jd project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jd (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jd.middlewares.JdSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jd.middlewares.JdDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jd.pipelines.JdPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
# Database host
MYSQL_HOST = 'localhost'
# Database user
MYSQL_USER = 'root'
# Database password
MYSQL_PASSWORD = 'yang156122'
# Database port
MYSQL_PORT = 3306
# Database name
MYSQL_DBNAME = 'test'
# Database charset
MYSQL_CHARSET = 'utf8'
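The MYSQL_* settings above are consumed later by the pipeline's from_settings(). A minimal stdlib sketch of that mapping — the plain dict here stands in for Scrapy's Settings object, so nothing below actually opens a connection:

```python
# Stand-in for Scrapy's Settings object: a plain dict with the same keys.
settings = {
    'MYSQL_HOST': 'localhost',
    'MYSQL_USER': 'root',
    'MYSQL_PASSWORD': 'yang156122',
    'MYSQL_PORT': 3306,
    'MYSQL_DBNAME': 'test',
    'MYSQL_CHARSET': 'utf8',
}

# Map the settings into the keyword arguments pymysql expects.
db_params = dict(
    host=settings['MYSQL_HOST'],
    user=settings['MYSQL_USER'],
    password=settings['MYSQL_PASSWORD'],
    port=settings['MYSQL_PORT'],
    database=settings['MYSQL_DBNAME'],
    charset=settings['MYSQL_CHARSET'],
)
print(db_params['database'])  # → test
```

Keeping the credentials in settings.py rather than hard-coding them in the pipeline means the same pipeline class works unchanged against any database.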

5. items.py — defines the item fields (matching the database columns)

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class JdItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    appTime = scrapy.Field()
    applicantErp = scrapy.Field()
    formatPublishTime = scrapy.Field()
    jobType = scrapy.Field()
    positionName = scrapy.Field()
    positionNameOpen = scrapy.Field()
    publishTime = scrapy.Field()
    qualification = scrapy.Field()
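The eight fields mirror the keys of each JSON record the job_list endpoint returns. A plain-dict sketch of one such record (the values are made up for illustration; a real JdItem accepts the same keys):

```python
# A made-up record with the same eight keys the item declares.
record = {
    "appTime": "2018-12-27",
    "applicantErp": "zhangsan",
    "formatPublishTime": "2018-12-27",
    "jobType": "Technology",
    "positionName": "Java Engineer",
    "positionNameOpen": "Java Engineer",
    "publishTime": 1545898801000,   # millisecond epoch, converted later in the pipeline
    "qualification": "Bachelor degree or above",
}

# The declared JdItem field names, for comparison.
fields = {"appTime", "applicantErp", "formatPublishTime", "jobType",
          "positionName", "positionNameOpen", "publishTime", "qualification"}
assert set(record) == fields
```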

6. jingdong.py — the spider that crawls the data

# -*- coding: utf-8 -*-
import scrapy
import logging
import json

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        content = response.body.decode()
        content = json.loads(content)
        ########## strip empty values from the dicts in the list ##########
        for i in range(len(content)):
            # list(content[i].keys()) gets the keys of the current dict
            # for key in list(content[i].keys()):  # content[i] is a dict
            #     if not content[i].get(key):      # look up the value for this key
            #         del content[i][key]          # drop entries with empty values
            yield content[i]
        # for i in range(len(content)):
        #     logging.warning(content[i])
        self.pageNum = self.pageNum + 1
        if self.pageNum <= 355:
            next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(self.pageNum)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
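The two things parse() does — decoding the JSON body and building the next page URL — can be exercised without Scrapy. A stdlib sketch with a hypothetical response body (the real endpoint returns a JSON list of job dicts):

```python
import json

# Hypothetical response body standing in for response.body.
body = b'[{"positionName": "Java Engineer", "publishTime": 1545898801000}]'

# What parse() does first: decode the bytes and load the JSON list.
content = json.loads(body.decode())
assert isinstance(content, list)

# How the next-page URL is built; pages 2..355 are requested the same way.
pageNum = 1
pageNum = pageNum + 1
next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(pageNum)
print(next_url)  # → http://zhaopin.jd.com/web/job/job_list?page=2
```

Because the endpoint serves JSON rather than HTML, the spider yields the parsed dicts directly instead of using selectors.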

7. pipelines.py — cleans and processes the scraped data, and writes it to the database

  Compared with the tencent example, the main addition here is the timestamp handling.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import logging
from pymysql import cursors
from twisted.enterprise import adbapi
import time
import copy


class JdPipeline(object):
    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, run only once: initialize the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # cursor type
            cursorclass=cursors.DictCursor
        )
        # create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        myItem = {}
        myItem["appTime"] = item["appTime"]
        myItem["applicantErp"] = item["applicantErp"]
        myItem["formatPublishTime"] = item["formatPublishTime"]
        myItem["jobType"] = item["jobType"]
        myItem["positionName"] = item["positionName"]
        # timestamp conversion: keep the first 10 digits (the seconds part)
        # of the millisecond epoch value, then format it
        publishTime = item["publishTime"]
        publishTime = time.localtime(int(str(publishTime)[:10]))
        myItem["publishTime"] = time.strftime("%Y-%m-%d %H:%M:%S", publishTime)
        myItem["positionNameOpen"] = item["positionNameOpen"]
        myItem["qualification"] = item["qualification"]
        logging.warning(item)
        # deep-copy the item -- this avoids duplicated rows caused by the
        # asynchronous insert reusing the same mutable object
        asynItem = copy.deepcopy(myItem)
        # hand the insert to the connection pool
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # if the SQL fails, handle_error() is called back automatically
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    def insert_into(self, cursor, item):
        # build the SQL statement
        sql = "INSERT INTO jingdong (appTime,applicantErp,formatPublishTime,jobType,positionName,publishTime,positionNameOpen,qualification) " \
              "VALUES ('{}','{}','{}','{}','{}','{}','{}','{}')".format(
                  item['appTime'], item['applicantErp'], item['formatPublishTime'], item['jobType'],
                  item['positionName'], item['publishTime'], item['positionNameOpen'], item['qualification'])
        # execute it
        cursor.execute(sql)

    def handle_error(self, failure, item, spider):
        # print the error
        print("failure", failure)
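The timestamp trick in process_item() is easy to verify in isolation: the API's publishTime is a 13-digit millisecond epoch, and slicing off the first 10 digits yields plain seconds. A stdlib sketch with a made-up value:

```python
import time

# Hypothetical millisecond epoch value, like the API's publishTime field.
publishTime = 1545898801000

# Keep the first 10 digits (the seconds part) of the 13-digit value,
# then format it the way the pipeline does.
seconds = int(str(publishTime)[:10])
formatted = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(seconds))
print(formatted)  # a local-time string such as "2018-12-27 ..."
```

Note that `publishTime // 1000` does the same thing arithmetically. Separately, the string-formatted INSERT in insert_into() would break (and is injectable) if any field contains a quote; pymysql supports `cursor.execute(sql, params)` with `%s` placeholders, which escapes values safely.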

And that's a wrap!
