Scraping JD (jingdong) job postings into a MySQL database with Python and Scrapy
1. Create the project
scrapy startproject jd
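This generates the standard Scrapy project skeleton (shown here for orientation; the layout comes from Scrapy's own template):

jd/
    scrapy.cfg
    jd/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py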
2. Create the spider
scrapy genspider jingdong zhaopin.jd.com
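genspider drops a minimal spider into jd/spiders/jingdong.py, roughly like the following (the exact template varies slightly between Scrapy versions); step 6 fills it in:

# -*- coding: utf-8 -*-
import scrapy


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/']

    def parse(self, response):
        pass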
3. Install pymysql
pip install pymysql
4. settings.py: global configuration, including the database connection info
# -*- coding: utf-8 -*-

# Scrapy settings for jd project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jd (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jd.middlewares.JdSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jd.middlewares.JdDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jd.pipelines.JdPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
# Database host
MYSQL_HOST = 'localhost'
# Database user
MYSQL_USER = 'root'
# Database password
MYSQL_PASSWORD = 'yang156122'
# Database port
MYSQL_PORT = 3306
# Database name
MYSQL_DBNAME = 'test'
# Database charset
MYSQL_CHARSET = 'utf8'
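The pipeline in step 7 inserts into a table named jingdong in the test database, so that table must exist before the crawl runs. A minimal one-off sketch using pymysql and the connection values above (the column types are assumptions, not taken from the original data; adjust them as needed):

import pymysql

# One-off script: create the target table with the same credentials as settings.py.
# Column types below are assumptions; tune them to the actual JD job data.
conn = pymysql.connect(host='localhost', user='root', password='yang156122',
                       port=3306, database='test', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS jingdong (
            id INT AUTO_INCREMENT PRIMARY KEY,
            appTime VARCHAR(100),
            applicantErp VARCHAR(100),
            formatPublishTime VARCHAR(100),
            jobType VARCHAR(100),
            positionName VARCHAR(255),
            publishTime DATETIME,
            positionNameOpen VARCHAR(255),
            qualification TEXT
        ) DEFAULT CHARSET = utf8
    """)
conn.commit()
conn.close()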
5. items.py: define the fields to be stored in the database
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
#     https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class JdItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    appTime = scrapy.Field()
    applicantErp = scrapy.Field()
    formatPublishTime = scrapy.Field()
    jobType = scrapy.Field()
    positionName = scrapy.Field()
    positionNameOpen = scrapy.Field()
    publishTime = scrapy.Field()
    qualification = scrapy.Field()
6. jingdong.py: the spider that crawls the data
# -*- coding: utf-8 -*-
import json
import logging

import scrapy

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        # The endpoint returns a JSON list of job dicts
        content = response.body.decode()
        content = json.loads(content)

        for i in range(len(content)):
            # Optionally strip empty values from each dict before yielding:
            # for key in list(content[i].keys()):   # content[i] is a dict
            #     if not content[i].get(key):       # value looked up by key
            #         del content[i][key]           # drop empty entries
            yield content[i]

        # Follow the next page until page 355
        self.pageNum = self.pageNum + 1
        if self.pageNum <= 355:
            next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(self.pageNum)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
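With the spider in place, run it from the project root with the standard Scrapy command; because of the LOG_FILE setting above, warnings are written to ./jingdong1.log:

scrapy crawl jingdong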
7. pipelines.py: cleans and processes the scraped data and writes it to the database
Compared with the Tencent example, the main addition here is the timestamp conversion.
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import copy
import logging
import time

from pymysql import cursors
from twisted.enterprise import adbapi


class JdPipeline(object):
    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, called only once; sets up the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # Use a dict cursor
            cursorclass=cursors.DictCursor
        )
        # Create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # Return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        myItem = {}
        myItem["appTime"] = item["appTime"]
        myItem["applicantErp"] = item["applicantErp"]
        myItem["formatPublishTime"] = item["formatPublishTime"]
        myItem["jobType"] = item["jobType"]
        myItem["positionName"] = item["positionName"]
        # Convert the millisecond timestamp into a readable datetime string
        publishTime = item["publishTime"]
        publishTime = time.localtime(int(str(publishTime)[:10]))
        myItem["publishTime"] = time.strftime("%Y-%m-%d %H:%M:%S", publishTime)
        myItem["positionNameOpen"] = item["positionNameOpen"]
        myItem["qualification"] = item["qualification"]
        logging.warning(item)
        # Deep-copy the item -- this avoids duplicated rows caused by the asynchronous inserts
        asynItem = copy.deepcopy(myItem)
        # Hand the SQL work to the connection pool
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # If the SQL fails, handle_error() is called back automatically
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    def insert_into(self, cursor, item):
        # Build the SQL statement
        sql = "INSERT INTO jingdong (appTime,applicantErp,formatPublishTime,jobType,positionName,publishTime,positionNameOpen,qualification) " \
              "VALUES ('{}','{}','{}','{}','{}','{}','{}','{}')".format(
                  item['appTime'], item['applicantErp'], item['formatPublishTime'], item['jobType'],
                  item['positionName'], item['publishTime'], item['positionNameOpen'], item['qualification'])
        # Execute the SQL statement
        cursor.execute(sql)

    def handle_error(self, failure, item, spider):
        # Print the error information
        print("failure", failure)
And that's a wrap!