Python: crawling jingdong (JD) job postings into a MySQL database with Scrapy
1. Create the project
scrapy startproject jd
2. Create the spider
scrapy genspider jingdong zhaopin.jd.com
3. Install pymysql
pip install pymysql
4. settings.py: global settings, including the database connection info
# -*- coding: utf-8 -*-

# Scrapy settings for jd project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jd (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jd.middlewares.JdSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jd.middlewares.JdDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jd.pipelines.JdPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MySQL connection settings
# database host
MYSQL_HOST = 'localhost'
# database user
MYSQL_USER = 'root'
# database password
MYSQL_PASSWORD = 'yang156122'
# database port
MYSQL_PORT = 3306
# database name
MYSQL_DBNAME = 'test'
# database charset
MYSQL_CHARSET = 'utf8'
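Before wiring these settings into Scrapy, it can be worth checking that they actually connect. A minimal standalone sketch using pymysql directly, with the same values as above:

import pymysql

# quick connectivity check using the settings defined above
conn = pymysql.connect(host='localhost', user='root', password='yang156122',
                       port=3306, database='test', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())
conn.close()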
5. items.py: define the item fields (one per database column)
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class JdItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    appTime = scrapy.Field()
    applicantErp = scrapy.Field()
    formatPublishTime = scrapy.Field()
    jobType = scrapy.Field()
    positionName = scrapy.Field()
    positionNameOpen = scrapy.Field()
    publishTime = scrapy.Field()
    qualification = scrapy.Field()
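A scrapy.Item behaves like a dict but only accepts the declared fields, which makes for a quick sanity check that the field names match the API response. A small sketch (the value is made up purely for illustration):

from jd.items import JdItem

item = JdItem()
item['positionName'] = 'Java Engineer'   # hypothetical value, just for illustration
print(dict(item))                        # {'positionName': 'Java Engineer'}
# item['salary'] = 'x'                   # would raise KeyError: field not declared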
6. jingdong.py: the spider that fetches the data
# -*- coding: utf-8 -*-
import scrapy
import logging
import json

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        content = response.body.decode()
        content = json.loads(content)
        ########## strip empty values from the dicts in the list ##########
        for i in range(len(content)):
            # list(content[i].keys()) takes a snapshot of the keys of the current dict
            # for key in list(content[i].keys()):   # content[i] is a dict
            #     if not content[i].get(key):       # content[i].get(key) looks up the value by key
            #         del content[i][key]           # drop empty entries
            yield content[i]
        # for i in range(len(content)):
        #     logging.warning(content[i])

        self.pageNum = self.pageNum + 1
        if self.pageNum <= 355:
            next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(self.pageNum)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
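With the spider and settings in place, it can be run from inside the jd project directory; log output goes to jingdong1.log as configured in settings.py:

scrapy crawl jingdong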
7. pipelines.py: clean and process the scraped data, including writing it to the database
Compared with the tencent example, the main addition is the time handling.
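publishTime comes back from the API as a millisecond timestamp, so the pipeline keeps the first 10 digits (seconds) before formatting it with time.strftime. In isolation the conversion looks like this (the timestamp value below is made up purely for illustration):

import time

publish_time = 1545012345678             # hypothetical millisecond timestamp
seconds = int(str(publish_time)[:10])    # keep the first 10 digits -> seconds
formatted = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(seconds))
print(formatted)                         # e.g. 2018-12-17 ...

The full pipeline: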
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import logging
import time
import copy

from pymysql import cursors
from twisted.enterprise import adbapi


class JdPipeline(object):
    def __init__(self, db_pool):
        self.db_pool = db_pool

    @classmethod
    def from_settings(cls, settings):
        """Class method, called only once: initialise the database connection pool."""
        db_params = dict(
            host=settings['MYSQL_HOST'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASSWORD'],
            port=settings['MYSQL_PORT'],
            database=settings['MYSQL_DBNAME'],
            charset=settings['MYSQL_CHARSET'],
            use_unicode=True,
            # cursor type: return rows as dicts
            cursorclass=cursors.DictCursor
        )
        # create the connection pool
        db_pool = adbapi.ConnectionPool('pymysql', **db_params)
        # return a pipeline instance
        return cls(db_pool)

    def process_item(self, item, spider):
        myItem = {}
        myItem["appTime"] = item["appTime"]
        myItem["applicantErp"] = item["applicantErp"]
        myItem["formatPublishTime"] = item["formatPublishTime"]
        myItem["jobType"] = item["jobType"]
        myItem["positionName"] = item["positionName"]
        # time conversion: millisecond timestamp -> formatted datetime string
        publishTime = item["publishTime"]
        publishTime = time.localtime(int(str(publishTime)[:10]))
        myItem["publishTime"] = time.strftime("%Y-%m-%d %H:%M:%S", publishTime)
        myItem["positionNameOpen"] = item["positionNameOpen"]
        myItem["qualification"] = item["qualification"]
        logging.warning(item)
        # deep copy of the item, so the asynchronous insert does not see later
        # mutations; this is what fixes the duplicated-data problem!
        asynItem = copy.deepcopy(myItem)
        # hand the SQL to the connection pool for execution
        query = self.db_pool.runInteraction(self.insert_into, asynItem)
        # if the SQL execution fails, the errback is invoked automatically
        query.addErrback(self.handle_error, myItem, spider)
        return myItem

    # SQL handling function, runs inside the connection pool
    def insert_into(self, cursor, item):
        # build the SQL statement (values are interpolated directly into the string)
        sql = "INSERT INTO jingdong (appTime,applicantErp,formatPublishTime,jobType,positionName,publishTime,positionNameOpen,qualification) " \
              "VALUES ('{}','{}','{}','{}','{}','{}','{}','{}')".format(
                  item['appTime'], item['applicantErp'], item['formatPublishTime'], item['jobType'],
                  item['positionName'], item['publishTime'], item['positionNameOpen'], item['qualification'])
        # execute the SQL statement
        cursor.execute(sql)

    # error handler
    def handle_error(self, failure, item, spider):
        # print the error information
        print("failure", failure)
And that's a wrap!!!