Scrapy: Configuring Logging
Scrapy provides a logger on every spider instance, which can be accessed and used as follows:
import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://www.baidu.com']

    def parse(self, response):
        self.logger.info('Parse function called on %s', response.url)
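Since that logger is created with the spider's name, you can tune a single spider's verbosity from anywhere using the standard logging API. A small sketch, assuming the spider above:

import logging

# Only show WARNING and above from the 'myspider' logger
logging.getLogger('myspider').setLevel(logging.WARNING)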
Method 2:
The logger above is created using the spider's name, but you can of course use a custom logger in any project:
import logging
import scrapy

# create a custom logger at module level
logger = logging.getLogger('mycustomlogger')

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://www.baidu.com']

    def parse(self, response):
        logger.info('Parse function called on %s', response.url)
To use any logger, just fetch it by name with the logging.getLogger function:
import logging
logger = logging.getLogger('mycustomlogger')
logger.warning('This is a warning')
You can also use the __name__ variable, which is populated with the current module's path, to make sure every module you work in gets its own custom logger:
import logging

logger = logging.getLogger(__name__)
logger.warning('This is a warning')
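Because these are ordinary Python loggers, all of the standard logging machinery applies to them. Here is a minimal sketch of routing a custom logger into its own file; the file name and format string are illustrative choices, not Scrapy settings:

import logging

logger = logging.getLogger('mycustomlogger')
logger.setLevel(logging.DEBUG)

# Send this logger's records to its own file, separate from Scrapy's log
handler = logging.FileHandler('custom.log', encoding='utf-8')
handler.setFormatter(logging.Formatter(
    '%(asctime)s [%(name)s] %(levelname)s: %(message)s'))
logger.addHandler(handler)

logger.warning('This is a warning')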
Logging can also be configured in the Scrapy project's settings.py file:
LOG_ENABLED = True        # enable logging; defaults to True
LOG_ENCODING = 'UTF-8'    # encoding used for the log output
LOG_FILE = 'TEST1.LOG'    # file to write the log to; if None, log to the console
LOG_LEVEL = 'INFO'        # minimum level to log; defaults to DEBUG
LOG_FORMAT                # format string for log lines
LOG_DATEFORMAT            # format string for the date/time in log lines
LOG_STDOUT                # defaults to False; if True, all standard output (including print) is redirected into the log file
LOG_SHORT_NAMES           # defaults to False; if True, component names are omitted from the log
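LOG_FORMAT and LOG_DATEFORMAT accept standard Python logging format strings. For reference, here is a settings.py sketch with every value spelled out; the two format strings are what I understand Scrapy's defaults to be, so verify them against your version's documentation:

# settings.py (sketch; format strings believed to match Scrapy's defaults)
LOG_ENABLED = True
LOG_ENCODING = 'utf-8'
LOG_FILE = 'TEST1.LOG'
LOG_LEVEL = 'INFO'
LOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
LOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'
LOG_STDOUT = False
LOG_SHORT_NAMES = False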
With these settings, the log file looks like this (one crawl run shown; repeated runs append to the same file):
2019-04-26 15:48:33 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: patencent)
2019-04-26 15:48:33 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryptography 2.6.1, Platform Windows-10-10.0.17134-SP0
2019-04-26 15:48:33 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'patencent', 'LOG_ENCODING': 'UTF8', 'LOG_FILE': 'TEST1.LOG', 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'patencent.spiders', 'SPIDER_MODULES': ['patencent.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
2019-04-26 15:48:34 [scrapy.extensions.telnet] INFO: Telnet Password: 08683c5af998704b
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled item pipelines:
['patencent.pipelines.PatencentPipeline']
2019-04-26 15:48:34 [scrapy.core.engine] INFO: Spider opened
2019-04-26 15:48:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-04-26 15:48:34 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-04-26 15:49:34 [scrapy.extensions.logstats] INFO: Crawled 1384 pages (at 1384 pages/min), scraped 1255 items (at 1255 items/min)
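Finally, when a spider is run from a script instead of the scrapy command, Scrapy exposes configure_logging so you can install your own handlers instead of relying on the settings above. A minimal sketch, assuming the MySpider class defined earlier:

import logging
from scrapy.crawler import CrawlerProcess
from scrapy.utils.log import configure_logging

# Stop Scrapy from installing its default root handler,
# then configure logging ourselves with the stdlib
configure_logging(install_root_handler=False)
logging.basicConfig(
    filename='TEST1.LOG',
    format='%(asctime)s [%(name)s] %(levelname)s: %(message)s',
    level=logging.INFO,
)

process = CrawlerProcess()
process.crawl(MySpider)
process.start()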