Scrapy provides a logger on every spider instance, which can be accessed and used as follows:

import scrapy 

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://www.baidu.com']

    def parse(self, response):
        self.logger.info('Parse function called on %s', response.url)
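
The spider logger accepts all the standard Python logging levels. Below is a minimal sketch of a hypothetical spider exercising them (the messages are illustrative); with LOG_LEVEL = 'INFO', the debug line would be filtered out:

import scrapy

class LevelsSpider(scrapy.Spider):
    # Hypothetical spider, used only to illustrate the available levels
    name = 'levels'
    start_urls = ['https://www.baidu.com']

    def parse(self, response):
        self.logger.debug('Fine-grained detail about %s', response.url)
        self.logger.info('Parse function called on %s', response.url)
        self.logger.warning('Recoverable problem while parsing %s', response.url)
        self.logger.error('Serious problem while parsing %s', response.url)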

Method 2:

The logger above is created using the spider's name, but you can of course use any custom Python logger anywhere in your project:

import logging
import scrapy

# Create a module-level custom logger
logger = logging.getLogger('mycustomlogger')

class MySpider(scrapy.Spider):
    name = 'myspider'
    start_urls = ['https://www.baidu.com']

    def parse(self, response):
        logger.info('Parse function called on %s', response.url)

To use any such logger, simply fetch it by name with logging.getLogger:

import logging

logger = logging.getLogger('mycustomlogger')
logger.warning('This is a warning')

You can also use the __name__ variable, which is populated with the current module's path, to make sure every module you work in sets up its own logger:

import logging

logger = logging.getLogger(__name__)
logger.warning('This is a warning')
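
Besides the settings shown next, logging can also be configured programmatically with scrapy.utils.log.configure_logging. A minimal sketch (the file name and format are illustrative): disabling Scrapy's default root handler lets a handler installed via the standard logging.basicConfig receive every record instead:

import logging
from scrapy.utils.log import configure_logging

# Drop Scrapy's default log handler and install our own via basicConfig,
# so every record (Scrapy's and ours) goes through the same handler.
configure_logging(install_root_handler=False)
logging.basicConfig(
    filename='log.txt',                   # illustrative file name
    format='%(levelname)s: %(message)s',
    level=logging.INFO,
)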

Logging is configured in the Scrapy project's settings file with the following options:

LOG_ENABLED = True       # whether logging is enabled; default True
LOG_ENCODING = 'UTF-8'   # encoding of the log output
LOG_FILE = 'TEST1.LOG'   # file the log is written to; if None, the log is printed to the console
LOG_LEVEL = 'INFO'       # minimum level to record; default 'DEBUG'
LOG_FORMAT               # format string for log messages; default '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
LOG_DATEFORMAT           # format string for the date in log messages; default '%Y-%m-%d %H:%M:%S'
LOG_STDOUT               # default False; if True, all standard output (including print) is redirected into the log
LOG_SHORT_NAMES          # default False; if True, component names are omitted from the log
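
Every component logs under its own logger name (the bracketed part in the output below), so a single noisy component can be quieted with the standard logging API instead of raising LOG_LEVEL globally. A minimal sketch; the component name is illustrative and can be swapped for any name that appears in the log:

import logging

# Raise the threshold for one component only; everything else
# keeps logging at the configured LOG_LEVEL.
logging.getLogger('scrapy.downloadermiddlewares.redirect').setLevel(logging.WARNING)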

The resulting log looks like this:

2019-04-26 15:48:33 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: patencent)
2019-04-26 15:48:33 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryptography 2.6.1, Platform Windows-10-10.0.17134-SP0
2019-04-26 15:48:33 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'patencent', 'LOG_ENCODING': 'UTF8', 'LOG_FILE': 'TEST1.LOG', 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'patencent.spiders', 'SPIDER_MODULES': ['patencent.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
2019-04-26 15:48:34 [scrapy.extensions.telnet] INFO: Telnet Password: 08683c5af998704b
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-04-26 15:48:34 [scrapy.middleware] INFO: Enabled item pipelines:
['patencent.pipelines.PatencentPipeline']
2019-04-26 15:48:34 [scrapy.core.engine] INFO: Spider opened
2019-04-26 15:48:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-04-26 15:48:34 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-04-26 15:49:34 [scrapy.extensions.logstats] INFO: Crawled 1384 pages (at 1384 pages/min), scraped 1255 items (at 1255 items/min)
