scrapy RuntimeError: maximum recursion depth exceeded while calling a Python object (Python maximum recursion depth exceeded)
2019-10-21 19:01:00 [scrapy.core.engine] INFO: Spider opened
2019-10-21 19:01:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-10-21 19:01:00 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-10-21 19:01:01 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://amp-api-search-edge.apps.apple.com/v1/catalog/cn/search?term=%E7%AE%A1%E8%B4%A6&platform=iphone&include=apps,top-apps&bubble[search]=apps,developers,groupings,editorial-items,app-bundles,in-apps&l=zh-Hans-CN&extend=editorialBadgeInfo,messagesScreenshots,minimumOSVersion,requiredCapabilities,screenshotsByType,supportsFunCamera,videoPreviewsByType> (referer: None)
2019-10-21 19:01:01 [scrapy.core.scraper] DEBUG: Scraped from <200 https://amp-api-search-edge.apps.apple.com/v1/catalog/cn/search?term=%E7%AE%A1%E8%B4%A6&platform=iphone&include=apps,top-apps&bubble[search]=apps,developers,groupings,editorial-items,app-bundles,in-apps&l=zh-Hans-CN&extend=editorialBadgeInfo,messagesScreenshots,minimumOSVersion,requiredCapabilities,screenshotsByType,supportsFunCamera,videoPreviewsByType>
None
2019-10-21 19:01:01 [scrapy.core.engine] INFO: Closing spider (finished)
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 861, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 734, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 465, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 329, in getMessage
    msg = msg % self.args
  File "/home/project/release/venv2.7/local/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 107, in __str__
    return "<%s %r at 0x%0x>" % (type(self).__name__, self.name, id(self))
  ...... (the frames above repeat many more times) ......
  File "/home/project/release/venv2.7/local/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 107, in __str__
    return "<%s %r at 0x%0x>" % (type(self).__name__, self.name, id(self))
RuntimeError: maximum recursion depth exceeded while calling a Python object
Logged from file signal.py, line 57
2019-10-21 19:01:01 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 812,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 19255,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 10, 21, 11, 1, 1, 765291),
'item_scraped_count': 1,
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 9,
'memusage/max': 1424973824,
'memusage/startup': 1424973824,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2019, 10, 21, 11, 1, 0, 418993)}
2019-10-21 19:01:01 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0
Cause: there is an exception in the spider's close function! The recursion error above is raised while the logging call in signal.py tries to report that failure, so the original traceback never appears in the output.
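For context, here is a minimal sketch (not taken from the original project; the spider name, URL and helper function are made up) of how an exception in the close callback can produce exactly this symptom, and how wrapping the close logic in try/except surfaces the real traceback instead of the recursion error:

import logging

import scrapy


class SearchSpider(scrapy.Spider):
    # Hypothetical spider, only to illustrate the failure mode.
    name = 'search'
    start_urls = ['https://example.com/']

    def parse(self, response):
        yield {'status': response.status}

    def closed(self, reason):
        # Scrapy calls closed(reason) via the spider_closed signal when the
        # spider finishes. If this raises, the error is reported through the
        # signal/logging machinery, where (in the log above) formatting the
        # message blew the recursion limit and hid the real bug.
        try:
            self.report_results()  # hypothetical helper that may raise
        except Exception:
            # Log the genuine traceback explicitly so it is not lost.
            logging.exception('error while closing spider %s', self.name)

    def report_results(self):
        # Stand-in for whatever the real close logic does.
        raise ValueError('simulated bug in the close logic')

With this guard in place, the crawl prints the ValueError traceback via logging.exception instead of the opaque "maximum recursion depth exceeded" message, which makes the actual bug in the close logic easy to find and fix.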