Reposted from: http://codewa.com/question/45600.html

Q: How to avoid HTTP error 429 (Too Many Requests) in Python

I am trying to use Python to log in to a website and gather information from several webpages, and I get the following error:

Traceback (most recent call last):
File "extract_test.py", line 43, in <module>
response=br.open(v)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
return self._mech_open(url, data, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 429: Unknown Response Code

I used time.sleep() and it works, but it seems crude and unreliable. Is there any other way to dodge this error?

Here's my code:

import mechanize
import cookielib
import re

first = "example.com/page1"
second = "example.com/page2"
third = "example.com/page3"
fourth = "example.com/page4"
## I have seven URLs I want to open
urls_list = [first, second, third, fourth]

br = mechanize.Browser()

# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)

# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)

# Log in credentials
br.open("example.com")
br.select_form(nr=0)
br["username"] = "username"
br["password"] = "password"
br.submit()

for url in urls_list:
    response = br.open(url)
    print re.findall("Some String", response.read())

Answer 1:

Receiving a 429 status is not an error; it is the other server "kindly" asking you to please stop spamming requests. Obviously, your rate of requests has been too high, and the server is not willing to accept this.

You should not seek to "dodge" this, or try to circumvent the server's security settings by spoofing your IP; you should simply respect the server's answer by not sending too many requests.

If everything is set up properly, you will also have received a "Retry-After" header along with the 429 response. This header specifies the number of seconds you should wait before making another call. The proper way to deal with this "problem" is to read that header and sleep your process for that many seconds.

You can find more information on status 429 here: http://tools.ietf.org/html/rfc6585#page-3
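
For example, a minimal sketch of that approach, assuming the requests library and a placeholder URL (the retry count and the 10-second fallback are arbitrary choices, not anything the answer specifies), might look like this:

import time
import requests

def get_with_retry(url, max_retries=5):
    # Retry on 429, waiting as long as the server asks via Retry-After.
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            return response
        # Fall back to 10 seconds if the header is missing or is an
        # HTTP date rather than a plain number of seconds.
        try:
            delay = int(response.headers.get("Retry-After", 10))
        except ValueError:
            delay = 10
        time.sleep(delay)
    return response

response = get_with_retry("http://example.com/page1")
print(response.status_code)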

Answer 2:

Another workaround would be to change your apparent IP by using some sort of public VPN or the Tor network. This assumes the server applies its rate limiting at the IP level.

There is a brief blog post demonstrating a way to use Tor along with urllib2:

http://blog.flip-edesign.com/?p=119
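
If you go that route, a rough sketch (my own, not taken from the linked post) that assumes Tor is running locally on its default SOCKS port 9050 and that requests is installed with SOCKS support (pip install requests[socks]):

import requests

# Route traffic through a local Tor SOCKS proxy (default port 9050).
# The socks5h scheme resolves DNS through Tor as well.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get("http://example.com/page1", proxies=proxies)
print(response.status_code)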

Answer 3:

Writing this piece of code fixed my problem:

requests.get(link, headers={'User-agent': 'your bot 0.1'})
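
If you are making several requests, a small variation (a sketch, with a placeholder bot name and placeholder URLs) is to set the header once on a requests.Session so every call carries it:

import requests

session = requests.Session()
# Every request made through this session sends the custom User-agent.
session.headers.update({"User-agent": "your bot 0.1"})

for url in ["http://example.com/page1", "http://example.com/page2"]:
    response = session.get(url)
    print(response.status_code)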
