Python 3.X uses urllib.request to fetch network resources (repost)
The simplest way:
#coding=utf-8
import urllib.request
response = urllib.request.urlopen('http://python.org/')
buff = response.read()
# decode the raw bytes for display
html = buff.decode("utf8")
response.close()
print(html)
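A slightly more robust variant (a sketch, not part of the original) uses a with statement so the connection is closed automatically, and reads the charset from the response headers instead of hardcoding utf8:
import urllib.request
# the context manager closes the connection even if reading fails
with urllib.request.urlopen('http://python.org/') as response:
    charset = response.headers.get_content_charset() or 'utf-8'
    html = response.read().decode(charset)
print(html)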
Using a Request object:
#coding=utf-8
import urllib.request
req = urllib.request.Request('http://www.voidspace.org.uk')
response = urllib.request.urlopen(req)
buff = response.read()
# decode the raw bytes for display
the_page = buff.decode("utf8")
response.close()
print(the_page)
The same approach can handle other URL schemes as well, for example FTP:
#coding=utf-8
import urllib.request
req = urllib.request.Request('ftp://ftp.pku.edu.cn/')
response = urllib.request.urlopen(req)
buff = response.read()
# decode the raw bytes for display
the_page = buff.decode("utf8")
response.close()
print(the_page)
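urlopen also accepts an optional timeout argument in seconds; a minimal sketch (target URL reused from above) that avoids hanging indefinitely on an unresponsive server:
import urllib.error
import urllib.request
try:
    # give up if the server does not respond within 5 seconds
    response = urllib.request.urlopen('http://python.org/', timeout=5)
    print(response.read(100))
except urllib.error.URLError as e:
    print('request failed:', e.reason)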
Making a POST request:
import urllib.parse
import urllib.request

url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
          'location' : 'Northampton',
          'language' : 'Python' }
# urlencode returns a str; in Python 3 the POST body must be bytes
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data)
response = urllib.request.urlopen(req)
the_page = response.read()
Making a GET request:
import urllib.request
import urllib.parse
data = {}
data['name'] = 'Somebody Here'
data['location'] = 'Northampton'
data['language'] = 'Python'
url_values = urllib.parse.urlencode(data)
print(url_values)
# name=Somebody+Here&language=Python&location=Northampton
url = 'http://www.example.com/example.cgi'
full_url = url + '?' + url_values
response = urllib.request.urlopen(full_url)
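For the reverse direction, urllib.parse.parse_qs turns a query string back into a dictionary (note that every value comes back as a list):
import urllib.parse
parsed = urllib.parse.parse_qs('name=Somebody+Here&language=Python&location=Northampton')
print(parsed)
# {'name': ['Somebody Here'], 'language': ['Python'], 'location': ['Northampton']}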
Adding headers:
import urllib.parse
import urllib.request

url = 'http://www.someserver.com/cgi-bin/register.cgi'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {'name' : 'Michael Foord',
          'location' : 'Northampton',
          'language' : 'Python' }
headers = { 'User-Agent' : user_agent }
# as with POST above, the form data has to be encoded to bytes
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(req)
the_page = response.read()
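Headers can also be attached after the Request object is built, via Request.add_header; a sketch with the same effect as the headers dict above:
import urllib.request
req = urllib.request.Request('http://www.someserver.com/cgi-bin/register.cgi')
# equivalent to passing a headers dict to the constructor
req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')
response = urllib.request.urlopen(req)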
Error handling:
import urllib.error
import urllib.request

req = urllib.request.Request('http://www.pretend_server.org')
try:
    urllib.request.urlopen(req)
except urllib.error.URLError as e:
    print(e.reason)
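HTTP-level failures raise the more specific urllib.error.HTTPError (a subclass of URLError), which carries the status code. A sketch that tells the two apart, using a hypothetical URL:
import urllib.error
import urllib.request

req = urllib.request.Request('http://www.example.com/no-such-page')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    # the server answered, but with an error status such as 404
    print('HTTP error:', e.code, e.reason)
except urllib.error.URLError as e:
    # the server could not be reached at all
    print('failed to reach server:', e.reason)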
The error codes that may be returned:
# Table mapping response codes to messages; entries have the
# form {code: (shortmessage, longmessage)}.
responses = {
    100: ('Continue', 'Request received, please continue'),
    101: ('Switching Protocols',
          'Switching to new protocol; obey Upgrade header'),
    200: ('OK', 'Request fulfilled, document follows'),
    201: ('Created', 'Document created, URL follows'),
    202: ('Accepted',
          'Request accepted, processing continues off-line'),
    203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
    204: ('No Content', 'Request fulfilled, nothing follows'),
    205: ('Reset Content', 'Clear input form for further input.'),
    206: ('Partial Content', 'Partial content follows.'),
    300: ('Multiple Choices',
          'Object has several resources -- see URI list'),
    301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
    302: ('Found', 'Object moved temporarily -- see URI list'),
    303: ('See Other', 'Object moved -- see Method and URL list'),
    304: ('Not Modified',
          'Document has not changed since given time'),
    305: ('Use Proxy',
          'You must use proxy specified in Location to access this '
          'resource.'),
    307: ('Temporary Redirect',
          'Object moved temporarily -- see URI list'),
    400: ('Bad Request',
          'Bad request syntax or unsupported method'),
    401: ('Unauthorized',
          'No permission -- see authorization schemes'),
    402: ('Payment Required',
          'No payment -- see charging schemes'),
    403: ('Forbidden',
          'Request forbidden -- authorization will not help'),
    404: ('Not Found', 'Nothing matches the given URI'),
    405: ('Method Not Allowed',
          'Specified method is invalid for this server.'),
    406: ('Not Acceptable', 'URI not available in preferred format.'),
    407: ('Proxy Authentication Required', 'You must authenticate with '
          'this proxy before proceeding.'),
    408: ('Request Timeout', 'Request timed out; try again later.'),
    409: ('Conflict', 'Request conflict.'),
    410: ('Gone',
          'URI no longer exists and has been permanently removed.'),
    411: ('Length Required', 'Client must specify Content-Length.'),
    412: ('Precondition Failed', 'Precondition in headers is false.'),
    413: ('Request Entity Too Large', 'Entity is too large.'),
    414: ('Request-URI Too Long', 'URI is too long.'),
    415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
    416: ('Requested Range Not Satisfiable',
          'Cannot satisfy request range.'),
    417: ('Expectation Failed',
          'Expect condition could not be satisfied.'),
    500: ('Internal Server Error', 'Server got itself in trouble'),
    501: ('Not Implemented',
          'Server does not support this operation'),
    502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
    503: ('Service Unavailable',
          'The server cannot process the request due to a high load'),
    504: ('Gateway Timeout',
          'The gateway server did not receive a timely response'),
    505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
}
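The standard library ships a similar mapping as http.client.responses (status code to short message), which saves maintaining a table like the one above by hand:
import http.client
print(http.client.responses[404])  # Not Found
print(http.client.responses[503])  # Service Unavailable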