While reading up recently on web crawling and simulated login, I came across this package:

mechanize ['mekə.naɪz] literally means "to mechanize", and the name fits: the library is all about automating browser interactions.

mechanize.Browser and mechanize.UserAgentBase implement the interface of urllib2.OpenerDirector, so:

  • any URL can be opened, not just http:

  • mechanize.UserAgentBase offers easy dynamic configuration of user-agent features like protocol, cookie, redirection and robots.txt handling, without having to make a new OpenerDirector each time, e.g. by calling build_opener().

  • Easy HTML form filling.

  • Convenient link parsing and following.

  • Browser history (.back() and .reload() methods).

  • The Referer HTTP header is added properly (optional).

  • Automatic observance of robots.txt.

  • Automatic handling of HTTP-Equiv and Refresh.

In other words, mechanize.Browser and mechanize.UserAgentBase implement the urllib2.OpenerDirector interface, so any URL scheme can be opened, not just HTTP.

They also offer a simpler way to configure user-agent features (protocol handling, cookies, redirection, robots.txt and so on) without having to build a new OpenerDirector each time.

On top of that you get form filling, link parsing and following, browser history and reload, Refresh handling, robots.txt observance, and more.
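Because the urllib2-style interface is preserved, mechanize can also be used as a drop-in replacement for simple one-off requests. A minimal sketch (the URL is just a placeholder):

import mechanize

# mechanize.urlopen works like urllib2.urlopen, but with mechanize's handlers
response = mechanize.urlopen("http://www.example.com/")
print response.read()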

import re
import mechanize

# (1) Instantiate a browser object
br = mechanize.Browser()

# (2) Open a URL
br.open("http://www.example.com/")

# (3) Follow the second link on this page whose element text matches the regex
response1 = br.follow_link(text_regex=r"cheese\s*shop", nr=1)
assert br.viewing_html()

# (4) Title of the page
print br.title()

# (5) Print the page's URL
print response1.geturl()

# (6) Response headers
print response1.info()

# (7) Response body
print response1.read()

# (8) Select the form named "order".
# Browser passes through unknown attributes (including methods)
# to the selected HTMLForm.
br.select_form(name="order")

# (9) Fill in the control named "cheeses" (the method here is __setitem__)
br["cheeses"] = ["mozzarella", "caerphilly"]

# (10) Submit the current form. Browser calls .close() on the current
# response on navigation, so this closes response1.
response2 = br.submit()

# Print the currently selected form (don't call .submit() on this, use br.submit())
print br.form

# (11) Go back to the cheese shop page (same data as response1).
# The history mechanism returns cached response objects, so we can
# still use the response even though it was .close()d.
response3 = br.back()
response3.get_data()  # like .seek(0) followed by .read()

# (12) Reload the page (fetches from the server again)
response4 = br.reload()

# (13) List all the forms on the page
for form in br.forms():
    print form

# .links() optionally accepts the keyword args of .follow_/.find_link()
for link in br.links(url_regex="python.org"):
    print link
    br.follow_link(link)  # takes EITHER a Link instance OR keyword args
    br.back()

This is the example given in the documentation; the basic explanation is included in the code comments.
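Beyond setting values by key, the selected form can also be inspected directly: every HTMLForm exposes a controls list, and each control has a type, name and value. A short sketch, assuming a form has already been selected with select_form():

# after br.select_form(...), br.form is the selected HTMLForm
for control in br.form.controls:
    print control.type, control.name, control.value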

You may control the browser’s policy by using the methods of mechanize.Browser’s base class, mechanize.UserAgent. For example:

Through mechanize.UserAgent, the base class of mechanize.Browser, we can control the browser's policy. The code below is also taken from the documentation:

import sys
import logging
import mechanize

br = mechanize.Browser()

# Explicitly configure proxies (Browser will attempt to set good defaults).
# Note the userinfo ("joe:password@") and port number (":3128") are optional.
br.set_proxies({"http": "joe:password@myproxy.example.com:3128",
                "ftp": "proxy.example.com",
                })

# Add HTTP Basic/Digest auth username and password for HTTP proxy access.
# (equivalent to using "joe:password@..." form above)
br.add_proxy_password("joe", "password")

# Add HTTP Basic/Digest auth username and password for website access.
br.add_password("http://example.com/protected/", "joe", "password")

# Don't handle HTTP-EQUIV headers (HTTP headers embedded in HTML).
br.set_handle_equiv(False)

# Ignore robots.txt. Do not do this without thought and consideration.
br.set_handle_robots(False)

# Don't add Referer (sic) header
br.set_handle_referer(False)

# Don't handle Refresh redirections
br.set_handle_refresh(False)

# Don't handle cookies
br.set_cookiejar()

# Supply your own mechanize.CookieJar (NOTE: cookie handling is ON by
# default: no need to do this unless you have some reason to use a
# particular cookiejar)
br.set_cookiejar(cj)

# Log information about HTTP redirects and Refreshes.
br.set_debug_redirects(True)

# Log HTTP response bodies (ie. the HTML, most of the time).
br.set_debug_responses(True)

# Print HTTP headers.
br.set_debug_http(True)

# To make sure you're seeing all debug output:
logger = logging.getLogger("mechanize")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

# Sometimes it's useful to process bad headers or bad HTML:
response = br.response()   # this is a copy of the response
headers = response.info()  # currently, this is a mimetools.Message
headers["Content-type"] = "text/html; charset=utf-8"
response.set_data(response.get_data().replace("<!---", "<!--"))
br.set_response(response)
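Many sites also check the User-Agent header; mechanize lets you override the default request headers through Browser.addheaders, a list of (header, value) tuples. A small sketch (the header string is just an example):

# Replace the default "Python-urllib" User-Agent with a browser-like one
br.addheaders = [("User-Agent",
                  "Mozilla/5.0 (Windows NT 6.1; rv:40.0) Gecko/20100101 Firefox/40.0")]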

There are also other modules similar to mechanize for interacting with web pages.

There are several wrappers around mechanize designed for functional testing of web applications.

In the end, they are all wrappers around urllib2, so just pick whichever module you find most convenient!
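Coming back to the simulated login mentioned at the beginning: the pieces above are already enough for a basic login flow — open the login page, select the form, fill in the controls and submit. A minimal sketch, where the URL and the control names username/password are placeholders you would replace with the real ones:

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)   # many login pages disallow crawlers in robots.txt
br.addheaders = [("User-Agent", "Mozilla/5.0")]  # pretend to be a normal browser

br.open("http://www.example.com/login")   # placeholder login URL
br.select_form(nr=0)                      # assume the login form is the first form on the page
br["username"] = "your_name"              # placeholder control names and values
br["password"] = "your_password"
response = br.submit()
print response.geturl()                   # where we landed after submitting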
