Today's overview:

  1. Scrape the news feed from autohome.com.cn

  2. Scrape github and chouti

  3. requests and beautifulsoup

  4. Polling and long polling

  5. django request.POST and request.body

I. HTTP refresher

  1. An HTTP GET request has no request body; all parameters travel in the URL, which is part of the request headers.

  2. An HTTP POST request carries its payload in the request body (both are sketched just below this list).

  3. An HTTP message = request headers + request body, and response headers + response body.

  4. HTTP is stateless: one request gets one response, and the exchange ends there.
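To make the contrast concrete, here is a rough sketch of what each kind of request looks like on the wire (illustrative only; the host and paths are made up):

# illustrative only: the raw text a GET and a POST send over the socket
get_request = (
    "GET /index.html?p=1 HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"                               # GET: blank line, no body; params sit in the URL
)
post_request = (
    "POST /login HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "\r\n"
    "name=alex&age=18"                   # POST: params travel in the body
)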

II. Scraping Autohome's news page

#!/usr/bin/python
# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup

response = requests.get('http://www.autohome.com.cn/news')
response.encoding = 'gbk'  # autohome serves its Chinese text as gbk
# print(response.text)
soup = BeautifulSoup(response.text, 'html.parser')
tag = soup.find(name='div', attrs={'id': 'auto-channel-lazyload-article'})
li_list = tag.find_all('li')
for li in li_list:
    if li.find(name='h3'):
        print(li.find(name='h3').text)
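Hard-coding gbk works for this site, but requests can also guess the charset from the page content; a minimal alternative sketch:

import requests

response = requests.get('http://www.autohome.com.cn/news')
# let requests detect the charset from the body instead of hard-coding gbk
response.encoding = response.apparent_encoding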

2.1 Scraping the flower shop hua.com

import requests
from bs4 import BeautifulSoup

response = requests.get('http://www.hua.com/aiqingxianhua/')
root = BeautifulSoup(response.text, 'html.parser')  # instantiate the soup object
# every tag is an object; drill down with find/find_all
div_list = root.find_all(attrs={"class": "grid-item"})
for div in div_list:
    img_dir = div.find(name='img').get('src')
    title = div.find(name='span', attrs={"class": "product-title"})
    price = div.find(name='span', attrs={"class": "price-num"})
    print(img_dir, title.text, price.text)
'''
Sample output:
//img01.hua.com/uploadpic/newpic/9012247.jpg_220x240.jpg 鲜花/幸福的约定-苏醒玫瑰33枝、紫罗兰、银叶菊 339
//img01.hua.com/uploadpic/newpic/9012246.jpg_220x240.jpg 鲜花/邻家女孩-红玫瑰33枝、红色小雏菊 296
//img01.hua.com/uploadpic/newpic/9010011.jpg_220x240.jpg 鲜花/一心一意-玫瑰11枝,粉色勿忘我0.3扎 126
//img01.hua.com/uploadpic/newpic/9012011.jpg_220x240.jpg 鲜花/阳光海岸-19枝香槟玫瑰 218
//img01.hua.com/uploadpic/newpic/9010966.jpg_220x240.jpg 鲜花/一往情深-精品玫瑰礼盒:19枝红玫瑰,勿忘我适量 235
//img01.hua.com/uploadpic/newpic/9012042.jpg_220x240.jpg 鲜花/热恋-红玫瑰50枝 359
//img01.hua.com/uploadpic/newpic/9012041.jpg_220x240.jpg 鲜花/浪漫缤纷-戴安娜粉玫瑰50枝 359
//img01.hua.com/uploadpic/newpic/9012175.jpg_220x240.jpg 鲜花/月光女神-白玫瑰11枝,绿色桔梗5枝,小菊3枝,白色石竹梅4枝 228
//img01.hua.com/uploadpic/newpic/9010947.jpg_220x240.jpg 鲜花/真爱如初-雪山玫瑰11枝、深紫色勿忘我0.3扎 186
//img01.hua.com/uploadpic/newpic/9012177.jpg_220x240.jpg 鲜花/不变的承诺-99枝红玫瑰 519
'''
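The scraped src values are protocol-relative (they start with //). A minimal sketch of downloading one of them, assuming plain http is accepted by this image host:

import requests

# prefix a scheme onto the protocol-relative URL, then save the bytes to disk
img_url = 'http:' + '//img01.hua.com/uploadpic/newpic/9012247.jpg_220x240.jpg'
r = requests.get(img_url)
with open('flower.jpg', 'wb') as f:
    f.write(r.content)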

  

III. Scraping github and chouti

Automated github login

#!/usr/bin/python
# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup

r1 = requests.get(url='https://github.com/login')
b1 = BeautifulSoup(r1.text, 'html.parser')
auth_token = b1.find(attrs={'name': 'authenticity_token'}).get('value')
r1_cookies_data = r1.cookies.get_dict()
print(auth_token)
r2 = requests.post(
    'https://github.com/session',
    data={
        "commit": "Sign in",
        "utf8": '✓',
        "authenticity_token": auth_token,
        "login": "xxxx",
        "password": "xxxx",
    },
    cookies=r1_cookies_data)
r2_cookies_data = r2.cookies.get_dict()
print(r1_cookies_data)
print(r2_cookies_data)
all_cookies = {}
all_cookies.update(r1_cookies_data)
all_cookies.update(r2_cookies_data)
# for github, the cookies issued after the token exchange are enough on their own
r3 = requests.get('https://github.com/settings/emails', cookies=r2_cookies_data)
print(r3.text)
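The same flow reads shorter with requests.Session, which accumulates cookies across requests automatically; a sketch with the same placeholder credentials:

import requests
from bs4 import BeautifulSoup

s = requests.Session()
r1 = s.get('https://github.com/login')
token = BeautifulSoup(r1.text, 'html.parser').find(attrs={'name': 'authenticity_token'}).get('value')
# the session carries r1's cookies into the login POST, and the POST's cookies onward
s.post('https://github.com/session', data={
    "commit": "Sign in",
    "utf8": '✓',
    "authenticity_token": token,
    "login": "xxxx",
    "password": "xxxx",
})
print(s.get('https://github.com/settings/emails').text)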

Logging in to chouti and upvoting automatically

#!/usr/bin/python
# -*- coding:utf-8 -*-
import requests

r1 = requests.get(url='http://dig.chouti.com/')
r1_cookies_data = r1.cookies.get_dict()
r2 = requests.post('http://dig.chouti.com/login',
                   data={'phone': 'xxx', "password": "xxx", "oneMonth": 1},
                   cookies=r1_cookies_data)
r2_cookies_data = r2.cookies.get_dict()
print(r1_cookies_data)
print(r2_cookies_data)
all_cookies = {}
all_cookies.update(r1_cookies_data)
all_cookies.update(r2_cookies_data)
'''
The session id is issued on the first request:
{'JSESSIONID': 'aaaIZQdBA4siraQ2m0t8v', 'route': '0c5178ac241ad1c9437c2aafd89a0e50', 'gpsd': 'dd55c4cda0a45f6bc3274a79a7e50316'}
{'puid': '417d102e3c72e88cd6003bc984c569b4', 'gpid': '4c91ec17bd8340bdb75116916e19bc20'}
'''
# the vote must be sent with the cookies from the FIRST request: login only authorizes them
r3 = requests.post('http://dig.chouti.com/link/vote?linksId=14708906', cookies=r1_cookies_data)
print(r3.text)
'''
{"result":{"code":"9999", "message":"推荐成功", "data":{"jid":"cdu_50096919787","likedTime":"1508043437615000","lvCount":"6","nick":"congratula","uvCount":"3","voteTime":"小于1分钟前"}}}
'''

Note: some sites do not issue the session cookie at login time. You have to GET a page first to receive the cookie; the login POST then merely authorizes that earlier cookie on the server side. Afterwards you keep sending the cookie from the first GET, and never need to carry the one from the login response.
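This pattern is exactly what requests.Session handles for you; a sketch against the chouti endpoints above (credentials are placeholders):

import requests

s = requests.Session()
s.get('http://dig.chouti.com/')          # step 1: receive the gpsd/session cookie
s.post('http://dig.chouti.com/login',    # step 2: the server authorizes that cookie
       data={'phone': 'xxx', 'password': 'xxx', 'oneMonth': 1})
# step 3: later requests automatically carry the cookie from step 1
r = s.post('http://dig.chouti.com/link/vote?linksId=14708906')
print(r.text)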

IV. Polling and long polling

  1. Polling: the client sends an Ajax request to the server on a timer; the server answers immediately and closes the connection.
    Pros: the backend is easy to write.
    Cons: most requests return nothing useful, wasting bandwidth and server resources.
    Fits: small applications.

  2. Long polling: the client sends an Ajax request and the server holds the connection open, only responding (and closing the connection) once there is a new message; the client processes the response and immediately sends a new request. The server sets a timeout; when it fires, the server drops the connection and the client reconnects so the server can hold it again (a client-side sketch follows this list).
    Pros: no frantic requesting while there are no messages.
    Cons: held connections consume server resources.
    Examples: WebQQ, Hi web client, Facebook IM.

  There is also a distinction from long connections and socket connections:

    1. Long connection: embed a hidden iframe in the page and point its src at a long-lived request; the server can then push data to the client continuously.
      Pros: messages arrive instantly, with no wasted requests.
      Cons: maintaining the long connection costs the server resources.
      Example: Gmail chat.
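A minimal client-side long-polling loop with requests; the /messages endpoint and the timeout values are assumptions for illustration:

import requests

# hypothetical endpoint that holds the request until a message arrives
URL = 'http://127.0.0.1:8000/messages'

while True:
    try:
        # read timeout set slightly above the server's hold timeout
        r = requests.get(URL, timeout=(3, 35))
        print('new message:', r.text)   # got data: handle it, then re-poll at once
    except requests.exceptions.Timeout:
        continue                        # server timed out with no news: reconnect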

V. Usage of requests

  1. GET requests:

requests.get(url="http://www.oldboyedu.com")
# data="http GET / http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n"

requests.get(url="http://www.oldboyedu.com/index.html?p=1")
# data="http GET /index.html?p=1 http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n"

requests.get(url="http://www.oldboyedu.com/index.html", params={'p': 1})
# data="http GET /index.html?p=1 http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n"

  2. POST requests:

requests.post(url="http://www.oldboyedu.com", data={'name': 'alex', 'age': 18})
# default request header: application/x-www-form-urlencoded
# data="http POST / http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\nname=alex&age=18"

requests.post(url="http://www.oldboyedu.com", json={'name': 'alex', 'age': 18})
# default request header: application/json
# data="http POST / http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n{\"name\": \"alex\", \"age\": 18}"

requests.post(
    url="http://www.oldboyedu.com",
    params={'p': 1},
    json={'name': 'alex', 'age': 18}
)
# default request header: application/json
# data="http POST /?p=1 http1.1\r\nhost:oldboyedu.com\r\n....\r\n\r\n{\"name\": \"alex\", \"age\": 18}"
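You can check where params, data and json end up without contacting any server by inspecting the prepared request; a quick sketch:

import requests

req = requests.Request(method='POST',
                       url='http://www.oldboyedu.com',
                       params={'p': 1},
                       json={'name': 'alex', 'age': 18})
prepared = req.prepare()
print(prepared.url)                      # http://www.oldboyedu.com/?p=1
print(prepared.headers['Content-Type'])  # application/json
print(prepared.body)                     # {"name": "alex", "age": 18}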

 3. More parameters

def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
        or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
        to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

        >>> import requests
        >>> req = requests.request('GET', 'http://httpbin.org/get')
        <Response [200]>
    """

More parameter examples:

  verify is usually used together with cert.

import requests  # needed by the examples below


def param_method_url():
    # requests.request(method='get', url='http://127.0.0.1:8000/test/')
    # requests.request(method='post', url='http://127.0.0.1:8000/test/')
    pass


def param_param():
    # params can be a dict
    # params can be a string
    # params can be bytes (ascii only after encoding)

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params={'k1': 'v1', 'k2': '水电费'})

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params="k1=v1&k2=水电费&k3=v3&k3=vv3")

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))

    # wrong: the bytes contain non-ascii characters
    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params=bytes("k1=v1&k2=水电费&k3=v3&k3=vv3", encoding='utf8'))
    pass


def param_data():
    # data can be a dict
    # data can be a string
    # data can be bytes
    # data can be a file object

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data={'k1': 'v1', 'k2': '水电费'})

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data="k1=v1; k2=v2; k3=v3; k3=v4"
    #                  )

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data="k1=v1;k2=v2;k3=v3;k3=v4",
    #                  headers={'Content-Type': 'application/x-www-form-urlencoded'}
    #                  )

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data=open('data_file.py', mode='r', encoding='utf-8'),  # file content: k1=v1;k2=v2;k3=v3;k3=v4
    #                  headers={'Content-Type': 'application/x-www-form-urlencoded'}
    #                  )
    pass


def param_json():
    # json=... serializes the dict to a string with json.dumps(...),
    # sends it in the request body, and sets Content-Type to application/json
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水电费'})


def param_headers():
    # send custom request headers to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水电费'},
                     headers={'Content-Type': 'application/x-www-form-urlencoded'}
                     )


def param_cookies():
    # send cookies to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies={'cook1': 'value1'},
                     )
    # a CookieJar also works (the dict form is a wrapper around it)
    from http.cookiejar import CookieJar
    from http.cookiejar import Cookie

    obj = CookieJar()
    obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None, domain='', path='/', secure=False, expires=None,
                          discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False,
                          port_specified=False, domain_specified=False, domain_initial_dot=False, path_specified=False)
                   )
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies=obj)


def param_files():
    # upload a file
    # file_dict = {
    #     'f1': open('readme', 'rb')
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # upload a file with a custom file name (a tuple nested in the dict)
    # file_dict = {
    #     'f1': ('test.txt', open('readme', 'rb'))
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # upload literal content with a custom file name
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # upload literal content with a custom file name, content type and extra headers
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf", 'application/text', {'k1': '0'})
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)
    pass


def param_auth():
    from requests.auth import HTTPBasicAuth, HTTPDigestAuth

    # for the browser's built-in auth dialog (not a form login); the header is
    # computed by a fixed algorithm, see the source:
    # r.headers['Authorization'] = _basic_auth_str(self.username, self.password)
    ret = requests.get('https://api.github.com/user', auth=HTTPBasicAuth('wupeiqi', 'sdfasdfasdf'))
    print(ret.text)

    # ret = requests.get('http://192.168.1.1',
    #                    auth=HTTPBasicAuth('admin', 'admin'))
    # ret.encoding = 'gbk'
    # print(ret.text)

    # ret = requests.get('http://httpbin.org/digest-auth/auth/user/pass', auth=HTTPDigestAuth('user', 'pass'))
    # print(ret)


def param_timeout():
    # set a timeout
    # ret = requests.get('http://google.com/', timeout=1)
    # print(ret)

    # set (connect timeout, read timeout) separately
    # ret = requests.get('http://google.com/', timeout=(5, 1))
    # print(ret)
    pass


def param_allow_redirects():
    # when a site redirects to another address, the first response still has a value;
    # allow_redirects controls whether requests follows the redirect with a new request
    ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
    print(ret.text)


def param_proxies():
    # set proxies; several can be configured at once
    # proxies = {
    #     "http": "61.172.249.96:80",
    #     "https": "http://61.185.219.126:3128",
    # }
    # proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}
    # ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
    # print(ret.headers)

    # proxies that require authentication
    # from requests.auth import HTTPProxyAuth
    #
    # proxyDict = {
    #     'http': '77.75.105.165',
    #     'https': '77.75.105.165'
    # }
    # auth = HTTPProxyAuth('username', 'mypassword')
    #
    # r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
    # print(r.text)
    pass


def param_stream():
    # fetch a large response (say 30 GB) in chunks instead of all at once
    ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
    print(ret.content)
    ret.close()

    # from contextlib import closing
    # with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
    #     # handle the response here
    #     for i in r.iter_content():
    #         print(i)


def requests_session():
    # a Session records cookies and the like so you don't fetch them by hand each time
    # (while first learning, it may help to skip it so you can see the cookie flow)
    import requests

    session = requests.Session()

    # 1. visit any page first to pick up the cookie
    i1 = session.get(url="http://dig.chouti.com/help/service")

    # 2. log in, carrying the previous cookie; the backend authorizes its gpsd value
    i2 = session.post(
        url="http://dig.chouti.com/login",
        data={
            'phone': "8615131255089",
            'password': "xxxxxx",
            'oneMonth': ""
        }
    )

    i3 = session.post(
        url="http://dig.chouti.com/link/vote?linksId=8589623",
    )
    print(i3.text)

VI. beautifulsoup

    BeautifulSoup is a module that takes an HTML or XML string and parses it into a tree of objects; you can then use its methods to locate specific elements quickly, which makes searching HTML or XML straightforward.

    soup = BeautifulSoup(html_doc, features="lxml")  # parses with lxml, which must be installed separately; it is lighter on resources than html.parser

Usage example:

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
...
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")

   1. name, the tag's name

# tag = soup.find('a')
# name = tag.name   # get
# print(name)
# tag.name = 'span' # set
# print(soup)

  2. attrs, the tag's attributes

# tag = soup.find('a')
# attrs = tag.attrs # get; returns a dict
# print(attrs)
# tag.attrs = {'ik': 123}    # set (replace all attributes)
# tag.attrs['id'] = 'iiiii'  # set a single attribute
# print(soup)

  3. children, all direct child nodes

# body = soup.find('body')
# v = body.children

  4. descendants, all descendant nodes, recursively

# body = soup.find('body')
# v = body.descendants

  5. clear, empty out all of a tag's children (the tag itself is kept)

# tag = soup.find('body')
# tag.clear()
# print(soup)

  6. decompose, recursively delete the tag and all its children (the tag itself is removed too)

# body = soup.find('body')
# body.decompose()
# print(soup)

  7. extract, recursively remove the tag and return it (similar to a dict's pop)

# body = soup.find('body')
# v = body.extract()
# print(soup)

  8. decode, serialize to a string (including the current tag); decode_contents (excluding the current tag)

# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)

  9. encode, serialize to bytes (including the current tag); encode_contents (excluding the current tag)

# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)

  10. find, get the first matching tag

# tag = soup.find('a')
# print(tag)
# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)
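The html_doc above has an elided body, so here is a tiny self-contained document to try find() against:

from bs4 import BeautifulSoup

doc = '<p><a class="sister" id="link1">Lacie</a></p>'
soup2 = BeautifulSoup(doc, 'html.parser')
# class is a Python keyword, so the filter argument is spelled class_
print(soup2.find(name='a', class_='sister', text='Lacie'))
# -> <a class="sister" id="link1">Lacie</a>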

  11. find_all, get all matching tags; class is a reserved word, so use class_ instead

# tags = soup.find_all('a')
# print(tags)

# tags = soup.find_all('a', limit=1)
# print(tags)

# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# # tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)

# ####### lists #######
# v = soup.find_all(name=['a', 'div'])
# print(v)

# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)

# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))

# v = soup.find_all(id=['link1', 'link2'])
# print(v)

# v = soup.find_all(href=['link1', 'link2'])
# print(v)

# ####### regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)

# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)

# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)

# ####### filter functions #######
# def func(tag):
#     return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)

# ## get, read a tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)

  12. has_attr, check whether the tag has a given attribute

# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)

  13. get_text, get the text inside the tag

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<a id='a1'>123</a>
</body>
</html>
""" soup = BeautifulSoup(html_doc, features="lxml") tag = soup.find('a')
v = tag.get_text('id')
print(v)

  14. index, find a tag's index position within another tag

# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)

# tag = soup.find('body')
# for i, v in enumerate(tag):
#     print(i, v)

  15. is_empty_element, whether the tag is an empty element (one that may be empty) or self-closing,

    i.e. one of: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'

# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)

  16. The current tag's related tags

# from bs4.element import Tag

# soup.next            # the next node inside this tag
# soup.next_element
# soup.next_elements   # recurse through all following nodes
# soup.next_sibling
# soup.next_siblings   # only looks downwards

# tag.previous         # the node just before this tag
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling  # sibling tags
# tag.previous_siblings # only looks upwards

# tag.parent           # the parent tag
# tag.parents

  17. Searching a tag's related tags

# tag.find_next(...)   # each of these accepts filter conditions
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)

# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)

# tag.find_parent(...)
# tag.find_parents(...)

# parameters are the same as find_all

  18. select, select_one, CSS selectors

soup.select("title")

soup.select("p nth-of-type(3)")

soup.select("body a")

soup.select("html head title")

tag = soup.select("span,a")

soup.select("head > title")

soup.select("p > a")

soup.select("p > a:nth-of-type(2)")

soup.select("p > #link1")

soup.select("body > a")

soup.select("#link1 ~ .sister")

soup.select("#link1 + .sister")

soup.select(".sister")

soup.select("[class~=sister]")

soup.select("#link1")

soup.select("a#link2")

soup.select('a[href]')

soup.select('a[href="http://example.com/elsie"]')

soup.select('a[href^="http://example.com/"]')

soup.select('a[href$="tillie"]')

soup.select('a[href*=".com/el"]')
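Note that select always returns a list, while select_one returns only the first match (or None); for example:

tag = soup.select_one("a.sister")  # first match only, or None if nothing matches
print(tag)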

from bs4.element import Tag

def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child

tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator)
print(type(tags), tags)

from bs4.element import Tag

def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child

tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1)
print(type(tags), tags)

  19. Tag contents

# tag = soup.find('span')
# print(tag.string)          # get
# tag.string = 'new content' # set
# print(soup)

# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)

# tag = soup.find('body')
# v = tag.stripped_strings   # recursively collect the text of all inner tags
# print(v)

  The difference between string and text:

    1. string can be assigned to; text cannot.

    2. string has type <class 'bs4.element.NavigableString'>, while text has type <class 'str'>.
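A quick sketch showing the difference:

from bs4 import BeautifulSoup

s = BeautifulSoup("<p><b>bold</b></p>", "html.parser")
b = s.find('b')
print(type(b.string))  # <class 'bs4.element.NavigableString'>
print(type(b.text))    # <class 'str'>
b.string = 'plain'     # assignment is only possible through .string
print(s)               # <p><b>plain</b></p>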

  20. append, append a tag inside the current tag

# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)
#
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)

  21. insert, insert a tag at a given position inside the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)

  22. insert_after, insert_before, insert after or before the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)

  23. replace_with, replace the current tag with the given tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = 'I am new here'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)

  24. Creating relationships between tags (counter-intuitive: the created relationship is not visible in the soup itself)

# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)

   25. wrap, wrap the current tag inside the given tag

# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = 'I am new here'
#
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)

# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)

   26. unwrap, remove the current tag but keep what it wrapped

# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)

Scraping Zhihu:

  
