Auto-completing HTML code:

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
print(soup.prettify())  # the parser completes any unclosed/missing tags; prettify() prints the repaired tree
print(soup.title.string)
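
To see the completion happen without touching the network, feed the parser a deliberately broken fragment (a minimal sketch; the HTML string is invented for illustration):

from bs4 import BeautifulSoup

broken = '<html><head><title>Demo</title></head><body><ul><li>one<li>two'
soup = BeautifulSoup(broken, 'lxml')
print(soup.prettify())  # lxml closes the unclosed tags while building the tree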

Finding tags

# Basic usage
soup.title         # <title>xxxxxxx</title>
soup.title.string  # xxxxxx

Getting the tag name

# Basic usage
soup.title       # <title>xxxxxxx</title>
soup.title.name  # title

Getting attributes

# Basic usage
soup.a          # <a name="...">xxxxxxx</a>
soup.a['name']  # the value of the <a> tag's name attribute

Getting content

soup.title.string  # xxxxxx

Nested selection

print(soup.head.title.string)
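
All of the snippets above can be exercised against an inline document instead of a live page (a minimal sketch; the HTML is invented for illustration):

from bs4 import BeautifulSoup

html = '<html><head><title>Demo</title></head><body><a name="top" href="#">link</a></body></html>'
soup = BeautifulSoup(html, 'lxml')
print(soup.title)              # <title>Demo</title>
print(soup.title.name)         # title
print(soup.title.string)       # Demo
print(soup.a['name'])          # top
print(soup.head.title.string)  # Demo -- nested selection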

Child nodes

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
print(soup.div.contents)  # the direct children, as a list

Or:

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
a = soup.div.children  # a generator over the direct children
print(a)
for i, j in enumerate(a):
    print(i, j)
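
The practical difference: .contents returns a plain list, while .children is a generator that has to be iterated. A minimal sketch on invented HTML:

from bs4 import BeautifulSoup

html = '<div><p>one</p><p>two</p></div>'
soup = BeautifulSoup(html, 'lxml')
print(soup.div.contents)  # [<p>one</p>, <p>two</p>] -- a plain list
for i, child in enumerate(soup.div.children):  # .children must be iterated
    print(i, child)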

Descendant nodes

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
a = soup.div.descendants  # a generator over all descendants, recursively
print(a)
for i, j in enumerate(a):
    print(i, j)
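
.descendants walks the whole subtree recursively, while .children stops at the direct children. A minimal sketch on invented HTML:

from bs4 import BeautifulSoup

html = '<div><ul><li>one</li><li>two</li></ul></div>'
soup = BeautifulSoup(html, 'lxml')
print(len(list(soup.div.children)))     # 1 -- only the direct child <ul>
print(len(list(soup.div.descendants)))  # 5 -- the <ul>, both <li>, and their two text nodes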

Getting the parent node

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
a = soup.div.parent  # the immediate parent only
print(a)

Getting ancestor nodes

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
a = soup.div.parents  # a generator over all ancestors
for i, j in enumerate(a):
    print(i, j)
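
.parent returns only the immediate parent, while .parents is a generator that climbs all the way up to the BeautifulSoup document object. A minimal sketch on invented HTML:

from bs4 import BeautifulSoup

html = '<html><body><div><p>text</p></div></body></html>'
soup = BeautifulSoup(html, 'lxml')
print(soup.p.parent.name)                # div
print([t.name for t in soup.p.parents])  # ['div', 'body', 'html', '[document]']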

Getting sibling nodes

a = soup.div.next_siblings  # siblings after this node (a generator)

Preceding sibling nodes

a = soup.div.previous_siblings  # siblings before this node (a generator)
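
Both attributes are generators and yield siblings in document order away from the starting tag. A minimal sketch on invented HTML (in real pages, whitespace between tags also shows up as text-node siblings):

from bs4 import BeautifulSoup

html = '<ul><li>a</li><li id="mid">b</li><li>c</li></ul>'
soup = BeautifulSoup(html, 'lxml')
mid = soup.find(id='mid')
print(list(mid.next_siblings))      # [<li>c</li>]
print(list(mid.previous_siblings))  # [<li>a</li>]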

Standard selectors
find_all(name, attrs, recursive, string, limit, **kwargs)

name

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
print(soup.find_all('ul'))  # find elements by tag name

find_all() calls can also be nested on the returned tags:

import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
for ul in soup.find_all('ul'):
    for li in ul.find_all('li'):
        print(li)

attrs

import requests
import re
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
a = soup.find_all(attrs={'class': 'lazy'})  # matches e.g. <img class="lazy" ...>
for index, i in enumerate(a):
    result = re.findall(r'[a-zA-Z]+://[^\s]*png', str(i))  # pull the .png URL out of the tag's markup
    if not result:
        continue
    url = result[0]
    res = requests.get(url)
    with open('%d.png' % index, 'wb') as f:
        f.write(res.content)
The same filters can be written as keyword arguments (class_ takes a trailing underscore because class is a reserved word in Python):

a = soup.find_all(class_='lazy')
a = soup.find_all(id='lazy')
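
The remaining parameters from the signature above can be sketched on invented HTML: searches are recursive by default, recursive=False restricts them to direct children, and limit caps the number of matches:

from bs4 import BeautifulSoup

html = '<div><p>top</p><div><p>nested</p></div></div>'
soup = BeautifulSoup(html, 'lxml')
print(soup.div.find_all('p'))                   # both <p> tags -- searches recursively by default
print(soup.div.find_all('p', recursive=False))  # [<p>top</p>] -- direct children only
print(soup.find_all('p', limit=1))              # [<p>top</p>] -- stop after the first match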

CSS selectors

import requests
import re
from bs4 import BeautifulSoup

response = requests.get('https://www.ithome.com/html/it/340684.htm', timeout=9)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
a = soup.select('.lazy')  # CSS class selector
for index, i in enumerate(a):
    result = re.findall(r'[a-zA-Z]+://[^\s]*png', str(i))
    if not result:
        continue
    url = result[0]
    res = requests.get(url)
    with open('./test/%d.png' % (index + 1), 'wb') as f:  # the ./test directory must already exist
        f.write(res.content)
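
select() accepts most CSS selector syntax, not only class selectors. A minimal sketch on invented HTML:

from bs4 import BeautifulSoup

html = '<div id="main"><ul class="menu"><li><a href="/a">A</a></li></ul></div>'
soup = BeautifulSoup(html, 'lxml')
print(soup.select('#main'))       # by id
print(soup.select('ul.menu li'))  # descendant combinator
print(soup.select('a[href]'))     # attribute selector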

Getting an element's attributes

import requests
import re
from bs4 import BeautifulSoup

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36"
}
response = requests.get('https://www.zhihu.com/question/20519068/answer/215288567', headers=headers, timeout=None)
soup = BeautifulSoup(response.content, 'lxml')
# print(soup.prettify())
a = soup.select('.lazy')
for index, i in enumerate(a):
    url = i['data-original']  # read the real image URL from the data-original attribute
    # regex alternative:
    # result = re.findall(r'[a-zA-Z]+://[^\s]*jpg', str(i))
    # url = result[0]
    res = requests.get(url)
    with open('./test/%d.jpg' % (index + 1), 'wb') as f:
        f.write(res.content)
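
Note that i['data-original'] raises KeyError if the attribute is missing; tag.get() is the tolerant alternative. A minimal sketch:

from bs4 import BeautifulSoup

tag = BeautifulSoup('<img class="lazy" src="x.png">', 'lxml').img
print(tag.get('data-original'))  # None -- missing attribute, no KeyError
print(tag.get('src'))            # x.png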

Getting text content

li.get_text()
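
get_text() gathers all text in the subtree and accepts separator and strip arguments. A minimal sketch on invented HTML:

from bs4 import BeautifulSoup

li = BeautifulSoup('<li><b>bold</b> and plain</li>', 'lxml').li
print(li.get_text())                           # bold and plain
print(li.get_text(separator='|', strip=True))  # bold|and plain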
