BeautifulSoup4

Official documentation

BeautifulSoup4 is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.
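
A minimal sketch of what that looks like in practice (the HTML snippet below is made up for illustration):

from bs4 import BeautifulSoup

html = "<html><head><title>Demo</title></head><body><a href='/route.html'>route</a></body></html>"
soup = BeautifulSoup(html, "html.parser")   # parser bundled with Python; no extra install needed
print(soup.title.string)   # Demo
print(soup.a["href"])      # /route.html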

1. Installing BeautifulSoup4

pip install beautifulsoup4

2. Detailed operations

from bs4 import BeautifulSoup
from urllib import request

# Fetch the page content
base_url = 'http://langlang2017.com/route.html'
response = request.urlopen(base_url)
html = response.read()

# Parse the data (extract it from the page): create a bs4 object
soup = BeautifulSoup(html, 'lxml')
# Pretty-print the parsed document
content = soup.prettify()

# Part 1: tag names and attributes -- soup.<tag> only matches the first such tag
# 1. tag / name
# print(soup.name)        # document type
# print(soup.div.name)    # tag name
# 2. attrs
# print(soup.title.attrs)
# print(soup.img.attrs)
# 3. modify an attribute value
img = soup.img.attrs
# print(img)
domain = 'http://www.langlang2017.com'
img["src"] = domain + img["src"]
# print(img)
# 4. delete an attribute
img = soup.img.attrs
# print(img)
del img["alt"]
# print(img)

# Part 2: getting text
# print(soup.title)
# print(soup.title.attrs)
# print(soup.title.name)
# Syntax: tag.string
# print(soup.title.string)

# Part 3: traversing the tree
# tag.contents -- list of child nodes
head = soup.head.contents
# print(head)
# print(head[3])
# tag.children -- iterator over the child nodes
head_children = soup.head.children
# for i in head_children:
#     print(i)
# tag.descendants -- all descendant nodes
# print(soup.div)
# for i in soup.div.descendants:
#     print(i)

# Searching the document: find_all()
# print(soup.meta)    # soup.meta returns only the first match
# for i in soup.find_all('meta'):
#     print(i)
# a list of tag names
# print(soup.find_all(["h1", "h2"]))
# keyword filters
# print(soup.find_all(id='weixin'))

# Part 4: CSS selectors -- soup.select()
# search by class name
# print(soup.select('.logo'))
# search by tag name
# print(soup.select('a'))
# search by id
# print(soup.select('#weixin'))
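
Since langlang2017.com may no longer be reachable, the sketch below runs the same attribute, find_all(), and select() operations against an inline HTML snippet; the markup, class names, and ids are made up for illustration:

from bs4 import BeautifulSoup

html = """
<html>
  <head><title>Route</title></head>
  <body>
    <img class="logo" src="/img/logo.png" alt="logo">
    <h1>Day 1</h1>
    <h2 id="weixin">Contact</h2>
  </body>
</html>
"""
soup = BeautifulSoup(html, "html.parser")

# Modify and delete attributes directly on the tag
img = soup.img
img["src"] = "http://www.langlang2017.com" + img["src"]
del img["alt"]
print(img)

# find_all() with a single name, a list of names, or a keyword filter
print(soup.find_all("h1"))
print(soup.find_all(["h1", "h2"]))
print(soup.find_all(id="weixin"))

# CSS selectors
print(soup.select(".logo"))
print(soup.select("#weixin"))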

3. Note: a possible runtime error

bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?

Fix: install the lxml package

pip install lxml
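
If you would rather not install lxml, a workaround (assuming slightly different parsing behaviour is acceptable) is to fall back to the parser that ships with Python:

from bs4 import BeautifulSoup, FeatureNotFound

html = "<p>hello</p>"   # any markup
try:
    soup = BeautifulSoup(html, "lxml")          # fast C parser; requires: pip install lxml
except FeatureNotFound:
    soup = BeautifulSoup(html, "html.parser")   # pure-Python parser from the standard library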