
Traversing the Document Tree and Manipulating Tags with the Python Scraping Library BeautifulSoup

Date: 2020-01-27  Source: 系统城  Author: 电脑系统城

Below are examples of using the Python scraping library BeautifulSoup to traverse the document tree and operate on tags; they all cover the most basic material.


 
  html_doc = """
  <html><head><title>The Dormouse's story</title></head>

  <p class="title">The Dormouse's story</p>

  <p class="story">Once upon a time there were three little sisters; and their names were
  <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
  <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
  <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
  and they lived at the bottom of a well.</p>

  <p class="story">...</p>
  """

  from bs4 import BeautifulSoup
  soup = BeautifulSoup(html_doc, 'lxml')

I. Child nodes

A Tag may contain multiple strings or other Tags, which are all children of that Tag. BeautifulSoup provides many attributes for traversing and operating on child nodes.

1. Getting a Tag by its name


 
  print(soup.head)
  print(soup.title)

  <head><title>The Dormouse's story</title></head>
  <title>The Dormouse's story</title>

Getting a tag by name only returns the first such tag; to collect all tags of a given kind, use the find_all method:


 
  soup.find_all('a')

  [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
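Beyond the bare tag name, find_all can also filter on attributes. A minimal sketch (using html.parser and a pared-down copy of the document above, so it runs standalone):

```python
from bs4 import BeautifulSoup

html_doc = """
<p class="story">Once upon a time there were three little sisters;
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

# class_ (trailing underscore) avoids clashing with the Python keyword
sisters = soup.find_all('a', class_='sister')
print(len(sisters))        # 3

# Any attribute can be passed as a keyword argument, e.g. id
link2 = soup.find('a', id='link2')
print(link2.get_text())    # Lacie
```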

2. contents: returns a tag's children as a list


 
  head_tag = soup.head
  head_tag.contents

  [<title>The Dormouse's story</title>]

  title_tag = head_tag.contents[0]
  title_tag

  <title>The Dormouse's story</title>

  title_tag.contents

  ["The Dormouse's story"]

3. children: use this attribute to loop over a tag's direct children


 
  for child in title_tag.children:
      print(child)

  The Dormouse's story
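The practical difference between contents and children: contents materializes a list, while children is a lazy iterator over the same nodes. A small self-contained sketch (using html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<head><title>The Dormouse's story</title></head>",
                     'html.parser')
head_tag = soup.head

# .contents is a plain list you can index and len()
print(len(head_tag.contents))                        # 1

# .children yields the same nodes, one at a time
print(list(head_tag.children) == head_tag.contents)  # True
```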

4. descendants: contents and children both return only direct children, whereas descendants recursively iterates over all of a tag's descendants


 
  for child in head_tag.children:
      print(child)

  <title>The Dormouse's story</title>

  for child in head_tag.descendants:
      print(child)

  <title>The Dormouse's story</title>
  The Dormouse's story

5. string: if a tag has exactly one child of type NavigableString, the tag exposes it through .string


 
  title_tag.string

  "The Dormouse's story"

If a tag has exactly one child, .string returns that sole child's NavigableString:


 
  head_tag.string

  "The Dormouse's story"

If a tag has multiple children, .string cannot tell which child it should refer to, so it returns None:


 
  print(soup.html.string)

  None
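When .string comes back as None because a tag has several children, get_text() is the usual fallback: it concatenates every string in the subtree. A minimal sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    '<p class="story">Once upon a time there were '
    '<a id="link1">three</a> little sisters.</p>',
    'html.parser')

p = soup.p
print(p.string)       # None: <p> has more than one child
print(p.get_text())   # Once upon a time there were three little sisters.
```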

6. strings and stripped_strings

If a tag contains more than one string, iterate over them with .strings:


 
  for string in soup.strings:
      print(string)

  The Dormouse's story


  The Dormouse's story


  Once upon a time there were three little sisters; and their names were

  Elsie
  ,

  Lacie
  and

  Tillie
  ;
  and they lived at the bottom of a well.


  ...

The .strings output includes many spaces and blank lines; use .stripped_strings to strip out this whitespace:


 
  for string in soup.stripped_strings:
      print(string)

  The Dormouse's story
  The Dormouse's story
  Once upon a time there were three little sisters; and their names were
  Elsie
  ,
  Lacie
  and
  Tillie
  ;
  and they lived at the bottom of a well.
  ...
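A common follow-up is joining the stripped fragments back into one readable line; a small sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    "<p>Once upon a time\n   <b>three</b>\n   little sisters\n</p>",
    'html.parser')

# stripped_strings discards whitespace-only strings and trims the rest
text = ' '.join(soup.stripped_strings)
print(text)   # Once upon a time three little sisters
```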

II. Parent nodes

1. parent: get an element's parent node


 
  title_tag = soup.title
  title_tag.parent

  <head><title>The Dormouse's story</title></head>

Strings have parents too:


 
  title_tag.string.parent

  <title>The Dormouse's story</title>

2. parents: recursively get all of an element's ancestors


 
  link = soup.a
  for parent in link.parents:
      if parent is None:
          print(parent)
      else:
          print(parent.name)

  p
  body
  html
  [document]
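One handy use of parents is building the path of enclosing tag names for an element, e.g. when debugging why a selector matched. A self-contained sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    '<html><body><p><a id="link1">Elsie</a></p></body></html>',
    'html.parser')

# Walk from the <a> tag up to the document root, collecting tag names
path = [parent.name for parent in soup.a.parents]
print(path)   # ['p', 'body', 'html', '[document]']
```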

III. Sibling nodes


 
  sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></a>", 'lxml')
  print(sibling_soup.prettify())

  <html>
   <body>
    <a>
     <b>
      text1
     </b>
     <c>
      text2
     </c>
    </a>
   </body>
  </html>

1. next_sibling and previous_sibling

  sibling_soup.b.next_sibling

  <c>text2</c>

  sibling_soup.c.previous_sibling

  <b>text1</b>

In a real document, a tag's .next_sibling or .previous_sibling is usually a string or whitespace:


 
  soup.find_all('a')

  [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

  soup.a.next_sibling  # the first <a> tag's next_sibling is ',\n'

  ',\n'

  soup.a.next_sibling.next_sibling

  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>

2. next_siblings and previous_siblings


 
  for sibling in soup.a.next_siblings:
      print(repr(sibling))

  ',\n'
  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
  ' and\n'
  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
  ';\nand they lived at the bottom of a well.'

  for sibling in soup.find(id="link3").previous_siblings:
      print(repr(sibling))

  ' and\n'
  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
  ',\n'
  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
  'Once upon a time there were three little sisters; and their names were\n'
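Because next_sibling so often lands on a whitespace string, BeautifulSoup also offers find_next_sibling, which skips strings and jumps straight to the next matching tag. A sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(
    '<p><a id="link1">Elsie</a>,\n<a id="link2">Lacie</a></p>',
    'html.parser')

first = soup.a
print(repr(first.next_sibling))        # ',\n' -- the raw string in between
print(first.find_next_sibling('a'))    # <a id="link2">Lacie</a>
```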

IV. Going backward and forward

1. next_element and previous_element

These attributes point to the next or previous parsed object (a string or a tag), i.e. the node that comes after or before the current one in the document's depth-first parse order.


 
  last_a_tag = soup.find("a", id="link3")
  print(last_a_tag.next_sibling)
  print(last_a_tag.next_element)

  ;
  and they lived at the bottom of a well.
  Tillie
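The contrast is clearest on a tag that has children: next_element descends into the tag's own subtree, while next_sibling skips past the entire subtree. A minimal sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p><b>one</b><i>two</i></p>', 'html.parser')

b = soup.b
print(repr(b.next_element))   # 'one' -- first parsed object inside <b>
print(b.next_sibling)         # <i>two</i> -- the subtree of <b> is skipped
```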

 
  last_a_tag.previous_element

  ' and\n'

2. next_elements and previous_elements

The .next_elements and .previous_elements attributes let you move forward or backward through the document's content in parse order, as if the document were being parsed all over again:


 
  for element in last_a_tag.next_elements:
      print(repr(element))

  'Tillie'
  ';\nand they lived at the bottom of a well.'
  '\n'
  <p class="story">...</p>
  '...'
  '\n'
