This article shows how to retrieve the links from a webpage using Python and BeautifulSoup. It may serve as a useful reference if you are facing the same problem.
Question
How can I retrieve the links of a webpage and copy the URL address of each link using Python?
Recommended answer
Here's a short snippet using the SoupStrainer class in BeautifulSoup:
import httplib2
from bs4 import BeautifulSoup, SoupStrainer

# Fetch the page
http = httplib2.Http()
status, response = http.request('http://www.nytimes.com')

# Parse only the <a> tags, then print each href found
for link in BeautifulSoup(response, 'html.parser', parse_only=SoupStrainer('a')):
    if link.has_attr('href'):
        print(link['href'])
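Note that httplib2 is a third-party package, as is BeautifulSoup itself; assuming a standard pip setup, pip install httplib2 beautifulsoup4 should pull in both dependencies used above.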
The BeautifulSoup documentation is actually quite good, and covers a number of typical scenarios:
https://www.crummy.com/software/BeautifulSoup/bs4/doc/
Note that I used the SoupStrainer class because it's a bit more efficient (memory- and speed-wise) if you know in advance what you're parsing.
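If you don't need the SoupStrainer optimization, a more common pattern is to parse the whole page and call find_all. The sketch below is a minimal alternative, not part of the original answer, and it assumes the third-party requests library is installed:

import requests
from bs4 import BeautifulSoup

# Fetch and parse the whole page (no SoupStrainer)
response = requests.get('http://www.nytimes.com')
soup = BeautifulSoup(response.text, 'html.parser')

# href=True keeps only <a> tags that actually carry an href attribute
for link in soup.find_all('a', href=True):
    print(link['href'])

This parses the entire document rather than just the anchor tags, so it uses more memory than the SoupStrainer version, but the find_all call reads more directly.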
That concludes this article on retrieving links from a webpage with Python and BeautifulSoup. Hopefully the recommended answer above is helpful.