I've been trying to fetch each link from a list of URLs I successfully extracted. My problem is that when I loop over the whole list I get a TypeError (Traceback (most recent call last)). However, when I fetch a single link, the urlopen(urls).read() line executes without any problem.

import requests
from urllib.request import urlopen
from bs4 import BeautifulSoup

response = requests.get('some_website')
doc = BeautifulSoup(response.text, 'html.parser')
headlines = doc.find_all('h3')

links = doc.find_all('a', {'rel': 'bookmark'})
for link in links:
    print(link['href'])

for urls in links:
    raw_html = urlopen(urls).read()  # <----- this line raises the TypeError
    articles = BeautifulSoup(raw_html, 'html.parser')
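The TypeError comes from passing a BeautifulSoup Tag where urlopen expects a URL string (or a Request object): find_all returns Tag objects, so the loop has to index into each tag for its href attribute. A minimal sketch of the distinction, using a made-up example.com snippet in place of the real page:

```python
from bs4 import BeautifulSoup

# Stand-in markup; in the question, `links` comes from doc.find_all(...)
html = '<a rel="bookmark" href="https://example.com/post">Post</a>'
doc = BeautifulSoup(html, 'html.parser')
links = doc.find_all('a', {'rel': 'bookmark'})

for url_tag in links:
    print(type(url_tag).__name__)  # Tag -- not a str, so urlopen(url_tag) raises TypeError
    print(url_tag['href'])         # the plain URL string that urlopen actually needs
```

Passing `url_tag['href']` (a string) to urlopen instead of the tag itself resolves the error.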

Best answer

Consider using BeautifulSoup together with requests.Session(), which is more efficient because it reuses the underlying connection and lets you set headers once:

import requests
from bs4 import BeautifulSoup

with requests.Session() as s:
    url = 'https://newspunch.com/category/news/us/'
    headers = {'User-Agent': 'Mozilla/5'}
    r = s.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'lxml')
    links = [item['href'] for item in soup.select('[rel=bookmark]')]

    for link in links:
        r = s.get(link)
        soup = BeautifulSoup(r.text, 'lxml')
        print(soup.select_one('.entry-title').text)
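One caveat about the last line: select_one returns None when no element matches, so on a page without an .entry-title the .text access would raise an AttributeError. A small guard, shown here on a hypothetical inline snippet rather than the live site:

```python
from bs4 import BeautifulSoup

# One page with a title, one without (hypothetical markup)
soup = BeautifulSoup('<h1 class="entry-title"> Hello </h1>', 'html.parser')

# select_one returns None on no match, so check before reading .text
title = soup.select_one('.entry-title')
print(title.get_text(strip=True) if title else 'no title')  # Hello
```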

A similar question about reading a list of links with Beautiful Soup can be found on Stack Overflow: https://stackoverflow.com/questions/55785182/
