This question already has answers here:
Web Crawler To get Links From New Website (3 answers)
Closed 6 years ago.
I am building web scrapers for different news outlets, and I am trying to create one for The Hindu newspaper.

I want to get the news from the various links listed in its archive. Say I want the news from the links listed for a particular day: http://www.thehindu.com/archive/web/2010/06/19/ i.e. June 19, 2010.

Now, I have written the following lines of code:

import mechanize
from bs4 import BeautifulSoup

url = "http://www.thehindu.com/archive/web/2010/06/19/"

br = mechanize.Browser()
htmltext = br.open(url).read()

articletext = ""
soup = BeautifulSoup(htmltext)
for tag in soup.findAll('li', attrs={"data-section":"Business"}):
    articletext += tag.contents[0]
print articletext


But I am not able to get the desired result. I am basically stuck. Can someone help me sort this out?

Best Answer

Try the following code:

import re

import mechanize
from bs4 import BeautifulSoup

url = "http://www.thehindu.com/archive/web/2010/06/19/"

br = mechanize.Browser()
htmltext = br.open(url).read()

# Parse the archive page so its sections can be searched
soup = BeautifulSoup(htmltext)

# Walk every article link in the "Op-Ed" section of the archive page
for tag_li in soup.findAll('li', attrs={"data-section": "Op-Ed"}):
    for link in tag_li.findAll('a'):
        urlnew = link.get('href')
        brnew = mechanize.Browser()
        htmltextnew = brnew.open(urlnew).read()
        articletext = ""
        soupnew = BeautifulSoup(htmltextnew)
        # Collect the article body from its <p> tags
        for tag in soupnew.findAll('p'):
            articletext += tag.text
        # Collapse runs of whitespace before printing
        print re.sub(r'\s+', ' ', articletext, flags=re.M)
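The snippet above filters on data-section="Op-Ed"; to get the Business links from the question instead, pass "Business" in the attrs dict, i.e. soup.findAll('li', attrs={"data-section": "Business"}).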


Note that the re module is imported at the top for the re.sub whitespace cleanup.
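One caveat: the hrefs in the archive listing may be relative, and mechanize cannot open a relative URL on its own. A minimal sketch of resolving them against the archive URL with urlparse.urljoin (the base URL is the one from the question; the example paths are hypothetical, for illustration only):

import urlparse  # Python 2 stdlib; in Python 3 this lives in urllib.parse

base = "http://www.thehindu.com/archive/web/2010/06/19/"

# urljoin resolves a relative path against the base URL and
# leaves an already-absolute URL untouched.
print urlparse.urljoin(base, "/opinion/op-ed/some-article.ece")      # hypothetical relative path
print urlparse.urljoin(base, "http://www.thehindu.com/full-link.ece")  # hypothetical absolute URL

Applying urljoin to each link.get('href') before calling brnew.open would make the loop robust to both kinds of link.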

Regarding python - web scraping to build a news database, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/19915384/
