So I am trying to scrape the headlines from here, for ten whole years. `years` is a list containing the following:
```
/resources/archive/us/2007.html
/resources/archive/us/2008.html
/resources/archive/us/2009.html
/resources/archive/us/2010.html
/resources/archive/us/2011.html
/resources/archive/us/2012.html
/resources/archive/us/2013.html
/resources/archive/us/2014.html
/resources/archive/us/2015.html
/resources/archive/us/2016.html
```
What my code does is open each year's page, collect all of the date links, then open each of those individually, grab the `.text` of every headline, and append each headline together with its date as a row to the dataframe `headlines`:
```python
import re

import pandas as pd
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}

headlines = pd.DataFrame(columns=["date", "headline"])

for y in years:
    # Fetch and parse the archive page for one year.
    yurl = "http://www.reuters.com" + str(y)
    response = requests.get(yurl, headers=headers)
    bs = BeautifulSoup(response.content.decode('ascii', 'ignore'), 'lxml')

    # Each month heading is an <h5>; its sibling element holds that
    # month's day links.
    days = []
    links = bs.findAll('h5')
    for mon in links:
        for day in mon.next_sibling.next_sibling:
            days.append(day)
    # Drop the bare newline text nodes between the links.
    days = [e for e in days if str(e) not in ('\n')]

    for ind in days:
        # Pull the yyyymmdd digits out of the href and rewrite them
        # as mm-dd-yyyy.
        hlday = ind['href']
        date = re.findall(r'(?!/)[0-9].+(?=\.)', hlday)[0]
        date = date[4:6] + '-' + date[6:] + '-' + date[:4]
        print(date.split('-')[2])

        # Fetch and parse the page for that single day.
        yurl = "http://www.reuters.com" + str(hlday)
        response = requests.get(yurl, headers=headers)
        if response.status_code == 404 or response.content == b'':
            print('')
        else:
            bs = BeautifulSoup(response.content.decode('ascii', 'ignore'), 'lxml')
            # Every headline sits in a div with class "headlineMed".
            lines = bs.findAll('div', {'class': 'headlineMed'})
            for h in lines:
                headlines = headlines.append([{"date": date, "headline": h.text}],
                                             ignore_index=True)
```
It takes forever to run, so rather than looping over every year, I ran it for /resources/archive/us/2008.html alone. That was three hours ago, and it is still running.
Since I am new to Python, I don't understand what I am doing wrong or how to do this better.
Is it `pandas.append` that is taking forever, because on every iteration it has to read and write an ever-larger dataframe?

Best answer
You are using this anti-pattern:
```python
headlines = pd.DataFrame()
for y in years:
    for ind in days:
        headlines = headlines.append(blah)
```
Instead, do this:
```python
headlines = []
for y in years:
    for ind in days:
        headlines.append(pd.DataFrame(blah))
headlines = pd.concat(headlines)
```
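The reason this matters: every `DataFrame.append` call copies all existing rows into a new frame, so n appends do O(n²) work, while appending to a plain Python list is O(1) per row. A minimal self-contained sketch of the fast pattern, with dummy rows standing in for the scraped headlines:

```python
import pandas as pd

rows = []  # plain Python list; list.append copies nothing

for i in range(10000):
    # In the scraper this would be {"date": date, "headline": h.text}.
    rows.append({"date": "01-01-2008", "headline": "headline %d" % i})

# Build the dataframe exactly once, from all rows at the same time.
headlines = pd.DataFrame(rows, columns=["date", "headline"])
print(len(headlines))  # 10000
```

Constructing one `DataFrame` from a list of dicts is equivalent to the `pd.concat` version above, and it avoids creating thousands of one-row frames along the way.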
Your second potential problem is that you are making 3,650 web requests. If I ran a site like that, I would add throttling to slow down scrapers like yours. You may find it better to collect the raw data once, store it on disk, and then do the processing in a second pass. That way you don't pay for 3,650 web requests every single time you need to debug your program.
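A hedged sketch of that two-pass idea, assuming a local `cache/` directory and a hypothetical `fetch_cached` helper (neither is part of the original code); the one-second sleep is just a politeness delay, and all the parsing then runs offline against the cached files:

```python
import os
import time

import requests

CACHE_DIR = "cache"  # hypothetical directory holding the raw HTML
HEADERS = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}

def fetch_cached(path):
    """Return the raw HTML for a Reuters path, hitting the network at most once."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    fname = os.path.join(CACHE_DIR, path.strip("/").replace("/", "_"))
    if os.path.exists(fname):
        with open(fname, "rb") as f:  # second and later runs read from disk
            return f.read()
    response = requests.get("http://www.reuters.com" + path, headers=HEADERS)
    time.sleep(1)  # be polite: at most one request per second
    with open(fname, "wb") as f:
        f.write(response.content)
    return response.content
```

With something like this in place, each `requests.get(...)` in the scraper becomes `fetch_cached(y)` or `fetch_cached(hlday)`, and re-running after a parsing bug costs only disk reads.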