Scrape the Zhihu hot list with Python, extracting each entry's title and link.

Environment and tools: Ubuntu 16.04, Python 3, requests, XPath

1. Open Zhihu in a browser and log in


2. Get your Cookie and User-Agent (from the browser's developer tools, under the request headers)

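The Cookie copied from the browser is one long header string. You can pass it straight through as a `'Cookie'` header (as the code below does), or parse it into a dict with the standard library's `http.cookies.SimpleCookie` and hand it to requests via the `cookies=` parameter. A minimal sketch, using made-up cookie values:

```python
from http.cookies import SimpleCookie

# Hypothetical raw Cookie header copied from the browser's dev tools
raw = '_zap=abc123; q_c1=xyz'

# Parse the header string into name -> value pairs
cookie = SimpleCookie()
cookie.load(raw)
cookies = {name: morsel.value for name, morsel in cookie.items()}

print(cookies)
# With requests you could then call:
# requests.get(url, headers={'User-Agent': '...'}, cookies=cookies)
```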

3. The code

    import requests
    from lxml import etree

    def get_html(url):
        """Fetch the hot-list page with the logged-in browser's headers."""
        headers = {
            'Cookie': 'your Cookie',
            # 'Host': 'www.zhihu.com',
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
        }
        r = requests.get(url, headers=headers)
        if r.status_code == 200:
            deal_content(r.text)

    def deal_content(r):
        """Extract titles and links via XPath, print them, and append to a file."""
        html = etree.HTML(r)
        title_list = html.xpath('//*[@id="TopstoryContent"]/div/section/div[2]/a/h2')
        link_list = html.xpath('//*[@id="TopstoryContent"]/div/section/div[2]/a/@href')
        for i in range(len(title_list)):
            print(title_list[i].text)
            print(link_list[i])
            with open("zhihu.txt", 'a') as f:
                f.write(title_list[i].text + '\n')
                f.write('\tLink: ' + link_list[i] + '\n')
                f.write('*' * 50 + '\n')

    def main():
        url = 'https://www.zhihu.com/hot'
        get_html(url)

    main()
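The XPath extraction in `deal_content` can be tried offline against a small HTML snippet, which is useful for checking the expressions before making a live request. The snippet below is a hypothetical fragment mimicking the structure those XPath paths expect (an outer `div` with `id="TopstoryContent"`, each entry in a `section` whose second `div` holds the `a`/`h2` pair); the real page markup may differ:

```python
from lxml import etree

# Hypothetical fragment shaped like what the XPath expressions expect
sample = '''
<div id="TopstoryContent">
  <div>
    <section><div></div><div><a href="https://www.zhihu.com/question/1"><h2>Title one</h2></a></div></section>
    <section><div></div><div><a href="https://www.zhihu.com/question/2"><h2>Title two</h2></a></div></section>
  </div>
</div>
'''

html = etree.HTML(sample)
titles = [t.text for t in html.xpath('//*[@id="TopstoryContent"]/div/section/div[2]/a/h2')]
links = html.xpath('//*[@id="TopstoryContent"]/div/section/div[2]/a/@href')

for title, link in zip(titles, links):
    print(title, link)
```

Pairing the two lists with `zip` also avoids an `IndexError` if the page ever returns fewer links than titles.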

4. The scraped results


05-11 11:08