Scraping the CVE Vulnerability Database with Python 👻
I've recently been planning to reproduce some vulnerabilities from the past few years 👻, and looking them up one by one is just too tedious. If I stop partway through the listing today, a few days later I probably won't remember which page I was on. So I figured I'd write a crawler to pull everything down into an Excel sheet, which makes things much clearer. 😂 The catch is that I'd never written a crawler before; I'd always been interested in them but never actually studied the topic. Now that the need is right in front of me, there's no avoiding it. I did try to find a ready-made script online first, and I found several, but none of them worked 😂, and I couldn't really follow the experts' code either. After an afternoon of hacking I ended up with this clumsy script 😂. I'll keep improving it, because it isn't exactly fast: scraping 36,000+ records took me about 20 minutes 😂. Pointers from more experienced folks are very welcome! 👻 If you spot any problems with the code, feel free to comment below. Here's the code:
The code is written in Python 3 (it needs the requests, beautifulsoup4, lxml, and xlsxwriter packages):
# Author: 胖三斤的博客
# Date: 2021/11/5
import requests
from bs4 import BeautifulSoup
import xlsxwriter
workbook = xlsxwriter.Workbook('loudong.xlsx')  # create the output workbook
worksheet = workbook.add_worksheet()
worksheet.write(0, 0, 'URL')   # header cell: row 0, column 0
worksheet.write(0, 1, 'cve')   # header cell: row 0, column 1
worksheet.write(0, 2, 'time')  # and so on for the remaining headers
worksheet.write(0, 3, 'name')
k = 1  # next worksheet row to write
i = 1  # running <td> counter; the listing table has 6 cells per row
for j in range(1, 3601):  # start page to end page; adjust as needed
    burp0_url = f"http://cve.scap.org.cn:80/vulns/{j}?view=global"
    # cookies and headers copied from a captured browser request; refresh them if the site rejects the request
    burp0_cookies = {"_csrf_token": "629b8310c3efb5aca85b39726ef56d29b505dc91", "session": "eyJfY3NyZl90b2tlbiI6IjYyOWI4MzEwYzNlZmI1YWNhODViMzk3MjZlZjU2ZDI5YjUwNWRjOTEiLCJfZnJlc2giOmZhbHNlfQ.YYTTIA.E-cwfm_arSSLqc772cS3GqCIPu0", "Hm_lvt_1ac51b9b492db88525810a29c7aa73cd": "1636092045", "Hm_lpvt_1ac51b9b492db88525810a29c7aa73cd": "1636094752"}
    burp0_headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8", "Accept-Language": "zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2", "Accept-Encoding": "gzip, deflate", "Connection": "close", "Referer": "http://cve.scap.org.cn/vulns/2?view=global", "Upgrade-Insecure-Requests": "1"}
    data = requests.get(burp0_url, headers=burp0_headers, cookies=burp0_cookies).text
    soup = BeautifulSoup(data, 'lxml')
    for link in soup.find_all('td'):
        if i % 6 == 1:  # 1st cell of a table row: CVE id and link to the detail page
            href = link.a.attrs['href']
            cve = link.a.string.strip()
            worksheet.write(k, 0, "http://cve.scap.org.cn" + href)
            worksheet.write(k, 1, cve)
        if i % 6 == 2:  # 2nd cell: publication time
            time = link.string
            worksheet.write(k, 2, time)
        if i % 6 == 4:  # 4th cell: vulnerability name
            name = link.string
            worksheet.write(k, 3, name)
        if i % 6 == 0:  # 6th cell: end of this table row, move to the next worksheet row
            k = k + 1
        i = i + 1
    print(f"Scraped page {j}")
workbook.close()
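
As mentioned above, 36,000+ records took about 20 minutes, and one likely reason is that every page opens a fresh connection. A possible speed-up, sketched below purely as an illustration rather than a drop-in replacement, is to reuse a single requests.Session and fetch several pages at once with concurrent.futures.ThreadPoolExecutor. The PAGES range, MAX_WORKERS value, simplified HEADERS dict, and the fetch_page/parse_page helpers are my own illustrative names and choices, not part of the original script, and the cookie handling is omitted:

# Rough sketch of a faster variant: one shared Session plus a small thread pool.
# PAGES, MAX_WORKERS, HEADERS, fetch_page() and parse_page() are illustrative assumptions.
import concurrent.futures

import requests
from bs4 import BeautifulSoup
import xlsxwriter

PAGES = range(1, 3601)   # same page range as above; adjust as needed
MAX_WORKERS = 8          # assumption: a modest pool so the site isn't hammered
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0"}

session = requests.Session()  # reuses connections instead of opening one per page

def fetch_page(j):
    """Download listing page j and return its HTML."""
    url = f"http://cve.scap.org.cn:80/vulns/{j}?view=global"
    # add cookies= here if the site requires the captured session cookie
    return session.get(url, headers=HEADERS, timeout=10).text

def parse_page(html):
    """Yield (url, cve, time, name) tuples from one listing page, row by row."""
    soup = BeautifulSoup(html, 'lxml')
    for tr in soup.find_all('tr'):
        tds = tr.find_all('td')
        if len(tds) < 6 or tds[0].a is None:
            continue  # skip header or malformed rows
        href = tds[0].a.attrs['href']
        yield ("http://cve.scap.org.cn" + href,
               tds[0].a.get_text(strip=True),   # CVE id
               tds[1].get_text(strip=True),     # publication time
               tds[3].get_text(strip=True))     # vulnerability name

workbook = xlsxwriter.Workbook('loudong.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write_row(0, 0, ('URL', 'cve', 'time', 'name'))
row = 1
with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    # pool.map returns results in page order, so the rows stay sorted as in the original script
    for j, html in zip(PAGES, pool.map(fetch_page, PAGES)):
        for record in parse_page(html):
            worksheet.write_row(row, 0, record)
            row += 1
        print(f"Scraped page {j}")
workbook.close()

Only the downloads run in the thread pool; all the xlsxwriter calls stay in the main thread, which keeps the workbook handling simple and avoids any concurrency issues around the workbook object.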