I'm pulling data from another API. The problem is that the data set is large, so the responses are paginated. I work around this by first reading how many pages of data there are and then repeating the request for each page. The only issue is that the total is around 1.5K pages, which takes a very long time to actually fetch and append to the CSV. Is there a faster way to do this?
This is the endpoint I'm targeting: https://developer.keeptruckin.com/reference#get-logs
import requests
import csv

url = 'https://api.keeptruckin.com/v1/logs?start_date=2019-03-09'
header = {'x-api-key': 'API KEY HERE'}

# First request just reads the pagination info.
r = requests.get(url, headers=header)
result = r.json()
num_pages = result['pagination']['total']
print(num_pages)

# Only rows for these drivers get written to the CSV.
target_usernames = {'barmx1045', 'aposx001', 'mcqkl002', 'coudx014',
                    'ruscx013', 'loumx001', 'robkr002', 'masgx009',
                    'coxed001', 'mcamx009', 'linmx024', 'woldj002', 'fosbl004'}
csvheader = ['First Name', 'Last Name', 'Date', 'Time', 'Type', 'Location']

for page in range(1, num_pages + 1):  # start at 1 so the first page is written too
    r = requests.get(url, headers=header, params={'page_no': page})
    result = r.json()
    with open('myfile.csv', 'a+', newline='') as csvfile:
        # quoting must be passed by keyword; positionally it would be taken as the dialect
        writer = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
        ##writer.writerow(csvheader)
        for log in result['logs']:
            username = log['log']['driver']['username']
            first_name = log['log']['driver']['first_name']
            last_name = log['log']['driver']['last_name']
            for event in log['log']['events']:
                start_time = event['event']['start_time']
                date, time = start_time.split('T')  # ISO timestamp, e.g. '2019-03-09T12:00:00Z'
                event_type = event['event']['type']
                location = event['event']['location'] or 'N/A'
                if username in target_usernames:
                    writer.writerow((first_name, last_name, date, time, event_type, location))
Best answer
First option: most paginated responses have a page size you can set.
https://developer.keeptruckin.com/reference#pagination
Try setting the per_page field to 100 instead of the default of 25 records per pull.
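A minimal sketch of that change, assuming the endpoint accepts per_page as a query parameter as the pagination docs describe, and following the question's code in reading the page count from the pagination block (re-read after raising per_page, since the count shrinks as pages grow):

import requests

url = 'https://api.keeptruckin.com/v1/logs?start_date=2019-03-09'
header = {'x-api-key': 'API KEY HERE'}
params = {'per_page': 100}  # up from the default of 25, so roughly 4x fewer requests

# Re-read the pagination info with the larger page size in effect.
r = requests.get(url, headers=header, params=params)
num_pages = r.json()['pagination']['total']

for page in range(1, num_pages + 1):
    r = requests.get(url, headers=header, params={**params, 'page_no': page})
    result = r.json()
    # ... same CSV writing as in the question ...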
Second option: you could likely pull multiple pages at once by using multiple threads/processes and splitting up which pages each one is responsible for.
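And a minimal sketch of the threaded variant, using ThreadPoolExecutor from the standard library; the worker count of 8 is an arbitrary starting point, and rows are written from the main thread so the csv writer is never shared between threads:

import csv
import requests
from concurrent.futures import ThreadPoolExecutor

url = 'https://api.keeptruckin.com/v1/logs?start_date=2019-03-09'
header = {'x-api-key': 'API KEY HERE'}

# Read the page count once, as in the question's code.
num_pages = requests.get(url, headers=header).json()['pagination']['total']

def fetch_page(page):
    # Fetch a single page; raise on HTTP errors so failures surface.
    r = requests.get(url, headers=header, params={'page_no': page})
    r.raise_for_status()
    return r.json()

with ThreadPoolExecutor(max_workers=8) as pool:
    # map() yields results in page order, so the CSV stays sequential.
    pages = pool.map(fetch_page, range(1, num_pages + 1))
    with open('myfile.csv', 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
        for result in pages:
            for log in result['logs']:
                pass  # ... same per-driver / per-event row extraction as the question ...

Note that many APIs rate-limit concurrent clients, so the worker count may need tuning against whatever limits this API enforces.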
Regarding "python - Pulling large amounts of paginated data from a REST API using Python", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/56370305/