Question
I have a list of file URLs which are download links, and I have written Python code to download the files to my computer. The problem is that there are about 500 files in the list, and Chrome becomes unresponsive after downloading about 50 of them. My original goal was to upload all of the downloaded files to a bucket in S3. Is there a way to send the files directly to S3? Here is what I have written so far:
import requests
from itertools import chain
import webbrowser

url = "<my_url>"
username = "<my_username>"
password = "<my_password>"
headers = {"Content-Type": "application/xml", "Accept": "*/*"}

response = requests.get(url, auth=(username, password), headers=headers)
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.json())
    exit()

data = response.json()
values = list(chain.from_iterable(data.values()))
links = [lis['download_link'] for lis in values]
for item in links:
    # Opens each link in the default browser (Chrome), which triggers the downloads
    webbrowser.open(item)
Recommended Answer
It's quite simple using Python 3 and boto3 (the AWS SDK), e.g.:
import boto3

s3 = boto3.client('s3')
with open('filename.txt', 'rb') as data:
    s3.upload_fileobj(data, 'bucketname', 'filenameintos3.txt')
For more information, you can read the boto3 documentation here: http://boto3.readthedocs.io/en/latest/guide/s3-example-creating-buckets.html
Enjoy!