This article describes how to handle the "user rate limit exceeded after only a few requests" error from the Google Drive API; it should be a useful reference for anyone hitting the same problem.

Problem description

I am moving files between two Google Drive accounts using the Google Drive API. I have been testing with a folder of 16 files. My code always fails on the sixth file with a "User Rate Limit Exceeded" error.


I know that there is a limit on the number of requests (10/s or 1000/100s), but I have tried the exponential backoff suggested by the Google Drive API documentation to handle this error. Even after waiting 248 seconds it still raises the same error.

Here is an example of what I am doing:

def MoveToFolder(self,files,folder_id,drive):
    total_files = len(files)
    for cont in range(total_files):
        success = False
        n=0
        while not success:
            try:
                drive.auth.service.files().copy(fileId=files[cont]['id'],
                                                body={"parents": [{"kind": "drive#fileLink", "id": folder_id}]}).execute()
                time.sleep(random.randint(0,1000)/1000)
                success = True
            except:
                wait = (2**n) + (random.randint(0,1000)/1000)
                time.sleep(wait)
                success = False
                n += 1
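
As a side note, the bare except in the loop above also retries on failures that have nothing to do with rate limiting. Below is a minimal sketch of how the retry could be restricted to 403 responses, assuming the client library raises googleapiclient.errors.HttpError (the surrounding variables are the ones from the code above):

from googleapiclient.errors import HttpError

try:
    drive.auth.service.files().copy(fileId=files[cont]['id'],
                                    body={"parents": [{"kind": "drive#fileLink", "id": folder_id}]}).execute()
    success = True
except HttpError as e:
    if e.resp.status == 403:                       # rate limited: back off and retry
        wait = (2**n) + (random.randint(0, 1000) / 1000)
        time.sleep(wait)
        n += 1
    else:                                          # anything else is a real error
        raise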

I tried to use "Batching request" to copy the files, but it raises the same errors for 10 files.

def MoveToFolderBatch(self,files,folder_id,drive):
    cont=0
    batch = drive.auth.service.new_batch_http_request()
    for file in files:
        cont+=1
        batch.add(drive.auth.service.files().copy(fileId=file['id'],
                                                 body={"parents": [
                                                     {"kind": "drive#fileLink", "id": folder_id}]}))
    batch.execute()

Does anyone have any tips?

EDIT: According to Google support:

Solution

See the related questions 403 rate limit after only 1 insert per second and 403 rate limit on insert sometimes succeeds.

The key points are:

  • Back off, but do not implement exponential backoff! That will simply kill your application throughput.

  • Instead, you need to proactively throttle your requests to avoid the 403s from occurring. In my testing I've found that the maximum sustainable throughput is about one transaction every 1.5 seconds.

  • Batching makes the problem worse because the batch is unpacked before the 403 is raised, i.e. a batch of 10 is interpreted as 10 rapid transactions, not 1.

Try this algorithm:

delay=0                              // start with no backoff
while (haveFilesInUploadQueue) {
   sleep(delay)                      // backoff
   upload(file)                      // try the upload
   if (403 Rate Limit) {             // if rejected with a rate limit
      delay += 2s                    // add 2s to delay to guarantee the next attempt will be OK
   } else {
      removeFileFromQueue(file)      // if not rejected, mark file as done
      if (delay > 0) {               // if we are currently backing off
         delay -= 0.2s               // reduce the backoff delay
      }
   }
}
// You can play with the 2s and 0.2s to optimise throughput. The key is to do all you can to avoid the 403's
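
Below is a rough Python translation of that pseudocode, assuming the same PyDrive-style drive.auth.service handle as in the question and that the rate limit surfaces as a googleapiclient.errors.HttpError with status 403 (a sketch to illustrate the throttling, not a drop-in replacement):

import time
from googleapiclient.errors import HttpError

def copy_with_throttle(drive, files, folder_id):
    delay = 0.0                                    # start with no backoff
    queue = list(files)                            # files still to be copied
    while queue:
        file = queue[0]
        time.sleep(delay)                          # backoff (0 on the happy path)
        try:
            drive.auth.service.files().copy(
                fileId=file['id'],
                body={"parents": [{"kind": "drive#fileLink", "id": folder_id}]}
            ).execute()
        except HttpError as e:
            if e.resp.status == 403:               # rejected with a rate limit
                delay += 2.0                       # add 2s so the next attempt is OK
                continue                           # keep the file in the queue and retry
            raise                                  # any other error is fatal
        queue.pop(0)                               # success: mark the file as done
        delay = max(0.0, delay - 0.2)              # slowly reduce the backoff again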

One thing to be aware of is that there was (is?) a bug in Drive where an upload sometimes gets rejected with a 403, but, despite sending the 403, Drive goes ahead and creates the file anyway. The symptom is duplicated files. So, to be extra safe, after a 403 you should somehow check whether the file is actually there. The easiest way to do this is to use pre-allocated IDs, or to add your own opaque ID to a property.
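
For the duplicate check, one option is to tag each copy with your own opaque ID in a custom file property and, after a 403, query for that ID before retrying. The sketch below targets the Drive v2 API used in the question; the property key copyToken and the function name are made up for illustration, and whether copy accepts properties in the request body should be verified:

import uuid
from googleapiclient.errors import HttpError

def copy_once(drive, file_id, folder_id):
    token = str(uuid.uuid4())                      # opaque ID identifying this copy attempt
    body = {
        "parents": [{"kind": "drive#fileLink", "id": folder_id}],
        "properties": [{"key": "copyToken", "value": token, "visibility": "PRIVATE"}],
    }
    try:
        return drive.auth.service.files().copy(fileId=file_id, body=body).execute()
    except HttpError as e:
        if e.resp.status != 403:
            raise
        # The 403 may still have created the file -- look for our token before retrying.
        q = ("properties has {{ key='copyToken' and value='{}' "
             "and visibility='PRIVATE' }}").format(token)
        hits = drive.auth.service.files().list(q=q).execute().get('items', [])
        if hits:
            return hits[0]                         # the copy went through despite the 403
        return None                                # genuinely rejected: back off and retry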

This concludes the article on the "user rate limit exceeded after only a few requests" error; we hope the answer above is helpful.
