Exception handling for parallel fetch requests


Problem description

I have the following code:

    try:
        responses = yield [httpClient.fetch(url) for url in urls]
    except (HTTPError, IOError, ValueError) as e:
        print("caught")

I can't guarantee that the given URLs are valid. I want to be able to use the exception to validate the URLs. How can I tell which URL(s) failed from the caught exception?

Also, if one fetch fails (say the first), it appears to abort the rest of the fetches. Is there a way to prevent this? Or is there a better way to check that a URL can be fetched before actually fetching it? Is there a better pattern for this? Basically, I want to fetch all the URLs in parallel and know which ones may have failed.
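The behavior described above, where one failing fetch raises out of the whole batch and the successful responses are lost, can be reproduced with a small standard-library `asyncio` analogue (a sketch only: `fake_fetch` is a hypothetical stand-in, not Tornado's API):

```python
import asyncio

async def fake_fetch(url):
    # Hypothetical stand-in for a real fetch: reject non-http URLs.
    if not url.startswith("http"):
        raise ValueError(f"invalid url: {url}")
    return f"response for {url}"

async def main():
    urls = ["ftp://bad.example", "http://ok.example"]
    try:
        # Like `yield [fetch(url) for url in urls]`: the first exception
        # propagates out, and the successful responses are discarded.
        return await asyncio.gather(*(fake_fetch(u) for u in urls))
    except ValueError as e:
        return f"caught: {e}"

result = asyncio.run(main())
print(result)
```

The exception itself does not tell you which of the other URLs succeeded, which is exactly the problem the answer below addresses.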

Recommended answer

The simplest solution is to pass raise_error=False to fetch(). This always gives you a response object, and you can then inspect response.error or call response.rethrow():

    responses = yield [httpClient.fetch(url, raise_error=False) for url in urls]
    for url, resp in zip(urls, responses):
        try:
            resp.rethrow()
            print("succeeded")
        except (HTTPError, IOError, ValueError) as e:
            print("caught")
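The same pattern, running every task to completion and inspecting per-URL failures afterwards, can be sketched with standard-library `asyncio`, where `return_exceptions=True` plays the role of `raise_error=False` (again an illustrative analogue with a hypothetical `fake_fetch`, not the Tornado API):

```python
import asyncio

async def fake_fetch(url):
    # Hypothetical stand-in for a real fetch: reject non-http URLs.
    if not url.startswith("http"):
        raise ValueError(f"invalid url: {url}")
    return f"response for {url}"

async def main():
    urls = ["http://a.example", "ftp://bad.example", "http://c.example"]
    # return_exceptions=True mirrors raise_error=False: a failing task
    # yields its exception object instead of aborting the whole batch.
    results = await asyncio.gather(*(fake_fetch(u) for u in urls),
                                   return_exceptions=True)
    status = {}
    for url, result in zip(urls, results):
        # An Exception instance here marks the URL that failed.
        status[url] = "caught" if isinstance(result, Exception) else "succeeded"
    return status

outcome = asyncio.run(main())
print(outcome)
```

Because results come back in the same order as `urls`, zipping them pairs each URL with its own outcome, which answers the original question of knowing which URL(s) failed.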
