Problem description
I'm trying to build retrying into an HttpClient DelegatingHandler so that responses such as 503 Service Unavailable and timeouts are treated as transient failures and retried automatically.
I was starting from the code at http://blog.devscrum.net/2014/05/building-a-transient-retry-handler-for-the-net-httpclient/, which works for the 503 Service Unavailable case but does not treat timeouts as transient failures. Still, I like the general idea of using the Microsoft Transient Fault Handling Block to handle the retry logic.
Here is my current code. It uses a custom Exception subclass:
public class HttpRequestExceptionWithStatus : HttpRequestException {
    public HttpRequestExceptionWithStatus(string message) : base(message)
    {
    }

    public HttpRequestExceptionWithStatus(string message, Exception inner) : base(message, inner)
    {
    }

    public HttpStatusCode StatusCode { get; set; }

    public int CurrentRetryCount { get; set; }
}
And here is the transient fault detector class:
public class HttpTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy {
    public bool IsTransient(Exception ex)
    {
        var cex = ex as HttpRequestExceptionWithStatus;
        var isTransient = cex != null && (cex.StatusCode == HttpStatusCode.ServiceUnavailable
            || cex.StatusCode == HttpStatusCode.BadGateway
            || cex.StatusCode == HttpStatusCode.GatewayTimeout);
        return isTransient;
    }
}
The idea is that timeouts should be turned into ServiceUnavailable exceptions, as if the server had returned that HTTP error code. Here is the DelegatingHandler subclass:
public class RetryDelegatingHandler : DelegatingHandler {
    public const int RetryCount = 3;

    public RetryPolicy RetryPolicy { get; set; }

    public RetryDelegatingHandler(HttpMessageHandler innerHandler) : base(innerHandler)
    {
        RetryPolicy = new RetryPolicy(new HttpTransientErrorDetectionStrategy(),
            new ExponentialBackoff(retryCount: RetryCount,
                minBackoff: TimeSpan.FromSeconds(1), maxBackoff: TimeSpan.FromSeconds(10),
                deltaBackoff: TimeSpan.FromSeconds(5)));
    }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var responseMessage = (HttpResponseMessage)null;
        var currentRetryCount = 0;

        // Track the retry count reported by the Transient Fault Handling Block.
        EventHandler<RetryingEventArgs> handler = (sender, e) => currentRetryCount = e.CurrentRetryCount;
        RetryPolicy.Retrying += handler;
        try {
            await RetryPolicy.ExecuteAsync(async () => {
                try {
                    App.Log("Sending (" + currentRetryCount + ") " + request.RequestUri +
                            " content " + await request.Content.ReadAsStringAsync());
                    responseMessage = await base.SendAsync(request, cancellationToken);
                } catch (Exception ex) {
                    // Treat cancellation (HttpClient timeout) and dropped connections as if the
                    // server had returned 503, so the detection strategy sees them as transient.
                    var wex = ex as WebException;
                    if (cancellationToken.IsCancellationRequested || (wex != null && wex.Status == WebExceptionStatus.UnknownError)) {
                        App.Log("Timed out waiting for " + request.RequestUri + ", throwing exception.");
                        throw new HttpRequestExceptionWithStatus("Timed out or disconnected", ex) {
                            StatusCode = HttpStatusCode.ServiceUnavailable,
                            CurrentRetryCount = currentRetryCount,
                        };
                    }
                    App.Log("ERROR awaiting send of " + request.RequestUri + "\n- " + ex.Message + ex.StackTrace);
                    throw;
                }

                // Treat any 5xx response as transient by throwing so the policy retries it.
                if ((int)responseMessage.StatusCode >= 500) {
                    throw new HttpRequestExceptionWithStatus("Server error " + responseMessage.StatusCode) {
                        StatusCode = responseMessage.StatusCode,
                        CurrentRetryCount = currentRetryCount,
                    };
                }
                return responseMessage;
            }, cancellationToken);
            return responseMessage;
        } catch (HttpRequestExceptionWithStatus ex) {
            App.Log("Caught HREWS outside Retry section: " + ex.Message + ex.StackTrace);
            if (ex.CurrentRetryCount >= RetryCount) {
                App.Log(ex.Message);
            }
            if (responseMessage != null) return responseMessage;
            throw;
        } catch (Exception ex) {
            App.Log(ex.Message + ex.StackTrace);
            if (responseMessage != null) return responseMessage;
            throw;
        } finally {
            RetryPolicy.Retrying -= handler;
        }
    }
}
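For context, this is roughly how the handler gets plugged into HttpClient (the timeout value and URL below are just illustrative, not my real setup):

// Illustrative wiring only: RetryDelegatingHandler wraps the real HttpClientHandler,
// so every request made through this client goes through the retry logic above.
var client = new HttpClient(new RetryDelegatingHandler(new HttpClientHandler()))
{
    // This single timeout covers the WHOLE retry sequence, which is the root
    // of the problem described below.
    Timeout = TimeSpan.FromSeconds(30),
};
var response = await client.GetAsync("https://example.com/api/items"); // hypothetical endpoint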
The problem is that once the first timeout happens, the subsequent retries immediately time out because everything shares a cancellation token. But if I make a new CancellationTokenSource and use its token, no timeouts ever happen, because I don't have access to the original HttpClient's cancellation token source.
I thought about subclassing HttpClient and overriding SendAsync, but the main overload of it is not virtual. I could potentially just make a new function not called SendAsync, but then it's not a drop-in replacement and I'd have to replace all the calls to things like GetAsync.
Any other ideas?
Answer
You may just want to subclass (or wrap) HttpClient; it seems cleaner to me to retry the requests at the HttpClient level rather than at the handler level. If this is not palatable, then you'll need to split up the "timeout" values.
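For the first option, here's a rough sketch of what a wrapping client could look like. The class name, the GetAsync-only surface, and the TaskCanceledException-to-503 conversion are my own illustration, reusing your RetryPolicy setup and HttpRequestExceptionWithStatus; it isn't a drop-in replacement for HttpClient:

public class RetryingHttpClient // illustrative name
{
    private readonly HttpClient _client = new HttpClient();
    private readonly RetryPolicy _retryPolicy = new RetryPolicy(
        new HttpTransientErrorDetectionStrategy(),
        new ExponentialBackoff(retryCount: 3,
            minBackoff: TimeSpan.FromSeconds(1),
            maxBackoff: TimeSpan.FromSeconds(10),
            deltaBackoff: TimeSpan.FromSeconds(5)));

    public Task<HttpResponseMessage> GetAsync(string url)
    {
        return _retryPolicy.ExecuteAsync(async () => {
            try {
                // Each attempt is a brand-new GetAsync call, so every attempt gets
                // the full HttpClient.Timeout instead of sharing one already-cancelled
                // token across retries.
                var response = await _client.GetAsync(url);
                if ((int)response.StatusCode >= 500) {
                    throw new HttpRequestExceptionWithStatus("Server error " + response.StatusCode) {
                        StatusCode = response.StatusCode,
                    };
                }
                return response;
            } catch (TaskCanceledException ex) {
                // HttpClient surfaces its own timeout as a TaskCanceledException;
                // convert it so the detection strategy treats it as transient.
                throw new HttpRequestExceptionWithStatus("Timed out", ex) {
                    StatusCode = HttpStatusCode.ServiceUnavailable,
                };
            }
        });
    }
}

The trade-off is exactly the one you mentioned: you'd have to add a similar wrapper method for each HttpClient member you use (PostAsync, etc.).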
Since your handler is actually making multiple requests within a single call, the HttpClient.Timeout applies to the entire process, including retries. You could add another timeout value to your handler as a per-request timeout, and use it with a linked cancellation token source:
public class RetryDelegatingHandler : DelegatingHandler {
    public TimeSpan PerRequestTimeout { get; set; }

    ...

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Link to the caller's token so external cancellation still works,
        // then add the per-attempt timeout on top of it.
        var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
        cts.CancelAfter(PerRequestTimeout);
        var token = cts.Token;
        ...
        responseMessage = await base.SendAsync(request, token);
        ...
    }
}
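Then the two timeouts are configured independently; something along these lines (the values are just examples):

// Example wiring: PerRequestTimeout bounds each individual attempt via the
// linked token, while HttpClient.Timeout bounds all attempts plus back-off.
var handler = new RetryDelegatingHandler(new HttpClientHandler())
{
    PerRequestTimeout = TimeSpan.FromSeconds(10),
};
var client = new HttpClient(handler)
{
    Timeout = TimeSpan.FromSeconds(60),
};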