I have a helper routine that does threaded downloads from S3. Fairly often (on about 1% of requests), I get a log message about a NoHttpResponseException, and a little while later reading from the S3ObjectInputStream fails with a SocketTimeoutException.

Am I doing something wrong, or is it just my router/Internet? Or is this expected from S3? I haven't noticed problems elsewhere.

  public void
fastRead(final String key, Path path) throws StorageException
    {
        final int pieceSize = 1<<20;
        final int threadCount = 8;

        try (FileChannel channel = (FileChannel) Files.newByteChannel( path, WRITE, CREATE, TRUNCATE_EXISTING ))
        {
            final long size = s3.getObjectMetadata(bucket, key).getContentLength();
            final long pieceCount = (size - 1) / pieceSize + 1;

            ThreadPool pool = new ThreadPool (threadCount);
            final AtomicInteger progress = new AtomicInteger();

            for(long i = 0; i < size; i += pieceSize) // long, to avoid int overflow for objects over 2 GiB
            {
                final long start = i;
                final long end = Math.min(i + pieceSize, size);

                pool.submit(() ->
                {
                    boolean retry;
                    do
                    {
                        retry = false;
                        try
                        {
                            GetObjectRequest request = new GetObjectRequest(bucket, key);
                            request.setRange(start, end - 1);
                            S3Object piece = s3.getObject(request);
                            ByteBuffer buffer = ByteBuffer.allocate ((int)(end - start));
                            try(InputStream stream = piece.getObjectContent())
                            {
                                IOUtils.readFully( stream, buffer.array() );
                            }
                            channel.write( buffer, start );
                            double percent = (double) progress.incrementAndGet() / pieceCount * 100.0;
                            System.err.printf("%.1f%%\n", percent);
                        }
                        catch(java.net.SocketTimeoutException | java.net.SocketException e)
                        {
                            System.err.println("Read timed out. Retrying...");
                            retry = true;
                        }
                    }
                    while (retry);

                });
            }

            pool.<IOException>await();
        }
        catch(AmazonClientException | IOException | InterruptedException e)
        {
            throw new StorageException (e);
        }
    }

2014-05-28 08:49:58 INFO com.amazonaws.http.AmazonHttpClient executeHelper Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:66)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:713)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:518)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:385)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:233)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3569)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1130)
at com.syncwords.files.S3Storage.lambda$fastRead$0(S3Storage.java:123)
at com.syncwords.files.S3Storage$$Lambda$3/1397088232.run(Unknown Source)
at net.almson.util.ThreadPool.lambda$submit$8(ThreadPool.java:61)
at net.almson.util.ThreadPool$$Lambda$4/1980698753.call(Unknown Source)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)

Best Answer

UPDATE: The AWS SDK has since been updated in response to an issue I filed on GitHub. I'm not sure how much the situation has changed. The second half of this answer (criticizing getObject) may (hopefully) now be wrong.

S3 is designed to fail, and it does fail, often.

Fortunately, the AWS SDK for Java has built-in facilities for retrying requests. Unfortunately, they don't cover the case of SocketExceptions thrown while downloading S3 objects (they do work when uploading and performing other operations). That's why code like the one in the question is necessary (see below).

When the mechanism works as intended, you will still see messages in your log. You may choose to hide them by filtering out INFO log events from com.amazonaws.http.AmazonHttpClient. (The AWS SDK uses Apache Commons Logging.)
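For example, if Commons Logging is routed to Log4j 1.x (one common setup; adjust for whichever backend you actually use), a one-line logger entry hides those messages:

```properties
# log4j.properties -- raise the threshold for the AWS HTTP client logger
# so its INFO-level "Unable to execute HTTP request" retry notices are suppressed
log4j.logger.com.amazonaws.http.AmazonHttpClient=WARN
```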

Depending on your network connection and the health of Amazon's servers, the retry mechanism may still fail. As pointed out by lvlv, the way to configure the relevant parameters is through ClientConfiguration. The parameter I suggest changing is the number of retries, which defaults to 3. Other things you may try are increasing or decreasing the connection and socket timeouts (the default of 50s is not only long enough, it is probably too long given that you are going to time out anyway), and enabling TCP KeepAlive (off by default).

ClientConfiguration cc = new ClientConfiguration()
    .withMaxErrorRetry (10)
    .withConnectionTimeout (10_000)
    .withSocketTimeout (10_000)
    .withTcpKeepAlive (true);
AmazonS3 s3Client = new AmazonS3Client (credentials, cc);

You can even override the retry mechanism by setting a RetryPolicy (again, in ClientConfiguration). Its most interesting element is the RetryCondition it uses by default.

See the SDKDefaultRetryCondition javadoc and source.
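The default backoff between retries is, roughly, exponential with a cap and randomized jitter. As a self-contained sketch of that idea (illustrative names, not the SDK's actual API):

```java
import java.util.concurrent.ThreadLocalRandom;

public class Backoff {
    /**
     * Full-jitter exponential backoff: a random delay drawn from
     * [0, min(cap, base * 2^attempt)] milliseconds.
     */
    public static long delayMillis(int attempt, long baseMillis, long capMillis) {
        // Clamp the shift so the doubling can't overflow a long
        long ceiling = Math.min(capMillis, baseMillis << Math.min(attempt, 30));
        return ThreadLocalRandom.current().nextLong(ceiling + 1);
    }
}
```

The jitter matters when many threads (like the 8 in the question) fail at once: without it they all retry in lockstep and hit the server together again.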

Half-assed retry facilities hidden elsewhere in the SDK

What the built-in mechanism (which is used across the entire AWS SDK) doesn't handle is reading S3 object data.

If you call AmazonS3.getObject(GetObjectRequest getObjectRequest, File destinationFile), AmazonS3Client uses its own retry mechanism. That mechanism lives inside ServiceUtils.retryableDownloadS3ObjectToFile (source), which uses suboptimal hard-wired retry behavior (it retries only once, and never on SocketException!). All of the code in ServiceUtils seems poorly engineered (issue).

I use code similar to:
  public void
read(String key, Path path) throws StorageException
    {
        GetObjectRequest request = new GetObjectRequest (bucket, key);

        for (int retries = 5; retries > 0; retries--)
        try (S3Object s3Object = s3.getObject (request))
        {
            if (s3Object == null)
                return; // occurs if we set GetObjectRequest constraints that aren't satisfied

            try (OutputStream outputStream = Files.newOutputStream (path, WRITE, CREATE, TRUNCATE_EXISTING))
            {
                byte[] buffer = new byte [16_384];
                int bytesRead;
                while ((bytesRead = s3Object.getObjectContent().read (buffer)) > -1) {
                    outputStream.write (buffer, 0, bytesRead);
                }
            }
            catch (SocketException | SocketTimeoutException e)
            {
                // We retry exceptions that happen during the actual download
                // Errors that happen earlier are retried by AmazonHttpClient
                try { Thread.sleep (1000); } catch (InterruptedException i) { throw new StorageException (i); }
                log.log (Level.INFO, "Retrying...", e);
                continue;
            }
            catch (IOException e)
            {
                // There must have been a filesystem problem
                // We call `abort` to save bandwidth
                s3Object.getObjectContent().abort();
                throw new StorageException (e);
            }

            return; // Success
        }
        catch (AmazonClientException | IOException e)
        {
            // Either we couldn't connect to S3
            // or AmazonHttpClient ran out of retries
            // or s3Object.close() threw an exception
            throw new StorageException (e);
        }

        throw new StorageException ("Ran out of retries.");
    }
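The retry-on-timeout loop above can also be factored into a generic helper, which keeps the download logic free of retry bookkeeping. A minimal self-contained sketch (no AWS dependencies; `Retry` and `withRetries` are illustrative names I made up, not SDK API, and for brevity it retries only on SocketTimeoutException rather than the wider set of exceptions caught above):

```java
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

public class Retry {
    /**
     * Runs task, retrying on SocketTimeoutException with a fixed pause
     * between attempts. Rethrows the last timeout once attempts run out;
     * any other exception propagates immediately without a retry.
     */
    public static <T> T withRetries(int maxAttempts, long sleepMillis, Callable<T> task)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return task.call();
            } catch (SocketTimeoutException e) {
                if (attempt >= maxAttempts)
                    throw e;
                Thread.sleep(sleepMillis);
            }
        }
    }
}
```

With a helper like this, the body of the Callable only has to open the stream and copy bytes; whether to truncate and rewrite the destination file on each attempt (as the code above does via TRUNCATE_EXISTING) remains the caller's decision.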
