This article walks through a Java Netty load-testing problem and its resolution; it may be a useful reference for anyone hitting a similar throughput issue.

Problem Description

I wrote a server that accepts connections and bombards them with messages (~100 bytes) using a text protocol, and my implementation can send about 400K msg/sec over loopback with a 3rd-party client. I picked Netty for this task, on SUSE 11 RealTime with JRockit RTS.

But when I started developing my own client based on Netty, I faced a drastic throughput reduction (down from 400K to 1.3K msg/sec). The code of the client is pretty straightforward. Could you please give advice, or show examples of how to write a much more effective client? I actually care more about latency, but I started with throughput tests, and I don't think 1.5K msg/sec on loopback is normal.

P.S. The client's purpose is only to receive messages from the server; it very seldom sends heartbeats.

Client.java

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class Client {

    private static ClientBootstrap bootstrap;
    private static Channel connector;

    public static boolean start() {
        ChannelFactory factory =
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool());
        ExecutionHandler executionHandler = new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

        bootstrap = new ClientBootstrap(factory);

        // Pass the execution handler to the pipeline factory (the snippet as
        // originally posted called new ClientPipelineFactory() with no
        // arguments, which does not compile against the constructor below).
        bootstrap.setPipelineFactory(new ClientPipelineFactory(executionHandler));

        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);
        bootstrap.setOption("receiveBufferSize", 1048576);

        ChannelFuture future = bootstrap
                .connect(new InetSocketAddress("localhost", 9013));
        if (!future.awaitUninterruptibly().isSuccess()) {
            System.out.println("--- CLIENT - Failed to connect to server at " +
                               "localhost:9013.");
            bootstrap.releaseExternalResources();
            return false;
        }

        connector = future.getChannel();
        return connector.isConnected();
    }

    public static void main(String[] args) {
        boolean started = start();
        if (started)
            System.out.println("Client connected to the server");
    }
}

ClientPipelineFactory.java

import static org.jboss.netty.channel.Channels.pipeline;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.execution.ExecutionHandler;

public class ClientPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public ClientPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = pipeline();
        // Split the inbound byte stream into lines of at most 1024 bytes.
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
                1024, Delimiters.lineDelimiter()));
        // Hand decoded messages off to the executor's thread pool.
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }
}

MessageHandler.java

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class MessageHandler extends SimpleChannelHandler {

    // Referenced but not defined in the original snippet.
    private static final long NANOS_IN_SEC = 1000000000L;

    long max_msg = 10000;
    long cur_msg = 0;
    long startTime = System.nanoTime();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        cur_msg++;

        // Print the observed throughput every max_msg messages.
        if (cur_msg == max_msg) {
            System.out.println("Throughput (msg/sec) : " +
                    max_msg * NANOS_IN_SEC / (System.nanoTime() - startTime));
            cur_msg = 0;
            startTime = System.nanoTime();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}

Update. On the server side there is a periodic thread that writes to the accepted client channel, and the channel soon becomes unwritable.

Update N2. Added an ExecutionHandler backed by an OrderedMemoryAwareThreadPoolExecutor to the pipeline, but throughput is still very low (about 4K msg/sec).

Fixed. I put the executor in front of the whole pipeline stack, and that solved it!
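
Based on that description, a minimal sketch of what the reordered getPipeline() might look like (same handlers as in the ClientPipelineFactory above, with the executor moved to the head of the pipeline; this is a reconstruction, not the asker's exact code):

@Override
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline pipeline = pipeline();
    // Executor first: frame decoding and handler logic now both run on the
    // executor's thread pool instead of tying up the single NIO worker thread.
    pipeline.addLast("executor", executionHandler);
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
            1024, Delimiters.lineDelimiter()));
    pipeline.addLast("handler", new MessageHandler());
    return pipeline;
}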

Recommended Answer

If the server is sending messages with a fixed size (~100 bytes), you can set a ReceiveBufferSizePredictor on the client bootstrap; this will optimize reads:

bootstrap.setOption("receiveBufferSizePredictorFactory",
            new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));
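
The three constants are left undefined in the answer; they are the predictor's minimum, initial, and maximum buffer sizes in bytes (AdaptiveReceiveBufferSizePredictorFactory's no-arg constructor uses 64, 1024, and 65536). For ~100-byte messages, hypothetical values might look like:

// Hypothetical values for ~100-byte messages; tune for your own protocol.
private static final int MIN_PACKET_SIZE = 64;      // lower bound for the read buffer
private static final int INITIAL_PACKET_SIZE = 128; // starting buffer size
private static final int MAX_PACKET_SIZE = 8192;    // upper bound, allows batched reads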

Based on the code segment you posted: the client's single NIO worker thread is doing everything in the pipeline, so it will be busy with both decoding and executing the message handlers. You have to add an ExecutionHandler.
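
For reference, the ExecutionHandler in the question's code wraps an OrderedMemoryAwareThreadPoolExecutor; its three constructor arguments control the thread count and how much pending-event memory may queue up before Netty throttles reads (the values below are the ones from the question):

ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(
                16,         // corePoolSize: worker threads executing handlers
                1048576,    // maxChannelMemorySize: queued bytes allowed per channel
                1048576));  // maxTotalMemorySize: queued bytes allowed in total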

You have said that the channel becomes unwritable from the server side, so you may have to adjust the watermark sizes in the server bootstrap. You can periodically monitor the write buffer size (write queue size) and make sure the channel is becoming unwritable because messages cannot be written to the network fast enough. That can be done with a util class like the one below.

// This class must live in the org.jboss.netty.channel.socket.nio package
// because it reads NioSocketChannel.writeBufferSize, which is package-private.
package org.jboss.netty.channel.socket.nio;

import org.jboss.netty.channel.Channel;

public final class NioChannelUtil {
  public static long getWriteTaskQueueCount(Channel channel) {
    NioSocketChannel nioChannel = (NioSocketChannel) channel;
    return nioChannel.writeBufferSize.get();
  }
}
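
Adjusting the watermarks themselves happens on the server bootstrap. A sketch assuming a Netty 3 ServerBootstrap named serverBootstrap (the child. prefix applies the option to each accepted client channel; the sizes are illustrative, not prescriptive):

// Allow up to 10 MiB of queued writes before Channel.isWritable() turns false...
serverBootstrap.setOption("child.writeBufferHighWaterMark", 10 * 1024 * 1024);
// ...and mark the channel writable again once the queue drains below 1 MiB.
serverBootstrap.setOption("child.writeBufferLowWaterMark", 1024 * 1024);

Polling NioChannelUtil.getWriteTaskQueueCount(channel) from a scheduled task then shows whether the write queue is hovering near the high watermark.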

That concludes this article on the Java Netty load-testing problem. Hopefully the recommended answer is helpful.
