Problem description
I have set the "receiveBufferSize" option to 1024, but for some reason I'm still getting only 768 bytes in messageReceived. The header of the data indicates that the size of the data being sent is 1004.
Below is the initialization code for the server:
public static void main(String[] args) throws Exception {
    ConnectionlessBootstrap b = new ConnectionlessBootstrap(new NioDatagramChannelFactory());

    // Options for a new channel
    b.setOption("receiveBufferSize", 1024);
    System.out.println(b.getOptions());

    b.setPipelineFactory(new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(
                    new MyUDPPacketDecoder(),
                    new StdOutPrintHandler());
        }
    });

    b.bind(new InetSocketAddress(myPort));
}
Recommended answer
You need to set an additional option: receiveBufferSizePredictorFactory.
In order to know how much space to allocate to hold an incoming message, Netty uses a predictor that estimates the number of bytes to allocate.
There are two types of receive buffer size predictors, adaptive and fixed-size. The predictors are created by a predictor factory, which creates one for each channel created by the bootstrap.
If no predictor factory is set for the bootstrap (or no predictor is set manually for the channel), the channel uses the default 768-byte fixed-size predictor. All messages bigger than 768 bytes are cut down to that size.
You can add:
b.setOption("receiveBufferSizePredictorFactory", new FixedReceiveBufferSizePredictorFactory(1024));
You can read about the predictors and their factories in the Netty documentation:
ReceiveBufferSizePredictorFactory interface