Problem Description
I want to use TLS 1.3 for secure communication with HiveMQ. I've configured the HiveMQ Community Edition server's config.xml file to use TLS 1.3 cipher suites and pointed it to a keystore containing a key pair for a 256-bit elliptic curve key (EC, not DSA) on the curve secp256r1 (one of the few curves supported by TLS 1.3). The 256-bit key pair is for the TLS 1.3 cipher suite I want to use: TLS_AES_128_GCM_SHA256. I've also generated a 384-bit elliptic curve key for TLS_AES_256_GCM_SHA384, but I'm focusing on TLS_AES_128_GCM_SHA256, since the AES 256 suite should work once I get AES 128 working. I've already generated certificates for both key pairs and put them in the cacerts file in the Java home folder. I'm still getting a javax.net.ssl.SSLHandshakeException:
javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify
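For reference, the TLS listener section of my config.xml is set up roughly like the sketch below (the keystore path and passwords are placeholders; the cipher-suites block is the part I mention adding in the update further down):

<?xml version="1.0"?>
<hivemq>
    <listeners>
        <tls-tcp-listener>
            <port>8883</port>
            <bind-address>0.0.0.0</bind-address>
            <tls>
                <keystore>
                    <path>/path/to/ec-keystore.jks</path>                       <!-- placeholder path -->
                    <password>keystore-password</password>                      <!-- placeholder -->
                    <private-key-password>key-password</private-key-password>   <!-- placeholder -->
                </keystore>
                <protocols>
                    <protocol>TLSv1.3</protocol>
                </protocols>
                <cipher-suites>
                    <cipher-suite>TLS_AES_128_GCM_SHA256</cipher-suite>
                    <cipher-suite>TLS_AES_256_GCM_SHA384</cipher-suite>
                </cipher-suites>
            </tls>
        </tls-tcp-listener>
    </listeners>
</hivemq>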
I've tried using the TLS 1.2 cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (with the appropriate certificate) and it worked without any issues, so the problem appears to be specific to TLS 1.3. My project is on Java 12.0.1. I noticed that while the HiveMQ server recognized TLSv1.3, it enabled the TLSv1.2 protocol but didn't say it enabled any TLSv1.3 cipher suites. Do I need to manually enable TLSv1.3 cipher suites in HiveMQ somehow? It doesn't look like they are enabled even when the protocol is explicitly specified. I've included a copy of the server console output below, along with the Java code and the exception.
Update: I've specified the client to use TLSv1.3 with the .protocols() method in sslConfig. I've tried manually adding the cipher suite TLS_AES_128_GCM_SHA256 to the config.xml file, but this time I get an SSL exception error. The updated output and exception are below. I suspect that HiveMQ is filtering out the cipher suite I'm trying to use. As a test I created an SSLEngine and called .getEnabledCipherSuites() and .getSupportedCipherSuites(), and they show that the TLS 1.3 cipher suites above are supported both by my JVM and by the TLSv1.3 protocol itself.
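The check was roughly the following (a minimal sketch: it builds a default TLSv1.3 SSLContext with the JVM's default key and trust managers and prints what the JVM reports; the class name is just for illustration):

import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class TlsCipherSuiteCheck {

    public static void main(String[] args) throws Exception {
        // Default TLSv1.3 context with the JVM's default key/trust managers
        SSLContext context = SSLContext.getInstance("TLSv1.3");
        context.init(null, null, null);

        SSLEngine engine = context.createSSLEngine();
        System.out.println("Enabled protocols:       " + Arrays.toString(engine.getEnabledProtocols()));
        System.out.println("Enabled cipher suites:   " + Arrays.toString(engine.getEnabledCipherSuites()));
        System.out.println("Supported cipher suites: " + Arrays.toString(engine.getSupportedCipherSuites()));
    }
}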
HiveMQ Server Console Output (from the run.sh file, with DEBUG enabled in logback.xml):
2019-07-06 12:06:42,394 INFO - Starting HiveMQ Community Edition Server
2019-07-06 12:06:42,398 INFO - HiveMQ version: 2019.1
2019-07-06 12:06:42,398 INFO - HiveMQ home directory: /Users/chigozieasikaburu/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1
2019-07-06 12:06:42,508 INFO - Log Configuration was overridden by /Users/someuser/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1/conf/logback.xml
2019-07-06 12:06:42,619 DEBUG - Reading configuration file /Users/someuser/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1/conf/config.xml
2019-07-06 12:06:42,838 DEBUG - Adding TCP Listener with TLS of type TlsTcpListener on bind address 0.0.0.0 and port 8883.
2019-07-06 12:06:42,839 DEBUG - Setting retained messages enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting wildcard subscriptions enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting subscription identifier enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting shared subscriptions enabled to true
2019-07-06 12:06:42,839 DEBUG - Setting maximum qos to EXACTLY_ONCE
2019-07-06 12:06:42,840 DEBUG - Setting topic alias enabled to true
2019-07-06 12:06:42,840 DEBUG - Setting topic alias maximum per client to 5
2019-07-06 12:06:42,840 DEBUG - Setting the number of max queued messages per client to 1000 entries
2019-07-06 12:06:42,841 DEBUG - Setting queued messages strategy for each client to DISCARD
2019-07-06 12:06:42,841 DEBUG - Setting the expiry interval for client sessions to 4294967295 seconds
2019-07-06 12:06:42,841 DEBUG - Setting the expiry interval for publish messages to 4294967296 seconds
2019-07-06 12:06:42,841 DEBUG - Setting the server receive maximum to 10
2019-07-06 12:06:42,841 DEBUG - Setting keep alive maximum to 65535 seconds
2019-07-06 12:06:42,841 DEBUG - Setting keep alive allow zero to true
2019-07-06 12:06:42,842 DEBUG - Setting the maximum packet size for mqtt messages 268435460 bytes
2019-07-06 12:06:42,842 DEBUG - Setting global maximum allowed connections to -1
2019-07-06 12:06:42,842 DEBUG - Setting the maximum client id length to 65535
2019-07-06 12:06:42,842 DEBUG - Setting the timeout for disconnecting idle tcp connections before a connect message was received to 10000 milliseconds
2019-07-06 12:06:42,842 DEBUG - Throttling the global incoming traffic limit 0 bytes/second
2019-07-06 12:06:42,842 DEBUG - Setting the maximum topic length to 65535
2019-07-06 12:06:42,843 DEBUG - Setting allow server assigned client identifier to true
2019-07-06 12:06:42,843 DEBUG - Setting validate UTF-8 to true
2019-07-06 12:06:42,843 DEBUG - Setting payload format validation to false
2019-07-06 12:06:42,843 DEBUG - Setting allow-problem-information to true
2019-07-06 12:06:42,843 DEBUG - Setting anonymous usage statistics enabled to false
2019-07-06 12:06:42,845 INFO - This HiveMQ ID is JAzWT
2019-07-06 12:06:43,237 DEBUG - Using disk-based Publish Payload Persistence
2019-07-06 12:06:43,259 DEBUG - 1024.00 MB allocated for qos 0 inflight messages
2019-07-06 12:06:45,268 DEBUG - Initializing payload reference count and queue sizes for client_queue persistence.
2019-07-06 12:06:45,690 DEBUG - Diagnostic mode is disabled
2019-07-06 12:06:46,276 DEBUG - Throttling incoming traffic to 0 B/s
2019-07-06 12:06:46,277 DEBUG - Throttling outgoing traffic to 0 B/s
2019-07-06 12:06:46,321 DEBUG - Set extension executor thread pool size to 4
2019-07-06 12:06:46,321 DEBUG - Set extension executor thread pool keep-alive to 30 seconds
2019-07-06 12:06:46,336 DEBUG - Building initial topic tree
2019-07-06 12:06:46,395 DEBUG - Started JMX Metrics Reporting.
2019-07-06 12:06:46,491 INFO - Starting HiveMQ extension system.
2019-07-06 12:06:46,536 DEBUG - Starting extension with id "hivemq-file-rbac-extension" at /Users/someuser/git/IoT-HiveMqtt-Community-Edition/build/zip/hivemq-ce-2019.1/extensions/hivemq-file-rbac-extension
2019-07-06 12:06:46,558 INFO - Starting File RBAC extension.
2019-07-06 12:06:46,795 INFO - Extension "File Role Based Access Control Extension" version 4.0.0 started successfully.
2019-07-06 12:06:46,818 INFO - Enabled protocols for TCP Listener with TLS at address 0.0.0.0 and port 8883: [TLSv1.3]
2019-07-06 12:06:46,819 INFO - Enabled cipher suites for TCP Listener with TLS at address 0.0.0.0 and port 8883: []
2019-07-06 12:06:46,823 WARN - Unknown cipher suites for TCP Listener with TLS at address 0.0.0.0 and port 8883: [TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384]
2019-07-06 12:06:46,827 INFO - Starting TLS TCP listener on address 0.0.0.0 and port 8883
2019-07-06 12:06:46,881 INFO - Started TCP Listener with TLS on address 0.0.0.0 and on port 8883
2019-07-06 12:06:46,882 INFO - Started HiveMQ in 4500ms
2019-07-06 12:10:32,396 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:10:38,967 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:23:29,721 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:23:35,990 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:24:17,436 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
2019-07-06 12:24:29,160 DEBUG - SSL Handshake failed for client with IP UNKNOWN: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
Java Code:
Mqtt5BlockingClient subscriber = Mqtt5Client.builder()
        .identifier(UUID.randomUUID().toString())   // unique identifier of the MQTT client, randomly generated here
        .serverHost("localhost")                    // host name or IP address of the MQTT server; localhost for testing (also the default)
        .serverPort(8883)                           // port of the server
        .addConnectedListener(context -> ClientConnectionRetreiver.printConnected("Subscriber1"))       // prints a message when the client connects
        .addDisconnectedListener(context -> ClientConnectionRetreiver.printDisconnected("Subscriber1")) // prints a message when the client disconnects
        .sslConfig()
            .cipherSuites(Arrays.asList("TLS_AES_128_GCM_SHA256"))
            .applySslConfig()
        .buildBlocking();                           // builds the blocking client

subscriber.connectWith()                            // connects the client
        .simpleAuth()
            .username("user1")
            .password("somepassword".getBytes())
            .applySimpleAuth()
        .send();
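After the update mentioned above, the sslConfig block looks roughly like this (only the SSL part of the builder is shown; everything else is unchanged):

        .sslConfig()
            .protocols(Arrays.asList("TLSv1.3"))                    // restrict the client to TLS 1.3
            .cipherSuites(Arrays.asList("TLS_AES_128_GCM_SHA256"))  // the TLS 1.3 suite I want to use
            .applySslConfig()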
Exception (using the SSL debugging flag -Djavax.net.debug=ssl):
SubThread1 is running.
javax.net.ssl|DEBUG|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.379 EDT|SSLCipher.java:463|jdk.tls.keyLimits: entry = AES/GCM/NoPadding KeyUpdate 2^37. AES/GCM/NOPADDING:KEYUPDATE = 137438953472
javax.net.ssl|ALL|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.761 EDT|SSLEngineImpl.java:752|Closing outbound of SSLEngine
javax.net.ssl|ALL|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.762 EDT|SSLEngineImpl.java:724|Closing inbound of SSLEngine
javax.net.ssl|ERROR|0F|nioEventLoopGroup-2-1|2019-07-05 15:29:47.765 EDT|TransportContext.java:312|Fatal (INTERNAL_ERROR): closing inbound before receiving peer's close_notify (
"throwable" : {
javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:307)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:254)
at java.base/sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:733)
at io.netty.handler.ssl.SslHandler.setHandshakeFailure(SslHandler.java:1565)
at io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:1049)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1429)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:947)
at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:826)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:835)}
)
Subscriber1 disconnected.
Exception in thread "SubThread1" com.hivemq.client.mqtt.exceptions.ConnectionClosedException: Server closed connection without DISCONNECT.
at com.hivemq.client.internal.mqtt.MqttBlockingClient.connect(MqttBlockingClient.java:91)
at com.hivemq.client.internal.mqtt.message.connect.MqttConnectBuilder$Send.send(MqttConnectBuilder.java:196)
at com.main.SubThread.run(SubThread.java:90)
at java.base/java.lang.Thread.run(Thread.java:835)
Answer
The issue was due to bug #27 in HiveMQ Client 1.1.0, caused by incorrect SSL context handling for TLS 1.3. It was fixed with #70 in the HiveMQ Client.