Problem description
I'm currently investigating a memory problem in my broker network. According to JConsole, the ActiveMQ.Advisory.TempQueue is taking up 99% of the configured memory when the broker starts to block messages.
Details about the configuration
Default config for the most part. One open stomp+nio connector, one open openwire connector. All brokers form a hypercube (one one-way connection to every other broker, which is easier to auto-generate). No flow control.
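For reference, a minimal sketch of roughly that topology configured programmatically with the ActiveMQ Java API; the broker name, ports, and peer URIs are placeholders, and the actual setup may well live in activemq.xml instead:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.network.NetworkConnector;

public class HypercubeBrokerSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker-01"); // placeholder name

        // One open stomp+nio connector and one open openwire connector.
        broker.addConnector("stomp+nio://0.0.0.0:61612");
        broker.addConnector("tcp://0.0.0.0:61616");

        // One one-way (non-duplex) network connector to every other broker
        // in the cube; peer URIs are placeholders.
        String[] peers = { "tcp://srv007211:61616", "tcp://srv007212:61616" };
        for (String peer : peers) {
            NetworkConnector nc = broker.addNetworkConnector("static:(" + peer + ")");
            nc.setDuplex(false);
        }

        // No producer flow control.
        PolicyEntry noFlowControl = new PolicyEntry();
        noFlowControl.setProducerFlowControl(false);
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(noFlowControl);
        broker.setDestinationPolicy(policyMap);

        broker.start();
        broker.waitUntilStopped();
    }
}
```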
Problem details
The web console shows something like 1974234 enqueued and 45345 dequeued messages across 30 consumers (6 brokers, one consumer, and the rest clients that use the Java connector). As far as I know the dequeue count should not be much smaller than enqueued * consumers, so in my case a large number of advisories is not being consumed and starts to fill my temp message space (currently I have several GB configured as temp space).
Since no client actively uses temp queues I find this very strange. After taking a look at the temp queue I'm even more confused. Most of the messages look like this (msg.toString):
ActiveMQMessage {commandId = 0, responseRequired = false, messageId = ID:srv007210-36808-1318839718378-1:1:0:0:203650, originalDestination = null, originalTransactionId = null, producerId = ID:srv007210-36808-1318839718378-1:1:0:0, destination = topic://ActiveMQ.Advisory.TempQueue, transactionId = null, expiration = 0, timestamp = 0, arrival = 0, brokerInTime = 1318840153501, brokerOutTime = 1318840153501, correlationId = null, replyTo = null, persistent = false, type = Advisory, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence@45290155, dataStructure = DestinationInfo {commandId = 0, responseRequired = false, connectionId = ID:srv007210-36808-1318839718378-2:2, destination = temp-queue://ID:srv007211-47019-1318835590753-11:9:1, operationType = 1, timeout = 0, brokerPath = null}, redeliveryCounter = 0, size = 0, properties = {originBrokerName=broker.coremq-behaviortracking-675-mq-01-master, originBrokerId=ID:srv007210-36808-1318839718378-0:1, originBrokerURL=stomp://srv007210:61612}, readOnlyProperties = true, readOnlyBody = true, droppable = false}
After seeing these messages I have a few questions:
- Do I understand correctly that the source of these messages is a stomp connection?
- If yes, how can a stomp connection create temp queues?
- Is there a simple reason why the advisories are not consumed?
For now I have sort of postponed the problem by deactivating the bridgeTempDestinations property on the network connectors (a configuration sketch follows the questions below). This way the messages are not spread and they fill the temp space much more slowly. If I cannot fix the source of these messages I would at least like to stop them from filling the store:
- Can I discard these unconsumed messages after a certain amount of time?
- What consequences would that have?
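A sketch of the bridgeTempDestinations workaround mentioned above, assuming the broker is configured programmatically (the peer URI is a placeholder); in activemq.xml the same property is set as an attribute on the networkConnector element:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class DisableTempDestinationBridging {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        // Placeholder peer URI; repeat for every network connector in the cube.
        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://srv007211:61616)");

        // Do not forward temp destinations (and their advisories) across the bridge;
        // corresponds to bridgeTempDestinations="false" on <networkConnector/>.
        nc.setBridgeTempDestinations(false);

        broker.start();
        broker.waitUntilStopped();
    }
}
```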
UPDATE: I monitored my cluster some more and found out that the messages are consumed. They are enqueued and dispatched, but the consumers (the other cluster nodes as well as the Java consumers that use the ActiveMQ lib) fail to acknowledge the messages. So they stay in the dispatched messages queue and this queue grows and grows.
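For contrast, a minimal sketch of a Java consumer that does acknowledge what it is dispatched (the broker URL is a placeholder); with AUTO_ACKNOWLEDGE every advisory returned by receive() is acked immediately, so nothing lingers in the dispatched-messages queue for that subscription. This only illustrates an acknowledging consumer; it does not explain why the bridge consumers fail to ack.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AdvisoryDrainSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://srv007210:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // AUTO_ACKNOWLEDGE: each advisory is acknowledged as soon as receive() returns.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic advisories = session.createTopic("ActiveMQ.Advisory.TempQueue");
        MessageConsumer consumer = session.createConsumer(advisories);

        Message msg;
        while ((msg = consumer.receive(1000)) != null) {
            System.out.println(msg);
        }

        consumer.close();
        session.close();
        connection.close();
    }
}
```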
Recommended answer
This is an old thread but in case somebody runs into it having the same problem, you might want to check out this post: http://forum.spring.io/forum/spring-projects/integration/111989-jms-outbound-gateway-temporary-queues-never-deleted
The problem in that link sounds similar, i.e. temp queues producing a large amount of advisory messages. In my case, we were using temp queues to implement synchronous request/response messaging, but the volume of advisory messages caused ActiveMQ to spend most of its time in GC and eventually throw a GC Overhead Limit Exceeded exception. This was on v5.11.1. Even though we closed the connection, session, producer, and consumer, the temp queue would not be GC'd and would continue receiving advisory messages.
The solution was to explicitly delete the temp queues when cleaning up the other resources (see https://docs.oracle.com/javaee/7/api/javax/jms/TemporaryQueue.html).
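A minimal sketch of that cleanup in a typical synchronous request/response flow over a temporary queue; the connection URL and request queue name are placeholders, and the important part is the explicit TemporaryQueue.delete() before the session and connection are closed:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RequestReplyWithTempQueueCleanup {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and request queue name.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://srv007210:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requestQueue = session.createQueue("service.request");
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        MessageProducer producer = session.createProducer(requestQueue);
        MessageConsumer replyConsumer = session.createConsumer(replyQueue);

        TextMessage request = session.createTextMessage("ping");
        request.setJMSReplyTo(replyQueue);
        producer.send(request);
        Message reply = replyConsumer.receive(5000);
        System.out.println("reply: " + reply);

        // Clean up: close the consumer first, then explicitly delete the
        // temporary queue so the broker stops tracking it and stops
        // generating advisories for it.
        replyConsumer.close();
        producer.close();
        replyQueue.delete(); // the explicit delete the answer refers to
        session.close();
        connection.close();
    }
}
```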