This article describes how to deal with Infinispan/JGroups crashing after a WAR (re)deployment; the question and recommended answer below may be a useful reference if you run into the same problem.

Problem Description

I'm working on WildFly 9 with Infinispan 7.2.3.

I'm facing a strange problem related to a distributed cache:

  1. On the application server I deploy N WARs that expose REST services.
  2. Each service has the same shared responsibility: check whether a CacheManager already exists in JNDI; if it does, use it, otherwise create a new one and bind it to JNDI. Each WAR therefore uses a single CacheManager instance.
  3. The Infinispan CacheManager is configured in distributed mode.

Infinispan and JGroups are provided by the application server. After a redeploy operation (undeploy and deploy) of all the WARs, if I immediately start sending REST requests to these services, I get this error:

18:23:42,366 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p2-t12) ISPN000197: Error updating cluster member list: org.infinispan.util.concurrent.TimeoutException: Replication timeout for ws-7-aor-58034
    at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:87)
    at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:586)
    at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:402)
    at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:393)
    at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:309)
    at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:590)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

18:23:42,539 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (remote-thread--p11-t2) ISPN000329: Unable to read rebalancing status from coordinator ws-7-aor-19211: org.infinispan.util.concurrent.TimeoutException: Node ws-7-aor-19211 timed out
    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:248)
    at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:561)
    at org.infinispan.topology.ClusterTopologyManagerImpl.fetchRebalancingStatusFromCoordinator(ClusterTopologyManagerImpl.java:129)
    at org.infinispan.topology.ClusterTopologyManagerImpl.start(ClusterTopologyManagerImpl.java:118)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
    at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
    at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
    at org.infinispan.factories.AbstractComponentRegistry.registerComponentInternal(AbstractComponentRegistry.java:207)
    at org.infinispan.factories.AbstractComponentRegistry.registerComponent(AbstractComponentRegistry.java:156)
    at org.infinispan.factories.AbstractComponentRegistry.getOrCreateComponent(AbstractComponentRegistry.java:277)
    at org.infinispan.factories.AbstractComponentRegistry.invokeInjectionMethod(AbstractComponentRegistry.java:227)
    at org.infinispan.factories.AbstractComponentRegistry.wireDependencies(AbstractComponentRegistry.java:132)
    at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$2.run(GlobalInboundInvocationHandler.java:156)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.jgroups.TimeoutException: timeout waiting for response from ws-7-aor-19211, request: org.jgroups.blocks.UnicastRequest@75770aa6, req_id=6, mode=GET_ALL, target=ws-7-aor-19211
    at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:427)
    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:433)
    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:241)
    ... 19 more

This is the initialization code for the CacheManager:

// Look up an existing CacheManager in JNDI; if none is bound, create a new one.
try {
    ctx = new InitialContext();
    cacheManager = (DefaultCacheManager) ctx.lookup(SessionConstants.CACHE_MANAGER_GLOBAL_JNDI_NAME);
} catch (NamingException e1) {
    logger.error("SessionHooverJob not able to find: java:global/klopotekCacheManager ... a new instance will be created!");
}

if (cacheManager == null) {

    ...

    // Load the default JGroups UDP stack and override the multicast address if one was supplied.
    configurator = ConfiguratorFactory.getStackConfigurator("default-configs/default-jgroups-udp.xml");
    ProtocolConfiguration udpConfiguration = configurator.getProtocolStack().get(0);
    if ("UDP".equalsIgnoreCase(udpConfiguration.getProtocolName()) && mcastAddr != null) {
        udpConfiguration.getProperties().put("mcast_addr", mcastAddr);
    }

    // Global configuration: JMX statistics plus a JGroups transport built from the stack above.
    GlobalConfigurationBuilder gcb = new GlobalConfigurationBuilder();
    gcb.globalJmxStatistics().enabled(true).allowDuplicateDomains(true);
    gcb.transport().defaultTransport()
            .addProperty(JGroupsTransport.CONFIGURATION_STRING, configurator.getProtocolStackString());
            //.addProperty(JGroupsTransport.CONFIGURATION_FILE, "config/jgroups.xml");

    // Default cache configuration: distributed synchronous mode with a 24-hour entry lifespan.
    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.clustering().cacheMode(CacheMode.DIST_SYNC).expiration().lifespan(24L, TimeUnit.HOURS);

    cacheManager = new DefaultCacheManager(gcb.build(), builder.build());
}
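
The bind-back-to-JNDI step that the question describes (but that is elided above with "...") might look roughly like the following sketch. The JNDI name constant is the one used in the lookup above; the helper method and its race handling are assumptions for illustration, not code from the original application, and whether the server allows writes to that JNDI namespace depends on the deployment.

// Hypothetical helper, not from the original code: publish the freshly created
// manager so the other WARs can reuse it, or fall back to the one already bound.
private DefaultCacheManager bindOrReuse(Context ctx, DefaultCacheManager created) throws NamingException {
    try {
        ctx.bind(SessionConstants.CACHE_MANAGER_GLOBAL_JNDI_NAME, created);
        return created;
    } catch (NameAlreadyBoundException e) {
        // Another WAR won the race and bound its manager first; reuse the shared instance.
        return (DefaultCacheManager) ctx.lookup(SessionConstants.CACHE_MANAGER_GLOBAL_JNDI_NAME);
    }
}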

The problem doesn't occur if around 40-60 seconds pass after deploying. If I have one JNDI session manager that has already built the JGroups channel, even if I undeploy all the WARs... why does JGroups try to rebalance again?

Is there some configuration property I need to set?

Recommended Answer

There's nothing wrong with using the caches from WildFly's Infinispan subsystem, even via JNDI, as long as you are aware of the lifecycle requirements and constraints of server-managed Infinispan resources. In WildFly, all Infinispan resources are created and started on demand, including cache managers, cache configurations, and caches. If no service requires a given Infinispan resource, it is not started (nor is it bound to JNDI). Likewise, when no service requires a given Infinispan resource any longer, it is stopped (and its JNDI binding removed). Thus, in order to look up an Infinispan resource via JNDI, you must first force it to start. The easiest way to do this is to create a resource reference (i.e. a resource-ref or resource-env-ref), e.g.:

<resource-ref>
    <res-ref-name>infinispan/mycontainer</res-ref-name>
    <lookup-name>java:jboss/infinispan/container/mycontainer</lookup-name>
</resource-ref>
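
For a WAR, this resource-ref declaration goes into WEB-INF/web.xml.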

You can now look up the cache manager in your application's JNDI namespace, e.g.:

Context ctx = new InitialContext();
EmbeddedCacheManager manager = (EmbeddedCacheManager) ctx.lookup("java:comp/env/infinispan/mycontainer");
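
In a managed component (servlet, EJB, or CDI bean) the same reference can also be injected rather than looked up by hand; a minimal sketch, assuming the resource-ref above is declared for the deployment:

// Injects the server-managed cache manager declared by the resource-ref above
// (javax.annotation.Resource, org.infinispan.manager.EmbeddedCacheManager).
@Resource(name = "infinispan/mycontainer")
private EmbeddedCacheManager manager;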

The cache manager will already be started. Also, you should never attempt to stop a server-managed cache manager. Additionally, you cannot guarantee that any of the cache configurations defined within the Infinispan subsystem for this container have been installed, so the getCache("...") methods are not a reliable way of obtaining a reference to a server-managed cache. If you want to depend on a specific cache as defined in the subsystem, create a resource reference for the cache itself, e.g.:

<resource-ref>
    <res-ref-name>infinispan/mycache</res-ref-name>
    <lookup-name>java:jboss/infinispan/cache/mycontainer/mycache</lookup-name>
</resource-ref>

You can now look up the cache directly:

Cache<?, ?> cache = (Cache) ctx.lookup("java:comp/env/infinispan/mycache");
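
Since the lookup returns a Cache<?, ?>, a typed cast is needed before the cache can be written to; the key and value types below are placeholders for whatever the application actually stores:

// Typed reference to the same server-managed cache; entries are distributed
// according to the cache configuration defined in the Infinispan subsystem.
@SuppressWarnings("unchecked")
Cache<String, String> cache = (Cache<String, String>) ctx.lookup("java:comp/env/infinispan/mycache");
cache.put("some-key", "some-value");
String value = cache.get("some-key");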

The cache will already be started. Likewise, you should not attempt to stop a server-managed cache; it will stop automatically when your application is undeployed or the server is shut down.
