Problem description
I am using JProfiler to inspect a Java microservice while I simulate concurrent users with JMeter. With JProfiler I can see the following: navigating to the method find(), I realized the method has the synchronized keyword.
In my opinion this method causes the problem with blocked threads. But why is it used? Can I disable this cache mechanism in the microservice? The microservice is written in Java and uses Spring and Spring Boot.
Thanks
I added a screenshot of the Monitor History from the same JProfiler snapshot to show the time spent in the ResolvedTypeCache class. Sometimes the time is small, but sometimes it is huge.
Recommended answer
Why is an LRU cache used? Presumably because there's something worth caching.
Why is it synchronized? Because the LinkedHashMap that's being used as the cache here is not thread-safe. It does provide the idiomatic LRU mechanism, though.
It could be replaced with a ConcurrentMap to mitigate the synchronization, but then you'd have a constantly growing non-LRU cache, and that's not at all the same thing.
Right now there's not much you can do about it. The best idea might be to contact the developers and let them know about this. All in all, the library may simply not be suitable for the amount of traffic you're putting through it, or you may be simulating the kind of traffic that exhibits pathological behaviour, or you may be overestimating the impact (no offense, I'm just very Mulderesque about SO posts, i.e. "trust no one").
Finally, uncontended synchronization is cheap, so if there's a possibility to divide the traffic across multiple instances of the cache, it may affect performance in some way (not necessarily positively). I don't know the architecture of the library, though, so it may be completely impossible.
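If dividing the traffic were possible, the idea would be lock striping, sketched below. This is purely hypothetical: it reuses the LruCache sketch from above and assumes the library offered a hook for plugging in such a cache, which it may not. Each key is hashed onto one of several independently synchronized caches, so threads rarely contend on the same monitor.

```java
// Hypothetical sketch of lock striping: several small synchronized LRU caches
// instead of one, so that uncontended locks stay cheap. Builds on the LruCache
// sketch shown earlier; not something the library necessarily supports.
public class StripedLruCache<K, V> {

    private final LruCache<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    public StripedLruCache(int stripeCount) {
        stripes = new LruCache[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new LruCache<>();
        }
    }

    private LruCache<K, V> stripeFor(K key) {
        // Spread keys evenly onto the stripes.
        return stripes[Math.floorMod(key.hashCode(), stripes.length)];
    }

    public V find(K key) {
        return stripeFor(key).find(key);
    }

    public void put(K key, V value) {
        stripeFor(key).put(key, value);
    }
}
```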