Problem description
I've got an application that uses performance counters and that has worked for months. Now, on my dev machine and another developer's machine, it has started hanging when I call PerformanceCounterCategory.Exists. As far as I can tell, it hangs indefinitely. It does not matter which category I use as input, and other applications using the API exhibit the same behaviour.
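For context, a minimal sketch of the kind of call that hangs (the category name is only an example; any category triggers it):

using System;
using System.Diagnostics;

class Repro
{
    static void Main()
    {
        // Any category name reproduces the hang here; "Processor" is just an example.
        Console.WriteLine("Calling PerformanceCounterCategory.Exists...");
        bool exists = PerformanceCounterCategory.Exists("Processor");
        // On the affected machines this line is never reached.
        Console.WriteLine("Exists: " + exists);
    }
}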
Debugging (using MS Symbol Servers) has shown that it is a call to Microsoft.Win32.RegistryKey that hangs. Further investigation shows that it is this line that hangs:
while (Win32Native.ERROR_MORE_DATA == (r = Win32Native.RegQueryValueEx(hkey, name, null, ref type, blob, ref sizeInput))) {
This is basically a loop that tries to allocate enough memory for the performance counter data. It starts at size = 65000 and does a few iterations. On the 4th call, when size = 520000, Win32Native.RegQueryValueEx hangs.
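For illustration, here is a simplified sketch of that growing-buffer pattern, reading raw performance data through RegQueryValueEx against HKEY_PERFORMANCE_DATA. This is my own approximation, not the framework code; the "Global" value name and the doubling growth are illustrative:

using System;
using System.Runtime.InteropServices;

static class PerfDataSketch
{
    const int ERROR_SUCCESS = 0;
    const int ERROR_MORE_DATA = 234;

    // Predefined pseudo-handle for performance data (sign-extended, as RegistryKey does internally).
    static readonly IntPtr HKEY_PERFORMANCE_DATA = new IntPtr(unchecked((int)0x80000004));

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern int RegQueryValueEx(
        IntPtr hKey, string lpValueName, IntPtr lpReserved,
        out int lpType, byte[] lpData, ref int lpcbData);

    // Queries performance data with a buffer that is grown until the call stops
    // returning ERROR_MORE_DATA. "Global" requests all counter data.
    public static byte[] ReadPerfData(string valueName)
    {
        int size = 65000;                 // same starting size as the framework loop
        byte[] data = new byte[size];
        int type;
        int r;
        while (ERROR_MORE_DATA ==
               (r = RegQueryValueEx(HKEY_PERFORMANCE_DATA, valueName, IntPtr.Zero,
                                    out type, data, ref size)))
        {
            size = data.Length * 2;       // 65000 -> 130000 -> 260000 -> 520000 ...
            data = new byte[size];        // the hang described above occurred in the 4th query
        }
        if (r != ERROR_SUCCESS)
            throw new InvalidOperationException("RegQueryValueEx failed with error " + r);
        return data;                      // real code should also close HKEY_PERFORMANCE_DATA afterwards
    }
}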
Furthermore, rather worryingly, I found this comment in the reference source for PerformanceCounterLib.GetData:
// Win32 RegQueryValueEx for perf data could deadlock (for a Mutex) up to 2mins in some
// scenarios before they detect it and exit gracefully. In the mean time, ERROR_BUSY,
// ERROR_NOT_READY etc can be seen by other concurrent calls (which is the reason for the
// wait loop and switch case below). We want to wait most certainly more than a 2min window.
// The curent wait time of up to 10mins takes care of the known stress deadlock issues. In most
// cases we wouldn't wait for more than 2mins anyways but in worst cases how much ever time
// we wait may not be sufficient if the Win32 code keeps running into this deadlock again
// and again. A condition very rare but possible in theory. We would get back to the user
// in this case with InvalidOperationException after the wait time expires.
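Given that comment, one defensive measure (not a fix for the underlying problem, just a way to keep the caller from blocking forever) would be to run the check on a worker task with a timeout. A rough sketch, with an arbitrarily chosen 30-second limit:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class SafeCounterCheck
{
    // Runs PerformanceCounterCategory.Exists on a worker thread and gives up after a timeout.
    // The underlying call keeps running (and may stay stuck) in the background; this only
    // keeps the caller responsive.
    public static bool? ExistsWithTimeout(string category, TimeSpan timeout)
    {
        var task = Task.Run(() => PerformanceCounterCategory.Exists(category));
        return task.Wait(timeout) ? task.Result : (bool?)null; // null = unknown, check timed out
    }
}

// Usage:
//   bool? exists = SafeCounterCheck.ExistsWithTimeout("Processor", TimeSpan.FromSeconds(30));
//   if (exists == null) { /* treat counters as unavailable */ }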
Has anyone seen this behaviour before? What can I do to resolve this?
Recommended answer
This issue is now fixed, and since there have been no answers here, I will add one in case the question is found in future searches.
I ultimately fixed this error by stopping the print spooler service (as a temporary measure).
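For reference, the Print Spooler can be stopped from an elevated prompt with "net stop spooler", or programmatically; a rough sketch (needs a reference to System.ServiceProcess and administrative rights):

using System;
using System.ServiceProcess;

class StopSpooler
{
    static void Main()
    {
        // "Spooler" is the service name behind the "Print Spooler" display name.
        using (var spooler = new ServiceController("Spooler"))
        {
            if (spooler.Status != ServiceControllerStatus.Stopped)
            {
                spooler.Stop();
                spooler.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            }
            spooler.Refresh();
            Console.WriteLine("Print Spooler status: " + spooler.Status);
        }
    }
}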
It looks like reading performance counters actually needs to enumerate the printers on the system (confirmed by a WinDbg dump of a hanging process, where I can see in the stack trace that winspool is enumerating printers and is stuck in a network call). That is what was actually failing on the system (and sure enough, opening the "Devices and Printers" window also hung). It baffles me that a printer/network issue can actually take the performance counters down. One would think there would be some sort of fail-safe built in for such a case.
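A quick way to check whether the printer subsystem is responsive at all (it goes through the same winspool enumeration) is to list the installed printers; on an affected machine I would expect a call like this to hang or stall as well (needs a reference to System.Drawing):

using System;
using System.Drawing.Printing;

class PrinterProbe
{
    static void Main()
    {
        // InstalledPrinters calls into winspool (EnumPrinters), so a stuck spooler or
        // bad network printer should show up here too.
        Console.WriteLine("Enumerating printers...");
        foreach (string printer in PrinterSettings.InstalledPrinters)
            Console.WriteLine("  " + printer);
        Console.WriteLine("Done.");
    }
}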
My guess is that this is caused by a bad printer/driver on the network. I haven't re-enabled printing on the affected systems yet, since we are still hunting for the bad printer.