Problem description
There is timer coalescing support in Windows 7 and Windows 8, see for example this: Timer coalescing in .net
Windows 7 has a function SetWaitableTimerEx, which is claimed to support coalescing here and here.
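For concreteness, here is a minimal sketch (mine, not taken from the linked posts) of how a coalescing tolerance is passed through SetWaitableTimerEx's last parameter; the 50 ms due time and 32 ms tolerance are arbitrary example values:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Manual-reset waitable timer; SetWaitableTimerEx needs Windows 7+.
    HANDLE hTimer = CreateWaitableTimer(NULL, TRUE, NULL);
    if (hTimer == NULL) return 1;

    // Relative due times are negative and in 100 ns units:
    // -500000 * 100 ns = 50 ms from now.
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -500000LL;

    REASON_CONTEXT ctx;
    ctx.Version = POWER_REQUEST_CONTEXT_VERSION;
    ctx.Flags = POWER_REQUEST_CONTEXT_SIMPLE_STRING;
    ctx.Reason.SimpleReasonString = L"coalescing example";

    // TolerableDelay = 32 ms: the expiration may be delivered up to
    // 32 ms late so the system can coalesce it with other interrupts.
    if (!SetWaitableTimerEx(hTimer, &dueTime, 0, NULL, NULL, &ctx, 32))
    {
        CloseHandle(hTimer);
        return 1;
    }

    WaitForSingleObject(hTimer, INFINITE);
    printf("timer fired (possibly coalesced)\n");
    CloseHandle(hTimer);
    return 0;
}
```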
Windows 8 additionally has a function SetCoalescableTimer, which supports coalescing according to MSDN.
So there is lots of talk about timer coalescing in Windows 7 and Windows 8. But it seems it may have been implemented even earlier. Is that so?
First, is it correct that SetThreadpoolTimer, available since Vista, provides timer coalescing under Vista? Or does it only offer the interface and actually implement coalescing only since Windows 7?
From "Thread Pool Timers and I/O" I can read that
Is that sentence correct for all Windows versions that support the SetThreadpoolTimer function?
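To make this concrete, a minimal sketch of the call in question; the msWindowLength argument (50 ms here, an arbitrary value) is the window within which the system is allowed to batch the expiration:

```c
#include <windows.h>
#include <stdio.h>

// Thread pool timer callback; runs on a thread pool thread.
VOID CALLBACK TimerCallback(PTP_CALLBACK_INSTANCE Instance,
                            PVOID Context, PTP_TIMER Timer)
{
    UNREFERENCED_PARAMETER(Instance);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Timer);
    printf("thread pool timer fired\n");
}

int main(void)
{
    PTP_TIMER timer = CreateThreadpoolTimer(TimerCallback, NULL, NULL);
    if (timer == NULL) return 1;

    // Relative due time: negative, in 100 ns units (100 ms from now).
    LARGE_INTEGER li;
    li.QuadPart = -(100 * 10000LL);
    FILETIME dueTime;
    dueTime.dwLowDateTime  = li.LowPart;
    dueTime.dwHighDateTime = (DWORD)li.HighPart;

    // msPeriod = 1000: fire every second.
    // msWindowLength = 50: the callback may be delayed by up to 50 ms,
    // which is exactly the slack the system could use for coalescing.
    SetThreadpoolTimer(timer, &dueTime, 1000, 50);

    Sleep(5000); // let it fire a few times

    // Cancel the timer and wait for outstanding callbacks, then clean up.
    SetThreadpoolTimer(timer, NULL, 0, 0);
    WaitForThreadpoolTimerCallbacks(timer, TRUE);
    CloseThreadpoolTimer(timer);
    return 0;
}
```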
Secondly, now I started wondering: I can see that timeSetEvent, available since XP, has a parameter called uResolution. Does this parameter just change the global timer resolution (like timeBeginPeriod does) for the duration of the timer event wait, or does it affect only this particular timer, also providing timer coalescing?
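For reference, a minimal timeSetEvent call; uDelay = 10 ms and uResolution = 1 ms are arbitrary values, only meant to show where the parameter in question goes (link against winmm.lib):

```c
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

// Multimedia timer callback; called on a system-owned thread.
void CALLBACK TimeProc(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser,
                       DWORD_PTR dw1, DWORD_PTR dw2)
{
    UNREFERENCED_PARAMETER(uMsg);
    UNREFERENCED_PARAMETER(dwUser);
    UNREFERENCED_PARAMETER(dw1);
    UNREFERENCED_PARAMETER(dw2);
    printf("multimedia timer %u fired\n", uTimerID);
}

int main(void)
{
    // uDelay = 10 ms; the second argument is the uResolution parameter
    // whose exact effect is being asked about.
    MMRESULT id = timeSetEvent(10, 1, TimeProc, 0, TIME_ONESHOT);
    if (id == 0) return 1;

    Sleep(100);        // give the one-shot timer time to fire
    timeKillEvent(id); // harmless for an already-expired one-shot timer
    return 0;
}
```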
Finally, are there any additional or alternative functions that provide timer coalescing under Windows XP or Vista?
A few words in general:
Timer coalescing provides a way to reduce the number of interrupts. Applications are allowed to specify a tolerance for their timing demands. This allows the operating system to "batch" interrupts with a couple of consequences:
- the number of interrupts may be reduced. (+)
- the number of context switches may be lower. (+)
- the power consumption may be reduced. (+)
- a bulk of operations may have to be done at those batched interrupts (-)
- the scheduler may have to schedule a large number of processes at this time (-)
- the resolution in time is worse (-)
Windows, as well as other interrupt-based operating systems, has always "batched" timed events. Anything set up to occur at a specific time relies on a due time that expires with an interrupt. Consequently, the events are coalesced with the interrupt. The granularity of this scheme is determined by the interrupt frequency. A must-read for those interested in timer coalescing: MSDN: Windows Timer Coalescing.
For performance reasons every effort should be made to reduce the number of interrupts as much as possible. Unfortunately, lots of packages do set the system's timer resolution very high, e.g. by means of the multimedia timer interface timeBeginPeriod / timeEndPeriod or the underlying API NtSetTimerResolution. Like Hans mentioned: "Chrome" is a good example of how the use of these functions can be badly exaggerated.
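As a side note, the damage is limited when the raised resolution is held only for as long as it is actually needed; a minimal sketch of that bracket pattern (the work in the middle is a placeholder):

```c
#include <windows.h>
#pragma comment(lib, "winmm.lib")

// Raise the system timer resolution to 1 ms only around the code that
// needs it; every timeBeginPeriod call must be matched by a
// timeEndPeriod call with the same value.
void DoLatencySensitiveWork(void)
{
    if (timeBeginPeriod(1) == TIMERR_NOERROR)
    {
        /* ... timing-sensitive work goes here ... */
        timeEndPeriod(1);
    }
}
```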
Secondly, now that I started wondering... timeSetEvent is one of the multimedia timer functions. It uses timeBeginPeriod under the hood.
And it uses it badly: it sets the system timer resolution to match uResolution as well as it can within the timer resolutions available on the executing platform. For large values of uDelay it could wait at low resolution until it gets close to the expiry of the delay and only then raise the system timer resolution, but instead it sets the timer resolution for the entire wait period to the specified uResolution. That is painful, knowing that the high resolution will apply to long delays as well. However, the multimedia timer functions are not recommended for use with large delays. But setting the resolution over and over again isn't good either (see the notes below).
Summary on timeSetEvent: this function is not doing anything like coalescing at all; what it does is the opposite. It optionally increases the number of interrupts; in this sense it spreads events over more interrupts, it "de-batches" them.
SetThreadpoolTimer introduced the idea of "batching" events for the first time. This was primarily driven by increasing complaints about battery life on Windows notebooks. SetWaitableTimerEx pushed that strategy further, and SetCoalescableTimer is the most recent API for accessing coalescing timers. The latter introduces TIMERV_DEFAULT_COALESCING and TIMERV_NO_COALESCING, which are worth thinking about, since they let the caller accept the system's default tolerance or rule out coalescing entirely.
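A short sketch of the three choices SetCoalescableTimer's uToleranceDelay accepts (hwnd is assumed to be an existing window with a running message loop; the timer IDs and the 1000 ms period are arbitrary):

```c
// Requires Windows 8+ (_WIN32_WINNT >= 0x0602).
#include <windows.h>

// WM_TIMER-style callback used by all three timers below.
VOID CALLBACK TimerProc(HWND hwnd, UINT uMsg, UINT_PTR idEvent, DWORD dwTime)
{
    UNREFERENCED_PARAMETER(hwnd);
    UNREFERENCED_PARAMETER(uMsg);
    UNREFERENCED_PARAMETER(idEvent);
    UNREFERENCED_PARAMETER(dwTime);
    /* ... periodic work ... */
}

void StartTimers(HWND hwnd)
{
    // Explicit tolerance: may fire up to 100 ms late to enable coalescing.
    SetCoalescableTimer(hwnd, 1, 1000, TimerProc, 100);

    // Let the system choose a default tolerance.
    SetCoalescableTimer(hwnd, 2, 1000, TimerProc, TIMERV_DEFAULT_COALESCING);

    // Demand the exact period; no coalescing at all.
    SetCoalescableTimer(hwnd, 3, 1000, TimerProc, TIMERV_NO_COALESCING);
}
```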
Taking the opportunity for some notes on system timer resolutions:
Changing the system timer resolution has more consequences than just an increased interrupt frequency. Some effects that come along with the use of timeBeginPeriod / NtSetTimerResolution:
- Interrupt frequency changes
- Thread quantum changes (threads time slice) (!)
- Hiccups of the system time (MSDN: "...frequent calls can significantly affect the system clock")
- Hiccups when a system time adjustment is active (SetSystemTimeAdjustment)
Point 3 was partly taken care of with Windows 7, and point 4 was only addressed with Windows 8.1. Hiccups of the system time can be as big as the minimum supported timer resolution (15.625 ms on typical systems), and they accumulate when timeBeginPeriod / NtSetTimerResolution are called frequently. This may result in a considerable jump when trying to adjust the system time to match an NTP reference. NTP clients need to operate at high timer resolution to obtain reasonable accuracy when running on Windows versions < Windows 8.
Finally: Windows itself changes the system timer resolution whenever it sees an advantage in doing so. The number of supported timer resolutions depends on the underlying hardware and the Windows version. A list of available resolutions may be obtained by scanning through them: calling timeBeginPeriod with increasing periods, each followed by a call to NtQueryTimerResolution. Some of the supported resolutions may be "disliked" by Windows on specific platforms and modified to better suit Windows' needs. Example: XP may change a "user set" resolution of ~4 ms to 1 ms after a short period of time on certain platforms. Particular Windows versions < 8.1 do change the timer resolution at unpredictable times.
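A sketch of that scan; NtQueryTimerResolution has no SDK header, so it is loaded from ntdll.dll here, and the reported values are in 100 ns units:

```c
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

// NtQueryTimerResolution reports resolutions in 100 ns units.
typedef LONG (NTAPI *PNtQueryTimerResolution)(PULONG MinimumResolution,
                                              PULONG MaximumResolution,
                                              PULONG ActualResolution);

int main(void)
{
    PNtQueryTimerResolution NtQueryTimerResolution =
        (PNtQueryTimerResolution)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtQueryTimerResolution");
    if (NtQueryTimerResolution == NULL) return 1;

    // Request increasing periods and record which resolution the
    // system actually grants for each request.
    for (UINT period = 1; period <= 16; period++)
    {
        if (timeBeginPeriod(period) == TIMERR_NOERROR)
        {
            ULONG minRes, maxRes, actual;
            NtQueryTimerResolution(&minRes, &maxRes, &actual);
            printf("requested %2u ms -> granted %.4f ms\n",
                   period, actual / 10000.0);
            timeEndPeriod(period);
        }
    }
    return 0;
}
```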
If an application is required to be completely independent of these artefacts, it has to acquire the highest available resolution on its own. This way the application dominates the system-wide resolution and doesn't have to bother about other applications or the OS changing timer resolutions. More modern platforms do support a timer resolution of 0.5 ms. timeBeginPeriod does not allow acquiring this resolution, but NtSetTimerResolution does. Here I've described how to use NtSetTimerResolution to obtain 0.5 ms resolution.
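A condensed sketch of that approach; NtSetTimerResolution is undocumented and loaded from ntdll.dll, and 5000 units of 100 ns correspond to 0.5 ms:

```c
#include <windows.h>
#include <stdio.h>

// NtSetTimerResolution (ntdll.dll, undocumented): resolution in 100 ns
// units; SetResolution = TRUE acquires it, FALSE releases it again.
typedef LONG (NTAPI *PNtSetTimerResolution)(ULONG DesiredResolution,
                                            BOOLEAN SetResolution,
                                            PULONG CurrentResolution);

int main(void)
{
    PNtSetTimerResolution NtSetTimerResolution =
        (PNtSetTimerResolution)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtSetTimerResolution");
    if (NtSetTimerResolution == NULL) return 1;

    // Request 0.5 ms (= 5000 * 100 ns); the call reports the resolution
    // actually granted, which depends on the underlying hardware.
    ULONG actual = 0;
    NtSetTimerResolution(5000, TRUE, &actual);
    printf("granted resolution: %.4f ms\n", actual / 10000.0);

    /* ... timing-critical work ... */

    NtSetTimerResolution(5000, FALSE, &actual); // release the request
    return 0;
}
```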
Power consumption is likely to rise under such conditions, but that's the price to pay for reliable resolution: the energy cost of a context switch is typically 0.05 mJ to 0.2 mJ on modern hardware (has anyone estimated the worldwide total number of context switches per year?). Windows cuts the thread quantum (time slice) to approx. 2/3 when the timer resolution is set to maximum. Consequently, power consumption rises by approx. 30%!