Why are the Multimedia Timer APIs (timeSetEvent) not as accurate as I would expect?

The Multimedia Timer APIs (MM Timer APIs) get their high accuracy by using the Programmable Interrupt Controller (PIC) built into the machine's hardware. By default, Windows programs the PIC to fire about every 10–16 milliseconds. Every time the PIC fires, the operating system kernel “wakes up”: any executing user-mode threads are suspended and the kernel scheduler kicks in to determine which thread should run next. You can use the MM Timer APIs to actually change the resolution of the PIC (timeBeginPeriod). That’s right: an unassuming user-mode function can radically alter how the OS goes about its business.
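Before changing the resolution, it's worth checking what the machine actually supports. Here's a minimal sketch using timeGetDevCaps; the exact values it prints will vary from machine to machine:

```cpp
// Query the range of timer resolutions this machine supports.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(tc)) == TIMERR_NOERROR)
    {
        // wPeriodMin is commonly 1 ms; wPeriodMax varies by machine.
        printf("Supported timer resolution: %u ms (min) to %u ms (max)\n",
               tc.wPeriodMin, tc.wPeriodMax);
    }
    return 0;
}
```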

As I mentioned earlier, the default PIC resolution when the OS starts up is around 16 milliseconds. Let’s say that you set a periodic timer to fire every 5 milliseconds. With the PIC set at 16 milliseconds, you will only be alerted every 16 milliseconds (at best). This level of accuracy is good enough for most applications. However, for time-critical applications such as audio and video playback, this resolution just isn’t good enough.
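You can observe this quantization yourself by timing a short Sleep at the default resolution. A small sketch (it assumes nothing else on the machine has already raised the resolution, which would change the numbers):

```cpp
// At the default resolution, Sleep(1) typically returns on the next
// ~16 ms clock tick rather than after 1 ms.
#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 10; i++)
        Sleep(1);                       // asks for 1 ms, gets a full tick
    QueryPerformanceCounter(&end);

    double ms = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    printf("10 x Sleep(1) took %.1f ms (~%.1f ms per sleep)\n", ms, ms / 10);
    return 0;
}
```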

The MM Timer APIs allow the developer to reprogram the PIC and specify a new timer resolution. Typically, we will set this to 1 millisecond, which is the maximum resolution of the timer; we can’t get sub-millisecond accuracy. The effect of this reprogramming is that the OS wakes up more often, which increases the chances that our application will be notified at the time we specified. I say “increases the chances” because we still can’t guarantee that we will actually receive the notification, even though the OS woke up when we told it to.
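The usual pattern looks something like the sketch below. Note that every call to timeBeginPeriod must be matched by a timeEndPeriod call with the same value:

```cpp
// Raise the timer resolution, do the time-critical work, then release
// the request.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    const UINT kResolutionMs = 1;   // 1 ms is the typical minimum

    if (timeBeginPeriod(kResolutionMs) == TIMERR_NOERROR)
    {
        // Time-critical work goes here; Sleep stands in for it.
        Sleep(100);

        timeEndPeriod(kResolutionMs);   // release our resolution request
    }
    return 0;
}
```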

Remember that the PIC is used to wake up the OS so that it can decide which thread should run next. The OS uses some very complex rules to determine which thread gets to occupy the processor. Two of the things that the OS looks at when deciding whether to run a thread are its priority and its quantum. Thread priority is easy: the higher the thread’s priority, the more likely the OS is to schedule it next. The thread quantum is a bit more complex. A thread’s quantum is the maximum amount of time the thread can run before another thread has a chance to occupy the processor. By default, the thread quantum on Windows NT–based systems is about 100 milliseconds. This means that a thread can “hog” the CPU for up to 100 milliseconds before another thread has a chance to be scheduled and actually execute.
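To illustrate the priority half of that equation, a thread can ask the scheduler to favor it with SetThreadPriority. This is just a sketch of the mechanism, not a fix for the quantum issue, and time-critical priority can starve other work:

```cpp
// Raising a thread's priority makes the scheduler more likely to pick it
// when the clock interrupt fires. Use with care.
#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE thread = GetCurrentThread();

    printf("Default priority: %d\n", GetThreadPriority(thread));

    // Ask the scheduler to favor this thread over normal-priority work.
    SetThreadPriority(thread, THREAD_PRIORITY_TIME_CRITICAL);
    printf("Raised priority:  %d\n", GetThreadPriority(thread));

    SetThreadPriority(thread, THREAD_PRIORITY_NORMAL);
    return 0;
}
```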

Here is an example. Let’s say that we reprogrammed the PIC to fire every 1 millisecond (timeBeginPeriod). We also set up a periodic timer to fire every 1 millisecond (timeSetEvent). We expect that exactly every millisecond the OS will alert our application and allow us to do some processing. If all of the virtual stars are aligned just right, we get our callback once every millisecond as expected. In reality, we get called after 100 milliseconds and receive 10 timer messages in rapid succession. Why is this? Here is what probably happened: the OS decided that another thread had a higher priority than the MM Timer thread, so that thread got the processor instead of us. That thread must have been really busy, because the OS continued to schedule it until its entire quantum was used up. Once the quantum was exhausted, the OS was forced to schedule a different thread, and we were lucky enough that the next thread scheduled was our timer thread.
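Here's a sketch of that experiment in code. On an idle machine the callback count should be close to 1,000 per second; under load you may see fewer, or bursts of callbacks arriving back to back:

```cpp
// A 1 ms resolution plus a 1 ms periodic timer, counting how many
// callbacks actually arrive in one second.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

static volatile LONG g_ticks = 0;

// timeSetEvent invokes this on a system-owned timer thread.
static void CALLBACK TimerProc(UINT id, UINT msg, DWORD_PTR user,
                               DWORD_PTR dw1, DWORD_PTR dw2)
{
    InterlockedIncrement(&g_ticks);
}

int main()
{
    timeBeginPeriod(1);                                  // 1 ms clock

    MMRESULT timer = timeSetEvent(1,                     // 1 ms period
                                  1,                     // 1 ms resolution
                                  TimerProc, 0,
                                  TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    if (timer != 0)
    {
        Sleep(1000);                                     // let it run ~1 s
        timeKillEvent(timer);
    }

    timeEndPeriod(1);

    // Ideally ~1000 callbacks in 1000 ms; fewer means we were delayed.
    printf("Callbacks in ~1 second: %ld\n", g_ticks);
    return 0;
}
```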

Even though we reprogrammed the PIC, the OS will make decisions behind our back that can cause our timer callback to be delayed. We have no way to change the thread quantum that the OS assigns at startup (that I know of); this is just the way the OS works, and there are no workarounds for this issue. Luckily, this is the worst-case scenario and it rarely happens in the real world. The MM Timer callbacks tend to occur about when we expect. Typically, we see the timer delayed between 1 and 20 milliseconds; the actual delay depends on what else the OS is trying to do at the time. It’s rare to see the timer delayed more than about 20 milliseconds, but it can certainly happen.

I just want to point out that there are side effects to using the timeBeginPeriod API. To quote from Larry Osterman’s blog: “[Calling timeBeginPeriod and timeEndPeriod] …has a number of side effects - it increases the responsiveness of the system to periodic events (when event timeouts occur at a higher resolution, they expire closer to their intended time).  But that increased responsiveness comes at a cost - since the system scheduler is running more often, the system spends more time scheduling tasks, context switching, etc.  This can ultimately reduce overall system performance, since every clock cycle the system is processing "system stuff" is a clock cycle that isn't being spent running your application.”
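One practical way to limit that cost is to hold the higher resolution for as short a window as possible. Here's a sketch of that idea (ScopedTimerResolution is my own hypothetical wrapper, not part of the API) that ties the request to a C++ scope:

```cpp
// Scope the resolution change tightly: timeEndPeriod runs automatically
// when the object goes out of scope, even on early return or exception.
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

class ScopedTimerResolution
{
public:
    explicit ScopedTimerResolution(UINT ms)
        : ms_(ms), active_(timeBeginPeriod(ms) == TIMERR_NOERROR) {}

    ~ScopedTimerResolution()
    {
        if (active_)
            timeEndPeriod(ms_);   // release our resolution request
    }

private:
    UINT ms_;
    bool active_;
};

// Usage: raise the resolution only while playback is actually running.
// {
//     ScopedTimerResolution res(1);
//     RunPlaybackLoop();        // hypothetical time-critical work
// }   // resolution request released here
```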

I hope this helps to explain the inherent problems with using the Multimedia Timer APIs, and why you aren’t getting the accuracy you would expect.