Understanding the Variable Tick Timer

Posted by Sha Viswanathan.

Today I wanted to talk a bit about the variable tick timer, and how it affects the Windows CE scheduler. The timer is interesting because it provides the 'heartbeat' for every Windows CE system. On each timer interrupt, the kernel analyzes threads in the sleep and run queues to decide which thread will run next. Although beyond the scope of this article, learning how the system timer works is a good step toward understanding common CE themes such as real-time operation, the behavior of sleeping threads and the effect of priority, and the power-saving potential of OEMIdle().

OVERVIEW

Execution in a Windows CE system is interrupt-driven; a key press, call notification from the radio, or insertion of an SD card all cause interrupts that trigger specific code to execute. The kernel is no different; it maintains control of the system by tying itself to a hardware interrupt. For a particular chipset to support Windows CE, it must have a dedicated timer interrupt as part of the OEM Adaptation Layer (OAL).

Typically, interrupts in a system trigger a notification, or 'SYSINTR' event, to the kernel. If a driver has requested a SYSINTR mapping for a physical interrupt line such as a keypad, its associated thread will be woken up and scheduled every time the SYSINTR event occurs. Timer interrupts are unique because they always return the dedicated SYSINTR_RESCHED. Instead of waking up a thread, SYSINTR_RESCHED is a 'heads up' telling the system that the scheduler needs to be run. I won't go into detail here, but a lot can occur after a reschedule: if no threads exist, or all are sleeping, we will enter idle; the scheduler can also choose to change the thread context if a higher priority thread has entered the run queue; and so on.
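To make the SYSINTR plumbing concrete, here is a minimal sketch of the interrupt service thread (IST) pattern a driver uses to consume its SYSINTR. InterruptInitialize and InterruptDone are the standard CE APIs; the keypad scenario and the idea that dwSysIntr arrives as the thread context are my assumptions for illustration:

    #include <windows.h>

    // Minimal IST sketch. dwSysIntr is the SYSINTR value the OAL mapped
    // to the device's physical IRQ (hypothetically passed as context).
    DWORD WINAPI KeypadIST(LPVOID pContext)
    {
        DWORD dwSysIntr = (DWORD)pContext;

        // Auto-reset event the kernel signals when the SYSINTR fires.
        HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
        if (!hEvent || !InterruptInitialize(dwSysIntr, hEvent, NULL, 0))
            return 1;

        for (;;)
        {
            // Sleep until the OAL's ISR returns dwSysIntr to the kernel.
            WaitForSingleObject(hEvent, INFINITE);

            // ... service the keypad hardware here ...

            InterruptDone(dwSysIntr);   // re-arm the interrupt
        }
    }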

Generally, timer hardware consists of at least two registers. One is a free-running counter, which increments or decrements by one at a pre-determined frequency. The second is a match register, which is programmable. When the free-running counter equals the match register, an interrupt is generated. This paradigm is known as ‘count-compare’. Another paradigm consists of just one register, which can be programmed at any time, and will decrement until it hits 0, at which point an interrupt is generated (‘count-down’).
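In code, the difference between the two paradigms comes down to how the next interrupt is armed. The sketch below uses made-up register pointers (a real OAL would map the chipset's actual timer block, e.g. via OALPAtoVA); only the shape of the logic is the point:

    #include <windows.h>

    // Hypothetical memory-mapped timer registers.
    static volatile UINT32 *g_pTimerCount; // free-running counter
    static volatile UINT32 *g_pTimerMatch; // count-compare match register
    static volatile UINT32 *g_pTimerLoad;  // count-down register

    // Count-compare: the interrupt fires when the free-running counter
    // equals the match register, so we arm relative to 'now'.
    static void SetNextInterruptCountCompare(UINT32 ticksFromNow)
    {
        *g_pTimerMatch = *g_pTimerCount + ticksFromNow; // wraps naturally
    }

    // Count-down: the interrupt fires when the loaded value reaches 0.
    static void SetNextInterruptCountDown(UINT32 ticksFromNow)
    {
        *g_pTimerLoad = ticksFromNow;
    }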

VARIABLE vs. FIXED TICK

Our goal, then, when implementing a timer, is to manage the flow of interrupts to the system. Two common timer implementations exist in Microsoft-produced OALs. The first is called the 'fixed tick' timer. As the name implies, we always set the next timer interrupt to occur one millisecond into the future. When the interrupt occurs, we check whether the current time has passed dwReschedTime (a kernel global variable); if so we return SYSINTR_RESCHED, otherwise SYSINTR_NOP (a 'false' interrupt). So, any running thread will be briefly interrupted each millisecond so the kernel can check whether the scheduler needs to run.
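A fixed-tick ISR therefore boils down to a few lines. This is only a sketch: it reuses the hypothetical register pointers from above, assumes a made-up TICKS_PER_MS, and omits the chipset-specific interrupt acknowledge. CurMSec and dwReschedTime are the kernel globals CE OAL timer code works with:

    #include <windows.h>
    #include <nkintr.h>              // SYSINTR_RESCHED, SYSINTR_NOP

    extern volatile DWORD CurMSec;   // kernel's ms-since-boot counter
    extern DWORD dwReschedTime;      // next time the kernel wants to run

    #define TICKS_PER_MS 3250       // hypothetical: a 3.25 MHz timer clock

    UINT32 OALTimerIntrHandler(void)
    {
        // (Chipset-specific interrupt acknowledge omitted.)
        *g_pTimerMatch = *g_pTimerCount + TICKS_PER_MS; // re-arm, 1 ms out
        CurMSec++;                                      // advance kernel time

        // Signed subtraction copes with 32-bit wrap of CurMSec.
        if ((INT32)(CurMSec - dwReschedTime) >= 0)
            return SYSINTR_RESCHED;  // time to run the scheduler

        return SYSINTR_NOP;          // 'false' interrupt: nothing due yet
    }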

Variable tick works differently. Instead of setting the next timer interrupt to occur in one millisecond, we expose a function to the kernel that sets the next interrupt. This ensures that no unnecessary timer interrupts are taken. This is the major advantage of using a variable tick timer: the kernel calculates and sets the 'earliest required wakeup time' instead of 'polling' every millisecond. Here are just a few scenarios that describe when the next timer interrupt will occur (a sketch of the kernel-facing hook follows the list):

• One quantum (100 ms) from now, when there is only one active thread in the system.
• If a low priority thread is about to run because a higher-priority thread is calling Sleep(50), the next interrupt will occur in 50 ms.
• If no threads are running, the next interrupt will be set for the maximum time allowable by hardware (the system has nothing to do but wait for external interrupts, like a keypress).
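Here is what that hook might look like, loosely modeled on OALTimerUpdateRescheduleTime, which the kernel calls with the absolute wakeup time in milliseconds. The clamp constant and register pointers are the same hypothetical names used in the earlier sketches:

    #define MAX_ONE_SHOT_MSEC 60000  // hypothetical hardware limit

    // Arm the next timer interrupt for the kernel's requested wakeup.
    void OALTimerUpdateRescheduleTime(DWORD timeMSec)
    {
        DWORD deltaMSec = timeMSec - CurMSec;  // ms until requested wakeup

        // Never arm the match in the past, and never exceed what the
        // counter can cover in a single shot.
        if ((INT32)deltaMSec <= 0)         deltaMSec = 1;
        if (deltaMSec > MAX_ONE_SHOT_MSEC) deltaMSec = MAX_ONE_SHOT_MSEC;

        *g_pTimerMatch = *g_pTimerCount + deltaMSec * TICKS_PER_MS;
    }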

TIMERS IN IDLE

When all threads are sleeping or inactive, OEMIdle is called to sleep the core. In a fixed tick system, OEMIdle should reprogram the next timer interrupt to around dwReschedTime to save power; otherwise, we will wake up every millisecond unnecessarily (we'll see this happen later on CEPC, which is wall-powered). It is a common misconception that a fixed tick system cannot save power. Again, a platform that implements the fixed tick timer and wants idle power savings should simply reprogram the timer interrupt. Another way to think about it is that, regardless of timer, idle is always 'variable tick'.

With a variable tick timer, the timer interrupt will already be set to dwReschedTime by the kernel before OEMIdle is called. Aside from this difference, behavior in OEMIdle is very much platform-specific, and can be completely independent of the timer. It turns out that even variable-tick platforms may still have to reprogram the timer interrupt. This is because some low power modes require many milliseconds of recovery time. In that case we end up having to 'roll back' the next timer interrupt to roughly [dwReschedTime – recovery time].
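A rough sketch of that roll-back on a variable-tick platform, reusing the hypothetical names from above. LOW_POWER_RECOVERY_MSEC and EnterLowPowerMode are stand-ins, and the idle-time accounting and race handling a real OEMIdle needs are omitted:

    #define LOW_POWER_RECOVERY_MSEC 5    // hypothetical wakeup latency

    extern void EnterLowPowerMode(void); // hypothetical platform helper

    void OEMIdle(DWORD dwIdleParam)
    {
        // The kernel already armed the match for dwReschedTime. Pull
        // the wakeup earlier so the core is running again by then.
        DWORD wakeMSec = dwReschedTime - LOW_POWER_RECOVERY_MSEC;
        if ((INT32)(wakeMSec - CurMSec) > 0)
            *g_pTimerMatch = *g_pTimerCount +
                             (wakeMSec - CurMSec) * TICKS_PER_MS;

        EnterLowPowerMode();  // platform specific; returns on interrupt
    }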

That’s all I’ll say about timers in idle for now; if readers want to know more about optimizing OEMIdle, there is enough information to write another blog (which we can/will do).

TIMER IN ACTION

Let’s compare the basic operation of a fixed tick timer with a variable tick timer. If we build a TinyKernel image with IMGPROFILER=1 and run CELog/Kernel Tracker, we can look at all physical interrupts that fired in the system. The simplest case: what happens when only one thread is running in the system? I ran a simple test program (spin for 200ms, then sleep) with both types of timers to point out the differences:

[Kernel Tracker screenshots: CELog output for variable tick, then fixed tick]

In the Kernel Tracker output above, the ‘i’ symbols represent interrupts. Interrupt 1 is the timer interrupt, and Interrupt 16 is KITL (dumping the CELog data to my desktop), which we will ignore.

In the first Kernel Tracker picture, I started vartest.exe at about the ‘250’ mark. The kernel was idling previously, so vartest.exe was the only thread in the system, aside from the occasional KITL interrupt. After I launched vartest.exe, we see a timer interrupt come in, and afterwards a process switch from nk.exe (OEMIdle) to vartest.exe. Because no other threads were found in the run/sleep queues, the kernel set the next reschedule time to 100ms in the future. This is (you guessed it!) the variable tick timer.

The second Kernel Tracker picture is the fixed tick timer. This looks almost exactly the same as the variable tick timer, but with one key difference: Interrupt 0 is very active. Interrupt 0 corresponds to SYSINTR_NOP, which is returned when the timer interrupt fired but the current time didn’t actually exceed dwReschedTime. While our thread was running its 100ms quantum, we effectively ignored 99 timer interrupts. This is CEPC; notice that its implementation of OEMIdle does not bother to reprogram or stifle timer interrupts, because the platform is wall-powered.

In summary, the major difference between these two timers is that fixed tick timers ‘poll’, while variable tick timers are truly ‘interrupt’-driven. The major advantage of using a variable tick timer is that you can save cycles. To find out whether this could result in savings on your platform, measure the time spent handling one ‘fake’ interrupt: the time it takes to execute your ISR and return SYSINTR_NOP, plus some kernel overhead, and potentially hardware-related overhead as the CPU vectors into the ISR. This is the amount of execution time you can save per millisecond.

Which timer should you choose? The trade-off is difficulty of implementation vs. avoiding unnecessary interrupts. On the Mainstone platform, I measured the amount of time spent executing the ISR, as a conservative estimate of what the variable tick timer can save. In the ‘one spinning thread’ example, we took 99 unnecessary interrupts in one quantum. For each, the Mainstone ISR took 25µs to return SYSINTR_NOP. This corresponds to about a 2.5% loss in performance (25µs out of every millisecond). Hold on though! The ‘one spinning thread’ example is actually an extreme case. When the scheduler is interacting with hundreds of threads, it is not always the case that 99 out of 100 timer interrupts are ‘wasted’. Real scheduler behavior is so complex that your real savings with variable tick is, well… variable. Think of 2.5% as ‘the ballpark’, if not less. To accurately measure the potential gains of implementing variable tick on your system, I recommend timing one SYSINTR_NOP as I have done, and counting them while using your device; a sketch of that instrumentation follows.
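One way to do that counting, sketched on top of the fixed-tick ISR from earlier. The g_nop* counters are made up; read them out with the debugger or a debug IOCTL after exercising the device, and note that this misses the vectoring and kernel overhead, so it undercounts slightly:

    static DWORD g_nopCount;  // SYSINTR_NOPs taken
    static DWORD g_nopTicks;  // raw timer ticks spent handling them

    UINT32 OALTimerIntrHandler(void)
    {
        UINT32 start = *g_pTimerCount;
        UINT32 sysIntr;

        *g_pTimerMatch = *g_pTimerCount + TICKS_PER_MS;  // re-arm, 1 ms out
        CurMSec++;

        sysIntr = ((INT32)(CurMSec - dwReschedTime) >= 0) ? SYSINTR_RESCHED
                                                          : SYSINTR_NOP;
        if (sysIntr == SYSINTR_NOP)
        {
            g_nopCount++;                          // one more wasted tick
            g_nopTicks += *g_pTimerCount - start;  // time spent in the ISR
        }
        return sysIntr;
    }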

Currently, most platforms still use a fixed-tick timer. CEPC was used to gather data for the fixed-tick timer, and the Mainstone platform was used for variable-tick. FSample is a good fixed-tick implementation that still does variable idling. To learn more about the variable tick timer, or if you want to write one yourself, I suggest referencing the PXA27X (Bulverde) vartick timer code. Core timer handling is done in OALTimerInit, OALTimerIntrHandler, and OALTimerUpdateRescheduleTime.


Comments

  • Anonymous
    September 29, 2006
    Very enlightening!  After reading your writeup, I now understand the reschedule timer interrupt on my platform much better.

    Thanks!
  • Anonymous
    October 04, 2006
    Clear and understandable. Thanks!!!
  • Anonymous
    October 05, 2006
    The comment has been removed
  • Anonymous
    October 06, 2006
    Rui, my apologies. Thanks for pointing this out. I didn't mention that the Bulverde variable tick timer implementation was only added AFTER CE 5.0 released. If you install the Mainstone BSP Update for Windows CE 5.0 from microsoft.com (http://www.microsoft.com/downloads/details.aspx?FamilyID=BDF43D00-55B6-4E51-82A5-F0A8395D4903&displaylang=en), you will receive an updated Mainstone III BSP. If you then look in public\csp_pxa27x\oak\oal\arm\intel\pxa27x\timer\vartick\timer.c, you will find the OALTimerUpdateRescheduleTime I was referencing in this blog. I recommend the Bulverde implementation of OALTimerUpdateRescheduleTime over the one you found in our common code, because it is newer and is actually used by the Mainstone BSP. Hope this helps.
  • Anonymous
    October 06, 2006
    Thanks. Now it's clear to me :-)
  • Anonymous
    December 04, 2006
    http://www.windowsfordevices.com/articles/AT5251613143.html