Wall Clock Timer Tests (Compact 2013)
3/26/2014
Use the Wall Clock Drift Tests to measure drift on individual clocks. A good way to use the tests is to run the "Compare All Three Timers Drift Test - Busy Sleep" to confirm that the clocks do not drift relative to each other, and then run at least one of the individual drift tests to confirm that the clock does not drift relative to outside sources. Ideally, compare against the RTC, because this clock is not expected to drift with respect to an outside clock.
The test verifies the behavior and drift of the device timers (system timer, RTC, and high-performance counter) against a desktop PC clock, or against a known and accurate wall clock source provided by the user. The tests can find subtle timer drift issues that may cause hard-to-find, difficult-to-diagnose stability problems in the overall system.
The subtests measure drift on individual clocks, and allow you to compare the timers on the target device to external clock sources. These test cases can be used to catch a condition where the clocks for the GetTickCount function, the high-performance counter, and the Real-Time Clock agree with each other but do not keep accurate time. This condition might occur if all three clocks derive from a common clock that does not keep accurate time.
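The comparison principle can be sketched in a few lines. The example below uses Python clocks as stand-ins for the device timers (`time.monotonic()` in place of the GetTickCount/QueryPerformanceCounter clocks, `time.time()` in place of the RTC read through GetSystemTime); the actual tests run on the device against the Win32 APIs, so this is only an illustration of how drift between two clocks is computed.

```python
import time

def relative_drift_ppm(elapsed_a, elapsed_b):
    """Relative drift of clock A against clock B, in parts per million."""
    return (elapsed_a - elapsed_b) / elapsed_b * 1e6

# Sample two independent clocks over the same interval.
mono_start, wall_start = time.monotonic(), time.time()
time.sleep(0.5)  # the real tests wait hours, not fractions of a second
mono_end, wall_end = time.monotonic(), time.time()

drift = relative_drift_ppm(mono_end - mono_start, wall_end - wall_start)
print(f"monotonic vs wall-clock drift: {drift:+.1f} ppm")
```

Two clocks that agree with each other but share a common inaccurate source would show near-zero relative drift here, which is exactly why the tests also compare against an off-device clock.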
Test Prerequisites
Your device must meet the following requirements before you run this test.
The test should be run on a minimal OS design, such as one with only the following SYSGENs set to minimize the likelihood of interactions from other system components:
SYSGEN_NETWORKING=1
SYSGEN_SHELL=1
The following table shows the hardware requirements for the test:
| Requirements | Description |
|---|---|
| GetTickCount Clock | The clock that corresponds to the GetTickCount function |
| High-performance Counter | The clock that corresponds to the QueryPerformanceCounter function |
| Real-Time Clock (RTC) | The clock that corresponds to the GetSystemTime function |
The following table shows the software requirements for the test:
| Requirements | Description |
|---|---|
| Tux.exe | Tux test harness, required for executing the test |
| kato.dll | Kato logging engine, required for logging test data |
| oaltesttimers.dll | Library containing OAL Timers Test files |
Subtests
The table below lists the subtests included in this test.
| SubTest ID | Description |
|---|---|
| 6010 | Measures the drift between the system timer (GTC) and an off-device trusted clock |
| 6020 | Measures the drift between the high-performance counter (QPC) and an off-device trusted clock |
| 6030 | Measures the drift between the Real-Time Clock (RTC) and an off-device trusted clock |
Setting Up the Test
The tests compare the device timer tick to an off-device clock, which is either supplied manually by the user or obtained through the release filesystem (when available). If the release filesystem is available, the tests create a file in it and obtain the (presumed to be accurate) time from the host desktop system. Otherwise, the test relies on the user to verify the result against a trusted, accurate clock.
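The pass/fail decision reduces to comparing elapsed time on the device clock against elapsed time on the reference clock. A minimal sketch of that comparison logic (the function name and the 2-second tolerance shown here mirror the manual-verification guideline below; they are illustrative, not the actual implementation in oaltesttimers.dll):

```python
def check_drift(device_elapsed_s, host_elapsed_s, tolerance_s=2.0):
    """Pass if the device clock stayed within tolerance of the reference clock.

    Returns (passed, drift_in_seconds); positive drift means the
    device clock ran fast relative to the reference.
    """
    drift = device_elapsed_s - host_elapsed_s
    return abs(drift) <= tolerance_s, drift

# Example: a 3-hour run where the device clock gained 1.5 seconds.
ok, drift = check_drift(device_elapsed_s=10800.0, host_elapsed_s=10798.5)
print(f"drift={drift:+.1f}s -> {'PASS' if ok else 'FAIL'}")
```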
Running the Test
On default execution, the test runs the following command line: `tux -o -d oaltesttimers -x6010,6020,6030`
Optional Command Line Information
| Command Line Parameter | Description |
|---|---|
| -wcRunTime <time> | Specifies the runtime for the test in seconds. The default time per subtest is 3 hours (according to the device clock). |
If the release directory is not available to the run-time image, you will need to perform the test manually. Before starting one of these tests, obtain a stopwatch or other external clock that you can use to measure the length of events during the test.
The test case displays a start time, waits for the length of time that you specified on the command line, and then displays a stop time. The test displays messages that tell you when to start your stopwatch, and then periodically displays messages that tell you how much time remains before the test ends; these messages appear more frequently as the test nears completion. Observe them and prepare to stop the stopwatch when the test completes. If the time that you record matches the time reported by the test, the clock kept accurate time; if it differs, the clock does not keep accurate time.
These test cases display the following error message if a timer suddenly jumps forward in time during the test: "ERROR: The current time overshot the stop time for the test." This error causes the test to end. This error occurs if the Sleep function does not behave correctly or if a timer does not behave correctly.
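The wait-and-check behavior described above can be sketched as a loop that sleeps in short increments and compares the current clock reading against the expected stop time, raising the overshoot error if the clock jumps past it (illustrative Python; the function name and margin are hypothetical, not taken from the test source):

```python
import time

def wait_with_overshoot_check(run_time_s, clock=time.monotonic,
                              sleep_s=0.1, overshoot_margin_s=1.0):
    """Sleep until run_time_s has elapsed on `clock`.

    Returns the elapsed time, or raises RuntimeError if the clock
    suddenly jumps well past the stop time (a forward time jump).
    """
    start = clock()
    stop = start + run_time_s
    while True:
        now = clock()
        if now >= stop:
            if now > stop + overshoot_margin_s:
                raise RuntimeError(
                    "ERROR: The current time overshot the stop time for the test.")
            return now - start
        time.sleep(min(sleep_s, stop - now))

elapsed = wait_with_overshoot_check(0.5)
print(f"elapsed: {elapsed:.2f}s")
```

A misbehaving Sleep implementation or a timer that jumps forward would land the clock far beyond the stop time in a single step, which is the condition the error message reports.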
Verifying the Test
- If the "\release" directory is available from the run-time image, the tests will use the desktop time as a point of comparison, and will pass/fail the test accordingly.
- If the "\release" directory is not available, the test will print the runtime of the test and ask the user to verify it against the user's trusted timer. If the resulting time is within +/- 2 seconds, the test engineer should pass the test.
Troubleshooting the Test
- Decrease the runtime of the test with the -wcRunTime option and see whether the error is still reproducible.
- Verify that there is no consistent drift by running the test with an increased runtime (using the -wcRunTime option).
- Ensure that the code is measuring the trusted clock source accurately and consistently.
- Determine the point of failure and record the exact error message. Check that the setup steps were followed and that all prerequisites were met. If the source code is available, examine the point of failure in code to see if any additional information can be gathered about the failure domain.