

PerfLog Parsing Considerations and Examples (Windows Embedded CE 6.0)

1/6/2010

Certain strings that sometimes appear in the output log file can be safely skipped. The following table shows the format of such strings.

String description        Output format
Timer calibration         ## PERF ## APP [app_name] CAL [calibration_timing]
CPU calibration           ## PERF ## APP [app_name] CPU CAL [calibration_timing]
CPU monitor frequency     ## PERF ## APP [app_name] CPU MON [app_name] FREQ [frequency]

If the CPU or Memory logging information is not required for performance measurement, any Marker Registration or Event strings related to system monitoring can also be ignored in the parsing process.
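
For illustration, the following is a minimal sketch in Python of how a parser might recognize and skip these strings. It assumes the log is read line by line as plain text and that the bracketed values are numeric, as in the examples below; the helper name is illustrative, not part of PerfLog.

    import re

    # Informational PerfLog strings (timer calibration, CPU calibration, and
    # CPU monitor frequency) that carry no per-test timing data.
    SKIP_PATTERNS = [
        re.compile(r"^## PERF ## APP \[[^\]]+\] CAL \[\d+\]"),
        re.compile(r"^## PERF ## APP \[[^\]]+\] CPU CAL \[\d+\]"),
        re.compile(r"^## PERF ## APP \[[^\]]+\] CPU MON \[[^\]]+\] FREQ \[\d+\]"),
    ]

    def is_skippable(line):
        """Return True if the line is one of the skippable calibration strings."""
        stripped = line.strip()
        return any(pattern.match(stripped) for pattern in SKIP_PATTERNS)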

Example #1

The following is a simple example of a log where the performance tests run sequentially and do not recycle Marker IDs. No CPU or Memory usage information is logged.

## PERF ## OSVERSION =[6.0] BUILD=[1234]

## PERF ## PLATFORM=[CEPC] CPU=[x86]

## PERF ## DEVNAME=[CEPC]

## PERF ## REGISTERED APP [myperfapp] PROCCESSID [0x03d90056]

## PERF ## RESOLUTION [1193180] TICKS PER SECOND

## PERF ## APP [myperfapp] CAL [15]

## PERF ## APP [myperfapp] CAL [12]

## PERF ## APP [myperfapp] CAL [13]

## PERF ## REGISTERED MARKER [Test=WriteFile] AS [0] BY APP [myperfapp]

## PERF ## APP [myperfapp] EVT [0] DUR [122519]

## PERF ## APP [myperfapp] EVT [0] DUR [108950]

## PERF ## APP [myperfapp] EVT [0] DUR [110128]

## PERF ## APP [myperfapp] EVT [0] DUR [109121]

## PERF ## REGISTERED MARKER [Test=ReadFile] AS [1] BY APP [myperfapp]

## PERF ## APP [myperfapp] EVT [1] DUR [54899]

## PERF ## APP [myperfapp] EVT [1] DUR [40007]

## PERF ## APP [myperfapp] EVT [1] DUR [41775]

## PERF ## APP [myperfapp] EVT [1] DUR [40279]

This log contains the following items:

  1. Resolution information: on this machine there are 1193180 clock ticks per second.
  2. First registered test marker: the WriteFile test registered a timer marker with Marker ID 0.
  3. Four timer logging events associated with Marker ID = 0. The WriteFile test started and stopped the timer 4 times, resulting in 4 duration values.
  4. Second registered test marker: the ReadFile test registered a timer marker with Marker ID 1.
  5. Four timer logging events associated with Marker ID = 1. The ReadFile test started and stopped the timer 4 times, resulting in 4 duration values.

Once we’ve extracted the relevant information from the log, we can perform the following calculations; a quick scripted check of this arithmetic follows the list:

  1. The average duration for the WriteFile test was (122519 + 108950 + 110128 + 109121) / 4 ≈ 112680 ticks. At a resolution of 1193180 ticks per second, the WriteFile test therefore ran for approximately 0.09 seconds on average.
  2. Using the same method, the average duration for the ReadFile test was approximately 0.04 seconds.
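
The short Python sketch below recomputes these averages from the duration values copied out of the Example #1 log; it is a worked check of the arithmetic above, not part of PerfLog.

    # Tick counts copied from the Example #1 log; the resolution comes from
    # the RESOLUTION line.
    RESOLUTION = 1193180  # ticks per second

    writefile_ticks = [122519, 108950, 110128, 109121]
    readfile_ticks = [54899, 40007, 41775, 40279]

    for name, ticks in [("WriteFile", writefile_ticks), ("ReadFile", readfile_ticks)]:
        average = sum(ticks) / len(ticks)
        print(f"{name}: {average:.0f} ticks, {average / RESOLUTION:.2f} seconds")
    # Prints approximately 112680 ticks / 0.09 seconds for WriteFile and
    # 44240 ticks / 0.04 seconds for ReadFile.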

Example #2

The following is an example of a log where two tests were started in parallel, and their performance output is interleaved in the log. No CPU or Memory usage information is logged.

## PERF ## OSVERSION =[6.0] BUILD=[1234]

## PERF ## PLATFORM=[CEPC] CPU=[x86]

## PERF ## DEVNAME=[CEPC]

## PERF ## REGISTERED APP [myperfapp] PROCCESSID [0x03d90056]

## PERF ## RESOLUTION [1193180] TICKS PER SECOND

## PERF ## APP [myperfapp] CAL [11]

## PERF ## APP [myperfapp] CAL [12]

## PERF ## APP [myperfapp] CAL [14]

## PERF ## REGISTERED MARKER [Test=ReadFile] AS [1] BY APP [myperfapp]

## PERF ## REGISTERED MARKER [Test=WriteFile] AS [0] BY APP [myperfapp]

## PERF ## APP [myperfapp] EVT [0] DUR [122519]

## PERF ## APP [myperfapp] EVT [1] DUR [54899]

## PERF ## APP [myperfapp] EVT [0] DUR [108950]

## PERF ## APP [myperfapp] EVT [0] DUR [110128]

## PERF ## APP [myperfapp] EVT [1] DUR [40007]

## PERF ## APP [myperfapp] EVT [0] DUR [109121]

## PERF ## APP [myperfapp] EVT [1] DUR [41775]

## PERF ## APP [myperfapp] EVT [1] DUR [40279]

In this example the timing data is the same as in the previous example, but the output from the two tests is interleaved. To parse this correctly, the parser must first note that tests with Marker IDs 1 and 0 were registered. As it scans the output, it must determine which test each event belongs to based on its Marker ID (0 or 1), and internally record, tally, and/or average the duration data until no more data remains in the log file.
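
A minimal parsing sketch in Python follows, assuming the log is available as a plain text file; the function name and the shape of the result are illustrative rather than part of PerfLog. Because durations are accumulated per Marker ID as the file is scanned, the same approach handles both the sequential log of Example #1 and the interleaved log above.

    import re
    from collections import defaultdict

    RESOLUTION_RE = re.compile(r"^## PERF ## RESOLUTION \[(\d+)\] TICKS PER SECOND")
    MARKER_RE = re.compile(
        r"^## PERF ## REGISTERED MARKER \[(?P<name>[^\]]+)\] AS \[(?P<id>\d+)\]")
    EVENT_RE = re.compile(
        r"^## PERF ## APP \[[^\]]+\] EVT \[(?P<id>\d+)\] DUR \[(?P<dur>\d+)\]")

    def average_durations(path):
        """Return the average duration, in seconds, for each registered marker."""
        resolution = None
        marker_names = {}                # Marker ID -> registered test name
        durations = defaultdict(list)    # test name -> duration values in ticks
        with open(path) as log:
            for line in log:
                line = line.strip()
                if (m := RESOLUTION_RE.match(line)):
                    resolution = int(m.group(1))
                elif (m := MARKER_RE.match(line)):
                    marker_names[m.group("id")] = m.group("name")
                elif (m := EVENT_RE.match(line)):
                    # The Marker ID determines which test the duration belongs
                    # to, so interleaved events are tallied correctly.
                    durations[marker_names[m.group("id")]].append(int(m.group("dur")))
        return {name: (sum(d) / len(d)) / resolution for name, d in durations.items()}

For the logs in Examples #1 and #2, this yields approximately 0.094 seconds for Test=WriteFile and 0.037 seconds for Test=ReadFile.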

Example #3

In this example a performance Marker ID is reused within the test application: the ReadFile test registers with the same Marker ID (0) that the WriteFile test used earlier. The rest of the data is the same as in the previous two examples.

## PERF ## OSVERSION =[6.0] BUILD=[1234]

## PERF ## PLATFORM=[CEPC] CPU=[x86]

## PERF ## DEVNAME=[CEPC]

## PERF ## REGISTERED APP [myperfapp] PROCCESSID [0x03d90056]

## PERF ## RESOLUTION [1193180] TICKS PER SECOND

## PERF ## APP [myperfapp] CAL [15]

## PERF ## APP [myperfapp] CAL [12]

## PERF ## APP [myperfapp] CAL [13]

## PERF ## REGISTERED MARKER [Test=WriteFile] AS [0] BY APP [myperfapp]

## PERF ## APP [myperfapp] EVT [0] DUR [122519]

## PERF ## APP [myperfapp] EVT [0] DUR [108950]

## PERF ## APP [myperfapp] EVT [0] DUR [110128]

## PERF ## APP [myperfapp] EVT [0] DUR [109121]

## PERF ## REGISTERED MARKER [Test=ReadFile] AS [0] BY APP [myperfapp]

## PERF ## APP [myperfapp] EVT [0] DUR [54899]

## PERF ## APP [myperfapp] EVT [0] DUR [40007]

## PERF ## APP [myperfapp] EVT [0] DUR [41775]

## PERF ## APP [myperfapp] EVT [0] DUR [40279]

To parse this correctly, the parser must recognize that the ReadFile test has recycled Marker ID 0, so that any new data seen in the log no longer applies to the WriteFile test but should instead be tabulated toward the ReadFile test.
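
One way to handle this, sketched below in Python under the same assumptions as the earlier sketch, is to rebind the Marker ID to the most recently registered test name whenever a registration line is seen, so that subsequent events are credited to the new test.

    import re
    from collections import defaultdict

    MARKER_RE = re.compile(
        r"^## PERF ## REGISTERED MARKER \[(?P<name>[^\]]+)\] AS \[(?P<id>\d+)\]")
    EVENT_RE = re.compile(
        r"^## PERF ## APP \[[^\]]+\] EVT \[(?P<id>\d+)\] DUR \[(?P<dur>\d+)\]")

    def tally_durations(lines):
        """Tally durations per test name, allowing Marker IDs to be recycled."""
        current_name = {}               # Marker ID -> name from the latest registration
        durations = defaultdict(list)   # test name -> duration values in ticks
        for line in lines:
            line = line.strip()
            if (m := MARKER_RE.match(line)):
                # Re-registering an ID rebinds it; later events go to the new test.
                current_name[m.group("id")] = m.group("name")
            elif (m := EVENT_RE.match(line)):
                durations[current_name[m.group("id")]].append(int(m.group("dur")))
        return durations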

Example #4

This example shows a log file that contains CPU and Memory usage information.

## PERF ## OSVERSION=[6.0] BUILD=[1908]

## PERF ## PLATFORM=[CEPC] CPU=[x86]

## PERF ## DEVNAME=[CEPC]

## PERF ## REGISTERED APP [myperfapp] PROCCESSID [0x03d3002e]

## PERF ## RESOLUTION [1193180] TICKS PER SECOND

## PERF ## APP [myperfapp] CAL [14]

## PERF ## APP [myperfapp] CAL [14]

## PERF ## APP [myperfapp] CAL [13]

## PERF ## REGISTERED MARKER [CPU: myperfapp] AS [103] BY APP [myperfapp]

## PERF ## APP [myperfapp] CPU CAL [74328]

## PERF ## REGISTERED MARKER [MEM: myperfapp] AS [102] BY APP [myperfapp]

## PERF ## APP [myperfapp] CPU MON [myperfapp] FREQ [100]

## PERF ## REGISTERED MARKER [Test=CeWriteRecord] AS [10] BY APP [myperfapp]

## PERF ## APP [myperfapp] EVT [10] DUR [320]

## PERF ## APP [myperfapp] EVT [10] DUR [295]

## PERF ## APP [myperfapp] EVT [10] DUR [302]

## PERF ## APP [myperfapp] EVT [10] DUR [295]

## PERF ## APP [myperfapp] EVT [103] CPU [77.205093]

## PERF ## APP [myperfapp] EVT [10] DUR [475]

## PERF ## APP [myperfapp] EVT [102] MEM [10080256]

## PERF ## APP [myperfapp] EVT [10] DUR [297]

## PERF ## APP [myperfapp] EVT [10] DUR [298]

## PERF ## APP [myperfapp] EVT [10] DUR [303]

## PERF ## APP [myperfapp] EVT [103] CPU [74.625984]

## PERF ## APP [myperfapp] EVT [10] DUR [589]

## PERF ## APP [myperfapp] EVT [102] MEM [10080256]

## PERF ## APP [myperfapp] EVT [10] DUR [335]

The log shows the Memory and CPU monitoring threads being registered with Marker IDs 102 and 103, respectively. The log then shows results for the CeWriteRecord test interleaved with samples from the system monitoring threads. To parse this data, the parser must be able to separate the test timing information (here, Marker ID = 10) from the CPU and Memory information.

There are many meaningful ways of processing the CPU and Memory statistics from the log file. For instance, the parser could record the fact that the CPU and Memory monitoring threads were active while the CeWriteRecord test was running, tabulate their samples, and output them together with the CeWriteRecord test statistics.
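
As one possible approach, the Python sketch below splits the event lines into test timings, CPU samples, and Memory samples based on whether an event carries DUR, CPU, or MEM; the function name and the returned structure are illustrative only. For Example #4 this groups the ten DUR values under Marker ID 10, two CPU samples, and two Memory samples.

    import re
    from collections import defaultdict

    EVENT_RE = re.compile(
        r"^## PERF ## APP \[[^\]]+\] EVT \[(?P<id>\d+)\] "
        r"(?P<kind>DUR|CPU|MEM) \[(?P<value>[\d.]+)\]")

    def split_events(lines):
        """Separate test timing events from CPU and Memory monitoring samples."""
        timings = defaultdict(list)      # Marker ID -> duration values in ticks
        cpu_samples, mem_samples = [], []
        for line in lines:
            m = EVENT_RE.match(line.strip())
            if not m:
                continue
            kind, value = m.group("kind"), m.group("value")
            if kind == "DUR":
                timings[m.group("id")].append(int(value))
            elif kind == "CPU":
                cpu_samples.append(float(value))   # CPU load, in percent
            else:
                mem_samples.append(int(value))     # memory in use, in bytes
        return timings, cpu_samples, mem_samples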

See Also

Other Resources

Managing Performance Results Using PerfLog