

Why doesn’t sampling show the actual time spent in each function?

Some people have asked for a “wall clock time” column in the sampling profiler report.   Unfortunately, the actual time spent in a function cannot be reliably deduced from the collected data. 

Sampling counts “hits” on a function when a certain event occurs.  By default, this event is the CPU cycle counter reaching N cycles, but you can also sample based on other events, such as every Nth page fault or every Nth system call.

When an event occurs, if the CPU happens to be running in a process that is being sampled, and is running user-mode code, this counts as a hit on the current function and on every function above it on the stack.  The hit on the current function is counted as exclusive; the hits on its callers are counted as inclusive.
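To make the exclusive/inclusive distinction concrete, here is a minimal sketch of a stack-sampling loop in Python.  It samples on a plain wall-clock timer thread rather than on the hardware CPU-cycle events the profiler actually uses, and the names (`sample_stacks`, `leaf`, `caller`) are invented for the illustration:

```python
import collections
import sys
import threading
import time

def sample_stacks(target_thread_id, counts, stop, interval=0.001):
    """Periodically capture the target thread's stack and count hits.

    The top frame gets an exclusive hit; every frame on the stack
    (including the top one) gets an inclusive hit.
    """
    while not stop.is_set():
        frame = sys._current_frames().get(target_thread_id)
        if frame is not None:
            counts["exclusive"][frame.f_code.co_name] += 1
            while frame is not None:
                counts["inclusive"][frame.f_code.co_name] += 1
                frame = frame.f_back
        time.sleep(interval)

def leaf():
    # Burn CPU so samples land here.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def caller():
    total = 0
    for _ in range(50):
        total += leaf()
    return total

if __name__ == "__main__":
    counts = {"exclusive": collections.Counter(),
              "inclusive": collections.Counter()}
    stop = threading.Event()
    sampler = threading.Thread(
        target=sample_stacks,
        args=(threading.main_thread().ident, counts, stop),
    )
    sampler.start()
    caller()
    stop.set()
    sampler.join()
    # leaf() should dominate the exclusive counts, while caller()
    # shows up mostly through inclusive hits.
    print("exclusive:", counts["exclusive"].most_common(5))
    print("inclusive:", counts["inclusive"].most_common(5))
```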

Because of this, the number of hits does not represent the wall clock time spent in a function.  A function that is blocked waiting on a resource, waiting for a different process (e.g. calling a remote process), or just calling an expensive system call is not executing user-mode code while it waits, so it accrues few or no hits even though the clock keeps running.  Its hit count is therefore not proportional to the time spent in the function.
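A quick way to see the effect is to sample on consumed CPU time (a rough stand-in for the cycle-counter event) and compare a CPU-bound function with one that merely sleeps.  This is a Unix-only sketch; the function names are invented for the demo:

```python
import collections
import signal
import time

hits = collections.Counter()

def on_profile_tick(signum, frame):
    # ITIMER_PROF fires only while the process is consuming CPU time,
    # so frames that are blocked (e.g. sleeping) are almost never seen.
    hits[frame.f_code.co_name] += 1

def busy_work():
    total = 0
    for i in range(5_000_000):
        total += i
    return total

def blocked_work():
    # Takes a second of wall-clock time but uses almost no CPU.
    time.sleep(1.0)

if __name__ == "__main__":  # SIGPROF/setitimer are not available on Windows
    signal.signal(signal.SIGPROF, on_profile_tick)
    signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)  # ~100 CPU-time samples/sec
    busy_work()
    blocked_work()
    signal.setitimer(signal.ITIMER_PROF, 0)  # stop sampling
    # busy_work dominates; blocked_work gets few or zero hits despite
    # taking a full second of wall-clock time.
    print(hits)
```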
