My new favourite diagnostic query for CPU

So, what do you do when you’ve been running a stress test, SQL Server CPU has been maxing out, and you weren’t running Profiler (to analyse in RML Utilities), so you have no record of what happened? The answer: provided the DMVs haven’t been cleared by the time you get to the box, the following will give you a fighting chance of finding the most CPU-intensive statements, and maybe even their plans:
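The stats only survive as long as the plan cache does (a restart or a DBCC FREEPROCCACHE wipes them), so as a quick sanity check you can ask how far back sys.dm_exec_query_stats actually reaches before trusting the numbers (a small sketch of my own, using the DMV's creation_time and last_execution_time columns):

-- If oldest_cached_plan predates your stress test, the stats below
-- should cover the whole run; if not, the cache has been cleared or
-- aged out since, and the numbers only tell part of the story.
SELECT MIN(creation_time)       AS oldest_cached_plan,
       MAX(last_execution_time) AS most_recent_execution
FROM sys.dm_exec_query_stats;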

with cte as (
SELECT top 100 percent
    query_stats.plan_handle,
    SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
    MIN(query_stats.statement_text) AS "Statement Text",
    MAX(query_stats.execution_count) as Execution_Count,
    SUM(query_stats.total_worker_time) as Total_Worker_Time
FROM
(
    SELECT QS.*,
        SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
            ((CASE QS.statement_end_offset
                WHEN -1 THEN DATALENGTH(ST.text)
                ELSE QS.statement_end_offset
              END - QS.statement_start_offset)/2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS QS
    CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST
) as query_stats
GROUP BY query_stats.plan_handle
ORDER BY 2 DESC)
SELECT cte.*, PL.query_plan
FROM cte CROSS APPLY sys.dm_exec_query_plan(cte.plan_handle) as PL
ORDER BY Total_Worker_Time desc, Execution_Count desc

Hopefully you’ll recognise it as a slightly re-hashed version of the BOL Sample. Thanks to Stuart Leeks for letting me use this in his Lab.

Note that this returns a lot of data (our busy box returned 24,000 rows and 750 MB), so be aware that running it will record its own activity in the stats.
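If that volume is a problem, one way to trim it (my own variation, not from the original post) is to cap the rows and skip one-off statements before pulling back the XML plans, which are what make the result set so heavy. This sketch drops the per-statement text extraction for brevity and just ranks whole plans:

-- Trimmed-down variant: top 50 plans by average CPU, ignoring
-- statements that only executed once, so far fewer (large) XML
-- plans come back.
WITH cte AS (
    SELECT TOP 50
        QS.plan_handle,
        SUM(QS.total_worker_time) / SUM(QS.execution_count) AS avg_cpu_time,
        MAX(QS.execution_count) AS execution_count,
        SUM(QS.total_worker_time) AS total_worker_time
    FROM sys.dm_exec_query_stats AS QS
    GROUP BY QS.plan_handle
    HAVING MAX(QS.execution_count) > 1   -- skip one-off statements
    ORDER BY avg_cpu_time DESC)
SELECT cte.*, PL.query_plan
FROM cte
CROSS APPLY sys.dm_exec_query_plan(cte.plan_handle) AS PL
ORDER BY cte.total_worker_time DESC, cte.execution_count DESC;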

 

Original post by Ryan Simpson on November 12th 2010, here: https://rionisimpsoni.wordpress.com/2010/11/12/my-new-favourate-diagnostic-query-for-cpu/