My Favorite Query For Investigating SQL Server Performance

I work in a lab environment, often examining running SQL workloads for performance analysis.  This is a query I've used many times to see what's running _right now_ on a server and what kind of resources it's using.  It dumps the running queries and query plans and takes a snapshot of the system wait stats.  One cool thing is that it adds CPU time as a "wait stat" (the CPU_USED row in the output).  It's kind of sideways to think about CPU time as a wait stat, but that's what eats up the time when a session isn't in any other wait, and it really helps you get a feel for whether the other waits you see are really a problem.  For instance, 6,000ms of PAGEIOLATCH_SH doesn't look so important next to 90,000ms of CPU time.

The batch should take about 5 seconds to run, since it has a 5-second WAITFOR used to separate the two snapshots of sys.dm_os_wait_stats, but it's not an expensive query.
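The pattern at work here, in miniature, is: snapshot a cumulative counter, wait a known interval, snapshot again, and divide the delta by the interval.  A stripped-down sketch of just that pattern (totaling all of sys.dm_os_wait_stats rather than keeping per-wait rows, as the full script below does):

```sql
-- Minimal snapshot/delta sketch: capture a cumulative counter twice,
-- a known interval apart, and report the change per second.
declare @before bigint, @after bigint;

select @before = sum(wait_time_ms) from sys.dm_os_wait_stats;
waitfor delay '00:00:05';
select @after = sum(wait_time_ms) from sys.dm_os_wait_stats;

select (@after - @before) / 5.0 as total_wait_ms_per_sec;
```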

One caveat is that I run this query on lab systems.  The mechanism it uses to aggregate CPU time relies on the @@CPU_BUSY function, which doesn't return the correct value on systems that have been running for a long time.

Anyway, here it is, and let me know if you can think of a more robust replacement for @@CPU_BUSY:
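One possible replacement, offered as a sketch I haven't stress-tested here: on SQL Server 2016 and later, sys.dm_os_schedulers exposes a total_cpu_usage_ms column, which is a bigint and so shouldn't wrap around the way the int-valued @@CPU_BUSY can on long uptimes.  Something like this could stand in for the CPU_USED branch of the UNION below:

```sql
-- A possible replacement for the CPU_USED row, assuming SQL Server 2016+:
-- sys.dm_os_schedulers.total_cpu_usage_ms is a bigint, so it should not
-- overflow the way the int-valued @@CPU_BUSY does on long-running systems.
select 'CPU_USED' as wait_type,
       sum(total_cpu_usage_ms) as wait_time_ms
from sys.dm_os_schedulers
where status = 'VISIBLE ONLINE';
```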

 

 select s.session_id, 
 s.host_name, 
 r.status,
 r.wait_type,
 r.wait_time,
 r.last_wait_type,
 r.total_elapsed_time,
 r.logical_reads,
 r.reads,
 r.writes,
 SUBSTRING(st.text, (r.statement_start_offset/2)+1, 
   ((CASE r.statement_end_offset 
    WHEN -1 THEN DATALENGTH(st.text)
    ELSE r.statement_end_offset END 
   - r.statement_start_offset)/2) + 1) AS statement_text,
 qp.query_plan
 from sys.dm_exec_requests r
 join sys.dm_exec_sessions s
  on r.session_id = s.session_id 
 outer apply sys.dm_exec_query_plan(r.plan_handle) qp
 outer apply sys.dm_exec_sql_text(r.sql_handle) st
 where r.plan_handle is not null
   and r.session_id <> @@spid
 order by logical_reads desc
 
 declare @interval int = 5
 
 declare @waitfor varchar(50) =
    cast(@interval/(60*60) as varchar(10)) + ':'
  + cast((@interval%(60*60))/60 as varchar(10)) + ':'
  + right('0' + cast(@interval%60 as varchar(10)), 2)
 
 declare @t table (wait_type varchar(50) collate SQL_Latin1_General_CP1_CI_AS, wait_time_ms bigint )
 
 insert into @t
 select wait_type, wait_time_ms
 from sys.dm_os_wait_stats
 union all 
 -- @@TIMETICKS is microseconds per tick, so ticks * (@@TIMETICKS / 1000.) gives milliseconds
 select 'CPU_USED', @@CPU_BUSY * (@@TIMETICKS / 1000.)
 
 waitfor delay @waitfor
 
 select t.wait_type, (s.wait_time_ms - t.wait_time_ms) / cast(@interval as float) wait_time_ms_per_sec
 from (
   select wait_type, wait_time_ms
   from sys.dm_os_wait_stats
   union all 
   select 'CPU_USED', @@CPU_BUSY * (@@TIMETICKS / 1000.)
 ) s
 join @t t
 on s.wait_type collate SQL_Latin1_General_CP1_CI_AS 
  = t.wait_type collate SQL_Latin1_General_CP1_CI_AS
 where s.wait_type not in ('DIRTY_PAGE_POLL',
 'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
 'DISPATCHER_QUEUE_SEMAPHORE',
 'SQLTRACE_FILE_BUFFER',
 'BROKER_RECEIVE_WAITFOR',
 'XE_DISPATCHER_WAIT',
 'CHECKPOINT_QUEUE',
 'BROKER_EVENTHANDLER',
 'CLR_AUTO_EVENT',
 'FT_IFTS_SCHEDULER_IDLE_WAIT',
 'ONDEMAND_TASK_QUEUE',
 'XE_TIMER_EVENT',
 'REQUEST_FOR_DEADLOCK_SEARCH',
 'WAITFOR',
 'LAZYWRITER_SLEEP',
 'BROKER_TASK_STOP',
 'LOGMGR_QUEUE',
 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
 'SLEEP_TASK',
 'BROKER_TO_FLUSH',
 'WAIT_XTP_OFFLINE_CKPT_NEW_LOG','SP_SERVER_DIAGNOSTICS_SLEEP') 
 and (s.wait_time_ms - t.wait_time_ms) > 0
 order by wait_time_ms_per_sec desc


Comments

  • Anonymous
    May 01, 2013
    This is really helpful...


  • Anonymous
    September 03, 2013
    I'm getting the same error.

  • Anonymous
    September 03, 2013
    What's your database collation?

  • Anonymous
    February 11, 2014
    Thanks You can fix collation error with collate instruction : join @t t on s.wait_type collate SQL_Latin1_General_CP1_CI_AS= t.wait_type collate SQL_Latin1_General_CP1_CI_AS where s.wait_type not in ('DIRTY_PAGE_POLL',


  • Anonymous
    May 12, 2015
    @ixpe    Thanks.  I made your suggested change.