SQL Server performance investigation
Introduction
I frequently help teams inside and outside of Microsoft investigate SQL Server database performance issues. I also monitor some large internal SharePoint databases to understand their performance characteristics and to make sure we are improving overall performance. Here are some of the queries I use to investigate performance issues and monitor performance. I hope you will find them useful.
SQL Server provides an extensive list of views and functions that are useful for understanding the internal behavior of SQL Server (see Microsoft SQL Server 2008 Internals by Kalen Delaney et al. for a good explanation of SQL Server's internals). Performance issues manifest themselves as excessive use of some system resource. In most applications, the resources of interest are memory and CPU. These matter in SQL Server too, but so do resources like locks, latches and storage bandwidth.
Most SQL Server resources are managed at the SQL Server instance level. In general, a SQL Server instance should correspond to a single application, so the queries below work at the instance level. Where sensible, I have added a commented-out line that will scope the query to the current database.
Performance analysis and improvement involve tradeoffs. It may not be cost effective to fix minor issues, so the queries below focus on highlighting the significant issues.
In general, the queries require view server state permission:
grant view server state to [login name];
Storage operations
SQL Server applications are frequently limited by disk bandwidth. Whenever SQL Server reads a data page, that is called a logical read. When the page is not in memory and must therefore be read from disk, that is called a physical read. The situation is slightly different for writes. Whenever SQL Server changes a data page, that is called a logical write. However, the page is not immediately written to disk. Instead it is queued and the lazy writer will eventually write it to disk. Since the page may be modified by several T-SQL statements before it is written to disk, a single T-SQL statement cannot be directly associated with the physical write. For example, code that appends rows to the end of a table may make many logical writes but cause relatively few physical writes.
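You can observe the logical and physical reads of an individual statement with set statistics io, which reports both counts (plus read-ahead reads) for each statement in a batch. A minimal example:
set statistics io on;
select count(*) from sys.objects; -- the Messages tab reports logical, physical and read-ahead reads for this statement
set statistics io off;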
Module logical reads
Here is a query to find the modules with the highest logical read volumes (sys.dm_exec_procedure_stats covers stored procedures; sys.dm_exec_trigger_stats provides the same data for triggers).
select top(10) D.total_physical_reads as [Total physical reads],
D.total_logical_reads as [Total logical reads],
D.execution_count as [Executions],
coalesce(quotename(db_name(D.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(D.object_id, D.database_id)), N'-') + N'.' + coalesce(quotename(object_name(D.object_id, D.database_id)), N'-') as [Module],
QP.query_plan as [Query plan]
from sys.dm_exec_procedure_stats as D
outer apply sys.dm_exec_query_plan(D.plan_handle) as QP
--where D.database_id = db_id()
order by [Total logical reads] desc;
If a module is performing more logical reads than expected, it may indicate poor logic or missing indexes. Adding the missing indexes will make some updates more expensive but overall it may improve the performance of the application. In SQL Server Management Studio (SSMS), you can click on the result set’s query plan column to see the query plan and further investigate the cause of unexpected logical reads.
Logical reads that do not result in physical reads may still cause issues if the pages should not otherwise be in memory. Essentially, the module is wasting memory that might be better used elsewhere or not needed at all.
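To see which objects are currently occupying the buffer pool, you can query sys.dm_os_buffer_descriptors. Here is a minimal sketch for the current database (the join through sys.allocation_units handles both in-row and LOB allocation units):
select top(10) coalesce(quotename(object_schema_name(P.object_id)), N'-') + N'.' + coalesce(quotename(object_name(P.object_id)), N'-') as [Object],
convert(decimal(38,2), count(*)/128.0) as [Buffer pool (MB)] -- each buffer holds one 8 KB page
from sys.dm_os_buffer_descriptors as BD
inner join sys.allocation_units as AU on AU.allocation_unit_id = BD.allocation_unit_id
inner join sys.partitions as P on (AU.type in (1, 3) and AU.container_id = P.hobt_id) or
(AU.type = 2 and AU.container_id = P.partition_id)
where BD.database_id = db_id()
group by P.object_id
order by [Buffer pool (MB)] desc;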
Module physical reads
Here is a query to find the modules with the highest physical read volumes.
select top(10) D.total_physical_reads as [Total physical reads],
D.total_logical_reads as [Total logical reads],
D.execution_count as [Executions],
coalesce(quotename(db_name(D.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(D.object_id, D.database_id)), N'-') + N'.' + coalesce(quotename(object_name(D.object_id, D.database_id)), N'-') as [Module],
QP.query_plan as [Query plan]
from sys.dm_exec_procedure_stats as D
outer apply sys.dm_exec_query_plan(D.plan_handle) as QP
--where D.database_id = db_id()
order by [Total physical reads] desc;
If a module is performing more physical reads than expected, it may indicate poor logic or missing indexes. Since physical reads take much more time than pure logical reads they are a major cause of latency in database operations.
Statement logical reads
Often individual T-SQL statements cause high logical and physical read volumes. Here is a query to find statements with the highest logical read volumes.
select top(10) sum(Q.total_physical_reads) as [Physical reads],
sum(Q.total_logical_reads) as [Logical reads],
sum(Q.execution_count) as [Execution count],
min(coalesce(quotename(db_name(Q.dbid)), N'-') + N'.' + coalesce(quotename(object_schema_name(Q.objectid, Q.dbid)), N'-') + N'.' + coalesce(quotename(object_name(Q.objectid, Q.dbid)), N'-')) as [Module],
min(Q.statement_text) as [Statement]
from (select QS.*,
ST.dbid as dbid,
ST.objectid as objectid,
substring(ST.text,
(QS.statement_start_offset/2) + 1,
((case statement_end_offset
when -1 then datalength(ST.text)
else QS.statement_end_offset end
- QS.statement_start_offset)/2) + 1) as statement_text
from sys.dm_exec_query_stats as QS
cross apply sys.dm_exec_sql_text(QS.sql_handle) as ST) as Q
--where Q.dbid = db_id()
group by Q.query_hash
order by [Logical reads] desc;
Statement physical reads
Here is a query to find statements with the highest physical read volumes.
select top(10) sum(Q.total_physical_reads) as [Physical reads],
sum(Q.total_logical_reads) as [Logical reads],
sum(Q.execution_count) as [Execution count],
min(coalesce(quotename(db_name(Q.dbid)), N'-') + N'.' + coalesce(quotename(object_schema_name(Q.objectid, Q.dbid)), N'-') + N'.' + coalesce(quotename(object_name(Q.objectid, Q.dbid)), N'-')) as [Module],
min(Q.statement_text) as [Statement]
from (select QS.*,
ST.dbid as dbid,
ST.objectid as objectid,
substring(ST.text,
(QS.statement_start_offset/2) + 1,
((case statement_end_offset
when -1 then datalength(ST.text)
else QS.statement_end_offset end
- QS.statement_start_offset)/2) + 1) as statement_text
from sys.dm_exec_query_stats as QS
cross apply sys.dm_exec_sql_text(QS.sql_handle) as ST) as Q
--where Q.dbid = db_id()
group by Q.query_hash
order by [Physical reads] desc;
These two queries can be particularly useful in environments where the application runs ad hoc queries directly rather than through stored procedures.
Module writes
Module and statement logical write volumes are often not comparable across modules and statements because of the lazy writer behavior mentioned above. Many logical writes that append to an index may have a lower actual impact than a few writes to random places in the same index. Thus the results of these queries need to be interpreted carefully.
Here is a query to find the modules with the highest logical write volumes.
select top(10) D.total_logical_writes as [Total logical writes],
D.execution_count as [Executions],
coalesce(quotename(db_name(D.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(D.object_id, D.database_id)), N'-') + N'.' + coalesce(quotename(object_name(D.object_id, D.database_id)), N'-') as [Module],
QP.query_plan as [Query plan]
from sys.dm_exec_procedure_stats as D
outer apply sys.dm_exec_query_plan(D.plan_handle) as QP
--where D.database_id = db_id()
order by [Total logical writes] desc;
Statement writes
Here is a query to find statements with the highest logical write volumes.
select top(10) sum(Q.total_logical_writes) as [Logical writes],
sum(Q.execution_count) as [Execution count],
min(coalesce(quotename(db_name(Q.dbid)), N'-') + N'.' + coalesce(quotename(object_schema_name(Q.objectid, Q.dbid)), N'-') + N'.' + coalesce(quotename(object_name(Q.objectid, Q.dbid)), N'-')) as [Module],
min(Q.statement_text) as [Statement]
from (select QS.*,
ST.dbid as dbid,
ST.objectid as objectid,
substring(ST.text,
(QS.statement_start_offset/2) + 1,
((case statement_end_offset
when -1 then datalength(ST.text)
else QS.statement_end_offset end
- QS.statement_start_offset)/2) + 1) as statement_text
from sys.dm_exec_query_stats as QS
cross apply sys.dm_exec_sql_text(QS.sql_handle) as ST) as Q
--where Q.dbid = db_id()
group by Q.query_hash
order by [Logical writes] desc;
Indexes
Often adding indexes can improve the performance of critical queries. Here is a query to find the indexes that are most likely to improve the overall performance of the database application if added.
select coalesce(quotename(db_name(MID.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(MID.object_id, MID.database_id)), N'-') + N'.' + coalesce(quotename(object_name(MID.object_id, MID.database_id)), N'-') as [Object],
round(MIGS.avg_total_user_cost *(MIGS.avg_user_impact/100.0) *(MIGS.user_seeks + MIGS.user_scans), 0) as [Relative cost],
round(MIGS.avg_total_user_cost, 2) [Average cost],
round(MIGS.avg_user_impact, 2) [Percentage improvement],
MIGS.user_scans as [User scans],
MIGS.user_seeks as [User seeks],
coalesce(MID.equality_columns, N'') as [Equality columns],
coalesce(MID.inequality_columns, N'') as [Inequality columns],
coalesce(MID.included_columns, N'') as [Included columns]
from sys.dm_db_missing_index_group_stats as MIGS
inner join sys.dm_db_missing_index_groups as MIG on MIG.index_group_handle = MIGS.group_handle
inner join sys.dm_db_missing_index_details as MID on MID.index_handle = MIG.index_handle
--where MID.database_id = db_id()
order by [Percentage improvement] desc;
Indexes should be added judiciously since they will slow down some parts of the database application (e.g. updates to the columns in the index) and increase the storage size of the database.
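For example, a row from the missing index query with equality column CustomerId, inequality column OrderDate and included column TotalDue on a hypothetical dbo.Orders table maps to an index like this (equality columns lead the key, followed by inequality columns, with the suggested included columns in the include clause):
create nonclustered index IX_Orders_CustomerId_OrderDate -- hypothetical name
on dbo.Orders(CustomerId, OrderDate) -- equality columns first, then inequality columns
include(TotalDue); -- included columns avoid bookmark lookups without widening the key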
Sometimes database applications contain indexes that are rarely used. These indexes slow down updates and increase storage and related costs (e.g. backup). Here is a query to find the usage of all indexes in a database.
select coalesce(quotename(db_name(US.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(US.object_id, US.database_id)), N'-') + N'.' + coalesce(quotename(object_name(US.object_id, US.database_id)), N'-') as [Object],
I.name as [Index],
convert(decimal(38,2), PS.[Page count]/128.0) as [Size (MB)],
(US.user_seeks + US.user_scans + US.user_lookups) as [User operations],
US.user_seeks as [User seeks],
US.user_scans as [User scans],
US.user_lookups as [User bookmark lookups],
US.user_updates as [User updates]
from sys.dm_db_index_usage_stats as US
inner join (select IPS.object_id, IPS.index_id, sum(IPS.page_count)
from sys.dm_db_index_physical_stats(db_id(), null, null, null, N'SAMPLED' /* For non-leaf pages use N'DETAILED'*/) as IPS
group by IPS.object_id, IPS.index_id) as PS(object_id, index_id, [Page count]) on PS.object_id = US.object_id and
PS.index_id = US.index_id
inner join sys.indexes as I on I.object_id = US.object_id and
I.index_id = US.index_id
where US.database_id = db_id()
order by [User operations], [Object], [Index];
Indexes with low use are candidates for removal.
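For example, a rarely used index on a hypothetical dbo.Orders table can be removed like this:
drop index IX_Orders_LegacyLookup on dbo.Orders; -- hypothetical index and table names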
Indexes may become fragmented in these ways:
1. The pages may become logically out-of-order on disk. This was once undesirable because logical order was closely related to physical order, sequential disk access was faster than non-sequential access, and indexes are frequently scanned in page order. However, this is rarely an issue now: RAID, SANs, Windows Server storage and many disk drives and controllers reorder pages on disk, so logical order no longer reflects physical order and in-order pages give no performance gain. Also, for technologies like SSDs, sequential physical access may give no performance gain anyway.
2. The pages may be non-full. Often, many pages in an index must be read to complete a SQL Server operation, and non-full pages carry less data per page, so non-full pages often mean more pages must be read from disk and memory buffers are used less efficiently. However, when data is inserted into a full page, the page splits into two half-full pages, which is also inefficient. In general, the best performance usually comes from targeting pages that are almost full but have enough free space that inserts rarely split a page.
There are numerous considerations in determining whether index fragmentation is likely to cause performance issues. For example, SQL Server I/O often operates on entire extents (groups of 8 pages of 8 KB each) rather than individual pages, so fragmentation in an index of just a few extents is unlikely to have a performance impact. These considerations lead to the following heuristic-based query for determining which indexes in a database need defragmentation.
select coalesce(quotename(db_name()), N'-') + N'.' + coalesce(quotename(object_schema_name(IPS.object_id)), N'-') + N'.' + coalesce(quotename(object_name(IPS.object_id)), N'-') as [Object],
I.name as [Index],
IPS.partition_number as [Partition],
IPS.index_level as [Level],
convert(decimal(38,2), IPS.page_count/128.0) as [Size (MB)],
IPS.avg_fragmentation_in_percent as [Mean fragmentation (%)],
IPS.avg_page_space_used_in_percent as [Mean space used (%)],
case
when IPS.avg_fragmentation_in_percent < 30.0 then N'Reorganize'
else N'Rebuild'
end as [Action]
from sys.dm_db_index_physical_stats(db_id(), null, null, null, N'SAMPLED' /* For non-leaf pages use N'DETAILED'*/) as IPS
inner join sys.indexes as I on I.object_id = IPS.object_id and
I.index_id = IPS.index_id
where IPS.alloc_unit_type_desc = 'IN_ROW_DATA' and
((IPS.page_count > 24 and IPS.avg_fragmentation_in_percent > 5.0) or -- Fragmented (only relevant where in-order pages are faster to read or write).
(IPS.page_count > 8 and IPS.avg_page_space_used_in_percent < (case I.fill_factor when 0 then 100 else I.fill_factor end)*0.9)) -- Compaction needed (a fill factor of 0 means 100).
order by [Object], [Index], [Partition], [Level];
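Acting on the [Action] column is a matter of running the corresponding alter index statement. A sketch with hypothetical index and table names:
alter index IX_Orders_CustomerId on dbo.Orders reorganize; -- always online; compacts and reorders leaf pages
alter index IX_Orders_CustomerId on dbo.Orders rebuild with (fillfactor = 90); -- rebuilds the whole B+-tree; add online = on where the edition supports it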
Sometimes it is more efficient to reorganize an index than to rebuild it; the heuristic query above also suggests which action to take. Here is a query that does not use any heuristics to filter the results:
select coalesce(quotename(db_name()), N'-') + N'.' + coalesce(quotename(object_schema_name(IPS.object_id)), N'-') + N'.' + coalesce(quotename(object_name(IPS.object_id)), N'-') as [Object],
I.name as [Index],
IPS.partition_number as [Partition],
IPS.index_level as [Level],
convert(decimal(38,2), IPS.page_count/128.0) as [Size (MB)],
IPS.avg_fragmentation_in_percent as [Mean fragmentation (%)], -- Only relevant where in-order pages are faster to read or write.
IPS.avg_page_space_used_in_percent as [Mean space used (%)]
from sys.dm_db_index_physical_stats(db_id(), null, null, null, N'SAMPLED' /* For non-leaf pages use N'DETAILED'*/) as IPS
inner join sys.indexes as I on I.object_id = IPS.object_id and I.index_id = IPS.index_id
order by [Object], [Index], [Partition], [Level];
For efficiency reasons, both queries sample leaf pages in the B+-trees. Using the DETAILED option instead will give data for all pages.
Storage
Storage is usually cheap compared to the engineering cost of reducing it. However, for databases that consume a large amount of storage it may be worth the engineering investment to reduce storage costs. In a storage cost analysis there are usually two values of interest: the number of rows and the mean size of each row. If the number of rows is unexpectedly large there may be an error in the application; when troubleshooting database size growth this value often indicates where the issue is. If the mean row size is unexpectedly large it may indicate over-indexing of large values.
Here is a query to find the approximate storage used by database objects:
select top(10) db_name() + N'.' + coalesce(quotename(object_schema_name(PS.object_id)), N'-') + N'.' + coalesce(quotename(object_name(PS.object_id)), N'-') as [Object],
convert(decimal(38,2), sum(PS.used_page_count)/128.0) as [Size (MB)],
sum(case when PS.index_id in(0, 1) then PS.row_count else 0 end) as [Rows],
(sum(PS.used_page_count)*8192)/nullif(sum(case when PS.index_id in(0, 1) then PS.row_count else 0 end), 0) as [Mean row size (Bytes)] -- 8192 bytes per page; nullif avoids division by zero for empty objects
from sys.dm_db_partition_stats as PS
group by db_name() + N'.' + coalesce(quotename(object_schema_name(PS.object_id)), N'-') + N'.' + coalesce(quotename(object_name(PS.object_id)), N'-')
order by [Size (MB)] desc;
The query runs quickly but is based on cached information that may be out-of-date. Here is a query to find the exact storage used by database objects:
select top (10) coalesce(quotename(db_name(D.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(PS.object_id, D.database_id)), N'-') + N'.' + coalesce(quotename(object_name(PS.object_id, D.database_id)), N'-') as [Object],
convert(decimal(38,2), sum(PS.page_count)/128.0) as [Size (MB)]
from sys.databases as D
cross apply sys.dm_db_index_physical_stats(D.database_id, null, null, null, N'DETAILED') as PS
--where D.database_id = db_id()
group by coalesce(quotename(db_name(D.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(PS.object_id, D.database_id)), N'-') + N'.' + coalesce(quotename(object_name(PS.object_id, D.database_id)), N'-')
order by [Size (MB)] desc;
Processor
Most SQL Server application performance issues are related to storage bandwidth, but occasionally processor capacity becomes the bottleneck.
Module processor usage
Here is a query to find the modules that are the most computationally expensive.
select top(10) convert(decimal(38, 6), D.total_worker_time)/1000000 as [CPU (secs)],
D.execution_count as [Executions],
coalesce(quotename(db_name(D.database_id)), N'-') + N'.' + coalesce(quotename(object_schema_name(D.object_id, D.database_id)), N'-') + N'.' + coalesce(quotename(object_name(D.object_id, D.database_id)), N'-') as [Module],
QP.query_plan as [Query plan]
from sys.dm_exec_procedure_stats as D
outer apply sys.dm_exec_query_plan(D.plan_handle) as QP
--where D.database_id = db_id()
order by D.total_worker_time desc;
Statement processor usage
Here is a query to find the statements that are the most computationally expensive.
select top(10) sum(convert(decimal(38, 6), Q.total_worker_time)/1000000) as [CPU (secs)],
sum(Q.execution_count) as [Execution count],
min(coalesce(quotename(db_name(Q.dbid)), N'-') + N'.' + coalesce(quotename(object_schema_name(Q.objectid, Q.dbid)), N'-') + N'.' + coalesce(quotename(object_name(Q.objectid, Q.dbid)), N'-')) as [Module],
min(Q.statement_text) as [Statement]
from (select QS.*,
ST.dbid as dbid,
ST.objectid as objectid,
substring(ST.text,
(QS.statement_start_offset/2) + 1,
((case statement_end_offset
when -1 then datalength(ST.text)
else QS.statement_end_offset end
- QS.statement_start_offset)/2) + 1) as statement_text
from sys.dm_exec_query_stats as QS
cross apply sys.dm_exec_sql_text(QS.sql_handle) as ST) as Q
--where Q.dbid = db_id()
group by Q.query_hash
order by [CPU (secs)] desc;
Waits
Some SQL Server resources cannot be used concurrently. SQL Server uses various mechanisms to control usage of these resources. When one SQL Server process needs to use a resource that another is currently using it must wait for the resource to be freed. Thus one operation can cause another operation to take longer than it would otherwise take.
Wait analysis requires some familiarity with SQL Server internals but is vital to tuning a high-performance database application. To understand the resources in contention during a time period, run the following statements at the beginning of the period.
checkpoint;
dbcc freeproccache with no_infomsgs;
dbcc dropcleanbuffers with no_infomsgs;
dbcc sqlperf('sys.dm_os_wait_stats', clear);
dbcc sqlperf('sys.dm_os_latch_stats', clear);
These statements reset internal counters and the state of the server to give more accurate results. At the end of the time period, run the following query:
select N'Wait total: ' + WS.wait_type as [Type],
WS.waiting_tasks_count as [Waits (number)],
convert(decimal(38,2), WS.wait_time_ms/1000.0) as [Total wait time (secs)],
WS.wait_time_ms/WS.waiting_tasks_count as [Average wait time (ms)]
from sys.dm_os_wait_stats as WS
where WS.waiting_tasks_count > 0
union all
select N'Wait signal: ' + WS.wait_type,
WS.waiting_tasks_count,
convert(decimal(38,2), WS.signal_wait_time_ms/1000.0),
WS.signal_wait_time_ms/WS.waiting_tasks_count
from sys.dm_os_wait_stats as WS
where WS.waiting_tasks_count > 0
union all
select N'Latch total: ' + LS.latch_class,
LS.waiting_requests_count,
convert(decimal(38,2), LS.wait_time_ms/1000.0),
LS.wait_time_ms/LS.waiting_requests_count
from sys.dm_os_latch_stats as LS
where LS.waiting_requests_count > 0
order by [Total wait time (secs)] desc;
See https://msdn.microsoft.com/en-us/library/ms179984.aspx for a description of wait types. To run these two scripts you will need to be a member of either the sysadmin or serveradmin role.
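For example, membership in the serveradmin role can be granted like this (using a placeholder login name):
exec sp_addsrvrolemember N'login name', N'serveradmin';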
For deeper analysis, the Windows Performance Toolkit can be used to understand the detailed behavior of a SQL Server application. SQL Server publishes data about its internals through Event Tracing for Windows (ETW), and this data can be combined with other ETW traces to build a detailed understanding of resource contention issues.
Wait analysis can be useful in troubleshooting the cause of latency in a database application. The following script reports which statements are blocking other statements (i.e. using a resource the blocked statement is waiting for).
set nocount on;
while 1=1
begin
declare @results table (
[Id] int identity not null primary key,
[Blocked module] nvarchar(776) not null,
[Blocked statement] nvarchar(max) not null,
[Blocking module] nvarchar(776) null,
[Blocking statement] nvarchar(max) null,
[Wait type] nvarchar(60) not null,
[Total wait time (MS)] bigint not null,
[Sessions waiting (count)] int not null
);
delete from @results;
with Statements([Blocked module], [Blocked statement start], [Blocked statement end],
[Blocking module], [Blocking statement start], [Blocking statement end],
[Wait type], [Total wait time (MS)], [Sessions waiting (count)])
as
(
select BlockedR.sql_handle as [Blocked Module],
BlockedR.statement_start_offset as [Blocked Statement start],
BlockedR.statement_end_offset as [Blocked Statement end],
BlockingR.sql_handle as [Blocking Module],
BlockingR.statement_start_offset as [Blocking statement start],
BlockingR.statement_end_offset as [Blocking statement end],
WT.wait_type as [Wait type],
sum(convert(bigint, WT.wait_duration_ms)) as [Total wait time (MS)],
count(*) as [Sessions waiting (count)]
from sys.dm_os_waiting_tasks as WT
inner join sys.dm_exec_requests as BlockedR on BlockedR.session_id = WT.session_id
left outer join sys.dm_exec_requests as BlockingR on BlockingR.session_id = WT.blocking_session_id
group by BlockedR.sql_handle, BlockedR.statement_start_offset, BlockedR.statement_end_offset,
BlockingR.sql_handle, BlockingR.statement_start_offset, BlockingR.statement_end_offset,
WT.wait_type
)
insert into @results([Blocked module], [Blocked statement],
[Blocking module], [Blocking statement],
[Wait type], [Total wait time (MS)], [Sessions waiting (count)])
select coalesce(quotename(db_name(BlockedText.dbid)), N'-') + N'.' + coalesce(quotename(object_schema_name(BlockedText.objectid, BlockedText.dbid)), N'-') + N'.' + coalesce(quotename(object_name(BlockedText.objectid, BlockedText.dbid)), N'-') as [Blocked module],
substring(BlockedText.text,
(S.[Blocked statement start]/2) + 1,
((case S.[Blocked statement end]
when -1 then datalength(BlockedText.text)
else S.[Blocked statement end]
end
- S.[Blocked statement start])/2) + 1) as [Blocked statement],
coalesce(quotename(db_name(BlockingText.dbid)), N'-') + N'.' + coalesce(quotename(object_schema_name(BlockingText.objectid, BlockingText.dbid)), N'-') + N'.' + coalesce(quotename(object_name(BlockingText.objectid, BlockingText.dbid)), N'-') as [Blocking module],
substring(BlockingText.text,
(S.[Blocking statement start]/2) + 1,
((case S.[Blocking statement end]
when -1 then datalength(BlockingText.text)
else S.[Blocking statement end]
end
- S.[Blocking statement start])/2) + 1) as [Blocking statement],
S.[Wait type] as [Wait type],
S.[Total wait time (MS)] as [Total wait time (MS)],
S.[Sessions waiting (count)] as [Sessions waiting (count)]
from Statements as S
cross apply sys.dm_exec_sql_text(S.[Blocked module]) as BlockedText
outer apply sys.dm_exec_sql_text(S.[Blocking module]) as BlockingText -- outer apply keeps rows whose blocker has no active request
--where S.[Total wait time (MS)] > 50
order by S.[Total wait time (MS)] desc
option (force order);
if exists(select * from @results)
begin
select getdate() as [Time],
[Blocked module], [Blocked statement],
[Blocking module], [Blocking statement],
[Wait type], [Total wait time (MS)], [Sessions waiting (count)]
from @results;
end;
waitfor delay '00:00:01';
end
Every second the script reports the blocked and blocking statements. Care is needed in interpreting the report because a blocking statement may itself be blocked waiting for another resource.
Script
The downloadable T-SQL script contains the above queries and some less commonly used scripts for:
· Finding the databases that have the highest read volumes.
· Finding the databases that have the highest write volumes.
· Finding the total SQL Server instance read and write volume.
· Finding the modules that are recompiled the most.
Comments
- Anonymous, December 09, 2012: Really helpful, thanks.
- Anonymous, August 29, 2013: Nice post, but I get the following error: "Correlated parameters or sub-queries are not supported by the inline function 'sys.dm_db_index_physical_stats'." What's the best way to solve this? Split the query and use the function?
- Anonymous, June 05, 2014: Very accurate information! Thank you so much!