Analyze database watcher monitoring data (preview)
Applies to: Azure SQL Database, Azure SQL Managed Instance
In addition to using dashboards in the Azure portal, or building visualizations to view and analyze SQL monitoring data in Power BI, Grafana, Azure Data Explorer, or Real-Time Analytics in Microsoft Fabric, you can query the monitoring data store directly.
This article contains samples of KQL and T-SQL queries that can help you get started with analyzing collected monitoring data.
Use KQL to analyze monitoring data
The recommended way to analyze collected monitoring data is with the Kusto Query Language (KQL). KQL is optimal for querying telemetry, metrics, and logs. It provides extensive support for text search and parsing, time series operators and functions, analytics and aggregation, and many other language constructs that facilitate data analysis.
KQL is conceptually similar to SQL. It operates on schema entities such as tables and columns, and supports relational operations such as project, restrict, join, and summarize, corresponding to the SELECT, JOIN, WHERE, and GROUP BY clauses in SQL.
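A minimal sketch of this correspondence, using the resource utilization dataset that appears later in this article (the T-SQL statement in the comment is for comparison only, not part of the dataset documentation):

```kusto
// T-SQL: SELECT database_name, COUNT(*) AS sample_count
//        FROM sqldb_database_resource_utilization
//        WHERE replica_type = 'Primary'
//        GROUP BY database_name;
sqldb_database_resource_utilization
| where replica_type == "Primary"                     // WHERE
| summarize sample_count = count() by database_name   // GROUP BY with COUNT(*)
| project database_name, sample_count                 // SELECT column list
```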
To write and execute KQL queries, you can use Kusto Explorer or the Azure Data Explorer web UI. Kusto Explorer is fully featured Windows desktop software, while the Azure Data Explorer web UI lets you execute KQL queries and visualize results in a browser on any platform.
You can also use these tools to query a database in Real-Time Analytics in Microsoft Fabric. To connect, add a new connection using the query URI of your Real-Time Analytics database. Additionally, if you use Real-Time Analytics, you can analyze monitoring data with KQL querysets. A KQL queryset can be saved as a shareable Fabric artifact and used to create Power BI reports.
If you're new to KQL, the following resources can help you get started:
The following samples can help you write your own KQL queries to view and analyze collected SQL monitoring data. You can also use these samples as a starting point for building your own data visualizations and dashboards.
Use KQL to query resource consumption over time
In this example, the query returns resource consumption metrics (CPU, workers, log write throughput, etc.) for the primary replica of a database, elastic pool, or SQL managed instance over the past hour. In addition to returning the result set, it visualizes the result set as a time chart.
In this and other examples, change the variables in the let statements to match the names of your server, database, elastic pool, or SQL managed instance. To use a different time interval, change the duration variable. For more information, see timespan literals.
let logicalServer = @"your-server-name";
let databaseName = @"your-database-name";
let replicaType = "Primary";
let duration = 1h;
sqldb_database_resource_utilization
| where sample_time_utc > ago(duration)
| where logical_server_name =~ logicalServer
| where database_name =~ databaseName
| where replica_type =~ replicaType
| project sample_time_utc,
avg_cpu_percent,
avg_instance_cpu_percent,
avg_data_io_percent,
avg_log_write_percent,
max_worker_percent
| sort by sample_time_utc desc
| render timechart;
Use KQL to view database, elastic pool, or SQL managed instance properties
In this example, the query returns the set of all databases, elastic pools, or SQL managed instances for which at least one sample in the corresponding properties dataset has been collected over the past day. In other words, each row represents a monitoring target with its most recently observed properties.
The arg_max() aggregation function returns the latest row for the given set of columns that identify a target. For example, for Azure SQL databases, this set is logical_server_name, database_name, replica_type.
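As a minimal sketch of this behavior, the following query runs arg_max() over an inline toy table (not the actual dataset) and keeps only the latest row per target:

```kusto
// Toy data: two samples for db1, one for db2
datatable(logical_server_name:string, database_name:string, replica_type:string, sample_time_utc:datetime)
[
    "srv1", "db1", "Primary", datetime(2024-01-01 10:00:00),
    "srv1", "db1", "Primary", datetime(2024-01-01 10:05:00),
    "srv1", "db2", "Primary", datetime(2024-01-01 10:00:00),
]
| summarize arg_max(sample_time_utc, *) by logical_server_name, database_name, replica_type
// One row per target; for db1, the surviving row is the 10:05:00 sample
```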
let duration = 1d;
sqldb_database_properties
| where sample_time_utc > ago(duration)
| summarize arg_max(sample_time_utc, *) by logical_server_name, database_name, replica_type
| project-rename last_sample_time_utc = sample_time_utc
| sort by tolower(logical_server_name) asc,
tolower(database_name) asc,
case(
replica_type == "Primary", 0,
replica_type == "Geo-replication forwarder", 1,
replica_type == "Named secondary", 2,
replica_type == "HA secondary", 3,
4) asc;
Use KQL to view query runtime statistics
This query returns the top resource consuming queries in your Azure SQL estate. Change a variable to rank queries by any Query Store metric, including CPU time, elapsed time, execution count, etc. You can also set variables to filter by a time interval, query execution type, and query text. Set variables to focus on a specific logical server, elastic pool, SQL managed instance, or database.
The query uses the query runtime statistics dataset to return the number of top queries you specify, and includes their rank by every other resource consumption metric.
let topQueriesBy = "cpu_time"; // Set to one of the following metrics to return the top resource consuming queries:
// count_executions, duration, cpu_time, logical_io_reads, logical_io_writes, physical_io_reads,
// num_physical_io_reads, clr_time, dop, query_max_used_memory, rowcount, log_bytes_used, tempdb_space_used
let topQueries = 10; // Set the number of top queries to return
let endTime = now();
let startTime = endTime - 1d;
let logicalServerName = @""; // Optionally filter by logical server name
let elasticPoolName = @""; // Optionally filter by elastic pool name, if any databases are in elastic pools
let databaseName = @""; // Optionally filter by database name
let executionType = ""; // Optionally filter by execution type. Use Regular, Aborted, Exception.
let queryHash = ""; // Optionally filter by query hash (example: 0xBAAA461A6C93EA88)
let queryTextFragment = ""; // Optionally filter by a query text fragment
sqldb_database_query_runtime_stats
| where interval_start_time >= startTime and interval_end_time <= endTime
| where isempty(executionType) or execution_type_desc =~ executionType
| where isempty(logicalServerName) or logical_server_name =~ logicalServerName
| where isempty(elasticPoolName) or elastic_pool_name =~ elasticPoolName
| where isempty(databaseName) or database_name =~ databaseName
| summarize dcount_logical_servers = dcount(logical_server_name),
any_logical_server_name = take_any(logical_server_name),
dcount_elastic_pools = dcount(strcat(logical_server_name, "|", elastic_pool_name)),
any_elastic_pool_name = take_any(elastic_pool_name),
dcount_databases = dcount(strcat(logical_server_name, "|", database_name)),
any_database_name = take_any(database_name),
dcount_sql_module_name = dcount(sql_module_name),
any_sql_module_name = take_any(sql_module_name),
dcount_context_settings_id = dcount(context_settings_id),
any_context_settings_id = take_any(context_settings_id),
query_sql_text = take_any(query_sql_text),
count_executions = sum(toreal(count_executions)),
count_successful_executions = sumif(toreal(count_executions), execution_type_desc == "Regular"),
count_aborted_executions = sumif(toreal(count_executions), execution_type_desc == "Aborted"),
count_exception_executions = sumif(toreal(count_executions), execution_type_desc == "Exception"),
duration_us = sum(avg_duration_us * count_executions),
cpu_time_us = sum(avg_cpu_time_us * count_executions),
logical_io_reads = sum(avg_logical_io_reads * count_executions),
logical_io_writes = sum(avg_logical_io_writes * count_executions),
physical_io_reads = sum(avg_physical_io_reads * count_executions),
num_physical_io_reads = sum(avg_num_physical_io_reads * count_executions),
clr_time_us = sum(avg_clr_time_us * count_executions),
dop = sumif(avg_dop * count_executions, is_parallel_plan),
query_max_used_memory = sum(avg_query_max_used_memory * count_executions),
rowcount = sum(avg_rowcount * count_executions),
log_bytes_used = sum(avg_log_bytes_used * count_executions),
tempdb_space_used = sum(avg_tempdb_space_used * count_executions)
by query_hash
| project logical_server_name = iif(dcount_logical_servers == 1, any_logical_server_name, strcat(any_logical_server_name, " (+", tostring(dcount_logical_servers - 1), ")")),
elastic_pool_name = iif(dcount_elastic_pools == 1, any_elastic_pool_name, strcat(any_elastic_pool_name, " (+", tostring(dcount_elastic_pools - 1), ")")),
database_name = iif(dcount_databases == 1, any_database_name, strcat(any_database_name, " (+", tostring(dcount_databases - 1), ")")),
query_sql_text,
count_executions,
count_successful_executions,
count_aborted_executions,
count_exception_executions,
duration_us,
cpu_time_us,
logical_io_reads,
logical_io_writes,
physical_io_reads,
num_physical_io_reads,
clr_time_us,
dop,
query_max_used_memory_kb = query_max_used_memory * 8,
rowcount,
log_bytes_used,
tempdb_space_used_kb = tempdb_space_used * 8,
sql_module_name = iif(dcount_sql_module_name == 1, any_sql_module_name, strcat(any_sql_module_name, " (+", tostring(dcount_sql_module_name - 1), ")")),
context_settings_id = iif(dcount_context_settings_id == 1, tostring(any_context_settings_id), strcat(any_context_settings_id, " (+", tostring(dcount_context_settings_id - 1), ")")),
query_hash
| sort by count_executions desc | extend count_executions_rank = row_rank_dense(count_executions)
| sort by duration_us desc | extend duration_rank = row_rank_dense(duration_us)
| sort by cpu_time_us desc | extend cpu_time_rank = row_rank_dense(cpu_time_us)
| sort by logical_io_reads desc | extend logical_io_reads_rank = row_rank_dense(logical_io_reads)
| sort by logical_io_writes desc | extend logical_io_writes_rank = row_rank_dense(logical_io_writes)
| sort by physical_io_reads desc | extend physical_io_reads_rank = row_rank_dense(physical_io_reads)
| sort by num_physical_io_reads desc | extend num_physical_io_reads_rank = row_rank_dense(num_physical_io_reads)
| sort by clr_time_us desc | extend clr_time_rank = row_rank_dense(clr_time_us)
| sort by dop desc | extend dop_rank = row_rank_dense(dop)
| sort by query_max_used_memory_kb desc | extend query_max_used_memory_rank = row_rank_dense(query_max_used_memory_kb)
| sort by rowcount desc | extend rowcount_rank = row_rank_dense(rowcount)
| sort by log_bytes_used desc | extend log_bytes_used_rank = row_rank_dense(log_bytes_used)
| sort by tempdb_space_used_kb desc | extend tempdb_space_used_rank = row_rank_dense(tempdb_space_used_kb)
| sort by case(
topQueriesBy =~ "count_executions", toreal(count_executions),
topQueriesBy =~ "duration", toreal(duration_us),
topQueriesBy =~ "cpu_time", toreal(cpu_time_us),
topQueriesBy =~ "logical_io_reads", toreal(logical_io_reads),
topQueriesBy =~ "logical_io_writes", toreal(logical_io_writes),
topQueriesBy =~ "physical_io_reads", toreal(physical_io_reads),
topQueriesBy =~ "num_physical_io_reads", toreal(num_physical_io_reads),
topQueriesBy =~ "clr_time", toreal(clr_time_us),
topQueriesBy =~ "dop", toreal(dop),
topQueriesBy =~ "query_max_used_memory", toreal(query_max_used_memory_kb),
topQueriesBy =~ "rowcount", toreal(rowcount),
topQueriesBy =~ "log_bytes_used", toreal(log_bytes_used),
topQueriesBy =~ "tempdb_space_used", toreal(tempdb_space_used_kb),
real(null)
) desc,
count_executions desc
| project-away count_executions
| where isempty(queryHash) or query_hash == queryHash
| where isempty(queryTextFragment) or query_sql_text contains queryTextFragment
| take topQueries;
Use KQL to analyze performance counters over time
In this example, the query returns performance counter values for a time interval that starts 30 minutes before the specified end time.
This example uses cumulative performance counters, such as Total request count and Query optimizations/sec. Cumulative means that the counter value keeps increasing as SQL query activity occurs. The query in this example calculates the difference, or delta, between the counter value in each sample and its value in the previous sample to obtain the number of requests and optimizations that occurred since the previous sample, and then visualizes these metrics on a time chart.
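The delta technique used by the full query below can be sketched on an inline toy table, independent of the actual counter data (the sample values here are made up for illustration):

```kusto
// Toy cumulative counter sampled once per minute
datatable(sample_time_utc:datetime, cntr_value:long)
[
    datetime(2024-01-01 10:00:00), 1000,
    datetime(2024-01-01 10:01:00), 1600,
    datetime(2024-01-01 10:02:00), 2500,
]
| sort by sample_time_utc asc
| extend delta_cntr_value = cntr_value - prev(cntr_value),
         delta_ms = datetime_diff("Millisecond", sample_time_utc, prev(sample_time_utc))
| where isnotempty(delta_ms)
| extend rate_per_sec = toreal(delta_cntr_value) / delta_ms * 1000
// rate_per_sec is 10 for the 10:01 sample (600 / 60,000 ms * 1000)
// and 15 for the 10:02 sample (900 / 60,000 ms * 1000)
```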
let logicalServer = @"your-server-name";
let databaseName = @"your-database-name";
let replicaType = "Primary";
let endTime = datetime("2023-12-19 22:10:00");
let startTime = endTime - 30m;
sqldb_database_performance_counters_common
| where sample_time_utc between (startTime .. endTime)
| where logical_server_name =~ logicalServer
| where database_name =~ databaseName
| where replica_type =~ replicaType
| where cntr_type == 272696576 // restrict to cumulative counters
| where object_name =~ "Workload Group Stats" and counter_name in ("Total request count","Query optimizations/sec")
| project replica_id, sample_time_utc, object_name, counter_name, cntr_value
| sort by replica_id asc, counter_name asc, sample_time_utc asc
| extend delta_cntr_value = iif(cntr_value >= prev(cntr_value) and counter_name == prev(counter_name) and replica_id == prev(replica_id), cntr_value - prev(cntr_value), real(null)),
delta_sample_time_utc = iif(sample_time_utc >= prev(sample_time_utc), datetime_diff("Millisecond", sample_time_utc, prev(sample_time_utc)), long(null))
| where isnotempty(delta_sample_time_utc)
| extend value = delta_cntr_value / delta_sample_time_utc * 1000
| summarize requests_per_sec = take_anyif(value, counter_name =~ "Total request count"),
query_optimizations_per_sec = take_anyif(value, counter_name =~ "Query optimizations/sec")
by sample_time_utc
| sort by sample_time_utc desc
| project sample_time_utc, requests_per_sec, query_optimizations_per_sec
| render timechart;
The following example uses point-in-time performance counters, which report the most recently observed value, such as Active memory grants count, Pending memory grants count, and Processes blocked. The time interval is the last 30 minutes.
let logicalServer = @"your-server-name";
let databaseName = @"your-database-name";
let replicaType = "Primary";
let duration = 30m;
sqldb_database_performance_counters_common
| where sample_time_utc > ago(duration)
| where logical_server_name =~ logicalServer
| where database_name =~ databaseName
| where replica_type =~ replicaType
| where cntr_type == 65792 // restrict to point-in-time counters
| where (object_name =~ "General Statistics" and counter_name in ("Processes blocked"))
or
(object_name =~ "Resource Pool Stats" and counter_name in ("Active memory grants count","Pending memory grants count"))
| project sample_time_utc, counter_name, cntr_value
| render timechart;
The following example uses the performance counters (detailed) dataset to chart CPU utilization for user and internal resource pools and workload groups in Azure SQL Database. For more information, see Resource consumption by user workloads and internal processes.
User workloads run in the SloSharedPool1 or UserPool resource pools, while all other resource pools are used for various system workloads.
Similarly, user workloads run in workload groups starting with UserPrimaryGroup.DBId, while all other workload groups are used for various system workloads. For example, database watcher monitoring queries run in the SQLExternalMonitoringGroup workload group.
let logicalServer = @"your-server-name";
let databaseName = @"your-database-name";
let replicaType = "Primary";
let endTime = datetime("2023-12-19 22:10:00");
let startTime = endTime - 30m;
sqldb_database_performance_counters_detailed
| where sample_time_utc between (startTime .. endTime)
| where logical_server_name =~ logicalServer
| where database_name =~ databaseName
| where replica_type =~ replicaType
| where cntr_type == 537003264 // restrict to ratio percentage counters
| where object_name =~ "Resource Pool Stats" and counter_name in ("CPU usage %")
| project sample_time_utc, resource_pool = instance_name, cpu_percentage = cntr_value
| render timechart;
let logicalServer = @"your-server-name";
let databaseName = @"your-database-name";
let replicaType = "Primary";
let endTime = datetime("2023-12-19 22:10:00");
let startTime = endTime - 30m;
sqldb_database_performance_counters_detailed
| where sample_time_utc between (startTime .. endTime)
| where logical_server_name =~ logicalServer
| where database_name =~ databaseName
| where replica_type =~ replicaType
| where cntr_type == 537003264 // restrict to ratio percentage counters
| where object_name =~ "Workload Group Stats" and counter_name in ("CPU usage %")
| project sample_time_utc, workload_group = instance_name, cpu_percentage = cntr_value
| render timechart;
Use KQL to analyze cumulative waits over time
This example shows how to chart top SQL wait types over a time interval. The query calculates the cumulative wait time for each wait type, in milliseconds of wait per second of elapsed time. You can adjust query variables to set the interval start and end times, the number of top wait types to include, and the step between data points on the chart.
The query uses two techniques to improve performance:
- The partition KQL operator with the shuffle strategy, to spread query processing across multiple cluster nodes, if present.
- The materialize() function, which persists an intermediate result set that is reused to calculate top waits and to build the time series to chart.
let logicalServer = @"your-server-name";
let databaseName = @"your-database-name";
let replicaType = "Primary";
let endTime = datetime("2023-12-19 22:10:00");
let startTime = endTime - 30m;
let top_wait_types = 10;
let chart_step = 30s;
let wait_type_sample = materialize (
sqldb_database_wait_stats
| where sample_time_utc between (startTime .. endTime)
| where logical_server_name =~ logicalServer
| where database_name =~ databaseName
| where replica_type =~ replicaType
| project replica_id, sample_time_utc, wait_type, wait_time_ms
| partition hint.strategy=shuffle by wait_type
(
sort by replica_id asc, sample_time_utc asc
| extend delta_wait_time_ms = iif(wait_time_ms >= prev(wait_time_ms) and replica_id == prev(replica_id), wait_time_ms - prev(wait_time_ms), long(null)),
delta_sample_time_utc = iif(sample_time_utc >= prev(sample_time_utc), datetime_diff("Millisecond", sample_time_utc, prev(sample_time_utc)), long(null))
| where isnotempty(delta_sample_time_utc)
| extend wait_ms_per_s = toreal(delta_wait_time_ms) / delta_sample_time_utc * 1000
| project sample_time_utc, wait_type, wait_ms_per_s
)
);
let top_wait = (
wait_type_sample
| summarize total_wait_ms_per_s = sum(wait_ms_per_s) by wait_type
| top top_wait_types by total_wait_ms_per_s desc
| project-away total_wait_ms_per_s
);
wait_type_sample
| join kind=inner top_wait on wait_type
| project-away wait_type1
| make-series wait_ms_per_s = avgif(wait_ms_per_s, isfinite(wait_ms_per_s)) default = long(null) on sample_time_utc from startTime to endTime step chart_step by wait_type
| project wait_type, sample_time_utc, wait_ms_per_s
| render timechart;
Use T-SQL to analyze monitoring data
If you're already familiar with T-SQL, you can start querying and analyzing SQL monitoring data right away, without learning KQL. However, KQL is the recommended language for querying data in Azure Data Explorer or Real-Time Analytics because it provides unmatched support for querying telemetry data.
You can connect to Azure Data Explorer or Real-Time Analytics databases from SQL Server Management Studio (SSMS), Azure Data Studio, and other common tools. You can query an Azure Data Explorer or KQL database as if it were a SQL Server or Azure SQL database. For more information, see Query data in Azure Data Explorer using SQL Server emulation.
Note
Not all T-SQL constructs are supported in Azure Data Explorer and Real-Time Analytics. For more information, see Query data using T-SQL.
If you find that T-SQL support is insufficient for your needs, or if you want to convert a T-SQL query to KQL to use its advanced analytics capabilities, the SQL to Kusto Query Language cheat sheet can help you translate T-SQL queries to KQL.
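As a small illustration of such a translation (a hypothetical query against the resource utilization dataset, not taken from the cheat sheet), the T-SQL statement in the comment maps to the KQL below it:

```kusto
// T-SQL: SELECT TOP (5) database_name, MAX(avg_cpu_percent) AS max_cpu
//        FROM sqldb_database_resource_utilization
//        GROUP BY database_name
//        ORDER BY max_cpu DESC;
sqldb_database_resource_utilization
| summarize max_cpu = max(avg_cpu_percent) by database_name
| top 5 by max_cpu desc
```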
The following samples show you how to query monitoring data in the database watcher data store with T-SQL.
Use T-SQL to analyze resource consumption over time
In this example, the query returns resource consumption metrics (CPU, workers, log write throughput, etc.) for the primary replica of a database, elastic pool, or SQL managed instance over the past hour.
In this and other examples, change the variables in the DECLARE statement to match the names of your server, database, elastic pool, or SQL managed instance.
DECLARE @LogicalServerName sysname = 'your-server-name',
@DatabaseName sysname = 'your-database-name',
@ReplicaType sysname = 'Primary',
@DurationMinutes int = 60;
SELECT sample_time_utc,
avg_cpu_percent,
avg_instance_cpu_percent,
avg_data_io_percent,
avg_log_write_percent,
max_worker_percent
FROM sqldb_database_resource_utilization
WHERE sample_time_utc > DATEADD(minute, -@DurationMinutes, SYSUTCDATETIME())
AND
logical_server_name = @LogicalServerName
AND
database_name = @DatabaseName
AND
replica_type = @ReplicaType
ORDER BY sample_time_utc DESC;
Use T-SQL to view database, elastic pool, or SQL managed instance properties
In this example, the query returns the set of all databases, elastic pools, or SQL managed instances for which at least one sample in the corresponding properties dataset has been collected over the past 24 hours. In other words, each row represents a monitoring target with its most recently observed properties.
DECLARE @DurationHours int = 24;
SELECT p.sample_time_utc,
p.logical_server_name,
p.database_name,
p.replica_type,
p.database_id,
p.elastic_pool_name,
p.service_tier,
p.service_level_objective,
p.logical_cpu_count,
p.database_engine_memory_mb,
p.compatibility_level,
p.updateability,
p.database_engine_build_time,
p.database_engine_start_time_utc
FROM sqldb_database_properties AS p
INNER JOIN (
SELECT logical_server_name,
database_name,
replica_type,
MAX(sample_time_utc) AS last_sample_time_utc
FROM sqldb_database_properties
WHERE sample_time_utc > DATEADD(hour, -@DurationHours, SYSUTCDATETIME())
GROUP BY logical_server_name,
database_name,
replica_type
) AS ls
ON p.logical_server_name = ls.logical_server_name
AND
p.database_name = ls.database_name
AND
p.replica_type = ls.replica_type
AND
p.sample_time_utc = ls.last_sample_time_utc
WHERE p.sample_time_utc > DATEADD(hour, -@DurationHours, SYSUTCDATETIME())
ORDER BY LOWER(logical_server_name) ASC,
LOWER(database_name) ASC,
CASE replica_type
WHEN 'Primary' THEN 0
WHEN 'Geo-replication forwarder' THEN 1
WHEN 'Named secondary' THEN 2
WHEN 'HA secondary' THEN 3
END ASC;
Use T-SQL to view query runtime statistics
This query returns the top resource consuming queries in your Azure SQL estate. Change the @TopQueriesBy variable to find top queries by any Query Store metric, including CPU time, elapsed time, execution count, etc. You can also set variables to filter by a time interval, query execution type, and the query hash of a specific query, or to focus on databases from a specific logical server, elastic pool, or SQL managed instance.
The query uses the query runtime statistics dataset to return the top queries you specify. It also returns their rank by every other resource consumption metric.
DECLARE @EndTime datetime2 = SYSUTCDATETIME(),
@StartTime datetime2 = DATEADD(hour, -24, SYSUTCDATETIME()),
/*
Set the next variable to one of the following metrics to return the top resource consuming queries:
executions, cpu_time, duration, logical_io_reads, physical_io_reads, num_physical_io_reads,
clr_time, query_max_used_memory, log_bytes_used, tempdb_space_used, row_count, dop
*/
@TopQueriesBy varchar(30) = 'cpu_time',
@TopQueries int = 10,
@LogicalServerName sysname = '', -- Optionally filter by logical server name
@ElasticPoolName sysname = '', -- Optionally filter by elastic pool name, if any databases are in elastic pools
@DatabaseName sysname = '', -- Optionally filter by database name
@ExecutionType varchar(30) = '', -- Optionally filter by execution type. Use Regular, Aborted, Exception.
@QueryHash varchar(18) = ''; -- Optionally filter by query hash (example: 0xBAAA461A6C93EA88)
SELECT TOP (@TopQueries)
CONCAT(logical_server_name, IIF(count_logical_servers > 1, CONCAT(' (+', CAST(count_logical_servers - 1 AS varchar(11)), ')'), '')) AS logical_server_name,
CONCAT(database_name, IIF(count_databases > 1, CONCAT(' (+', CAST(count_databases - 1 AS varchar(11)), ')'), '')) AS database_name,
query_sql_text,
CONCAT(CAST(query_id AS varchar(11)), IIF(count_queries > 1, CONCAT(' (+', CAST(count_queries - 1 AS varchar(11)), ')'), '')) AS query_id,
CONCAT(CAST(plan_id AS varchar(11)), IIF(count_plans > 1, CONCAT(' (+', CAST(count_plans - 1 AS varchar(11)), ')'), '')) AS plan_id,
regular_executions,
aborted_executions,
exception_executions,
cpu_time_us,
duration_us,
logical_io_reads,
physical_io_reads,
num_physical_io_reads,
clr_time_us,
query_max_used_memory_kb,
log_bytes_used,
tempdb_space_used_kb,
row_count,
dop,
query_hash,
executions_rank,
cpu_time_rank,
duration_rank,
logical_io_reads_rank,
physical_io_reads_rank,
num_physical_io_reads_rank,
clr_time_rank,
query_max_used_memory_rank,
log_bytes_used_rank,
tempdb_space_used_rank,
row_count_rank,
dop_rank
FROM (
SELECT *,
DENSE_RANK() OVER (ORDER BY executions DESC) AS executions_rank,
DENSE_RANK() OVER (ORDER BY cpu_time_us DESC) AS cpu_time_rank,
DENSE_RANK() OVER (ORDER BY duration_us DESC) AS duration_rank,
DENSE_RANK() OVER (ORDER BY logical_io_reads DESC) AS logical_io_reads_rank,
DENSE_RANK() OVER (ORDER BY physical_io_reads DESC) AS physical_io_reads_rank,
DENSE_RANK() OVER (ORDER BY num_physical_io_reads DESC) AS num_physical_io_reads_rank,
DENSE_RANK() OVER (ORDER BY clr_time_us DESC) AS clr_time_rank,
DENSE_RANK() OVER (ORDER BY query_max_used_memory_kb DESC) AS query_max_used_memory_rank,
DENSE_RANK() OVER (ORDER BY log_bytes_used DESC) AS log_bytes_used_rank,
DENSE_RANK() OVER (ORDER BY tempdb_space_used_kb DESC) AS tempdb_space_used_rank,
DENSE_RANK() OVER (ORDER BY row_count DESC) AS row_count_rank,
DENSE_RANK() OVER (ORDER BY dop DESC) AS dop_rank
FROM (
SELECT query_hash,
COUNT(DISTINCT(logical_server_name)) AS count_logical_servers,
MAX(logical_server_name) AS logical_server_name,
COUNT(DISTINCT(database_name)) AS count_databases,
MAX(database_name) AS database_name,
COUNT(DISTINCT(query_id)) AS count_queries,
MAX(query_id) AS query_id,
COUNT(DISTINCT(plan_id)) AS count_plans,
MAX(plan_id) AS plan_id,
MAX(query_sql_text) AS query_sql_text,
SUM(IIF(execution_type_desc = 'Regular', count_executions, 0)) AS regular_executions,
SUM(IIF(execution_type_desc = 'Aborted', count_executions, 0)) AS aborted_executions,
SUM(IIF(execution_type_desc = 'Exception', count_executions, 0)) AS exception_executions,
SUM(count_executions) AS executions,
SUM(avg_cpu_time_us * count_executions) AS cpu_time_us,
SUM(avg_duration_us * count_executions) AS duration_us,
SUM(avg_logical_io_reads * count_executions) AS logical_io_reads,
SUM(avg_physical_io_reads * count_executions) AS physical_io_reads,
SUM(avg_num_physical_io_reads * count_executions) AS num_physical_io_reads,
SUM(avg_clr_time_us * count_executions) AS clr_time_us,
SUM(avg_query_max_used_memory * count_executions) * 8 AS query_max_used_memory_kb,
SUM(avg_log_bytes_used * count_executions) AS log_bytes_used,
SUM(avg_tempdb_space_used * count_executions) * 8 AS tempdb_space_used_kb,
SUM(avg_rowcount * count_executions) AS row_count,
SUM(IIF(is_parallel_plan = 1, avg_dop * count_executions, NULL)) AS dop
FROM sqldb_database_query_runtime_stats
WHERE interval_start_time >= @StartTime AND interval_end_time <= @EndTime
AND
(@ExecutionType = '' OR LOWER(execution_type_desc) = LOWER(@ExecutionType))
AND
(@LogicalServerName = '' OR LOWER(logical_server_name) = LOWER(@LogicalServerName))
AND
(@ElasticPoolName = '' OR LOWER(elastic_pool_name) = LOWER(@ElasticPoolName))
AND
(@DatabaseName = '' OR LOWER(database_name) = LOWER(@DatabaseName))
GROUP BY query_hash
) AS rsa
) AS rsar
WHERE @QueryHash = '' OR LOWER(query_hash) = LOWER(@QueryHash)
ORDER BY CASE @TopQueriesBy
WHEN 'executions' THEN executions_rank
WHEN 'cpu_time' THEN cpu_time_rank
WHEN 'duration' THEN duration_rank
WHEN 'logical_io_reads' THEN logical_io_reads_rank
WHEN 'physical_io_reads' THEN physical_io_reads_rank
WHEN 'num_physical_io_reads' THEN num_physical_io_reads_rank
WHEN 'clr_time' THEN clr_time_rank
WHEN 'query_max_used_memory' THEN query_max_used_memory_rank
WHEN 'log_bytes_used' THEN log_bytes_used_rank
WHEN 'tempdb_space_used' THEN tempdb_space_used_rank
WHEN 'row_count' THEN row_count_rank
WHEN 'dop' THEN dop_rank
END ASC;
Use T-SQL to analyze performance counters over time
In this example, the query returns performance counter values over the past 30 minutes.
This example uses cumulative performance counters, such as Total request count and Query optimizations/sec. Cumulative means that the counter value keeps increasing as query activity occurs. The query uses the LAG() analytic function to calculate the difference, or delta, between the counter value in each sample and its value in the previous sample, to obtain the number of requests and optimizations that occurred since the previous sample.
DECLARE @LogicalServerName sysname = 'your-server-name',
@DatabaseName sysname = 'your-database-name',
@ReplicaType sysname = 'Primary',
@DurationMinutes int = 30;
SELECT sample_time_utc,
SUM(IIF(
counter_name = 'Total request count',
CAST((cntr_value - prev_cntr_value) AS decimal) / DATEDIFF(millisecond, prev_sample_time_utc, sample_time_utc) * 1000,
NULL
)) AS requests_per_second,
SUM(IIF(
counter_name = 'Query optimizations/sec',
CAST((cntr_value - prev_cntr_value) AS decimal) / DATEDIFF(millisecond, prev_sample_time_utc, sample_time_utc) * 1000,
NULL
)) AS query_optimizations_per_second
FROM (
SELECT sample_time_utc,
LAG(sample_time_utc) OVER (PARTITION BY replica_id, object_name, counter_name ORDER BY sample_time_utc ASC) AS prev_sample_time_utc,
counter_name,
cntr_value,
LAG(cntr_value) OVER (PARTITION BY replica_id, object_name, counter_name ORDER BY sample_time_utc ASC) AS prev_cntr_value
FROM sqldb_database_performance_counters_common
WHERE sample_time_utc > DATEADD(minute, -@DurationMinutes, SYSUTCDATETIME())
AND
logical_server_name = @LogicalServerName
AND
database_name = @DatabaseName
AND
replica_type = @ReplicaType
AND
cntr_type = 272696576 /* restrict to cumulative counters */
AND
object_name = 'Workload Group Stats'
AND
counter_name IN ('Total request count','Query optimizations/sec')
) AS pc
WHERE cntr_value >= prev_cntr_value
AND
sample_time_utc >= prev_sample_time_utc
GROUP BY sample_time_utc
ORDER BY sample_time_utc DESC;
Use T-SQL to analyze point-in-time performance counters
The next example uses point-in-time performance counters, which report the most recently observed value, such as Active memory grants count, Pending memory grants count, and Processes blocked.
DECLARE @LogicalServerName sysname = 'your-server-name',
@DatabaseName sysname = 'your-database-name',
@ReplicaType sysname = 'Primary',
@DurationMinutes int = 30;
SELECT sample_time_utc,
SUM(IIF(
counter_name = 'Processes blocked',
cntr_value,
NULL
)) AS processes_blocked,
SUM(IIF(
counter_name = 'Active memory grants count',
cntr_value,
NULL
)) AS active_memory_grants,
SUM(IIF(
counter_name = 'Pending memory grants count',
cntr_value,
NULL
)) AS pending_memory_grants
FROM (
SELECT sample_time_utc,
counter_name,
cntr_value
FROM sqldb_database_performance_counters_common
WHERE sample_time_utc > DATEADD(minute, -@DurationMinutes, SYSUTCDATETIME())
AND
logical_server_name = @LogicalServerName
AND
database_name = @DatabaseName
AND
replica_type = @ReplicaType
AND
cntr_type = 65792 /* restrict to point-in-time counters */
AND
(
(object_name = 'General Statistics' AND counter_name IN ('Processes blocked'))
OR
(object_name = 'Resource Pool Stats' AND counter_name IN ('Active memory grants count','Pending memory grants count'))
)
) AS pc
GROUP BY sample_time_utc
ORDER BY sample_time_utc DESC;
Use T-SQL to analyze cumulative waits over time
In this example, the query returns the top 10 wait types by their average cumulative wait time over a 30-minute interval. Cumulative means that the query calculates the total time, in milliseconds, that all requests spent waiting on each wait type in every second. Because multiple requests can execute (and wait) concurrently, cumulative wait time in each second can exceed one second.
DECLARE @LogicalServerName sysname = 'your-server-name',
@DatabaseName sysname = 'your-database-name',
@ReplicaType sysname = 'Primary',
@DurationMinutes int = 30;
SELECT TOP (10) wait_type,
SUM(CAST((wait_time_ms - prev_wait_time_ms) AS decimal)) * 1000
/
SUM(DATEDIFF(millisecond, prev_sample_time_utc, sample_time_utc))
AS wait_time_ms_per_sec
FROM (
SELECT sample_time_utc,
LAG(sample_time_utc) OVER (PARTITION BY replica_id, wait_type ORDER BY sample_time_utc ASC) AS prev_sample_time_utc,
wait_type,
wait_time_ms,
LAG(wait_time_ms) OVER (PARTITION BY replica_id, wait_type ORDER BY sample_time_utc ASC) AS prev_wait_time_ms
FROM sqldb_database_wait_stats
WHERE sample_time_utc > DATEADD(minute, -@DurationMinutes, SYSUTCDATETIME())
AND
logical_server_name = @LogicalServerName
AND
database_name = @DatabaseName
AND
replica_type = @ReplicaType
) AS w
WHERE sample_time_utc >= prev_sample_time_utc
AND
wait_time_ms >= prev_wait_time_ms
GROUP BY wait_type
ORDER BY wait_time_ms_per_sec DESC;