Monitor account activity with system tables
This article explains the concept of system tables in Azure Databricks and highlights resources you can use to get the most out of your system table data.
What are system tables?
System tables are an Azure Databricks-hosted analytical store of your account's operational data, found in the `system` catalog. System tables can be used for historical observability across your account.
Note
For documentation on `system.information_schema`, see Information schema.
Requirements
To access system tables, your workspace must be enabled for Unity Catalog. For more information, see Enable system table schemas.
System tables are not available in the following regions:
- Azure China regions
- Azure Government regions
- West India
- Switzerland West
Which system tables are available?
Currently, Azure Databricks hosts the following system tables:
Table | Description | Location | Supports streaming | Free retention period | Includes global or regional data |
---|---|---|---|---|---|
Audit logs (Public Preview) | Includes records for all audit events from workspaces in your region. For a list of available audit events, see Diagnostic log reference. | `system.access.audit` | Yes | 365 days | Regional for workspace-level events. Global for account-level events. |
Table lineage (Public Preview) | Includes a record for each read or write event on a Unity Catalog table or path. | `system.access.table_lineage` | Yes | 365 days | Regional |
Column lineage (Public Preview) | Includes a record for each read or write event on a Unity Catalog column (but does not include events that do not have a source). | `system.access.column_lineage` | Yes | 365 days | Regional |
Billable usage | Includes records for all billable usage across your account. | `system.billing.usage` | Yes | 365 days | Global |
Pricing | A historical log of SKU pricing. A record gets added each time there is a change to a SKU price. | `system.billing.list_prices` | No | Unlimited | Global |
Clusters (Public Preview) | A slow-changing dimension table that contains the full history of compute configurations over time for any cluster. | `system.compute.clusters` | Yes | N/A | Regional |
Node timeline (Public Preview) | Captures the utilization metrics of your all-purpose and jobs compute resources. | `system.compute.node_timeline` | Yes | 30 days | Regional |
Node types (Public Preview) | Captures the currently available node types with their basic hardware information. | `system.compute.node_types` | No | N/A | Regional |
SQL warehouses (Public Preview) | Contains the full history of configurations over time for any SQL warehouse. | `system.compute.warehouses` | Yes | 365 days | Regional |
SQL warehouse events (Public Preview) | Captures events related to SQL warehouses. For example, starting, stopping, running, scaling up and down. | `system.compute.warehouse_events` | Yes | 365 days | Regional |
Jobs (Public Preview) | Tracks all jobs created in the account. | `system.lakeflow.jobs` | Yes | 365 days | Regional |
Job tasks (Public Preview) | Tracks all job tasks that run in the account. | `system.lakeflow.job_tasks` | Yes | 365 days | Regional |
Job run timeline (Public Preview) | Tracks the start and end times of job runs. | `system.lakeflow.job_run_timeline` | Yes | 365 days | Regional |
Job task timeline (Public Preview) | Tracks the start and end times and compute resources used for job task runs. | `system.lakeflow.job_task_run_timeline` | Yes | 365 days | Regional |
Marketplace funnel events (Public Preview) | Includes consumer impression and funnel data for your listings. | `system.marketplace.listing_funnel_events` | Yes | 365 days | Regional |
Marketplace listing access (Public Preview) | Includes consumer info for completed request data or get data events on your listings. | `system.marketplace.listing_access_events` | Yes | 365 days | Regional |
Predictive optimization (Public Preview) | Tracks the operation history of the predictive optimization feature. | `system.storage.predictive_optimization_operations_history` | No | 180 days | Regional |
Databricks Assistant events (Public Preview) | Tracks user messages sent to the Databricks Assistant. | `system.access.assistant_events` | No | 365 days | Regional |
Query history (Public Preview) | Captures records for all queries run on SQL warehouses. | `system.query.history` | No | 90 days | Regional |
Clean room events (Public Preview) | Captures events related to clean rooms. | `system.access.clean_room_events` | Yes | 365 days | Regional |
Model serving endpoint usage (Public Preview) | Captures token counts for each request to a model serving endpoint and its responses. To capture endpoint usage in this table, you must enable usage tracking on your serving endpoint. | `system.serving.endpoint_usage` | Yes | 90 days | Regional |
Model serving endpoint data (Public Preview) | A slow-changing dimension table that stores metadata for each served external model in a model serving endpoint. | `system.serving.served_entities` | Yes | 365 days | Regional |
Network access events (Public Preview) | A table that records an event for every time internet access is denied from your account. | `system.access.outbound_network` | Yes | 365 days | Regional |
The billable usage and pricing tables are free to use. Tables in Public Preview are also free to use during the preview but could incur a charge in the future.
Note
You may see other system tables in your account besides the ones listed above. Those tables are currently in Private Preview and are empty by default. If you are interested in using any of these tables, please reach out to your Databricks account team.
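Once a schema is enabled and you have been granted access (see the following sections), you can query its tables like any other Unity Catalog table. The sketch below is a minimal example run from a notebook in a Unity Catalog-enabled workspace; it assumes the `billing` schema is enabled (it is on by default), and the `usage_date` column is shown for illustration only, so check the billable usage table reference for exact column names.

```python
# Minimal sketch: inspect a few recent billable usage records.
# Assumes the `billing` schema is enabled and you have USE and SELECT on it.
recent_usage = spark.sql("""
    SELECT *
    FROM system.billing.usage
    ORDER BY usage_date DESC
    LIMIT 10
""")
recent_usage.show(truncate=False)
```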
Enable system table schemas
Because system tables are governed by Unity Catalog, you need at least one Unity Catalog-enabled workspace in your account to enable and access system tables. System tables include data from all workspaces in your account, but they can only be accessed from a Unity Catalog-enabled workspace.
System tables are enabled at the schema level. If you enable a system schema, you enable all the tables within that schema. When new schemas are released, an account admin needs to manually enable the schema.
System tables must be enabled by an account admin. You can enable system tables using the `system-schemas` commands in the Databricks CLI or the SystemSchemas API.
Note
The `billing` schema is enabled by default. Other schemas must be enabled manually.
List available system schemas
Use the following curl command to list available system schemas:
```bash
curl -v -X GET -H "Authorization: Bearer <PAT Token>" "https://adb-<xxx>.azuredatabricks.net/api/2.0/unity-catalog/metastores/<metastore-id>/systemschemas"
```
The following is an example output of the `GET` command:

```json
{
  "schemas": [
    {"schema": "access", "state": "<AVAILABLE OR EnableCompleted>"},
    {"schema": "billing", "state": "<AVAILABLE OR EnableCompleted>"},
    {"schema": "information_schema", "state": "<AVAILABLE OR EnableCompleted>"}
  ]
}
```
- `state: AVAILABLE`: The system schema is available but has not yet been enabled.
- `state: EnableCompleted`: You have enabled the system schema, and it is visible in Catalog Explorer.
Enable a system schema
Use the following curl command to enable a system schema:
```bash
curl -v -X PUT -H "Authorization: Bearer <PAT Token>" "https://adb-<xxx>.azuredatabricks.net/api/2.0/unity-catalog/metastores/<metastore-id>/systemschemas/<SCHEMA_NAME>"
```
If the system schema is enabled successfully, a `200` response code is returned.

If you attempt to re-enable a system schema that is already enabled, the following error is returned: `"error_code":"SCHEMA_ALREADY_EXISTS","message":"Schema <schema-name> already exists"`.
Disable a system schema
Use the following curl command to disable a system schema:
```bash
curl -v -X DELETE -H "Authorization: Bearer <PAT Token>" "https://adb-<xxx>.azuredatabricks.net/api/2.0/unity-catalog/metastores/<metastore-id>/systemschemas/<SCHEMA_NAME>"
```
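If you prefer to script these calls instead of running curl, a rough Python equivalent using the `requests` library is sketched below. The workspace URL, metastore ID, and token are the same placeholders used in the curl commands above.

```python
import requests

# Placeholders, as in the curl examples: your workspace URL, metastore ID, and
# a personal access token belonging to an account admin.
HOST = "https://adb-<xxx>.azuredatabricks.net"
METASTORE_ID = "<metastore-id>"
HEADERS = {"Authorization": "Bearer <PAT Token>"}
BASE = f"{HOST}/api/2.0/unity-catalog/metastores/{METASTORE_ID}/systemschemas"

# List the available system schemas and their states.
resp = requests.get(BASE, headers=HEADERS)
resp.raise_for_status()
for schema in resp.json().get("schemas", []):
    print(schema["schema"], schema["state"])

# Enable a schema (for example, access); a 200 response indicates success.
requests.put(f"{BASE}/access", headers=HEADERS).raise_for_status()

# Disable it again with a DELETE request:
# requests.delete(f"{BASE}/access", headers=HEADERS).raise_for_status()
```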
Grant access to system tables
Access to system tables is governed by Unity Catalog. No user has access to these system schemas by default. To grant access, a user that is both a metastore admin and an account admin must grant `USE` and `SELECT` permissions on the system schemas. See Manage privileges in Unity Catalog.
System tables are read-only and cannot be modified.
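These privileges can be granted with standard Unity Catalog SQL. The sketch below is a minimal example run from a notebook in a Unity Catalog-enabled workspace; the group name `data-observers` is a placeholder, and depending on your metastore setup the group might also need access to the `system` catalog itself.

```python
# Minimal sketch: grant read access to the audit schema to a placeholder group.
# Assumes the caller is both a metastore admin and an account admin.
spark.sql("GRANT USE SCHEMA ON SCHEMA system.access TO `data-observers`")
spark.sql("GRANT SELECT ON SCHEMA system.access TO `data-observers`")

# Depending on your setup, the group may also need catalog-level access:
# spark.sql("GRANT USE CATALOG ON CATALOG system TO `data-observers`")
```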
Note
If your account was created after November 9, 2023, you might not have a metastore admin by default. For more information, see Set up and manage Unity Catalog.
Do system tables contain data for all workspaces in your account?
System tables contain operational data for all workspaces in your account deployed within the same cloud region. Billing system tables contain account-wide data.
Even though system tables can only be accessed from a Unity Catalog-enabled workspace, the tables also include operational data for the non-Unity Catalog workspaces in your account.
Where is system table data stored?
Your account's system table data is stored in an Azure Databricks-hosted storage account located in the same region as your metastore. The data is securely shared with you using Delta Sharing.
Each table has a free data retention period. For information on extending the retention period, contact your Azure Databricks account team.
Where are system tables located in Catalog Explorer?
The system tables in your account are located in a catalog called `system`, which is included in every Unity Catalog metastore. In the `system` catalog, you'll see schemas such as `access` and `billing` that contain the system tables.
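If you prefer to check this programmatically rather than in Catalog Explorer, a quick sketch like the following, run from a notebook in a Unity Catalog-enabled workspace, lists the schemas and tables currently visible to you:

```python
# List the system schemas visible to the current user.
spark.sql("SHOW SCHEMAS IN system").show(truncate=False)

# List the tables within one of those schemas, for example `system.access`.
spark.sql("SHOW TABLES IN system.access").show(truncate=False)
```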
Considerations for streaming system tables
Azure Databricks uses Delta Sharing to share system table data with customers. Be aware of the following considerations when streaming with Delta Sharing:
- If you are using streaming with system tables, set the `skipChangeCommits` option to `true`. This ensures that the streaming job is not disrupted by deletes in the system tables. See Ignore updates and deletes.
- `Trigger.AvailableNow` is not supported with Delta Sharing streaming. It will be converted to `Trigger.Once`.
If you use a trigger in your streaming job and find it isn’t catching up to the latest system table version, Databricks recommends increasing the scheduled frequency of the job.
Read incremental changes from streaming system tables
```python
spark.readStream.option("skipChangeCommits", "true").table("system.billing.usage")
```
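A slightly fuller sketch of an incremental job is shown below; the target table, checkpoint location, and trigger interval are placeholder values to replace with your own:

```python
# Minimal sketch: incrementally copy billable usage records into your own table.
# The target table and checkpoint location below are placeholders.
(
    spark.readStream
        .option("skipChangeCommits", "true")   # avoid failures from deletes in the source
        .table("system.billing.usage")
        .writeStream
        .option("checkpointLocation", "/Volumes/main/observability/checkpoints/usage")
        .trigger(processingTime="1 hour")      # Trigger.AvailableNow is not supported
        .toTable("main.observability.usage_copy")
)
```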
Known issues
- There is currently no support for real-time monitoring. Data is updated throughout the day. If you don't see a log for a recent event, check back later.
- To enable system tables, you might need to grant network access to the system tables Blob storage endpoint. To view a list of each region's system tables storage endpoint, see Storage endpoint IP addresses.
- The system schemas `system.operational_data` and `system.lineage` are deprecated and will contain empty tables.
- The `__internal_logging` system table schema is used to support payload logging using inference tables. This schema is visible to account admins, but it cannot be enabled and should not be used for customer workflows.