Databricks SQL release notes

This article lists new Databricks SQL features and improvements, along with known issues and FAQs.

Release process

Databricks releases updates to the Databricks SQL web application user interface on an ongoing basis, with all users getting the same updates rolled out over a short period of time.

In addition, Databricks regularly releases new SQL warehouse compute versions. Two channels are always available: Preview and Current.

Note

Releases are staged. Your Databricks account might not be updated with a new SQL warehouse version or Databricks SQL feature until a week or more after the initial release date.

Note

Databricks SQL Serverless is not available in Azure China. Databricks SQL is not available in Azure Government regions.

Channels

Channels let you choose between the Current SQL warehouse compute version or the Preview version. Preview versions let you try out functionality before it becomes the Databricks SQL standard. Take advantage of preview versions to test your production queries and dashboards against upcoming changes.

Typically, a preview version is promoted to the current channel approximately two weeks after being released to the preview channel. Some features, such as security features, maintenance updates, and bug fixes, may be released directly to the current channel. From time to time, Databricks may promote a preview version to the current channel on a different schedule. Each new version will be announced in the following sections.

To learn how to switch an existing SQL warehouse to the preview channel, see Preview channels. The features listed in the user interface updates sections are independent of the SQL warehouse compute versions described in the Channels section of these release notes.

Available Databricks SQL versions

Current channel: Databricks SQL version 2024.40

Preview channel: Databricks SQL version 2024.50

January 30, 2025

The following features and updates were released during the week of January 30, 2025.

User interface updates

SQL warehouse

A Completed query count chart (Public Preview) is now available on the SQL warehouse monitoring UI. This new chart shows the number of queries finished in a time window, including canceled and failed queries. Use it alongside the other charts and the Query History table to assess and troubleshoot warehouse performance. Each query is counted in the time window in which it completed, and counts are averaged per minute. For more information, see Monitor a SQL warehouse.

SQL editor

  • Expanded data display in charts: Visualizations created in the SQL editor now support up to 15,000 rows of data.

January 23, 2025

The following features and updates were released during the week of January 23, 2025.

Changes in 2024.50

Databricks SQL version 2024.50 includes the following behavioral changes, new features, and improvements.

Behavioral changes

  • The VARIANT data type can no longer be used with operations that require comparisons

You cannot use the following clauses or operators in queries that include a VARIANT data type:

  • DISTINCT
  • INTERSECT
  • EXCEPT
  • UNION
  • DISTRIBUTE BY

These operations perform comparisons, and comparisons that use the VARIANT data type produce undefined results and are not supported in Databricks. If you use the VARIANT type in your Azure Databricks workloads or tables, Databricks recommends the following changes:

  • Update queries or expressions to explicitly cast VARIANT values to non-VARIANT data types.
  • If you have fields that must be used with any of the above operations, extract those fields from the VARIANT data type and store them using non-VARIANT data types.

To learn more, see Query variant data.
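As a minimal sketch of the first recommendation, assuming a hypothetical table events with a VARIANT column v that contains a string field country:

    -- Cast the extracted field to STRING so that DISTINCT compares
    -- non-VARIANT values instead of unsupported VARIANT values.
    SELECT DISTINCT CAST(v:country AS STRING) AS country
    FROM events;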

New features and improvements

  • Support for parameterizing USE CATALOG with the IDENTIFIER clause

The IDENTIFIER clause is supported for the USE CATALOG statement. With this support, you can parameterize the current catalog based on a string variable or parameter marker.
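A brief sketch, assuming a session variable that holds the catalog name (main is a placeholder):

    -- Declare a session variable, then use it to set the current catalog.
    DECLARE VARIABLE catalog_name STRING DEFAULT 'main';
    USE CATALOG IDENTIFIER(catalog_name);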

  • COMMENT ON COLUMN support for tables and views

The COMMENT ON statement supports altering comments for view and table columns.
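For example, using a hypothetical table main.default.orders:

    -- Add or replace the comment on a single column.
    COMMENT ON COLUMN main.default.orders.order_id IS 'Unique identifier for each order';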

  • New SQL functions

The following new built-in SQL functions are available (a short usage sketch follows the list):

  • dayname(expr) returns the three-letter English abbreviation for the day of the week for the given date.
  • uniform(expr1, expr2 [, seed]) returns a random value, uniformly distributed within the specified range of numbers.
  • randstr(length) returns a random string of length alphanumeric characters.
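The sketch below exercises the three new functions; the literal arguments are illustrative only:

    -- dayname: three-letter abbreviation for the day of the week of a date.
    -- uniform: random number between 0 and 10; pass the optional seed for repeatable results.
    -- randstr: random alphanumeric string of the given length.
    SELECT dayname(DATE'2025-01-23') AS day_abbrev,
           uniform(0, 10) AS random_number,
           randstr(8) AS random_string;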
  • Named parameter invocation for more functions

The following functions support named parameter invocation:

Bug fixes

  • Nested types now properly accept NULL constraints

This release fixes a bug affecting some Delta generated columns of nested types such as STRUCT. These columns would sometimes incorrectly reject expressions based on the NULL or NOT NULL constraints of nested fields.
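A hypothetical sketch of the previously affected pattern, where a generated column reads a NOT NULL nested field:

    -- A Delta table whose generated column g is derived from a NOT NULL
    -- field of a STRUCT column. Before the fix, valid definitions like
    -- this one were sometimes incorrectly rejected.
    CREATE TABLE example_nested (
      s STRUCT<a: INT NOT NULL>,
      g INT GENERATED ALWAYS AS (s.a)
    );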

January 15, 2025

The following updates were released during the week of January 15, 2025.

User interface updates

SQL editor

The new SQL editor (Public Preview) now has the following features:

  • Download naming: Downloaded outputs are now named after the query.
  • Font size adjustments: Quickly adjust font size in the SQL editor using Alt + and Alt - for Windows/Linux, or Opt + and Opt - for macOS.
  • @Mentions in comments: Mention specific users with @ in comments. Mentioned users will receive email notifications.
  • Improved tab switching: Tab switching performance is up to 80% faster for loaded tabs and 62% faster for unloaded tabs.
  • See warehouse details: SQL Warehouse size is now visible in the compute selector without extra clicks.
  • Edit parameter values: Use Ctrl + Enter for Windows/Linux, or Cmd + Enter for macOS, to run a query while editing a parameter value.
  • Retain query results in version history: Query results are now stored with version history.

Known issues

  • Reads from data sources other than Delta Lake in multi-cluster load balanced SQL endpoints can be inconsistent.
  • Delta tables accessed in Databricks SQL upload their schema and table properties to the configured metastore. If you are using an external metastore, you will be able to see Delta Lake information in the metastore. Delta Lake tries to keep this information as up-to-date as possible on a best-effort basis. You can also use the DESCRIBE <table> command to ensure that the information is updated in your metastore.
  • Databricks SQL does not support zone offsets like 'GMT+8' as session time zones. The workaround is to use a region-based time zone (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) like 'Etc/GMT+8' instead, as shown below. See SET TIME ZONE for more information about setting time zones.
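A minimal sketch of the workaround:

    -- Zone offsets such as 'GMT+8' are not supported as session time zones.
    -- Use a region-based IANA time zone instead.
    SET TIME ZONE 'Etc/GMT+8';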

Frequently asked questions (FAQ)

Use the following list to learn the answers to common questions.

How are Databricks SQL workloads charged?

Databricks SQL workloads are charged according to the Standard Jobs Compute SKU.

Where do SQL warehouses run?

Classic and pro SQL warehouses are created and managed in your Azure account. SQL warehouses manage SQL-optimized clusters automatically in your account and scale to match end-user demand.

Serverless SQL warehouses, on the other hand, use compute resources in your Databricks account. They simplify SQL warehouse configuration and usage and accelerate launch times. The serverless option is available only if it has been enabled for the workspace. For more information, see Serverless compute plane.

Can I use SQL warehouses from a notebook in the same workspace?

Yes. To learn how to attach a notebook to a SQL warehouse, see Use a notebook with a SQL warehouse.

I have been granted access to data using a cloud provider credential. Why can’t I access this data in Databricks SQL?

In Databricks SQL, all access to data is subject to data access control, and an administrator or data owner must first grant you the appropriate privileges.
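As a hypothetical sketch, an administrator or data owner could grant read access like this (the table and user names are placeholders):

    -- Grant read access on a table to a specific user.
    GRANT SELECT ON TABLE main.default.orders TO `user@example.com`;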