Databricks SQL release notes
This article lists new Databricks SQL features and improvements, along with known issues and FAQs.
Release process
Databricks releases updates to the Databricks SQL web application user interface on an ongoing basis, with all users getting the same updates rolled out over a short period of time.
In addition, Databricks typically releases new SQL warehouse compute versions regularly. Two channels are always available: Preview and Current.
Note
Releases are staged. Your Databricks account might not be updated with a new SQL warehouse version or Databricks SQL feature until a week or more after the initial release date.
Channels
Channels let you choose between the Current SQL warehouse compute version or the Preview version. Preview versions let you try out functionality before it becomes the Databricks SQL standard. Take advantage of preview versions to test your production queries and dashboards against upcoming changes.
Typically, a preview version is promoted to the current channel approximately two weeks after being released to the preview channel. Some features, such as security features, maintenance updates, and bug fixes, may be released directly to the current channel. From time to time, Databricks may promote a preview version to the current channel on a different schedule. Each new version will be announced in the following sections.
To learn how to switch an existing SQL warehouse to the preview channel, see Preview Channels. The features listed in the user interface updates sections are independent of the SQL Warehouse compute versions described in the Fixed issues section of the release notes.
Available Databricks SQL versions
Current channel: Databricks SQL version 2024.40
See features in 2024.40.
November 13, 2024
Legacy dashboards:
Resolved an issue where templated tooltips were not displaying detailed content for dual-axis and multi-field axis charts.
November 6, 2024
Human-readable schedule support for Databricks SQL streaming tables and materialized views
Users can now start, create, and alter schedules for streaming tables and materialized views using human-readable syntax instead of CRON scheduling. See ALTER MATERIALIZED VIEW, ALTER STREAMING TABLE, CREATE MATERIALIZED VIEW, and CREATE STREAMING TABLE.
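As a minimal sketch of the human-readable syntax (the table, view, and column names are hypothetical; see the CREATE and ALTER pages linked above for the full grammar):

```sql
-- Create a materialized view that refreshes every hour.
CREATE MATERIALIZED VIEW sales_summary
SCHEDULE EVERY 1 HOUR
AS SELECT region, SUM(amount) AS total_amount
   FROM sales
   GROUP BY region;

-- Later, change the refresh cadence to daily.
ALTER MATERIALIZED VIEW sales_summary
ALTER SCHEDULE EVERY 1 DAY;
```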
Streaming tables now support time travel queries
You can now use time travel to query previous table versions based on timestamp or table version (as recorded in the transaction log). You may need to refresh your streaming table before using time travel queries. See What is Delta Lake time travel?.
Time travel queries are not supported for materialized views.
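For example, assuming a streaming table named `events_stream` that has been refreshed at least once, a time travel query might look like:

```sql
-- Query a previous version by timestamp or by version number
-- from the transaction log (hypothetical table name).
SELECT * FROM events_stream TIMESTAMP AS OF '2024-11-01T00:00:00Z';
SELECT * FROM events_stream VERSION AS OF 5;
```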
October 31, 2024
User interface updates
New SQL editor (Public Preview)
You can now run the active SQL query using the keyboard shortcut Command (or Ctrl) + Shift + Enter.
The parameters input area now shows a scrollbar when the text extends outside of the display window.
Fixed an issue that prevented the query profile details page from opening fully.
You can now rename queries by typing the new name into the tab title.
The Schedule button is now disabled for queries that have never been saved before.
October 24, 2024
Release notes for AI/BI tools
The release notes for AI/BI dashboards and AI/BI Genie have moved to AI/BI release notes. Future releases and updates will be documented there.
October 17, 2024
Notification destinations are now generally available
You can create and configure notification destinations that workspace users can add to certain workflows, like alerts, Databricks jobs, and AI/BI dashboard schedules, to send emails or webhooks when an event runs. See Manage notification destinations.
October 10, 2024
Materialized views and streaming tables are now generally available on Databricks SQL
Streaming tables enable incremental ingestion from cloud storage and message queues, while materialized views are pre-computed views that are automatically and incrementally updated as new data arrives. See Use materialized views in Databricks SQL and Load data using streaming tables in Databricks SQL.
Query insights
The new columns query_source, executed_as, and executed_as_user_id have been added to the query history system table. See Query history system table reference.
October 3, 2024
User interface updates
The features listed in this section are independent of the SQL warehouse compute versions described above.
Catalog Explorer
AI-generated comments are now supported for catalogs, schemas, volumes, models, and functions, and users can use the inline chat Assistant to help edit their comments.
SQL AI functions
The vector_search() function is now available in Public Preview. See vector_search function.
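A sketch of a call, with a hypothetical index name; the exact argument names should be confirmed against the vector_search function page linked above:

```sql
-- Retrieve the five most similar rows from a vector search index
-- (index name and argument names are assumptions for illustration).
SELECT * FROM vector_search(
  index => 'main.default.docs_index',
  query => 'how do I create a warehouse?',
  num_results => 5);
```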
September 11, 2024
User interface updates
The features listed in this section are independent of the SQL Warehouse compute versions described above.
SQL editor
You can now use named parameter marker syntax in the SQL editor. Named parameter marker syntax can be used across the SQL editor, notebooks, and AI/BI dashboards. See Work with query parameters.
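A short sketch of the named parameter marker syntax, using a hypothetical table; values are supplied through the editor's parameter input area:

```sql
-- :start_date and :end_date are named parameter markers.
SELECT * FROM trips
WHERE pickup_date BETWEEN :start_date AND :end_date;
```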
Queries and legacy dashboards
For SQL queries and legacy dashboards, deleted items no longer appear in the listing pages. Find deleted items in the workspace trash folder. Workspace admins can view deleted items in all users’ trash folders.
September 5, 2024
Changes in 2024.40
Databricks SQL version 2024.40 includes the following behavioral changes, new features, and improvements.
Behavioral changes
Change to the default schema binding mode for views
Views now adapt to schema changes in the underlying query by using schema compensation with regular casting rules. This is a change from the previous default of BINDING mode, which raised errors when a safe cast could not be performed when referencing the view. See CREATE VIEW and cast function.
Disallow using the undocumented ! syntax instead of NOT outside boolean expressions
With this release, the use of ! as a synonym for NOT outside of boolean expressions is no longer allowed. For example, statements such as CREATE ... IF ! EXISTS, IS ! NULL, a ! NULL column or field property, ! IN, and ! BETWEEN must be replaced with CREATE ... IF NOT EXISTS, IS NOT NULL, a NOT NULL column or field property, NOT IN, and NOT BETWEEN.
This change ensures consistency, aligns with the SQL standard, and makes your SQL more portable.
The boolean prefix operator ! (for example, !is_mgr or !(true AND false)) is unaffected by this change.
Disallow undocumented column definition syntax in views
Databricks supports CREATE VIEW with named columns and column comments. Previously, the specification of column types, NOT NULL constraints, or DEFAULT values was allowed. With this release, you can no longer use this syntax.
This change ensures consistency, aligns with the SQL standard, and supports future enhancements.
Adding a CHECK constraint on an invalid column now returns the UNRESOLVED_COLUMN.WITH_SUGGESTION error class
To provide more useful error messages, in Databricks Runtime 15.3 and above, an ALTER TABLE ADD CONSTRAINT statement that includes a CHECK constraint referencing an invalid column name returns the UNRESOLVED_COLUMN.WITH_SUGGESTION error class. Previously, an INTERNAL_ERROR was returned.
New features and improvements
Enable UniForm Iceberg using ALTER TABLE
You can now enable UniForm Iceberg on existing tables without rewriting data files. See Enable by altering an existing table.
UTF-8 validation functions
This release introduces the following functions for validating UTF-8 strings:
is_valid_utf8 verifies whether a string is a valid UTF-8 string.
make_valid_utf8 converts a potentially invalid UTF-8 string to a valid UTF-8 string using substitution characters.
validate_utf8 raises an error if the input is not a valid UTF-8 string.
try_validate_utf8 returns NULL if the input is not a valid UTF-8 string.
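A quick comparison of the four functions on valid and invalid input; the binary-literal arguments assume these functions accept BINARY as well as STRING input:

```sql
-- x'80' is a lone continuation byte, i.e. invalid UTF-8.
SELECT is_valid_utf8('Spark');     -- true
SELECT is_valid_utf8(x'80');       -- false
SELECT make_valid_utf8(x'80');     -- substitution character (U+FFFD)
SELECT try_validate_utf8(x'80');   -- NULL
SELECT validate_utf8(x'80');       -- raises an error
```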
to_avro and from_avro functions
The to_avro and from_avro functions allow conversion of SQL types to Avro binary data and back.
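For example, a round trip of an integer through Avro binary might look like the following; the schema string is an assumption based on the standard Avro JSON schema format:

```sql
-- Encode 42 as Avro binary, then decode it back to an INT.
SELECT from_avro(to_avro(42), '{"type":"int"}');
```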
try_url_decode function
This release introduces the try_url_decode function, which decodes a URL-encoded string. If the string is not in the correct format, the function returns NULL instead of raising an error.
Optionally allow the optimizer to rely on unenforced foreign key constraints
To improve query performance, you can now specify the RELY keyword on FOREIGN KEY constraints when you CREATE or ALTER a table.
Support for dropping the check constraints table feature
You can now drop the checkConstraints table feature from a Delta table using ALTER TABLE table_name DROP FEATURE checkConstraints.
Parallelized job runs for selective overwrites
Selective overwrites using replaceWhere now run jobs that delete data and insert new data in parallel, improving query performance and cluster utilization.
Improved performance for change data feed with selective overwrites
Selective overwrites using replaceWhere on tables with change data feed no longer write separate change data files for inserted data. These operations use a hidden _change_type column present in the underlying Parquet data files to record changes without write amplification.
Improved query latency for the COPY INTO command
This release includes a change that improves the query latency for the COPY INTO command. This improvement is implemented by making the loading of state by the RocksDB state store asynchronous. With this change, you should see an improvement in start times for queries with large states, such as queries with a large number of already ingested files.
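Two of the items above lend themselves to short sketches; the table, column, and constraint names are hypothetical:

```sql
-- try_url_decode returns NULL on malformed input instead of erroring.
SELECT try_url_decode('a%20b');   -- 'a b'
SELECT try_url_decode('a%ZZ');    -- NULL

-- RELY lets the optimizer exploit an unenforced foreign key constraint.
ALTER TABLE orders
  ADD CONSTRAINT fk_customer FOREIGN KEY (customer_id)
  REFERENCES customers (id) RELY;
```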
August 22, 2024
Visualizations
For grouped and multi-field configurations, tooltips now show totals when you hover over chart elements.
August 15, 2024
Visualizations
Fixed an issue where row numbers in table visualizations didn’t update after changing the page size.
Data discovery
The ability to expand and collapse nested complex column types in Unity Catalog tables is now supported.
August 1, 2024
Visualizations:
Table sorting is now preserved when data changes due to filtering.
SQL Editor:
Increased readability by adding additional padding between the last line of a query and the result output.
July 25, 2024
Databricks REST API:
APIs for managing queries, alerts, data sources, and permissions have changed. The legacy version will continue to be supported for six months. This transition period is intended to give you sufficient time to migrate your applications and integrations to the new version before the older version is phased out. See Update to the latest Databricks SQL API version.
July 18, 2024
User interface updates
Catalog explorer:
A new catalog configuration wizard is now available for setting up workspace bindings, catalog privileges, and metadata when creating a catalog.
SQL Warehouse monitoring:
CAN MONITOR permission is now generally available. It allows privileged users to monitor SQL warehouses, including the associated query history and query profiles. See SQL warehouse ACLs.
Changes in 2024.35
Disable column mapping with drop feature
You can now use DROP FEATURE to disable column mapping on Delta tables and downgrade the table protocol. See Disable column mapping.
Variant type syntax and functions in Public Preview
Built-in Apache Spark support for working with semi-structured data as the VARIANT type is now available in Spark DataFrames and SQL. See Query variant data.
Variant type support for Delta Lake in Public Preview
You can now use VARIANT to store semi-structured data in tables backed by Delta Lake. See Variant support in Delta Lake.
Support for different modes of schema evolution in views
CREATE VIEW and ALTER VIEW now allow you to set a schema binding mode, enhancing how views handle schema changes in underlying objects. This feature enables views to either tolerate or adapt to schema changes in the underlying objects. It addresses changes in the query schema resulting from modifications to object definitions.
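A brief sketch of the clause, with a hypothetical view and table; the exact set of binding modes is described in the CREATE VIEW documentation:

```sql
-- WITH SCHEMA COMPENSATION tolerates safe type changes in the
-- underlying table by applying regular casting rules.
CREATE OR REPLACE VIEW v_orders
WITH SCHEMA COMPENSATION
AS SELECT id, amount FROM orders;
```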
Performance improvement for some window functions
This release includes a change that improves the performance of some Spark window functions, specifically functions that do not include an ORDER BY clause or a window_frame parameter. In these cases, the system can rewrite the query to run it using an aggregate function. This change allows the query to run faster by using partial aggregation and avoiding the overhead of running window functions. The Spark configuration parameter spark.databricks.optimizer.replaceWindowsWithAggregates.enabled controls this optimization and is set to true by default. To turn this optimization off, set spark.databricks.optimizer.replaceWindowsWithAggregates.enabled to false.
Support for the try_mod function added
This release adds support for the PySpark try_mod() function. This function supports the ANSI SQL-compatible calculation of the integer remainder from dividing two numeric values. If the divisor argument is 0, the try_mod() function returns null instead of throwing an exception. You can use the try_mod() function instead of mod or %, which throw an exception if the divisor argument is 0 and ANSI SQL is enabled.
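The same function is available in SQL, for example:

```sql
SELECT try_mod(10, 3);   -- 1
SELECT try_mod(10, 0);   -- NULL (mod and % raise an error under ANSI SQL)
```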
User interface updates
SQL Editor:
The inline assistant is now available in the SQL editor. Click the assistant icon in the editor box to toggle the input. Type a question or comment in English, then press Enter (not Shift+Enter, which runs a query) to generate a response with a diff view directly in the editor.
Platform:
An API for notification destinations is now available. You can now programmatically manage webhook and email destinations for your alerts and job run notifications. See Notification Destinations.
Changes in 2024.30
Lakehouse Federation is generally available (GA)
Lakehouse Federation connectors across the following database types are now generally available (GA):
MySQL
PostgreSQL
Amazon Redshift
Snowflake
Microsoft SQL Server
Azure Synapse (SQL Data Warehouse)
Databricks
This release also introduces the following improvements:
Support for single sign-on (SSO) authentication in the Snowflake and Microsoft SQL Server connectors.
Stable egress IP support in serverless compute environments. See Step 1: Create a network connectivity configuration and copy the stable IPs.
Support for additional pushdowns (string, math, miscellaneous functions).
Improved pushdown success rate across different query shapes.
Additional pushdown debugging capabilities:
The EXPLAIN FORMATTED output displays the pushed-down query text.
The query profile UI displays the pushed-down query text, federated node identifiers, and JDBC query execution times (in verbose mode). See View system-generated federated queries.
DESCRIBE HISTORY now shows clustering columns for tables that use liquid clustering
When you run a DESCRIBE HISTORY query, the operationParameters column shows a clusterBy field by default for CREATE OR REPLACE and OPTIMIZE operations. For a Delta table that uses liquid clustering, the clusterBy field is populated with the table’s clustering columns. If the table does not use liquid clustering, the field is empty.
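For example, with a hypothetical liquid-clustered table:

```sql
-- The clusterBy field in operationParameters will list (ts)
-- for the CREATE OR REPLACE entry in the history output.
CREATE OR REPLACE TABLE events (id BIGINT, ts TIMESTAMP)
CLUSTER BY (ts);

DESCRIBE HISTORY events;
```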
Support for primary and foreign keys is generally available
Support for primary and foreign keys in Databricks Runtime is generally available. The GA release includes the following changes to the privileges required to use primary and foreign keys:
To define a foreign key, you must have the SELECT privilege on the table with the primary key that the foreign key refers to. You do not need to own the table with the primary key, which was previously required.
Dropping a primary key using the CASCADE clause does not require privileges on the tables that define foreign keys that reference the primary key. Previously, you needed to own the referencing tables.
Dropping a table that includes constraints now requires the same privileges as dropping tables that do not include constraints.
To learn how to use primary and foreign keys with tables or views, see CONSTRAINT clause, ADD CONSTRAINT clause, and DROP CONSTRAINT clause.
Liquid clustering is GA
Support for liquid clustering is now generally available using Databricks Runtime 15.2 and above. See Use liquid clustering for Delta tables.
Type widening is in Public Preview
You can now enable type widening on tables backed by Delta Lake. Tables with type widening enabled allow changing the type of columns to a wider data type without rewriting underlying data files. See Type widening.
Schema evolution clause added to SQL merge syntax
You can now add the WITH SCHEMA EVOLUTION clause to a SQL merge statement to enable schema evolution for the operation. See Schema evolution syntax for merge.
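A minimal sketch, with hypothetical source and target tables; the clause lets the merge add columns that exist only in the source:

```sql
MERGE WITH SCHEMA EVOLUTION INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```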
Vacuum inventory support
You can now specify an inventory of files to consider when running the VACUUM command on a Delta table. See the OSS Delta docs.
Support for Zstandard compression functions
You can now use the zstd_compress, zstd_decompress, and try_zstd_decompress functions to compress and decompress BINARY data.
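For example, assuming the function names zstd_compress, zstd_decompress, and try_zstd_decompress as documented:

```sql
-- Round-trip a string cast to BINARY through Zstandard compression.
SELECT zstd_decompress(zstd_compress(CAST('hello' AS BINARY)));

-- try_zstd_decompress returns NULL for invalid compressed input.
SELECT try_zstd_decompress(x'00');
```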
Query plans in the SQL UI now correctly display PhotonWriteStage
When displayed in the SQL UI, write commands in query plans incorrectly showed PhotonWriteStage as an operator. With this release, the UI is updated to show PhotonWriteStage as a stage. This is a UI change only and does not affect how queries are run.
User interface updates
API support:
You can now manage notification destinations using the REST API. See Notification destinations.
June 27, 2024
Row Filters and Column Masks in Databricks SQL materialized views and streaming tables are Public Preview
Row filters and column masks in Databricks SQL materialized views and streaming tables are Public Preview. The Public Preview release includes the following changes:
You can add row filters and column masks to a Databricks SQL materialized view or streaming table.
You can define Databricks SQL materialized views or streaming tables on tables that include row filters and column masks.
User interface updates
Visualizations:
Improved interactivity in displaying tooltips when hovering over pie, scatter, and heatmap charts with many data points.
Catalog Explorer:
A revamped Catalog Explorer UI makes it easier to discover and favorite recent Unity Catalog assets from the Quick Access view. The navigation experience has also been simplified, allowing you to explore compute, storage, credentials, connections, DBFS, and management details using the Settings in the upper-left corner of the screen. Delta Sharing, Clean Rooms, and External Data now have dedicated pages.
June 6, 2024
Fix for Databricks SQL materialized views and streaming tables
The issue causing ALTER SCHEDULE queries on Databricks SQL materialized views and streaming tables to take effect only after the next REFRESH operation has been fixed. Now, ALTER SCHEDULE queries are applied immediately. See Schedule materialized view refreshes.
Materialized views and streaming tables in Databricks SQL are Public Preview
Materialized views and streaming tables in Databricks SQL are Public Preview and available to all customers. The public preview release includes the following changes:
REFRESH of materialized views and streaming tables in Databricks SQL is now synchronous by default. See REFRESH (MATERIALIZED VIEW or STREAMING TABLE).
Errors that occur during a refresh operation of a Databricks SQL materialized view or streaming table are returned in the SQL Editor.
To learn how to use materialized views and streaming tables in Databricks SQL, see Use materialized views in Databricks SQL and Load data using streaming tables in Databricks SQL.
User interface updates
Dashboards:
Account users can now download visualization data from published dashboards.
Unpublished dashboards can now be published using the Draft/Publish dropdown near the top of a dashboard.
Fixed an issue where parameters named limit were not detected.
Dashboards now appear in the side navigation for AWS GovCloud.
Visualizations:
Improved box-plot rendering in dark mode.
Query insights:
For all notebooks attached to SQL warehouses, you can access the query profile by clicking See performance under the cell that contains the query. If the cell includes multiple queries, a link to the query profile is provided for each statement.
May 30, 2024
New permission level for SQL warehouses
Can monitor permission allows users to monitor SQL warehouses, including the associated query history and query profiles. The Can monitor permission is now in Public Preview. See SQL warehouse ACLs.
May 23, 2024
User interface updates
You can now select multiple items in the workspace to move or delete. When multiple objects are selected, an action bar appears and provides options to move or delete items. Additionally, you can select multiple items using your mouse and drag them to a new location. Existing permissions on objects still apply during bulk move and delete operations.
You can now mark Unity Catalog assets as favorites in the Catalog Explorer and Schema Browser. This includes catalogs, schemas, tables, models, volumes, and functions. Unity Catalog assets that you mark as favorites are easily accessible from the Databricks homepage.
Dashboard updates:
Dual-axis combo charts now correctly display bar legends on the right axis and line legends accordingly.
Dual-axis charts now correctly show labels on bars.
Visualizations updates:
The table editor’s conditional format labels for if and then now support dark mode.
The redundant open link icon has been removed from the table editor’s format tooltips.
The default font color’s label in the table editor now aligns automatically.
May 16, 2024
Rollout schedule
Preview rollout for 2024.25: Completed May 1st
Current rollout for 2024.25: Between May 14th and May 21st
Note
An upgrade to the pandas Python library (version 2.0.3) caused a breaking change in Databricks SQL version 2024.20. Databricks did not roll out version 2024.20 to the current channel. Instead, the preview channel was upgraded to 2024.25 on May 1, 2024. The current channel rollout goes directly from 2024.15 to 2024.25.
Changes in 2024.25
Data governance
Credential passthrough and Hive metastore table access controls are deprecated.
Credential passthrough and Hive metastore table access controls are legacy data governance models. Upgrade to Unity Catalog to simplify the security and governance of your data by providing a central place to administer and audit data access across multiple workspaces in your account. See What is Unity Catalog?.
Support for credential passthrough and Hive metastore table access controls will be removed in an upcoming DBR version.
SQL language features
The * (star) clause is now supported in the WHERE clause.
You can now use the star (*) clause in the WHERE clause to reference all columns from the SELECT list.
For example, SELECT * FROM VALUES(1, 2) AS T(a1, a2) WHERE 1 IN(T.*).
Support for Cloudflare R2 storage to avoid cross-region egress fees since 2024.15
You can now use Cloudflare R2 as cloud storage for data registered in Unity Catalog. Cloudflare R2 is intended primarily for Delta Sharing use cases in which you want to avoid the data egress fees charged by cloud providers when data crosses regions.
Cloudflare R2 storage supports all of the Databricks data and AI assets supported in AWS S3.
See Use Cloudflare R2 replicas or migrate storage to R2 and Create a storage credential for connecting to Cloudflare R2.
User interface updates
The features listed in this section are independent of the SQL Warehouse compute versions described above.
Data discovery updates:
The Hive metastore to Unity Catalog update wizard supports upgrading Hive metastore managed tables using all-purpose compute or SQL warehouses. Updating more than 20 tables creates a new notebook that contains the SYNC and ALTER TABLE commands that perform the conversion.
Dashboard updates:
Dual-axis functionality is now available for Area, Bar, Line, and Scatter chart types.
When you enable a dual-axis chart, the axis title and range is no longer copied to the secondary axis.
The last field identified in the visualization configuration is automatically relocated to the right-side y-axis.
May 9, 2024
SQL Editor fixes:
The admin setting Results table clipboard features now applies to the SQL editor’s New result table.
Dashboard improvements:
Query-based parameters allow authors to define a list of selectable values that viewers can use as parameters for other visualizations on a dashboard canvas. See Use query-based parameters.
Column order in files downloaded from a table widget is now preserved.
The table editor now includes hover tooltips that display the names of columns.
When switching from other visualization types to a histogram, information encoding is now better preserved.
Dashboard fixes:
Fixed an issue where a single grid height filter displayed an unnecessary overflow scrollbar.
Fixed an issue that caused incorrectly rendered visualizations on published dashboards where a referenced dataset column was deleted.
May 2, 2024
Serverless SQL Warehouse support expanded: Serverless SQL warehouses are now available in the following regions:
ca-central-1
ap-northeast-2
See Features with limited regional availability
Dashboard updates:
Queries and visualizations can now be copied to a new dashboard from SQL editor. You can still add visualizations to legacy dashboards from the SQL editor. See Edit, download, or add to a dashboard.
Dashboards will now maintain a 24-hour result cache to optimize initial loading times. See Dataset optimization and caching.
Bar charts with categorical X and quantitative Y are now sorted in Databricks Assistant responses.
Corrected a migration issue with legacy histogram COUNT(*) charts to ensure accurate migration.
Implemented the ability to mix numeric types and date types in a filter widget.
When creating charts, Databricks Assistant now automatically suggests relevant columns as you type.
Visualization updates:
User-selected color for tables now persists across light and dark modes in legacy charts.
Data truncation logic has been improved to enhance performance in combo, pie, heatmap, and histogram charts.
A tick mark is now always displayed at the top of a quantitative axis for basic charts.
April 23, 2024
UI updates:
For all Share dialogs in the UI, the All Users group has been renamed to All Workspace Users. The new name more accurately reflects the scope of the group, which has always included users assigned to the workspace. No change is made to group membership as part of this rename.
Dashboard improvements:
When a dashboard’s SQL warehouse is starting, a dialog appears to explain the wait time.
Scroll position is preserved when switching between the Canvas and Data tabs.
Cloning a legacy dashboard to create a Lakeview dashboard now supports some parameter conversion. See Adjust legacy parameters.
Relative dates, such as Today, Tomorrow, and 7 days ago, are now supported for date and date time parameters.
Number range sliders can be added as filters on a dashboard.
Histograms can now display disaggregated data.
Scatter plots now support size encoding.
Dashboard fixes:
Temporal color encoding can now change color assignments correctly.
Visualization updates:
Custom tooltip formats now function correctly for multi-axis charts.
The New charts preview tag is removed when users have not changed the toggle in the past 14 days.
April 18, 2024
Lakeview dashboards are generally available
Lakeview dashboards are now the default dashboarding tool. They have been renamed as Dashboards in the UI. Databricks SQL dashboards are now called Legacy dashboards. The names of the related API tools have not changed.
Dashboard improvements:
Audit logs are available for Lakeview dashboards. See Dashboards events.
Data downloaded from dashboards respect applied parameters.
Databricks Assistant is enabled on the Data tab without adding datasets first.
Stacked bar charts with multiple Y fields can support sorting the X-axis based on the sum of Y-axis values.
Toggle between Linear and Log(Symmetric) scale functions in visualization axis menus.
The default size of the filter widget is now more compact.
The initial load time for the text-entry filter widget has been reduced.
Improved automatic chart conversions when migrating from legacy dashboards.
Dashboard fixes:
The restricted viewing settings warning does not show if the dashboard has been shared with others.
The error messages in the Data tab SQL editor are now dark mode enabled.
User interface updates
The features listed in this section are independent of the SQL warehouse compute versions described above.
Improvements:
The tooltips on stacked charts now display the stack value and percentage by default.
The tooltips for multi-axis charts now highlight the hovered item.
Table visualizations for Databricks SQL now adapt to a new query result’s data type when edited in the SQL editor.
The Catalog Explorer’s Query History table shows a tree-like view for Query Source attribution. You can use this to see which entities have triggered the query statement to run.
April 11, 2024
User interface updates
The features listed in this section are independent of the SQL warehouse compute versions described above.
Improvements:
You can now group by percentage when creating visualizations in Databricks SQL and notebooks.
For new charts (in Public Preview), you can zoom in along a single axis by clicking and dragging in a straight line parallel to the axis.
The Unity Catalog shared cluster Allowlist UI is now generally available. You can access it on the Metastore details page in Catalog Explorer. See How to add items to the allowlist.
Forms to create and edit external locations now open as a full page. They include the option to include a storage credential.
Fixes:
Corrected an issue for Histogram charts where negative values were erroneously marked as positive.
April 4, 2024
User interface updates
The features listed in this section are independent of the SQL warehouse compute versions described above.
Improvements:
Improvements to Histogram charts on Lakeview dashboards.
Added support for labels.
Bin settings are now retained when switching between different fields.
The samples gallery on the dashboard listing page now creates Lakeview dashboards. See Tutorial: Use sample dashboards.
Right-clicking on the border of a widget on a Lakeview dashboard opens a context menu.
The left-side navigation bar is retained for workspace users viewing published Lakeview dashboards.
Filter selections are retained when navigating between published and draft Lakeview dashboards.
Column names can now be inserted into the SQL editor when editing a query from the Data tab in a draft Lakeview dashboard.
Replacing a Lakeview dashboard keeps the existing dashboard name and replaces the contents.
Switching visualizations between heat maps and other chart types now preserves the relevant fields better.
Fixes:
Bar charts with color encodings now correctly restrict adding multiple Y-axis fields.
Resolved an issue where the Download as PNG button was missing from some visualizations.
Corrected formatting for negative big integers that were previously missing thousands separators.
Fixed incorrect hover line placement when hovering over labels on line charts.
Changes in 2024.15
Delta updates
Delta UniForm is now generally available: UniForm is now generally available and uses the IcebergCompatV2 table feature. You can now enable or upgrade UniForm on existing tables. See Use UniForm to read Delta tables with Iceberg clients.
Recompute data skipping statistics for Delta tables: You can now recompute statistics stored in the Delta log after changing columns used for data skipping. See Specify Delta statistics columns.
SQL language updates
Declare temporary variables in a SQL session: This release introduces the ability to declare temporary variables in a session that can be set and then referred to from in queries. See Variables.
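A short sketch of session variables, with a hypothetical table name:

```sql
-- Declare, set, and reference a session variable.
DECLARE VARIABLE threshold INT DEFAULT 10;
SET VAR threshold = 25;
SELECT * FROM measurements WHERE value > threshold;
```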
Native XML file format support (Public Preview): Native XML file format support is now in Public Preview. XML file format support enables ingestion, querying, and parsing of XML data for batch processing or streaming. It can automatically infer and evolve schema and data types, supports SQL expressions like `from_xml`, and can generate XML documents. It doesn't require external jars and works seamlessly with Auto Loader, `read_files`, `COPY INTO`, and Delta Live Tables. See Read and write XML files.
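A minimal sketch of parsing an XML string with `from_xml` (the sample document and schema string are illustrative):

```sql
-- Parse an inline XML document into a STRUCT using a DDL schema string.
SELECT from_xml(
  '<book><title>Spark</title><year>2024</year></book>',
  'title STRING, year INT'
) AS parsed;
```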
Cloud Fetch is now enabled by default: Cloud Fetch is enabled by default in AWS workspaces with bucket versioning enabled. If you have bucket versioning enabled, Databricks recommends setting a lifecycle policy to remove old versions of uploaded query results automatically. See Cloud Fetch in ODBC (ODBC) and Cloud Fetch in JDBC (JDBC).
Apache Spark SQL updates
Databricks SQL 2024.15 includes Apache Spark 3.5.0. Additional bug fixes and improvements for SQL are listed in the Databricks Runtime 14.3 release notes. See Apache Spark and look for the [SQL] tag for a complete list.
User interface updates
The features listed in this section are independent of the SQL warehouse compute versions described above.
A new overview tab on the entity page of Catalog Explorer shows important metadata like file size, data source, owner, table schema, and comments.
Lakeview dashboard updates:
Lakeview dashboards now support parameters. Authors can add parameters to dataset queries from the Data tab and then set parameters on the canvas using single-value selectors and date-pickers. See Work with dashboard parameters.
Lakeview dashboards are now supported in the workspace permissions API. See PATCH /api/workspace/workspace/updatepermissions in the REST API reference.
Control widgets on the canvas have been renamed to Filter widgets.
Combo charts no longer allow disaggregated fields on the x-axis.
The Copy link button in the Share dialog now includes parameters stored in the URL.
Widgets on published dashboards no longer show borders when hovering.
Resolved an issue where the Databricks Assistant and Download PNG buttons overlap with chart visuals.
Switching visualizations between heatmaps and other chart types now better preserves the relevant fields.
Bar charts with color encodings now correctly restrict adding multiple Y-axis fields.
March 21, 2024
Unity Catalog model lineage is now in Public Preview.
The table view in Catalog Explorer now has an Overview tab to describe its primary metadata.
SQL warehouses for notebooks, now generally available, allow you to take advantage of fully managed, instant, and scalable compute for your SQL workloads in the rich, collaborative authoring environment of a notebook. See Use a notebook with a SQL warehouse.
The following fixes and improvements apply to Lakeview dashboards:
Expanded API support for Lakeview adds the ability to create, get, update, and trash dashboards. See Lakeview in the REST API reference.
Added a refresh button for the Catalog browser on the Data tab.
Lakeview dashboards now appear before Dashboards in the New menu in the workspace sidebar. On the dashboard listing page, the Lakeview dashboards tab appears to the left of the Dashboards tab.
The Databricks Assistant experience for Lakeview has been updated with an input box and suggestions to improve discoverability and help users understand the prompts they can pose.
Lakeview visualizations now support median aggregations.
Updated the color picker in the Lakeview dashboard visualization editor for a more streamlined user experience when creating tables.
Improved pie chart migration to exclusively support scenarios with angle or color definitions.
Fixed a bug that prevented charts from being grouped by fields named `count`. Charts can now be grouped by such fields.
For bar charts, group and stack layout controls are now hidden when not applicable to the user-selected configuration.
March 14, 2024
For Lakeview dashboards:
Histograms now support custom categorical colors.
Heatmaps now support quantitative scales.
Titles and descriptions are retained when switching between visualization types, including Combo charts.
You can now open the underlying dataset associated with a draft dashboard widget by right-clicking on it. The dataset opens in the Data tab.
New charts now apply aliases and custom colors for null values in numeric columns.
New charts now render tick marks to show the top of the y-axis.
March 7, 2024
When viewing a table in Catalog Explorer, the Create button includes an option to create a Lakeview dashboard rather than a Databricks SQL dashboard.
Histograms are now available for Lakeview dashboards. Histograms are commonly used to visualize the distribution of a numeric field.
When cloning a Databricks SQL dashboard to create a Lakeview dashboard, dataset conversion issues now show as errors in the new widget on the Lakeview dashboard.
Color gradients are available when a numerical field is used for a visualization on a Lakeview dashboard.
Color gradients are now exposed in the Lakeview dashboard visualization editor when a Color by field is specified.
The title and description associated with a visualization no longer appear editable if the viewer lacks editing privileges on a draft Lakeview dashboard.
Fixed an issue where tooltips in charts with over 100 series incorrectly showed all series. Now, only the focused series is shown.
Reduced typing latency in the SQL editor by 30% through performance optimizations.
When managing queries in the SQL editor, moving a query to trash automatically closes the tab.
Fixed an issue in the SQL editor where text was accidentally selected when adjusting the side panel width.
February 29, 2024
Serverless SQL warehouse support has been added in the following regions:
ap-south-1
ap-southeast-1
ap-northeast-1
sa-east-1
eu-west-3
See Databricks clouds and regions for a complete list of supported regions.
The schema browser in Catalog Explorer now displays column primary and foreign key constraints.
The retention time shown in the Lineage tab in Catalog Explorer has been increased to one year.
Tooltips on new charts in notebooks are now always rendered inside the visualization boundary.
Learn how to programmatically manage Lakeview dashboards using the REST API. See Manage dashboards with Workspace APIs.
Lakeview dashboards now support histograms.
Improved sharing and publishing in Lakeview dashboards:
Improved share and publish dialogs, allowing safe and easy sharing to any account user.
Dashboards opened from the workspace browser show the published dashboard if it exists. Viewers can now also see details of the latest published version, including publisher, time, and credentials.
For editors, a new drop-down switcher in the Lakeview Dashboard UI allows you to quickly move between draft and published versions.
February 22, 2024
Improvements to the Sample Data tab in the Catalog Explorer table view enable you to sort columns, copy selected data to your clipboard, and view line numbers. It now better displays special values, like JSON objects, dates, numeric values, and null values.
Lakeview dashboards now support sending periodic PDF snapshots of the dashboard to workspace users and notification destinations. See Schedules and subscriptions.
The list of visualization options in the Lakeview dropdown picker is now sorted alphabetically.
When copying Databricks SQL dashboards to Lakeview dashboards, widgets that cannot be converted now show the visualization configuration picker instead of an error message.
February 15, 2024
The documentation for code-based query filters, such as `SELECT action AS 'action::filter'`, has been removed. Databricks recommends updating queries to remove this pattern.
For Lakeview dashboards, pie charts now display equal-sized slices when no angle field is specified.
Lakeview now supports combo charts, which combine bar and line charts to show two different values on the same chart.
Heatmap charts, which use color intensity to show the magnitude of the correlation between two discrete variables, are now available in Lakeview.
February 8, 2024
You can now request access when opening a link to a Lakeview dashboard you do not have permissions on.
Lakeview dashboard filters now have explicit All and None options. Authors can choose to hide the All option in single select filters.
You can now set minimum and maximum values for axes on Lakeview dashboard charts.
February 1, 2024
Databricks SQL Version 2024.10 Available
Rollout Schedule
Preview rollout for 2024.10: Between Jan 30, 2024 and Feb 5, 2024
Current rollout for 2024.10: Between Feb 13, 2024 and Feb 20, 2024
Changes in 2024.10
Fixed corrupt file handling in DML commands: The DML commands `DELETE`, `UPDATE`, and `MERGE INTO` no longer respect the read options `ignoreCorruptFiles` and `ignoreMissingFiles`. When encountering an unreadable file in a table, these commands now fail even if these options are specified.
Row-level concurrency is Generally Available and on by default: Row-level concurrency reduces conflicts between concurrent write operations by detecting changes at the row level. Row-level concurrency is only supported on tables without partitioning, which includes tables with liquid clustering. Row-level concurrency is enabled by default on Delta tables with deletion vectors enabled. See Write conflicts with row-level concurrency.
Shallow clone for Unity Catalog external tables (Public Preview): You can now use shallow clone with Unity Catalog external tables. See Shallow clone for Unity Catalog tables.
Faster multi-threaded statistics collection: Statistics collection is up to 10 times faster on small clusters when running `CONVERT TO DELTA` or cloning from Iceberg and Parquet tables. See Convert to Delta Lake and Incrementally clone Parquet and Iceberg tables to Delta Lake.
Pushdown filters in the DeltaSource on Delta files: For better utilization, partition filters on Delta table streaming queries are now pushed down to Delta before rate limiting.
User interface updates
The features listed in this section are independent of the SQL Warehouse compute versions described above.
The Admin view tab on listing pages for Databricks SQL objects (queries, dashboards, and alerts) has been removed. Workspace admin users can view all objects from their respective listing pages. See Access and manage saved queries, Legacy dashboards, and What are Databricks SQL alerts?.
The query history page displays queries from the past 24 hours by default. See Query history.
A menu option, Clone to Lakeview dashboard, has been added to the Databricks SQL dashboard UI. You can use this tool to create a new Lakeview dashboard that includes the same queries and visualizations in your existing Databricks SQL dashboards. See Clone a legacy dashboard to an AI/BI dashboard.
Bar charts in Lakeview dashboards support stacking bars to normalize to 100%.
Fixed a problem where zooming in on a published Lakeview dashboard resulted in focusing on incorrect zoom intervals.
January 24, 2024
The Lakeview dashboard canvas automatically adjusts widget placement to remove empty vertical white space between rows when possible.
Reduced whitespace between title and description text in Lakeview dashboard visualizations.
January 18, 2024
Fixed a rendering issue for visualizations where bar charts showing a single date on the x-axis resulted in a very thin bar. New chart visualizations render as expected.
The Lakeview dashboard listing page shows your dashboards by default. You can use filters on that page to access Lakeview dashboards owned by other workspace users.
January 11, 2024
Databricks SQL Queries and Dashboards APIs support changing the Run as role setting programmatically.
Lakeview supports exporting and importing dashboards as files to facilitate reproducing draft dashboards across workspaces. See Export, import, or replace a dashboard.
January 4, 2024
Introduced primary key and foreign key entity relationship diagrams in Catalog Explorer. See View the Entity Relationship Diagram.
December 21, 2023
The Lakeview Counter visualization type shows colors when comparing `BigInt` values in the main Value and Target fields.
The tooltips that appear when toggling column visibility on tables in Lakeview have been improved. They behave as expected and do not persist.
Users can now use Databricks Assistant to create visualizations in Lakeview. See Create visualizations with Databricks Assistant.
For new charts, heatmap-type charts respect the reverseY setting.
Fixed a rendering performance issue for notebooks with a large number of visualizations.
December 14, 2023
Fixed a bug where Lakeview dashboards were not appearing in the Lakeview listing page without a manual page refresh.
Use the escape key to cancel the creation of a Lakeview widget when placing it on the canvas.
Catalog Explorer now displays Vector Search indexes in the UI as part of the Mosaic AI Vector Search public preview.
December 7, 2023
User interface updates
Lakeview dashboards can be added to favorites for quick access.
Copy and paste keyboard shortcuts are supported while drafting a Lakeview dashboard. Also, the delete key removes selected widgets.
Enhanced Lakeview widget titles and descriptions to prevent clipping text during load.
Corrected visualization formatting issue where large integer values were mistakenly displayed as floats.
Fixed an issue with Databricks SQL dashboards where expanded chart views were sometimes showing blank charts.
Bar charts with quantitative fields on both X and Y axes render more legible data labels.
Fixed an issue in the SQL Editor so that tables with the word `stream` in the title no longer conflict with the reserved keyword. These tables now appear as expected in the schema browser and are not error-highlighted.
The query history page now supports column resize and column selections.
The query history page supports two new columns: Query source and Query source type.
BI options, like Tableau and Power BI, are easier to find in Catalog Explorer on eligible pages.
November 30, 2023
User interface updates
The features listed in this section are independent of the SQL Warehouse compute versions described above.
Pie charts in Lakeview can now have customized color assignments.
Visualization transformations in Lakeview are now retained when switching between compatible field types.
Added title settings for Lakeview pie chart angle channels.
The Lakeview dataset dropdown is now searchable for easier navigation.
Lakeview supports full numerical display for values under 10,000, eliminating abbreviations.
Added capability to color-code categorical date fields in Lakeview.
Lakeview users can now highlight chart legends with their cursor to copy and paste the values.
Pie charts in Lakeview now feature a label toggle option.
Standardized a default blue color across all Lakeview visualizations.
Lakeview column icons in transformations now consistently match the transformation method used.
Controls in Lakeview’s edit panel now auto-wrap for enhanced readability.
Released an enhanced color editor for Lakeview visualizations.
The controls for table font conditions in Lakeview are now wrapped to improve readability.
Improved dark mode compatibility for labels in new charts.
New charts now consistently prioritize label display inside bars.
Fixed a bug where some right-click menu actions weren’t working in the SQL Editor.
November 16, 2023
User interface updates
The features listed in this section are independent of the SQL Warehouse compute versions described above.
Databricks SQL queries, alerts, and dashboards have a new scheduler and scheduling interface.
Lakeview widgets are now easier to resize due to a larger resize trigger zone.
Workspace admins can now change the owner of a Lakeview dashboard. From edit mode on a Lakeview dashboard, click Share, then click Assign new owner.
Users can toggle labels on or off in Lakeview.
Visualizations:
New chart labels now strongly prefer being inside a bar when possible.
New chart labels now appear properly on stacked bars that are wide enough to show the whole label.
Label colors inside bar charts are now more consistent.
November 9, 2023
Changes in 2023.50:
Highlights:
You can now use named parameter invocation for SQL and Python UDFs.
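As an illustration of named-argument invocation, built-in functions with optional parameters, such as `mask`, can be called with arguments by name (the input string here is illustrative):

```sql
-- Override only the character used to mask digits, passing it by name.
SELECT mask('card-1234', digitChar => '#');
```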
SQL Language updates:
The following built-in functions have been added:
- `from_xml`: Parses an XML `STRING` into a `STRUCT`.
- `schema_of_xml`: Derives a schema from an XML `STRING`.
- `session_user`: Returns the logged-in user.
- `try_reflect`: Returns `NULL` instead of the exception if a Java method fails.
Table arguments to functions support partitioning and ordering: You can now use `PARTITION BY` and `ORDER BY` clauses to control how table arguments are passed to a function.
The following built-in functions have been enhanced:
- `mode`: Support for an optional parameter forcing a deterministic result.
- `to_char`: New support for `DATE`, `TIMESTAMP`, and `BINARY`.
- `to_varchar`: New support for `DATE`, `TIMESTAMP`, and `BINARY`.
- `array_insert()` is 1-based for negative indexes: The `array_insert` function is 1-based for both positive and negative indexes. It now inserts a new element at the end of input arrays for index -1.
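The negative-index behavior can be seen with a small example:

```sql
-- With 1-based negative indexing, index -1 appends at the end of the array.
SELECT array_insert(array(1, 2, 3), -1, 4);
```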
User interface updates
The features listed in this section are independent of the SQL warehouse compute versions described above.
Visualizations:
Tooltips for new charts show only hovered items for high-cardinality charts.
New charts automatically align dual-axis charts at zero.
Charts now rotate label and axis text at 90 degrees instead of -90 degrees.
Charts now use the label’s width to decide rotation.
Lakeview Dashboards:
Fix: Table rows in Lakeview dashboards no longer add vertical padding when only a small number of rows are returned.
Cloning a new Lakeview dashboard widget first attempts to place the clone to the right of the original, provided there is sufficient space on the canvas. If insufficient space is available, the clone is placed below the original.
You can now reassign the owner of a Lakeview dashboard via the Share dialog.
SQL Editor: Autocomplete is now less aggressive and dismisses automatically at the end of statements.
November 2, 2023
Improvements:
The following are improvements to Lakeview dashboard lineage:
Added distinct icons for tables, views, and materialized views.
Added support for vertical scrolling in the event of many upstream data sources.
Improved error message when viewer lacks permissions on the upstream object.
Clarified messaging around sample data tables and HMS data.
Added key value tags to upstream data sources.
Fix:
Fixed an issue in new charts that prevented rendering after renaming a series with boolean values.
October 26, 2023
Changes in Databricks SQL version 2023.45
Highlights:
Predictive I/O for updates is now generally available. See What is predictive I/O?
Deletion vectors are now generally available. See What are deletion vectors?
Query optimizations:
Removed the outer join when all aggregate functions are distinct. SPARK-42583
Optimized the order of filtering predicates. SPARK-40045
SQL function updates:
Added support for implicit lateral column alias resolution on `Aggregate`. SPARK-41631
Support for implicit lateral column alias in queries with Window. SPARK-42217
Support for Datasketches `HLLSketch`. See hll_sketch_agg aggregate function.
Added the `try_aes_decrypt()` function. See try_aes_decrypt function.
Support for CBC mode for `aes_encrypt()` and `aes_decrypt()`. See aes_decrypt function.
Added support for aes_encrypt IVs and AAD. SPARK-43290
Implemented bitmap functions. SPARK-44154
Added the `to_varchar` alias for `to_char`. See to_varchar function.
Added `array_compact` support. See array_compact function.
Support for the `luhn_check` UDF. See luhn_check function.
Added analyzer support of named arguments for built-in functions. SPARK-44059
Support for the `TABLE` argument parser rule for `TableValuedFunction`. SPARK-44200
`array_insert` now fails with 0 index. SPARK-43011
Added `NULL` values for `INSERT` with user-specified lists of fewer columns than the target table. SPARK-42521
Fixed the `DECODE` function returning wrong results when passed `NULL`. SPARK-41668
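As a quick sketch of one of the new functions, `luhn_check` validates a number string against the Luhn checksum algorithm (the input is illustrative):

```sql
-- Returns true when the string passes the Luhn checksum.
SELECT luhn_check('79927398713');
```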
User interface updates
Improvements:
The Lakeview SQL editor now supports keyboard shortcuts to run queries.
Filters in Lakeview now list quick relative date range selections for the last 7, 14, 28, 30, 60, and 90 days.
Periods in data field names no longer result in blank charts.
The legacy schema browser now works with Unity Catalog.
Implemented performance improvements to load only the active tab in the SQL Editor, decreasing initial load time by up to 20%.
AI-generated table comments in Catalog Explorer are now generally available.
Fixes:
Bars on temporal bar charts are now centered over the date tick mark.
Data label templates with aggregate expressions now use the proper numeric formatters.
Zooming on new charts now works with a scale set to categorical using temporal data.
New article:
Released a new article showing all of the Lakeview visualizations, including screenshots and notes on how to recreate the visualization in each screenshot. See Dashboard visualization types.
October 19, 2023
Improvements:
Drag and drop in schema browser is now available.
The Select Table modal in Lakeview dashboards makes it easier to select from all tables in a catalog or schema and now uses autocomplete search predictions as you type.
Fixes:
Legend selection tooltip instructions specify that Mac users use `cmd` and Windows users use `ctrl`.
Enhanced visibility of truncated messages in Lakeview dashboards when rendered data exceeds limits.
Charts with truncated data display consistent colors as seen in the editor.
October 12, 2023
Improvements:
Text filter values containing special characters now filter correctly in Lakeview dashboards.
New charts support `@@name` data labels on scatter plots.
Customized percentage formats apply to grouped chart tooltips in new charts.
Workspace binding extension UI is GA.
Fixes:
Improved label rendering on new charts to prevent labels from spilling outside chart boundaries.
Increased the contrast of tick and grid lines on new charts for improved visibility.
Increased the axis label spacing to improve readability on new charts.
October 5, 2023
Improvements:
Accurate tooltips have been added for publish mode actions and date lineage in Lakeview dashboards.
Conditional formatting and link templates in Lakeview table visualizations now support hidden columns.
Optimized label positioning for wide-bar temporal charts to enhance clarity in new charts.
Counter visualization in Lakeview retains its transformations even after other fields are removed, ensuring consistency.
Hovering over a series in a chart now dims the surrounding series in the tooltip to improve readability in new charts.
New charts using percentage values now display tooltips with absolute values.
Added autocomplete support for creating volumes.
Closing a non-active tab no longer switches tabs.
The run option now clearly indicates when only the highlighted text will be run.
Fixes:
Improved error message wording in Pivot Tables when the data is truncated.
Fixed a rendering error in Pivot Tables where colors were not showing when using BigInt data types.
When downloading PNGs in new charts with numerous legend items, removed the color symbol for overflow legend entries.
Lines in new charts will maintain a consistent thickness even at the topmost view boundary.
In Lakeview, if no dataset exists, the dataset picker in the Canvas is empty.
Delta Live Tables are properly detected by the SQL parser and no longer show up as invalid tables in the Schema Browser.
Tooltips were added to the sidebar.
September 28, 2023
Improvements:
Published Lakeview dashboards now have a refresh button.
Improved error messages for users who do not have access to a Lakeview dashboard.
Filter configuration in Lakeview dashboards now lists valid fields at the top of the selection list.
Downloading a chart as a PNG from a Lakeview dashboard now retains the title and description.
Delta table history in Catalog Explorer has been improved with filters for date range, user, and operation type, as well as sortable columns and inline links to associated Jobs and Notebooks.
Dark mode support added across legends, tooltips, and table visualizations.
Fix:
Filter selections are no longer cleared when refreshing a Lakeview dashboard.
September 21, 2023
Improvements:
Pivot Table rendering performance has been improved.
New DuBois pattern for lineage tabular views in the UI.
September 14, 2023
Improvement:
File names are now preserved when downloading PNGs in new chart visualizations. See New chart visualizations in Databricks.
September 7, 2023
Databricks SQL version 2023.40 available
Rollout schedule
Preview rollout for 2023.40: Between Sep 5, 2023 and Sep 11, 2023
Current rollout for 2023.40: Between Sep 18, 2023 and Sep 25, 2023
Changes in 2023.40:
Tags are now available with Unity Catalog.
Databricks Runtime returns an error if a file is modified between query planning and invocation.
Databricks ODBC/JDBC driver support.
Enable time series column labeling.
New bitmap SQL functions.
Improved encryption functions.
Unity Catalog support for `REFRESH FOREIGN`.
`INSERT BY NAME` is now supported.
Share materialized views with Delta Sharing.
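A sketch of `INSERT BY NAME`, which matches columns by name rather than by position (the table and column names are illustrative):

```sql
CREATE TABLE t (id INT, name STRING);
-- Columns may be listed in any order; they are matched by name, not position.
INSERT INTO t BY NAME SELECT 'widget' AS name, 1 AS id;
```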
User interface updates
Improvements:
New charts are now available, featuring faster render performance, beautiful colors, and improved interactivity. See New chart visualizations in Databricks.
In the graph view of Query Profile, you can now view the Join type on any node containing a join in the query plan.
Data Explorer is renamed to Catalog Explorer to recognize the fact that you can use it to work with all securable objects in Unity Catalog, not just data objects. See What is Catalog Explorer?.
The Databricks SQL Statement Execution API is now GA with Databricks SQL Version 2023.35 and above. The API allows you to submit SQL statements for execution on a Databricks SQL warehouse, check the status and fetch results, or cancel a running SQL statement execution. See Statement Execution API.
August 31, 2023
New feature:
Tagging for Unity Catalog is in Public Preview. You can use tags to simplify search and discovery of your data assets. See Apply tags to Unity Catalog securable objects.
August 24, 2023
Improvement:
Autocomplete stops suggesting recommendations after you press the spacebar.
The Schema Browser no longer sees `live` Delta Live Tables as broken tables.
August 16, 2023
Improvement:
The Catalog dropdown in the SQL editor now closes when you switch tabs. Previously, when you switched tabs, the dropdown would remain open.
August 10, 2023
Improvement:
Autocomplete now supports the new syntax for setting Unity Catalog tags. For information on commands, see SQL language reference.
August 3, 2023
Improvements:
The underlying Monaco Editor now uses version 37.1.
Autocomplete support for `SHOW ARCHIVED FILES FOR` Delta commands.
July 27, 2023
Improvements:
The SQL editor is now compatible with Windows newline characters, ensuring that query formatting works as expected in all cases.
You can open the query profile from notebook results for queries run with a SQL warehouse.
July 20, 2023
Databricks SQL version 2023.35 available
Rollout schedule
Preview rollout for 2023.35: Between Jul 18, 2023 and Jul 24, 2023
Current rollout for 2023.35: Between Jul 31, 2023 and Aug 8, 2023
Changes in 2023.35:
Enhanced reliability for `VACUUM` with shallow clone in Unity Catalog.
Support for Python UDFs in SQL.
Delta Lake UniForm for Iceberg is in Public Preview.
Delta Lake liquid clustering is in Public Preview.
Archival support for Delta Lake.
IDENTIFIER clause support.
Unity Catalog support for Python and Pandas User-Defined Functions (UDFs).
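As an illustration of the `IDENTIFIER` clause listed above, a constant string expression can supply an object name (the table name is illustrative):

```sql
-- Resolve a table name from a constant string expression.
SELECT * FROM IDENTIFIER('main.default.my_table');
```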
Improvement:
Table popularity in Catalog Explorer is available to all Unity Catalog users.
July 13, 2023
Improvement:
Unity Catalog users can now view additional table insights in Catalog Explorer such as frequently joined tables, frequent users of a given table, and frequently used notebooks and dashboards.
June 22, 2023
Public Preview:
Databricks SQL now supports large language models (LLMs) hosted on model serving endpoints. Call `ai_query()` to access your LLM. This function is only available in Public Preview on Databricks SQL Pro and Serverless. To participate in the Public Preview, submit the AI Functions Public Preview enrollment form.
June 15, 2023
New feature:
SQL tasks in Jobs are now generally available. You can orchestrate Queries, Dashboards, and Alerts from the Jobs page. See SQL task for jobs.
A new schema browser is now in Public Preview, featuring an updated UX, a For You tab, and improved filters. The schema browser is available in Databricks SQL, Catalog Explorer, and notebooks. See Browse data.
June 8, 2023
DBSQL version 2023.30 available
Changes in 2023.30
New SQL built-in functions, such as `array_prepend(array, elem)`, `try_aes_decrypt(expr, key [, mode [, padding]])`, and `sql_keywords()`.
You can now use shallow clone to create new Unity Catalog managed tables from existing Unity Catalog managed tables. See Shallow clone for Unity Catalog tables.
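Quick sketches of two of the new functions (inputs illustrative):

```sql
-- Prepend an element to the front of an array.
SELECT array_prepend(array(2, 3), 1);
-- sql_keywords() is a table-valued function listing SQL keywords.
SELECT * FROM sql_keywords() LIMIT 5;
```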
You can now use
CLONE
andCONVERT TO DELTA
with Iceberg tables that have partitions defined on truncated columns of typesint
,long
, andstring
. Truncated columns of typedecimal
are not supported.START VERSION
is now deprecated forALTER SHARE
.
June 1, 2023
Improvements:
Binary data now renders as a hex string when using the Arrow format.
In the SQL Statement API, the CSV format is now supported for the `EXTERNAL_LINKS` disposition. This allows clients to extract up to 100 GiB of data in CSV format with pre-signed URLs, whereas the `INLINE` limit for JSON is 16 MiB.
May 29, 2023
New feature:
You can now use the add data UI to load data from a cloud object storage path that’s defined as a Unity Catalog external location. For more information, see Load data using a Unity Catalog external location.
May 25, 2023
Improvements:
You can now toggle the autocompletion result panel.
You can disable the Enter key so it no longer accepts autocomplete suggestions. Under DBSQL User Settings, click Editor Settings, then New Editor settings, and turn off Enter key accepts autocomplete suggestions.
Fixes:
Sorted table headers now have colors.
Chart lines now render correctly.
May 18, 2023
Improvement:
In the SQL Statement API, the `EXTERNAL_LINKS` disposition now supports the `JSON_ARRAY` format. You can extract up to 100 GiB of data in JSON format with pre-signed URLs. The `INLINE` limit for JSON is 16 MiB.
May 11, 2023
New feature:
Schema Browser is now generally available in Catalog Explorer.
Improvements:
The on-hover table detail panel is now less sensitive.
The escape key now closes the autocomplete panel.
View definitions now have syntax highlighting in the Catalog Explorer details tab.
Fixes:
Pivot tables now correctly render on Windows devices.
Completion suggestions now properly follow the case of the first keyword.
May 4, 2023
Databricks SQL Version 2023.26 Available
Rollout Schedule
Preview rollout for 2023.26: Between April 19, 2023 and April 25, 2023
Current rollout for 2023.26: Between May 3, 2023 and May 10, 2023
Changes in 2023.26
Photon returns an error if a file is modified between query planning and execution.
New features and extended support for Predictive I/O features. See Databricks Runtime 13.0 (EoS).
Use the Databricks connector to connect to another Databricks workspace.
`CREATE TABLE LIKE` feature for Delta tables.
New metadata column fields denoting file block start and length.
New H3 geospatial functions. See H3 geospatial functions.
New SQL built-in functions. See Databricks Runtime 13.0 (EoS).
User interface updates
Improvements:
Administrators can change warehouse owners using the user interface or the API. See Manage a SQL warehouse.
Catalog Explorer now displays account service principals in user lists for assets in Unity Catalog. For example, account service principals are visible when editing privileges or changing owners in Catalog Explorer.
Custom chart labels support the ability to reference any column within the dataset.
Dashboard filters now load column names, even when using queries that don’t have catalog or schema info.
April 27, 2023
Improvements:
The SQL editor now relies on the Monaco editor for a more reliable editing experience.
The SQL history list page (Queries) now uses the Dubois Design System.
April 20, 2023
Improvements:
Introduces new pivot tables that allow you to aggregate more than 64k results.
Databricks SQL tables and visualizations now support BigInt, 38-digit decimals, and non-UTF-8 characters. For numbers, the default setting is now user-defined digit precision.
Autocomplete now suggests frequent past joins for Unity Catalog tables, powered by Unity Catalog lineage data in Databricks Runtime 12.0 and above.
Cloud Fetch is enabled by default in AWS workspaces with bucket versioning enabled. If you have bucket versioning enabled, Databricks recommends setting a lifecycle policy to automatically remove old versions of uploaded query results. See Cloud Fetch in ODBC (ODBC) and Cloud Fetch in JDBC (JDBC).
New feature:
Return text generated by a selected large language model (LLM) given the prompt with ai_generate_text. This function is only available as public preview on Databricks SQL Pro and Serverless. To participate in the public preview, populate and submit the AI Functions Public Preview enrollment form.
April 13, 2023
New feature:
The TIMESTAMP_NTZ type represents values comprising fields year, month, day, hour, minute, and second. All operations are performed regardless of time zone. See TIMESTAMP_NTZ type.
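A minimal sketch of declaring and casting to the new type (the table and column names are illustrative):

```sql
-- TIMESTAMP_NTZ stores wall-clock values with no time zone attached.
CREATE TABLE IF NOT EXISTS events (id INT, local_ts TIMESTAMP_NTZ);

-- The cast keeps the literal as-is; no session time zone conversion applies.
SELECT CAST('2023-04-13 10:30:00' AS TIMESTAMP_NTZ) AS local_ts;
```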
Improvements:
Users can now send formatted results within alerts by using the QUERY_RESULT_TABLE tag in a custom alert message.
Users can now view the file data size for Unity Catalog tables in Catalog Explorer.
April 6, 2023
Databricks SQL Version 2023.20 Available
Rollout Schedule
Preview rollout for 2023.20: Between Mar 15, 2023 and Mar 23, 2023
Current rollout for 2023.20: Between Mar 27, 2023 and Apr 3, 2023
Changes in 2023.20
Delta Lake schema evolution supports specifying source columns in merge statements.
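A hedged sketch of how this can look, assuming automatic schema evolution is enabled for merge; the table names and the new_col column are hypothetical:

```sql
-- Allow MERGE to reference source columns that are missing from the target,
-- adding them to the target schema on write.
SET spark.databricks.delta.schema.autoMerge.enabled = true;

MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.new_col = s.new_col   -- new_col added to target
WHEN NOT MATCHED THEN INSERT (id, new_col) VALUES (s.id, s.new_col);
```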
Remove all NULL elements from an array using array_compact.
To append elements to an array, use array_append.
To anonymize sensitive string values, use the mask function.
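The three new functions above can be sketched as follows (values are illustrative):

```sql
SELECT array_compact(array(1, NULL, 2, NULL, 3));  -- NULL elements removed
SELECT array_append(array('a', 'b'), 'c');         -- element appended at the end
SELECT mask('AbCd-1234');                          -- default masking of letters and digits
```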
Common error conditions now return SQLSTATE.
Invoke table-valued generator functions in the regular FROM clause of a query.
Use the from_protobuf and to_protobuf functions to exchange data between binary and struct types. See Read and write protocol buffers.
Improved consistency for Delta commit behavior for empty transactions relating to update, delete, and merge commands.
Behavior change
The lateral column alias feature introduces behavior changes during name resolution. See Behavior changes.
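With lateral column aliases, a later expression in the same SELECT list can refer to an earlier alias, which changes how some names resolve. A hedged sketch (the orders table and its columns are hypothetical):

```sql
-- total reuses the subtotal alias defined earlier in the same SELECT list.
SELECT price * quantity AS subtotal,
       subtotal * 1.08  AS total    -- refers to the alias above
FROM   orders;
```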
April 3, 2023
New feature:
The Create or modify table from file upload page now supports JSON file uploads. For more information, see Create or modify a table using file upload.
March 30, 2023
Improvements:
On the warehouse monitoring page, you can now view the query history for the selected time range along with your scaling charts. You can also view currently running and queued queries, active SQL sessions, the warehouse status, and the current cluster count. See Monitor a SQL warehouse.
Map clustering is now off by default in Marker maps.
Tooltips for visualization truncation and render limits have been added.
Fixes:
Charts now respect the original order when sorting is disabled for the axis values and the chart has the group by column.
March 23, 2023
Improvements:
Visualizations now support time binning by week.
Total now calculates all data beyond the 100 x 100 cells up to 64k results in notebooks pivot tables.
Users can now format cell values in the new notebooks pivot table.
File, query, and feature store lineage are available.
March 9, 2023
New feature:
The Databricks SQL Statement Execution API is now available in Public Preview. Statement Execution provides endpoints for running SQL statements on a Databricks SQL warehouse, checking their status, fetching results, and canceling a running SQL statement.
Improvement:
The SQL admin console has been combined with the general admin settings to create a unified experience for admin users. All SQL admin settings are now accessed from the admin console.
Alert destinations are now called notification destinations.
Fixes:
Tables no longer display two scrollbars.
Blank widget titles no longer get overwritten on dashboards.
February 23, 2023
Improvement:
TINYINT
is now supported in results tables in notebooks.
Fix:
Fixed a bug where scrolling on the create dashboard filter modal resulted in an error.
February 16, 2023
Improvements:
Data labels on charts now show more characters to avoid truncating descriptions.
Autocomplete now recognizes range() and Python UDF create functions.
Autocomplete now avoids initiating snippets on decimals and within code comments.
Fixes:
Users can now zoom in on maps.
In notebooks, colors are now correctly assigned to charts.
February 2, 2023
Improvements:
Support for DESCRIBE DETAILS in the editor.
Improved schema browser loading speed.
You can now view a list of possible columns on the side panel of a SELECT * query.
January 26, 2023
Improvement:
Your query’s error messages now include links to the related documentation topic that describes the error.
January 19, 2023
Improvements:
You can now find a What’s New panel that highlights key enhancements. You can open and close this panel by clicking the gift icon.
Admins can now change ownership of SQL warehouses.
You can now filter across multiple visualizations in a dashboard by clicking Add > Filter and selecting the query and columns for filtering.
January 12, 2023
Improvements:
Visualization widget titles on dashboards are now formatted as Visualization name - Query name by default.
Added H3 Geospatial functions to the inline panel reference.
Added inline references for SQL syntax like CREATE TABLE and OVER.
Fixes:
Sorting and reverse toggles are now available when the X-axis scale is set to automatic.
Heat maps and pivot tables are now responsive for certain edge cases.
The Categorical Y-axis tick marks are now sorted by default.
Query drafts are no longer lost after doing a browser refresh.
December 8, 2022
Databricks SQL alerts now support alerts for aggregations of query result columns such as SUM, COUNT, and AVG.
The default visualization title is now ‘VisualizationName - QueryName’ when creating new widgets on dashboards.
November 17, 2022
Alerts:
Chatworks, Mattermost, and Google Hangouts Chat are no longer notification destinations.
Improvement:
The y-axis now displays numbers as percentages when the percent values option is checked.
The workspace administrator setting to disable the upload data UI now applies to the new upload data UI. This setting applies to the Data Science & Engineering, Databricks Mosaic AI, and Databricks SQL personas.
Fixes:
Fixed an issue in Databricks SQL alerts where comparing against null values evaluated incorrectly.
Fixed an issue where scrollbars on pivot tables disappeared.
Fixed an issue where the schema browser couldn’t resize with overflowed tabs.
November 10, 2022
Improvement:
You can now create a dashboard filter that works across multiple queries at the same time. In Edit dashboard mode, choose Add, then Filter, then New Dashboard Filter.
Autocomplete now supports CREATE MATERIALIZED VIEW.
Fixes:
Fixed an issue where scrolling to the end of a set of dashboard paged results would send an error.
Fixed an issue where switching from a stacked bar chart to a line chart kept the stacking property.
Fixed duplicated fetch calls.
November 3, 2022
Improvement:
When requesting access in Databricks SQL, the default permission is now “can run”.
Fixes:
Fixed an issue where sorting by created_at using the Queries and Dashboards API did not return the correct sort order.
Fixed an issue where columns containing URLs with HTML formatting had overly wide column widths.
Fixed an issue where the WHERE keyword wasn’t highlighted.
October 27, 2022
Improvements:
The row limit for downloading query results to Excel has been increased from 64,000 rows to 100,000 rows. CSV & TSV download limits remain unchanged (about 1 GB of data).
Autocomplete now supports LIST syntax, URLs, and credentials.
Consolidated and modernized Fix-me suggestion panels.
A new warehouse type, Databricks SQL Pro, is introduced for Databricks SQL. This warehouse type enables a Databricks SQL warehouse to use Jobs integration, query federation, geospatial features, and Predictive I/O.
Fixes:
Fixed an issue where the warning banner in the editor overlapped full-height visualizations.
Fixed an issue where table column width was not preserved when columns of the table were moved.
Fixed an issue where the link to the dashboard in pop-up notifications was broken if a visualization was added from the SQL editor.
October 20, 2022
Improvements:
You can now find the query progress bar in the footer and the editing a visualization action in the kebab menu.
Autocomplete now supports Delta time travel, and provides column autocomplete when defining a foreign key.
Fix:
Fixed an issue where adding multiple visualizations to a dashboard in quick succession would result in visualizations not appearing on the dashboard.
October 13, 2022
Improvements:
You can now remove reported error messages.
The COMMENT ON instruction is now supported in the editor.
You can now use Cmd+P (for Mac) or Ctrl+P (for PC) as a shortcut for Top search. Use Cmd+I (for Mac) or Ctrl+I (for PC) for Add parameters.
October 11, 2022
Improvements:
The add data UI provides access to common data sources configurations and file upload UIs. See Upload files to Databricks.
You can now upload small files to Delta Lake using a UI. See Create or modify a table using file upload.
October 6, 2022
Improvements:
EXTERNAL is now a reserved table property. The commands CREATE TABLE ... TBLPROPERTIES and ALTER TABLE ... SET TBLPROPERTIES fail if EXTERNAL is specified in the properties.
The strfmt in format_string(strfmt, obj, ...) and printf(strfmt, obj, ...) no longer supports the use of 0$ as the first argument. The first argument should be referenced by 1$ when using an argument index to indicate the position of the argument in the argument list.
Pie chart segments now have a thin border to delineate different segments.
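The format_string and printf change above can be illustrated as follows; argument indexes now start at 1$, which refers to the first argument after strfmt:

```sql
-- 1$ refers to 'Ada' and 2$ refers to 36; 0$ is no longer accepted.
SELECT format_string('%2$d-year-old %1$s', 'Ada', 36);
```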
You can now use Cmd+I (for Mac) or Ctrl+I (for PC) as a shortcut for Add parameter. Use Cmd+P (for Mac) or Ctrl+P (for PC) as a shortcut for Global search.
A feedback button is available for rating query error messages as good or bad.
Fix me suggestions are now available as Quick fix.
Fixes:
The lpad and rpad functions now work correctly with BINARY string inputs. The output of lpad and rpad for BINARY string inputs is now a BINARY string.
Fixed an issue where manual alert refreshes were not working.
Rolled back changes to automatic counter sizing to fix formatting issues.
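The lpad and rpad behavior for binary strings noted above can be sketched as follows (values are illustrative):

```sql
-- With BINARY inputs, lpad/rpad now pad with bytes and return BINARY.
SELECT lpad(x'0F', 3, x'00');  -- pads the binary value on the left
SELECT rpad(x'0F', 3, x'00');  -- pads the binary value on the right
```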
September 29, 2022
Improvements:
You can now request access to Databricks SQL Queries, alerts, and dashboards from owners of those assets.
You can now filter by query duration and statement type on the Query history page.
To use the query duration filter, enter an amount of time and choose a time unit. The history result returns queries that run longer than the time specified.
To use the statement type filter, choose a statement type from the dropdown. The history result returns queries containing that statement.
Fix:
Fixed an issue where not all supported HTML tags were working in custom alert templates. All tags are available as documented.
Fixed an issue where visualization notification toasts were not showing. For example, the toast shown when double-clicking to zoom out on a visualization previously did not appear.
Fixed an issue where swapping the axes on a chart was not reflected in the chart name.
September 22, 2022
Improvements:
Counter widgets of the same size will have the same font sizing when multiple counter widgets are displayed on a dashboard.
Updated combination charts so that when using dual axis, only the same chart type (e.g., line, bar) can be used on the same axis. Series aliases are also applied to the axis.
Added autocomplete support for surrogate keys and LIST operations.
Fix:
Fixed an issue where text parameters did not accept Null as a valid value.
September 15, 2022
Fix:
Fixed an issue where viewing query history from the SQL warehouses listing page did not work.
September 8, 2022
Improvement:
Introducing the new ‘Open Source Integrations’ card in DSE/SQL homepages that displays open source integration options such as Delta Live Tables and dbt core.
Fix:
Fixed an issue where parameter dropdown menus were blocked by the visualization tab.
September 1, 2022
Improvements:
Introducing a new simplified UI to add parameters and filters. Click + and choose to add a filter or parameter.
The parentheses of SQL tokens, such as ‘OVER()’ now get autocompleted.
Fixes:
Fixed an issue where viewing the dashboard in full-screen ignored the color palette.
Fixed an issue where typing quickly and then using the Run shortcut ran the previous query text, instead of the newly typed query text.
Fixed issue where using the keyboard command, ctrl+enter to run queries would submit duplicate queries.
August 25, 2022
Fix:
Fixed an issue where dashboard filters were not updating when query parameters changed.
August 18, 2022
For Databricks SQL, Unity Catalog (Public Preview) is available in the preview channel. For more information, see What is Unity Catalog?.
Documentation: Alerts API documentation has been released.
Visualizations: Users can now set default values for date filters. Any time the filter is refreshed on a query or dashboard, the default value is applied.
Fixes:
Fixed an issue where apply changes did not work if a dashboard was still reloading.
Fixed an issue where columns were too narrow when a query returns no results.
August 11, 2022
Improvements:
Users can receive emails when their refreshes fail. To enable such notifications, navigate to the SQL settings tab of the admin console. Under Failure Emails, choose the type of object (Query, Dashboard, or Alert) for which you wish to receive failure notifications. Failure reports are sent hourly.
Visualizations
Introducing a new, modern color palette for visualizations and dashboards. To change a dashboard to the new color palette, go to your dashboard and click Edit > Colors > Import, then select the Databricks Color Palette. SQL admins can also set the new color palette as the default option for a workspace by going to Settings > SQL Admin Console > Workspace Colors > Import and selecting the new palette.
Fixes:
Fixed an issue where selecting Apply Changes to apply a filter did not work if a query was already being executed.
August 4, 2022
Improvements:
On cloning a dashboard, there is now an option for whether or not queries should be cloned as well.
Tab content is synced across browser tabs. The state of your query will now be in sync across all browser tabs. This means that if you are working on query1 in browser tab 1 and then switch to browser tab 2, you’ll see query1 in the state you left it in while in the original browser tab.
Fix:
Labels for empty strings in pie charts now reflect that the string is empty rather than showing the index of the value.
July 28, 2022
Alerts
Custom alert email templates have been updated to disallow certain HTML tags that may pose a security risk. Disallowed HTML tags and attributes are automatically sanitized. For example, <button> is a disallowed HTML tag, so instead of rendering a button, the text “button” displays. See Alerts for the list of allowed HTML tags and attributes.
Users can now subscribe other users to alerts without needing to create a notification destination, which requires admin permissions.
Downloads: Users can now download up to approximately 1GB of results data from Databricks SQL in CSV and TSV format, up from 64,000 rows previously.
Visualizations
You can now edit visualizations directly on the dashboard. In edit mode, click on the kebab menu and select Edit visualization to begin editing the visualization.
When downloading results associated with a visualization that leverages aggregations, the downloaded results are also aggregated. The download option has moved from the bottom kebab menu to the kebab menu associated with the tab. The downloaded results are from the most recent execution of the query that created the visualization.
SQL editor: Results tables now display a message when data displayed by the in-browser table has been limited to 64,000 rows. TSV and CSV download will still be up to approximately 1GB of data.
Query filters:
Query filters have been updated to work dynamically on either the client or server side to optimize performance. Previous query filters (now legacy) operated client-side only. Users can still use legacy filters with the :: syntax, if desired.
The updated filters are simpler: users click a +Add Filter button and select a column from a dropdown. Previously, users had to modify the query text directly.
Relevant values are highlighted to make it easier to see which selections within a filter will return results given other filter selections.
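As noted above, legacy filters use the :: suffix on a column alias. A hedged sketch (the table and column names are hypothetical):

```sql
-- Aliasing a column with '::filter' turns it into a legacy client-side filter.
SELECT action AS `action::filter`,
       COUNT(*) AS cnt
FROM   events
GROUP BY action;
```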
Query history: Query details in Query History now show the Query Source, which is the origin of the executed query.
July 21, 2022
Notifications on share: Users will now be notified by email whenever a dashboard, query, or alert is shared with them.
Enhanced SQL editor experience via the new embedded editor toolkit
Live syntax error highlighting (for example, wrong keyword, table does not exist, and suggestions for fixing the error)
In context help: on hover (for example, full table name, detailed Function panel) and inline execution error messages (for example, highlight row with error post execution)
Intelligent ranking of suggestions (for example, parameter autocompletion, ranking formula, less noisy matching)
July 14, 2022
You can now upload TSV files using the Create Table UI in addition to CSV files.
Databricks SQL now provides the option to notify users by email whenever a dashboard, query, or alert is shared with them.
Visualization tables now optionally include row numbers displayed next to results.
When you select a geographic region for the Choropleth visualization, you now get inline hints for accepted values.
June 23, 2022
SQL endpoint name change: Databricks changed the name from SQL endpoint to SQL warehouse because it is more than just an API entry point for running SQL commands. A SQL warehouse is a computation resource for all your data warehousing needs, an integral part of the Databricks platform. Compute resources are infrastructure resources that provide processing capabilities in the cloud.
For Choropleth visualizations, the Key column and Target field selections in the visualization editor have been renamed to Geographic Column and Geographic Type. This renaming for understandability does not introduce any behavior changes to new or existing Choropleths.
The limit 1000 query option has moved from a checkbox in the SQL query editor to a checkbox in the run button.
Cached queries in Query History table are now marked with a Cache tag.
Manually refreshing a dashboard uses the dashboard’s warehouse (if available) instead of each individual query’s warehouse.
Refreshing an alert always uses the alert’s warehouse, regardless of the Run as Viewer/Owner setting.
June 9, 2022
When you hover in the endpoint selector, the full endpoint name is displayed as a tooltip.
When you filter in the SQL editor schema browser, the search term is now highlighted in the search results.
The Close All dialog box in the SQL editor now displays a list of unsaved queries.
To reopen the last closed tab in the SQL editor, use this new keyboard shortcut:
<Cmd> + <Shift> + <Option> + T
You can now add data labels to combination charts.
The list of visualization aggregations operations now includes variance and standard deviation.
May 26, 2022
Authoring improvements:
You can now bypass aggregations when you author visualizations. This is particularly useful when your query already includes an aggregation. For example, if your query is SELECT AVG(price_per_sqft), isStudio, location GROUP BY location, isStudio, the chart editor previously required explicitly specifying another layer of aggregation.
When you author dashboards, you now have the ability to:
Duplicate textbox widgets
Expand the size of the edit textbox panel
The default aggregation for the error column when you author visualizations is standard deviation.
Fixes:
Edit actions for visualizations are only available when the dashboard is in edit mode. Edit actions are no longer available as a view mode action.
When you create a new query, it opens in a tab to the immediate right of the tab in focus rather than at the end of the list.
The open query modal shows which query is already open and provides the option to switch focus to that query tab.
The Sankey & Sunburst charts no longer treat 0 as null.
May 19, 2022
Fixed issue: When you have the focus of the SQL editor open on a specific visualization tab and share the link to another user, the user will have the same focus in the SQL editor when they click the shared link.
Improvements:
Microsoft Teams is now a supported notification destination.
The Date Range, Date and Time Range, and Date and Time Range (with seconds) parameters now support the option to designate the starting day of the week, with Sunday as the default.
May 12, 2022
Visualizations now support time binning directly in the UI. You can now easily switch between yearly, monthly, daily, or hourly bins of your data by changing a dropdown value rather than adding and modifying a date_trunc() function in the query text itself.
Dashboards now have color consistency by default. If you have the same series across multiple charts, the series is always colored the same across all charts – without requiring any manual configuration.
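The time-binning dropdown replaces manual edits like the following (the table and column names are hypothetical):

```sql
-- Previously, switching from daily to monthly bins meant editing date_trunc.
SELECT date_trunc('MONTH', event_ts) AS month,
       COUNT(*)                      AS events
FROM   page_views
GROUP BY 1
ORDER BY 1;
```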
May 3, 2022
When sharing a dashboard with a user or group, we now also provide the ability to share all upstream queries used by visualizations and parameters.
When you do not have permission to share one or more of the upstream queries, you will receive a warning message that not all queries could be shared.
The permissions granted when sharing a dashboard do not override, negate, or expand upon existing permissions on the upstream queries. For example, if a user or group has CAN RUN as Owner permissions on the shared dashboard but only has Run as Viewer permissions on an upstream query, the effective permissions on that upstream query will be Run as Viewer.
April 27, 2022
Your dashboard layout is now retained when exporting to PDF on demand and generating scheduled subscription emails.
March 17, 2022
Charts includes a new combination visualization option. This allows you to create charts that include both bars and lines.
March 10, 2022
Unity Catalog (Preview) allows you to manage governance and access to your data at the level of the account. You can manage metastores and data permissions centrally, and you can assign a metastore to multiple workspaces in your account. You can manage and interact with Unity Catalog data and objects using the Databricks SQL Catalog Explorer or the SQL editor, and you can use Unity Catalog data in dashboards and visualizations. See What is Unity Catalog?.
Note
Unity Catalog requires SQL endpoints to use version 2022.11, which is in the preview channel.
Delta Sharing (Preview) allows you to share read-only data with recipients outside your organization. Databricks SQL supports querying Delta Sharing data and using it in visualizations and dashboards.
Delta Sharing is subject to applicable terms that must be accepted by an account admin to enable the feature.
Each time a dashboard is refreshed manually or on a schedule, all queries in the dashboard and upstream, including those used by parameters, are refreshed. When an individual visualization is refreshed, all upstream queries, including those used by parameters, are refreshed.
March 3, 2022
The cohort visualization has been updated such that cohorts are interpolated from min and max values rather than 0 and 100. It’s now much easier to distinguish cohorts within the actual range of data available. Previously, if all numbers were close together, they used the same color. Now, numbers that are close together are more likely to use different colors because the cohort is divided from the max to min range to form each series.
It’s easier to see whether a dashboard subscription schedule is active or paused. When you click Subscribe, if the dashboard subscription schedule is paused, the message This schedule has been paused appears. When a dashboard subscription schedule is paused, you can subscribe or unsubscribe from the dashboard, but scheduled snapshots are not sent and the dashboard’s visualizations are not updated.
When you view Query History, you can now sort the list by duration. By default, queries are sorted by start time.
February 24, 2022
In Catalog Explorer, you can now view the permissions users or groups have on a table, view, schema, or catalog. Click the object, then click Permissions and use the new filter box.
February 17, 2022
Visualizations just became a little smarter! When a query results in one or two columns, a recommended visualization type is automatically selected.
You can now create histograms to visualize the frequency that each value occurs within a dataset and to understand whether a dataset has values that are clustered around a small number of ranges or are more spread out.
In both Query History and Query Profile, you can now expand to full width the query string and the error message of a failed query. This makes it easier to analyze query plans and to troubleshoot failed queries.
In bar, line, area, pie, and heatmap visualizations, you can now perform aggregation directly in the visualization configuration UI, without the need to modify the query itself. When leveraging these new capabilities, the aggregation is performed over the entire data set, rather than being limited to the first 64,000 rows. When editing a visualization created prior to this release, you will see a message that says This visualization uses an old configuration. New visualizations support aggregating data directly within the editor. If you want to leverage the new capabilities, you must re-create the visualization. See Enable aggregation in a visualization.
February 10, 2022
You can now set a custom color palette for a dashboard. All visualizations that appear in that dashboard will use the specified palette. Setting a custom palette does not affect how a visualization appears in other dashboards or the SQL editor.
You can specify hex values for a palette or import colors from another palette, whether provided by Databricks or created by a workspace admin.
When a palette is applied to a dashboard, all visualizations displayed in that dashboard will use the selected color palette by default, even if you configure custom colors when you create the visualization. To override this behavior, see Customize colors for a visualization.
Workspace admins can now create a custom color palette using the admin console. After the custom color palette is created, it can be used in new and existing dashboards. To use a custom color palette for a dashboard or to customize it, you can edit dashboard settings.
When you add a visualization that uses parameters to a dashboard from the SQL menu, the visualization now uses dashboard-level parameters by default. This matches the behavior when you add a widget using the Add Visualization button in a dashboard.
When you view the query history and filter the list by a combination of parameters, the number of matching queries is now displayed.
In visualizations, an issue was fixed where the Y-axis range could not be adjusted to specific values.
February 3, 2022
The tabbed SQL editor is now enabled by default for all users. For more information or to disable the tabbed editor, see Edit multiple queries.
January 27, 2022
Improvements have been made to how you can view, share, and import a query’s profile. See Query profile.
The Details visualization now allows you to rename columns just like the Table visualization.
You can now close a tab in the SQL editor by middle-clicking it.
The following Keyboard shortcuts have been added to the tabbed SQL editor:
Close all tabs: Cmd+Option+Shift+A (macOS) / Ctrl+Alt+Shift+A (Windows)
Close other tabs: Cmd+Option+Shift+W (macOS) / Ctrl+Alt+Shift+W (Windows)
These keyboard shortcuts provide an alternative to right-clicking on a tab to access the same actions. To view all keyboard shortcuts, click the Keyboard icon in the tabbed SQL editor.
January 20, 2022
The default formatting for integer and float data types in tables has been updated to not include commas. This means that by default, values like 10002343 will no longer have commas. To format these types to display with commas, click Edit Visualization, expand the area for the column, and modify the format to include a comma.
To better align with browser rendering limits, visualizations now display a maximum of 10,000 data points. For example, a scatterplot will display a maximum of 10,000 dots. If the number of data points has been limited, a warning is displayed.
January 13, 2022
We fixed an issue where the Save button in the SQL editor was sometimes disabled. The Save button is now always enabled and includes an asterisk (*) when unsaved changes are detected.
December 15, 2021
Databricks SQL is Generally Available. This marks a major milestone in providing you with the first lakehouse platform that unifies data, AI, and BI workloads in one place. With GA, you can expect the highest level of stability, support, and enterprise-readiness from Databricks for mission-critical workloads. Read the GA announcement blog to learn more.
Alerts are now scheduled independently of queries. When you create a new alert and create a query, you are prompted to also create a schedule for the alert. If you had an existing alert, we’ve duplicated the schedule from the original query. This change also allows you to set alerts for both Run as Owner and Run as Viewer queries. Run as Owner queries run on the designated alert schedule with the query owner’s credential. Run as Viewer queries run on the designated alert schedule with the alert creator’s credential. See What are Databricks SQL alerts? and Schedule a query.
You can now re-order parameters in both the SQL editor and in dashboards.
The documentation for creating heatmap visualizations has been expanded. See Heatmap options.
December 9, 2021
When you create a table visualization, you can now set the font color for a column to a static value or a range of values based on the column’s values. The literal value is compared to the threshold. For example, to colorize results whose values exceed 500000, create the threshold > 500000, rather than > 500,000. See Conditionally format column colors.
Icons in the tabbed SQL editor schema browser now allow you to distinguish between tables and views.
December 1, 2021
You can now apply SQL configuration parameters at the workspace level. Those parameters automatically apply to all existing and new SQL endpoints in the workspace. See Configure SQL parameters.
November 18, 2021
You can now open the SQL editor by using a sidebar shortcut. To open the SQL editor, click SQL Editor.
If you have permission to create Data Science & Engineering clusters, you can now create SQL endpoints by clicking Create in the sidebar and clicking SQL Endpoint.
Administrators can now transfer ownership of a query, dashboard, or alert to a different user via the UI. See:
November 4, 2021
In a Map (Choropleth) visualization, the maximum number of gradient steps for colors in the legend has been increased from 11 to 20. The default is 5 gradient steps, inclusive of Min color and Max color.
The tabbed SQL editor now supports bulk tab management. If you right-click on a tab, you’ll see the option to Close others, Close left, Close right, and Close all. Note that if you right-click on the first or last tab, you won’t see the options to Close left or Close right.
October 28, 2021
When you view a table in Catalog Explorer, you have two options to simplify interacting with the table:
Click Create > Query to create a query that selects all columns and returns the first 1000 rows.
Click Create > Quick Dashboard to open a configuration page where you can select columns of interest and create a dashboard and supporting queries that provide some basic information using those columns and showcase dashboard-level parameters and other capabilities.
October 19, 2021
New keyboard shortcuts are now available in the tabbed editor:
Open new tab: Ctrl+Alt+T (Windows), Cmd+Option+T (Mac)
Close current tab: Ctrl+Alt+W (Windows), Cmd+Option+W (Mac)
Open query dialog: Ctrl+Alt+O (Windows), Cmd+Option+O (Mac)
September 23, 2021
You can now create a new dashboard by cloning an existing dashboard, as long as you have the CAN RUN, CAN EDIT, or CAN MANAGE permission on the dashboard and all upstream queries. See Clone a legacy dashboard.
You can now use GROUP BY in a visualization with multiple Y-axis columns. See Scatter chart.
You can now use {{ @@yPercent }} to format data labels in an unnormalized stacked bar chart. See Bar chart.
If you use SAML authentication and your SAML credential will expire within a few minutes, you are now proactively prompted to log in again before executing a query or refreshing a dashboard. This helps to prevent disruption due to a credential that expires during query execution.
September 20, 2021
You can now transfer ownership of dashboards, queries, and alerts using the Permissions REST API. See Query ACLs.
September 16, 2021
In query results, BIGINT results are now serialized as strings when greater than 9007199254740991. This fixes a problem where BIGINT results could be truncated in query results. Other integer results are still serialized as numbers. Number formatting on axis labels and tooltips does not apply to BIGINT results that are serialized as strings. For more information about data types in Databricks SQL, see BIGINT type.
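The cutoff 9007199254740991 is 2^53 - 1, the largest integer a JavaScript Number can represent exactly, so larger values would otherwise lose precision in the browser. A quick illustration:

```sql
-- 2^53 - 1 is the largest integer safely representable as a JavaScript Number.
-- Values above it are serialized as strings in query results to avoid truncation.
SELECT 9007199254740991 AS fits_as_number,
       9007199254740993 AS serialized_as_string;
```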
September 7, 2021
Databricks is rolling out the changes that follow over the course of a week. Your workspace may not be enabled for these changes until after September 7.
Databricks SQL is now in Public Preview and enabled for all users in new workspaces.
Note
If your workspace was enabled for Databricks SQL during the Public Preview—that is, before the week beginning September 7, 2021—users retain the entitlement assigned before that date, unless you change it. In other words, if a user did not have access to Databricks SQL during the Public Preview, they will not have it now unless an administrator gives it to them.
Administrators can manage which users have access to Databricks SQL by assigning the Databricks SQL access entitlement (databricks-sql-access in the API) to users or groups. By default, new users have this entitlement.
Administrators can limit a user or group to accessing only Databricks SQL and prevent them from accessing Data Science & Engineering or Databricks Mosaic AI by removing the Workspace access entitlement (workspace-access in the API) from the user or group. By default, new users have this entitlement.
Important
To log in and access Databricks, a user must have either the Databricks SQL access or Workspace access entitlement (or both).
A small classic SQL endpoint called Starter Endpoint is pre-configured on all workspaces, so you can get started creating dashboards, visualizations, and queries right away. To handle more complex workloads, you can easily increase its size (to reduce latency) or the number of underlying clusters (to handle more concurrent users). To manage costs, the starter endpoint is configured to terminate after 120 minutes idle.
If serverless compute is enabled for your workspace and you enable Serverless SQL endpoints, a Serverless SQL endpoint called Serverless Starter Endpoint is automatically created, and you can use it for dashboards, visualizations, and queries. Serverless SQL endpoints start more quickly than classic SQL endpoints and automatically terminate after 10 minutes idle.
To help you get up and running quickly, a new guided onboarding experience is available for administrators and users. The onboarding panel is visible by default, and you can always see how many onboarding tasks are left in the sidebar. Click tasks left to reopen the onboarding panel.
You can get started using Databricks SQL quickly with two rich datasets in a read-only catalog called SAMPLES, which is available from all workspaces. When you learn about Databricks SQL, you can use these schemas to create queries, visualizations, and dashboards. No configuration is required, and all users have access to these schemas.
The nyctaxi schema contains taxi trip data in the trips table.
The tpch schema contains retail revenue and supply chain data in the following tables: customer, lineitem, nation, orders, part, partsupp, region, supplier.
Click Run your first query in the onboarding panel to generate a new query of the nyctaxi schema.
To learn about visualizing data in Databricks SQL with no configuration required, you can import dashboards from the Dashboard Samples Gallery. These dashboards are powered by the datasets in the SAMPLES catalog. To view the Dashboard Samples Gallery, click Import sample dashboard in the onboarding panel.
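Because the SAMPLES catalog needs no setup, a first query can be as simple as the following sketch (pickup_zip is one of the columns in the trips table):

```sql
-- Any user can query the read-only SAMPLES catalog; no grants are required.
SELECT pickup_zip, COUNT(*) AS trip_count
FROM samples.nyctaxi.trips
GROUP BY pickup_zip
ORDER BY trip_count DESC
LIMIT 10;
```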
You can now create and drop native SQL functions using the CREATE FUNCTION and DROP FUNCTION commands.
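A minimal sketch of a scalar SQL function created and dropped with these commands (the function name is illustrative):

```sql
-- Define a simple scalar SQL UDF.
CREATE FUNCTION fahrenheit_to_celsius(f DOUBLE)
  RETURNS DOUBLE
  RETURN (f - 32) * 5 / 9;

-- Use it like any built-in function.
SELECT fahrenheit_to_celsius(212);

-- Remove it when no longer needed.
DROP FUNCTION fahrenheit_to_celsius;
```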
September 2, 2021
Users with the CAN EDIT permission on a dashboard can now manage the dashboard’s refresh schedule and subscription list. Previously, the CAN MANAGE permission was required. For more information, see Automatically refresh a dashboard.
You can now temporarily pause scheduled export to dashboard subscribers without modifying the schedule. Previously, you had to remove all subscribers, disable the schedule, and then recreate. For more information, see Temporarily pause scheduled dashboard updates.
By default, visualizations no longer dynamically resize based on the number of results returned, but maintain the same height regardless of the number of results. To return to the previous behavior and configure a visualization to dynamically resize, enable Dynamically resize panel height in the visualization’s settings in the dashboard. For more information, see Table options.
If you have access to more than one workspace in the same account, you can switch workspaces from within Databricks SQL. Click in the lower left corner of your Databricks workspace, then select a workspace to switch to it.
August 30, 2021
Serverless SQL endpoints provide instant compute, minimal management, and cost optimization for SQL queries.
Until now, computation for SQL endpoints happened in the compute plane in your AWS account. The initial release of serverless compute adds Serverless SQL endpoints to Databricks SQL, moving those compute resources to your Databricks account.
You use serverless SQL warehouses with Databricks SQL queries just like you use the SQL endpoints that live in your own AWS account, now called Classic SQL endpoints. But serverless SQL warehouses typically start with low latency compared to Classic SQL endpoints, are easier to manage, and are optimized for cost.
Before you can create serverless SQL warehouses, an admin must enable the Serverless SQL endpoints option for your workspace. Once enabled, new SQL endpoints are Serverless by default, but you can continue to create SQL endpoints as Serverless or Classic as you like.
For details about the Serverless compute architecture and comparisons with the classic compute plane, see Serverless compute plane. For details about configuring serverless SQL warehouses—including how to convert Classic SQL endpoints to Serverless—see Enable serverless SQL warehouses.
For the list of supported regions for serverless SQL warehouses, see Databricks clouds and regions.
Important
Serverless Compute is subject to applicable terms that must be accepted by an account owner or account admin in order to enable the feature.
August 12, 2021
You can now send a scheduled dashboard update to email addresses that are not associated with Databricks accounts. When viewing a dashboard, click Scheduled to view or update the list of subscribed email addresses. If an email address is not associated with a Databricks account, it must be configured as a notification destination. For more information, see Automatically refresh a dashboard.
An administrator can now terminate another user’s query while it is executing. For more information, see Terminate an executing query.
August 05, 2021
To reduce latency on SQL endpoints when your workspace uses AWS Glue Data Catalog as the external metastore, you can now configure client-side caching. For more information, see Higher latency with Glue Catalog than Databricks Hive metastore and Configure data access properties for SQL warehouses.
Improved EXPLAIN result formatting:
Explain results are easier to read
Formatted as monospaced with no line wrap
July 29, 2021
Juggling multiple queries just got easier with support for multiple tabs in the query editor. To use the tabbed editor, see Edit multiple queries.
July 08, 2021
Visualization widgets in dashboards now have titles and descriptions so that you can tailor the title and description of visualizations used in multiple dashboards to the dashboard itself.
The sidebar has been updated for improved visibility and navigation:
Warehouses are now SQL Endpoints and History is renamed to Query History.
Account settings (formerly named Users) have been moved to Account. When you select Account you can change the Databricks workspace and log out.
User settings have been moved to Settings and have been split into Settings and SQL Admin Console. SQL Admin Console is visible only to admins.
The help icon changed to Help.
July 01, 2021
The new Catalog Explorer allows you to easily explore and manage permissions on databases and tables. Users can view schema details, preview sample data, and see table details and properties. Administrators can view and change data object owners, and data object owners can grant and revoke permissions. For details, see What is Catalog Explorer?.
Y-axes in horizontal charts have been updated to reflect the same ordering as in tables. If you have previously selected reverse ordering, you can use the Reverse Order toggle on the Y-axis tab to reverse the new ordering.
June 17, 2021
Photon, Databricks’ new vectorized execution engine, is now on by default for newly created SQL endpoints (both UI and REST API). Photon transparently speeds up
Writes to Parquet and Delta tables.
Many SQL queries. See Limitations.
Easily manage users and groups with CREATE GROUP, DROP GROUP, ALTER GROUP, SHOW GROUPS, and SHOW USERS commands. For details, see Security statements and Show statements.
The query editor schema browser is snappier and faster on schemas with more than 100 tables. On such schemas, the schema browser does not load all columns automatically; the list of tables still shows as usual, but columns load only when you click a table. This change affects query autocomplete in the query editor, because it depends on this information to show suggestions. Until you expand a table and load its columns, those suggestions are not available.
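The group-management statements can be combined as in this sketch (group and user names are illustrative; admin privileges are assumed):

```sql
-- Create a group and add a member.
CREATE GROUP analysts;
ALTER GROUP analysts ADD USER `user1@example.com`;

-- Inspect existing principals.
SHOW GROUPS;
SHOW USERS;

-- Clean up.
DROP GROUP analysts;
```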
June 03, 2021
Admins of newly enabled Databricks workspaces now receive the Databricks SQL entitlement by default and are no longer required to give themselves the Databricks SQL access entitlement using the admin console.
Photon is now in public preview and enabled by default for new SQL endpoints.
Multi-cluster load balancing is now in public preview.
You can now enable collaboration on dashboards and queries with other members of your organization using CAN EDIT permission. See Access control lists.
May 26, 2021
SQL Analytics is renamed to Databricks SQL. This change has the following customer-facing impacts:
References in the web UI have been updated.
The entitlement to grant access to Databricks SQL has been renamed:
UI: Databricks SQL access (previously SQL Analytics access)
SCIM API: databricks-sql-access (previously sql-analytics-access)
Users, groups, and service principals with the previous entitlement have been migrated to the new entitlement.
Tags for audit log events related to Databricks SQL have changed:
The prefix for Databricks SQL events is now databrickssql.
changeSqlAnalyticsAcl is now changeDatabricksSqlAcl.
Dashboard updates
The dashboard export filename has been updated to be the name of the dashboard + timestamp, rather than a UUID.
Export records limit has been raised from 22k to 64k.
Dashboard authors now have the ability to periodically export and email dashboard snapshots. Dashboard snapshots are taken from the default dashboard state, meaning that any interaction with the visualizations will not be present in the snapshot.
If you are the owner of a dashboard, you can create a refresh schedule and subscribe other users, who’ll get email snapshots of the dashboard every time it’s refreshed.
If you have view permission for a dashboard, you can subscribe to existing refresh schedules.
Predicate pushdown expressions (StartsWith, EndsWith, Contains, Not(EqualTo()), and DataType) are disabled for AWS Glue Catalog since they are not supported.
May 20, 2021
You can now use your own key from AWS KMS to encrypt the Databricks SQL queries and query history stored in Databricks. If you’ve already configured your own key for a workspace to encrypt data for managed services (notebooks and secrets), then no further action is required. The same customer-managed key for managed services now also encrypts the Databricks SQL queries and query history. See Customer-managed keys for managed services. This change affects only new data that is stored at rest. Databricks SQL queries and query history that were stored before today are not guaranteed to be encrypted with this key.
Databricks SQL query results are stored in your root S3 bucket that you provided during workspace setup, and they are not encrypted by your managed services key. However, you can use your own key to encrypt them. See Customer-managed keys for workspace storage.
This feature is available with the Enterprise pricing plan.
The Past executions tab now shows relative time.
May 13, 2021
Databricks SQL no longer tries to guess column types. Previously, a column with the format xxxx-yy-dd was automatically treated as a date, even if it was an identification code. Now that column is no longer automatically treated as a date; you must specify the desired type in the query. This change may cause some visualizations that relied on the previous behavior to stop working. In this release, you can use the > Settings > Backwards Compatibility option to return to the previous behavior. In a future release, that option will be removed.
The query editor now has a query progress indicator. State changes are now visible in a continually updated progress bar.
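Since types are no longer guessed, a date-like column can be converted explicitly, as in this sketch (column and table names are illustrative):

```sql
-- Cast a date-like string column explicitly rather than relying on type guessing.
SELECT to_date(order_code, 'yyyy-MM-dd') AS order_date
FROM orders_raw;
```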
May 06, 2021
You can now download the contents of the dashboard as a PDF. See Download as PDF.
An admin user now has view access to all the queries and dashboards. In this view an admin can view and delete any query or dashboard. However, the admin can’t edit the query or dashboard if it is not shared with the admin. See Query admin view and legacy dashboard admin view.
The ability to increase endpoint concurrency with multi-cluster load balancing is now available for all accounts. You can create endpoints that autoscale between specified minimum and maximum cluster counts. Overloaded endpoints will scale up and underloaded endpoints will scale down.
April 29, 2021
Query options and details are now organized in a set of tabs to the left of the query editor:
Data sources: Select from available data sources and schema. See Create a query.
Past executions: View past executions performed in the SQL editor. This does not show scheduled executions. See Write queries and explore data in the SQL editor.
Query info: Set the query description, view information about the query, and set the refresh schedule. See Write queries and explore data in the SQL editor and Schedule a query.
April 22, 2021
Fixed an issue in which endpoints were inaccessible and appeared to be deleted due to internal error.
April 16, 2021
Databricks SQL maintains compatibility with Apache Spark SQL semantics. This release updates the semantics to match those of Apache Spark 3.1. Previously Databricks SQL was aligned to Apache Spark 3.0 semantics.
Statistical aggregation functions, including std, stddev, stddev_samp, variance, var_samp, skewness, kurtosis, covar_samp, and corr, return NULL instead of Double.NaN when DivideByZero occurs during expression evaluation, for example, when stddev_samp is applied to a single-element set. Prior to this release, they returned Double.NaN.
grouping_id() returns long values. Prior to this release, this function returned int values.
Query plan explain results are now formatted.
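The statistical aggregation change can be seen with a single-row input, where the sample standard deviation divides by n - 1 = 0:

```sql
-- stddev_samp over one row triggers DivideByZero;
-- it now returns NULL instead of NaN.
SELECT stddev_samp(x) FROM VALUES (1.0) AS t(x);
```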
from_unixtime, unix_timestamp, to_unix_timestamp, to_timestamp, and to_date fail if the specified datetime pattern is invalid. Prior to this release, they returned NULL.
The Parquet, ORC, Avro, and JSON data sources throw the exception org.apache.spark.sql.AnalysisException ("Found duplicate column(s) in the data schema") on read if they detect duplicate names in top-level columns or in nested structures.
Structs and maps are wrapped by {} brackets when cast to strings. For instance, the show() action and the CAST expression use such brackets. Prior to this release, [] brackets were used for the same purpose.
NULL elements of structs, arrays, and maps are converted to "null" when cast to strings. Prior to this release, NULL elements were converted to empty strings.
When the sum of a decimal type column overflows, it returns null. Prior to this release, the sum might return null or an incorrect result, or even fail at runtime (depending on the actual query plan execution).
IllegalArgumentException is returned for incomplete interval literals, for example, INTERVAL '1' and INTERVAL '1 DAY 2', which are invalid. Prior to this release, these literals resulted in NULLs.
Loading and saving of timestamps from and to Parquet files fails if the timestamps are before 1900-01-01 00:00:00Z and are loaded (saved) as the INT96 type. Prior to this release, the actions didn't fail but might shift the input timestamps due to rebasing between the Julian and Proleptic Gregorian calendars.
The schema_of_json and schema_of_csv functions return the schema in the SQL format, in which field names are quoted. Prior to this release, the functions returned a catalog string without field quoting and in lower case.
CHAR, CHARACTER, and VARCHAR types are supported in table schemas. Table scan and insertion respect the char/varchar semantics. If char/varchar is used in places other than table schemas, an exception is thrown (CAST is an exception that simply treats char/varchar as string, as before).
The following exceptions are thrown for tables from the Hive external catalog:
ALTER TABLE .. ADD PARTITION throws PartitionsAlreadyExistException if the new partition already exists.
ALTER TABLE .. DROP PARTITION throws NoSuchPartitionsException for nonexistent partitions.
April 13, 2021
Improved query throughput with SQL endpoint queuing. Queries submitted to a SQL endpoint now queue when the endpoint is already saturated with running queries. This improves query throughput by not overloading the endpoint with requests. You can view the improved performance in the endpoint monitoring screen.
April 01, 2021
Quickly find the time spent in compilation, execution, and result fetching for a query in Query History. See Query profile. Previously this information was only available by clicking a query and opening the Execution Details tab.
SQL endpoints no longer scale beyond the maximum specified clusters. All clusters allocated to a SQL endpoint are recycled after 24 hours, which can create a brief window in which there is one additional cluster.
March 18, 2021
Autocomplete in the query editor now supports Databricks SQL syntax and is context and alias aware. See Create a query.
JDBC and ODBC requests no longer fail with invalid session errors after the session times out on the server. BI clients are now able to seamlessly recover when session timeouts occur.
March 11, 2021
Administrators and users in workspaces newly enabled for Databricks SQL no longer automatically have access to Databricks SQL. To enable access to Databricks SQL, the administrator must:
Go to the admin settings page.
Click the Users tab.
In the row for their account, click the Databricks SQL access checkbox.
Click Confirm.
Repeat steps 3 and 4 to grant users access to Databricks SQL or grant access to groups.
Easily create queries, dashboards, and alerts by selecting New > [Query | Dashboard | Alert] at the top of the sidebar.
Query Editor now saves drafts, and you can revert to a saved query. See Write queries and explore data in the SQL editor.
You can no longer create external data sources.
The reliability of the SQL endpoint monitoring chart has been improved. The chart no longer intermittently shows spurious error messages.
March 04, 2021
The Queries, Dashboards, and Alerts API documentation is now available. See Databricks REST API reference.
Scheduled dashboard refreshes are now always performed. The refreshes are performed in the web application, so you no longer need to keep the dashboard open in a browser. See Automatically refresh a dashboard.
New SQL endpoints created using the SQL Warehouse API now have Auto Stop enabled with a default timeout of two hours.
Tableau Online users can now connect to SQL endpoints. See the new Tableau Online quickstart.
SQL endpoints no longer fail to launch due to inadequate AWS resources in a single availability zone.
February 26, 2021
The new Power BI connector for Azure Databricks, released in public preview in September 2020, is now GA. It provides:
Simple connection configuration: the new Power BI Databricks connector is integrated into Power BI, and you configure it using a simple dialog with a couple of clicks.
Faster imports and optimized metadata calls, thanks to the new Databricks ODBC driver, which comes with significant performance improvements.
Access to Databricks data through Power BI respects Databricks table access control.
For more information, see Connect Power BI to Databricks.
February 25, 2021
Setting permissions on a SQL endpoint is now faster. It’s a step right after you create a new SQL endpoint and easily accessible when you edit an existing endpoint. See Connect to a SQL warehouse and Manage a SQL warehouse.
To reuse visualization settings you can now duplicate a visualization. See Clone a visualization.
Query results are now stored in your account instead of the Databricks account.
To prevent leaking information by listing all defined permissions on an object, to run SHOW GRANTS [<user>] <object> you must be either:
A Databricks SQL administrator or the owner of <object>.
The user specified in [<user>].
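As a sketch of the two allowed forms (the table and principal names are illustrative):

```sql
-- As the object owner or an admin: list all grants on the object.
SHOW GRANTS ON TABLE default.orders;

-- As a non-admin: you may only ask about yourself.
SHOW GRANTS `user1@example.com` ON TABLE default.orders;
```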
January 07, 2021
To reduce spending on idle endpoints, new SQL endpoints now have Auto Stop enabled with a default timeout of two hours. After the timeout is reached, the endpoint is stopped. You can edit the timeout period or disable Auto Stop at any time.
Except for TEXT type query parameters, quotation marks are no longer added to query parameters. If you have used Dropdown List, Query Based Dropdown List, or any Date type query parameters, you must add quotation marks for the query to work. For example, if your query is SELECT {{ d }}, it must now be SELECT '{{ d }}'.
November 18, 2020
Databricks is pleased to introduce the Public Preview of Databricks SQL, an intuitive environment for running ad-hoc queries and creating dashboards on data stored in your data lake. Databricks SQL empowers your organization to operate a multi-cloud lakehouse architecture that provides data warehousing performance with data lake economics. Databricks SQL:
Integrates with the BI tools you use today, like Tableau and Microsoft Power BI, to query the most complete and recent data in your data lake.
Complements existing BI tools with a SQL-native interface that allows data analysts and data scientists to query data lake data directly within Databricks.
Enables you to share query insights through rich visualizations and drag-and-drop dashboards with automatic alerting for important data changes.
Uses Connect to a SQL warehouse to bring reliability, quality, scale, security, and performance to your data lake, so you can run traditional analytics workloads using your most recent and complete data.
Introduces the USAGE privilege to simplify data access administration. To use an object in a schema, you must be granted the USAGE privilege on that schema in addition to any privileges you need to perform the action. The USAGE privilege can be granted on schemas or on the catalog. For workspaces that already use table access control, the USAGE privilege is granted automatically to the users group on the root CATALOG. See Hive metastore privileges and securable objects (legacy) for details.
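A minimal sketch of the two grants needed before a group can query a table (schema, table, and group names are illustrative):

```sql
-- USAGE on the schema is required in addition to the object-level privilege.
GRANT USAGE ON SCHEMA sales TO `analysts`;
GRANT SELECT ON TABLE sales.orders TO `analysts`;
```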
See What is data warehousing on Databricks? for details.
Fixed issues
SQL editor. The SQL editor will now persist selected text and scroll position when switching between query tabs.
SQL editor. If you click ‘Run’ on a query in the SQL editor, then navigate to another page and return while the query is still executing, the editor will display the correct query state. If the query completes while you are on another page, query results will be available on return to the SQL editor page.
You can now use MySQL 8.0 as an external metastore.
DESCRIBE DETAIL commands on Delta tables no longer fail with java.lang.ClassCastException: java.sql.Timestamp cannot be cast to java.time.Instant.
Reading Parquet files with INT96 timestamps no longer fails.
When a user has the CAN RUN permission on a query created by another user and runs it, the query history displays the runner as the user.
Null values are now ignored when rendering a chart, improving the usability of charts. For example, previously, bars in a bar chart would look very small when null values were present. Now the axes are set based on non-null values only.
Known issues
Reads from data sources other than Delta Lake in multi-cluster load balanced SQL endpoints can be inconsistent.
Delta tables accessed in Databricks SQL upload their schema and table properties to the configured metastore. If you are using an external metastore, you will be able to see Delta Lake information in the metastore. Delta Lake tries to keep this information as up-to-date as possible on a best-effort basis. You can also use the DESCRIBE <table> command to ensure that the information is updated in your metastore.
Databricks SQL does not support zone offsets like 'GMT+8' as session time zones. The workaround is to use a region-based time zone (see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) like 'Etc/GMT+8' instead. See SET TIME ZONE for more information about setting time zones.
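The workaround looks like the following:

```sql
-- 'GMT+8' (a bare zone offset) is rejected as a session time zone;
-- use the equivalent region-based tz database name instead.
SET TIME ZONE 'Etc/GMT+8';
```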
Frequently asked questions (FAQ)
How are Databricks SQL workloads charged?
Databricks SQL workloads are charged according to the SQL Compute SKU.
Where do SQL endpoints run?
Like Databricks clusters, Classic SQL endpoints are created and managed in your AWS account. Classic SQL endpoints manage SQL-optimized clusters automatically in your account and scale to match end-user demand.
Serverless SQL endpoints (Public Preview), on the other hand, use compute resources in your Databricks account. Serverless SQL warehouses simplify SQL endpoint configuration and usage and accelerate launch times. The Serverless option is available only if it has been enabled for the workspace. For more information, see Serverless compute plane.
Can I use SQL endpoints from Data Science & Engineering workspace SQL notebooks?
No. You can use SQL endpoints from Databricks SQL queries, BI tools, and other JDBC and ODBC clients.
I have been granted access to data using a cloud provider credential. Why can’t I access this data in Databricks SQL?
In Databricks SQL, all access to data is subject to data access control, and an administrator or data owner must first grant you the appropriate privileges.