June 2022
These features and Databricks platform improvements were released in June 2022.
Note
Releases are staged. Your Databricks account may not be updated until a week or more after the initial release date.
ALTER TABLE permission changes for Unity Catalog
June 30, 2022
In Unity Catalog, there has been an update to the privileges required to run ALTER TABLE statements. Previously, OWNERSHIP of a table was required to run all ALTER TABLE statements. Now OWNERSHIP on the table is required only for changing the owner, granting permissions on the table, changing the table name, and modifying a view definition. For all other metadata operations on a table (for example, updating comments, properties, or columns), you can make updates if you have the MODIFY permission on the table.
See ALTER TABLE.
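For illustration, under the updated model a user who holds only the MODIFY privilege can change a table's properties or comment, while renaming the table or transferring ownership still requires being its owner. A minimal sketch (the catalog, schema, table, and principal names are hypothetical):

    # Requires only MODIFY on the table under the updated privilege model:
    spark.sql("ALTER TABLE main.sales.orders SET TBLPROPERTIES ('delta.logRetentionDuration' = '90 days')")
    spark.sql("COMMENT ON TABLE main.sales.orders IS 'Curated orders table'")

    # Still requires OWNERSHIP of the table:
    spark.sql("ALTER TABLE main.sales.orders RENAME TO main.sales.orders_v2")
    spark.sql("ALTER TABLE main.sales.orders_v2 OWNER TO `data-engineers`")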
Databricks Runtime 6.4 Extended Support reaches end of support
June 30, 2022
Support for Databricks Runtime 6.4 Extended Support ended on June 30. See Databricks support lifecycles.
Databricks Runtime 10.2 series support ends
June 22, 2022
Support for Databricks Runtime 10.2 and Databricks Runtime 10.2 for Machine Learning ended on June 22. See Databricks support lifecycles.
Databricks ODBC driver 2.6.24
June 22, 2022
We have released version 2.6.24 of the Databricks ODBC driver (download). This release adds support for configuring query translation to CTAS syntax, allows users to override SQL_ATTR_QUERY_TIMEOUT in the connector, and updates the OpenSSL library.
This release also resolves the following issues:
The connector does not allow the use of server and intermediate certificates that do not have a CRL distribution point (CDP) entry.
When using a proxy, the connector sets the incorrect host name for SSL Server Name Indication (SNI).
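For example, with this release a client application can override the query timeout rather than rely on the connector default. A minimal sketch using Python's pyodbc, assuming a DSN named Databricks is already configured against the Databricks ODBC driver:

    import pyodbc

    # Connect through a DSN configured for Databricks ODBC driver 2.6.24 or later.
    conn = pyodbc.connect("DSN=Databricks", autocommit=True)

    # pyodbc applies this value as SQL_ATTR_QUERY_TIMEOUT on each statement,
    # which the connector now allows applications to override.
    conn.timeout = 120  # seconds

    cursor = conn.cursor()
    cursor.execute("SELECT current_version()")
    print(cursor.fetchone())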
Databricks Terraform provider is now GA
June 22, 2022
The Databricks Terraform provider is now generally available.
Terraform enables you to fully automate the deployment of your data platforms using existing infrastructure-as-code (IaC) processes.
You can use the Databricks Terraform provider to define assets in Databricks workspaces, such as clusters and jobs, and to enforce access control through permissions for users, groups, and service principals.
The Databricks Terraform provider gives you a complete audit trail of deployments. You can use it as the backbone of your disaster recovery and business continuity strategies.
The Databricks Terraform provider also supports Unity Catalog (Preview), allowing you to deploy this key governance feature with ease and at scale.
Serverless SQL warehouses available for E2 workspaces (Public Preview)
June 23, 2022
Serverless SQL warehouses are now available for accounts and workspaces on the E2 version of the platform. This feature requires the Premium or higher pricing tier. It is not supported for accounts still on the Databricks free trial, or for accounts or workspaces that use the compliance security profile. Before you create Serverless SQL warehouses, you must accept the applicable terms at the account level and enable Serverless SQL for each workspace.
Enable enhanced security controls with a security profile (Public Preview)
June 21, 2022
If a Databricks workspace has a compliance security profile enabled, the workspace has additional features and controls. The profile enables additional monitoring, enforced instance types for inter-node encryption, a hardened compute image, and other features. For details, see Compliance security profile.
The compliance security profile includes controls that help meet certain security requirements in some compliance standards, such as PCI and HIPAA. However, you can choose to enable the compliance security profile for its enhanced security features without the need to conform to any compliance standard.
PCI-DSS compliance controls (Public Preview)
June 21, 2022
PCI-DSS compliance controls on the E2 version of the Databricks platform provide enhancements that help you with payment card industry (PCI) compliance for your workspace. This requires enabling the compliance security profile and signing additional agreements.
HIPAA compliance controls for E2 (Public Preview)
June 21, 2022
HIPAA compliance controls on the E2 version of the Databricks platform provide enhancements that help you with HIPAA compliance for your workspace. This requires enabling the compliance security profile and signing additional agreements.
Enhanced security monitoring (Public Preview)
June 21, 2022
Enhanced security monitoring on the E2 version of the Databricks platform provides an enhanced hardened disk image and additional security monitoring agents that generate logs that you can review.
Databricks Runtime 11.0 and 11.0 ML are GA; 11.0 Photon is Public Preview
June 16, 2022
Databricks Runtime 11.0 and Databricks Runtime 11.0 ML are now generally available. Databricks Runtime 11.0 Photon is in Public Preview.
See Databricks Runtime 11.0 (EoS) and Databricks Runtime 11.0 for Machine Learning (EoS).
Change to Repos default working directory in Databricks Runtime 11.0
June 16, 2022
The Python working directory for notebooks in a Repo now defaults to the directory containing the notebook. For example, instead of /databricks/driver, the default working directory is /Workspace/Repos/<user>/<repo>/<path-to-notebook>. This allows importing and reading from Files in Repos to work by default on Databricks Runtime 11.0 clusters.
This also means that writing to the current working directory fails with a Read-only filesystem error message. If you want to continue writing to the local file system for a cluster, write to /tmp/<filename> or /databricks/driver/<filename>.
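A short sketch of the new behavior on a Databricks Runtime 11.0 cluster (the file names are hypothetical):

    import os

    # For a notebook in a Repo, this now prints the directory containing the notebook,
    # for example /Workspace/Repos/<user>/<repo>/<path-to-notebook>.
    print(os.getcwd())

    # Reading a file checked into the Repo alongside the notebook works by default:
    with open("settings.json") as f:
        config = f.read()

    # Writing to the current working directory raises a read-only filesystem error,
    # so write to local cluster storage instead:
    with open("/tmp/output.txt", "w") as f:
        f.write("scratch data")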
Databricks Runtime 10.1 series support ends
June 14, 2022
Support for Databricks Runtime 10.1 and Databricks Runtime 10.1 for Machine Learning ended on June 14. See Databricks support lifecycles.
Audit logs can now record when a notebook command is run
June 14, 2022
You can now configure audit logs to record when a notebook command is run. To do so, use the workspace configuration setting verbose audit logs. See Enable verbose audit logs.
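You can also toggle the setting programmatically. A sketch using the workspace configuration REST API from Python, assuming the enableVerboseAuditLogs workspace-conf key; the workspace URL and token are placeholders:

    import requests

    HOST = "https://<workspace-url>"    # placeholder
    TOKEN = "<personal-access-token>"   # placeholder
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # Turn on verbose audit logging for the workspace.
    resp = requests.patch(
        f"{HOST}/api/2.0/workspace-conf",
        headers=HEADERS,
        json={"enableVerboseAuditLogs": "true"},
    )
    resp.raise_for_status()

    # Read the setting back to confirm.
    resp = requests.get(
        f"{HOST}/api/2.0/workspace-conf",
        headers=HEADERS,
        params={"keys": "enableVerboseAuditLogs"},
    )
    print(resp.json())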
Delta Live Tables now supports SCD type 2
June 13-21, 2022: Version 3.74
Your Delta Live Tables pipelines can now use SCD type 2 to capture source data changes and retain the full history of updates to records. This enhances the existing Delta Live Tables support for SCD type 1. See The APPLY CHANGES APIs: Simplify change data capture with Delta Live Tables.
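A minimal sketch of an SCD type 2 flow in the Delta Live Tables Python API (the source table and column names are hypothetical):

    import dlt
    from pyspark.sql.functions import col

    # Hypothetical CDC feed, read as a streaming source inside the pipeline.
    @dlt.view
    def customers_cdc():
        return spark.readStream.table("raw.customers_cdc")

    # Create the target table that retains the full history of each record.
    dlt.create_target_table("customers_scd2")

    dlt.apply_changes(
        target = "customers_scd2",
        source = "customers_cdc",
        keys = ["customer_id"],
        sequence_by = col("sequence_num"),
        stored_as_scd_type = "2",   # "1" keeps only the latest state of each record
    )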
Create Delta Live Tables pipelines directly in the Databricks UI
June 13-21, 2022: Version 3.74
You can now create a Delta Live Tables pipeline from the Create menu on the sidebar of the Databricks UI.
Select the Delta Live Tables channel when you create or edit a pipeline
June 13-21, 2022: Version 3.74
You can now configure the channel for your Delta Live Tables pipeline with the Create pipeline and Edit pipeline settings dialogs. Previously, configuring the channel required editing the settings in the pipeline’s JSON configuration.
Communicate between tasks in your Databricks jobs with task values
June 13, 2022
You can now communicate values between tasks in your Databricks jobs with task values. For example, you can use task values to pass the output of a machine learning model to downstream tasks in the same job run. See taskValues subutility (dbutils.jobs.taskValues).
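A brief sketch of the API (the task name, key, and values are hypothetical):

    # In an upstream task of the job, for example one named "train_model":
    dbutils.jobs.taskValues.set(key="best_model_uri", value="runs:/abc123/model")

    # In a downstream task of the same job run:
    model_uri = dbutils.jobs.taskValues.get(
        taskKey="train_model",          # the task that set the value
        key="best_model_uri",
        default="",                     # returned if the key was never set
        debugValue="runs:/local/dev",   # used when the notebook runs outside a job
    )
    print(model_uri)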
Enable account switching in the Databricks UI
June 8, 2022
If users belong to more than one account, they can now switch between accounts in the Databricks UI. To use the account switcher, click your email address at the top of the Databricks UI, hover over Switch account, and then select the account you want to navigate to.
Updating the AWS Region for a failed workspace is no longer supported
June 1, 2022
Updating the Region for a failed workspace is no longer supported. If you created a workspace in an incorrect Region, you must create a new workspace in the correct Region and delete the incorrect one at your convenience.
Databricks discovered rare scenarios in which a workspace can get into a failed state even after it was created successfully. Calling the update operation on a workspace in such a state can result in unsupported and unusable workspaces. To make the workspace management experience more consistent, updates to the Region of a failed workspace are no longer supported in the UI and API. Updating the Region of a running workspace was never supported.