Lakeflow Declarative Pipelines release notes and the release upgrade process
This article explains the Lakeflow Declarative Pipelines release process and how the Lakeflow Declarative Pipelines runtime is managed, and provides links to release notes for each Lakeflow Declarative Pipelines release.
Lakeflow Declarative Pipelines runtime channels
Lakeflow Declarative Pipelines clusters use runtimes based on Databricks Runtime versions. To see the Databricks Runtime versions used with a Lakeflow Declarative Pipelines release, see the release notes for that release. Databricks automatically upgrades the Lakeflow Declarative Pipelines runtimes to support enhancements and upgrades to the platform. You can use the `channel` field in the Lakeflow Declarative Pipelines settings to control the runtime version that runs your pipeline. The supported values are:
- `current` to use the current runtime version.
- `preview` to test your pipeline with upcoming changes to the runtime version.
By default, your pipelines run using the `current` runtime version. Databricks recommends using the `current` runtime for production workloads. To learn how to use the `preview` setting to test your pipelines with the next runtime version, see Automate testing of your pipelines with the next runtime version. Features marked as generally available or Public Preview are available in the `current` channel.
For more information about Lakeflow Declarative Pipelines channels, see the `channel` field in the Lakeflow Declarative Pipelines pipeline settings.
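If you manage pipelines programmatically, you can also set the channel through the Pipelines API. The following is a minimal sketch using the Databricks SDK for Python; the pipeline ID is a placeholder, and because the update call replaces the full pipeline specification, the sketch starts from the existing settings:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # Reads credentials from the environment or a config profile.

pipeline_id = "<your-pipeline-id>"  # Placeholder: the ID of an existing pipeline.

# The update call replaces the whole pipeline specification, so start from the
# current settings and change only the channel ("CURRENT" or "PREVIEW").
spec = w.pipelines.get(pipeline_id).spec
w.pipelines.update(
    pipeline_id=pipeline_id,
    name=spec.name,
    catalog=spec.catalog,
    target=spec.target,
    libraries=spec.libraries,
    clusters=spec.clusters,
    channel="PREVIEW",
)
```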
To understand how Lakeflow Declarative Pipelines manages the upgrade process for each release, see How do Lakeflow Declarative Pipelines upgrades work?.
How do I find the Databricks Runtime version for a pipeline update?
You can query the Lakeflow Declarative Pipelines event log to find the Databricks Runtime version for a pipeline update. See Runtime information.
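For example, in a notebook attached to a cluster (where `spark` is predefined), a query along these lines returns the runtime version for each update. The `event_log()` function and the `details` JSON path follow the documented event log schema, but verify the field names against your own event log:

```python
# Run in a Databricks notebook, where `spark` is predefined. The pipeline ID
# is a placeholder. Each create_update event records the runtime version the
# update ran on.
runtime_versions = spark.sql(
    """
    SELECT
      origin.update_id,
      details:create_update:runtime_version:dbr_version AS dbr_version
    FROM event_log("<your-pipeline-id>")
    WHERE event_type = 'create_update'
    """
)
runtime_versions.show(truncate=False)
```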
Lakeflow Declarative Pipelines release notes
Lakeflow Declarative Pipelines release notes are organized by year and week-of-year. Because Lakeflow Declarative Pipelines is versionless, both workspace and runtime changes take place automatically. The following release notes provide an overview of changes and bug fixes in each release:
- DLT release 2025.16
- DLT release 2025.15
- DLT release 2025.12
- DLT release 2025.04
- DLT release 2024.49
- DLT release 2024.42
- DLT release 2024.40
- DLT release 2024.37
- DLT release 2024.33
- DLT release 2024.29
- DLT release 2024.22
- DLT release 2024.20
- DLT release 2024.13
- DLT release 2024.11
- DLT release 2024.09
- DLT release 2024.05
- DLT release 2024.02
- DLT release 2023.50
- DLT release 2023.48
- DLT release 2023.45
- DLT release 2023.43
- DLT release 2023.41
- DLT release 2023.37
- DLT release 2023.35
- DLT release 2023.30
- DLT release 2023.27
- DLT release 2023.23
- DLT release 2023.21
- DLT release 2023.19
- DLT release 2023.17
- DLT release 2023.16
- DLT release 2023.13
- DLT release 2023.11
- DLT release 2023.06
- DLT release 2023.03
- DLT release 2023.01
- DLT release 2022.49
- DLT release 2022.46
- DLT release 2022.44
- DLT release 2022.42
- DLT release 2022.40
- DLT release 2022.37
How do Lakeflow Declarative Pipelines upgrades work?
Lakeflow Declarative Pipelines is a versionless product, which means that Databricks automatically upgrades the Lakeflow Declarative Pipelines runtime to support enhancements and upgrades to the platform. Databricks recommends limiting external dependencies for Lakeflow Declarative Pipelines.
Databricks proactively works to prevent automatic upgrades from introducing errors or issues to production Lakeflow Declarative Pipelines. See Lakeflow Declarative Pipelines upgrade process.
Especially for users who deploy Lakeflow Declarative Pipelines with external dependencies, Databricks recommends proactively testing pipelines with the `preview` channel. See Automate testing of your pipelines with the next runtime version.
Lakeflow Declarative Pipelines upgrade process
Databricks manages the Databricks Runtime used by Lakeflow Declarative Pipelines compute resources. Lakeflow Declarative Pipelines automatically upgrades the runtime in your Databricks workspaces and monitors the health of your pipelines after the upgrade.
If Lakeflow Declarative Pipelines detects that a pipeline cannot start because of an upgrade, the runtime version for the pipeline reverts to the previous version that is known to be stable, and the following steps are triggered automatically:
- The pipeline's Lakeflow Declarative Pipelines runtime is pinned to the previous known-good version.
- Databricks support is notified of the issue.
- If the issue is related to a regression in the runtime, Databricks resolves the issue.
- If the issue is caused by a custom library or package used by the pipeline, Databricks contacts you to resolve the issue.
- When the issue is resolved, Databricks initiates the upgrade again.
Lakeflow Declarative Pipelines only reverts pipelines running in production mode with the channel set to `current`.
Automate testing of your pipelines with the next runtime version
To ensure changes in the next Lakeflow Declarative Pipelines runtime version do not impact your pipelines, use the Lakeflow Declarative Pipelines channels feature:
- Create a staging pipeline and set the channel to `preview`. A scripted sketch of these steps follows this list.
- In the Lakeflow Declarative Pipelines UI, create a schedule to run the pipeline weekly and enable alerts to receive an email notification for pipeline failures. Databricks recommends scheduling weekly test runs of pipelines, especially if you use custom pipeline dependencies.
- If you receive a notification of a failure and are unable to resolve it, open a support ticket with Databricks.
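If you prefer to automate these steps rather than use the UI, the following sketch uses the Databricks SDK for Python. The pipeline name, notebook path, cron expression, and email address are placeholders for your own values:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.jobs import (
    CronSchedule, JobEmailNotifications, PipelineTask, Task,
)
from databricks.sdk.service.pipelines import NotebookLibrary, PipelineLibrary

w = WorkspaceClient()

# Create a staging copy of the pipeline on the preview channel. The name and
# notebook path are placeholders; point them at your own pipeline sources.
staging = w.pipelines.create(
    name="my-pipeline-staging",
    channel="PREVIEW",
    libraries=[PipelineLibrary(notebook=NotebookLibrary(path="/Pipelines/my_pipeline"))],
)

# Schedule the staging pipeline to run weekly and email on failure.
w.jobs.create(
    name="weekly-preview-channel-test",
    tasks=[
        Task(
            task_key="run_staging_pipeline",
            pipeline_task=PipelineTask(pipeline_id=staging.pipeline_id),
        )
    ],
    schedule=CronSchedule(
        quartz_cron_expression="0 0 6 ? * MON",  # Every Monday at 06:00.
        timezone_id="UTC",
    ),
    email_notifications=JobEmailNotifications(on_failure=["you@example.com"]),
)
```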
Pipeline dependencies
Lakeflow Declarative Pipelines supports external dependencies in your pipelines; for example, you can install any Python package using the `%pip install` command. Lakeflow Declarative Pipelines also supports using global and cluster-scoped init scripts. However, these external dependencies, particularly init scripts, increase the risk of issues with runtime upgrades. To mitigate these risks, minimize the use of init scripts in your pipelines. If your processing requires init scripts, automate testing of your pipeline to detect problems early; see Automate testing of your pipelines with the next runtime version. If you use init scripts, Databricks recommends increasing your testing frequency.
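As an illustration, a pipeline source notebook might install an external package in its first cell. Pinning an exact version (the package and version below are only examples) keeps the dependency stable across automatic runtime upgrades:

```python
# First cell of a pipeline source notebook. Pinning an exact version keeps the
# dependency stable across automatic runtime upgrades; the package and version
# here are only examples.
%pip install requests==2.32.3
```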