
Lakeflow Spark Declarative Pipelines Python language reference

This section describes the Lakeflow Spark Declarative Pipelines (SDP) Python programming interface.

pipelines module overview

Lakeflow Spark Declarative Pipelines Python functions are defined in the pyspark.pipelines module (conventionally imported as dp). Pipelines implemented with the Python API must import this module:

Python
from pyspark import pipelines as dp
note

The pipelines module is only available in the context of a pipeline. It is not available in Python code that runs outside of a pipeline. For more information about editing pipeline code, see Develop and debug ETL pipelines with the Lakeflow Pipelines Editor.

Apache Spark™ pipelines

Apache Spark includes declarative pipelines beginning in Spark 4.1, available through the pyspark.pipelines module. Databricks Runtime extends these open-source capabilities with additional APIs and integrations for managed production use.

Code written with the open-source pipelines module runs without modification on Databricks. The following Databricks features are not part of Apache Spark (a short sketch follows the list):

  • dp.create_auto_cdc_flow
  • dp.create_auto_cdc_from_snapshot_flow
  • @dp.expect(...)
  • @dp.temporary_view
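
For example, expectations and temporary views are among the Databricks-specific features listed above. The following is a minimal sketch, assuming the pipeline runtime provides the spark session; the source table name orders_raw and the expectation condition are hypothetical:

Python
from pyspark import pipelines as dp

# Databricks-only APIs; these decorators are not part of Apache Spark.
@dp.table
@dp.expect("valid_id", "id IS NOT NULL")  # invalid rows are kept but recorded in pipeline metrics (default expectation behavior)
def orders_clean():
    # "orders_raw" is a hypothetical source table; spark is provided by the pipeline runtime
    return spark.readStream.table("orders_raw")

@dp.temporary_view
def recent_orders():
    # A temporary view scoped to the pipeline
    return spark.read.table("orders_raw").where("order_date >= date_sub(current_date(), 7)")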

The pipelines module was previously called dlt in Databricks. For details, including the differences from Apache Spark, see What happened to @dlt?.

Functions for dataset definitions

Pipelines use Python decorators for defining datasets such as materialized views and streaming tables. See Functions to define datasets.
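
For example, the following sketch defines one materialized view and one streaming table; the source table names are hypothetical:

Python
from pyspark import pipelines as dp

# A materialized view is defined from a batch read.
@dp.materialized_view
def daily_totals():
    return spark.read.table("sales").groupBy("sale_date").count()  # "sales" is a hypothetical source table

# A streaming table is defined from a streaming read.
@dp.table
def events_bronze():
    return spark.readStream.table("events_raw")  # "events_raw" is a hypothetical streaming source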


Considerations for Python pipelines

The following are important considerations when you implement pipelines with the Lakeflow Spark Declarative Pipelines (SDP) Python interface:

  • SDP evaluates the code that defines a pipeline multiple times during planning and pipeline runs. Python functions that define datasets should include only the code required to define the table or view. Arbitrary Python logic included in dataset definitions might lead to unexpected behavior.
  • Do not try to implement custom monitoring logic in your dataset definitions. See Define custom monitoring of pipelines with event hooks.
  • The function used to define a dataset must return a Spark DataFrame. Do not include logic in your dataset definitions that does not relate to a returned DataFrame.
  • Never use methods that save or write to files or tables as part of your pipeline dataset code.

Examples of Apache Spark operations that should never be used in pipeline code (a dataset definition that follows these rules is sketched after the list):

  • collect()
  • count()
  • toPandas()
  • save()
  • saveAsTable()
  • start()
  • toTable()
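
As a sketch of a definition that follows these rules, the function below only builds and returns a DataFrame; the source table name is hypothetical:

Python
from pyspark import pipelines as dp
from pyspark.sql import functions as F

@dp.materialized_view
def customers_with_email():
    # Include only logic that contributes to the returned DataFrame.
    # Do not call actions (collect, count, toPandas) or writes (save, saveAsTable) here;
    # the pipeline materializes the result when the flow runs.
    df = spark.read.table("customers_raw")  # "customers_raw" is a hypothetical source table
    return df.filter(F.col("email").isNotNull())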

What happened to @dlt?

Previously, Databricks used the dlt module for pipeline functionality. The dlt module has been replaced by the pyspark.pipelines module. You can still use dlt, but Databricks recommends the pipelines module.
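
For simple table definitions, migration is often just a matter of swapping the import and the decorator prefix; APIs that were renamed are listed in the table below. A minimal before-and-after sketch (the dataset body is hypothetical):

Python
# Before: legacy dlt module
import dlt

@dlt.table
def bronze_events():
    return spark.readStream.table("raw_events")  # hypothetical streaming source

# After: pipelines module
from pyspark import pipelines as dp

@dp.table
def bronze_events():
    return spark.readStream.table("raw_events")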

Differences between DLT, SDP, and Apache Spark

The following table shows the differences in syntax and functionality between DLT, Lakeflow Spark Declarative Pipelines, and Apache Spark Declarative Pipelines. A migration sketch for the CDC API follows the table.

Area | DLT syntax | SDP syntax (Lakeflow and Apache, where applicable) | Available in Apache Spark
--- | --- | --- | ---
Imports | import dlt | from pyspark import pipelines (optionally as dp) | Yes
Streaming table | @dlt.table with a streaming DataFrame | @dp.table | Yes
Materialized view | @dlt.table with a batch DataFrame | @dp.materialized_view | Yes
View | @dlt.view | @dp.temporary_view | Yes
Append flow | @dlt.append_flow | @dp.append_flow | Yes
SQL – streaming | CREATE STREAMING TABLE ... | CREATE STREAMING TABLE ... | Yes
SQL – materialized | CREATE MATERIALIZED VIEW ... | CREATE MATERIALIZED VIEW ... | Yes
SQL – flow | CREATE FLOW ... | CREATE FLOW ... | Yes
Event log | spark.read.table("event_log") | spark.read.table("event_log") | No
Apply Changes (CDC) | dlt.apply_changes(...) | dp.create_auto_cdc_flow(...) | No
Expectations | @dlt.expect(...) | @dp.expect(...) | No
Continuous mode | Pipeline config with continuous trigger | Same as DLT | No
Sink | dlt.create_sink(...) | dp.create_sink(...) | Yes