# Lakeflow Declarative Pipelines developer reference
This section contains reference material and instructions for Lakeflow Declarative Pipelines developers.

In Lakeflow Declarative Pipelines, data loading and transformations are implemented by queries that define streaming tables and materialized views. To implement these queries, Lakeflow Declarative Pipelines supports SQL and Python interfaces. Because these interfaces provide equivalent functionality for most data processing use cases, pipeline developers can choose the interface they are most comfortable with.
## Python development
Create Lakeflow Declarative Pipelines using Python code.
| Topic | Description |
|---|---|
| | An overview of developing Lakeflow Declarative Pipelines in Python. |
| | Python reference documentation for the |
| Manage Python dependencies for Lakeflow Declarative Pipelines | Instructions for managing Python libraries with Lakeflow Declarative Pipelines. |
| | Instructions for using Python modules that you have stored in Databricks. |
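To illustrate the Python interface, the following is a minimal sketch of pipeline source code that defines one streaming table and one dependent dataset. The catalog path, column, and table names are hypothetical examples; this code runs only as source code in a Databricks pipeline, where `dlt` and `spark` are available.

```python
import dlt
from pyspark.sql.functions import col

# Streaming table: incrementally ingests new JSON files with Auto Loader.
# The volume path below is a hypothetical example.
@dlt.table(comment="Raw orders, ingested incrementally.")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/orders")
    )

# Downstream dataset: a query over the table defined above.
# The `amount` column is a hypothetical example.
@dlt.table(comment="Orders with a positive amount.")
def orders_clean():
    return dlt.read("orders_raw").where(col("amount") > 0)
```

Each `@dlt.table`-decorated function defines a dataset named after the function; the pipeline resolves dependencies between datasets from the queries themselves.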
## SQL development
Create Lakeflow Declarative Pipelines using SQL code.
| Topic | Description |
|---|---|
| | An overview of developing Lakeflow Declarative Pipelines in SQL. |
| | Reference documentation for SQL syntax for Lakeflow Declarative Pipelines. |
| | Use Databricks SQL to work with Lakeflow Declarative Pipelines. |
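For comparison with the Python interface, the following is a minimal SQL sketch defining the same kind of objects: a streaming table and a materialized view over it. The volume path, column, and table names are hypothetical examples, and the statements run only as pipeline source code in Databricks.

```sql
-- Streaming table: incremental ingestion of new JSON files.
-- The volume path is a hypothetical example.
CREATE OR REFRESH STREAMING TABLE orders_raw
AS SELECT * FROM STREAM read_files(
  "/Volumes/main/default/orders",
  format => "json"
);

-- Materialized view: a query over the streaming table above.
-- The `amount` column is a hypothetical example.
CREATE OR REFRESH MATERIALIZED VIEW orders_clean
AS SELECT * FROM orders_raw WHERE amount > 0;
```

The two interfaces are equivalent here: both declare datasets and their defining queries, and the pipeline manages orchestration and dependencies.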
## Other development topics
The following topics describe other ways to develop Lakeflow Declarative Pipelines.
| Topic | Description |
|---|---|
| Convert Lakeflow Declarative Pipelines into a Databricks Asset Bundle project | Convert an existing pipeline to a bundle, which lets you manage your data processing configuration in a source-controlled YAML file for easier maintenance and automated deployment to target environments. |
| | Use the open source |
| Develop Lakeflow Declarative Pipelines code in your local development environment | An overview of options for developing Lakeflow Declarative Pipelines code locally. |