This feature is in Public Preview. Contact your Databricks representative to request access.
Delta Live Tables is a framework for building reliable, maintainable, and testable data processing pipelines. You define the transformations to perform on your data, and Delta Live Tables manages task orchestration, cluster management, monitoring, data quality, and error handling.
Instead of defining your data pipelines as a series of separate Apache Spark tasks, Delta Live Tables manages how your data is transformed based on a target schema you define for each processing step. You can also enforce data quality with Delta Live Tables expectations, which let you declare the data quality you expect and specify how to handle records that fail those checks.
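As an illustration, the following minimal sketch shows what a two-step Python pipeline might look like: each `@dlt.table` function defines one processing step and its target dataset, and an expectation drops records that fail a quality check. The source path, table names, and column names are assumptions for illustration only.

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw orders ingested from a hypothetical landing zone.")
def orders_raw():
    # `spark` is provided automatically in a Delta Live Tables pipeline notebook.
    # The JSON source path below is an assumed example.
    return spark.read.format("json").load("/data/orders/landing")

@dlt.table(comment="Orders with a positive amount; records failing the expectation are dropped.")
@dlt.expect_or_drop("valid_amount", "amount > 0")
def orders_clean():
    # Read the upstream dataset defined in this pipeline and keep assumed example columns.
    return dlt.read("orders_raw").select(col("order_id"), col("amount"), col("order_date"))
```

When this code runs as part of a Delta Live Tables pipeline, the framework infers the dependency between `orders_clean` and `orders_raw` and orchestrates the steps in the correct order; you do not schedule the tasks yourself.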
To get started:
- Develop your first Delta Live Tables pipeline: Delta Live Tables quickstart
- Learn more about Delta Live Tables concepts and features: Delta Live Tables user guide
- Review the supported language interfaces: Delta Live Tables language reference
- Learn how to configure your Delta Live Tables pipelines: Delta Live Tables settings
- Review frequently asked questions: Delta Live Tables frequently asked questions