Delta Live Tables introduction

Delta Live Tables is a framework for building reliable, maintainable, and testable data processing pipelines. You define the transformations to perform on your data, and Delta Live Tables manages task orchestration, cluster management, monitoring, data quality, and error handling.
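
As a minimal sketch of what this looks like in practice, the Python snippet below defines a single pipeline table. It assumes it runs inside a Delta Live Tables pipeline, where the `dlt` module and the `spark` session are provided by the runtime; the source path is hypothetical.

```python
import dlt  # available only inside a Delta Live Tables pipeline

# Hypothetical input location; substitute your own cloud storage path.
RAW_EVENTS_PATH = "/data/raw/events"

@dlt.table(comment="Raw events ingested from JSON files in cloud storage.")
def raw_events():
    # `spark` is provided by the pipeline runtime; no SparkSession setup needed.
    return spark.read.format("json").load(RAW_EVENTS_PATH)
```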

Instead of defining your data pipelines using a series of separate Apache Spark tasks, Delta Live Tables manages how your data is transformed based on a target schema you define for each processing step. You can also enforce data quality with Delta Live Tables expectations. Expectations allow you to define expected data quality and specify how to handle records that fail those expectations.
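
For example, here is an illustrative sketch of expectations in Python (the table, constraint names, and columns are hypothetical): `expect` records metrics for failing rows but keeps them, while `expect_or_drop` removes them from the target.

```python
import dlt

@dlt.table(comment="Events that pass basic quality checks.")
# Track rows with an unrecognized type, but keep them in the output.
@dlt.expect("known_event_type", "event_type IN ('click', 'view', 'purchase')")
# Drop rows that are missing a user ID entirely.
@dlt.expect_or_drop("valid_user_id", "user_id IS NOT NULL")
def clean_events():
    # Read the raw table defined earlier in the pipeline.
    return dlt.read("raw_events")
```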

To get started with Delta Live Tables:

  • Develop your first Delta Live Tables pipeline with the quickstart.

  • Learn about fundamental Delta Live Tables concepts.

  • Learn how to create, run, and manage pipelines with the Delta Live Tables user interface.

  • Learn how to develop Delta Live Tables pipelines with Python or SQL; a short Python sketch follows this list.

  • Learn how to manage data quality in your Delta Live Tables pipelines with expectations.
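
To give a flavor of pipeline development in Python, the sketch below chains two tables: each step reads the previous one with `dlt.read`, so Delta Live Tables can infer the dependency graph and run the steps in order. Table and column names are hypothetical and build on the earlier snippets.

```python
import dlt
from pyspark.sql.functions import col, count

@dlt.table(comment="Cleaned events with a derived event_date column.")
def events_dated():
    # Reading another live table registers a dependency between the steps.
    return dlt.read("clean_events").withColumn(
        "event_date", col("timestamp").cast("date")
    )

@dlt.table(comment="Daily event counts per event type.")
def events_daily_summary():
    return (
        dlt.read("events_dated")
        .groupBy("event_date", "event_type")
        .agg(count("*").alias("event_count"))
    )
```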

Find answers and solutions for Delta Live Tables:

  • Implement common tasks in your Delta Live Tables pipelines: Cookbook

  • Review frequently asked questions and issues: FAQ

  • Learn best practices to develop, manage, and run your Delta Live Tables pipelines: Best practices

  • Learn how the Delta Live Tables upgrade process works and how to test your pipelines with the next system version: Upgrades