DLT
DLT is a framework for creating batch and streaming data pipelines in SQL and Python. Common use cases for DLT include ingesting data from cloud storage (such as Amazon S3, Azure ADLS Gen2, and Google Cloud Storage) and message buses (such as Apache Kafka, Amazon Kinesis, Google Pub/Sub, Azure Event Hubs, and Apache Pulsar), and performing incremental batch and streaming transformations.
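As a minimal sketch of both use cases, the following DLT SQL defines a streaming table that incrementally ingests files from cloud storage and a materialized view that transforms the ingested data. The table names and the storage path are hypothetical examples:

```sql
-- Incrementally ingest raw JSON files from cloud storage into a streaming table.
-- The path below is a hypothetical example location.
CREATE OR REFRESH STREAMING TABLE raw_orders
AS SELECT *
FROM STREAM read_files(
  '/Volumes/my_catalog/my_schema/landing/orders/',
  format => 'json'
);

-- Define an incremental transformation over the ingested data
-- as a materialized view.
CREATE OR REFRESH MATERIALIZED VIEW daily_order_totals
AS SELECT order_date, SUM(amount) AS total_amount
FROM raw_orders
GROUP BY order_date;
```

When the pipeline runs, DLT tracks new files in the source location and processes only unseen data, so repeated updates are incremental rather than full reloads.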
This section provides detailed information about using DLT. The following topics will help you get started.
Topic | Description |
---|---|
Concepts | Learn about the high-level concepts of DLT, including pipelines, flows, streaming tables, and materialized views. |
Tutorials | Follow tutorials to get hands-on experience with using DLT. |
Develop pipelines | Learn how to develop and test pipelines that create flows for ingesting and transforming data. |
Configure pipelines | Learn how to schedule and configure pipelines. |
Monitor pipelines | Learn how to monitor your pipelines and troubleshoot pipeline queries. |
Language references | Learn how to use Python and SQL when developing DLT pipelines. |
DLT for Databricks SQL | Learn about using DLT streaming tables and materialized views in Databricks SQL. |