You can configure production incremental processing workloads with Structured Streaming on Databricks to fulfill latency and cost requirements for real-time or batch applications. Understanding key concepts of Structured Streaming on Databricks can help you avoid common pitfalls as you scale up the volume and velocity of data and move from development to production.
Databricks has introduced Delta Live Tables to reduce the complexities of managing production infrastructure for Structured Streaming workloads. Databricks recommends using Delta Live Tables for new Structured Streaming pipelines; see Delta Live Tables introduction.
Interactive development with Databricks notebooks requires that you attach your notebooks to a cluster in order to execute queries manually. You can schedule Databricks notebooks for automated deployment and automatic recovery from query failure using Workflows.
You can visualize Structured Streaming queries in notebooks during interactive development, or for interactive monitoring of production workloads. You should only visualize a Structured Streaming query in production if a human will regularly monitor the output of the notebook. While the checkpointLocation parameter is optional, as a best practice Databricks recommends that you always specify it in production.
Structured Streaming on Databricks provides enhanced options for controlling costs and latency while streaming with Auto Loader and Delta Lake.
A stateful Structured Streaming query requires incremental updates to intermediate state information, whereas a stateless Structured Streaming query only tracks information about which rows have been processed from the source to the sink.
Stateful operations include streaming aggregation, streaming dropDuplicates, stream-stream joins, and arbitrary stateful operations such as mapGroupsWithState and flatMapGroupsWithState.
The intermediate state information required for stateful Structured Streaming queries can lead to unexpected latency and production problems if not configured properly.