Production considerations for Structured Streaming

This article contains recommendations for configuring production incremental processing workloads with Structured Streaming on Databricks to meet latency and cost requirements for real-time or batch applications. Understanding key concepts of Structured Streaming on Databricks can help you avoid common pitfalls as you scale up the volume and velocity of data and move from development to production.

Databricks has introduced Delta Live Tables to reduce the complexities of managing production infrastructure for Structured Streaming workloads. Databricks recommends using Delta Live Tables for new Structured Streaming pipelines; see What is Delta Live Tables?.

Note

Compute auto-scaling has limitations when scaling down cluster size for Structured Streaming workloads. Databricks recommends using Delta Live Tables with Enhanced Autoscaling for streaming workloads. See Optimize the cluster utilization of Delta Live Tables pipelines with Enhanced Autoscaling.

Using notebooks for Structured Streaming workloads

Interactive development with Databricks notebooks requires that you attach your notebook to a cluster in order to run queries manually. You can schedule Databricks notebooks for automated deployment and automatic recovery from query failures using Workflows.

You can visualize Structured Streaming queries in notebooks during interactive development, or for interactive monitoring of production workloads. You should only visualize a Structured Streaming query in production if a human regularly monitors the output of the notebook. While the trigger and checkpointLocation parameters are optional, Databricks recommends as a best practice that you always specify them in production.
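The following is a minimal sketch of a production streaming write that sets both parameters explicitly. The table names and checkpoint path are hypothetical placeholders, and spark refers to the SparkSession available in a Databricks notebook.

```python
# Minimal sketch: set trigger and checkpointLocation explicitly in production.
# Table names and the checkpoint path are hypothetical placeholders.
(spark.readStream
    .table("events_raw")                                        # hypothetical source table
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/events_clean")  # track query progress here
    .trigger(availableNow=True)                                 # or processingTime="1 minute"
    .toTable("events_clean"))                                   # hypothetical target table
```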

Controlling batch size and frequency for Structured Streaming on Databricks

Structured Streaming on Databricks provides enhanced options that help control costs and latency when streaming with Auto Loader and Delta Lake.
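As an illustration, the sketch below shows the rate-limiting options commonly used for this purpose: cloudFiles.maxFilesPerTrigger and cloudFiles.maxBytesPerTrigger for an Auto Loader source, and maxFilesPerTrigger and maxBytesPerTrigger for a Delta Lake source. The paths, table names, and limit values are illustrative assumptions.

```python
# Sketch of batch-size controls for streaming sources; paths, table names, and
# limit values below are illustrative assumptions.

# Auto Loader source: cap how much data each micro-batch ingests.
autoloader_df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.maxFilesPerTrigger", 500)      # at most 500 new files per batch
    .option("cloudFiles.maxBytesPerTrigger", "10g")    # soft cap on bytes per batch
    .load("/mnt/landing/events"))                      # hypothetical landing path

# Delta Lake source: equivalent limits when streaming from a Delta table.
delta_df = (spark.readStream
    .option("maxFilesPerTrigger", 500)
    .option("maxBytesPerTrigger", "10g")
    .table("events_bronze"))                           # hypothetical table name
```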

What is stateful streaming?

A stateful Structured Streaming query requires incremental updates to intermediate state information, whereas a stateless Structured Streaming query only tracks information about which rows have been processed from the source to the sink.

Stateful operations include streaming aggregation, streaming dropDuplicates, stream-stream joins, mapGroupsWithState, and flatMapGroupsWithState.
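For example, a windowed streaming aggregation keeps intermediate counts as state until the watermark allows each window to close. The sketch below assumes a hypothetical events_bronze source table with event_time and user_id columns.

```python
# Minimal sketch of a stateful query: a windowed streaming aggregation with a
# watermark. Source and sink names and columns are illustrative assumptions.
from pyspark.sql.functions import window, col

events = spark.readStream.table("events_bronze")       # hypothetical streaming source

counts = (events
    .withWatermark("event_time", "10 minutes")         # bound how long window state is kept
    .groupBy(window(col("event_time"), "5 minutes"), col("user_id"))
    .count())                                          # intermediate counts are stored as state

(counts.writeStream
    .format("delta")
    .outputMode("append")                              # emit each window once it closes
    .option("checkpointLocation", "/checkpoints/event_counts")  # hypothetical path
    .toTable("event_counts"))                          # hypothetical target table
```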

The intermediate state information required for stateful Structured Streaming queries can lead to unexpected latency and production problems if not configured properly.

In Databricks Runtime 13.2 and above, you can enable changelog checkpointing with RocksDB to lower checkpoint duration and end-to-end latency for Structured Streaming workloads. Databricks recommends enabling changelog checkpointing for all Structured Streaming stateful queries. See Enable changelog checkpointing.
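A minimal sketch of the Spark session configuration involved is shown below. It assumes the RocksDB state store provider class name and the changelog checkpointing config described in the Databricks documentation; set these before the streaming query starts and verify the names for your runtime version.

```python
# Sketch: enable the RocksDB state store provider and changelog checkpointing
# via Spark session configs before starting the streaming query. Config names
# follow the Databricks documentation; verify them for your runtime version.
spark.conf.set(
    "spark.sql.streaming.stateStore.providerClass",
    "com.databricks.sql.streaming.state.RocksDBStateStoreProvider",
)
spark.conf.set(
    "spark.sql.streaming.stateStore.rocksdb.changelogCheckpointing.enabled",
    "true",
)
```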