Running Structured Streaming in Production

It is convenient to attach a notebook to a cluster and run your streaming queries interactively. However, for running them in production, you are likely to want more robustness and uptime guarantees. Let’s see how to make your streaming application more fault-tolerant using Databricks Jobs.

Recovering from Query Failures using Jobs

A production-grade streaming application must be robust to failures. In Structured Streaming, if you enable checkpointing for a StreamingQuery, then you can restart the query after a failure, and the restarted query will continue where the failed one left off while maintaining Structured Streaming’s fault-tolerance and data consistency guarantees. Hence, to make your queries fault-tolerant, you must enable query checkpointing and configure Databricks Jobs to restart your queries automatically after a failure.

Enable Checkpointing

To enable checkpointing, set the option “checkpointLocation” to a DBFS or S3 path before starting the query. For example:

streamingDataFrame.writeStream
    .format("parquet")
    .option("path", "dbfs:/outputPath/")
    .option("checkpointLocation", "dbfs:/checkpointPath")
    .start()

This checkpoint location preserves all the essential information that uniquely identifies a query. Hence, each query must have a different checkpoint location, and multiple queries should never share the same one. See the Structured Streaming Programming Guide for more details.
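For instance, two queries started from the same notebook might keep their checkpoints apart as sketched below. The paths and output directories are illustrative placeholders, not values from your workspace:

```scala
// Hypothetical sketch: each query writes to its own checkpoint directory.
streamingDataFrame.writeStream
    .format("parquet")
    .option("path", "dbfs:/output/events/")
    .option("checkpointLocation", "dbfs:/checkpoints/eventsQuery")   // unique to this query
    .start()

streamingDataFrame.writeStream
    .format("parquet")
    .option("path", "dbfs:/output/counts/")
    .option("checkpointLocation", "dbfs:/checkpoints/countsQuery")   // a different location
    .start()
```

If two queries were pointed at the same checkpoint location, their offset and state tracking would conflict, so a per-query directory naming scheme like the one above is a simple way to keep them separate.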

Configuring Jobs to Restart Streaming Queries on Failure

You can create a Databricks Job with the notebook and/or JAR that contains your streaming queries, and configure it to

  • Always use a new cluster.
  • Always retry on failure.

Jobs is tightly integrated with the Structured Streaming APIs and can monitor all streaming queries active in a Run. The above configuration ensures that if any of the queries fails, Jobs will automatically terminate the Run (along with all the other queries) and start a new Run in a new cluster. The new Run will re-execute the notebook/JAR code and restart all the queries. This is the safest way to get back into a good state.

Warning

Notebook Workflows are currently not supported with long-running jobs; therefore we do not recommend using Notebook Workflows in your streaming jobs.

Note

Failure in any of the active streaming queries will fail the active Run, and terminate all the other streaming queries.

Note

You do not need to use streamingQuery.awaitTermination() or spark.streams.awaitAnyTermination() at the end of your notebook. Jobs automatically prevents a Run from completing when a streaming query is active.

Here are the details of the recommended Job configuration.

  • Cluster: Set this to always use a new cluster, and use the latest Spark version (or at least version 2.1). Queries started in Spark 2.1 and above are recoverable after query and Spark version upgrades.
  • Alerts: Set this if you want email notification on failures.
  • Schedule: Do not set a schedule.
  • Timeout: Do not set a timeout; a streaming query runs indefinitely.
  • Maximum concurrent runs: Set this to 1, as there must be only one instance of each query active at a time.
  • Retries: Set this to Unlimited.
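Expressed as a Jobs API request body, the settings above might look like the following sketch. The job name, notebook path, email address, and cluster spec are placeholder assumptions; the field names follow the Databricks Jobs API, where max_retries of -1 means unlimited retries, and timeout_seconds is omitted so that no timeout applies:

```json
{
  "name": "my-streaming-job",
  "new_cluster": {
    "spark_version": "latest",
    "node_type_id": "your-node-type",
    "num_workers": 2
  },
  "notebook_task": { "notebook_path": "/Users/me/streaming-notebook" },
  "email_notifications": { "on_failure": ["me@example.com"] },
  "max_retries": -1,
  "max_concurrent_runs": 1
}
```

Note that no schedule key appears, matching the recommendation above that the job run continuously rather than on a schedule.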

See Jobs for details about these configurations. Here is a screenshot of a good Job configuration.

[Screenshot: recommended Job configuration]

Configuring Spark Scheduler Pools for Efficiency

By default, all queries started in a notebook run in the same fair scheduling pool. As a result, the jobs generated by triggers from all the streaming queries in a notebook run one after another in FIFO order, which can cause unnecessary delays in the queries because they are not sharing the cluster resources efficiently.

To enable all streaming queries to execute jobs concurrently and efficiently share the cluster, you can set the queries to execute in separate scheduler pools. For example:

// Run streaming query1 in scheduler pool1
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "pool1")
df.writeStream.queryName("query1").format("parquet").start(path1)

// Run streaming query2 in scheduler pool2
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "pool2")
df.writeStream.queryName("query2").format("orc").start(path2)
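By default, pools that are only referenced by name, as above, are created with default settings. If you also want to control each pool’s scheduling mode, weight, or minimum share, you can define the pools in an XML allocations file and point the spark.scheduler.allocation.file configuration at it. A sketch, using pool names matching the example above and illustrative weight/share values:

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="pool1">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
  <pool name="pool2">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

Giving one pool a higher weight would let its query’s jobs receive a proportionally larger share of the cluster when both queries are triggering at once.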

See the Apache Spark documentation on job scheduling for more details.