Configure Auto Loader for production workloads
Databricks recommends that you follow the streaming best practices for running Auto Loader in production.
Databricks recommends using Auto Loader in Lakeflow Spark Declarative Pipelines for incremental data ingestion. Lakeflow Spark Declarative Pipelines extends functionality in Apache Spark Structured Streaming and allows you to write just a few lines of declarative Python or SQL to deploy a production-quality data pipeline with:
- Autoscaling compute infrastructure for cost savings
- Data quality checks with expectations
- Automatic schema evolution handling
- Monitoring via metrics in the event log
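For example, a minimal declarative pipeline definition might look like the following Python sketch, which uses the dlt module available in pipeline source code; the table name, expectation, and landing path are illustrative placeholders:

import dlt

@dlt.table(comment="Orders ingested incrementally with Auto Loader")
@dlt.expect_or_drop("valid_id", "id IS NOT NULL")  # data quality check with expectations
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")           # source file format (placeholder)
        .load("/Volumes/main/default/landing/orders")  # hypothetical landing path
    )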
Monitoring Auto Loader
Querying files discovered by Auto Loader
The cloud_files_state function is available in Databricks Runtime 11.3 LTS and above.
Auto Loader provides a SQL API for inspecting the state of a stream. Using the cloud_files_state function, you can find metadata about files that have been discovered by an Auto Loader stream. Query cloud_files_state, providing the checkpoint location associated with an Auto Loader stream.
SELECT * FROM cloud_files_state('path/to/checkpoint');
Listen to stream updates
To further monitor Auto Loader streams, Databricks recommends using Apache Spark's Streaming Query Listener interface.
Auto Loader reports metrics to the Streaming Query Listener at every batch. You can view how many files exist in the backlog and how large the backlog is in the numFilesOutstanding and numBytesOutstanding metrics under the Raw Data tab in the streaming query progress dashboard:
{
"sources": [
{
"description": "CloudFilesSource[/path/to/source]",
"metrics": {
"numFilesOutstanding": "238",
"numBytesOutstanding": "163939124006"
}
}
]
}
In Databricks Runtime 10.4 LTS and above, when using file notification mode, the metrics also include the approximate number of file events in the cloud queue as approximateQueueSize for AWS and Azure.
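For example, a custom listener can surface the Auto Loader backlog metrics from each progress update. This is a minimal Python sketch; the listener class name and the log format are illustrative:

from pyspark.sql.streaming import StreamingQueryListener

class AutoLoaderBacklogListener(StreamingQueryListener):
    """Logs Auto Loader backlog metrics from every streaming progress update."""

    def onQueryStarted(self, event):
        pass

    def onQueryProgress(self, event):
        # Each progress update contains one entry per source; Auto Loader
        # sources report a description of the form CloudFilesSource[<path>].
        for source in event.progress.sources:
            if "CloudFilesSource" in (source.description or ""):
                metrics = source.metrics or {}
                print(
                    f"files outstanding: {metrics.get('numFilesOutstanding')}, "
                    f"bytes outstanding: {metrics.get('numBytesOutstanding')}"
                )

    def onQueryIdle(self, event):
        pass

    def onQueryTerminated(self, event):
        pass

# Register the listener on the ambient SparkSession in a Databricks notebook or job.
spark.streams.addListener(AutoLoaderBacklogListener())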
Cost considerations
When running Auto Loader, your main sources of cost are compute resources and file discovery.
To reduce compute costs, Databricks recommends using Lakeflow Jobs to schedule Auto Loader as batch jobs using Trigger.AvailableNow instead of running it continuously, as long as you don't have low latency requirements. See Configure Structured Streaming trigger intervals. These batch jobs can be triggered using file arrival triggers to further lower the latency between file arrival and processing.
File discovery costs can come in the form of LIST operations on your storage accounts in directory listing mode and API requests to the subscription and queue services in file notification mode. To reduce file discovery costs, Databricks recommends:
- Not using the ProcessingTime or Continuous triggers when running Auto Loader continuously in directory listing mode. Use Auto Loader with file events instead. For details about how Auto Loader with file events works, see Auto Loader with file events overview.
- Using legacy file notification mode if you cannot use Auto Loader with file events. In this mode, you can tag resources created by Auto Loader to track your costs using resource tags, as sketched after this list.
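For example, the following Python sketch enables legacy file notification mode and attaches a cost-tracking tag to the notification resources that Auto Loader creates. The paths, table name, and tag are placeholders, and the cloudFiles.resourceTag.* option name follows the Auto Loader options reference; verify it for your runtime:

source_path = "/Volumes/main/default/landing/events"           # hypothetical source directory
checkpoint_path = "/Volumes/main/default/checkpoints/events"   # hypothetical checkpoint location

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                         # source file format (placeholder)
    .option("cloudFiles.useNotifications", "true")               # legacy file notification mode
    .option("cloudFiles.resourceTag.costCenter", "data-ingest")  # hypothetical tag for cost tracking
    .option("cloudFiles.schemaLocation", checkpoint_path)
    .load(source_path)
    .writeStream
    .option("checkpointLocation", checkpoint_path)
    .toTable("events_bronze"))                                   # hypothetical target table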
Archive files in the source directory to lower costs
Available in Databricks Runtime 16.4 LTS and above.
- Setting cloudFiles.cleanSource will delete or move files in the source directory.
- If you use foreachBatch for your data processing, your files become move or delete candidates as soon as your foreachBatch operation returns successfully, even if your operation only consumed a subset of the files in the batch.
We recommend using Auto Loader with file events to lower discovery costs. This also lowers compute costs because discovery is incremental.
If you cannot use file events and must use directory listing to discover files, you can use the cloudFiles.cleanSource option to automatically archive or delete files after Auto Loader processes them to lower discovery costs. Because Auto Loader cleans up files from your source directory after processing, fewer files need to be listed during discovery.
When using cloudFiles.cleanSource with the MOVE option, consider the following requirements:
- Both the source directory and the destination move directory must be located in the same external location or volume.
- If your source and destination directory are in the same external location, they should not have sibling directories that contain managed storage (for example, a managed volume or catalog). In these cases, Auto Loader is unable to get the necessary permissions to write to the destination directory.
Databricks recommends using this option when:
- Your source directory accumulates a large number of files over time.
- You must retain processed files for compliance or auditing (set cloudFiles.cleanSource to MOVE).
- You want to reduce storage costs by removing files after ingestion (set cloudFiles.cleanSource to DELETE). When using the DELETE mode, Databricks recommends enabling versioning on the bucket so that Auto Loader deletes act as soft-deletes and are available in case of a misconfiguration. Furthermore, Databricks recommends setting up cloud lifecycle policies to purge old, soft-deleted versions after a specified grace period (such as 60 or 90 days) based on your recovery requirements.
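For example, archiving processed files with MOVE might look like the following Python sketch. The paths and table name are placeholders, and the sketch assumes the move destination is passed through a cloudFiles.cleanSource.moveDestination option; check the Auto Loader options reference for your runtime:

source_path = "/Volumes/main/default/landing/orders"           # hypothetical source directory
archive_path = "/Volumes/main/default/archive/orders"          # hypothetical destination in the same volume
checkpoint_path = "/Volumes/main/default/checkpoints/orders"   # hypothetical checkpoint location

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")                              # source file format (placeholder)
    .option("cloudFiles.schemaLocation", checkpoint_path)
    .option("cloudFiles.cleanSource", "MOVE")                        # or "DELETE" to remove files after ingestion
    .option("cloudFiles.cleanSource.moveDestination", archive_path)  # assumed option name for the MOVE destination
    .load(source_path)
    .writeStream
    .option("checkpointLocation", checkpoint_path)
    .toTable("orders_bronze"))                                       # hypothetical target table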
Using Trigger.AvailableNow and rate limiting
Available in Databricks Runtime 10.4 LTS and above.
Auto Loader can be scheduled to run in Lakeflow Jobs as a batch job by using Trigger.AvailableNow. The AvailableNow trigger will instruct Auto Loader to process all files that arrived before the query start time. New files that are uploaded after the stream has started are ignored until the next trigger.
With Trigger.AvailableNow, file discovery happens asynchronously with data processing, and data can be processed across multiple micro-batches with rate limiting. By default, Auto Loader processes a maximum of 1000 files in every micro-batch. You can set cloudFiles.maxFilesPerTrigger and cloudFiles.maxBytesPerTrigger to configure how many files or how many bytes should be processed in a micro-batch. The file limit is a hard limit, but the byte limit is a soft limit, meaning that more bytes can be processed than the provided maxBytesPerTrigger. When both options are provided together, Auto Loader processes as many files as are needed to hit one of the limits.
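For example, a scheduled batch run combining Trigger.AvailableNow with rate limits might look like the following Python sketch; the paths, table name, and limit values are placeholders:

source_path = "/Volumes/main/default/landing/events"           # hypothetical source directory
checkpoint_path = "/Volumes/main/default/checkpoints/events"   # hypothetical checkpoint location

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")              # source file format (placeholder)
    .option("cloudFiles.schemaLocation", checkpoint_path)
    .option("cloudFiles.maxFilesPerTrigger", "500")   # hard limit on files per micro-batch
    .option("cloudFiles.maxBytesPerTrigger", "10g")   # soft limit on bytes per micro-batch
    .load(source_path)
    .writeStream
    .option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)                       # process all available files, then stop
    .toTable("events_bronze"))                        # hypothetical target table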
Checkpoint location
The checkpoint location is used to store the state and progress information of the stream. Databricks recommends setting the checkpoint location to a location without a cloud object lifecycle policy. If files in the checkpoint location are cleaned according to the policy, the stream state is corrupted. If this happens, you must restart the stream from scratch.
File event tracking
Auto Loader keeps track of discovered files in the checkpoint location using RocksDB to provide exactly-once ingestion guarantees. For high-volume or long-lived ingestion streams, Databricks recommends upgrading to Databricks Runtime 15.4 LTS or above. In these versions, Auto Loader does not wait for the entire RocksDB state to be downloaded before the stream starts, which can accelerate stream startup time.
If you want to prevent the file state from growing without bound, you can also consider using the cloudFiles.maxFileAge option to expire file events that are older than a certain age. The minimum value that you can set for cloudFiles.maxFileAge is "14 days". Deletes in RocksDB appear as tombstone entries. Therefore, you might see the storage usage increase temporarily as events expire before it starts to level off.
cloudFiles.maxFileAge is provided as a cost control mechanism for high volume datasets. Tuning cloudFiles.maxFileAge too aggressively can cause data quality issues such as duplicate ingestion or missing files. Therefore, Databricks recommends a conservative setting for cloudFiles.maxFileAge, such as 90 days, which is similar to what comparable data ingestion solutions recommend.
Trying to tune the cloudFiles.maxFileAge option can lead to unprocessed files being ignored by Auto Loader, or to already processed files expiring and then being re-processed, causing duplicate data. Here are some things to consider when choosing a value for cloudFiles.maxFileAge:
- If your stream restarts after a long time, file notification events that are pulled from the queue that are older than cloudFiles.maxFileAge are ignored. Similarly, if you use directory listing, files that might have appeared during the down time that are older than cloudFiles.maxFileAge are ignored.
- If you use directory listing mode with cloudFiles.maxFileAge set to, for example, "1 month", and then stop your stream and restart it with cloudFiles.maxFileAge set to "2 months", files that are older than 1 month but more recent than 2 months are reprocessed.
If you set this option the first time you start the stream, data older than cloudFiles.maxFileAge is not ingested. Therefore, if you want to ingest old data, do not set this option when you start your stream for the first time. However, you should set this option on subsequent runs.
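For example, on a subsequent run (not the first start of the stream), a conservative expiration could be configured as in this Python sketch; the paths are placeholders:

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                                              # source file format (placeholder)
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/checkpoints/events")  # hypothetical path
    .option("cloudFiles.maxFileAge", "90 days")                                       # conservative expiration; set only on subsequent runs
    .load("/Volumes/main/default/landing/events"))                                    # hypothetical source directory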
Trigger regular backfills using cloudFiles.backfillInterval
In rare instances, files might be missed or late when depending solely on notification systems, such as when notification message retention limits are reached. If you have strict requirements on data completeness and SLA, consider setting cloudFiles.backfillInterval to trigger asynchronous backfills at a specified interval. For example, set it to one day for daily backfills, or one week for weekly backfills. Triggering regular backfills does not cause duplicates.
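For example, a weekly asynchronous backfill can be requested with a single option, as in this Python sketch; the paths are placeholders:

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                                              # source file format (placeholder)
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/checkpoints/events")  # hypothetical path
    .option("cloudFiles.backfillInterval", "1 week")                                  # trigger an asynchronous backfill once a week
    .load("/Volumes/main/default/landing/events"))                                    # hypothetical source directory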
When using file events, run your stream at least once every 7 days
When using file events, run your Auto Loader streams at least once every 7 days to avoid a full directory listing. Running your Auto Loader streams this frequently will ensure that file discovery is incremental.
For comprehensive managed file events best practices, see Best practices for Auto Loader with file events.