You can use scheduled query executions to keep your dashboards updated or to enable routine alerts. By default, your queries do not have a schedule.
If your query is used by an alert, the alert runs on its own refresh schedule and does not use the query schedule.
To set the schedule:
1. In the Query Editor, click Schedule to open a picker with schedule intervals.
2. Set the schedule.
The picker scrolls and allows you to choose:
An interval: 1-30 minutes, 1-12 hours, 1 or 30 days, 1 or 2 weeks
A time. The time selector displays in the picker only when the interval is greater than 1 day and the day selection is greater than 1 week. When you schedule a specific time, Databricks SQL takes input in your computer's timezone and converts it to UTC. If you want a query to run at a certain time in UTC, you must adjust the picker by your local offset. For example, if you want a query to execute at 00:00 UTC each day, but your current timezone is PDT (UTC-7), you should select 17:00 in the picker.
Once scheduled, your query runs automatically.
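The offset conversion above can be computed directly. The sketch below is illustrative only (the function name `picker_time_for_utc` is hypothetical, not part of any Databricks API); it uses Python's standard `zoneinfo` module, which handles daylight saving automatically, so the picker value for a given UTC time can differ between summer and winter.

```python
from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo

def picker_time_for_utc(run_day: date, hour_utc: int, local_tz: str) -> str:
    """Return the local wall-clock time (HH:MM) to select in the picker
    so that a query runs at the desired UTC hour on the given day.
    Hypothetical helper for illustration; not a Databricks API."""
    target_utc = datetime(run_day.year, run_day.month, run_day.day,
                          hour_utc, 0, tzinfo=timezone.utc)
    # Convert the UTC target into the user's local timezone.
    return target_utc.astimezone(ZoneInfo(local_tz)).strftime("%H:%M")

# For a 00:00 UTC run while Pacific time observes PDT (UTC-7):
print(picker_time_for_utc(date(2024, 7, 1), 0, "America/Los_Angeles"))  # 17:00
```

Note that the same UTC target maps to 16:00 in winter (PST, UTC-8), so a picker value chosen in summer drifts by an hour after a daylight-saving transition.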
If a scheduled query does not execute on schedule, trigger it manually to confirm that it isn't failing. However, be aware of the following:
If you schedule an interval (for example, every 15 minutes), the interval is calculated from the last successful execution. If you execute the query manually, the next scheduled execution does not occur until a full interval has elapsed since that manual run.
If you schedule a time, Databricks SQL waits for the results to become “outdated”. For example, if a query is set to refresh every Thursday and you execute it manually on Wednesday, the results are still considered “valid” on Thursday, so no new scheduled execution occurs. When setting a weekly schedule, check the last execution time and expect the next scheduled run on the selected day after that execution is a week old; avoid executing the query manually in the meantime.
If a query execution fails, Databricks SQL retries with a back-off algorithm: the more consecutive failures, the longer the delay before the next retry, which might push the retry beyond the refresh interval.
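The scheduling behaviors above can be summarized in a short model. This is a hypothetical sketch, not Databricks SQL's actual implementation (the real back-off parameters are not documented here): it assumes the interval counts from the last successful run and that each consecutive failure doubles a base retry delay.

```python
from datetime import datetime, timedelta

def next_scheduled_run(last_success: datetime,
                       interval: timedelta,
                       consecutive_failures: int = 0,
                       base_backoff: timedelta = timedelta(minutes=1)) -> datetime:
    """Illustrative model of the documented behavior (assumed parameters):
    the next run is one interval after the last SUCCESSFUL execution, and
    consecutive failures add an exponentially growing back-off delay."""
    if consecutive_failures == 0:
        return last_success + interval
    # Exponential back-off: 1, 2, 4, 8, ... times the base delay.
    backoff = base_backoff * (2 ** (consecutive_failures - 1))
    return last_success + interval + backoff

last = datetime(2024, 7, 1, 12, 0)
print(next_scheduled_run(last, timedelta(minutes=15)))      # 2024-07-01 12:15:00
print(next_scheduled_run(last, timedelta(minutes=15), 5))   # 2024-07-01 12:31:00
```

With five consecutive failures the 16-minute back-off already exceeds the 15-minute refresh interval, which is the situation the note above warns about.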
When a query is “Run as Owner” and a schedule is added, the owner’s credential is used for execution, and anyone with at least “Can Run” permission sees the results of those refreshed queries.
When a query is “Run as Viewer” and a schedule is added, the owner’s credential is used for execution but only the owner sees the results of the refreshed queries; all other viewers must manually refresh to see updated query results.
If one or more queries fail, Databricks SQL notifies query owners by email once per hour until no failures remain. Failure report emails run on a process independent of the query schedules themselves, so it may take up to an hour after a failed execution before Databricks SQL sends the failure report.
If query owners do not receive emailed failure reports when scheduled queries fail, your administrator has disabled them for your Databricks SQL instance.