Cluster Size and Autoscaling

When creating a cluster, you can either provide a fixed number of workers or provide a minimum and maximum range for the number of workers in the cluster.

When you provide a fixed-size cluster, Databricks ensures that your cluster has the specified number of workers.

Autoscaling Overview

Note

This feature works best with Databricks Runtime versions 3.4 and above.

You can also configure a cluster to autoscale by providing the minimum and maximum range of workers required for the cluster.

With autoscaling enabled, Databricks automatically chooses the appropriate number of workers required to run your Spark job. Databricks may also dynamically reallocate workers during runtime to account for the characteristics of your job: certain phases of your pipeline may be more computationally demanding than others, and Databricks automatically spins up additional workers during those phases (and shuts them down when they are no longer needed).

When you select a minimum and a maximum number of workers, the cluster size is automatically adjusted between those limits during the cluster’s lifetime.
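
For example, the same minimum and maximum can also be supplied programmatically. The following is a minimal sketch, assuming the Databricks Clusters REST API endpoint POST /api/2.0/clusters/create; the workspace URL, token, node type, and runtime version are placeholders rather than values from this document:

    import requests

    # Placeholder values -- substitute your own workspace URL and token.
    HOST = "https://<your-workspace>.cloud.databricks.com"
    TOKEN = "<personal-access-token>"

    # Request a cluster that autoscales between 5 and 10 workers.
    payload = {
        "cluster_name": "autoscaling-example",
        "spark_version": "3.4.x-scala2.11",  # illustrative runtime version
        "node_type_id": "i3.xlarge",         # illustrative node type
        "autoscale": {"min_workers": 5, "max_workers": 10},
    }

    resp = requests.post(
        HOST + "/api/2.0/clusters/create",
        headers={"Authorization": "Bearer " + TOKEN},
        json=payload,
    )
    resp.raise_for_status()
    print(resp.json())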

Autoscaling makes it easier for users to achieve high cluster utilization, because they do not need to worry about provisioning their cluster to exactly match their workload. This applies especially to workloads whose requirements change over time (like exploring a dataset during the course of a day), but it can also apply to a one-time, shorter workload for which provisioning requirements are unknown. This offers two advantages:

  1. Workloads can run faster compared to a constant-sized under-provisioned cluster.
  2. Autoscaling clusters can reduce overall costs compared to a statically-sized cluster.

Depending on the constant size of the cluster and on the workload, autoscaling gives you one or both of these benefits. Note that the cluster size can go below the minimum number of workers selected by the user when the cloud provider terminates instances. In this case, Databricks continuously retries to increase the cluster size back into the selected range.

Enabling Autoscaling

Users can enable autoscaling by selecting the Enable Autoscaling checkbox at the time of cluster creation.

(Screenshot: the Enable Autoscaling option at cluster creation.)

How Autoscaling Works

Databricks monitors the load on Spark clusters and decides whether to scale a cluster up or down, and by how much. If a cluster has pending Spark tasks, it scales up; if it has no pending Spark tasks, it scales down. The autoscaling algorithm uses exponential steps to ensure that users experience fast workloads while maintaining efficient cluster utilization.
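
The exact algorithm is internal to Databricks, but the general shape of such a decision can be sketched as follows. This is an illustrative model only, assuming a hypothetical pending_tasks signal and a simple doubling/halving step; it is not the actual Databricks implementation:

    def next_cluster_size(current, pending_tasks, min_workers, max_workers):
        """Illustrative scaling decision with exponential steps,
        clamped to the configured [min_workers, max_workers] range."""
        if pending_tasks > 0:
            # Pending tasks: scale up quickly by doubling the worker count.
            return min(current * 2, max_workers)
        # No pending tasks: step back down toward the minimum.
        return max(current // 2, min_workers)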

Warning

Clusters with no pending tasks do not scale up, even if they have running tasks. This usually indicates that the cluster is fully utilized, and adding more nodes will not make the processing faster.

(Screenshot: a cluster with pending tasks.)

This cluster currently has 16 running tasks and 16 pending tasks (pending tasks are total tasks minus running tasks), so it will be scaled up.

Reconfiguring Autoscaling Clusters

If you reconfigure a static cluster to be an autoscaling cluster, the cluster immediately resizes within the minimum and maximum bounds and then commences autoscaling. As an example, the table below shows what happens to clusters of a given initial size when a user reconfigures them to autoscale between 5 and 10 nodes.

Initial size    Size after reconfiguration
------------    --------------------------
6               6
12              10
3               5
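
The resizing rule illustrated by the table is simply a clamp of the initial size into the new range. A minimal sketch, using the same 5-10 bounds as the table:

    def size_after_reconfiguration(initial, min_workers=5, max_workers=10):
        # Clamp the current size into the new [min, max] autoscaling range.
        return max(min_workers, min(initial, max_workers))

    assert size_after_reconfiguration(6) == 6
    assert size_after_reconfiguration(12) == 10
    assert size_after_reconfiguration(3) == 5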

Autoscaling for Jobs

Autoscaling for jobs is different from standard autoscaling and is recommended only with Databricks Runtime versions 3.4 and above. To enable this feature for a job running Databricks Runtime 3.4 or higher, choose the Enable Autoscaling option on the Configure Cluster page.

This feature allows a jobs cluster to scale up and down more aggressively in response to load and is designed to improve resource utilization. In particular, a cluster can scale down idle VMs even when tasks are running on other VMs. This autoscaling algorithm is different from the one used for standard interactive clusters. To revert to the same autoscaling behavior as standard clusters, set the Spark configuration spark.databricks.enableAggressiveAutoscaling to false.
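
For reference, here is a hedged sketch of where that setting would sit in a job cluster specification. The autoscale and spark_conf field names follow the Databricks cluster specification; the runtime version and node type are illustrative placeholders:

    # Fragment of a job's new-cluster specification, reverting aggressive
    # autoscaling to the standard interactive-cluster behavior.
    new_cluster = {
        "spark_version": "3.4.x-scala2.11",  # illustrative
        "node_type_id": "i3.xlarge",         # illustrative
        "autoscale": {"min_workers": 2, "max_workers": 8},
        "spark_conf": {
            "spark.databricks.enableAggressiveAutoscaling": "false",
        },
    }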