Pool configurations

This article explains the configuration options available when you create and edit a pool.


Pool size and auto termination

When you create a pool, you can control its size by setting three parameters: Minimum Idle Instances, Maximum Capacity, and Idle Instance Auto Termination.

Minimum Idle Instances

The minimum number of instances the pool keeps idle. These instances do not terminate, regardless of the setting specified in Idle Instance Auto Termination. If a cluster consumes idle instances from the pool, Databricks provisions additional instances to maintain the minimum.


Maximum Capacity

The maximum number of instances the pool can provision. If set, this value constrains all instances: idle instances plus those in use by clusters. If a cluster using the pool requests more instances than this maximum during autoscaling, the request fails with an INSTANCE_POOL_MAX_CAPACITY_FAILURE error.


This configuration is optional. Databricks recommends setting a value only in the following circumstances:

  • You have an instance quota you must stay under.
  • You want to prevent one set of workloads from impacting another. For example, suppose your instance quota is 100 and teams A and B both need to run jobs. You can create pool A with a maximum capacity of 50 and pool B with a maximum capacity of 50 so that the two teams share the 100-instance quota fairly.
  • You need to cap cost.

Idle Instance Auto Termination

The time in minutes that instances above the value set in Minimum Idle Instances can be idle before being terminated by the pool.

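If you manage pools programmatically, these three settings map to fields on the Instance Pools API create request. The following is a minimal sketch using Python's requests library; the workspace URL, token, and instance type are placeholders, and the field names should be verified against the Instance Pools API reference for your workspace.

    import requests

    HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
    TOKEN = "<personal-access-token>"                       # placeholder credential

    # Pool sizing fields on the create request.
    payload = {
        "instance_pool_name": "demo-pool",
        "node_type_id": "i3.xlarge",                  # example instance type
        "min_idle_instances": 2,                      # Minimum Idle Instances
        "max_capacity": 50,                           # Maximum Capacity (optional)
        "idle_instance_autotermination_minutes": 60,  # Idle Instance Auto Termination
    }

    resp = requests.post(
        f"{HOST}/api/2.0/instance-pools/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
    )
    resp.raise_for_status()
    print(resp.json()["instance_pool_id"])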

Instance types

A pool consists of both idle instances kept ready for new clusters and instances in use by running clusters. All of these instances are of the same instance type, which you select when you create the pool.

A pool’s instance type cannot be edited. Clusters attached to a pool use the same instance type for the driver and worker nodes. Different families of instance types fit different use cases, such as memory-intensive or compute-intensive workloads.


Databricks always provides one year’s deprecation notice before ceasing support for an instance type.
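
To browse the instance types available in your workspace before creating a pool, you can query the clusters list-node-types endpoint. A minimal sketch, with the same placeholder host and token as the earlier example:

    import requests

    HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                       # placeholder

    resp = requests.get(
        f"{HOST}/api/2.0/clusters/list-node-types",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()

    # Print each instance type's ID along with its cores and memory.
    for nt in resp.json()["node_types"]:
        print(nt["node_type_id"], nt["num_cores"], "cores,", nt["memory_mb"], "MB")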

Preload Databricks Runtime version

You can speed up cluster launches by selecting a Databricks Runtime version to be loaded on idle instances in the pool. If a user selects that runtime when they create a cluster backed by the pool, that cluster will launch even more quickly than a pool-backed cluster that doesn’t use a preloaded Databricks Runtime version.

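In the API, the preloaded runtime is the preloaded_spark_versions list on the create request; valid runtime version keys can be listed with the clusters spark-versions endpoint. A sketch under the same placeholder assumptions as the earlier examples:

    import requests

    HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                       # placeholder
    headers = {"Authorization": f"Bearer {TOKEN}"}

    # List the Databricks Runtime version keys available in this workspace.
    resp = requests.get(f"{HOST}/api/2.0/clusters/spark-versions", headers=headers)
    resp.raise_for_status()
    for v in resp.json()["versions"]:
        print(v["key"], "-", v["name"])

    # Preload one of those runtimes on the pool's idle instances.
    payload = {
        "instance_pool_name": "demo-pool-preloaded",
        "node_type_id": "i3.xlarge",                            # example instance type
        "min_idle_instances": 2,
        "preloaded_spark_versions": ["<runtime-version-key>"],  # a key printed above
    }
    requests.post(
        f"{HOST}/api/2.0/instance-pools/create", headers=headers, json=payload
    ).raise_for_status()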

Pool tags

Pool tags allow you to easily monitor the cost of cloud resources used by various groups in your organization. You can specify tags as key-value pairs when you create a pool, and Databricks applies these tags to cloud resources like VMs and disk volumes, as well as DBU usage reports.

For convenience, Databricks applies three default tags to each pool: Vendor, DatabricksInstancePoolId, and DatabricksInstancePoolCreatorId. You can also add custom tags when you create a pool. You can add up to 43 custom tags.
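
When you create a pool through the API, custom tags are passed as the custom_tags map on the create request; Databricks adds the three default tags itself. A sketch of that portion of the payload, with illustrative tag names, to be merged into a create request like the one shown earlier:

    # Example custom tags; the keys and values here are illustrative.
    payload = {
        "instance_pool_name": "demo-pool-tagged",
        "node_type_id": "i3.xlarge",
        "custom_tags": {
            "team": "data-platform",
            "cost-center": "1234",
        },
    }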

Custom tag inheritance

Pool-backed clusters inherit default and custom tags from the pool configuration. For detailed information about how pool tags and cluster tags work together, see Monitor usage using cluster and pool tags.

Configure custom pool tags

  1. At the bottom of the pool configuration page, select the Tags tab.

  2. Specify a key-value pair for the custom tag.

  3. Click Add.

AWS configurations

When you configure a pool’s AWS instances, you can choose the availability zone, the max spot price, and the EBS volume type and size. All clusters attached to the pool inherit these configurations. To specify configurations, at the bottom of the pool configuration page, click the Instances tab.


Availability zones

Choosing a specific availability zone for a pool is useful primarily if your organization has purchased reserved instances in specific availability zones. Read more about AWS availability zones.

Max spot price

You can specify the max spot price to use when launching spot instances as a percentage of the corresponding on-demand price. By default, Databricks sets the max spot price at 100% of the on-demand price. Read more about AWS spot pricing.

A pool can either be all spot instances or all on-demand instances.
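
Through the API, the availability zone, the spot/on-demand choice, and the max spot price are all fields of the pool's aws_attributes. A sketch of that portion of the create payload; the zone and percentage are example values:

    # AWS-specific pool settings (example values), merged into the create payload.
    payload = {
        "instance_pool_name": "demo-pool-spot",
        "node_type_id": "i3.xlarge",
        "aws_attributes": {
            "availability": "SPOT",         # all spot; use "ON_DEMAND" for all on-demand
            "zone_id": "us-west-2a",        # availability zone
            "spot_bid_price_percent": 100,  # max spot price as % of the on-demand price
        },
    }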

EBS volumes

This section describes the default EBS volume settings for pool instances.

Default EBS volumes

Databricks provisions EBS volumes for every instance as follows:

  • A 30 GB unencrypted EBS instance root volume used only by the host operating system and Databricks internal services.
  • A 150 GB encrypted EBS container root volume used by the Spark worker. This hosts Spark services and logs.
  • (HIPAA only) A 75 GB encrypted EBS worker log volume that stores logs for Databricks internal services.

Add EBS shuffle volumes

To add shuffle volumes, select General Purpose SSD in the EBS Volume Type drop-down list.


By default, Spark shuffle outputs go to the instance local disk. For instance types that do not have a local disk, or if you want to increase your Spark shuffle storage space, you can specify additional EBS volumes. This is particularly useful for preventing out-of-disk-space errors when you run Spark jobs that produce large shuffle outputs.

Databricks encrypts these EBS volumes for both on-demand and spot instances. Read more about AWS EBS volumes.
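
In the API, shuffle volumes correspond to the disk_spec field on the create request. A sketch with example count and size; verify the volume-type enum against the Instance Pools API reference:

    # Attach EBS shuffle volumes to every instance in the pool (example values).
    payload = {
        "instance_pool_name": "demo-pool-shuffle",
        "node_type_id": "i3.xlarge",
        "disk_spec": {
            "disk_type": {"ebs_volume_type": "GENERAL_PURPOSE_SSD"},
            "disk_count": 1,   # number of volumes per instance
            "disk_size": 100,  # size of each volume, in GB
        },
    }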

AWS EBS limits

Ensure that your AWS EBS limits are high enough to satisfy the runtime requirements for all instances in all pools. For information on the default EBS limits and how to change them, see Amazon Elastic Block Store (EBS) Limits.

Autoscaling local storage

If you don’t want to allocate a fixed number of EBS volumes at pool creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your pool’s Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance’s local storage).

To configure autoscaling storage, select Enable autoscaling local storage in the Autopilot Options.

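In the API, this UI option corresponds to the boolean enable_elastic_disk flag on the create request (a sketch; the rest of the payload follows the earlier examples):

    # Let Databricks attach EBS volumes automatically when workers run low on disk.
    payload = {
        "instance_pool_name": "demo-pool-elastic",
        "node_type_id": "i3.xlarge",
        "enable_elastic_disk": True,
    }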

The EBS volumes attached to an instance are detached only when the instance is returned to AWS. That is, EBS volumes are never detached from an instance as long as it is in the pool. To scale down EBS usage, Databricks recommends configuring the settings described in Pool size and auto termination.

Note

  • Databricks uses Throughput Optimized HDD (st1) to extend the local storage of an instance. The default AWS capacity limit for these volumes is 20 TiB. To avoid hitting this limit, administrators should request an increase in this limit based on their usage requirements.
  • If you want to use autoscaling local storage, the IAM role or keys used to create your account must include the permissions ec2:AttachVolume, ec2:CreateVolume, ec2:DeleteVolume, and ec2:DescribeVolumes. For the complete list of permissions and instructions on how to update your existing IAM role or keys, see Configure your AWS account.