Create a cross-account IAM role
Learn how to set up a cross-account IAM role to enable Databricks to deploy workspaces in your AWS account.
Tip
You can automate IAM role creation using the Databricks Terraform provider. See Create Databricks workspaces using Terraform.
Create a cross-account role
Get your Databricks account ID from the account console; you use it as the external ID when you create the AWS cross-account IAM role in your AWS account.
Log into your AWS Console as a user with administrator privileges and go to the IAM console.
Click the Roles tab in the sidebar.
Click Create role.
In Select type of trusted entity, click the AWS account tile.
Select the Another AWS account checkbox.
In the Account ID field, enter the Databricks account ID 414351767826. This is not the account ID that you copied from the Databricks account console.
Select the Require external ID checkbox.
In the External ID field, enter your Databricks account ID, which you copied from the Databricks account console.
Click the Next: Add Permissions button.
Click the Next: Name, Review, and Create button.
In the Role name field, enter a role name.
Under Step 1: Select trusted entities, the JSON should look like the following:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Principal": { "AWS": "414351767826" }, "Condition": { "StringEquals": { "sts:ExternalId": "YOUR_EXTERNAL_ID" } } } ] }
Click Create role. The list of roles displays.
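If you prefer to script this step instead of clicking through the console, the following is a minimal sketch using boto3 (the AWS SDK for Python). It assumes your local AWS credentials have IAM administrator permissions and that your Databricks account ID is available in the DATABRICKS_ACCOUNT_ID environment variable; the role name databricks-cross-account-role is only an example.

# A minimal sketch of the console steps above, using boto3.
# Assumptions: AWS credentials with IAM admin rights are configured locally, and
# DATABRICKS_ACCOUNT_ID holds the account ID copied from the Databricks account console.
import json
import os

import boto3

iam = boto3.client("iam")

# Trust policy: allow the Databricks account (414351767826) to assume this role,
# but only when it presents your Databricks account ID as the external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"AWS": "414351767826"},
            "Condition": {
                "StringEquals": {"sts:ExternalId": os.environ["DATABRICKS_ACCOUNT_ID"]}
            },
        }
    ],
}

# "databricks-cross-account-role" is an example name; use your own convention.
role = iam.create_role(
    RoleName="databricks-cross-account-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Cross-account role that lets Databricks deploy workspaces",
)
print(role["Role"]["Arn"])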
Create an access policy
To create a new workspace, you must set up an access policy on your cross-account IAM role. This policy varies depending on your Amazon VPC (Virtual Private Cloud) deployment type and your restrictions. There are three policy options:
Default: Create your workspace in a single VPC that Databricks creates and configures in your AWS account. This is the default configuration. To use the default policy, see Default deployment policy.
Customer-managed VPC with default restrictions: Create your Databricks workspaces in your own VPC, using a feature known as customer-managed VPC. To use this policy, see Customer-managed VPC with default policy restrictions.
Customer-managed VPC with custom restrictions: Create your Databricks workspaces in your own VPC, using a feature known as customer-managed VPC. You can configure this access policy with custom restrictions for account ID, VPC ID, AWS Region, and security group. To use this policy, see Customer-managed VPC with custom policy restrictions.
Important
These policies assume the workspace uses secure cluster connectivity, which is sometimes referred to as No Public IP (NPIP). Secure cluster connectivity is the default as of September 1, 2020 for workspaces created with the Account API. If you have a workspace that does not use secure cluster connectivity (NPIP), contact your Databricks representative.
Default deployment policy
The following steps include an access policy for launching Databricks workspaces in a VPC that Databricks creates and configures in your AWS account. For information about how Databricks uses each permission, see IAM permissions for Databricks-managed VPC.
In the list of roles, click the role you created.
Add an inline policy.
On the Permissions tab, click Add inline policy.
In the policy editor, click the JSON tab.
Copy the access policy for deploying workspaces in a VPC that Databricks creates and configures in your AWS account.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1403287045000", "Effect": "Allow", "Action": [ "ec2:AllocateAddress", "ec2:AssignPrivateIpAddresses", "ec2:AssociateDhcpOptions", "ec2:AssociateIamInstanceProfile", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CancelSpotInstanceRequests", "ec2:CreateDhcpOptions", "ec2:CreateFleet", "ec2:CreateInternetGateway", "ec2:CreateLaunchTemplate", "ec2:CreateLaunchTemplateVersion", "ec2:CreateNatGateway", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateVpc", "ec2:CreateVpcEndpoint", "ec2:DeleteDhcpOptions", "ec2:DeleteFleets", "ec2:DeleteInternetGateway", "ec2:DeleteLaunchTemplate", "ec2:DeleteLaunchTemplateVersions", "ec2:DeleteNatGateway", "ec2:DeleteRoute", "ec2:DeleteRouteTable", "ec2:DeleteSecurityGroup", "ec2:DeleteSubnet", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DeleteVpc", "ec2:DeleteVpcEndpoints", "ec2:DescribeAvailabilityZones", "ec2:DescribeFleetHistory", "ec2:DescribeFleetInstances", "ec2:DescribeFleets", "ec2:DescribeIamInstanceProfileAssociations", "ec2:DescribeInstanceStatus", "ec2:DescribeInstances", "ec2:DescribeInternetGateways", "ec2:DescribeLaunchTemplates", "ec2:DescribeNatGateways", "ec2:DescribePrefixLists", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSpotInstanceRequests", "ec2:DescribeSpotPriceHistory", "ec2:DescribeSubnets", "ec2:DescribeVolumes", "ec2:DescribeVpcs", "ec2:DetachInternetGateway", "ec2:DisassociateIamInstanceProfile", "ec2:DisassociateRouteTable", "ec2:GetLaunchTemplateData", "ec2:GetSpotPlacementScores", "ec2:ModifyFleet", "ec2:ModifyLaunchTemplate", "ec2:ModifyVpcAttribute", "ec2:ReleaseAddress", "ec2:ReplaceIamInstanceProfileAssociation", "ec2:RequestSpotInstances", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:TerminateInstances" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "iam:CreateServiceLinkedRole", "iam:PutRolePolicy" ], "Resource": "arn:aws:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot", "Condition": { "StringLike": { "iam:AWSServiceName": "spot.amazonaws.com" } } } ] }
Click Review policy.
In the Name field, enter a policy name.
Click Create policy.
If you use Service Control Policies to deny certain actions at the AWS account level, ensure that sts:AssumeRole is allowlisted so Databricks can assume the cross-account role.
In the role summary, copy the Role ARN.
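As with role creation, you can attach this inline policy and read back the role ARN with a short script instead of the console. The following is a minimal sketch; the file name databricks-default-policy.json and the policy and role names are example values, and it assumes you saved the policy JSON above locally.

# A minimal sketch: attach the default deployment policy above as an inline policy,
# then print the Role ARN that you later provide to Databricks.
# Assumes the JSON policy above was saved to databricks-default-policy.json (example name).
import boto3

iam = boto3.client("iam")
role_name = "databricks-cross-account-role"  # example name from the earlier sketch

with open("databricks-default-policy.json") as f:
    policy_document = f.read()

iam.put_role_policy(
    RoleName=role_name,
    PolicyName="databricks-default-deployment-policy",  # example policy name
    PolicyDocument=policy_document,
)

# The Role ARN is what you register with Databricks when you create credentials.
print(iam.get_role(RoleName=role_name)["Role"]["Arn"])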
Customer-managed VPC with default policy restrictions
The following steps include an access policy for launching Databricks workspaces in a customer-managed VPC with default policy restrictions. For information about how Databricks uses each permission, see IAM permissions for customer-managed VPC.
Log into your AWS Console as a user with administrator privileges and go to the IAM console.
Click the Roles tab in the sidebar.
In the list of roles, click the cross-account IAM role that you created for Databricks.
Add an inline policy.
On the Permissions tab, click Add inline policy.
In the policy editor, click the JSON tab.
Copy the access policy below for deploying workspaces in a customer-managed VPC with default restrictions.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1403287045000", "Effect": "Allow", "Action": [ "ec2:AssociateIamInstanceProfile", "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CancelSpotInstanceRequests", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DescribeAvailabilityZones", "ec2:DescribeIamInstanceProfileAssociations", "ec2:DescribeInstanceStatus", "ec2:DescribeInstances", "ec2:DescribeInternetGateways", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribePrefixLists", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSpotInstanceRequests", "ec2:DescribeSpotPriceHistory", "ec2:DescribeSubnets", "ec2:DescribeVolumes", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcs", "ec2:DetachVolume", "ec2:DisassociateIamInstanceProfile", "ec2:ReplaceIamInstanceProfileAssociation", "ec2:RequestSpotInstances", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:TerminateInstances" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "iam:CreateServiceLinkedRole", "iam:PutRolePolicy" ], "Resource": "arn:aws:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot", "Condition": { "StringLike": { "iam:AWSServiceName": "spot.amazonaws.com" } } } ] }
Click Review policy.
In the Name field, enter a policy name.
Click Create policy.
If you use Service Control Policies to deny certain actions at the AWS account level, ensure that sts:AssumeRole is allowlisted so Databricks can assume the cross-account role.
In the role summary, copy the Role ARN.
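After you copy the Role ARN, you typically register it with Databricks as a credential configuration before you create a workspace. The following is a minimal sketch against the Databricks Account API credentials endpoint; the request shape and basic authentication shown here are assumptions to verify against the current Account API reference, and the environment variable names and credentials_name are example values.

# A minimal sketch: register the cross-account role ARN with Databricks as a
# credential configuration via the Account API. Verify the endpoint, payload, and
# authentication method against the current Account API documentation.
import os

import requests

account_id = os.environ["DATABRICKS_ACCOUNT_ID"]
role_arn = os.environ["DATABRICKS_CROSS_ACCOUNT_ROLE_ARN"]  # the Role ARN you copied

resp = requests.post(
    f"https://accounts.cloud.databricks.com/api/2.0/accounts/{account_id}/credentials",
    auth=(os.environ["DATABRICKS_USERNAME"], os.environ["DATABRICKS_PASSWORD"]),
    json={
        "credentials_name": "customer-managed-vpc-credentials",  # example name
        "aws_credentials": {"sts_role": {"role_arn": role_arn}},
    },
)
resp.raise_for_status()
# The returned credentials_id is what you reference when you create the workspace.
print(resp.json()["credentials_id"])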
Customer-managed VPC with custom policy restrictions
The following steps include an access policy for launching Databricks workspaces in a customer-managed VPC with custom policy restrictions. For information about how Databricks uses each permission, see IAM permissions for customer-managed VPC.
Note
The Databricks production AWS account from which Amazon Machine Images (AMIs) are sourced is 601306020600. You can use this account ID to create custom access policies that restrict which AMIs can be used within your AWS account. Contact your Databricks representative for more information.
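For example, a statement along the following lines is one way such a restriction can be expressed. This is a hypothetical illustration only (expressed as a Python dict so you can merge it into a policy document programmatically); the Sid is an example, the ec2:Owner condition key applies to image resources, and you should confirm the exact statement with your Databricks representative before using it.

# Hypothetical illustration only: allow ec2:RunInstances against AMIs owned by the
# Databricks production account (601306020600). Confirm the exact policy wording
# with your Databricks representative before relying on it.
ami_restriction_statement = {
    "Sid": "AllowRunInstancesFromDatabricksImagesOnly",  # example Sid
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:REGION:ACCOUNTID:image/*",
    "Condition": {
        "StringEquals": {"ec2:Owner": "601306020600"}
    },
}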
Log into your AWS Console as a user with administrator privileges and go to the IAM console.
Click the Roles tab in the sidebar.
In the list of roles, click the cross-account IAM role that you created for Databricks.
Add an inline policy.
On the Permissions tab, click Add inline policy.
In the policy editor, click the JSON tab.
Copy the access policy below for deploying workspaces in a customer-managed VPC with custom restrictions for account ID, VPC ID, Region, and security group.
Replace the following values in the policy with your own configuration values (a scripted substitution example appears after this procedure):
ACCOUNTID: Your AWS account ID, which is a number.
VPCID: The ID of the AWS VPC in which you want to launch workspaces.
REGION: The AWS Region name of your VPC deployment, for example us-west-2.
SECURITYGROUPID: The ID of your AWS security group. When you add a security group restriction, you cannot reuse the cross-account IAM role or reference a credentials ID (credentials_id) for any other workspaces. For those other workspaces, you must create separate roles, policies, and credentials objects.
Note
If you have custom requirements configured for security groups with your customer-managed VPC, contact your Databricks representative for customizations to the IAM policy.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "NonResourceBasedPermissions", "Effect": "Allow", "Action": [ "ec2:CancelSpotInstanceRequests", "ec2:DescribeAvailabilityZones", "ec2:DescribeIamInstanceProfileAssociations", "ec2:DescribeInstanceStatus", "ec2:DescribeInstances", "ec2:DescribeInternetGateways", "ec2:DescribeNatGateways", "ec2:DescribeNetworkAcls", "ec2:DescribePrefixLists", "ec2:DescribeReservedInstancesOfferings", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSpotInstanceRequests", "ec2:DescribeSpotPriceHistory", "ec2:DescribeSubnets", "ec2:DescribeVolumes", "ec2:DescribeVpcAttribute", "ec2:DescribeVpcs", "ec2:CreateTags", "ec2:DeleteTags", "ec2:RequestSpotInstances" ], "Resource": [ "*" ] }, { "Sid": "InstancePoolsSupport", "Effect": "Allow", "Action": [ "ec2:AssociateIamInstanceProfile", "ec2:DisassociateIamInstanceProfile", "ec2:ReplaceIamInstanceProfileAssociation" ], "Resource": "arn:aws:ec2:REGION:ACCOUNTID:instance/*", "Condition": { "StringEquals": { "ec2:ResourceTag/Vendor": "Databricks" } } }, { "Sid": "AllowEc2RunInstancePerTag", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:volume/*", "arn:aws:ec2:REGION:ACCOUNTID:instance/*" ], "Condition": { "StringEquals": { "aws:RequestTag/Vendor": "Databricks" } } }, { "Sid": "AllowEc2RunInstanceImagePerTag", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:image/*" ], "Condition": { "StringEquals": { "aws:ResourceTag/Vendor": "Databricks" } } }, { "Sid": "AllowEc2RunInstancePerVPCid", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:network-interface/*", "arn:aws:ec2:REGION:ACCOUNTID:subnet/*", "arn:aws:ec2:REGION:ACCOUNTID:security-group/*" ], "Condition": { "StringEquals": { "ec2:vpc": "arn:aws:ec2:REGION:ACCOUNTID:vpc/VPCID" } } }, { "Sid": "AllowEc2RunInstanceOtherResources", "Effect": "Allow", "Action": "ec2:RunInstances", "NotResource": [ "arn:aws:ec2:REGION:ACCOUNTID:image/*", "arn:aws:ec2:REGION:ACCOUNTID:network-interface/*", "arn:aws:ec2:REGION:ACCOUNTID:subnet/*", "arn:aws:ec2:REGION:ACCOUNTID:security-group/*", "arn:aws:ec2:REGION:ACCOUNTID:volume/*", "arn:aws:ec2:REGION:ACCOUNTID:instance/*" ] }, { "Sid": "EC2TerminateInstancesTag", "Effect": "Allow", "Action": [ "ec2:TerminateInstances" ], "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:instance/*" ], "Condition": { "StringEquals": { "ec2:ResourceTag/Vendor": "Databricks" } } }, { "Sid": "EC2AttachDetachVolumeTag", "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:DetachVolume" ], "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:instance/*", "arn:aws:ec2:REGION:ACCOUNTID:volume/*" ], "Condition": { "StringEquals": { "ec2:ResourceTag/Vendor": "Databricks" } } }, { "Sid": "EC2CreateVolumeByTag", "Effect": "Allow", "Action": [ "ec2:CreateVolume" ], "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:volume/*" ], "Condition": { "StringEquals": { "aws:RequestTag/Vendor": "Databricks" } } }, { "Sid": "EC2DeleteVolumeByTag", "Effect": "Allow", "Action": [ "ec2:DeleteVolume" ], "Resource": [ "arn:aws:ec2:REGION:ACCOUNTID:volume/*" ], "Condition": { "StringEquals": { "ec2:ResourceTag/Vendor": "Databricks" } } }, { "Effect": "Allow", "Action": [ "iam:CreateServiceLinkedRole", "iam:PutRolePolicy" ], "Resource": "arn:aws:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot", "Condition": { "StringLike": { "iam:AWSServiceName": "spot.amazonaws.com" } } }, { "Sid": "VpcNonresourceSpecificActions", 
"Effect": "Allow", "Action": [ "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress" ], "Resource": "arn:aws:ec2:REGION:ACCOUNTID:security-group/SECURITYGROUPID", "Condition": { "StringEquals": { "ec2:vpc": "arn:aws:ec2:REGION:ACCOUNTID:vpc/VPCID" } } } ] }
Click Review policy.
In the Name field, enter a policy name.
Click Create policy.
If you use Service Control Policies to deny certain actions at the AWS account level, ensure that sts:AssumeRole is allowlisted so Databricks can assume the cross-account role.
In the role summary, copy the Role ARN.
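The following sketch shows one way to script the placeholder substitution described earlier in this procedure and to sanity-check the resulting document with IAM Access Analyzer before you paste it into the policy editor. The file name databricks-custom-vpc-policy.json and all placeholder values are example assumptions; adapt them to your own workflow.

# A minimal sketch: fill in the ACCOUNTID / VPCID / REGION / SECURITYGROUPID
# placeholders in the custom policy above, then run IAM Access Analyzer's
# policy validation to catch syntax problems before attaching the policy.
# Assumes the policy above was saved as databricks-custom-vpc-policy.json (example name).
import json

import boto3

placeholders = {
    "ACCOUNTID": "123456789012",                # your AWS account ID (example value)
    "VPCID": "vpc-0abc1234def56789",            # your VPC ID (example value)
    "REGION": "us-west-2",                      # your AWS Region (example value)
    "SECURITYGROUPID": "sg-0abc1234def56789",   # your security group ID (example value)
}

with open("databricks-custom-vpc-policy.json") as f:
    policy_text = f.read()

for key, value in placeholders.items():
    policy_text = policy_text.replace(key, value)

json.loads(policy_text)  # fails fast if the substituted document is not valid JSON

# Optional: surface policy errors and warnings with IAM Access Analyzer.
analyzer = boto3.client("accessanalyzer")
findings = analyzer.validate_policy(
    policyDocument=policy_text,
    policyType="IDENTITY_POLICY",
)["findings"]
for finding in findings:
    print(finding["findingType"], finding["issueCode"], finding["findingDetails"])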