The Databricks Unified Data Analytics Platform, from the original creators of Apache Spark, enables data teams to collaborate to solve some of the world’s toughest problems.
Databricks is structured to enable secure cross-functional team collaboration while keeping many of the backend services managed by Databricks, so you can stay focused on your data science, data analytics, and data engineering tasks.
Databricks operates out of a control plane and a data plane.
- The control plane includes the backend services that Databricks manages in its own AWS account. Notebook commands and many other workspace configurations are stored in the control plane and encrypted at rest.
- The data plane is managed by your AWS account and is where your data resides. This is also where data is processed. You can use Databricks connectors so that your clusters can connect to data sources outside of your AWS account for ingestion or storage. You can also ingest data from external streaming sources, such as events data and IoT data.
Although architectures can vary depending on custom configurations, the following diagram represents the most common structure and flow of data for Databricks on AWS environments.
Your data always resides in your AWS account in the data plane and in your own data sources, not the control plane, so you maintain control and ownership of your data.
Job results reside in storage in your account. Interactive notebook results are stored in a combination of the control plane (partial results for presentation in the UI) and your AWS storage.
If you want interactive notebook results stored only in your cloud account storage, you can ask your Databricks representative to enable interactive notebook results in the customer account for your workspace. Note that some metadata about results, such as chart column names, continues to be stored in the control plane. This feature is in Public Preview.
In September 2020, Databricks released the E2 version of the platform, which provides:
- Multi-workspace accounts: Create multiple workspaces per account using the Account API.
- Customer-managed VPCs: Create Databricks workspaces in your own VPC rather than using the default architecture in which clusters are created in a single AWS VPC that Databricks creates and configures in your AWS account.
- Secure cluster connectivity: Also known as “No Public IPs,” secure cluster connectivity lets you launch clusters in which all nodes have only private IP addresses, providing enhanced security.
- Customer-managed keys for managed services (Public Preview): Provide AWS KMS keys to encrypt notebook and secret data in the Databricks-managed control plane.
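As a rough illustration of the multi-workspace feature, the sketch below builds the request you would send to the Account API to create a workspace. The endpoint path and field names follow the public Account API documentation, but treat them as assumptions and check the current docs; the account ID, credentials ID, and storage configuration ID are placeholders you would obtain from your own account setup.

```python
# Hedged sketch: composing a "create workspace" call for the Databricks
# Account API (E2). No network call is made here; an actual request would
# POST this JSON to the endpoint with your account credentials.
import json

ACCOUNT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder account ID

# Endpoint per the Account API docs (verify against current documentation).
endpoint = (
    "https://accounts.cloud.databricks.com"
    f"/api/2.0/accounts/{ACCOUNT_ID}/workspaces"
)

payload = {
    "workspace_name": "analytics-prod",           # your choice of name
    "aws_region": "us-east-1",                    # region for the data plane
    "credentials_id": "<credentials-id>",         # from registering your cross-account IAM role
    "storage_configuration_id": "<storage-id>",   # from registering your root S3 bucket
}

# e.g. with the `requests` library:
#   requests.post(endpoint, auth=(user, password), data=json.dumps(payload))
print(endpoint)
print(json.dumps(payload, indent=2))
```

Because each call creates an independent workspace, repeating it with different names and storage configurations is what gives you multiple workspaces per account.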
Along with features like token management, IP access lists, cluster policies, and IAM credential passthrough, the E2 architecture makes the Databricks platform on AWS more secure, more scalable, and simpler to manage.
New accounts—except for select custom accounts—are created on the E2 platform, and most existing accounts have been migrated. If you are unsure whether your account is on the E2 platform, contact your Databricks representative.