Create and manage catalogs

Preview

Unity Catalog is in Public Preview. To participate in the preview, contact your Databricks representative.

This article shows how to create and manage catalogs in Unity Catalog. A catalog contains schemas (databases), and a schema contains tables and views.
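With this hierarchy, a table is always referenced by a three-level name of the form catalog.schema.table. For example (the catalog, schema, and table names here are hypothetical):

SELECT * FROM example_catalog.example_schema.example_table;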

Requirements

  • You must be a Databricks account admin or metastore admin.

  • Your Databricks account must be on the Premium plan or above.

  • You must have a Unity Catalog metastore linked to the workspace where you create the catalog.

  • The compute resource that you use to create the catalog, whether you run a notebook, the Databricks SQL editor, or a Data Explorer workflow, must comply with Unity Catalog security requirements.

Create a catalog

To create a catalog, you can use Data Explorer or a SQL command, which you can run from the Databricks SQL editor or from a SQL, Python, R, or Scala notebook.

Data Explorer

  1. Log in to a workspace that is linked to the metastore.

  2. From the persona switcher, select SQL.

  3. Click Data.

  4. Click the Create Catalog button.

  5. Assign permissions for your catalog. See Manage privileges.

  6. Click Save.

SQL

  1. Run the following SQL command in a notebook or Databricks SQL editor. Items in brackets are optional. Replace the placeholder values:

    • <catalog_name>: A name for the catalog.

    • <comment>: An optional comment.

    CREATE CATALOG [ IF NOT EXISTS ] <catalog_name>
       [ COMMENT <comment> ];
    

    For example, to create a catalog named example:

    CREATE CATALOG IF NOT EXISTS example;
    
  2. Assign privileges to the catalog. See Manage privileges.
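    For example, a minimal set of grants on the new catalog might look like the following. This sketch assumes a group named data-engineers exists and that USAGE and CREATE are the privileges you want to grant; see Manage privileges for the full list of privileges and principals:

    GRANT USAGE, CREATE ON CATALOG example TO `data-engineers`;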

Python

  1. Run the following SQL command in a notebook. Items in brackets are optional. Replace the placeholder values:

    • <catalog_name>: A name for the catalog.

    • <comment>: An optional comment.

    spark.sql("CREATE CATALOG [ IF NOT EXISTS ] <catalog_name> [ COMMENT <comment> ]")
    

    For example, to create a catalog named example:

    spark.sql("CREATE CATALOG IF NOT EXISTS example")
    
  2. Assign privileges to the catalog. See Manage privileges.

R

  1. Run the following SQL command in a notebook. Items in brackets are optional. Replace the placeholder values:

    • <catalog_name>: A name for the catalog.

    • <comment>: An optional comment.

    library(SparkR)
    
    sql("CREATE CATALOG [ IF NOT EXISTS ] <catalog_name> [ COMMENT <comment> ]")
    

    For example, to create a catalog named example:

    library(SparkR)
    
    sql("CREATE CATALOG IF NOT EXISTS example")
    
  2. Assign privileges to the catalog. See Manage privileges.

Scala

  1. Run the following SQL command in a notebook. Items in brackets are optional. Replace the placeholder values:

    • <catalog_name>: A name for the catalog.

    • <comment>: An optional comment.

    spark.sql("CREATE CATALOG [ IF NOT EXISTS ] <catalog_name> [ COMMENT <comment> ]")
    

    For example, to create a catalog named example:

    spark.sql("CREATE CATALOG IF NOT EXISTS example")
    
  2. Assign privileges to the catalog. See Manage privileges.

When you create a catalog, two schemas (databases) are automatically created: default and information_schema.
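To verify this, you can list the schemas in the new catalog. For example, assuming the catalog created above is named example:

SHOW SCHEMAS IN example;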

You can also create a catalog by using the Databricks Terraform provider and databricks_catalog. You can retrieve information about catalogs by using databricks_catalogs.

Delete a catalog

To delete (or drop) a catalog, you can use Data Explorer or a SQL command, which you can run from the Databricks SQL editor or from a SQL, Python, R, or Scala notebook.

Before you can delete a catalog, you must delete all schemas in the catalog except information_schema, including the auto-created default schema.

Data Explorer

  1. Log in to a workspace that is linked to the metastore.

  2. From the persona switcher, select SQL.

  3. Click Data.

  4. In the Data pane, on the left, click the catalog you want to delete.

  5. In the detail pane, click the three-dot menu to the left of the Create database button and select Delete.

  6. On the Delete catalog dialog, click Delete.

SQL

Run the following SQL command in a notebook or Databricks SQL editor. Items in brackets are optional. Replace the placeholder <catalog_name>.

For parameter descriptions, see DROP CATALOG.

If you use DROP CATALOG without the CASCADE option, you must delete all schemas in the catalog except information_schema before you can delete the catalog. This includes the auto-created default schema.

DROP CATALOG [ IF EXISTS ] <catalog_name> [ RESTRICT | CASCADE ]

For example, to delete a catalog named vaccine and its schemas:

DROP CATALOG vaccine CASCADE
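If you prefer not to use CASCADE, you can instead drop the remaining schemas first and then drop the empty catalog. This sketch assumes that the only schema left in vaccine (other than information_schema) is the auto-created default schema:

DROP SCHEMA vaccine.default;
DROP CATALOG vaccine;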

Python

Run the following SQL command in a notebook. Items in brackets are optional. Replace the placeholder <catalog_name>.

For parameter descriptions, see DROP CATALOG.

If you use DROP CATALOG without the CASCADE option, you must delete all schemas in the catalog except information_schema before you can delete the catalog. This includes the auto-created default schema.

spark.sql("DROP CATALOG [ IF EXISTS ] <catalog_name> [ RESTRICT | CASCADE ]")

For example, to delete a catalog named vaccine and its schemas:

spark.sql("DROP CATALOG vaccine CASCADE")

R

Run the following SQL command in a notebook. Items in brackets are optional. Replace the placeholder <catalog_name>.

For parameter descriptions, see DROP CATALOG.

If you use DROP CATALOG without the CASCADE option, you must delete all schemas in the catalog except information_schema before you can delete the catalog. This includes the auto-created default schema.

library(SparkR)

sql("DROP CATALOG [ IF EXISTS ] <catalog_name> [ RESTRICT | CASCADE ]")

For example, to delete a catalog named vaccine and its schemas:

library(SparkR)

sql("DROP CATALOG vaccine CASCADE")

Scala

Run the following SQL command in a notebook. Items in brackets are optional. Replace the placeholder <catalog_name>.

For parameter descriptions, see DROP CATALOG.

If you use DROP CATALOG without the CASCADE option, you must delete all schemas in the catalog except information_schema before you can delete the catalog. This includes the auto-created default schema.

spark.sql("DROP CATALOG [ IF EXISTS ] <catalog_name> [ RESTRICT | CASCADE ]")

For example, to delete a catalog named vaccine and its schemas:

spark.sql("DROP CATALOG vaccine CASCADE")