ALTER TABLE

Applies to: Databricks SQL and Databricks Runtime

Alters the schema or properties of a table.

For type changes or renaming columns in Delta Lake, see rewrite the data.

To change the comment on a table, use COMMENT ON.

If the table is cached, the command clears the table's cached data and that of all dependents that refer to it. The cache is lazily filled the next time the table or any of its dependents is accessed.

Note

When you add a column to an existing Delta table, you cannot define a DEFAULT value. All columns added to Delta tables are treated as NULL for existing rows. After adding a column, you can optionally define a default value for it, but this is only applied to new rows inserted into the table. Use the following syntax:

ALTER TABLE table_name ALTER COLUMN column_name SET DEFAULT default_expression
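
For example, a minimal sketch that adds a column to a hypothetical Delta table named events and then assigns it a default for future inserts:

ALTER TABLE events ADD COLUMN status STRING;                  -- existing rows read status as NULL
ALTER TABLE events ALTER COLUMN status SET DEFAULT 'active';  -- applies only to rows inserted afterwards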

On foreign tables, you can perform only ALTER TABLE SET OWNER and ALTER TABLE RENAME TO.

Required permissions

If you use Unity Catalog, you must have MODIFY permission to:

  • ALTER COLUMN

  • ADD COLUMN

  • DROP COLUMN

  • SET TBLPROPERTIES

  • UNSET TBLPROPERTIES

All other operations require ownership of the table.

Syntax

ALTER TABLE table_name
   { RENAME TO clause |
     ADD COLUMN clause |
     ALTER COLUMN clause |
     DROP COLUMN clause |
     RENAME COLUMN clause |
     ADD CONSTRAINT clause |
     DROP CONSTRAINT clause |
     DROP FEATURE clause |
     ADD PARTITION clause |
     DROP PARTITION clause |
     RENAME PARTITION clause |
     RECOVER PARTITIONS clause |
     SET TBLPROPERTIES clause |
     UNSET TBLPROPERTIES clause |
     SET SERDE clause |
     SET LOCATION clause |
     SET OWNER TO clause |
     SET TAGS clause |
     UNSET TAGS clause }

Parameters

  • table_name

    Identifies the table being altered. The name must not include a temporal specification. If the table cannot be found, Databricks raises a TABLE_OR_VIEW_NOT_FOUND error.

  • RENAME TO to_table_name

    Renames the table within the same schema.

    Note

    If you use AWS Glue Data Catalog as the metastore, RENAME is not supported.

  • ADD COLUMN

    This clause is not supported for JDBC data sources.

    Adds one or more columns to the table, or fields to existing columns in a Delta Lake table.

    { ADD [COLUMN | COLUMNS ]
      ( { { column_identifier | field_name } data_type
          [DEFAULT clause] [COMMENT comment] [FIRST | AFTER identifier]
          } [, ...] ) }
    
    • column_identifier

      The name of the column to be added. The name must be unique within the table.

      Unless FIRST or AFTER identifier is specified, the column or field is appended at the end.

    • field_name

      The fully qualified name of the field to be added to an existing column. All components of the path to the nested field must exist and the field name itself must be unique.

    • DEFAULT default_expression

      Applies to: Databricks SQL (SQL warehouse version 2022.35 or higher) and Databricks Runtime 11.2 and above

      Defines a DEFAULT value for the column which is used on INSERT and MERGE ... INSERT when the column is not specified.

      If no default is specified, DEFAULT NULL is implied for nullable columns.

      default_expression may be composed of literals and built-in SQL functions or operators.

      default_expression must not contain any subquery.

      DEFAULT is supported for CSV, JSON, PARQUET, and ORC sources.

    • data_type

      Specifies the data type of the column or field. Not all data types supported by Databricks are supported by all data sources.

    • COMMENT comment

      An optional STRING literal describing the added column or field.

    • FIRST

      If specified, the column is added as the first column of the table, or the field is added as the first field in the containing struct.

    • AFTER identifier

      If specified, the column or field is added immediately after the field or column identifier. See the sketch below.
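
    For illustration, a sketch using a hypothetical Delta table points with a column id and a struct column coords:

    -- Append a commented column after id, and add a field to the coords struct
    > ALTER TABLE points ADD COLUMN label STRING COMMENT 'display name' AFTER id;
    > ALTER TABLE points ADD COLUMN coords.z DOUBLE COMMENT 'depth';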

  • ALTER COLUMN

    Changes a property or the location of a column.

    { { ALTER | CHANGE } [COLUMN] { column_identifier | field_name }
      { COMMENT comment |
        { FIRST | AFTER column_identifier } |
        { SET | DROP } NOT NULL |
        SET DEFAULT clause |
        DROP DEFAULT |
        SYNC IDENTITY |
        SET TAGS clause |
        UNSET TAGS clause } }
    
    • column_identifier

      The name of the column to be altered.

    • field_name

      The fully qualified name of the field to be altered. All components of the path to the nested field must exist.

    • COMMENT comment

      Changes the description of the column. comment must be a STRING literal.

    • FIRST or AFTER identifier

      Moves the column from its current position to the front (FIRST) or immediately AFTER the identifier. This clause is only supported if table_name is a Delta table.

    • SET NOT NULL or DROP NOT NULL

      Changes the domain of valid column values to exclude nulls (SET NOT NULL) or include nulls (DROP NOT NULL). This option is only supported for Delta Lake tables. Delta Lake ensures the constraint is valid for all existing and new data.

    • SYNC IDENTITY

      Applies to: Databricks SQL and Databricks Runtime 10.3 and above

      Synchronizes the metadata of an identity column with the actual data. When you write your own values to an identity column, they might not comply with the metadata. This option evaluates the state and updates the metadata to be consistent with the actual data. After this command, the next automatically assigned identity value starts from start + (n + 1) * step, where n is the smallest value that satisfies start + n * step >= max() (the current maximum value in the identity column), for a positive step.

      This option is only supported for identity columns on Delta Lake tables.

    • DROP DEFAULT

      Applies to: Databricks SQL (SQL warehouse version 2022.35 or higher) and Databricks Runtime 11.2 and above

      Removes the default expression from the column. For nullable columns this is equivalent to SET DEFAULT NULL. For columns defined with NOT NULL, you need to provide a value on every future INSERT operation.

    • SET DEFAULT default_expression

      Applies to: Databricks SQL (SQL warehouse version 2022.35 or higher) and Databricks Runtime 11.2 and above

      Defines a DEFAULT value for the column which is used on INSERT and MERGE ... INSERT when the column is not specified.

      If no default is specified, DEFAULT NULL is implied for nullable columns.

      default_expression may be composed of literals and built-in SQL functions or operators.

      default_expression must not contain a subquery.

      DEFAULT is supported for CSV, JSON, ORC, and PARQUET sources.

      When you define the default for a newly added column, the default applies to all pre-existing rows. If the default includes a non-deterministic function such as rand or current_timestamp, the value is computed once when the ALTER TABLE statement executes and is applied as a constant to pre-existing rows. For newly inserted rows, the default expression runs once per row.

      When you set a default using ALTER COLUMN, existing rows are not affected by that change.

    • SET TAGS ( { tag_name = tag_value } [, …] )

      Applies to: Databricks Runtime 13.3 LTS and above

      Apply tags to the column. You need to have apply_tag permission to add tags to the column.

      • tag_name

        A literal STRING. The tag_name must be unique within the table or column.

      • tag_value

        A literal STRING.

    • UNSET TAGS ( tag_name [, …] )

      Applies to: Databricks Runtime 13.3 LTS and above

      Remove tags from the column. You need to have apply_tag permission to remove tags from the column.

      • tag_name

        A literal STRING. The tag_name must be unique within the table or column.
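
    For illustration, a combined sketch of several ALTER COLUMN options above, using a hypothetical Delta table orders with an identity column id, a nullable column email, and a STRING column status:

    > ALTER TABLE orders ALTER COLUMN email SET NOT NULL;         -- fails if existing rows contain NULL emails
    > ALTER TABLE orders ALTER COLUMN email DROP NOT NULL;
    > ALTER TABLE orders ALTER COLUMN id SYNC IDENTITY;           -- realign identity metadata after manual writes
    > ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'open';  -- used when an INSERT omits status
    > ALTER TABLE orders ALTER COLUMN status DROP DEFAULT;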

  • DROP COLUMN

    Preview

    This feature is in Public Preview.

    Applies to: Databricks SQL and Databricks Runtime 11.0 and above

    Drop one or more columns or fields in a Delta Lake table.

    When you drop a column or field, you must drop dependent check constraints and generated columns.

    For requirements, see Rename and drop columns with Delta Lake column mapping.

    DROP [COLUMN | COLUMNS] [ IF EXISTS ] ( { column_identifier | field_name } [, ...] )
    
    • IF EXISTS

      When you specify IF EXISTS, Databricks ignores an attempt to drop columns that do not exist. Otherwise, dropping non-existing columns will cause an error.

    • column_identifier

      The name of the existing column.

    • field_name

      The fully qualified name of an existing field.
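
    For illustration, a sketch using a hypothetical Delta table points (with column mapping enabled, per the requirements above) that has a column label and a struct field coords.z:

    > ALTER TABLE points DROP COLUMN IF EXISTS (label);
    > ALTER TABLE points DROP COLUMN (coords.z);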

  • RENAME COLUMN

    Preview

    This feature is in Public Preview.

    Applies to: Databricks SQL and Databricks Runtime 11.0 and above

    Renames a column or field in a Delta Lake table.

    When you rename a column or field, you also need to change dependent check constraints and generated columns. Any primary keys and foreign keys using the column will be dropped. In the case of foreign keys, you must own the table on which the foreign key is defined.

    For requirements, see Rename and drop columns with Delta Lake column mapping.

    RENAME COLUMN { column_identifier TO to_column_identifier |
                    field_name TO to_field_identifier }
    
  • ADD CONSTRAINT

    Adds a check constraint, informational foreign key constraint, or informational primary key constraint to the table.

    Foreign keys and primary keys are supported only for tables in Unity Catalog, not the hive_metastore catalog.
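
    For illustration, a sketch using a hypothetical Unity Catalog table orders with columns id (defined NOT NULL) and amount:

    > ALTER TABLE orders ADD CONSTRAINT valid_amount CHECK (amount > 0);
    > ALTER TABLE orders ADD CONSTRAINT orders_pk PRIMARY KEY (id);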

  • DROP CONSTRAINT

    Drops a primary key, foreign key, or check constraint from the table.
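
    For example, to drop the hypothetical check constraint from the sketch above:

    > ALTER TABLE orders DROP CONSTRAINT valid_amount;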

  • DROP FEATURE feature_name

    Applies to: Databricks Runtime 14.1 and above

    Removes a feature from a Delta Lake table.

    • feature_name

      The name of a feature, in the form of a STRING literal or identifier, that must be understood by Databricks and supported on the table.

      Supported feature_names are:

      • 'deletionVectors' or deletionvectors

      • 'v2Checkpoint' or v2checkpoint

      See table features for details.

  • ADD PARTITION

    If specified, adds one or more partitions to the table. Adding partitions is not supported for Delta Lake tables.

    ADD [IF NOT EXISTS] { PARTITION clause [ LOCATION path ] } [...]
    
    • IF NOT EXISTS

      An optional clause directing Databricks to ignore the statement if the partition already exists.

    • PARTITION clause

      A partition to be added. The partition keys must match the partitioning of the table and be associated with values. If the partition already exists, an error is raised unless IF NOT EXISTS has been specified.

    • LOCATION path

      path must be a STRING literal representing an optional location pointing to the partition.

      If no location is specified, the location is derived from the location of the table and the partition keys.

      If there are files present at the location they populate the partition and must be compatible with the data_source of the table and its options.
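
    For illustration, a sketch using a hypothetical partitioned table logs with a partition column ds and an example storage path:

    > ALTER TABLE logs ADD IF NOT EXISTS
        PARTITION (ds = '2024-01-01') LOCATION 's3://my-bucket/logs/ds=2024-01-01';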

  • DROP PARTITION

    If specified, this clause drops one or more partitions from the table, optionally deleting any files at the partitions’ locations.

    Delta Lake tables do not support dropping of partitions.

    DROP [ IF EXISTS ] PARTITION clause [, ...] [PURGE]
    
    • IF EXISTS

      When you specify IF EXISTS, Databricks ignores an attempt to drop partitions that do not exist. Otherwise, dropping non-existing partitions causes an error.

    • PARTITION clause

      Specifies a partition to be dropped. If the partition is only partially identified, a slice of partitions is dropped.

    • PURGE

      If set, the table catalog must remove partition data by skipping the Trash folder even when the catalog has configured one. The option is applicable only for managed tables. It is effective only when:

      • The file system supports a Trash folder.

      • The catalog has been configured to move dropped partitions to the Trash folder.

      There is no Trash folder in AWS S3, so the option is not effective there.

      There is no need to manually delete files after dropping partitions.
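
    For illustration, continuing the hypothetical logs table sketch above:

    > ALTER TABLE logs DROP IF EXISTS PARTITION (ds = '2024-01-01') PURGE;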

  • RENAME PARTITION

    Replaces the keys of a partition.

    Delta Lake tables do not support renaming partitions.

    from_partition_clause RENAME TO to_partition_clause
    
  • RECOVER PARTITIONS

    This clause does not apply to Delta Lake tables.

    Instructs Databricks to scan the table’s location and add any files to the table which have been added directly to the filesystem.
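
    For example, after files have been added directly to the partition directories of the hypothetical logs table:

    > ALTER TABLE logs RECOVER PARTITIONS;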

  • SET TBLPROPERTIES

    Sets or resets one or more user defined properties.

  • UNSET TBLPROPERTIES

    Removes one or more user defined properties.

  • SET LOCATION

    Moves the location of a partition or table.

    Delta Lake does not support moving individual partitions of a Delta Lake table.

    [ PARTITION clause ] SET LOCATION path
    
    • PARTITION clause

      Optionally identifies the partition for which the location is to be changed. If you omit naming a partition, Databricks moves the location of the table.

    • LOCATION path

      path must be a STRING literal. Specifies the new location for the partition or table.

      Files in the original location will not be moved to the new location.

  • [ SET ] OWNER TO principal

    Transfers ownership of the table to principal.

    Applies to: Databricks SQL (SQL warehouse version 2022.35 or higher) and Databricks Runtime 11.2 and above

    SET is allowed as an optional keyword.
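
    For example, using a hypothetical principal:

    > ALTER TABLE orders SET OWNER TO `someone@example.com`;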

  • SET TAGS ( { tag_name = tag_value } [, …] )

    Applies to: Databricks Runtime 13.3 LTS and above

    Apply tags to the table. You need to have apply_tag permission to add tags to the table.

    • tag_name

      A literal STRING. The tag_name must be unique within the table or column.

    • tag_value

      A literal STRING.

  • UNSET TAGS ( tag_name [, …] )

    Applies to: Databricks Runtime 13.3 LTS and above

    Remove tags from the table. You need to have apply_tag permission to remove tags from the table.

    • tag_name

      A literal STRING. The tag_name must be unique within the table or column.

Examples

For Delta Lake ADD CONSTRAINT and ALTER COLUMN examples, see the Delta Lake documentation.

-- RENAME table
> DESCRIBE student;
                col_name data_type comment
 ----------------------- --------- -------
                    name    string    NULL
                  rollno       int    NULL
                     age       int    NULL
 # Partition Information
              # col_name data_type comment
                     age       int    NULL

> ALTER TABLE Student RENAME TO StudentInfo;

-- After Renaming the table
> DESCRIBE StudentInfo;
                col_name data_type comment
 ----------------------- --------- -------
                    name    string    NULL
                  rollno       int    NULL
                     age       int    NULL
 # Partition Information
              # col_name data_type comment
                     age       int    NULL


-- RENAME partition
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=10
    age=11
    age=12

> ALTER TABLE default.StudentInfo PARTITION (age='10') RENAME TO PARTITION (age='15');

-- After renaming Partition
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15

-- Add new columns to a table
> DESCRIBE StudentInfo;
                col_name data_type comment
 ----------------------- --------- -------
                    name    string    NULL
                  rollno       int    NULL
                     age       int    NULL
 # Partition Information
              # col_name data_type comment
                     age       int    NULL


> ALTER TABLE StudentInfo ADD columns (LastName string, DOB timestamp);

-- After Adding New columns to the table
> DESCRIBE StudentInfo;
                col_name data_type comment
 ----------------------- --------- -------
                    name    string    NULL
                  rollno       int    NULL
                LastName    string    NULL
                     DOB timestamp    NULL
                     age       int    NULL
 # Partition Information
              # col_name data_type comment
                     age       int    NULL

-- Add a new partition to a table
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15

> ALTER TABLE StudentInfo ADD IF NOT EXISTS PARTITION (age=18);

-- After adding a new partition to the table
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15
    age=18


-- Drop a partition from the table
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15
    age=18

> ALTER TABLE StudentInfo DROP IF EXISTS PARTITION (age=18);

-- After dropping the partition of the table
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15

-- Adding multiple partitions to the table
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15


> ALTER TABLE StudentInfo ADD IF NOT EXISTS PARTITION (age=18) PARTITION (age=20);

-- After adding multiple partitions to the table
> SHOW PARTITIONS StudentInfo;
 partition
 ---------
    age=11
    age=12
    age=15
    age=18
    age=20


-- ALTER or CHANGE COLUMNS
> DESCRIBE StudentInfo;
                col_name data_type comment
 ----------------------- --------- -------
                    name    string    NULL
                  rollno       int    NULL
                LastName    string    NULL
                     DOB timestamp    NULL
                     age       int    NULL
 # Partition Information
              # col_name data_type comment
                     age       int    NULL

> ALTER TABLE StudentInfo ALTER COLUMN name COMMENT "new comment";

--After ALTER or CHANGE COLUMNS
> DESCRIBE StudentInfo;
                col_name data_type     comment
 ----------------------- --------- -----------
                    name    string new comment
                  rollno       int        NULL
                LastName    string        NULL
                     DOB timestamp        NULL
                     age       int        NULL
 # Partition Information
              # col_name data_type     comment
                     age       int        NULL

-- RENAME COLUMN
> ALTER TABLE StudentInfo RENAME COLUMN name TO FirstName;

--After RENAME COLUMN
> DESCRIBE StudentInfo;
                col_name data_type     comment
 ----------------------- --------- -----------
               FirstName    string new comment
                  rollno       int        NULL
                LastName    string        NULL
                     DOB timestamp        NULL
                     age       int        NULL
 # Partition Information
              # col_name data_type     comment
                     age       int        NULL

-- Change the file Location
> ALTER TABLE dbx.tab1 PARTITION (a='1', b='2') SET LOCATION '/path/to/part/ways';

-- SET SERDE/ SERDE Properties (DBR only)
> ALTER TABLE test_tab SET SERDE 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe';

> ALTER TABLE dbx.tab1 SET SERDE 'org.apache.hadoop' WITH SERDEPROPERTIES ('k' = 'v', 'kay' = 'vee');

-- SET TABLE PROPERTIES
> ALTER TABLE dbx.tab1 SET TBLPROPERTIES ('winner' = 'loser');

-- DROP TABLE PROPERTIES
> ALTER TABLE dbx.tab1 UNSET TBLPROPERTIES ('winner');

-- Drop the "deletion vectors" from a Delta table
> ALTER TABLE my_table DROP FEATURE deletionVectors;

-- Applies three tags to the table named `test`.
> ALTER TABLE test SET TAGS ('tag1' = 'val1', 'tag2' = 'val2', 'tag3' = 'val3');

-- Removes three tags from the table named `test`.
> ALTER TABLE test UNSET TAGS ('tag1', 'tag2', 'tag3');

-- Applies three tags to table `main.schema1.test` column `col1`.
> ALTER TABLE main.schema1.test ALTER COLUMN col1 SET TAGS ('tag1' = 'val1', 'tag2' = 'val2', 'tag3' = 'val3');

-- Removes three tags from table `main.schema1.test` column `col1`.
> ALTER TABLE main.schema1.test ALTER COLUMN col1 UNSET TAGS ('tag1', 'tag2', 'tag3');