Insert

Important

This documentation has been retired and might not be updated. The products, services, or technologies mentioned in this content are no longer supported. See INSERT.

Insert from select queries

INSERT INTO [TABLE] [db_name.]table_name [PARTITION part_spec] select_statement

INSERT OVERWRITE TABLE [db_name.]table_name [PARTITION part_spec] select_statement

part_spec:
  : (part_col_name1=val1 [, part_col_name2=val2, ...])

Insert data into a table or a partition from the result table of a select statement. Data is matched by ordinal position (the ordering of columns), not by column name.
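Positional matching means the i-th output column of the select feeds the i-th column of the target table, regardless of names. A minimal sketch in plain Python (not Spark; the table and column names are illustrative only):

```python
# The i-th value of each SELECT row feeds the i-th target column,
# no matter what the source columns are called.
target_columns = ["col1", "p1", "p2"]      # target table schema, in order
select_output = [(1, 3, 4), (2, 3, 4)]     # rows produced by the SELECT

# Each row is bound to target columns purely by position.
inserted = [dict(zip(target_columns, row)) for row in select_output]
print(inserted[0])  # {'col1': 1, 'p1': 3, 'p2': 4}
```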

Note

(Delta Lake on Databricks) If a column has a NOT NULL constraint, and an INSERT INTO statement sets a column value to NULL, a SparkException is thrown.

OVERWRITE

Overwrite existing data in the table or the partition. Otherwise, new data is appended.

Examples

-- Creates a partitioned native parquet table
CREATE TABLE data_source_tab1 (col1 INT, p1 INT, p2 INT)
  USING PARQUET PARTITIONED BY (p1, p2)

-- Appends two rows into the partition (p1 = 3, p2 = 4)
INSERT INTO data_source_tab1 PARTITION (p1 = 3, p2 = 4)
  SELECT id FROM RANGE(1, 3)

-- Overwrites the partition (p1 = 3, p2 = 4) using two new rows
INSERT OVERWRITE TABLE default.data_source_tab1 PARTITION (p1 = 3, p2 = 4)
  SELECT id FROM RANGE(3, 5)

Insert values into tables

INSERT INTO [TABLE] [db_name.]table_name [PARTITION part_spec] VALUES values_row [, values_row ...]

INSERT OVERWRITE TABLE [db_name.]table_name [PARTITION part_spec] VALUES values_row [, values_row ...]

values_row:
    : (val1 [, val2, ...])

Insert data into a table or a partition from a row value list.

OVERWRITE

Overwrite existing data in the table or the partition. Otherwise, new data is appended.

Examples

-- Creates a partitioned Hive SerDe table
CREATE TABLE hive_serde_tab1 (col1 INT, p1 INT, p2 INT)
  USING HIVE OPTIONS(fileFormat 'PARQUET') PARTITIONED BY (p1, p2)

-- Appends two rows into the partition (p1 = 3, p2 = 4)
INSERT INTO hive_serde_tab1 PARTITION (p1 = 3, p2 = 4)
  VALUES (1), (2)

-- Overwrites the partition (p1 = 3, p2 = 4) using two new rows
INSERT OVERWRITE TABLE hive_serde_tab1 PARTITION (p1 = 3, p2 = 4)
  VALUES (3), (4)

Dynamic partition inserts

In part_spec, the partition column values are optional. When the partition specification part_spec is not completely provided, such inserts are called dynamic partition inserts or multi-partition inserts. When the values are not specified, these columns are referred to as dynamic partition columns; otherwise, they are static partition columns. For example, the partition spec (p1 = 3, p2, p3) has a static partition column (p1) and two dynamic partition columns (p2 and p3).

In part_spec, static partition keys must come before the dynamic partition keys. This means all partition columns having constant values must appear before other partition columns that do not have an assigned constant value.

The partition values of dynamic partition columns are determined during execution. The dynamic partition columns must be specified last in both part_spec and the input result set (the row value lists or the select query). They are resolved by position, not by name, so the column order must match exactly.
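The positional resolution above can be sketched in plain Python (not Spark). For a hypothetical spec (p1 = 3, p2, p3), the trailing values of each input row feed the dynamic columns, in order, and together determine the partition directory:

```python
# Sketch of dynamic partition resolution for part_spec (p1 = 3, p2, p3):
# p1 is static; p2 and p3 take the LAST two values of each input row.
static_spec = {"p1": 3}          # static partition columns (constant values)
dynamic_cols = ["p2", "p3"]      # dynamic partition columns, specified last

def partition_path(row):
    # The trailing len(dynamic_cols) values of the row are bound to the
    # dynamic columns by position.
    dynamic_vals = row[-len(dynamic_cols):]
    spec = {**static_spec, **dict(zip(dynamic_cols, dynamic_vals))}
    return "/".join(f"{k}={v}" for k, v in spec.items())

print(partition_path((42, "a", "b")))  # p1=3/p2=a/p3=b
```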

The DataFrameWriter APIs do not have an interface to specify partition values. Therefore, the insertInto() API always uses dynamic partition mode.

Important

In dynamic partition mode, the input result set can produce a large number of dynamic partitions, and thus generate a large number of partition directories.

OVERWRITE

The semantics are different based on the type of the target table.

  • Hive SerDe tables: INSERT OVERWRITE doesn’t delete partitions ahead of time; it overwrites only those partitions that have data written to them at runtime. This matches Apache Hive semantics. For Hive SerDe tables, Spark SQL respects the Hive-related configuration, including hive.exec.dynamic.partition and hive.exec.dynamic.partition.mode.

  • Native data source tables: INSERT OVERWRITE first deletes all the partitions that match the partition specification (e.g., PARTITION(a=1, b)) and then inserts all the remaining values. The behavior of native data source tables can be changed to be consistent with Hive SerDe tables by changing the session-specific configuration spark.sql.sources.partitionOverwriteMode to DYNAMIC. The default mode is STATIC.
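The difference between the two overwrite modes can be sketched in plain Python (not Spark; the partition values mirror the examples below). STATIC first drops every partition matching the static part of the spec, while DYNAMIC replaces only the partitions the new rows actually touch:

```python
# Simulate INSERT OVERWRITE ... PARTITION (p1 = 'part1', p2) against a table
# keyed by (p1, p2) partition tuples.
existing = {
    ("part1", "part1"): [1],
    ("part1", "part2"): [2],
    ("partNew1", "partNew2"): [3],
}
new_rows = {("part1", "part1"): [5]}   # partitions produced by the insert

def overwrite(existing, new_rows, static_p1, mode):
    table = dict(existing)
    if mode == "STATIC":
        # STATIC: delete all partitions matching the static spec (p1 = 'part1')
        table = {k: v for k, v in table.items() if k[0] != static_p1}
    # Both modes then write the new partitions; DYNAMIC replaces only these.
    table.update(new_rows)
    return table

static_result = overwrite(existing, new_rows, "part1", "STATIC")
dynamic_result = overwrite(existing, new_rows, "part1", "DYNAMIC")
print(sorted(static_result))   # [('part1', 'part1'), ('partNew1', 'partNew2')]
print(sorted(dynamic_result))  # all three partitions survive
```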

Examples

-- Create a partitioned native Parquet table
CREATE TABLE data_source_tab2 (col1 INT, p1 STRING, p2 STRING)
  USING PARQUET PARTITIONED BY (p1, p2)

-- Two partitions ('part1', 'part1') and ('part1', 'part2') are created by this dynamic insert.
-- The dynamic partition column p2 is resolved by the last column `'part' || id`
INSERT INTO data_source_tab2 PARTITION (p1 = 'part1', p2)
  SELECT id, 'part' || id FROM RANGE(1, 3)

-- A new partition ('partNew1', 'partNew2') is added by this INSERT OVERWRITE.
INSERT OVERWRITE TABLE data_source_tab2 PARTITION (p1 = 'partNew1', p2)
  VALUES (3, 'partNew2')

-- After this INSERT OVERWRITE, the two partitions ('part1', 'part1') and ('part1', 'part2') are dropped,
-- because both match the partition spec (p1 = 'part1', p2).
-- After the new row is inserted, the two partitions ('partNew1', 'partNew2') and ('part1', 'part1') remain.
INSERT OVERWRITE TABLE data_source_tab2 PARTITION (p1 = 'part1', p2)
  VALUES (5, 'part1')

-- Create and fill a partitioned Hive SerDe table with three partitions:
-- ('part1', 'part1'), ('part1', 'part2') and ('partNew1', 'partNew2')
CREATE TABLE hive_serde_tab2 (col1 INT, p1 STRING, p2 STRING)
  USING HIVE OPTIONS(fileFormat 'PARQUET') PARTITIONED BY (p1, p2)
INSERT INTO hive_serde_tab2 PARTITION (p1 = 'part1', p2)
  SELECT id, 'part' || id FROM RANGE(1, 3)
INSERT OVERWRITE TABLE hive_serde_tab2 PARTITION (p1 = 'partNew1', p2)
  VALUES (3, 'partNew2')

-- After this INSERT OVERWRITE, only the partition ('part1', 'part1') is overwritten with the new value.
-- All three partitions still exist.
INSERT OVERWRITE TABLE hive_serde_tab2 PARTITION (p1 = 'part1', p2)
  VALUES (5, 'part1')

Insert values into directory

INSERT OVERWRITE [LOCAL] DIRECTORY [directory_path]
  USING data_source [OPTIONS (key1=val1, key2=val2, ...)]
  [AS] SELECT ... FROM ...

Insert the query results of select_statement into a directory directory_path using Spark native format. If the specified path exists, it is replaced with the output of the select_statement.

DIRECTORY

The path of the destination directory of the insert. The directory can also be specified in OPTIONS using the key path. If the specified path exists, it is replaced with the output of the select_statement. If LOCAL is used, the directory is on the local file system.

USING

The file format to use for the insert. One of TEXT, CSV, JSON, JDBC, PARQUET, ORC, HIVE, and LIBSVM, or a fully qualified class name of a custom implementation of org.apache.spark.sql.sources.DataSourceRegister.

AS

Populate the destination directory with input data from the select statement.

Examples

INSERT OVERWRITE DIRECTORY
USING parquet
OPTIONS ('path' '/tmp/destination/path')
SELECT key, col1, col2 FROM source_table

INSERT OVERWRITE DIRECTORY '/tmp/destination/path'
USING json
SELECT 1 as a, 'c' as b

Insert values into directory with Hive format

INSERT OVERWRITE [LOCAL] DIRECTORY directory_path
  [ROW FORMAT row_format] [STORED AS file_format]
  [AS] select_statement

Insert the query results of select_statement into a directory directory_path using Hive SerDe. If the specified path exists, it is replaced with the output of the select_statement.

Note

This command is supported only when Hive support is enabled.

DIRECTORY

The path of the destination directory of the insert. If the specified path exists, it will be replaced with the output of the select_statement. If LOCAL is used, the directory is on the local file system.

ROW FORMAT

Use the SERDE clause to specify a custom SerDe for this insert. Otherwise, use the DELIMITED clause to use the native SerDe and specify the delimiter, escape character, null character, and so on.

STORED AS

The file format for this insert. One of TEXTFILE, SEQUENCEFILE, RCFILE, ORC, PARQUET, and AVRO. Alternatively, you can specify your own input and output format through INPUTFORMAT and OUTPUTFORMAT. Only TEXTFILE, SEQUENCEFILE, and RCFILE can be used with ROW FORMAT SERDE, and only TEXTFILE can be used with ROW FORMAT DELIMITED.

AS

Populate the destination directory with input data from the select statement.

Examples

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/destination/path'
STORED AS orc
SELECT * FROM source_table WHERE key < 10