
IDENTIFIER clause

Applies to: ✓ Databricks SQL ✓ Databricks Runtime 13.3 LTS and above

The IDENTIFIER clause enables SQL-injection-safe parameterization of identifiers in SQL statements.

  • Applies to: ✓ Databricks Runtime 18.0 and above

    The clause can be used in place of any object name or identifier within a multipart name, as long as its arguments consist only of string literals and string parameter markers. This includes coalesced strings such as 'myschema' :table 'mycolumn' or :schema :table :column.

  • Applies to: ✓ Databricks SQL ✓ Databricks Runtime 13.3 LTS and above

    The IDENTIFIER clause interprets a constant string such as 'myschema' || :table || 'mycolumn' or :schema || :table || :column as a:

    • relation (table or view) name
    • function name
    • column name
    • field name
    • schema name
    • catalog name

    The IDENTIFIER clause is limited to the following statements:

    • The table, view, or function subject name of a CREATE, ALTER, DROP, or UNDROP statement.
    • The target table name of a MERGE, UPDATE, DELETE, INSERT, or COPY INTO statement.
    • The target of a SHOW or DESCRIBE statement.
    • USE of a schema or catalog (see the sketch after this list).
    • A function invocation
    • A column, table or view referenced in a query. This includes queries embedded in a DDL or DML statement.
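
    A minimal sketch of the USE case above, assuming a schema named default exists in the current catalog:

    Scala
    // Switch the current schema through a parameterized name (sketch).
    spark.sql("USE SCHEMA IDENTIFIER(:db)", args = Map("db" -> "default"))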
Note

Where the IDENTIFIER clause is not supported and SQL injection is not a concern, you can use EXECUTE IMMEDIATE instead. For example: EXECUTE IMMEDIATE 'CREATE TABLE ' || :tab || '(' || :col || ' INT)';
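
A minimal Scala sketch of the same statement, run on a runtime that supports EXECUTE IMMEDIATE and assuming the bound values come from a trusted source (they are spliced into the statement text rather than treated as identifiers):

Scala
// Sketch only: the parameter values are concatenated into the DDL text and
// executed with EXECUTE IMMEDIATE, so they are not injection-safe.
spark.sql(
  "EXECUTE IMMEDIATE 'CREATE TABLE ' || :tab || '(' || :col || ' INT)'",
  args = Map("tab" -> "tab1", "col" -> "c1"))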

Syntax

IDENTIFIER ( strLiteral )

IDENTIFIER ( strExpr )

Parameters

  • strLiteral: A STRING literal, typically composed of one or more string parameter markers and literal components that are coalesced.
  • strExpr: A constant STRING expression, typically including one or more parameter markers. Starting with Databricks Runtime 18.0, this notation is deprecated.

Examples

Scala
// Creation of a table using a parameter marker.
spark.sql("CREATE TABLE IDENTIFIER(:mytab)(c1 INT)", args = Map("mytab" -> "tab1"))

// Altering a table with a fixed schema and a parameterized table name.
spark.sql("ALTER TABLE IDENTIFIER('default.' || :mytab) ADD COLUMN c2 INT)", args = Map("mytab" -> "tab1"))

// Altering a table with a fixed schema and a parameterized table name in DBR 18.0 and above.
spark.sql("ALTER TABLE IDENTIFIER('default.' :mytab) ADD COLUMN c2 INT)", args = Map("mytab" -> "tab1"))

// Dropping a table with separate schema and table parameters.
spark.sql("DROP TABLE IDENTIFIER(:myschema || '.' || :mytab)", args = Map("mySchema" -> "default", "mytab" -> "tab1"))

// Dropping a table with separate schema and table parameters in DBR 18.0 and above.
spark.sql("DROP TABLE IDENTIFIER(:myschema '.' :mytab)", args = Map("mySchema" -> "default", "mytab" -> "tab1"))

// A parameterized reference to a table in a query. The table name is qualified and uses back-ticks.
spark.sql("SELECT * FROM IDENTIFIER(:mytab)", args = Map("mytab" -> "`default`.`tab1`"))

// You cannot qualify the IDENTIFIER clause or use it as a qualifier itself.
// Both of the following statements raise an error.
spark.sql("SELECT * FROM myschema.IDENTIFIER(:mytab)", args = Map("mytab" -> "`tab1`"))

spark.sql("SELECT * FROM IDENTIFIER(:myschema).mytab", args = Map("myschema" -> "`default`"))

// A parameterized column reference
spark.sql("SELECT IDENTIFIER(:col) FROM VALUES(1) AS T(c1)", args = Map("col" -> "t.c1"))

// Passing in an aggregate function name as a parameter
spark.sql("SELECT IDENTIFIER(:agg)(c1) FROM VALUES(1), (2) AS T(c1)", args = Map("agg" -> "max"))