Error classes in Databricks
Applies to: Databricks SQL and Databricks Runtime 12.2 and above
Error classes are descriptive, human-readable strings that are unique to the error condition.
You can use error classes to programmatically handle errors in your application without the need to parse the error message.
This is a list of common, named error conditions returned by Databricks.
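For example, on runtimes that support SQL scripting, a condition handler can branch on an error class name rather than on message text. This is a minimal sketch; the handler syntax assumes a runtime with SQL scripting support:

    -- Sketch: catch the DIVIDE_BY_ZERO error condition by name instead of
    -- parsing the error message (assumes SQL scripting support).
    BEGIN
      DECLARE EXIT HANDLER FOR DIVIDE_BY_ZERO
      BEGIN
        SELECT 'caught DIVIDE_BY_ZERO' AS handled;
      END;
      SELECT 10 / 0;  -- raises DIVIDE_BY_ZERO under ANSI mode
    END;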
Databricks Runtime and Databricks SQL
ADD_DEFAULT_UNSUPPORTED
Failed to execute <statementType> command because DEFAULT values are not supported when adding new columns to a previously existing target data source with table provider: “<dataSource>”.
AGGREGATE_FUNCTION_WITH_NONDETERMINISTIC_EXPRESSION
Non-deterministic expression <sqlExpr> should not appear in the arguments of an aggregate function.
AI_FUNCTION_HTTP_PARSE_CAST_ERROR
Failed to parse model output when casting to the specified returnType: “<dataType>”, response JSON was: “<responseString>”. Please update the returnType to match the contents of the type represented by the response JSON and then retry the query.
AI_FUNCTION_HTTP_PARSE_COLUMNS_ERROR
The actual model output has more than one column “<responseString>”. However, the specified return type [“<dataType>”] has only one column. Please update the returnType to contain the same number of columns as the model output and then retry the query.
AI_FUNCTION_HTTP_REQUEST_ERROR
Error occurred while making an HTTP request for function <funcName>: <errorMessage>
AI_FUNCTION_INVALID_MAX_WORDS
The maximum number of words must be a non-negative integer, but got <maxWords>.
AI_FUNCTION_INVALID_MODEL_PARAMETERS
The provided model parameters (<modelParameters>) are invalid in the AI_QUERY function for serving endpoint “<endpointName>”.
For more details see AI_FUNCTION_INVALID_MODEL_PARAMETERS
AI_FUNCTION_INVALID_RESPONSE_FORMAT
AI function: “<functionName>” requires a valid JSON string for the responseFormat parameter, but found the following response format: “<invalidResponseFormat>”.
AI_FUNCTION_JSON_PARSE_ERROR
Error occurred while parsing the JSON response for function <funcName>: <errorMessage>
AI_FUNCTION_MODEL_SCHEMA_PARSE_ERROR
Failed to parse the schema for the serving endpoint “<endpointName>”: <errorMessage>, response JSON was: “<responseJson>”.
Set the returnType parameter manually in the AI_QUERY function to override schema resolution.
AI_FUNCTION_UNSUPPORTED_ERROR
The function <funcName> is not supported in the current environment. It is only available in Databricks SQL Pro and Serverless.
AI_FUNCTION_UNSUPPORTED_REQUEST
Failed to evaluate the SQL function “<functionName>” because the provided argument of <invalidValue> has “<invalidDataType>”, but only the following types are supported: <supportedDataTypes>. Please update the function call to provide an argument of string type and retry the query.
AI_FUNCTION_UNSUPPORTED_RESPONSE_FORMAT
AI function: “<functionName>” does not support the type “<invalidResponseFormatType>” of the following response format: “<invalidResponseFormat>”. Supported types of the response format are: <supportedResponseFormatTypes>.
AI_FUNCTION_UNSUPPORTED_RETURN_TYPE
AI function: “<functionName>” does not support the following type as a return type: “<typeName>”. The return type must be a valid SQL type understood by Catalyst and supported by the AI function. Currently supported types include: <supportedValues>
AI_INVALID_ARGUMENT_VALUE_ERROR
The provided value “<argValue>” is not supported by argument “<argName>”. Supported values are: <supportedValues>
AI_QUERY_ENDPOINT_NOT_SUPPORT_STRUCTURED_OUTPUT
Expected the serving endpoint task type to be “Chat” for structured output support, but found “<taskType>” for the endpoint “<endpointName>”.
AI_QUERY_RETURN_TYPE_COLUMN_TYPE_MISMATCH
The provided “<sqlExpr>” is not supported by the argument returnType.
AI_SEARCH_CONFLICTING_QUERY_PARAM_SUPPLY_ERROR
Conflicting parameters detected for the vector_search SQL function: <conflictParamNames>. Please specify one from: <parameterNames>.
AI_SEARCH_EMBEDDING_COLUMN_TYPE_UNSUPPORTED_ERROR
The vector_search SQL function with embedding column type <embeddingColumnType> is not supported.
AI_SEARCH_EMPTY_QUERY_PARAM_ERROR
The vector_search SQL function is missing a query input parameter. Please specify one from: <parameterNames>.
AI_SEARCH_INDEX_TYPE_UNSUPPORTED_ERROR
The vector_search SQL function with index type <indexType> is not supported.
AI_SEARCH_QUERY_TYPE_CONVERT_ENCODE_ERROR
Failed to materialize the vector_search SQL function query from Spark type <dataType> to Scala-native objects during request encoding, with error: <errorMessage>.
AI_SEARCH_UNSUPPORTED_NUM_RESULTS_ERROR
The vector_search SQL function with num_results larger than <maxLimit> is not supported. The limit specified was <requestedLimit>. Please try again with num_results <= `<maxLimit>`.
ALL_PARAMETERS_MUST_BE_NAMED
Using named parameterized queries requires all parameters to be named. Parameters missing names: <exprs>.
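For instance, a statement executed with named parameter markers must name every argument in the USING clause (an illustrative sketch):

    -- Named parameter markers (:x, :y) require named arguments in USING.
    EXECUTE IMMEDIATE 'SELECT :x + :y' USING 1 AS x, 2 AS y;  -- OK
    -- EXECUTE IMMEDIATE 'SELECT :x + :y' USING 1, 2;  -- would report the unnamed parameters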
ALTER_SCHEDULE_DOES_NOT_EXIST
Cannot alter <scheduleType> on a table without an existing schedule or trigger. Please add a schedule or trigger to the table before attempting to alter it.
ALTER_TABLE_COLUMN_DESCRIPTOR_DUPLICATE
ALTER TABLE <type> column <columnName> specifies descriptor “<optionName>” more than once, which is invalid.
AMBIGUOUS_ALIAS_IN_NESTED_CTE
Name <name> is ambiguous in nested CTE.
Please set <config> to “CORRECTED” so that the name defined in the inner CTE takes precedence. If it is set to “LEGACY”, outer CTE definitions will take precedence.
See https://spark.apache.org/docs/latest/sql-migration-guide.html#query-engine.
AMBIGUOUS_COLUMN_REFERENCE
Column <name> is ambiguous. This happens when you join several DataFrames together and some of these DataFrames are the same.
This column points to one of the DataFrames, but Spark is unable to figure out which one.
Please alias the DataFrames with different names via DataFrame.alias before joining them, and specify the column using a qualified name, e.g. df.alias("a").join(df.alias("b"), col("a.id") > col("b.id")).
AMBIGUOUS_REFERENCE_TO_FIELDS
Ambiguous reference to the field <field>. It appears <count> times in the schema.
AMBIGUOUS_RESOLVER_EXTENSION
The single-pass analyzer cannot process this query or command because the extension choice for <operator> is ambiguous: <extensions>.
Please contact Databricks support.
ANSI_CONFIG_CANNOT_BE_DISABLED
The ANSI SQL configuration <config> cannot be disabled in this product.
ARGUMENT_NOT_CONSTANT
The function <functionName> includes a parameter <parameterName> at position <pos> that requires a constant argument. Please compute the argument <sqlExpr> separately and pass the result as a constant.
ARITHMETIC_OVERFLOW
<message>.<alternative> If necessary set <config> to “false” to bypass this error.
For more details see ARITHMETIC_OVERFLOW
ASSIGNMENT_ARITY_MISMATCH
The number of columns or variables assigned or aliased: <numTarget> does not match the number of source expressions: <numExpr>.
AVRO_DEFAULT_VALUES_UNSUPPORTED
The use of default values is not supported when `rescuedDataColumn` is enabled. You may be able to remove this check by setting spark.databricks.sql.avro.rescuedDataBlockUserDefinedSchemaDefaultValue to false, but the default values will not apply and null values will still be used.
AVRO_INCOMPATIBLE_READ_TYPE
Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>; however, you’re trying to read the field as <sqlType>, which would lead to an incorrect answer.
To allow reading this field, enable the SQL configuration: “spark.sql.legacy.avro.allowIncompatibleSchema”.
AVRO_POSITIONAL_FIELD_MATCHING_UNSUPPORTED
The use of positional field matching is not supported when either rescuedDataColumn or failOnUnknownFields is enabled. Remove these options to proceed.
BIGQUERY_OPTIONS_ARE_MUTUALLY_EXCLUSIVE
BigQuery connection credentials must be specified with either the ‘GoogleServiceAccountKeyJson’ parameter or all of ‘projectId’, ‘OAuthServiceAcctEmail’, and ‘OAuthPvtKey’.
BINARY_ARITHMETIC_OVERFLOW
<value1> <symbol> <value2> caused overflow. Use <functionName> to ignore the overflow problem and return NULL instead.
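Here <functionName> is one of the try_* arithmetic functions (try_add, try_subtract, try_multiply, try_divide). A minimal illustration:

    -- 127Y + 1Y overflows TINYINT and raises BINARY_ARITHMETIC_OVERFLOW
    -- under ANSI mode; try_add returns NULL instead.
    SELECT try_add(127Y, 1Y);  -- NULL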
BOOLEAN_STATEMENT_WITH_EMPTY_ROW
Boolean statement <invalidStatement> is invalid. Expected a single row with a value of the BOOLEAN type, but got an empty row.
CALL_ON_STREAMING_DATASET_UNSUPPORTED
The method <methodName> cannot be called on a streaming Dataset/DataFrame.
CANNOT_ALTER_COLLATION_BUCKET_COLUMN
ALTER TABLE (ALTER|CHANGE) COLUMN cannot change the collation of types/subtypes of bucket columns, but found the bucket column <columnName> in the table <tableName>.
CANNOT_ALTER_PARTITION_COLUMN
ALTER TABLE (ALTER|CHANGE) COLUMN is not supported for partition columns, but found the partition column <columnName> in the table <tableName>.
CANNOT_ASSIGN_EVENT_TIME_COLUMN_WITHOUT_WATERMARK
Watermark needs to be defined to reassign event time column. Failed to find watermark definition in the streaming query.
CANNOT_CONVERT_PROTOBUF_FIELD_TYPE_TO_SQL_TYPE
Cannot convert Protobuf <protobufColumn> to SQL <sqlColumn> because the schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).
CANNOT_CONVERT_PROTOBUF_MESSAGE_TYPE_TO_SQL_TYPE
Unable to convert <protobufType> of Protobuf to SQL type <toType>.
CANNOT_CONVERT_SQL_TYPE_TO_PROTOBUF_FIELD_TYPE
Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because the schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).
CANNOT_CONVERT_SQL_VALUE_TO_PROTOBUF_ENUM_TYPE
Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because <data> is not in the defined values for the enum: <enumString>.
CANNOT_COPY_STATE
Cannot copy catalog state like current database and temporary views from Unity Catalog to a legacy catalog.
CANNOT_CREATE_DATA_SOURCE_TABLE
Failed to create data source table <tableName>:
For more details see CANNOT_CREATE_DATA_SOURCE_TABLE
CANNOT_DECODE_URL
The provided URL cannot be decoded: <url>. Please ensure that the URL is properly formatted and try again.
CANNOT_DROP_AMBIGUOUS_CONSTRAINT
Cannot drop the constraint with the name <constraintName> shared by a CHECK constraint and a PRIMARY KEY or FOREIGN KEY constraint. You can drop the PRIMARY KEY or FOREIGN KEY constraint with the queries ALTER TABLE .. DROP PRIMARY KEY or ALTER TABLE .. DROP FOREIGN KEY ..
CANNOT_ESTABLISH_CONNECTION
Cannot establish a connection to the remote <jdbcDialectName> database. Please check the connection information and credentials, e.g. host, port, user, password, and database options. ** If you believe the information is correct, please check your workspace’s network setup and ensure it does not have outbound restrictions to the host. Please also check that the host does not block inbound connections from the network where the workspace’s Spark clusters are deployed. ** Detailed error message: <causeErrorMessage>.
CANNOT_ESTABLISH_CONNECTION_SERVERLESS
Cannot establish a connection to the remote <jdbcDialectName> database. Please check the connection information and credentials, e.g. host, port, user, password, and database options. ** If you believe the information is correct, please allow inbound traffic from the Internet to your host, as you are using Serverless Compute. If your network policies do not allow inbound Internet traffic, please use non-Serverless Compute, or reach out to your Databricks representative to learn about Serverless Private Networking. ** Detailed error message: <causeErrorMessage>.
CANNOT_INVOKE_IN_TRANSFORMATIONS
Dataset transformations and actions can only be invoked by the driver, not inside of other Dataset transformations; for example, dataset1.map(x => dataset2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the dataset1.map transformation. For more information, see SPARK-28702.
CANNOT_LOAD_FUNCTION_CLASS
Cannot load class <className> when registering the function <functionName>. Please make sure it is on the classpath.
CANNOT_LOAD_PROTOBUF_CLASS
Could not load Protobuf class with name <protobufClassName>. <explanation>.
CANNOT_LOAD_STATE_STORE
An error occurred while loading state.
For more details see CANNOT_LOAD_STATE_STORE
CANNOT_MERGE_INCOMPATIBLE_DATA_TYPE
Failed to merge incompatible data types <left> and <right>. Please check the data types of the columns being merged and ensure that they are compatible. If necessary, consider casting the columns to compatible data types before attempting the merge.
CANNOT_MERGE_SCHEMAS
Failed merging schemas:
Initial schema: <left>
Schema that cannot be merged with the initial schema: <right>.
CANNOT_MODIFY_CONFIG
Cannot modify the value of the Spark config: <key>.
See also https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements.
CANNOT_PARSE_DECIMAL
Cannot parse decimal. Please ensure that the input is a valid number with optional decimal point or comma separators.
CANNOT_PARSE_INTERVAL
Unable to parse <intervalString>. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format. If the issue persists, please double-check that the input value is not null or empty and try again.
CANNOT_PARSE_JSON_FIELD
Cannot parse the field name <fieldName> and the value <fieldValue> of the JSON token type <jsonType> to the target Spark data type <dataType>.
CANNOT_QUERY_TABLE_DURING_INITIALIZATION
Cannot query MV/ST during initialization.
For more details see CANNOT_QUERY_TABLE_DURING_INITIALIZATION
CANNOT_READ_ARCHIVED_FILE
Cannot read the file at path <path> because it has been archived. Please adjust your query filters to exclude archived files.
CANNOT_READ_SENSITIVE_KEY_FROM_SECURE_PROVIDER
Cannot read sensitive key ‘<key>’ from secure provider.
CANNOT_RECOGNIZE_HIVE_TYPE
Cannot recognize the Hive type string: <fieldType>, column: <fieldName>. The specified data type for the field cannot be recognized by Spark SQL. Please check the data type of the specified field and ensure that it is a valid Spark SQL data type. Refer to the Spark SQL documentation for a list of valid data types and their formats. If the data type is correct, please ensure that you are using a supported version of Spark SQL.
CANNOT_RESOLVE_DATAFRAME_COLUMN
Cannot resolve DataFrame column <name>. It’s probably because of illegal references like df1.select(df2.col("a")).
CANNOT_RESOLVE_STAR_EXPAND
Cannot resolve <targetString>.* given the input columns <columns>. Please check that the specified table or struct exists and is accessible in the input columns.
CANNOT_RESTORE_PERMISSIONS_FOR_PATH
Failed to set permissions on the created path <path> back to <permission>.
CANNOT_SHALLOW_CLONE_ACROSS_UC_AND_HMS
Cannot shallow-clone tables across Unity Catalog and Hive Metastore.
CANNOT_SHALLOW_CLONE_NON_UC_MANAGED_TABLE_AS_SOURCE_OR_TARGET
Shallow clone is only supported for the MANAGED table type. The table <table> is not a MANAGED table.
CANNOT_UPDATE_FIELD
Cannot update <table> field <fieldName> type:
For more details see CANNOT_UPDATE_FIELD
CANNOT_USE_KRYO
Cannot load Kryo serialization codec. Kryo serialization cannot be used in the Spark Connect client. Use Java serialization, provide a custom Codec, or use Spark Classic instead.
CANNOT_VALIDATE_CONNECTION
Validation of <jdbcDialectName> connection is not supported. Please contact Databricks support for alternative solutions, or set “spark.databricks.testConnectionBeforeCreation” to “false” to skip connection testing before creating a connection object.
CANNOT_WRITE_STATE_STORE
Error writing state store files for provider <providerClass>.
For more details see CANNOT_WRITE_STATE_STORE
CAST_INVALID_INPUT
The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead.
For more details see CAST_INVALID_INPUT
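A minimal illustration of the suggested remediation:

    -- CAST fails on malformed input under ANSI mode; try_cast returns NULL.
    SELECT cast('abc' AS INT);      -- raises CAST_INVALID_INPUT
    SELECT try_cast('abc' AS INT);  -- NULL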
CAST_OVERFLOW
The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead.
CAST_OVERFLOW_IN_TABLE_INSERT
Failed to assign a value of <sourceType> type to the <targetType> type column or variable <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.
CATALOG_NOT_FOUND
The catalog <catalogName> was not found. Consider setting the SQL config <config> to a catalog plugin.
CHECKPOINT_RDD_BLOCK_ID_NOT_FOUND
Checkpoint block <rddBlockId> not found!
Either the executor that originally checkpointed this partition is no longer alive, or the original RDD is unpersisted.
If this problem persists, you may consider using rdd.checkpoint() instead, which is slower than local checkpointing but more fault-tolerant.
CIRCULAR_CLASS_REFERENCE
Cannot have circular references in a class, but got the circular reference of class <t>.
CLASS_UNSUPPORTED_BY_MAP_OBJECTS
MapObjects does not support the class <cls> as the resulting collection.
CLOUD_FILE_SOURCE_FILE_NOT_FOUND
A file notification was received for file <filePath>, but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration <config> to true.
CLUSTERING_COLUMNS_MISMATCH
The specified clustering does not match that of the existing table <tableName>.
Specified clustering columns: [<specifiedClusteringString>].
Existing clustering columns: [<existingClusteringString>].
CLUSTER_BY_AUTO_FEATURE_NOT_ENABLED
Please contact your Databricks representative to enable the cluster-by-auto feature.
CLUSTER_BY_AUTO_REQUIRES_CLUSTERING_FEATURE_ENABLED
Please enable clusteringTable.enableClusteringTableFeature to use CLUSTER BY AUTO.
CLUSTER_BY_AUTO_REQUIRES_PREDICTIVE_OPTIMIZATION
CLUSTER BY AUTO requires Predictive Optimization to be enabled.
CLUSTER_BY_AUTO_UNSUPPORTED_TABLE_TYPE_ERROR
CLUSTER BY AUTO is only supported on UC Managed tables.
CODEC_NOT_AVAILABLE
The codec <codecName> is not available.
For more details see CODEC_NOT_AVAILABLE
COLLATION_INVALID_NAME
The value <collationName> does not represent a correct collation name. Suggested valid collation names: [<proposals>].
COLLATION_INVALID_PROVIDER
The value <provider> does not represent a correct collation provider. Supported providers are: [<supportedProviders>].
COLLATION_MISMATCH
Could not determine which collation to use for string functions and operators.
For more details see COLLATION_MISMATCH
COLLECTION_SIZE_LIMIT_EXCEEDED
Can’t create an array with <numberOfElements> elements, which exceeds the array size limit <maxRoundedArrayLength>.
For more details see COLLECTION_SIZE_LIMIT_EXCEEDED
COLUMN_ALREADY_EXISTS
The column <columnName> already exists. Choose another name or rename the existing column.
COLUMN_ARRAY_ELEMENT_TYPE_MISMATCH
Some values in field <pos> are incompatible with the column array type. Expected type <type>.
COLUMN_MASKS_CHECK_CONSTRAINT_UNSUPPORTED
Creating a CHECK constraint on table <tableName> with column mask policies is not supported.
COLUMN_MASKS_DUPLICATE_USING_COLUMN_NAME
A <statementType> statement attempted to assign a column mask policy to a column which included two or more other referenced columns in the USING COLUMNS list with the same name <columnName>, which is invalid.
COLUMN_MASKS_FEATURE_NOT_SUPPORTED
Column mask policies for <tableName> are not supported:
For more details see COLUMN_MASKS_FEATURE_NOT_SUPPORTED
COLUMN_MASKS_INCOMPATIBLE_SCHEMA_CHANGE
Unable to <statementType> <columnName> from table <tableName> because it’s referenced in a column mask policy for column <maskedColumn>. The table owner must remove or alter this policy before proceeding.
COLUMN_MASKS_MERGE_UNSUPPORTED_SOURCE
MERGE INTO operations do not support column mask policies in source table <tableName>.
COLUMN_MASKS_MERGE_UNSUPPORTED_TARGET
MERGE INTO operations do not support writing into table <tableName> with column mask policies.
COLUMN_MASKS_MULTI_PART_TARGET_COLUMN_NAME
This statement attempted to assign a column mask policy to a column <columnName> with multiple name parts, which is invalid.
COLUMN_MASKS_MULTI_PART_USING_COLUMN_NAME
This statement attempted to assign a column mask policy to a column, and the USING COLUMNS list included the name <columnName> with multiple name parts, which is invalid.
COLUMN_MASKS_SHOW_PARTITIONS_UNSUPPORTED
The SHOW PARTITIONS command is not supported for `<format>` tables with column masks.
COLUMN_MASKS_TABLE_CLONE_SOURCE_NOT_SUPPORTED
<mode> clone from table <tableName> with column mask policies is not supported.
COLUMN_MASKS_TABLE_CLONE_TARGET_NOT_SUPPORTED
<mode> clone to table <tableName> with column mask policies is not supported.
COLUMN_MASKS_UNSUPPORTED_CONSTANT_AS_PARAMETER
Using a constant as a parameter in a column mask policy is not supported. Please update your SQL command to remove the constant from the column mask definition and then retry the command.
COLUMN_MASKS_UNSUPPORTED_PROVIDER
Failed to execute <statementType> command because assigning column mask policies is not supported for target data source with table provider: “<provider>”.
COLUMN_MASKS_UNSUPPORTED_SUBQUERY
Cannot perform <operation> for table <tableName> because it contains one or more column mask policies with subquery expression(s), which are not yet supported. Please contact the owner of the table to update the column mask policies in order to continue.
COLUMN_MASKS_USING_COLUMN_NAME_SAME_AS_TARGET_COLUMN
The column <columnName> had the same name as the target column, which is invalid; please remove the column from the USING COLUMNS list and retry the command.
COLUMN_NOT_DEFINED_IN_TABLE
<colType> column <colName> is not defined in table <tableName>; defined table columns are: <tableCols>.
COLUMN_NOT_FOUND
The column <colName> cannot be found. Verify the spelling and correctness of the column name according to the SQL config <caseSensitiveConfig>.
COLUMN_ORDINAL_OUT_OF_BOUNDS
Column ordinal out of bounds. The number of columns in the table is <attributesLength>, but the column ordinal is <ordinal>.
Attributes are the following: <attributes>.
COMMA_PRECEDING_CONSTRAINT_ERROR
Unexpected ‘,’ before constraint(s) definition. Ensure that the constraint clause does not start with a comma when columns (and expectations) are not defined.
COMPARATOR_RETURNS_NULL
The comparator has returned a NULL for a comparison between <firstValue> and <secondValue>.
It should return a positive integer for “greater than”, 0 for “equal”, and a negative integer for “less than”.
To revert to the deprecated behavior where NULL is treated as 0 (equal), you must set “spark.sql.legacy.allowNullComparisonResultInArraySort” to “true”.
COMPLEX_EXPRESSION_UNSUPPORTED_INPUT
Cannot process input data types for the expression: <expression>.
For more details see COMPLEX_EXPRESSION_UNSUPPORTED_INPUT
CONCURRENT_QUERY
Another instance of this query [id: <queryId>] was just started by a concurrent session [existing runId: <existingQueryRunId>, new runId: <newQueryRunId>].
CONCURRENT_STREAM_LOG_UPDATE
Concurrent update to the log. Multiple streaming jobs detected for <batchId>.
Please make sure only one streaming job runs on a specific checkpoint location at a time.
CONFLICTING_DIRECTORY_STRUCTURES
Conflicting directory structures detected.
Suspicious paths: <discoveredBasePaths>
If the provided paths are partition directories, please set “basePath” in the options of the data source to specify the root directory of the table.
If there are multiple root directories, please load them separately and then union them.
CONFLICTING_PARTITION_COLUMN_NAMES
Conflicting partition column names detected: <distinctPartColLists>
For partitioned table directories, data files should only live in leaf directories, and directories at the same level should have the same partition column name.
Please check the following directories for unexpected files or inconsistent partition column names: <suspiciousPaths>
CONFLICTING_PROVIDER
The specified provider <provider> is inconsistent with the existing catalog provider <expectedProvider>. Please use ‘USING <expectedProvider>’ and retry the command.
CONNECTION_ALREADY_EXISTS
Cannot create connection <connectionName> because it already exists.
Choose a different name, drop or replace the existing connection, or add the IF NOT EXISTS clause to tolerate pre-existing connections.
CONNECTION_NAME_CANNOT_BE_EMPTY
Cannot execute this command because the connection name must be non-empty.
CONNECTION_NOT_FOUND
Cannot execute this command because the connection name <connectionName> was not found.
CONNECTION_OPTION_NOT_SUPPORTED
Connections of type ‘<connectionType>’ do not support the following option(s): <optionsNotSupported>. Supported options: <allowedOptions>.
CONNECTION_TYPE_NOT_SUPPORTED
Cannot create a connection of type ‘<connectionType>’. Supported connection types: <allowedTypes>.
CONVERSION_INVALID_INPUT
The value <str> (<fmt>) cannot be converted to <targetType> because it is malformed. Correct the value as per the syntax, or change its format. Use <suggestion> to tolerate malformed input and return NULL instead.
COPY_INTO_COLUMN_ARITY_MISMATCH
Cannot write to <tableName>, the reason is:
For more details see COPY_INTO_COLUMN_ARITY_MISMATCH
COPY_INTO_CREDENTIALS_NOT_ALLOWED_ON
Invalid scheme <scheme>. COPY INTO source credentials currently only support s3/s3n/s3a/wasbs/abfss.
COPY_INTO_DUPLICATED_FILES_COPY_NOT_ALLOWED
Duplicated files were committed in a concurrent COPY INTO operation. Please try again later.
COPY_INTO_ENCRYPTION_NOT_ALLOWED_ON
Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/abfss.
COPY_INTO_ENCRYPTION_NOT_SUPPORTED_FOR_AZURE
COPY INTO encryption only supports ADLS Gen2, or the abfss:// file scheme.
COPY_INTO_ENCRYPTION_REQUIRED_WITH_EXPECTED
Invalid encryption option <requiredKey>. COPY INTO source encryption must specify ‘<requiredKey>’ = ‘<keyValue>’.
COPY_INTO_FEATURE_INCOMPATIBLE_SETTING
The COPY INTO feature ‘<feature>’ is not compatible with ‘<incompatibleSetting>’.
COPY_INTO_NON_BLIND_APPEND_NOT_ALLOWED
COPY INTO other than appending data is not allowed to run concurrently with other transactions. Please try again later.
COPY_INTO_SCHEMA_MISMATCH_WITH_TARGET_TABLE
A schema mismatch was detected while copying into the Delta table (Table: <table>).
This may indicate an issue with the incoming data, or the Delta table schema can be evolved automatically according to the incoming data by setting: COPY_OPTIONS (‘mergeSchema’ = ‘true’)
Schema difference: <schemaDiff>
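A sketch of the suggested remediation; the table name and source path here are hypothetical:

    -- Allow the Delta table schema to evolve to match the incoming files.
    COPY INTO my_catalog.my_schema.target_table
    FROM '/Volumes/my_catalog/my_schema/landing/'  -- hypothetical source path
    FILEFORMAT = PARQUET
    COPY_OPTIONS ('mergeSchema' = 'true');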
COPY_INTO_SOURCE_FILE_FORMAT_NOT_SUPPORTED
The format of the source files must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE. Using COPY INTO on Delta tables as the source is not supported, as duplicate data may be ingested after OPTIMIZE operations. This check can be turned off by running the SQL command set spark.databricks.delta.copyInto.formatCheck.enabled = false.
COPY_INTO_SOURCE_SCHEMA_INFERENCE_FAILED
The source directory did not contain any parsable files of type <format>. Please check the contents of ‘<source>’.
The error can be silenced by setting ‘<config>’ to ‘false’.
COPY_INTO_STATE_INTERNAL_ERROR
An internal error occurred while processing COPY INTO state.
For more details see COPY_INTO_STATE_INTERNAL_ERROR
COPY_INTO_SYNTAX_ERROR
Failed to parse the COPY INTO command.
For more details see COPY_INTO_SYNTAX_ERROR
COPY_UNLOAD_FORMAT_TYPE_NOT_SUPPORTED
Cannot unload data in format ‘<formatType>’. Supported formats for <connectionType> are: <allowedFormats>.
CREATE_OR_REFRESH_MV_ST_ASYNC
Cannot CREATE OR REFRESH materialized views or streaming tables with ASYNC specified. Please remove ASYNC from the CREATE OR REFRESH statement or use REFRESH ASYNC to refresh existing materialized views or streaming tables asynchronously.
CREATE_PERMANENT_VIEW_WITHOUT_ALIAS
Not allowed to create the permanent view <name> without explicitly assigning an alias for the expression <attr>.
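For example, adding an explicit alias resolves the error (table and column names are hypothetical):

    -- Fails: the expression sum(amount) carries only an auto-generated name.
    -- CREATE VIEW sales_summary AS SELECT sum(amount) FROM sales;
    -- Works: the expression is explicitly aliased.
    CREATE VIEW sales_summary AS SELECT sum(amount) AS total_amount FROM sales;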
CREATE_TABLE_COLUMN_DESCRIPTOR_DUPLICATE
CREATE TABLE column <columnName> specifies descriptor “<optionName>” more than once, which is invalid.
CREATE_VIEW_COLUMN_ARITY_MISMATCH
Cannot create view <viewName>, the reason is:
For more details see CREATE_VIEW_COLUMN_ARITY_MISMATCH
CSV_ENFORCE_SCHEMA_NOT_SUPPORTED
The CSV option enforceSchema cannot be set when using rescuedDataColumn or failOnUnknownFields, as columns are read by name rather than ordinal.
DATATYPE_MISMATCH
Cannot resolve <sqlExpr> due to data type mismatch:
For more details see DATATYPE_MISMATCH
DATATYPE_MISSING_SIZE
DataType <type> requires a length parameter, for example <type>(10). Please specify the length.
DATA_LINEAGE_SECURE_VIEW_LEAF_NODE_HAS_NO_RELATION
Write Lineage unsuccessful: missing corresponding relation with policies for CLM/RLS.
DATA_SOURCE_ALREADY_EXISTS
Data source ‘<provider>’ already exists. Please choose a different name for the new data source.
DATA_SOURCE_NOT_EXIST
Data source ‘<provider>’ not found. Please make sure the data source is registered.
DATA_SOURCE_NOT_FOUND
Failed to find the data source: <provider>. Make sure the provider name is correct and the package is properly registered and compatible with your Spark version.
DATA_SOURCE_OPTION_CONTAINS_INVALID_CHARACTERS
Option <option> must not be empty and should not contain invalid characters, query strings, or parameters.
DATA_SOURCE_TABLE_SCHEMA_MISMATCH
The schema of the data source table does not match the expected schema. If you are using the DataFrameReader.schema API or creating a table, avoid specifying the schema.
Data Source schema: <dsSchema>
Expected schema: <expectedSchema>
DATA_SOURCE_URL_NOT_ALLOWED
JDBC URL is not allowed in data source options, please specify ‘host’, ‘port’, and ‘database’ options instead.
DATETIME_FIELD_OUT_OF_BOUNDS
<rangeMessage>. If necessary set <ansiConfig> to “false” to bypass this error.
DC_API_QUOTA_EXCEEDED
You have exceeded the API quota for the data source <sourceName>.
For more details see DC_API_QUOTA_EXCEEDED
DC_CONNECTION_ERROR
Failed to make a connection to the <sourceName> source. Error code: <errorCode>.
For more details see DC_CONNECTION_ERROR
DC_DYNAMICS_API_ERROR
An error happened in Dynamics API calls, errorCode: <errorCode>.
For more details see DC_DYNAMICS_API_ERROR
DC_NETSUITE_ERROR
An error happened in NetSuite JDBC calls, errorCode: <errorCode>.
For more details see DC_NETSUITE_ERROR
DC_SCHEMA_CHANGE_ERROR
SQLSTATE: none assigned
A schema change has occurred in table <tableName> of the <sourceName> source.
For more details see DC_SCHEMA_CHANGE_ERROR
DC_SERVICENOW_API_ERROR
An error happened in ServiceNow API calls, errorCode: <errorCode>.
For more details see DC_SERVICENOW_API_ERROR
DC_SFDC_BULK_QUERY_JOB_INCOMPLETE
Ingestion for object <objName> is incomplete because the Salesforce API query job took too long, failed, or was manually cancelled.
To try again, you can either re-run the entire pipeline or refresh this specific destination table. If the error persists, file a ticket. Job ID: <jobId>. Job status: <jobStatus>.
DC_SOURCE_API_ERROR
An error occurred in the <sourceName> API call. Source API type: <apiType>. Error code: <errorCode>.
This can sometimes happen when you’ve reached a <sourceName> API limit. If you haven’t exceeded your API limit, try re-running the connector. If the issue persists, please file a ticket.
DC_UNSUPPORTED_ERROR
An unsupported error happened in data source <sourceName>.
For more details see DC_UNSUPPORTED_ERROR
DC_WORKDAY_RAAS_API_ERROR
An error happened in Workday RAAS API calls, errorCode: <errorCode>.
For more details see DC_WORKDAY_RAAS_API_ERROR
DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION
Decimal precision <precision> exceeds max precision <maxPrecision>.
DEFAULT_DATABASE_NOT_EXISTS
Default database <defaultDatabase> does not exist. Please create it first or change the default database to <defaultDatabase>.
DEFAULT_FILE_NOT_FOUND
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running the ‘REFRESH TABLE tableName’ command in SQL or by recreating the Dataset/DataFrame involved. If the disk cache is stale or the underlying files have been removed, you can invalidate the disk cache manually by restarting the cluster.
DEFAULT_PLACEMENT_INVALID
A DEFAULT keyword in a MERGE, INSERT, UPDATE, or SET VARIABLE command could not be directly assigned to a target column because it was part of an expression.
For example: `UPDATE SET c1 = DEFAULT` is allowed, but `UPDATE T SET c1 = DEFAULT + 1` is not allowed.
DEFAULT_UNSUPPORTED
Failed to execute <statementType> command because DEFAULT values are not supported for target data source with table provider: “<dataSource>”.
DIFFERENT_DELTA_TABLE_READ_BY_STREAMING_SOURCE
The streaming query was reading from an unexpected Delta table (id = ‘<newTableId>’).
It used to read from another Delta table (id = ‘<oldTableId>’) according to the checkpoint.
This may happen when you changed the code to read from a new table or you deleted and re-created a table. Please revert your change or delete your streaming query checkpoint to restart from scratch.
DIVIDE_BY_ZERO
Division by zero. Use try_divide to tolerate the divisor being 0 and return NULL instead. If necessary set <config> to “false” to bypass this error.
For more details see DIVIDE_BY_ZERO
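A minimal illustration of the suggested remediation:

    -- Division by zero raises DIVIDE_BY_ZERO under ANSI mode;
    -- try_divide returns NULL instead.
    SELECT try_divide(10, 0);  -- NULL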
DLT_VIEW_CLUSTER_BY_NOT_SUPPORTED
MATERIALIZED VIEWs with a CLUSTER BY clause are supported only in a Delta Live Tables pipeline.
DLT_VIEW_SCHEMA_WITH_TYPE_NOT_SUPPORTED
<mv> schemas with a specified type are supported only in a Delta Live Tables pipeline.
DLT_VIEW_TABLE_CONSTRAINTS_NOT_SUPPORTED
CONSTRAINT clauses in a view are only supported in a Delta Live Tables pipeline.
DROP_SCHEDULE_DOES_NOT_EXIST
Cannot drop SCHEDULE on a table without an existing schedule or trigger.
DUPLICATED_FIELD_NAME_IN_ARROW_STRUCT
Duplicated field names in an Arrow Struct are not allowed; got <fieldNames>.
DUPLICATED_MAP_KEY
Duplicate map key <key> was found. Please check the input data.
If you want to remove the duplicated keys, you can set <mapKeyDedupPolicy> to “LAST_WIN” so that the key inserted last takes precedence.
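In Apache Spark the placeholder typically refers to the spark.sql.mapKeyDedupPolicy configuration (an assumption about the placeholder’s value), for example:

    -- With LAST_WIN, the value of the last duplicate key is kept
    -- instead of raising DUPLICATED_MAP_KEY.
    SET spark.sql.mapKeyDedupPolicy = LAST_WIN;
    SELECT map(1, 'a', 1, 'b');  -- {1 -> "b"}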
DUPLICATED_METRICS_NAME
The metric name is not unique: <metricName>. The same name cannot be used for metrics with different results.
However, multiple instances of metrics with the same result and name are allowed (e.g. self-joins).
DUPLICATE_ASSIGNMENTS
The columns or variables <nameList> appear more than once as assignment targets.
DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT
Call to routine <routineName> is invalid because it includes multiple argument assignments to the same parameter name <parameterName>.
For more details see DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT
DUPLICATE_ROUTINE_PARAMETER_NAMES
Found duplicate name(s) in the parameter list of the user-defined routine <routineName>: <names>.
DUPLICATE_ROUTINE_RETURNS_COLUMNS
Found duplicate column(s) in the RETURNS clause column list of the user-defined routine <routineName>: <columns>.
EMITTING_ROWS_OLDER_THAN_WATERMARK_NOT_ALLOWED
The previous node emitted a row with eventTime=<emittedRowEventTime>, which is older than current_watermark_value=<currentWatermark>.
This can lead to correctness issues in the stateful operators downstream in the execution pipeline.
Please correct the operator logic to emit rows after the current global watermark value.
EMPTY_SCHEMA_NOT_SUPPORTED_FOR_DATASOURCE
The <format> datasource does not support writing empty or nested empty schemas. Please make sure the data schema has at least one or more column(s).
ENCODER_NOT_FOUND
Could not find an encoder of the type <typeName> to Spark SQL internal representation.
Consider changing the input type to one of the supported types at ‘<docroot>/sql-ref-datatypes.html’.
END_OFFSET_HAS_GREATER_OFFSET_FOR_TOPIC_PARTITION_THAN_LATEST_WITH_TRIGGER_AVAILABLENOW
Some partitions in Kafka topic(s) report an available offset which is less than the end offset while running a query with Trigger.AvailableNow. The error could be transient; restart your query, and report if you still see the same issue.
Latest offset: <latestOffset>, end offset: <endOffset>
END_OFFSET_HAS_GREATER_OFFSET_FOR_TOPIC_PARTITION_THAN_PREFETCHED
For a Kafka data source with Trigger.AvailableNow, the end offset should have a lower or equal offset per each topic partition than the pre-fetched offset. The error could be transient; restart your query, and report if you still see the same issue.
Pre-fetched offset: <prefetchedOffset>, end offset: <endOffset>.
ERROR_READING_AVRO_UNKNOWN_FINGERPRINT
Error reading Avro data: encountered an unknown fingerprint <fingerprint>, and not sure what schema to use.
This could happen if you registered additional schemas after starting your Spark context.
EVENT_LOG_UNSUPPORTED_TABLE_TYPE
The table type of <tableIdentifier> is <tableType>.
Querying event logs only supports materialized views, streaming tables, or Delta Live Tables pipelines.
EVENT_TIME_IS_NOT_ON_TIMESTAMP_TYPE
The event time <eventName> has the invalid type <eventType>, but expected “TIMESTAMP”.
EXCEPT_NESTED_COLUMN_INVALID_TYPE
EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.
EXCEPT_OVERLAPPING_COLUMNS
Columns in an EXCEPT list must be distinct and non-overlapping, but got (<columns>).
EXCEPT_RESOLVED_COLUMNS_WITHOUT_MATCH
EXCEPT columns [<exceptColumns>] were resolved, but do not match any of the columns [<expandedColumns>] from the star expansion.
EXCEPT_UNRESOLVED_COLUMN_IN_STRUCT_EXPANSION
The column/field name <objectName> in the EXCEPT clause cannot be resolved. Did you mean one of the following: [<objectList>]?
Note: nested columns in the EXCEPT clause may not include qualifiers (table name, parent struct column name, etc.) during a struct expansion; try removing qualifiers if they are used with nested columns.
EXECUTOR_BROADCAST_JOIN_OOM
There is not enough memory to build the broadcast relation <relationClassName>. Relation Size = <relationSize>. Total memory used by this task = <taskMemoryUsage>. Executor Memory Manager Metrics: onHeapExecutionMemoryUsed = <onHeapExecutionMemoryUsed>, offHeapExecutionMemoryUsed = <offHeapExecutionMemoryUsed>, onHeapStorageMemoryUsed = <onHeapStorageMemoryUsed>, offHeapStorageMemoryUsed = <offHeapStorageMemoryUsed>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using ‘set spark.sql.autoBroadcastJoinThreshold=-1’ or use a join hint to force a shuffle join.
EXECUTOR_BROADCAST_JOIN_STORE_OOM
There is not enough memory to store the broadcast relation <relationClassName>. Relation Size = <relationSize>. StorageLevel = <storageLevel>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using ‘set spark.sql.autoBroadcastJoinThreshold=-1’ or use a join hint to force a shuffle join.
EXEC_IMMEDIATE_DUPLICATE_ARGUMENT_ALIASES
The USING clause of this EXECUTE IMMEDIATE command contained multiple arguments with the same alias (<aliases>), which is invalid; please update the command to specify unique aliases and then try it again.
EXPECT_PERMANENT_VIEW_NOT_TEMP
‘<operation>’ expects a permanent view but <viewName> is a temp view.
EXPECT_TABLE_NOT_VIEW
‘<operation>’ expects a table but <viewName> is a view.
For more details see EXPECT_TABLE_NOT_VIEW
EXPECT_VIEW_NOT_TABLE
The table <tableName> does not support <operation>.
For more details see EXPECT_VIEW_NOT_TABLE
EXPRESSION_TYPE_IS_NOT_ORDERABLE
Column expression <expr> cannot be sorted because its type <exprType> is not orderable.
FABRIC_REFRESH_INVALID_SCOPE
Error running ‘REFRESH FOREIGN <scope> <name>’. Cannot refresh a Fabric <scope> directly; please use ‘REFRESH FOREIGN CATALOG <catalogName>’ to refresh the Fabric Catalog instead.
FAILED_EXECUTE_UDF
User defined function (<functionName>: (<signature>) => <result>) failed due to: <reason>.
FAILED_FUNCTION_CALL
Failed preparing the function <funcName> for call. Please double-check the function’s arguments.
FAILED_RENAME_TEMP_FILE
Failed to rename temp file <srcPath> to <dstPath> as FileSystem.rename returned false.
FAILED_ROW_TO_JSON
Failed to convert the row value <value> of the class <class> to the target SQL type <sqlType> in the JSON format.
FAILED_TO_PARSE_TOO_COMPLEX
The statement, including potential SQL functions and referenced views, was too complex to parse.
To mitigate this error, divide the statement into multiple, less complex chunks.
FEATURE_NOT_ENABLED
The feature <featureName> is not enabled. Consider setting the config <configKey> to <configValue> to enable this capability.
FEATURE_NOT_ON_CLASSIC_WAREHOUSE
<feature> is not supported on Classic SQL warehouses. To use this feature, use a Pro or Serverless SQL warehouse.
FEATURE_REQUIRES_UC
<feature> is not supported without Unity Catalog. To use this feature, enable Unity Catalog.
FILE_IN_STAGING_PATH_ALREADY_EXISTS
File in staging path <path> already exists but OVERWRITE is not set.
FLATMAPGROUPSWITHSTATE_USER_FUNCTION_ERROR
An error occurred in the user provided function in flatMapGroupsWithState. Reason: <reason>
FOREACH_BATCH_USER_FUNCTION_ERROR
An error occurred in the user provided function in foreach batch sink. Reason: <reason>
FOREACH_USER_FUNCTION_ERROR
An error occurred in the user provided function in foreach sink. Reason: <reason>
FOREIGN_KEY_MISMATCH
Foreign key parent columns <parentColumns> do not match primary key child columns <childColumns>.
FOREIGN_OBJECT_NAME_CANNOT_BE_EMPTY
Cannot execute this command because the foreign <objectType> name must be non-empty.
FOUND_MULTIPLE_DATA_SOURCES
Detected multiple data sources with the name ‘<provider>’. Please check that the data source isn’t simultaneously registered and located in the classpath.
FROM_JSON_CONFLICTING_SCHEMA_UPDATES
from_json inference encountered conflicting schema updates at: <location>
FROM_JSON_CORRUPT_RECORD_COLUMN_IN_SCHEMA
from_json found columnNameOfCorruptRecord (<columnNameOfCorruptRecord>) present in a JSON object and can no longer proceed. Please configure a different value for the option ‘columnNameOfCorruptRecord’.
FROM_JSON_INFERENCE_NOT_SUPPORTED
from_json inference is only supported when defining streaming tables.
FROM_JSON_INVALID_CONFIGURATION
from_json configuration is invalid:
For more details see FROM_JSON_INVALID_CONFIGURATION
FUNCTION_PARAMETERS_MUST_BE_NAMED
The function <function> requires named parameters. Parameters missing names: <exprs>. Please update the function call to add names for all parameters, e.g., <function>(param_name => …).
GENERATED_COLUMN_WITH_DEFAULT_VALUE
A column cannot have both a default value and a generation expression, but column <colName> has default value: (<defaultValue>) and generation expression: (<genExpr>).
GET_TABLES_BY_TYPE_UNSUPPORTED_BY_HIVE_VERSION
Hive 2.2 and lower versions don’t support getTablesByType. Please use Hive 2.3 or higher version.
GROUPING_COLUMN_MISMATCH
Column of grouping (<grouping>) can’t be found in grouping columns <groupingColumns>.
GROUPING_ID_COLUMN_MISMATCH
Columns of grouping_id (<groupingIdColumn>) do not match grouping columns (<groupByColumns>).
GROUP_BY_AGGREGATE
Aggregate functions are not allowed in GROUP BY, but found <sqlExpr>.
For more details see GROUP_BY_AGGREGATE
GROUP_BY_POS_AGGREGATE
GROUP BY <index> refers to an expression <aggExpr> that contains an aggregate function. Aggregate functions are not allowed in GROUP BY.
GROUP_BY_POS_OUT_OF_RANGE
GROUP BY position <index> is not in the select list (valid range is [1, <size>]).
GROUP_EXPRESSION_TYPE_IS_NOT_ORDERABLE
The expression <sqlExpr> cannot be used as a grouping expression because its data type <dataType> is not an orderable data type.
HDFS_HTTP_ERROR
When attempting to read from HDFS, the HTTP request failed.
For more details see HDFS_HTTP_ERROR
HLL_INVALID_INPUT_SKETCH_BUFFER
Invalid call to <function>; only valid HLL sketch buffers are supported as inputs (such as those produced by the hll_sketch_agg function).
HLL_INVALID_LG_K
Invalid call to <function>; the lgConfigK value must be between <min> and <max>, inclusive: <value>.
HLL_UNION_DIFFERENT_LG_K
Sketches have different lgConfigK values: <left> and <right>. Set the allowDifferentLgConfigK parameter to true to call <function> with different lgConfigK values.
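For example, passing true as the third argument of hll_union permits merging sketches built with different lgConfigK values (the table and column names here are hypothetical):

    -- s1 and s2 are BINARY HLL sketch columns built with different lgConfigK values.
    SELECT hll_sketch_estimate(hll_union(s1, s2, true)) AS approx_distinct
    FROM sketches;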
HYBRID_ANALYZER_EXCEPTION
A failure occurred when attempting to resolve a query or command with both the legacy fixed-point analyzer and the single-pass resolver.
For more details see HYBRID_ANALYZER_EXCEPTION
IDENTIFIER_TOO_MANY_NAME_PARTS
<identifier> is not a valid identifier as it has more than 2 name parts.
IDENTITY_COLUMNS_DUPLICATED_SEQUENCE_GENERATOR_OPTION
Duplicated IDENTITY column sequence generator option: <sequenceGeneratorOption>.
ILLEGAL_STATE_STORE_VALUE
Illegal value provided to the State Store
For more details see ILLEGAL_STATE_STORE_VALUE
INAPPROPRIATE_URI_SCHEME_OF_CONNECTION_OPTION
Connection can’t be created due to an inappropriate scheme of the URI <uri> provided for the connection option ‘<option>’.
Allowed scheme(s): <allowedSchemes>.
Please add a scheme if it is not present in the URI, or specify a scheme from the allowed values.
INCOMPATIBLE_COLUMN_TYPE
<operator> can only be performed on tables with compatible column types. The <columnOrdinalNumber> column of the <tableOrdinalNumber> table is <dataType1> type, which is not compatible with <dataType2> at the same column of the first table.<hint>
INCOMPATIBLE_DATASOURCE_REGISTER
Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: <message>
INCOMPATIBLE_DATA_FOR_TABLE
Cannot write incompatible data for the table <tableName>:
For more details see INCOMPATIBLE_DATA_FOR_TABLE
INCOMPATIBLE_VIEW_SCHEMA_CHANGE
The SQL query of view <viewName> has an incompatible schema change and column <colName> cannot be resolved. Expected <expectedNum> columns named <colName> but got <actualCols>.
Please try to re-create the view by running: <suggestion>.
INCONSISTENT_BEHAVIOR_CROSS_VERSION
You may get a different result due to the upgrading to
For more details see INCONSISTENT_BEHAVIOR_CROSS_VERSION
INCORRECT_NUMBER_OF_ARGUMENTS
<failure>, <functionName> requires at least <minArgs> arguments and at most <maxArgs> arguments.
INCORRECT_RAMP_UP_RATE
Max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but ‘rampUpTimeSeconds’ is <rampUpTimeSeconds>.
INDETERMINATE_COLLATION
The function called requires knowledge of the collation it should apply, but an indeterminate collation was found. Use the COLLATE function to set the collation explicitly.
INDEX_ALREADY_EXISTS
Cannot create the index <indexName> on table <tableName> because it already exists.
INFINITE_STREAMING_TRIGGER_NOT_SUPPORTED
Trigger type <trigger> is not supported for this cluster type.
Use a different trigger type, e.g. AvailableNow, Once.
INSERT_COLUMN_ARITY_MISMATCH
Cannot write to <tableName>, the reason is:
For more details see INSERT_COLUMN_ARITY_MISMATCH
INSERT_PARTITION_COLUMN_ARITY_MISMATCH
Cannot write to ‘<tableName>’, <reason>:
Table columns: <tableColumns>.
Partition columns with static values: <staticPartCols>.
Data columns: <dataColumns>.
INSUFFICIENT_PERMISSIONS_EXT_LOC
User <user> has insufficient privileges for external location <location>.
INSUFFICIENT_PERMISSIONS_NO_OWNER
There is no owner for <securableName>. Ask your administrator to set an owner.
INSUFFICIENT_PERMISSIONS_SECURABLE_PARENT_OWNER
The owner of <securableName> is different from the owner of <parentSecurableName>.
INSUFFICIENT_PERMISSIONS_STORAGE_CRED
Storage credential <credentialName> has insufficient privileges.
INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES
User cannot <action> on <securableName> because of permissions on underlying securables.
INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES_VERBOSE
User cannot <action> on <securableName> because of permissions on underlying securables:
<underlyingReport>
INTERVAL_ARITHMETIC_OVERFLOW
Integer overflow while operating with intervals.
For more details see INTERVAL_ARITHMETIC_OVERFLOW
INTERVAL_DIVIDED_BY_ZERO
Division by zero. Use try_divide to tolerate the divisor being 0 and return NULL instead.
INVALID_AGGREGATE_FILTER
The FILTER expression <filterExpr> in an aggregate function is invalid.
For more details see INVALID_AGGREGATE_FILTER
INVALID_ARRAY_INDEX
The index <indexValue> is out of bounds. The array has <arraySize> elements. Use the SQL function get() to tolerate accessing an element at an invalid index and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.
For more details see INVALID_ARRAY_INDEX
INVALID_ARRAY_INDEX_IN_ELEMENT_AT
The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing an element at an invalid index and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.
For more details see INVALID_ARRAY_INDEX_IN_ELEMENT_AT
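A minimal illustration of the suggested remediation:

    -- element_at(array(1, 2, 3), 5) raises INVALID_ARRAY_INDEX_IN_ELEMENT_AT
    -- for a 3-element array; try_element_at returns NULL instead.
    SELECT try_element_at(array(1, 2, 3), 5);  -- NULL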
INVALID_ATTRIBUTE_NAME_SYNTAX
Syntax error in the attribute name: <name>. Check that backticks appear in pairs, that a quoted string is a complete name part, and that a backtick is used only inside quoted name parts.
INVALID_BITMAP_POSITION
The 0-indexed bitmap position <bitPosition> is out of bounds. The bitmap has <bitmapNumBits> bits (<bitmapNumBytes> bytes).
INVALID_BOOLEAN_STATEMENT
A boolean statement is expected in the condition, but <invalidStatement> was found.
INVALID_BOUNDARY
The boundary <boundary> is invalid: <invalidValue>.
For more details see INVALID_BOUNDARY
INVALID_BUCKET_COLUMN_DATA_TYPE
Cannot use <type> for a bucket column. Collated data types are not supported for bucketing.
INVALID_COLUMN_NAME_AS_PATH
The datasource <datasource> cannot save the column <columnName> because its name contains some characters that are not allowed in file paths. Please use an alias to rename it.
INVALID_COLUMN_OR_FIELD_DATA_TYPE
Column or field <name> is of type <type> while it’s required to be <expectedType>.
INVALID_CONF_VALUE
The value ‘<confValue>’ in the config “<confName>” is invalid.
For more details see INVALID_CONF_VALUE
INVALID_CORRUPT_RECORD_TYPE
The column <columnName> for corrupt records must have the nullable STRING type, but got <actualType>.
INVALID_CURRENT_RECIPIENT_USAGE
The current_recipient function can only be used in a CREATE VIEW statement or an ALTER VIEW statement to define a share-only view in Unity Catalog.
INVALID_DATETIME_PATTERN
Unrecognized datetime pattern: <pattern>.
For more details see INVALID_DATETIME_PATTERN
INVALID_DEFAULT_VALUE
Failed to execute <statement> command because the destination column or variable <colName> has a DEFAULT value <defaultValue>,
For more details see INVALID_DEFAULT_VALUE
INVALID_DEST_CATALOG
The destination catalog of the SYNC command must be within Unity Catalog. Found <catalog>.
INVALID_DRIVER_MEMORY
System memory <systemMemory> must be at least <minSystemMemory>.
Please increase heap size using the --driver-memory option or “<config>” in Spark configuration.
INVALID_ESC
Found an invalid escape string: <invalidEscape>. The escape string must contain only one character.
INVALID_EXECUTOR_MEMORY
Executor memory <executorMemory> must be at least <minSystemMemory>.
Please increase executor memory using the --executor-memory option or “<config>” in Spark configuration.
INVALID_EXPRESSION_ENCODER
Found an invalid expression encoder. Expected an instance of ExpressionEncoder but got <encoderType>. For more information consult ‘<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html’.
INVALID_EXTERNAL_TYPE
The external type <externalType> is not valid for the type <type> at the expression <expr>.
INVALID_EXTRACT_BASE_FIELD_TYPE
Can’t extract a value from <base>. Need a complex type [STRUCT, ARRAY, MAP] but got <other>.
INVALID_FRACTION_OF_SECOND
The valid range for seconds is [0, 60] (inclusive), but the provided value is <secAndMicros>. To avoid this error, use try_make_timestamp, which returns NULL on error.
If you do not want to use the session default timestamp version of this function, use try_make_timestamp_ntz or try_make_timestamp_ltz.
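For example (a sketch; try_make_timestamp takes year, month, day, hour, minute, and seconds arguments):

    -- 99.5 seconds is outside [0, 60], so make_timestamp errors;
    -- try_make_timestamp returns NULL instead.
    SELECT try_make_timestamp(2024, 1, 1, 12, 30, 99.5);  -- NULL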
INVALID_HTTP_REQUEST_METHOD
The input parameter: method, value: <paramValue> is not a valid parameter for http_request because it is not a valid HTTP method.
INVALID_HTTP_REQUEST_PATH
The input parameter: path, value: <paramValue> is not a valid parameter for http_request because path traversal is not allowed.
INVALID_IDENTIFIER
The unquoted identifier <ident> is invalid and must be back quoted as: `<ident>`.
Unquoted identifiers can only contain ASCII letters (‘a’ - ‘z’, ‘A’ - ‘Z’), digits (‘0’ - ‘9’), and underbar (‘_’).
Unquoted identifiers must also not start with a digit.
Different data sources and meta stores may impose additional restrictions on valid identifiers.
INVALID_INDEX_OF_ZERO
The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).
INVALID_INTERVAL_FORMAT
Error parsing ‘<input>’ to interval. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format.
For more details see INVALID_INTERVAL_FORMAT
INVALID_INTERVAL_WITH_MICROSECONDS_ADDITION
Cannot add an interval to a date because its microseconds part is not 0. If necessary set <ansiConfig> to “false” to bypass this error.
INVALID_INVERSE_DISTRIBUTION_FUNCTION
Invalid inverse distribution function <funcName>.
For more details see INVALID_INVERSE_DISTRIBUTION_FUNCTION
INVALID_JAVA_IDENTIFIER_AS_FIELD_NAME
<fieldName> is not a valid identifier of Java and cannot be used as a field name
<walkedTypePath>.
INVALID_JSON_DATA_TYPE
Failed to convert the JSON string ‘<invalidType>’ to a data type. Please enter a valid data type.
INVALID_JSON_DATA_TYPE_FOR_COLLATIONS
Collations can only be applied to string types, but the JSON data type is <jsonType>.
INVALID_JSON_RECORD_TYPE
Detected an invalid type of a JSON record while inferring a common schema in the mode <failFastMode>. Expected a STRUCT type, but found <invalidType>.
INVALID_JSON_SCHEMA_MAP_TYPE
Input schema <jsonSchema> can only contain STRING as a key type for a MAP.
INVALID_KRYO_SERIALIZER_BUFFER_SIZE
The value of the config “<bufferSizeConfKey>” must be less than 2048 MiB, but got <bufferSizeConfValue> MiB.
INVALID_LABEL_USAGE
The usage of the label <labelName> is invalid.
For more details see INVALID_LABEL_USAGE
INVALID_LAMBDA_FUNCTION_CALL
Invalid lambda function call.
For more details see INVALID_LAMBDA_FUNCTION_CALL
INVALID_LATERAL_JOIN_TYPE
The <joinType> JOIN with LATERAL correlation is not allowed because an OUTER subquery cannot correlate to its join partner. Remove the LATERAL correlation or use an INNER JOIN or LEFT OUTER JOIN instead.
INVALID_LIMIT_LIKE_EXPRESSION
The limit-like expression <expr> is invalid.
For more details see INVALID_LIMIT_LIKE_EXPRESSION
INVALID_NON_ABSOLUTE_PATH
The provided non-absolute path <path> cannot be qualified. Please update the path to be a valid DBFS mount location.
INVALID_NON_DETERMINISTIC_EXPRESSIONS
The operator expects a deterministic expression, but the actual expression is <sqlExprs>.
INVALID_NUMERIC_LITERAL_RANGE
Numeric literal <rawStrippedQualifier> is outside the valid range for <typeName> with minimum value of <minValue> and maximum value of <maxValue>. Please adjust the value accordingly.
INVALID_PANDAS_UDF_PLACEMENT
The group aggregate pandas UDF <functionList> cannot be invoked together with other, non-pandas aggregate functions.
INVALID_PARAMETER_MARKER_VALUE
An invalid parameter mapping was provided:
For more details see INVALID_PARAMETER_MARKER_VALUE
INVALID_PARAMETER_VALUE
The value of parameter(s) <parameter> in <functionName> is invalid:
For more details see INVALID_PARAMETER_VALUE
INVALID_PARTITION_OPERATION
The partition command is invalid.
For more details see INVALID_PARTITION_OPERATION
INVALID_PARTITION_VALUE
Failed to cast value <value> to data type <dataType> for partition column <columnName>. Ensure the value matches the expected data type for this partition column.
INVALID_PIPELINE_ID
Pipeline id <pipelineId> is not valid.
A pipeline id should be a UUID in the format ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’.
INVALID_PROPERTY_VALUE
<value> is an invalid property value; please use quotes, e.g. SET <key>=<value>
INVALID_QUALIFIED_COLUMN_NAME
The column name <columnName> is invalid because it is not qualified with a table name or consists of more than 4 name parts.
INVALID_QUERY_MIXED_QUERY_PARAMETERS
Parameterized query must either use positional, or named parameters, but not both.
INVALID_REGEXP_REPLACE
Could not perform regexp_replace for source = “<source>
”, pattern = “<pattern>
”, replacement = “<replacement>
” and position = <position>
.
INVALID_RESET_COMMAND_FORMAT
Expected format is ‘RESET’ or ‘RESET key’. If you want to include special characters in key, please use quotes, e.g., RESET `key`.
INVALID_S3_COPY_CREDENTIALS
COPY INTO
credentials must include AWS_ACCESS_KEY
, AWS_SECRET_KEY
, and AWS_SESSION_TOKEN
.
INVALID_SAVE_MODE
The specified save mode <mode>
is invalid. Valid save modes include “append”, “overwrite”, “ignore”, “error”, “errorifexists”, and “default”.
INVALID_SCHEMA
The input schema <inputSchema>
is not a valid schema string.
For more details see INVALID_SCHEMA
INVALID_SCHEMA_OR_RELATION_NAME
<name>
is not a valid name for tables/schemas. Valid names only contain alphabet characters, numbers and _.
INVALID_SET_SYNTAX
Expected format is ‘SET’, ‘SET key’, or ‘SET key=value’. If you want to include special characters in key, or include semicolon in value, please use backquotes, e.g., SET `key`=`value`.
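For example, a key that contains dots, spaces, or semicolons must be wrapped in backquotes. A minimal PySpark sketch (the key and value are illustrative, not a real Spark configuration):

```python
# Sketch: backquotes protect special characters in SET keys and values.
spark.sql("SET `my.custom key`=`some;value`")
spark.sql("SET `my.custom key`").show()  # read the value back
```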
INVALID_SINGLE_VARIANT_COLUMN
The singleVariantColumn
option cannot be used if there is also a user specified schema.
INVALID_SOURCE_CATALOG
Source catalog must not be within Unity Catalog for the SYNC
command. Found <catalog>
.
INVALID_SQL_ARG
The argument <name>
of sql()
is invalid. Consider replacing it with a SQL literal or with collection constructor functions such as map()
, array()
, struct()
.
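As a hedged sketch of the fix: PySpark’s parameterized spark.sql (available in recent versions) accepts literals, or Columns built from constructor functions such as array():

```python
from pyspark.sql.functions import array, lit

# Sketch: pass a literal, or a Column built with constructor functions,
# as the value of a named SQL parameter.
spark.sql("SELECT :x AS x", args={"x": 42}).show()
spark.sql(
    "SELECT element_at(:arr, 1) AS first_elem",
    args={"arr": array(lit(1), lit(2), lit(3))},
).show()
```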
INVALID_STAGING_PATH_IN_STAGING_ACCESS_QUERY
Invalid staging path in staging <operation>
query: <path>
INVALID_STATEMENT_FOR_EXECUTE_INTO
The INTO
clause of EXECUTE IMMEDIATE
is only valid for queries but the given statement is not a query: <sqlString>
.
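A minimal sketch of a valid INTO target, assuming SQL session variables are available in your runtime (the variable name is illustrative):

```python
# Sketch: INTO requires the executed statement to be a query.
spark.sql("DECLARE OR REPLACE VARIABLE result STRING")
spark.sql("EXECUTE IMMEDIATE 'SELECT current_catalog()' INTO result")
spark.sql("SELECT result").show()
# By contrast, executing a non-query such as an INSERT with an INTO
# clause would raise this error.
```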
INVALID_TEMP_OBJ_REFERENCE
Cannot create the persistent object <objName>
of the type <obj>
because it references the temporary object <tempObjName>
of the type <tempObj>
. Please make the temporary object <tempObjName>
persistent, or make the persistent object <objName>
temporary.
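For example, creating a persistent view over a temporary view triggers this error. A sketch with illustrative names:

```python
# Sketch: a persistent view cannot reference a temporary view.
spark.range(3).createOrReplaceTempView("tmp_src")           # temporary
# spark.sql("CREATE VIEW perm_v AS SELECT * FROM tmp_src")  # would fail

# Fix: persist the referenced object first...
spark.sql("CREATE TABLE src_tbl AS SELECT * FROM tmp_src")
spark.sql("CREATE VIEW perm_v AS SELECT * FROM src_tbl")
# ...or make the referencing object temporary instead:
spark.sql("CREATE TEMPORARY VIEW tmp_v AS SELECT * FROM tmp_src")
```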
INVALID_TIMESTAMP_FORMAT
The provided timestamp <timestamp>
doesn’t match the expected syntax <format>
.
INVALID_TIMEZONE
The timezone: <timeZone>
is invalid. The timezone must be either a region-based zone ID or a zone offset. Region IDs must have the form ‘area/city’, such as ‘America/Los_Angeles’. Zone offsets must be in the format ‘(+|-)HH’, ‘(+|-)HH:mm’ or ‘(+|-)HH:mm:ss’, e.g ‘-08’ , ‘+01:00’ or ‘-13:33:33’, and must be in the range from -18:00 to +18:00. ‘Z’ and ‘UTC’ are accepted as synonyms for ‘+00:00’.
INVALID_TIME_TRAVEL_TIMESTAMP_EXPR
The time travel timestamp expression <expr>
is invalid.
For more details see INVALID_TIME_TRAVEL_TIMESTAMP_EXPR
INVALID_UDF_IMPLEMENTATION
Function <funcName>
does not implement a ScalarFunction or AggregateFunction.
INVALID_UPGRADE_SYNTAX
<command> <supportedOrNot>
the source table is in Hive Metastore and the destination table is in Unity Catalog.
INVALID_URL
The url is invalid: <url>
. Use try_parse_url
to tolerate an invalid URL and return NULL
instead.
INVALID_UUID
Input <uuidInput>
is not a valid UUID.
The UUID should be in the format of ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’
Please check the format of the UUID.
INVALID_VARIABLE_DECLARATION
Invalid variable declaration.
For more details see INVALID_VARIABLE_DECLARATION
INVALID_VARIABLE_TYPE_FOR_QUERY_EXECUTE_IMMEDIATE
Variable type must be string type but got <varType>
.
INVALID_VARIANT_CAST
The variant value <value>
cannot be cast into <dataType>
. Please use try_variant_get
instead.
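A minimal sketch of the suggested fix, assuming your runtime supports the VARIANT type and its helper functions:

```python
# Sketch: try_variant_get returns NULL instead of failing the cast.
spark.sql("""
    SELECT try_variant_get(parse_json('{"a": "not a number"}'), '$.a', 'int') AS v
""").show()  # v is NULL rather than raising INVALID_VARIANT_CAST
```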
INVALID_VARIANT_GET_PATH
The path <path>
is not a valid variant extraction path in <functionName>
.
A valid path should start with $
and is followed by zero or more segments like [123]
, .name
, ['name']
, or ["name"]
.
INVALID_WHERE_CONDITION
The WHERE
condition <condition>
contains invalid expressions: <expressionList>
.
Rewrite the query to avoid window functions, aggregate functions, and generator functions in the WHERE
clause.
INVALID_WRITER_COMMIT_MESSAGE
The data source writer has generated an invalid number of commit messages. Expected exactly one writer commit message from each task, but received <detail>
.
INVALID_WRITE_DISTRIBUTION
The requested write distribution is invalid.
For more details see INVALID_WRITE_DISTRIBUTION
JOIN_CONDITION_IS_NOT_BOOLEAN_TYPE
The join condition <joinCondition>
has the invalid type <conditionType>
, expected “BOOLEAN
”.
KAFKA_DATA_LOSS
Some data may have been lost because it is no longer available in Kafka;
either the data was aged out by Kafka or the topic may have been deleted before all the data in the
topic was processed.
If you don’t want your streaming query to fail in such cases, set the source option failOnDataLoss to false.
Reason:
For more details see KAFKA_DATA_LOSS
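A hedged sketch of setting the option on a Kafka source (broker and topic names are placeholders):

```python
# Sketch: tolerate aged-out or deleted offsets instead of failing the query.
df = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "host1:9092")  # placeholder broker
    .option("subscribe", "events")                    # placeholder topic
    .option("failOnDataLoss", "false")
    .load()
)
```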
KINESIS_COULD_NOT_READ_SHARD_UNTIL_END_OFFSET
Could not read until the desired sequence number <endSeqNum>
for shard <shardId>
in
kinesis stream <stream>
with consumer mode <consumerMode>
. The query will fail due to
potential data loss. The last read record was at sequence number <lastSeqNum>
.
This can happen if the data with endSeqNum has already been aged out, or the Kinesis stream was
deleted and reconstructed with the same name. The failure behavior can be overridden
by setting spark.databricks.kinesis.failOnDataLoss to false in spark configuration.
KINESIS_EFO_CONSUMER_NOT_FOUND
For kinesis stream <streamId>
, the previously registered EFO consumer <consumerId>
of the stream has been deleted.
Restart the query so that a new consumer will be registered.
KINESIS_EFO_SUBSCRIBE_LIMIT_EXCEEDED
For shard <shard>
, the previous call to the subscribeToShard API was within 5 seconds of the next call.
Restart the query after 5 seconds or more.
KINESIS_FETCHED_SHARD_LESS_THAN_TRACKED_SHARD
The minimum fetched shardId from Kinesis (<fetchedShardId>
)
is less than the minimum tracked shardId (<trackedShardId>
).
This is unexpected and occurs when a Kinesis stream is deleted and recreated with the same name,
and a streaming query using this Kinesis stream is restarted using an existing checkpoint location.
Restart the streaming query with a new checkpoint location, or create a stream with a new name.
KINESIS_RECORD_SEQ_NUMBER_ORDER_VIOLATION
For shard <shard>
, the last record read from Kinesis in previous fetches has sequence number <lastSeqNum>
,
which is greater than the record read in current fetch with sequence number <recordSeqNum>
.
This is unexpected and can happen when the start position of retry or next fetch is incorrectly initialized, and may result in duplicate records downstream.
KINESIS_SOURCE_MUST_BE_IN_EFO_MODE_TO_CONFIGURE_CONSUMERS
To read from Kinesis Streams with consumer configurations (consumerName
, consumerNamePrefix
, or registeredConsumerId
), consumerMode
must be efo
.
KINESIS_SOURCE_MUST_SPECIFY_REGISTERED_CONSUMER_ID_AND_TYPE
To read from Kinesis Streams with registered consumers, you must specify both the registeredConsumerId
and registeredConsumerIdType
options.
KINESIS_SOURCE_MUST_SPECIFY_STREAM_NAMES_OR_ARNS
To read from Kinesis Streams, you must configure either (but not both) of the streamName
or streamARN
options as a comma-separated list of stream names/ARNs.
KINESIS_SOURCE_NO_CONSUMER_OPTIONS_WITH_REGISTERED_CONSUMERS
To read from Kinesis Streams with registered consumers, do not configure consumerName
or consumerNamePrefix
options as they will not take effect.
KINESIS_SOURCE_REGISTERED_CONSUMER_ID_COUNT_MISMATCH
The number of registered consumer ids should be equal to the number of Kinesis streams but got <numConsumerIds>
consumer ids and <numStreams>
streams.
KINESIS_SOURCE_REGISTERED_CONSUMER_NOT_FOUND
The registered consumer <consumerId>
provided cannot be found for streamARN <streamARN>
. Verify that you have registered the consumer or do not provide the registeredConsumerId
option.
KINESIS_SOURCE_REGISTERED_CONSUMER_TYPE_INVALID
The registered consumer type <consumerType>
is invalid. It must be either name
or ARN
.
KRYO_BUFFER_OVERFLOW
Kryo serialization failed: <exceptionMsg>
. To avoid this, increase “<bufferSizeConfKey>
” value.
LABEL_ALREADY_EXISTS
The label <label>
already exists. Choose another name or rename the existing label.
LOCAL_MUST_WITH_SCHEMA_FILE
LOCAL
must be used together with the schema of file
, but got: <actualSchema>
.
LOCATION_ALREADY_EXISTS
Cannot name the managed table as <identifier>
, as its associated location <location>
already exists. Please pick a different table name, or remove the existing location first.
LOST_TOPIC_PARTITIONS_IN_END_OFFSET_WITH_TRIGGER_AVAILABLENOW
Some partitions in the Kafka topic(s) were lost while running a query with Trigger.AvailableNow. The error could be transient; restart your query, and report if you still see the same issue.
topic-partitions for latest offset: <tpsForLatestOffset>
, topic-partitions for end offset: <tpsForEndOffset>
MALFORMED_AVRO_MESSAGE
Malformed Avro messages are detected in message deserialization. Parse Mode: <mode>
. To process malformed Avro message as null result, try setting the option ‘mode’ as ‘PERMISSIVE
’.
MALFORMED_RECORD_IN_PARSING
Malformed records are detected in record parsing: <badRecord>
.
Parse Mode: <failFastMode>
. To process malformed records as null result, try setting the option ‘mode’ as ‘PERMISSIVE
’.
For more details see MALFORMED_RECORD_IN_PARSING
MATERIALIZED_VIEW_MESA_REFRESH_WITHOUT_PIPELINE_ID
Cannot <refreshType>
the materialized view because it predates having a pipelineId. To enable <refreshType>
please drop and recreate the materialized view.
MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED
The materialized view operation <operation>
is not allowed:
For more details see MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED
MATERIALIZED_VIEW_OUTPUT_WITHOUT_EXPLICIT_ALIAS
Output expression <expression>
in a materialized view must be explicitly aliased.
MATERIALIZED_VIEW_OVER_STREAMING_QUERY_INVALID
Materialized view <name> could not be created with a streaming query. Please use CREATE [OR REFRESH] <st> or remove the STREAM keyword from your FROM clause to turn this relation into a batch query instead.
MATERIALIZED_VIEW_UNSUPPORTED_OPERATION
Operation <operation>
is not supported on materialized views for this version.
MAX_NUMBER_VARIABLES_IN_SESSION_EXCEEDED
Cannot create the new variable <variableName>
because the number of variables in the session exceeds the maximum allowed number (<maxNumVariables>
).
MAX_RECORDS_PER_FETCH_INVALID_FOR_KINESIS_SOURCE
maxRecordsPerFetch needs to be a positive integer less than or equal to <kinesisRecordLimit>.
MERGE_CARDINALITY_VIOLATION
The ON
search condition of the MERGE
statement matched a single row from the target table with multiple rows of the source table.
This could result in the target row being operated on more than once with an update or delete operation and is not allowed.
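A common fix is to collapse the source to at most one row per join key before merging. A sketch with illustrative table and column names:

```python
# Sketch: deduplicate the source per key so each target row matches
# at most one source row (table and column names are illustrative).
spark.sql("""
    MERGE INTO target AS t
    USING (
      SELECT id, max(updated_at) AS updated_at
      FROM source
      GROUP BY id
    ) AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET t.updated_at = s.updated_at
""")
```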
METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR
Provided value “<argValue>
” is not supported by argument “<argName>
” for the METRIC_STORE
table function.
For more details see METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR
METRIC_STORE_UNSUPPORTED_ERROR
Metric Store routine <routineName>
is currently disabled in this environment.
MIGRATION_NOT_SUPPORTED
<table>
is not supported for migrating to managed table because it is not a <tableKind>
table.
MISMATCHED_TOPIC_PARTITIONS_BETWEEN_END_OFFSET_AND_PREFETCHED
Kafka data source in Trigger.AvailableNow should provide the same topic partitions in pre-fetched offset to end offset for each microbatch. The error could be transient - restart your query, and report if you still see the same issue.
topic-partitions for pre-fetched offset: <tpsForPrefetched>
, topic-partitions for end offset: <tpsForEndOffset>
.
MISSING_AGGREGATION
The non-aggregating expression <expression>
is based on columns which are not participating in the GROUP BY
clause.
Add the columns or the expression to the GROUP BY
, aggregate the expression, or use <expressionAnyValue>
if you do not care which of the values within a group is returned.
For more details see MISSING_AGGREGATION
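A sketch of both suggested fixes, with illustrative table and column names:

```python
# Sketch: either add the column to the GROUP BY clause...
spark.sql("SELECT dept, name, count(*) FROM emp GROUP BY dept, name")
# ...or wrap it in any_value() when any value within the group will do:
spark.sql("SELECT dept, any_value(name), count(*) FROM emp GROUP BY dept")
```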
MISSING_CLAUSES_FOR_OPERATION
Missing clause <clauses>
for operation <operation>
. Please add the required clauses.
MISSING_CONNECTION_OPTION
Connections of type ‘<connectionType>
’ must include the following option(s): <requiredOptions>
.
MISSING_DATABASE_FOR_V1_SESSION_CATALOG
Database name is not specified in the v1 session catalog. Please ensure to provide a valid database name when interacting with the v1 catalog.
MISSING_GROUP_BY
The query does not include a GROUP BY
clause. Add GROUP BY
or turn it into window functions using OVER clauses.
MISSING_PARAMETER_FOR_KAFKA
Parameter <parameterName>
is required for Kafka, but is not specified in <functionName>
.
MISSING_PARAMETER_FOR_ROUTINE
Parameter <parameterName>
is required, but is not specified in <functionName>
.
MISSING_TIMEOUT_CONFIGURATION
The operation has timed out, but no timeout duration is configured. To set a processing time-based timeout, use ‘GroupState.setTimeoutDuration()’ in your ‘mapGroupsWithState’ or ‘flatMapGroupsWithState’ operation. For event-time-based timeout, use ‘GroupState.setTimeoutTimestamp()’ and define a watermark using ‘Dataset.withWatermark()’.
MISSING_WINDOW_SPECIFICATION
Window specification is not defined in the WINDOW
clause for <windowName>
. For more information about WINDOW
clauses, please refer to ‘<docroot>
/sql-ref-syntax-qry-select-window.html’.
MULTIPLE_LOAD_PATH
Databricks Delta does not support multiple input paths in the load() API.
paths: <pathList>
. To build a single DataFrame by loading
multiple paths from the same Delta table, please load the root path of
the Delta table with the corresponding partition filters. If the multiple paths
are from different Delta tables, please use Dataset’s union()/unionByName() APIs
to combine the DataFrames generated by separate load() API calls.
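A sketch of both workarounds (paths are placeholders):

```python
# Sketch: load the Delta table root once and use partition filters...
df = (
    spark.read.format("delta")
    .load("/data/events")  # root path instead of multiple subpaths
    .where("date IN ('2024-01-01', '2024-01-02')")
)
# ...or, for different Delta tables, union separate load() calls:
df = (
    spark.read.format("delta").load("/data/events_a")
    .unionByName(spark.read.format("delta").load("/data/events_b"))
)
```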
MULTIPLE_QUERY_RESULT_CLAUSES_WITH_PIPE_OPERATORS
<clause1>
and <clause2>
cannot coexist in the same SQL pipe operator using ‘|>’. Please separate the multiple result clauses into separate pipe operators and then retry the query again.
MULTIPLE_XML_DATA_SOURCE
Detected multiple data sources with the name <provider> (<sourceNames>
). Please specify the fully qualified class name or remove <externalSource>
from the classpath.
MULTI_SOURCES_UNSUPPORTED_FOR_EXPRESSION
The expression <expr>
does not support more than one source.
MUTUALLY_EXCLUSIVE_CLAUSES
Mutually exclusive clauses or options <clauses>
. Please remove one of these clauses.
MV_ST_ALTER_QUERY_INCORRECT_BACKING_TYPE
The input query expects a <expectedType>
, but the underlying table is a <givenType>
.
NAMED_PARAMETERS_NOT_SUPPORTED
Named parameters are not supported for function <functionName>
; please retry the query with positional arguments to the function call instead.
NAMED_PARAMETERS_NOT_SUPPORTED_FOR_SQL_UDFS
Cannot call function <functionName>
because named argument references are not supported. In this case, the named argument reference was <argument>
.
NAMED_PARAMETER_SUPPORT_DISABLED
Cannot call function <functionName>
because named argument references are not enabled here.
In this case, the named argument reference was <argument>
.
Set “spark.sql.allowNamedFunctionArguments” to “true” to turn on this feature.
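A minimal sketch of enabling the flag and retrying a named-argument call (the mask() call is illustrative):

```python
# Sketch: enable named function arguments for the current session.
spark.conf.set("spark.sql.allowNamedFunctionArguments", "true")
spark.sql("SELECT mask('AbCD123-@$#', lowerChar => 'q')").show()
```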
NAMESPACE_ALREADY_EXISTS
Cannot create namespace <nameSpaceName>
because it already exists.
Choose a different name, drop the existing namespace, or add the IF NOT EXISTS
clause to tolerate pre-existing namespace.
NAMESPACE_NOT_EMPTY
Cannot drop a namespace <nameSpaceNameName>
because it contains objects.
Use DROP NAMESPACE
… CASCADE
to drop the namespace and all its objects.
NAMESPACE_NOT_FOUND
The namespace <nameSpaceName>
cannot be found. Verify the spelling and correctness of the namespace.
If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.
To tolerate the error on drop use DROP NAMESPACE IF EXISTS
.
NATIVE_IO_ERROR
Native request failed. requestId: <requestId>
, cloud: <cloud>
, operation: <operation>
request: [https: <https>
, method = <method>
, path = <path>
, params = <params>
, host = <host>
, headers = <headers>
, bodyLen = <bodyLen>
],
error: <error>
NEGATIVE_VALUES_IN_FREQUENCY_EXPRESSION
Found the negative value in <frequencyExpression>
: <negativeValue>
, but expected a positive integral value.
NESTED_AGGREGATE_FUNCTION
It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.
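For example, avg(sum(salary)) must be split so the inner aggregate runs in a subquery. A sketch with illustrative names:

```python
# Sketch: compute the inner aggregate per group in a subquery, then
# aggregate that result in the outer query.
spark.sql("""
    SELECT avg(total) AS avg_total
    FROM (SELECT dept, sum(salary) AS total FROM emp GROUP BY dept)
""")
```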
NESTED_EXECUTE_IMMEDIATE
Nested EXECUTE IMMEDIATE
commands are not allowed. Please ensure that the SQL query provided (<sqlString>
) does not contain another EXECUTE IMMEDIATE
command.
NONEXISTENT_FIELD_NAME_IN_LIST
Field(s) <nonExistFields>
do(es) not exist. Available fields: <fieldNames>
NON_FOLDABLE_ARGUMENT
The function <funcName>
requires the parameter <paramName>
to be a foldable expression of the type <paramType>
, but the actual argument is non-foldable.
NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION
When there are more than one MATCHED
clauses in a MERGE
statement, only the last MATCHED
clause can omit the condition.
NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION
When there are more than one NOT MATCHED BY SOURCE
clauses in a MERGE
statement, only the last NOT MATCHED BY SOURCE
clause can omit the condition.
NON_LAST_NOT_MATCHED_BY_TARGET_CLAUSE_OMIT_CONDITION
When there are more than one NOT MATCHED [BY TARGET
] clauses in a MERGE
statement, only the last NOT MATCHED [BY TARGET
] clause can omit the condition.
NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING
Window function is not supported in <windowFunc>
(as column <columnName>
) on streaming DataFrames/Datasets.
Structured Streaming only supports time-window aggregation using the WINDOW
function. (window specification: <windowSpec>
)
NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE
Not allowed in the pipe WHERE
clause:
For more details see NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE
NOT_A_CONSTANT_STRING
The expression <expr>
used for the routine or clause <name>
must be a constant STRING
which is NOT NULL
.
For more details see NOT_A_CONSTANT_STRING
NOT_A_PARTITIONED_TABLE
Operation <operation>
is not allowed for <tableIdentWithDB>
because it is not a partitioned table.
NOT_A_SCALAR_FUNCTION
<functionName>
appears as a scalar expression here, but the function was defined as a table function. Please update the query to move the function call into the FROM
clause, or redefine <functionName>
as a scalar function instead.
NOT_A_TABLE_FUNCTION
<functionName>
appears as a table function here, but the function was defined as a scalar function. Please update the query to move the function call outside the FROM
clause, or redefine <functionName>
as a table function instead.
NOT_NULL_ASSERT_VIOLATION
NULL
value appeared in non-nullable field: <walkedTypePath>
If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (such as java.lang.Integer instead of int/scala.Int).
NOT_NULL_CONSTRAINT_VIOLATION
Assigning a NULL
is not allowed here.
For more details see NOT_NULL_CONSTRAINT_VIOLATION
NOT_SUPPORTED_CHANGE_COLUMN
ALTER TABLE ALTER
/CHANGE COLUMN
is not supported for changing <table>
’s column <originName>
with type <originType>
to <newName>
with type <newType>
.
NOT_SUPPORTED_COMMAND_WITHOUT_HIVE_SUPPORT
<cmd>
is not supported, if you want to enable it, please set “spark.sql.catalogImplementation” to “hive”.
NOT_SUPPORTED_IN_JDBC_CATALOG
The command is not supported in the JDBC catalog:
For more details see NOT_SUPPORTED_IN_JDBC_CATALOG
NO_DEFAULT_COLUMN_VALUE_AVAILABLE
Can’t determine the default value for <colName>
since it is not nullable and it has no default value.
NO_MERGE_ACTION_SPECIFIED
df.mergeInto needs to be followed by at least one of whenMatched/whenNotMatched/whenNotMatchedBySource.
NO_PARENT_EXTERNAL_LOCATION_FOR_PATH
SQLSTATE: none assigned
No parent external location was found for path ‘<path>
’. Please create an external location on one of the parent paths and then retry the query or command again.
NO_STORAGE_LOCATION_FOR_TABLE
SQLSTATE: none assigned
No storage location was found for table ‘<tableId>
’ when generating table credentials. Please verify the table type and the table location URL and then retry the query or command again.
NO_SUCH_CATALOG_EXCEPTION
Catalog ‘<catalog>
’ was not found. Please verify the catalog name and then retry the query or command again.
NO_SUCH_CLEANROOM_EXCEPTION
SQLSTATE: none assigned
The clean room ‘<cleanroom>
’ does not exist. Please verify that the clean room name is spelled correctly and matches the name of a valid existing clean room and then retry the query or command again.
NO_SUCH_EXTERNAL_LOCATION_EXCEPTION
SQLSTATE: none assigned
The external location ‘<externalLocation>
’ does not exist. Please verify that the external location name is correct and then retry the query or command again.
NO_SUCH_METASTORE_EXCEPTION
SQLSTATE: none assigned
The metastore was not found. Please ask your account administrator to assign a metastore to the current workspace and then retry the query or command again.
NO_SUCH_PROVIDER_EXCEPTION
SQLSTATE: none assigned
The share provider ‘<providerName>
’ does not exist. Please verify the share provider name is spelled correctly and matches the name of a valid existing provider name and then retry the query or command again.
NO_SUCH_RECIPIENT_EXCEPTION
SQLSTATE: none assigned
The recipient ‘<recipient>
’ does not exist. Please verify that the recipient name is spelled correctly and matches the name of a valid existing recipient and then retry the query or command again.
NO_SUCH_STORAGE_CREDENTIAL_EXCEPTION
SQLSTATE: none assigned
The storage credential ‘<storageCredential>
’ does not exist. Please verify that the storage credential name is spelled correctly and matches the name of a valid existing storage credential and then retry the query or command again.
NO_SUCH_USER_EXCEPTION
SQLSTATE: none assigned
The user ‘<userName>
’ does not exist. Please verify that the user to whom you grant permission or alter ownership is spelled correctly and matches the name of a valid existing user and then retry the query or command again.
NULL_QUERY_STRING_EXECUTE_IMMEDIATE
Execute immediate requires a non-null variable as the query string, but the provided variable <varName>
is null.
NUMERIC_OUT_OF_SUPPORTED_RANGE
The value <value>
cannot be interpreted as a numeric since it has more than 38 digits.
NUM_COLUMNS_MISMATCH
<operator>
can only be performed on inputs with the same number of columns, but the first input has <firstNumColumns>
columns and the <invalidOrdinalNum>
input has <invalidNumColumns>
columns.
NUM_TABLE_VALUE_ALIASES_MISMATCH
Number of given aliases does not match number of output columns.
Function name: <funcName>
; number of aliases: <aliasesNum>
; number of output columns: <outColsNum>
.
ONLY_SECRET_FUNCTION_SUPPORTED_HERE
Calling function <functionName>
is not supported in this <location>
; <supportedFunctions>
supported here.
ONLY_SUPPORTED_WITH_UC_SQL_CONNECTOR
SQL operation <operation>
is only supported on Databricks SQL connectors with Unity Catalog support.
ORDER_BY_POS_OUT_OF_RANGE
ORDER BY
position <index>
is not in select list (valid range is [1, <size>
]).
PARQUET_CONVERSION_FAILURE
Unable to create a Parquet converter for the data type <dataType>
whose Parquet type is <parquetType>
.
For more details see PARQUET_CONVERSION_FAILURE
PARSE_MODE_UNSUPPORTED
The function <funcName>
doesn’t support the <mode>
mode. Acceptable modes are PERMISSIVE
and FAILFAST
.
PARTITIONS_ALREADY_EXIST
Cannot ADD or RENAME
TO partition(s) <partitionList>
in table <tableName>
because they already exist.
Choose a different name, drop the existing partition, or add the IF NOT EXISTS
clause to tolerate a pre-existing partition.
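A sketch of the tolerant form (table and partition are illustrative):

```python
# Sketch: IF NOT EXISTS makes ADD PARTITION a no-op when the
# partition already exists.
spark.sql("ALTER TABLE sales ADD IF NOT EXISTS PARTITION (region = 'EU')")
```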
PARTITIONS_NOT_FOUND
The partition(s) <partitionList>
cannot be found in table <tableName>
.
Verify the partition specification and table name.
To tolerate the error on drop use ALTER TABLE
… DROP IF EXISTS PARTITION
.
PARTITION_COLUMN_NOT_FOUND_IN_SCHEMA
Partition column <column>
not found in schema <schema>
. Please provide the existing column for partitioning.
PARTITION_LOCATION_ALREADY_EXISTS
Partition location <locationPath>
already exists in table <tableName>
.
PARTITION_LOCATION_IS_NOT_UNDER_TABLE_DIRECTORY
Failed to execute the ALTER TABLE SET PARTITION LOCATION
statement, because the
partition location <location>
is not under the table directory <table>
.
To fix it, please set the location of partition to a subdirectory of <table>
.
PARTITION_METADATA
<action>
is not allowed on table <tableName>
since storing partition metadata is not supported in Unity Catalog.
PARTITION_NUMBER_MISMATCH
Number of values (<partitionNumber>
) did not match schema size (<partitionSchemaSize>
): values are <partitionValues>
, schema is <partitionSchema>
, file path is <urlEncodedPath>
.
Please re-materialize the table or contact the owner.
PARTITION_TRANSFORM_EXPRESSION_NOT_IN_PARTITIONED_BY
The expression <expression>
must be inside ‘partitionedBy’.
PATH_ALREADY_EXISTS
Path <outputPath>
already exists. Set mode as “overwrite” to overwrite the existing path.
PHOTON_DESERIALIZED_PROTOBUF_MEMORY_LIMIT_EXCEEDED
Deserializing the Photon protobuf plan requires at least <size>
bytes, which exceeds the
limit of <limit>
bytes. This could be due to a very large plan or the presence of a very
wide schema. Try to simplify the query, remove unnecessary columns, or disable Photon.
PHOTON_SERIALIZED_PROTOBUF_MEMORY_LIMIT_EXCEEDED
The serialized Photon protobuf plan has size <size>
bytes, which exceeds the limit of
<limit>
bytes. The serialized size of data types in the plan is <dataTypeSize>
bytes.
This could be due to a very large plan or the presence of a very wide schema.
Consider rewriting the query to remove unwanted operations and columns or disable Photon.
PIPE_OPERATOR_AGGREGATE_EXPRESSION_CONTAINS_NO_AGGREGATE_FUNCTION
Non-grouping expression <expr>
is provided as an argument to the |> AGGREGATE
pipe operator but does not contain any aggregate function; please update it to include an aggregate function and then retry the query again.
PIPE_OPERATOR_CONTAINS_AGGREGATE_FUNCTION
Aggregate function <expr>
is not allowed when using the pipe operator |> <clause>
clause; please use the pipe operator |> AGGREGATE
clause instead.
PIVOT_VALUE_DATA_TYPE_MISMATCH
Invalid pivot value ‘<value>
’: value data type <valueType>
does not match pivot column data type <pivotType>
.
PROCEDURE_ARGUMENT_NUMBER_MISMATCH
Procedure <procedureName>
expects <expected>
arguments, but <actual>
were provided.
PROCEDURE_CREATION_PARAMETER_OUT_INOUT_WITH_DEFAULT
The parameter <parameterName>
is defined with parameter mode <parameterMode>
. OUT and INOUT
parameter cannot be omitted when invoking a routine and therefore do not support a DEFAULT
expression. To proceed, remove the DEFAULT
clause or change the parameter mode to IN
.
PROCEDURE_NOT_SUPPORTED_WITH_HMS
Stored procedure is not supported with Hive Metastore. Please use Unity Catalog instead.
PROTOBUF_FIELD_MISSING
Searching for <field>
in Protobuf schema at <protobufSchema>
gave <matchSize>
matches. Candidates: <matches>
.
PROTOBUF_FIELD_MISSING_IN_SQL_SCHEMA
Found <field>
in Protobuf schema but there is no match in the SQL schema.
PROTOBUF_JAVA_CLASSES_NOT_SUPPORTED
Java classes are not supported for <protobufFunction>
. Contact Databricks Support about alternate options.
PROTOBUF_NOT_LOADED_SQL_FUNCTIONS_UNUSABLE
Cannot call the <functionName>
SQL function because the Protobuf data source is not loaded.
Please restart your job or session with the ‘spark-protobuf’ package loaded, such as by using the --packages argument on the command line, and then retry your query or command again.
PS_FETCH_RETRY_EXCEPTION
Task in pubsub fetch stage cannot be retried. Partition <partitionInfo>
in stage <stageInfo>
, TID <taskId>
.
PS_INVALID_UNSAFE_ROW_CONVERSION_FROM_PROTO
Invalid UnsafeRow to decode to PubSubMessageMetadata, the desired proto schema is: <protoSchema>
. The input UnsafeRow might be corrupted: <unsafeRow>
.
PS_MOVING_CHECKPOINT_FAILURE
Failed to move raw data checkpoint files from <src>
to destination directory: <dest>
.
PS_MULTIPLE_FAILED_EPOCHS
PubSub stream cannot be started as there is more than one failed fetch: <failedEpochs>
.
PS_OPTION_NOT_IN_BOUNDS
<key>
must be within the following bounds (<min>
, <max>
) exclusive of both bounds.
PS_PROVIDE_CREDENTIALS_WITH_OPTION
Shared clusters do not support authentication with instance profiles. Provide credentials to the stream directly using .option().
PS_SPARK_SPECULATION_NOT_SUPPORTED
The PubSub source connector is only available on clusters with spark.speculation
disabled.
PS_UNABLE_TO_CREATE_SUBSCRIPTION
An error occurred while trying to create subscription <subId>
on topic <topicId>
. Please check that there are sufficient permissions to create a subscription and try again.
PYTHON_STREAMING_DATA_SOURCE_RUNTIME_ERROR
Failed when the Python streaming data source performs <action>
: <msg>
QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY
Unable to access referenced table because a previously assigned column mask is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:
For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY
QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY
Unable to access referenced table because a previously assigned row level security policy is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:
For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY
READ_CURRENT_FILE_NOT_FOUND
<message>
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running ‘REFRESH TABLE
tableName’ command in SQL or by recreating the Dataset/DataFrame involved.
READ_FILES_AMBIGUOUS_ROUTINE_PARAMETERS
The invocation of function <functionName>
has <parameterName>
and <alternativeName>
set, which are aliases of each other. Please set only one of them.
READ_TVF_UNEXPECTED_REQUIRED_PARAMETER
The function <functionName>
required parameter <parameterName>
must be assigned at position <expectedPos>
without the name.
RECIPIENT_EXPIRATION_NOT_SUPPORTED
Only TIMESTAMP
/TIMESTAMP_LTZ
/TIMESTAMP_NTZ
types are supported for recipient expiration timestamp.
RECURSIVE_PROTOBUF_SCHEMA
Found a recursive reference in the Protobuf schema, which cannot be processed by Spark by default: <fieldDescriptor>. Try setting the option recursive.fields.max.depth to a value between 1 and 10. Going beyond 10 levels of recursion is not allowed.
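A hedged sketch of setting the option with from_protobuf (the descriptor path, message name, and column name are placeholders):

```python
from pyspark.sql.protobuf.functions import from_protobuf

# Sketch: allow up to 3 levels of recursion when deserializing.
parsed = df.select(
    from_protobuf(
        "payload",                     # placeholder binary column
        "MyRecursiveMessage",          # placeholder message name
        descFilePath="/path/to/descriptor.desc",
        options={"recursive.fields.max.depth": "3"},
    ).alias("msg")
)
```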
REF_DEFAULT_VALUE_IS_NOT_ALLOWED_IN_PARTITION
References to DEFAULT
column values are not allowed within the PARTITION
clause.
REMOTE_FUNCTION_HTTP_FAILED_ERROR
The remote HTTP request failed with code <errorCode>
, and error message <errorMessage>
REMOTE_FUNCTION_HTTP_RESULT_PARSE_ERROR
Failed to evaluate the <functionName>
SQL function due to inability to parse the JSON result from the remote HTTP response; the error message is <errorMessage>
. Check API documentation: <docUrl>
. Please fix the problem indicated in the error message and retry the query again.
REMOTE_FUNCTION_HTTP_RESULT_UNEXPECTED_ERROR
Failed to evaluate the <functionName>
SQL function due to inability to process the unexpected remote HTTP response; the error message is <errorMessage>
. Check API documentation: <docUrl>
. Please fix the problem indicated in the error message and retry the query again.
REMOTE_FUNCTION_HTTP_RETRY_TIMEOUT
The remote request failed after retrying <N>
times; the last failed HTTP error code was <errorCode>
and the message was <errorMessage>
REMOTE_FUNCTION_MISSING_REQUIREMENTS_ERROR
Failed to evaluate the <functionName>
SQL function because <errorMessage>
. Check requirements in <docUrl>
. Please fix the problem indicated in the error message and retry the query again.
REQUIRED_PARAMETER_ALREADY_PROVIDED_POSITIONALLY
The routine <routineName>
required parameter <parameterName>
has been assigned at position <positionalIndex>
without the name.
Please update the function call to either remove the named argument with <parameterName>
for this parameter or remove the positional
argument at <positionalIndex>
and then try the query again.
REQUIRED_PARAMETER_NOT_FOUND
Cannot invoke routine <routineName>
because the parameter named <parameterName>
is required, but the routine call did not supply a value. Please update the routine call to supply an argument value (either positionally at index <index>
or by name) and retry the query again.
REQUIRES_SINGLE_PART_NAMESPACE
<sessionCatalog>
requires a single-part namespace, but got <namespace>
.
RESCUED_DATA_COLUMN_CONFLICT_WITH_SINGLE_VARIANT
The ‘rescuedDataColumn’ DataFrame API reader option is mutually exclusive with the ‘singleVariantColumn’ DataFrame API option.
Please remove one of them and then retry the DataFrame operation again.
RESERVED_CDC_COLUMNS_ON_WRITE
The write contains reserved columns <columnList>
that are used
internally as metadata for Change Data Feed. To write to the table either rename/drop
these columns or disable Change Data Feed on the table by setting
<config>
to false.
RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED
The option <option>
has restricted values on Shared clusters for the <source>
source.
For more details see RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED
ROUTINE_ALREADY_EXISTS
Cannot create the <newRoutineType> <routineName>
because a <existingRoutineType>
of that name already exists.
Choose a different name, drop or replace the existing <existingRoutineType>
, or add the IF NOT EXISTS
clause to tolerate a pre-existing <newRoutineType>
.
ROUTINE_NOT_FOUND
The routine <routineName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP
… IF EXISTS
.
ROUTINE_PARAMETER_NOT_FOUND
The routine <routineName>
does not support the parameter <parameterName>
specified at position <pos>
.<suggestion>
ROUTINE_USES_SYSTEM_RESERVED_CLASS_NAME
The function <routineName>
cannot be created because the specified classname ‘<className>
’ is reserved for system use. Please rename the class and try again.
ROW_LEVEL_SECURITY_CHECK_CONSTRAINT_UNSUPPORTED
Creating CHECK
constraint on table <tableName>
with row level security policies is not supported.
ROW_LEVEL_SECURITY_DUPLICATE_COLUMN_NAME
A <statementType>
statement attempted to assign a row level security policy to a table, but two or more referenced columns had the same name <columnName>
, which is invalid.
ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED
Row level security policies for <tableName>
are not supported:
For more details see ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED
ROW_LEVEL_SECURITY_INCOMPATIBLE_SCHEMA_CHANGE
Unable to <statementType> <columnName>
from table <tableName>
because it’s referenced in a row level security policy. The table owner must remove or alter this policy before proceeding.
ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_SOURCE
MERGE INTO
operations do not support row level security policies in source table <tableName>
.
ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_TARGET
MERGE INTO
operations do not support writing into table <tableName>
with row level security policies.
ROW_LEVEL_SECURITY_MULTI_PART_COLUMN_NAME
This statement attempted to assign a row level security policy to a table, but referenced column <columnName>
had multiple name parts, which is invalid.
ROW_LEVEL_SECURITY_REQUIRE_UNITY_CATALOG
Row level security policies are only supported in Unity Catalog.
ROW_LEVEL_SECURITY_SHOW_PARTITIONS_UNSUPPORTED
SHOW PARTITIONS
command is not supported for <format> tables with a row level security policy.
ROW_LEVEL_SECURITY_TABLE_CLONE_SOURCE_NOT_SUPPORTED
<mode>
clone from table <tableName>
with row level security policy is not supported.
ROW_LEVEL_SECURITY_TABLE_CLONE_TARGET_NOT_SUPPORTED
<mode>
clone to table <tableName>
with row level security policy is not supported.
ROW_LEVEL_SECURITY_UNSUPPORTED_CONSTANT_AS_PARAMETER
Using a constant as a parameter in a row level security policy is not supported. Please update your SQL command to remove the constant from the row filter definition and then retry the command again.
ROW_LEVEL_SECURITY_UNSUPPORTED_PROVIDER
Failed to execute <statementType>
command because assigning row level security policy is not supported for target data source with table provider: “<provider>
”.
RULE_ID_NOT_FOUND
Could not find an ID for the rule name “<ruleName>
”. Please modify RuleIdCollection.scala if you are adding a new rule.
SCALAR_FUNCTION_NOT_COMPATIBLE
ScalarFunction <scalarFunc>
does not override the method ‘produceResult(InternalRow)’ with a custom implementation.
SCALAR_FUNCTION_NOT_FULLY_IMPLEMENTED
ScalarFunction <scalarFunc>
does not implement or override the method ‘produceResult(InternalRow)’.
SCALAR_SUBQUERY_IS_IN_GROUP_BY_OR_AGGREGATE_FUNCTION
The correlated scalar subquery ‘<sqlExpr>
’ is neither present in GROUP BY
, nor in an aggregate function.
Add it to GROUP BY
using ordinal position or wrap it in first()
(or first_value
) if you don’t care which value you get.
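A sketch of the first() fix (names are illustrative):

```python
# Sketch: wrap the correlated scalar subquery in first() so it does not
# need to appear in GROUP BY (any value within the group is acceptable).
spark.sql("""
    SELECT t.dept,
           first((SELECT d.name FROM dept d WHERE d.id = t.dept)) AS dept_name,
           count(*) AS n
    FROM emp t
    GROUP BY t.dept
""")
```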
SCHEDULE_ALREADY_EXISTS
Cannot add <scheduleType>
to a table that already has <existingScheduleType>
. Please drop the existing schedule or use ALTER TABLE
… ALTER <scheduleType>
… to alter it.
SCHEDULE_PERIOD_INVALID
The schedule period for <timeUnit>
must be an integer value between 1 and <upperBound>
(inclusive). Received: <actual>
.
SCHEMA_ALREADY_EXISTS
Cannot create schema <schemaName>
because it already exists.
Choose a different name, drop the existing schema, or add the IF NOT EXISTS
clause to tolerate pre-existing schema.
SCHEMA_NOT_EMPTY
Cannot drop a schema <schemaName>
because it contains objects.
Use DROP SCHEMA
… CASCADE
to drop the schema and all its objects.
SCHEMA_NOT_FOUND
The schema <schemaName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.
To tolerate the error on drop use DROP SCHEMA IF EXISTS
.
SECOND_FUNCTION_ARGUMENT_NOT_INTEGER
The second argument of <functionName>
function needs to be an integer.
SECRET_FUNCTION_INVALID_LOCATION
Cannot execute <commandType>
command with one or more non-encrypted references to the SECRET
function; please encrypt the result of each such function call with AES_ENCRYPT
and try the command again
SEED_EXPRESSION_IS_UNFOLDABLE
The seed expression <seedExpr>
of the expression <exprWithSeed>
must be foldable.
SERVER_IS_BUSY
The server is busy and could not handle the request. Please wait a moment and try again.
SHOW_COLUMNS_WITH_CONFLICT_NAMESPACE
SHOW COLUMNS
with conflicting namespaces: <namespaceA>
!= <namespaceB>
.
SPECIFY_BUCKETING_IS_NOT_ALLOWED
A CREATE TABLE
without explicit column list cannot specify bucketing information.
Please use the form with explicit column list and specify bucketing information.
Alternatively, allow bucketing information to be inferred by omitting the clause.
SPECIFY_CLUSTER_BY_WITH_BUCKETING_IS_NOT_ALLOWED
Cannot specify both CLUSTER BY
and CLUSTERED BY INTO BUCKETS
.
SPECIFY_CLUSTER_BY_WITH_PARTITIONED_BY_IS_NOT_ALLOWED
Cannot specify both CLUSTER BY
and PARTITIONED BY
.
SPECIFY_PARTITION_IS_NOT_ALLOWED
A CREATE TABLE
without explicit column list cannot specify PARTITIONED BY
.
Please use the form with explicit column list and specify PARTITIONED BY
.
Alternatively, allow partitioning to be inferred by omitting the PARTITION BY
clause.
STAGING_PATH_CURRENTLY_INACCESSIBLE
Transient error while accessing target staging path <path>
, please try again in a few minutes.
STAR_GROUP_BY_POS
Star (*) is not allowed in a select list when GROUP BY
uses an ordinal position.
STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_HANDLE_STATE
Failed to perform stateful processor operation=<operationType>
with invalid handle state=<handleState>
.
STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_TIME_MODE
Failed to perform stateful processor operation=<operationType>
with invalid timeMode=<timeMode>
STATEFUL_PROCESSOR_DUPLICATE_STATE_VARIABLE_DEFINED
State variable with name <stateVarName>
has already been defined in the StatefulProcessor.
STATEFUL_PROCESSOR_INCORRECT_TIME_MODE_TO_ASSIGN_TTL
Cannot use TTL for state=<stateName>
in timeMode=<timeMode>
, use TimeMode.ProcessingTime() instead.
STATEFUL_PROCESSOR_TTL_DURATION_MUST_BE_POSITIVE
TTL duration must be greater than zero for State store operation=<operationType>
on state=<stateName>
.
STATEFUL_PROCESSOR_UNKNOWN_TIME_MODE
Unknown time mode <timeMode>
. Accepted time modes are ‘none’, ‘processingTime’, and ‘eventTime’.
STATE_STORE_CANNOT_CREATE_COLUMN_FAMILY_WITH_RESERVED_CHARS
Failed to create column family with unsupported starting character and name=<colFamilyName>
.
STATE_STORE_CANNOT_USE_COLUMN_FAMILY_WITH_INVALID_NAME
Failed to perform column family operation=<operationName>
with invalid name=<colFamilyName>
. Column family name cannot be empty, include leading/trailing spaces, or use the reserved keyword “default”.
STATE_STORE_COLUMN_FAMILY_SCHEMA_INCOMPATIBLE
Incompatible schema transformation with column family=<colFamilyName>
, oldSchema=<oldSchema>
, newSchema=<newSchema>
.
STATE_STORE_HANDLE_NOT_INITIALIZED
The handle has not been initialized for this StatefulProcessor.
Please only use the StatefulProcessor within the transformWithState operator.
STATE_STORE_INCORRECT_NUM_ORDERING_COLS_FOR_RANGE_SCAN
Incorrect number of ordering ordinals=<numOrderingCols>
for range scan encoder. The number of ordering ordinals cannot be zero or greater than the number of schema columns.
STATE_STORE_INCORRECT_NUM_PREFIX_COLS_FOR_PREFIX_SCAN
Incorrect number of prefix columns=<numPrefixCols>
for prefix scan encoder. The number of prefix columns cannot be zero, or greater than or equal to the number of schema columns.
STATE_STORE_INVALID_CONFIG_AFTER_RESTART
Cannot change <configName>
from <oldConfig>
to <newConfig>
between restarts. Please set <configName>
to <oldConfig>
, or restart with a new checkpoint directory.
STATE_STORE_INVALID_PROVIDER
The given State Store Provider <inputClass>
does not extend org.apache.spark.sql.execution.streaming.state.StateStoreProvider.
STATE_STORE_INVALID_VARIABLE_TYPE_CHANGE
Cannot change <stateVarName>
to <newType>
between query restarts. Please set <stateVarName>
to <oldType>
, or restart with a new checkpoint directory.
STATE_STORE_NULL_TYPE_ORDERING_COLS_NOT_SUPPORTED
Null type ordering column with name=<fieldName>
at index=<index>
is not supported for range scan encoder.
STATE_STORE_PROVIDER_DOES_NOT_SUPPORT_FINE_GRAINED_STATE_REPLAY
The given State Store Provider <inputClass>
does not extend org.apache.spark.sql.execution.streaming.state.SupportsFineGrainedReplay.
Therefore, it does not support option snapshotStartBatchId or readChangeFeed in state data source.
STATE_STORE_UNSUPPORTED_OPERATION_ON_MISSING_COLUMN_FAMILY
State store operation=<operationType>
not supported on missing column family=<colFamilyName>
.
STATE_STORE_VARIABLE_SIZE_ORDERING_COLS_NOT_SUPPORTED
Variable size ordering column with name=<fieldName>
at index=<index>
is not supported for range scan encoder.
STATIC_PARTITION_COLUMN_IN_INSERT_COLUMN_LIST
Static partition column <staticName>
is also specified in the column list.
STDS_FAILED_TO_READ_OPERATOR_METADATA
Failed to read the operator metadata for checkpointLocation=<checkpointLocation>
and batchId=<batchId>
.
Either the file does not exist, or the file is corrupted.
Rerun the streaming query to construct the operator metadata, and report to the corresponding communities or vendors if the error persists.
STDS_FAILED_TO_READ_STATE_SCHEMA
Failed to read the state schema. Either the file does not exist, or the file is corrupted. options: <sourceOptions>
.
Rerun the streaming query to construct the state schema, and report to the corresponding communities or vendors if the error persists.
STDS_INVALID_OPTION_VALUE
Invalid value for source option ‘<optionName>
’:
For more details see STDS_INVALID_OPTION_VALUE
STDS_NO_PARTITION_DISCOVERED_IN_STATE_STORE
The state does not have any partition. Please double-check that the query points to a valid state. options: <sourceOptions>
STREAMING_AQE_NOT_SUPPORTED_FOR_STATEFUL_OPERATORS
Adaptive Query Execution is not supported for stateful operators in Structured Streaming.
STREAMING_FROM_MATERIALIZED_VIEW
Cannot stream from materialized view <viewName>
. Streaming from materialized views is not supported.
STREAMING_OUTPUT_MODE
Invalid streaming output mode: <outputMode>
.
For more details see STREAMING_OUTPUT_MODE
STREAMING_REAL_TIME_MODE
Streaming real-time mode has the following limitation:
For more details see STREAMING_REAL_TIME_MODE
STREAMING_STATEFUL_OPERATOR_NOT_MATCH_IN_STATE_METADATA
Streaming stateful operator name does not match with the operator in state metadata. This is likely to happen when a user adds, removes, or changes the stateful operators of an existing streaming query.
Stateful operators in the metadata: [<OpsInMetadataSeq>
]; Stateful operators in current batch: [<OpsInCurBatchSeq>
].
STREAMING_TABLE_NEEDS_REFRESH
Streaming table <tableName>
needs to be refreshed to execute <operation>
.
If the table is created from DBSQL
, please run REFRESH <st>
.
If the table is created by a pipeline in Delta Live Tables, please run a pipeline update.
STREAMING_TABLE_NOT_SUPPORTED
Streaming tables can only be created and refreshed in Delta Live Tables and Databricks SQL warehouses.
STREAMING_TABLE_OPERATION_NOT_ALLOWED
The operation <operation>
is not allowed:
For more details see STREAMING_TABLE_OPERATION_NOT_ALLOWED
STREAMING_TABLE_QUERY_INVALID
Streaming table <tableName>
can only be created from a streaming query. Please add the STREAM
keyword to your FROM
clause to turn this relation into a streaming query.
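A sketch of the suggested rewrite (names are illustrative):

```python
# Sketch: wrapping the source in STREAM() turns the relation into a
# streaming source, as the message requires.
spark.sql("""
    CREATE OR REFRESH STREAMING TABLE st_events
    AS SELECT * FROM STREAM(raw_events)
""")
```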
STREAM_NOT_FOUND_FOR_KINESIS_SOURCE
Kinesis stream <streamName>
in <region>
not found.
Please start a new query pointing to the correct stream name.
STRUCT_ARRAY_LENGTH_MISMATCH
Input row doesn’t have the expected number of values required by the schema. <expected>
fields are required while <actual>
values are provided.
SUM_OF_LIMIT_AND_OFFSET_EXCEEDS_MAX_INT
The sum of the LIMIT
clause and the OFFSET
clause must not be greater than the maximum 32-bit integer value (2,147,483,647) but found limit = <limit>
, offset = <offset>
.
SYNC_SRC_TARGET_TBL_NOT_SAME
Source table name <srcTable>
must be same as destination table name <destTable>
.
SYNTAX_DISCONTINUED
Support of the clause or keyword: <clause>
has been discontinued in this context.
For more details see SYNTAX_DISCONTINUED
TABLE_OR_VIEW_ALREADY_EXISTS
Cannot create table or view <relationName>
because it already exists.
Choose a different name, drop the existing object, add the IF NOT EXISTS
clause to tolerate pre-existing objects, add the OR REPLACE
clause to replace the existing materialized view, or add the OR REFRESH
clause to refresh the existing streaming table.
TABLE_OR_VIEW_NOT_FOUND
The table or view <relationName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS
or DROP TABLE IF EXISTS
.
For more details see TABLE_OR_VIEW_NOT_FOUND
TABLE_VALUED_ARGUMENTS_NOT_YET_IMPLEMENTED_FOR_SQL_FUNCTIONS
Cannot <action>
SQL user-defined function <functionName>
with TABLE
arguments because this functionality is not yet implemented.
TABLE_VALUED_FUNCTION_FAILED_TO_ANALYZE_IN_PYTHON
Failed to analyze the Python user defined table function: <msg>
TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INCOMPATIBLE_WITH_CALL
Failed to evaluate the table function <functionName>
because its table metadata <requestedMetadata>
, but the function call <invalidFunctionCallProperty>
.
TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INVALID
Failed to evaluate the table function <functionName>
because its table metadata was invalid; <reason>
.
TABLE_VALUED_FUNCTION_TOO_MANY_TABLE_ARGUMENTS
There are too many table arguments for the table-valued function.
It allows one table argument, but got: <num>
.
If you want to allow it, please set “spark.sql.allowMultipleTableArguments.enabled” to “true”.
TABLE_WITH_ID_NOT_FOUND
Table with ID <tableId>
cannot be found. Verify the correctness of the UUID.
TEMP_TABLE_OR_VIEW_ALREADY_EXISTS
Cannot create the temporary view <relationName>
because it already exists.
Choose a different name, drop or replace the existing view, or add the IF NOT EXISTS
clause to tolerate pre-existing views.
TEMP_VIEW_NAME_TOO_MANY_NAME_PARTS
CREATE TEMPORARY VIEW
or the corresponding Dataset APIs only accept single-part view names, but got: <actualName>
.
TRAILING_COMMA_IN_SELECT
Trailing comma detected in SELECT
clause. Remove the trailing comma before the FROM
clause.
TRIGGER_INTERVAL_INVALID
The trigger interval must be a positive duration that can be converted into whole seconds. Received: <actual>
seconds.
UC_CATALOG_NAME_NOT_PROVIDED
For Unity Catalog, please specify the catalog name explicitly. E.g. SHOW GRANT your.address@email.com ON CATALOG
main.
UC_COMMAND_NOT_SUPPORTED
The command(s): <commandName>
are not supported in Unity Catalog.
For more details see UC_COMMAND_NOT_SUPPORTED
UC_COMMAND_NOT_SUPPORTED_IN_SERVERLESS
The command(s): <commandName>
are not supported for Unity Catalog clusters in serverless. Use single user or shared clusters instead.
UC_DATASOURCE_NOT_SUPPORTED
Data source format <dataSourceFormatName>
is not supported in Unity Catalog.
UC_EXTERNAL_VOLUME_MISSING_LOCATION
LOCATION
clause must be present for external volume. Please check the syntax ‘CREATE EXTERNAL VOLUME
… LOCATION
…’ for creating an external volume.
UC_FAILED_PROVISIONING_STATE
The query failed because it attempted to refer to table <tableName>
but was unable to do so: <failureReason>
. Please update the table <tableName>
to ensure it is in an Active provisioning state and then retry the query again.
UC_FILE_SCHEME_FOR_TABLE_CREATION_NOT_SUPPORTED
Creating table in Unity Catalog with file scheme <schemeName>
is not supported.
Instead, please create a federated data source connection using the CREATE CONNECTION
command for the same table provider, then create a catalog based on the connection with a CREATE FOREIGN CATALOG
command to reference the tables therein.
UC_HIVE_METASTORE_FEDERATION_CROSS_CATALOG_VIEW_NOT_SUPPORTED
Hive Metastore Federation view does not support dependencies across multiple catalogs. View <view>
in Hive Metastore Federation catalog must use dependency from hive_metastore or spark_catalog catalog but its dependency <dependency>
is in another catalog <referencedCatalog>
. Please update the dependencies to satisfy this constraint and then retry your query or command again.
UC_HIVE_METASTORE_FEDERATION_NOT_ENABLED
Hive Metastore federation is not enabled on this cluster.
Accessing the catalog <catalogName>
is not supported on this cluster.
UC_INVALID_DEPENDENCIES
Dependencies of <viewName>
are recorded as <storedDeps>
while being parsed as <parsedDeps>
. This likely occurred through improper use of a non-SQL API. You can repair dependencies in Databricks Runtime by running ALTER VIEW <viewName>
AS <viewText>
.
UC_LAKEHOUSE_FEDERATION_WRITES_NOT_ALLOWED
Unity Catalog Lakehouse Federation write support is not enabled for provider <provider>
on this cluster.
UC_LOCATION_FOR_MANAGED_VOLUME_NOT_SUPPORTED
Managed volume does not accept LOCATION
clause. Please check the syntax ‘CREATE VOLUME
…’ for creating a managed volume.
UC_VOLUME_NOT_FOUND
Volume <name>
does not exist. Please use ‘SHOW VOLUMES
’ to list available volumes.
UDF_MAX_COUNT_EXCEEDED
Exceeded query-wide UDF limit of <maxNumUdfs>
UDFs (limited during public preview). Found <numUdfs>
. The UDFs were: <udfNames>
.
UDF_PYSPARK_UNSUPPORTED_TYPE
PySpark UDF <udf> (<eval-type>
) is not supported on clusters in Shared access mode.
UDF_UNSUPPORTED_PARAMETER_DEFAULT_VALUE
Parameter default value is not supported for user-defined <functionType>
function.
UDTF_ALIAS_NUMBER_MISMATCH
The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF.
Expected <aliasesSize>
aliases, but got <aliasesNames>
.
Please ensure that the number of aliases provided matches the number of columns output by the UDTF.
UDTF_INVALID_ALIAS_IN_REQUESTED_ORDERING_STRING_FROM_ANALYZE_METHOD
Failed to evaluate the user-defined table function because its ‘analyze’ method returned a requested OrderingColumn whose column name expression included an unnecessary alias <aliasName>
; please remove this alias and then try the query again.
UDTF_INVALID_REQUESTED_SELECTED_EXPRESSION_FROM_ANALYZE_METHOD_REQUIRES_ALIAS
Failed to evaluate the user-defined table function because its ‘analyze’ method returned a requested ‘select’ expression (<expression>
) that does not include a corresponding alias; please update the UDTF to specify an alias there and then try the query again.
UNABLE_TO_CONVERT_TO_PROTOBUF_MESSAGE_TYPE
Unable to convert SQL type <toType>
to Protobuf type <protobufType>
.
UNABLE_TO_FETCH_HIVE_TABLES
Unable to fetch tables of Hive database: <dbName>
. Error Class Name: <className>
.
UNBOUND_SQL_PARAMETER
Found the unbound parameter: <name>
. Please, fix args
and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as map()
, array()
, struct()
.
UNCLOSED_BRACKETED_COMMENT
Found an unclosed bracketed comment. Please, append */ at the end of the comment.
UNEXPECTED_INPUT_TYPE
Parameter <paramIndex>
of function <functionName>
requires the <requiredType>
type, however <inputSql>
has the type <inputType>
.
UNEXPECTED_INPUT_TYPE_OF_NAMED_PARAMETER
The <namedParamKey>
parameter of function <functionName>
requires the <requiredType>
type, however <inputSql>
has the type <inputType>
.<hint>
UNEXPECTED_OPERATOR_IN_STREAMING_VIEW
Unexpected operator <op>
in the CREATE VIEW
statement as a streaming source.
A streaming view query must consist only of SELECT
, WHERE
, and UNION ALL
operations.
UNEXPECTED_POSITIONAL_ARGUMENT
Cannot invoke routine <routineName>
because it contains positional argument(s) following the named argument assigned to <parameterName>
; please rearrange them so the positional arguments come first and then retry the query again.
UNEXPECTED_SERIALIZER_FOR_CLASS
The class <className>
has an unexpected expression serializer. Expects “STRUCT
” or “IF
” which returns “STRUCT
” but found <expr>
.
UNKNOWN_FIELD_EXCEPTION
Encountered <changeType>
during parsing: <unknownFieldBlob>
, which can be fixed by an automatic retry: <isRetryable>
For more details see UNKNOWN_FIELD_EXCEPTION
UNKNOWN_POSITIONAL_ARGUMENT
The invocation of routine <routineName>
contains an unknown positional argument <sqlExpr>
at position <pos>
. This is invalid.
UNKNOWN_PROTOBUF_MESSAGE_TYPE
Attempting to treat <descriptorName>
as a Message, but it was <containingType>
.
UNPIVOT_REQUIRES_ATTRIBUTES
UNPIVOT
requires all given <given>
expressions to be columns when no <empty>
expressions are given. These are not columns: [<expressions>
].
UNPIVOT_REQUIRES_VALUE_COLUMNS
At least one value column needs to be specified for UNPIVOT
; all columns were specified as IDs.
UNPIVOT_VALUE_DATA_TYPE_MISMATCH
Unpivot value columns must share a least common type, some types do not: [<types>
].
UNPIVOT_VALUE_SIZE_MISMATCH
All unpivot value columns must have the same size as there are value column names (<names>
).
UNRECOGNIZED_PARAMETER_NAME
Cannot invoke routine <routineName>
because the routine call included a named argument reference for the argument named <argumentName>
, but this routine does not include any signature containing an argument with this name. Did you mean one of the following? [<proposal>
].
UNRECOGNIZED_STATISTIC
The statistic <stats>
is not recognized. Valid statistics include count
, count_distinct
, approx_count_distinct
, mean
, stddev
, min
, max
, and percentile values. Percentile must be a numeric value followed by ‘%’, within the range 0% to 100%.
UNRESOLVABLE_TABLE_VALUED_FUNCTION
Could not resolve <name>
to a table-valued function.
Please make sure that <name>
is defined as a table-valued function and that all required parameters are provided correctly.
If <name>
is not defined, please create the table-valued function before using it.
For more information about defining table-valued functions, please refer to the Apache Spark documentation.
UNRESOLVED_ALL_IN_GROUP_BY
Cannot infer grouping columns for GROUP BY ALL
based on the select clause. Please explicitly specify the grouping columns.
UNRESOLVED_COLUMN
A column, variable, or function parameter with name <objectName>
cannot be resolved.
For more details see UNRESOLVED_COLUMN
UNRESOLVED_FIELD
A field with name <fieldName>
cannot be resolved with the struct-type column <columnPath>
.
For more details see UNRESOLVED_FIELD
UNRESOLVED_MAP_KEY
Cannot resolve column <objectName>
as a map key. If the key is a string literal, add the single quotes ‘’ around it.
For more details see UNRESOLVED_MAP_KEY
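A minimal sketch (table t and map column m are hypothetical); without quotes the key is parsed as a column reference:
-- Fails unless a column named my_key exists:
SELECT m[my_key] FROM t;
-- Quoting makes the key a string literal:
SELECT m['my_key'] FROM t;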
UNRESOLVED_ROUTINE
Cannot resolve routine <routineName>
on search path <searchPath>
.
For more details see UNRESOLVED_ROUTINE
UNRESOLVED_USING_COLUMN_FOR_JOIN
USING
column <colName>
cannot be resolved on the <side>
side of the join. The <side>
-side columns: [<suggestion>
].
UNSTRUCTURED_DATA_PROCESSING_UNSUPPORTED_FILE_FORMAT
Unstructured file format <format>
is not supported. Supported file formats are <supportedFormats>
.
Please update the format
from your <expr>
expression to one of the supported formats and then retry the query again.
UNSTRUCTURED_DATA_PROCESSING_UNSUPPORTED_MODEL
Unstructured model <model>
is not supported. Supported models are <supportedModels>
.
Please switch to one of the supported models and then retry the query again.
UNSUPPORTED_CALL
Cannot call the method “<methodName>
” of the class “<className>
”.
For more details see UNSUPPORTED_CALL
UNSUPPORTED_CHAR_OR_VARCHAR_AS_STRING
The char/varchar type can’t be used in the table schema.
If you want Spark to treat them as string type, as in Spark 3.0 and earlier, please set “spark.sql.legacy.charVarcharAsString” to “true”.
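For instance, the legacy setting can be enabled for the session before creating the table (a sketch; whether this is appropriate depends on your workload):
SET spark.sql.legacy.charVarcharAsString = true;
-- VARCHAR(10) is then treated as plain STRING, as in Spark 3.0 and earlier
CREATE TABLE t (c VARCHAR(10));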
UNSUPPORTED_COLLATION
Collation <collationName>
is not supported for:
For more details see UNSUPPORTED_COLLATION
UNSUPPORTED_COMMON_ANCESTOR_LOC_FOR_FILE_STREAM_SOURCE
The common ancestor of source path and sourceArchiveDir should be registered with UC.
If you see this error message, it’s likely that you registered the source path and sourceArchiveDir in different external locations.
Please put them into a single external location.
UNSUPPORTED_CONSTRAINT_TYPE
Unsupported constraint type. Only <supportedConstraintTypes>
are supported.
UNSUPPORTED_DATASOURCE_FOR_DIRECT_QUERY
Unsupported data source type for direct query on files: <dataSourceType>
UNSUPPORTED_DATA_SOURCE_SAVE_MODE
The data source “<source>
” cannot be written in the <createMode>
mode. Please use either the “Append” or “Overwrite” mode instead.
UNSUPPORTED_DATA_TYPE_FOR_DATASOURCE
The <format>
datasource doesn’t support the column <columnName>
of the type <columnType>
.
UNSUPPORTED_DATA_TYPE_FOR_ENCODER
Cannot create encoder for <dataType>
. Please use a different output data type for your UDF or DataFrame.
UNSUPPORTED_DEFAULT_VALUE
DEFAULT
column values are not supported.
For more details see UNSUPPORTED_DEFAULT_VALUE
UNSUPPORTED_DESERIALIZER
The deserializer is not supported:
For more details see UNSUPPORTED_DESERIALIZER
UNSUPPORTED_EXPRESSION_GENERATED_COLUMN
Cannot create generated column <fieldName>
with generation expression <expressionStr>
because <reason>
.
UNSUPPORTED_EXPR_FOR_OPERATOR
A query operator contains one or more unsupported expressions.
Consider rewriting it to avoid window functions, aggregate functions, and generator functions in the WHERE
clause.
Invalid expressions: [<invalidExprSqls>
]
UNSUPPORTED_EXPR_FOR_PARAMETER
A query parameter contains an unsupported expression.
Parameters can either be variables or literals.
Invalid expression: [<invalidExprSql>
]
UNSUPPORTED_GROUPING_EXPRESSION
grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup.
UNSUPPORTED_INITIAL_POSITION_AND_TRIGGER_PAIR_FOR_KINESIS_SOURCE
<trigger>
with initial position <initialPosition>
is not supported with the Kinesis source.
UNSUPPORTED_MANAGED_TABLE_CREATION
Creating a managed table <tableName>
using datasource <dataSource>
is not supported. You need to use datasource DELTA
or create an external table using CREATE EXTERNAL TABLE <tableName>
… USING <dataSource>
…
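As a sketch (table names and the location are hypothetical), a managed table must use Delta, while other data sources require an external table:
-- Managed table: Delta only
CREATE TABLE events (id INT) USING DELTA;
-- Non-Delta data source: external table with an explicit location
CREATE EXTERNAL TABLE events_csv (id INT) USING CSV LOCATION '/mnt/landing/events';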
UNSUPPORTED_MERGE_CONDITION
MERGE
operation contains unsupported <condName>
condition.
For more details see UNSUPPORTED_MERGE_CONDITION
UNSUPPORTED_METRIC_VIEW_USAGE
The current metric view usage is not supported.
For more details see UNSUPPORTED_METRIC_VIEW_USAGE
UNSUPPORTED_NESTED_ROW_OR_COLUMN_ACCESS_POLICY
Table <tableName>
has a row level security policy or column mask which indirectly refers to another table with a row level security policy or column mask; this is not supported. Call sequence: <callSequence>
UNSUPPORTED_OVERWRITE
Can’t overwrite the target that is also being read from.
For more details see UNSUPPORTED_OVERWRITE
UNSUPPORTED_PARTITION_TRANSFORM
Unsupported partition transform: <transform>
. The supported transforms are identity
, bucket
, and clusterBy
. Ensure your transform expression uses one of these.
UNSUPPORTED_SAVE_MODE
The save mode <saveMode>
is not supported for:
For more details see UNSUPPORTED_SAVE_MODE
UNSUPPORTED_SHOW_CREATE_TABLE
The SHOW CREATE TABLE
command is not supported.
For more details see UNSUPPORTED_SHOW_CREATE_TABLE
UNSUPPORTED_SINGLE_PASS_ANALYZER_FEATURE
The single-pass analyzer cannot process this query or command because it does not yet support <feature>
.
UNSUPPORTED_STREAMING_OPERATOR_WITHOUT_WATERMARK
<outputMode>
output mode not supported for <statefulOperator>
on streaming DataFrames/DataSets without watermark.
UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW
Streaming from a view is not supported. Reason:
For more details see UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW
UNSUPPORTED_STREAMING_OPTIONS_PERMISSION_ENFORCED
Streaming options <options>
are not supported for data source <source>
on a shared cluster. Please confirm that the options are specified and spelled correctly, and check https://docs.databricks.com/en/compute/access-mode-limitations.html#streaming-limitations-and-requirements-for-unity-catalog-shared-access-mode for limitations.
UNSUPPORTED_STREAMING_SINK_PERMISSION_ENFORCED
Data source <sink>
is not supported as a streaming sink on a shared cluster.
UNSUPPORTED_STREAMING_SOURCE_PERMISSION_ENFORCED
Data source <source>
is not supported as a streaming source on a shared cluster.
UNSUPPORTED_STREAMING_TABLE_VALUED_FUNCTION
The function <funcName>
does not support streaming. Please remove the STREAM
keyword.
UNSUPPORTED_STREAM_READ_LIMIT_FOR_KINESIS_SOURCE
<streamReadLimit>
is not supported with the Kinesis source.
UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY
Unsupported subquery expression:
For more details see UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY
UNSUPPORTED_TIMESERIES_WITH_MORE_THAN_ONE_COLUMN
Creating a primary key with more than one timeseries column <colSeq>
is not supported.
UNSUPPORTED_TYPED_LITERAL
Literals of the type <unsupportedType>
are not supported. Supported types are <supportedTypes>
.
UNSUPPORTED_UDF_FEATURE
The function <function>
uses the following feature(s) that require a newer version of Databricks runtime: <features>
. Please consult <docLink>
for details.
UNTYPED_SCALA_UDF
You’re using an untyped Scala UDF, which does not have the input type information.
Spark may blindly pass null to the Scala closure with a primitive-type argument, and the closure will see the default value of the Java type for the null argument. For example, with udf((x: Int) => x, IntegerType)
the result is 0 for null input. To get rid of this error, you could:
1. Use typed Scala UDF APIs (without the return type parameter), e.g. udf((x: Int) => x).
2. Use Java UDF APIs, e.g. udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType), if the input types are all non-primitive.
3. Set “spark.sql.legacy.allowUntypedScalaUDF” to “true” and use this API with caution.
UPGRADE_NOT_SUPPORTED
Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason:
For more details see UPGRADE_NOT_SUPPORTED
USER_DEFINED_FUNCTIONS
User-defined function is invalid:
For more details see USER_DEFINED_FUNCTIONS
USER_RAISED_EXCEPTION_PARAMETER_MISMATCH
The raise_error()
function was used to raise error class: <errorClass>
which expects parameters: <expectedParms>
.
The provided parameters <providedParms>
do not match the expected parameters.
Please make sure to provide all expected parameters.
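A sketch of a matching call, using the two-argument form of raise_error with an error class and a parameter map; the VIEW_NOT_FOUND class documented below expects a relationName parameter:
SELECT raise_error('VIEW_NOT_FOUND', map('relationName', '`v1`'));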
USER_RAISED_EXCEPTION_UNKNOWN_ERROR_CLASS
The raise_error()
function was used to raise an unknown error class: <errorClass>
VARIABLE_ALREADY_EXISTS
Cannot create the variable <variableName>
because it already exists.
Choose a different name, or drop or replace the existing variable.
VARIABLE_NOT_FOUND
The variable <variableName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VARIABLE IF EXISTS
.
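For example (the variable name is hypothetical; DROP TEMPORARY VARIABLE is the fuller form of the DROP VARIABLE syntax referenced above):
DECLARE VARIABLE my_var INT DEFAULT 42;
SELECT my_var;
-- IF EXISTS tolerates the variable being absent
DROP TEMPORARY VARIABLE IF EXISTS my_var;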
VARIANT_CONSTRUCTOR_SIZE_LIMIT
Cannot construct a Variant larger than 16 MiB. The maximum allowed size of a Variant value is 16 MiB.
VARIANT_SIZE_LIMIT
Cannot build variant bigger than <sizeLimit>
in <functionName>
.
Please avoid large input strings to this expression (for example, add function call(s) to check the expression size and convert it to NULL
first if it is too big).
VIEW_ALREADY_EXISTS
Cannot create view <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS
clause to tolerate pre-existing objects.
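For example (the view name is hypothetical):
-- Tolerate a pre-existing view
CREATE VIEW IF NOT EXISTS v AS SELECT 1 AS c;
-- Or replace it outright
CREATE OR REPLACE VIEW v AS SELECT 2 AS c;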
VIEW_EXCEED_MAX_NESTED_DEPTH
The depth of view <viewName>
exceeds the maximum view resolution depth (<maxNestedDepth>
).
Analysis is aborted to avoid errors. If you want to work around this, please try to increase the value of “spark.sql.view.maxNestedViewDepth”.
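A sketch of the workaround; the value shown is arbitrary (the limit defaults to 100 in open-source Spark, which may differ on Databricks):
SET spark.sql.view.maxNestedViewDepth = 200;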
VIEW_NOT_FOUND
The view <relationName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS
.
VOLUME_ALREADY_EXISTS
Cannot create volume <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS
clause to tolerate pre-existing objects.
WINDOW_FUNCTION_AND_FRAME_MISMATCH
<funcName>
function can only be evaluated in an ordered row-based window frame with a single offset: <windowExpr>
.
WRONG_COLUMN_DEFAULTS_FOR_DELTA_ALTER_TABLE_ADD_COLUMN_NOT_SUPPORTED
Failed to execute the command because DEFAULT
values are not supported when adding new columns to previously existing Delta tables; please add the column without a default value first, then run a second ALTER TABLE ALTER COLUMN SET DEFAULT
command to apply it for future inserted rows instead.
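A sketch of the two-step workaround the message describes (table and column names hypothetical):
ALTER TABLE my_table ADD COLUMN status STRING;
ALTER TABLE my_table ALTER COLUMN status SET DEFAULT 'new';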
WRONG_COLUMN_DEFAULTS_FOR_DELTA_FEATURE_NOT_ENABLED
Failed to execute <commandType>
command because it assigned a column DEFAULT
value, but the corresponding table feature was not enabled. Please retry the command again after executing ALTER TABLE tableName SET TBLPROPERTIES (‘delta.feature.allowColumnDefaults’ = ‘supported’).
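For example (table and column names hypothetical), enable the feature and then retry the original command:
ALTER TABLE my_table SET TBLPROPERTIES ('delta.feature.allowColumnDefaults' = 'supported');
ALTER TABLE my_table ALTER COLUMN status SET DEFAULT 'new';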
WRONG_COMMAND_FOR_OBJECT_TYPE
The operation <operation>
requires a <requiredType>
. But <objectName>
is a <foundType>
. Use <alternative>
instead.
WRONG_NUM_ARGS
The <functionName>
requires <expectedNum>
parameters but the actual number is <actualNum>
.
For more details see WRONG_NUM_ARGS
XML_UNSUPPORTED_NESTED_TYPES
XML doesn’t support <innerDataType>
as inner type of <dataType>
. Please wrap the <innerDataType>
within a StructType field when using it inside <dataType>
.
Delta Lake
DELTA_ADDING_COLUMN_WITH_INTERNAL_NAME_FAILED
Failed to add column <colName>
because the name is reserved.
DELTA_ADDING_DELETION_VECTORS_DISALLOWED
The current operation attempted to add a deletion vector to a table that does not permit the creation of new deletion vectors. Please file a bug report.
DELTA_ADDING_DELETION_VECTORS_WITH_TIGHT_BOUNDS_DISALLOWED
All operations that add deletion vectors should set the tightBounds column in statistics to false. Please file a bug report.
DELTA_ADD_COLUMN_AT_INDEX_LESS_THAN_ZERO
Index <columnIndex>
to add column <columnName>
is lower than 0
DELTA_ADD_COLUMN_PARENT_NOT_STRUCT
Cannot add <columnName>
because its parent is not a StructType. Found <other>
DELTA_AGGREGATE_IN_GENERATED_COLUMN
Found <sqlExpr>
. A generated column cannot use an aggregate expression.
DELTA_AGGREGATION_NOT_SUPPORTED
Aggregate functions are not supported in the <operation> <predicate>
.
DELTA_ALTER_COLLATION_NOT_SUPPORTED_BLOOM_FILTER
Failed to change the collation of column <column>
because it has a bloom filter index. Please either retain the existing collation or else drop the bloom filter index and then retry the command again to change the collation.
DELTA_ALTER_COLLATION_NOT_SUPPORTED_CLUSTER_BY
Failed to change the collation of column <column>
because it is a clustering column. Please either retain the existing collation or else change the column to a non-clustering column with an ALTER TABLE
command and then retry the command again to change the collation.
DELTA_ALTER_TABLE_CHANGE_COL_NOT_SUPPORTED
ALTER TABLE CHANGE COLUMN
is not supported for changing column <currentType>
to <newType>
DELTA_ALTER_TABLE_CLUSTER_BY_NOT_ALLOWED
ALTER TABLE CLUSTER BY
is supported only for Delta table with Liquid clustering.
DELTA_ALTER_TABLE_CLUSTER_BY_ON_PARTITIONED_TABLE_NOT_ALLOWED
ALTER TABLE CLUSTER BY
cannot be applied to a partitioned table.
DELTA_ALTER_TABLE_RENAME_NOT_ALLOWED
Operation not allowed: ALTER TABLE RENAME TO
is not allowed for managed Delta tables on S3, as eventual consistency on S3 may corrupt the Delta transaction log. If you insist on doing so and are sure that there has never been a Delta table with the new name <newName>
before, you can enable this by setting <key>
to be true.
DELTA_ALTER_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED
Cannot enable <tableFeature>
table feature using ALTER TABLE SET TBLPROPERTIES
. Please use CREATE
OR REPLACE TABLE CLUSTER BY
to create a Delta table with clustering.
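For example (the table definition is hypothetical):
CREATE OR REPLACE TABLE events (id BIGINT, ts TIMESTAMP)
USING DELTA
CLUSTER BY (ts);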
DELTA_AMBIGUOUS_DATA_TYPE_CHANGE
Cannot change data type of <column>
from <from>
to <to>
. This change contains column removals and additions, therefore they are ambiguous. Please make these changes individually using ALTER TABLE
[ADD | DROP | RENAME
] COLUMN
.
DELTA_AMBIGUOUS_PATHS_IN_CREATE_TABLE
CREATE TABLE
contains two different locations: <identifier>
and <location>
.
You can remove the LOCATION
clause from the CREATE TABLE
statement, or set
<config>
to true to skip this check.
DELTA_ARCHIVED_FILES_IN_LIMIT
Table <table>
does not contain enough records in non-archived files to satisfy specified LIMIT
of <limit>
records.
DELTA_ARCHIVED_FILES_IN_SCAN
Found <numArchivedFiles>
potentially archived file(s) in table <table>
that need to be scanned as part of this query.
Archived files cannot be accessed. The current time until archival is configured as <archivalTime>
.
Please adjust your query filters to exclude any archived files.
DELTA_BLOCK_COLUMN_MAPPING_AND_CDC_OPERATION
Operation “<opName>
” is not allowed when the table has enabled change data feed (CDF) and has undergone schema changes using DROP COLUMN
or RENAME COLUMN
.
DELTA_BLOOM_FILTER_DROP_ON_NON_EXISTING_COLUMNS
Cannot drop bloom filter indices for the following non-existent column(s): <unknownColumns>
DELTA_BLOOM_FILTER_OOM_ON_WRITE
OutOfMemoryError occurred while writing bloom filter indices for the following column(s): <columnsWithBloomFilterIndices>
.
You can reduce the memory footprint of bloom filter indices by choosing a smaller value for the ‘numItems’ option, a larger value for the ‘fpp’ option, or by indexing fewer columns.
DELTA_CANNOT_CHANGE_LOCATION
Cannot change the ‘location’ of the Delta table using SET TBLPROPERTIES
. Please use ALTER TABLE SET LOCATION
instead.
DELTA_CANNOT_CREATE_BLOOM_FILTER_NON_EXISTING_COL
Cannot create bloom filter indices for the following non-existent column(s): <unknownCols>
DELTA_CANNOT_DROP_BLOOM_FILTER_ON_NON_INDEXED_COLUMN
Cannot drop bloom filter index on a non-indexed column: <columnName>
DELTA_CANNOT_DROP_CHECK_CONSTRAINT_FEATURE
Cannot drop the CHECK
constraints table feature.
The following constraints must be dropped first: <constraints>
.
DELTA_CANNOT_DROP_COLLATIONS_FEATURE
Cannot drop the collations table feature.
Columns with non-default collations must be altered to use UTF8_BINARY first: <colNames>
.
DELTA_CANNOT_FIND_BUCKET_SPEC
Expecting a bucketing Delta table but cannot find the bucket spec in the table
DELTA_CANNOT_MODIFY_APPEND_ONLY
This table is configured to only allow appends. If you would like to permit updates or deletes, use ‘ALTER TABLE
<table_name> SET TBLPROPERTIES (<config>
=false)’.
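A sketch of the suggested command, assuming <config> resolves to the delta.appendOnly table property (the table name is hypothetical):
ALTER TABLE my_table SET TBLPROPERTIES ('delta.appendOnly' = 'false');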
DELTA_CANNOT_MODIFY_COORDINATED_COMMITS_DEPENDENCIES
<Command>
cannot override or unset in-commit timestamp table properties because coordinated commits is enabled in this table and depends on them. Please remove them (“delta.enableInCommitTimestamps”, “delta.inCommitTimestampEnablementVersion”, “delta.inCommitTimestampEnablementTimestamp”) from the TBLPROPERTIES
clause and then retry the command again.
DELTA_CANNOT_MODIFY_TABLE_PROPERTY
The Delta table configuration <prop>
cannot be specified by the user
DELTA_CANNOT_OVERRIDE_COORDINATED_COMMITS_CONFS
<Command>
cannot override coordinated commits configurations for an existing target table. Please remove them (“delta.coordinatedCommits.commitCoordinator-preview”, “delta.coordinatedCommits.commitCoordinatorConf-preview”, “delta.coordinatedCommits.tableConf-preview”) from the TBLPROPERTIES
clause and then retry the command again.
DELTA_CANNOT_RECONSTRUCT_PATH_FROM_URI
A URI (<uri>
) which can’t be turned into a relative path was found in the transaction log.
DELTA_CANNOT_RELATIVIZE_PATH
A path (<path>
) which can’t be relativized with the current input was found in the
transaction log. Please re-run this as:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<userPath>
”, true)
and then also run:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<path>
”)
DELTA_CANNOT_REPLACE_MISSING_TABLE
Table <tableName>
cannot be replaced as it does not exist. Use CREATE
OR REPLACE TABLE
to create the table.
DELTA_CANNOT_RESTORE_TABLE_VERSION
Cannot restore table to version <version>
. Available versions: [<startVersion>
, <endVersion>
].
DELTA_CANNOT_RESTORE_TIMESTAMP_EARLIER
Cannot restore table to timestamp (<requestedTimestamp>
) as it is before the earliest version available. Please use a timestamp after (<earliestTimestamp>
).
DELTA_CANNOT_RESTORE_TIMESTAMP_GREATER
Cannot restore table to timestamp (<requestedTimestamp>
) as it is after the latest version available. Please use a timestamp before (<latestTimestamp>
)
DELTA_CANNOT_SET_COORDINATED_COMMITS_DEPENDENCIES
<Command>
cannot set in-commit timestamp table properties together with coordinated commits, because the latter depends on the former and sets the former internally. Please remove them (“delta.enableInCommitTimestamps”, “delta.inCommitTimestampEnablementVersion”, “delta.inCommitTimestampEnablementTimestamp”) from the TBLPROPERTIES
clause and then retry the command again.
DELTA_CANNOT_SET_MANAGED_STATS_COLUMNS_PROPERTY
Cannot set delta.managedDataSkippingStatsColumns on a non-DLT table.
DELTA_CANNOT_UNSET_COORDINATED_COMMITS_CONFS
ALTER
cannot unset coordinated commits configurations. To downgrade a table from coordinated commits, please try again using ALTER TABLE [table-name] DROP FEATURE ‘coordinatedCommits-preview’.
DELTA_CANNOT_UPDATE_ARRAY_FIELD
Cannot update %1$s field %2$s type: update the element by updating %2$s.element
DELTA_CANNOT_UPDATE_MAP_FIELD
Cannot update %1$s field %2$s type: update a map by updating %2$s.key or %2$s.value
DELTA_CANNOT_UPDATE_STRUCT_FIELD
Cannot update <tableName>
field <fieldName>
type: update struct by adding, deleting, or updating its fields
DELTA_CANNOT_VACUUM_LITE
VACUUM
LITE cannot delete all eligible files as some files are not referenced by the Delta log. Please run VACUUM FULL
.
DELTA_CAST_OVERFLOW_IN_TABLE_WRITE
Failed to write a value of <sourceType>
type into the <targetType>
type column <columnName>
due to an overflow.
Use try_cast
on the input value to tolerate overflow and return NULL
instead.
If necessary, set <storeAssignmentPolicyFlag>
to “LEGACY
” to bypass this error or set <updateAndMergeCastingFollowsAnsiEnabledFlag>
to true to revert to the old behaviour and follow <ansiEnabledFlag>
in UPDATE
and MERGE
.
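For instance (tables and columns hypothetical), wrapping the input in try_cast returns NULL on overflow instead of failing the write:
INSERT INTO target_table (small_col)
SELECT try_cast(big_col AS INT) FROM source_table;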
DELTA_CDC_NOT_ALLOWED_IN_THIS_VERSION
Configuration delta.enableChangeDataFeed cannot be set. Change data feed from Delta is not yet available.
DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_DATA_SCHEMA
Retrieving table changes between version <start>
and <end>
failed because of an incompatible data schema.
Your read schema is <readSchema>
at version <readVersion>
, but we found an incompatible data schema at version <incompatibleVersion>
.
If possible, please retrieve the table changes using the end version’s schema by setting <config>
to endVersion
, or contact support.
DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_SCHEMA_CHANGE
Retrieving table changes between version <start>
and <end>
failed because of an incompatible schema change.
Your read schema is <readSchema>
at version <readVersion>
, but we found an incompatible schema change at version <incompatibleVersion>
.
If possible, please query table changes separately from version <start>
to <incompatibleVersion>
- 1, and from version <incompatibleVersion>
to <end>
.
DELTA_CHANGE_DATA_FILE_NOT_FOUND
File <filePath>
referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE
statement. This request appears to be targeting Change Data Feed, if that is the case, this error can occur when the change data file is out of the retention period and has been deleted by the VACUUM
statement. For more information, see <faqPath>
DELTA_CHANGE_TABLE_FEED_DISABLED
Cannot write to table with delta.enableChangeDataFeed set. Change data feed from Delta is not available.
DELTA_CHECKPOINT_NON_EXIST_TABLE
Cannot checkpoint a non-existing table <path>
. Did you manually delete files in the _delta_log directory?
DELTA_CLONE_AMBIGUOUS_TARGET
Two paths were provided as the CLONE
target so it is ambiguous which to use. An external
location for CLONE
was provided at <externalLocation>
at the same time as the path
<targetIdentifier>
.
DELTA_CLONE_INCOMPLETE_FILE_COPY
File (<fileName>
) not copied completely. Expected file size: <expectedSize>
, found: <actualSize>
. To continue with the operation by ignoring the file size check, set <config>
to false.
DELTA_CLONE_UNSUPPORTED_SOURCE
Unsupported <mode>
clone source ‘<name>
’, whose format is <format>
.
The supported formats are ‘delta’, ‘iceberg’ and ‘parquet’.
DELTA_CLUSTERING_CLONE_TABLE_NOT_SUPPORTED
CLONE
is not supported for Delta table with Liquid clustering for DBR version < 14.0.
DELTA_CLUSTERING_COLUMNS_DATATYPE_NOT_SUPPORTED
CLUSTER BY
is not supported because the following column(s): <columnsWithDataTypes>
don’t support data skipping.
DELTA_CLUSTERING_COLUMNS_MISMATCH
The provided clustering columns do not match the existing table’s.
provided:
<providedClusteringColumns>
existing:
<existingClusteringColumns>
DELTA_CLUSTERING_COLUMN_MISSING_STATS
Liquid clustering requires clustering columns to have stats. Couldn’t find clustering column(s) ‘<columns>
’ in stats schema:
<schema>
DELTA_CLUSTERING_CREATE_EXTERNAL_NON_LIQUID_TABLE_FROM_LIQUID_TABLE
Creating an external table without liquid clustering from a table directory with liquid clustering is not allowed; path: <path>
.
DELTA_CLUSTERING_PHASE_OUT_FAILED
Cannot finish the <phaseOutType>
of the table with <tableFeatureToAdd>
table feature (reason: <reason>
). Please try the OPTIMIZE
command again.
== Error ==
<error>
DELTA_CLUSTERING_REPLACE_TABLE_WITH_PARTITIONED_TABLE
Replacing a Delta table that has Liquid clustering with a partitioned table is not allowed.
DELTA_CLUSTERING_SHOW_CREATE_TABLE_WITHOUT_CLUSTERING_COLUMNS
SHOW CREATE TABLE
is not supported for Delta table with Liquid clustering without any clustering columns.
DELTA_CLUSTERING_TO_PARTITIONED_TABLE_WITH_NON_EMPTY_CLUSTERING_COLUMNS
Transitioning a Delta table with Liquid clustering to a partitioned table is not allowed for operation: <operation>
, when the existing table has non-empty clustering columns.
Please run ALTER TABLE CLUSTER BY
NONE to remove the clustering columns first.
DELTA_CLUSTERING_WITH_DYNAMIC_PARTITION_OVERWRITE
Dynamic partition overwrite mode is not allowed for Delta table with Liquid clustering.
DELTA_CLUSTERING_WITH_PARTITION_PREDICATE
OPTIMIZE
command for Delta table with Liquid clustering doesn’t support partition predicates. Please remove the predicates: <predicates>
.
DELTA_CLUSTERING_WITH_ZORDER_BY
OPTIMIZE
command for Delta table with Liquid clustering cannot specify ZORDER BY
. Please remove ZORDER BY (<zOrderBy>
).
DELTA_CLUSTER_BY_INVALID_NUM_COLUMNS
CLUSTER BY
for Liquid clustering supports up to <numColumnsLimit>
clustering columns, but the table has <actualNumColumns>
clustering columns. Please remove the extra clustering columns.
DELTA_CLUSTER_BY_SCHEMA_NOT_PROVIDED
It is not allowed to specify CLUSTER BY
when the schema is not defined. Please define schema for table <tableName>
.
DELTA_CLUSTER_BY_WITH_BUCKETING
Clustering and bucketing cannot both be specified. Please remove CLUSTERED BY INTO BUCKETS
/ bucketBy if you want to create a Delta table with clustering.
DELTA_CLUSTER_BY_WITH_PARTITIONED_BY
Clustering and partitioning cannot both be specified. Please remove PARTITIONED BY
/ partitionBy / partitionedBy if you want to create a Delta table with clustering.
DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_PARTITIONED_COLUMN
Data skipping is not supported for partition column ‘<column>
’.
DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_TYPE
Data skipping is not supported for column ‘<column>
’ of type <type>
.
DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET
The max column id property (<prop>
) is not set on a column mapping enabled table.
DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET_CORRECTLY
The max column id property (<prop>
) on a column mapping enabled table is <tableMax>
, which cannot be smaller than the max column id for all fields (<fieldMax>
).
DELTA_COLUMN_NOT_FOUND_IN_MERGE
Unable to find the column ‘<targetCol>
’ of the target table from the INSERT
columns: <colNames>
. INSERT
clause must specify values for all the columns of the target table.
DELTA_COLUMN_PATH_NOT_NESTED
Expected <columnPath>
to be a nested data type, but found <other>
. Was looking for the
index of <column>
in a nested field.
Schema:
<schema>
DELTA_COLUMN_STRUCT_TYPE_MISMATCH
Struct column <source>
cannot be inserted into a <targetType>
field <targetField>
in <targetTable>
.
DELTA_COMMIT_INTERMEDIATE_REDIRECT_STATE
Cannot handle commit of table within redirect table state ‘<state>
’.
DELTA_COMPACTION_VALIDATION_FAILED
The validation of the compaction of path <compactedPath>
to <newPath>
failed: Please file a bug report.
DELTA_COMPLEX_TYPE_COLUMN_CONTAINS_NULL_TYPE
Found nested NullType in column <columName>
which is of <dataType>
. Delta doesn’t support writing NullType in complex types.
DELTA_CONCURRENT_APPEND
ConcurrentAppendException: Files were added to <partition>
by a concurrent update. <retryMsg> <conflictingCommit>
Refer to <docLink>
for more details.
DELTA_CONCURRENT_DELETE_DELETE
ConcurrentDeleteDeleteException: This transaction attempted to delete one or more files that were deleted (for example <file>
) by a concurrent update. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
DELTA_CONCURRENT_DELETE_READ
ConcurrentDeleteReadException: This transaction attempted to read one or more files that were deleted (for example <file>
) by a concurrent update. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
DELTA_CONCURRENT_TRANSACTION
ConcurrentTransactionException: This error occurs when multiple streaming queries are using the same checkpoint to write into this table. Did you run multiple instances of the same streaming query at the same time?<conflictingCommit>
Refer to <docLink>
for more details.
DELTA_CONCURRENT_WRITE
ConcurrentWriteException: A concurrent transaction has written new data since the current transaction read the table. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
DELTA_CONF_OVERRIDE_NOT_SUPPORTED_IN_COMMAND
During <command>
, configuration “<configuration>
” cannot be set from the command. Please remove it from the TBLPROPERTIES
clause and then retry the command again.
DELTA_CONF_OVERRIDE_NOT_SUPPORTED_IN_SESSION
During <command>
, configuration “<configuration>
” cannot be set from the SparkSession configurations. Please unset it by running spark.conf.unset("<configuration>")
and then retry the command again.
DELTA_CONSTRAINT_ALREADY_EXISTS
Constraint ‘<constraintName>
’ already exists. Please delete the old constraint first.
Old constraint:
<oldConstraint>
DELTA_CONSTRAINT_DATA_TYPE_MISMATCH
Column <columnName>
has data type <columnType>
and cannot be altered to data type <dataType>
because this column is referenced by the following check constraint(s):
<constraints>
DELTA_CONSTRAINT_DEPENDENT_COLUMN_CHANGE
Cannot alter column <columnName>
because this column is referenced by the following check constraint(s):
<constraints>
DELTA_CONSTRAINT_DOES_NOT_EXIST
Cannot drop nonexistent constraint <constraintName>
from table <tableName>
. To avoid throwing an error, provide the parameter IF EXISTS
or set the SQL session configuration <config>
to <confValue>
.
DELTA_CONVERSION_MERGE_ON_READ_NOT_SUPPORTED
Conversion of Merge-On-Read <format>
table is not supported: <path>
, <hint>
DELTA_CONVERSION_NO_PARTITION_FOUND
Found no partition information in the catalog for table <tableName>
. Have you run “MSCK REPAIR TABLE
” on your table to discover partitions?
DELTA_CONVERSION_UNSUPPORTED_COLLATED_PARTITION_COLUMN
Cannot convert Parquet table with collated partition column <colName>
to Delta.
DELTA_CONVERSION_UNSUPPORTED_COLUMN_MAPPING
The configuration ‘<config>
’ cannot be set to <mode>
when using CONVERT
TO DELTA
.
DELTA_CONVERSION_UNSUPPORTED_SCHEMA_CHANGE
Unsupported schema changes found for <format>
table: <path>
, <hint>
DELTA_CONVERT_NON_PARQUET_TABLE
CONVERT
TO DELTA
only supports parquet tables, but you are trying to convert a <sourceName>
source: <tableId>
DELTA_CONVERT_TO_DELTA_ROW_TRACKING_WITHOUT_STATS
Cannot enable row tracking without collecting statistics.
If you want to enable row tracking, do the following:
Enable statistics collection by running the command
SET <statisticsCollectionPropertyKey>
= true
Run
CONVERT
TODELTA
without the NOSTATISTICS
option.
If you do not want to collect statistics, disable row tracking:
Deactivate enabling the table feature by default by running the command:
RESET <rowTrackingTableFeatureDefaultKey>
Deactivate the table property by default by running:
SET <rowTrackingDefaultPropertyKey>
= false
DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_SCHEMA
You are trying to create an external table <tableName>
from <path>
using Delta, but the schema is not specified when the
input path is empty.
To learn more about Delta, see <docLink>
DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG
You are trying to create an external table <tableName>
from %2$s
using Delta, but there is no transaction log present at
%2$s/_delta_log
. Check the upstream job to make sure that it is writing using
format(“delta”) and that the path is the root of the table.
To learn more about Delta, see <docLink>
DELTA_CREATE_TABLE_IDENTIFIER_LOCATION_MISMATCH
Creating path-based Delta table with a different location isn’t supported. Identifier: <identifier>
, Location: <location>
DELTA_CREATE_TABLE_SCHEME_MISMATCH
The specified schema does not match the existing schema at <path>
.
== Specified ==
<specifiedSchema>
== Existing ==
<existingSchema>
== Differences ==
<schemaDifferences>
If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
DELTA_CREATE_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED
Cannot enable <tableFeature>
table feature using TBLPROPERTIES
. Please use CREATE
OR REPLACE TABLE CLUSTER BY
to create a Delta table with clustering.
DELTA_CREATE_TABLE_WITH_DIFFERENT_CLUSTERING
The specified clustering columns do not match the existing clustering columns at <path>
.
== Specified ==
<specifiedColumns>
== Existing ==
<existingColumns>
DELTA_CREATE_TABLE_WITH_DIFFERENT_PARTITIONING
The specified partitioning does not match the existing partitioning at <path>
.
== Specified ==
<specifiedColumns>
== Existing ==
<existingColumns>
DELTA_CREATE_TABLE_WITH_DIFFERENT_PROPERTY
The specified properties do not match the existing properties at <path>
.
== Specified ==
<specifiedProperties>
== Existing ==
<existingProperties>
DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION
Cannot create table (‘<tableId>
’). The associated location (‘<tableLocation>
’) is not empty and also not a Delta table.
DELTA_DATA_CHANGE_FALSE
Cannot change table metadata because the ‘dataChange’ option is set to false. Attempted operation: ‘<op>
’.
DELTA_DELETED_PARQUET_FILE_NOT_FOUND
File <filePath>
referenced in the transaction log cannot be found. This parquet file may be deleted under Delta’s data retention policy.
Default Delta data retention duration: <logRetentionPeriod>
. Modification time of the parquet file: <modificationTime>
. Deletion time of the parquet file: <deletionTime>
. Deleted on Delta version: <deletionVersion>
.
DELTA_DELETION_VECTOR_MISSING_NUM_RECORDS
It is invalid to commit files with deletion vectors that are missing the numRecords statistic.
DELTA_DOMAIN_METADATA_NOT_SUPPORTED
Detected DomainMetadata action(s) for domains <domainNames>
, but DomainMetadataTableFeature is not enabled.
DELTA_DROP_COLUMN_ON_SINGLE_FIELD_SCHEMA
Cannot drop column from a schema with a single column. Schema:
<schema>
DELTA_DUPLICATE_ACTIONS_FOUND
File operation ‘<actionType>
’ for path <path>
was specified several times.
It conflicts with <conflictingPath>
.
It is not valid for multiple file operations with the same path to exist in a single commit.
DELTA_DUPLICATE_COLUMNS_ON_UPDATE_TABLE
<message>
Please remove duplicate columns before you update your table.
DELTA_DUPLICATE_DOMAIN_METADATA_INTERNAL_ERROR
Internal error: two DomainMetadata actions within the same transaction have the same domain <domainName>
DELTA_DV_HISTOGRAM_DESERIALIZATON
Could not deserialize the deleted record counts histogram during table integrity verification.
DELTA_DYNAMIC_PARTITION_OVERWRITE_DISABLED
Dynamic partition overwrite mode is specified by session config or write options, but it is disabled by spark.databricks.delta.dynamicPartitionOverwrite.enabled=false
.
DELTA_EXCEED_CHAR_VARCHAR_LIMIT
Value “<value>
” exceeds char/varchar type length limitation. Failed check: <expr>
.
DELTA_FAILED_FIND_ATTRIBUTE_IN_OUTPUT_COLUMNS
Could not find <newAttributeName>
among the existing target output <targetOutputColumns>
DELTA_FAILED_SCAN_WITH_HISTORICAL_VERSION
Expected a full scan of the latest version of the Delta source, but found a historical scan of version <historicalVersion>
DELTA_FEATURES_PROTOCOL_METADATA_MISMATCH
Unable to operate on this table because the following table features are enabled in metadata but not listed in protocol: <features>
.
DELTA_FEATURES_REQUIRE_MANUAL_ENABLEMENT
Your table schema requires manual enablement of the following table feature(s): <unsupportedFeatures>
.
To do this, run the following command for each of the features listed above:
ALTER TABLE table_name SET TBLPROPERTIES (‘delta.feature.feature_name’ = ‘supported’)
Replace “table_name” and “feature_name” with real values.
Current supported feature(s): <supportedFeatures>
.
DELTA_FEATURE_DROP_CHECKPOINT_FAILED
Dropping <featureName>
failed due to a failure in checkpoint creation.
Please try again later. If the issue persists, contact Databricks support.
DELTA_FEATURE_DROP_CONFLICT_REVALIDATION_FAIL
Cannot drop feature because a concurrent transaction modified the table.
Please try the operation again.
<concurrentCommit>
DELTA_FEATURE_DROP_DEPENDENT_FEATURE
Cannot drop table feature <feature>
because some other features (<dependentFeatures>
) in this table depend on <feature>
.
Consider dropping them first before dropping this feature.
DELTA_FEATURE_DROP_FEATURE_NOT_PRESENT
Cannot drop <feature>
from this table because it is not currently present in the table’s protocol.
DELTA_FEATURE_DROP_HISTORICAL_VERSIONS_EXIST
Cannot drop <feature>
because the Delta log contains historical versions that use the feature.
Please wait until the history retention period (<logRetentionPeriodKey>=<logRetentionPeriod>
)
has passed since the feature was last active.
Alternatively, please wait for the TRUNCATE HISTORY
retention period to expire (<truncateHistoryLogRetentionPeriod>
)
and then run:
ALTER TABLE
table_name DROP FEATURE
feature_name TRUNCATE HISTORY
DELTA_FEATURE_DROP_HISTORY_TRUNCATION_NOT_ALLOWED
The particular feature does not require history truncation.
DELTA_FEATURE_DROP_NONREMOVABLE_FEATURE
Cannot drop <feature>
because dropping this feature is not supported.
Please contact Databricks support.
DELTA_FEATURE_DROP_UNSUPPORTED_CLIENT_FEATURE
Cannot drop <feature>
because it is not supported by this Databricks version.
Consider using Databricks with a higher version.
DELTA_FEATURE_DROP_WAIT_FOR_RETENTION_PERIOD
Dropping <feature>
was partially successful.
The feature is now no longer used in the current version of the table. However, the feature
is still present in historical versions of the table. The table feature cannot be dropped
from the table protocol until these historical versions have expired.
To drop the table feature from the protocol, please wait for the historical versions to
expire, and then repeat this command. The retention period for historical versions is
currently configured as <logRetentionPeriodKey>=<logRetentionPeriod>
.
Alternatively, please wait for the TRUNCATE HISTORY
retention period to expire (<truncateHistoryLogRetentionPeriod>
)
and then run:
ALTER TABLE
table_name DROP FEATURE
feature_name TRUNCATE HISTORY
DELTA_FEATURE_REQUIRES_HIGHER_READER_VERSION
Unable to enable table feature <feature>
because it requires a higher reader protocol version (current <current>
). Consider upgrading the table’s reader protocol version to <required>
, or to a version which supports reader table features. Refer to <docLink>
for more information on table protocol versions.
DELTA_FEATURE_REQUIRES_HIGHER_WRITER_VERSION
Unable to enable table feature <feature>
because it requires a higher writer protocol version (current <current>
). Consider upgrading the table’s writer protocol version to <required>
, or to a version which supports writer table features. Refer to <docLink>
for more information on table protocol versions.
DELTA_FILE_NOT_FOUND_DETAILED
File <filePath>
referenced in the transaction log cannot be found. This occurs when data has been manually deleted from the file system rather than using the table DELETE
statement. For more information, see <faqPath>
DELTA_FILE_TO_OVERWRITE_NOT_FOUND
File (<path>
) to be rewritten not found among candidate files:
<pathList>
DELTA_FOUND_MAP_TYPE_COLUMN
A MapType was found. In order to access the key or value of a MapType, specify one
of:
<key>
or
<value>
followed by the name of the column (only if that column is a struct type).
e.g. mymap.key.mykey
If the column is a basic type, mymap.key or mymap.value is sufficient.
Schema:
<schema>
DELTA_GENERATED_COLUMNS_DATA_TYPE_MISMATCH
Column <columnName>
has data type <columnType>
and cannot be altered to data type <dataType>
because this column is referenced by the following generated column(s):
<generatedColumns>
DELTA_GENERATED_COLUMNS_DEPENDENT_COLUMN_CHANGE
Cannot alter column <columnName>
because this column is referenced by the following generated column(s):
<generatedColumns>
DELTA_GENERATED_COLUMNS_EXPR_TYPE_MISMATCH
The expression type of the generated column <columnName>
is <expressionType>
, but the column type is <columnType>
DELTA_GENERATED_COLUMN_UPDATE_TYPE_MISMATCH
Column <currentName>
is a generated column or a column used by a generated column. The data type is <currentDataType>
and cannot be converted to data type <updateDataType>
DELTA_ICEBERG_COMPAT_VIOLATION
The validation of IcebergCompatV<version> has failed.
For more details see DELTA_ICEBERG_COMPAT_VIOLATION
DELTA_IDENTITY_COLUMNS_ALTER_COLUMN_NOT_SUPPORTED
ALTER TABLE ALTER COLUMN
is not supported for IDENTITY
columns.
DELTA_IDENTITY_COLUMNS_ALTER_NON_DELTA_FORMAT
ALTER TABLE ALTER COLUMN SYNC IDENTITY
is only supported by Delta.
DELTA_IDENTITY_COLUMNS_ALTER_NON_IDENTITY_COLUMN
ALTER TABLE ALTER COLUMN SYNC IDENTITY
cannot be called on non IDENTITY
columns.
DELTA_IDENTITY_COLUMNS_EXPLICIT_INSERT_NOT_SUPPORTED
Providing values for GENERATED ALWAYS
AS IDENTITY
column <colName>
is not supported.
DELTA_IDENTITY_COLUMNS_PARTITION_NOT_SUPPORTED
PARTITIONED BY IDENTITY
column <colName>
is not supported.
DELTA_IDENTITY_COLUMNS_REPLACE_COLUMN_NOT_SUPPORTED
ALTER TABLE REPLACE COLUMNS
is not supported for table with IDENTITY
columns.
DELTA_IDENTITY_COLUMNS_UNSUPPORTED_DATA_TYPE
DataType <dataType>
is not supported for IDENTITY
columns.
DELTA_IDENTITY_COLUMNS_WITH_GENERATED_EXPRESSION
IDENTITY
column cannot be specified with a generated column expression.
DELTA_INCONSISTENT_BUCKET_SPEC
BucketSpec on Delta bucketed table does not match BucketSpec from metadata. Expected: <expected>
. Actual: <actual>
.
DELTA_INCONSISTENT_LOGSTORE_CONFS
(<setKeys>
) cannot be set to different values. Please only set one of them, or set them to the same value.
DELTA_INCORRECT_ARRAY_ACCESS
Incorrectly accessing an ArrayType. Use arrayname.element.elementname position to
add to an array.
DELTA_INCORRECT_ARRAY_ACCESS_BY_NAME
An ArrayType was found. In order to access elements of an ArrayType, specify
<rightName>
instead of <wrongName>
.
Schema:
<schema>
DELTA_INCORRECT_LOG_STORE_IMPLEMENTATION
The error typically occurs when the default LogStore implementation, that
is, HDFSLogStore, is used to write into a Delta table on a non-HDFS storage system.
In order to get the transactional ACID guarantees on table updates, you have to use the
correct implementation of LogStore that is appropriate for your storage system.
See <docLink>
for details.
DELTA_INDEX_LARGER_OR_EQUAL_THAN_STRUCT
Index <position>
to drop a column is equal to or larger than the struct length: <length>
DELTA_INDEX_LARGER_THAN_STRUCT
Index <index>
to add column <columnName>
is larger than struct length: <length>
DELTA_INSERT_COLUMN_ARITY_MISMATCH
Cannot write to ‘<tableName>
’, <columnName>
; target table has <numColumns>
column(s) but the inserted data has <insertColumns>
column(s)
DELTA_INVALID_BUCKET_COUNT
Invalid bucket count: <invalidBucketCount>
. Bucket count should be a positive number that is a power of 2 and at least 8. You can use <validBucketCount>
instead.
DELTA_INVALID_CDC_RANGE
CDC range from start <start>
to end <end>
was invalid. End cannot be before start.
DELTA_INVALID_CHARACTERS_IN_COLUMN_NAME
Attribute name “<columnName>
” contains invalid character(s) among ” ,;{}()\n\t=”. Please use an alias to rename it.
DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES
Found invalid character(s) among ‘ ,;{}()\n\t=’ in the column names of your schema.
Invalid column names: <invalidColumnNames>
.
Please use other characters and try again.
Alternatively, enable Column Mapping to keep using these characters.
DELTA_INVALID_CLONE_PATH
The target location for CLONE
needs to be an absolute path or table name. Use an
absolute path instead of <path>
.
DELTA_INVALID_COLUMN_NAMES_WHEN_REMOVING_COLUMN_MAPPING
Found invalid character(s) among ‘ ,;{}()\n\t=’ in the column names of your schema.
Invalid column names: <invalidColumnNames>
.
Column mapping cannot be removed when there are invalid characters in the column names.
Please rename the columns to remove the invalid characters and execute this command again.
DELTA_INVALID_FORMAT
Incompatible format detected.
A transaction log for Delta was found at <deltaRootPath>/_delta_log,
but you are trying to <operation> <path>
using format(“<format>
”). You must use
‘format(“delta”)’ when reading and writing to a Delta table.
To learn more about Delta, see <docLink>
DELTA_INVALID_GENERATED_COLUMN_REFERENCES
A generated column cannot use a non-existent column or another generated column
DELTA_INVALID_INVENTORY_SCHEMA
The schema for the specified INVENTORY
does not contain all of the required fields. Required fields are: <expectedSchema>
DELTA_INVALID_LOGSTORE_CONF
(<classConfig>
) and (<schemeConfig>
) cannot be set at the same time. Please set only one group of them.
DELTA_INVALID_MANAGED_TABLE_SYNTAX_NO_SCHEMA
You are trying to create a managed table <tableName>
using Delta, but the schema is not specified.
To learn more about Delta, see <docLink>
DELTA_INVALID_PARTITION_COLUMN_NAME
Found partition columns having invalid character(s) among ” ,;{}()\n\t=”. Please rename your partition columns. This check can be turned off by setting spark.conf.set(“spark.databricks.delta.partitionColumnValidity.enabled”, false); however, this is not recommended, as other features of Delta may not work properly.
DELTA_INVALID_PARTITION_COLUMN_TYPE
Using column <name>
of type <dataType>
as a partition column is not supported.
DELTA_INVALID_PARTITION_PATH
A partition path fragment should be of the form part1=foo/part2=bar
. The partition path: <path>
DELTA_INVALID_PROTOCOL_DOWNGRADE
Protocol version cannot be downgraded from <oldProtocol>
to <newProtocol>
DELTA_INVALID_PROTOCOL_VERSION
Unsupported Delta protocol version: table “<tableNameOrPath>
” requires reader version <readerRequired>
and writer version <writerRequired>
, but this version of Databricks supports reader versions <supportedReaders>
and writer versions <supportedWriters>
. Please upgrade to a newer release.
DELTA_INVALID_TABLE_VALUE_FUNCTION
Function <function>
is an unsupported table-valued function for CDC reads.
DELTA_INVALID_TIMESTAMP_FORMAT
The provided timestamp <timestamp>
does not match the expected syntax <format>
.
DELTA_LOG_FILE_NOT_FOUND_FOR_STREAMING_SOURCE
If you never deleted it, it’s likely your query is lagging behind. Please delete its checkpoint to restart from scratch. To avoid this happening again, you can update the retention policy of your Delta table.
DELTA_MATERIALIZED_ROW_TRACKING_COLUMN_NAME_MISSING
Materialized <rowTrackingColumn>
column name missing for <tableName>
.
DELTA_MAX_COMMIT_RETRIES_EXCEEDED
This commit has failed as it has been tried <numAttempts>
times but did not succeed.
This can be caused by the Delta table being committed continuously by many concurrent
commits.
Commit started at version: <startVersion>
Commit failed at version: <failVersion>
Number of actions attempted to commit: <numActions>
Total time spent attempting this commit: <timeSpent>
ms
DELTA_MERGE_ADD_VOID_COLUMN
Cannot add column <newColumn>
with type VOID. Please explicitly specify a non-void type.
DELTA_MERGE_INCOMPATIBLE_DATATYPE
Failed to merge incompatible data types <currentDataType>
and <updateDataType>
DELTA_MERGE_INCOMPATIBLE_DECIMAL_TYPE
Failed to merge decimal types with incompatible <decimalRanges>
DELTA_MERGE_MATERIALIZE_SOURCE_FAILED_REPEATEDLY
Keeping the source of the MERGE
statement materialized has failed repeatedly.
DELTA_MERGE_RESOLVED_ATTRIBUTE_MISSING_FROM_INPUT
Resolved attribute(s) <missingAttributes>
missing from <input>
in operator <merge>
DELTA_MERGE_UNEXPECTED_ASSIGNMENT_KEY
Unexpected assignment key: <unexpectedKeyClass>
- <unexpectedKeyObject>
DELTA_METADATA_CHANGED
MetadataChangedException: The metadata of the Delta table has been changed by a concurrent update. Please try the operation again.<conflictingCommit>
Refer to <docLink>
for more details.
DELTA_MISSING_CHANGE_DATA
Error getting change data for range [<startVersion>
, <endVersion>
] as change data was not
recorded for version [<version>
]. If you’ve enabled change data feed on this table,
use DESCRIBE HISTORY
to see when it was first enabled.
Otherwise, to start recording change data, use ALTER TABLE table_name SET TBLPROPERTIES (<key> = true).
DELTA_MISSING_COMMIT_INFO
This table has the feature <featureName>
enabled which requires the presence of the CommitInfo action in every commit. However, the CommitInfo action is missing from commit version <version>
.
DELTA_MISSING_COMMIT_TIMESTAMP
This table has the feature <featureName>
enabled which requires the presence of commitTimestamp in the CommitInfo action. However, this field has not been set in commit version <version>
.
DELTA_MISSING_DELTA_TABLE_COPY_INTO
Table doesn’t exist. Create an empty Delta table first using CREATE TABLE <tableName>
.
DELTA_MISSING_ICEBERG_CLASS
Iceberg class was not found. Please ensure Delta Iceberg support is installed.
Please refer to <docLink>
for more details.
DELTA_MISSING_NOT_NULL_COLUMN_VALUE
Column <columnName>
, which has a NOT NULL
constraint, is missing from the data being written into the table.
DELTA_MISSING_PROVIDER_FOR_CONVERT
CONVERT
TO DELTA
only supports parquet tables. Please rewrite your target as parquet.<path>
if it’s a parquet directory.
DELTA_MISSING_TRANSACTION_LOG
Incompatible format detected.
You are trying to <operation> <path>
using Delta, but there is no
transaction log present. Check the upstream job to make sure that it is writing
using format(“delta”) and that you are trying to %1$s the table base path.
To learn more about Delta, see <docLink>
DELTA_MODE_NOT_SUPPORTED
Specified mode ‘<mode>
’ is not supported. Supported modes are: <supportedModes>
DELTA_MULTIPLE_CDC_BOUNDARY
Multiple <startingOrEnding>
arguments provided for CDC read. Please provide one of either <startingOrEnding>
Timestamp or <startingOrEnding>
Version.
DELTA_MULTIPLE_CONF_FOR_SINGLE_COLUMN_IN_BLOOM_FILTER
Multiple bloom filter index configurations passed to command for column: <columnName>
DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE
Cannot perform Merge as multiple source rows matched and attempted to modify the same
target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge,
when multiple source rows match on the same target row, the result may be ambiguous
as it is unclear which source row should be used to update or delete the matching
target row. You can preprocess the source table to eliminate the possibility of
multiple matches. Please refer to
<usageReference>
DELTA_MUST_SET_ALL_COORDINATED_COMMITS_CONFS_IN_COMMAND
During <command>
, either both coordinated commits configurations (“delta.coordinatedCommits.commitCoordinator-preview”, “delta.coordinatedCommits.commitCoordinatorConf-preview”) are set in the command or neither of them. Missing: “<configuration>
”. Please specify this configuration in the TBLPROPERTIES
clause or remove the other configuration, and then retry the command again.
DELTA_MUST_SET_ALL_COORDINATED_COMMITS_CONFS_IN_SESSION
During <command>
, either both coordinated commits configurations (“coordinatedCommits.commitCoordinator-preview”, “coordinatedCommits.commitCoordinatorConf-preview”) are set in the SparkSession configurations or neither of them. Missing: “<configuration>
”. Please set this configuration in the SparkSession or unset the other configuration, and then retry the command again.
DELTA_NAME_CONFLICT_IN_BUCKETED_TABLE
The following column name(s) are reserved for Delta bucketed table internal usage only: <names>
DELTA_NESTED_FIELDS_NEED_RENAME
The input schema contains nested fields that are capitalized differently than the target table.
They need to be renamed to avoid the loss of data in these fields while writing to Delta.
Fields:
<fields>
.
Original schema:
<schema>
DELTA_NESTED_NOT_NULL_CONSTRAINT
The <nestType>
type of the field <parent>
contains a NOT NULL
constraint. Delta does not support NOT NULL
constraints nested within arrays or maps. To suppress this error and silently ignore the specified constraints, set <configKey>
= true.
Parsed <nestType>
type:
<nestedPrettyJson>
DELTA_NEW_CHECK_CONSTRAINT_VIOLATION
<numRows>
rows in <tableName>
violate the new CHECK
constraint (<checkConstraint>
)
DELTA_NEW_NOT_NULL_VIOLATION
<numRows>
rows in <tableName>
violate the new NOT NULL
constraint on <colName>
DELTA_NON_BOOLEAN_CHECK_CONSTRAINT
CHECK
constraint ‘<name>
’ (<expr>
) should be a boolean expression.
DELTA_NON_DETERMINISTIC_EXPRESSION_IN_GENERATED_COLUMN
Found <expr>
. A generated column cannot use a non-deterministic expression.
DELTA_NON_DETERMINISTIC_FUNCTION_NOT_SUPPORTED
Non-deterministic functions are not supported in the <operation> <expression>
DELTA_NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION
When there is more than one MATCHED
clause in a MERGE
statement, only the last MATCHED
clause can omit the condition.
DELTA_NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION
When there is more than one NOT MATCHED BY SOURCE
clause in a MERGE
statement, only the last NOT MATCHED BY SOURCE
clause can omit the condition.
DELTA_NON_LAST_NOT_MATCHED_CLAUSE_OMIT_CONDITION
When there is more than one NOT MATCHED
clause in a MERGE
statement, only the last NOT MATCHED
clause can omit the condition.
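A sketch of a valid ordering (tables hypothetical): every clause except the last of its kind carries a condition:
MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED AND s.op = 'delete' THEN DELETE   -- conditional, not last: OK
WHEN MATCHED THEN UPDATE SET *                 -- last MATCHED clause may omit the condition
WHEN NOT MATCHED THEN INSERT *;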
DELTA_NON_PARTITION_COLUMN_ABSENT
Data written into Delta needs to contain at least one non-partitioned column. <details>
DELTA_NON_PARTITION_COLUMN_REFERENCE
Predicate references non-partition column ‘<columnName>
’. Only the partition columns may be referenced: [<columnList>
]
DELTA_NON_PARTITION_COLUMN_SPECIFIED
Non-partitioning column(s) <columnList>
are specified where only partitioning columns are expected: <fragment>
.
DELTA_NON_SINGLE_PART_NAMESPACE_FOR_CATALOG
Delta catalog requires a single-part namespace, but <identifier>
is multi-part.
DELTA_NOT_A_DATABRICKS_DELTA_TABLE
<table>
is not a Delta table. Please drop this table first if you would like to create it with Databricks Delta.
DELTA_NOT_A_DELTA_TABLE
<tableName>
is not a Delta table. Please drop this table first if you would like to recreate it with Delta Lake.
DELTA_NOT_NULL_NESTED_FIELD
A non-nullable nested field can’t be added to a nullable parent. Please set the nullability of the parent column accordingly.
DELTA_NO_REDIRECT_RULES_VIOLATED
Operation not allowed: <operation>
cannot be performed on a table with the redirect feature.
The no redirect rules are not satisfied <noRedirectRules>
.
DELTA_NULL_SCHEMA_IN_STREAMING_WRITE
Delta doesn’t accept NullTypes in the schema for streaming writes.
DELTA_OPERATION_NOT_ALLOWED_DETAIL
Operation not allowed: <operation>
is not supported for Delta tables: <tableName>
DELTA_OPERATION_NOT_SUPPORTED_FOR_COLUMN_WITH_COLLATION
<operation>
is not supported for column <colName>
with non-default collation <collation>
.
DELTA_OPERATION_NOT_SUPPORTED_FOR_EXPRESSION_WITH_COLLATION
<operation>
is not supported for expression <exprText>
because it uses non-default collation.
DELTA_OPERATION_ON_TEMP_VIEW_WITH_GENERATED_COLS_NOT_SUPPORTED
<operation>
command on a temp view referring to a Delta table that contains generated columns is not supported. Please run the <operation>
command on the Delta table directly.
DELTA_OPERATION_ON_VIEW_NOT_ALLOWED
Operation not allowed: <operation>
cannot be performed on a view.
DELTA_OPTIMIZE_FULL_NOT_SUPPORTED
OPTIMIZE FULL
is only supported for clustered tables with non-empty clustering columns.
DELTA_OVERWRITE_MUST_BE_TRUE
Copy option overwriteSchema cannot be specified without setting OVERWRITE
= ‘true’.
DELTA_OVERWRITE_SCHEMA_WITH_DYNAMIC_PARTITION_OVERWRITE
‘overwriteSchema’ cannot be used in dynamic partition overwrite mode.
DELTA_PARTITION_COLUMN_CAST_FAILED
Failed to cast value <value>
to <dataType>
for partition column <columnName>
DELTA_PARTITION_SCHEMA_IN_ICEBERG_TABLES
Partition schema cannot be specified when converting Iceberg tables. It is automatically inferred.
DELTA_POST_COMMIT_HOOK_FAILED
Committing to the Delta table version <version>
succeeded, but an error occurred while executing the post-commit hook <name> <message>
DELTA_PROTOCOL_CHANGED
ProtocolChangedException: The protocol version of the Delta table has been changed by a concurrent update. <additionalInfo> <conflictingCommit>
Refer to <docLink>
for more details.
DELTA_READ_FEATURE_PROTOCOL_REQUIRES_WRITE
Unable to upgrade only the reader protocol version to use table features. Writer protocol version must be at least <writerVersion>
to proceed. Refer to <docLink>
for more information on table protocol versions.
DELTA_READ_TABLE_WITHOUT_COLUMNS
You are trying to read a Delta table <tableName>
that does not have any columns.
Write some new data with the option mergeSchema = true
to be able to read the table.
DELTA_REPLACE_WHERE_IN_OVERWRITE
You can’t use replaceWhere in conjunction with an overwrite by filter.
DELTA_REPLACE_WHERE_MISMATCH
Written data does not conform to partial table overwrite condition or constraint ‘<replaceWhere>
’.
<message>
DELTA_REPLACE_WHERE_WITH_DYNAMIC_PARTITION_OVERWRITE
A ‘replaceWhere’ expression and ‘partitionOverwriteMode’=’dynamic’ cannot both be set in the DataFrameWriter options.
DELTA_REPLACE_WHERE_WITH_FILTER_DATA_CHANGE_UNSET
‘replaceWhere’ cannot be used with data filters when ‘dataChange’ is set to false. Filters: <dataFilters>
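For illustration, a minimal Scala sketch of a partial overwrite with replaceWhere, assuming a hypothetical DataFrame df, a date column, and the path /delta/events; every row written must satisfy the predicate:
// Overwrite only the rows matching the predicate. Combining replaceWhere
// with 'partitionOverwriteMode' = 'dynamic', or with data filters while
// 'dataChange' = 'false', raises the errors described above.
df.write
  .format("delta")
  .mode("overwrite")
  .option("replaceWhere", "date >= '2024-01-01' AND date < '2024-02-01'")
  .save("/delta/events")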
DELTA_ROW_ID_ASSIGNMENT_WITHOUT_STATS
Cannot assign row IDs without row count statistics.
Collect statistics for the table by running the following code in a Scala notebook and retry:
// Load the Delta log for the target table; replace table_name with your table's name.
import com.databricks.sql.transaction.tahoe.DeltaLog
import com.databricks.sql.transaction.tahoe.stats.StatisticsCollection
import org.apache.spark.sql.catalyst.TableIdentifier
val log = DeltaLog.forTable(spark, TableIdentifier(table_name))
// Recompute per-file statistics (including row counts) for the whole table.
StatisticsCollection.recompute(spark, log)
DELTA_SCHEMA_CHANGED
Detected schema change:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory.
DELTA_SCHEMA_CHANGED_WITH_STARTING_OPTIONS
Detected schema change in version <version>
:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory. If the issue persists after
changing to a new checkpoint directory, you may need to change the existing
‘startingVersion’ or ‘startingTimestamp’ option to start from a version newer than
<version>
with a new checkpoint directory.
DELTA_SCHEMA_CHANGED_WITH_VERSION
Detected schema change in version <version>
:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory.
DELTA_SCHEMA_CHANGE_SINCE_ANALYSIS
The schema of your Delta table has changed in an incompatible way since your DataFrame
or DeltaTable object was created. Please redefine your DataFrame or DeltaTable object.
Changes:
<schemaDiff> <legacyFlagMessage>
DELTA_SCHEMA_NOT_PROVIDED
Table schema is not provided. Please provide the schema (column definition) of the table when using REPLACE
table and an AS SELECT
query is not provided.
DELTA_SCHEMA_NOT_SET
Table schema is not set. Write data into it or use CREATE TABLE
to set the schema.
DELTA_SET_LOCATION_SCHEMA_MISMATCH
The schema of the new Delta location is different than the current table schema.
original schema:
<original>
destination schema:
<destination>
If this is an intended change, you may turn this check off by running:
%%sql set <config>
= true
DELTA_SHALLOW_CLONE_FILE_NOT_FOUND
File <filePath>
referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE
statement. This table appears to be a shallow clone; if that is the case, this error can occur when the original table from which this table was cloned has deleted a file that the clone is still using. If you want any clones to be independent of the original table, use a DEEP clone instead.
DELTA_SHARING_CANNOT_MODIFY_RESERVED_RECIPIENT_PROPERTY
Pre-defined properties that start with <prefix>
cannot be modified.
DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED
The data is restricted by recipient property <property>
that does not apply to the current recipient in the session.
For more details see DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED
DELTA_SHARING_INVALID_PROVIDER_AUTH
Illegal authentication type <authenticationType>
for provider <provider>
.
DELTA_SHARING_INVALID_RECIPIENT_AUTH
Illegal authentication type <authenticationType>
for recipient <recipient>
.
DELTA_SHARING_MAXIMUM_RECIPIENT_TOKENS_EXCEEDED
There are more than two tokens for recipient <recipient>
.
DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_COLUMN
Non-partitioning column(s) <badCols>
are specified for SHOW PARTITIONS
DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_TABLE
SHOW PARTITIONS
is not allowed on a table that is not partitioned: <tableName>
DELTA_SOURCE_IGNORE_DELETE
Detected deleted data (for example <removedFile>
) from streaming source at version <version>
. This is currently not supported. If you’d like to ignore deletes, set the option ‘ignoreDeletes’ to ‘true’. The source table can be found at path <dataPath>
.
DELTA_SOURCE_TABLE_IGNORE_CHANGES
Detected a data update (for example <file>
) in the source table at version <version>
. This is currently not supported. If this is going to happen regularly and you are okay with skipping changes, set the option ‘skipChangeCommits’ to ‘true’. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using DLT. If you need to handle these changes, please switch to MVs. The source table can be found at path <dataPath>
.
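For illustration, a minimal Scala sketch of the reader options named in the two entries above, assuming a hypothetical source path; normally you would set only the option matching the error you hit:
val stream = spark.readStream
  .format("delta")
  // Unblocks DELTA_SOURCE_IGNORE_DELETE by skipping deleted data.
  .option("ignoreDeletes", "true")
  // Unblocks DELTA_SOURCE_TABLE_IGNORE_CHANGES by skipping commits that update data.
  .option("skipChangeCommits", "true")
  .load("/delta/source")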
DELTA_STATS_COLLECTION_COLUMN_NOT_FOUND
<statsType>
stats not found for column in Parquet metadata: <columnPath>
.
DELTA_STREAMING_CANNOT_CONTINUE_PROCESSING_POST_SCHEMA_EVOLUTION
We’ve detected one or more non-additive schema change(s) (<opType>
) between Delta version <previousSchemaChangeVersion>
and <currentSchemaChangeVersion>
in the Delta streaming source.
Please check if you want to manually propagate the schema change(s) to the sink table before we proceed with stream processing using the finalized schema at <currentSchemaChangeVersion>
.
Once you have fixed the schema of the sink table or have decided there is no need to fix it, you can set one of the following SQL configurations to unblock the non-additive schema change(s) and continue stream processing.
To unblock for this particular stream just for this series of schema change(s): set <allowCkptVerKey> = <allowCkptVerValue>
.
To unblock for this particular stream: set <allowCkptKey> = <allowCkptValue>
To unblock for all streams: set <allowAllKey> = <allowAllValue>
.
Alternatively, if applicable, you may replace the <allowAllMode>
with <opSpecificMode>
in the SQL conf to unblock the stream for just this schema change type.
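For illustration, a minimal Scala sketch of applying one of these configurations; the angle-bracket keys and values here are the placeholders that the error message itself fills in with concrete strings:
// Substitute the key/value pair exactly as printed in the error message.
spark.conf.set("<allowCkptKey>", "<allowCkptValue>")  // this stream only
// Broader alternative: unblock every stream (use with care).
// spark.conf.set("<allowAllKey>", "<allowAllValue>")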
DELTA_STREAMING_CHECK_COLUMN_MAPPING_NO_SNAPSHOT
Failed to obtain Delta log snapshot for the start version when checking column mapping schema changes. Please choose a different start version, or force enable streaming read at your own risk by setting ‘<config>
’ to ‘true’.
DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE
Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename, drop, or data type changes).
For further information and possible next steps to resolve this issue, please review the documentation at <docLink>
Read schema: <readSchema>
. Incompatible data schema: <incompatibleSchema>
.
DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_SCHEMA_LOG
Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename, drop, or data type changes).
Please provide a ‘schemaTrackingLocation’ to enable non-additive schema evolution for Delta stream processing.
See <docLink>
for more details.
Read schema: <readSchema>
. Incompatible data schema: <incompatibleSchema>
.
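For illustration, a minimal Scala sketch of providing a schema tracking location, assuming hypothetical paths; note that the schema location must sit under the stream's checkpoint location (see DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT below):
val stream = spark.readStream
  .format("delta")
  // Track non-additive schema evolution in a log under the checkpoint directory.
  .option("schemaTrackingLocation", "/checkpoints/my_stream/_schema_log")
  .load("/delta/source")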
DELTA_STREAMING_METADATA_EVOLUTION
The schema, table configuration or protocol of your Delta table has changed during streaming.
The schema or metadata tracking log has been updated.
Please restart the stream to continue processing using the updated metadata.
Updated schema: <schema>
.
Updated table configurations: <config>
.
Updated table protocol: <protocol>
DELTA_STREAMING_SCHEMA_EVOLUTION_UNSUPPORTED_ROW_FILTER_COLUMN_MASKS
Streaming from source table <tableId>
with schema tracking does not support row filters or column masks.
Please drop the row filters or column masks, or disable schema tracking.
DELTA_STREAMING_SCHEMA_LOCATION_CONFLICT
Detected conflicting schema location ‘<loc>
’ while streaming from table or table located at ‘<table>
’.
Another stream may be reusing the same schema location, which is not allowed.
Please provide a new unique schemaTrackingLocation
path or streamingSourceTrackingId
as a reader option for one of the streams from this table.
DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT
Schema location ‘<schemaTrackingLocation>
’ must be placed under checkpoint location ‘<checkpointLocation>
’.
DELTA_STREAMING_SCHEMA_LOG_DESERIALIZE_FAILED
Incomplete log file in the Delta streaming source schema log at ‘<location>
’.
The schema log may have been corrupted. Please pick a new schema location.
DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_DELTA_TABLE_ID
Detected incompatible Delta table id when trying to read Delta stream.
Persisted table id: <persistedId>
, Table id: <tableId>
The schema log might have been reused. Please pick a new schema location.
DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_PARTITION_SCHEMA
Detected incompatible partition schema when trying to read Delta stream.
Persisted schema: <persistedSchema>
, Delta partition schema: <partitionSchema>
Please pick a new schema location to reinitialize the schema log if you have manually changed the table’s partition schema recently.
DELTA_STREAMING_SCHEMA_LOG_INIT_FAILED_INCOMPATIBLE_METADATA
We could not initialize the Delta streaming source schema log because
we detected an incompatible schema or protocol change while serving a streaming batch from table version <a>
to <b>
.
DELTA_STREAMING_SCHEMA_LOG_PARSE_SCHEMA_FAILED
Failed to parse the schema from the Delta streaming source schema log.
The schema log may have been corrupted. Please pick a new schema location.
DELTA_TABLE_ALREADY_CONTAINS_CDC_COLUMNS
Unable to enable Change Data Capture on the table. The table already contains
reserved columns <columnList>
that will
be used internally as metadata for the table’s Change Data Feed. To enable
Change Data Feed on the table, rename or drop these columns.
DELTA_TABLE_FOR_PATH_UNSUPPORTED_HADOOP_CONF
Currently DeltaTable.forPath only supports hadoop configuration keys starting with <allowedPrefixes>
but got <unsupportedOptions>
DELTA_TABLE_ID_MISMATCH
The Delta table at <tableLocation>
has been replaced while this command was using the table.
Table id was <oldId>
but is now <newId>
.
Please retry the current command to ensure it reads a consistent view of the table.
DELTA_TABLE_LOCATION_MISMATCH
The location of the existing table <tableName>
is <existingTableLocation>
. It doesn’t match the specified location <tableLocation>
.
DELTA_TABLE_ONLY_OPERATION
<tableName>
is not a Delta table. <operation>
is only supported for Delta tables.
DELTA_TIMESTAMP_GREATER_THAN_COMMIT
The provided timestamp (<providedTimestamp>
) is after the latest version available to this
table (<tableName>
). Please use a timestamp before or at <maximumTimestamp>
.
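For illustration, a minimal Scala sketch of time travel with an in-range timestamp, assuming a hypothetical path and timestamp; the timestamp should be at or before the <maximumTimestamp> from the message:
val df = spark.read
  .format("delta")
  // Pick a timestamp no later than the table's latest available version.
  .option("timestampAsOf", "2024-01-01 00:00:00")
  .load("/delta/events")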
DELTA_TRUNCATED_TRANSACTION_LOG
<path>
: Unable to reconstruct state at version <version>
as the transaction log has been truncated due to manual deletion or the log retention policy (<logRetentionKey>=<logRetention>
) and checkpoint retention policy (<checkpointRetentionKey>=<checkpointRetention>
)
DELTA_TRUNCATE_TABLE_PARTITION_NOT_SUPPORTED
Operation not allowed: TRUNCATE TABLE
on Delta tables does not support partition predicates; use DELETE
to delete specific partitions or rows.
DELTA_UDF_IN_GENERATED_COLUMN
Found <udfExpr>
. A generated column cannot use a user-defined function
DELTA_UNEXPECTED_NUM_PARTITION_COLUMNS_FROM_FILE_NAME
Expecting <expectedColsSize>
partition column(s): <expectedCols>
, but found <parsedColsSize>
partition column(s): <parsedCols>
from parsing the file name: <path>
DELTA_UNEXPECTED_PARTIAL_SCAN
Expected a full scan of Delta sources, but found a partial scan. path: <path>
DELTA_UNEXPECTED_PARTITION_COLUMN_FROM_FILE_NAME
Expecting partition column <expectedCol>
, but found partition column <parsedCol>
from parsing the file name: <path>
DELTA_UNEXPECTED_PARTITION_SCHEMA_FROM_USER
CONVERT
TO DELTA
was called with a partition schema that differs from the partition schema inferred from the catalog. Please avoid providing the schema so that the partition schema can be chosen from the catalog.
catalog partition schema:
<catalogPartitionSchema>
provided partition schema:
<userPartitionSchema>
DELTA_UNIFORM_COMPATIBILITY_LOCATION_CANNOT_BE_CHANGED
delta.universalFormat.compatibility.location cannot be changed.
DELTA_UNIFORM_COMPATIBILITY_LOCATION_NOT_REGISTERED
delta.universalFormat.compatibility.location is not registered in the catalog.
DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION
Missing or invalid location for Uniform compatibility format. Please set an empty directory for delta.universalFormat.compatibility.location.
Failed reason:
For more details see DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION
DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION
Read Iceberg with Delta Uniform has failed.
For more details see DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION
DELTA_UNIFORM_INGRESS_NOT_SUPPORTED_FORMAT
Format <fileFormat>
is not supported. Only Iceberg is supported as the original file format.
DELTA_UNIFORM_REFRESH_NOT_SUPPORTED
REFRESH
identifier SYNC UNIFORM
is not supported for reason:
For more details see DELTA_UNIFORM_REFRESH_NOT_SUPPORTED
DELTA_UNIFORM_REFRESH_NOT_SUPPORTED_FOR_MANAGED_ICEBERG_TABLE_WITH_METADATA_PATH
REFRESH TABLE
with METADATA_PATH
is not supported for managed Iceberg tables
DELTA_UNIVERSAL_FORMAT_CONVERSION_FAILED
Failed to convert the table version <version>
to the universal format <format>
. <message>
DELTA_UNIVERSAL_FORMAT_VIOLATION
The validation of Universal Format (<format>
) has failed: <violation>
DELTA_UNRECOGNIZED_COLUMN_CHANGE
Unrecognized column change <otherClass>
. You may be running an out-of-date Delta Lake version.
DELTA_UNSET_NON_EXISTENT_PROPERTY
Attempted to unset non-existent property ‘<property>
’ in table <tableName>
DELTA_UNSUPPORTED_ALTER_TABLE_CHANGE_COL_OP
ALTER TABLE CHANGE COLUMN
is not supported for changing column <fieldPath>
from <oldField>
to <newField>
DELTA_UNSUPPORTED_ALTER_TABLE_REPLACE_COL_OP
Unsupported ALTER TABLE REPLACE COLUMNS
operation. Reason: <details>
Failed to change schema from:
<oldSchema>
to:
<newSchema>
DELTA_UNSUPPORTED_CLONE_REPLACE_SAME_TABLE
You tried to REPLACE
an existing table (<tableName>
) with CLONE
. This operation is
unsupported. Try a different target for CLONE
or delete the table at the current target.
DELTA_UNSUPPORTED_COLUMN_MAPPING_MODE_CHANGE
Changing column mapping mode from ‘<oldMode>
’ to ‘<newMode>
’ is not supported.
DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL
Your current table protocol version does not support changing column mapping modes
using <config>
.
Required Delta protocol version for column mapping:
<requiredVersion>
Your table’s current Delta protocol version:
<currentVersion>
<advice>
DELTA_UNSUPPORTED_COLUMN_MAPPING_SCHEMA_CHANGE
Schema change is detected:
old schema:
<oldTableSchema>
new schema:
<newTableSchema>
Schema changes are not allowed during the change of column mapping mode.
DELTA_UNSUPPORTED_COLUMN_TYPE_IN_BLOOM_FILTER
Creating a bloom filter index on a column with type <dataType>
is unsupported: <columnName>
DELTA_UNSUPPORTED_COMMENT_MAP_ARRAY
Can’t add a comment to <fieldPath>
. Adding a comment to a map key/value or array element is not supported.
DELTA_UNSUPPORTED_DATA_TYPES
Found columns using unsupported data types: <dataTypeList>
. You can set ‘<config>
’ to ‘false’ to disable the type check. Disabling this type check may allow users to create unsupported Delta tables and should only be used when trying to read/write legacy tables.
DELTA_UNSUPPORTED_DATA_TYPE_IN_GENERATED_COLUMN
<dataType>
cannot be the result of a generated column
DELTA_UNSUPPORTED_DESCRIBE_DETAIL_VIEW
<view>
is a view. DESCRIBE DETAIL
is only supported for tables.
DELTA_UNSUPPORTED_DROP_NESTED_COLUMN_FROM_NON_STRUCT_TYPE
Can only drop nested columns from StructType. Found <struct>
DELTA_UNSUPPORTED_EXPRESSION
Unsupported expression type(<expType>
) for <causedBy>
. The supported types are [<supportedTypes>
].
DELTA_UNSUPPORTED_FEATURES_FOR_READ
Unsupported Delta read feature: table “<tableNameOrPath>
” requires reader table feature(s) that are unsupported by this version of Databricks: <unsupported>
. Please refer to <link>
for more information on Delta Lake feature compatibility.
DELTA_UNSUPPORTED_FEATURES_FOR_WRITE
Unsupported Delta write feature: table “<tableNameOrPath>
” requires writer table feature(s) that are unsupported by this version of Databricks: <unsupported>
. Please refer to <link>
for more information on Delta Lake feature compatibility.
DELTA_UNSUPPORTED_FEATURES_IN_CONFIG
Table feature(s) configured in the following Spark configs or Delta table properties are not recognized by this version of Databricks: <configs>
.
DELTA_UNSUPPORTED_FEATURE_STATUS
Expecting the status for table feature <feature>
to be “supported”, but got “<status>
”.
DELTA_UNSUPPORTED_FIELD_UPDATE_NON_STRUCT
Updating nested fields is only supported for StructType, but you are trying to update a field of <columnName>
, which is of type: <dataType>
.
DELTA_UNSUPPORTED_FSCK_WITH_DELETION_VECTORS
The ‘FSCK REPAIR TABLE
’ command is not supported on table versions with missing deletion vector files.
Please contact support.
DELTA_UNSUPPORTED_GENERATE_WITH_DELETION_VECTORS
The ‘GENERATE
symlink_format_manifest’ command is not supported on table versions with deletion vectors.
In order to produce a version of the table without deletion vectors, run ‘REORG TABLE
table APPLY (PURGE
)’. Then re-run the ‘GENERATE
’ command.
Make sure that no concurrent transactions are adding deletion vectors again between REORG
and GENERATE
.
If you need to generate manifests regularly, or you cannot prevent concurrent transactions, consider disabling deletion vectors on this table using ‘ALTER TABLE
table SET TBLPROPERTIES
(delta.enableDeletionVectors = false)’.
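For illustration, a minimal Scala sketch of the sequence described above, assuming a hypothetical table name my_table and no concurrent transactions adding deletion vectors between the two statements:
// Rewrite files so the current table version carries no deletion vectors.
spark.sql("REORG TABLE my_table APPLY (PURGE)")
// Then regenerate the symlink manifest from the purged version.
spark.sql("GENERATE symlink_format_manifest FOR TABLE my_table")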
DELTA_UNSUPPORTED_INVARIANT_NON_STRUCT
Invariants on nested fields other than StructTypes are not supported.
DELTA_UNSUPPORTED_MANIFEST_GENERATION_WITH_COLUMN_MAPPING
Manifest generation is not supported for tables that leverage column mapping, as external readers cannot read these Delta tables. See Delta documentation for more details.
DELTA_UNSUPPORTED_MERGE_SCHEMA_EVOLUTION_WITH_CDC
MERGE INTO
operations with schema evolution do not currently support writing CDC output.
DELTA_UNSUPPORTED_MULTI_COL_IN_PREDICATE
Multi-column In predicates are not supported in the <operation>
condition.
DELTA_UNSUPPORTED_NESTED_COLUMN_IN_BLOOM_FILTER
Creating a bloom filter index on a nested column is currently unsupported: <columnName>
DELTA_UNSUPPORTED_NESTED_FIELD_IN_OPERATION
Nested field is not supported in the <operation>
(field = <fieldName>
).
DELTA_UNSUPPORTED_NON_EMPTY_CLONE
The clone destination table is non-empty. Please TRUNCATE
or DELETE FROM
the table before running CLONE
.
DELTA_UNSUPPORTED_PARTITION_COLUMN_IN_BLOOM_FILTER
Creating a bloom filter index on a partitioning column is unsupported: <columnName>
DELTA_UNSUPPORTED_STATIC_PARTITIONS
Specifying static partitions in the partition spec is currently not supported during inserts
DELTA_UNSUPPORTED_SUBQUERY_IN_PARTITION_PREDICATES
Subquery is not supported in partition predicates.
DELTA_UNSUPPORTED_TIME_TRAVEL_VIEWS
Cannot time travel views, subqueries, streams or change data feed queries.
DELTA_UNSUPPORTED_TYPE_CHANGE_IN_SCHEMA
Unable to operate on this table because an unsupported type change was applied. Field <fieldName>
was changed from <fromType>
to <toType>
.
DELTA_UNSUPPORTED_VACUUM_SPECIFIC_PARTITION
Please provide the base path (<baseDeltaPath>
) when Vacuuming Delta tables. Vacuuming specific partitions is currently not supported.
DELTA_UNSUPPORTED_WRITES_WITHOUT_COORDINATOR
You are trying to perform writes on a table which has been registered with the commit coordinator <coordinatorName>
. However, no implementation of this coordinator is available in the current environment and writes without coordinators are not allowed.
DELTA_UPDATE_SCHEMA_MISMATCH_EXPRESSION
Cannot cast <fromCatalog>
to <toCatalog>
. All nested columns must match.
DELTA_VACUUM_COPY_INTO_STATE_FAILED
VACUUM
on data files succeeded, but COPY INTO
state garbage collection failed.
DELTA_VERSIONS_NOT_CONTIGUOUS
Versions (<versionList>
) are not contiguous.
For more details see DELTA_VERSIONS_NOT_CONTIGUOUS
DELTA_VIOLATE_CONSTRAINT_WITH_VALUES
CHECK
constraint <constraintName> <expression>
violated by row with values:
<values>
DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED
The validation of the properties of table <table>
has been violated:
For more details see DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED
DELTA_ZORDERING_ON_COLUMN_WITHOUT_STATS
Z-Ordering on <cols>
will be
ineffective, because we currently do not collect stats for these columns. Please refer to
<link>
for more information on data skipping and z-ordering. You can disable
this check by setting
‘%%sql set <zorderColStatKey>
= false’
Delta Sharing
DELTA_SHARING_ACTIVATION_NONCE_DOES_NOT_EXIST
SQLSTATE: none assigned
Activation nonce not found. The activation link you used is invalid or has expired. Regenerate the activation link and try again.
DELTA_SHARING_GET_RECIPIENT_PROPERTIES_INVALID_DEPENDENT
SQLSTATE: none assigned
The view defined with the current_recipient
function is for sharing only and can only be queried from the data recipient side. The provided securable with id <securableId>
is not a Delta Sharing View.
DELTA_SHARING_MUTABLE_SECURABLE_KIND_NOT_SUPPORTED
SQLSTATE: none assigned
The provided securable kind <securableKind>
does not support mutability in Delta Sharing.
DELTA_SHARING_ROTATE_TOKEN_NOT_AUTHORIZED_FOR_MARKETPLACE
SQLSTATE: none assigned
The provided securable kind <securableKind>
does not support rotate token action initiated by Marketplace service.
DS_AUTH_TYPE_NOT_AVAILABLE
SQLSTATE: none assigned
<dsError>
: Authentication type not available in provider entity <providerEntity>
.
DS_CDF_NOT_ENABLED
SQLSTATE: none assigned
<dsError>
: Unable to access change data feed for <tableName>
. CDF is not enabled on the original Delta table. Please contact your data provider.
DS_DATA_MATERIALIZATION_COMMAND_FAILED
SQLSTATE: none assigned
<dsError>
: Data materialization task run <runId>
from org <orgId>
failed at command <command>
DS_DATA_MATERIALIZATION_COMMAND_NOT_SUPPORTED
SQLSTATE: none assigned
<dsError>
: Data materialization task run <runId>
from org <orgId>
does not support command <command>
DS_DATA_MATERIALIZATION_NO_VALID_NAMESPACE
SQLSTATE: none assigned
<dsError>
: Could not find valid namespace to create materialization for <tableName>
. Please contact your data provider to fix this.
DS_DATA_MATERIALIZATION_RUN_DOES_NOT_EXIST
SQLSTATE: none assigned
<dsError>
: Data materialization task run <runId>
from org <orgId>
does not exist
DS_DELTA_MISSING_CHECKPOINT_FILES
SQLSTATE: none assigned
<dsError>
: Couldn’t find all part files of the checkpoint at version: <version>
. <suggestion>
DS_EXPIRE_TOKEN_NOT_AUTHORIZED_FOR_MARKETPLACE
SQLSTATE: none assigned
<dsError>
: The provided securable kind <securableKind>
does not support expire token action initiated by Marketplace service.
DS_FLAKY_NETWORK_CONNECTION
SQLSTATE: none assigned
<dsError>
: Network connection is flaky for <rpcName>
, please retry.<traceId>
DS_MATERIALIZATION_QUERY_FAILED
SQLSTATE: none assigned
<dsError>
: Query failed for <schema>
.<table>
from Share <share>
.
DS_MATERIALIZATION_QUERY_TIMEDOUT
SQLSTATE: none assigned
<dsError>
: Query timed out for <schema>
.<table>
from Share <share>
after <timeoutInSec>
seconds.
DS_MISSING_IDEMPOTENCY_KEY
SQLSTATE: none assigned
<dsError>
: Idempotency key is required when querying <schema>
.<table>
from Share <share>
asynchronously.
DS_MORE_THAN_ONE_RPC_PARAMETER_SET
SQLSTATE: none assigned
<dsError>
: Please only provide one of: <parameters>
.
DS_NO_METASTORE_ASSIGNED
SQLSTATE: none assigned
<dsError>
: No metastore assigned for the current workspace (workspaceId: <workspaceId>
).
DS_PAGINATION_AND_QUERY_ARGS_MISMATCH
SQLSTATE: none assigned
<dsError>
: Pagination or query arguments mismatch.
DS_PARTITION_COLUMNS_RENAMED
SQLSTATE: none assigned
<dsError>
: Partition column [<renamedColumns>
] renamed on the shared table. Please contact your data provider to fix this.
DS_QUERY_BEFORE_START_VERSION
SQLSTATE: none assigned
<dsError>
: You can only query table data since version <startVersion>
.
DS_QUERY_TIMEOUT_ON_SERVER
SQLSTATE: none assigned
<dsError>
: A timeout occurred when processing <queryType>
on <tableName>
after <numActions>
updates across <numIter>
iterations.<progressUpdate> <suggestion> <traceId>
DS_RESOURCE_EXHAUSTED
SQLSTATE: none assigned
<dsError>
: The <resource>
exceeded limit: [<limitSize>
]<suggestion>
.<traceId>
DS_SYSTEM_WORKSPACE_GROUP_PERMISSION_UNSUPPORTED
SQLSTATE: none assigned
Cannot grant privileges on <securableType>
to system generated group <principal>
.
DS_TIME_TRAVEL_NOT_PERMITTED
SQLSTATE: none assigned
<dsError>
: Time travel query is not permitted unless history is shared on <tableName>
. Please contact your data provider.
DS_UNAUTHORIZED_D2O_OIDC_RECIPIENT
SQLSTATE: none assigned
<dsError>
: Unauthorized D2O OIDC Recipient: <message>
.
DS_UNKNOWN_QUERY_ID
SQLSTATE: none assigned
<dsError>
: Unknown query id <queryID>
for <schema>
.<table>
from Share <share>
.
DS_UNKNOWN_QUERY_STATUS
SQLSTATE: none assigned
<dsError>
: Unknown query status for query id <queryID>
for <schema>
.<table>
from Share <share>
.
DS_UNSUPPORTED_DELTA_READER_VERSION
SQLSTATE: none assigned
<dsError>
: Delta protocol reader version <tableReaderVersion>
is higher than <supportedReaderVersion>
and cannot be supported in the delta sharing server.
DS_UNSUPPORTED_DELTA_TABLE_FEATURES
SQLSTATE: none assigned
<dsError>
: Table features <tableFeatures>
are found in table <versionStr> <historySharingStatusStr> <optionStr>
DS_UNSUPPORTED_STORAGE_SCHEME
SQLSTATE: none assigned
<dsError>
: Unsupported storage scheme: <scheme>
.
DS_UNSUPPORTED_TABLE_TYPE
SQLSTATE: none assigned
<dsError>
: Could not retrieve <schema>
.<table>
from Share <share>
because table with type [<tableType>
] is currently unsupported in Delta Sharing protocol.
DS_VIEW_SHARING_FUNCTIONS_NOT_ALLOWED
SQLSTATE: none assigned
<dsError>
: The following function(s): <functions>
are not allowed in the view sharing query.
Autoloader
CF_ADD_NEW_NOT_SUPPORTED
Schema evolution mode <addNewColumnsMode>
is not supported when the schema is specified. To use this mode, you can provide the schema through cloudFiles.schemaHints
instead.
CF_AMBIGUOUS_AUTH_OPTIONS_ERROR
Found notification-setup authentication options for the (default) directory
listing mode:
<options>
If you wish to use the file notification mode, please explicitly set:
.option(“cloudFiles.<useNotificationsKey>
”, “true”)
Alternatively, if you want to skip the validation of your options and ignore these
authentication options, you can set:
.option(“cloudFiles.<validateOptionsKey>”, “false”)
CF_AMBIGUOUS_INCREMENTAL_LISTING_MODE_ERROR
Incremental listing mode (cloudFiles.<useIncrementalListingKey>
)
and file notification (cloudFiles.<useNotificationsKey>
)
have been enabled at the same time.
Please make sure that you select only one.
CF_BUCKET_MISMATCH
The <storeType>
in the file event <fileEvent>
is different from expected by the source: <source>
.
CF_CANNOT_EVOLVE_SCHEMA_LOG_EMPTY
Cannot evolve schema when the schema log is empty. Schema log location: <logPath>
CF_CANNOT_RESOLVE_CONTAINER_NAME
Cannot resolve container name from path: <path>
, Resolved uri: <uri>
CF_CANNOT_RUN_DIRECTORY_LISTING
Cannot run directory listing when there is an async backfill thread running
CF_CLEAN_SOURCE_ALLOW_OVERWRITES_BOTH_ON
Cannot turn on cloudFiles.cleanSource and cloudFiles.allowOverwrites at the same time.
CF_CLEAN_SOURCE_UNAUTHORIZED_WRITE_PERMISSION
Auto Loader cannot delete processed files because it does not have write permissions to the source directory.
<reason>
To fix this, you can either:
Grant write permissions to the source directory OR
Set cleanSource to ‘OFF’
You could also unblock your stream by setting the SQLConf spark.databricks.cloudFiles.cleanSource.disabledDueToAuthorizationErrors to ‘true’.
CF_DUPLICATE_COLUMN_IN_DATA
There was an error when trying to infer the partition schema of your table. You have the same column duplicated in your data and partition paths. To ignore the partition value, please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnsKey>
”, “{comma-separated-list}”)
CF_EMPTY_DIR_FOR_SCHEMA_INFERENCE
Cannot infer schema when the input path <path>
is empty. Please try to start the stream when there are files in the input path, or specify the schema.
CF_EVENT_GRID_AUTH_ERROR
Failed to create an Event Grid subscription. Please make sure that your service
principal has <permissionType>
Event Grid Subscriptions. See more details at:
<docLink>
CF_EVENT_GRID_CREATION_FAILED
Failed to create event grid subscription. Please ensure that Microsoft.EventGrid is
registered as resource provider in your subscription. See more details at:
<docLink>
CF_EVENT_GRID_NOT_FOUND_ERROR
Failed to create an Event Grid subscription. Please make sure that your storage
account (<storageAccount>
) is under your resource group (<resourceGroup>
) and that
the storage account is a “StorageV2 (general purpose v2)” account. See more details at:
<docLink>
CF_EVENT_NOTIFICATION_NOT_SUPPORTED
Auto Loader event notification mode is not supported for <cloudStore>
.
CF_FAILED_TO_CREATED_PUBSUB_SUBSCRIPTION
Failed to create subscription: <subscriptionName>
. A subscription with the same name already exists and is associated with another topic: <otherTopicName>
. The desired topic is <proposedTopicName>
. Either delete the existing subscription or create a subscription with a new resource suffix.
CF_FAILED_TO_CREATED_PUBSUB_TOPIC
Failed to create topic: <topicName>
. A topic with the same name already exists.<reason>
Remove the existing topic or try again with another resource suffix
CF_FAILED_TO_DELETE_GCP_NOTIFICATION
Failed to delete notification with id <notificationId>
on bucket <bucketName>
for topic <topicName>
. Please retry or manually remove the notification through the GCP console.
CF_FAILED_TO_DESERIALIZE_PERSISTED_SCHEMA
Failed to deserialize persisted schema from string: ‘<jsonSchema>
’
CF_FAILED_TO_INFER_SCHEMA
Failed to infer schema for format <fileFormatInput>
from existing files in input path <path>
.
For more details see CF_FAILED_TO_INFER_SCHEMA
CF_FOUND_MULTIPLE_AUTOLOADER_PUBSUB_SUBSCRIPTIONS
Found multiple (<num>
) subscriptions with the Auto Loader prefix for topic <topicName>
:
<subscriptionList>
There should only be one subscription per topic. Please manually ensure that your topic does not have multiple subscriptions.
CF_GCP_AUTHENTICATION
Please either provide all of the following: <clientEmail>
, <client>
,
<privateKey>
, and <privateKeyId>
or provide none of them in order to use the default
GCP credential provider chain for authenticating with GCP resources.
CF_GCP_LABELS_COUNT_EXCEEDED
Received too many labels (<num>
) for GCP resource. The maximum label count per resource is <maxNum>
.
CF_GCP_RESOURCE_TAGS_COUNT_EXCEEDED
Received too many resource tags (<num>
) for GCP resource. The maximum resource tag count per resource is <maxNum>
, as resource tags are stored as GCP labels on resources, and Databricks specific tags consume some of this label quota.
CF_INCORRECT_BATCH_USAGE
CloudFiles is a streaming source. Please use spark.readStream instead of spark.read. To disable this check, set <cloudFilesFormatValidationEnabled>
to false.
CF_INCORRECT_SQL_PARAMS
The cloud_files method accepts two required string parameters: the path to load from, and the file format. File reader options must be provided in a string key-value map, e.g. cloud_files(“path”, “json”, map(“option1”, “value1”)). Received: <params>
CF_INCORRECT_STREAM_USAGE
To use ‘cloudFiles’ as a streaming source, please provide the file format with the option ‘cloudFiles.format’, and use .load() to create your DataFrame. To disable this check, set <cloudFilesFormatValidationEnabled>
to false.
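For illustration, a minimal Scala sketch of the streaming usage these two entries ask for, assuming a hypothetical input path and JSON data:
val df = spark.readStream
  .format("cloudFiles")
  // 'cloudFiles.format' is required when using Auto Loader as a source.
  .option("cloudFiles.format", "json")
  .load("/input/path")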
CF_INVALID_AZURE_CERTIFICATE
The private key provided with option cloudFiles.certificate cannot be parsed. Please provide a valid public key in PEM format.
CF_INVALID_AZURE_CERT_PRIVATE_KEY
The private key provided with option cloudFiles.certificatePrivateKey cannot be parsed. Please provide a valid private key in PEM format.
CF_INVALID_GCP_RESOURCE_TAG_KEY
Invalid resource tag key for GCP resource: <key>
. Keys must start with a lowercase letter, be within 1 to 63 characters long, and contain only lowercase letters, numbers, underscores (_), and hyphens (-).
CF_INVALID_GCP_RESOURCE_TAG_VALUE
Invalid resource tag value for GCP resource: <value>
. Values must be within 0 to 63 characters long and must contain only lowercase letters, numbers, underscores (_), and hyphens (-).
CF_INVALID_MANAGED_FILE_EVENTS_OPTION_KEYS
Auto Loader does not support the following options when used with managed file events:
<optionList>
We recommend that you remove these options and then restart the stream.
CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE
Invalid response from managed file events service. Please contact Databricks support for assistance.
For more details see CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE
CF_INVALID_SCHEMA_EVOLUTION_MODE
cloudFiles.<schemaEvolutionModeKey>
must be one of {
“<addNewColumns>
”
“<failOnNewColumns>
”
“<rescue>
”
“<noEvolution>
”}
CF_INVALID_SCHEMA_HINTS_OPTION
Schema hints can only specify a particular column once.
In this case, redefining column: <columnName>
multiple times in schemaHints:
<schemaHints>
CF_INVALID_SCHEMA_HINT_COLUMN
Schema hints can not be used to override maps’ and arrays’ nested types.
Conflicted column: <columnName>
CF_MANAGED_FILE_EVENTS_BACKFILL_IN_PROGRESS
You have requested Auto Loader to ignore existing files in your external location by setting includeExistingFiles to false. However, the managed file events service is still discovering existing files in your external location. Please try again after managed file events has completed discovering all files in your external location.
CF_MANAGED_FILE_EVENTS_ENDPOINT_NOT_FOUND
You are using Auto Loader with managed file events, but it appears that the external location for your input path ‘<path>
’ does not have file events enabled or the input path is invalid. Please request your Databricks Administrator to enable file events on the external location for your input path.
CF_MANAGED_FILE_EVENTS_ENDPOINT_PERMISSION_DENIED
You are using Auto Loader with managed file events, but you do not have access to the external location or volume for input path ‘<path>
’ or the input path is invalid. Please request your Databricks Administrator to grant read permissions for the external location or volume or provide a valid input path within an existing external location or volume.
CF_MANAGED_FILE_EVENTS_ONLY_ON_SERVERLESS
Auto Loader with managed file events is only available on Databricks serverless. To continue, please move this workload to Databricks serverless or turn off the cloudFiles.useManagedFileEvents option.
CF_MISSING_METADATA_FILE_ERROR
The metadata file in the streaming source checkpoint directory is missing. This metadata
file contains important default options for the stream, so the stream cannot be restarted
right now. Please contact Databricks support for assistance.
CF_MISSING_PARTITION_COLUMN_ERROR
Partition column <columnName>
does not exist in the provided schema:
<schema>
CF_MISSING_SCHEMA_IN_PATHLESS_MODE
Please specify a schema using .schema() if a path is not provided to the CloudFiles source while using file notification mode. Alternatively, to have Auto Loader infer the schema, please provide a base path in .load().
CF_MULTIPLE_PUBSUB_NOTIFICATIONS_FOR_TOPIC
Found existing notifications for topic <topicName>
on bucket <bucketName>
:
notification,id
<notificationList>
To avoid polluting the subscriber with unintended events, please delete the above notifications and retry.
CF_NEW_PARTITION_ERROR
New partition columns were inferred from your files: [<filesList>
]. Please provide all partition columns in your schema or provide a list of partition columns which you would like to extract values for by using: .option(“cloudFiles.partitionColumns”, “{comma-separated-list|empty-string}”)
CF_PARTITON_INFERENCE_ERROR
There was an error when trying to infer the partition schema of the current batch of files. Please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnOption>
”, “{comma-separated-list}”)
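For illustration, a minimal Scala sketch of naming partition columns explicitly, assuming a hypothetical path and hypothetical columns year and month; passing an empty string instead ignores partition values entirely:
val df = spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "parquet")
  // Extract values only for these directory-style partition columns.
  .option("cloudFiles.partitionColumns", "year,month")
  .load("/input/path")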
CF_PATH_DOES_NOT_EXIST_FOR_READ_FILES
Cannot read files when the input path <path>
does not exist. Please make sure the input path exists and re-try.
CF_PERIODIC_BACKFILL_NOT_SUPPORTED
Periodic backfill is not supported if asynchronous backfill is disabled. You can enable asynchronous backfill/directory listing by setting spark.databricks.cloudFiles.asyncDirListing
to true
CF_PROTOCOL_MISMATCH
<message>
If you don’t need to make any other changes to your code, then please set the SQL
configuration: ‘<sourceProtocolVersionKey> = <value>
’
to resume your stream. Please refer to:
<docLink>
for more details.
CF_REGION_NOT_FOUND_ERROR
Could not get default AWS Region. Please specify a region using the cloudFiles.region option.
CF_RESOURCE_SUFFIX_EMPTY
Failed to create notification services: the resource suffix cannot be empty.
CF_RESOURCE_SUFFIX_INVALID_CHAR_AWS
Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-) and underscores (_).
CF_RESOURCE_SUFFIX_INVALID_CHAR_AZURE
Failed to create notification services: the resource suffix can only have lowercase letters, numbers, and dashes (-).
CF_RESOURCE_SUFFIX_INVALID_CHAR_GCP
Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-), underscores (_), periods (.), tildes (~), plus signs (+), and percent signs (<percentSign>
).
CF_RESOURCE_SUFFIX_LIMIT
Failed to create notification services: the resource suffix cannot have more than <limit>
characters.
CF_RESOURCE_SUFFIX_LIMIT_GCP
Failed to create notification services: the resource suffix must be between <lowerLimit>
and <upperLimit>
characters.
CF_RESTRICTED_GCP_RESOURCE_TAG_KEY
Found restricted GCP resource tag key (<key>
). The following GCP resource tag keys are restricted for Auto Loader: [<restrictedKeys>
]
CF_RETENTION_GREATER_THAN_MAX_FILE_AGE
cloudFiles.cleanSource.retentionDuration cannot be greater than cloudFiles.maxFileAge.
CF_SAME_PUB_SUB_TOPIC_NEW_KEY_PREFIX
Failed to create notification for topic: <topic>
with prefix: <prefix>
. There is already a topic with the same name with another prefix: <oldPrefix>
. Try using a different resource suffix for setup or delete the existing setup.
CF_SOURCE_UNSUPPORTED
The cloud files source currently only supports S3, Azure Blob Storage (wasb/wasbs), Azure Data Lake Gen1 (adl), and Gen2 (abfs/abfss) paths. path: ‘<path>
’, resolved uri: ‘<uri>
’
CF_STATE_INCORRECT_SQL_PARAMS
The cloud_files_state function accepts a string parameter representing the checkpoint directory of a cloudFiles stream or a multi-part tableName identifying a streaming table, and an optional second integer parameter representing the checkpoint version to load state for. The second parameter may also be ‘latest’ to read the latest checkpoint. Received: <params>
CF_STATE_INVALID_CHECKPOINT_PATH
The input checkpoint path <path>
is invalid. Either the path does not exist or there are no cloud_files sources found.
CF_STATE_INVALID_VERSION
The specified version <version>
does not exist, or was removed during analysis.
CF_UNABLE_TO_DERIVE_STREAM_CHECKPOINT_LOCATION
Unable to derive the stream checkpoint location from the source checkpoint location: <checkPointLocation>
CF_UNABLE_TO_DETECT_FILE_FORMAT
Unable to detect the source file format from <fileSize>
sampled file(s), found <formats>
. Please specify the format.
CF_UNABLE_TO_EXTRACT_BUCKET_INFO
Unable to extract bucket information. Path: ‘<path>
’, resolved uri: ‘<uri>
’.
CF_UNABLE_TO_EXTRACT_KEY_INFO
Unable to extract key information. Path: ‘<path>
’, resolved uri: ‘<uri>
’.
CF_UNABLE_TO_EXTRACT_STORAGE_ACCOUNT_INFO
Unable to extract storage account information; path: ‘<path>
’, resolved uri: ‘<uri>
’
CF_UNABLE_TO_LIST_EFFICIENTLY
Received a directory rename event for the path <path>
, but we are unable to list this directory efficiently. In order for the stream to continue, set the option ‘cloudFiles.ignoreDirRenames’ to true, and consider enabling regular backfills with cloudFiles.backfillInterval for this data to be processed.
CF_UNKNOWN_OPTION_KEYS_ERROR
Found unknown option keys:
<optionList>
Please make sure that all provided option keys are correct. If you want to skip the
validation of your options and ignore these unknown options, you can set:
.option(“cloudFiles.<validateOptions>
”, “false”)
CF_UNSUPPORTED_CLOUD_FILES_SQL_FUNCTION
The SQL function ‘cloud_files’ to create an Auto Loader streaming source is supported only in a Delta Live Tables pipeline. See more details at:
<docLink>
CF_UNSUPPORTED_FORMAT_FOR_SCHEMA_INFERENCE
Schema inference is not supported for format: <format>
. Please specify the schema.
CF_UNSUPPORTED_LOG_VERSION
UnsupportedLogVersion: the maximum supported log version is v<maxVersion>, but encountered v<version>. The log file was produced by a newer version of DBR and cannot be read by this version. Please upgrade.
CF_UNSUPPORTED_SCHEMA_EVOLUTION_MODE
Schema evolution mode <mode>
is not supported for format: <format>
. Please set the schema evolution mode to ‘none’.
CF_USE_DELTA_FORMAT
Reading from a Delta table is not supported with this syntax. If you would like to consume data from Delta, please refer to the docs: read a Delta table (<deltaDocLink>
), or read a Delta table as a stream source (<streamDeltaDocLink>
). The streaming source from Delta is already optimized for incremental consumption of data.
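For illustration, a minimal Scala sketch of consuming a Delta table as a stream instead of through the cloud files source, assuming a hypothetical path:
val df = spark.readStream
  .format("delta")
  .load("/delta/events")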
Geospatial
GEOJSON_PARSE_ERROR
Error parsing GeoJSON: <parseError>
at position <pos>
For more details see GEOJSON_PARSE_ERROR
H3_INVALID_GRID_DISTANCE_VALUE
H3 grid distance <k>
must be non-negative
For more details see H3_INVALID_GRID_DISTANCE_VALUE
H3_INVALID_RESOLUTION_VALUE
H3 resolution <r>
must be between <minR>
and <maxR>
, inclusive
For more details see H3_INVALID_RESOLUTION_VALUE
H3_NOT_ENABLED
<h3Expression>
is disabled or unsupported. Consider enabling Photon or switching to a tier that supports H3 expressions
For more details see H3_NOT_ENABLED
H3_PENTAGON_ENCOUNTERED_ERROR
A pentagon was encountered while computing the hex ring of <h3Cell>
with grid distance <k>
ST_DIFFERENT_SRID_VALUES
Arguments to “<sqlFunction>
” must have the same SRID value. SRID values found: <srid1>
, <srid2>
ST_INVALID_CRS_TRANSFORMATION_ERROR
<sqlFunction>
: Invalid or unsupported CRS transformation from SRID <srcSrid>
to SRID <trgSrid>
ST_INVALID_ENDIANNESS_VALUE
Endianness ‘<e>
’ must be either ‘NDR’ (little-endian) or ‘XDR’ (big-endian)
ST_INVALID_GEOHASH_VALUE
<sqlFunction>
: Invalid geohash value: ‘<geohash>
’. Geohash values must be valid lowercase base32 strings as described in https://en.wikipedia.org/wiki/Geohash#Textual_representation
ST_NOT_ENABLED
<stExpression>
is disabled or unsupported. Consider enabling Photon or switching to a tier that supports ST expressions
ST_UNSUPPORTED_RETURN_TYPE
The GEOGRAPHY
and GEOMETRY
data types cannot be returned in queries. Use one of the following SQL expressions to convert them to standard interchange formats: <projectionExprs>
.
WKB_PARSE_ERROR
Error parsing WKB: <parseError>
at position <pos>
For more details see WKB_PARSE_ERROR
WKT_PARSE_ERROR
Error parsing WKT: <parseError>
at position <pos>
For more details see WKT_PARSE_ERROR
Unity Catalog
CONFLICTING_COLUMN_NAMES_ERROR
Column <columnName>
conflicts with another column with the same name but with/without trailing whitespace (for example, an existing column named ‘<columnName> ’). Please give the column a different name.
CONNECTION_CREDENTIALS_NOT_SUPPORTED_FOR_ONLINE_TABLE_CONNECTION
SQLSTATE: none assigned
Invalid request to get connection-level credentials for connection of type <connectionType>
. Such credentials are only available for managed PostgreSQL connections.
CONNECTION_TYPE_NOT_ENABLED
SQLSTATE: none assigned
Connection type ‘<connectionType>
’ is not enabled. Please enable the connection to use it.
DELTA_SHARING_READ_ONLY_RECIPIENT_EXISTS
SQLSTATE: none assigned
There is already a Recipient object ‘<existingRecipientName>
’ with the same sharing identifier ‘<existingMetastoreId>
’.
DELTA_SHARING_READ_ONLY_SECURABLE_KIND
SQLSTATE: none assigned
Data of a Delta Sharing Securable Kind <securableKindName>
is read-only and cannot be created, modified, or deleted.
EXTERNAL_ACCESS_DISABLED_ON_METASTORE
SQLSTATE: none assigned
Credential vending is rejected for non-Databricks compute environments because External Data Access is disabled for metastore <metastoreName>
. Please contact your metastore admin to enable the ‘External Data Access’ configuration on the metastore.
EXTERNAL_ACCESS_NOT_ALLOWED_FOR_TABLE
SQLSTATE: none assigned
Table with id <tableId>
cannot be accessed from outside of Databricks Compute Environment due to its kind being <securableKind>
. Only ‘TABLE_EXTERNAL
’, ‘TABLE_DELTA_EXTERNAL
’ and ‘TABLE_DELTA
’ table kinds can be accessed externally.
EXTERNAL_USE_SCHEMA_ASSIGNED_TO_INCORRECT_SECURABLE_TYPE
SQLSTATE: none assigned
Privilege EXTERNAL
USE SCHEMA
is not applicable to this entity <assignedSecurableType>
and can only be assigned to a schema or catalog. Please remove the privilege from the <assignedSecurableType>
object and assign it to a schema or catalog instead.
EXTERNAL_WRITE_NOT_ALLOWED_FOR_TABLE
SQLSTATE: none assigned
Table with id <tableId>
cannot be written from outside of Databricks Compute Environment due to its kind being <securableKind>
. Only ‘TABLE_EXTERNAL
’ and ‘TABLE_DELTA_EXTERNAL
’ table kinds can be written externally.
FOREIGN_CATALOG_STORAGE_ROOT_MUST_SUPPORT_WRITES
SQLSTATE: none assigned
The storage location for a foreign catalog of type <catalogType>
will be used for unloading data and cannot be read-only.
HMS_SECURABLE_OVERLAP_LIMIT_EXCEEDED
SQLSTATE: none assigned
The number of <resourceType>
s for input path <url>
exceeds the allowed limit (<overlapLimit>
) for overlapping HMS <resourceType>
s.
INVALID_RESOURCE_NAME_DELTA_SHARING
SQLSTATE: none assigned
Delta Sharing requests are not supported using resource names
INVALID_RESOURCE_NAME_ENTITY_TYPE
SQLSTATE: none assigned
The provided resource name references entity type <provided>
but expected <expected>
INVALID_RESOURCE_NAME_METASTORE_ID
SQLSTATE: none assigned
The provided resource name references a metastore that is not in scope for the current request
LOCATION_OVERLAP
SQLSTATE: none assigned
Input path url ‘<path>
’ overlaps with <overlappingLocation>
within ‘<caller>
’ call. <conflictingSummary>
.
REDSHIFT_FOREIGN_CATALOG_STORAGE_ROOT_MUST_BE_S3
SQLSTATE: none assigned
The storage root for Redshift foreign catalog has to be AWS S3.
SECURABLE_KIND_DOES_NOT_SUPPORT_LAKEHOUSE_FEDERATION
SQLSTATE: none assigned
Securable with kind <securableKind>
does not support Lakehouse Federation.
SECURABLE_KIND_NOT_ENABLED
SQLSTATE: none assigned
Securable kind ‘<securableKind>
’ is not enabled. If this is a securable kind associated with a preview feature, please enable it in workspace settings.
SECURABLE_TYPE_DOES_NOT_SUPPORT_LAKEHOUSE_FEDERATION
SQLSTATE: none assigned
Securable with type <securableType>
does not support Lakehouse Federation.
SOURCE_TABLE_COLUMN_COUNT_EXCEEDS_LIMIT
SQLSTATE: none assigned
The source table has more than <columnCount>
columns. Please reduce the number of columns to <columnLimitation>
or fewer.
UC_AAD_TOKEN_LIFETIME_TOO_SHORT
SQLSTATE: none assigned
Exchanged AAD token lifetime is <lifetime>
, which is configured too short. Please check your Azure AD settings to make sure the temporary access token has a lifetime of at least an hour. See https://learn.microsoft.com/azure/active-directory/develop/active-directory-configurable-token-lifetimes
UC_AUTHZ_ACTION_NOT_SUPPORTED
SQLSTATE: none assigned
Authorizing <actionName>
is not supported; please check that the RPC invoked is implemented for this resource type
UC_BUILTIN_HMS_CONNECTION_CREATION_PERMISSION_DENIED
SQLSTATE: none assigned
Cannot create a connection for a builtin hive metastore because user: <userId>
is not the admin of the workspace: <workspaceId>
UC_BUILTIN_HMS_CONNECTION_MODIFY_RESTRICTED_FIELD
SQLSTATE: none assigned
Attempt to modify a restricted field in built-in HMS connection ‘<connectionName>
’. Only ‘warehouse_directory’ can be updated.
UC_CANNOT_RENAME_PARTITION_FILTERING_COLUMN
SQLSTATE: none assigned
Failed to rename table column <originalLogicalColumn>
because it’s used for partition filtering in <sharedTableName>
. In order to proceed, you can remove the table from the share, rename the column, and share it with the desired partition filtering columns again. However, this may break the streaming query for your recipient.
UC_CHILD_CREATION_FORBIDDEN_FOR_NON_UC_CLUSTER
SQLSTATE: none assigned
Cannot create <securableType>
‘<securable>
’ under <parentSecurableType>
‘<parentSecurable>
’ because the request is not from a UC cluster.
UC_CLOUD_STORAGE_ACCESS_FAILURE
SQLSTATE: none assigned
Failed to access cloud storage: <errMsg>
exceptionTraceId=<exceptionTraceId>
UC_CONFLICTING_CONNECTION_OPTIONS
SQLSTATE: none assigned
Cannot create a connection with both username/password and oauth authentication options. Please choose one.
UC_CONNECTION_EXISTS_FOR_CREDENTIAL
SQLSTATE: none assigned
Credential ‘<credentialName>
’ has one or more dependent connections. You may use force option to continue to update or delete the credential, but the connections using this credential may not work anymore.
UC_CONNECTION_EXPIRED_REFRESH_TOKEN
SQLSTATE: none assigned
The refresh token associated with the connection is expired. Please update the connection to restart the OAuth flow to retrieve a fresh token.
UC_CONNECTION_IN_FAILED_STATE
SQLSTATE: none assigned
The connection is in the FAILED
state. Please update the connection with valid credentials to reactivate it.
UC_CONNECTION_MISSING_REFRESH_TOKEN
SQLSTATE: none assigned
There is no refresh token associated with the connection. Please update the OAuth client integration in your identity provider to return refresh tokens, and update or recreate the connection to restart the OAuth flow and retrieve the necessary tokens.
UC_CONNECTION_OAUTH_EXCHANGE_FAILED
SQLSTATE: none assigned
The OAuth token exchange failed with HTTP status code <httpStatus>
. The returned server response or exception message is: <response>
UC_COORDINATED_COMMITS_NOT_ENABLED
SQLSTATE: none assigned
Support for coordinated commits is not enabled. Please contact Databricks support.
UC_CREATE_FORBIDDEN_UNDER_INACTIVE_SECURABLE
SQLSTATE: none assigned
Cannot create <securableType>
‘<securableName>
’ because it is under a <parentSecurableType>
‘<parentSecurableName>
’ that is not active. Please delete the parent securable and recreate the parent.
UC_CREDENTIAL_ACCESS_CONNECTOR_PARSING_FAILED
SQLSTATE: none assigned
Failed to parse the provided access connector ID: <accessConnectorId>
. Please verify its formatting and try again.
UC_CREDENTIAL_FAILED_TO_OBTAIN_VALIDATION_TOKEN
SQLSTATE: none assigned
Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.
UC_CREDENTIAL_INVALID_CLOUD_PERMISSIONS
SQLSTATE: none assigned
Registering a credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>
. Please contact your account admin.
UC_CREDENTIAL_INVALID_CREDENTIAL_TYPE_FOR_PURPOSE
SQLSTATE: none assigned
Credential type ‘<credentialType>
’ is not supported for purpose ‘<credentialPurpose>
’
UC_CREDENTIAL_PERMISSION_DENIED
SQLSTATE: none assigned
Only the account admin can create or update a credential with type <storageCredentialType>
.
UC_CREDENTIAL_TRUST_POLICY_IS_OPEN
SQLSTATE: none assigned
The trust policy of the IAM role that allows Databricks Account to assume the role should require an external ID. Please contact your account admin to add the external ID condition. This behavior guards against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).
UC_CREDENTIAL_UNPRIVILEGED_SERVICE_PRINCIPAL_NOT_SUPPORTED
SQLSTATE: none assigned
Service principals cannot use the CREATE_STORAGE_CREDENTIAL
privilege to register managed identities. To register a managed identity, please assign the service principal the account admin role.
UC_CREDENTIAL_WORKSPACE_API_PROHIBITED
SQLSTATE: none assigned
Creating or updating a credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.
UC_DELTA_UNIVERSAL_FORMAT_CANNOT_PARSE_ICEBERG_VERSION
SQLSTATE: none assigned
Unable to parse Iceberg table version from metadata location <metadataLocation>
.
UC_DELTA_UNIVERSAL_FORMAT_CONCURRENT_WRITE
SQLSTATE: none assigned
A concurrent update to the same iceberg metadata version was detected.
UC_DELTA_UNIVERSAL_FORMAT_INVALID_METADATA_LOCATION
SQLSTATE: none assigned
The committed metadata location <metadataLocation>
is invalid. It is not a subdirectory of the table’s root directory <tableRoot>
.
UC_DELTA_UNIVERSAL_FORMAT_MISSING_FIELD_CONSTRAINT
SQLSTATE: none assigned
The provided delta iceberg format conversion information is missing required fields.
UC_DELTA_UNIVERSAL_FORMAT_NON_CREATE_CONSTRAINT
SQLSTATE: none assigned
Setting delta iceberg format information on create is unsupported.
UC_DELTA_UNIVERSAL_FORMAT_TOO_LARGE_CONSTRAINT
SQLSTATE: none assigned
The provided delta iceberg format conversion information is too large.
UC_DELTA_UNIVERSAL_FORMAT_UPDATE_INVALID
SQLSTATE: none assigned
Uniform metadata can only be updated on Delta tables with uniform enabled.
UC_DEPENDENCY_DEPTH_LIMIT_EXCEEDED
SQLSTATE: none assigned
<resourceType>
‘<ref>
’ depth exceeds limit (or has a circular reference).
UC_DEPENDENCY_DOES_NOT_EXIST
SQLSTATE: none assigned
<resourceType>
‘<ref>
’ is invalid because one of the underlying resources does not exist. <cause>
UC_DEPENDENCY_PERMISSION_DENIED
SQLSTATE: none assigned
<resourceType>
‘<ref>
’ does not have sufficient privilege to execute because the owner of one of the underlying resources failed an authorization check. <cause>
UC_DUPLICATE_CONNECTION
SQLSTATE: none assigned
A connection: ‘<connectionName>
’ with the URL ‘<url>
’ already exists.
UC_DUPLICATE_FABRIC_CATALOG_CREATION
SQLSTATE: none assigned
Attempted to create a Fabric catalog with url ‘<storageLocation>
’ that matches an existing catalog, which is not allowed.
UC_DUPLICATE_TAG_ASSIGNMENT_CREATION
SQLSTATE: none assigned
Tag assignment with tag key <tagKey>
already exists
UC_ENTITY_DOES_NOT_HAVE_CORRESPONDING_ONLINE_CLUSTER
SQLSTATE: none assigned
Entity <securableType> <entityId>
does not have a corresponding online cluster.
UC_EXCEEDS_MAX_FILE_LIMIT
SQLSTATE: none assigned
There are more than <maxFileResults>
files. Please specify [max_results] to limit the number of files returned.
UC_EXTERNAL_LOCATION_OP_NOT_ALLOWED
SQLSTATE: none assigned
Cannot <opName> <extLoc> <reason>
. <suggestion>
.
UC_FOREIGN_CATALOG_FOR_CONNECTION_TYPE_NOT_SUPPORTED
SQLSTATE: none assigned
Creation of a foreign catalog for connection type ‘<connectionType>
’ is not supported. This connection type can only be used to create managed ingestion pipelines. Please reference Databricks documentation for more information.
UC_FOREIGN_CREDENTIAL_CHECK_ONLY_FOR_READ_OPERATIONS
SQLSTATE: none assigned
Only READ credentials can be retrieved for foreign tables.
UC_FOREIGN_KEY_CHILD_COLUMN_LENGTH_MISMATCH
SQLSTATE: none assigned
Foreign key <constraintName>
child columns and parent columns are of different sizes.
UC_FOREIGN_KEY_COLUMN_MISMATCH
SQLSTATE: none assigned
The foreign key parent columns do not match the referenced primary key child columns. Foreign key parent columns are (<parentColumns>
) and primary key child columns are (<primaryKeyChildColumns>
).
UC_FOREIGN_KEY_COLUMN_TYPE_MISMATCH
SQLSTATE: none assigned
The foreign key child column type does not match the parent column type. Foreign key child column <childColumnName>
has type <childColumnType>
and parent column <parentColumnName>
has type <parentColumnType>
.
UC_GCP_INVALID_PRIVATE_KEY
SQLSTATE: none assigned
Access denied. Cause: service account private key is invalid.
UC_GCP_INVALID_PRIVATE_KEY_JSON_FORMAT
SQLSTATE: none assigned
The Google Service Account OAuth private key has to be a valid JSON object with the required fields. Please make sure to provide the full JSON file generated from the ‘KEYS’ section of the service account details page.
UC_GCP_INVALID_PRIVATE_KEY_JSON_FORMAT_MISSING_FIELDS
SQLSTATE: none assigned
The Google Service Account OAuth private key must be a valid JSON object with the required fields. Please provide the full JSON file generated from the ‘KEYS’ section of the service account details page. Missing fields are <missingFields>
UC_IAM_ROLE_NON_SELF_ASSUMING
SQLSTATE: none assigned
The IAM role for this storage credential was found to be non-self-assuming. Please check your role’s trust and IAM policies to ensure that your IAM role can assume itself according to the Unity Catalog storage credential documentation.
UC_ICEBERG_COMMIT_CONFLICT
SQLSTATE: none assigned
Cannot commit <tableName>
: metadata location <baseMetadataLocation>
has changed from <catalogMetadataLocation>
.
UC_ICEBERG_COMMIT_INVALID_TABLE
SQLSTATE: none assigned
Cannot perform a Managed Iceberg commit to a non-Managed Iceberg table: <tableName>
.
UC_ICEBERG_COMMIT_MISSING_FIELD_CONSTRAINT
SQLSTATE: none assigned
The provided Managed Iceberg commit information is missing required fields.
UC_ID_MISMATCH
SQLSTATE: none assigned
The <type> <name>
does not have ID <wrongId>
. Please retry the operation.
UC_INVALID_ACCESS_DBFS_ENTITY
SQLSTATE: none assigned
Invalid access of <securableType> <securableName>
in the federated catalog <catalogName>
. <reason>
UC_INVALID_CREDENTIAL_CLOUD
SQLSTATE: none assigned
Invalid credential cloud provider ‘<cloud>
’. Allowed cloud provider ‘<allowedCloud>
’.
UC_INVALID_CREDENTIAL_PURPOSE_VALUE
SQLSTATE: none assigned
Invalid value ‘<value>
’ for credential’s ‘purpose’. Allowed values ‘<allowedValues>
’.
UC_INVALID_CREDENTIAL_TRANSITION
SQLSTATE: none assigned
Cannot update a connection from <startingCredentialType>
to <endingCredentialType>
. The only valid transition is from a username/password based connection to an OAuth token based connection.
UC_INVALID_CRON_STRING_FABRIC
SQLSTATE: none assigned
Invalid cron string. Found: ‘<cronString>
’ with parse exception: ‘<message>
’
UC_INVALID_DIRECT_ACCESS_MANAGED_TABLE
SQLSTATE: none assigned
Invalid direct access managed table <tableName>
. Make sure the source table and pipeline definition are not defined.
UC_INVALID_EMPTY_STORAGE_LOCATION
SQLSTATE: none assigned
Unexpected empty storage location for <securableType>
‘<securableName>
’ in catalog ‘<catalogName>
’. In order to fix this error, please run DESCRIBE SCHEMA <catalogName>
.<securableName>
and refresh this page.
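For example, the suggested remediation can be run from a notebook cell. A minimal sketch, assuming a Databricks notebook where `spark` is predefined; the catalog and schema names are hypothetical:

```python
# Hypothetical catalog and schema names; substitute your own.
catalog, schema = "main", "sales"

# DESCRIBE SCHEMA forces the schema's storage location to be resolved;
# refresh the page after it completes.
spark.sql(f"DESCRIBE SCHEMA {catalog}.{schema}").show(truncate=False)
```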
UC_INVALID_OPTIONS_UPDATE
SQLSTATE: none assigned
Invalid options provided for update. Invalid options: <invalidOptions>
. Allowed options: <allowedOptions>
.
UC_INVALID_OPTION_VALUE
SQLSTATE: none assigned
Invalid value ‘<value>
’ for ‘<option>
’. Allowed values ‘<allowedValues>
’.
UC_INVALID_OPTION_VALUE_EMPTY
SQLSTATE: none assigned
‘<option>
’ cannot be empty. Please enter a non-empty value.
UC_INVALID_RULE_CONDITION
SQLSTATE: none assigned
Invalid condition in rule ‘<ruleName>
’. Compilation error with message ‘<message>
’.
UC_INVALID_UPDATE_ON_SYSTEM_WORKSPACE_ADMIN_GROUP_OWNED_SECURABLE
SQLSTATE: none assigned
Cannot update <securableType>
‘<securableName>
’ as it’s owned by an internal group. Please contact Databricks support for additional details.
UC_INVALID_WASBS_EXTERNAL_LOCATION_STORAGE_CREDENTIAL
SQLSTATE: none assigned
Provided Storage Credential <storageCredentialName>
is not associated with DBFS Root, creation of wasbs External Location is prohibited.
UC_LOCATION_INVALID_SCHEME
SQLSTATE: none assigned
Storage location has invalid URI scheme: <scheme>
.
UC_MALFORMED_OAUTH_SERVER_RESPONSE
SQLSTATE: none assigned
The response from the token server was missing the field <missingField>
. The returned server response is: <response>
UC_METASTORE_ASSIGNMENT_STATUS_INVALID
SQLSTATE: none assigned
‘<metastoreAssignmentStatus>
’ cannot be assigned. Only MANUALLY_ASSIGNABLE
and AUTO_ASSIGNMENT_ENABLED
are supported.
UC_METASTORE_CERTIFICATION_NOT_ENABLED
SQLSTATE: none assigned
Metastore certification is not enabled.
UC_METASTORE_DB_SHARD_MAPPING_NOT_FOUND
SQLSTATE: none assigned
Failed to retrieve a metastore to database shard mapping for Metastore ID <metastoreId>
due to an internal error. Please contact Databricks support.
UC_METASTORE_HAS_ACTIVE_MANAGED_ONLINE_CATALOGS
SQLSTATE: none assigned
The metastore <metastoreId>
has <numberManagedOnlineCatalogs>
managed online catalog(s). Please explicitly delete them, then retry the metastore deletion.
UC_METASTORE_STORAGE_ROOT_CREDENTIAL_UPDATE_INVALID
SQLSTATE: none assigned
Metastore root credential cannot be defined when updating the metastore root location. The credential will be fetched from the metastore parent external location.
UC_METASTORE_STORAGE_ROOT_DELETION_INVALID
SQLSTATE: none assigned
Deletion of metastore storage root location failed. <reason>
UC_METASTORE_STORAGE_ROOT_READ_ONLY_INVALID
SQLSTATE: none assigned
The root <securableType>
for a metastore cannot be read-only.
UC_METASTORE_STORAGE_ROOT_UPDATE_INVALID
SQLSTATE: none assigned
Metastore storage root cannot be updated once it is set.
UC_MODEL_INVALID_STATE
SQLSTATE: none assigned
Cannot generate temporary ‘<opName>
’ credentials for model version <modelVersion>
with status <modelVersionStatus>
. ‘<opName>
’ credentials can only be generated for model versions with status <validStatus>
UC_NO_ORG_ID_IN_CONTEXT
SQLSTATE: none assigned
Attempted to access org ID (or workspace ID), but context has none.
UC_ONLINE_CATALOG_NOT_MUTABLE
SQLSTATE: none assigned
The <rpcName>
request updates <fieldName>
. Use the online store compute tab to modify anything other than comment, owner and isolationMode of an online catalog.
UC_ONLINE_CATALOG_QUOTA_EXCEEDED
SQLSTATE: none assigned
Cannot create more than <quota>
online stores in the metastore, and there are already <currentCount>
. You may not have access to any existing online stores. Contact your metastore admin to be granted access or for further instructions.
UC_ONLINE_INDEX_CATALOG_INVALID_CRUD
SQLSTATE: none assigned
Online index catalogs must be <action>
via the /vector-search API.
UC_ONLINE_INDEX_CATALOG_NOT_MUTABLE
SQLSTATE: none assigned
The <rpcName>
request updates <fieldName>
. Use the /vector-search API to modify anything other than comment, owner and isolationMode of an online index catalog.
UC_ONLINE_INDEX_CATALOG_QUOTA_EXCEEDED
SQLSTATE: none assigned
Cannot create more than <quota>
online index catalogs in the metastore, and there are already <currentCount>
. You may not have access to any existing online index catalogs. Contact your metastore admin to be granted access or for further instructions.
UC_ONLINE_INDEX_INVALID_CRUD
SQLSTATE: none assigned
Online indexes must be <action>
via the /vector-search API.
UC_ONLINE_STORE_INVALID_CRUD
SQLSTATE: none assigned
Online stores must be <action>
via the online store compute tab.
UC_ONLINE_TABLE_COLUMN_NAME_TOO_LONG
SQLSTATE: none assigned
The source table column name <columnName>
is too long. The maximum length is <maxLength>
characters.
UC_ONLINE_TABLE_PRIMARY_KEY_COLUMN_NOT_IN_SOURCE_TABLE_PRIMARY_KEY_CONSTRAINT
SQLSTATE: none assigned
Column <columnName>
cannot be used as a primary key column of the online table because it is not part of the existing PRIMARY KEY
constraint of the source table. For details, please see <docLink>
UC_ONLINE_TABLE_TIMESERIES_KEY_NOT_IN_SOURCE_TABLE_PRIMARY_KEY_CONSTRAINT
SQLSTATE: none assigned
Column <columnName>
cannot be used as a timeseries key of the online table because it is not a timeseries column of the existing PRIMARY KEY
constraint of the source table. For details, please see <docLink>
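Both entries above tie the online table’s keys to the PRIMARY KEY constraint declared on the source table. A minimal sketch of a conforming source table, assuming a Databricks notebook where `spark` is predefined; the table, column, and constraint names are hypothetical, and the TIMESERIES keyword follows the Databricks time series primary key syntax:

```python
# Hypothetical source table: user_id can back an online table primary
# key, and ts can back a timeseries key, because both appear in the
# PRIMARY KEY constraint (with ts marked TIMESERIES).
spark.sql("""
    CREATE TABLE main.sales.user_features (
        user_id BIGINT NOT NULL,
        ts      TIMESTAMP NOT NULL,
        score   DOUBLE,
        CONSTRAINT user_features_pk PRIMARY KEY (user_id, ts TIMESERIES)
    )
""")
```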
UC_ONLINE_VIEWS_PER_SOURCE_TABLE_QUOTA_EXCEEDED
SQLSTATE: none assigned
Cannot create more than <quota>
online table(s) per source table.
UC_ONLINE_VIEW_ACCESS_DENIED
SQLSTATE: none assigned
Accessing resource <resourceName>
requires use of a Serverless SQL warehouse. Please ensure the warehouse being used to execute a query or view a database catalog in the UI is serverless. For details, please see <docLink>
UC_ONLINE_VIEW_CONTINUOUS_QUOTA_EXCEEDED
SQLSTATE: none assigned
Cannot create more than <quota>
continuous online views in the online store, and there are already <currentCount>
. You may not have access to any existing online views. Contact your online store admin to be granted access or for further instructions.
UC_ONLINE_VIEW_DOES_NOT_SUPPORT_DMK
SQLSTATE: none assigned
<tableKind>
cannot be created under a storage location with Databricks Managed Keys. Please choose a different schema/catalog in a storage location without Databricks Managed Keys encryption.
UC_ONLINE_VIEW_INVALID_CATALOG
SQLSTATE: none assigned
Invalid catalog <catalogName>
with kind <catalogKind>
to create <tableKind>
within. <tableKind>
can only be created under catalogs of kinds: <validCatalogKinds>
.
UC_ONLINE_VIEW_INVALID_SCHEMA
SQLSTATE: none assigned
Invalid schema <schemaName>
with kind <schemaKind>
to create <tableKind>
within. <tableKind>
can only be created under schemas of kinds: <validSchemaKinds>
.
UC_ONLINE_VIEW_INVALID_TTL_TIME_COLUMN_TYPE
SQLSTATE: none assigned
Column <columnName>
of type <columnType>
cannot be used as a TTL time column. Allowed types are <supportedTypes>
.
UC_OUT_OF_AUTHORIZED_PATHS_SCOPE
SQLSTATE: none assigned
Authorized Path Error. The <securableType>
location <location>
is not defined within the authorized paths for catalog: <catalogName>
.
UC_OVERLAPPED_AUTHORIZED_PATHS
SQLSTATE: none assigned
The ‘authorized_paths’ option contains overlapping paths: <overlappingPaths>
. Ensure each path is unique and does not intersect with others in the list.
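What counts as “overlapping” here is one authorized path being a prefix (parent) of another. A minimal client-side sketch of that check, with hypothetical paths; this is an illustration, not the service’s actual validation logic:

```python
# Returns pairs where one authorized path contains another.
def overlapping(paths: list[str]) -> list[tuple[str, str]]:
    # Normalize with a trailing slash so "/data" doesn't match "/database".
    norm = [p.rstrip("/") + "/" for p in paths]
    return [
        (paths[i], paths[j])
        for i in range(len(paths))
        for j in range(len(paths))
        if i != j and norm[j].startswith(norm[i])
    ]

# Hypothetical paths: the first is a parent of the second, so they overlap.
print(overlapping([
    "s3://bucket/data",
    "s3://bucket/data/raw",
    "s3://bucket/logs",
]))  # [('s3://bucket/data', 's3://bucket/data/raw')]
```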
UC_PAGINATION_AND_QUERY_ARGS_MISMATCH
SQLSTATE: none assigned
The query argument ‘<arg>
’ is set to ‘<received>
’, which is different from the value used in the first pagination call (‘<expected>
’)
UC_PER_METASTORE_DATABASE_CONCURRENCY_LIMIT_EXCEEDED
SQLSTATE: none assigned
Too many requests to the database from metastore <metastoreId>
. Please try again later.
UC_PRIMARY_KEY_ON_NULLABLE_COLUMN
SQLSTATE: none assigned
Cannot create the primary key <constraintName>
because its child column(s) <childColumnNames>
is nullable. Please change the column nullability and retry.
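A minimal sketch of the suggested fix, assuming a Databricks notebook where `spark` is predefined; the table, column, and constraint names are hypothetical:

```python
# Make the key column non-nullable first, then recreate the constraint.
spark.sql("ALTER TABLE main.sales.users ALTER COLUMN user_id SET NOT NULL")
spark.sql("ALTER TABLE main.sales.users ADD CONSTRAINT users_pk PRIMARY KEY (user_id)")
```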
UC_ROOT_STORAGE_S3_BUCKET_NAME_CONTAINS_DOT
SQLSTATE: none assigned
Root storage S3 bucket name containing dots is not supported by Unity Catalog: <uri>
UC_SCHEMA_EMPTY_STORAGE_LOCATION
SQLSTATE: none assigned
Unexpected empty storage location for schema ‘<schemaName>
’ in catalog ‘<catalogName>
’. Please make sure the schema uses a path scheme of <validPathSchemesListStr>
.
UC_STORAGE_CREDENTIAL_ACCESS_CONNECTOR_PARSING_FAILED
SQLSTATE: none assigned
Failed to parse the provided access connector ID: <accessConnectorId>
. Please verify its formatting and try again.
UC_STORAGE_CREDENTIAL_DBFS_ROOT_CREATION_PERMISSION_DENIED
SQLSTATE: none assigned
Cannot create a storage credential for DBFS root because user: <userId>
is not the admin of the workspace: <workspaceId>
UC_STORAGE_CREDENTIAL_DBFS_ROOT_INVALID_LOCATION
SQLSTATE: none assigned
Location <location>
is not inside the DBFS root <dbfsRootLocation>
UC_STORAGE_CREDENTIAL_DBFS_ROOT_PRIVATE_DBFS_ENABLED
SQLSTATE: none assigned
DBFS root storage credential is not yet supported for workspaces with Firewall-enabled DBFS
UC_STORAGE_CREDENTIAL_DBFS_ROOT_PRIVATE_NOT_SUPPORTED
SQLSTATE: none assigned
DBFS root storage credential for the current workspace is not yet supported
UC_STORAGE_CREDENTIAL_DBFS_ROOT_WORKSPACE_DISABLED
SQLSTATE: none assigned
DBFS root is not enabled for workspace <workspaceId>
UC_STORAGE_CREDENTIAL_FAILED_TO_OBTAIN_VALIDATION_TOKEN
SQLSTATE: none assigned
Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.
UC_STORAGE_CREDENTIAL_INVALID_CLOUD_PERMISSIONS
SQLSTATE: none assigned
Registering a storage credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>
. Please contact your account admin.
UC_STORAGE_CREDENTIAL_PERMISSION_DENIED
SQLSTATE: none assigned
Only the account admin can create or update a storage credential with type <storageCredentialType>
.
UC_STORAGE_CREDENTIAL_SERVICE_PRINCIPAL_MISSING_VALIDATION_TOKEN
SQLSTATE: none assigned
Missing validation token for service principal. Please provide a valid ARM-scoped Entra ID token in the ‘X-Databricks-Azure-SP-Management-Token’ request header and retry. For details, check https://docs.databricks.com/api/workspace/storagecredentials
UC_STORAGE_CREDENTIAL_TRUST_POLICY_IS_OPEN
SQLSTATE: none assigned
The trust policy of the IAM role that allows the Databricks account to assume the role must require an external ID. Please contact your account admin to add the external ID condition. This behavior guards against the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).
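A sketch of what such a trust policy can look like, expressed as a Python dict for readability. The principal ARNs and the external ID are placeholders; consult the Unity Catalog storage credential documentation for the exact values for your account:

```python
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": [
            "arn:aws:iam::<databricks-account>:root",        # placeholder
            "arn:aws:iam::<your-account>:role/<this-role>",  # self-assuming
        ]},
        "Action": "sts:AssumeRole",
        # The external ID condition that guards against the confused deputy.
        "Condition": {"StringEquals": {"sts:ExternalId": "<databricks-account-id>"}},
    }],
}
print(json.dumps(trust_policy, indent=2))
```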
UC_STORAGE_CREDENTIAL_UNPRIVILEGED_SERVICE_PRINCIPAL_NOT_SUPPORTED
SQLSTATE: none assigned
Service principals cannot use the CREATE_STORAGE_CREDENTIAL
privilege to register managed identities. To register a managed identity, please assign the service principal the account admin role.
UC_STORAGE_CREDENTIAL_WASBS_NOT_DBFS_ROOT
SQLSTATE: none assigned
Location <location>
is not inside the DBFS root, so storage credential <storageCredentialName> cannot be created.
UC_STORAGE_CREDENTIAL_WORKSPACE_API_PROHIBITED
SQLSTATE: none assigned
Creating or updating a storage credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.
UC_SYSTEM_WORKSPACE_GROUP_PERMISSION_UNSUPPORTED
SQLSTATE: none assigned
Cannot grant privileges on <securableType>
to system generated group <principal>
.
UC_TAG_ASSIGNMENT_WITH_KEY_DOES_NOT_EXIST
SQLSTATE: none assigned
Tag assignment with tag key <tagKey>
does not exist
UC_UNSUPPORTED_HTTP_CONNECTION_BASE_PATH
SQLSTATE: none assigned
Invalid base path provided, base path should be something like /api/resources/v1. Unsupported path: <path>
UC_UNSUPPORTED_HTTP_CONNECTION_HOST
SQLSTATE: none assigned
Invalid host name provided; the host name should be something like https://www.databricks.com without a path suffix. Unsupported host: <host>
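A client-side sketch of the two checks above (a host without a path suffix, and a base path of the /api/resources/v1 shape); this is an illustrative heuristic, not the service’s actual validation:

```python
from urllib.parse import urlparse

def valid_host(host: str) -> bool:
    # A host must parse cleanly and carry no path suffix.
    u = urlparse(host)
    return u.scheme in ("http", "https") and bool(u.netloc) and u.path in ("", "/")

def valid_base_path(path: str) -> bool:
    # A base path looks like /api/resources/v1: leading slash, no trailing slash.
    return path.startswith("/") and len(path) > 1 and not path.endswith("/")

print(valid_host("https://www.databricks.com"))      # True
print(valid_host("https://www.databricks.com/api"))  # False: has a path suffix
print(valid_base_path("/api/resources/v1"))          # True
```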
UC_UNSUPPORTED_LATIN_CHARACTER_IN_PATH
SQLSTATE: none assigned
Only basic Latin/Latin-1 ASCII characters are supported in external location/volume/table paths. Unsupported path: <path>
UC_UPDATE_FORBIDDEN_FOR_PROVISIONING_SECURABLE
SQLSTATE: none assigned
Cannot update <securableType>
‘<securableName>
’ because it is being provisioned.
UC_WRITE_CONFLICT
SQLSTATE: none assigned
The <type> <name>
has been modified by another request. Please retry the operation.
UNITY_CATALOG_EXTERNAL_COORDINATED_COMMITS_REQUEST_DENIED
SQLSTATE: none assigned
Request to perform commit/getCommits for table ‘<tableId>
’ from outside of a Databricks Unity Catalog-enabled compute environment is denied for security reasons. Please contact Databricks support for integrations with Unity Catalog.
UNITY_CATALOG_EXTERNAL_CREATE_STAGING_TABLE_REQUEST_DENIED
SQLSTATE: none assigned
Request to create staging table ‘<tableFullName>
’ from outside of a Databricks Unity Catalog-enabled compute environment is denied for security reasons. Please contact Databricks support for integrations with Unity Catalog.
UNITY_CATALOG_EXTERNAL_CREATE_TABLE_REQUEST_FOR_NON_EXTERNAL_TABLE_DENIED
SQLSTATE: none assigned
Request to create non-external table ‘<tableFullName>
’ from outside of a Databricks Unity Catalog-enabled compute environment is denied for security reasons. Please contact Databricks support for integrations with Unity Catalog.
UNITY_CATALOG_EXTERNAL_GENERATE_PATH_CREDENTIALS_DENIED
SQLSTATE: none assigned
Request to generate access credential for path ‘<path>
’ from outside of a Databricks Unity Catalog-enabled compute environment is denied for security reasons. Please contact Databricks support for integrations with Unity Catalog.
UNITY_CATALOG_EXTERNAL_GENERATE_TABLE_CREDENTIALS_DENIED
SQLSTATE: none assigned
Request to generate access credential for table ‘<tableId>
’ from outside of a Databricks Unity Catalog-enabled compute environment is denied for security reasons. Please contact Databricks support for integrations with Unity Catalog.
UNITY_CATALOG_EXTERNAL_GET_FOREIGN_CREDENTIALS_DENIED
SQLSTATE: none assigned
Request to get foreign credentials for securables from outside of a Databricks Unity Catalog-enabled compute environment is denied for security reasons.
Files API
FILES_API_API_IS_NOT_ENABLED_FOR_CLOUD_PATHS
SQLSTATE: none assigned
Requested method of Files API is not supported for cloud paths
FILES_API_AWS_ALL_ACCESS_DISABLED
SQLSTATE: none assigned
All access to the storage bucket has been disabled in AWS.
FILES_API_AWS_BUCKET_DOES_NOT_EXIST
SQLSTATE: none assigned
The storage bucket does not exist in AWS.
FILES_API_AWS_INVALID_AUTHORIZATION_HEADER
SQLSTATE: none assigned
The workspace is misconfigured: it must be in the same region as the AWS workspace root storage bucket.
FILES_API_AWS_KMS_KEY_DISABLED
SQLSTATE: none assigned
The configured KMS keys to access the storage bucket are disabled in AWS.
FILES_API_AZURE_ACCOUNT_IS_DISABLED
SQLSTATE: none assigned
The storage account is disabled in Azure.
FILES_API_AZURE_CONTAINER_DOES_NOT_EXIST
SQLSTATE: none assigned
The Azure container does not exist.
FILES_API_AZURE_FORBIDDEN
SQLSTATE: none assigned
Access to the storage container is forbidden by Azure.
FILES_API_AZURE_HAS_A_LEASE
SQLSTATE: none assigned
Azure responded that there is currently a lease on the resource. Try again later.
FILES_API_AZURE_INSUFFICIENT_ACCOUNT_PERMISSION
SQLSTATE: none assigned
The account being accessed does not have sufficient permissions to execute this operation.
FILES_API_AZURE_INVALID_STORAGE_ACCOUNT_NAME
SQLSTATE: none assigned
Cannot access storage account in Azure: invalid storage account name.
FILES_API_AZURE_KEY_BASED_AUTHENTICATION_NOT_PERMITTED
SQLSTATE: none assigned
Key-based authentication is not permitted for the storage account. Check your customer-managed keys settings.
FILES_API_AZURE_KEY_VAULT_KEY_NOT_FOUND
SQLSTATE: none assigned
The Azure key vault key is not found in Azure. Check your customer-managed keys settings.
FILES_API_AZURE_KEY_VAULT_VAULT_NOT_FOUND
SQLSTATE: none assigned
The key vault vault is not found in Azure. Check your customer-managed keys settings.
FILES_API_AZURE_MI_ACCESS_CONNECTOR_NOT_FOUND
SQLSTATE: none assigned
Azure Managed Identity Credential with Access Connector not found. This could be because IP access controls rejected your request.
FILES_API_COLON_IS_NOT_SUPPORTED_IN_PATH
SQLSTATE: none assigned
The ‘:’ character is not supported in paths.
FILES_API_CONTROL_PLANE_NETWORK_ZONE_NOT_ALLOWED
SQLSTATE: none assigned
Databricks Control plane network zone not allowed.
FILES_API_DIRECTORIES_CANNOT_HAVE_BODIES
SQLSTATE: none assigned
A body was provided, but directories cannot have a file body.
FILES_API_DIRECTORY_IS_NOT_EMPTY
SQLSTATE: none assigned
The directory is not empty. This operation is not supported on non-empty directories.
FILES_API_DUPLICATED_HEADER
SQLSTATE: none assigned
The request contained multiple copies of a header that is only allowed once.
FILES_API_DUPLICATE_QUERY_PARAMETER
SQLSTATE: none assigned
Query parameter ‘<parameter_name>’ must be present exactly once but was provided multiple times.
FILES_API_EXPIRE_TIME_MUST_BE_IN_THE_FUTURE
SQLSTATE: none assigned
ExpireTime must be in the future
FILES_API_EXPIRE_TIME_TOO_FAR_IN_FUTURE
SQLSTATE: none assigned
Requested TTL is longer than supported (1 hour)
FILES_API_EXTERNAL_LOCATION_PATH_OVERLAP_OTHER_UC_STORAGE_ENTITY
SQLSTATE: none assigned
<unity_catalog_error_message>
FILES_API_FILE_OR_DIRECTORY_ENDS_IN_DOT
SQLSTATE: none assigned
Files or directories ending in the ‘.’ character are not supported.
FILES_API_FILE_SIZE_EXCEEDED
SQLSTATE: none assigned
File size shouldn’t exceed <max_download_size_in_bytes> bytes, but <size_in_bytes> bytes were found.
FILES_API_GCP_ACCOUNT_IS_DISABLED
SQLSTATE: none assigned
Access to the storage bucket has been disabled in GCP.
FILES_API_GCP_BUCKET_DOES_NOT_EXIST
SQLSTATE: none assigned
The storage bucket does not exist in GCP.
FILES_API_GCP_KEY_DISABLED_OR_DESTROYED
SQLSTATE: none assigned
The customer-managed encryption key configured for that location is either disabled or destroyed.
FILES_API_GCP_REQUEST_IS_PROHIBITED_BY_POLICY
SQLSTATE: none assigned
The GCP requests to the bucket are prohibited by policy; check the VPC service controls.
FILES_API_HOST_TEMPORARILY_NOT_AVAILABLE
SQLSTATE: none assigned
Cloud provider host is temporarily not available; please try again later.
FILES_API_INVALID_SESSION_TOKEN_TYPE
SQLSTATE: none assigned
Invalid session token type. Expected ‘<expected>
’ but got ‘<actual>
’.
FILES_API_INVALID_UPLOAD_TYPE
SQLSTATE: none assigned
Invalid upload type. Expected ‘<expected>
’ but got ‘<actual>
’.
FILES_API_INVALID_VALUE_FOR_OVERWRITE_QUERY_PARAMETER
SQLSTATE: none assigned
Query parameter ‘overwrite’ must be one of: true, false, but was: <got_values>
FILES_API_INVALID_VALUE_FOR_QUERY_PARAMETER
SQLSTATE: none assigned
Query parameter ‘<parameter_name>’ must be one of: <expected>
but was: <actual>
FILES_API_METHOD_IS_NOT_ENABLED_FOR_JOBS_BACKGROUND_COMPUTE_ARTIFACT_STORAGE
SQLSTATE: none assigned
Requested method of Files API is not supported for Jobs Background Compute Artifact Storage.
FILES_API_MISSING_CONTENT_LENGTH
SQLSTATE: none assigned
The content-length header is required in the request.
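A sketch of a Files API upload that satisfies the Content-Length rule above and the boolean ‘overwrite’ query-parameter rule from FILES_API_INVALID_VALUE_FOR_OVERWRITE_QUERY_PARAMETER, using the documented PUT /api/2.0/fs/files endpoint; the workspace host, token, and volume path are placeholders:

```python
import requests

host = "https://<workspace-host>"   # placeholder
token = "<personal-access-token>"   # placeholder
data = b"hello, files api"

resp = requests.put(
    f"{host}/api/2.0/fs/files/Volumes/main/sales/raw/hello.txt",
    params={"overwrite": "true"},  # must be exactly 'true' or 'false', once
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Length": str(len(data)),  # required by the API
    },
    data=data,
)
resp.raise_for_status()
```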
FILES_API_MISSING_QUERY_PARAMETER
SQLSTATE: none assigned
Query parameter ‘<parameter_name>’ is required but is missing from the request.
FILES_API_MISSING_REQUIRED_PARAMETER_IN_REQUEST
SQLSTATE: none assigned
The request is missing a required parameter.
FILES_API_NOT_ENABLED_FOR_PLACE
SQLSTATE: none assigned
Files API for <place>
is not enabled for this workspace/account
FILES_API_NOT_SUPPORTED_FOR_INTERNAL_WORKSPACE_STORAGE
SQLSTATE: none assigned
Requested method of Files API is not supported for Internal Workspace Storage
FILES_API_PAGE_SIZE_MUST_BE_GREATER_OR_EQUAL_TO_ZERO
SQLSTATE: none assigned
page_size must be greater than or equal to 0
FILES_API_PATH_END_WITH_A_SLASH
SQLSTATE: none assigned
Paths ending in the ‘/’ character represent directories. This API does not support operations on directories.
FILES_API_PATH_IS_A_DIRECTORY
SQLSTATE: none assigned
The given path points to an existing directory. This API does not support operations on directories.
FILES_API_PATH_IS_A_FILE
SQLSTATE: none assigned
The given path points to an existing file. This API does not support operations on files.
FILES_API_PATH_IS_NOT_A_VALID_UTF8_ENCODED_URL
SQLSTATE: none assigned
The given path was not a valid UTF-8 encoded URL.
FILES_API_PATH_IS_NOT_ENABLED_FOR_DATAPLANE_PROXY
SQLSTATE: none assigned
The given path is not enabled for the data plane proxy.
FILES_API_PRESIGNED_URLS_FOR_MODELS_NOT_SUPPORTED
SQLSTATE: none assigned
The Files API for presigned URLs for models is not supported at the moment
FILES_API_RECURSIVE_LIST_IS_NOT_SUPPORTED
SQLSTATE: none assigned
Recursively listing files is not supported.
FILES_API_REQUEST_MUST_INCLUDE_ACCOUNT_INFORMATION
SQLSTATE: none assigned
Request must include account information
FILES_API_REQUEST_MUST_INCLUDE_USER_INFORMATION
SQLSTATE: none assigned
Request must include user information
FILES_API_REQUEST_MUST_INCLUDE_WORKSPACE_INFORMATION
SQLSTATE: none assigned
Request must include workspace information
FILES_API_STORAGE_CONTEXT_IS_NOT_SET
SQLSTATE: none assigned
Storage configuration for this workspace is not accessible.
FILES_API_TABLE_TYPE_NOT_SUPPORTED
SQLSTATE: none assigned
Files API is not supported for <table_type>
FILES_API_UC_UNSUPPORTED_LATIN_CHARACTER_IN_PATH
SQLSTATE: none assigned
<unity_catalog_error_message>
FILES_API_UNEXPECTED_ERROR_WHILE_PARSING_URI
SQLSTATE: none assigned
Unexpected error when parsing the URI
FILES_API_UNEXPECTED_QUERY_PARAMETERS
SQLSTATE: none assigned
Unexpected query parameters: <unexpected_query_parameters>
FILES_API_UNSUPPORTED_PATH
SQLSTATE: none assigned
The provided path is not supported by the Files API. Make sure the provided path does not contain instances of ‘../’ or ‘./’ sequences. Make sure the provided path does not use multiple consecutive slashes (e.g. ‘///’).
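A client-side sketch of those path rules (no ‘../’ or ‘./’ segments, no consecutive slashes), for pre-validating paths before calling the Files API; illustrative only:

```python
def files_api_path_ok(path: str) -> bool:
    # Reject multiple consecutive slashes (e.g. '///').
    if "//" in path:
        return False
    # Reject '.' and '..' path segments.
    return all(seg not in (".", "..") for seg in path.split("/"))

print(files_api_path_ok("/Volumes/main/sales/raw/x.txt"))  # True
print(files_api_path_ok("/Volumes/main/../etc"))           # False
print(files_api_path_ok("/Volumes///x"))                   # False
```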
FILES_API_URL_GENERATION_DISABLED
SQLSTATE: none assigned
Presigned URL generation is not enabled for <cloud>
.
Miscellaneous
ABAC_ROW_COLUMN_POLICIES_NOT_SUPPORTED_ON_ASSIGNED_CLUSTERS
SQLSTATE: none assigned
Query on table <tableFullName>
with row filter or column mask assigned through policy rules isn’t supported on assigned clusters.
AZURE_ENTRA_CREDENTIALS_MISSING
SQLSTATE: none assigned
Azure Entra (formerly Azure Active Directory) credentials are missing.
Ensure you are either logged in with your Entra account
or have set up an Azure DevOps personal access token (PAT) in User Settings > Git Integration.
If you are not using a PAT and are using Azure DevOps with the Repos API,
you must use an Azure Entra access token.
See https://docs.microsoft.com/azure/databricks/dev-tools/api/latest/aad/app-aad-token for steps to acquire an Azure Entra access token.
AZURE_ENTRA_CREDENTIALS_PARSE_FAILURE
SQLSTATE: none assigned
Encountered an error with your Azure Entra (Azure Active Directory) credentials. Please try logging out of
Entra (https://portal.azure.com) and logging back in.
Alternatively, you may also visit User Settings > Git Integration to set
up an Azure DevOps personal access token.
AZURE_ENTRA_LOGIN_ERROR
SQLSTATE: none assigned
Encountered an error with your Azure Active Directory credentials. Please try logging out of
Azure Active Directory (https://portal.azure.com) and logging back in.
CLEAN_ROOM_DELTA_SHARING_ENTITY_NOT_AUTHORIZED
SQLSTATE: none assigned
Credential generation for clean room delta sharing securable cannot be requested.
CONSTRAINT_ALREADY_EXISTS
SQLSTATE: none assigned
Constraint with name <constraintName>
already exists. Please choose a different name.
COULD_NOT_READ_REMOTE_REPOSITORY
SQLSTATE: none assigned
Could not read remote repository (<repoUrl>
).
Your current Git credentials provider is <gitCredentialProvider>
and username is <gitCredentialUsername>
.
Please ensure that:
Your remote Git repo URL is valid.
Your personal access token or app password has the correct repo access.
Error from Git: <gitErrorMessage>
CSMS_BEGINNING_OF_TIME_NOT_SUPPORTED
SQLSTATE: none assigned
Parameter beginning_of_time
cannot be true.
CSMS_CONTINUATION_TOKEN_EXPIRED
SQLSTATE: none assigned
Requested objects could not be found for the continuation token.
CSMS_INVALID_CONTINUATION
SQLSTATE: none assigned
Provided both ‘beginning_of_time=true’ and a ‘continuation_token’. When ‘beginning_of_time’ is set to ‘true’, ‘continuation_token’ should not be provided.
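A generic pagination sketch of that rule: beginning_of_time starts a scan, and only the returned token is passed on later calls. fetch_page is a hypothetical stand-in for the actual service call; only the two parameter names come from the message above:

```python
def scan_all(fetch_page):
    # First call: start from the beginning, with no continuation token.
    page = fetch_page(beginning_of_time=True, continuation_token=None)
    while True:
        yield from page["objects"]
        token = page.get("continuation_token")
        if not token:
            return
        # Later calls: pass only the token, never beginning_of_time=True.
        page = fetch_page(beginning_of_time=False, continuation_token=token)
```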
CSMS_INVALID_MAX_OBJECTS
SQLSTATE: none assigned
Invalid value <value>
for parameter max_objects, expected value in [<minValue>
, <maxValue>
]
CSMS_INVALID_URI_FORMAT
SQLSTATE: none assigned
Invalid URI format. Expected a volume (e.g. “/Volumes/catalog/schema/volume”) or cloud storage path (e.g. “s3://some-uri”)
CSMS_LOCATION_ERROR
SQLSTATE: none assigned
Failed to list objects. There are problems on the location that need to be resolved. Details: <msg>
CSMS_METASTORE_RESOLUTION_FAILED
SQLSTATE: none assigned
Unable to determine a metastore for the request.
CSMS_UNITY_CATALOG_ENTITY_NOT_FOUND
SQLSTATE: none assigned
Unity Catalog entity not found. Ensure that the catalog, schema, volume and/or external location exists.
CSMS_UNITY_CATALOG_EXTERNAL_LOCATION_DOES_NOT_EXIST
SQLSTATE: none assigned
Unity Catalog external location does not exist.
CSMS_UNITY_CATALOG_METASTORE_DOES_NOT_EXIST
SQLSTATE: none assigned
Unable to determine a metastore for the request. Metastore does not exist
CSMS_UNITY_CATALOG_VOLUME_DOES_NOT_EXIST
SQLSTATE: none assigned
Unity Catalog volume does not exist.
CSMS_URI_TOO_LONG
SQLSTATE: none assigned
Provided URI is too long. Maximum permitted length is <maxLength>
.
DMK_CATALOGS_DISALLOWED_ON_CLASSIC_COMPUTE
SQLSTATE: none assigned
Databricks Default Storage cannot be accessed using Classic Compute. Please use Serverless compute to access data in Default Storage
GITHUB_APP_COULD_NOT_REFRESH_CREDENTIALS
SQLSTATE: none assigned
Operation failed because linked GitHub app credentials could not be refreshed.
Please try again or go to User Settings > Git Integration and try relinking your Git provider account.
If the problem persists, please file a support ticket.
GITHUB_APP_CREDENTIALS_NO_ACCESS
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
An admin of the repository must go to https://github.com/apps/databricks/installations/new and install the Databricks GitHub app on the repository.
Alternatively, a GitHub account owner can install the app on the account to give access to the account’s repositories.
If the app is already installed, have an admin ensure that if they are using scoped access with the ‘Only select repositories’ option, they have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
GITHUB_APP_EXPIRED_CREDENTIALS
SQLSTATE: none assigned
Linked GitHub app credentials expired after 6 months of inactivity.
Go to User Settings > Git Integration and try relinking your credentials.
If the problem persists, please file a support ticket.
GITHUB_APP_INSTALL_ON_DIFFERENT_USER_ACCOUNT
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
GitHub user <gitCredentialUsername> should go to https://github.com/apps/databricks/installations/new and install the app on the account <gitCredentialUsername> to allow access.
If user <gitCredentialUsername> already installed the app and they are using scoped access with the ‘Only select repositories’ option, they should ensure they have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
GITHUB_APP_INSTALL_ON_ORGANIZATION
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
An owner of the GitHub organization <organizationName> should go to https://github.com/apps/databricks/installations/new and install the app on the organization <organizationName> to allow access.
If the app is already installed on GitHub organization <organizationName>, have an owner of this organization ensure that if using scoped access with the ‘Only select repositories’ option, they have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
GITHUB_APP_INSTALL_ON_YOUR_ACCOUNT
SQLSTATE: none assigned
The link to your GitHub account does not have access. To fix this error:
Go to https://github.com/apps/databricks/installations/new and install the app on your account <gitCredentialUsername> to allow access.
If the app is already installed, and you are using scoped access with the ‘Only select repositories’ option, ensure that you have included access to this repository by selecting it.
Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.
If the problem persists, please file a support ticket.
GIT_CREDENTIAL_GENERIC_INVALID
SQLSTATE: none assigned
Invalid Git provider credentials for repository URL <repoUrl>
.
Your current Git credentials provider is <gitCredentialProvider>
and username is <gitCredentialUsername>
.
Go to User Settings > Git Integration to view your credential.
Please go to your remote Git provider to ensure that:
You have entered the correct Git user email or username with your Git provider credentials.
Your token or app password has the correct repo access.
Your token has not expired.
If you have SSO enabled with your Git provider, be sure to authorize your token.
GIT_CREDENTIAL_INVALID_PAT
SQLSTATE: none assigned
Invalid Git provider Personal Access Token credentials for repository URL <repoUrl>
.
Your current Git credentials provider is <gitCredentialProvider>
and username is <gitCredentialUsername>
.
Go to User Settings > Git Integration to view your credential.
Please go to your remote Git provider to ensure that:
You have entered the correct Git user email or username with your Git provider credentials.
Your token or app password has the correct repo access.
Your token has not expired.
If you have SSO enabled with your Git provider, be sure to authorize your token.
GIT_CREDENTIAL_MISSING
SQLSTATE: none assigned
No Git credential configured, but credential required for this repository (<repoUrl>
).
Go to User Settings > Git Integration to set up your Git credentials.
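Credentials can also be registered programmatically. A sketch using the workspace-level Git credentials REST API; the host, tokens, and username are placeholders, and ‘gitHub’ is one of the documented provider values:

```python
import requests

host = "https://<workspace-host>"        # placeholder
databricks_token = "<databricks-token>"  # placeholder

resp = requests.post(
    f"{host}/api/2.0/git-credentials",
    headers={"Authorization": f"Bearer {databricks_token}"},
    json={
        "git_provider": "gitHub",              # e.g. gitHub, gitLab, ...
        "git_username": "<git-username>",      # placeholder
        "personal_access_token": "<git-pat>",  # placeholder
    },
)
resp.raise_for_status()
```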
GIT_CREDENTIAL_NO_WRITE_PERMISSION
SQLSTATE: none assigned
Write access to <gitCredentialProvider>
repository (<repoUrl>
) not granted.
Make sure you (<gitCredentialUsername>
) have write access to this remote repository.
GIT_CREDENTIAL_PROVIDER_MISMATCHED
SQLSTATE: none assigned
Incorrect Git credential provider for repository.
Your current Git credential’s provider (<gitCredentialProvider>
) does not match that of the repository’s Git provider <repoUrl>
.
Try a different repository or go to User Settings > Git Integration to update your Git credentials.
HIERARCHICAL_NAMESPACE_NOT_ENABLED
SQLSTATE: none assigned
The Azure storage account does not have hierarchical namespace enabled.
INVALID_FIELD_LENGTH
SQLSTATE: none assigned
<rpcName> <fieldName>
is too long. Maximum length is <maxLength>
characters.
JOBS_TASK_FRAMEWORK_TASK_RUN_OUTPUT_NOT_FOUND
SQLSTATE: none assigned
Task Framework: Task Run Output for Task with runId <runId>
and orgId <orgId>
could not be found.
JOBS_TASK_FRAMEWORK_TASK_RUN_STATE_NOT_FOUND
SQLSTATE: none assigned
Task Framework: Task Run State for Task with runId <runId>
and orgId <orgId>
could not be found.
JOBS_TASK_REGISTRY_TASK_CLIENT_CONFIG_DOES_NOT_EXIST
SQLSTATE: none assigned
RPC ClientConfig for Task with ID <taskId>
does not exist.
JOBS_TASK_REGISTRY_TASK_DOES_NOT_EXIST
SQLSTATE: none assigned
Task with ID <taskId>
does not exist.
JOBS_TASK_REGISTRY_UNSUPPORTED_JOB_TASK
SQLSTATE: none assigned
Task Registry: Unsupported or unknown JobTask with class <taskClassName>
.
PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_EXTERNAL_SHALLOW_CLONE
SQLSTATE: none assigned
Path-based access to external shallow clone table <tableFullName>
is not supported. Please use table names to access the shallow clone instead.
PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_FABRIC
SQLSTATE: none assigned
Fabric table located at URL ‘<url>
’ is not found. Please use the REFRESH FOREIGN CATALOG
command to populate Fabric tables.
PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_TABLES_WITH_ROW_COLUMN_ACCESS_POLICIES
SQLSTATE: none assigned
Path-based access to table <tableFullName>
with row filter or column mask not supported.
PERMISSION_DENIED
SQLSTATE: none assigned
User does not have <msg>
on <resourceType>
‘<resourceName>
’.
REDASH_DELETE_ASSET_HANDLER_INVALID_INPUT
SQLSTATE: none assigned
Unable to parse delete object request: <invalidInputMsg>
REDASH_DELETE_OBJECT_NOT_IN_TRASH
SQLSTATE: none assigned
Unable to delete object <resourceName>
that is not in trash
REDASH_PERMISSION_DENIED
SQLSTATE: none assigned
Could not find or do not have permission to access resource <resourceId>
REDASH_QUERY_SNIPPET_QUOTA_EXCEEDED
SQLSTATE: none assigned
The quota for the number of query snippets has been reached. The current quota is <quota>
.
REDASH_QUERY_SNIPPET_TRIGGER_ALREADY_IN_USE
SQLSTATE: none assigned
The specified trigger <trigger>
is already in use by another query snippet in this workspace.
REDASH_RESOURCE_NOT_FOUND
SQLSTATE: none assigned
The requested resource <resourceName>
does not exist
REDASH_RESTORE_ASSET_HANDLER_INVALID_INPUT
SQLSTATE: none assigned
Unable to parse restore object request: <invalidInputMsg>
REDASH_RESTORE_OBJECT_NOT_IN_TRASH
SQLSTATE: none assigned
Unable to restore object <resourceName>
that is not in trash
REDASH_TRASH_OBJECT_ALREADY_IN_TRASH
SQLSTATE: none assigned
Unable to trash already-trashed object <resourceName>
REDASH_UNABLE_TO_GENERATE_RESOURCE_NAME
SQLSTATE: none assigned
Could not generate resource name from ID <id>
REDASH_VISUALIZATION_NOT_FOUND
SQLSTATE: none assigned
Could not find visualization <visualizationId>
REDASH_VISUALIZATION_QUOTA_EXCEEDED
SQLSTATE: none assigned
The quota for the number of visualizations on query <query_id> has been reached. The current quota is <quota>
.
REPOSITORY_URL_NOT_FOUND
SQLSTATE: none assigned
Remote repository (<repoUrl>
) not found.
Your current Git credentials provider is <gitCredentialProvider>
and username is <gitCredentialUsername>
.
Please ensure that:
Your remote Git repo URL is valid.
Your personal access token or app password has the correct repo access.
RESOURCE_ALREADY_EXISTS
SQLSTATE: none assigned
<resourceType>
‘<resourceIdentifier>
’ already exists
RESOURCE_DOES_NOT_EXIST
SQLSTATE: none assigned
<resourceType>
‘<resourceIdentifier>
’ does not exist.
ROW_COLUMN_ACCESS_POLICIES_NOT_SUPPORTED_ON_ASSIGNED_CLUSTERS
SQLSTATE: none assigned
Query on table <tableFullName>
with row filter or column mask not supported on assigned clusters.
ROW_COLUMN_SECURITY_NOT_SUPPORTED_WITH_TABLE_IN_DELTA_SHARING
SQLSTATE: none assigned
Table <tableFullName>
is being shared with Delta Sharing, and cannot use row/column security.