Error conditions in Databricks
This is a list of common, named error conditions returned by Databricks.
Also see SQLSTATE codes.
Databricks Runtime and Databricks SQL
AGGREGATE_FUNCTION_WITH_NONDETERMINISTIC_EXPRESSION
SQLSTATE: none assigned
Non-deterministic expression <sqlExpr> should not appear in the arguments of an aggregate function.
AI_FUNCTION_UNSUPPORTED_RETURN_TYPE
AI function: "<functionName>" does not support the following type as return type: "<typeName>". The return type must be a valid SQL type understood by Catalyst and supported by the AI function. Currently supported types include: <supportedValues>
AI_INVALID_ARGUMENT_VALUE_ERROR
Provided value "<argValue>" is not supported by argument "<argName>". Supported values are: <supportedValues>
ALL_PARTITION_COLUMNS_NOT_ALLOWED
SQLSTATE: none assigned
Cannot use all columns for partition columns.
ALTER_TABLE_COLUMN_DESCRIPTOR_DUPLICATE
ALTER TABLE <type> column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.
AMBIGUOUS_ALIAS_IN_NESTED_CTE
SQLSTATE: none assigned
Name <name> is ambiguous in the nested CTE.
Please set <config> to "CORRECTED" so that the name defined in the inner CTE takes precedence. If set to "LEGACY", outer CTE definitions will take precedence.
See https://spark.apache.org/docs/latest/sql-migration-guide.html#query-engine.
AMBIGUOUS_REFERENCE_TO_FIELDS
Ambiguous reference to the field <field>. It appears <count> times in the schema.
ARGUMENT_NOT_CONSTANT
The function <functionName> includes a parameter <parameterName> at position <pos> that requires a constant argument. Please compute the argument <sqlExpr> separately and pass the result as a constant.
ARITHMETIC_OVERFLOW
<message>.<alternative> If necessary set <config> to "false" to bypass this error.
For more details see ARITHMETIC_OVERFLOW
ASSIGNMENT_ARITY_MISMATCH
The number of columns or variables assigned or aliased: <numTarget> does not match the number of source expressions: <numExpr>.
AVRO_INCORRECT_TYPE
SQLSTATE: none assigned
Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>; however, you're trying to read the field as <sqlType>, which would lead to an incorrect answer. To allow reading this field, enable the SQL configuration: <key>.
AVRO_LOWER_PRECISION
SQLSTATE: none assigned
Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>; however, you're trying to read the field as <sqlType>, which leads to the data being read as null. Please provide a wider decimal type to get the correct result. To allow reading null for this field, enable the SQL configuration: <key>.
CALL_ON_STREAMING_DATASET_UNSUPPORTED
SQLSTATE: none assigned
The method <methodName> cannot be called on a streaming Dataset/DataFrame.
CANNOT_CONVERT_PROTOBUF_FIELD_TYPE_TO_SQL_TYPE
SQLSTATE: none assigned
Cannot convert Protobuf <protobufColumn> to SQL <sqlColumn> because the schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).
CANNOT_CONVERT_PROTOBUF_MESSAGE_TYPE_TO_SQL_TYPE
SQLSTATE: none assigned
Unable to convert <protobufType> of Protobuf to SQL type <toType>.
CANNOT_CONVERT_SQL_TYPE_TO_PROTOBUF_FIELD_TYPE
SQLSTATE: none assigned
Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because the schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).
CANNOT_CONVERT_SQL_VALUE_TO_PROTOBUF_ENUM_TYPE
SQLSTATE: none assigned
Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because <data> is not in the defined values for the enum: <enumString>.
CANNOT_COPY_STATE
Cannot copy catalog state, such as the current database and temporary views, from Unity Catalog to a legacy catalog.
CANNOT_DECODE_URL
The provided URL cannot be decoded: <url>. Please ensure that the URL is properly formatted and try again.
CANNOT_DROP_AMBIGUOUS_CONSTRAINT
Cannot drop the constraint with the name <constraintName>, which is shared by a CHECK constraint and a PRIMARY KEY or FOREIGN KEY constraint. You can drop the PRIMARY KEY or FOREIGN KEY constraint with the following queries:
ALTER TABLE .. DROP PRIMARY KEY or
ALTER TABLE .. DROP FOREIGN KEY ..
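For example, a minimal sketch of dropping the conflicting key constraints so that only the CHECK constraint keeps the shared name (the table orders and the column customer_id are hypothetical):
-- Drop the PRIMARY KEY constraint on a hypothetical table:
ALTER TABLE orders DROP PRIMARY KEY;
-- Or drop a FOREIGN KEY constraint by its key columns:
ALTER TABLE orders DROP FOREIGN KEY (customer_id);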
CANNOT_ESTABLISH_CONNECTION
SQLSTATE: none assigned
Cannot establish a connection to the remote <jdbcDialectName> database. Please check the connection information and credentials, e.g. host, port, user, password, and database options. ** If you believe the information is correct, please check your workspace's network setup and ensure it does not have outbound restrictions to the host. Please also check that the host does not block inbound connections from the network where the workspace's Spark clusters are deployed. ** Detailed error message: <causeErrorMessage>.
CANNOT_ESTABLISH_CONNECTION_SERVERLESS
SQLSTATE: none assigned
Cannot establish a connection to the remote <jdbcDialectName> database. Please check the connection information and credentials, e.g. host, port, user, password, and database options. ** If you believe the information is correct, please allow inbound traffic from the Internet to your host, as you are using Serverless Compute. If your network policies do not allow inbound Internet traffic, please use non-Serverless Compute, or reach out to your Databricks representative to learn about Serverless Private Networking. ** Detailed error message: <causeErrorMessage>.
CANNOT_INVOKE_IN_TRANSFORMATIONS
SQLSTATE: none assigned
Dataset transformations and actions can only be invoked by the driver, not inside other Dataset transformations; for example, dataset1.map(x => dataset2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside the dataset1.map transformation. For more information, see SPARK-28702.
CANNOT_LOAD_FUNCTION_CLASS
SQLSTATE: none assigned
Cannot load class <className> when registering the function <functionName>; please make sure it is on the classpath.
CANNOT_LOAD_PROTOBUF_CLASS
SQLSTATE: none assigned
Could not load Protobuf class with name <protobufClassName>. <explanation>.
CANNOT_LOAD_STATE_STORE
An error occurred while loading state.
For more details see CANNOT_LOAD_STATE_STORE
CANNOT_MERGE_INCOMPATIBLE_DATA_TYPE
Failed to merge incompatible data types <left> and <right>. Please check the data types of the columns being merged and ensure that they are compatible. If necessary, consider casting the columns to compatible data types before attempting the merge.
CANNOT_MERGE_SCHEMAS
Failed merging schemas:
Initial schema:
<left>
Schema that cannot be merged with the initial schema:
<right>.
CANNOT_MODIFY_CONFIG
Cannot modify the value of the Spark config: <key>.
See also https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements.
CANNOT_PARSE_DECIMAL
Cannot parse decimal. Please ensure that the input is a valid number with optional decimal point or comma separators.
CANNOT_PARSE_INTERVAL
SQLSTATE: none assigned
Unable to parse <intervalString>. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format. If the issue persists, please double-check that the input value is not null or empty and try again.
CANNOT_PARSE_JSON_FIELD
Cannot parse the field name <fieldName> and the value <fieldValue> of the JSON token type <jsonType> to the target Spark data type <dataType>.
CANNOT_PARSE_PROTOBUF_DESCRIPTOR
SQLSTATE: none assigned
Error parsing descriptor bytes into Protobuf FileDescriptorSet.
CANNOT_READ_ARCHIVED_FILE
Cannot read file at path <path> because it has been archived. Please adjust your query filters to exclude archived files.
CANNOT_READ_SENSITIVE_KEY_FROM_SECURE_PROVIDER
Cannot read sensitive key '<key>' from secure provider.
CANNOT_RECOGNIZE_HIVE_TYPE
Cannot recognize Hive type string: <fieldType>, column: <fieldName>. The specified data type for the field cannot be recognized by Spark SQL. Please check the data type of the specified field and ensure that it is a valid Spark SQL data type. Refer to the Spark SQL documentation for a list of valid data types and their format. If the data type is correct, please ensure that you are using a supported version of Spark SQL.
CANNOT_RESOLVE_STAR_EXPAND
SQLSTATE: none assigned
Cannot resolve <targetString>.* given input columns <columns>. Please check that the specified table or struct exists and is accessible in the input columns.
CANNOT_RESTORE_PERMISSIONS_FOR_PATH
SQLSTATE: none assigned
Failed to set permissions on created path <path> back to <permission>.
CANNOT_SHALLOW_CLONE_ACROSS_UC_AND_HMS
Cannot shallow-clone tables across Unity Catalog and Hive Metastore.
CANNOT_SHALLOW_CLONE_NON_UC_MANAGED_TABLE_AS_SOURCE_OR_TARGET
Shallow clone is only supported for the MANAGED table type. The table <table> is not a MANAGED table.
CANNOT_UPDATE_FIELD
SQLSTATE: none assigned
Cannot update <table> field <fieldName> type:
For more details see CANNOT_UPDATE_FIELD
CANNOT_UP_CAST_DATATYPE
SQLSTATE: none assigned
Cannot up-cast <expression> from <sourceType> to <targetType>.
<details>
CANNOT_VALIDATE_CONNECTION
SQLSTATE: none assigned
Validation of the <jdbcDialectName> connection is not supported. Please contact Databricks support for alternative solutions, or set "spark.databricks.testConnectionBeforeCreation" to "false" to skip connection testing before creating a connection object.
CAST_INVALID_INPUT
The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead. If necessary set <ansiConfig> to "false" to bypass this error.
For more details see CAST_INVALID_INPUT
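As an illustration of the try_cast workaround (the literal is arbitrary):
SELECT cast('abc' AS INT);     -- raises CAST_INVALID_INPUT under ANSI mode
SELECT try_cast('abc' AS INT); -- returns NULL instead of failing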
CAST_OVERFLOW
The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead. If necessary set <ansiConfig> to "false" to bypass this error.
CAST_OVERFLOW_IN_TABLE_INSERT
Failed to assign a value of <sourceType> type to the <targetType> type column or variable <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.
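For instance, a sketch of applying try_cast during an insert (the table t, column c, and source column src.big_val are hypothetical):
-- Assume t has an INT column c and src.big_val is a BIGINT that may overflow INT.
INSERT INTO t SELECT try_cast(big_val AS INT) AS c FROM src; -- overflowing rows become NULL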
CLOUD_FILE_SOURCE_FILE_NOT_FOUND
A file notification was received for file <filePath>, but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration <config> to true.
CODEC_NOT_AVAILABLE
SQLSTATE: none assigned
The codec <codecName> is not available. Consider setting the config <configKey> to <configVal>.
CODEC_SHORT_NAME_NOT_FOUND
SQLSTATE: none assigned
Cannot find a short name for the codec <codecName>.
COLUMN_ALREADY_EXISTS
The column <columnName> already exists. Consider choosing another name or renaming the existing column.
COLUMN_MASKS_CHECK_CONSTRAINT_UNSUPPORTED
Creating a CHECK constraint on table <tableName> with column mask policies is not supported.
COLUMN_MASKS_DUPLICATE_USING_COLUMN_NAME
A <statementType> statement attempted to assign a column mask policy to a column which included two or more other referenced columns in the USING COLUMNS list with the same name <columnName>, which is invalid.
COLUMN_MASKS_FEATURE_NOT_SUPPORTED
Column mask policies for <tableName> are not supported:
For more details see COLUMN_MASKS_FEATURE_NOT_SUPPORTED
COLUMN_MASKS_MERGE_UNSUPPORTED_SOURCE
MERGE INTO operations do not support column mask policies in source table <tableName>.
COLUMN_MASKS_MERGE_UNSUPPORTED_TARGET
MERGE INTO operations do not support writing into table <tableName> with column mask policies.
COLUMN_MASKS_MULTI_PART_TARGET_COLUMN_NAME
This statement attempted to assign a column mask policy to a column <columnName> with multiple name parts, which is invalid.
COLUMN_MASKS_MULTI_PART_USING_COLUMN_NAME
This statement attempted to assign a column mask policy to a column, and the USING COLUMNS list included the name <columnName> with multiple name parts, which is invalid.
COLUMN_MASKS_TABLE_CLONE_SOURCE_NOT_SUPPORTED
<mode> clone from table <tableName> with column mask policies is not supported.
COLUMN_MASKS_TABLE_CLONE_TARGET_NOT_SUPPORTED
<mode> clone to table <tableName> with column mask policies is not supported.
COLUMN_MASKS_UNSUPPORTED_PROVIDER
Failed to execute <statementType> command because assigning column mask policies is not supported for the target data source with table provider: "<provider>".
COLUMN_MASKS_UNSUPPORTED_SUBQUERY
Cannot perform <operation> for table <tableName> because it contains one or more column mask policies with subquery expression(s), which are not yet supported. Please contact the owner of the table to update the column mask policies in order to continue.
COLUMN_MASKS_USING_COLUMN_NAME_SAME_AS_TARGET_COLUMN
The column <columnName> had the same name as the target column, which is invalid; please remove the column from the USING COLUMNS list and retry the command.
COLUMN_NOT_DEFINED_IN_TABLE
SQLSTATE: none assigned
<colType> column <colName> is not defined in table <tableName>; defined table columns are: <tableCols>.
COLUMN_NOT_FOUND
The column <colName> cannot be found. Verify the spelling and correctness of the column name according to the SQL config <caseSensitiveConfig>.
COMPARATOR_RETURNS_NULL
SQLSTATE: none assigned
The comparator has returned a NULL for a comparison between <firstValue> and <secondValue>. It should return a positive integer for "greater than", 0 for "equal" and a negative integer for "less than". To revert to the deprecated behavior where NULL is treated as 0 (equal), you must set "spark.sql.legacy.allowNullComparisonResultInArraySort" to "true".
CONCURRENT_QUERY
SQLSTATE: none assigned
Another instance of this query [id: <queryId>] was just started by a concurrent session [existing runId: <existingQueryRunId>, new runId: <newQueryRunId>].
CONCURRENT_STREAM_LOG_UPDATE
Concurrent update to the log. Multiple streaming jobs detected for <batchId>.
Please make sure only one streaming job runs on a specific checkpoint location at a time.
CONNECTION_ALREADY_EXISTS
Cannot create connection <connectionName> because it already exists.
Choose a different name, drop or replace the existing connection, or add the IF NOT EXISTS clause to tolerate pre-existing connections.
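For example, a minimal sketch of the IF NOT EXISTS form (the connection name, type, and option values are hypothetical; the required options depend on the connection type):
CREATE CONNECTION IF NOT EXISTS my_pg_conn TYPE postgresql
OPTIONS (host 'db.example.com', port '5432', user 'reader', password 'redacted');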
CONNECTION_NAME_CANNOT_BE_EMPTY
Cannot execute this command because the connection name must be non-empty.
CONNECTION_NOT_FOUND
Cannot execute this command because the connection name <connectionName> was not found.
CONNECTION_OPTION_NOT_SUPPORTED
Connections of type '<connectionType>' do not support the following option(s): <optionsNotSupported>. Supported options: <allowedOptions>.
CONNECTION_TYPE_NOT_SUPPORTED
Cannot create a connection of type '<connectionType>'. Supported connection types: <allowedTypes>.
CONNECTOR_OPERATION_INTERNAL_ERROR
The <operation> failed for <sourceName>: please file a support ticket to report this issue.
CONVERSION_INVALID_INPUT
The value <str> (<fmt>) cannot be converted to <targetType> because it is malformed. Correct the value as per the syntax, or change its format. Use <suggestion> to tolerate malformed input and return NULL instead.
COPY_INTO_CREDENTIALS_NOT_ALLOWED_ON
Invalid scheme <scheme>. COPY INTO source credentials currently only supports s3/s3n/s3a/wasbs/abfss.
COPY_INTO_DUPLICATED_FILES_COPY_NOT_ALLOWED
Duplicated files were committed in a concurrent COPY INTO operation. Please try again later.
COPY_INTO_ENCRYPTION_NOT_ALLOWED_ON
Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/abfss.
COPY_INTO_ENCRYPTION_NOT_SUPPORTED_FOR_AZURE
COPY INTO encryption only supports ADLS Gen2, or the abfss:// file scheme.
COPY_INTO_ENCRYPTION_REQUIRED_WITH_EXPECTED
Invalid encryption option <requiredKey>. COPY INTO source encryption must specify '<requiredKey>' = '<keyValue>'.
COPY_INTO_NON_BLIND_APPEND_NOT_ALLOWED
COPY INTO other than appending data is not allowed to run concurrently with other transactions. Please try again later.
COPY_INTO_SOURCE_FILE_FORMAT_NOT_SUPPORTED
The format of the source files must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE. Using COPY INTO on Delta tables as the source is not supported, as duplicate data may be ingested after OPTIMIZE operations. This check can be turned off by running the SQL command set spark.databricks.delta.copyInto.formatCheck.enabled = false.
COPY_INTO_SOURCE_SCHEMA_INFERENCE_FAILED
The source directory did not contain any parsable files of type <format>. Please check the contents of '<source>'.
CREATE_PERMANENT_VIEW_WITHOUT_ALIAS
SQLSTATE: none assigned
Not allowed to create the permanent view <name> without explicitly assigning an alias for the expression <attr>.
CREATE_TABLE_COLUMN_DESCRIPTOR_DUPLICATE
CREATE TABLE column <columnName> specifies descriptor "<optionName>" more than once, which is invalid.
CREATE_VIEW_COLUMN_ARITY_MISMATCH
Cannot create view <viewName>, the reason is:
For more details see CREATE_VIEW_COLUMN_ARITY_MISMATCH
DATATYPE_MISMATCH
Cannot resolve <sqlExpr> due to a data type mismatch:
For more details see DATATYPE_MISMATCH
DATATYPE_MISSING_SIZE
DataType <type> requires a length parameter, for example <type>(10). Please specify the length.
DATA_SOURCE_NOT_FOUND
Failed to find the data source: <provider>. Please find packages at https://spark.apache.org/third-party-projects.html.
DATA_SOURCE_OPTION_CONTAINS_INVALID_CHARACTERS
Option <option> must not be empty and should not contain invalid characters, query strings, or parameters.
DATA_SOURCE_URL_NOT_ALLOWED
A JDBC URL is not allowed in data source options; please specify the 'host', 'port', and 'database' options instead.
DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION
Decimal precision <precision> exceeds max precision <maxPrecision>.
DEFAULT_DATABASE_NOT_EXISTS
Default database <defaultDatabase> does not exist; please create it first or change the default database to <defaultDatabase>.
DEFAULT_FILE_NOT_FOUND
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running the 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If the disk cache is stale or the underlying files have been removed, you can invalidate the disk cache manually by restarting the cluster.
DEFAULT_PLACEMENT_INVALID
A DEFAULT keyword in a MERGE, INSERT, UPDATE, or SET VARIABLE command could not be directly assigned to a target column because it was part of an expression.
For example: UPDATE T SET c1 = DEFAULT is allowed, but UPDATE T SET c1 = DEFAULT + 1 is not allowed.
DIFFERENT_DELTA_TABLE_READ_BY_STREAMING_SOURCE
The streaming query was reading from an unexpected Delta table (id = '<newTableId>').
It used to read from another Delta table (id = '<oldTableId>') according to the checkpoint.
This may happen when you changed the code to read from a new table or you deleted and re-created a table. Please revert your change or delete your streaming query checkpoint to restart from scratch.
DISTINCT_WINDOW_FUNCTION_UNSUPPORTED
SQLSTATE: none assigned
Distinct window functions are not supported: <windowExpr>.
DIVIDE_BY_ZERO
Division by zero. Use try_divide to tolerate the divisor being 0 and return NULL instead. If necessary set <config> to "false" to bypass this error.
For more details see DIVIDE_BY_ZERO
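As an illustration of the try_divide alternative:
SELECT 10 / 0;            -- raises DIVIDE_BY_ZERO under ANSI mode
SELECT try_divide(10, 0); -- returns NULL instead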
DLT_VIEW_LOCATION_NOT_SUPPORTED
MATERIALIZED VIEW locations are supported only in a Delta Live Tables pipeline.
DLT_VIEW_SCHEMA_WITH_TYPE_NOT_SUPPORTED
MATERIALIZED VIEW schemas with a specified type are supported only in a Delta Live Tables pipeline.
DUPLICATED_FIELD_NAME_IN_ARROW_STRUCT
SQLSTATE: none assigned
Duplicated field names in an Arrow Struct are not allowed; got <fieldNames>.
DUPLICATED_MAP_KEY
Duplicate map key <key> was found; please check the input data. If you want to remove the duplicated keys, you can set <mapKeyDedupPolicy> to "LAST_WIN" so that the key inserted last takes precedence.
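As an illustration; in Apache Spark the dedup policy config referenced by <mapKeyDedupPolicy> is typically spark.sql.mapKeyDedupPolicy (treat the exact key as an assumption):
SET spark.sql.mapKeyDedupPolicy = LAST_WIN;
SELECT map(1, 'a', 1, 'b'); -- yields {1 -> "b"} instead of raising DUPLICATED_MAP_KEY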
DUPLICATED_METRICS_NAME
SQLSTATE: none assigned
The metric name is not unique: <metricName>. The same name cannot be used for metrics with different results. However, multiple instances of metrics with the same result and name are allowed (e.g. self-joins).
DUPLICATE_ASSIGNMENTS
The columns or variables <nameList> appear more than once as assignment targets.
DUPLICATE_CLAUSES
SQLSTATE: none assigned
Found duplicate clauses: <clauseName>. Please remove one of them.
DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT
The call to function <functionName> is invalid because it includes multiple argument assignments to the same parameter name <parameterName>.
For more details see DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT
DUPLICATE_ROUTINE_PARAMETER_NAMES
Found duplicate name(s) in the parameter list of the user-defined routine <routineName>: <names>.
DUPLICATE_ROUTINE_RETURNS_COLUMNS
Found duplicate column(s) in the RETURNS clause column list of the user-defined routine <routineName>: <columns>.
ENCODER_NOT_FOUND
SQLSTATE: none assigned
Could not find an encoder of the type <typeName> to the Spark SQL internal representation. Consider changing the input type to one of the supported types at '<docroot>/sql-ref-datatypes.html'.
ERROR_READING_AVRO_UNKNOWN_FINGERPRINT
SQLSTATE: none assigned
Error reading Avro data: encountered an unknown fingerprint <fingerprint>, and not sure which schema to use. This could happen if you registered additional schemas after starting your Spark context.
EVENT_LOG_UNSUPPORTED_TABLE_TYPE
The table type of <tableIdentifier> is <tableType>.
Querying event logs only supports Materialized Views, Streaming Tables, or Delta Live Tables pipelines.
EVENT_TIME_IS_NOT_ON_TIMESTAMP_TYPE
SQLSTATE: none assigned
The event time <eventName> has the invalid type <eventType>, but expected "TIMESTAMP".
EXCEPT_NESTED_COLUMN_INVALID_TYPE
EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.
EXCEPT_RESOLVED_COLUMNS_WITHOUT_MATCH
EXCEPT columns [<exceptColumns>] were resolved, but do not match any of the columns [<expandedColumns>] from the star expansion.
EXCEPT_UNRESOLVED_COLUMN_IN_STRUCT_EXPANSION
The column/field name <objectName> in the EXCEPT clause cannot be resolved. Did you mean one of the following: [<objectList>]?
Note: nested columns in the EXCEPT clause may not include qualifiers (table name, parent struct column name, etc.) during a struct expansion; try removing qualifiers if they are used with nested columns.
EXPRESSION_TYPE_IS_NOT_ORDERABLE
SQLSTATE: none assigned
Column expression <expr> cannot be sorted because its type <exprType> is not orderable.
FAILED_EXECUTE_UDF
Failed to execute user-defined function (<functionName>: (<signature>) => <result>).
FAILED_FUNCTION_CALL
Failed preparing the function <funcName> for the call. Please double-check the function's arguments.
FAILED_RENAME_TEMP_FILE
SQLSTATE: none assigned
Failed to rename temp file <srcPath> to <dstPath> because FileSystem.rename returned false.
FEATURE_NOT_ON_CLASSIC_WAREHOUSE
<feature> is not supported on Classic SQL warehouses. To use this feature, use a Pro or Serverless SQL warehouse. To learn more about warehouse types, see <docLink>
FEATURE_REQUIRES_UC
<feature> is not supported without Unity Catalog. To use this feature, enable Unity Catalog. To learn more about Unity Catalog, see <docLink>
FIELDS_ALREADY_EXISTS
SQLSTATE: none assigned
Cannot <op> column, because <fieldNames> already exists in <struct>.
FILE_IN_STAGING_PATH_ALREADY_EXISTS
A file in staging path <path> already exists, but OVERWRITE is not set.
FOREIGN_KEY_MISMATCH
Foreign key parent columns <parentColumns> do not match primary key child columns <childColumns>.
FOREIGN_OBJECT_NAME_CANNOT_BE_EMPTY
Cannot execute this command because the foreign <objectType> name must be non-empty.
FROM_JSON_CONFLICTING_SCHEMA_UPDATES
from_json inference encountered conflicting schema updates at: <location>
FROM_JSON_INFERENCE_NOT_SUPPORTED
from_json inference is only supported when defining streaming tables.
FROM_JSON_INVALID_CONFIGURATION
The from_json configuration is invalid:
For more details see FROM_JSON_INVALID_CONFIGURATION
GENERATED_COLUMN_WITH_DEFAULT_VALUE
SQLSTATE: none assigned
A column cannot have both a default value and a generation expression, but column <colName> has default value (<defaultValue>) and generation expression (<genExpr>).
GRAPHITE_SINK_PROPERTY_MISSING
SQLSTATE: none assigned
Graphite sink requires the '<property>' property.
GROUPING_COLUMN_MISMATCH
The column of grouping (<grouping>) can't be found in the grouping columns <groupingColumns>.
GROUPING_ID_COLUMN_MISMATCH
The columns of grouping_id (<groupingIdColumn>) do not match the grouping columns (<groupByColumns>).
GROUP_BY_AGGREGATE
Aggregate functions are not allowed in GROUP BY, but found <sqlExpr>.
For more details see GROUP_BY_AGGREGATE
GROUP_BY_POS_AGGREGATE
GROUP BY <index> refers to an expression <aggExpr> that contains an aggregate function. Aggregate functions are not allowed in GROUP BY.
GROUP_BY_POS_OUT_OF_RANGE
GROUP BY position <index> is not in the select list (valid range is [1, <size>]).
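For example, with a hypothetical table emp that has two items in the select list:
SELECT dept, count(*) FROM emp GROUP BY 1; -- valid: ordinal 1 refers to dept
SELECT dept, count(*) FROM emp GROUP BY 3; -- GROUP_BY_POS_OUT_OF_RANGE: valid range is [1, 2]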
GROUP_EXPRESSION_TYPE_IS_NOT_ORDERABLE
SQLSTATE: none assigned
The expression <sqlExpr> cannot be used as a grouping expression because its data type <dataType> is not an orderable data type.
HLL_INVALID_INPUT_SKETCH_BUFFER
SQLSTATE: none assigned
Invalid call to <function>; only valid HLL sketch buffers are supported as inputs (such as those produced by the hll_sketch_agg function).
HLL_INVALID_LG_K
SQLSTATE: none assigned
Invalid call to <function>; the lgConfigK value must be between <min> and <max>, inclusive: <value>.
HLL_UNION_DIFFERENT_LG_K
SQLSTATE: none assigned
Sketches have different lgConfigK values: <left> and <right>. Set the allowDifferentLgConfigK parameter to true to call <function> with different lgConfigK values.
IDENTIFIER_TOO_MANY_NAME_PARTS
<identifier> is not a valid identifier as it has more than 2 name parts.
INCOMPATIBLE_COLUMN_TYPE
<operator> can only be performed on tables with compatible column types. The <columnOrdinalNumber> column of the <tableOrdinalNumber> table is of <dataType1> type, which is not compatible with <dataType2> at the same column of the first table. <hint>
INCOMPATIBLE_DATASOURCE_REGISTER
SQLSTATE: none assigned
Detected an incompatible DataSourceRegister. Please remove the incompatible library from the classpath or upgrade it. Error: <message>
INCOMPATIBLE_DATA_FOR_TABLE
Cannot write incompatible data for the table <tableName>:
For more details see INCOMPATIBLE_DATA_FOR_TABLE
INCOMPATIBLE_VIEW_SCHEMA_CHANGE
SQLSTATE: none assigned
The SQL query of view <viewName> has an incompatible schema change and column <colName> cannot be resolved. Expected <expectedNum> columns named <colName> but got <actualCols>.
Please try to re-create the view by running: <suggestion>.
INCONSISTENT_BEHAVIOR_CROSS_VERSION
You may get a different result due to the upgrading to:
For more details see INCONSISTENT_BEHAVIOR_CROSS_VERSION
INCORRECT_END_OFFSET
The max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but it's <endSeconds> now.
INCORRECT_NUMBER_OF_ARGUMENTS
<failure>, <functionName> requires at least <minArgs> arguments and at most <maxArgs> arguments.
INCORRECT_RAMP_UP_RATE
The max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but 'rampUpTimeSeconds' is <rampUpTimeSeconds>.
INDEX_ALREADY_EXISTS
Cannot create the index <indexName> on table <tableName> because it already exists.
INSERT_COLUMN_ARITY_MISMATCH
Cannot write to <tableName>, the reason is:
For more details see INSERT_COLUMN_ARITY_MISMATCH
INSERT_PARTITION_COLUMN_ARITY_MISMATCH
Cannot write to '<tableName>', <reason>:
Table columns: <tableColumns>.
Partition columns with static values: <staticPartCols>.
Data columns: <dataColumns>.
INSUFFICIENT_PERMISSIONS_EXT_LOC
User <user> has insufficient privileges for external location <location>.
INSUFFICIENT_PERMISSIONS_NO_OWNER
There is no owner for <securableName>. Ask your administrator to set an owner.
INSUFFICIENT_PERMISSIONS_SECURABLE_PARENT_OWNER
The owner of <securableName> is different from the owner of <parentSecurableName>.
INSUFFICIENT_PERMISSIONS_STORAGE_CRED
Storage credential <credentialName> has insufficient privileges.
INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES
User cannot <action> on <securableName> because of permissions on underlying securables.
INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES_VERBOSE
User cannot <action> on <securableName> because of permissions on underlying securables:
<underlyingReport>
INSUFFICIENT_TABLE_PROPERTY
SQLSTATE: none assigned
Can't find the table property:
For more details see INSUFFICIENT_TABLE_PROPERTY
INTERNAL_ERROR_METADATA_CATALOG
An object in the metadata catalog has been corrupted:
For more details see INTERNAL_ERROR_METADATA_CATALOG
INTERVAL_DIVIDED_BY_ZERO
Division by zero. Use try_divide to tolerate the divisor being 0 and return NULL instead.
INVALID_ARRAY_INDEX
The index <indexValue> is out of bounds. The array has <arraySize> elements. Use the SQL function get() to tolerate accessing an element at an invalid index and return NULL instead. If necessary set <ansiConfig> to "false" to bypass this error.
For more details see INVALID_ARRAY_INDEX
INVALID_ARRAY_INDEX_IN_ELEMENT_AT
The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing an element at an invalid index and return NULL instead. If necessary set <ansiConfig> to "false" to bypass this error.
For more details see INVALID_ARRAY_INDEX_IN_ELEMENT_AT
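As an illustration of the try_element_at alternative:
SELECT element_at(array(1, 2, 3), 5);     -- raises the error under ANSI mode
SELECT try_element_at(array(1, 2, 3), 5); -- returns NULL instead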
INVALID_BITMAP_POSITION
The 0-indexed bitmap position <bitPosition> is out of bounds. The bitmap has <bitmapNumBits> bits (<bitmapNumBytes> bytes).
INVALID_BOUNDARY
SQLSTATE: none assigned
The boundary <boundary> is invalid: <invalidValue>.
For more details see INVALID_BOUNDARY
INVALID_BYTE_STRING
SQLSTATE: none assigned
The expected format is ByteString, but was <unsupported> (<class>).
INVALID_COLUMN_NAME_AS_PATH
The datasource <datasource> cannot save the column <columnName> because its name contains characters that are not allowed in file paths. Please use an alias to rename it.
INVALID_COLUMN_OR_FIELD_DATA_TYPE
Column or field <name> is of type <type> while it's required to be <expectedType>.
INVALID_DEFAULT_VALUE
SQLSTATE: none assigned
Failed to execute the <statement> command because the destination column or variable <colName> has a DEFAULT value <defaultValue>:
For more details see INVALID_DEFAULT_VALUE
INVALID_DEST_CATALOG
The destination catalog of the SYNC command must be within Unity Catalog. Found <catalog>.
INVALID_DRIVER_MEMORY
System memory <systemMemory> must be at least <minSystemMemory>. Please increase the heap size using the --driver-memory option or "<config>" in the Spark configuration.
INVALID_ESC
SQLSTATE: none assigned
Found an invalid escape string: <invalidEscape>. The escape string must contain only one character.
INVALID_ESCAPE_CHAR
SQLSTATE: none assigned
EscapeChar should be a string literal of length one, but got <sqlExpr>.
INVALID_EXECUTOR_MEMORY
Executor memory <executorMemory> must be at least <minSystemMemory>. Please increase executor memory using the --executor-memory option or "<config>" in the Spark configuration.
INVALID_EXTRACT_BASE_FIELD_TYPE
Can't extract a value from <base>. Need a complex type [STRUCT, ARRAY, MAP] but got <other>.
INVALID_FRACTION_OF_SECOND
The fraction of seconds must be zero. The valid range is [0, 60]. If necessary set <ansiConfig> to "false" to bypass this error.
INVALID_HIVE_COLUMN_NAME
SQLSTATE: none assigned
Cannot create the table <tableName> having the nested column <columnName> whose name contains invalid characters <invalidChars> in the Hive metastore.
INVALID_IDENTIFIER
The identifier <ident> is invalid. Please consider quoting it with backquotes as `<ident>`.
INVALID_INDEX_OF_ZERO
The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).
INVALID_INLINE_TABLE
SQLSTATE: none assigned
Invalid inline table.
For more details see INVALID_INLINE_TABLE
INVALID_JSON_SCHEMA_MAP_TYPE
Input schema <jsonSchema> can only contain STRING as a key type for a MAP.
INVALID_KRYO_SERIALIZER_BUFFER_SIZE
The value of the config "<bufferSizeConfKey>" must be less than 2048 MiB, but got <bufferSizeConfValue> MiB.
INVALID_LAMBDA_FUNCTION_CALL
SQLSTATE: none assigned
Invalid lambda function call.
For more details see INVALID_LAMBDA_FUNCTION_CALL
INVALID_LATERAL_JOIN_TYPE
The <joinType> JOIN with LATERAL correlation is not allowed because an OUTER subquery cannot correlate to its join partner. Remove the LATERAL correlation, or use an INNER JOIN or LEFT OUTER JOIN instead.
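A minimal sketch of an allowed INNER LATERAL correlation (tables t1 and t2 and their columns are hypothetical):
SELECT t1.id, s.max_v
FROM t1, LATERAL (SELECT max(v) AS max_v FROM t2 WHERE t2.id = t1.id) AS s;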
INVALID_LIMIT_LIKE_EXPRESSION
SQLSTATE: none assigned
The limit-like expression <expr> is invalid.
For more details see INVALID_LIMIT_LIKE_EXPRESSION
INVALID_NON_DETERMINISTIC_EXPRESSIONS
SQLSTATE: none assigned
The operator expects a deterministic expression, but the actual expression is <sqlExprs>.
INVALID_NUMERIC_LITERAL_RANGE
SQLSTATE: none assigned
Numeric literal <rawStrippedQualifier> is outside the valid range for <typeName>, with a minimum value of <minValue> and a maximum value of <maxValue>. Please adjust the value accordingly.
INVALID_OBSERVED_METRICS
SQLSTATE: none assigned
Invalid observed metrics.
For more details see INVALID_OBSERVED_METRICS
INVALID_PANDAS_UDF_PLACEMENT
The group aggregate pandas UDF <functionList> cannot be invoked together with other, non-pandas aggregate functions.
INVALID_PARAMETER_MARKER_VALUE
An invalid parameter mapping was provided:
For more details see INVALID_PARAMETER_MARKER_VALUE
INVALID_PARAMETER_VALUE
The value of parameter(s) <parameter> in <functionName> is invalid:
For more details see INVALID_PARAMETER_VALUE
INVALID_PARTITION_OPERATION
SQLSTATE: none assigned
The partition command is invalid.
For more details see INVALID_PARTITION_OPERATION
INVALID_PIPELINE_ID
Pipeline id <pipelineId> is not valid.
A pipeline id should be a UUID in the format 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'.
INVALID_PROPERTY_VALUE
<value> is an invalid property value; please use quotes, e.g. SET <key>=<value>
INVALID_S3_COPY_CREDENTIALS
COPY INTO credentials must include AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN.
INVALID_SCHEMA
The input schema <inputSchema> is not a valid schema string.
For more details see INVALID_SCHEMA
INVALID_SET_SYNTAX
The expected format is 'SET', 'SET key', or 'SET key=value'. If you want to include special characters in the key, or include a semicolon in the value, please use backquotes, e.g., SET `key`=`value`.
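For instance, mirroring the backquoted form from the message (the key and value here are hypothetical):
SET `my.key`=`value;with;semicolons`;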
INVALID_SOURCE_CATALOG
The source catalog must not be within Unity Catalog for the SYNC command. Found <catalog>.
INVALID_SQL_ARG
SQLSTATE: none assigned
The argument <name> of sql() is invalid. Consider replacing it with a SQL literal.
INVALID_STAGING_PATH_IN_STAGING_ACCESS_QUERY
Invalid staging path in staging <operation> query: <path>
INVALID_TEMP_OBJ_REFERENCE
SQLSTATE: none assigned
Cannot create the persistent object <objName> of the type <obj> because it references the temporary object <tempObjName> of the type <tempObj>. Please make the temporary object <tempObjName> persistent, or make the persistent object <objName> temporary.
INVALID_TIMESTAMP_FORMAT
The provided timestamp <timestamp> doesn't match the expected syntax <format>.
INVALID_TIME_TRAVEL_TIMESTAMP_EXPR
SQLSTATE: none assigned
The time travel timestamp expression <expr> is invalid.
For more details see INVALID_TIME_TRAVEL_TIMESTAMP_EXPR
INVALID_UDF_IMPLEMENTATION
SQLSTATE: none assigned
Function <funcName> does not implement ScalarFunction or AggregateFunction.
INVALID_UPGRADE_SYNTAX
<command> <supportedOrNot> the source table is in Hive Metastore and the destination table is in Unity Catalog.
INVALID_URL
SQLSTATE: none assigned
The URL is invalid: <url>. If necessary set <ansiConfig> to "false" to bypass this error.
INVALID_UUID
Input <uuidInput> is not a valid UUID.
The UUID should be in the format 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'.
Please check the format of the UUID.
INVALID_VIEW_TEXT
SQLSTATE: none assigned
The view <viewName> cannot be displayed due to invalid view text: <viewText>. This may be caused by an unauthorized modification of the view or an incorrect query syntax. Please check your query syntax and verify that the view has not been tampered with.
INVALID_WHERE_CONDITION
The WHERE condition <condition> contains invalid expressions: <expressionList>.
Rewrite the query to avoid window functions, aggregate functions, and generator functions in the WHERE clause.
INVALID_WINDOW_SPEC_FOR_AGGREGATION_FUNC
SQLSTATE: none assigned
Cannot specify ORDER BY or a window frame for <aggFunc>.
INVALID_WRITE_DISTRIBUTION
SQLSTATE: none assigned
The requested write distribution is invalid.
For more details see INVALID_WRITE_DISTRIBUTION
JOIN_CONDITION_IS_NOT_BOOLEAN_TYPE
SQLSTATE: none assigned
The join condition <joinCondition> has the invalid type <conditionType>, expected "BOOLEAN".
KINESIS_FETCHED_SHARD_LESS_THAN_TRACKED_SHARD
The minimum fetched shardId from Kinesis (<fetchedShardId>) is less than the minimum tracked shardId (<trackedShardId>).
This is unexpected and occurs when a Kinesis stream is deleted and recreated with the same name, and a streaming query using this Kinesis stream is restarted using an existing checkpoint location.
Restart the streaming query with a new checkpoint location, or create a stream with a new name.
KRYO_BUFFER_OVERFLOW
SQLSTATE: none assigned
Kryo serialization failed: <exceptionMsg>. To avoid this, increase the "<bufferSizeConfKey>" value.
LOCAL_MUST_WITH_SCHEMA_FILE
SQLSTATE: none assigned
LOCAL must be used together with the schema of file, but got: <actualSchema>.
LOCATION_ALREADY_EXISTS
Cannot name the managed table as <identifier>, as its associated location <location> already exists. Please pick a different table name, or remove the existing location first.
MALFORMED_PROTOBUF_MESSAGE
SQLSTATE: none assigned
Malformed Protobuf messages were detected in message deserialization. Parse Mode: <failFastMode>. To process malformed Protobuf messages as a null result, try setting the option 'mode' to 'PERMISSIVE'.
MALFORMED_RECORD_IN_PARSING
Malformed records were detected in record parsing: <badRecord>.
Parse Mode: <failFastMode>. To process malformed records as a null result, try setting the option 'mode' to 'PERMISSIVE'.
For more details see MALFORMED_RECORD_IN_PARSING
MATERIALIZED_VIEW_MESA_REFRESH_WITHOUT_PIPELINE_ID
Cannot <refreshType> the materialized view because it predates having a pipelineId. To enable <refreshType>, please drop and recreate the materialized view.
MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED
The materialized view operation <operation> is not allowed:
For more details see MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED
MATERIALIZED_VIEW_OUTPUT_WITHOUT_EXPLICIT_ALIAS
SQLSTATE: none assigned
Output expression <expression> in a materialized view must be explicitly aliased.
MAX_RECORDS_PER_FETCH_INVALID_FOR_KINESIS_SOURCE
maxRecordsPerFetch needs to be a positive integer less than or equal to <kinesisRecordLimit>
MERGE_CARDINALITY_VIOLATION
The ON search condition of the MERGE statement matched a single row from the target table with multiple rows of the source table.
This could result in the target row being operated on more than once with an update or delete operation and is not allowed.
MISSING_AGGREGATION
The non-aggregating expression <expression> is based on columns which are not participating in the GROUP BY clause.
Add the columns or the expression to the GROUP BY, aggregate the expression, or use <expressionAnyValue> if you do not care which of the values within a group is returned.
For more details see MISSING_AGGREGATION
MISSING_ATTRIBUTES
SQLSTATE: none assigned
Resolved attribute(s) <missingAttributes> missing from <input> in operator <operator>.
For more details see MISSING_ATTRIBUTES
MISSING_CONNECTION_OPTION
Connections of type '<connectionType>' must include the following option(s): <requiredOptions>.
MISSING_GROUP_BY
The query does not include a GROUP BY clause. Add GROUP BY, or turn the aggregates into window functions using OVER clauses.
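To illustrate with a hypothetical table emp, the aggregate needs either a GROUP BY or an OVER clause:
SELECT dept, count(*) FROM emp;                          -- raises MISSING_GROUP_BY
SELECT dept, count(*) FROM emp GROUP BY dept;            -- aggregate with GROUP BY
SELECT dept, count(*) OVER (PARTITION BY dept) FROM emp; -- window-function alternative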
MISSING_PARAMETER_FOR_KAFKA
SQLSTATE: none assigned
Parameter <parameterName> is required for Kafka, but is not specified in <functionName>.
MISSING_PARAMETER_FOR_ROUTINE
Parameter <parameterName> is required, but is not specified in <functionName>.
MULTIPLE_LOAD_PATH
Databricks Delta does not support multiple input paths in the load() API.
paths: <pathList>. To build a single DataFrame by loading multiple paths from the same Delta table, please load the root path of the Delta table with the corresponding partition filters. If the multiple paths are from different Delta tables, please use Dataset's union()/unionByName() APIs to combine the DataFrames generated by separate load() API calls.
MULTI_SOURCES_UNSUPPORTED_FOR_EXPRESSION
SQLSTATE: none assigned
The expression <expr> does not support more than one source.
MULTI_UDF_INTERFACE_ERROR
SQLSTATE: none assigned
Not allowed to implement multiple UDF interfaces, UDF class <className>.
MV_ST_ALTER_QUERY_INCORRECT_BACKING_TYPE
The input query expects a <expectedType>, but the underlying table is a <givenType>.
NAMED_PARAMETERS_NOT_SUPPORTED
Named parameters are not supported for function <functionName>; please retry the query with positional arguments to the function call instead.
NAMED_PARAMETERS_NOT_SUPPORTED_FOR_SQL_UDFS
SQLSTATE: none assigned
Cannot call function <functionName> because named argument references for SQL UDFs are not supported. In this case, the named argument reference was <argument>.
NAMED_PARAMETER_SUPPORT_DISABLED
SQLSTATE: none assigned
Cannot call function <functionName> because named argument references are not enabled here. In this case, the named argument reference was <argument>. Set "spark.sql.allowNamedFunctionArguments" to "true" to turn on the feature.
NAMESPACE_ALREADY_EXISTS
Cannot create namespace <nameSpaceName> because it already exists.
Choose a different name, drop the existing namespace, or add the IF NOT EXISTS clause to tolerate a pre-existing namespace.
NAMESPACE_NOT_EMPTY
Cannot drop the namespace <nameSpaceNameName> because it contains objects.
Use DROP NAMESPACE … CASCADE to drop the namespace and all its objects.
NAMESPACE_NOT_FOUND
The namespace <nameSpaceName> cannot be found. Verify the spelling and correctness of the namespace.
If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.
To tolerate the error on drop use DROP NAMESPACE IF EXISTS.
NESTED_AGGREGATE_FUNCTION
It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.
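For example, hoisting the inner aggregate into a subquery (the table sales and column region are hypothetical):
SELECT max(count(*)) FROM sales GROUP BY region; -- raises NESTED_AGGREGATE_FUNCTION
SELECT max(cnt) FROM (SELECT count(*) AS cnt FROM sales GROUP BY region); -- valid rewrite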
NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION
When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.
NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION
When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.
NON_LAST_NOT_MATCHED_BY_TARGET_CLAUSE_OMIT_CONDITION
When there is more than one NOT MATCHED [BY TARGET] clause in a MERGE statement, only the last NOT MATCHED [BY TARGET] clause can omit the condition.
NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING
SQLSTATE: none assigned
Window function is not supported in <windowFunc> (as column <columnName>) on streaming DataFrames/Datasets. Structured Streaming only supports time-window aggregation using the WINDOW function. (window specification: <windowSpec>)
NOT_ALLOWED_IN_FROM
SQLSTATE: none assigned
Not allowed in the FROM clause:
For more details see NOT_ALLOWED_IN_FROM
NOT_A_CONSTANT_STRING
The expression <expr> used for the routine or clause <name> must be a constant STRING which is NOT NULL.
For more details see NOT_A_CONSTANT_STRING
NOT_A_PARTITIONED_TABLE
SQLSTATE: none assigned
Operation <operation> is not allowed for <tableIdentWithDB> because it is not a partitioned table.
NOT_A_SCALAR_FUNCTION
<functionName> appears as a scalar expression here, but the function was defined as a table function. Please update the query to move the function call into the FROM clause, or redefine <functionName> as a scalar function instead.
NOT_A_TABLE_FUNCTION
<functionName> appears as a table function here, but the function was defined as a scalar function. Please update the query to move the function call outside the FROM clause, or redefine <functionName> as a table function instead.
NOT_NULL_CONSTRAINT_VIOLATION
Assigning a NULL is not allowed here.
For more details see NOT_NULL_CONSTRAINT_VIOLATION
NOT_SUPPORTED_CHANGE_COLUMN
SQLSTATE: none assigned
ALTER TABLE ALTER/CHANGE COLUMN is not supported for changing <table>'s column <originName> with type <originType> to <newName> with type <newType>.
NOT_SUPPORTED_COMMAND_WITHOUT_HIVE_SUPPORT
SQLSTATE: none assigned
<cmd> is not supported. If you want to enable it, please set "spark.sql.catalogImplementation" to "hive".
NOT_SUPPORTED_IN_JDBC_CATALOG
Command not supported in the JDBC catalog:
For more details see NOT_SUPPORTED_IN_JDBC_CATALOG
NO_DEFAULT_COLUMN_VALUE_AVAILABLE
Can't determine the default value for <colName> since it is not nullable and has no default value.
NO_HANDLER_FOR_UDAF
SQLSTATE: none assigned
No handler for UDAF '<functionName>'. Use sparkSession.udf.register(…) instead.
NO_SQL_TYPE_IN_PROTOBUF_SCHEMA
SQLSTATE: none assigned
Cannot find <catalystFieldPath> in the Protobuf schema.
NUMERIC_OUT_OF_SUPPORTED_RANGE
The value <value> cannot be interpreted as a numeric since it has more than 38 digits.
NUMERIC_VALUE_OUT_OF_RANGE
<value> cannot be represented as Decimal(<precision>, <scale>). If necessary set <config> to "false" to bypass this error, and return NULL instead.
NUM_COLUMNS_MISMATCH
<operator> can only be performed on inputs with the same number of columns, but the first input has <firstNumColumns> columns and the <invalidOrdinalNum> input has <invalidNumColumns> columns.
NUM_RECORDS_MISMATCH
SQLSTATE: none assigned
Failed to validate the number of records in <operation>. Added <numAddedRecords> records and removed <numRemovedRecords> records. This is a bug. Please contact Databricks support.
NUM_TABLE_VALUE_ALIASES_MISMATCH
SQLSTATE: none assigned
The number of given aliases does not match the number of output columns. Function name: <funcName>; number of aliases: <aliasesNum>; number of output columns: <outColsNum>.
ONLY_SECRET_FUNCTION_SUPPORTED_HERE
SQLSTATE: none assigned
Calling function <functionName> is not supported in this <location>; <supportedFunctions> supported here.
ORDER_BY_POS_OUT_OF_RANGE
ORDER BY position <index> is not in the select list (valid range is [1, <size>]).
PARTITIONS_ALREADY_EXIST
Cannot ADD or RENAME TO partition(s) <partitionList> in table <tableName> because they already exist.
Choose a different name, drop the existing partition, or add the IF NOT EXISTS clause to tolerate a pre-existing partition.
PARTITIONS_NOT_FOUND
The partition(s) <partitionList> cannot be found in table <tableName>.
Verify the partition specification and table name.
To tolerate the error on drop use ALTER TABLE … DROP IF EXISTS PARTITION.
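For example (the table events and the partition spec are hypothetical):
ALTER TABLE events DROP IF EXISTS PARTITION (dt = '2024-01-01');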
PARTITION_METADATA
<action> is not allowed on table <tableName> since storing partition metadata is not supported in Unity Catalog.
PATH_ALREADY_EXISTS
Path <outputPath> already exists. Set mode as "overwrite" to overwrite the existing path.
PIVOT_VALUE_DATA_TYPE_MISMATCH
Invalid pivot value '<value>': the value data type <valueType> does not match the pivot column data type <pivotType>.
PLAN_VALIDATION_FAILED_RULE_EXECUTOR
SQLSTATE: none assigned
The input plan of <ruleExecutor> is invalid: <reason>
PLAN_VALIDATION_FAILED_RULE_IN_BATCH
SQLSTATE: none assigned
Rule <rule> in batch <batch> generated an invalid plan: <reason>
PROTOBUF_DESCRIPTOR_FILE_NOT_FOUND
SQLSTATE: none assigned
Error reading Protobuf descriptor file at path: <filePath>.
PROTOBUF_FIELD_MISSING
SQLSTATE: none assigned
Searching for <field> in the Protobuf schema at <protobufSchema> gave <matchSize> matches. Candidates: <matches>.
PROTOBUF_FIELD_MISSING_IN_SQL_SCHEMA
SQLSTATE: none assigned
Found <field> in the Protobuf schema, but there is no match in the SQL schema.
PROTOBUF_JAVA_CLASSES_NOT_SUPPORTED
SQLSTATE: none assigned
Java classes are not supported for <protobufFunction>. Contact Databricks Support about alternate options.
PROTOBUF_MESSAGE_NOT_FOUND
SQLSTATE: none assigned
Unable to locate Message <messageName> in Descriptor.
PS_FETCH_RETRY_EXCEPTION
A task in the PubSub fetch stage cannot be retried. Partition <partitionInfo> in stage <stageInfo>, TID <taskId>.
PS_INVALID_UNSAFE_ROW_CONVERSION_FROM_PROTO
Invalid UnsafeRow to decode to PubSubMessageMetadata; the desired proto schema is: <protoSchema>. The input UnsafeRow might be corrupted: <unsafeRow>.
PS_MOVING_CHECKPOINT_FAILURE
Failed to move raw data checkpoint files from <src> to destination directory: <dest>.
PS_MULTIPLE_FAILED_EPOCHS
PubSub stream cannot be started as there is more than one failed fetch: <failedEpochs>.
PS_OPTION_NOT_IN_BOUNDS
<key> must be within the following bounds (<min>, <max>), exclusive of both bounds.
PS_PROVIDE_CREDENTIALS_WITH_OPTION
Shared clusters do not support authentication with instance profiles. Provide credentials to the stream directly using .option().
PS_SPARK_SPECULATION_NOT_SUPPORTED
The PubSub source connector is only available in clusters with spark.speculation disabled.
QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY
Unable to access the referenced table because a previously assigned column mask is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:
For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY
QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY
Unable to access the referenced table because a previously assigned row level security policy is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:
For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY
QUERY_RESULT_PARSE_AS_ARROW_FAILED
An internal error occurred while parsing the result as an Arrow dataset.
QUERY_RESULT_READ_FROM_CLOUD_STORE_FAILED
An internal error occurred while downloading the result set from the cloud store.
QUERY_RESULT_WRITE_TO_CLOUD_STORE_FAILED
An internal error occurred while uploading the result set to the cloud store.
READ_FILES_AMBIGUOUS_ROUTINE_PARAMETERS
The invocation of function <functionName> has <parameterName> and <alternativeName> set, which are aliases of each other. Please set only one of them.
READ_TVF_UNEXPECTED_REQUIRED_PARAMETER
The function <functionName> required parameter <parameterName> must be assigned at position <expectedPos> without the name.
RECURSIVE_PROTOBUF_SCHEMA
SQLSTATE: none assigned
Found a recursive reference in the Protobuf schema, which cannot be processed by Spark by default: <fieldDescriptor>. Try setting the option recursive.fields.max.depth to a value between 0 and 10. Going beyond 10 levels of recursion is not allowed.
REF_DEFAULT_VALUE_IS_NOT_ALLOWED_IN_PARTITION
SQLSTATE: none assigned
References to DEFAULT column values are not allowed within the PARTITION clause.
RELATION_LARGER_THAN_8G
SQLSTATE: none assigned
Cannot build a <relationName> that is larger than 8G.
REMOTE_FUNCTION_HTTP_FAILED_ERROR
The remote HTTP request failed with code <errorCode>, and error message <errorMessage>
REMOTE_FUNCTION_HTTP_RESULT_PARSE_ERROR
Could not parse the JSON result from the remote HTTP response; the error message is <errorMessage>
REMOTE_FUNCTION_HTTP_RETRY_TIMEOUT
The remote request failed after retrying <N> times; the last failed HTTP error code was <errorCode> and the message was <errorMessage>
REQUIRED_PARAMETER_NOT_FOUND
Cannot invoke function <functionName> because the parameter named <parameterName> is required, but the function call did not supply a value. Please update the function call to supply an argument value (either positionally at index <index> or by name) and retry the query.
REQUIRES_SINGLE_PART_NAMESPACE
<sessionCatalog> requires a single-part namespace, but got <namespace>.
RESERVED_CDC_COLUMNS_ON_WRITE
The write contains reserved columns <columnList> that are used internally as metadata for Change Data Feed. To write to the table, either rename/drop these columns or disable Change Data Feed on the table by setting <config> to false.
ROUTINE_ALREADY_EXISTS
Cannot create the function <routineName> because it already exists.
Choose a different name, drop or replace the existing function, or add the IF NOT EXISTS clause to tolerate a pre-existing function.
ROUTINE_NOT_FOUND
The function <routineName> cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP FUNCTION IF EXISTS.
ROUTINE_PARAMETER_NOT_FOUND
The function <functionName> does not support the parameter <parameterName> specified at position <pos>.<suggestion>
ROUTINE_USES_SYSTEM_RESERVED_CLASS_NAME
The function <routineName> cannot be created because the specified classname '<className>' is reserved for system use. Please rename the class and try again.
ROW_LEVEL_SECURITY_CHECK_CONSTRAINT_UNSUPPORTED
Creating a CHECK constraint on table <tableName> with row level security policies is not supported.
ROW_LEVEL_SECURITY_DUPLICATE_COLUMN_NAME
A <statementType> statement attempted to assign a row level security policy to a table, but two or more referenced columns had the same name <columnName>, which is invalid.
ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED
Row level security policies for <tableName> are not supported:
For more details see ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED
ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_SOURCE
MERGE INTO operations do not support row level security policies in source table <tableName>.
ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_TARGET
MERGE INTO operations do not support writing into table <tableName> with row level security policies.
ROW_LEVEL_SECURITY_MULTI_PART_COLUMN_NAME
This statement attempted to assign a row level security policy to a table, but the referenced column <columnName> had multiple name parts, which is invalid.
ROW_LEVEL_SECURITY_REQUIRE_UNITY_CATALOG
Row level security policies are only supported in Unity Catalog.
ROW_LEVEL_SECURITY_TABLE_CLONE_SOURCE_NOT_SUPPORTED
<mode> clone from table <tableName> with row level security policy is not supported.
ROW_LEVEL_SECURITY_TABLE_CLONE_TARGET_NOT_SUPPORTED
<mode> clone to table <tableName> with row level security policy is not supported.
ROW_LEVEL_SECURITY_UNSUPPORTED_PROVIDER
Failed to execute <statementType> command because assigning row level security policy is not supported for the target data source with table provider: "<provider>".
RULE_ID_NOT_FOUND
Could not find an id for the rule name "<ruleName>". Please modify RuleIdCollection.scala if you are adding a new rule.
SCALAR_SUBQUERY_IS_IN_GROUP_BY_OR_AGGREGATE_FUNCTION
SQLSTATE: none assigned
The correlated scalar subquery '<sqlExpr>' is neither present in GROUP BY, nor in an aggregate function. Add it to GROUP BY using ordinal position, or wrap it in first() (or first_value) if you don't care which value you get.
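A sketch of the first() workaround (tables t1 and t2 and their columns are hypothetical):
SELECT k, first((SELECT max(v) FROM t2 WHERE t2.k = t1.k))
FROM t1
GROUP BY k;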
SCHEMA_ALREADY_EXISTS
Cannot create schema <schemaName>
because it already exists.
Choose a different name, drop the existing schema, or add the IF NOT EXISTS clause to tolerate pre-existing schema.
SCHEMA_NOT_EMPTY
Cannot drop a schema <schemaName>
because it contains objects.
Use DROP SCHEMA … CASCADE to drop the schema and all its objects.
SCHEMA_NOT_FOUND
The schema <schemaName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.
To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SCHEMA_REGISTRY_CONFIGURATION_ERROR
SQLSTATE: none assigned
Schema from schema registry could not be initialized. <reason>
.
SECOND_FUNCTION_ARGUMENT_NOT_INTEGER
The second argument of <functionName>
function needs to be an integer.
SECRET_FUNCTION_INVALID_LOCATION
SQLSTATE: none assigned
Cannot execute <commandType>
command with one or more non-encrypted references to the SECRET function; please encrypt the result of each such function call with AES_ENCRYPT and try the command again
SEED_EXPRESSION_IS_UNFOLDABLE
SQLSTATE: none assigned
The seed expression <seedExpr>
of the expression <exprWithSeed>
must be foldable.
SPECIFY_BUCKETING_IS_NOT_ALLOWED
SQLSTATE: none assigned
Cannot specify bucketing information if the table schema is not specified when creating and will be inferred at runtime.
SPECIFY_PARTITION_IS_NOT_ALLOWED
SQLSTATE: none assigned
It is not allowed to specify partition columns when the table schema is not defined. When the table schema is not provided, schema and partition columns will be inferred.
SQL_CONF_NOT_FOUND
SQLSTATE: none assigned
The SQL config <sqlConf>
cannot be found. Please verify that the config exists.
STAGING_PATH_CURRENTLY_INACCESSIBLE
Transient error while accessing target staging path <path>
, please try again in a few minutes.
STAR_GROUP_BY_POS
Star (*) is not allowed in a select list when GROUP BY an ordinal position is used.
STATIC_PARTITION_COLUMN_IN_INSERT_COLUMN_LIST
SQLSTATE: none assigned
Static partition column <staticName>
is also specified in the column list.
STREAMING_FROM_MATERIALIZED_VIEW
SQLSTATE: none assigned
Cannot stream from Materialized View <viewName>
. Streaming from Materialized Views is not supported.
STREAMING_TABLE_NEEDS_REFRESH
Streaming table <tableName>
needs to be refreshed. Please run CREATE OR REFRESH STREAMING TABLE <tableName>
AS to update the table.
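For example, to refresh a hypothetical streaming table named clicks defined over a raw_clicks source:
CREATE OR REFRESH STREAMING TABLE clicks
AS SELECT * FROM STREAM(raw_clicks);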
STREAMING_TABLE_NOT_SUPPORTED
Streaming Tables can only be created and refreshed in Delta Live Tables and Databricks SQL Warehouses.
STREAMING_TABLE_OPERATION_INTERNAL_ERROR
Internal error during operation <operation>
on Streaming Table. Please file a bug report.
STREAMING_TABLE_OPERATION_NOT_ALLOWED
The operation <operation>
is not allowed:
For more details see STREAMING_TABLE_OPERATION_NOT_ALLOWED
STREAMING_TABLE_QUERY_INVALID
Streaming table <tableName>
can only be created from a streaming query. Please add the STREAM keyword to your FROM clause to turn this relation into a streaming query.
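For example (table names hypothetical), adding the STREAM keyword turns the relation into a streaming query:
CREATE OR REFRESH STREAMING TABLE clicks
AS SELECT * FROM STREAM(raw_clicks);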
STREAM_FAILED
SQLSTATE: none assigned
Query [id = <id>
, runId = <runId>
] terminated with exception: <message>
STREAM_NOT_FOUND_FOR_KINESIS_SOURCE
Kinesis stream <streamName>
in <region>
not found.
Please start a new query pointing to the correct stream name.
SUM_OF_LIMIT_AND_OFFSET_EXCEEDS_MAX_INT
SQLSTATE: none assigned
The sum of the LIMIT clause and the OFFSET clause must not be greater than the maximum 32-bit integer value (2,147,483,647) but found limit = <limit>
, offset = <offset>
.
SYNC_METADATA_NOT_SUPPORTED
Repair table sync metadata command is only supported for Unity Catalog tables.
SYNC_SRC_TARGET_TBL_NOT_SAME
Source table name <srcTable>
must be same as destination table name <destTable>
.
TABLE_OR_VIEW_ALREADY_EXISTS
Cannot create table or view <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, add the IF NOT EXISTS clause to tolerate pre-existing objects, or add the OR REFRESH clause to refresh the existing streaming table.
TABLE_OR_VIEW_NOT_FOUND
The table or view <relationName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.
For more details see TABLE_OR_VIEW_NOT_FOUND
TABLE_VALUED_ARGUMENTS_NOT_YET_IMPLEMENTED_FOR_SQL_FUNCTIONS
SQLSTATE: none assigned
Cannot <action>
SQL user-defined function <functionName>
with TABLE arguments because this functionality is not yet implemented.
TABLE_VALUED_FUNCTION_FAILED_TO_ANALYZE_IN_PYTHON
SQLSTATE: none assigned
Failed to analyze the Python user defined table function: <msg>
TABLE_VALUED_FUNCTION_TOO_MANY_TABLE_ARGUMENTS
SQLSTATE: none assigned
There are too many table arguments for the table-valued function. One table argument is allowed, but got: <num>
. If you want to allow multiple table arguments, please set “spark.sql.allowMultipleTableArguments.enabled” to “true”.
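For example, a session-level sketch of the setting mentioned above:
SET spark.sql.allowMultipleTableArguments.enabled = true;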
TABLE_WITH_ID_NOT_FOUND
Table with ID <tableId>
cannot be found. Verify the correctness of the UUID.
TEMP_TABLE_OR_VIEW_ALREADY_EXISTS
Cannot create the temporary view <relationName>
because it already exists.
Choose a different name, drop or replace the existing view, or add the IF NOT EXISTS clause to tolerate pre-existing views.
TEMP_VIEW_NAME_TOO_MANY_NAME_PARTS
CREATE TEMPORARY VIEW or the corresponding Dataset APIs only accept single-part view names, but got: <actualName>
.
UC_CATALOG_NAME_NOT_PROVIDED
For Unity Catalog, please specify the catalog name explicitly. E.g. SHOW GRANT your.address@email.com
ON CATALOG main.
UC_COMMAND_NOT_SUPPORTED
The command(s): <commandName>
are not supported in Unity Catalog.
For more details see UC_COMMAND_NOT_SUPPORTED
UC_DATASOURCE_NOT_SUPPORTED
Data source format <dataSourceFormatName>
is not supported in Unity Catalog.
UC_EXTERNAL_VOLUME_MISSING_LOCATION
SQLSTATE: none assigned
LOCATION clause must be present for external volume. Please check the syntax ‘CREATE EXTERNAL VOLUME … LOCATION …’ for creating an external volume.
UC_FILE_SCHEME_FOR_TABLE_CREATION_NOT_SUPPORTED
Creating table in Unity Catalog with file scheme <schemeName>
is not supported.
Instead, please create a federated data source connection using the CREATE CONNECTION command for the same table provider, then create a catalog based on the connection with a CREATE FOREIGN CATALOG command to reference the tables therein.
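A minimal sketch of that flow, with all names (connection, host, credentials, catalog) hypothetical:
CREATE CONNECTION mysql_conn TYPE mysql
OPTIONS (host 'my-host.example.com', port '3306', user 'reader', password secret('my_scope', 'my_key'));
CREATE FOREIGN CATALOG mysql_cat USING CONNECTION mysql_conn;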
UC_INVALID_DEPENDENCIES
Dependencies of <viewName>
are recorded as <storedDeps>
while being parsed as <parsedDeps>
. This likely occurred through improper use of a non-SQL API. You can repair dependencies in Databricks Runtime by running ALTER VIEW <viewName>
AS <viewText>
.
UC_LOCATION_FOR_MANAGED_VOLUME_NOT_SUPPORTED
SQLSTATE: none assigned
Managed volume does not accept LOCATION clause. Please check the syntax ‘CREATE VOLUME …’ for creating a managed volume.
UC_VOLUME_NOT_FOUND
Volume <name>
does not exist. Please use ‘SHOW VOLUMES’ to list available volumes.
UDF_MAX_COUNT_EXCEEDED
Exceeded query-wide UDF limit of <maxNumUdfs>
UDFs (limited during public preview). Found <numUdfs>
. The UDFs were: <udfNames>
.
UDF_PYSPARK_UNSUPPORTED_TYPE
PySpark UDF <udf>
(<eval-type>) is not supported on clusters in Shared access mode.
UDF_UNSUPPORTED_PARAMETER_DEFAULT_VALUE
Parameter default value is not supported for user-defined <functionType>
function.
UDTF_ALIAS_NUMBER_MISMATCH
SQLSTATE: none assigned
The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF. Expected <aliasesSize>
aliases, but got <aliasesNames>
. Please ensure that the number of aliases provided matches the number of columns output by the UDTF.
UNABLE_TO_CONVERT_TO_PROTOBUF_MESSAGE_TYPE
SQLSTATE: none assigned
Unable to convert SQL type <toType>
to Protobuf type <protobufType>
.
UNBOUND_SQL_PARAMETER
Found the unbound parameter: <name>
. Please, fix args
and provide a mapping of the parameter to a SQL literal.
UNCLOSED_BRACKETED_COMMENT
Found an unclosed bracketed comment. Please, append */ at the end of the comment.
UNEXPECTED_INPUT_TYPE
Parameter <paramIndex>
of function <functionName>
requires the <requiredType>
type, however <inputSql>
has the type <inputType>
.
UNEXPECTED_OPERATOR_IN_STREAMING_VIEW
Unexpected operator <op>
in the CREATE VIEW statement as a streaming source.
A streaming view query must consist only of SELECT, WHERE, and UNION ALL operations.
UNEXPECTED_POSITIONAL_ARGUMENT
Cannot invoke function <functionName>
because it contains positional argument(s) following the named argument assigned to <parameterName>
; please rearrange them so the positional arguments come first and then retry the query.
UNKNOWN_FIELD_EXCEPTION
Encountered unknown fields during parsing: <unknownFieldBlob>
, which can be fixed by an automatic retry: <isRetryable>
For more details see UNKNOWN_FIELD_EXCEPTION
UNKNOWN_POSITIONAL_ARGUMENT
The invocation of function <functionName>
contains an unknown positional argument <sqlExpr>
at position <pos>
. This is invalid.
UNKNOWN_PROTOBUF_MESSAGE_TYPE
SQLSTATE: none assigned
Attempting to treat <descriptorName>
as a Message, but it was <containingType>
.
UNPIVOT_REQUIRES_ATTRIBUTES
UNPIVOT requires all given <given>
expressions to be columns when no <empty>
expressions are given. These are not columns: [<expressions>
].
UNPIVOT_REQUIRES_VALUE_COLUMNS
At least one value column needs to be specified for UNPIVOT; all columns were specified as ids.
UNPIVOT_VALUE_DATA_TYPE_MISMATCH
Unpivot value columns must share a least common type; some types do not: [<types>
].
UNPIVOT_VALUE_SIZE_MISMATCH
All unpivot value columns must have the same size as the number of value column names (<names>
).
UNRECOGNIZED_PARAMETER_NAME
Cannot invoke function <functionName>
because the function call included a named argument reference for the argument named <argumentName>
, but this function does not include any signature containing an argument with this name. Did you mean one of the following? [<proposal>
].
UNRESOLVABLE_TABLE_VALUED_FUNCTION
SQLSTATE: none assigned
Could not resolve <name>
to a table-valued function. Please make sure that <name>
is defined as a table-valued function and that all required parameters are provided correctly. If <name>
is not defined, please create the table-valued function before using it. For more information about defining table-valued functions, please refer to the Apache Spark documentation.
UNRESOLVED_ALL_IN_GROUP_BY
Cannot infer grouping columns for GROUP BY ALL based on the select clause. Please explicitly specify the grouping columns.
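For example, instead of GROUP BY ALL, list the grouping columns explicitly (table and column names hypothetical):
SELECT region, product, SUM(amount) AS total
FROM sales
GROUP BY region, product;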
UNRESOLVED_COLUMN
A column, variable, or function parameter with name <objectName>
cannot be resolved.
For more details see UNRESOLVED_COLUMN
UNRESOLVED_FIELD
A field with name <fieldName>
cannot be resolved with the struct-type column <columnPath>
.
For more details see UNRESOLVED_FIELD
UNRESOLVED_MAP_KEY
Cannot resolve column <objectName>
as a map key. If the key is a string literal, add the single quotes ‘’ around it.
For more details see UNRESOLVED_MAP_KEY
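For example, with a hypothetical map column settings, write the key as a quoted string literal so it is not resolved as a column:
SELECT settings['timeout'] FROM app_config;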
UNRESOLVED_ROUTINE
Cannot resolve function <routineName>
on search path <searchPath>
.
For more details see UNRESOLVED_ROUTINE
UNRESOLVED_USING_COLUMN_FOR_JOIN
USING column <colName>
cannot be resolved on the <side>
side of the join. The <side>
-side columns: [<suggestion>
].
UNSET_NONEXISTENT_PROPERTIES
SQLSTATE: none assigned
Attempted to unset non-existent properties [<properties>
] in table <table>
.
UNSUPPORTED_ADD_FILE
SQLSTATE: none assigned
ADD FILE is not supported.
For more details see UNSUPPORTED_ADD_FILE
UNSUPPORTED_CHAR_OR_VARCHAR_AS_STRING
SQLSTATE: none assigned
The char/varchar type can’t be used in the table schema. If you want Spark to treat them as string type, as in Spark 3.0 and earlier, please set “spark.sql.legacy.charVarcharAsString” to “true”.
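For example, a session-level sketch of the legacy setting mentioned above:
SET spark.sql.legacy.charVarcharAsString = true;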
UNSUPPORTED_COMMON_ANCESTOR_LOC_FOR_FILE_STREAM_SOURCE
The common ancestor of source path and sourceArchiveDir should be registered with UC.
If you see this error message, it’s likely that you register the source path and sourceArchiveDir in different external locations.
Please put them into a single external location.
UNSUPPORTED_CONSTRAINT_TYPE
Unsupported constraint type. Only <supportedConstraintTypes>
are supported
UNSUPPORTED_DATA_SOURCE_FOR_DIRECT_QUERY
SQLSTATE: none assigned
The direct query on files does not support the data source type: <className>
. Please try a different data source type or consider using a different query method.
UNSUPPORTED_DATA_TYPE_FOR_DATASOURCE
SQLSTATE: none assigned
The <format>
datasource doesn’t support the column <columnName>
of the type <columnType>
.
UNSUPPORTED_DEFAULT_VALUE
SQLSTATE: none assigned
DEFAULT column values are not supported.
For more details see UNSUPPORTED_DEFAULT_VALUE
UNSUPPORTED_DESERIALIZER
The deserializer is not supported:
For more details see UNSUPPORTED_DESERIALIZER
UNSUPPORTED_EXPRESSION_GENERATED_COLUMN
SQLSTATE: none assigned
Cannot create generated column <fieldName>
with generation expression <expressionStr>
because <reason>
.
UNSUPPORTED_EXPR_FOR_OPERATOR
SQLSTATE: none assigned
A query operator contains one or more unsupported expressions. Consider rewriting the query to avoid window functions, aggregate functions, and generator functions in the WHERE clause.
Invalid expressions: [<invalidExprSqls>
]
UNSUPPORTED_GROUPING_EXPRESSION
SQLSTATE: none assigned
grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup.
UNSUPPORTED_INITIAL_POSITION_AND_TRIGGER_PAIR_FOR_KINESIS_SOURCE
<trigger>
with initial position <initialPosition>
is not supported with the Kinesis source
UNSUPPORTED_INSERT
SQLSTATE: none assigned
Can’t insert into the target.
For more details see UNSUPPORTED_INSERT
UNSUPPORTED_MANAGED_TABLE_CREATION
Creating a managed table <tableName>
using datasource <dataSource>
is not supported. You need to use datasource DELTA or create an external table using CREATE EXTERNAL TABLE <tableName>
… USING <dataSource>
…
UNSUPPORTED_MERGE_CONDITION
SQLSTATE: none assigned
MERGE operation contains unsupported <condName>
condition.
For more details see UNSUPPORTED_MERGE_CONDITION
UNSUPPORTED_NESTED_ROW_OR_COLUMN_ACCESS_POLICY
Table <tableName>
has a row level security policy or column mask which indirectly refers to another table with a row level security policy or column mask; this is not supported. Call sequence: <callSequence>
UNSUPPORTED_OVERWRITE
SQLSTATE: none assigned
Can’t overwrite the target that is also being read from.
For more details see UNSUPPORTED_OVERWRITE
UNSUPPORTED_SAVE_MODE
SQLSTATE: none assigned
The save mode <saveMode>
is not supported for:
For more details see UNSUPPORTED_SAVE_MODE
UNSUPPORTED_STREAMING_OPTIONS_PERMISSION_ENFORCED
SQLSTATE: none assigned
Streaming options <options>
are not supported for data source <source>
on a shared cluster.
UNSUPPORTED_STREAMING_SINK_PERMISSION_ENFORCED
SQLSTATE: none assigned
Data source <sink>
is not supported as a streaming sink on a shared cluster.
UNSUPPORTED_STREAMING_SOURCE_PERMISSION_ENFORCED
SQLSTATE: none assigned
Data source <source>
is not supported as a streaming source on a shared cluster.
UNSUPPORTED_STREAMING_TABLE_VALUED_FUNCTION
The function <funcName>
does not support streaming. Please remove the STREAM keyword
UNSUPPORTED_STREAM_READ_LIMIT_FOR_KINESIS_SOURCE
<streamReadLimit>
is not supported with the Kinesis source
UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY
Unsupported subquery expression:
For more details see UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY
UNSUPPORTED_TIMESERIES_WITH_MORE_THAN_ONE_COLUMN
Creating primary key with more than one timeseries column <colSeq>
is not supported
UNSUPPORTED_TYPED_LITERAL
Literals of the type <unsupportedType>
are not supported. Supported types are <supportedTypes>
.
UNTYPED_SCALA_UDF
SQLSTATE: none assigned
You’re using an untyped Scala UDF, which does not have the input type information. Spark may blindly pass null to the Scala closure with a primitive-type argument, and the closure will see the default value of the Java type for the null argument, e.g. udf((x: Int) => x, IntegerType)
returns 0 for null input. To get rid of this error, you could:
1. Use typed Scala UDF APIs (without return type parameter), e.g. udf((x: Int) => x).
2. Use Java UDF APIs, e.g. udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType), if input types are all non-primitive.
3. Set “spark.sql.legacy.allowUntypedScalaUDF” to “true” and use this API with caution.
UPGRADE_NOT_SUPPORTED
Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason:
For more details see UPGRADE_NOT_SUPPORTED
USER_DEFINED_FUNCTIONS
User defined function is invalid:
For more details see USER_DEFINED_FUNCTIONS
VARIABLE_ALREADY_EXISTS
Cannot create the variable <variableName>
because it already exists.
Choose a different name, or drop or replace the existing variable.
VARIABLE_NOT_FOUND
The variable <variableName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VARIABLE IF EXISTS.
VIEW_ALREADY_EXISTS
Cannot create view <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.
VIEW_NOT_FOUND
The view <relationName>
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS.
VOLUME_ALREADY_EXISTS
Cannot create volume <relationName>
because it already exists.
Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.
WINDOW_FUNCTION_AND_FRAME_MISMATCH
SQLSTATE: none assigned
<funcName>
function can only be evaluated in an ordered row-based window frame with a single offset: <windowExpr>
.
WINDOW_FUNCTION_WITHOUT_OVER_CLAUSE
SQLSTATE: none assigned
Window function <funcName>
requires an OVER clause.
WRITE_STREAM_NOT_ALLOWED
SQLSTATE: none assigned
writeStream
can be called only on streaming Dataset/DataFrame.
WRONG_COLUMN_DEFAULTS_FOR_DELTA_ALTER_TABLE_ADD_COLUMN_NOT_SUPPORTED
Failed to execute the command because DEFAULT values are not supported when adding new columns to previously existing Delta tables; please add the column without a default value first, then run a second ALTER TABLE ALTER COLUMN SET DEFAULT command to apply the default to future inserted rows instead.
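A sketch of the suggested two-step approach, with table and column names hypothetical (the second step also requires the delta.feature.allowColumnDefaults table feature described in the next entry):
ALTER TABLE my_delta_table ADD COLUMN status STRING;
ALTER TABLE my_delta_table ALTER COLUMN status SET DEFAULT 'active';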
WRONG_COLUMN_DEFAULTS_FOR_DELTA_FEATURE_NOT_ENABLED
Failed to execute <commandType>
command because it assigned a column DEFAULT value, but the corresponding table feature was not enabled. Please retry the command again after executing ALTER TABLE tableName SET TBLPROPERTIES(‘delta.feature.allowColumnDefaults’ = ‘supported’).
WRONG_COMMAND_FOR_OBJECT_TYPE
SQLSTATE: none assigned
The operation <operation>
requires a <requiredType>
. But <objectName>
is a <foundType>
. Use <alternative>
instead.
WRONG_NUM_ARGS
The <functionName>
requires <expectedNum>
parameters but the actual number is <actualNum>
.
For more details see WRONG_NUM_ARGS
Delta Lake
DELTA_ADDING_COLUMN_WITH_INTERNAL_NAME_FAILED
Failed to add column <colName>
because the name is reserved.
DELTA_ADDING_DELETION_VECTORS_DISALLOWED
The current operation attempted to add a deletion vector to a table that does not permit the creation of new deletion vectors. Please file a bug report.
DELTA_ADDING_DELETION_VECTORS_WITH_TIGHT_BOUNDS_DISALLOWED
All operations that add deletion vectors should set the tightBounds column in statistics to false. Please file a bug report.
DELTA_ADD_COLUMN_AT_INDEX_LESS_THAN_ZERO
Index <columnIndex>
to add column <columnName>
is lower than 0
DELTA_ADD_COLUMN_PARENT_NOT_STRUCT
Cannot add <columnName>
because its parent is not a StructType. Found <other>
DELTA_AGGREGATE_IN_GENERATED_COLUMN
Found <sqlExpr>
. A generated column cannot use an aggregate expression
DELTA_AGGREGATION_NOT_SUPPORTED
Aggregate functions are not supported in the <operation>
<predicate>
.
DELTA_ALTER_TABLE_CHANGE_COL_NOT_SUPPORTED
ALTER TABLE CHANGE COLUMN is not supported for changing column <currentType>
to <newType>
DELTA_ALTER_TABLE_CLUSTER_BY_NOT_ALLOWED
ALTER TABLE CLUSTER BY is not allowed for Delta table with Liquid clustering.
DELTA_ALTER_TABLE_RENAME_NOT_ALLOWED
Operation not allowed: ALTER TABLE RENAME TO is not allowed for managed Delta tables on S3, as eventual consistency on S3 may corrupt the Delta transaction log. If you insist on doing so and are sure that there has never been a Delta table with the new name <newName>
before, you can enable this by setting <key>
to be true.
DELTA_ALTER_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED
Cannot enable Liquid table feature using SET TBLPROPERTIES. Please use CREATE TABLE CLUSTER BY to create a Delta table with Liquid clustering.
DELTA_AMBIGUOUS_DATA_TYPE_CHANGE
Cannot change data type of <column>
from <from>
to <to>
. This change contains column removals and additions, therefore they are ambiguous. Please make these changes individually using ALTER TABLE [ADD | DROP | RENAME] COLUMN.
DELTA_AMBIGUOUS_PATHS_IN_CREATE_TABLE
CREATE TABLE contains two different locations: <identifier>
and <location>
.
You can remove the LOCATION clause from the CREATE TABLE statement, or set
<config>
to true to skip this check.
DELTA_ARCHIVED_FILES_IN_LIMIT
Table <table>
does not contain enough records in non-archived files to satisfy specified LIMIT of <limit>
records.
DELTA_ARCHIVED_FILES_IN_SCAN
Found <numArchivedFiles>
potentially archived file(s) in table <table>
that need to be scanned as part of this query.
Archived files cannot be accessed. The current time until archival is configured as <archivalTime>
.
Please adjust your query filters to exclude any archived files.
DELTA_BLOCK_COLUMN_MAPPING_AND_CDC_OPERATION
Operation “<opName>
” is not allowed when the table has enabled change data feed (CDF) and has undergone schema changes using DROP COLUMN or RENAME COLUMN.
DELTA_BLOOM_FILTER_DROP_ON_NON_EXISTING_COLUMNS
Cannot drop bloom filter indices for the following non-existent column(s): <unknownColumns>
DELTA_BLOOM_FILTER_OOM_ON_WRITE
OutOfMemoryError occurred while writing bloom filter indices for the following column(s): <columnsWithBloomFilterIndices>
.
You can reduce the memory footprint of bloom filter indices by choosing a smaller value for the ‘numItems’ option, a larger value for the ‘fpp’ option, or by indexing fewer columns.
DELTA_CANNOT_CHANGE_LOCATION
Cannot change the ‘location’ of the Delta table using SET TBLPROPERTIES. Please use ALTER TABLE SET LOCATION instead.
DELTA_CANNOT_CREATE_BLOOM_FILTER_NON_EXISTING_COL
Cannot create bloom filter indices for the following non-existent column(s): <unknownCols>
DELTA_CANNOT_DROP_BLOOM_FILTER_ON_NON_INDEXED_COLUMN
Cannot drop bloom filter index on a non indexed column: <columnName>
DELTA_CANNOT_FIND_BUCKET_SPEC
Expecting a bucketed Delta table but cannot find the bucket spec in the table.
DELTA_CANNOT_GENERATE_UPDATE_EXPRESSIONS
Calling without generated columns should always return an update expression for each column.
DELTA_CANNOT_MODIFY_APPEND_ONLY
This table is configured to only allow appends. If you would like to permit updates or deletes, use ‘ALTER TABLE <table_name>
SET TBLPROPERTIES (<config>
=false)’.
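For example, assuming the <config> placeholder resolves to the delta.appendOnly table property (table name hypothetical):
ALTER TABLE my_table SET TBLPROPERTIES ('delta.appendOnly' = 'false');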
DELTA_CANNOT_MODIFY_TABLE_PROPERTY
The Delta table configuration <prop>
cannot be specified by the user
DELTA_CANNOT_RECONSTRUCT_PATH_FROM_URI
A uri (<uri>
) which can’t be turned into a relative path was found in the transaction log.
DELTA_CANNOT_RELATIVIZE_PATH
A path (<path>
) which can’t be relativized with the current input found in the
transaction log. Please re-run this as:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<userPath>
”, true)
and then also run:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<path>
”)
DELTA_CANNOT_REPLACE_MISSING_TABLE
Table <tableName>
cannot be replaced as it does not exist. Use CREATE OR REPLACE TABLE to create the table.
DELTA_CANNOT_RESOLVE_SOURCE_COLUMN
Couldn’t resolve qualified source column <columnName>
within the source query. Please contact Databricks support.
DELTA_CANNOT_RESTORE_TABLE_VERSION
Cannot restore table to version <version>
. Available versions: [<startVersion>
, <endVersion>
].
DELTA_CANNOT_RESTORE_TIMESTAMP_GREATER
Cannot restore table to timestamp (<requestedTimestamp>
) as it is after the latest version available. Please use a timestamp before (<latestTimestamp>
)
DELTA_CANNOT_UPDATE_ARRAY_FIELD
Cannot update %1$s field %2$s type: update the element by updating %2$s.element
DELTA_CANNOT_UPDATE_MAP_FIELD
Cannot update %1$s field %2$s type: update a map by updating %2$s.key or %2$s.value
DELTA_CANNOT_UPDATE_STRUCT_FIELD
Cannot update <tableName>
field <fieldName>
type: update struct by adding, deleting, or updating its fields
DELTA_CAST_OVERFLOW_IN_TABLE_WRITE
Failed to write a value of <sourceType>
type into the <targetType>
type column <columnName>
due to an overflow.
Use try_cast
on the input value to tolerate overflow and return NULL instead.
If necessary, set <storeAssignmentPolicyFlag>
to “LEGACY” to bypass this error or set <updateAndMergeCastingFollowsAnsiEnabledFlag>
to true to revert to the old behaviour and follow <ansiEnabledFlag>
in UPDATE and MERGE.
DELTA_CDC_NOT_ALLOWED_IN_THIS_VERSION
Configuration delta.enableChangeDataFeed cannot be set. Change data feed from Delta is not yet available.
DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_DATA_SCHEMA
Retrieving table changes between version <start>
and <end>
failed because of an incompatible data schema.
Your read schema is <readSchema>
at version <readVersion>
, but we found an incompatible data schema at version <incompatibleVersion>
.
If possible, please retrieve the table changes using the end version’s schema by setting <config>
to endVersion
, or contact support.
DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_SCHEMA_CHANGE
Retrieving table changes between version <start>
and <end>
failed because of an incompatible schema change.
Your read schema is <readSchema>
at version <readVersion>
, but we found an incompatible schema change at version <incompatibleVersion>
.
If possible, please query table changes separately from version <start>
to <incompatibleVersion>
- 1, and from version <incompatibleVersion>
to <end>
.
DELTA_CHANGE_DATA_FILE_NOT_FOUND
File <filePath>
referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE
statement. This request appears to be targeting Change Data Feed, if that is the case, this error can occur when the change data file is out of the retention period and has been deleted by the VACUUM
statement. For more information, see <faqPath>
DELTA_CHANGE_TABLE_FEED_DISABLED
Cannot write to table with delta.enableChangeDataFeed set. Change data feed from Delta is not available.
DELTA_CHECKPOINT_NON_EXIST_TABLE
Cannot checkpoint a non-existing table <path>
. Did you manually delete files in the deltalog directory?
DELTA_CLONE_AMBIGUOUS_TARGET
Two paths were provided as the CLONE target so it is ambiguous which to use. An external
location for CLONE was provided at <externalLocation>
at the same time as the path
<targetIdentifier>
.
DELTA_CLONE_INCOMPLETE_FILE_COPY
File (<fileName>
) not copied completely. Expected file size: <expectedSize>
, found: <actualSize>
. To continue with the operation by ignoring the file size check set <config>
to false.
DELTA_CLONE_UNSUPPORTED_SOURCE
Unsupported <mode>
clone source ‘<name>
’, whose format is <format>
.
The supported formats are ‘delta’, ‘iceberg’ and ‘parquet’.
DELTA_CLUSTERING_CLONE_TABLE_NOT_SUPPORTED
CLONE is not supported for Delta table with Liquid clustering for DBR version < 14.0.
DELTA_CLUSTERING_COLUMN_MISSING_STATS
Liquid clustering requires clustering columns to have stats. Couldn’t find clustering column <column>
in stats schema:
<schema>
DELTA_CLUSTERING_REPLACE_TABLE_WITH_PARTITIONED_TABLE
Replacing a Delta table that uses Liquid clustering with a partitioned table is not allowed.
DELTA_CLUSTERING_SHOW_CREATE_TABLE_WITHOUT_CLUSTERING_COLUMNS
SHOW CREATE TABLE is not supported for Delta table with Liquid clustering without any clustering columns.
DELTA_CLUSTERING_WITH_PARTITION_PREDICATE
OPTIMIZE command for Delta table with Liquid clustering doesn’t support partition predicates. Please remove the predicates: <predicates>
.
DELTA_CLUSTERING_WITH_ZORDER_BY
OPTIMIZE command for Delta table with Liquid clustering cannot specify ZORDER BY. Please remove ZORDER BY (<zOrderBy>
).
DELTA_CLUSTER_BY_INVALID_NUM_COLUMNS
CLUSTER BY for Liquid clustering supports up to <numColumnsLimit>
clustering columns, but the table has <actualNumColumns>
clustering columns. Please remove the extra clustering columns.
DELTA_CLUSTER_BY_SCHEMA_NOT_PROVIDED
It is not allowed to specify CLUSTER BY when the schema is not defined. Please define schema for table <tableName>
.
DELTA_CLUSTER_BY_WITH_BUCKETING
CLUSTER BY and CLUSTERED BY INTO BUCKETS cannot both be specified for table <tableName>
. Please remove CLUSTERED BY INTO BUCKETS if you want to create a Delta table with Liquid clustering.
DELTA_CLUSTER_BY_WITH_PARTITIONED_BY
CLUSTER BY and PARTITIONED BY cannot both be specified for table <tableName>
. Please remove the unnecessary PARTITIONED BY if you want to create a Delta table with Liquid clustering.
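For example, a Liquid clustering sketch with hypothetical names (note there is no PARTITIONED BY clause):
CREATE TABLE events (id BIGINT, ts TIMESTAMP, payload STRING)
CLUSTER BY (ts);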
DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_PARTITIONED_COLUMN
Data skipping is not supported for partition column ‘<column>
’.
DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_TYPE
Data skipping is not supported for column ‘<column>
’ of type <type>
.
DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET
The max column id property (<prop>
) is not set on a column mapping enabled table.
DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET_CORRECTLY
The max column id property (<prop>
) on a column mapping enabled table is <tableMax>
, which cannot be smaller than the max column id for all fields (<fieldMax>
).
DELTA_COLUMN_NOT_FOUND_IN_MERGE
Unable to find the column ‘<targetCol>
’ of the target table from the INSERT columns: <colNames>
. INSERT clause must specify a value for every column of the target table.
DELTA_COLUMN_PATH_NOT_NESTED
Expected <columnPath>
to be a nested data type, but found <other>
. Was looking for the
index of <column>
in a nested field
DELTA_COLUMN_STRUCT_TYPE_MISMATCH
Struct column <source>
cannot be inserted into a <targetType>
field <targetField>
in <targetTable>
.
DELTA_COMPACTION_VALIDATION_FAILED
The validation of the compaction of path <compactedPath>
to <newPath>
failed: Please file a bug report.
DELTA_COMPLEX_TYPE_COLUMN_CONTAINS_NULL_TYPE
Found nested NullType in column <columnName>
which is of <dataType>
. Delta doesn’t support writing NullType in complex types.
DELTA_CONSTRAINT_ALREADY_EXISTS
Constraint ‘<constraintName>
’ already exists. Please delete the old constraint first.
Old constraint:
<oldConstraint>
DELTA_CONSTRAINT_DOES_NOT_EXIST
Cannot drop nonexistent constraint <constraintName>
from table <tableName>
. To avoid throwing an error, provide the parameter IF EXISTS or set the SQL session configuration <config>
to <confValue>
.
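For example, a tolerant drop with hypothetical names:
ALTER TABLE orders DROP CONSTRAINT IF EXISTS valid_amount;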
DELTA_CONVERSION_NO_PARTITION_FOUND
Found no partition information in the catalog for table <tableName>
. Have you run “MSCK REPAIR TABLE” on your table to discover partitions?
DELTA_CONVERSION_UNSUPPORTED_COLUMN_MAPPING
The configuration ‘<config>
’ cannot be set to <mode>
when using CONVERT TO DELTA.
DELTA_CONVERT_NON_PARQUET_TABLE
CONVERT TO DELTA only supports parquet tables, but you are trying to convert a <sourceName>
source: <tableId>
DELTA_CONVERT_TO_DELTA_ROW_TRACKING_WITHOUT_STATS
Cannot enable row tracking without collecting statistics.
If you want to enable row tracking, do the following:
1. Enable statistics collection by running the command:
SET <statisticsCollectionPropertyKey> = true
2. Run CONVERT TO DELTA without the NO STATISTICS option.
If you do not want to collect statistics, disable row tracking:
1. Deactivate enabling the table feature by default by running the command:
RESET <rowTrackingTableFeatureDefaultKey>
2. Deactivate the table property by default by running:
SET <rowTrackingDefaultPropertyKey> = false
DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_SCHEMA
You are trying to create an external table <tableName>
from <path>
using Delta, but the schema is not specified when the
input path is empty.
To learn more about Delta, see <docLink>
DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG
You are trying to create an external table <tableName>
from %2$s
using Delta, but there is no transaction log present at
%2$s/_delta_log
. Check the upstream job to make sure that it is writing using
format(“delta”) and that the path is the root of the table.
To learn more about Delta, see <docLink>
DELTA_CREATE_TABLE_SCHEME_MISMATCH
The specified schema does not match the existing schema at <path>
.
== Specified ==
<specifiedSchema>
== Existing ==
<existingSchema>
== Differences ==
<schemaDifferences>
If your intention is to keep the existing schema, you can omit the
schema from the create table command. Otherwise please ensure that
the schema matches.
DELTA_CREATE_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED
Cannot enable Liquid table feature using TBLPROPERTIES. Please use CREATE OR REPLACE TABLE CLUSTER BY to create a Delta table with Liquid clustering.
DELTA_CREATE_TABLE_WITH_DIFFERENT_PARTITIONING
The specified partitioning does not match the existing partitioning at <path>
.
== Specified ==
<specifiedColumns>
== Existing ==
<existingColumns>
DELTA_CREATE_TABLE_WITH_DIFFERENT_PROPERTY
The specified properties do not match the existing properties at <path>
.
== Specified ==
<specifiedProperties>
== Existing ==
<existingProperties>
DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION
Cannot create table (‘<tableId>
’). The associated location (‘<tableLocation>
’) is not empty and also not a Delta table.
DELTA_DATA_CHANGE_FALSE
Cannot change table metadata because the ‘dataChange’ option is set to false. Attempted operation: ‘<op>
’.
DELTA_DELETION_VECTOR_CHECKSUM_MISMATCH
Could not verify deletion vector integrity, CRC checksum verification failed.
DELTA_DELETION_VECTOR_INVALID_ROW_INDEX
Deletion vector integrity check failed. Encountered an invalid row index.
DELTA_DELETION_VECTOR_MISSING_NUM_RECORDS
It is invalid to commit files with deletion vectors that are missing the numRecords statistic.
DELTA_DELETION_VECTOR_SIZE_MISMATCH
Deletion vector integrity check failed. Encountered a size mismatch.
DELTA_DOMAIN_METADATA_NOT_SUPPORTED
Detected DomainMetadata action(s) for domains <domainNames>
, but DomainMetadataTableFeature is not enabled.
DELTA_DUPLICATE_ACTIONS_FOUND
File operation ‘<actionType>
’ for path <path>
was specified several times.
It conflicts with <conflictingPath>
.
It is not valid for multiple file operations with the same path to exist in a single commit.
DELTA_DUPLICATE_COLUMNS_ON_UPDATE_TABLE
<message>
Please remove duplicate columns before you update your table.
DELTA_DUPLICATE_DOMAIN_METADATA_INTERNAL_ERROR
Internal error: two DomainMetadata actions within the same transaction have the same domain <domainName>
DELTA_DV_HISTOGRAM_DESERIALIZATON
Could not deserialize the deleted record counts histogram during table integrity verification.
DELTA_DYNAMIC_PARTITION_OVERWRITE_DISABLED
Dynamic partition overwrite mode is specified by session config or write options, but it is disabled by spark.databricks.delta.dynamicPartitionOverwrite.enabled=false
.
DELTA_EXPRESSIONS_NOT_FOUND_IN_GENERATED_COLUMN
Cannot find the expressions in the generated column <columnName>
DELTA_EXTRACT_REFERENCES_FIELD_NOT_FOUND
Field <fieldName>
could not be found when extracting references.
DELTA_FAILED_FIND_ATTRIBUTE_IN_OUTPUT_COLUMNS
Could not find <newAttributeName>
among the existing target output <targetOutputColumns>
DELTA_FAILED_SCAN_WITH_HISTORICAL_VERSION
Expect a full scan of the latest version of the Delta source, but found a historical scan of version <historicalVersion>
DELTA_FAIL_RELATIVIZE_PATH
Failed to relativize the path (<path>
). This can happen when absolute paths make
it into the transaction log, which start with the scheme
s3://, wasbs:// or adls://. This is a bug that has existed before DBR 5.0.
To fix this issue, please upgrade your writer jobs to DBR 5.0 and please run:
%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<path>
”).
If this table was created with a shallow clone across file systems
(different buckets/containers) and this table is NOT USED IN PRODUCTION, you can
set the SQL configuration <config>
to true. Using this SQL configuration could lead to accidental data loss,
therefore we do not recommend the use of this flag unless
this is a shallow clone for testing purposes.
DELTA_FEATURES_PROTOCOL_METADATA_MISMATCH
Unable to operate on this table because the following table features are enabled in metadata but not listed in protocol: <features>
.
DELTA_FEATURES_REQUIRE_MANUAL_ENABLEMENT
Your table schema requires manual enablement of the following table feature(s): <unsupportedFeatures>
.
To do this, run the following command for each of the features listed above:
ALTER TABLE table_name SET TBLPROPERTIES (‘delta.feature.feature_name’ = ‘supported’)
Replace “table_name” and “feature_name” with real values.
Current supported feature(s): <supportedFeatures>
.
DELTA_FEATURE_DROP_CONFLICT_REVALIDATION_FAIL
Cannot drop feature because a concurrent transaction modified the table.
Please try the operation again.
<concurrentCommit>
DELTA_FEATURE_DROP_FEATURE_NOT_PRESENT
Cannot drop <feature>
from this table because it is not currently present in the table’s protocol.
DELTA_FEATURE_DROP_HISTORICAL_VERSIONS_EXIST
Cannot drop <feature>
because the Delta log contains historical versions that use the feature.
Please wait until the history retention period (<logRetentionPeriodKey>
=<logRetentionPeriod>
)
has passed since the feature was last active.
DELTA_FEATURE_DROP_LEGACY_FEATURE
Cannot drop <feature>
because it is implicitly supported by the table protocol version.
DELTA_FEATURE_DROP_NONREMOVABLE_FEATURE
Cannot drop <feature>
because dropping this feature is not supported.
Please contact Databricks support.
DELTA_FEATURE_DROP_UNSUPPORTED_CLIENT_FEATURE
Cannot drop <feature>
because it is not supported by this Databricks version.
Consider using Databricks with a higher version.
DELTA_FEATURE_DROP_WAIT_FOR_RETENTION_PERIOD
Dropping <feature>
was partially successful.
The feature is now no longer used in the current version of the table. However, the feature
is still present in historical versions of the table. The table feature cannot be dropped
from the table protocol until these historical versions have expired.
To drop the table feature from the protocol, please wait for the historical versions to
expire, and then repeat this command. The retention period for historical versions is
currently configured as <logRetentionPeriodKey>
=<logRetentionPeriod>
.
DELTA_FEATURE_REQUIRES_HIGHER_READER_VERSION
Unable to enable table feature <feature>
because it requires a higher reader protocol version (current <current>
). Consider upgrading the table’s reader protocol version to <required>
, or to a version which supports reader table features. Refer to <docLink>
for more information on table protocol versions.
DELTA_FEATURE_REQUIRES_HIGHER_WRITER_VERSION
Unable to enable table feature <feature>
because it requires a higher writer protocol version (current <current>
). Consider upgrading the table’s writer protocol version to <required>
, or to a version which supports writer table features. Refer to <docLink>
for more information on table protocol versions.
DELTA_FILE_NOT_FOUND_DETAILED
File <filePath>
referenced in the transaction log cannot be found. This occurs when data has been manually deleted from the file system rather than using the table DELETE
statement. For more information, see <faqPath>
DELTA_FILE_TO_OVERWRITE_NOT_FOUND
File (<path>
) to be rewritten not found among candidate files:
<pathList>
DELTA_FOUND_MAP_TYPE_COLUMN
A MapType was found. In order to access the key or value of a MapType, specify one
of:
<key>
or
<value>
followed by the name of the column (only if that column is a struct type).
e.g. mymap.key.mykey
If the column is a basic type, mymap.key or mymap.value is sufficient.
DELTA_GENERATED_COLUMNS_DATA_TYPE_MISMATCH
Column <columnName>
is a generated column or a column used by a generated column. The data type is <columnType>
. It doesn’t accept data type <dataType>
DELTA_GENERATED_COLUMNS_EXPR_TYPE_MISMATCH
The expression type of the generated column <columnName>
is <expressionType>
, but the column type is <columnType>
DELTA_GENERATED_COLUMN_UPDATE_TYPE_MISMATCH
Column <currentName>
is a generated column or a column used by a generated column. The data type is <currentDataType>
and cannot be converted to data type <updateDataType>
DELTA_ICEBERG_COMPAT_V1_VIOLATION
The validation of IcebergCompatV1 has failed.
For more details see DELTA_ICEBERG_COMPAT_V1_VIOLATION
DELTA_INCONSISTENT_BUCKET_SPEC
BucketSpec on Delta bucketed table does not match BucketSpec from metadata. Expected: <expected>
. Actual: <actual>
.
DELTA_INCONSISTENT_LOGSTORE_CONFS
(<setKeys>
) cannot be set to different values. Please only set one of them, or set them to the same value.
DELTA_INCORRECT_ARRAY_ACCESS
Incorrectly accessing an ArrayType. Use arrayname.element.elementname position to
add to an array.
DELTA_INCORRECT_ARRAY_ACCESS_BY_NAME
An ArrayType was found. In order to access elements of an ArrayType, specify
<rightName>
instead of <wrongName>
DELTA_INCORRECT_LOG_STORE_IMPLEMENTATION
This error typically occurs when the default LogStore implementation, that
is, HDFSLogStore, is used to write into a Delta table on a non-HDFS storage system.
In order to get the transactional ACID guarantees on table updates, you have to use the
correct implementation of LogStore that is appropriate for your storage system.
See <docLink>
for details.
DELTA_INDEX_LARGER_OR_EQUAL_THAN_STRUCT
Index <position>
to drop column equals to or is larger than struct length: <length>
DELTA_INDEX_LARGER_THAN_STRUCT
Index <index>
to add column <columnName>
is larger than struct length: <length>
DELTA_INSERT_COLUMN_ARITY_MISMATCH
Cannot write to ‘<tableName>
’, <columnName>
; target table has <numColumns>
column(s) but the inserted data has <insertColumns>
column(s)
DELTA_INVALID_BUCKET_COUNT
Invalid bucket count: <invalidBucketCount>
. Bucket count should be a positive number that is power of 2 and at least 8. You can use <validBucketCount>
instead.
DELTA_INVALID_CDC_RANGE
CDC range from start <start>
to end <end>
was invalid. End cannot be before start.
DELTA_INVALID_CHARACTERS_IN_COLUMN_NAME
Attribute name “<columnName>
” contains invalid character(s) among ” ,;{}()\n\t=”. Please use alias to rename it.
DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES
Found invalid character(s) among ‘ ,;{}()\n\t=’ in the column names of your schema. <advice>
DELTA_INVALID_CLONE_PATH
The target location for CLONE needs to be an absolute path or table name. Use an
absolute path instead of <path>
.
DELTA_INVALID_COMMITTED_VERSION
The committed version is <committedVersion>
but the current version is <currentVersion>
. Please contact Databricks support.
DELTA_INVALID_FORMAT
Incompatible format detected.
A transaction log for Delta was found at <deltaRootPath>/_delta_log,
but you are trying to <operation>
<path>
using format(“<format>
”). You must use
‘format(“delta”)’ when reading and writing to a delta table.
To learn more about Delta, see <docLink>
DELTA_INVALID_FORMAT_FROM_SOURCE_VERSION
Unsupported format. Expected version should be smaller than or equal to <expectedVersion>
but was <realVersion>
. Please upgrade to newer version of Delta.
DELTA_INVALID_GENERATED_COLUMN_REFERENCES
A generated column cannot use a non-existent column or another generated column
DELTA_INVALID_LOGSTORE_CONF
(<classConfig>
) and (<schemeConfig>
) cannot be set at the same time. Please set only one group of them.
DELTA_INVALID_MANAGED_TABLE_SYNTAX_NO_SCHEMA
You are trying to create a managed table <tableName>
using Delta, but the schema is not specified.
To learn more about Delta, see <docLink>
DELTA_INVALID_PARTITIONING_SCHEMA
The AddFile contains partitioning schema different from the table’s partitioning schema
expected: <neededPartitioning>
actual: <specifiedPartitioning>
To disable this check set <config>
to “false”
DELTA_INVALID_PARTITION_COLUMN_NAME
Found partition columns having invalid character(s) among ” ,;{}()\n\t=”. Please change the name of your partition columns. This check can be turned off by setting spark.conf.set(“spark.databricks.delta.partitionColumnValidity.enabled”, false); however, this is not recommended as other features of Delta may not work properly.
DELTA_INVALID_PARTITION_COLUMN_TYPE
Using column <name>
of type <dataType>
as a partition column is not supported.
DELTA_INVALID_PARTITION_PATH
A partition path fragment should be of the form part1=foo/part2=bar
. The partition path: <path>
DELTA_INVALID_PROTOCOL_DOWNGRADE
Protocol version cannot be downgraded from <oldProtocol>
to <newProtocol>
DELTA_INVALID_PROTOCOL_VERSION
Delta protocol version is not supported by this version of Databricks: table requires <required>
, client supports <supported>
. Please upgrade to a newer release.
DELTA_INVALID_TABLE_VALUE_FUNCTION
Function <function>
is an unsupported table valued function for CDC reads.
DELTA_INVALID_TIMESTAMP_FORMAT
The provided timestamp <timestamp>
does not match the expected syntax <format>
.
DELTA_LOG_FILE_NOT_FOUND_FOR_STREAMING_SOURCE
If you never deleted it, it’s likely your query is lagging behind. Please delete its checkpoint to restart from scratch. To avoid this happening again, you can update the retention policy of your Delta table.
DELTA_MATERIALIZED_ROW_TRACKING_COLUMN_NAME_MISSING
Materialized <rowTrackingColumn>
column name missing for <tableName>
.
DELTA_MAX_COMMIT_RETRIES_EXCEEDED
This commit has failed as it has been tried <numAttempts>
times but did not succeed.
This can be caused by the Delta table being committed continuously by many concurrent
commits.
Commit started at version: <startVersion>
Commit failed at version: <failVersion>
Number of actions attempted to commit: <numActions>
Total time spent attempting this commit: <timeSpent>
ms
DELTA_MERGE_INCOMPATIBLE_DECIMAL_TYPE
Failed to merge decimal types with incompatible <decimalRanges>
DELTA_MERGE_MATERIALIZE_SOURCE_FAILED_REPEATEDLY
Keeping the source of the MERGE statement materialized has failed repeatedly.
DELTA_MERGE_RESOLVED_ATTRIBUTE_MISSING_FROM_INPUT
Resolved attribute(s) <missingAttributes>
missing from <input>
in operator <merge>
DELTA_MERGE_UNEXPECTED_ASSIGNMENT_KEY
Unexpected assignment key: <unexpectedKeyClass>
- <unexpectedKeyObject>
DELTA_MISSING_CHANGE_DATA
Error getting change data for range [<startVersion>
, <endVersion>
] as change data was not
recorded for version [<version>
]. If you’ve enabled change data feed on this table,
use DESCRIBE HISTORY
to see when it was first enabled.
Otherwise, to start recording change data, use `ALTER TABLE table_name SET TBLPROPERTIES
(<key>
= true)`.
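For example, assuming the <key> placeholder resolves to the delta.enableChangeDataFeed table property (table name hypothetical):
ALTER TABLE my_table SET TBLPROPERTIES ('delta.enableChangeDataFeed' = 'true');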
DELTA_MISSING_DELTA_TABLE_COPY_INTO
Table doesn’t exist. Create an empty Delta table first using CREATE TABLE <tableName>
.
DELTA_MISSING_FILES_UNEXPECTED_VERSION
The stream from your Delta table was expecting to process data from version <startVersion>
,
but the earliest available version in the deltalog directory is <earliestVersion>
. The files
in the transaction log may have been deleted due to log cleanup. In order to avoid losing
data, we recommend that you restart your stream with a new checkpoint location and to
increase your delta.logRetentionDuration setting, if you have explicitly set it below 30
days.
If you would like to ignore the missed data and continue your stream from where it left
off, you can set the .option(“<option>
”, “false”) as part
of your readStream statement.
DELTA_MISSING_ICEBERG_CLASS
Iceberg class was not found. Please ensure Delta Iceberg support is installed.
Please refer to <docLink>
for more details.
DELTA_MISSING_NOT_NULL_COLUMN_VALUE
Column <columnName>
, which has a NOT NULL constraint, is missing from the data being written into the table.
DELTA_MISSING_PROVIDER_FOR_CONVERT
CONVERT TO DELTA only supports parquet tables. Please rewrite your target as parquet.<path>
if it’s a parquet directory.
DELTA_MISSING_TRANSACTION_LOG
Incompatible format detected.
You are trying to <operation>
<path>
using Delta, but there is no
transaction log present. Check the upstream job to make sure that it is writing
using format(“delta”) and that you are trying to %1$s the table base path.
To learn more about Delta, see <docLink>
DELTA_MODE_NOT_SUPPORTED
Specified mode ‘<mode>
’ is not supported. Supported modes are: <supportedModes>
DELTA_MULTIPLE_CDC_BOUNDARY
Multiple <startingOrEnding>
arguments provided for CDC read. Please provide one of either <startingOrEnding>
Timestamp or <startingOrEnding>
Version.
DELTA_MULTIPLE_CONF_FOR_SINGLE_COLUMN_IN_BLOOM_FILTER
Multiple bloom filter index configurations passed to command for column: <columnName>
DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE
Cannot perform Merge as multiple source rows matched and attempted to modify the same
target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge,
when multiple source rows match on the same target row, the result may be ambiguous
as it is unclear which source row should be used to update or delete the matching
target row. You can preprocess the source table to eliminate the possibility of
multiple matches. Please refer to
<usageReference>
DELTA_NAME_CONFLICT_IN_BUCKETED_TABLE
The following column name(s) are reserved for Delta bucketed table internal usage only: <names>
DELTA_NESTED_FIELDS_NEED_RENAME
Nested fields need renaming to avoid data loss. Fields:
<fields>
.
Original schema:
<schema>
DELTA_NESTED_NOT_NULL_CONSTRAINT
The <nestType>
type of the field <parent>
contains a NOT NULL constraint. Delta does not support NOT NULL constraints nested within arrays or maps. To suppress this error and silently ignore the specified constraints, set <configKey>
= true.
Parsed <nestType>
type:
<nestedPrettyJson>
DELTA_NEW_CHECK_CONSTRAINT_VIOLATION
<numRows>
rows in <tableName>
violate the new CHECK constraint (<checkConstraint>
)
DELTA_NEW_NOT_NULL_VIOLATION
<numRows>
rows in <tableName>
violate the new NOT NULL constraint on <colName>
DELTA_NON_BOOLEAN_CHECK_CONSTRAINT
CHECK constraint ‘<name>
’ (<expr>
) should be a boolean expression.
DELTA_NON_DETERMINISTIC_FUNCTION_NOT_SUPPORTED
Non-deterministic functions are not supported in the <operation>
<expression>
DELTA_NON_GENERATED_COLUMN_MISSING_UPDATE_EXPR
<columnName>
is not a generated column but is missing its update expression
DELTA_NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION
When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.
DELTA_NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION
When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.
DELTA_NON_LAST_NOT_MATCHED_CLAUSE_OMIT_CONDITION
When there is more than one NOT MATCHED clause in a MERGE statement, only the last NOT MATCHED clause can omit the condition.
DELTA_NON_PARTITION_COLUMN_ABSENT
Data written into Delta needs to contain at least one non-partitioned column. <details>
DELTA_NON_PARTITION_COLUMN_REFERENCE
Predicate references non-partition column ‘<columnName>
’. Only the partition columns may be referenced: [<columnList>
]
DELTA_NON_PARTITION_COLUMN_SPECIFIED
Non-partitioning column(s) <columnList>
are specified where only partitioning columns are expected: <fragment>
.
DELTA_NON_SINGLE_PART_NAMESPACE_FOR_CATALOG
Delta catalog requires a single-part namespace, but <identifier>
is multi-part.
DELTA_NOT_A_DATABRICKS_DELTA_TABLE
<table>
is not a Delta table. Please drop this table first if you would like to create it with Databricks Delta.
DELTA_NOT_A_DELTA_TABLE
<tableName>
is not a Delta table. Please drop this table first if you would like to recreate it with Delta Lake.
DELTA_NOT_NULL_NESTED_FIELD
A non-nullable nested field can’t be added to a nullable parent. Please set the nullability of the parent column accordingly.
DELTA_NO_NEW_ATTRIBUTE_ID
Could not find a new attribute ID for column <columnName>
. This should have been checked earlier.
DELTA_NULL_SCHEMA_IN_STREAMING_WRITE
Delta doesn’t accept NullTypes in the schema for streaming writes.
DELTA_OPERATION_NOT_ALLOWED_DETAIL
Operation not allowed: <operation>
is not supported for Delta tables: <tableName>
DELTA_OPERATION_ON_TEMP_VIEW_WITH_GENERATED_COLS_NOT_SUPPORTED
<operation>
command on a temp view referring to a Delta table that contains generated columns is not supported. Please run the <operation>
command on the Delta table directly
DELTA_OVERWRITE_MUST_BE_TRUE
Copy option overwriteSchema cannot be specified without setting OVERWRITE = ‘true’.
DELTA_OVERWRITE_SCHEMA_WITH_DYNAMIC_PARTITION_OVERWRITE
‘overwriteSchema’ cannot be used in dynamic partition overwrite mode.
DELTA_PARTITION_COLUMN_CAST_FAILED
Failed to cast value <value>
to <dataType>
for partition column <columnName>
DELTA_PARTITION_SCHEMA_IN_ICEBERG_TABLES
Partition schema cannot be specified when converting Iceberg tables. It is automatically inferred.
DELTA_POST_COMMIT_HOOK_FAILED
Committing to the Delta table version <version>
succeeded, but an error occurred while executing the post-commit hook <name>: <message>
DELTA_READ_FEATURE_PROTOCOL_REQUIRES_WRITE
Unable to upgrade only the reader protocol version to use table features. Writer protocol version must be at least <writerVersion>
to proceed. Refer to <docLink>
for more information on table protocol versions.
DELTA_READ_TABLE_WITHOUT_COLUMNS
You are trying to read a Delta table <tableName>
that does not have any columns.
Write some new data with the option mergeSchema = true
to be able to read the table.
DELTA_REMOVE_FILE_CDC_MISSING_EXTENDED_METADATA
RemoveFile created without extended metadata is ineligible for CDC:
<file>
DELTA_REPLACE_WHERE_IN_OVERWRITE
You can’t use replaceWhere in conjunction with an overwrite by filter
DELTA_REPLACE_WHERE_MISMATCH
Data written out does not match replaceWhere ‘<replaceWhere>
’.
<message>
DELTA_REPLACE_WHERE_WITH_DYNAMIC_PARTITION_OVERWRITE
A ‘replaceWhere’ expression and ‘partitionOverwriteMode’=’dynamic’ cannot both be set in the DataFrameWriter options.
DELTA_REPLACE_WHERE_WITH_FILTER_DATA_CHANGE_UNSET
‘replaceWhere’ cannot be used with data filters when ‘dataChange’ is set to false. Filters: <dataFilters>
DELTA_ROW_ID_ASSIGNMENT_WITHOUT_STATS
Cannot assign row IDs without row count statistics.
Collect statistics for the table by running the following code in a Scala notebook and retry:
import com.databricks.sql.transaction.tahoe.DeltaLog
import com.databricks.sql.transaction.tahoe.stats.StatisticsCollection
import org.apache.spark.sql.catalyst.TableIdentifier
val log = DeltaLog.forTable(spark, TableIdentifier("table_name")) // replace table_name with the target table’s name
StatisticsCollection.recompute(spark, log)
DELTA_SCHEMA_CHANGED
Detected schema change:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory.
DELTA_SCHEMA_CHANGED_WITH_STARTING_OPTIONS
Detected schema change in version <version>
:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory. If the issue persists after
changing to a new checkpoint directory, you may need to change the existing
‘startingVersion’ or ‘startingTimestamp’ option to start from a version newer than
<version>
with a new checkpoint directory.
DELTA_SCHEMA_CHANGED_WITH_VERSION
Detected schema change in version <version>
:
streaming source schema: <readSchema>
data file schema: <dataSchema>
Please try restarting the query. If this issue repeats across query restarts without
making progress, you have made an incompatible schema change and need to start your
query from scratch using a new checkpoint directory.
DELTA_SCHEMA_CHANGE_SINCE_ANALYSIS
The schema of your Delta table has changed in an incompatible way since your DataFrame
or DeltaTable object was created. Please redefine your DataFrame or DeltaTable object.
Changes:
<schemaDiff> <legacyFlagMessage>
DELTA_SCHEMA_NOT_CONSISTENT_WITH_TARGET
The table schema <tableSchema>
is not consistent with the target attributes: <targetAttrs>
DELTA_SCHEMA_NOT_PROVIDED
Table schema is not provided. Please provide the schema (column definitions) of the table when using REPLACE TABLE without an AS SELECT query.
DELTA_SCHEMA_NOT_SET
Table schema is not set. Write data into it or use CREATE TABLE to set the schema.
DELTA_SET_LOCATION_SCHEMA_MISMATCH
The schema of the new Delta location is different than the current table schema.
original schema:
<original>
destination schema:
<destination>
If this is an intended change, you may turn this check off by running:
%%sql set <config>
= true
DELTA_SHALLOW_CLONE_FILE_NOT_FOUND
File <filePath>
referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE
statement. This table appears to be a shallow clone, if that is the case, this error can occur when the original table from which this table was cloned has deleted a file that the clone is still using. If you want any clones to be independent of the original table, use a DEEP clone instead.
DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_COLUMN
Non-partitioning column(s) <badCols>
are specified for SHOW PARTITIONS
DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_TABLE
SHOW PARTITIONS is not allowed on a table that is not partitioned: <tableName>
DELTA_SOURCE_IGNORE_DELETE
Detected deleted data (for example <removedFile>
) from streaming source at version <version>
. This is currently not supported. If you’d like to ignore deletes, set the option ‘ignoreDeletes’ to ‘true’. The source table can be found at path <dataPath>
.
DELTA_SOURCE_TABLE_IGNORE_CHANGES
Detected a data update (for example <file>
) in the source table at version <version>
. This is currently not supported. If you’d like to ignore updates, set the option ‘skipChangeCommits’ to ‘true’. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory. The source table can be found at path <dataPath>
.
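Both of the two preceding errors are cleared with a reader option, sketched below for a Scala notebook; the source path is hypothetical:
// Ignore commits that only delete data (e.g. partition deletes):
val ignoringDeletes = spark.readStream.format("delta")
  .option("ignoreDeletes", "true")
  .load("/delta/source_table")

// Or skip commits that update or delete existing rows entirely:
val skippingChanges = spark.readStream.format("delta")
  .option("skipChangeCommits", "true")
  .load("/delta/source_table")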
DELTA_STATE_RECOVER_ERROR
The <operation>
of your Delta table could not be recovered while reconstructing
version: <version>
. Did you manually delete files in the deltalog directory?
DELTA_STATS_COLLECTION_COLUMN_NOT_FOUND
<statsType>
stats not found for column in Parquet metadata: <columnPath>
.
DELTA_STREAMING_CANNOT_CONTINUE_PROCESSING_POST_SCHEMA_EVOLUTION
We’ve detected a non-additive schema change (<opType>
) at Delta version <schemaChangeVersion>
in the Delta streaming source. Please check whether you want to manually propagate this schema change to the sink table before proceeding with stream processing.
Once you have fixed the schema of the sink table, or have decided there is no need to fix it, you can set one of the following SQL configurations to unblock this non-additive schema change and continue stream processing.
To unblock this particular stream just for this single schema change: set <allowCkptVerKey> = <allowCkptVerValue>.
To unblock this particular stream: set <allowCkptKey> = <allowCkptValue>.
To unblock all streams: set <allowAllKey> = <allowAllValue>.
Alternatively, if applicable, you may replace <allowAllMode>
with <opSpecificMode>
in the SQL conf to unblock the stream for just this schema change type.
DELTA_STREAMING_CHECK_COLUMN_MAPPING_NO_SNAPSHOT
Failed to obtain Delta log snapshot for the start version when checking column mapping schema changes. Please choose a different start version, or force enable streaming read at your own risk by setting ‘<config>
’ to ‘true’.
DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE
Streaming read is not supported on tables with read-incompatible schema changes (e.g., column renames, drops, or data type changes).
For further information and possible next steps to resolve this issue, please review the documentation at <docLink>
Read schema: <readSchema>. Incompatible data schema: <incompatibleSchema>.
DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_SCHEMA_LOG
Streaming read is not supported on tables with read-incompatible schema changes (e.g., column renames, drops, or data type changes).
Please provide a ‘schemaTrackingLocation’ to enable non-additive schema evolution for Delta stream processing.
See <docLink>
for more details.
Read schema: <readSchema>. Incompatible data schema: <incompatibleSchema>.
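A sketch of wiring up schema tracking, assuming a Scala notebook; paths and names are hypothetical. Note that the schema location must sit under the stream’s checkpoint location (see DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT below):
val df = spark.readStream.format("delta")
  .option("schemaTrackingLocation", "/checkpoints/my_stream/_schema_log")
  .load("/delta/source_table")
df.writeStream
  .option("checkpointLocation", "/checkpoints/my_stream")
  .toTable("sink_table")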
DELTA_STREAMING_METADATA_EVOLUTION
The schema, table configuration or protocol of your Delta table has changed during streaming.
The schema or metadata tracking log has been updated.
Please restart the stream to continue processing using the updated metadata.
Updated schema: <schema>
.
Updated table configurations: <config>
.
Updated table protocol: <protocol>
DELTA_STREAMING_SCHEMA_EVOLUTION_UNSUPPORTED_ROW_FILTER_COLUMN_MASKS
Streaming from source table <tableId>
with schema tracking does not support row filters or column masks.
Please drop the row filters or column masks, or disable schema tracking.
DELTA_STREAMING_SCHEMA_LOCATION_CONFLICT
Detected conflicting schema location ‘<loc>
’ while streaming from table or table located at ‘<table>
’.
Another stream may be reusing the same schema location, which is not allowed.
Please provide a new unique schemaTrackingLocation
path or streamingSourceTrackingId
as a reader option for one of the streams from this table.
DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT
Schema location ‘<schemaTrackingLocation>
’ must be placed under checkpoint location ‘<checkpointLocation>
’.
DELTA_STREAMING_SCHEMA_LOG_DESERIALIZE_FAILED
Incomplete log file in the Delta streaming source schema log at ‘<location>
’.
The schema log may have been corrupted. Please pick a new schema location.
DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_DELTA_TABLE_ID
Detected incompatible Delta table id when trying to read Delta stream.
Persisted table id: <persistedId>
, Table id: <tableId>
The schema log might have been reused. Please pick a new schema location.
DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_PARTITION_SCHEMA
Detected incompatible partition schema when trying to read Delta stream.
Persisted schema: <persistedSchema>
, Delta partition schema: <partitionSchema>
Please pick a new schema location to reinitialize the schema log if you have manually changed the table’s partition schema recently.
DELTA_STREAMING_SCHEMA_LOG_INIT_FAILED_INCOMPATIBLE_METADATA
We could not initialize the Delta streaming source schema log because
we detected an incompatible schema or protocol change while serving a streaming batch from table version <a>
to <b>
.
DELTA_STREAMING_SCHEMA_LOG_PARSE_SCHEMA_FAILED
Failed to parse the schema from the Delta streaming source schema log.
The schema log may have been corrupted. Please pick a new schema location.
DELTA_TABLE_ALREADY_CONTAINS_CDC_COLUMNS
Unable to enable Change Data Capture on the table. The table already contains
reserved columns <columnList>
that will be used internally as metadata for the table’s Change Data Feed. To enable
Change Data Feed on the table, rename or drop these columns.
DELTA_TABLE_FOR_PATH_UNSUPPORTED_HADOOP_CONF
Currently, DeltaTable.forPath only supports Hadoop configuration keys starting with <allowedPrefixes>
, but got <unsupportedOptions>
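A sketch of the supported call shape, assuming the io.delta.tables Scala API (Delta Lake 2.2+); the path, the credential values, and the concrete allowed prefixes (given as <allowedPrefixes> in the message) are placeholders:
import io.delta.tables.DeltaTable

val table = DeltaTable.forPath(
  spark,
  "s3a://my-bucket/tables/events",  // hypothetical path
  Map(                              // only keys with an allowed prefix, e.g. "fs.", are accepted
    "fs.s3a.access.key" -> "<accessKey>",
    "fs.s3a.secret.key" -> "<secretKey>"))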
DELTA_TABLE_LOCATION_MISMATCH
The location of the existing table <tableName>
is <existingTableLocation>
. It doesn’t match the specified location <tableLocation>
.
DELTA_TABLE_ONLY_OPERATION
<tableName>
is not a Delta table. <operation>
is only supported for Delta tables.
DELTA_TIMESTAMP_GREATER_THAN_COMMIT
The provided timestamp (<providedTimestamp>
) is after the latest version available to this
table (<tableName>
). Please use a timestamp before or at <maximumTimestamp>
.
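A time-travel sketch in Scala; the path and timestamp are hypothetical, and the timestamp must not exceed <maximumTimestamp> from the message:
val df = spark.read.format("delta")
  .option("timestampAsOf", "2024-01-01 00:00:00")
  .load("/delta/source_table")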
DELTA_TRUNCATED_TRANSACTION_LOG
<path>
: Unable to reconstruct state at version <version>
as the transaction log has been truncated due to manual deletion or the log retention policy (<logRetentionKey>
=<logRetention>
) and checkpoint retention policy (<checkpointRetentionKey>
=<checkpointRetention>
)
DELTA_TRUNCATE_TABLE_PARTITION_NOT_SUPPORTED
Operation not allowed: TRUNCATE TABLE on Delta tables does not support partition predicates; use DELETE to delete specific partitions or rows.
DELTA_TXN_LOG_FAILED_INTEGRITY
The transaction log has failed integrity checks. Failed verification at version <version>
of:
<mismatchStringOpt>
DELTA_UDF_IN_GENERATED_COLUMN
Found <udfExpr>
. A generated column cannot use a user-defined function
DELTA_UNEXPECTED_ACTION_IN_OPTIMIZE
Unexpected action <action>
with type <actionClass>
. Optimize should only have AddFiles and RemoveFiles.
DELTA_UNEXPECTED_CHANGE_FILES_FOUND
Change files found in a dataChange = false transaction. Files:
<fileList>
DELTA_UNEXPECTED_NUM_PARTITION_COLUMNS_FROM_FILE_NAME
Expecting <expectedColsSize>
partition column(s): <expectedCols>
, but found <parsedColsSize>
partition column(s): <parsedCols>
from parsing the file name: <path>
DELTA_UNEXPECTED_PARTIAL_SCAN
Expect a full scan of Delta sources, but found a partial scan. path:<path>
DELTA_UNEXPECTED_PARTITION_COLUMN_FROM_FILE_NAME
Expecting partition column <expectedCol>
, but found partition column <parsedCol>
from parsing the file name: <path>
DELTA_UNEXPECTED_PARTITION_SCHEMA_FROM_USER
CONVERT TO DELTA was called with a partition schema different from the partition schema inferred from the catalog. Please avoid providing the schema so that the partition schema can be chosen from the catalog.
catalog partition schema:
<catalogPartitionSchema>
provided partition schema:
<userPartitionSchema>
DELTA_UNIVERSAL_FORMAT_VIOLATION
The validation of Universal Format (<format>
) has failed: <violation>
DELTA_UNRECOGNIZED_COLUMN_CHANGE
Unrecognized column change <otherClass>
. You may be running an out-of-date Delta Lake version.
DELTA_UNSET_NON_EXISTENT_PROPERTY
Attempted to unset non-existent property ‘<property>
’ in table <tableName>
DELTA_UNSUPPORTED_ALTER_TABLE_REPLACE_COL_OP
Unsupported ALTER TABLE REPLACE COLUMNS operation. Reason: <details>
Failed to change schema from:
<oldSchema>
to:
<newSchema>
DELTA_UNSUPPORTED_CLONE_REPLACE_SAME_TABLE
You tried to REPLACE an existing table (<tableName>
) with CLONE. This operation is
unsupported. Try a different target for CLONE or delete the table at the current target.
DELTA_UNSUPPORTED_COLUMN_MAPPING_MODE_CHANGE
Changing column mapping mode from ‘<oldMode>
’ to ‘<newMode>
’ is not supported.
DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL
Your current table protocol version does not support changing column mapping modes
using <config>
.
Required Delta protocol version for column mapping:
<requiredVersion>
Your table’s current Delta protocol version:
<currentVersion>
<advice>
DELTA_UNSUPPORTED_COLUMN_MAPPING_SCHEMA_CHANGE
Schema change is detected:
old schema:
<oldTableSchema>
new schema:
<newTableSchema>
Schema changes are not allowed during the change of column mapping mode.
DELTA_UNSUPPORTED_COLUMN_TYPE_IN_BLOOM_FILTER
Creating a bloom filter index on a column with type <dataType>
is unsupported: <columnName>
DELTA_UNSUPPORTED_DATA_TYPES
Found columns using unsupported data types: <dataTypeList>
. You can set ‘<config>
’ to ‘false’ to disable the type check. Disabling this type check may allow users to create unsupported Delta tables and should only be used when trying to read/write legacy tables.
DELTA_UNSUPPORTED_DESCRIBE_DETAIL_VIEW
<view>
is a view. DESCRIBE DETAIL is only supported for tables.
DELTA_UNSUPPORTED_DROP_NESTED_COLUMN_FROM_NON_STRUCT_TYPE
Can only drop nested columns from StructType. Found <struct>
DELTA_UNSUPPORTED_EXPRESSION
Unsupported expression type(<expType>
) for <causedBy>
. The supported types are [<supportedTypes>
].
DELTA_UNSUPPORTED_FEATURES_FOR_READ
Unable to read this table because it requires reader table feature(s) that are unsupported by this version of Databricks: <unsupported>
.
DELTA_UNSUPPORTED_FEATURES_FOR_WRITE
Unable to write this table because it requires writer table feature(s) that are unsupported by this version of Databricks: <unsupported>
.
DELTA_UNSUPPORTED_FEATURES_IN_CONFIG
Table feature(s) configured in the following Spark configs or Delta table properties are not recognized by this version of Databricks: <configs>
.
DELTA_UNSUPPORTED_FEATURE_STATUS
Expecting the status for table feature <feature>
to be “supported”, but got “<status>
”.
DELTA_UNSUPPORTED_FIELD_UPDATE_NON_STRUCT
Updating nested fields is only supported for StructType, but you are trying to update a field of <columnName>
, which is of type: <dataType>
.
DELTA_UNSUPPORTED_FSCK_WITH_DELETION_VECTORS
The ‘FSCK REPAIR TABLE’ command is not supported on table versions with missing deletion vector files.
Please contact support.
DELTA_UNSUPPORTED_GENERATE_WITH_DELETION_VECTORS
The ‘GENERATE symlink_format_manifest’ command is not supported on table versions with deletion vectors.
In order to produce a version of the table without deletion vectors, run ‘REORG TABLE table APPLY (PURGE)’. Then re-run the ‘GENERATE’ command.
Make sure that no concurrent transactions are adding deletion vectors again between REORG and GENERATE.
If you need to generate manifests regularly, or you cannot prevent concurrent transactions, consider disabling deletion vectors on this table using ‘ALTER TABLE table SET TBLPROPERTIES (delta.enableDeletionVectors = false)’.
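The sequence above, sketched as SQL run from a Scala notebook; my_table is a hypothetical name:
spark.sql("REORG TABLE my_table APPLY (PURGE)")  // rewrite files so no deletion vectors remain
spark.sql("GENERATE symlink_format_manifest FOR TABLE my_table")
// If manifests are needed regularly, consider disabling deletion vectors instead:
spark.sql("ALTER TABLE my_table SET TBLPROPERTIES (delta.enableDeletionVectors = false)")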
DELTA_UNSUPPORTED_INVARIANT_NON_STRUCT
Invariants on nested fields other than StructTypes are not supported.
DELTA_UNSUPPORTED_MANIFEST_GENERATION_WITH_COLUMN_MAPPING
Manifest generation is not supported for tables that leverage column mapping, as external readers cannot read these Delta tables. See Delta documentation for more details.
DELTA_UNSUPPORTED_MERGE_SCHEMA_EVOLUTION_WITH_CDC
MERGE INTO operations with schema evolution do not currently support writing CDC output.
DELTA_UNSUPPORTED_MULTI_COL_IN_PREDICATE
Multi-column In predicates are not supported in the <operation>
condition.
DELTA_UNSUPPORTED_NESTED_COLUMN_IN_BLOOM_FILTER
Creating a bloom filter index on a nested column is currently unsupported: <columnName>
DELTA_UNSUPPORTED_NESTED_FIELD_IN_OPERATION
Nested field is not supported in the <operation>
(field = <fieldName>
).
DELTA_UNSUPPORTED_NON_EMPTY_CLONE
The clone destination table is non-empty. Please TRUNCATE or DELETE FROM the table before running CLONE.
DELTA_UNSUPPORTED_PARTITION_COLUMN_IN_BLOOM_FILTER
Creating a bloom filter index on a partitioning column is unsupported: <columnName>
DELTA_UNSUPPORTED_STATIC_PARTITIONS
Specifying static partitions in the partition spec is currently not supported during inserts
DELTA_UNSUPPORTED_SUBQUERY_IN_PARTITION_PREDICATES
Subquery is not supported in partition predicates.
DELTA_UNSUPPORTED_TIME_TRAVEL_VIEWS
Cannot time travel views, subqueries, streams or change data feed queries.
DELTA_UNSUPPORTED_VACUUM_SPECIFIC_PARTITION
Please provide the base path (<baseDeltaPath>
) when vacuuming Delta tables. Vacuuming specific partitions is currently not supported.
DELTA_UPDATE_SCHEMA_MISMATCH_EXPRESSION
Cannot cast <fromCatalog>
to <toCatalog>
. All nested columns must match.
DELTA_VERSIONS_NOT_CONTIGUOUS
Versions (<versionList>
) are not contiguous.
For more details see DELTA_VERSIONS_NOT_CONTIGUOUS
DELTA_VIOLATE_CONSTRAINT_WITH_VALUES
CHECK constraint <constraintName>
<expression>
violated by row with values:
<values>
DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED
The validation of the properties of table <table>
has been violated:
For more details see DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED
DELTA_ZORDERING_ON_COLUMN_WITHOUT_STATS
Z-Ordering on <cols>
will be
ineffective, because we currently do not collect stats for these columns. Please refer to
<link>
for more information on data skipping and z-ordering. You can disable
this check by setting
‘%%sql set <zorderColStatKey> = false’
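A remediation sketch, assuming a Scala notebook and a hypothetical table my_events; delta.dataSkippingNumIndexedCols controls how many leading columns get statistics (the default is 32):
// Raise the indexed-column count (or reorder the table) so the Z-order columns get stats,
// recompute statistics (see DELTA_ROW_ID_ASSIGNMENT_WITHOUT_STATS above), then re-optimize:
spark.sql("ALTER TABLE my_events SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '40')")
spark.sql("OPTIMIZE my_events ZORDER BY (col_a, col_b)")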
Auto Loader
CF_ADD_NEW_NOT_SUPPORTED
Schema evolution mode <addNewColumnsMode>
is not supported when the schema is specified. To use this mode, you can provide the schema through cloudFiles.schemaHints
instead.
CF_AMBIGUOUS_AUTH_OPTIONS_ERROR
Found notification-setup authentication options for the (default) directory
listing mode:
<options>
If you wish to use the file notification mode, please explicitly set:
.option(“cloudFiles.<useNotificationsKey>”, “true”)
Alternatively, if you want to skip the validation of your options and ignore these
authentication options, you can set:
.option(“cloudFiles.<validateOptionsKey>”, “false”)
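A sketch of explicitly opting in to file notification mode (the concrete option key is cloudFiles.useNotifications); the format, schema location, and input path are hypothetical:
val df = spark.readStream.format("cloudFiles")
  .option("cloudFiles.format", "json")
  .option("cloudFiles.useNotifications", "true")
  .option("cloudFiles.schemaLocation", "/checkpoints/ingest/_schemas")
  .load("s3://my-bucket/input")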
CF_AMBIGUOUS_INCREMENTAL_LISTING_MODE_ERROR
Incremental listing mode (cloudFiles.<useIncrementalListingKey>
)
and file notification (cloudFiles.<useNotificationsKey>
)
have been enabled at the same time.
Please make sure that you select only one.
CF_BUCKET_MISMATCH
The <storeType>
in the file event <fileEvent>
is different from expected by the source: <source>
.
CF_CANNOT_EVOLVE_SCHEMA_LOG_EMPTY
Cannot evolve schema when the schema log is empty. Schema log location: <logPath>
CF_CANNOT_RESOLVE_CONTAINER_NAME
Cannot resolve container name from path: <path>
, Resolved uri: <uri>
CF_CANNOT_RUN_DIRECTORY_LISTING
Cannot run directory listing when there is an async backfill thread running
CF_CLEAN_SOURCE_ALLOW_OVERWRITES_BOTH_ON
Cannot turn on cloudFiles.cleanSource and cloudFiles.allowOverwrites at the same time.
CF_CLEAN_SOURCE_UNAUTHORIZED_WRITE_PERMISSION
Auto Loader cannot delete processed files because it does not have write permissions to the source directory.
<reason>
To fix you can either:
Grant write permissions to the source directory OR
Set cleanSource to ‘OFF’
You could also unblock your stream by setting the SQLConf spark.databricks.cloudFiles.cleanSource.disabledDueToAuthorizationErrors to ‘true’.
CF_DUPLICATE_COLUMN_IN_DATA
There was an error when trying to infer the partition schema of your table. You have the same column duplicated in your data and partition paths. To ignore the partition value, please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnsKey>
”, “{comma-separated-list}”)
CF_EMPTY_DIR_FOR_SCHEMA_INFERENCE
Cannot infer schema when the input path <path>
is empty. Please try to start the stream when there are files in the input path, or specify the schema.
CF_EVENT_GRID_AUTH_ERROR
Failed to create an Event Grid subscription. Please make sure that your service
principal has <permissionType>
Event Grid Subscriptions. See more details at:
<docLink>
CF_EVENT_GRID_CREATION_FAILED
Failed to create event grid subscription. Please ensure that Microsoft.EventGrid is
registered as resource provider in your subscription. See more details at:
<docLink>
CF_EVENT_GRID_NOT_FOUND_ERROR
Failed to create an Event Grid subscription. Please make sure that your storage
account (<storageAccount>
) is under your resource group (<resourceGroup>
) and that
the storage account is a “StorageV2 (general purpose v2)” account. See more details at:
<docLink>
CF_EVENT_NOTIFICATION_NOT_SUPPORTED
Auto Loader event notification mode is not supported for <cloudStore>
.
CF_FAILED_TO_CREATED_PUBSUB_SUBSCRIPTION
Failed to create subscription: <subscriptionName>
. A subscription with the same name already exists and is associated with another topic: <otherTopicName>
. The desired topic is <proposedTopicName>
. Either delete the existing subscription or create a subscription with a new resource suffix.
CF_FAILED_TO_CREATED_PUBSUB_TOPIC
Failed to create topic: <topicName>
. A topic with the same name already exists. <reason>
Remove the existing topic or try again with another resource suffix.
CF_FAILED_TO_DELETE_GCP_NOTIFICATION
Failed to delete notification with id <notificationId>
on bucket <bucketName>
for topic <topicName>
. Please retry or manually remove the notification through the GCP console.
CF_FAILED_TO_DESERIALIZE_PERSISTED_SCHEMA
Failed to deserialize persisted schema from string: ‘<jsonSchema>
’
CF_FAILED_TO_INFER_SCHEMA
Failed to infer schema for format <fileFormatInput>
from existing files in input path <path>
. Please ensure you configured the options properly or explicitly specify the schema.
CF_FOUND_MULTIPLE_AUTOLOADER_PUBSUB_SUBSCRIPTIONS
Found multiple (<num>
) subscriptions with the Auto Loader prefix for topic <topicName>
:
<subscriptionList>
There should only be one subscription per topic. Please manually ensure that your topic does not have multiple subscriptions.
CF_GCP_AUTHENTICATION
Please either provide all of the following: <clientEmail>
, <client>
,
<privateKey>
, and <privateKeyId>
or provide none of them in order to use the default
GCP credential provider chain for authenticating with GCP resources.
CF_GCP_LABELS_COUNT_EXCEEDED
Received too many labels (<num>
) for GCP resource. The maximum label count per resource is <maxNum>
.
CF_GCP_RESOURCE_TAGS_COUNT_EXCEEDED
Received too many resource tags (<num>
) for GCP resource. The maximum resource tag count per resource is <maxNum>
, as resource tags are stored as GCP labels on resources, and Databricks specific tags consume some of this label quota.
CF_INCORRECT_SQL_PARAMS
The cloud_files method accepts two required string parameters: the path to load from, and the file format. File reader options must be provided in a string key-value map. e.g. cloud_files(“path”, “json”, map(“option1”, “value1”)). Received: <params>
CF_INVALID_GCP_RESOURCE_TAG_KEY
Invalid resource tag key for GCP resource: <key>
. Keys must start with a lowercase letter, be within 1 to 63 characters long, and contain only lowercase letters, numbers, underscores (_), and hyphens (-).
CF_INVALID_GCP_RESOURCE_TAG_VALUE
Invalid resource tag value for GCP resource: <value>
. Values must be within 0 to 63 characters long and must contain only lowercase letters, numbers, underscores (_), and hyphens (-).
CF_INVALID_SCHEMA_EVOLUTION_MODE
cloudFiles.<schemaEvolutionModeKey>
must be one of {
“<addNewColumns>
”
“<failOnNewColumns>
”
“<rescue>
”
“<noEvolution>
”}
CF_INVALID_SCHEMA_HINTS_OPTION
Schema hints can only specify a particular column once.
In this case, column <columnName>
is redefined multiple times in schemaHints:
<schemaHints>
CF_INVALID_SCHEMA_HINT_COLUMN
Schema hints cannot be used to override maps’ and arrays’ nested types.
Conflicted column: <columnName>
CF_MISSING_METADATA_FILE_ERROR
The metadata file in the streaming source checkpoint directory is missing. This metadata
file contains important default options for the stream, so the stream cannot be restarted
right now. Please contact Databricks support for assistance.
CF_MISSING_PARTITION_COLUMN_ERROR
Partition column <columnName>
does not exist in the provided schema:
<schema>
CF_MISSING_SCHEMA_IN_PATHLESS_MODE
Please specify a schema using .schema() if a path is not provided to the CloudFiles source while using file notification mode. Alternatively, to have Auto Loader infer the schema, please provide a base path in .load().
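A sketch of a pathless file-notification stream with an explicit schema, assuming a Scala notebook; the column names are hypothetical:
import org.apache.spark.sql.types.{LongType, StringType, StructType}

val schema = new StructType()
  .add("id", LongType)
  .add("payload", StringType)
val df = spark.readStream.format("cloudFiles")
  .option("cloudFiles.format", "json")
  .option("cloudFiles.useNotifications", "true")
  .schema(schema)  // required because no path is passed to load()
  .load()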
CF_MULTIPLE_PUBSUB_NOTIFICATIONS_FOR_TOPIC
Found existing notifications for topic <topicName>
on bucket <bucketName>
:
notification,id
<notificationList>
To avoid polluting the subscriber with unintended events, please delete the above notifications and retry.
CF_NEW_PARTITION_ERROR
New partition columns were inferred from your files: [<filesList>
]. Please provide all partition columns in your schema or provide a list of partition columns which you would like to extract values for by using: .option(“cloudFiles.partitionColumns”, “{comma-separated-list|empty-string}”)
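A sketch of pinning the partition columns explicitly; the column names and path are hypothetical, and an empty string tells Auto Loader to extract no partition values:
val df = spark.readStream.format("cloudFiles")
  .option("cloudFiles.format", "parquet")
  .option("cloudFiles.partitionColumns", "year,month,day")  // or "" to ignore partition values
  .option("cloudFiles.schemaLocation", "/checkpoints/ingest/_schemas")
  .load("s3://my-bucket/input")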
CF_PARTITON_INFERENCE_ERROR
There was an error when trying to infer the partition schema of the current batch of files. Please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnOption>
”, “{comma-separated-list}”)
CF_PATH_DOES_NOT_EXIST_FOR_READ_FILES
Cannot read files when the input path <path>
does not exist. Please make sure the input path exists and re-try.
CF_PERIODIC_BACKFILL_NOT_SUPPORTED
Periodic backfill is not supported if asynchronous backfill is disabled. You can enable asynchronous backfill/directory listing by setting spark.databricks.cloudFiles.asyncDirListing
to true
CF_PROTOCOL_MISMATCH
<message>
If you don’t need to make any other changes to your code, then please set the SQL
configuration: ‘<sourceProtocolVersionKey>
= <value>
’
to resume your stream. Please refer to:
<docLink>
for more details.
CF_REGION_NOT_FOUND_ERROR
Could not get default AWS Region. Please specify a region using the cloudFiles.region option.
CF_RESOURCE_SUFFIX_EMPTY
Failed to create notification services: the resource suffix cannot be empty.
CF_RESOURCE_SUFFIX_INVALID_CHAR_AWS
Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-) and underscores (_).
CF_RESOURCE_SUFFIX_INVALID_CHAR_AZURE
Failed to create notification services: the resource suffix can only have lowercase letters, numbers, and dashes (-).
CF_RESOURCE_SUFFIX_INVALID_CHAR_GCP
Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-), underscores (_), periods (.), tildes (~), plus signs (+), and percent signs (<percentSign>
).
CF_RESOURCE_SUFFIX_LIMIT
Failed to create notification services: the resource suffix cannot have more than <limit>
characters.
CF_RESOURCE_SUFFIX_LIMIT_GCP
Failed to create notification services: the resource suffix must be between <lowerLimit>
and <upperLimit>
characters.
CF_RESTRICTED_GCP_RESOURCE_TAG_KEY
Found restricted GCP resource tag key (<key>
). The following GCP resource tag keys are restricted for Auto Loader: [<restrictedKeys>
]
CF_RETENTION_GREATER_THAN_MAX_FILE_AGE
cloudFiles.cleanSource.retentionDuration cannot be greater than cloudFiles.maxFileAge.
CF_SAME_PUB_SUB_TOPIC_NEW_KEY_PREFIX
Failed to create notification for topic: <topic>
with prefix: <prefix>
. There is already a topic with the same name with another prefix: <oldPrefix>
. Try using a different resource suffix for setup or delete the existing setup.
CF_SOURCE_UNSUPPORTED
The cloud files source only supports S3, Azure Blob Storage (wasb/wasbs), Azure Data Lake Gen1 (adl), and Gen2 (abfs/abfss) paths right now. path: ‘<path>
’, resolved uri: ‘<uri>
’
CF_STATE_INCORRECT_SQL_PARAMS
The cloud_files_state function accepts a string parameter representing the checkpoint directory of a cloudFiles stream or a multi-part tableName identifying a streaming table, and an optional second integer parameter representing the checkpoint version to load state for. The second parameter may also be ‘latest’ to read the latest checkpoint. Received: <params>
CF_STATE_INVALID_CHECKPOINT_PATH
The input checkpoint path <path>
is invalid. Either the path does not exist or there are no cloud_files sources found.
CF_STATE_INVALID_VERSION
The specified version <version>
does not exist, or was removed during analysis.
CF_UNABLE_TO_DERIVE_STREAM_CHECKPOINT_LOCATION
Unable to derive the stream checkpoint location from the source checkpoint location: <checkPointLocation>
CF_UNABLE_TO_DETECT_FILE_FORMAT
Unable to detect the source file format from <fileSize>
sampled file(s), found <formats>
. Please specify the format.
CF_UNABLE_TO_EXTRACT_BUCKET_INFO
Unable to extract bucket information. Path: ‘<path>
’, resolved uri: ‘<uri>
’.
CF_UNABLE_TO_EXTRACT_KEY_INFO
Unable to extract key information. Path: ‘<path>
’, resolved uri: ‘<uri>
’.
CF_UNABLE_TO_EXTRACT_STORAGE_ACCOUNT_INFO
Unable to extract storage account information; path: ‘<path>
’, resolved uri: ‘<uri>
’
CF_UNABLE_TO_LIST_EFFICIENTLY
Received a directory rename event for the path <path>
, but we are unable to list this directory efficiently. In order for the stream to continue, set the option ‘cloudFiles.ignoreDirRenames’ to true, and consider enabling regular backfills with cloudFiles.backfillInterval for this data to be processed.
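A sketch of the two options named in the message, for a Scala notebook; the path, interval, and schema location are hypothetical:
val df = spark.readStream.format("cloudFiles")
  .option("cloudFiles.format", "json")
  .option("cloudFiles.ignoreDirRenames", "true")   // let the stream continue past the rename event
  .option("cloudFiles.backfillInterval", "1 day")  // periodic backfills pick up the renamed data
  .option("cloudFiles.schemaLocation", "/checkpoints/ingest/_schemas")
  .load("s3://my-bucket/input")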
CF_UNKNOWN_OPTION_KEYS_ERROR
Found unknown option keys:
<optionList>
Please make sure that all provided option keys are correct. If you want to skip the
validation of your options and ignore these unknown options, you can set:
.option(“cloudFiles.<validateOptions>
”, “false”)
CF_UNSUPPORTED_FORMAT_FOR_SCHEMA_INFERENCE
Schema inference is not supported for format: <format>
. Please specify the schema.
CF_UNSUPPORTED_LOG_VERSION
UnsupportedLogVersion: maximum supported log version is v<maxVersion>, but encountered v<version>. The log file was produced by a newer version of DBR and cannot be read by this version. Please upgrade.
CF_UNSUPPORTED_SCHEMA_EVOLUTION_MODE
Schema evolution mode <mode>
is not supported for format: <format>
. Please set the schema evolution mode to ‘none’.
CF_USE_DELTA_FORMAT
Reading from a Delta table is not supported with this syntax. If you would like to consume data from Delta, please refer to the docs: read a Delta table (<deltaDocLink>
), or read a Delta table as a stream source (<streamDeltaDocLink>
). The streaming source from Delta is already optimized for incremental consumption of data.
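A sketch of the Delta-native reads this message points to, assuming a Scala notebook with a hypothetical path:
// One-off batch read:
val batchDf = spark.read.format("delta").load("/delta/source_table")

// Incremental streaming read; no Auto Loader needed:
val streamDf = spark.readStream.format("delta").load("/delta/source_table")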
Geospatial
GEOJSON_PARSE_ERROR
Error parsing GeoJSON: <parseError>
at position <pos>
For more details see GEOJSON_PARSE_ERROR
H3_INVALID_GRID_DISTANCE_VALUE
H3 grid distance <k>
must be non-negative
For more details see H3_INVALID_GRID_DISTANCE_VALUE
H3_INVALID_RESOLUTION_VALUE
H3 resolution <r>
must be between <minR>
and <maxR>
, inclusive
For more details see H3_INVALID_RESOLUTION_VALUE
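For reference, H3 resolutions range from 0 (coarsest) to 15 (finest). A sketch, assuming an H3-enabled workspace and using the h3_longlatash3 SQL function from a Scala notebook:
// Index a point (longitude, latitude) at resolution 13; the resolution must be within [<minR>, <maxR>]
spark.sql("SELECT h3_longlatash3(-122.4783, 37.8199, 13) AS cell").show()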
H3_NOT_ENABLED
SQLSTATE: none assigned
<h3Expression> is disabled or unsupported. Consider enabling Photon or switching to a tier that supports H3 expressions.
For more details see H3_NOT_ENABLED
H3_PENTAGON_ENCOUNTERED_ERROR
A pentagon was encountered while computing the hex ring of <h3Cell> with grid distance <k>
ST_NOT_ENABLED
SQLSTATE: none assigned
<stExpression>
is disabled or unsupported. Consider enabling Photon or switching to a tier that supports ST expressions.
WKB_PARSE_ERROR
Error parsing WKB: <parseError>
at position <pos>
For more details see WKB_PARSE_ERROR
WKT_PARSE_ERROR
Error parsing WKT: <parseError>
at position <pos>
For more details see WKT_PARSE_ERROR