Error classes in Azure Databricks

Applies to: Databricks SQL and Databricks Runtime 12.2 and above

Error classes are descriptive, human-readable strings that are unique to the error condition.

You can use error classes to programmatically handle errors in your application without the need to parse the error message.

This is a list of common, named error conditions returned by Azure Databricks.
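
For example, the following minimal PySpark sketch (assuming an active spark session, as in a Databricks notebook, and the pyspark.errors API available in Apache Spark 3.4+) branches on the error class instead of parsing the message text:

from pyspark.errors import PySparkException

# Enable ANSI mode so the division below raises a classified error.
spark.conf.set("spark.sql.ansi.enabled", "true")

try:
    spark.sql("SELECT 1 / 0").collect()
except PySparkException as e:
    # Branch on the stable error class rather than the message string.
    if e.getErrorClass() == "DIVIDE_BY_ZERO":
        print("Handled division by zero, SQLSTATE:", e.getSqlState())
    else:
        raise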

Databricks Runtime and Databricks SQL

ADD_DEFAULT_UNSUPPORTED

SQLSTATE: 42623

Failed to execute <statementType> command because DEFAULT values are not supported when adding new columns to previously existing target data source with table provider: “<dataSource>”.

AGGREGATE_FUNCTION_WITH_NONDETERMINISTIC_EXPRESSION

SQLSTATE: 42845

Non-deterministic expression <sqlExpr> should not appear in the arguments of an aggregate function.

AI_FUNCTION_HTTP_PARSE_CAST_ERROR

SQLSTATE: 2203G

Failed to parse model output when casting to the specified returnType: “<dataType>”, response JSON was: “<responseString>”. Please update the returnType to match the contents of the type represented by the response JSON and retry the query.

AI_FUNCTION_HTTP_PARSE_COLUMNS_ERROR

SQLSTATE: 2203G

The actual model output has more than one column “<responseString>”. However, the specified return type [“<dataType>”] has only one column. Please update the returnType to contain the same number of columns as the model output and retry the query.

AI_FUNCTION_HTTP_REQUEST_ERROR

SQLSTATE: 08000

Error occurred while making an HTTP request for function <funcName>: <errorMessage>

AI_FUNCTION_INVALID_HTTP_RESPONSE

SQLSTATE: 08000

Invalid HTTP response for function <funcName>: <errorMessage>

AI_FUNCTION_INVALID_MAX_WORDS

SQLSTATE: 22032

The maximum number of words must be a non-negative integer, but got <maxWords>.

AI_FUNCTION_INVALID_MODEL_PARAMETERS

SQLSTATE: 22023

The provided model parameters (<modelParameters>) are invalid in the AI_QUERY function for serving endpoint “<endpointName>”.

For more details see AI_FUNCTION_INVALID_MODEL_PARAMETERS

AI_FUNCTION_INVALID_RESPONSE_FORMAT

SQLSTATE: 0A000

AI function: “<functionName>” requires valid JSON string for responseFormat parameter, but found the following response format: “<invalidResponseFormat>”.

AI_FUNCTION_JSON_PARSE_ERROR

SQLSTATE: 22000

Error occurred while parsing the JSON response for function <funcName>: <errorMessage>

AI_FUNCTION_MODEL_SCHEMA_PARSE_ERROR

SQLSTATE: 2203G

Failed to parse the schema for the serving endpoint “<endpointName>”: <errorMessage>, response JSON was: “<responseJson>”.

Set the returnType parameter manually in the AI_QUERY function to override schema resolution.

AI_FUNCTION_UNSUPPORTED_ERROR

SQLSTATE: 56038

The function <funcName> is not supported in the current environment. It is only available in Databricks SQL Pro and Serverless.

AI_FUNCTION_UNSUPPORTED_REQUEST

SQLSTATE: 0A000

Failed to evaluate the SQL function “<functionName>” because the provided argument of <invalidValue> has “<invalidDataType>”, but only the following types are supported: <supportedDataTypes>. Please update the function call to provide an argument of string type and retry the query.

AI_FUNCTION_UNSUPPORTED_RESPONSE_FORMAT

SQLSTATE: 0A000

AI function: “<functionName>” does not support the type “<invalidResponseFormatType>” of the following response format: “<invalidResponseFormat>”. Supported types of the response format are: <supportedResponseFormatTypes>.

AI_FUNCTION_UNSUPPORTED_RETURN_TYPE

SQLSTATE: 0A000

AI function: “<functionName>” does not support the following type as return type: “<typeName>”. The return type must be a valid SQL type understood by Catalyst and supported by the AI function. Currently supported types include: <supportedValues>

AI_INVALID_ARGUMENT_VALUE_ERROR

SQLSTATE: 22032

Provided value “<argValue>” is not supported by argument “<argName>”. Supported values are: <supportedValues>

AI_QUERY_ENDPOINT_NOT_SUPPORT_STRUCTURED_OUTPUT

SQLSTATE: 0A000

Expected the serving endpoint task type to be “Chat” for structured output support, but found “<taskType>” for the endpoint “<endpointName>”.

AI_QUERY_RETURN_TYPE_COLUMN_TYPE_MISMATCH

SQLSTATE: 0A000

Provided “<sqlExpr>” is not supported by the argument returnType.

AI_SEARCH_CONFLICTING_QUERY_PARAM_SUPPLY_ERROR

SQLSTATE: 0A000

Conflicting parameters detected for vector_search SQL function: <conflictParamNames>, please specify one from: <parameterNames>.

AI_SEARCH_EMBEDDING_COLUMN_TYPE_UNSUPPORTED_ERROR

SQLSTATE: 0A000

vector_search SQL function with embedding column type <embeddingColumnType> is not supported.

AI_SEARCH_EMPTY_QUERY_PARAM_ERROR

SQLSTATE: 0A000

vector_search SQL function is missing query input parameter, please specify one from: <parameterNames>.

AI_SEARCH_INDEX_TYPE_UNSUPPORTED_ERROR

SQLSTATE: 0A000

vector_search SQL function with index type <indexType> is not supported.

AI_SEARCH_QUERY_TYPE_CONVERT_ENCODE_ERROR

SQLSTATE: 0A000

Failed to materialize the vector_search SQL function query from Spark type <dataType> to Scala-native objects during request encoding, with error: <errorMessage>.

AI_SEARCH_UNSUPPORTED_NUM_RESULTS_ERROR

SQLSTATE: 0A000

vector_search SQL function with num_results larger than <maxLimit> is not supported. The limit specified was <requestedLimit>. Please try again with num_results <= <maxLimit>.

ALL_PARAMETERS_MUST_BE_NAMED

SQLSTATE: 07001

Using name-parameterized queries requires all parameters to be named. Parameters missing names: <exprs>.

ALL_PARTITION_COLUMNS_NOT_ALLOWED

SQLSTATE: KD005

Cannot use all columns for partition columns.

ALTER_SCHEDULE_DOES_NOT_EXIST

SQLSTATE: 42704

Cannot alter <scheduleType> on a table without an existing schedule or trigger. Please add a schedule or trigger to the table before attempting to alter it.

ALTER_TABLE_COLUMN_DESCRIPTOR_DUPLICATE

SQLSTATE: 42710

ALTER TABLE <type> column <columnName> specifies descriptor “<optionName>” more than once, which is invalid.

AMBIGUOUS_ALIAS_IN_NESTED_CTE

SQLSTATE: 42KD0

Name <name> is ambiguous in nested CTE.

Please set <config> to “CORRECTED” so that the name defined in the inner CTE takes precedence. If set to “LEGACY”, outer CTE definitions will take precedence.

See https://spark.apache.org/docs/latest/sql-migration-guide.html#query-engine.

AMBIGUOUS_COLUMN_OR_FIELD

SQLSTATE: 42702

Column or field <name> is ambiguous and has <n> matches.

AMBIGUOUS_COLUMN_REFERENCE

SQLSTATE: 42702

Column <name> is ambiguous. This happens when you join several DataFrames together and some of these DataFrames are the same.

This column points to one of the DataFrames but Spark is unable to figure out which one.

Please alias the DataFrames with different names via DataFrame.alias before joining them, and specify the column using a qualified name, e.g. df.alias("a").join(df.alias("b"), col("a.id") > col("b.id")).
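
A minimal PySpark sketch of this fix, assuming an active spark session:

from pyspark.sql.functions import col

df = spark.range(3)
a, b = df.alias("a"), df.alias("b")

# Without the aliases, col("id") would be ambiguous in this self-join.
joined = a.join(b, col("a.id") > col("b.id"))
joined.select(col("a.id").alias("left_id"), col("b.id").alias("right_id")).show()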

AMBIGUOUS_CONSTRAINT

SQLSTATE: 42K0C

Ambiguous reference to constraint <constraint>.

AMBIGUOUS_LATERAL_COLUMN_ALIAS

SQLSTATE: 42702

Lateral column alias <name> is ambiguous and has <n> matches.

AMBIGUOUS_REFERENCE

SQLSTATE: 42704

Reference <name> is ambiguous, could be: <referenceNames>.

AMBIGUOUS_REFERENCE_TO_FIELDS

SQLSTATE: 42000

Ambiguous reference to the field <field>. It appears <count> times in the schema.

AMBIGUOUS_RESOLVER_EXTENSION

SQLSTATE: 0A000

The single-pass analyzer cannot process this query or command because the extension choice for <operator> is ambiguous: <extensions>.

Please contact Databricks support.

ANALYZE_CONSTRAINTS_NOT_SUPPORTED

SQLSTATE: 0A000

ANALYZE CONSTRAINTS is not supported.

ANSI_CONFIG_CANNOT_BE_DISABLED

SQLSTATE: 56038

The ANSI SQL configuration <config> cannot be disabled in this product.

AQE_THREAD_INTERRUPTED

SQLSTATE: HY008

AQE thread is interrupted, probably due to query cancellation by user.

ARGUMENT_NOT_CONSTANT

SQLSTATE: 42K08

The function <functionName> includes a parameter <parameterName> at position <pos> that requires a constant argument. Please compute the argument <sqlExpr> separately and pass the result as a constant.

ARITHMETIC_OVERFLOW

SQLSTATE: 22003

<message>.<alternative> If necessary set <config> to “false” to bypass this error.

For more details see ARITHMETIC_OVERFLOW

ASSIGNMENT_ARITY_MISMATCH

SQLSTATE: 42802

The number of columns or variables assigned or aliased: <numTarget> does not match the number of source expressions: <numExpr>.

AS_OF_JOIN

SQLSTATE: 42604

Invalid as-of join.

For more details see AS_OF_JOIN

AVRO_DEFAULT_VALUES_UNSUPPORTED

SQLSTATE: 0A000

The use of default values is not supported when rescuedDataColumn is enabled. You may be able to remove this check by setting spark.databricks.sql.avro.rescuedDataBlockUserDefinedSchemaDefaultValue to false, but the default values will not apply and null values will still be used.

AVRO_INCOMPATIBLE_READ_TYPE

SQLSTATE: 22KD3

Cannot convert Avro <avroPath> to SQL <sqlPath> because the original encoded data type is <avroType>, however you’re trying to read the field as <sqlType>, which would lead to an incorrect answer.

To allow reading this field, enable the SQL configuration: “spark.sql.legacy.avro.allowIncompatibleSchema”.

AVRO_POSITIONAL_FIELD_MATCHING_UNSUPPORTED

SQLSTATE: 0A000

The use of positional field matching is not supported when either rescuedDataColumn or failOnUnknownFields is enabled. Remove these options to proceed.

BATCH_METADATA_NOT_FOUND

SQLSTATE: 42K03

Unable to find batch <batchMetadataFile>.

BIGQUERY_OPTIONS_ARE_MUTUALLY_EXCLUSIVE

SQLSTATE: 42616

BigQuery connection credentials must be specified with either the ‘GoogleServiceAccountKeyJson’ parameter or all of ‘projectId’, ‘OAuthServiceAcctEmail’, ‘OAuthPvtKey’

BINARY_ARITHMETIC_OVERFLOW

SQLSTATE: 22003

<value1> <symbol> <value2> caused overflow. Use <functionName> to ignore overflow problem and return NULL.

BOOLEAN_STATEMENT_WITH_EMPTY_ROW

SQLSTATE: 21000

Boolean statement <invalidStatement> is invalid. Expected single row with a value of the BOOLEAN type, but got an empty row.

BUILT_IN_CATALOG

SQLSTATE: 42832

<operation> doesn’t support built-in catalogs.

CALL_ON_STREAMING_DATASET_UNSUPPORTED

SQLSTATE: 42KDE

The method <methodName> cannot be called on a streaming Dataset/DataFrame.

CANNOT_ALTER_COLLATION_BUCKET_COLUMN

SQLSTATE: 428FR

ALTER TABLE (ALTER|CHANGE) COLUMN cannot change collation of type/subtypes of bucket columns, but found the bucket column <columnName> in the table <tableName>.

CANNOT_ALTER_PARTITION_COLUMN

SQLSTATE: 428FR

ALTER TABLE (ALTER|CHANGE) COLUMN is not supported for partition columns, but found the partition column <columnName> in the table <tableName>.

CANNOT_ASSIGN_EVENT_TIME_COLUMN_WITHOUT_WATERMARK

SQLSTATE: 42611

Watermark needs to be defined to reassign event time column. Failed to find watermark definition in the streaming query.

CANNOT_CAST_DATATYPE

SQLSTATE: 42846

Cannot cast <sourceType> to <targetType>.

CANNOT_CONVERT_PROTOBUF_FIELD_TYPE_TO_SQL_TYPE

SQLSTATE: 42846

Cannot convert Protobuf <protobufColumn> to SQL <sqlColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).

CANNOT_CONVERT_PROTOBUF_MESSAGE_TYPE_TO_SQL_TYPE

SQLSTATE: 42846

Unable to convert <protobufType> of Protobuf to SQL type <toType>.

CANNOT_CONVERT_SQL_TYPE_TO_PROTOBUF_FIELD_TYPE

SQLSTATE: 42846

Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because schema is incompatible (protobufType = <protobufType>, sqlType = <sqlType>).

CANNOT_CONVERT_SQL_VALUE_TO_PROTOBUF_ENUM_TYPE

SQLSTATE: 42846

Cannot convert SQL <sqlColumn> to Protobuf <protobufColumn> because <data> is not in defined values for enum: <enumString>.

CANNOT_COPY_STATE

SQLSTATE: 0AKD0

Cannot copy catalog state like current database and temporary views from Unity Catalog to a legacy catalog.

CANNOT_CREATE_DATA_SOURCE_TABLE

SQLSTATE: 42KDE

Failed to create data source table <tableName>:

For more details see CANNOT_CREATE_DATA_SOURCE_TABLE

CANNOT_DECODE_URL

SQLSTATE: 22546

The provided URL cannot be decoded: <url>. Please ensure that the URL is properly formatted and try again.

CANNOT_DELETE_SYSTEM_OWNED

SQLSTATE: 42832

System owned <resourceType> cannot be deleted.

CANNOT_DROP_AMBIGUOUS_CONSTRAINT

SQLSTATE: 42K0C

Cannot drop the constraint with the name <constraintName> shared by a CHECK constraint and a PRIMARY KEY or FOREIGN KEY constraint. You can drop the PRIMARY KEY or FOREIGN KEY constraint with the following queries:

ALTER TABLE .. DROP PRIMARY KEY or

ALTER TABLE .. DROP FOREIGN KEY ..

CANNOT_ESTABLISH_CONNECTION

SQLSTATE: 08001

Cannot establish connection to remote <jdbcDialectName> database. Please check connection information and credentials e.g. host, port, user, password and database options. ** If you believe the information is correct, please check your workspace’s network setup and ensure it does not have outbound restrictions to the host. Please also check that the host does not block inbound connections from the network where the workspace’s Spark clusters are deployed. ** Detailed error message: <causeErrorMessage>.

CANNOT_ESTABLISH_CONNECTION_SERVERLESS

SQLSTATE: 08001

Cannot establish connection to remote <jdbcDialectName> database. Please check connection information and credentials e.g. host, port, user, password and database options. ** If you believe the information is correct, please allow inbound traffic from the Internet to your host, as you are using Serverless Compute. If your network policies do not allow inbound Internet traffic, please use non Serverless Compute, or you may reach out to your Databricks representative to learn about Serverless Private Networking. ** Detailed error message: <causeErrorMessage>.

CANNOT_INVOKE_IN_TRANSFORMATIONS

SQLSTATE: 0A000

Dataset transformations and actions can only be invoked by the driver, not inside of other Dataset transformations; for example, dataset1.map(x => dataset2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the dataset1.map transformation. For more information, see SPARK-28702.
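
A short sketch of the invalid pattern and a driver-side fix, assuming an active spark session:

ds1 = spark.range(5)
ds2 = spark.range(10)

# Invalid: ds2.count() would execute inside a transformation running on executors.
# ds1.rdd.map(lambda x: ds2.count() * x)

# Valid: compute the dependent value on the driver first, then close over it.
n = ds2.count()
result = ds1.selectExpr(f"id * {n} AS scaled")
result.show()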

CANNOT_LOAD_FUNCTION_CLASS

SQLSTATE: 46103

Cannot load class <className> when registering the function <functionName>, please make sure it is on the classpath.

CANNOT_LOAD_PROTOBUF_CLASS

SQLSTATE: 42K03

Could not load Protobuf class with name <protobufClassName>. <explanation>.

CANNOT_LOAD_STATE_STORE

SQLSTATE: 58030

An error occurred during loading state.

For more details see CANNOT_LOAD_STATE_STORE

CANNOT_MERGE_INCOMPATIBLE_DATA_TYPE

SQLSTATE: 42825

Failed to merge incompatible data types <left> and <right>. Please check the data types of the columns being merged and ensure that they are compatible. If necessary, consider casting the columns to compatible data types before attempting the merge.

CANNOT_MERGE_SCHEMAS

SQLSTATE: 42KD9

Failed merging schemas:

Initial schema:

<left>

Schema that cannot be merged with the initial schema:

<right>.

CANNOT_MODIFY_CONFIG

SQLSTATE: 46110

Cannot modify the value of the Spark config: <key>.

See also https://spark.apache.org/docs/latest/sql-migration-guide.html#ddl-statements.

CANNOT_PARSE_DECIMAL

SQLSTATE: 22018

Cannot parse decimal. Please ensure that the input is a valid number with optional decimal point or comma separators.

CANNOT_PARSE_INTERVAL

SQLSTATE: 22006

Unable to parse <intervalString>. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format. If the issue persists, please double check that the input value is not null or empty and try again.

CANNOT_PARSE_JSON_FIELD

SQLSTATE: 2203G

Cannot parse the field name <fieldName> and the value <fieldValue> of the JSON token type <jsonType> to target Spark data type <dataType>.

CANNOT_PARSE_PROTOBUF_DESCRIPTOR

SQLSTATE: 22018

Error parsing descriptor bytes into Protobuf FileDescriptorSet.

CANNOT_PARSE_TIMESTAMP

SQLSTATE: 22007

<message>. If necessary set <ansiConfig> to “false” to bypass this error.

CANNOT_QUERY_TABLE_DURING_INITIALIZATION

SQLSTATE: 55019

Cannot query MV/ST during initialization.

For more details see CANNOT_QUERY_TABLE_DURING_INITIALIZATION

CANNOT_READ_ARCHIVED_FILE

SQLSTATE: KD003

Cannot read file at path <path> because it has been archived. Please adjust your query filters to exclude archived files.

CANNOT_READ_FILE

SQLSTATE: KD003

Cannot read <format> file at path: <path>.

For more details see CANNOT_READ_FILE

CANNOT_READ_SENSITIVE_KEY_FROM_SECURE_PROVIDER

SQLSTATE: 42501

Cannot read sensitive key ‘<key>’ from secure provider.

CANNOT_RECOGNIZE_HIVE_TYPE

SQLSTATE: 429BB

Cannot recognize hive type string: <fieldType>, column: <fieldName>. The specified data type for the field cannot be recognized by Spark SQL. Please check the data type of the specified field and ensure that it is a valid Spark SQL data type. Refer to the Spark SQL documentation for a list of valid data types and their format. If the data type is correct, please ensure that you are using a supported version of Spark SQL.

CANNOT_REFERENCE_UC_IN_HMS

SQLSTATE: 0AKD0

Cannot reference a Unity Catalog <objType> in Hive Metastore objects.

CANNOT_REMOVE_RESERVED_PROPERTY

SQLSTATE: 42000

Cannot remove reserved property: <property>.

CANNOT_RENAME_ACROSS_CATALOG

SQLSTATE: 0AKD0

Renaming a <type> across catalogs is not allowed.

CANNOT_RENAME_ACROSS_SCHEMA

SQLSTATE: 0AKD0

Renaming a <type> across schemas is not allowed.

CANNOT_RESOLVE_DATAFRAME_COLUMN

SQLSTATE: 42704

Cannot resolve dataframe column <name>. It’s probably because of illegal references like df1.select(df2.col("a")).

CANNOT_RESOLVE_STAR_EXPAND

SQLSTATE: 42704

Cannot resolve <targetString>.* given input columns <columns>. Please check that the specified table or struct exists and is accessible in the input columns.

CANNOT_RESTORE_PERMISSIONS_FOR_PATH

SQLSTATE: 58030

Failed to set permissions on created path <path> back to <permission>.

CANNOT_SHALLOW_CLONE_ACROSS_UC_AND_HMS

SQLSTATE: 0AKD0

Cannot shallow-clone tables across Unity Catalog and Hive Metastore.

CANNOT_SHALLOW_CLONE_NESTED

SQLSTATE: 0AKUC

Cannot shallow-clone a table <table> that is already a shallow clone.

CANNOT_SHALLOW_CLONE_NON_UC_MANAGED_TABLE_AS_SOURCE_OR_TARGET

SQLSTATE: 0AKUC

Shallow clone is only supported for the MANAGED table type. The table <table> is not a MANAGED table.

CANNOT_UPDATE_FIELD

SQLSTATE: 0A000

Cannot update <table> field <fieldName> type:

For more details see CANNOT_UPDATE_FIELD

CANNOT_UP_CAST_DATATYPE

SQLSTATE: 42846

Cannot up cast <expression> from <sourceType> to <targetType>.

<details>

CANNOT_USE_KRYO

SQLSTATE: 22KD3

Cannot load Kryo serialization codec. Kryo serialization cannot be used in the Spark Connect client. Use Java serialization, provide a custom Codec, or use Spark Classic instead.

CANNOT_VALIDATE_CONNECTION

SQLSTATE: 08000

Validation of <jdbcDialectName> connection is not supported. Please contact Databricks support for alternative solutions, or set “spark.databricks.testConnectionBeforeCreation” to “false” to skip connection testing before creating a connection object.

CANNOT_WRITE_STATE_STORE

SQLSTATE: 58030

Error writing state store files for provider <providerClass>.

For more details see CANNOT_WRITE_STATE_STORE

CAST_INVALID_INPUT

SQLSTATE: 22018

The value <expression> of the type <sourceType> cannot be cast to <targetType> because it is malformed. Correct the value as per the syntax, or change its target type. Use try_cast to tolerate malformed input and return NULL instead.

For more details see CAST_INVALID_INPUT
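
For instance, a sketch of the try_cast workaround, assuming ANSI mode and an active spark session:

# Fails with CAST_INVALID_INPUT under ANSI mode:
# spark.sql("SELECT CAST('abc' AS INT)").show()

# Tolerates the malformed input and returns NULL instead:
spark.sql("SELECT try_cast('abc' AS INT) AS parsed").show()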

CAST_OVERFLOW

SQLSTATE: 22003

The value <value> of the type <sourceType> cannot be cast to <targetType> due to an overflow. Use try_cast to tolerate overflow and return NULL instead.

CAST_OVERFLOW_IN_TABLE_INSERT

SQLSTATE: 22003

Fail to assign a value of <sourceType> type to the <targetType> type column or variable <columnName> due to an overflow. Use try_cast on the input value to tolerate overflow and return NULL instead.

CATALOG_NOT_FOUND

SQLSTATE: 42P08

The catalog <catalogName> was not found. Consider setting the SQL config <config> to a catalog plugin.

CHECKPOINT_RDD_BLOCK_ID_NOT_FOUND

SQLSTATE: 56000

Checkpoint block <rddBlockId> not found!

Either the executor that originally checkpointed this partition is no longer alive, or the original RDD is unpersisted.

If this problem persists, you may consider using rdd.checkpoint() instead, which is slower than local checkpointing but more fault-tolerant.

CIRCULAR_CLASS_REFERENCE

SQLSTATE: 42602

Cannot have circular references in class, but got the circular reference of class <t>.

CLASS_NOT_OVERRIDE_EXPECTED_METHOD

SQLSTATE: 38000

<className> must override either <method1> or <method2>.

CLASS_UNSUPPORTED_BY_MAP_OBJECTS

SQLSTATE: 0A000

MapObjects does not support the class <cls> as resulting collection.

CLEANROOM_COMMANDS_NOT_SUPPORTED

SQLSTATE: 0A000

Clean Room commands are not supported

CLEANROOM_INVALID_SHARED_DATA_OBJECT_NAME

SQLSTATE: 42K05

Invalid name to reference a <type> inside a Clean Room. Use a <type>’s name inside the clean room following the format of [catalog].[schema].[<type>].

If you are unsure about what name to use, you can run “SHOW ALL IN CLEANROOM [clean_room]” and use the value in the “name” column.

CLOUD_FILE_SOURCE_FILE_NOT_FOUND

SQLSTATE: 42K03

A file notification was received for file: <filePath> but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration <config> to true.

CLOUD_PROVIDER_ERROR

SQLSTATE: 58000

Cloud provider error: <message>

CLUSTERING_COLUMNS_MISMATCH

SQLSTATE: 42P10

Specified clustering does not match that of the existing table <tableName>.

Specified clustering columns: [<specifiedClusteringString>].

Existing clustering columns: [<existingClusteringString>].

CLUSTERING_NOT_SUPPORTED

SQLSTATE: 42000

‘<operation>’ does not support clustering.

CLUSTER_BY_AUTO_FEATURE_NOT_ENABLED

SQLSTATE: 0A000

Please contact your Databricks representative to enable the cluster-by-auto feature.

CLUSTER_BY_AUTO_REQUIRES_CLUSTERING_FEATURE_ENABLED

SQLSTATE: 56038

Please enable clusteringTable.enableClusteringTableFeature to use CLUSTER BY AUTO.

CLUSTER_BY_AUTO_REQUIRES_PREDICTIVE_OPTIMIZATION

SQLSTATE: 56038

CLUSTER BY AUTO requires Predictive Optimization to be enabled.

CLUSTER_BY_AUTO_UNSUPPORTED_TABLE_TYPE_ERROR

SQLSTATE: 56038

CLUSTER BY AUTO is only supported on UC Managed tables.

CODEC_NOT_AVAILABLE

SQLSTATE: 56038

The codec <codecName> is not available.

For more details see CODEC_NOT_AVAILABLE

CODEC_SHORT_NAME_NOT_FOUND

SQLSTATE: 42704

Cannot find a short name for the codec <codecName>.

COLLATION_INVALID_NAME

SQLSTATE: 42704

The value <collationName> does not represent a correct collation name. Suggested valid collation names: [<proposals>].

COLLATION_INVALID_PROVIDER

SQLSTATE: 42704

The value <provider> does not represent a correct collation provider. Supported providers are: [<supportedProviders>].

COLLATION_MISMATCH

SQLSTATE: 42P21

Could not determine which collation to use for string functions and operators.

For more details see COLLATION_MISMATCH

COLLECTION_SIZE_LIMIT_EXCEEDED

SQLSTATE: 54000

Can’t create an array with <numberOfElements> elements, which exceeds the array size limit <maxRoundedArrayLength>.

For more details see COLLECTION_SIZE_LIMIT_EXCEEDED

COLUMN_ALIASES_NOT_ALLOWED

SQLSTATE: 42601

Column aliases are not allowed in <op>.

COLUMN_ALREADY_EXISTS

SQLSTATE: 42711

The column <columnName> already exists. Choose another name or rename the existing column.

COLUMN_ARRAY_ELEMENT_TYPE_MISMATCH

SQLSTATE: 0A000

Some values in field <pos> are incompatible with the column array type. Expected type <type>.

COLUMN_MASKS_CHECK_CONSTRAINT_UNSUPPORTED

SQLSTATE: 0A000

Creating CHECK constraint on table <tableName> with column mask policies is not supported.

COLUMN_MASKS_DUPLICATE_USING_COLUMN_NAME

SQLSTATE: 42734

A <statementType> statement attempted to assign a column mask policy to a column which included two or more other referenced columns in the USING COLUMNS list with the same name <columnName>, which is invalid.

COLUMN_MASKS_FEATURE_NOT_SUPPORTED

SQLSTATE: 0A000

Column mask policies for <tableName> are not supported:

For more details see COLUMN_MASKS_FEATURE_NOT_SUPPORTED

COLUMN_MASKS_INCOMPATIBLE_SCHEMA_CHANGE

SQLSTATE: 0A000

Unable to <statementType> <columnName> from table <tableName> because it’s referenced in a column mask policy for column <maskedColumn>. The table owner must remove or alter this policy before proceeding.

COLUMN_MASKS_MERGE_UNSUPPORTED_SOURCE

SQLSTATE: 0A000

MERGE INTO operations do not support column mask policies in source table <tableName>.

COLUMN_MASKS_MERGE_UNSUPPORTED_TARGET

SQLSTATE: 0A000

MERGE INTO operations do not support writing into table <tableName> with column mask policies.

COLUMN_MASKS_MULTI_PART_TARGET_COLUMN_NAME

SQLSTATE: 42K05

This statement attempted to assign a column mask policy to a column <columnName> with multiple name parts, which is invalid.

COLUMN_MASKS_MULTI_PART_USING_COLUMN_NAME

SQLSTATE: 42K05

This statement attempted to assign a column mask policy to a column and the USING COLUMNS list included the name <columnName> with multiple name parts, which is invalid.

COLUMN_MASKS_NOT_ENABLED

SQLSTATE: 56038

Support for defining column masks is not enabled

COLUMN_MASKS_REQUIRE_UNITY_CATALOG

SQLSTATE: 0A000

Column mask policies are only supported in Unity Catalog.

COLUMN_MASKS_SHOW_PARTITIONS_UNSUPPORTED

SQLSTATE: 0A000

SHOW PARTITIONS command is not supported for <format> tables with column masks.

COLUMN_MASKS_TABLE_CLONE_SOURCE_NOT_SUPPORTED

SQLSTATE: 0A000

<mode> clone from table <tableName> with column mask policies is not supported.

COLUMN_MASKS_TABLE_CLONE_TARGET_NOT_SUPPORTED

SQLSTATE: 0A000

<mode> clone to table <tableName> with column mask policies is not supported.

COLUMN_MASKS_UNSUPPORTED_CONSTANT_AS_PARAMETER

SQLSTATE: 0AKD1

Using a constant as a parameter in a column mask policy is not supported. Please update your SQL command to remove the constant from the column mask definition and retry the command.

COLUMN_MASKS_UNSUPPORTED_PROVIDER

SQLSTATE: 0A000

Failed to execute <statementType> command because assigning column mask policies is not supported for target data source with table provider: “<provider>”.

COLUMN_MASKS_UNSUPPORTED_SUBQUERY

SQLSTATE: 0A000

Cannot perform <operation> for table <tableName> because it contains one or more column mask policies with subquery expression(s), which are not yet supported. Please contact the owner of the table to update the column mask policies in order to continue.

COLUMN_MASKS_USING_COLUMN_NAME_SAME_AS_TARGET_COLUMN

SQLSTATE: 42734

The column <columnName> had the same name as the target column, which is invalid; please remove the column from the USING COLUMNS list and retry the command.

COLUMN_NOT_DEFINED_IN_TABLE

SQLSTATE: 42703

<colType> column <colName> is not defined in table <tableName>, defined table columns are: <tableCols>.

COLUMN_NOT_FOUND

SQLSTATE: 42703

The column <colName> cannot be found. Verify the spelling and correctness of the column name according to the SQL config <caseSensitiveConfig>.

COLUMN_ORDINAL_OUT_OF_BOUNDS

SQLSTATE: 22003

Column ordinal out of bounds. The number of columns in the table is <attributesLength>, but the column ordinal is <ordinal>.

Attributes are the following: <attributes>.

COMMA_PRECEDING_CONSTRAINT_ERROR

SQLSTATE: 42601

Unexpected ‘,’ before constraint(s) definition. Ensure that the constraint clause does not start with a comma when columns (and expectations) are not defined.

COMMENT_ON_CONNECTION_NOT_IMPLEMENTED_YET

SQLSTATE: 42000

The COMMENT ON CONNECTION command is not implemented yet

COMPARATOR_RETURNS_NULL

SQLSTATE: 22004

The comparator has returned a NULL for a comparison between <firstValue> and <secondValue>.

It should return a positive integer for “greater than”, 0 for “equal” and a negative integer for “less than”.

To revert to deprecated behavior where NULL is treated as 0 (equal), you must set “spark.sql.legacy.allowNullComparisonResultInArraySort” to “true”.
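
A sketch of a comparator that never returns NULL, sorting NULL elements last instead, assuming an active spark session:

spark.sql("""
SELECT array_sort(
  array(3, NULL, 1),
  (l, r) -> CASE
    WHEN l IS NULL AND r IS NULL THEN 0
    WHEN l IS NULL THEN 1   -- place NULLs last rather than returning NULL
    WHEN r IS NULL THEN -1
    WHEN l < r THEN -1
    WHEN l > r THEN 1
    ELSE 0
  END
) AS sorted
""").show()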

COMPLEX_EXPRESSION_UNSUPPORTED_INPUT

SQLSTATE: 42K09

Cannot process input data types for the expression: <expression>.

For more details see COMPLEX_EXPRESSION_UNSUPPORTED_INPUT

CONCURRENT_QUERY

SQLSTATE: 0A000

Another instance of this query [id: <queryId>] was just started by a concurrent session [existing runId: <existingQueryRunId> new runId: <newQueryRunId>].

CONCURRENT_STREAM_LOG_UPDATE

SQLSTATE: 40000

Concurrent update to the log. Multiple streaming jobs detected for <batchId>.

Please make sure only one streaming job runs on a specific checkpoint location at a time.

CONFIG_NOT_AVAILABLE

SQLSTATE: 42K0I

Configuration <config> is not available.

CONFLICTING_DIRECTORY_STRUCTURES

SQLSTATE: KD009

Conflicting directory structures detected.

Suspicious paths:

<discoveredBasePaths>

If provided paths are partition directories, please set “basePath” in the options of the data source to specify the root directory of the table.

If there are multiple root directories, please load them separately and then union them.
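
A sketch of the basePath option, assuming an active spark session; the paths are hypothetical:

# Hypothetical partitioned layout: /data/events/year=2024/..., /data/events/year=2025/...
df = (spark.read
      .option("basePath", "/data/events")  # declare the root of the partitioned table
      .parquet("/data/events/year=2024", "/data/events/year=2025"))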

CONFLICTING_PARTITION_COLUMN_NAMES

SQLSTATE: KD009

Conflicting partition column names detected:

<distinctPartColLists>

For partitioned table directories, data files should only live in leaf directories.

And directories at the same level should have the same partition column name.

Please check the following directories for unexpected files or inconsistent partition column names:

<suspiciousPaths>

CONFLICTING_PROVIDER

SQLSTATE: 22023

The specified provider <provider> is inconsistent with the existing catalog provider <expectedProvider>. Please use ‘USING <expectedProvider>’ and retry the command.

CONNECT

SQLSTATE: 56K00

Generic Spark Connect error.

For more details see CONNECT

CONNECTION_ALREADY_EXISTS

SQLSTATE: 42000

Cannot create connection <connectionName> because it already exists.

Choose a different name, drop or replace the existing connection, or add the IF NOT EXISTS clause to tolerate pre-existing connections.
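
For example, a sketch of tolerating a pre-existing connection with IF NOT EXISTS; the connection name and options are hypothetical, and an active spark session with Unity Catalog is assumed:

spark.sql("""
CREATE CONNECTION IF NOT EXISTS my_postgres  -- hypothetical name
TYPE postgresql
OPTIONS (
  host 'db.example.com',  -- hypothetical host
  port '5432',
  user 'reader',
  password 'REDACTED'
)
""")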

CONNECTION_NAME_CANNOT_BE_EMPTY

SQLSTATE: 42000

Cannot execute this command because the connection name must be non-empty.

CONNECTION_NOT_FOUND

SQLSTATE: 42000

Cannot execute this command because the connection name <connectionName> was not found.

CONNECTION_OPTION_NOT_SUPPORTED

SQLSTATE: 42000

Connections of type ‘<connectionType>’ do not support the following option(s): <optionsNotSupported>. Supported options: <allowedOptions>.

CONNECTION_TYPE_NOT_SUPPORTED

SQLSTATE: 42000

Cannot create connection of type ‘<connectionType>’. Supported connection types: <allowedTypes>.

CONSTRAINTS_REQUIRE_UNITY_CATALOG

SQLSTATE: 0A000

Table constraints are only supported in Unity Catalog.

CONVERSION_INVALID_INPUT

SQLSTATE: 22018

The value <str> (<fmt>) cannot be converted to <targetType> because it is malformed. Correct the value as per the syntax, or change its format. Use <suggestion> to tolerate malformed input and return NULL instead.
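
As one example of such a <suggestion>, a sketch using try_to_number, assuming an active spark session:

spark.sql("SELECT try_to_number('$1,234', '$9,999') AS amount").show()  # returns 1234
spark.sql("SELECT try_to_number('oops', '$9,999') AS amount").show()    # returns NULL instead of failing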

COPY_INTO_COLUMN_ARITY_MISMATCH

SQLSTATE: 21S01

Cannot write to <tableName>, the reason is

For more details see COPY_INTO_COLUMN_ARITY_MISMATCH

COPY_INTO_CREDENTIALS_NOT_ALLOWED_ON

SQLSTATE: 0A000

Invalid scheme <scheme>. COPY INTO source credentials currently only supports s3/s3n/s3a/wasbs/abfss.

COPY_INTO_CREDENTIALS_REQUIRED

SQLSTATE: 42601

COPY INTO source credentials must specify <keyList>.

COPY_INTO_DUPLICATED_FILES_COPY_NOT_ALLOWED

SQLSTATE: 25000

Duplicated files were committed in a concurrent COPY INTO operation. Please try again later.

COPY_INTO_ENCRYPTION_NOT_ALLOWED_ON

SQLSTATE: 0A000

Invalid scheme <scheme>. COPY INTO source encryption currently only supports s3/s3n/s3a/abfss.

COPY_INTO_ENCRYPTION_NOT_SUPPORTED_FOR_AZURE

SQLSTATE: 0A000

COPY INTO encryption only supports ADLS Gen2, or the abfss:// file scheme.

COPY_INTO_ENCRYPTION_REQUIRED

SQLSTATE: 42601

COPY INTO source encryption must specify ‘<key>’.

COPY_INTO_ENCRYPTION_REQUIRED_WITH_EXPECTED

SQLSTATE: 42601

Invalid encryption option <requiredKey>. COPY INTO source encryption must specify ‘<requiredKey>’ = ‘<keyValue>’.

COPY_INTO_FEATURE_INCOMPATIBLE_SETTING

SQLSTATE: 42613

The COPY INTO feature ‘<feature>’ is not compatible with ‘<incompatibleSetting>’.

COPY_INTO_NON_BLIND_APPEND_NOT_ALLOWED

SQLSTATE: 25000

COPY INTO other than appending data is not allowed to run concurrently with other transactions. Please try again later.

COPY_INTO_ROCKSDB_MAX_RETRY_EXCEEDED

SQLSTATE: 25000

COPY INTO failed to load its state, maximum retries exceeded.

COPY_INTO_SCHEMA_MISMATCH_WITH_TARGET_TABLE

SQLSTATE: 42KDG

A schema mismatch was detected while copying into the Delta table (Table: <table>).

This may indicate an issue with the incoming data, or the Delta table schema can be evolved automatically according to the incoming data by setting:

COPY_OPTIONS (‘mergeSchema’ = ‘true’)

Schema difference:

<schemaDiff>
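
A sketch of opting into automatic schema evolution for the copy; the table name and source path are hypothetical, and an active spark session is assumed:

spark.sql("""
COPY INTO main.default.sales                -- hypothetical target Delta table
FROM '/Volumes/main/default/landing/sales'  -- hypothetical source location
FILEFORMAT = PARQUET
COPY_OPTIONS ('mergeSchema' = 'true')
""")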

COPY_INTO_SOURCE_FILE_FORMAT_NOT_SUPPORTED

SQLSTATE: 0A000

The format of the source files must be one of CSV, JSON, AVRO, ORC, PARQUET, TEXT, or BINARYFILE. Using COPY INTO on Delta tables as the source is not supported as duplicate data may be ingested after OPTIMIZE operations. This check can be turned off by running the SQL command set spark.databricks.delta.copyInto.formatCheck.enabled = false.

COPY_INTO_SOURCE_SCHEMA_INFERENCE_FAILED

SQLSTATE: 42KD9

The source directory did not contain any parsable files of type <format>. Please check the contents of ‘<source>’.

The error can be silenced by setting ‘<config>’ to ‘false’.

COPY_INTO_STATE_INTERNAL_ERROR

SQLSTATE: 55019

An internal error occurred while processing COPY INTO state.

For more details see COPY_INTO_STATE_INTERNAL_ERROR

COPY_INTO_SYNTAX_ERROR

SQLSTATE: 42601

Failed to parse the COPY INTO command.

For more details see COPY_INTO_SYNTAX_ERROR

COPY_INTO_UNSUPPORTED_FEATURE

SQLSTATE: 0A000

The COPY INTO feature ‘<feature>’ is not supported.

COPY_UNLOAD_FORMAT_TYPE_NOT_SUPPORTED

SQLSTATE: 42000

Cannot unload data in format ‘<formatType>’. Supported formats for <connectionType> are: <allowedFormats>.

CREATE_FOREIGN_SCHEMA_NOT_IMPLEMENTED_YET

SQLSTATE: 42000

The CREATE FOREIGN SCHEMA command is not implemented yet

CREATE_FOREIGN_TABLE_NOT_IMPLEMENTED_YET

SQLSTATE: 42000

The CREATE FOREIGN TABLE command is not implemented yet

CREATE_OR_REFRESH_MV_ST_ASYNC

SQLSTATE: 0A000

Cannot CREATE OR REFRESH materialized views or streaming tables with ASYNC specified. Please remove ASYNC from the CREATE OR REFRESH statement or use REFRESH ASYNC to refresh existing materialized views or streaming tables asynchronously.

CREATE_PERMANENT_VIEW_WITHOUT_ALIAS

SQLSTATE: 0A000

Not allowed to create the permanent view <name> without explicitly assigning an alias for the expression <attr>.

CREATE_TABLE_COLUMN_DESCRIPTOR_DUPLICATE

SQLSTATE: 42710

CREATE TABLE column <columnName> specifies descriptor “<optionName>” more than once, which is invalid.

CREATE_VIEW_COLUMN_ARITY_MISMATCH

SQLSTATE: 21S01

Cannot create view <viewName>, the reason is

For more details see CREATE_VIEW_COLUMN_ARITY_MISMATCH

CREDENTIAL_MISSING

SQLSTATE: 42601

Please provide credentials when creating or updating external locations.

CSV_ENFORCE_SCHEMA_NOT_SUPPORTED

SQLSTATE: 0A000

The CSV option enforceSchema cannot be set when using rescuedDataColumn or failOnUnknownFields, as columns are read by name rather than ordinal.

CYCLIC_FUNCTION_REFERENCE

SQLSTATE: 42887

Cyclic function reference detected: <path>.

DATABRICKS_DELTA_NOT_ENABLED

SQLSTATE: 56038

Databricks Delta is not enabled in your account.<hints>

DATATYPE_MISMATCH

SQLSTATE: 42K09

Cannot resolve <sqlExpr> due to data type mismatch:

For more details see DATATYPE_MISMATCH

DATATYPE_MISSING_SIZE

SQLSTATE: 42K01

DataType <type> requires a length parameter, for example <type>(10). Please specify the length.

DATA_LINEAGE_SECURE_VIEW_LEAF_NODE_HAS_NO_RELATION

SQLSTATE: 25000

Write Lineage unsuccessful: missing corresponding relation with policies for CLM/RLS.

DATA_SOURCE_ALREADY_EXISTS

SQLSTATE: 42710

Data source ‘<provider>’ already exists. Please choose a different name for the new data source.

DATA_SOURCE_EXTERNAL_ERROR

SQLSTATE: KD010

Encountered error when saving to external data source.

DATA_SOURCE_NOT_EXIST

SQLSTATE: 42704

Data source ‘<provider>’ not found. Please make sure the data source is registered.

DATA_SOURCE_NOT_FOUND

SQLSTATE: 42K02

Failed to find the data source: <provider>. Make sure the provider name is correct and the package is properly registered and compatible with your Spark version.

DATA_SOURCE_OPTION_CONTAINS_INVALID_CHARACTERS

SQLSTATE: 42602

Option <option> must not be empty and should not contain invalid characters, query strings, or parameters.

DATA_SOURCE_OPTION_IS_REQUIRED

SQLSTATE: 42601

Option <option> is required.

DATA_SOURCE_TABLE_SCHEMA_MISMATCH

SQLSTATE: 42K03

The schema of the data source table does not match the expected schema. If you are using the DataFrameReader.schema API or creating a table, avoid specifying the schema.

Data Source schema: <dsSchema>

Expected schema: <expectedSchema>

DATA_SOURCE_URL_NOT_ALLOWED

SQLSTATE: 42KDB

JDBC URL is not allowed in data source options, please specify ‘host’, ‘port’, and ‘database’ options instead.

DATETIME_FIELD_OUT_OF_BOUNDS

SQLSTATE: 22023

<rangeMessage>. If necessary set <ansiConfig> to “false” to bypass this error.

DATETIME_OVERFLOW

SQLSTATE: 22008

Datetime operation overflow: <operation>.

DC_API_QUOTA_EXCEEDED

SQLSTATE: KD000

You have exceeded the API quota for the data source <sourceName>.

For more details see DC_API_QUOTA_EXCEEDED

DC_CONNECTION_ERROR

SQLSTATE: KD000

Failed to make a connection to the <sourceName> source. Error code: <errorCode>.

For more details see DC_CONNECTION_ERROR

DC_DYNAMICS_API_ERROR

SQLSTATE: KD000

Error happened in Dynamics API calls, errorCode: <errorCode>.

For more details see DC_DYNAMICS_API_ERROR

DC_NETSUITE_ERROR

SQLSTATE: KD000

Error happened in Netsuite JDBC calls, errorCode: <errorCode>.

For more details see DC_NETSUITE_ERROR

DC_SCHEMA_CHANGE_ERROR

SQLSTATE: none assigned

A schema change has occurred in table <tableName> of the <sourceName> source.

For more details see DC_SCHEMA_CHANGE_ERROR

DC_SERVICENOW_API_ERROR

SQLSTATE: KD000

Error happened in ServiceNow API calls, errorCode: <errorCode>.

For more details see DC_SERVICENOW_API_ERROR

DC_SFDC_BULK_QUERY_JOB_INCOMPLETE

SQLSTATE: KD000

Ingestion for object <objName> is incomplete because the Salesforce API query job took too long, failed, or was manually cancelled.

To try again, you can either re-run the entire pipeline or refresh this specific destination table. If the error persists, file a ticket. Job ID: <jobId>. Job status: <jobStatus>.

DC_SHAREPOINT_API_ERROR

SQLSTATE: KD000

Error happened in Sharepoint API calls, errorCode: <errorCode>.

For more details see DC_SHAREPOINT_API_ERROR

DC_SOURCE_API_ERROR

SQLSTATE: KD000

An error occurred in the <sourceName> API call. Source API type: <apiType>. Error code: <errorCode>.

This can sometimes happen when you’ve reached a <sourceName> API limit. If you haven’t exceeded your API limit, try re-running the connector. If the issue persists, please file a ticket.

DC_UNSUPPORTED_ERROR

SQLSTATE: 0A000

Unsupported error happened in data source <sourceName>.

For more details see DC_UNSUPPORTED_ERROR

DC_WORKDAY_RAAS_API_ERROR

SQLSTATE: KD000

Error happened in Workday RAAS API calls, errorCode: <errorCode>.

For more details see DC_WORKDAY_RAAS_API_ERROR

DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION

SQLSTATE: 22003

Decimal precision <precision> exceeds max precision <maxPrecision>.

DEFAULT_DATABASE_NOT_EXISTS

SQLSTATE: 42704

Default database <defaultDatabase> does not exist, please create it first or change default database to <defaultDatabase>.

DEFAULT_FILE_NOT_FOUND

SQLSTATE: 42K03

It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running ‘REFRESH TABLE tableName’ command in SQL or by recreating the Dataset/DataFrame involved. If disk cache is stale or the underlying files have been removed, you can invalidate disk cache manually by restarting the cluster.

DEFAULT_PLACEMENT_INVALID

SQLSTATE: 42608

A DEFAULT keyword in a MERGE, INSERT, UPDATE, or SET VARIABLE command could not be directly assigned to a target column because it was part of an expression.

For example: UPDATE SET c1 = DEFAULT is allowed, but UPDATE T SET c1 = DEFAULT + 1 is not allowed.

DEFAULT_UNSUPPORTED

SQLSTATE: 42623

Failed to execute <statementType> command because DEFAULT values are not supported for target data source with table provider: “<dataSource>”.

DIFFERENT_DELTA_TABLE_READ_BY_STREAMING_SOURCE

SQLSTATE: 55019

The streaming query was reading from an unexpected Delta table (id = ‘<newTableId>’).

It used to read from another Delta table (id = ‘<oldTableId>’) according to checkpoint.

This may happen when you changed the code to read from a new table or you deleted and re-created a table. Please revert your change or delete your streaming query checkpoint to restart from scratch.

DISTINCT_WINDOW_FUNCTION_UNSUPPORTED

SQLSTATE: 0A000

Distinct window functions are not supported: <windowExpr>.

DIVIDE_BY_ZERO

SQLSTATE: 22012

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead. If necessary set <config> to “false” to bypass this error.

For more details see DIVIDE_BY_ZERO
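
A sketch of the try_divide alternative, assuming an active spark session:

# Raises DIVIDE_BY_ZERO under ANSI mode:
# spark.sql("SELECT 10 / 0").collect()

# Returns NULL instead of failing:
spark.sql("SELECT try_divide(10, 0) AS quotient").show()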

DLT_EXPECTATIONS_NOT_SUPPORTED

SQLSTATE: 56038

Expectations are only supported within a Delta Live Tables pipeline.

DLT_VIEW_CLUSTER_BY_NOT_SUPPORTED

SQLSTATE: 56038

MATERIALIZED VIEWs with a CLUSTER BY clause are supported only in a Delta Live Tables pipeline.

DLT_VIEW_LOCATION_NOT_SUPPORTED

SQLSTATE: 56038

<mv> locations are supported only in a Delta Live Tables pipeline.

DLT_VIEW_SCHEMA_WITH_TYPE_NOT_SUPPORTED

SQLSTATE: 56038

<mv> schemas with a specified type are supported only in a Delta Live Tables pipeline.

DLT_VIEW_TABLE_CONSTRAINTS_NOT_SUPPORTED

SQLSTATE: 56038

CONSTRAINT clauses in a view are only supported in a Delta Live Tables pipeline.

DROP_SCHEDULE_DOES_NOT_EXIST

SQLSTATE: 42000

Cannot drop SCHEDULE on a table without an existing schedule or trigger.

DUPLICATED_CTE_NAMES

SQLSTATE: 42602

CTE definition can’t have duplicate names: <duplicateNames>.

DUPLICATED_FIELD_NAME_IN_ARROW_STRUCT

SQLSTATE: 42713

Duplicated field names in Arrow Struct are not allowed, got <fieldNames>.

DUPLICATED_MAP_KEY

SQLSTATE: 23505

Duplicate map key <key> was found, please check the input data.

If you want to remove the duplicated keys, you can set <mapKeyDedupPolicy> to “LAST_WIN” so that the key inserted at last takes precedence.
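
A sketch of the LAST_WIN policy, where <mapKeyDedupPolicy> is the spark.sql.mapKeyDedupPolicy configuration, assuming an active spark session:

# The default policy (EXCEPTION) raises DUPLICATED_MAP_KEY for map(1, 'a', 1, 'b').
spark.conf.set("spark.sql.mapKeyDedupPolicy", "LAST_WIN")
spark.sql("SELECT map(1, 'a', 1, 'b') AS m").show()  # {1 -> b}: the last key wins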

DUPLICATED_METRICS_NAME

SQLSTATE: 42710

The metric name is not unique: <metricName>. The same name cannot be used for metrics with different results.

However, multiple instances of metrics with the same result and name are allowed (e.g. self-joins).

DUPLICATE_ASSIGNMENTS

SQLSTATE: 42701

The columns or variables <nameList> appear more than once as assignment targets.

DUPLICATE_CLAUSES

SQLSTATE: 42614

Found duplicate clauses: <clauseName>. Please remove one of them.

DUPLICATE_KEY

SQLSTATE: 23505

Found duplicate keys <keyColumn>.

DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT

SQLSTATE: 4274K

Call to routine <routineName> is invalid because it includes multiple argument assignments to the same parameter name <parameterName>.

For more details see DUPLICATE_ROUTINE_PARAMETER_ASSIGNMENT

DUPLICATE_ROUTINE_PARAMETER_NAMES

SQLSTATE: 42734

Found duplicate name(s) in the parameter list of the user-defined routine <routineName>: <names>.

DUPLICATE_ROUTINE_RETURNS_COLUMNS

SQLSTATE: 42711

Found duplicate column(s) in the RETURNS clause column list of the user-defined routine <routineName>: <columns>.

EMITTING_ROWS_OLDER_THAN_WATERMARK_NOT_ALLOWED

SQLSTATE: 42815

Previous node emitted a row with eventTime=<emittedRowEventTime> which is older than current_watermark_value=<currentWatermark>

This can lead to correctness issues in the stateful operators downstream in the execution pipeline.

Please correct the operator logic to emit rows after current global watermark value.

EMPTY_JSON_FIELD_VALUE

SQLSTATE: 42604

Failed to parse an empty string for data type <dataType>.

EMPTY_LOCAL_FILE_IN_STAGING_ACCESS_QUERY

SQLSTATE: 22023

Empty local file in staging <operation> query

EMPTY_SCHEMA_NOT_SUPPORTED_FOR_DATASOURCE

SQLSTATE: 0A000

The <format> datasource does not support writing empty or nested empty schemas. Please make sure the data schema has at least one or more column(s).

ENCODER_NOT_FOUND

SQLSTATE: 42704

Could not find an encoder of the type <typeName> to Spark SQL internal representation.

Consider changing the input type to one of the types supported at ‘<docroot>/sql-ref-datatypes.html’.

END_LABEL_WITHOUT_BEGIN_LABEL

SQLSTATE: 42K0L

End label <endLabel> cannot exist without a begin label.

END_OFFSET_HAS_GREATER_OFFSET_FOR_TOPIC_PARTITION_THAN_LATEST_WITH_TRIGGER_AVAILABLENOW

SQLSTATE: KD000

Some partitions in Kafka topic(s) report an available offset which is less than the end offset while running a query with Trigger.AvailableNow. The error could be transient - restart your query, and report if you still see the same issue.

latest offset: <latestOffset>, end offset: <endOffset>

END_OFFSET_HAS_GREATER_OFFSET_FOR_TOPIC_PARTITION_THAN_PREFETCHED

SQLSTATE: KD000

For a Kafka data source with Trigger.AvailableNow, the end offset for each topic partition should be lower than or equal to the pre-fetched offset. The error could be transient - restart your query, and report if you still see the same issue.

pre-fetched offset: <prefetchedOffset>, end offset: <endOffset>.

ERROR_READING_AVRO_UNKNOWN_FINGERPRINT

SQLSTATE: KD00B

Error reading Avro data – encountered an unknown fingerprint: <fingerprint>, not sure what schema to use.

This could happen if you registered additional schemas after starting your Spark context.

EVENT_LOG_REQUIRES_SHARED_COMPUTE

SQLSTATE: 42601

Cannot query event logs from an Assigned or No Isolation Shared cluster, please use a Shared cluster or a Databricks SQL warehouse instead.

EVENT_LOG_UNAVAILABLE

SQLSTATE: 55019

No event logs available for <tableOrPipeline>. Please try again later after events are generated

EVENT_LOG_UNSUPPORTED_TABLE_TYPE

SQLSTATE: 42832

The table type of <tableIdentifier> is <tableType>.

Querying event logs only supports materialized views, streaming tables, or Delta Live Tables pipelines

EVENT_TIME_IS_NOT_ON_TIMESTAMP_TYPE

SQLSTATE: 42K09

The event time <eventName> has the invalid type <eventType>, but expected “TIMESTAMP”.

EXCEED_LIMIT_LENGTH

SQLSTATE: 54006

Exceeds char/varchar type length limitation: <limit>.

EXCEPT_NESTED_COLUMN_INVALID_TYPE

SQLSTATE: 428H2

EXCEPT column <columnName> was resolved and expected to be StructType, but found type <dataType>.

EXCEPT_OVERLAPPING_COLUMNS

SQLSTATE: 42702

Columns in an EXCEPT list must be distinct and non-overlapping, but got (<columns>).

EXCEPT_RESOLVED_COLUMNS_WITHOUT_MATCH

SQLSTATE: 42703

EXCEPT columns [<exceptColumns>] were resolved, but do not match any of the columns [<expandedColumns>] from the star expansion.

EXCEPT_UNRESOLVED_COLUMN_IN_STRUCT_EXPANSION

SQLSTATE: 42703

The column/field name <objectName> in the EXCEPT clause cannot be resolved. Did you mean one of the following: [<objectList>]?

Note: nested columns in the EXCEPT clause may not include qualifiers (table name, parent struct column name, etc.) during a struct expansion; try removing qualifiers if they are used with nested columns.

EXECUTOR_BROADCAST_JOIN_OOM

SQLSTATE: 53200

There is not enough memory to build the broadcast relation <relationClassName>. Relation Size = <relationSize>. Total memory used by this task = <taskMemoryUsage>. Executor Memory Manager Metrics: onHeapExecutionMemoryUsed = <onHeapExecutionMemoryUsed>, offHeapExecutionMemoryUsed = <offHeapExecutionMemoryUsed>, onHeapStorageMemoryUsed = <onHeapStorageMemoryUsed>, offHeapStorageMemoryUsed = <offHeapStorageMemoryUsed>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using ‘set spark.sql.autoBroadcastJoinThreshold=-1’ or using join hint to force shuffle join.

EXECUTOR_BROADCAST_JOIN_STORE_OOM

SQLSTATE: 53200

There is not enough memory to store the broadcast relation <relationClassName>. Relation Size = <relationSize>. StorageLevel = <storageLevel>. [sparkPlanId: <sparkPlanId>] Disable broadcasts for this query using ‘set spark.sql.autoBroadcastJoinThreshold=-1’ or using join hint to force shuffle join.

EXEC_IMMEDIATE_DUPLICATE_ARGUMENT_ALIASES

SQLSTATE: 42701

The USING clause of this EXECUTE IMMEDIATE command contained multiple arguments with same alias (<aliases>), which is invalid; please update the command to specify unique aliases and then try it again.

EXPECT_PERMANENT_VIEW_NOT_TEMP

SQLSTATE: 42809

‘<operation>’ expects a permanent view but <viewName> is a temp view.

EXPECT_TABLE_NOT_VIEW

SQLSTATE: 42809

‘<operation>’ expects a table but <viewName> is a view.

For more details see EXPECT_TABLE_NOT_VIEW

EXPECT_VIEW_NOT_TABLE

SQLSTATE: 42809

The table <tableName> does not support <operation>.

For more details see EXPECT_VIEW_NOT_TABLE

EXPRESSION_DECODING_FAILED

SQLSTATE: 42846

Failed to decode a row to a value of the expressions: <expressions>.

EXPRESSION_ENCODING_FAILED

SQLSTATE: 42846

Failed to encode a value of the expressions: <expressions> to a row.

EXPRESSION_TYPE_IS_NOT_ORDERABLE

SQLSTATE: 42822

Column expression <expr> cannot be sorted because its type <exprType> is not orderable.

EXTERNAL_TABLE_INVALID_SCHEME

SQLSTATE: 0A000

External tables don’t support the <scheme> scheme.

FABRIC_REFRESH_INVALID_SCOPE

SQLSTATE: 0A000

Error running ‘REFRESH FOREIGN <scope> <name>’. Cannot refresh a Fabric <scope> directly, please use ‘REFRESH FOREIGN CATALOG <catalogName>’ to refresh the Fabric Catalog instead.

FAILED_EXECUTE_UDF

SQLSTATE: 39000

User defined function (<functionName>: (<signature>) => <result>) failed due to: <reason>.

FAILED_FUNCTION_CALL

SQLSTATE: 38000

Failed preparing the function <funcName> for the call. Please double-check the function’s arguments.

FAILED_JDBC

SQLSTATE: HV000

Failed JDBC <url> on the operation:

For more details see FAILED_JDBC

FAILED_PARSE_STRUCT_TYPE

SQLSTATE: 22018

Failed parsing struct: <raw>.

FAILED_READ_FILE

SQLSTATE: KD001

Error while reading file <path>.

For more details see FAILED_READ_FILE

FAILED_REGISTER_CLASS_WITH_KRYO

SQLSTATE: KD000

Failed to register classes with Kryo.

FAILED_RENAME_PATH

SQLSTATE: 42K04

Failed to rename <sourcePath> to <targetPath> as destination already exists.

FAILED_RENAME_TEMP_FILE

SQLSTATE: 58030

Failed to rename temp file <srcPath> to <dstPath> as FileSystem.rename returned false.

FAILED_ROW_TO_JSON

SQLSTATE: 2203G

Failed to convert the row value <value> of the class <class> to the target SQL type <sqlType> in the JSON format.

FAILED_TO_LOAD_ROUTINE

SQLSTATE: 38000

Failed to load routine <routineName>.

FAILED_TO_PARSE_TOO_COMPLEX

SQLSTATE: 54001

The statement, including potential SQL functions and referenced views, was too complex to parse.

To mitigate this error, divide the statement into multiple, less complex chunks.

FEATURE_NOT_ENABLED

SQLSTATE: 56038

The feature <featureName> is not enabled. Consider setting the config <configKey> to <configValue> to enable this capability.

FEATURE_NOT_ON_CLASSIC_WAREHOUSE

SQLSTATE: 56038

<feature> is not supported on Classic SQL warehouses. To use this feature, use a Pro or Serverless SQL warehouse.

FEATURE_REQUIRES_UC

SQLSTATE: 0AKUD

<feature> is not supported without Unity Catalog. To use this feature, enable Unity Catalog.

FEATURE_UNAVAILABLE

SQLSTATE: 56038

<feature> is not supported in your environment. To use this feature, please contact Databricks Support.

FIELD_ALREADY_EXISTS

SQLSTATE: 42710

Cannot <op> column, because <fieldNames> already exists in <struct>.

FIELD_NOT_FOUND

SQLSTATE: 42704

No such struct field <fieldName> in <fields>.

FILE_IN_STAGING_PATH_ALREADY_EXISTS

SQLSTATE: 42K04

File in staging path <path> already exists but OVERWRITE is not set

FLATMAPGROUPSWITHSTATE_USER_FUNCTION_ERROR

SQLSTATE: 39000

An error occurred in the user provided function in flatMapGroupsWithState. Reason: <reason>

FORBIDDEN_OPERATION

SQLSTATE: 42809

The operation <statement> is not allowed on the <objectType>: <objectName>.

FOREACH_BATCH_USER_FUNCTION_ERROR

SQLSTATE: 39000

An error occurred in the user provided function in foreach batch sink. Reason: <reason>

FOREACH_USER_FUNCTION_ERROR

SQLSTATE: 39000

An error occurred in the user provided function in foreach sink. Reason: <reason>

FOREIGN_KEY_MISMATCH

SQLSTATE: 42830

Foreign key parent columns <parentColumns> do not match primary key child columns <childColumns>.

FOREIGN_OBJECT_NAME_CANNOT_BE_EMPTY

SQLSTATE: 42000

Cannot execute this command because the foreign <objectType> name must be non-empty.

FOUND_MULTIPLE_DATA_SOURCES

SQLSTATE: 42710

Detected multiple data sources with the name ‘<provider>’. Please check the data source isn’t simultaneously registered and located in the classpath.

FROM_JSON_CONFLICTING_SCHEMA_UPDATES

SQLSTATE: 42601

from_json inference encountered conflicting schema updates at: <location>

FROM_JSON_CORRUPT_RECORD_COLUMN_IN_SCHEMA

SQLSTATE: 42601

from_json found columnNameOfCorruptRecord (<columnNameOfCorruptRecord>) present in a JSON object and can no longer proceed. Please configure a different value for the option ‘columnNameOfCorruptRecord’.

FROM_JSON_CORRUPT_SCHEMA

SQLSTATE: 42601

from_json inference could not read the schema stored at: <location>

FROM_JSON_INFERENCE_FAILED

SQLSTATE: 42601

from_json was unable to infer the schema. Please provide one instead.

FROM_JSON_INFERENCE_NOT_SUPPORTED

SQLSTATE: 0A000

from_json inference is only supported when defining streaming tables

FROM_JSON_INVALID_CONFIGURATION

SQLSTATE: 42601

from_json configuration is invalid:

For more details see FROM_JSON_INVALID_CONFIGURATION

FROM_JSON_SCHEMA_EVOLUTION_FAILED

SQLSTATE: 22KD3

from_json could not evolve from <old> to <new>

FUNCTION_PARAMETERS_MUST_BE_NAMED

SQLSTATE: 07001

The function <function> requires named parameters. Parameters missing names: <exprs>. Please update the function call to add names for all parameters, e.g., <function>(param_name => …).

GENERATED_COLUMN_WITH_DEFAULT_VALUE

SQLSTATE: 42623

A column cannot have both a default value and a generation expression but column <colName> has default value: (<defaultValue>) and generation expression: (<genExpr>).

GET_TABLES_BY_TYPE_UNSUPPORTED_BY_HIVE_VERSION

SQLSTATE: 56038

Hive 2.2 and lower versions don’t support getTablesByType. Please use Hive 2.3 or a higher version.

GET_WARMUP_TRACING_FAILED

SQLSTATE: 42601

Failed to get warmup tracing. Cause: <cause>.

GET_WARMUP_TRACING_FUNCTION_NOT_ALLOWED

SQLSTATE: 42601

Function get_warmup_tracing() not allowed.

GRAPHITE_SINK_INVALID_PROTOCOL

SQLSTATE: KD000

Invalid Graphite protocol: <protocol>.

GRAPHITE_SINK_PROPERTY_MISSING

SQLSTATE: KD000

Graphite sink requires ‘<property>’ property.

GROUPING_COLUMN_MISMATCH

SQLSTATE: 42803

Column of grouping (<grouping>) can’t be found in grouping columns <groupingColumns>.

GROUPING_ID_COLUMN_MISMATCH

SQLSTATE: 42803

Columns of grouping_id (<groupingIdColumn>) do not match grouping columns (<groupByColumns>).

GROUPING_SIZE_LIMIT_EXCEEDED

SQLSTATE: 54000

Grouping sets size cannot be greater than <maxSize>.

GROUP_BY_AGGREGATE

SQLSTATE: 42903

Aggregate functions are not allowed in GROUP BY, but found <sqlExpr>.

For more details see GROUP_BY_AGGREGATE

GROUP_BY_POS_AGGREGATE

SQLSTATE: 42903

GROUP BY <index> refers to an expression <aggExpr> that contains an aggregate function. Aggregate functions are not allowed in GROUP BY.

GROUP_BY_POS_OUT_OF_RANGE

SQLSTATE: 42805

GROUP BY position <index> is not in select list (valid range is [1, <size>]).

GROUP_EXPRESSION_TYPE_IS_NOT_ORDERABLE

SQLSTATE: 42822

The expression <sqlExpr> cannot be used as a grouping expression because its data type <dataType> is not an orderable data type.

HDFS_HTTP_ERROR

SQLSTATE: KD00F

When attempting to read from HDFS, the HTTP request failed.

For more details see HDFS_HTTP_ERROR

HLL_INVALID_INPUT_SKETCH_BUFFER

SQLSTATE: 22546

Invalid call to <function>; only valid HLL sketch buffers are supported as inputs (such as those produced by the hll_sketch_agg function).

HLL_INVALID_LG_K

SQLSTATE: 22546

Invalid call to <function>; the lgConfigK value must be between <min> and <max>, inclusive: <value>.

HLL_UNION_DIFFERENT_LG_K

SQLSTATE: 22000

Sketches have different lgConfigK values: <left> and <right>. Set the allowDifferentLgConfigK parameter to true to call <function> with different lgConfigK values.
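
Where it helps, a minimal hedged sketch (the table events and its columns are hypothetical): passing true as the third argument of hll_union tolerates sketches built with different lgConfigK values.

-- Merge two sketches built with different lgConfigK values (12 and 14).
SELECT hll_sketch_estimate(
         hll_union(
           hll_sketch_agg(user_id, 12),
           hll_sketch_agg(device_id, 14),
           true)) AS approx_distinct  -- true = allowDifferentLgConfigK
FROM events;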

HYBRID_ANALYZER_EXCEPTION

SQLSTATE: 0A000

A failure occurred when attempting to resolve a query or command with both the legacy fixed-point analyzer and the single-pass resolver.

For more details see HYBRID_ANALYZER_EXCEPTION

IDENTIFIER_TOO_MANY_NAME_PARTS

SQLSTATE: 42601

<identifier> is not a valid identifier as it has more than 2 name parts.

IDENTITY_COLUMNS_DUPLICATED_SEQUENCE_GENERATOR_OPTION

SQLSTATE: 42601

Duplicated IDENTITY column sequence generator option: <sequenceGeneratorOption>.

IDENTITY_COLUMNS_ILLEGAL_STEP

SQLSTATE: 42611

IDENTITY column step cannot be 0.

IDENTITY_COLUMNS_UNSUPPORTED_DATA_TYPE

SQLSTATE: 428H2

DataType <dataType> is not supported for IDENTITY columns.

ILLEGAL_DAY_OF_WEEK

SQLSTATE: 22009

Illegal input for day of week: <string>.

ILLEGAL_STATE_STORE_VALUE

SQLSTATE: 42601

Illegal value provided to the State Store

For more details see ILLEGAL_STATE_STORE_VALUE

INAPPROPRIATE_URI_SCHEME_OF_CONNECTION_OPTION

SQLSTATE: 42616

Connection can’t be created due to inappropriate scheme of URI <uri> provided for the connection option ‘<option>’.

Allowed scheme(s): <allowedSchemes>.

Please add a scheme if it is not present in the URI, or specify a scheme from the allowed values.

INCOMPARABLE_PIVOT_COLUMN

SQLSTATE: 42818

Invalid pivot column <columnName>. Pivot columns must be comparable.

INCOMPATIBLE_COLUMN_TYPE

SQLSTATE: 42825

<operator> can only be performed on tables with compatible column types. The <columnOrdinalNumber> column of the <tableOrdinalNumber> table is <dataType1> type which is not compatible with <dataType2> at the same column of the first table.<hint>.

INCOMPATIBLE_DATASOURCE_REGISTER

SQLSTATE: 56038

Detected an incompatible DataSourceRegister. Please remove the incompatible library from classpath or upgrade it. Error: <message>

INCOMPATIBLE_DATA_FOR_TABLE

SQLSTATE: KD000

Cannot write incompatible data for the table <tableName>:

For more details see INCOMPATIBLE_DATA_FOR_TABLE

INCOMPATIBLE_JOIN_TYPES

SQLSTATE: 42613

The join types <joinType1> and <joinType2> are incompatible.

INCOMPATIBLE_VIEW_SCHEMA_CHANGE

SQLSTATE: 51024

The SQL query of view <viewName> has an incompatible schema change and column <colName> cannot be resolved. Expected <expectedNum> columns named <colName> but got <actualCols>.

Please try to re-create the view by running: <suggestion>.

INCOMPLETE_TYPE_DEFINITION

SQLSTATE: 42K01

Incomplete complex type:

For more details see INCOMPLETE_TYPE_DEFINITION

INCONSISTENT_BEHAVIOR_CROSS_VERSION

SQLSTATE: 42K0B

You may get a different result due to the upgrading to

For more details see INCONSISTENT_BEHAVIOR_CROSS_VERSION

INCORRECT_NUMBER_OF_ARGUMENTS

SQLSTATE: 42605

<failure>, <functionName> requires at least <minArgs> arguments and at most <maxArgs> arguments.

INCORRECT_RAMP_UP_RATE

SQLSTATE: 22003

Max offset with <rowsPerSecond> rowsPerSecond is <maxSeconds>, but ‘rampUpTimeSeconds’ is <rampUpTimeSeconds>.

INDETERMINATE_COLLATION

SQLSTATE: 42P22

The called function requires knowledge of the collation it should apply, but an indeterminate collation was found. Use the COLLATE function to set the collation explicitly.
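
For example (a minimal sketch; the table t and its differently collated columns c1 and c2 are hypothetical), pinning one side of the comparison with COLLATE resolves the ambiguity:

-- Force a specific collation so the comparison is well defined.
SELECT c1 = c2 COLLATE UTF8_BINARY AS is_equal
FROM t;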

INDEX_ALREADY_EXISTS

SQLSTATE: 42710

Cannot create the index <indexName> on table <tableName> because it already exists.

INDEX_NOT_FOUND

SQLSTATE: 42704

Cannot find the index <indexName> on table <tableName>.

INFINITE_STREAMING_TRIGGER_NOT_SUPPORTED

SQLSTATE: 0A000

Trigger type <trigger> is not supported for this cluster type.

Use a different trigger type, e.g., AvailableNow or Once.

INSERT_COLUMN_ARITY_MISMATCH

SQLSTATE: 21S01

Cannot write to <tableName>, the reason is

For more details see INSERT_COLUMN_ARITY_MISMATCH

INSERT_PARTITION_COLUMN_ARITY_MISMATCH

SQLSTATE: 21S01

Cannot write to ‘<tableName>’, <reason>:

Table columns: <tableColumns>.

Partition columns with static values: <staticPartCols>.

Data columns: <dataColumns>.

INSUFFICIENT_PERMISSIONS

SQLSTATE: 42501

Insufficient privileges:

<report>

INSUFFICIENT_PERMISSIONS_EXT_LOC

SQLSTATE: 42501

User <user> has insufficient privileges for external location <location>.

INSUFFICIENT_PERMISSIONS_NO_OWNER

SQLSTATE: 42501

There is no owner for <securableName>. Ask your administrator to set an owner.

INSUFFICIENT_PERMISSIONS_OWNERSHIP_SECURABLE

SQLSTATE: 42501

User does not own <securableName>.

INSUFFICIENT_PERMISSIONS_SECURABLE

SQLSTATE: 42501

User does not have permission <action> on <securableName>.

INSUFFICIENT_PERMISSIONS_SECURABLE_PARENT_OWNER

SQLSTATE: 42501

The owner of <securableName> is different from the owner of <parentSecurableName>.

INSUFFICIENT_PERMISSIONS_STORAGE_CRED

SQLSTATE: 42501

Storage credential <credentialName> has insufficient privileges.

INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES

SQLSTATE: 42501

User cannot <action> on <securableName> because of permissions on underlying securables.

INSUFFICIENT_PERMISSIONS_UNDERLYING_SECURABLES_VERBOSE

SQLSTATE: 42501

User cannot <action> on <securableName> because of permissions on underlying securables:

<underlyingReport>

INTERVAL_ARITHMETIC_OVERFLOW

SQLSTATE: 22015

Integer overflow while operating with intervals.

For more details see INTERVAL_ARITHMETIC_OVERFLOW

INTERVAL_DIVIDED_BY_ZERO

SQLSTATE: 22012

Division by zero. Use try_divide to tolerate divisor being 0 and return NULL instead.
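
A minimal illustration of the suggested workaround:

-- Returns NULL instead of raising INTERVAL_DIVIDED_BY_ZERO.
SELECT try_divide(INTERVAL '2' HOUR, 0);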

INVALID_AGGREGATE_FILTER

SQLSTATE: 42903

The FILTER expression <filterExpr> in an aggregate function is invalid.

For more details see INVALID_AGGREGATE_FILTER

INVALID_ARRAY_INDEX

SQLSTATE: 22003

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use the SQL function get() to tolerate accessing element at invalid index and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.

For more details see INVALID_ARRAY_INDEX
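
A minimal illustration of the suggested get() workaround (note that get() uses 0-based indexing):

-- Returns NULL instead of raising INVALID_ARRAY_INDEX.
SELECT get(array(1, 2, 3), 5);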

INVALID_ARRAY_INDEX_IN_ELEMENT_AT

SQLSTATE: 22003

The index <indexValue> is out of bounds. The array has <arraySize> elements. Use try_element_at to tolerate accessing element at invalid index and return NULL instead. If necessary set <ansiConfig> to “false” to bypass this error.

For more details see INVALID_ARRAY_INDEX_IN_ELEMENT_AT

INVALID_ATTRIBUTE_NAME_SYNTAX

SQLSTATE: 42601

Syntax error in the attribute name: <name>. Check that backticks appear in pairs, a quoted string is a complete name part and use a backtick only inside quoted name parts.

INVALID_BITMAP_POSITION

SQLSTATE: 22003

The 0-indexed bitmap position <bitPosition> is out of bounds. The bitmap has <bitmapNumBits> bits (<bitmapNumBytes> bytes).

INVALID_BOOLEAN_STATEMENT

SQLSTATE: 22546

Boolean statement is expected in the condition, but <invalidStatement> was found.

INVALID_BOUNDARY

SQLSTATE: 22003

The boundary <boundary> is invalid: <invalidValue>.

For more details see INVALID_BOUNDARY

INVALID_BUCKET_COLUMN_DATA_TYPE

SQLSTATE: 42601

Cannot use <type> for bucket column. Collated data types are not supported for bucketing.

INVALID_BUCKET_FILE

SQLSTATE: 58030

Invalid bucket file: <path>.

INVALID_BYTE_STRING

SQLSTATE: 22P03

The expected format is ByteString, but was <unsupported> (<class>).

INVALID_COLUMN_NAME_AS_PATH

SQLSTATE: 46121

The datasource <datasource> cannot save the column <columnName> because its name contains some characters that are not allowed in file paths. Please use an alias to rename it.

INVALID_COLUMN_OR_FIELD_DATA_TYPE

SQLSTATE: 42000

Column or field <name> is of type <type> while it’s required to be <expectedType>.

INVALID_CONF_VALUE

SQLSTATE: 22022

The value ‘<confValue>’ in the config “<confName>” is invalid.

For more details see INVALID_CONF_VALUE

INVALID_CORRUPT_RECORD_TYPE

SQLSTATE: 42804

The column <columnName> for corrupt records must have the nullable STRING type, but got <actualType>.

INVALID_CURRENT_RECIPIENT_USAGE

SQLSTATE: 42887

The current_recipient function can only be used in the CREATE VIEW statement or the ALTER VIEW statement to define a share-only view in Unity Catalog.

INVALID_CURSOR

SQLSTATE: HY109

The cursor is invalid.

For more details see INVALID_CURSOR

INVALID_DATETIME_PATTERN

SQLSTATE: 22007

Unrecognized datetime pattern: <pattern>.

For more details see INVALID_DATETIME_PATTERN

INVALID_DEFAULT_VALUE

SQLSTATE: 42623

Failed to execute <statement> command because the destination column or variable <colName> has a DEFAULT value <defaultValue>,

For more details see INVALID_DEFAULT_VALUE

INVALID_DELIMITER_VALUE

SQLSTATE: 42602

Invalid value for delimiter.

For more details see INVALID_DELIMITER_VALUE

INVALID_DEST_CATALOG

SQLSTATE: 42809

Destination catalog of the SYNC command must be within Unity Catalog. Found <catalog>.

INVALID_DRIVER_MEMORY

SQLSTATE: F0000

System memory <systemMemory> must be at least <minSystemMemory>.

Please increase heap size using the --driver-memory option or “<config>” in Spark configuration.

INVALID_DYNAMIC_OPTIONS

SQLSTATE: 42K10

Options passed <option_list> are forbidden for foreign table <table_name>.

INVALID_EMPTY_LOCATION

SQLSTATE: 42K05

The location name cannot be an empty string, but <location> was given.

INVALID_ESC

SQLSTATE: 42604

Found an invalid escape string: <invalidEscape>. The escape string must contain only one character.

INVALID_ESCAPE_CHAR

SQLSTATE: 42604

EscapeChar should be a string literal of length one, but got <sqlExpr>.

INVALID_EXECUTOR_MEMORY

SQLSTATE: F0000

Executor memory <executorMemory> must be at least <minSystemMemory>.

Please increase executor memory using the --executor-memory option or “<config>” in Spark configuration.

INVALID_EXPRESSION_ENCODER

SQLSTATE: 42001

Found an invalid expression encoder. Expects an instance of ExpressionEncoder but got <encoderType>. For more information consult ‘<docroot>/api/java/index.html?org/apache/spark/sql/Encoder.html’.

INVALID_EXTERNAL_TYPE

SQLSTATE: 42K0N

The external type <externalType> is not valid for the type <type> at the expression <expr>.

INVALID_EXTRACT_BASE_FIELD_TYPE

SQLSTATE: 42000

Can’t extract a value from <base>. Need a complex type [STRUCT, ARRAY, MAP] but got <other>.

INVALID_EXTRACT_FIELD

SQLSTATE: 42601

Cannot extract <field> from <expr>.

INVALID_EXTRACT_FIELD_TYPE

SQLSTATE: 42000

Field name should be a non-null string literal, but it’s <extraction>.

INVALID_FIELD_NAME

SQLSTATE: 42000

Field name <fieldName> is invalid: <path> is not a struct.

INVALID_FORMAT

SQLSTATE: 42601

The format is invalid: <format>.

For more details see INVALID_FORMAT

INVALID_FRACTION_OF_SECOND

SQLSTATE: 22023

Valid range for seconds is [0, 60] (inclusive), but the provided value is <secAndMicros>. To avoid this error, use try_make_timestamp, which returns NULL on error.

If you do not want to use the session default timestamp version of this function, use try_make_timestamp_ntz or try_make_timestamp_ltz.
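
A minimal sketch of the suggested workaround, assuming a runtime recent enough to include try_make_timestamp:

-- Returns NULL instead of raising INVALID_FRACTION_OF_SECOND (99 seconds is out of range).
SELECT try_make_timestamp(2024, 6, 30, 23, 59, 99);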

INVALID_HANDLE

SQLSTATE: HY000

The handle <handle> is invalid.

For more details see INVALID_HANDLE

INVALID_HTTP_REQUEST_METHOD

SQLSTATE: 22023

The input parameter: method, value: <paramValue> is not a valid parameter for http_request because it is not a valid HTTP method.

INVALID_HTTP_REQUEST_PATH

SQLSTATE: 22023

The input parameter: path, value: <paramValue> is not a valid parameter for http_request because path traversal is not allowed.

INVALID_IDENTIFIER

SQLSTATE: 42602

The unquoted identifier <ident> is invalid and must be back quoted as: <ident>.

Unquoted identifiers can only contain ASCII letters (‘a’ - ‘z’, ‘A’ - ‘Z’), digits (‘0’ - ‘9’), and underbar (‘_’).

Unquoted identifiers must also not start with a digit.

Different data sources and meta stores may impose additional restrictions on valid identifiers.
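
For example (the table name is hypothetical), an identifier containing a hyphen must be back quoted:

-- Fails: my-table is not a valid unquoted identifier.
-- SELECT * FROM my-table;
-- Works: back quote the identifier.
SELECT * FROM `my-table`;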

INVALID_INDEX_OF_ZERO

SQLSTATE: 22003

The index 0 is invalid. An index shall be either < 0 or > 0 (the first element has index 1).

INVALID_INLINE_TABLE

SQLSTATE: 42000

Invalid inline table.

For more details see INVALID_INLINE_TABLE

INVALID_INTERVAL_FORMAT

SQLSTATE: 22006

Error parsing ‘<input>’ to interval. Please ensure that the value provided is in a valid format for defining an interval. You can reference the documentation for the correct format.

For more details see INVALID_INTERVAL_FORMAT

INVALID_INTERVAL_WITH_MICROSECONDS_ADDITION

SQLSTATE: 22006

Cannot add an interval to a date because its microseconds part is not 0. If necessary set <ansiConfig> to “false” to bypass this error.

INVALID_INVERSE_DISTRIBUTION_FUNCTION

SQLSTATE: 42K0K

Invalid inverse distribution function <funcName>.

For more details see INVALID_INVERSE_DISTRIBUTION_FUNCTION

INVALID_JAVA_IDENTIFIER_AS_FIELD_NAME

SQLSTATE: 46121

<fieldName> is not a valid identifier of Java and cannot be used as field name

<walkedTypePath>.

INVALID_JOIN_TYPE_FOR_JOINWITH

SQLSTATE: 42613

Invalid join type in joinWith: <joinType>.

INVALID_JSON_DATA_TYPE

SQLSTATE: 2203G

Failed to convert the JSON string ‘<invalidType>’ to a data type. Please enter a valid data type.

INVALID_JSON_DATA_TYPE_FOR_COLLATIONS

SQLSTATE: 2203G

Collations can only be applied to string types, but the JSON data type is <jsonType>.

INVALID_JSON_RECORD_TYPE

SQLSTATE: 22023

Detected an invalid type of a JSON record while inferring a common schema in the mode <failFastMode>. Expected a STRUCT type, but found <invalidType>.

INVALID_JSON_ROOT_FIELD

SQLSTATE: 22032

Cannot convert JSON root field to target Spark type.

INVALID_JSON_SCHEMA_MAP_TYPE

SQLSTATE: 22032

Input schema <jsonSchema> can only contain STRING as a key type for a MAP.

INVALID_KRYO_SERIALIZER_BUFFER_SIZE

SQLSTATE: F0000

The value of the config “<bufferSizeConfKey>” must be less than 2048 MiB, but got <bufferSizeConfValue> MiB.

INVALID_LABEL_USAGE

SQLSTATE: 42K0L

The usage of the label <labelName> is invalid.

For more details see INVALID_LABEL_USAGE

INVALID_LAMBDA_FUNCTION_CALL

SQLSTATE: 42K0D

Invalid lambda function call.

For more details see INVALID_LAMBDA_FUNCTION_CALL

INVALID_LATERAL_JOIN_TYPE

SQLSTATE: 42613

The <joinType> JOIN with LATERAL correlation is not allowed because an OUTER subquery cannot correlate to its join partner. Remove the LATERAL correlation or use an INNER JOIN, or LEFT OUTER JOIN instead.

INVALID_LIMIT_LIKE_EXPRESSION

SQLSTATE: 42K0E

The limit like expression <expr> is invalid.

For more details see INVALID_LIMIT_LIKE_EXPRESSION

INVALID_NON_ABSOLUTE_PATH

SQLSTATE: 22KD1

The provided non-absolute path <path> cannot be qualified. Please update the path to be a valid DBFS mount location.

INVALID_NON_DETERMINISTIC_EXPRESSIONS

SQLSTATE: 42K0E

The operator expects a deterministic expression, but the actual expression is <sqlExprs>.

INVALID_NUMERIC_LITERAL_RANGE

SQLSTATE: 22003

Numeric literal <rawStrippedQualifier> is outside the valid range for <typeName> with minimum value of <minValue> and maximum value of <maxValue>. Please adjust the value accordingly.

INVALID_OBSERVED_METRICS

SQLSTATE: 42K0E

Invalid observed metrics.

For more details see INVALID_OBSERVED_METRICS

INVALID_OPTIONS

SQLSTATE: 42K06

Invalid options:

For more details see INVALID_OPTIONS

INVALID_PANDAS_UDF_PLACEMENT

SQLSTATE: 0A000

The group aggregate pandas UDF <functionList> cannot be invoked together with other, non-pandas aggregate functions.

INVALID_PARAMETER_MARKER_VALUE

SQLSTATE: 22023

An invalid parameter mapping was provided:

For more details see INVALID_PARAMETER_MARKER_VALUE

INVALID_PARAMETER_VALUE

SQLSTATE: 22023

The value of parameter(s) <parameter> in <functionName> is invalid:

For more details see INVALID_PARAMETER_VALUE

INVALID_PARTITION_COLUMN_DATA_TYPE

SQLSTATE: 0A000

Cannot use <type> for partition column.

INVALID_PARTITION_OPERATION

SQLSTATE: 42601

The partition command is invalid.

For more details see INVALID_PARTITION_OPERATION

INVALID_PARTITION_VALUE

SQLSTATE: 42846

Failed to cast value <value> to data type <dataType> for partition column <columnName>. Ensure the value matches the expected data type for this partition column.

INVALID_PIPELINE_ID

SQLSTATE: 42604

Pipeline id <pipelineId> is not valid.

A pipeline id should be a UUID in the format of ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’

INVALID_PRIVILEGE

SQLSTATE: 42852

Privilege <privilege> is not valid for <securable>.

INVALID_PROPERTY_KEY

SQLSTATE: 42602

<key> is an invalid property key, please use quotes, e.g. SET <key>=<value>.

INVALID_PROPERTY_VALUE

SQLSTATE: 42602

<value> is an invalid property value, please use quotes, e.g. SET <key>=<value>.

INVALID_QUALIFIED_COLUMN_NAME

SQLSTATE: 42000

The column name <columnName> is invalid because it is not qualified with a table name or consists of more than 4 name parts.

INVALID_QUERY_MIXED_QUERY_PARAMETERS

SQLSTATE: 42613

A parameterized query must use either positional or named parameters, but not both.
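
A minimal sketch using EXECUTE IMMEDIATE (the values are illustrative): use named markers with named arguments, or positional markers with positional arguments, never a mix.

-- Named parameters only.
EXECUTE IMMEDIATE 'SELECT :x + :y' USING 1 AS x, 2 AS y;
-- Positional parameters only.
EXECUTE IMMEDIATE 'SELECT ? + ?' USING 1, 2;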

INVALID_REGEXP_REPLACE

SQLSTATE: 22023

Could not perform regexp_replace for source = “<source>”, pattern = “<pattern>”, replacement = “<replacement>” and position = <position>.

INVALID_RESET_COMMAND_FORMAT

SQLSTATE: 42000

Expected format is ‘RESET’ or ‘RESET key’. If you want to include special characters in key, please use quotes, e.g., RESET `key`.

INVALID_S3_COPY_CREDENTIALS

SQLSTATE: 42501

COPY INTO credentials must include AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN.

INVALID_SAVE_MODE

SQLSTATE: 42000

The specified save mode <mode> is invalid. Valid save modes include “append”, “overwrite”, “ignore”, “error”, “errorifexists”, and “default”.

INVALID_SCHEMA

SQLSTATE: 42K07

The input schema <inputSchema> is not a valid schema string.

For more details see INVALID_SCHEMA

INVALID_SCHEMA_OR_RELATION_NAME

SQLSTATE: 42602

<name> is not a valid name for tables/schemas. Valid names contain only alphabetic characters, numbers, and _.

INVALID_SCHEME

SQLSTATE: 0AKUC

Unity Catalog does not support <name> as the default file scheme.

INVALID_SECRET_LOOKUP

SQLSTATE: 22531

Invalid secret lookup:

For more details see INVALID_SECRET_LOOKUP

INVALID_SET_SYNTAX

SQLSTATE: 42000

Expected format is ‘SET’, ‘SET key’, or ‘SET key=value’. If you want to include special characters in key, or include semicolon in value, please use backquotes, e.g., SET `key`=`value`.
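
For example (the key and value are hypothetical), backquotes allow a semicolon inside the value:

SET `spark.my.key` = `a;b`;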

INVALID_SHARED_ALIAS_NAME

SQLSTATE: 42601

The <sharedObjectType> alias name must be of the form “schema.name”.

INVALID_SINGLE_VARIANT_COLUMN

SQLSTATE: 42613

The singleVariantColumn option cannot be used if there is also a user specified schema.

INVALID_SOURCE_CATALOG

SQLSTATE: 42809

Source catalog must not be within Unity Catalog for the SYNC command. Found <catalog>.

INVALID_SQL_ARG

SQLSTATE: 42K08

The argument <name> of sql() is invalid. Consider replacing it either with a SQL literal or with collection constructor functions such as map(), array(), struct().

INVALID_SQL_SYNTAX

SQLSTATE: 42000

Invalid SQL syntax:

For more details see INVALID_SQL_SYNTAX

INVALID_STAGING_PATH_IN_STAGING_ACCESS_QUERY

SQLSTATE: 42604

Invalid staging path in staging <operation> query: <path>

INVALID_STATEMENT_FOR_EXECUTE_INTO

SQLSTATE: 07501

The INTO clause of EXECUTE IMMEDIATE is only valid for queries but the given statement is not a query: <sqlString>.
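
A minimal sketch of a valid use (the variable name is illustrative): the statement passed to EXECUTE IMMEDIATE … INTO must be a query.

DECLARE v INT;
-- Works: the statement is a query.
EXECUTE IMMEDIATE 'SELECT 42' INTO v;
-- Fails with INVALID_STATEMENT_FOR_EXECUTE_INTO: not a query.
-- EXECUTE IMMEDIATE 'SET spark.sql.ansi.enabled=true' INTO v;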

INVALID_STATEMENT_OR_CLAUSE

SQLSTATE: 42601

The statement or clause: <operation> is not valid.

INVALID_SUBQUERY_EXPRESSION

SQLSTATE: 42823

Invalid subquery:

For more details see INVALID_SUBQUERY_EXPRESSION

INVALID_TEMP_OBJ_REFERENCE

SQLSTATE: 42K0F

Cannot create the persistent object <objName> of the type <obj> because it references the temporary object <tempObjName> of the type <tempObj>. Please make the temporary object <tempObjName> persistent, or make the persistent object <objName> temporary.
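
A minimal reproduction (the object names are illustrative):

CREATE TEMPORARY VIEW tmp_v AS SELECT 1 AS id;
-- Fails with INVALID_TEMP_OBJ_REFERENCE: a persistent view cannot reference a temporary one.
-- CREATE VIEW persistent_v AS SELECT * FROM tmp_v;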

INVALID_TIMESTAMP_FORMAT

SQLSTATE: 22000

The provided timestamp <timestamp> doesn’t match the expected syntax <format>.

INVALID_TIMEZONE

SQLSTATE: 22009

The timezone: <timeZone> is invalid. The timezone must be either a region-based zone ID or a zone offset. Region IDs must have the form ‘area/city’, such as ‘America/Los_Angeles’. Zone offsets must be in the format ‘(+|-)HH’, ‘(+|-)HH:mm’ or ‘(+|-)HH:mm:ss’, e.g. ‘-08’, ‘+01:00’ or ‘-13:33:33’, and must be in the range from -18:00 to +18:00. ‘Z’ and ‘UTC’ are accepted as synonyms for ‘+00:00’.

INVALID_TIME_TRAVEL_SPEC

SQLSTATE: 42K0E

Cannot specify both version and timestamp when time travelling the table.
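
For example, on a Delta table (the table name is illustrative), specify one or the other:

-- Either a version...
SELECT * FROM my_table VERSION AS OF 3;
-- ...or a timestamp, but not both in the same time travel clause.
SELECT * FROM my_table TIMESTAMP AS OF '2024-01-01';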

INVALID_TIME_TRAVEL_TIMESTAMP_EXPR

SQLSTATE: 42K0E

The time travel timestamp expression <expr> is invalid.

For more details see INVALID_TIME_TRAVEL_TIMESTAMP_EXPR

INVALID_TYPED_LITERAL

SQLSTATE: 42604

The value of the typed literal <valueType> is invalid: <value>.

INVALID_UDF_IMPLEMENTATION

SQLSTATE: 38000

Function <funcName> does not implement a ScalarFunction or AggregateFunction.

INVALID_UPGRADE_SYNTAX

SQLSTATE: 42809

<command> <supportedOrNot> the source table is in Hive Metastore and the destination table is in Unity Catalog.

INVALID_URL

SQLSTATE: 22P02

The url is invalid: <url>. Use try_parse_url to tolerate invalid URL and return NULL instead.
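
A minimal sketch of the suggested workaround, assuming a runtime recent enough to include try_parse_url:

-- Returns NULL instead of raising INVALID_URL.
SELECT try_parse_url('inva lid://url', 'HOST');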

INVALID_USAGE_OF_STAR_OR_REGEX

SQLSTATE: 42000

Invalid usage of <elem> in <prettyName>.

INVALID_UTF8_STRING

SQLSTATE: 22029

Invalid UTF8 byte sequence found in string: <str>.

INVALID_UUID

SQLSTATE: 42604

Input <uuidInput> is not a valid UUID.

The UUID should be in the format of ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’

Please check the format of the UUID.

INVALID_VARIABLE_DECLARATION

SQLSTATE: 42K0M

Invalid variable declaration.

For more details see INVALID_VARIABLE_DECLARATION

INVALID_VARIABLE_TYPE_FOR_QUERY_EXECUTE_IMMEDIATE

SQLSTATE: 42K09

Variable type must be string type but got <varType>.

INVALID_VARIANT_CAST

SQLSTATE: 22023

The variant value <value> cannot be cast into <dataType>. Please use try_variant_get instead.
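
A minimal sketch of the suggested workaround (runtime support for VARIANT and try_variant_get assumed):

-- Returns NULL instead of raising INVALID_VARIANT_CAST: 'hello' cannot be cast to INT.
SELECT try_variant_get(parse_json('{"a": "hello"}'), '$.a', 'int');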

INVALID_VARIANT_FROM_PARQUET

SQLSTATE: 22023

Invalid variant.

For more details see INVALID_VARIANT_FROM_PARQUET

INVALID_VARIANT_GET_PATH

SQLSTATE: 22023

The path <path> is not a valid variant extraction path in <functionName>.

A valid path should start with $ and be followed by zero or more segments like [123], .name, ['name'], or ["name"].

INVALID_VARIANT_SHREDDING_SCHEMA

SQLSTATE: 22023

The schema <schema> is not a valid variant shredding schema.

INVALID_WHERE_CONDITION

SQLSTATE: 42903

The WHERE condition <condition> contains invalid expressions: <expressionList>.

Rewrite the query to avoid window functions, aggregate functions, and generator functions in the WHERE clause.

INVALID_WINDOW_SPEC_FOR_AGGREGATION_FUNC

SQLSTATE: 42601

Cannot specify ORDER BY or a window frame for <aggFunc>.

INVALID_WRITER_COMMIT_MESSAGE

SQLSTATE: 42KDE

The data source writer has generated an invalid number of commit messages. Expected exactly one writer commit message from each task, but received <detail>.

INVALID_WRITE_DISTRIBUTION

SQLSTATE: 42000

The requested write distribution is invalid.

For more details see INVALID_WRITE_DISTRIBUTION

ISOLATED_COMMAND_FAILURE

SQLSTATE: 39000

Failed to execute <command>.

JOIN_CONDITION_IS_NOT_BOOLEAN_TYPE

SQLSTATE: 42K0E

The join condition <joinCondition> has the invalid type <conditionType>, expected “BOOLEAN”.

KAFKA_DATA_LOSS

SQLSTATE: 22000

Some data may have been lost because it is no longer available in Kafka; either the data was aged out by Kafka, or the topic was deleted before all of the data in the topic was processed.

If you don’t want your streaming query to fail in such cases, set the source option failOnDataLoss to false.

Reason:

For more details see KAFKA_DATA_LOSS

KINESIS_COULD_NOT_READ_SHARD_UNTIL_END_OFFSET

SQLSTATE: 22000

Could not read until the desired sequence number <endSeqNum> for shard <shardId> in Kinesis stream <stream> with consumer mode <consumerMode>. The query will fail due to potential data loss. The last read record was at sequence number <lastSeqNum>.

This can happen if the data with endSeqNum has already been aged out, or if the Kinesis stream was deleted and reconstructed with the same name. The failure behavior can be overridden by setting spark.databricks.kinesis.failOnDataLoss to false in the Spark configuration.

KINESIS_EFO_CONSUMER_NOT_FOUND

SQLSTATE: 51000

For Kinesis stream <streamId>, the previously registered EFO consumer <consumerId> of the stream has been deleted.

Restart the query so that a new consumer will be registered.

KINESIS_EFO_SUBSCRIBE_LIMIT_EXCEEDED

SQLSTATE: 51000

For shard <shard>, the previous call to the subscribeToShard API was within 5 seconds of the next call.

Restart the query after 5 seconds or more.

KINESIS_FETCHED_SHARD_LESS_THAN_TRACKED_SHARD

SQLSTATE: 42K04

The minimum fetched shardId from Kinesis (<fetchedShardId>) is less than the minimum tracked shardId (<trackedShardId>).

This is unexpected and occurs when a Kinesis stream is deleted and recreated with the same name, and a streaming query using this Kinesis stream is restarted using an existing checkpoint location.

Restart the streaming query with a new checkpoint location, or create a stream with a new name.

KINESIS_POLLING_MODE_UNSUPPORTED

SQLSTATE: 0A000

Kinesis polling mode is unsupported.

KINESIS_RECORD_SEQ_NUMBER_ORDER_VIOLATION

SQLSTATE: 22000

For shard <shard>, the last record read from Kinesis in previous fetches has sequence number <lastSeqNum>, which is greater than the record read in the current fetch with sequence number <recordSeqNum>.

This is unexpected and can happen when the start position of a retry or of the next fetch is incorrectly initialized, and it may result in duplicate records downstream.

KINESIS_SOURCE_MUST_BE_IN_EFO_MODE_TO_CONFIGURE_CONSUMERS

SQLSTATE: 42KDF

To read from Kinesis Streams with consumer configurations (consumerName, consumerNamePrefix, or registeredConsumerId), consumerMode must be efo.

KINESIS_SOURCE_MUST_SPECIFY_REGISTERED_CONSUMER_ID_AND_TYPE

SQLSTATE: 42KDF

To read from Kinesis Streams with registered consumers, you must specify both the registeredConsumerId and registeredConsumerIdType options.

KINESIS_SOURCE_MUST_SPECIFY_STREAM_NAMES_OR_ARNS

SQLSTATE: 42KDF

To read from Kinesis Streams, you must configure either the streamName or the streamARN option (but not both) as a comma-separated list of stream names/ARNs.

KINESIS_SOURCE_NO_CONSUMER_OPTIONS_WITH_REGISTERED_CONSUMERS

SQLSTATE: 42KDF

To read from Kinesis Streams with registered consumers, do not configure consumerName or consumerNamePrefix options as they will not take effect.

KINESIS_SOURCE_REGISTERED_CONSUMER_ID_COUNT_MISMATCH

SQLSTATE: 22023

The number of registered consumer ids should be equal to the number of Kinesis streams but got <numConsumerIds> consumer ids and <numStreams> streams.

KINESIS_SOURCE_REGISTERED_CONSUMER_NOT_FOUND

SQLSTATE: 22023

The registered consumer <consumerId> provided cannot be found for streamARN <streamARN>. Verify that you have registered the consumer or do not provide the registeredConsumerId option.

KINESIS_SOURCE_REGISTERED_CONSUMER_TYPE_INVALID

SQLSTATE: 22023

The registered consumer type <consumerType> is invalid. It must be either name or ARN.

KRYO_BUFFER_OVERFLOW

SQLSTATE: 54006

Kryo serialization failed: <exceptionMsg>. To avoid this, increase “<bufferSizeConfKey>” value.

LABELS_MISMATCH

SQLSTATE: 42K0L

Begin label <beginLabel> does not match the end label <endLabel>.

LABEL_ALREADY_EXISTS

SQLSTATE: 42K0L

The label <label> already exists. Choose another name or rename the existing label.

LOAD_DATA_PATH_NOT_EXISTS

SQLSTATE: 42K03

LOAD DATA input path does not exist: <path>.

LOCAL_MUST_WITH_SCHEMA_FILE

SQLSTATE: 42601

LOCAL must be used together with the schema of file, but got: <actualSchema>.

LOCATION_ALREADY_EXISTS

SQLSTATE: 42710

Cannot name the managed table as <identifier>, as its associated location <location> already exists. Please pick a different table name, or remove the existing location first.

LOST_TOPIC_PARTITIONS_IN_END_OFFSET_WITH_TRIGGER_AVAILABLENOW

SQLSTATE: KD000

Some of the partitions in the Kafka topic(s) have been lost while running the query with Trigger.AvailableNow. The error could be transient; restart your query, and report if you still see the same issue.

topic-partitions for latest offset: <tpsForLatestOffset>, topic-partitions for end offset: <tpsForEndOffset>

MALFORMED_AVRO_MESSAGE

SQLSTATE: KD000

Malformed Avro messages are detected in message deserialization. Parse Mode: <mode>. To process malformed Avro messages as null results, try setting the option ‘mode’ to ‘PERMISSIVE’.

MALFORMED_CHARACTER_CODING

SQLSTATE: 22000

Invalid value found when performing <function> with <charset>

MALFORMED_CSV_RECORD

SQLSTATE: KD000

Malformed CSV record: <badRecord>

MALFORMED_RECORD_IN_PARSING

SQLSTATE: 22023

Malformed records are detected in record parsing: <badRecord>.

Parse Mode: <failFastMode>. To process malformed records as null results, try setting the option ‘mode’ to ‘PERMISSIVE’.

For more details see MALFORMED_RECORD_IN_PARSING

MALFORMED_VARIANT

SQLSTATE: 22023

Variant binary is malformed. Please check the data source is valid.

MANAGED_TABLE_WITH_CRED

SQLSTATE: 42613

Create managed table with storage credential is not supported.

MATERIALIZED_VIEW_MESA_REFRESH_WITHOUT_PIPELINE_ID

SQLSTATE: 55019

Cannot <refreshType> the materialized view because it predates having a pipelineId. To enable <refreshType> please drop and recreate the materialized view.

MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED

SQLSTATE: 56038

The materialized view operation <operation> is not allowed:

For more details see MATERIALIZED_VIEW_OPERATION_NOT_ALLOWED

MATERIALIZED_VIEW_OUTPUT_WITHOUT_EXPLICIT_ALIAS

SQLSTATE: 0A000

Output expression <expression> in a materialized view must be explicitly aliased.
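
For example (the table and column names are hypothetical), alias every computed output column:

CREATE MATERIALIZED VIEW mv AS
SELECT price * quantity AS revenue  -- the explicit alias is required
FROM orders;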

MATERIALIZED_VIEW_OVER_STREAMING_QUERY_INVALID

SQLSTATE: 42000

Materialized view <name> could not be created with a streaming query. Please use CREATE [OR REFRESH] <st> or remove the STREAM keyword from your FROM clause to turn this relation into a batch query instead.

MATERIALIZED_VIEW_UNSUPPORTED_OPERATION

SQLSTATE: 0A000

Operation <operation> is not supported on materialized views for this version.

MAX_NUMBER_VARIABLES_IN_SESSION_EXCEEDED

SQLSTATE: 54KD1

Cannot create the new variable <variableName> because the number of variables in the session exceeds the maximum allowed number (<maxNumVariables>).

MAX_RECORDS_PER_FETCH_INVALID_FOR_KINESIS_SOURCE

SQLSTATE: 22023

maxRecordsPerFetch needs to be a positive integer less than or equal to <kinesisRecordLimit>

MERGE_CARDINALITY_VIOLATION

SQLSTATE: 23K01

The ON search condition of the MERGE statement matched a single row from the target table with multiple rows of the source table.

This could result in the target row being operated on more than once with an update or delete operation and is not allowed.

MERGE_WITHOUT_WHEN

SQLSTATE: 42601

There must be at least one WHEN clause in a MERGE statement.
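
A minimal well-formed MERGE for reference (the table names are hypothetical); at least one WHEN clause is required:

MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;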

METRIC_CONSTRAINT_NOT_SUPPORTED

SQLSTATE: 0A000

METRIC CONSTRAINT is not enabled.

METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR

SQLSTATE: 22023

Provided value “<argValue>” is not supported by argument “<argName>” for the METRIC_STORE table function.

For more details see METRIC_STORE_INVALID_ARGUMENT_VALUE_ERROR

METRIC_STORE_UNSUPPORTED_ERROR

SQLSTATE: 56038

Metric Store routine <routineName> is currently disabled in this environment.

MIGRATION_NOT_SUPPORTED

SQLSTATE: 42601

<table> is not supported for migration to a managed table because it is not a <tableKind> table.

MISMATCHED_TOPIC_PARTITIONS_BETWEEN_END_OFFSET_AND_PREFETCHED

SQLSTATE: KD000

The Kafka data source in Trigger.AvailableNow should provide the same topic partitions from the pre-fetched offset to the end offset for each microbatch. The error could be transient; restart your query, and report if you still see the same issue.

topic-partitions for pre-fetched offset: <tpsForPrefetched>, topic-partitions for end offset: <tpsForEndOffset>.

MISSING_AGGREGATION

SQLSTATE: 42803

The non-aggregating expression <expression> is based on columns which are not participating in the GROUP BY clause.

Add the columns or the expression to the GROUP BY, aggregate the expression, or use <expressionAnyValue> if you do not care which of the values within a group is returned.

For more details see MISSING_AGGREGATION
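
For example (the table employees is hypothetical), any_value() lets a column ride along without joining the GROUP BY:

SELECT dept, any_value(manager) AS a_manager, sum(salary) AS total
FROM employees
GROUP BY dept;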

MISSING_CLAUSES_FOR_OPERATION

SQLSTATE: 42601

Missing clause <clauses> for operation <operation>. Please add the required clauses.

MISSING_CONNECTION_OPTION

SQLSTATE: 42000

Connections of type ‘<connectionType>’ must include the following option(s): <requiredOptions>.

MISSING_DATABASE_FOR_V1_SESSION_CATALOG

SQLSTATE: 3F000

Database name is not specified in the v1 session catalog. Please ensure that you provide a valid database name when interacting with the v1 catalog.

MISSING_GROUP_BY

SQLSTATE: 42803

The query does not include a GROUP BY clause. Add a GROUP BY clause, or turn the aggregates into window functions using OVER clauses.
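
Two sketches of the fix (the table employees is hypothetical):

-- Add a GROUP BY clause:
SELECT dept, sum(salary) FROM employees GROUP BY dept;
-- Or keep every row by using a window function instead:
SELECT dept, salary, sum(salary) OVER (PARTITION BY dept) AS dept_total FROM employees;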

MISSING_NAME_FOR_CHECK_CONSTRAINT

SQLSTATE: 42000

CHECK constraint must have a name.

MISSING_PARAMETER_FOR_KAFKA

SQLSTATE: 42KDF

Parameter <parameterName> is required for Kafka, but is not specified in <functionName>.

MISSING_PARAMETER_FOR_ROUTINE

SQLSTATE: 42KDF

Parameter <parameterName> is required, but is not specified in <functionName>.

MISSING_TIMEOUT_CONFIGURATION

SQLSTATE: HY000

The operation has timed out, but no timeout duration is configured. To set a processing time-based timeout, use ‘GroupState.setTimeoutDuration()’ in your ‘mapGroupsWithState’ or ‘flatMapGroupsWithState’ operation. For event-time-based timeout, use ‘GroupState.setTimeoutTimestamp()’ and define a watermark using ‘Dataset.withWatermark()’.

MISSING_WINDOW_SPECIFICATION

SQLSTATE: 42P20

Window specification is not defined in the WINDOW clause for <windowName>. For more information about WINDOW clauses, please refer to ‘<docroot>/sql-ref-syntax-qry-select-window.html’.

MODIFY_BUILTIN_CATALOG

SQLSTATE: 42832

Modifying built-in catalog <catalogName> is not supported.

MULTIPLE_LOAD_PATH

SQLSTATE: 42000

Databricks Delta does not support multiple input paths in the load() API.

paths: <pathList>. To build a single DataFrame by loading multiple paths from the same Delta table, please load the root path of the Delta table with the corresponding partition filters. If the multiple paths are from different Delta tables, please use Dataset’s union()/unionByName() APIs to combine the DataFrames generated by separate load() API calls.

MULTIPLE_MATCHING_CONSTRAINTS

SQLSTATE: 42891

Found at least two matching constraints with the given condition.

MULTIPLE_QUERY_RESULT_CLAUSES_WITH_PIPE_OPERATORS

SQLSTATE: 42000

<clause1> and <clause2> cannot coexist in the same SQL pipe operator using ‘|>’. Please separate the multiple result clauses into separate pipe operators and then retry the query again.

MULTIPLE_TIME_TRAVEL_SPEC

SQLSTATE: 42K0E

Cannot specify time travel in both the time travel clause and options.

MULTIPLE_XML_DATA_SOURCE

SQLSTATE: 42710

Detected multiple data sources with the name <provider> (<sourceNames>). Please specify the fully qualified class name or remove <externalSource> from the classpath.

MULTI_SOURCES_UNSUPPORTED_FOR_EXPRESSION

SQLSTATE: 42K0E

The expression <expr> does not support more than one source.

MULTI_UDF_INTERFACE_ERROR

SQLSTATE: 0A000

Not allowed to implement multiple UDF interfaces, UDF class <className>.

MUTUALLY_EXCLUSIVE_CLAUSES

SQLSTATE: 42613

Mutually exclusive clauses or options <clauses>. Please remove one of these clauses.

MV_ST_ALTER_QUERY_INCORRECT_BACKING_TYPE

SQLSTATE: 42601

The input query expects a <expectedType>, but the underlying table is a <givenType>.

NAMED_PARAMETERS_NOT_SUPPORTED

SQLSTATE: 4274K

Named parameters are not supported for function <functionName>; please retry the query with positional arguments to the function call instead.

NAMED_PARAMETERS_NOT_SUPPORTED_FOR_SQL_UDFS

SQLSTATE: 0A000

Cannot call function <functionName> because named argument references are not supported. In this case, the named argument reference was <argument>.

NAMED_PARAMETER_SUPPORT_DISABLED

SQLSTATE: 0A000

Cannot call function <functionName> because named argument references are not enabled here.

In this case, the named argument reference was <argument>.

Set “spark.sql.allowNamedFunctionArguments” to “true” to turn on the feature.
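
For example, after enabling the flag, a built-in such as mask can take a named argument (the values are illustrative):

SET spark.sql.allowNamedFunctionArguments=true;
SELECT mask('AbCD123-@$#', lowerChar => 'q');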

NAMESPACE_ALREADY_EXISTS

SQLSTATE: 42000

Cannot create namespace <nameSpaceName> because it already exists.

Choose a different name, drop the existing namespace, or add the IF NOT EXISTS clause to tolerate pre-existing namespace.

NAMESPACE_NOT_EMPTY

SQLSTATE: 42000

Cannot drop a namespace <nameSpaceName> because it contains objects.

Use DROP NAMESPACE ... CASCADE to drop the namespace and all its objects.

NAMESPACE_NOT_FOUND

SQLSTATE: 42000

The namespace <nameSpaceName> cannot be found. Verify the spelling and correctness of the namespace.

If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.

To tolerate the error on drop use DROP NAMESPACE IF EXISTS.

NATIVE_IO_ERROR

SQLSTATE: KD00F

Native request failed. requestId: <requestId>, cloud: <cloud>, operation: <operation>

request: [https: <https>, method = <method>, path = <path>, params = <params>, host = <host>, headers = <headers>, bodyLen = <bodyLen>],

error: <error>

NATIVE_XML_DATA_SOURCE_NOT_ENABLED

SQLSTATE: 56038

Native XML Data Source is not enabled in this cluster.

NEGATIVE_VALUES_IN_FREQUENCY_EXPRESSION

SQLSTATE: 22003

Found the negative value in <frequencyExpression>: <negativeValue>, but expected a positive integral value.

NESTED_AGGREGATE_FUNCTION

SQLSTATE: 42607

It is not allowed to use an aggregate function in the argument of another aggregate function. Please use the inner aggregate function in a sub-query.
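
For example (the table orders is hypothetical), move the inner aggregate into a sub-query:

-- Instead of max(count(*)) in a single aggregate:
SELECT max(cnt)
FROM (SELECT count(*) AS cnt FROM orders GROUP BY customer_id);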

NESTED_EXECUTE_IMMEDIATE

SQLSTATE: 07501

Nested EXECUTE IMMEDIATE commands are not allowed. Please ensure that the SQL query provided (<sqlString>) does not contain another EXECUTE IMMEDIATE command.

NONEXISTENT_FIELD_NAME_IN_LIST

SQLSTATE: HV091

Field(s) <nonExistFields> do(es) not exist. Available fields: <fieldNames>

NON_FOLDABLE_ARGUMENT

SQLSTATE: 42K08

The function <funcName> requires the parameter <paramName> to be a foldable expression of the type <paramType>, but the actual argument is non-foldable.

NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION

SQLSTATE: 42613

When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.

NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION

SQLSTATE: 42613

When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.

NON_LAST_NOT_MATCHED_BY_TARGET_CLAUSE_OMIT_CONDITION

SQLSTATE: 42613

When there is more than one NOT MATCHED [BY TARGET] clause in a MERGE statement, only the last NOT MATCHED [BY TARGET] clause can omit the condition.

NON_LITERAL_PIVOT_VALUES

SQLSTATE: 42K08

Literal expressions required for pivot values, found <expression>.

NON_PARTITION_COLUMN

SQLSTATE: 42000

PARTITION clause cannot contain the non-partition column: <columnName>.

NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING

SQLSTATE: 42KDE

Window function is not supported in <windowFunc> (as column <columnName>) on streaming DataFrames/Datasets.

Structured Streaming only supports time-window aggregation using the WINDOW function. (window specification: <windowSpec>)

NOT_ALLOWED_IN_FROM

SQLSTATE: 42601

Not allowed in the FROM clause:

For more details see NOT_ALLOWED_IN_FROM

NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE

SQLSTATE: 42601

Not allowed in the pipe WHERE clause:

For more details see NOT_ALLOWED_IN_PIPE_OPERATOR_WHERE

NOT_A_CONSTANT_STRING

SQLSTATE: 42601

The expression <expr> used for the routine or clause <name> must be a constant STRING which is NOT NULL.

For more details see NOT_A_CONSTANT_STRING

NOT_A_PARTITIONED_TABLE

SQLSTATE: 42809

Operation <operation> is not allowed for <tableIdentWithDB> because it is not a partitioned table.

NOT_A_SCALAR_FUNCTION

SQLSTATE: 42887

<functionName> appears as a scalar expression here, but the function was defined as a table function. Please update the query to move the function call into the FROM clause, or redefine <functionName> as a scalar function instead.

NOT_A_TABLE_FUNCTION

SQLSTATE: 42887

<functionName> appears as a table function here, but the function was defined as a scalar function. Please update the query to move the function call outside the FROM clause, or redefine <functionName> as a table function instead.
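
A sketch of the distinction (my_tvf and my_scalar_fn are hypothetical functions):

-- A table function belongs in the FROM clause:
SELECT * FROM my_tvf(10);
-- A scalar function belongs in an expression position:
SELECT my_scalar_fn(10);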

NOT_NULL_ASSERT_VIOLATION

SQLSTATE: 42000

NULL value appeared in non-nullable field: <walkedTypePath>. If the schema is inferred from a Scala tuple/case class, or a Java bean, please try to use scala.Option[_] or other nullable types (such as java.lang.Integer instead of int/scala.Int).

NOT_NULL_CONSTRAINT_VIOLATION

SQLSTATE: 42000

Assigning a NULL is not allowed here.

For more details see NOT_NULL_CONSTRAINT_VIOLATION

NOT_SUPPORTED_CHANGE_COLUMN

SQLSTATE: 0A000

ALTER TABLE ALTER/CHANGE COLUMN is not supported for changing <table>’s column <originName> with type <originType> to <newName> with type <newType>.

NOT_SUPPORTED_COMMAND_FOR_V2_TABLE

SQLSTATE: 0A000

<cmd> is not supported for v2 tables.

NOT_SUPPORTED_COMMAND_WITHOUT_HIVE_SUPPORT

SQLSTATE: 0A000

<cmd> is not supported, if you want to enable it, please set “spark.sql.catalogImplementation” to “hive”.

NOT_SUPPORTED_IN_JDBC_CATALOG

SQLSTATE: 0A000

Not supported command in JDBC catalog:

For more details see NOT_SUPPORTED_IN_JDBC_CATALOG

NOT_SUPPORTED_WITH_DB_SQL

SQLSTATE: 0A000

<operation> is not supported on a SQL <endpoint>.

NOT_SUPPORTED_WITH_SERVERLESS

SQLSTATE: 0A000

<operation> is not supported on serverless compute.

NOT_UNRESOLVED_ENCODER

SQLSTATE: 42601

Unresolved encoder expected, but <attr> was found.

NO_DEFAULT_COLUMN_VALUE_AVAILABLE

SQLSTATE: 42608

Can’t determine the default value for <colName> since it is not nullable and it has no default value.

NO_HANDLER_FOR_UDAF

SQLSTATE: 42000

No handler for UDAF ‘<functionName>’. Use sparkSession.udf.register(…) instead.

NO_MERGE_ACTION_SPECIFIED

SQLSTATE: 42K0E

df.mergeInto needs to be followed by at least one of whenMatched/whenNotMatched/whenNotMatchedBySource.

NO_PARENT_EXTERNAL_LOCATION_FOR_PATH

SQLSTATE: none assigned

No parent external location was found for path ‘<path>’. Please create an external location on one of the parent paths and then retry the query or command again.

NO_SQL_TYPE_IN_PROTOBUF_SCHEMA

SQLSTATE: 42S22

Cannot find <catalystFieldPath> in Protobuf schema.

NO_STORAGE_LOCATION_FOR_TABLE

SQLSTATE: none assigned

No storage location was found for table ‘<tableId>’ when generating table credentials. Please verify the table type and the table location URL and then retry the query or command again.

NO_SUCH_CATALOG_EXCEPTION

SQLSTATE: 42704

Catalog ‘<catalog>’ was not found. Please verify the catalog name and then retry the query or command again.

NO_SUCH_CLEANROOM_EXCEPTION

SQLSTATE: none assigned

The clean room ‘<cleanroom>’ does not exist. Please verify that the clean room name is spelled correctly and matches the name of a valid existing clean room and then retry the query or command again.

NO_SUCH_EXTERNAL_LOCATION_EXCEPTION

SQLSTATE: none assigned

The external location ‘<externalLocation>’ does not exist. Please verify that the external location name is correct and then retry the query or command again.

NO_SUCH_METASTORE_EXCEPTION

SQLSTATE: none assigned

The metastore was not found. Please ask your account administrator to assign a metastore to the current workspace and then retry the query or command again.

NO_SUCH_PROVIDER_EXCEPTION

SQLSTATE: none assigned

The share provider ‘<providerName>’ does not exist. Please verify the share provider name is spelled correctly and matches the name of a valid existing provider name and then retry the query or command again.

NO_SUCH_RECIPIENT_EXCEPTION

SQLSTATE: none assigned

The recipient ‘<recipient>’ does not exist. Please verify that the recipient name is spelled correctly and matches the name of a valid existing recipient and then retry the query or command again.

NO_SUCH_SHARE_EXCEPTION

SQLSTATE: none assigned

The share ‘<share>’ does not exist. Please verify that the share name is spelled correctly and matches the name of a valid existing share and then retry the query or command again.

NO_SUCH_STORAGE_CREDENTIAL_EXCEPTION

SQLSTATE: none assigned

The storage credential ‘<storageCredential>’ does not exist. Please verify that the storage credential name is spelled correctly and matches the name of a valid existing storage credential and then retry the query or command again.

NO_SUCH_USER_EXCEPTION

SQLSTATE: none assigned

The user ‘<userName>’ does not exist. Please verify that the user to whom you grant permission or alter ownership is spelled correctly and matches the name of a valid existing user and then retry the query or command again.

NO_UDF_INTERFACE

SQLSTATE: 38000

UDF class <className> doesn’t implement any UDF interface.

NULLABLE_COLUMN_OR_FIELD

SQLSTATE: 42000

Column or field <name> is nullable while it’s required to be non-nullable.

NULLABLE_ROW_ID_ATTRIBUTES

SQLSTATE: 42000

Row ID attributes cannot be nullable: <nullableRowIdAttrs>.

NULL_DATA_SOURCE_OPTION

SQLSTATE: 22024

Data source read/write option <option> cannot have null value.

NULL_MAP_KEY

SQLSTATE: 2200E

Cannot use null as map key.

NULL_QUERY_STRING_EXECUTE_IMMEDIATE

SQLSTATE: 22004

Execute immediate requires a non-null variable as the query string, but the provided variable <varName> is null.

NUMERIC_OUT_OF_SUPPORTED_RANGE

SQLSTATE: 22003

The value <value> cannot be interpreted as a numeric since it has more than 38 digits.

NUMERIC_VALUE_OUT_OF_RANGE

SQLSTATE: 22003

For more details see NUMERIC_VALUE_OUT_OF_RANGE

NUM_COLUMNS_MISMATCH

SQLSTATE: 42826

<operator> can only be performed on inputs with the same number of columns, but the first input has <firstNumColumns> columns and the <invalidOrdinalNum> input has <invalidNumColumns> columns.

NUM_TABLE_VALUE_ALIASES_MISMATCH

SQLSTATE: 42826

Number of given aliases does not match number of output columns.

Function name: <funcName>; number of aliases: <aliasesNum>; number of output columns: <outColsNum>.

OAUTH_CUSTOM_IDENTITY_CLAIM_NOT_PROVIDED

SQLSTATE: 22KD2

No custom identity claim was provided.

ONLY_SECRET_FUNCTION_SUPPORTED_HERE

SQLSTATE: 42K0E

Calling function <functionName> is not supported in this <location>; <supportedFunctions> supported here.

ONLY_SUPPORTED_WITH_UC_SQL_CONNECTOR

SQLSTATE: 0A000

SQL operation <operation> is only supported on Databricks SQL connectors with Unity Catalog support.

OPERATION_CANCELED

SQLSTATE: HY008

Operation has been canceled.

OPERATION_REQUIRES_UNITY_CATALOG

SQLSTATE: 0AKUD

Operation <operation> requires Unity Catalog to be enabled.

OP_NOT_SUPPORTED_READ_ONLY

SQLSTATE: 42KD1

<plan> is not supported in read-only session mode.

ORDER_BY_POS_OUT_OF_RANGE

SQLSTATE: 42805

ORDER BY position <index> is not in select list (valid range is [1, <size>]).

PARQUET_CONVERSION_FAILURE

SQLSTATE: 42846

Unable to create a Parquet converter for the data type <dataType> whose Parquet type is <parquetType>.

For more details see PARQUET_CONVERSION_FAILURE

PARQUET_TYPE_ILLEGAL

SQLSTATE: 42846

Illegal Parquet type: <parquetType>.

PARQUET_TYPE_NOT_RECOGNIZED

SQLSTATE: 42846

Unrecognized Parquet type: <field>.

PARQUET_TYPE_NOT_SUPPORTED

SQLSTATE: 42846

Parquet type not yet supported: <parquetType>.

PARSE_EMPTY_STATEMENT

SQLSTATE: 42617

Syntax error, unexpected empty statement.

PARSE_MODE_UNSUPPORTED

SQLSTATE: 42601

The function <funcName> doesn’t support the <mode> mode. Acceptable modes are PERMISSIVE and FAILFAST.

PARSE_SYNTAX_ERROR

SQLSTATE: 42601

Syntax error at or near <error> <hint>.

PARTITIONS_ALREADY_EXIST

SQLSTATE: 428FT

Cannot ADD or RENAME TO partition(s) <partitionList> in table <tableName> because they already exist.

Choose a different name, drop the existing partition, or add the IF NOT EXISTS clause to tolerate a pre-existing partition.
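
For example (the table and partition column are hypothetical):

ALTER TABLE sales ADD IF NOT EXISTS PARTITION (dt = '2024-01-01');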

PARTITIONS_NOT_FOUND

SQLSTATE: 428FT

The partition(s) <partitionList> cannot be found in table <tableName>.

Verify the partition specification and table name.

To tolerate the error on drop, use ALTER TABLE ... DROP IF EXISTS PARTITION.

PARTITION_COLUMN_NOT_FOUND_IN_SCHEMA

SQLSTATE: 42000

Partition column <column> not found in schema <schema>. Please provide the existing column for partitioning.

PARTITION_LOCATION_ALREADY_EXISTS

SQLSTATE: 42K04

Partition location <locationPath> already exists in table <tableName>.

PARTITION_LOCATION_IS_NOT_UNDER_TABLE_DIRECTORY

SQLSTATE: 42KD5

Failed to execute the ALTER TABLE SET PARTITION LOCATION statement, because the partition location <location> is not under the table directory <table>.

To fix it, please set the location of the partition to a subdirectory of <table>.

PARTITION_METADATA

SQLSTATE: 0AKUC

<action> is not allowed on table <tableName> since storing partition metadata is not supported in Unity Catalog.

PARTITION_NUMBER_MISMATCH

SQLSTATE: KD009

Number of values (<partitionNumber>) did not match schema size (<partitionSchemaSize>): values are <partitionValues>, schema is <partitionSchema>, file path is <urlEncodedPath>.

Please re-materialize the table or contact the owner.

PARTITION_TRANSFORM_EXPRESSION_NOT_IN_PARTITIONED_BY

SQLSTATE: 42S23

The expression <expression> must be inside ‘partitionedBy’.

PATH_ALREADY_EXISTS

SQLSTATE: 42K04

Path <outputPath> already exists. Set mode as “overwrite” to overwrite the existing path.

PATH_NOT_FOUND

SQLSTATE: 42K03

Path does not exist: <path>.

PHOTON_DESERIALIZED_PROTOBUF_MEMORY_LIMIT_EXCEEDED

SQLSTATE: 54000

Deserializing the Photon protobuf plan requires at least <size> bytes, which exceeds the limit of <limit> bytes. This could be due to a very large plan or the presence of a very wide schema. Try to simplify the query, remove unnecessary columns, or disable Photon.

PHOTON_SERIALIZED_PROTOBUF_MEMORY_LIMIT_EXCEEDED

SQLSTATE: 54000

The serialized Photon protobuf plan has size <size> bytes, which exceeds the limit of <limit> bytes. The serialized size of data types in the plan is <dataTypeSize> bytes. This could be due to a very large plan or the presence of a very wide schema. Consider rewriting the query to remove unwanted operations and columns, or disable Photon.

PIPE_OPERATOR_AGGREGATE_EXPRESSION_CONTAINS_NO_AGGREGATE_FUNCTION

SQLSTATE: 0A000

Non-grouping expression <expr> is provided as an argument to the |> AGGREGATE pipe operator but does not contain any aggregate function; please update it to include an aggregate function and then retry the query again.

PIPE_OPERATOR_CONTAINS_AGGREGATE_FUNCTION

SQLSTATE: 0A000

Aggregate function <expr> is not allowed when using the pipe operator |> <clause> clause; please use the pipe operator |> AGGREGATE clause instead.
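
A hedged sketch of the pipe form (the table sales is hypothetical, and exact pipe-syntax support depends on the runtime):

FROM sales
|> AGGREGATE sum(amount) AS total
   GROUP BY region;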

PIVOT_VALUE_DATA_TYPE_MISMATCH

SQLSTATE: 42K09

Invalid pivot value ‘<value>’: value data type <valueType> does not match pivot column data type <pivotType>.

PROCEDURE_ARGUMENT_NUMBER_MISMATCH

SQLSTATE: 42605

Procedure <procedureName> expects <expected> arguments, but <actual> were provided.

PROCEDURE_CREATION_EMPTY_ROUTINE

SQLSTATE: 0A000

CREATE PROCEDURE with an empty routine definition is not allowed.

PROCEDURE_CREATION_PARAMETER_OUT_INOUT_WITH_DEFAULT

SQLSTATE: 42601

The parameter <parameterName> is defined with parameter mode <parameterMode>. OUT and INOUT parameters cannot be omitted when invoking a routine and therefore do not support a DEFAULT expression. To proceed, remove the DEFAULT clause or change the parameter mode to IN.

PROCEDURE_NOT_SUPPORTED

SQLSTATE: 0A000

Stored procedures are not supported.

PROCEDURE_NOT_SUPPORTED_WITH_HMS

SQLSTATE: 0A000

Stored procedures are not supported with Hive Metastore. Please use Unity Catalog instead.

PROTOBUF_DEPENDENCY_NOT_FOUND

SQLSTATE: 42K0G

Could not find dependency: <dependencyName>.

PROTOBUF_DESCRIPTOR_FILE_NOT_FOUND

SQLSTATE: 42K0G

Error reading Protobuf descriptor file at path: <filePath>.

PROTOBUF_FIELD_MISSING

SQLSTATE: 42K0G

Searching for <field> in Protobuf schema at <protobufSchema> gave <matchSize> matches. Candidates: <matches>.

PROTOBUF_FIELD_MISSING_IN_SQL_SCHEMA

SQLSTATE: 42K0G

Found <field> in Protobuf schema but there is no match in the SQL schema.

PROTOBUF_FIELD_TYPE_MISMATCH

SQLSTATE: 42K0G

Type mismatch encountered for field: <field>.

PROTOBUF_JAVA_CLASSES_NOT_SUPPORTED

SQLSTATE: 0A000

Java classes are not supported for <protobufFunction>. Contact Databricks Support about alternate options.

PROTOBUF_MESSAGE_NOT_FOUND

SQLSTATE: 42K0G

Unable to locate Message <messageName> in Descriptor.

PROTOBUF_NOT_LOADED_SQL_FUNCTIONS_UNUSABLE

SQLSTATE: 22KD3

Cannot call the <functionName> SQL function because the Protobuf data source is not loaded.

Please restart your job or session with the ‘spark-protobuf’ package loaded, such as by using the --packages argument on the command line, and then retry your query or command again.

PROTOBUF_TYPE_NOT_SUPPORT

SQLSTATE: 42K0G

Protobuf type not yet supported: <protobufType>.

PS_FETCH_RETRY_EXCEPTION

SQLSTATE: 22000

Task in PubSub fetch stage cannot be retried. Partition <partitionInfo> in stage <stageInfo>, TID <taskId>.

PS_INVALID_EMPTY_OPTION

SQLSTATE: 42000

<key> cannot be an empty string.

PS_INVALID_KEY_TYPE

SQLSTATE: 22000

Invalid key type for PubSub dedup: <key>.

PS_INVALID_OPTION

SQLSTATE: 42000

The option <key> is not supported by PubSub. It can only be used in testing.

PS_INVALID_OPTION_TYPE

SQLSTATE: 42000

Invalid type for <key>. Expected type of <key> to be type <type>.

PS_INVALID_READ_LIMIT

SQLSTATE: 42000

Invalid read limit on PubSub stream: <limit>.

PS_INVALID_UNSAFE_ROW_CONVERSION_FROM_PROTO

SQLSTATE: 22000

Invalid UnsafeRow to decode to PubSubMessageMetadata, the desired proto schema is: <protoSchema>. The input UnsafeRow might be corrupted: <unsafeRow>.

PS_MISSING_AUTH_INFO

SQLSTATE: 42000

Failed to find complete PubSub authentication information.

PS_MISSING_REQUIRED_OPTION

SQLSTATE: 42000

Could not find required option: <key>.

PS_MOVING_CHECKPOINT_FAILURE

SQLSTATE: 22000

Failed to move raw data checkpoint files from <src> to the destination directory: <dest>.

PS_MULTIPLE_FAILED_EPOCHS

SQLSTATE: 22000

PubSub stream cannot be started as there is more than one failed fetch: <failedEpochs>.

PS_OPTION_NOT_IN_BOUNDS

SQLSTATE: 22000

<key> must be within the following bounds (<min>, <max>) exclusive of both bounds.

PS_PROVIDE_CREDENTIALS_WITH_OPTION

SQLSTATE: 42000

Shared clusters do not support authentication with instance profiles. Provide credentials to the stream directly using .option().

PS_SPARK_SPECULATION_NOT_SUPPORTED

SQLSTATE: 0A000

The PubSub source connector is only available on clusters with spark.speculation disabled.

PS_UNABLE_TO_CREATE_SUBSCRIPTION

SQLSTATE: 42000

An error occurred while trying to create subscription <subId> on topic <topicId>. Please check that there are sufficient permissions to create a subscription and try again.

PS_UNABLE_TO_PARSE_PROTO

SQLSTATE: 22000

Unable to parse serialized bytes to generate proto.

PS_UNSUPPORTED_GET_OFFSET_CALL

SQLSTATE: 0A000

getOffset is not supported without supplying a limit.

PYTHON_DATA_SOURCE_ERROR

SQLSTATE: 38000

Failed to <action> Python data source <type>: <msg>

PYTHON_STREAMING_DATA_SOURCE_RUNTIME_ERROR

SQLSTATE: 38000

Failed when the Python streaming data source performs <action>: <msg>

QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY

SQLSTATE: 428HD

Unable to access referenced table because a previously assigned column mask is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:

For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_COLUMN_MASK_POLICY

QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY

SQLSTATE: 428HD

Unable to access referenced table because a previously assigned row level security policy is currently incompatible with the table schema; to continue, please contact the owner of the table to update the policy:

For more details see QUERIED_TABLE_INCOMPATIBLE_WITH_ROW_LEVEL_SECURITY_POLICY

READ_CURRENT_FILE_NOT_FOUND

SQLSTATE: 42K03

<message>

It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running ‘REFRESH TABLE tableName’ command in SQL or by recreating the Dataset/DataFrame involved.

READ_FILES_AMBIGUOUS_ROUTINE_PARAMETERS

SQLSTATE: 4274K

The invocation of function <functionName> has <parameterName> and <alternativeName> set, which are aliases of each other. Please set only one of them.

READ_TVF_UNEXPECTED_REQUIRED_PARAMETER

SQLSTATE: 4274K

The required parameter <parameterName> of function <functionName> must be assigned at position <expectedPos> without the name.

RECIPIENT_EXPIRATION_NOT_SUPPORTED

SQLSTATE: 0A000

Only TIMESTAMP/TIMESTAMP_LTZ/TIMESTAMP_NTZ types are supported for recipient expiration timestamp.

RECURSIVE_PROTOBUF_SCHEMA

SQLSTATE: 42K0G

Found a recursive reference in the Protobuf schema, which cannot be processed by Spark by default: <fieldDescriptor>. Try setting the option recursive.fields.max.depth to a value from 1 to 10. Going beyond 10 levels of recursion is not allowed.

RECURSIVE_VIEW

SQLSTATE: 42K0H

Recursive view <viewIdent> detected (cycle: <newPath>).

REF_DEFAULT_VALUE_IS_NOT_ALLOWED_IN_PARTITION

SQLSTATE: 42601

References to DEFAULT column values are not allowed within the PARTITION clause.

RELATION_LARGER_THAN_8G

SQLSTATE: 54000

Can not build a <relationName> that is larger than 8G.

REMOTE_FUNCTION_HTTP_FAILED_ERROR

SQLSTATE: 57012

The remote HTTP request failed with code <errorCode>, and error message <errorMessage>

REMOTE_FUNCTION_HTTP_RESULT_PARSE_ERROR

SQLSTATE: 22032

Failed to evaluate the <functionName> SQL function due to inability to parse the JSON result from the remote HTTP response; the error message is <errorMessage>. Check API documentation: <docUrl>. Please fix the problem indicated in the error message and retry the query again.

REMOTE_FUNCTION_HTTP_RESULT_UNEXPECTED_ERROR

SQLSTATE: 57012

Failed to evaluate the <functionName> SQL function due to inability to process the unexpected remote HTTP response; the error message is <errorMessage>. Check API documentation: <docUrl>. Please fix the problem indicated in the error message and retry the query again.

REMOTE_FUNCTION_HTTP_RETRY_TIMEOUT

SQLSTATE: 57012

The remote request failed after retrying <N> times; the last failed HTTP error code was <errorCode> and the message was <errorMessage>

REMOTE_FUNCTION_MISSING_REQUIREMENTS_ERROR

SQLSTATE: 57012

Failed to evaluate the <functionName> SQL function because <errorMessage>. Check requirements in <docUrl>. Please fix the problem indicated in the error message and retry the query again.

RENAME_SRC_PATH_NOT_FOUND

SQLSTATE: 42K03

Failed to rename as <sourcePath> was not found.

REPEATED_CLAUSE

SQLSTATE: 42614

The <clause> clause may be used at most once per <operation> operation.

REQUIRED_PARAMETER_ALREADY_PROVIDED_POSITIONALLY

SQLSTATE: 4274K

The routine <routineName> required parameter <parameterName> has been assigned at position <positionalIndex> without the name.

Please update the function call to either remove the named argument with <parameterName> for this parameter or remove the positional argument at <positionalIndex> and then try the query again.
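
As an illustration only, with a hypothetical routine area whose first parameter is width, the first call below assigns width both positionally and by name; keeping a single assignment resolves the error:

    SELECT area(3, width => 3);   -- error: width assigned at position 0 and again by name
    SELECT area(3, height => 4);  -- fixed: each parameter assigned exactly once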

REQUIRED_PARAMETER_NOT_FOUND

SQLSTATE: 4274K

Cannot invoke routine <routineName> because the parameter named <parameterName> is required, but the routine call did not supply a value. Please update the routine call to supply an argument value (either positionally at index <index> or by name) and retry the query again.

REQUIRES_SINGLE_PART_NAMESPACE

SQLSTATE: 42K05

<sessionCatalog> requires a single-part namespace, but got <namespace>.

RESCUED_DATA_COLUMN_CONFLICT_WITH_SINGLE_VARIANT

SQLSTATE: 4274K

The ‘rescuedDataColumn’ DataFrame API reader option is mutually exclusive with the ‘singleVariantColumn’ DataFrame API option.

Please remove one of them and then retry the DataFrame operation again.

RESERVED_CDC_COLUMNS_ON_WRITE

SQLSTATE: 42939

The write contains reserved columns <columnList> that are used internally as metadata for Change Data Feed. To write to the table, either rename/drop these columns or disable Change Data Feed on the table by setting <config> to false.
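
A minimal sketch of the second remedy, assuming the <config> in question is delta.enableChangeDataFeed and a hypothetical table name:

    ALTER TABLE target_table SET TBLPROPERTIES (delta.enableChangeDataFeed = false);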

RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED

SQLSTATE: 0A000

The option <option> has restricted values on Shared clusters for the <source> source.

For more details see RESTRICTED_STREAMING_OPTION_PERMISSION_ENFORCED

ROUTINE_ALREADY_EXISTS

SQLSTATE: 42723

Cannot create the <newRoutineType> <routineName> because a <existingRoutineType> of that name already exists.

Choose a different name, drop or replace the existing <existingRoutineType>, or add the IF NOT EXISTS clause to tolerate a pre-existing <newRoutineType>.
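
For example, with a hypothetical SQL function name, either form tolerates or replaces an existing routine:

    CREATE FUNCTION IF NOT EXISTS main.default.add_one(x INT) RETURNS INT RETURN x + 1;
    CREATE OR REPLACE FUNCTION main.default.add_one(x INT) RETURNS INT RETURN x + 1;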

ROUTINE_NOT_FOUND

SQLSTATE: 42883

The routine <routineName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP FUNCTION IF EXISTS.

ROUTINE_PARAMETER_NOT_FOUND

SQLSTATE: 42000

The routine <routineName> does not support the parameter <parameterName> specified at position <pos>.<suggestion>

ROUTINE_USES_SYSTEM_RESERVED_CLASS_NAME

SQLSTATE: 42939

The function <routineName> cannot be created because the specified classname ‘<className>’ is reserved for system use. Please rename the class and try again.

ROW_LEVEL_SECURITY_CHECK_CONSTRAINT_UNSUPPORTED

SQLSTATE: 0A000

Creating CHECK constraint on table <tableName> with row level security policies is not supported.

ROW_LEVEL_SECURITY_DUPLICATE_COLUMN_NAME

SQLSTATE: 42734

A <statementType> statement attempted to assign a row level security policy to a table, but two or more referenced columns had the same name <columnName>, which is invalid.

ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED

SQLSTATE: 0A000

Row level security policies for <tableName> are not supported:

For more details see ROW_LEVEL_SECURITY_FEATURE_NOT_SUPPORTED

ROW_LEVEL_SECURITY_INCOMPATIBLE_SCHEMA_CHANGE

SQLSTATE: 0A000

Unable to <statementType> <columnName> from table <tableName> because it’s referenced in a row level security policy. The table owner must remove or alter this policy before proceeding.

ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_SOURCE

SQLSTATE: 0A000

MERGE INTO operations do not support row level security policies in source table <tableName>.

ROW_LEVEL_SECURITY_MERGE_UNSUPPORTED_TARGET

SQLSTATE: 0A000

MERGE INTO operations do not support writing into table <tableName> with row level security policies.

ROW_LEVEL_SECURITY_MULTI_PART_COLUMN_NAME

SQLSTATE: 42K05

This statement attempted to assign a row level security policy to a table, but referenced column <columnName> had multiple name parts, which is invalid.

ROW_LEVEL_SECURITY_REQUIRE_UNITY_CATALOG

SQLSTATE: 0A000

Row level security policies are only supported in Unity Catalog.

ROW_LEVEL_SECURITY_SHOW_PARTITIONS_UNSUPPORTED

SQLSTATE: 0A000

SHOW PARTITIONS command is not supported for <format> tables with row level security policy.

ROW_LEVEL_SECURITY_TABLE_CLONE_SOURCE_NOT_SUPPORTED

SQLSTATE: 0A000

<mode> clone from table <tableName> with row level security policy is not supported.

ROW_LEVEL_SECURITY_TABLE_CLONE_TARGET_NOT_SUPPORTED

SQLSTATE: 0A000

<mode> clone to table <tableName> with row level security policy is not supported.

ROW_LEVEL_SECURITY_UNSUPPORTED_CONSTANT_AS_PARAMETER

SQLSTATE: 0AKD1

Using a constant as a parameter in a row level security policy is not supported. Please update your SQL command to remove the constant from the row filter definition and then retry the command again.

ROW_LEVEL_SECURITY_UNSUPPORTED_PROVIDER

SQLSTATE: 0A000

Failed to execute <statementType> command because assigning row level security policy is not supported for target data source with table provider: “<provider>”.

ROW_SUBQUERY_TOO_MANY_ROWS

SQLSTATE: 21000

More than one row returned by a subquery used as a row.

ROW_VALUE_IS_NULL

SQLSTATE: 22023

Found NULL in a row at the index <index>, expected a non-NULL value.

RULE_ID_NOT_FOUND

SQLSTATE: 22023

Could not find an ID for the rule name “<ruleName>”. Please modify RuleIdCollection.scala if you are adding a new rule.

SAMPLE_TABLE_PERMISSIONS

SQLSTATE: 42832

Permissions not supported on sample databases/tables.

SCALAR_FUNCTION_NOT_COMPATIBLE

SQLSTATE: 42K0O

ScalarFunction <scalarFunc> does not override method ‘produceResult(InternalRow)’ with a custom implementation.

SCALAR_FUNCTION_NOT_FULLY_IMPLEMENTED

SQLSTATE: 42K0P

ScalarFunction <scalarFunc> neither implements nor overrides method ‘produceResult(InternalRow)’.

SCALAR_SUBQUERY_IS_IN_GROUP_BY_OR_AGGREGATE_FUNCTION

SQLSTATE: 0A000

The correlated scalar subquery ‘<sqlExpr>’ is neither present in GROUP BY, nor in an aggregate function.

Add it to GROUP BY using ordinal position or wrap it in first() (or first_value) if you don’t care which value you get.

SCALAR_SUBQUERY_TOO_MANY_ROWS

SQLSTATE: 21000

More than one row returned by a subquery used as an expression.

SCHEDULE_ALREADY_EXISTS

SQLSTATE: 42710

Cannot add <scheduleType> to a table that already has <existingScheduleType>. Please drop the existing schedule or use ALTER TABLE … ALTER <scheduleType> … to alter it.

SCHEDULE_PERIOD_INVALID

SQLSTATE: 22003

The schedule period for <timeUnit> must be an integer value between 1 and <upperBound> (inclusive). Received: <actual>.

SCHEMA_ALREADY_EXISTS

SQLSTATE: 42P06

Cannot create schema <schemaName> because it already exists.

Choose a different name, drop the existing schema, or add the IF NOT EXISTS clause to tolerate pre-existing schema.
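
A minimal sketch with a hypothetical schema name:

    CREATE SCHEMA IF NOT EXISTS main.analytics;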

SCHEMA_NOT_EMPTY

SQLSTATE: 2BP01

Cannot drop a schema <schemaName> because it contains objects.

Use DROP SCHEMA … CASCADE to drop the schema and all its objects.

SCHEMA_NOT_FOUND

SQLSTATE: 42704

The schema <schemaName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog.

To tolerate the error on drop use DROP SCHEMA IF EXISTS.
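
For example, with a hypothetical schema name:

    SELECT current_schema();               -- verify where the unqualified lookup happens
    DROP SCHEMA IF EXISTS main.analytics;  -- tolerate a missing schema on drop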

SCHEMA_REGISTRY_CONFIGURATION_ERROR

SQLSTATE: 42K0G

Schema from schema registry could not be initialized. <reason>.

SECOND_FUNCTION_ARGUMENT_NOT_INTEGER

SQLSTATE: 22023

The second argument of <functionName> function needs to be an integer.

SECRET_FUNCTION_INVALID_LOCATION

SQLSTATE: 42K0E

Cannot execute <commandType> command with one or more non-encrypted references to the SECRET function; please encrypt the result of each such function call with AES_ENCRYPT and try the command again

SEED_EXPRESSION_IS_UNFOLDABLE

SQLSTATE: 42K08

The seed expression <seedExpr> of the expression <exprWithSeed> must be foldable.

SERVER_IS_BUSY

SQLSTATE: 08KD1

The server is busy and could not handle the request. Please wait a moment and try again.

SHOW_COLUMNS_WITH_CONFLICT_NAMESPACE

SQLSTATE: 42K05

SHOW COLUMNS with conflicting namespaces: <namespaceA> != <namespaceB>.

SORT_BY_WITHOUT_BUCKETING

SQLSTATE: 42601

sortBy must be used together with bucketBy.

SPARK_JOB_CANCELLED

SQLSTATE: HY008

Job <jobId> cancelled <reason>

SPECIFY_BUCKETING_IS_NOT_ALLOWED

SQLSTATE: 42601

A CREATE TABLE without explicit column list cannot specify bucketing information.

Please use the form with explicit column list and specify bucketing information.

Alternatively, allow bucketing information to be inferred by omitting the clause.

SPECIFY_CLUSTER_BY_WITH_BUCKETING_IS_NOT_ALLOWED

SQLSTATE: 42908

Cannot specify both CLUSTER BY and CLUSTERED BY INTO BUCKETS.

SPECIFY_CLUSTER_BY_WITH_PARTITIONED_BY_IS_NOT_ALLOWED

SQLSTATE: 42908

Cannot specify both CLUSTER BY and PARTITIONED BY.

SPECIFY_PARTITION_IS_NOT_ALLOWED

SQLSTATE: 42601

A CREATE TABLE without explicit column list cannot specify PARTITIONED BY.

Please use the form with explicit column list and specify PARTITIONED BY.

Alternatively, allow partitioning to be inferred by omitting the PARTITION BY clause.

SQL_CONF_NOT_FOUND

SQLSTATE: 42K0I

The SQL config <sqlConf> cannot be found. Please verify that the config exists.

STAGING_PATH_CURRENTLY_INACCESSIBLE

SQLSTATE: 22000

Transient error while accessing target staging path <path>, please try in a few minutes

STAR_GROUP_BY_POS

SQLSTATE: 0A000

Star (*) is not allowed in a select list when GROUP BY an ordinal position is used.

STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_HANDLE_STATE

SQLSTATE: 42802

Failed to perform stateful processor operation=<operationType> with invalid handle state=<handleState>.

STATEFUL_PROCESSOR_CANNOT_PERFORM_OPERATION_WITH_INVALID_TIME_MODE

SQLSTATE: 42802

Failed to perform stateful processor operation=<operationType> with invalid timeMode=<timeMode>

STATEFUL_PROCESSOR_DUPLICATE_STATE_VARIABLE_DEFINED

SQLSTATE: 42802

State variable with name <stateVarName> has already been defined in the StatefulProcessor.

STATEFUL_PROCESSOR_INCORRECT_TIME_MODE_TO_ASSIGN_TTL

SQLSTATE: 42802

Cannot use TTL for state=<stateName> in timeMode=<timeMode>, use TimeMode.ProcessingTime() instead.

STATEFUL_PROCESSOR_TTL_DURATION_MUST_BE_POSITIVE

SQLSTATE: 42802

TTL duration must be greater than zero for State store operation=<operationType> on state=<stateName>.

STATEFUL_PROCESSOR_UNKNOWN_TIME_MODE

SQLSTATE: 42802

Unknown time mode <timeMode>. Accepted time modes are ‘none’, ‘processingTime’, and ‘eventTime’.

STATE_STORE_CANNOT_CREATE_COLUMN_FAMILY_WITH_RESERVED_CHARS

SQLSTATE: 42802

Failed to create column family with unsupported starting character and name=<colFamilyName>.

STATE_STORE_CANNOT_USE_COLUMN_FAMILY_WITH_INVALID_NAME

SQLSTATE: 42802

Failed to perform column family operation=<operationName> with invalid name=<colFamilyName>. Column family name cannot be empty or include leading/trailing spaces or use the reserved keyword=default

STATE_STORE_COLUMN_FAMILY_SCHEMA_INCOMPATIBLE

SQLSTATE: 42802

Incompatible schema transformation with column family=<colFamilyName>, oldSchema=<oldSchema>, newSchema=<newSchema>.

STATE_STORE_HANDLE_NOT_INITIALIZED

SQLSTATE: 42802

The handle has not been initialized for this StatefulProcessor.

Please only use the StatefulProcessor within the transformWithState operator.

STATE_STORE_INCORRECT_NUM_ORDERING_COLS_FOR_RANGE_SCAN

SQLSTATE: 42802

Incorrect number of ordering ordinals=<numOrderingCols> for range scan encoder. The number of ordering ordinals cannot be zero or greater than the number of schema columns.

STATE_STORE_INCORRECT_NUM_PREFIX_COLS_FOR_PREFIX_SCAN

SQLSTATE: 42802

Incorrect number of prefix columns=<numPrefixCols> for prefix scan encoder. The number of prefix columns cannot be zero, or greater than or equal to the number of schema columns.

STATE_STORE_INVALID_CONFIG_AFTER_RESTART

SQLSTATE: 42K06

Cannot change <configName> from <oldConfig> to <newConfig> between restarts. Please set <configName> to <oldConfig>, or restart with a new checkpoint directory.

STATE_STORE_INVALID_PROVIDER

SQLSTATE: 42K06

The given State Store Provider <inputClass> does not extend org.apache.spark.sql.execution.streaming.state.StateStoreProvider.

STATE_STORE_INVALID_VARIABLE_TYPE_CHANGE

SQLSTATE: 42K06

Cannot change <stateVarName> to <newType> between query restarts. Please set <stateVarName> to <oldType>, or restart with a new checkpoint directory.

STATE_STORE_NULL_TYPE_ORDERING_COLS_NOT_SUPPORTED

SQLSTATE: 42802

Null type ordering column with name=<fieldName> at index=<index> is not supported for range scan encoder.

STATE_STORE_PROVIDER_DOES_NOT_SUPPORT_FINE_GRAINED_STATE_REPLAY

SQLSTATE: 42K06

The given State Store Provider <inputClass> does not extend org.apache.spark.sql.execution.streaming.state.SupportsFineGrainedReplay.

Therefore, it does not support option snapshotStartBatchId or readChangeFeed in state data source.

STATE_STORE_UNSUPPORTED_OPERATION_ON_MISSING_COLUMN_FAMILY

SQLSTATE: 42802

State store operation=<operationType> not supported on missing column family=<colFamilyName>.

STATE_STORE_VARIABLE_SIZE_ORDERING_COLS_NOT_SUPPORTED

SQLSTATE: 42802

Variable size ordering column with name=<fieldName> at index=<index> is not supported for range scan encoder.

STATIC_PARTITION_COLUMN_IN_INSERT_COLUMN_LIST

SQLSTATE: 42713

Static partition column <staticName> is also specified in the column list.

STDS_COMMITTED_BATCH_UNAVAILABLE

SQLSTATE: KD006

No committed batch found, checkpoint location: <checkpointLocation>. Ensure that the query has run and committed any microbatch before stopping.

STDS_CONFLICT_OPTIONS

SQLSTATE: 42613

The options <options> cannot be specified together. Please specify only one of them.

STDS_FAILED_TO_READ_OPERATOR_METADATA

SQLSTATE: 42K03

Failed to read the operator metadata for checkpointLocation=<checkpointLocation> and batchId=<batchId>.

Either the file does not exist, or the file is corrupted.

Rerun the streaming query to construct the operator metadata, and report to the corresponding communities or vendors if the error persists.

STDS_FAILED_TO_READ_STATE_SCHEMA

SQLSTATE: 42K03

Failed to read the state schema. Either the file does not exist, or the file is corrupted. options: <sourceOptions>.

Rerun the streaming query to construct the state schema, and report to the corresponding communities or vendors if the error persists.

STDS_INVALID_OPTION_VALUE

SQLSTATE: 42616

Invalid value for source option ‘<optionName>’:

For more details see STDS_INVALID_OPTION_VALUE

STDS_NO_PARTITION_DISCOVERED_IN_STATE_STORE

SQLSTATE: KD006

The state does not have any partition. Please double check that the query points to a valid state. options: <sourceOptions>

STDS_OFFSET_LOG_UNAVAILABLE

SQLSTATE: KD006

The offset log for <batchId> does not exist, checkpoint location: <checkpointLocation>.

Please specify a batch ID that is available for querying; you can query the available batch IDs using the state metadata data source.

STDS_OFFSET_METADATA_LOG_UNAVAILABLE

SQLSTATE: KD006

Metadata is not available for offset log for <batchId>, checkpoint location: <checkpointLocation>.

The checkpoint seems to have been run only with older Spark version(s). Run the streaming query with a recent Spark version, so that Spark constructs the state metadata.

STDS_REQUIRED_OPTION_UNSPECIFIED

SQLSTATE: 42601

‘<optionName>’ must be specified.

STREAMING_AQE_NOT_SUPPORTED_FOR_STATEFUL_OPERATORS

SQLSTATE: 0A000

Adaptive Query Execution is not supported for stateful operators in Structured Streaming.

STREAMING_FROM_MATERIALIZED_VIEW

SQLSTATE: 0A000

Cannot stream from materialized view <viewName>. Streaming from materialized views is not supported.

STREAMING_OUTPUT_MODE

SQLSTATE: 42KDE

Invalid streaming output mode: <outputMode>.

For more details see STREAMING_OUTPUT_MODE

STREAMING_REAL_TIME_MODE

SQLSTATE: 0A000

Streaming real-time mode has the following limitation:

For more details see STREAMING_REAL_TIME_MODE

STREAMING_STATEFUL_OPERATOR_NOT_MATCH_IN_STATE_METADATA

SQLSTATE: 42K03

The streaming stateful operator name does not match the operator in the state metadata. This is likely to happen when a user adds, removes, or changes the stateful operators of an existing streaming query.

Stateful operators in the metadata: [<OpsInMetadataSeq>]; Stateful operators in current batch: [<OpsInCurBatchSeq>].

STREAMING_TABLE_NEEDS_REFRESH

SQLSTATE: 55019

Streaming table <tableName> needs to be refreshed to execute <operation>.

If the table is created from DBSQL, please run REFRESH <st>.

If the table is created by a pipeline in Delta Live Tables, please run a pipeline update.

STREAMING_TABLE_NOT_SUPPORTED

SQLSTATE: 56038

Streaming tables can only be created and refreshed in Delta Live Tables and Databricks SQL Warehouses.

STREAMING_TABLE_OPERATION_NOT_ALLOWED

SQLSTATE: 42601

The operation <operation> is not allowed:

For more details see STREAMING_TABLE_OPERATION_NOT_ALLOWED

STREAMING_TABLE_QUERY_INVALID

SQLSTATE: 42000

Streaming table <tableName> can only be created from a streaming query. Please add the STREAM keyword to your FROM clause to turn this relation into a streaming query.
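
A hedged sketch with hypothetical table names:

    CREATE OR REFRESH STREAMING TABLE events_st AS
    SELECT * FROM STREAM(raw_events);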

STREAM_NOT_FOUND_FOR_KINESIS_SOURCE

SQLSTATE: 42K02

Kinesis stream <streamName> in <region> not found.

Please start a new query pointing to the correct stream name.

STRUCT_ARRAY_LENGTH_MISMATCH

SQLSTATE: 2201E

The input row doesn’t have the expected number of values required by the schema. <expected> fields are required while <actual> values are provided.

SUM_OF_LIMIT_AND_OFFSET_EXCEEDS_MAX_INT

SQLSTATE: 22003

The sum of the LIMIT clause and the OFFSET clause must not be greater than the maximum 32-bit integer value (2,147,483,647) but found limit = <limit>, offset = <offset>.

SYNC_METADATA_DELTA_ONLY

SQLSTATE: 0AKDD

The repair table sync metadata command is only supported for Delta tables.

SYNC_SRC_TARGET_TBL_NOT_SAME

SQLSTATE: 42KD2

Source table name <srcTable> must be the same as destination table name <destTable>.

SYNTAX_DISCONTINUED

SQLSTATE: 42601

Support of the clause or keyword: <clause> has been discontinued in this context.

For more details see SYNTAX_DISCONTINUED

TABLE_OR_VIEW_ALREADY_EXISTS

SQLSTATE: 42P07

Cannot create table or view <relationName> because it already exists.

Choose a different name, drop the existing object, add the IF NOT EXISTS clause to tolerate pre-existing objects, add the OR REPLACE clause to replace the existing materialized view, or add the OR REFRESH clause to refresh the existing streaming table.
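
For example, with a hypothetical table name:

    CREATE TABLE IF NOT EXISTS main.default.events (id INT);  -- tolerate a pre-existing table
    CREATE OR REPLACE TABLE main.default.events (id INT);     -- or replace it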

TABLE_OR_VIEW_NOT_FOUND

SQLSTATE: 42P01

The table or view <relationName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.

For more details see TABLE_OR_VIEW_NOT_FOUND

TABLE_VALUED_ARGUMENTS_NOT_YET_IMPLEMENTED_FOR_SQL_FUNCTIONS

SQLSTATE: 0A000

Cannot <action> SQL user-defined function <functionName> with TABLE arguments because this functionality is not yet implemented.

TABLE_VALUED_FUNCTION_FAILED_TO_ANALYZE_IN_PYTHON

SQLSTATE: 38000

Failed to analyze the Python user defined table function: <msg>

TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INCOMPATIBLE_WITH_CALL

SQLSTATE: 22023

Failed to evaluate the table function <functionName> because its table metadata <requestedMetadata>, but the function call <invalidFunctionCallProperty>.

TABLE_VALUED_FUNCTION_REQUIRED_METADATA_INVALID

SQLSTATE: 22023

Failed to evaluate the table function <functionName> because its table metadata was invalid; <reason>.

TABLE_VALUED_FUNCTION_TOO_MANY_TABLE_ARGUMENTS

SQLSTATE: 54023

There are too many table arguments for the table-valued function.

It allows one table argument, but got: <num>.

If you want to allow it, please set “spark.sql.allowMultipleTableArguments.enabled” to “true”
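
The escape hatch named in the message can be set for the current session:

    SET spark.sql.allowMultipleTableArguments.enabled = true;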

TABLE_WITH_ID_NOT_FOUND

SQLSTATE: 42P01

Table with ID <tableId> cannot be found. Verify the correctness of the UUID.

TASK_WRITE_FAILED

SQLSTATE: 58030

Task failed while writing rows to <path>.

TEMP_TABLE_OR_VIEW_ALREADY_EXISTS

SQLSTATE: 42P07

Cannot create the temporary view <relationName> because it already exists.

Choose a different name, drop or replace the existing view, or add the IF NOT EXISTS clause to tolerate pre-existing views.

TEMP_VIEW_NAME_TOO_MANY_NAME_PARTS

SQLSTATE: 428EK

CREATE TEMPORARY VIEW or the corresponding Dataset APIs only accept single-part view names, but got: <actualName>.

TRAILING_COMMA_IN_SELECT

SQLSTATE: 42601

Trailing comma detected in SELECT clause. Remove the trailing comma before the FROM clause.
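
For example, with a hypothetical table t:

    SELECT id, name, FROM t;  -- error: trailing comma before FROM
    SELECT id, name FROM t;   -- fixed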

TRIGGER_INTERVAL_INVALID

SQLSTATE: 22003

The trigger interval must be a positive duration that can be converted into whole seconds. Received: <actual> seconds.

UC_BUCKETED_TABLES

SQLSTATE: 0AKUC

Bucketed tables are not supported in Unity Catalog.

UC_CATALOG_NAME_NOT_PROVIDED

SQLSTATE: 3D000

For Unity Catalog, please specify the catalog name explicitly. E.g. SHOW GRANT your.address@email.com ON CATALOG main.

UC_COMMAND_NOT_SUPPORTED

SQLSTATE: 0AKUC

The command(s): <commandName> are not supported in Unity Catalog.

For more details see UC_COMMAND_NOT_SUPPORTED

UC_COMMAND_NOT_SUPPORTED_IN_SERVERLESS

SQLSTATE: 0AKUC

The command(s): <commandName> are not supported for Unity Catalog clusters in serverless. Use single user or shared clusters instead.

UC_COMMAND_NOT_SUPPORTED_IN_SHARED_ACCESS_MODE

SQLSTATE: 0AKUC

The command(s): <commandName> are not supported for Unity Catalog clusters in shared access mode. Use single user access mode instead.

UC_CREDENTIAL_PURPOSE_NOT_SUPPORTED

SQLSTATE: 0AKUC

The specified credential kind is not supported.

UC_DATASOURCE_NOT_SUPPORTED

SQLSTATE: 0AKUC

Data source format <dataSourceFormatName> is not supported in Unity Catalog.

UC_DATASOURCE_OPTIONS_NOT_SUPPORTED

SQLSTATE: 0AKUC

Data source options are not supported in Unity Catalog.

UC_EXTERNAL_VOLUME_MISSING_LOCATION

SQLSTATE: 42601

The LOCATION clause must be present for an external volume. Please check the syntax ‘CREATE EXTERNAL VOLUME … LOCATION …’ for creating an external volume.

UC_FAILED_PROVISIONING_STATE

SQLSTATE: 0AKUC

The query failed because it attempted to refer to table <tableName> but was unable to do so: <failureReason>. Please update the table <tableName> to ensure it is in an Active provisioning state and then retry the query again.

UC_FILE_SCHEME_FOR_TABLE_CREATION_NOT_SUPPORTED

SQLSTATE: 0AKUC

Creating table in Unity Catalog with file scheme <schemeName> is not supported.

Instead, please create a federated data source connection using the CREATE CONNECTION command for the same table provider, then create a catalog based on the connection with a CREATE FOREIGN CATALOG command to reference the tables therein.

UC_HIVE_METASTORE_FEDERATION_CROSS_CATALOG_VIEW_NOT_SUPPORTED

SQLSTATE: 56038

Hive Metastore Federation view does not support dependencies across multiple catalogs. View <view> in Hive Metastore Federation catalog must use dependency from hive_metastore or spark_catalog catalog but its dependency <dependency> is in another catalog <referencedCatalog>. Please update the dependencies to satisfy this constraint and then retry your query or command again.

UC_HIVE_METASTORE_FEDERATION_NOT_ENABLED

SQLSTATE: 0A000

Hive Metastore federation is not enabled on this cluster.

Accessing the catalog <catalogName> is not supported on this cluster.

UC_INVALID_DEPENDENCIES

SQLSTATE: 56098

Dependencies of <viewName> are recorded as <storedDeps> while being parsed as <parsedDeps>. This likely occurred through improper use of a non-SQL API. You can repair dependencies in Databricks Runtime by running ALTER VIEW <viewName> AS <viewText>.

UC_INVALID_NAMESPACE

SQLSTATE: 0AKUC

Nested or empty namespaces are not supported in Unity Catalog.

UC_INVALID_REFERENCE

SQLSTATE: 0AKUC

Non-Unity-Catalog object <name> can’t be referenced in Unity Catalog objects.

UC_LAKEHOUSE_FEDERATION_WRITES_NOT_ALLOWED

SQLSTATE: 56038

Unity Catalog Lakehouse Federation write support is not enabled for provider <provider> on this cluster.

UC_LOCATION_FOR_MANAGED_VOLUME_NOT_SUPPORTED

SQLSTATE: 42601

Managed volume does not accept LOCATION clause. Please check the syntax ‘CREATE VOLUME …’ for creating a managed volume.

UC_NOT_ENABLED

SQLSTATE: 56038

Unity Catalog is not enabled on this cluster.

UC_QUERY_FEDERATION_NOT_ENABLED

SQLSTATE: 56038

Unity Catalog Query Federation is not enabled on this cluster.

UC_SERVICE_CREDENTIALS_NOT_ENABLED

SQLSTATE: 56038

Service credentials are not enabled on this cluster.

UC_VOLUMES_NOT_ENABLED

SQLSTATE: 56038

Support for Unity Catalog Volumes is not enabled on this instance.

UC_VOLUMES_SHARING_NOT_ENABLED

SQLSTATE: 56038

Support for Volume Sharing is not enabled on this instance.

UC_VOLUME_NOT_FOUND

SQLSTATE: 42704

Volume <name> does not exist. Please use ‘SHOW VOLUMES’ to list available volumes.

UDF_ERROR

SQLSTATE: none assigned

Execution of function <fn> failed

For more details see UDF_ERROR

UDF_LIMITS

SQLSTATE: 54KD0

One or more UDF limits were breached.

For more details see UDF_LIMITS

UDF_MAX_COUNT_EXCEEDED

SQLSTATE: 54KD0

Exceeded query-wide UDF limit of <maxNumUdfs> UDFs (limited during public preview). Found <numUdfs>. The UDFs were: <udfNames>.

UDF_PYSPARK_ERROR

SQLSTATE: 39000

Python worker exited unexpectedly

For more details see UDF_PYSPARK_ERROR

UDF_PYSPARK_UNSUPPORTED_TYPE

SQLSTATE: 0A000

PySpark UDF <udf> (<eval-type>) is not supported on clusters in Shared access mode.

UDF_PYSPARK_USER_CODE_ERROR

SQLSTATE: 39000

Execution failed.

For more details see UDF_PYSPARK_USER_CODE_ERROR

UDF_UNSUPPORTED_PARAMETER_DEFAULT_VALUE

SQLSTATE: 0A000

Parameter default value is not supported for user-defined <functionType> function.

UDF_USER_CODE_ERROR

SQLSTATE: 39000

Execution of function <fn> failed.

For more details see UDF_USER_CODE_ERROR

UDTF_ALIAS_NUMBER_MISMATCH

SQLSTATE: 42802

The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF.

Expected <aliasesSize> aliases, but got <aliasesNames>.

Please ensure that the number of aliases provided matches the number of columns output by the UDTF.

UDTF_INVALID_ALIAS_IN_REQUESTED_ORDERING_STRING_FROM_ANALYZE_METHOD

SQLSTATE: 42802

Failed to evaluate the user-defined table function because its ‘analyze’ method returned a requested OrderingColumn whose column name expression included an unnecessary alias <aliasName>; please remove this alias and then try the query again.

UDTF_INVALID_REQUESTED_SELECTED_EXPRESSION_FROM_ANALYZE_METHOD_REQUIRES_ALIAS

SQLSTATE: 42802

Failed to evaluate the user-defined table function because its ‘analyze’ method returned a requested ‘select’ expression (<expression>) that does not include a corresponding alias; please update the UDTF to specify an alias there and then try the query again.

UNABLE_TO_ACQUIRE_MEMORY

SQLSTATE: 53200

Unable to acquire <requestedBytes> bytes of memory, got <receivedBytes>.

UNABLE_TO_CONVERT_TO_PROTOBUF_MESSAGE_TYPE

SQLSTATE: 42K0G

Unable to convert SQL type <toType> to Protobuf type <protobufType>.

UNABLE_TO_FETCH_HIVE_TABLES

SQLSTATE: 58030

Unable to fetch tables of Hive database: <dbName>. Error Class Name: <className>.

UNABLE_TO_INFER_SCHEMA

SQLSTATE: 42KD9

Unable to infer schema for <format>. It must be specified manually.

UNAUTHORIZED_ACCESS

SQLSTATE: 42501

Unauthorized access:

<report>

UNBOUND_SQL_PARAMETER

SQLSTATE: 42P02

Found the unbound parameter: <name>. Please, fix args and provide a mapping of the parameter to either a SQL literal or collection constructor functions such as map(), array(), struct().
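
A minimal sketch, assuming a hypothetical table t and EXECUTE IMMEDIATE as the invocation path; every named parameter marker must receive a value:

    EXECUTE IMMEDIATE 'SELECT * FROM t WHERE id = :id' USING 42 AS id;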

UNCLOSED_BRACKETED_COMMENT

SQLSTATE: 42601

Found an unclosed bracketed comment. Please, append */ at the end of the comment.

UNEXPECTED_INPUT_TYPE

SQLSTATE: 42K09

Parameter <paramIndex> of function <functionName> requires the <requiredType> type, however <inputSql> has the type <inputType>.

UNEXPECTED_INPUT_TYPE_OF_NAMED_PARAMETER

SQLSTATE: 42K09

The <namedParamKey> parameter of function <functionName> requires the <requiredType> type, however <inputSql> has the type <inputType>.<hint>

UNEXPECTED_OPERATOR_IN_STREAMING_VIEW

SQLSTATE: 42KDD

Unexpected operator <op> in the CREATE VIEW statement as a streaming source.

A streaming view query must consist only of SELECT, WHERE, and UNION ALL operations.

UNEXPECTED_POSITIONAL_ARGUMENT

SQLSTATE: 4274K

Cannot invoke routine <routineName> because it contains positional argument(s) following the named argument assigned to <parameterName>; please rearrange them so the positional arguments come first and then retry the query again.

UNEXPECTED_SERIALIZER_FOR_CLASS

SQLSTATE: 42846

The class <className> has an unexpected expression serializer. Expects “STRUCT” or “IF” which returns “STRUCT” but found <expr>.

UNKNOWN_FIELD_EXCEPTION

SQLSTATE: KD003

Encountered <changeType> during parsing: <unknownFieldBlob>, which can be fixed by an automatic retry: <isRetryable>

For more details see UNKNOWN_FIELD_EXCEPTION

UNKNOWN_POSITIONAL_ARGUMENT

SQLSTATE: 4274K

The invocation of routine <routineName> contains an unknown positional argument <sqlExpr> at position <pos>. This is invalid.

UNKNOWN_PRIMITIVE_TYPE_IN_VARIANT

SQLSTATE: 22023

Unknown primitive type with id <id> was found in a variant value.

UNKNOWN_PROTOBUF_MESSAGE_TYPE

SQLSTATE: 42K0G

Attempting to treat <descriptorName> as a Message, but it was <containingType>.

UNPIVOT_REQUIRES_ATTRIBUTES

SQLSTATE: 42K0A

UNPIVOT requires all given <given> expressions to be columns when no <empty> expressions are given. These are not columns: [<expressions>].

UNPIVOT_REQUIRES_VALUE_COLUMNS

SQLSTATE: 42K0A

At least one value column needs to be specified for UNPIVOT; all columns were specified as IDs.
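
As a sketch with a hypothetical sales table, the quarter columns below land on the value side rather than all being treated as IDs:

    SELECT * FROM sales
    UNPIVOT (amount FOR quarter IN (q1, q2, q3, q4));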

UNPIVOT_VALUE_DATA_TYPE_MISMATCH

SQLSTATE: 42K09

Unpivot value columns must share a least common type, some types do not: [<types>].

UNPIVOT_VALUE_SIZE_MISMATCH

SQLSTATE: 428C4

All unpivot value columns must have the same size as there are value column names (<names>).

UNRECOGNIZED_PARAMETER_NAME

SQLSTATE: 4274K

Cannot invoke routine <routineName> because the routine call included a named argument reference for the argument named <argumentName>, but this routine does not include any signature containing an argument with this name. Did you mean one of the following? [<proposal>].

UNRECOGNIZED_SQL_TYPE

SQLSTATE: 42704

Unrecognized SQL type - name: <typeName>, id: <jdbcType>.

UNRECOGNIZED_STATISTIC

SQLSTATE: 42704

The statistic <stats> is not recognized. Valid statistics include count, count_distinct, approx_count_distinct, mean, stddev, min, max, and percentile values. Percentile must be a numeric value followed by ‘%’, within the range 0% to 100%.

UNRESOLVABLE_TABLE_VALUED_FUNCTION

SQLSTATE: 42883

Could not resolve <name> to a table-valued function.

Please make sure that <name> is defined as a table-valued function and that all required parameters are provided correctly.

If <name> is not defined, please create the table-valued function before using it.

For more information about defining table-valued functions, please refer to the Apache Spark documentation.

UNRESOLVED_ALL_IN_GROUP_BY

SQLSTATE: 42803

Cannot infer grouping columns for GROUP BY ALL based on the select clause. Please explicitly specify the grouping columns.

UNRESOLVED_COLUMN

SQLSTATE: 42703

A column, variable, or function parameter with name <objectName> cannot be resolved.

For more details see UNRESOLVED_COLUMN

UNRESOLVED_FIELD

SQLSTATE: 42703

A field with name <fieldName> cannot be resolved with the struct-type column <columnPath>.

For more details see UNRESOLVED_FIELD

UNRESOLVED_MAP_KEY

SQLSTATE: 42703

Cannot resolve column <objectName> as a map key. If the key is a string literal, add single quotes around it.

For more details see UNRESOLVED_MAP_KEY
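
For illustration, with a hypothetical map column map_col in table t:

    SELECT map_col[color] FROM t;    -- error: color is resolved as a column
    SELECT map_col['color'] FROM t;  -- fixed: the key is a string literal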

UNRESOLVED_ROUTINE

SQLSTATE: 42883

Cannot resolve routine <routineName> on search path <searchPath>.

For more details see UNRESOLVED_ROUTINE

UNRESOLVED_USING_COLUMN_FOR_JOIN

SQLSTATE: 42703

USING column <colName> cannot be resolved on the <side> side of the join. The <side>-side columns: [<suggestion>].

UNRESOLVED_VARIABLE

SQLSTATE: 42883

Cannot resolve variable <variableName> on search path <searchPath>.

UNSTRUCTURED_DATA_PROCESSING_UNSUPPORTED_FILE_FORMAT

SQLSTATE: 0A000

Unstructured file format <format> is not supported. Supported file formats are <supportedFormats>.

Please update the format from your <expr> expression to one of the supported formats and then retry the query again.

UNSTRUCTURED_DATA_PROCESSING_UNSUPPORTED_MODEL

SQLSTATE: 0A000

Unstructured model <model> is not supported. Supported models are <supportedModels>.

Please switch to one of the supported models and then retry the query again.

UNSUPPORTED_ADD_FILE

SQLSTATE: 0A000

The ADD FILE command is not supported.

For more details see UNSUPPORTED_ADD_FILE

UNSUPPORTED_ARROWTYPE

SQLSTATE: 0A000

Unsupported arrow type <typeName>.

UNSUPPORTED_BATCH_TABLE_VALUED_FUNCTION

SQLSTATE: 42000

The function <funcName> does not support batch queries.

UNSUPPORTED_CALL

SQLSTATE: 0A000

Cannot call the method “<methodName>” of the class “<className>”.

For more details see UNSUPPORTED_CALL

UNSUPPORTED_CHAR_OR_VARCHAR_AS_STRING

SQLSTATE: 0A000

The char/varchar type can’t be used in the table schema.

If you want Spark to treat them as string type, as in Spark 3.0 and earlier, please set “spark.sql.legacy.charVarcharAsString” to “true”.
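
The legacy behavior can be restored for the session as follows:

    SET spark.sql.legacy.charVarcharAsString = true;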

UNSUPPORTED_CLAUSE_FOR_OPERATION

SQLSTATE: 0A000

The <clause> is not supported for <operation>.

UNSUPPORTED_COLLATION

SQLSTATE: 0A000

Collation <collationName> is not supported for:

For more details see UNSUPPORTED_COLLATION

UNSUPPORTED_COMMON_ANCESTOR_LOC_FOR_FILE_STREAM_SOURCE

SQLSTATE: 42616

The common ancestor of source path and sourceArchiveDir should be registered with UC.

If you see this error message, it’s likely that you registered the source path and sourceArchiveDir in different external locations.

Please put them into a single external location.

UNSUPPORTED_CONSTRAINT_CLAUSES

SQLSTATE: 0A000

Constraint clauses <clauses> are unsupported.

UNSUPPORTED_CONSTRAINT_TYPE

SQLSTATE: 42000

Unsupported constraint type. Only <supportedConstraintTypes> are supported.

UNSUPPORTED_DATASOURCE_FOR_DIRECT_QUERY

SQLSTATE: 0A000

Unsupported data source type for direct query on files: <dataSourceType>

UNSUPPORTED_DATATYPE

SQLSTATE: 0A000

Unsupported data type <typeName>.

UNSUPPORTED_DATA_SOURCE_SAVE_MODE

SQLSTATE: 0A000

The data source “<source>” cannot be written in the <createMode> mode. Please use either the “Append” or “Overwrite” mode instead.

UNSUPPORTED_DATA_TYPE_FOR_DATASOURCE

SQLSTATE: 0A000

The <format> datasource doesn’t support the column <columnName> of the type <columnType>.

UNSUPPORTED_DATA_TYPE_FOR_ENCODER

SQLSTATE: 0A000

Cannot create encoder for <dataType>. Please use a different output data type for your UDF or DataFrame.

UNSUPPORTED_DEFAULT_VALUE

SQLSTATE: 0A000

DEFAULT column values are not supported.

For more details see UNSUPPORTED_DEFAULT_VALUE

UNSUPPORTED_DESERIALIZER

SQLSTATE: 0A000

The deserializer is not supported:

For more details see UNSUPPORTED_DESERIALIZER

UNSUPPORTED_EXPRESSION_GENERATED_COLUMN

SQLSTATE: 42621

Cannot create generated column <fieldName> with generation expression <expressionStr> because <reason>.

UNSUPPORTED_EXPR_FOR_OPERATOR

SQLSTATE: 42K0E

A query operator contains one or more unsupported expressions.

Consider rewriting it to avoid window functions, aggregate functions, and generator functions in the WHERE clause.

Invalid expressions: [<invalidExprSqls>]

UNSUPPORTED_EXPR_FOR_PARAMETER

SQLSTATE: 42K0E

A query parameter contains an unsupported expression.

Parameters can either be variables or literals.

Invalid expression: [<invalidExprSql>]

UNSUPPORTED_EXPR_FOR_WINDOW

SQLSTATE: 42P20

Expression <sqlExpr> not supported within a window function.

UNSUPPORTED_FEATURE

SQLSTATE: 0A000

The feature is not supported:

For more details see UNSUPPORTED_FEATURE

UNSUPPORTED_FN_TYPE

SQLSTATE: 0A000

Unsupported user defined function type: <language>

UNSUPPORTED_GENERATOR

SQLSTATE: 42K0E

The generator is not supported:

For more details see UNSUPPORTED_GENERATOR

UNSUPPORTED_GROUPING_EXPRESSION

SQLSTATE: 42K0E

grouping()/grouping_id() can only be used with GroupingSets/Cube/Rollup.

UNSUPPORTED_INITIAL_POSITION_AND_TRIGGER_PAIR_FOR_KINESIS_SOURCE

SQLSTATE: 42616

<trigger> with initial position <initialPosition> is not supported with the Kinesis source

UNSUPPORTED_INSERT

SQLSTATE: 42809

Can’t insert into the target.

For more details see UNSUPPORTED_INSERT

UNSUPPORTED_JOIN_TYPE

SQLSTATE: 0A000

Unsupported join type ‘<typ>’. Supported join types include: <supported>.

UNSUPPORTED_MANAGED_TABLE_CREATION

SQLSTATE: 0AKDD

Creating a managed table <tableName> using datasource <dataSource> is not supported. You need to use datasource DELTA or create an external table using CREATE EXTERNAL TABLE <tableName> … USING <dataSource>

UNSUPPORTED_MERGE_CONDITION

SQLSTATE: 42K0E

MERGE operation contains unsupported <condName> condition.

For more details see UNSUPPORTED_MERGE_CONDITION

UNSUPPORTED_METRIC_VIEW_USAGE

SQLSTATE: 0A000

The current metric view usage is not supported.

For more details see UNSUPPORTED_METRIC_VIEW_USAGE

UNSUPPORTED_NESTED_ROW_OR_COLUMN_ACCESS_POLICY

SQLSTATE: 0A000

Table <tableName> has a row level security policy or column mask which indirectly refers to another table with a row level security policy or column mask; this is not supported. Call sequence: <callSequence>

UNSUPPORTED_OVERWRITE

SQLSTATE: 42902

Can’t overwrite the target that is also being read from.

For more details see UNSUPPORTED_OVERWRITE

UNSUPPORTED_PARTITION_TRANSFORM

SQLSTATE: 0A000

Unsupported partition transform: <transform>. The supported transforms are identity, bucket, and clusterBy. Ensure your transform expression uses one of these.

UNSUPPORTED_SAVE_MODE

SQLSTATE: 0A000

The save mode <saveMode> is not supported for:

For more details see UNSUPPORTED_SAVE_MODE

UNSUPPORTED_SHOW_CREATE_TABLE

SQLSTATE: 0A000

The SHOW CREATE TABLE command is not supported.

For more details see UNSUPPORTED_SHOW_CREATE_TABLE

UNSUPPORTED_SINGLE_PASS_ANALYZER_FEATURE

SQLSTATE: 0A000

The single-pass analyzer cannot process this query or command because it does not yet support <feature>.

UNSUPPORTED_STREAMING_OPERATOR_WITHOUT_WATERMARK

SQLSTATE: 0A000

<outputMode> output mode not supported for <statefulOperator> on streaming DataFrames/DataSets without watermark.

UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW

SQLSTATE: 0A000

Streaming a view is not supported. Reason:

For more details see UNSUPPORTED_STREAMING_OPTIONS_FOR_VIEW

UNSUPPORTED_STREAMING_OPTIONS_PERMISSION_ENFORCED

SQLSTATE: 0A000

Streaming options <options> are not supported for data source <source> on a shared cluster. Please confirm that the options are specified and spelled correctly, and check https://docs.databricks.com/en/compute/access-mode-limitations.html#streaming-limitations-and-requirements-for-unity-catalog-shared-access-mode for limitations.

UNSUPPORTED_STREAMING_SINK_PERMISSION_ENFORCED

SQLSTATE: 0A000

Data source <sink> is not supported as a streaming sink on a shared cluster.

UNSUPPORTED_STREAMING_SOURCE_PERMISSION_ENFORCED

SQLSTATE: 0A000

Data source <source> is not supported as a streaming source on a shared cluster.

UNSUPPORTED_STREAMING_TABLE_VALUED_FUNCTION

SQLSTATE: 42000

The function <funcName> does not support streaming. Please remove the STREAM keyword.

UNSUPPORTED_STREAM_READ_LIMIT_FOR_KINESIS_SOURCE

SQLSTATE: 0A000

<streamReadLimit> is not supported with the Kinesis source

UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY

SQLSTATE: 0A000

Unsupported subquery expression:

For more details see UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY

UNSUPPORTED_TIMESERIES_COLUMNS

SQLSTATE: 56038

Creating a primary key with timeseries columns is not supported.

UNSUPPORTED_TIMESERIES_WITH_MORE_THAN_ONE_COLUMN

SQLSTATE: 0A000

Creating a primary key with more than one timeseries column <colSeq> is not supported.

UNSUPPORTED_TRIGGER_FOR_KINESIS_SOURCE

SQLSTATE: 0A000

<trigger> is not supported with the Kinesis source

UNSUPPORTED_TYPED_LITERAL

SQLSTATE: 0A000

Literals of the type <unsupportedType> are not supported. Supported types are <supportedTypes>.

UNSUPPORTED_UDF_FEATURE

SQLSTATE: 0A000

The function <function> uses the following feature(s) that require a newer version of Databricks runtime: <features>. Please consult <docLink> for details.

UNTYPED_SCALA_UDF

SQLSTATE: 42K0E

You’re using an untyped Scala UDF, which does not have the input type information.

Spark may blindly pass null to the Scala closure with a primitive-type argument, and the closure will see the default value of the Java type for the null argument; e.g. with udf((x: Int) => x, IntegerType), the result is 0 for a null input. To get rid of this error, you could:

  1. use typed Scala UDF APIs (without return type parameter), e.g. udf((x: Int) => x).
  2. use Java UDF APIs, e.g. udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType), if input types are all non-primitive.
  3. set “spark.sql.legacy.allowUntypedScalaUDF” to “true” and use this API with caution.

UPGRADE_NOT_SUPPORTED

SQLSTATE: 0AKUC

Table is not eligible for upgrade from Hive Metastore to Unity Catalog. Reason:

For more details see UPGRADE_NOT_SUPPORTED

USER_DEFINED_FUNCTIONS

SQLSTATE: 42601

User defined function is invalid:

For more details see USER_DEFINED_FUNCTIONS

USER_RAISED_EXCEPTION

SQLSTATE: P0001

<errorMessage>

USER_RAISED_EXCEPTION_PARAMETER_MISMATCH

SQLSTATE: P0001

The raise_error() function was used to raise error class: <errorClass> which expects parameters: <expectedParms>.

The provided parameters <providedParms> do not match the expected parameters.

Please make sure to provide all expected parameters.

USER_RAISED_EXCEPTION_UNKNOWN_ERROR_CLASS

SQLSTATE: P0001

The raise_error() function was used to raise an unknown error class: <errorClass>

VARIABLE_ALREADY_EXISTS

SQLSTATE: 42723

Cannot create the variable <variableName> because it already exists.

Choose a different name, or drop or replace the existing variable.
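
A minimal sketch with a hypothetical variable name:

    DECLARE OR REPLACE VARIABLE my_var INT DEFAULT 0;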

VARIABLE_NOT_FOUND

SQLSTATE: 42883

The variable <variableName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema and catalog, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP VARIABLE IF EXISTS.

VARIANT_CONSTRUCTOR_SIZE_LIMIT

SQLSTATE: 22023

Cannot construct a Variant larger than 16 MiB. The maximum allowed size of a Variant value is 16 MiB.

VARIANT_DUPLICATE_KEY

SQLSTATE: 22023

Failed to build variant because of a duplicate object key <key>.

VARIANT_SIZE_LIMIT

SQLSTATE: 22023

Cannot build variant bigger than <sizeLimit> in <functionName>.

Please avoid large input strings to this expression (for example, add function call(s) to check the expression size and convert it to NULL first if it is too big).

VIEW_ALREADY_EXISTS

SQLSTATE: 42P07

Cannot create view <relationName> because it already exists.

Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

VIEW_EXCEED_MAX_NESTED_DEPTH

SQLSTATE: 54K00

The depth of view <viewName> exceeds the maximum view resolution depth (<maxNestedDepth>).

Analysis is aborted to avoid errors. If you want to work around this, please try to increase the value of “spark.sql.view.maxNestedViewDepth”.
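
For example (200 is an arbitrary illustrative value):

    SET spark.sql.view.maxNestedViewDepth = 200;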

VIEW_NOT_FOUND

SQLSTATE: 42P01

The view <relationName> cannot be found. Verify the spelling and correctness of the schema and catalog.

If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.

To tolerate the error on drop use DROP VIEW IF EXISTS.

VOLUME_ALREADY_EXISTS

SQLSTATE: 42000

Cannot create volume <relationName> because it already exists.

Choose a different name, drop or replace the existing object, or add the IF NOT EXISTS clause to tolerate pre-existing objects.

WINDOW_FUNCTION_AND_FRAME_MISMATCH

SQLSTATE: 42K0E

<funcName> function can only be evaluated in an ordered row-based window frame with a single offset: <windowExpr>.

WINDOW_FUNCTION_WITHOUT_OVER_CLAUSE

SQLSTATE: 42601

Window function <funcName> requires an OVER clause.

WITH_CREDENTIAL

SQLSTATE: 42601

WITH CREDENTIAL syntax is not supported for <type>.

WRITE_STREAM_NOT_ALLOWED

SQLSTATE: 42601

writeStream can be called only on streaming Dataset/DataFrame.

WRONG_COLUMN_DEFAULTS_FOR_DELTA_ALTER_TABLE_ADD_COLUMN_NOT_SUPPORTED

SQLSTATE: 0AKDC

Failed to execute the command because DEFAULT values are not supported when adding new columns to previously existing Delta tables; please add the column without a default value first, then run a second ALTER TABLE ALTER COLUMN SET DEFAULT command to apply for future inserted rows instead.
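
A sketch of the two-step workaround, with hypothetical table and column names:

    ALTER TABLE t ADD COLUMN c INT;              -- first add the column without a default
    ALTER TABLE t ALTER COLUMN c SET DEFAULT 0;  -- then set the default for future inserts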

WRONG_COLUMN_DEFAULTS_FOR_DELTA_FEATURE_NOT_ENABLED

SQLSTATE: 0AKDE

Failed to execute <commandType> command because it assigned a column DEFAULT value, but the corresponding table feature was not enabled. Please retry the command again after executing ALTER TABLE tableName SET TBLPROPERTIES(‘delta.feature.allowColumnDefaults’ = ‘supported’).
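
With a hypothetical table name, the enabling command from the message looks like:

    ALTER TABLE t SET TBLPROPERTIES ('delta.feature.allowColumnDefaults' = 'supported');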

WRONG_COMMAND_FOR_OBJECT_TYPE

SQLSTATE: 42809

The operation <operation> requires a <requiredType>. But <objectName> is a <foundType>. Use <alternative> instead.

WRONG_NUM_ARGS

SQLSTATE: 42605

The <functionName> requires <expectedNum> parameters but the actual number is <actualNum>.

For more details see WRONG_NUM_ARGS

XML_ROW_TAG_MISSING

SQLSTATE: 42KDF

<rowTag> option is required for reading files in XML format.

XML_UNSUPPORTED_NESTED_TYPES

SQLSTATE: 0N000

XML doesn’t support <innerDataType> as inner type of <dataType>. Please wrap the <innerDataType> within a StructType field when using it inside <dataType>.

XML_WILDCARD_RESCUED_DATA_CONFLICT_ERROR

SQLSTATE: 22023

Rescued data and wildcard column cannot be simultaneously enabled. Remove the wildcardColumnName option.

ZORDERBY_COLUMN_DOES_NOT_EXIST

SQLSTATE: 42703

ZOrderBy column <columnName> doesn’t exist.

Delta Lake

DELTA_ACTIVE_SPARK_SESSION_NOT_FOUND

SQLSTATE: 08003

Could not find active SparkSession

DELTA_ACTIVE_TRANSACTION_ALREADY_SET

SQLSTATE: 0B000

Cannot set a new txn as active when one is already active

DELTA_ADDING_COLUMN_WITH_INTERNAL_NAME_FAILED

SQLSTATE: 42000

Failed to add column <colName> because the name is reserved.

DELTA_ADDING_DELETION_VECTORS_DISALLOWED

SQLSTATE: 0A000

The current operation attempted to add a deletion vector to a table that does not permit the creation of new deletion vectors. Please file a bug report.

DELTA_ADDING_DELETION_VECTORS_WITH_TIGHT_BOUNDS_DISALLOWED

SQLSTATE: 42000

All operations that add deletion vectors should set the tightBounds column in statistics to false. Please file a bug report.

DELTA_ADD_COLUMN_AT_INDEX_LESS_THAN_ZERO

SQLSTATE: 42KD3

Index <columnIndex> to add column <columnName> is lower than 0

DELTA_ADD_COLUMN_PARENT_NOT_STRUCT

SQLSTATE: 42KD3

Cannot add <columnName> because its parent is not a StructType. Found <other>

DELTA_ADD_COLUMN_STRUCT_NOT_FOUND

SQLSTATE: 42KD3

Struct not found at position <position>

DELTA_ADD_CONSTRAINTS

SQLSTATE: 0A000

Please use ALTER TABLE ADD CONSTRAINT to add CHECK constraints.
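
For example, with hypothetical table and column names:

    ALTER TABLE t ADD CONSTRAINT positive_amount CHECK (amount > 0);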

DELTA_AGGREGATE_IN_GENERATED_COLUMN

SQLSTATE: 42621

Found <sqlExpr>. A generated column cannot use an aggregate expression

DELTA_AGGREGATION_NOT_SUPPORTED

SQLSTATE: 42903

Aggregate functions are not supported in the <operation> <predicate>.

DELTA_ALTER_COLLATION_NOT_SUPPORTED_BLOOM_FILTER

SQLSTATE: 428FR

Failed to change the collation of column <column> because it has a bloom filter index. Please either retain the existing collation or else drop the bloom filter index and then retry the command again to change the collation.

DELTA_ALTER_COLLATION_NOT_SUPPORTED_CLUSTER_BY

SQLSTATE: 428FR

Failed to change the collation of column <column> because it is a clustering column. Please either retain the existing collation or else change the column to a non-clustering column with an ALTER TABLE command and then retry the command again to change the collation.

DELTA_ALTER_TABLE_CHANGE_COL_NOT_SUPPORTED

SQLSTATE: 42837

ALTER TABLE CHANGE COLUMN is not supported for changing column <currentType> to <newType>

DELTA_ALTER_TABLE_CLUSTER_BY_NOT_ALLOWED

SQLSTATE: 42000

ALTER TABLE CLUSTER BY is supported only for Delta tables with Liquid clustering.

DELTA_ALTER_TABLE_CLUSTER_BY_ON_PARTITIONED_TABLE_NOT_ALLOWED

SQLSTATE: 42000

ALTER TABLE CLUSTER BY cannot be applied to a partitioned table.

DELTA_ALTER_TABLE_RENAME_NOT_ALLOWED

SQLSTATE: 42000

Operation not allowed: ALTER TABLE RENAME TO is not allowed for managed Delta tables on S3, as eventual consistency on S3 may corrupt the Delta transaction log. If you insist on doing so and are sure that there has never been a Delta table with the new name <newName> before, you can enable this by setting <key> to be true.

DELTA_ALTER_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED

SQLSTATE: 42000

Cannot enable <tableFeature> table feature using ALTER TABLE SET TBLPROPERTIES. Please use CREATE OR REPLACE TABLE CLUSTER BY to create a Delta table with clustering.

DELTA_AMBIGUOUS_DATA_TYPE_CHANGE

SQLSTATE: 429BQ

Cannot change data type of <column> from <from> to <to>. This change contains column removals and additions, therefore they are ambiguous. Please make these changes individually using ALTER TABLE [ADD | DROP | RENAME] COLUMN.

DELTA_AMBIGUOUS_PARTITION_COLUMN

SQLSTATE: 42702

Ambiguous partition column <column> can be <colMatches>.

DELTA_AMBIGUOUS_PATHS_IN_CREATE_TABLE

SQLSTATE: 42613

CREATE TABLE contains two different locations: <identifier> and <location>.

You can remove the LOCATION clause from the CREATE TABLE statement, or set <config> to true to skip this check.

DELTA_ARCHIVED_FILES_IN_LIMIT

SQLSTATE: 42KDC

Table <table> does not contain enough records in non-archived files to satisfy specified LIMIT of <limit> records.

DELTA_ARCHIVED_FILES_IN_SCAN

SQLSTATE: 42KDC

Found <numArchivedFiles> potentially archived file(s) in table <table> that need to be scanned as part of this query.

Archived files cannot be accessed. The current time until archival is configured as <archivalTime>.

Please adjust your query filters to exclude any archived files.

DELTA_BLOCK_COLUMN_MAPPING_AND_CDC_OPERATION

SQLSTATE: 42KD4

Operation “<opName>” is not allowed when the table has enabled change data feed (CDF) and has undergone schema changes using DROP COLUMN or RENAME COLUMN.

DELTA_BLOOM_FILTER_DROP_ON_NON_EXISTING_COLUMNS

SQLSTATE: 42703

Cannot drop bloom filter indices for the following non-existent column(s): <unknownColumns>

DELTA_BLOOM_FILTER_OOM_ON_WRITE

SQLSTATE: 82100

OutOfMemoryError occurred while writing bloom filter indices for the following column(s): <columnsWithBloomFilterIndices>.

You can reduce the memory footprint of bloom filter indices by choosing a smaller value for the ‘numItems’ option, a larger value for the ‘fpp’ option, or by indexing fewer columns.

DELTA_CANNOT_CHANGE_DATA_TYPE

SQLSTATE: 429BQ

Cannot change data type: <dataType>

DELTA_CANNOT_CHANGE_LOCATION

SQLSTATE: 42601

Cannot change the ‘location’ of the Delta table using SET TBLPROPERTIES. Please use ALTER TABLE SET LOCATION instead.

DELTA_CANNOT_CHANGE_PROVIDER

SQLSTATE: 42939

‘provider’ is a reserved table property, and cannot be altered.

DELTA_CANNOT_CREATE_BLOOM_FILTER_NON_EXISTING_COL

SQLSTATE: 42703

Cannot create bloom filter indices for the following non-existent column(s): <unknownCols>

DELTA_CANNOT_CREATE_LOG_PATH

SQLSTATE: 42KD5

Cannot create <path>

DELTA_CANNOT_DESCRIBE_VIEW_HISTORY

SQLSTATE: 42809

Cannot describe the history of a view.

DELTA_CANNOT_DROP_BLOOM_FILTER_ON_NON_INDEXED_COLUMN

SQLSTATE: 42703

Cannot drop a bloom filter index on a non-indexed column: <columnName>

DELTA_CANNOT_DROP_CHECK_CONSTRAINT_FEATURE

SQLSTATE: 0AKDE

Cannot drop the CHECK constraints table feature.

The following constraints must be dropped first: <constraints>.

DELTA_CANNOT_DROP_COLLATIONS_FEATURE

SQLSTATE: 0AKDE

Cannot drop the collations table feature.

Columns with non-default collations must be altered to use UTF8_BINARY first: <colNames>.

DELTA_CANNOT_EVALUATE_EXPRESSION

SQLSTATE: 0AKDC

Cannot evaluate expression: <expression>

DELTA_CANNOT_FIND_BUCKET_SPEC

SQLSTATE: 22000

Expecting a bucketed Delta table but cannot find the bucket spec in the table.

DELTA_CANNOT_GENERATE_CODE_FOR_EXPRESSION

SQLSTATE: 0AKDC

Cannot generate code for expression: <expression>

DELTA_CANNOT_MODIFY_APPEND_ONLY

SQLSTATE: 42809

This table is configured to only allow appends. If you would like to permit updates or deletes, use ‘ALTER TABLE <table_name> SET TBLPROPERTIES (<config>=false)’.
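
A sketch, assuming the <config> in question is delta.appendOnly and a hypothetical table name:

    ALTER TABLE t SET TBLPROPERTIES (delta.appendOnly = false);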

DELTA_CANNOT_MODIFY_COORDINATED_COMMITS_DEPENDENCIES

SQLSTATE: 42616

<Command> cannot override or unset in-commit timestamp table properties because coordinated commits is enabled in this table and depends on them. Please remove them (“delta.enableInCommitTimestamps”, “delta.inCommitTimestampEnablementVersion”, “delta.inCommitTimestampEnablementTimestamp”) from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_MODIFY_TABLE_PROPERTY

SQLSTATE: 42939

The Delta table configuration <prop> cannot be specified by the user

DELTA_CANNOT_OVERRIDE_COORDINATED_COMMITS_CONFS

SQLSTATE: 42616

<Command> cannot override coordinated commits configurations for an existing target table. Please remove them (“delta.coordinatedCommits.commitCoordinator-preview”, “delta.coordinatedCommits.commitCoordinatorConf-preview”, “delta.coordinatedCommits.tableConf-preview”) from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_RECONSTRUCT_PATH_FROM_URI

SQLSTATE: 22KD1

A URI (<uri>) which can’t be turned into a relative path was found in the transaction log.

DELTA_CANNOT_RELATIVIZE_PATH

SQLSTATE: 42000

A path (<path>) which can’t be relativized with the current input was found in the transaction log. Please re-run this as:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<userPath>”, true)

and then also run:

%%scala com.databricks.delta.Delta.fixAbsolutePathsInLog(“<path>”)

DELTA_CANNOT_RENAME_PATH

SQLSTATE: 22KD1

Cannot rename <currentPath> to <newPath>

DELTA_CANNOT_REPLACE_MISSING_TABLE

SQLSTATE: 42P01

Table <tableName> cannot be replaced as it does not exist. Use CREATE OR REPLACE TABLE to create the table.

DELTA_CANNOT_RESOLVE_COLUMN

SQLSTATE: 42703

Can’t resolve column <columnName> in <schema>

DELTA_CANNOT_RESTORE_TABLE_VERSION

SQLSTATE: 22003

Cannot restore table to version <version>. Available versions: [<startVersion>, <endVersion>].

DELTA_CANNOT_RESTORE_TIMESTAMP_EARLIER

SQLSTATE: 22003

Cannot restore table to timestamp (<requestedTimestamp>) as it is before the earliest version available. Please use a timestamp after (<earliestTimestamp>).

DELTA_CANNOT_RESTORE_TIMESTAMP_GREATER

SQLSTATE: 22003

Cannot restore table to timestamp (<requestedTimestamp>) as it is after the latest version available. Please use a timestamp before (<latestTimestamp>)

DELTA_CANNOT_SET_COORDINATED_COMMITS_DEPENDENCIES

SQLSTATE: 42616

<Command> cannot set in-commit timestamp table properties together with coordinated commits, because the latter depends on the former and sets the former internally. Please remove them (“delta.enableInCommitTimestamps”, “delta.inCommitTimestampEnablementVersion”, “delta.inCommitTimestampEnablementTimestamp”) from the TBLPROPERTIES clause and then retry the command again.

DELTA_CANNOT_SET_LOCATION_ON_PATH_IDENTIFIER

SQLSTATE: 42613

Cannot change the location of a path based table.

DELTA_CANNOT_SET_MANAGED_STATS_COLUMNS_PROPERTY

SQLSTATE: 42616

Cannot set delta.managedDataSkippingStatsColumns on non-DLT table

DELTA_CANNOT_UNSET_COORDINATED_COMMITS_CONFS

SQLSTATE: 42616

ALTER cannot unset coordinated commits configurations. To downgrade a table from coordinated commits, please try again using ALTER TABLE [table-name] DROP FEATURE ‘coordinatedCommits-preview’.

DELTA_CANNOT_UPDATE_ARRAY_FIELD

SQLSTATE: 429BQ

Cannot update %1$s field %2$s type: update the element by updating %2$s.element

DELTA_CANNOT_UPDATE_MAP_FIELD

SQLSTATE: 429BQ

Cannot update %1$s field %2$s type: update a map by updating %2$s.key or %2$s.value

DELTA_CANNOT_UPDATE_OTHER_FIELD

SQLSTATE: 429BQ

Cannot update <tableName> field of type <typeName>

DELTA_CANNOT_UPDATE_STRUCT_FIELD

SQLSTATE: 429BQ

Cannot update <tableName> field <fieldName> type: update struct by adding, deleting, or updating its fields

DELTA_CANNOT_USE_ALL_COLUMNS_FOR_PARTITION

SQLSTATE: 428FT

Cannot use all columns for partition columns

DELTA_CANNOT_VACUUM_LITE

SQLSTATE: 55000

VACUUM LITE cannot delete all eligible files as some files are not referenced by the Delta log. Please run VACUUM FULL.
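
A sketch of the suggested follow-up, assuming a hypothetical table name and that the FULL keyword is accepted by your runtime:

VACUUM my_table FULL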

DELTA_CANNOT_WRITE_INTO_VIEW

SQLSTATE: 0A000

<table> is a view. Writes to a view are not supported.

DELTA_CAST_OVERFLOW_IN_TABLE_WRITE

SQLSTATE: 22003

Failed to write a value of <sourceType> type into the <targetType> type column <columnName> due to an overflow.

Use try_cast on the input value to tolerate overflow and return NULL instead.

If necessary, set <storeAssignmentPolicyFlag> to “LEGACY” to bypass this error or set <updateAndMergeCastingFollowsAnsiEnabledFlag> to true to revert to the old behaviour and follow <ansiEnabledFlag> in UPDATE and MERGE.
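
A hedged sketch of the try_cast suggestion (table and column names hypothetical); overflowing values become NULL instead of failing the write:

INSERT INTO target_table SELECT try_cast(big_value AS INT) AS int_col FROM source_table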

DELTA_CDC_NOT_ALLOWED_IN_THIS_VERSION

SQLSTATE: 0AKDC

Configuration delta.enableChangeDataFeed cannot be set. Change data feed from Delta is not yet available.

DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_DATA_SCHEMA

SQLSTATE: 0AKDC

Retrieving table changes between version <start> and <end> failed because of an incompatible data schema.

Your read schema is <readSchema> at version <readVersion>, but we found an incompatible data schema at version <incompatibleVersion>.

If possible, please retrieve the table changes using the end version’s schema by setting <config> to endVersion, or contact support.

DELTA_CHANGE_DATA_FEED_INCOMPATIBLE_SCHEMA_CHANGE

SQLSTATE: 0AKDC

Retrieving table changes between version <start> and <end> failed because of an incompatible schema change.

Your read schema is <readSchema> at version <readVersion>, but we found an incompatible schema change at version <incompatibleVersion>.

If possible, please query table changes separately from version <start> to <incompatibleVersion> - 1, and from version <incompatibleVersion> to <end>.

DELTA_CHANGE_DATA_FILE_NOT_FOUND

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE statement. This request appears to be targeting Change Data Feed; if that is the case, this error can occur when the change data file is out of the retention period and has been deleted by the VACUUM statement. For more information, see <faqPath>

DELTA_CHANGE_TABLE_FEED_DISABLED

SQLSTATE: 42807

Cannot write to table with delta.enableChangeDataFeed set. Change data feed from Delta is not available.

DELTA_CHECKPOINT_NON_EXIST_TABLE

SQLSTATE: 42K03

Cannot checkpoint a non-existing table <path>. Did you manually delete files in the _delta_log directory?

DELTA_CLONE_AMBIGUOUS_TARGET

SQLSTATE: 42613

Two paths were provided as the CLONE target so it is ambiguous which to use. An external location for CLONE was provided at <externalLocation> at the same time as the path <targetIdentifier>.

DELTA_CLONE_INCOMPLETE_FILE_COPY

SQLSTATE: 42000

File (<fileName>) not copied completely. Expected file size: <expectedSize>, found: <actualSize>. To continue with the operation by ignoring the file size check set <config> to false.

DELTA_CLONE_UNSUPPORTED_SOURCE

SQLSTATE: 0AKDC

Unsupported <mode> clone source ‘<name>’, whose format is <format>.

The supported formats are ‘delta’, ‘iceberg’ and ‘parquet’.

DELTA_CLUSTERING_CLONE_TABLE_NOT_SUPPORTED

SQLSTATE: 0A000

CLONE is not supported for Delta table with Liquid clustering for DBR version < 14.0.

DELTA_CLUSTERING_COLUMNS_DATATYPE_NOT_SUPPORTED

SQLSTATE: 0A000

CLUSTER BY is not supported because the following column(s): <columnsWithDataTypes> don’t support data skipping.

DELTA_CLUSTERING_COLUMNS_MISMATCH

SQLSTATE: 42P10

The provided clustering columns do not match the existing table’s.

  • provided: <providedClusteringColumns>
  • existing: <existingClusteringColumns>

DELTA_CLUSTERING_COLUMN_MISSING_STATS

SQLSTATE: 22000

Liquid clustering requires clustering columns to have stats. Couldn’t find clustering column(s) ‘<columns>’ in stats schema:

<schema>

DELTA_CLUSTERING_CREATE_EXTERNAL_NON_LIQUID_TABLE_FROM_LIQUID_TABLE

SQLSTATE: 22000

Creating an external table without liquid clustering from a table directory with liquid clustering is not allowed; path: <path>.

DELTA_CLUSTERING_NOT_SUPPORTED

SQLSTATE: 42000

‘<operation>’ does not support clustering.

DELTA_CLUSTERING_PHASE_OUT_FAILED

SQLSTATE: 0AKDE

Cannot finish the <phaseOutType> of the table with <tableFeatureToAdd> table feature (reason: <reason>). Please try the OPTIMIZE command again.

== Error ==

<error>

DELTA_CLUSTERING_REPLACE_TABLE_WITH_PARTITIONED_TABLE

SQLSTATE: 42000

Replacing a Delta table that has Liquid clustering with a partitioned table is not allowed.

DELTA_CLUSTERING_SHOW_CREATE_TABLE_WITHOUT_CLUSTERING_COLUMNS

SQLSTATE: 0A000

SHOW CREATE TABLE is not supported for a Delta table with Liquid clustering that has no clustering columns.

DELTA_CLUSTERING_TO_PARTITIONED_TABLE_WITH_NON_EMPTY_CLUSTERING_COLUMNS

SQLSTATE: 42000

Transitioning a Delta table with Liquid clustering to a partitioned table is not allowed for operation: <operation>, when the existing table has non-empty clustering columns.

Please run ALTER TABLE CLUSTER BY NONE to remove the clustering columns first.
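
A sketch of the suggested command with a hypothetical table name:

ALTER TABLE my_table CLUSTER BY NONE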

DELTA_CLUSTERING_WITH_DYNAMIC_PARTITION_OVERWRITE

SQLSTATE: 42000

Dynamic partition overwrite mode is not allowed for Delta table with Liquid clustering.

DELTA_CLUSTERING_WITH_PARTITION_PREDICATE

SQLSTATE: 0A000

OPTIMIZE command for Delta table with Liquid clustering doesn’t support partition predicates. Please remove the predicates: <predicates>.

DELTA_CLUSTERING_WITH_ZORDER_BY

SQLSTATE: 42613

OPTIMIZE command for Delta table with Liquid clustering cannot specify ZORDER BY. Please remove ZORDER BY (<zOrderBy>).

DELTA_CLUSTER_BY_INVALID_NUM_COLUMNS

SQLSTATE: 54000

CLUSTER BY for Liquid clustering supports up to <numColumnsLimit> clustering columns, but the table has <actualNumColumns> clustering columns. Please remove the extra clustering columns.

DELTA_CLUSTER_BY_SCHEMA_NOT_PROVIDED

SQLSTATE: 42908

It is not allowed to specify CLUSTER BY when the schema is not defined. Please define schema for table <tableName>.

DELTA_CLUSTER_BY_WITH_BUCKETING

SQLSTATE: 42613

Clustering and bucketing cannot both be specified. Please remove CLUSTERED BY INTO BUCKETS / bucketBy if you want to create a Delta table with clustering.

DELTA_CLUSTER_BY_WITH_PARTITIONED_BY

SQLSTATE: 42613

Clustering and partitioning cannot both be specified. Please remove PARTITIONED BY / partitionBy / partitionedBy if you want to create a Delta table with clustering.

DELTA_COLLATIONS_NOT_SUPPORTED

SQLSTATE: 0AKDC

Collations are not supported in Delta Lake.

DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_PARTITIONED_COLUMN

SQLSTATE: 0AKDC

Data skipping is not supported for partition column ‘<column>’.

DELTA_COLUMN_DATA_SKIPPING_NOT_SUPPORTED_TYPE

SQLSTATE: 0AKDC

Data skipping is not supported for column ‘<column>’ of type <type>.

DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET

SQLSTATE: 42703

The max column id property (<prop>) is not set on a column mapping enabled table.

DELTA_COLUMN_MAPPING_MAX_COLUMN_ID_NOT_SET_CORRECTLY

SQLSTATE: 42703

The max column id property (<prop>) on a column mapping enabled table is <tableMax>, which cannot be smaller than the max column id for all fields (<fieldMax>).

DELTA_COLUMN_MISSING_DATA_TYPE

SQLSTATE: 42601

The data type of the column <colName> was not provided.

DELTA_COLUMN_NOT_FOUND

SQLSTATE: 42703

Unable to find the column <columnName> given [<columnList>]

DELTA_COLUMN_NOT_FOUND_IN_MERGE

SQLSTATE: 42703

Unable to find the column ‘<targetCol>’ of the target table from the INSERT columns: <colNames>. The INSERT clause must specify values for all the columns of the target table.

DELTA_COLUMN_NOT_FOUND_IN_SCHEMA

SQLSTATE: 42703

Couldn’t find column <columnName> in:

<tableSchema>

DELTA_COLUMN_PATH_NOT_NESTED

SQLSTATE: 42704

Expected <columnPath> to be a nested data type, but found <other>. Was looking for the index of <column> in a nested field.

Schema:

<schema>

DELTA_COLUMN_STRUCT_TYPE_MISMATCH

SQLSTATE: 2200G

Struct column <source> cannot be inserted into a <targetType> field <targetField> in <targetTable>.

DELTA_COMMIT_INTERMEDIATE_REDIRECT_STATE

SQLSTATE: 42P01

Cannot handle commit of table within redirect table state ‘<state>’.

DELTA_COMPACTION_VALIDATION_FAILED

SQLSTATE: 22000

The validation of the compaction of path <compactedPath> to <newPath> failed. Please file a bug report.

DELTA_COMPLEX_TYPE_COLUMN_CONTAINS_NULL_TYPE

SQLSTATE: 22005

Found nested NullType in column <columName> which is of <dataType>. Delta doesn’t support writing NullType in complex types.

DELTA_CONCURRENT_APPEND

SQLSTATE: 2D521

ConcurrentAppendException: Files were added to <partition> by a concurrent update. <retryMsg> <conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_DELETE_DELETE

SQLSTATE: 2D521

ConcurrentDeleteDeleteException: This transaction attempted to delete one or more files that were deleted (for example <file>) by a concurrent update. Please try the operation again. <conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_DELETE_READ

SQLSTATE: 2D521

ConcurrentDeleteReadException: This transaction attempted to read one or more files that were deleted (for example <file>) by a concurrent update. Please try the operation again. <conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_TRANSACTION

SQLSTATE: 2D521

ConcurrentTransactionException: This error occurs when multiple streaming queries are using the same checkpoint to write into this table. Did you run multiple instances of the same streaming query at the same time? <conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONCURRENT_WRITE

SQLSTATE: 2D521

ConcurrentWriteException: A concurrent transaction has written new data since the current transaction read the table. Please try the operation again. <conflictingCommit>

Refer to <docLink> for more details.

DELTA_CONFLICT_SET_COLUMN

SQLSTATE: 42701

There is a conflict from these SET columns: <columnList>.

DELTA_CONF_OVERRIDE_NOT_SUPPORTED_IN_COMMAND

SQLSTATE: 42616

During <command>, configuration “<configuration>” cannot be set from the command. Please remove it from the TBLPROPERTIES clause and then retry the command again.

DELTA_CONF_OVERRIDE_NOT_SUPPORTED_IN_SESSION

SQLSTATE: 42616

During <command>, configuration “<configuration>” cannot be set from the SparkSession configurations. Please unset it by running spark.conf.unset("<configuration>") and then retry the command again.
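
If you are working in a SQL-only context, a hedged sketch of the equivalent session-level unset, assuming Spark SQL’s RESET statement applies to the configuration in question:

RESET <configuration>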

DELTA_CONSTRAINT_ALREADY_EXISTS

SQLSTATE: 42710

Constraint ‘<constraintName>’ already exists. Please delete the old constraint first.

Old constraint:

<oldConstraint>

DELTA_CONSTRAINT_DATA_TYPE_MISMATCH

SQLSTATE: 42K09

Column <columnName> has data type <columnType> and cannot be altered to data type <dataType> because this column is referenced by the following check constraint(s):

<constraints>

DELTA_CONSTRAINT_DEPENDENT_COLUMN_CHANGE

SQLSTATE: 42K09

Cannot alter column <columnName> because this column is referenced by the following check constraint(s):

<constraints>

DELTA_CONSTRAINT_DOES_NOT_EXIST

SQLSTATE: 42704

Cannot drop nonexistent constraint <constraintName> from table <tableName>. To avoid throwing an error, provide the parameter IF EXISTS or set the SQL session configuration <config> to <confValue>.
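
A sketch of the IF EXISTS variant with hypothetical table and constraint names:

ALTER TABLE my_table DROP CONSTRAINT IF EXISTS positive_price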

DELTA_CONVERSION_MERGE_ON_READ_NOT_SUPPORTED

SQLSTATE: 0AKDC

Conversion of Merge-On-Read <format> table is not supported: <path>, <hint>

DELTA_CONVERSION_NO_PARTITION_FOUND

SQLSTATE: 42KD6

Found no partition information in the catalog for table <tableName>. Have you run “MSCK REPAIR TABLE” on your table to discover partitions?
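
A sketch of the suggested repair command with a hypothetical table name:

MSCK REPAIR TABLE my_table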

DELTA_CONVERSION_UNSUPPORTED_COLLATED_PARTITION_COLUMN

SQLSTATE: 0AKDC

Cannot convert Parquet table with collated partition column <colName> to Delta.

DELTA_CONVERSION_UNSUPPORTED_COLUMN_MAPPING

SQLSTATE: 0AKDC

The configuration ‘<config>’ cannot be set to <mode> when using CONVERT TO DELTA.

DELTA_CONVERSION_UNSUPPORTED_SCHEMA_CHANGE

SQLSTATE: 0AKDC

Unsupported schema changes found for <format> table: <path>, <hint>

DELTA_CONVERT_NON_PARQUET_TABLE

SQLSTATE: 0AKDC

CONVERT TO DELTA only supports parquet tables, but you are trying to convert a <sourceName> source: <tableId>

DELTA_CONVERT_TO_DELTA_ROW_TRACKING_WITHOUT_STATS

SQLSTATE: 22000

Cannot enable row tracking without collecting statistics.

If you want to enable row tracking, do the following:

  1. Enable statistics collection by running the command

SET <statisticsCollectionPropertyKey> = true

  2. Run CONVERT TO DELTA without the NO STATISTICS option.

If you do not want to collect statistics, disable row tracking:

  1. Deactivate enabling the table feature by default by running the command:

RESET <rowTrackingTableFeatureDefaultKey>

  2. Deactivate the table property by default by running:

SET <rowTrackingDefaultPropertyKey> = false
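
A sketch of the enablement path from the first list above, using the placeholder key verbatim and a hypothetical Parquet directory:

SET <statisticsCollectionPropertyKey> = true

CONVERT TO DELTA parquet.`/path/to/source`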

DELTA_COPY_INTO_TARGET_FORMAT

SQLSTATE: 0AKDD

COPY INTO target must be a Delta table.

DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_SCHEMA

SQLSTATE: 42601

You are trying to create an external table <tableName> from <path> using Delta, but the schema is not specified when the input path is empty.

To learn more about Delta, see <docLink>

DELTA_CREATE_EXTERNAL_TABLE_WITHOUT_TXN_LOG

SQLSTATE: 42K03

You are trying to create an external table <tableName> from %2$s using Delta, but there is no transaction log present at %2$s/_delta_log. Check the upstream job to make sure that it is writing using format(“delta”) and that the path is the root of the table.

To learn more about Delta, see <docLink>

DELTA_CREATE_TABLE_IDENTIFIER_LOCATION_MISMATCH

SQLSTATE: 0AKDC

Creating path-based Delta table with a different location isn’t supported. Identifier: <identifier>, Location: <location>

DELTA_CREATE_TABLE_MISSING_TABLE_NAME_OR_LOCATION

SQLSTATE: 42601

Table name or location has to be specified.

DELTA_CREATE_TABLE_SCHEME_MISMATCH

SQLSTATE: 42KD7

The specified schema does not match the existing schema at <path>.

== Specified ==

<specifiedSchema>

== Existing ==

<existingSchema>

== Differences ==

<schemaDifferences>

If your intention is to keep the existing schema, you can omit the schema from the create table command. Otherwise please ensure that the schema matches.

DELTA_CREATE_TABLE_SET_CLUSTERING_TABLE_FEATURE_NOT_ALLOWED

SQLSTATE: 42000

Cannot enable <tableFeature> table feature using TBLPROPERTIES. Please use CREATE OR REPLACE TABLE CLUSTER BY to create a Delta table with clustering.

DELTA_CREATE_TABLE_WITH_DIFFERENT_CLUSTERING

SQLSTATE: 42KD7

The specified clustering columns do not match the existing clustering columns at <path>.

== Specified ==

<specifiedColumns>

== Existing ==

<existingColumns>

DELTA_CREATE_TABLE_WITH_DIFFERENT_PARTITIONING

SQLSTATE: 42KD7

The specified partitioning does not match the existing partitioning at <path>.

== Specified ==

<specifiedColumns>

== Existing ==

<existingColumns>

DELTA_CREATE_TABLE_WITH_DIFFERENT_PROPERTY

SQLSTATE: 42KD7

The specified properties do not match the existing properties at <path>.

== Specified ==

<specifiedProperties>

== Existing ==

<existingProperties>

DELTA_CREATE_TABLE_WITH_NON_EMPTY_LOCATION

SQLSTATE: 42601

Cannot create table (‘<tableId>’). The associated location (‘<tableLocation>’) is not empty and also not a Delta table.

DELTA_DATA_CHANGE_FALSE

SQLSTATE: 0AKDE

Cannot change table metadata because the ‘dataChange’ option is set to false. Attempted operation: ‘<op>’.

DELTA_DELETED_PARQUET_FILE_NOT_FOUND

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This parquet file may have been deleted under Delta’s data retention policy.

Default Delta data retention duration: <logRetentionPeriod>. Modification time of the parquet file: <modificationTime>. Deletion time of the parquet file: <deletionTime>. Deleted on Delta version: <deletionVersion>.

DELTA_DELETION_VECTOR_MISSING_NUM_RECORDS

SQLSTATE: 2D521

It is invalid to commit files with deletion vectors that are missing the numRecords statistic.

DELTA_DOMAIN_METADATA_NOT_SUPPORTED

SQLSTATE: 0A000

Detected DomainMetadata action(s) for domains <domainNames>, but DomainMetadataTableFeature is not enabled.

DELTA_DROP_COLUMN_AT_INDEX_LESS_THAN_ZERO

SQLSTATE: 42KD8

Index <columnIndex> to drop column is lower than 0

DELTA_DROP_COLUMN_ON_SINGLE_FIELD_SCHEMA

SQLSTATE: 0AKDC

Cannot drop column from a schema with a single column. Schema:

<schema>

DELTA_DUPLICATE_ACTIONS_FOUND

SQLSTATE: 2D521

File operation ‘<actionType>’ for path <path> was specified several times.

It conflicts with <conflictingPath>.

It is not valid for multiple file operations with the same path to exist in a single commit.

DELTA_DUPLICATE_COLUMNS_FOUND

SQLSTATE: 42711

Found duplicate column(s) <coltype>: <duplicateCols>

DELTA_DUPLICATE_COLUMNS_ON_INSERT

SQLSTATE: 42701

Duplicate column names in INSERT clause

DELTA_DUPLICATE_COLUMNS_ON_UPDATE_TABLE

SQLSTATE: 42701

<message>

Please remove duplicate columns before you update your table.

DELTA_DUPLICATE_DATA_SKIPPING_COLUMNS

SQLSTATE: 42701

Duplicated data skipping columns found: <columns>.

DELTA_DUPLICATE_DOMAIN_METADATA_INTERNAL_ERROR

SQLSTATE: 42601

Internal error: two DomainMetadata actions within the same transaction have the same domain <domainName>

DELTA_DV_HISTOGRAM_DESERIALIZATON

SQLSTATE: 22000

Could not deserialize the deleted record counts histogram during table integrity verification.

DELTA_DYNAMIC_PARTITION_OVERWRITE_DISABLED

SQLSTATE: 0A000

Dynamic partition overwrite mode is specified by session config or write options, but it is disabled by spark.databricks.delta.dynamicPartitionOverwrite.enabled=false.

DELTA_EMPTY_DATA

SQLSTATE: 428GU

Data used in creating the Delta table doesn’t have any columns.

DELTA_EMPTY_DIRECTORY

SQLSTATE: 42K03

No file found in the directory: <directory>.

DELTA_EXCEED_CHAR_VARCHAR_LIMIT

SQLSTATE: 22001

Value “<value>” exceeds char/varchar type length limitation. Failed check: <expr>.

DELTA_FAILED_CAST_PARTITION_VALUE

SQLSTATE: 22018

Failed to cast partition value <value> to <dataType>

DELTA_FAILED_FIND_ATTRIBUTE_IN_OUTPUT_COLUMNS

SQLSTATE: 42703

Could not find <newAttributeName> among the existing target output <targetOutputColumns>

DELTA_FAILED_INFER_SCHEMA

SQLSTATE: 42KD9

Failed to infer schema from the given list of files.

DELTA_FAILED_MERGE_SCHEMA_FILE

SQLSTATE: 42KDA

Failed to merge schema of file <file>:

<schema>

DELTA_FAILED_READ_FILE_FOOTER

SQLSTATE: KD001

Could not read footer for file: <currentFile>

DELTA_FAILED_RECOGNIZE_PREDICATE

SQLSTATE: 42601

Cannot recognize the predicate ‘<predicate>’

DELTA_FAILED_SCAN_WITH_HISTORICAL_VERSION

SQLSTATE: KD002

Expect a full scan of the latest version of the Delta source, but found a historical scan of version <historicalVersion>

DELTA_FAILED_TO_MERGE_FIELDS

SQLSTATE: 22005

Failed to merge fields ‘<currentField>’ and ‘<updateField>’

DELTA_FEATURES_PROTOCOL_METADATA_MISMATCH

SQLSTATE: 0AKDE

Unable to operate on this table because the following table features are enabled in metadata but not listed in protocol: <features>.

DELTA_FEATURES_REQUIRE_MANUAL_ENABLEMENT

SQLSTATE: 0AKDE

Your table schema requires manual enablement of the following table feature(s): <unsupportedFeatures>.

To do this, run the following command for each of the features listed above:

ALTER TABLE table_name SET TBLPROPERTIES (‘delta.feature.feature_name’ = ‘supported’)

Replace “table_name” and “feature_name” with real values.

Current supported feature(s): <supportedFeatures>.
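
For illustration, a hedged example with hypothetical table and feature names substituted into the template above:

ALTER TABLE my_table SET TBLPROPERTIES ('delta.feature.deletionVectors' = 'supported')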

DELTA_FEATURE_DROP_CHECKPOINT_FAILED

SQLSTATE: 22KD0

Dropping <featureName> failed due to a failure in checkpoint creation.

Please try again later. If the issue persists, contact Databricks support.

DELTA_FEATURE_DROP_CONFLICT_REVALIDATION_FAIL

SQLSTATE: 0AKDE

Cannot drop feature because a concurrent transaction modified the table.

Please try the operation again.

<concurrentCommit>

DELTA_FEATURE_DROP_DEPENDENT_FEATURE

SQLSTATE: 0AKDE

Cannot drop table feature <feature> because some other features (<dependentFeatures>) in this table depend on <feature>.

Consider dropping them first before dropping this feature.

DELTA_FEATURE_DROP_FEATURE_NOT_PRESENT

SQLSTATE: 0AKDE

Cannot drop <feature> from this table because it is not currently present in the table’s protocol.

DELTA_FEATURE_DROP_HISTORICAL_VERSIONS_EXIST

SQLSTATE: 0AKDE

Cannot drop <feature> because the Delta log contains historical versions that use the feature.

Please wait until the history retention period (<logRetentionPeriodKey>=<logRetentionPeriod>) has passed since the feature was last active.

Alternatively, please wait for the TRUNCATE HISTORY retention period to expire (<truncateHistoryLogRetentionPeriod>) and then run:

ALTER TABLE table_name DROP FEATURE feature_name TRUNCATE HISTORY

DELTA_FEATURE_DROP_HISTORY_TRUNCATION_NOT_ALLOWED

SQLSTATE: 0AKDE

The particular feature does not require history truncation.

DELTA_FEATURE_DROP_NONREMOVABLE_FEATURE

SQLSTATE: 0AKDE

Cannot drop <feature> because dropping this feature is not supported.

Please contact Databricks support.

DELTA_FEATURE_DROP_UNSUPPORTED_CLIENT_FEATURE

SQLSTATE: 0AKDE

Cannot drop <feature> because it is not supported by this Databricks version.

Consider using Databricks with a higher version.

DELTA_FEATURE_DROP_WAIT_FOR_RETENTION_PERIOD

SQLSTATE: 0AKDE

Dropping <feature> was partially successful.

The feature is no longer used in the current version of the table. However, the feature is still present in historical versions of the table. The table feature cannot be dropped from the table protocol until these historical versions have expired.

To drop the table feature from the protocol, please wait for the historical versions to expire, and then repeat this command. The retention period for historical versions is currently configured as <logRetentionPeriodKey>=<logRetentionPeriod>.

Alternatively, please wait for the TRUNCATE HISTORY retention period to expire (<truncateHistoryLogRetentionPeriod>) and then run:

ALTER TABLE table_name DROP FEATURE feature_name TRUNCATE HISTORY
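
A hedged example of the command above with hypothetical table and feature names:

ALTER TABLE my_table DROP FEATURE deletionVectors TRUNCATE HISTORY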

DELTA_FEATURE_REQUIRES_HIGHER_READER_VERSION

SQLSTATE: 0AKDE

Unable to enable table feature <feature> because it requires a higher reader protocol version (current <current>). Consider upgrading the table’s reader protocol version to <required>, or to a version which supports reader table features. Refer to <docLink> for more information on table protocol versions.

DELTA_FEATURE_REQUIRES_HIGHER_WRITER_VERSION

SQLSTATE: 0AKDE

Unable to enable table feature <feature> because it requires a higher writer protocol version (current <current>). Consider upgrading the table’s writer protocol version to <required>, or to a version which supports writer table features. Refer to <docLink> for more information on table protocol versions.

DELTA_FILE_ALREADY_EXISTS

SQLSTATE: 42K04

Existing file path <path>

DELTA_FILE_LIST_AND_PATTERN_STRING_CONFLICT

SQLSTATE: 42613

Cannot specify both file list and pattern string.

DELTA_FILE_NOT_FOUND

SQLSTATE: 42K03

File path <path>

DELTA_FILE_NOT_FOUND_DETAILED

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This occurs when data has been manually deleted from the file system rather than using the table DELETE statement. For more information, see <faqPath>

DELTA_FILE_OR_DIR_NOT_FOUND

SQLSTATE: 42K03

No such file or directory: <path>

DELTA_FILE_TO_OVERWRITE_NOT_FOUND

SQLSTATE: 42K03

File (<path>) to be rewritten not found among candidate files:

<pathList>

DELTA_FOUND_MAP_TYPE_COLUMN

SQLSTATE: KD003

A MapType was found. In order to access the key or value of a MapType, specify one

of:

<key> or

<value>

followed by the name of the column (only if that column is a struct type).

e.g. mymap.key.mykey

If the column is a basic type, mymap.key or mymap.value is sufficient.

Schema:

<schema>

DELTA_GENERATED_COLUMNS_DATA_TYPE_MISMATCH

SQLSTATE: 42K09

Column <columnName> has data type <columnType> and cannot be altered to data type <dataType> because this column is referenced by the following generated column(s):

<generatedColumns>

DELTA_GENERATED_COLUMNS_DEPENDENT_COLUMN_CHANGE

SQLSTATE: 42K09

Cannot alter column <columnName> because this column is referenced by the following generated column(s):

<generatedColumns>

DELTA_GENERATED_COLUMNS_EXPR_TYPE_MISMATCH

SQLSTATE: 42K09

The expression type of the generated column <columnName> is <expressionType>, but the column type is <columnType>

DELTA_GENERATED_COLUMN_UPDATE_TYPE_MISMATCH

SQLSTATE: 42K09

Column <currentName> is a generated column or a column used by a generated column. The data type is <currentDataType> and cannot be converted to data type <updateDataType>

DELTA_ICEBERG_COMPAT_VIOLATION

SQLSTATE: KD00E

The validation of IcebergCompatV<version> has failed.

For more details see DELTA_ICEBERG_COMPAT_VIOLATION

DELTA_IDENTITY_COLUMNS_ALTER_COLUMN_NOT_SUPPORTED

SQLSTATE: 429BQ

ALTER TABLE ALTER COLUMN is not supported for IDENTITY columns.

DELTA_IDENTITY_COLUMNS_ALTER_NON_DELTA_FORMAT

SQLSTATE: 0AKDD

ALTER TABLE ALTER COLUMN SYNC IDENTITY is only supported by Delta.

DELTA_IDENTITY_COLUMNS_ALTER_NON_IDENTITY_COLUMN

SQLSTATE: 429BQ

ALTER TABLE ALTER COLUMN SYNC IDENTITY cannot be called on non IDENTITY columns.

DELTA_IDENTITY_COLUMNS_EXPLICIT_INSERT_NOT_SUPPORTED

SQLSTATE: 42808

Providing values for GENERATED ALWAYS AS IDENTITY column <colName> is not supported.

DELTA_IDENTITY_COLUMNS_ILLEGAL_STEP

SQLSTATE: 42611

IDENTITY column step cannot be 0.

DELTA_IDENTITY_COLUMNS_NON_DELTA_FORMAT

SQLSTATE: 0AKDD

IDENTITY columns are only supported by Delta.

DELTA_IDENTITY_COLUMNS_PARTITION_NOT_SUPPORTED

SQLSTATE: 42601

PARTITIONED BY IDENTITY column <colName> is not supported.

DELTA_IDENTITY_COLUMNS_REPLACE_COLUMN_NOT_SUPPORTED

SQLSTATE: 429BQ

ALTER TABLE REPLACE COLUMNS is not supported for table with IDENTITY columns.

DELTA_IDENTITY_COLUMNS_UNSUPPORTED_DATA_TYPE

SQLSTATE: 428H2

DataType <dataType> is not supported for IDENTITY columns.

DELTA_IDENTITY_COLUMNS_UPDATE_NOT_SUPPORTED

SQLSTATE: 42808

UPDATE on IDENTITY column <colName> is not supported.

DELTA_IDENTITY_COLUMNS_WITH_GENERATED_EXPRESSION

SQLSTATE: 42613

IDENTITY column cannot be specified with a generated column expression.

DELTA_ILLEGAL_OPTION

SQLSTATE: 42616

Invalid value ‘<input>’ for option ‘<name>’, <explain>

DELTA_ILLEGAL_USAGE

SQLSTATE: 42601

The usage of <option> is not allowed when <operation> a Delta table.

DELTA_INCONSISTENT_BUCKET_SPEC

SQLSTATE: 42000

BucketSpec on Delta bucketed table does not match BucketSpec from metadata. Expected: <expected>. Actual: <actual>.

DELTA_INCONSISTENT_LOGSTORE_CONFS

SQLSTATE: F0000

(<setKeys>) cannot be set to different values. Please only set one of them, or set them to the same value.

DELTA_INCORRECT_ARRAY_ACCESS

SQLSTATE: KD003

Incorrectly accessing an ArrayType. Use arrayname.element.elementname position to add to an array.

DELTA_INCORRECT_ARRAY_ACCESS_BY_NAME

SQLSTATE: KD003

An ArrayType was found. In order to access elements of an ArrayType, specify <rightName> instead of <wrongName>.

Schema:

<schema>

DELTA_INCORRECT_GET_CONF

SQLSTATE: 42000

Use getConf() instead of `conf.getConf()`

DELTA_INCORRECT_LOG_STORE_IMPLEMENTATION

SQLSTATE: 0AKDC

This error typically occurs when the default LogStore implementation, that is, HDFSLogStore, is used to write into a Delta table on a non-HDFS storage system.

In order to get the transactional ACID guarantees on table updates, you have to use the correct implementation of LogStore that is appropriate for your storage system.

See <docLink> for details.

DELTA_INDEX_LARGER_OR_EQUAL_THAN_STRUCT

SQLSTATE: 42KD8

Index <position> to drop column is equal to or larger than struct length: <length>

DELTA_INDEX_LARGER_THAN_STRUCT

SQLSTATE: 42KD8

Index <index> to add column <columnName> is larger than struct length: <length>

DELTA_INSERT_COLUMN_ARITY_MISMATCH

SQLSTATE: 42802

Cannot write to ‘<tableName>’, <columnName>; target table has <numColumns> column(s) but the inserted data has <insertColumns> column(s)

DELTA_INSERT_COLUMN_MISMATCH

SQLSTATE: 42802

Column <columnName> is not specified in INSERT

DELTA_INVALID_AUTO_COMPACT_TYPE

SQLSTATE: 22023

Invalid auto-compact type: <value>. Allowed values are: <allowed>.

DELTA_INVALID_BUCKET_COUNT

SQLSTATE: 22023

Invalid bucket count: <invalidBucketCount>. Bucket count should be a positive number that is a power of 2 and at least 8. You can use <validBucketCount> instead.

DELTA_INVALID_BUCKET_INDEX

SQLSTATE: 22023

Cannot find the bucket column in the partition columns

DELTA_INVALID_CALENDAR_INTERVAL_EMPTY

SQLSTATE: 2200P

Interval cannot be null or blank.

DELTA_INVALID_CDC_RANGE

SQLSTATE: 22003

CDC range from start <start> to end <end> was invalid. End cannot be before start.

DELTA_INVALID_CHARACTERS_IN_COLUMN_NAME

SQLSTATE: 42K05

Attribute name “<columnName>” contains invalid character(s) among ” ,;{}()\n\t=”. Please use alias to rename it.

DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES

SQLSTATE: 42K05

Found invalid character(s) among ‘ ,;{}()\n\t=’ in the column names of your schema.

Invalid column names: <invalidColumnNames>.

Please use other characters and try again.

Alternatively, enable Column Mapping to keep using these characters.

DELTA_INVALID_CLONE_PATH

SQLSTATE: 22KD1

The target location for CLONE needs to be an absolute path or table name. Use an absolute path instead of <path>.

DELTA_INVALID_COLUMN_NAMES_WHEN_REMOVING_COLUMN_MAPPING

SQLSTATE: 42K05

Found invalid character(s) among ‘ ,;{}()\n\t=’ in the column names of your schema.

Invalid column names: <invalidColumnNames>.

Column mapping cannot be removed when there are invalid characters in the column names.

Please rename the columns to remove the invalid characters and execute this command again.

DELTA_INVALID_FORMAT

SQLSTATE: 22000

Incompatible format detected.

A transaction log for Delta was found at <deltaRootPath>/_delta_log, but you are trying to <operation> <path> using format(“<format>”). You must use ‘format(“delta”)’ when reading and writing to a Delta table.

To learn more about Delta, see <docLink>

DELTA_INVALID_GENERATED_COLUMN_REFERENCES

SQLSTATE: 42621

A generated column cannot use a non-existent column or another generated column

DELTA_INVALID_IDEMPOTENT_WRITES_OPTIONS

SQLSTATE: 42616

Invalid options for idempotent Dataframe writes: <reason>

DELTA_INVALID_INTERVAL

SQLSTATE: 22006

<interval> is not a valid INTERVAL.

DELTA_INVALID_INVENTORY_SCHEMA

SQLSTATE: 42000

The schema for the specified INVENTORY does not contain all of the required fields. Required fields are: <expectedSchema>

DELTA_INVALID_ISOLATION_LEVEL

SQLSTATE: 25000

Invalid isolation level ‘<isolationLevel>’

DELTA_INVALID_LOGSTORE_CONF

SQLSTATE: F0000

(<classConfig>) and (<schemeConfig>) cannot be set at the same time. Please set only one group of them.

DELTA_INVALID_MANAGED_TABLE_SYNTAX_NO_SCHEMA

SQLSTATE: 42000

You are trying to create a managed table <tableName> using Delta, but the schema is not specified.

To learn more about Delta, see <docLink>

DELTA_INVALID_PARTITION_COLUMN

SQLSTATE: 42996

<columnName> is not a valid partition column in table <tableName>.

DELTA_INVALID_PARTITION_COLUMN_NAME

SQLSTATE: 42996

Found partition columns having invalid character(s) among “ ,;{}()\n\t=”. Please change the names of your partition columns. This check can be turned off by setting spark.conf.set(“spark.databricks.delta.partitionColumnValidity.enabled”, false); however, this is not recommended as other features of Delta may not work properly.

DELTA_INVALID_PARTITION_COLUMN_TYPE

SQLSTATE: 42996

Using column <name> of type <dataType> as a partition column is not supported.

DELTA_INVALID_PARTITION_PATH

SQLSTATE: 22KD1

A partition path fragment should be of the form part1=foo/part2=bar. The partition path: <path>

DELTA_INVALID_PROTOCOL_DOWNGRADE

SQLSTATE: KD004

Protocol version cannot be downgraded from <oldProtocol> to <newProtocol>

DELTA_INVALID_PROTOCOL_VERSION

SQLSTATE: KD004

Unsupported Delta protocol version: table “<tableNameOrPath>” requires reader version <readerRequired> and writer version <writerRequired>, but this version of Databricks supports reader versions <supportedReaders> and writer versions <supportedWriters>. Please upgrade to a newer release.

DELTA_INVALID_TABLE_VALUE_FUNCTION

SQLSTATE: 22000

Function <function> is an unsupported table valued function for CDC reads.

DELTA_INVALID_TIMESTAMP_FORMAT

SQLSTATE: 22007

The provided timestamp <timestamp> does not match the expected syntax <format>.

DELTA_LOG_ALREADY_EXISTS

SQLSTATE: 42K04

A Delta log already exists at <path>

DELTA_LOG_FILE_NOT_FOUND_FOR_STREAMING_SOURCE

SQLSTATE: 42K03

If you never deleted it, it’s likely your query is lagging behind. Please delete its checkpoint to restart from scratch. To avoid this happening again, you can update the retention policy of your Delta table.

DELTA_MATERIALIZED_ROW_TRACKING_COLUMN_NAME_MISSING

SQLSTATE: 22000

Materialized <rowTrackingColumn> column name missing for <tableName>.

DELTA_MAX_ARRAY_SIZE_EXCEEDED

SQLSTATE: 42000

Please use a limit less than Int.MaxValue - 8.

DELTA_MAX_COMMIT_RETRIES_EXCEEDED

SQLSTATE: 40000

This commit has failed as it has been tried <numAttempts> times but did not succeed.

This can be caused by the Delta table being committed continuously by many concurrent

commits.

Commit started at version: <startVersion>

Commit failed at version: <failVersion>

Number of actions attempted to commit: <numActions>

Total time spent attempting this commit: <timeSpent> ms

DELTA_MAX_LIST_FILE_EXCEEDED

SQLSTATE: 42000

File list must have at most <maxFileListSize> entries, had <numFiles>.

DELTA_MERGE_ADD_VOID_COLUMN

SQLSTATE: 42K09

Cannot add column <newColumn> with type VOID. Please explicitly specify a non-void type.

DELTA_MERGE_INCOMPATIBLE_DATATYPE

SQLSTATE: 42K09

Failed to merge incompatible data types <currentDataType> and <updateDataType>

DELTA_MERGE_INCOMPATIBLE_DECIMAL_TYPE

SQLSTATE: 42806

Failed to merge decimal types with incompatible <decimalRanges>

DELTA_MERGE_MATERIALIZE_SOURCE_FAILED_REPEATEDLY

SQLSTATE: 25000

Keeping the source of the MERGE statement materialized has failed repeatedly.

DELTA_MERGE_MISSING_WHEN

SQLSTATE: 42601

There must be at least one WHEN clause in a MERGE statement.

DELTA_MERGE_RESOLVED_ATTRIBUTE_MISSING_FROM_INPUT

SQLSTATE: 42601

Resolved attribute(s) <missingAttributes> missing from <input> in operator <merge>

DELTA_MERGE_UNEXPECTED_ASSIGNMENT_KEY

SQLSTATE: 22005

Unexpected assignment key: <unexpectedKeyClass> - <unexpectedKeyObject>

DELTA_MERGE_UNRESOLVED_EXPRESSION

SQLSTATE: 42601

Cannot resolve <sqlExpr> in <clause> given <cols>.

DELTA_METADATA_CHANGED

SQLSTATE: 2D521

MetadataChangedException: The metadata of the Delta table has been changed by a concurrent update. Please try the operation again. <conflictingCommit>

Refer to <docLink> for more details.

DELTA_MISSING_CHANGE_DATA

SQLSTATE: KD002

Error getting change data for range [<startVersion>, <endVersion>] as change data was not recorded for version [<version>]. If you’ve enabled change data feed on this table, use DESCRIBE HISTORY to see when it was first enabled.

Otherwise, to start recording change data, use ALTER TABLE table_name SET TBLPROPERTIES (<key>=true).

DELTA_MISSING_COLUMN

SQLSTATE: 42703

Cannot find <columnName> in table columns: <columnList>

DELTA_MISSING_COMMIT_INFO

SQLSTATE: KD004

This table has the feature <featureName> enabled which requires the presence of the CommitInfo action in every commit. However, the CommitInfo action is missing from commit version <version>.

DELTA_MISSING_COMMIT_TIMESTAMP

SQLSTATE: KD004

This table has the feature <featureName> enabled which requires the presence of commitTimestamp in the CommitInfo action. However, this field has not been set in commit version <version>.

DELTA_MISSING_DELTA_TABLE

SQLSTATE: 42P01

<tableName> is not a Delta table.

DELTA_MISSING_DELTA_TABLE_COPY_INTO

SQLSTATE: 42P01

Table doesn’t exist. Create an empty Delta table first using CREATE TABLE <tableName>.
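
A hedged sketch of creating a schemaless target and loading it (table name and source path hypothetical); the schema is inferred on load via the mergeSchema copy option:

CREATE TABLE my_target

COPY INTO my_target FROM '/data/raw' FILEFORMAT = PARQUET COPY_OPTIONS ('mergeSchema' = 'true')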

DELTA_MISSING_ICEBERG_CLASS

SQLSTATE: 56038

Iceberg class was not found. Please ensure Delta Iceberg support is installed.

Please refer to <docLink> for more details.

DELTA_MISSING_NOT_NULL_COLUMN_VALUE

SQLSTATE: 23502

Column <columnName>, which has a NOT NULL constraint, is missing from the data being written into the table.

DELTA_MISSING_PARTITION_COLUMN

SQLSTATE: 42KD6

Partition column <columnName> not found in schema <columnList>

DELTA_MISSING_PART_FILES

SQLSTATE: 42KD6

Couldn’t find all part files of the checkpoint version: <version>

DELTA_MISSING_PROVIDER_FOR_CONVERT

SQLSTATE: 0AKDC

CONVERT TO DELTA only supports parquet tables. Please rewrite your target as parquet.`<path>` if it’s a parquet directory.

DELTA_MISSING_SET_COLUMN

SQLSTATE: 42703

SET column <columnName> not found given columns: <columnList>.

DELTA_MISSING_TRANSACTION_LOG

SQLSTATE: 42000

Incompatible format detected.

You are trying to <operation> <path> using Delta, but there is no transaction log present. Check the upstream job to make sure that it is writing using format(“delta”) and that you are trying to %1$s the table base path.

To learn more about Delta, see <docLink>

DELTA_MODE_NOT_SUPPORTED

SQLSTATE: 0AKDC

Specified mode ‘<mode>’ is not supported. Supported modes are: <supportedModes>

DELTA_MULTIPLE_CDC_BOUNDARY

SQLSTATE: 42614

Multiple <startingOrEnding> arguments provided for CDC read. Please provide one of either <startingOrEnding>Timestamp or <startingOrEnding>Version.

DELTA_MULTIPLE_CONF_FOR_SINGLE_COLUMN_IN_BLOOM_FILTER

SQLSTATE: 42614

Multiple bloom filter index configurations passed to command for column: <columnName>

DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE

SQLSTATE: 21506

Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches. Please refer to

<usageReference>

DELTA_MUST_SET_ALL_COORDINATED_COMMITS_CONFS_IN_COMMAND

SQLSTATE: 42616

During <command>, either both coordinated commits configurations (“delta.coordinatedCommits.commitCoordinator-preview”, “delta.coordinatedCommits.commitCoordinatorConf-preview”) must be set in the command, or neither of them. Missing: “<configuration>”. Please specify this configuration in the TBLPROPERTIES clause or remove the other configuration, and then retry the command again.

DELTA_MUST_SET_ALL_COORDINATED_COMMITS_CONFS_IN_SESSION

SQLSTATE: 42616

During <command>, either both coordinated commits configurations (“coordinatedCommits.commitCoordinator-preview”, “coordinatedCommits.commitCoordinatorConf-preview”) must be set in the SparkSession configurations, or neither of them. Missing: “<configuration>”. Please set this configuration in the SparkSession or unset the other configuration, and then retry the command again.

DELTA_NAME_CONFLICT_IN_BUCKETED_TABLE

SQLSTATE: 42000

The following column name(s) are reserved for Delta bucketed table internal usage only: <names>

DELTA_NESTED_FIELDS_NEED_RENAME

SQLSTATE: 42K05

The input schema contains nested fields that are capitalized differently than the target table.

They need to be renamed to avoid the loss of data in these fields while writing to Delta.

Fields:

<fields>.

Original schema:

<schema>

DELTA_NESTED_NOT_NULL_CONSTRAINT

SQLSTATE: 0AKDC

The <nestType> type of the field <parent> contains a NOT NULL constraint. Delta does not support NOT NULL constraints nested within arrays or maps. To suppress this error and silently ignore the specified constraints, set <configKey> = true.

Parsed <nestType> type:

<nestedPrettyJson>

DELTA_NESTED_SUBQUERY_NOT_SUPPORTED

SQLSTATE: 0A000

Nested subquery is not supported in the <operation> condition.

DELTA_NEW_CHECK_CONSTRAINT_VIOLATION

SQLSTATE: 23512

<numRows> rows in <tableName> violate the new CHECK constraint (<checkConstraint>)

DELTA_NEW_NOT_NULL_VIOLATION

SQLSTATE: 23512

<numRows> rows in <tableName> violate the new NOT NULL constraint on <colName>

DELTA_NON_BOOLEAN_CHECK_CONSTRAINT

SQLSTATE: 42621

CHECK constraint ‘<name>’ (<expr>) should be a boolean expression.

DELTA_NON_DETERMINISTIC_EXPRESSION_IN_GENERATED_COLUMN

SQLSTATE: 42621

Found <expr>. A generated column cannot use a non-deterministic expression.

DELTA_NON_DETERMINISTIC_FUNCTION_NOT_SUPPORTED

SQLSTATE: 0AKDC

Non-deterministic functions are not supported in the <operation> <expression>

DELTA_NON_LAST_MATCHED_CLAUSE_OMIT_CONDITION

SQLSTATE: 42601

When there is more than one MATCHED clause in a MERGE statement, only the last MATCHED clause can omit the condition.

DELTA_NON_LAST_NOT_MATCHED_BY_SOURCE_CLAUSE_OMIT_CONDITION

SQLSTATE: 42601

When there is more than one NOT MATCHED BY SOURCE clause in a MERGE statement, only the last NOT MATCHED BY SOURCE clause can omit the condition.

DELTA_NON_LAST_NOT_MATCHED_CLAUSE_OMIT_CONDITION

SQLSTATE: 42601

When there is more than one NOT MATCHED clause in a MERGE statement, only the last NOT MATCHED clause can omit the condition.

DELTA_NON_PARSABLE_TAG

SQLSTATE: 42601

Could not parse tag <tag>.

File tags are: <tagList>

DELTA_NON_PARTITION_COLUMN_ABSENT

SQLSTATE: KD005

Data written into Delta needs to contain at least one non-partitioned column. <details>

DELTA_NON_PARTITION_COLUMN_REFERENCE

SQLSTATE: 42P10

Predicate references non-partition column ‘<columnName>’. Only the partition columns may be referenced: [<columnList>]

DELTA_NON_PARTITION_COLUMN_SPECIFIED

SQLSTATE: 42P10

Non-partitioning column(s) <columnList> are specified where only partitioning columns are expected: <fragment>.

DELTA_NON_SINGLE_PART_NAMESPACE_FOR_CATALOG

SQLSTATE: 42K05

Delta catalog requires a single-part namespace, but <identifier> is multi-part.

DELTA_NOT_A_DATABRICKS_DELTA_TABLE

SQLSTATE: 42000

<table> is not a Delta table. Please drop this table first if you would like to create it with Databricks Delta.

DELTA_NOT_A_DELTA_TABLE

SQLSTATE: 0AKDD

<tableName> is not a Delta table. Please drop this table first if you would like to recreate it with Delta Lake.

DELTA_NOT_NULL_COLUMN_NOT_FOUND_IN_STRUCT

SQLSTATE: 42K09

Not nullable column not found in struct: <struct>

DELTA_NOT_NULL_CONSTRAINT_VIOLATED

SQLSTATE: 23502

NOT NULL constraint violated for column: <columnName>.

DELTA_NOT_NULL_NESTED_FIELD

SQLSTATE: 0A000

A non-nullable nested field can’t be added to a nullable parent. Please set the nullability of the parent column accordingly.

DELTA_NO_COMMITS_FOUND

SQLSTATE: KD006

No commits found at <logPath>

DELTA_NO_RECREATABLE_HISTORY_FOUND

SQLSTATE: KD006

No recreatable commits found at <logPath>

DELTA_NO_REDIRECT_RULES_VIOLATED

SQLSTATE: 42P01

Operation not allowed: <operation> cannot be performed on a table with redirect feature.

The no-redirect rules are not satisfied: <noRedirectRules>.

DELTA_NO_RELATION_TABLE

SQLSTATE: 42P01

Table <tableIdent> not found

DELTA_NO_START_FOR_CDC_READ

SQLSTATE: 42601

No startingVersion or startingTimestamp provided for CDC read.
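
A hedged sketch of a CDC read via the table_changes table-valued function (table name and version/timestamp bounds hypothetical):

SELECT * FROM table_changes('my_table', 5)

SELECT * FROM table_changes('my_table', '2024-01-01', '2024-01-31')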

DELTA_NULL_SCHEMA_IN_STREAMING_WRITE

SQLSTATE: 42P18

Delta doesn’t accept NullTypes in the schema for streaming writes.

DELTA_ONEOF_IN_TIMETRAVEL

SQLSTATE: 42601

Please either provide ‘timestampAsOf’ or ‘versionAsOf’ for time travel.
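
For illustration, the SQL time-travel equivalents with a hypothetical table name, version, and timestamp:

SELECT * FROM my_table VERSION AS OF 5

SELECT * FROM my_table TIMESTAMP AS OF '2024-01-01'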

DELTA_ONLY_OPERATION

SQLSTATE: 0AKDD

<operation> is only supported for Delta tables.

DELTA_OPERATION_MISSING_PATH

SQLSTATE: 42601

Please provide the path or table identifier for <operation>.

DELTA_OPERATION_NOT_ALLOWED

SQLSTATE: 0AKDC

Operation not allowed: <operation> is not supported for Delta tables

DELTA_OPERATION_NOT_ALLOWED_DETAIL

SQLSTATE: 0AKDC

Operation not allowed: <operation> is not supported for Delta tables: <tableName>

DELTA_OPERATION_NOT_SUPPORTED_FOR_COLUMN_WITH_COLLATION

SQLSTATE: 0AKDC

<operation> is not supported for column <colName> with non-default collation <collation>.

DELTA_OPERATION_NOT_SUPPORTED_FOR_EXPRESSION_WITH_COLLATION

SQLSTATE: 0AKDC

<operation> is not supported for expression <exprText> because it uses non-default collation.

DELTA_OPERATION_ON_TEMP_VIEW_WITH_GENERATED_COLS_NOT_SUPPORTED

SQLSTATE: 0A000

<operation> command on a temp view referring to a Delta table that contains generated columns is not supported. Please run the <operation> command on the Delta table directly

DELTA_OPERATION_ON_VIEW_NOT_ALLOWED

SQLSTATE: 0AKDC

Operation not allowed: <operation> cannot be performed on a view.

DELTA_OPTIMIZE_FULL_NOT_SUPPORTED

SQLSTATE: 42601

OPTIMIZE FULL is only supported for clustered tables with non-empty clustering columns.

DELTA_OVERWRITE_MUST_BE_TRUE

SQLSTATE: 42000

Copy option overwriteSchema cannot be specified without setting OVERWRITE = ‘true’.

DELTA_OVERWRITE_SCHEMA_WITH_DYNAMIC_PARTITION_OVERWRITE

SQLSTATE: 42613

‘overwriteSchema’ cannot be used in dynamic partition overwrite mode.

DELTA_PARTITION_COLUMN_CAST_FAILED

SQLSTATE: 22525

Failed to cast value <value> to <dataType> for partition column <columnName>

DELTA_PARTITION_COLUMN_NOT_FOUND

SQLSTATE: 42703

Partition column <columnName> not found in schema [<schemaMap>]

DELTA_PARTITION_SCHEMA_IN_ICEBERG_TABLES

SQLSTATE: 42613

Partition schema cannot be specified when converting Iceberg tables. It is automatically inferred.

DELTA_PATH_DOES_NOT_EXIST

SQLSTATE: 42K03

<path> doesn’t exist, or is not a Delta table.

DELTA_PATH_EXISTS

SQLSTATE: 42K04

Cannot write to the already existing path <path> without setting OVERWRITE = ‘true’.

DELTA_POST_COMMIT_HOOK_FAILED

SQLSTATE: 2DKD0

Committing to the Delta table version <version> succeeded, but an error occurred while executing the post-commit hook <name>: <message>

DELTA_PROTOCOL_CHANGED

SQLSTATE: 2D521

ProtocolChangedException: The protocol version of the Delta table has been changed by a concurrent update. <additionalInfo> <conflictingCommit>

Refer to <docLink> for more details.

DELTA_PROTOCOL_PROPERTY_NOT_INT

SQLSTATE: 42K06

Protocol property <key> needs to be an integer. Found <value>

DELTA_READ_FEATURE_PROTOCOL_REQUIRES_WRITE

SQLSTATE: KD004

Unable to upgrade only the reader protocol version to use table features. Writer protocol version must be at least <writerVersion> to proceed. Refer to <docLink> for more information on table protocol versions.

DELTA_READ_TABLE_WITHOUT_COLUMNS

SQLSTATE: 428GU

You are trying to read a Delta table <tableName> that does not have any columns.

Write some new data with the option mergeSchema = true to be able to read the table.

DELTA_REGEX_OPT_SYNTAX_ERROR

SQLSTATE: 2201B

Please recheck your syntax for ‘<regExpOption>’

DELTA_REPLACE_WHERE_IN_OVERWRITE

SQLSTATE: 42613

You can’t use replaceWhere in conjunction with an overwrite by filter.

DELTA_REPLACE_WHERE_MISMATCH

SQLSTATE: 44000

Written data does not conform to partial table overwrite condition or constraint ‘<replaceWhere>’.

<message>

DELTA_REPLACE_WHERE_WITH_DYNAMIC_PARTITION_OVERWRITE

SQLSTATE: 42613

A ‘replaceWhere’ expression and ‘partitionOverwriteMode’=’dynamic’ cannot both be set in the DataFrameWriter options.

DELTA_REPLACE_WHERE_WITH_FILTER_DATA_CHANGE_UNSET

SQLSTATE: 42613

‘replaceWhere’ cannot be used with data filters when ‘dataChange’ is set to false. Filters: <dataFilters>

DELTA_ROW_ID_ASSIGNMENT_WITHOUT_STATS

SQLSTATE: 22000

Cannot assign row IDs without row count statistics.

Collect statistics for the table by running the following code in a Scala notebook and retry:

import com.databricks.sql.transaction.tahoe.DeltaLog

import com.databricks.sql.transaction.tahoe.stats.StatisticsCollection

import org.apache.spark.sql.catalyst.TableIdentifier

val log = DeltaLog.forTable(spark, TableIdentifier(table_name))

StatisticsCollection.recompute(spark, log)
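
In the snippet above, table_name is a placeholder for your table’s name as a string; for a table named my_table you would write, hypothetically, TableIdentifier("my_table").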

DELTA_SCHEMA_CHANGED

SQLSTATE: KD007

Detected schema change:

streaming source schema: <readSchema>

data file schema: <dataSchema>

Please try restarting the query. If this issue repeats across query restarts without making progress, you have made an incompatible schema change and need to start your query from scratch using a new checkpoint directory.

DELTA_SCHEMA_CHANGED_WITH_STARTING_OPTIONS

SQLSTATE: KD007

Detected schema change in version <version>:

streaming source schema: <readSchema>

data file schema: <dataSchema>

Please try restarting the query. If this issue repeats across query restarts without making progress, you have made an incompatible schema change and need to start your query from scratch using a new checkpoint directory. If the issue persists after changing to a new checkpoint directory, you may need to change the existing ‘startingVersion’ or ‘startingTimestamp’ option to start from a version newer than <version> with a new checkpoint directory.

DELTA_SCHEMA_CHANGED_WITH_VERSION

SQLSTATE: KD007

Detected schema change in version <version>:

streaming source schema: <readSchema>

data file schema: <dataSchema>

Please try restarting the query. If this issue repeats across query restarts without making progress, you have made an incompatible schema change and need to start your query from scratch using a new checkpoint directory.

DELTA_SCHEMA_CHANGE_SINCE_ANALYSIS

SQLSTATE: KD007

The schema of your Delta table has changed in an incompatible way since your DataFrame or DeltaTable object was created. Please redefine your DataFrame or DeltaTable object.

Changes:

<schemaDiff> <legacyFlagMessage>

DELTA_SCHEMA_NOT_PROVIDED

SQLSTATE: 42908

Table schema is not provided. Please provide the schema (column definition) of the table when using REPLACE TABLE and an AS SELECT query is not provided.

DELTA_SCHEMA_NOT_SET

SQLSTATE: KD008

Table schema is not set. Write data into it or use CREATE TABLE to set the schema.

DELTA_SET_LOCATION_SCHEMA_MISMATCH

SQLSTATE: 42KD7

The schema of the new Delta location is different than the current table schema.

original schema:

<original>

destination schema:

<destination>

If this is an intended change, you may turn this check off by running:

%%sql set <config> = true

DELTA_SHALLOW_CLONE_FILE_NOT_FOUND

SQLSTATE: 42K03

File <filePath> referenced in the transaction log cannot be found. This can occur when data has been manually deleted from the file system rather than using the table DELETE statement. This table appears to be a shallow clone; if that is the case, this error can occur when the original table from which this table was cloned has deleted a file that the clone is still using. If you want any clones to be independent of the original table, use a DEEP clone instead.

DELTA_SHARING_CANNOT_MODIFY_RESERVED_RECIPIENT_PROPERTY

SQLSTATE: 42939

Pre-defined properties that start with <prefix> cannot be modified.

DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED

SQLSTATE: 42704

The data is restricted by recipient property <property> that does not apply to the current recipient in the session.

For more details see DELTA_SHARING_CURRENT_RECIPIENT_PROPERTY_UNDEFINED

DELTA_SHARING_INVALID_OP_IN_EXTERNAL_SHARED_VIEW

SQLSTATE: 42887

<operation> cannot be used in Delta Sharing views that are shared cross-account.

DELTA_SHARING_INVALID_PROVIDER_AUTH

SQLSTATE: 28000

Illegal authentication type <authenticationType> for provider <provider>.

DELTA_SHARING_INVALID_RECIPIENT_AUTH

SQLSTATE: 28000

Illegal authentication type <authenticationType> for recipient <recipient>.

DELTA_SHARING_INVALID_SHARED_DATA_OBJECT_NAME

SQLSTATE: 42K05

Invalid name to reference a <type> inside a Share. You can either use the <type>’s name inside the share following the format of [schema].[<type>], or you can also use the table’s original full name following the format of [catalog].[schema].[<type>].

If you are unsure about what name to use, you can run “SHOW ALL IN SHARE [share]” and find the name of the <type> to remove: column “name” is the <type>’s name inside the share and column “shared_object” is the <type>’s original full name.

DELTA_SHARING_MAXIMUM_RECIPIENT_TOKENS_EXCEEDED

SQLSTATE: 54000

There are more than two tokens for recipient <recipient>.

DELTA_SHARING_RECIPIENT_PROPERTY_NOT_FOUND

SQLSTATE: 42704

Recipient property <property> does not exist.

DELTA_SHARING_RECIPIENT_TOKENS_NOT_FOUND

SQLSTATE: 42704

Recipient tokens are missing for recipient <recipient>.

DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_COLUMN

SQLSTATE: 42P10

Non-partitioning column(s) <badCols> are specified for SHOW PARTITIONS

DELTA_SHOW_PARTITION_IN_NON_PARTITIONED_TABLE

SQLSTATE: 42809

SHOW PARTITIONS is not allowed on a table that is not partitioned: <tableName>

DELTA_SOURCE_IGNORE_DELETE

SQLSTATE: 0A000

Detected deleted data (for example <removedFile>) from streaming source at version <version>. This is currently not supported. If you’d like to ignore deletes, set the option ‘ignoreDeletes’ to ‘true’. The source table can be found at path <dataPath>.

DELTA_SOURCE_TABLE_IGNORE_CHANGES

SQLSTATE: 0A000

Detected a data update (for example <file>) in the source table at version <version>. This is currently not supported. If this is going to happen regularly and you are okay to skip changes, set the option ‘skipChangeCommits’ to ‘true’. If you would like the data update to be reflected, please restart this query with a fresh checkpoint directory or do a full refresh if you are using DLT. If you need to handle these changes, please switch to MVs. The source table can be found at path <dataPath>.

DELTA_STARTING_VERSION_AND_TIMESTAMP_BOTH_SET

SQLSTATE: 42613

Please either provide ‘<version>’ or ‘<timestamp>’.

DELTA_STATS_COLLECTION_COLUMN_NOT_FOUND

SQLSTATE: 42000

<statsType> stats not found for column in Parquet metadata: <columnPath>.

DELTA_STREAMING_CANNOT_CONTINUE_PROCESSING_POST_SCHEMA_EVOLUTION

SQLSTATE: KD002

We’ve detected one or more non-additive schema change(s) (<opType>) between Delta version <previousSchemaChangeVersion> and <currentSchemaChangeVersion> in the Delta streaming source.

Please check if you want to manually propagate the schema change(s) to the sink table before we proceed with stream processing using the finalized schema at <currentSchemaChangeVersion>.

Once you have fixed the schema of the sink table, or have decided there is no need to fix it, you can set one of the following SQL configurations to unblock the non-additive schema change(s) and continue stream processing.

To unblock for this particular stream just for this series of schema change(s): set <allowCkptVerKey> = <allowCkptVerValue>.

To unblock for this particular stream: set <allowCkptKey> = <allowCkptValue>

To unblock for all streams: set <allowAllKey> = <allowAllValue>.

Alternatively if applicable, you may replace the <allowAllMode> with <opSpecificMode> in the SQL conf to unblock stream for just this schema change type.

DELTA_STREAMING_CHECK_COLUMN_MAPPING_NO_SNAPSHOT

SQLSTATE: KD002

Failed to obtain Delta log snapshot for the start version when checking column mapping schema changes. Please choose a different start version, or force enable streaming read at your own risk by setting ‘<config>’ to ‘true’.

DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE

SQLSTATE: 42KD4

Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename, drop, or data type changes).

For further information and possible next steps to resolve this issue, please review the documentation at <docLink>

Read schema: <readSchema>. Incompatible data schema: <incompatibleSchema>.

DELTA_STREAMING_INCOMPATIBLE_SCHEMA_CHANGE_USE_SCHEMA_LOG

SQLSTATE: 42KD4

Streaming read is not supported on tables with read-incompatible schema changes (e.g. rename, drop, or data type changes).

Please provide a ‘schemaTrackingLocation’ to enable non-additive schema evolution for Delta stream processing.

See <docLink> for more details.

Read schema: <readSchema>. Incompatible data schema: <incompatibleSchema>.
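
The schema tracking location is passed as a reader option and, per DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT below, must sit under the stream’s checkpoint location. A minimal PySpark sketch with hypothetical paths and table names:

df = (spark.readStream
      .format("delta")
      .option("schemaTrackingLocation", "/checkpoints/my_stream/_schema_log")  # hypothetical, under the checkpoint
      .load("/delta/source_table"))                                            # hypothetical path
(df.writeStream
   .option("checkpointLocation", "/checkpoints/my_stream")  # hypothetical
   .toTable("target_table"))                                # hypothetical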

DELTA_STREAMING_METADATA_EVOLUTION

SQLSTATE: 22000

The schema, table configuration or protocol of your Delta table has changed during streaming.

The schema or metadata tracking log has been updated.

Please restart the stream to continue processing using the updated metadata.

Updated schema: <schema>.

Updated table configurations: <config>.

Updated table protocol: <protocol>

DELTA_STREAMING_SCHEMA_EVOLUTION_UNSUPPORTED_ROW_FILTER_COLUMN_MASKS

SQLSTATE: 22000

Streaming from source table <tableId> with schema tracking does not support row filters or column masks.

Please drop the row filters or column masks, or disable schema tracking.

DELTA_STREAMING_SCHEMA_LOCATION_CONFLICT

SQLSTATE: 22000

Detected conflicting schema location ‘<loc>’ while streaming from the table located at ‘<table>’.

Another stream may be reusing the same schema location, which is not allowed.

Please provide a new unique schemaTrackingLocation path or streamingSourceTrackingId as a reader option for one of the streams from this table.

DELTA_STREAMING_SCHEMA_LOCATION_NOT_UNDER_CHECKPOINT

SQLSTATE: 22000

Schema location ‘<schemaTrackingLocation>’ must be placed under checkpoint location ‘<checkpointLocation>’.

DELTA_STREAMING_SCHEMA_LOG_DESERIALIZE_FAILED

SQLSTATE: 22000

Incomplete log file in the Delta streaming source schema log at ‘<location>’.

The schema log may have been corrupted. Please pick a new schema location.

DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_DELTA_TABLE_ID

SQLSTATE: 22000

Detected incompatible Delta table id when trying to read Delta stream.

Persisted table id: <persistedId>, Table id: <tableId>

The schema log might have been reused. Please pick a new schema location.

DELTA_STREAMING_SCHEMA_LOG_INCOMPATIBLE_PARTITION_SCHEMA

SQLSTATE: 22000

Detected incompatible partition schema when trying to read Delta stream.

Persisted schema: <persistedSchema>, Delta partition schema: <partitionSchema>

Please pick a new schema location to reinitialize the schema log if you have manually changed the table’s partition schema recently.

DELTA_STREAMING_SCHEMA_LOG_INIT_FAILED_INCOMPATIBLE_METADATA

SQLSTATE: 22000

We could not initialize the Delta streaming source schema log because we detected an incompatible schema or protocol change while serving a streaming batch from table version <a> to <b>.

DELTA_STREAMING_SCHEMA_LOG_PARSE_SCHEMA_FAILED

SQLSTATE: 22000

Failed to parse the schema from the Delta streaming source schema log.

The schema log may have been corrupted. Please pick a new schema location.

DELTA_TABLE_ALREADY_CONTAINS_CDC_COLUMNS

SQLSTATE: 42711

Unable to enable Change Data Capture on the table. The table already contains reserved columns <columnList> that will be used internally as metadata for the table’s Change Data Feed. To enable Change Data Feed on the table, rename or drop these columns.

DELTA_TABLE_ALREADY_EXISTS

SQLSTATE: 42P07

Table <tableName> already exists.

DELTA_TABLE_FOR_PATH_UNSUPPORTED_HADOOP_CONF

SQLSTATE: 0AKDC

Currently, DeltaTable.forPath only supports Hadoop configuration keys starting with <allowedPrefixes>, but got <unsupportedOptions>.
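
In the Python API, per-call Hadoop options are passed as a dict whose keys must start with one of the allowed prefixes (typically fs.). A hedged sketch, assuming delta-spark 2.2+ with hypothetical account and path values:

from delta.tables import DeltaTable

# Only keys starting with an allowed prefix (e.g. "fs.") are accepted here
hadoop_conf = {"fs.azure.account.key.myaccount.dfs.core.windows.net": "<access-key>"}  # hypothetical
dt = DeltaTable.forPath(spark, "abfss://data@myaccount.dfs.core.windows.net/tbl", hadoop_conf)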

DELTA_TABLE_ID_MISMATCH

SQLSTATE: KD007

The Delta table at <tableLocation> has been replaced while this command was using the table.

Table id was <oldId> but is now <newId>.

Please retry the current command to ensure it reads a consistent view of the table.

DELTA_TABLE_LOCATION_MISMATCH

SQLSTATE: 42613

The location of the existing table <tableName> is <existingTableLocation>. It doesn’t match the specified location <tableLocation>.

DELTA_TABLE_NOT_FOUND

SQLSTATE: 42P01

Delta table <tableName> doesn’t exist.

DELTA_TABLE_NOT_SUPPORTED_IN_OP

SQLSTATE: 42809

Table is not supported in <operation>. Please use a path instead.

DELTA_TABLE_ONLY_OPERATION

SQLSTATE: 0AKDD

<tableName> is not a Delta table. <operation> is only supported for Delta tables.

DELTA_TARGET_TABLE_FINAL_SCHEMA_EMPTY

SQLSTATE: 428GU

Target table final schema is empty.

DELTA_TIMESTAMP_GREATER_THAN_COMMIT

SQLSTATE: 42816

The provided timestamp (<providedTimestamp>) is after the latest version available to this table (<tableName>). Please use a timestamp before or at <maximumTimestamp>.
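
For example, a time travel query must use a timestamp at or before the latest commit (the table name and timestamp below are hypothetical):

# Use a timestamp no later than <maximumTimestamp>
spark.sql("SELECT * FROM events TIMESTAMP AS OF '2024-01-01 00:00:00'")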

DELTA_TIMESTAMP_INVALID

SQLSTATE: 42816

The provided timestamp (<expr>) cannot be converted to a valid timestamp.

DELTA_TIME_TRAVEL_INVALID_BEGIN_VALUE

SQLSTATE: 42604

<timeTravelKey> needs to be a valid begin value.

DELTA_TRUNCATED_TRANSACTION_LOG

SQLSTATE: 42K03

<path>: Unable to reconstruct state at version <version> as the transaction log has been truncated due to manual deletion or the log retention policy (<logRetentionKey>=<logRetention>) and checkpoint retention policy (<checkpointRetentionKey>=<checkpointRetention>)

DELTA_TRUNCATE_TABLE_PARTITION_NOT_SUPPORTED

SQLSTATE: 0AKDC

Operation not allowed: TRUNCATE TABLE on Delta tables does not support partition predicates; use DELETE to delete specific partitions or rows.

DELTA_UDF_IN_GENERATED_COLUMN

SQLSTATE: 42621

Found <udfExpr>. A generated column cannot use a user-defined function

DELTA_UNEXPECTED_ACTION_EXPRESSION

SQLSTATE: 42601

Unexpected action expression <expression>.

DELTA_UNEXPECTED_NUM_PARTITION_COLUMNS_FROM_FILE_NAME

SQLSTATE: KD009

Expecting <expectedColsSize> partition column(s): <expectedCols>, but found <parsedColsSize> partition column(s): <parsedCols> from parsing the file name: <path>

DELTA_UNEXPECTED_PARTIAL_SCAN

SQLSTATE: KD00A

Expected a full scan of Delta sources, but found a partial scan. Path: <path>

DELTA_UNEXPECTED_PARTITION_COLUMN_FROM_FILE_NAME

SQLSTATE: KD009

Expecting partition column <expectedCol>, but found partition column <parsedCol> from parsing the file name: <path>

DELTA_UNEXPECTED_PARTITION_SCHEMA_FROM_USER

SQLSTATE: KD009

CONVERT TO DELTA was called with a partition schema different from the partition schema inferred from the catalog. Please avoid providing the schema so that the partition schema can be chosen from the catalog.

catalog partition schema:

<catalogPartitionSchema>

provided partition schema:

<userPartitionSchema>

DELTA_UNIFORM_COMPATIBILITY_LOCATION_CANNOT_BE_CHANGED

SQLSTATE: 0AKDC

delta.universalFormat.compatibility.location cannot be changed.

DELTA_UNIFORM_COMPATIBILITY_LOCATION_NOT_REGISTERED

SQLSTATE: 42K0I

delta.universalFormat.compatibility.location is not registered in the catalog.

DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION

SQLSTATE: 42601

Missing or invalid location for Uniform compatibility format. Please set an empty directory for delta.universalFormat.compatibility.location.

Failed reason:

For more details see DELTA_UNIFORM_COMPATIBILITY_MISSING_OR_INVALID_LOCATION

DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION

SQLSTATE: KD00E

Read Iceberg with Delta Uniform has failed.

For more details see DELTA_UNIFORM_ICEBERG_INGRESS_VIOLATION

DELTA_UNIFORM_INGRESS_NOT_SUPPORTED

SQLSTATE: 0A000

Create or Refresh Uniform ingress table is not supported.

DELTA_UNIFORM_INGRESS_NOT_SUPPORTED_FORMAT

SQLSTATE: 0AKDC

Format <fileFormat> is not supported. Only Iceberg is supported as the original file format.

DELTA_UNIFORM_NOT_SUPPORTED

SQLSTATE: 0AKDC

Universal Format is only supported on Unity Catalog tables.

DELTA_UNIFORM_REFRESH_NOT_SUPPORTED

SQLSTATE: 0AKDC

REFRESH identifier SYNC UNIFORM is not supported for reason:

For more details see DELTA_UNIFORM_REFRESH_NOT_SUPPORTED

DELTA_UNIFORM_REFRESH_NOT_SUPPORTED_FOR_MANAGED_ICEBERG_TABLE_WITH_METADATA_PATH

SQLSTATE: 0AKDC

REFRESH TABLE with METADATA_PATH is not supported for managed Iceberg tables

DELTA_UNIVERSAL_FORMAT_CONVERSION_FAILED

SQLSTATE: KD00E

Failed to convert the table version <version> to the universal format <format>. <message>

DELTA_UNIVERSAL_FORMAT_VIOLATION

SQLSTATE: KD00E

The validation of Universal Format (<format>) has failed: <violation>

DELTA_UNKNOWN_CONFIGURATION

SQLSTATE: F0000

Unknown configuration was specified: <config>

DELTA_UNKNOWN_PRIVILEGE

SQLSTATE: 42601

Unknown privilege: <privilege>

DELTA_UNKNOWN_READ_LIMIT

SQLSTATE: 42601

Unknown ReadLimit: <limit>

DELTA_UNRECOGNIZED_COLUMN_CHANGE

SQLSTATE: 42601

Unrecognized column change <otherClass>. You may be running an out-of-date Delta Lake version.

DELTA_UNRECOGNIZED_INVARIANT

SQLSTATE: 56038

Unrecognized invariant. Please upgrade your Spark version.

DELTA_UNRECOGNIZED_LOGFILE

SQLSTATE: KD00B

Unrecognized log file <fileName>

DELTA_UNSET_NON_EXISTENT_PROPERTY

SQLSTATE: 42616

Attempted to unset non-existent property ‘<property>’ in table <tableName>

DELTA_UNSUPPORTED_ABS_PATH_ADD_FILE

SQLSTATE: 0AKDC

<path> does not support adding files with an absolute path

DELTA_UNSUPPORTED_ALTER_TABLE_CHANGE_COL_OP

SQLSTATE: 0AKDC

ALTER TABLE CHANGE COLUMN is not supported for changing column <fieldPath> from <oldField> to <newField>

DELTA_UNSUPPORTED_ALTER_TABLE_REPLACE_COL_OP

SQLSTATE: 0AKDC

Unsupported ALTER TABLE REPLACE COLUMNS operation. Reason: <details>

Failed to change schema from:

<oldSchema>

to:

<newSchema>

DELTA_UNSUPPORTED_CLONE_REPLACE_SAME_TABLE

SQLSTATE: 0AKDC

You tried to REPLACE an existing table (<tableName>) with CLONE. This operation is unsupported. Try a different target for CLONE or delete the table at the current target.

DELTA_UNSUPPORTED_COLUMN_MAPPING_MODE_CHANGE

SQLSTATE: 0AKDC

Changing column mapping mode from ‘<oldMode>’ to ‘<newMode>’ is not supported.

DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL

SQLSTATE: KD004

Your current table protocol version does not support changing column mapping modes using <config>.

Required Delta protocol version for column mapping:

<requiredVersion>

Your table’s current Delta protocol version:

<currentVersion>

<advice>

DELTA_UNSUPPORTED_COLUMN_MAPPING_SCHEMA_CHANGE

SQLSTATE: 0AKDC

Schema change is detected:

old schema:

<oldTableSchema>

new schema:

<newTableSchema>

Schema changes are not allowed during the change of column mapping mode.

DELTA_UNSUPPORTED_COLUMN_MAPPING_WRITE

SQLSTATE: 0AKDC

Writing data with column mapping mode is not supported.

DELTA_UNSUPPORTED_COLUMN_TYPE_IN_BLOOM_FILTER

SQLSTATE: 0AKDC

Creating a bloom filter index on a column with type <dataType> is unsupported: <columnName>

DELTA_UNSUPPORTED_COMMENT_MAP_ARRAY

SQLSTATE: 0AKDC

Can’t add a comment to <fieldPath>. Adding a comment to a map key/value or array element is not supported.

DELTA_UNSUPPORTED_DATA_TYPES

SQLSTATE: 0AKDC

Found columns using unsupported data types: <dataTypeList>. You can set ‘<config>’ to ‘false’ to disable the type check. Disabling this type check may allow users to create unsupported Delta tables and should only be used when trying to read/write legacy tables.

DELTA_UNSUPPORTED_DATA_TYPE_IN_GENERATED_COLUMN

SQLSTATE: 42621

<dataType> cannot be the result of a generated column

DELTA_UNSUPPORTED_DEEP_CLONE

SQLSTATE: 0A000

Deep clone is not supported for this Delta version.

DELTA_UNSUPPORTED_DESCRIBE_DETAIL_VIEW

SQLSTATE: 42809

<view> is a view. DESCRIBE DETAIL is only supported for tables.

DELTA_UNSUPPORTED_DROP_CLUSTERING_COLUMN

SQLSTATE: 0AKDC

Dropping clustering columns (<columnList>) is not allowed.

DELTA_UNSUPPORTED_DROP_COLUMN

SQLSTATE: 0AKDC

DROP COLUMN is not supported for your Delta table. <advice>

DELTA_UNSUPPORTED_DROP_NESTED_COLUMN_FROM_NON_STRUCT_TYPE

SQLSTATE: 0AKDC

Can only drop nested columns from StructType. Found <struct>

DELTA_UNSUPPORTED_DROP_PARTITION_COLUMN

SQLSTATE: 0AKDC

Dropping partition columns (<columnList>) is not allowed.

DELTA_UNSUPPORTED_EXPRESSION

SQLSTATE: 0A000

Unsupported expression type (<expType>) for <causedBy>. The supported types are [<supportedTypes>].

DELTA_UNSUPPORTED_EXPRESSION_GENERATED_COLUMN

SQLSTATE: 42621

<expression> cannot be used in a generated column

DELTA_UNSUPPORTED_FEATURES_FOR_READ

SQLSTATE: 56038

Unsupported Delta read feature: table “<tableNameOrPath>” requires reader table feature(s) that are unsupported by this version of Databricks: <unsupported>. Please refer to <link> for more information on Delta Lake feature compatibility.

DELTA_UNSUPPORTED_FEATURES_FOR_WRITE

SQLSTATE: 56038

Unsupported Delta write feature: table “<tableNameOrPath>” requires writer table feature(s) that are unsupported by this version of Databricks: <unsupported>. Please refer to <link> for more information on Delta Lake feature compatibility.

DELTA_UNSUPPORTED_FEATURES_IN_CONFIG

SQLSTATE: 56038

Table feature(s) configured in the following Spark configs or Delta table properties are not recognized by this version of Databricks: <configs>.

DELTA_UNSUPPORTED_FEATURE_STATUS

SQLSTATE: 0AKDE

Expecting the status for table feature <feature> to be “supported”, but got “<status>”.

DELTA_UNSUPPORTED_FIELD_UPDATE_NON_STRUCT

SQLSTATE: 0AKDC

Updating nested fields is only supported for StructType, but you are trying to update a field of <columnName>, which is of type: <dataType>.

DELTA_UNSUPPORTED_FSCK_WITH_DELETION_VECTORS

SQLSTATE: 0A000

The ‘FSCK REPAIR TABLE’ command is not supported on table versions with missing deletion vector files.

Please contact support.

DELTA_UNSUPPORTED_GENERATE_WITH_DELETION_VECTORS

SQLSTATE: 0A000

The ‘GENERATE symlink_format_manifest’ command is not supported on table versions with deletion vectors.

In order to produce a version of the table without deletion vectors, run ‘REORG TABLE table APPLY (PURGE)’. Then re-run the ‘GENERATE’ command.

Make sure that no concurrent transactions are adding deletion vectors again between REORG and GENERATE.

If you need to generate manifests regularly, or you cannot prevent concurrent transactions, consider disabling deletion vectors on this table using ‘ALTER TABLE table SET TBLPROPERTIES (delta.enableDeletionVectors = false)’.
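
The remediation sequence from the message can be run as SQL, e.g. via PySpark (the table name events is a hypothetical example):

# 1. Rewrite files so the current table version contains no deletion vectors
spark.sql("REORG TABLE events APPLY (PURGE)")
# 2. Re-run manifest generation on the purged version
spark.sql("GENERATE symlink_format_manifest FOR TABLE events")
# 3. Optionally disable deletion vectors if manifests must be generated regularly
spark.sql("ALTER TABLE events SET TBLPROPERTIES (delta.enableDeletionVectors = false)")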

DELTA_UNSUPPORTED_INVARIANT_NON_STRUCT

SQLSTATE: 0AKDC

Invariants on nested fields other than StructTypes are not supported.

DELTA_UNSUPPORTED_IN_SUBQUERY

SQLSTATE: 0AKDC

In subquery is not supported in the <operation> condition.

DELTA_UNSUPPORTED_LIST_KEYS_WITH_PREFIX

SQLSTATE: 0A000

listKeywithPrefix not available

DELTA_UNSUPPORTED_MANIFEST_GENERATION_WITH_COLUMN_MAPPING

SQLSTATE: 0AKDC

Manifest generation is not supported for tables that leverage column mapping, as external readers cannot read these Delta tables. See Delta documentation for more details.

DELTA_UNSUPPORTED_MERGE_SCHEMA_EVOLUTION_WITH_CDC

SQLSTATE: 0A000

MERGE INTO operations with schema evolution do not currently support writing CDC output.

DELTA_UNSUPPORTED_MULTI_COL_IN_PREDICATE

SQLSTATE: 0AKDC

Multi-column In predicates are not supported in the <operation> condition.

DELTA_UNSUPPORTED_NESTED_COLUMN_IN_BLOOM_FILTER

SQLSTATE: 0AKDC

Creating a bloom filter index on a nested column is currently unsupported: <columnName>

DELTA_UNSUPPORTED_NESTED_FIELD_IN_OPERATION

SQLSTATE: 0AKDC

Nested field is not supported in the <operation> (field = <fieldName>).

DELTA_UNSUPPORTED_NON_EMPTY_CLONE

SQLSTATE: 0AKDC

The clone destination table is non-empty. Please TRUNCATE or DELETE FROM the table before running CLONE.

DELTA_UNSUPPORTED_OUTPUT_MODE

SQLSTATE: 0AKDC

Data source <dataSource> does not support <mode> output mode

DELTA_UNSUPPORTED_PARTITION_COLUMN_IN_BLOOM_FILTER

SQLSTATE: 0AKDC

Creating a bloom filter index on a partitioning column is unsupported: <columnName>

DELTA_UNSUPPORTED_RENAME_COLUMN

SQLSTATE: 0AKDC

Column rename is not supported for your Delta table. <advice>

DELTA_UNSUPPORTED_SCHEMA_DURING_READ

SQLSTATE: 0AKDC

Delta does not support specifying the schema at read time.

DELTA_UNSUPPORTED_SORT_ON_BUCKETED_TABLES

SQLSTATE: 0A000

SORTED BY is not supported for Delta bucketed tables

DELTA_UNSUPPORTED_SOURCE

SQLSTATE: 0AKDD

<operation> destination only supports Delta sources.

<plan>

DELTA_UNSUPPORTED_STATIC_PARTITIONS

SQLSTATE: 0AKDD

Specifying static partitions in the partition spec is currently not supported during inserts

DELTA_UNSUPPORTED_STRATEGY_NAME

SQLSTATE: 22023

Unsupported strategy name: <strategy>

DELTA_UNSUPPORTED_SUBQUERY

SQLSTATE: 0AKDC

Subqueries are not supported in the <operation> (condition = <cond>).

DELTA_UNSUPPORTED_SUBQUERY_IN_PARTITION_PREDICATES

SQLSTATE: 0AKDC

Subquery is not supported in partition predicates.

DELTA_UNSUPPORTED_TIME_TRAVEL_MULTIPLE_FORMATS

SQLSTATE: 42613

Cannot specify time travel in multiple formats.

DELTA_UNSUPPORTED_TIME_TRAVEL_VIEWS

SQLSTATE: 0AKDC

Cannot time travel views, subqueries, streams or change data feed queries.

DELTA_UNSUPPORTED_TRUNCATE_SAMPLE_TABLES

SQLSTATE: 0A000

Truncating sample tables is not supported

DELTA_UNSUPPORTED_TYPE_CHANGE_IN_SCHEMA

SQLSTATE: 0AKDC

Unable to operate on this table because an unsupported type change was applied. Field <fieldName> was changed from <fromType> to <toType>.

DELTA_UNSUPPORTED_VACUUM_SPECIFIC_PARTITION

SQLSTATE: 0AKDC

Please provide the base path (<baseDeltaPath>) when vacuuming Delta tables. Vacuuming specific partitions is currently not supported.

DELTA_UNSUPPORTED_WRITES_STAGED_TABLE

SQLSTATE: 42807

Table implementation does not support writes: <tableName>

DELTA_UNSUPPORTED_WRITES_WITHOUT_COORDINATOR

SQLSTATE: 0AKDC

You are trying to perform writes on a table which has been registered with the commit coordinator <coordinatorName>. However, no implementation of this coordinator is available in the current environment and writes without coordinators are not allowed.

DELTA_UNSUPPORTED_WRITE_SAMPLE_TABLES

SQLSTATE: 0A000

Writing to sample tables is not supported

DELTA_UPDATE_SCHEMA_MISMATCH_EXPRESSION

SQLSTATE: 42846

Cannot cast <fromCatalog> to <toCatalog>. All nested columns must match.

DELTA_VACUUM_COPY_INTO_STATE_FAILED

SQLSTATE: 22000

VACUUM on data files succeeded, but COPY INTO state garbage collection failed.

DELTA_VERSIONS_NOT_CONTIGUOUS

SQLSTATE: KD00C

Versions (<versionList>) are not contiguous.

For more details see DELTA_VERSIONS_NOT_CONTIGUOUS

DELTA_VIOLATE_CONSTRAINT_WITH_VALUES

SQLSTATE: 23001

CHECK constraint <constraintName> <expression> violated by row with values:

<values>

DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED

SQLSTATE: 0A000

The validation of the properties of table <table> has been violated:

For more details see DELTA_VIOLATE_TABLE_PROPERTY_VALIDATION_FAILED

DELTA_WRITE_INTO_VIEW_NOT_SUPPORTED

SQLSTATE: 0A000

<viewIdentifier> is a view. You may not write data into a view.

DELTA_ZORDERING_COLUMN_DOES_NOT_EXIST

SQLSTATE: 42703

Z-Ordering column <columnName> does not exist in data schema.

DELTA_ZORDERING_ON_COLUMN_WITHOUT_STATS

SQLSTATE: KD00D

Z-Ordering on <cols> will be ineffective, because we currently do not collect stats for these columns. Please refer to <link> for more information on data skipping and z-ordering. You can disable this check by setting ‘%%sql set <zorderColStatKey> = false’

DELTA_ZORDERING_ON_PARTITION_COLUMN

SQLSTATE: 42P10

<colName> is a partition column. Z-Ordering can only be performed on data columns

Delta Sharing

DELTA_SHARING_ACTIVATION_NONCE_DOES_NOT_EXIST

SQLSTATE: none assigned

Activation nonce not found. The activation link you used is invalid or has expired. Regenerate the activation link and try again.

DELTA_SHARING_CROSS_REGION_SHARE_UNSUPPORTED

SQLSTATE: none assigned

Sharing between <regionHint> regions and regions outside of it is not supported.

DELTA_SHARING_GET_RECIPIENT_PROPERTIES_INVALID_DEPENDENT

SQLSTATE: none assigned

The view defined with the current_recipient function is for sharing only and can only be queried from the data recipient side. The provided securable with id <securableId> is not a Delta Sharing View.

DELTA_SHARING_MUTABLE_SECURABLE_KIND_NOT_SUPPORTED

SQLSTATE: none assigned

The provided securable kind <securableKind> does not support mutability in Delta Sharing.

DELTA_SHARING_ROTATE_TOKEN_NOT_AUTHORIZED_FOR_MARKETPLACE

SQLSTATE: none assigned

The provided securable kind <securableKind> does not support the rotate token action initiated by the Marketplace service.

DS_AUTH_TYPE_NOT_AVAILABLE

SQLSTATE: none assigned

<dsError>: Authentication type not available in provider entity <providerEntity>.

DS_CDF_NOT_ENABLED

SQLSTATE: none assigned

<dsError>: Unable to access change data feed for <tableName>. CDF is not enabled on the original Delta table. Please contact your data provider.

DS_CDF_NOT_SHARED

SQLSTATE: none assigned

<dsError>: Unable to access change data feed for <tableName>. CDF is not shared on the table. Please contact your data provider.

DS_CDF_RPC_INVALID_PARAMETER

SQLSTATE: none assigned

<dsError>: <message>

DS_CLIENT_AUTH_ERROR_FOR_DB_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_CLIENT_ERROR_FOR_DB_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_CLOUD_VENDOR_UNAVAILABLE

SQLSTATE: none assigned

<dsError>: Cloud vendor is temporarily unavailable for <rpcName>, please retry.<traceId>

DS_DATA_MATERIALIZATION_COMMAND_FAILED

SQLSTATE: none assigned

<dsError>: Data materialization task run <runId> from org <orgId> failed at command <command>

DS_DATA_MATERIALIZATION_COMMAND_NOT_SUPPORTED

SQLSTATE: none assigned

<dsError>: Data materialization task run <runId> from org <orgId> does not support command <command>

DS_DATA_MATERIALIZATION_NO_VALID_NAMESPACE

SQLSTATE: none assigned

<dsError>: Could not find valid namespace to create materialization for <tableName>. Please contact your data provider to fix this.

DS_DATA_MATERIALIZATION_RUN_DOES_NOT_EXIST

SQLSTATE: none assigned

<dsError>: Data materialization task run <runId> from org <orgId> does not exist

DS_DELTA_ILLEGAL_STATE

SQLSTATE: none assigned

<dsError>: <message>

DS_DELTA_MISSING_CHECKPOINT_FILES

SQLSTATE: none assigned

<dsError>: Couldn’t find all part files of the checkpoint at version: <version>. <suggestion>

DS_DELTA_NULL_POINTER

SQLSTATE: none assigned

<dsError>: <message>

DS_DELTA_RUNTIME_EXCEPTION

SQLSTATE: none assigned

<dsError>: <message>

DS_EXPIRE_TOKEN_NOT_AUTHORIZED_FOR_MARKETPLACE

SQLSTATE: none assigned

<dsError>: The provided securable kind <securableKind> does not support the expire token action initiated by the Marketplace service.

DS_FAILED_REQUEST_TO_OPEN_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_FILE_LISTING_EXCEPTION

SQLSTATE: none assigned

<dsError>: <storage>: <message>

DS_FILE_SIGNING_EXCEPTION

SQLSTATE: none assigned

<dsError>: <message>

DS_FLAKY_NETWORK_CONNECTION

SQLSTATE: none assigned

<dsError>: Network connection is flaky for <rpcName>, please retry.<traceId>

DS_FOREIGN_TABLE_METADATA_REFRESH_FAILED

SQLSTATE: none assigned

<dsError>: <message>

DS_HADOOP_CONFIG_NOT_SET

SQLSTATE: none assigned

<dsError>: <key> is not set by the caller.

DS_ILLEGAL_STATE

SQLSTATE: none assigned

<dsError>: <message>

DS_INTERNAL_ERROR_FROM_DB_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_AZURE_PATH

SQLSTATE: none assigned

<dsError>: Invalid Azure path: <path>.

DS_INVALID_DELTA_ACTION_OPERATION

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_FIELD

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_ITERATOR_OPERATION

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_PARTITION_SPEC

SQLSTATE: none assigned

<dsError>: <message>

DS_INVALID_RESPONSE_FROM_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_MATERIALIZATION_QUERY_FAILED

SQLSTATE: none assigned

<dsError>: Query failed for <schema>.<table> from Share <share>.

DS_MATERIALIZATION_QUERY_TIMEDOUT

SQLSTATE: none assigned

<dsError>: Query timed out for <schema>.<table> from Share <share> after <timeoutInSec> seconds.

DS_MISSING_IDEMPOTENCY_KEY

SQLSTATE: none assigned

<dsError>: An idempotency key is required when querying <schema>.<table> from Share <share> asynchronously.

DS_MORE_THAN_ONE_RPC_PARAMETER_SET

SQLSTATE: none assigned

<dsError>: Please only provide one of: <parameters>.

DS_NO_METASTORE_ASSIGNED

SQLSTATE: none assigned

<dsError>: No metastore assigned for the current workspace (workspaceId: <workspaceId>).

DS_PAGINATION_AND_QUERY_ARGS_MISMATCH

SQLSTATE: none assigned

<dsError>: Pagination or query arguments mismatch.

DS_PARTITION_COLUMNS_RENAMED

SQLSTATE: none assigned

<dsError>: Partition column [<renamedColumns>] renamed on the shared table. Please contact your data provider to fix this.

DS_QUERY_BEFORE_START_VERSION

SQLSTATE: none assigned

<dsError>: You can only query table data since version <startVersion>.

DS_QUERY_TIMEOUT_ON_SERVER

SQLSTATE: none assigned

<dsError>: A timeout occurred when processing <queryType> on <tableName> after <numActions> updates across <numIter> iterations.<progressUpdate> <suggestion> <traceId>

DS_RATE_LIMIT_ON_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_RECIPIENT_RPC_INVALID_PARAMETER

SQLSTATE: none assigned

<dsError>: <message>

DS_RESOURCE_ALREADY_EXIST_ON_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_RESOURCE_EXHAUSTED

SQLSTATE: none assigned

<dsError>: The <resource> exceeded limit: [<limitSize>]<suggestion>.<traceId>

DS_RESOURCE_NOT_FOUND_ON_DS_SERVER

SQLSTATE: none assigned

<dsError>: <message>

DS_SYSTEM_WORKSPACE_GROUP_PERMISSION_UNSUPPORTED

SQLSTATE: none assigned

Cannot grant privileges on <securableType> to system generated group <principal>.

DS_TIME_TRAVEL_NOT_PERMITTED

SQLSTATE: none assigned

<dsError>: Time travel query is not permitted unless history is shared on <tableName>. Please contact your data provider.

DS_UNAUTHORIZED

SQLSTATE: none assigned

<dsError>: Unauthorized.

DS_UNAUTHORIZED_D2O_OIDC_RECIPIENT

SQLSTATE: none assigned

<dsError>: Unauthorized D2O OIDC Recipient: <message>.

DS_UNKNOWN_EXCEPTION

SQLSTATE: none assigned

<dsError>: <traceId>

DS_UNKNOWN_QUERY_ID

SQLSTATE: none assigned

<dsError>: Unknown query id <queryID> for <schema>.<table> from Share <share>.

DS_UNKNOWN_QUERY_STATUS

SQLSTATE: none assigned

<dsError>: Unknown query status for query id <queryID> for <schema>.<table> from Share <share>.

DS_UNKNOWN_RPC

SQLSTATE: none assigned

<dsError>: Unknown rpc <rpcName>.

DS_UNSUPPORTED_DELTA_READER_VERSION

SQLSTATE: none assigned

<dsError>: Delta protocol reader version <tableReaderVersion> is higher than <supportedReaderVersion> and cannot be supported in the Delta Sharing server.

DS_UNSUPPORTED_DELTA_TABLE_FEATURES

SQLSTATE: none assigned

<dsError>: Table features <tableFeatures> are found in table<versionStr> <historySharingStatusStr> <optionStr>

DS_UNSUPPORTED_OPERATION

SQLSTATE: none assigned

<dsError>: <message>

DS_UNSUPPORTED_STORAGE_SCHEME

SQLSTATE: none assigned

<dsError>: Unsupported storage scheme: <scheme>.

DS_UNSUPPORTED_TABLE_TYPE

SQLSTATE: none assigned

<dsError>: Could not retrieve <schema>.<table> from Share <share> because table with type [<tableType>] is currently unsupported in Delta Sharing protocol.

DS_USER_CONTEXT_ERROR

SQLSTATE: none assigned

<dsError>: <message>

DS_VIEW_SHARING_FUNCTIONS_NOT_ALLOWED

SQLSTATE: none assigned

<dsError>: The following function(s): <functions> are not allowed in the view sharing query.

DS_WORKSPACE_DOMAIN_NOT_SET

SQLSTATE: none assigned

<dsError>: Workspace <workspaceId> domain is not set.

DS_WORKSPACE_NOT_FOUND

SQLSTATE: none assigned

<dsError>: Workspace <workspaceId> was not found.

Autoloader

CF_ADD_NEW_NOT_SUPPORTED

SQLSTATE: 0A000

Schema evolution mode <addNewColumnsMode> is not supported when the schema is specified. To use this mode, you can provide the schema through cloudFiles.schemaHints instead.
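
Schema hints can stand in for a full schema while keeping addNewColumns evolution available. A minimal PySpark sketch (format, paths, and hint columns are hypothetical):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", "/checkpoints/my_stream/_schema")  # hypothetical
      .option("cloudFiles.schemaHints", "id bigint, ts timestamp")            # hypothetical hints
      .load("/landing/json"))                                                 # hypothetical path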

CF_AMBIGUOUS_AUTH_OPTIONS_ERROR

SQLSTATE: 42000

Found notification-setup authentication options for the (default) directory listing mode:

<options>

If you wish to use the file notification mode, please explicitly set:

.option(“cloudFiles.<useNotificationsKey>”, “true”)

Alternatively, if you want to skip the validation of your options and ignore these authentication options, you can set:

.option(“cloudFiles.<validateOptionsKey>”, “false”)

CF_AMBIGUOUS_INCREMENTAL_LISTING_MODE_ERROR

SQLSTATE: 42000

Incremental listing mode (cloudFiles.<useIncrementalListingKey>) and file notification (cloudFiles.<useNotificationsKey>) have been enabled at the same time. Please make sure that you select only one.

CF_AZURE_STORAGE_SUFFIXES_REQUIRED

SQLSTATE: 42000

adlsBlobSuffix and adlsDfsSuffix are required for Azure

CF_BUCKET_MISMATCH

SQLSTATE: 22000

The <storeType> in the file event <fileEvent> is different from what the source expects: <source>.

CF_CANNOT_EVOLVE_SCHEMA_LOG_EMPTY

SQLSTATE: 22000

Cannot evolve schema when the schema log is empty. Schema log location: <logPath>

CF_CANNOT_PARSE_QUEUE_MESSAGE

SQLSTATE: 22000

Cannot parse the following queue message: <message>

CF_CANNOT_RESOLVE_CONTAINER_NAME

SQLSTATE: 22000

Cannot resolve container name from path: <path>, resolved uri: <uri>

CF_CANNOT_RUN_DIRECTORY_LISTING

SQLSTATE: 22000

Cannot run directory listing when there is an async backfill thread running

CF_CLEAN_SOURCE_ALLOW_OVERWRITES_BOTH_ON

SQLSTATE: 42000

Cannot turn on cloudFiles.cleanSource and cloudFiles.allowOverwrites at the same time.

CF_CLEAN_SOURCE_UNAUTHORIZED_WRITE_PERMISSION

SQLSTATE: 42501

Auto Loader cannot delete processed files because it does not have write permissions to the source directory.

<reason>

To fix you can either:

  1. Grant write permissions to the source directory OR
  2. Set cleanSource to ‘OFF’

You could also unblock your stream by setting the SQLConf spark.databricks.cloudFiles.cleanSource.disabledDueToAuthorizationErrors to ‘true’.
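
For example, option 2 above is a reader option (format and path are hypothetical):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.cleanSource", "OFF")  # stop Auto Loader from deleting processed files
      .load("/landing/csv"))                    # hypothetical path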

CF_DUPLICATE_COLUMN_IN_DATA

SQLSTATE: 22000

There was an error when trying to infer the partition schema of your table. You have the same column duplicated in your data and partition paths. To ignore the partition value, please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnsKey>”, “{comma-separated-list}”)

CF_EMPTY_DIR_FOR_SCHEMA_INFERENCE

SQLSTATE: 42000

Cannot infer schema when the input path <path> is empty. Please try to start the stream when there are files in the input path, or specify the schema.

CF_EVENT_GRID_AUTH_ERROR

SQLSTATE: 22000

Failed to create an Event Grid subscription. Please make sure that your service principal has <permissionType> Event Grid Subscriptions. See more details at: <docLink>

CF_EVENT_GRID_CREATION_FAILED

SQLSTATE: 22000

Failed to create an Event Grid subscription. Please ensure that Microsoft.EventGrid is registered as a resource provider in your subscription. See more details at: <docLink>

CF_EVENT_GRID_NOT_FOUND_ERROR

SQLSTATE: 22000

Failed to create an Event Grid subscription. Please make sure that your storage account (<storageAccount>) is under your resource group (<resourceGroup>) and that the storage account is a “StorageV2 (general purpose v2)” account. See more details at: <docLink>

CF_EVENT_NOTIFICATION_NOT_SUPPORTED

SQLSTATE: 0A000

Auto Loader event notification mode is not supported for <cloudStore>.

CF_FAILED_TO_CHECK_STREAM_NEW

SQLSTATE: 22000

Failed to check if the stream is new

CF_FAILED_TO_CREATED_PUBSUB_SUBSCRIPTION

SQLSTATE: 22000

Failed to create subscription: <subscriptionName>. A subscription with the same name already exists and is associated with another topic: <otherTopicName>. The desired topic is <proposedTopicName>. Either delete the existing subscription or create a subscription with a new resource suffix.

CF_FAILED_TO_CREATED_PUBSUB_TOPIC

SQLSTATE: 22000

Failed to create topic: <topicName>. A topic with the same name already exists. <reason> Remove the existing topic or try again with another resource suffix.

CF_FAILED_TO_DELETE_GCP_NOTIFICATION

SQLSTATE: 22000

Failed to delete notification with id <notificationId> on bucket <bucketName> for topic <topicName>. Please retry or manually remove the notification through the GCP console.

CF_FAILED_TO_DESERIALIZE_PERSISTED_SCHEMA

SQLSTATE: 22000

Failed to deserialize persisted schema from string: ‘<jsonSchema>’

CF_FAILED_TO_EVOLVE_SCHEMA

SQLSTATE: 22000

Cannot evolve schema without a schema log.

CF_FAILED_TO_FIND_PROVIDER

SQLSTATE: 42000

Failed to find provider for <fileFormatInput>

CF_FAILED_TO_INFER_SCHEMA

SQLSTATE: 22000

Failed to infer schema for format <fileFormatInput> from existing files in input path <path>.

For more details see CF_FAILED_TO_INFER_SCHEMA

CF_FAILED_TO_WRITE_TO_SCHEMA_LOG

SQLSTATE: 22000

Failed to write to the schema log at location <path>.

CF_FILE_FORMAT_REQUIRED

SQLSTATE: 42000

Could not find required option: cloudFiles.format.

CF_FOUND_MULTIPLE_AUTOLOADER_PUBSUB_SUBSCRIPTIONS

SQLSTATE: 22000

Found multiple (<num>) subscriptions with the Auto Loader prefix for topic <topicName>:

<subscriptionList>

There should only be one subscription per topic. Please manually ensure that your topic does not have multiple subscriptions.

CF_GCP_AUTHENTICATION

SQLSTATE: 42000

Please either provide all of the following: <clientEmail>, <client>, <privateKey>, and <privateKeyId>, or provide none of them in order to use the default GCP credential provider chain for authenticating with GCP resources.

CF_GCP_LABELS_COUNT_EXCEEDED

SQLSTATE: 22000

Received too many labels (<num>) for GCP resource. The maximum label count per resource is <maxNum>.

CF_GCP_RESOURCE_TAGS_COUNT_EXCEEDED

SQLSTATE: 22000

Received too many resource tags (<num>) for GCP resource. The maximum resource tag count per resource is <maxNum>, as resource tags are stored as GCP labels on resources, and Databricks specific tags consume some of this label quota.

CF_INCOMPLETE_LOG_FILE_IN_SCHEMA_LOG

SQLSTATE: 22000

Incomplete log file in the schema log at path <path>

CF_INCOMPLETE_METADATA_FILE_IN_CHECKPOINT

SQLSTATE: 22000

Incomplete metadata file in the Auto Loader checkpoint

CF_INCORRECT_BATCH_USAGE

SQLSTATE: 42887

CloudFiles is a streaming source. Please use spark.readStream instead of spark.read. To disable this check, set <cloudFilesFormatValidationEnabled> to false.
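
For example, the streaming reader is used like this (format and path are hypothetical):

# cloudFiles must be consumed with the streaming reader, not the batch reader
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "parquet")
      .load("/landing/parquet"))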

CF_INCORRECT_SQL_PARAMS

SQLSTATE: 42000

The cloud_files method accepts two required string parameters: the path to load from, and the file format. File reader options must be provided in a string key-value map. e.g. cloud_files(“path”, “json”, map(“option1”, “value1”)). Received: <params>

CF_INCORRECT_STREAM_USAGE

SQLSTATE: 42887

To use ‘cloudFiles’ as a streaming source, please provide the file format with the option ‘cloudFiles.format’, and use .load() to create your DataFrame. To disable this check, set <cloudFilesFormatValidationEnabled> to false.

CF_INTERNAL_ERROR

SQLSTATE: 42000

Internal error.

For more details see CF_INTERNAL_ERROR

CF_INVALID_ARN

SQLSTATE: 42000

Invalid ARN: <arn>

CF_INVALID_AZURE_CERTIFICATE

SQLSTATE: 42000

The private key provided with option cloudFiles.certificate cannot be parsed. Please provide a valid public key in PEM format.

CF_INVALID_AZURE_CERT_PRIVATE_KEY

SQLSTATE: 42000

The private key provided with option cloudFiles.certificatePrivateKey cannot be parsed. Please provide a valid private key in PEM format.

CF_INVALID_CHECKPOINT

SQLSTATE: 42000

This checkpoint is not a valid CloudFiles source

CF_INVALID_CLEAN_SOURCE_MODE

SQLSTATE: 42000

Invalid mode for clean source option <value>.

CF_INVALID_GCP_RESOURCE_TAG_KEY

SQLSTATE: 42000

Invalid resource tag key for GCP resource: <key>. Keys must start with a lowercase letter, be within 1 to 63 characters long, and contain only lowercase letters, numbers, underscores (_), and hyphens (-).

CF_INVALID_GCP_RESOURCE_TAG_VALUE

SQLSTATE: 42000

Invalid resource tag value for GCP resource: <value>. Values must be within 0 to 63 characters long and must contain only lowercase letters, numbers, underscores (_), and hyphens (-).

CF_INVALID_MANAGED_FILE_EVENTS_OPTION_KEYS

SQLSTATE: 42000

Auto Loader does not support the following options when used with managed file events:

<optionList>

We recommend that you remove these options and then restart the stream.

CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE

SQLSTATE: 22000

Invalid response from managed file events service. Please contact Databricks support for assistance.

For more details see CF_INVALID_MANAGED_FILE_EVENTS_RESPONSE

CF_INVALID_SCHEMA_EVOLUTION_MODE

SQLSTATE: 42000

cloudFiles.<schemaEvolutionModeKey> must be one of {<addNewColumns>, <failOnNewColumns>, <rescue>, <noEvolution>}

CF_INVALID_SCHEMA_HINTS_OPTION

SQLSTATE: 42000

Schema hints can only specify a particular column once. In this case, column <columnName> is redefined multiple times in schemaHints:

<schemaHints>

CF_INVALID_SCHEMA_HINT_COLUMN

SQLSTATE: 42000

Schema hints cannot be used to override maps’ and arrays’ nested types.

Conflicting column: <columnName>

CF_LATEST_OFFSET_READ_LIMIT_REQUIRED

SQLSTATE: 22000

latestOffset should be called with a ReadLimit on this source.

CF_LOG_FILE_MALFORMED

SQLSTATE: 22000

Log file was malformed: failed to read correct log version from <fileName>.

CF_MANAGED_FILE_EVENTS_BACKFILL_IN_PROGRESS

SQLSTATE: 22000

You have requested Auto Loader to ignore existing files in your external location by setting includeExistingFiles to false. However, the managed file events service is still discovering existing files in your external location. Please try again after managed file events has completed discovering all files in your external location.

CF_MANAGED_FILE_EVENTS_ENDPOINT_NOT_FOUND

SQLSTATE: 42000

You are using Auto Loader with managed file events, but it appears that the external location for your input path ‘<path>’ does not have file events enabled or the input path is invalid. Please request your Databricks Administrator to enable file events on the external location for your input path.

CF_MANAGED_FILE_EVENTS_ENDPOINT_PERMISSION_DENIED

SQLSTATE: 42000

You are using Auto Loader with managed file events, but you do not have access to the external location or volume for input path ‘<path>’ or the input path is invalid. Please request your Databricks Administrator to grant read permissions for the external location or volume or provide a valid input path within an existing external location or volume.

CF_MANAGED_FILE_EVENTS_ONLY_ON_SERVERLESS

SQLSTATE: 56038

Auto Loader with managed file events is only available on Databricks serverless. To continue, please move this workload to Databricks serverless or turn off the cloudFiles.useManagedFileEvents option.

CF_MAX_MUST_BE_POSITIVE

SQLSTATE: 42000

max must be positive

CF_METADATA_FILE_CONCURRENTLY_USED

SQLSTATE: 22000

Multiple streaming queries are concurrently using <metadataFile>

CF_MISSING_METADATA_FILE_ERROR

SQLSTATE: 42000

The metadata file in the streaming source checkpoint directory is missing. This metadata file contains important default options for the stream, so the stream cannot be restarted right now. Please contact Databricks support for assistance.

CF_MISSING_PARTITION_COLUMN_ERROR

SQLSTATE: 42000

Partition column <columnName> does not exist in the provided schema:

<schema>

CF_MISSING_SCHEMA_IN_PATHLESS_MODE

SQLSTATE: 42000

Please specify a schema using .schema() if a path is not provided to the CloudFiles source while using file notification mode. Alternatively, to have Auto Loader infer the schema, please provide a base path in .load().
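
A minimal sketch of the first remedy, assuming file notification mode (the schema is a hypothetical example; the cloud-specific notification queue options are elided):

from pyspark.sql.types import StructType, StructField, LongType, StringType

schema = StructType([StructField("id", LongType()), StructField("name", StringType())])
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.useNotifications", "true")
      .schema(schema)  # required because no path is passed to .load()
      .load())         # plus your cloud's notification queue options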

CF_MULTIPLE_PUBSUB_NOTIFICATIONS_FOR_TOPIC

SQLSTATE: 22000

Found existing notifications for topic <topicName> on bucket <bucketName>:

notification,id

<notificationList>

To avoid polluting the subscriber with unintended events, please delete the above notifications and retry.

CF_NEW_PARTITION_ERROR

SQLSTATE: 22000

New partition columns were inferred from your files: [<filesList>]. Please provide all partition columns in your schema or provide a list of partition columns which you would like to extract values for by using: .option(“cloudFiles.partitionColumns”, “{comma-separated-list|empty-string}”)

CF_PARTITON_INFERENCE_ERROR

SQLSTATE: 22000

There was an error when trying to infer the partition schema of the current batch of files. Please provide your partition columns explicitly by using: .option(“cloudFiles.<partitionColumnOption>”, “{comma-separated-list}”)

CF_PATH_DOES_NOT_EXIST_FOR_READ_FILES

SQLSTATE: 42000

Cannot read files when the input path <path> does not exist. Please make sure the input path exists and retry.

CF_PERIODIC_BACKFILL_NOT_SUPPORTED

SQLSTATE: 0A000

Periodic backfill is not supported if asynchronous backfill is disabled. You can enable asynchronous backfill/directory listing by setting spark.databricks.cloudFiles.asyncDirListing to true

CF_PREFIX_MISMATCH

SQLSTATE: 22000

Found mismatched event: key <key> doesn’t have the prefix: <prefix>

CF_PROTOCOL_MISMATCH

SQLSTATE: 22000

<message>

If you don’t need to make any other changes to your code, then please set the SQL configuration ‘<sourceProtocolVersionKey> = <value>’ to resume your stream. Please refer to <docLink> for more details.

CF_REGION_NOT_FOUND_ERROR

SQLSTATE: 42000

Could not get default AWS Region. Please specify a region using the cloudFiles.region option.

CF_RESOURCE_SUFFIX_EMPTY

SQLSTATE: 42000

Failed to create notification services: the resource suffix cannot be empty.

CF_RESOURCE_SUFFIX_INVALID_CHAR_AWS

SQLSTATE: 42000

Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-) and underscores (_).

CF_RESOURCE_SUFFIX_INVALID_CHAR_AZURE

SQLSTATE: 42000

Failed to create notification services: the resource suffix can only have lowercase letters, numbers, and dashes (-).

CF_RESOURCE_SUFFIX_INVALID_CHAR_GCP

SQLSTATE: 42000

Failed to create notification services: the resource suffix can only have alphanumeric characters, hyphens (-), underscores (_), periods (.), tildes (~), plus signs (+), and percent signs (<percentSign>).

CF_RESOURCE_SUFFIX_LIMIT

SQLSTATE: 42000

Failed to create notification services: the resource suffix cannot have more than <limit> characters.

CF_RESOURCE_SUFFIX_LIMIT_GCP

SQLSTATE: 42000

Failed to create notification services: the resource suffix must be between <lowerLimit> and <upperLimit> characters.

CF_RESTRICTED_GCP_RESOURCE_TAG_KEY

SQLSTATE: 22000

Found restricted GCP resource tag key (<key>). The following GCP resource tag keys are restricted for Auto Loader: [<restrictedKeys>]

CF_RETENTION_GREATER_THAN_MAX_FILE_AGE

SQLSTATE: 42000

cloudFiles.cleanSource.retentionDuration cannot be greater than cloudFiles.maxFileAge.

CF_SAME_PUB_SUB_TOPIC_NEW_KEY_PREFIX

SQLSTATE: 22000

Failed to create notification for topic: <topic> with prefix: <prefix>. There is already a topic with the same name with another prefix: <oldPrefix>. Try using a different resource suffix for setup or delete the existing setup.

CF_SOURCE_DIRECTORY_PATH_REQUIRED

SQLSTATE: 42000

Please provide the source directory path with the option ‘path’

CF_SOURCE_UNSUPPORTED

SQLSTATE: 0A000

The cloud files source only supports S3, Azure Blob Storage (wasb/wasbs), and Azure Data Lake Gen1 (adl) and Gen2 (abfs/abfss) paths right now. Path: ‘<path>’, resolved uri: ‘<uri>’

CF_STATE_INCORRECT_SQL_PARAMS

SQLSTATE: 42000

The cloud_files_state function accepts a string parameter representing the checkpoint directory of a cloudFiles stream or a multi-part tableName identifying a streaming table, and an optional second integer parameter representing the checkpoint version to load state for. The second parameter may also be ‘latest’ to read the latest checkpoint. Received: <params>
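
For example (the checkpoint path is hypothetical):

# Inspect the file-level state of an Auto Loader stream from its checkpoint
spark.sql("SELECT * FROM cloud_files_state('/checkpoints/my_stream')")
# Or pin a specific checkpoint version, or the latest one
spark.sql("SELECT * FROM cloud_files_state('/checkpoints/my_stream', 'latest')")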

CF_STATE_INVALID_CHECKPOINT_PATH

SQLSTATE: 42000

The input checkpoint path <path> is invalid. Either the path does not exist or there are no cloud_files sources found.

CF_STATE_INVALID_VERSION

SQLSTATE: 42000

The specified version <version> does not exist, or was removed during analysis.

CF_THREAD_IS_DEAD

SQLSTATE: 22000

<threadName> thread is dead.

CF_UNABLE_TO_DERIVE_STREAM_CHECKPOINT_LOCATION

SQLSTATE: 42000

Unable to derive the stream checkpoint location from the source checkpoint location: <checkPointLocation>

CF_UNABLE_TO_DETECT_FILE_FORMAT

SQLSTATE: 42000

Unable to detect the source file format from <fileSize> sampled file(s), found <formats>. Please specify the format.

CF_UNABLE_TO_EXTRACT_BUCKET_INFO

SQLSTATE: 42000

Unable to extract bucket information. Path: ‘<path>’, resolved uri: ‘<uri>’.

CF_UNABLE_TO_EXTRACT_KEY_INFO

SQLSTATE: 42000

Unable to extract key information. Path: ‘<path>’, resolved uri: ‘<uri>’.

CF_UNABLE_TO_EXTRACT_STORAGE_ACCOUNT_INFO

SQLSTATE: 42000

Unable to extract storage account information; path: ‘<path>’, resolved uri: ‘<uri>’

CF_UNABLE_TO_LIST_EFFICIENTLY

SQLSTATE: 22000

Received a directory rename event for the path <path>, but we are unable to list this directory efficiently. In order for the stream to continue, set the option ‘cloudFiles.ignoreDirRenames’ to true, and consider enabling regular backfills with cloudFiles.backfillInterval for this data to be processed.
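
Both options from the message are set on the reader (format and path are hypothetical):

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.ignoreDirRenames", "true")   # skip directory rename events
      .option("cloudFiles.backfillInterval", "1 day")  # periodically list to pick up missed files
      .load("/landing/json"))                          # hypothetical path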

CF_UNEXPECTED_READ_LIMIT

SQLSTATE: 22000

Unexpected ReadLimit: <readLimit>

CF_UNKNOWN_OPTION_KEYS_ERROR

SQLSTATE: 42000

Found unknown option keys:

<optionList>

Please make sure that all provided option keys are correct. If you want to skip the validation of your options and ignore these unknown options, you can set:

.option(“cloudFiles.<validateOptions>”, “false”)

CF_UNKNOWN_READ_LIMIT

SQLSTATE: 22000

Unknown ReadLimit: <readLimit>

CF_UNSUPPORTED_CLOUD_FILES_SQL_FUNCTION

SQLSTATE: 0A000

The SQL function ‘cloud_files’ to create an Auto Loader streaming source is supported only in a Delta Live Tables pipeline. See more details at:

<docLink>

CF_UNSUPPORTED_FORMAT_FOR_SCHEMA_INFERENCE

SQLSTATE: 0A000

Schema inference is not supported for format: <format>. Please specify the schema.

CF_UNSUPPORTED_LOG_VERSION

SQLSTATE: 0A000

UnsupportedLogVersion: maximum supported log version is v<maxVersion>, but encountered v<version>. The log file was produced by a newer version of DBR and cannot be read by this version. Please upgrade.

CF_UNSUPPORTED_SCHEMA_EVOLUTION_MODE

SQLSTATE: 0A000

Schema evolution mode <mode> is not supported for format: <format>. Please set the schema evolution mode to ‘none’.

CF_USE_DELTA_FORMAT

SQLSTATE: 42000

Reading from a Delta table is not supported with this syntax. If you would like to consume data from Delta, please refer to the docs: read a Delta table (<deltaDocLink>), or read a Delta table as a stream source (<streamDeltaDocLink>). The streaming source from Delta is already optimized for incremental consumption of data.
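
A minimal sketch of the streaming alternative (the table name is hypothetical):

# Consume the Delta table incrementally with the Delta streaming source instead
df = spark.readStream.table("main.default.events")  # hypothetical table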

Geospatial

EWKB_PARSE_ERROR

SQLSTATE: 22023

Error parsing EWKB: <parseError> at position <pos>

GEOJSON_PARSE_ERROR

SQLSTATE: 22023

Error parsing GeoJSON: <parseError> at position <pos>

For more details see GEOJSON_PARSE_ERROR

H3_INVALID_CELL_ID

SQLSTATE: 22023

<h3Cell> is not a valid H3 cell ID

For more details see H3_INVALID_CELL_ID

H3_INVALID_GRID_DISTANCE_VALUE

SQLSTATE: 22023

H3 grid distance <k> must be non-negative

For more details see H3_INVALID_GRID_DISTANCE_VALUE

H3_INVALID_RESOLUTION_VALUE

SQLSTATE: 22023

H3 resolution <r> must be between <minR> and <maxR>, inclusive

For more details see H3_INVALID_RESOLUTION_VALUE

H3_NOT_ENABLED

SQLSTATE: 0A000

<h3Expression> is disabled or unsupported. Consider enabling Photon or switching to a tier that supports H3 expressions

For more details see H3_NOT_ENABLED

H3_PENTAGON_ENCOUNTERED_ERROR

SQLSTATE: 22023

A pentagon was encountered while computing the hex ring of <h3Cell> with grid distance <k>

H3_UNDEFINED_GRID_DISTANCE

SQLSTATE: 22023

H3 grid distance between <h3Cell1> and <h3Cell2> is undefined

ST_DIFFERENT_SRID_VALUES

SQLSTATE: 22023

Arguments to “<sqlFunction>” must have the same SRID value. SRID values found: <srid1>, <srid2>

ST_INVALID_ARGUMENT

SQLSTATE: 22023

“<sqlFunction>”: <reason>

ST_INVALID_ARGUMENT_TYPE

SQLSTATE: 22023

Argument to “<sqlFunction>” must be of type <validTypes>

ST_INVALID_CRS_TRANSFORMATION_ERROR

SQLSTATE: 22023

<sqlFunction>: Invalid or unsupported CRS transformation from SRID <srcSrid> to SRID <trgSrid>

ST_INVALID_ENDIANNESS_VALUE

SQLSTATE: 22023

Endianness ‘<e>’ must be either ‘NDR’ (little-endian) or ‘XDR’ (big-endian)

ST_INVALID_GEOHASH_VALUE

SQLSTATE: 22023

<sqlFunction>: Invalid geohash value: ‘<geohash>’. Geohash values must be valid lowercase base32 strings as described in https://en.wikipedia.org/wiki/Geohash#Textual_representation

ST_INVALID_PRECISION_VALUE

SQLSTATE: 22023

Precision <p> must be between <minP> and <maxP>, inclusive

ST_INVALID_SRID_VALUE

SQLSTATE: 22023

Invalid or unsupported SRID <srid>

ST_NOT_ENABLED

SQLSTATE: 0A000

<stExpression> is disabled or unsupported. Consider enabling Photon or switching to a tier that supports ST expressions

ST_UNSUPPORTED_RETURN_TYPE

SQLSTATE: 0A000

The GEOGRAPHY and GEOMETRY data types cannot be returned in queries. Use one of the following SQL expressions to convert them to standard interchange formats: <projectionExprs>.

WKB_PARSE_ERROR

SQLSTATE: 22023

Error parsing WKB: <parseError> at position <pos>

For more details see WKB_PARSE_ERROR

WKT_PARSE_ERROR

SQLSTATE: 22023

Error parsing WKT: <parseError> at position <pos>

For more details see WKT_PARSE_ERROR

Unity Catalog

CONFLICTING_COLUMN_NAMES_ERROR

SQLSTATE: 42711

Column <columnName> conflicts with another column with the same name but with/without trailing whitespace (for example, an existing column named ‘<columnName> ’ with a trailing space). Please rename the column with a different name.

CONNECTION_CREDENTIALS_NOT_SUPPORTED_FOR_ONLINE_TABLE_CONNECTION

SQLSTATE: none assigned

Invalid request to get connection-level credentials for connection of type <connectionType>. Such credentials are only available for managed PostgreSQL connections.

CONNECTION_TYPE_NOT_ENABLED

SQLSTATE: none assigned

Connection type ‘<connectionType>’ is not enabled. Please enable the connection to use it.

DELTA_SHARING_READ_ONLY_RECIPIENT_EXISTS

SQLSTATE: none assigned

There is already a Recipient object ‘<existingRecipientName>’ with the same sharing identifier ‘<existingMetastoreId>’.

DELTA_SHARING_READ_ONLY_SECURABLE_KIND

SQLSTATE: none assigned

Data of a Delta Sharing Securable Kind <securableKindName> is read-only and cannot be created, modified, or deleted.

EXTERNAL_ACCESS_DISABLED_ON_METASTORE

SQLSTATE: none assigned

Credential vending is rejected for non-Databricks compute environments because External Data Access is disabled for metastore <metastoreName>. Please contact your metastore admin to enable the ‘External Data Access’ configuration on the metastore.

EXTERNAL_ACCESS_NOT_ALLOWED_FOR_TABLE

SQLSTATE: none assigned

Table with id <tableId> cannot be accessed from outside of Databricks Compute Environment due to its kind being <securableKind>. Only ‘TABLE_EXTERNAL’, ‘TABLE_DELTA_EXTERNAL’ and ‘TABLE_DELTA’ table kinds can be accessed externally.

EXTERNAL_USE_SCHEMA_ASSIGNED_TO_INCORRECT_SECURABLE_TYPE

SQLSTATE: none assigned

Privilege EXTERNAL USE SCHEMA is not applicable to this entity <assignedSecurableType> and can only be assigned to a schema or catalog. Please remove the privilege from the <assignedSecurableType> object and assign it to a schema or catalog instead.

EXTERNAL_WRITE_NOT_ALLOWED_FOR_TABLE

SQLSTATE: none assigned

Table with id <tableId> cannot be written from outside of Databricks Compute Environment due to its kind being <securableKind>. Only ‘TABLE_EXTERNAL’ and ‘TABLE_DELTA_EXTERNAL’ table kinds can be written externally.

FOREIGN_CATALOG_STORAGE_ROOT_MUST_SUPPORT_WRITES

SQLSTATE: none assigned

The storage location for a foreign catalog of type <catalogType> will be used for unloading data and cannot be read-only.

HMS_SECURABLE_OVERLAP_LIMIT_EXCEEDED

SQLSTATE: none assigned

The number of <resourceType>s for input path <url> exceeds the allowed limit (<overlapLimit>) for overlapping HMS <resourceType>s.

INVALID_RESOURCE_NAME_DELTA_SHARING

SQLSTATE: none assigned

Delta Sharing requests are not supported using resource names

INVALID_RESOURCE_NAME_ENTITY_TYPE

SQLSTATE: none assigned

The provided resource name references entity type <provided> but expected <expected>

INVALID_RESOURCE_NAME_METASTORE_ID

SQLSTATE: none assigned

The provided resource name references a metastore that is not in scope for the current request

LOCATION_OVERLAP

SQLSTATE: none assigned

Input path url ‘<path>’ overlaps with <overlappingLocation> within ‘<caller>’ call. <conflictingSummary>.

REDSHIFT_FOREIGN_CATALOG_STORAGE_ROOT_MUST_BE_S3

SQLSTATE: none assigned

The storage root for a Redshift foreign catalog must be AWS S3.

SECURABLE_KIND_DOES_NOT_SUPPORT_LAKEHOUSE_FEDERATION

SQLSTATE: none assigned

Securable with kind <securableKind> does not support Lakehouse Federation.

SECURABLE_KIND_NOT_ENABLED

SQLSTATE: none assigned

Securable kind ‘<securableKind>’ is not enabled. If this is a securable kind associated with a preview feature, please enable it in workspace settings.

SECURABLE_TYPE_DOES_NOT_SUPPORT_LAKEHOUSE_FEDERATION

SQLSTATE: none assigned

Securable with type <securableType> does not support Lakehouse Federation.

SOURCE_TABLE_COLUMN_COUNT_EXCEEDS_LIMIT

SQLSTATE: none assigned

The source table has more than <columnCount> columns. Please reduce the number of columns to <columnLimitation> or fewer.

UC_AAD_TOKEN_LIFETIME_TOO_SHORT

SQLSTATE: none assigned

Exchanged AAD token lifetime is <lifetime>, which is configured too short. Please check your Azure AD settings to make sure the temporary access token has at least a one-hour lifetime. https://learn.microsoft.com/azure/active-directory/develop/active-directory-configurable-token-lifetimes

UC_AUTHZ_ACTION_NOT_SUPPORTED

SQLSTATE: none assigned

Authorizing <actionName> is not supported; please check that the RPC invoked is implemented for this resource type

UC_BUILTIN_HMS_CONNECTION_CREATION_PERMISSION_DENIED

SQLSTATE: none assigned

Cannot create a connection for a built-in Hive metastore because user <userId> is not the admin of workspace <workspaceId>

UC_BUILTIN_HMS_CONNECTION_MODIFY_RESTRICTED_FIELD

SQLSTATE: none assigned

Attempt to modify a restricted field in built-in HMS connection ‘<connectionName>’. Only ‘warehouse_directory’ can be updated.

UC_CANNOT_RENAME_PARTITION_FILTERING_COLUMN

SQLSTATE: none assigned

Failed to rename table column <originalLogicalColumn> because it is used for partition filtering in <sharedTableName>. To proceed, you can remove the table from the share, rename the column, and share it again with the desired partition filtering columns. However, this may break the streaming query for your recipient.

UC_CHILD_CREATION_FORBIDDEN_FOR_NON_UC_CLUSTER

SQLSTATE: none assigned

Cannot create <securableType> ‘<securable>’ under <parentSecurableType> ‘<parentSecurable>’ because the request is not from a UC cluster.

UC_CLOUD_STORAGE_ACCESS_FAILURE

SQLSTATE: none assigned

Failed to access cloud storage: <errMsg> exceptionTraceId=<exceptionTraceId>

UC_CONFLICTING_CONNECTION_OPTIONS

SQLSTATE: none assigned

Cannot create a connection with both username/password and OAuth authentication options. Please choose one.

UC_CONNECTION_EXISTS_FOR_CREDENTIAL

SQLSTATE: none assigned

Credential ‘<credentialName>’ has one or more dependent connections. You may use the force option to continue updating or deleting the credential, but the connections using this credential may no longer work.

UC_CONNECTION_EXPIRED_REFRESH_TOKEN

SQLSTATE: none assigned

The refresh token associated with the connection is expired. Please update the connection to restart the OAuth flow to retrieve a fresh token.

UC_CONNECTION_IN_FAILED_STATE

SQLSTATE: none assigned

The connection is in the FAILED state. Please update the connection with valid credentials to reactivate it.

UC_CONNECTION_MISSING_REFRESH_TOKEN

SQLSTATE: none assigned

There is no refresh token associated with the connection. Please update the OAuth client integration in your identity provider to return refresh tokens, and update or recreate the connection to restart the OAuth flow and retrieve the necessary tokens.

UC_CONNECTION_OAUTH_EXCHANGE_FAILED

SQLSTATE: none assigned

The OAuth token exchange failed with HTTP status code <httpStatus>. The returned server response or exception message is: <response>

UC_COORDINATED_COMMITS_NOT_ENABLED

SQLSTATE: none assigned

Support for coordinated commits is not enabled. Please contact Databricks support.

UC_CREATE_FORBIDDEN_UNDER_INACTIVE_SECURABLE

SQLSTATE: none assigned

Cannot create <securableType> ‘<securableName>’ because it is under a <parentSecurableType> ‘<parentSecurableName>’ that is not active. Please delete the parent securable and recreate the parent.

UC_CREDENTIAL_ACCESS_CONNECTOR_PARSING_FAILED

SQLSTATE: none assigned

Failed to parse the provided access connector ID: <accessConnectorId>. Please verify its formatting and try again.

UC_CREDENTIAL_FAILED_TO_OBTAIN_VALIDATION_TOKEN

SQLSTATE: none assigned

Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.

UC_CREDENTIAL_INVALID_CLOUD_PERMISSIONS

SQLSTATE: none assigned

Registering a credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>. Please contact your account admin.

UC_CREDENTIAL_INVALID_CREDENTIAL_TYPE_FOR_PURPOSE

SQLSTATE: none assigned

Credential type ‘<credentialType>’ is not supported for purpose ‘<credentialPurpose>’.

UC_CREDENTIAL_PERMISSION_DENIED

SQLSTATE: none assigned

Only the account admin can create or update a credential with type <storageCredentialType>.

UC_CREDENTIAL_TRUST_POLICY_IS_OPEN

SQLSTATE: none assigned

The trust policy of the IAM role that allows the Databricks account to assume the role should require an external ID. Please contact your account admin to add the external ID condition. This behavior guards against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

UC_CREDENTIAL_UNPRIVILEGED_SERVICE_PRINCIPAL_NOT_SUPPORTED

SQLSTATE: none assigned

Service principals cannot use the CREATE_STORAGE_CREDENTIAL privilege to register managed identities. To register a managed identity, please assign the service principal the account admin role.

UC_CREDENTIAL_WORKSPACE_API_PROHIBITED

SQLSTATE: none assigned

Creating or updating a credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.

UC_DELTA_UNIVERSAL_FORMAT_CANNOT_PARSE_ICEBERG_VERSION

SQLSTATE: none assigned

Unable to parse Iceberg table version from metadata location <metadataLocation>.

UC_DELTA_UNIVERSAL_FORMAT_CONCURRENT_WRITE

SQLSTATE: none assigned

A concurrent update to the same Iceberg metadata version was detected.

UC_DELTA_UNIVERSAL_FORMAT_INVALID_METADATA_LOCATION

SQLSTATE: none assigned

The committed metadata location <metadataLocation> is invalid. It is not a subdirectory of the table’s root directory <tableRoot>.

UC_DELTA_UNIVERSAL_FORMAT_MISSING_FIELD_CONSTRAINT

SQLSTATE: none assigned

The provided Delta Iceberg format conversion information is missing required fields.

UC_DELTA_UNIVERSAL_FORMAT_NON_CREATE_CONSTRAINT

SQLSTATE: none assigned

Setting Delta Iceberg format information on create is unsupported.

UC_DELTA_UNIVERSAL_FORMAT_TOO_LARGE_CONSTRAINT

SQLSTATE: none assigned

The provided Delta Iceberg format conversion information is too large.

UC_DELTA_UNIVERSAL_FORMAT_UPDATE_INVALID

SQLSTATE: none assigned

Uniform metadata can only be updated on Delta tables with Uniform enabled.

UC_DEPENDENCY_DEPTH_LIMIT_EXCEEDED

SQLSTATE: none assigned

<resourceType> ‘<ref>’ depth exceeds limit (or has a circular reference).

UC_DEPENDENCY_DOES_NOT_EXIST

SQLSTATE: none assigned

<resourceType> ‘<ref>’ is invalid because one of the underlying resources does not exist. <cause>

UC_DEPENDENCY_PERMISSION_DENIED

SQLSTATE: none assigned

<resourceType> ‘<ref>’ does not have sufficient privilege to execute because the owner of one of the underlying resources failed an authorization check. <cause>

UC_DUPLICATE_CONNECTION

SQLSTATE: none assigned

A connection ‘<connectionName>’ with the URL ‘<url>’ already exists.

UC_DUPLICATE_FABRIC_CATALOG_CREATION

SQLSTATE: none assigned

Attempted to create a Fabric catalog with url ‘<storageLocation>’ that matches an existing catalog, which is not allowed.

UC_DUPLICATE_TAG_ASSIGNMENT_CREATION

SQLSTATE: none assigned

Tag assignment with tag key <tagKey> already exists

UC_ENTITY_DOES_NOT_HAVE_CORRESPONDING_ONLINE_CLUSTER

SQLSTATE: none assigned

Entity <securableType> <entityId> does not have a corresponding online cluster.

UC_EXCEEDS_MAX_FILE_LIMIT

SQLSTATE: none assigned

There are more than <maxFileResults> files. Please specify [max_results] to limit the number of files returned.

UC_EXTERNAL_LOCATION_OP_NOT_ALLOWED

SQLSTATE: none assigned

Cannot <opName> <extLoc> <reason>. <suggestion>.

UC_FEATURE_DISABLED

SQLSTATE: none assigned

<featureName> is currently disabled in UC.

UC_FOREIGN_CATALOG_FOR_CONNECTION_TYPE_NOT_SUPPORTED

SQLSTATE: none assigned

Creation of a foreign catalog for connection type ‘<connectionType>’ is not supported. This connection type can only be used to create managed ingestion pipelines. Please refer to the Databricks documentation for more information.

UC_FOREIGN_CREDENTIAL_CHECK_ONLY_FOR_READ_OPERATIONS

SQLSTATE: none assigned

Only READ credentials can be retrieved for foreign tables.

UC_FOREIGN_KEY_CHILD_COLUMN_LENGTH_MISMATCH

SQLSTATE: none assigned

Foreign key <constraintName> child columns and parent columns are of different sizes.

UC_FOREIGN_KEY_COLUMN_MISMATCH

SQLSTATE: none assigned

The foreign key parent columns do not match the referenced primary key child columns. Foreign key parent columns are (<parentColumns>) and primary key child columns are (<primaryKeyChildColumns>).

UC_FOREIGN_KEY_COLUMN_TYPE_MISMATCH

SQLSTATE: none assigned

The foreign key child column type does not match the parent column type. Foreign key child column <childColumnName> has type <childColumnType> and parent column <parentColumnName> has type <parentColumnType>.
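
For example, the following PySpark sketch (with hypothetical tables main.sales.customers and main.sales.orders; `spark` is the ambient SparkSession in a Databricks notebook) defines a foreign key whose column list matches the referenced primary key in both column count and type:

    # The child column list must line up with the parent primary key,
    # one-to-one and with identical types.
    spark.sql("ALTER TABLE main.sales.customers ADD CONSTRAINT customers_pk PRIMARY KEY (customer_id)")
    spark.sql("""
        ALTER TABLE main.sales.orders
        ADD CONSTRAINT orders_customers_fk
        FOREIGN KEY (customer_id) REFERENCES main.sales.customers (customer_id)
    """)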

UC_GCP_INVALID_PRIVATE_KEY

SQLSTATE: none assigned

Access denied. Cause: service account private key is invalid.

UC_GCP_INVALID_PRIVATE_KEY_JSON_FORMAT

SQLSTATE: none assigned

The Google Service Account OAuth private key has to be a valid JSON object with the required fields. Please make sure to provide the full JSON file generated from the ‘KEYS’ section of the service account details page.

UC_GCP_INVALID_PRIVATE_KEY_JSON_FORMAT_MISSING_FIELDS

SQLSTATE: none assigned

The Google Service Account OAuth private key has to be a valid JSON object with the required fields. Please make sure to provide the full JSON file generated from the ‘KEYS’ section of the service account details page. Missing fields are <missingFields>

UC_IAM_ROLE_NON_SELF_ASSUMING

SQLSTATE: none assigned

The IAM role for this storage credential is not self-assuming. Please check your role’s trust and IAM policies to ensure that your IAM role can assume itself according to the Unity Catalog storage credential documentation.

UC_ICEBERG_COMMIT_CONFLICT

SQLSTATE: none assigned

Cannot commit <tableName>: metadata location <baseMetadataLocation> has changed from <catalogMetadataLocation>.

UC_ICEBERG_COMMIT_INVALID_TABLE

SQLSTATE: none assigned

Cannot perform a Managed Iceberg commit to a table that is not a Managed Iceberg table: <tableName>.

UC_ICEBERG_COMMIT_MISSING_FIELD_CONSTRAINT

SQLSTATE: none assigned

The provided Managed Iceberg commit information is missing required fields.

UC_ID_MISMATCH

SQLSTATE: none assigned

The <type> <name> does not have ID <wrongId>. Please retry the operation.

UC_INVALID_ACCESS_DBFS_ENTITY

SQLSTATE: none assigned

Invalid access of <securableType> <securableName> in the federated catalog <catalogName>. <reason>

UC_INVALID_CLOUDFLARE_ACCOUNT_ID

SQLSTATE: none assigned

Invalid Cloudflare account ID.

UC_INVALID_CREDENTIAL_CLOUD

SQLSTATE: none assigned

Invalid credential cloud provider ‘<cloud>’. Allowed cloud provider ‘<allowedCloud>’.

UC_INVALID_CREDENTIAL_PURPOSE_VALUE

SQLSTATE: none assigned

Invalid value ‘<value>’ for credential’s ‘purpose’. Allowed values ‘<allowedValues>’.

UC_INVALID_CREDENTIAL_TRANSITION

SQLSTATE: none assigned

Cannot update a connection from <startingCredentialType> to <endingCredentialType>. The only valid transition is from a username/password based connection to an OAuth token based connection.

UC_INVALID_CRON_STRING_FABRIC

SQLSTATE: none assigned

Invalid cron string. Found: ‘<cronString>’ with parse exception: ‘<message>’

UC_INVALID_DIRECT_ACCESS_MANAGED_TABLE

SQLSTATE: none assigned

Invalid direct access managed table <tableName>. Make sure that the source table and pipeline definition are not defined.

UC_INVALID_EMPTY_STORAGE_LOCATION

SQLSTATE: none assigned

Unexpected empty storage location for <securableType> ‘<securableName>’ in catalog ‘<catalogName>’. To fix this error, run DESCRIBE SCHEMA <catalogName>.<securableName> and refresh this page.
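
As the message suggests, the schema's storage location can be resolved from a notebook; a minimal sketch with hypothetical names (`spark` is the ambient SparkSession):

    # Running DESCRIBE SCHEMA resolves and displays the schema's storage location.
    spark.sql("DESCRIBE SCHEMA my_catalog.my_schema").show(truncate=False)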

UC_INVALID_OPTIONS_UPDATE

SQLSTATE: none assigned

Invalid options provided for update. Invalid options: <invalidOptions>. Allowed options: <allowedOptions>.

UC_INVALID_OPTION_VALUE

SQLSTATE: none assigned

Invalid value ‘<value>’ for ‘<option>’. Allowed values ‘<allowedValues>’.

UC_INVALID_OPTION_VALUE_EMPTY

SQLSTATE: none assigned

‘<option>’ cannot be empty. Please enter a non-empty value.

UC_INVALID_R2_ACCESS_KEY_ID

SQLSTATE: none assigned

Invalid R2 access key ID.

UC_INVALID_R2_SECRET_ACCESS_KEY

SQLSTATE: none assigned

Invalid R2 secret access key.

UC_INVALID_RULE_CONDITION

SQLSTATE: none assigned

Invalid condition in rule ‘<ruleName>’. Compilation error with message ‘<message>’.

UC_INVALID_UPDATE_ON_SYSTEM_WORKSPACE_ADMIN_GROUP_OWNED_SECURABLE

SQLSTATE: none assigned

Cannot update <securableType> ‘<securableName>’ as it’s owned by an internal group. Please contact Databricks support for additional details.

UC_INVALID_WASBS_EXTERNAL_LOCATION_STORAGE_CREDENTIAL

SQLSTATE: none assigned

The provided storage credential <storageCredentialName> is not associated with the DBFS root; creation of a wasbs external location is prohibited.

UC_LOCATION_INVALID_SCHEME

SQLSTATE: none assigned

Storage location has invalid URI scheme: <scheme>.

UC_MALFORMED_OAUTH_SERVER_RESPONSE

SQLSTATE: none assigned

The response from the token server was missing the field <missingField>. The returned server response is: <response>

UC_METASTORE_ASSIGNMENT_STATUS_INVALID

SQLSTATE: none assigned

‘<metastoreAssignmentStatus>’ cannot be assigned. Only MANUALLY_ASSIGNABLE and AUTO_ASSIGNMENT_ENABLED are supported.

UC_METASTORE_CERTIFICATION_NOT_ENABLED

SQLSTATE: none assigned

Metastore certification is not enabled.

UC_METASTORE_DB_SHARD_MAPPING_NOT_FOUND

SQLSTATE: none assigned

Failed to retrieve a metastore to database shard mapping for Metastore ID <metastoreId> due to an internal error. Please contact Databricks support.

UC_METASTORE_HAS_ACTIVE_MANAGED_ONLINE_CATALOGS

SQLSTATE: none assigned

The metastore <metastoreId> has <numberManagedOnlineCatalogs> managed online catalog(s). Please explicitly delete them, then retry the metastore deletion.

UC_METASTORE_STORAGE_ROOT_CREDENTIAL_UPDATE_INVALID

SQLSTATE: none assigned

Metastore root credential cannot be defined when updating the metastore root location. The credential will be fetched from the metastore parent external location.

UC_METASTORE_STORAGE_ROOT_DELETION_INVALID

SQLSTATE: none assigned

Deletion of metastore storage root location failed. <reason>

UC_METASTORE_STORAGE_ROOT_READ_ONLY_INVALID

SQLSTATE: none assigned

The root <securableType> for a metastore cannot be read-only.

UC_METASTORE_STORAGE_ROOT_UPDATE_INVALID

SQLSTATE: none assigned

Metastore storage root cannot be updated once it is set.

UC_MODEL_INVALID_STATE

SQLSTATE: none assigned

Cannot generate temporary ‘<opName>’ credentials for model version <modelVersion> with status <modelVersionStatus>. ‘<opName>’ credentials can only be generated for model versions with status <validStatus>

UC_NO_ORG_ID_IN_CONTEXT

SQLSTATE: none assigned

Attempted to access org ID (or workspace ID), but context has none.

UC_ONLINE_CATALOG_NOT_MUTABLE

SQLSTATE: none assigned

The <rpcName> request updates <fieldName>. Use the online store compute tab to modify anything other than comment, owner and isolationMode of an online catalog.

UC_ONLINE_CATALOG_QUOTA_EXCEEDED

SQLSTATE: none assigned

Cannot create more than <quota> online stores in the metastore, and there are already <currentCount>. You may not have access to any existing online stores. Contact your metastore admin to be granted access or for further instructions.

UC_ONLINE_INDEX_CATALOG_INVALID_CRUD

SQLSTATE: none assigned

Online index catalogs must be <action> via the /vector-search API.

UC_ONLINE_INDEX_CATALOG_NOT_MUTABLE

SQLSTATE: none assigned

The <rpcName> request updates <fieldName>. Use the /vector-search API to modify anything other than comment, owner and isolationMode of an online index catalog.

UC_ONLINE_INDEX_CATALOG_QUOTA_EXCEEDED

SQLSTATE: none assigned

Cannot create more than <quota> online index catalogs in the metastore, and there are already <currentCount>. You may not have access to any existing online index catalogs. Contact your metastore admin to be granted access or for further instructions.

UC_ONLINE_INDEX_INVALID_CRUD

SQLSTATE: none assigned

Online indexes must be <action> via the /vector-search API.

UC_ONLINE_STORE_INVALID_CRUD

SQLSTATE: none assigned

Online stores must be <action> via the online store compute tab.

UC_ONLINE_TABLE_COLUMN_NAME_TOO_LONG

SQLSTATE: none assigned

The source table column name <columnName> is too long. The maximum length is <maxLength> characters.

UC_ONLINE_TABLE_PRIMARY_KEY_COLUMN_NOT_IN_SOURCE_TABLE_PRIMARY_KEY_CONSTRAINT

SQLSTATE: none assigned

Column <columnName> cannot be used as a primary key column of the online table because it is not part of the existing PRIMARY KEY constraint of the source table. For details, please see <docLink>

UC_ONLINE_TABLE_TIMESERIES_KEY_NOT_IN_SOURCE_TABLE_PRIMARY_KEY_CONSTRAINT

SQLSTATE: none assigned

Column <columnName> cannot be used as a timeseries key of the online table because it is not a timeseries column of the existing PRIMARY KEY constraint of the source table. For details, please see <docLink>

UC_ONLINE_VIEWS_PER_SOURCE_TABLE_QUOTA_EXCEEDED

SQLSTATE: none assigned

Cannot create more than <quota> online table(s) per source table.

UC_ONLINE_VIEW_ACCESS_DENIED

SQLSTATE: none assigned

Accessing resource <resourceName> requires use of a Serverless SQL warehouse. Please ensure the warehouse being used to execute a query or view a database catalog in the UI is serverless. For details, please see <docLink>

UC_ONLINE_VIEW_CONTINUOUS_QUOTA_EXCEEDED

SQLSTATE: none assigned

Cannot create more than <quota> continuous online views in the online store, and there are already <currentCount>. You may not have access to any existing online views. Contact your online store admin to be granted access or for further instructions.

UC_ONLINE_VIEW_DOES_NOT_SUPPORT_DMK

SQLSTATE: none assigned

<tableKind> cannot be created under a storage location with Databricks Managed Keys. Please choose a different schema/catalog in a storage location without Databricks Managed Keys encryption.

UC_ONLINE_VIEW_INVALID_CATALOG

SQLSTATE: none assigned

Invalid catalog <catalogName> with kind <catalogKind> to create <tableKind> within. <tableKind> can only be created under catalogs of kinds: <validCatalogKinds>.

UC_ONLINE_VIEW_INVALID_SCHEMA

SQLSTATE: none assigned

Invalid schema <schemaName> with kind <schemaKind> to create <tableKind> within. <tableKind> can only be created under schemas of kinds: <validSchemaKinds>.

UC_ONLINE_VIEW_INVALID_TTL_TIME_COLUMN_TYPE

SQLSTATE: none assigned

Column <columnName> of type <columnType> cannot be used as a TTL time column. Allowed types are <supportedTypes>.

UC_OUT_OF_AUTHORIZED_PATHS_SCOPE

SQLSTATE: none assigned

Authorized Path Error. The <securableType> location <location> is not defined within the authorized paths for catalog: <catalogName>.

UC_OVERLAPPED_AUTHORIZED_PATHS

SQLSTATE: none assigned

The ‘authorized_paths’ option contains overlapping paths: <overlappingPaths>. Ensure each path is unique and does not intersect with others in the list.

UC_PAGINATION_AND_QUERY_ARGS_MISMATCH

SQLSTATE: none assigned

The query argument ‘<arg>’ is set to ‘<received>’, which is different from the value used in the first pagination call (‘<expected>’)

UC_PER_METASTORE_DATABASE_CONCURRENCY_LIMIT_EXCEEDED

SQLSTATE: none assigned

Too many requests to database from metastore <metastoreId>. Please try again later.

UC_PRIMARY_KEY_ON_NULLABLE_COLUMN

SQLSTATE: none assigned

Cannot create the primary key <constraintName> because its child column(s) <childColumnNames> are nullable. Please change the column nullability and retry.
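
For example, a minimal PySpark sketch (hypothetical table and column names; `spark` is the ambient SparkSession) that removes the nullability before adding the constraint:

    # Primary key columns must be NOT NULL; fix the column first, then retry.
    spark.sql("ALTER TABLE main.sales.customers ALTER COLUMN customer_id SET NOT NULL")
    spark.sql("ALTER TABLE main.sales.customers ADD CONSTRAINT customers_pk PRIMARY KEY (customer_id)")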

UC_REQUEST_TIMEOUT

SQLSTATE: none assigned

This operation took too long.

UC_ROOT_STORAGE_S3_BUCKET_NAME_CONTAINS_DOT

SQLSTATE: none assigned

Root storage S3 bucket names containing dots are not supported by Unity Catalog: <uri>

UC_SCHEMA_EMPTY_STORAGE_LOCATION

SQLSTATE: none assigned

Unexpected empty storage location for schema ‘<schemaName>’ in catalog ‘<catalogName>’. Please make sure the schema uses a path scheme of <validPathSchemesListStr>.

UC_SERVICE_TEMPORARILY_UNAVAILABLE

SQLSTATE: none assigned

We’re experiencing a temporary issue while processing your request. Please try again in a few moments. If the problem persists, please reach out to support.

UC_STORAGE_CREDENTIAL_ACCESS_CONNECTOR_PARSING_FAILED

SQLSTATE: none assigned

Failed to parse the provided access connector ID: <accessConnectorId>. Please verify its formatting and try again.

UC_STORAGE_CREDENTIAL_DBFS_ROOT_CREATION_PERMISSION_DENIED

SQLSTATE: none assigned

Cannot create a storage credential for DBFS root because user: <userId> is not the admin of the workspace: <workspaceId>

UC_STORAGE_CREDENTIAL_DBFS_ROOT_INVALID_LOCATION

SQLSTATE: none assigned

Location <location> is not inside the DBFS root <dbfsRootLocation>

UC_STORAGE_CREDENTIAL_DBFS_ROOT_PRIVATE_DBFS_ENABLED

SQLSTATE: none assigned

DBFS root storage credential is not yet supported for workspaces with Firewall-enabled DBFS

UC_STORAGE_CREDENTIAL_DBFS_ROOT_PRIVATE_NOT_SUPPORTED

SQLSTATE: none assigned

DBFS root storage credential for current workspace is not yet supported

UC_STORAGE_CREDENTIAL_DBFS_ROOT_WORKSPACE_DISABLED

SQLSTATE: none assigned

DBFS root is not enabled for workspace <workspaceId>

UC_STORAGE_CREDENTIAL_FAILED_TO_OBTAIN_VALIDATION_TOKEN

SQLSTATE: none assigned

Failed to obtain an AAD token to perform cloud permission validation on an access connector. Please retry the action.

UC_STORAGE_CREDENTIAL_INVALID_CLOUD_PERMISSIONS

SQLSTATE: none assigned

Registering a storage credential requires the contributor role over the corresponding access connector with ID <accessConnectorId>. Please contact your account admin.

UC_STORAGE_CREDENTIAL_PERMISSION_DENIED

SQLSTATE: none assigned

Only the account admin can create or update a storage credential with type <storageCredentialType>.

UC_STORAGE_CREDENTIAL_SERVICE_PRINCIPAL_MISSING_VALIDATION_TOKEN

SQLSTATE: none assigned

Missing validation token for service principal. Please provide a valid ARM-scoped Entra ID token in the ‘X-Databricks-Azure-SP-Management-Token’ request header and retry. For details, check https://docs.databricks.com/api/workspace/storagecredentials

UC_STORAGE_CREDENTIAL_TRUST_POLICY_IS_OPEN

SQLSTATE: none assigned

The trust policy of the IAM role that allows the Databricks account to assume the role should require an external ID. Please contact your account admin to add the external ID condition. This behavior guards against the Confused Deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

UC_STORAGE_CREDENTIAL_UNPRIVILEGED_SERVICE_PRINCIPAL_NOT_SUPPORTED

SQLSTATE: none assigned

Service principals cannot use the CREATE_STORAGE_CREDENTIAL privilege to register managed identities. To register a managed identity, please assign the service principal the account admin role.

UC_STORAGE_CREDENTIAL_WASBS_NOT_DBFS_ROOT

SQLSTATE: none assigned

Location <location> is not inside the DBFS root, so the storage credential <storageCredentialName> cannot be created.

UC_STORAGE_CREDENTIAL_WORKSPACE_API_PROHIBITED

SQLSTATE: none assigned

Creating or updating a storage credential as a non-account admin is not supported in the account-level API. Please use the workspace-level API instead.

UC_SYSTEM_WORKSPACE_GROUP_PERMISSION_UNSUPPORTED

SQLSTATE: none assigned

Cannot grant privileges on <securableType> to system generated group <principal>.

UC_TAG_ASSIGNMENT_WITH_KEY_DOES_NOT_EXIST

SQLSTATE: none assigned

Tag assignment with tag key <tagKey> does not exist

UC_UNSUPPORTED_HTTP_CONNECTION_BASE_PATH

SQLSTATE: none assigned

Invalid base path provided; the base path should be something like /api/resources/v1. Unsupported path: <path>

UC_UNSUPPORTED_HTTP_CONNECTION_HOST

SQLSTATE: none assigned

Invalid host name provided; the host name should be something like https://www.databricks.com without a path suffix. Unsupported host: <host>

UC_UNSUPPORTED_LATIN_CHARACTER_IN_PATH

SQLSTATE: none assigned

Only basic Latin/Latin-1 ASCII characters are supported in external location, volume, and table paths. Unsupported path: <path>

UC_UPDATE_FORBIDDEN_FOR_PROVISIONING_SECURABLE

SQLSTATE: none assigned

Cannot update <securableType> ‘<securableName>’ because it is being provisioned.

UC_WRITE_CONFLICT

SQLSTATE: none assigned

The <type> <name> has been modified by another request. Please retry the operation.
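
A minimal retry sketch with the Databricks SDK for Python, assuming the conflict surfaces as a DatabricksError whose error_code is this error class (the schema update itself is hypothetical):

    import time

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.errors import DatabricksError

    w = WorkspaceClient()
    for attempt in range(5):
        try:
            w.schemas.update(full_name="main.sales", comment="nightly load")
            break
        except DatabricksError as e:
            if e.error_code != "UC_WRITE_CONFLICT" or attempt == 4:
                raise
            time.sleep(2 ** attempt)  # back off, then retry as the message suggests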

UNITY_CATALOG_EXTERNAL_COORDINATED_COMMITS_REQUEST_DENIED

SQLSTATE: none assigned

Request to perform commit/getCommits for table ‘<tableId>’ from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_CREATE_STAGING_TABLE_REQUEST_DENIED

SQLSTATE: none assigned

Request to create staging table ‘<tableFullName>’ from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_CREATE_TABLE_REQUEST_FOR_NON_EXTERNAL_TABLE_DENIED

SQLSTATE: none assigned

Request to create non-external table ‘<tableFullName>’ from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_GENERATE_PATH_CREDENTIALS_DENIED

SQLSTATE: none assigned

Request to generate access credential for path ‘<path>’ from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_GENERATE_TABLE_CREDENTIALS_DENIED

SQLSTATE: none assigned

Request to generate access credential for table ‘<tableId>’ from outside of Databricks Unity Catalog enabled compute environment is denied for security. Please contact Databricks support for integrations with Unity Catalog.

UNITY_CATALOG_EXTERNAL_GET_FOREIGN_CREDENTIALS_DENIED

SQLSTATE: none assigned

Request to get foreign credentials for securables from outside of Databricks Unity Catalog enabled compute environment is denied for security.

UNITY_CATALOG_EXTERNAL_UPDATA_METADATA_SNAPSHOT_DENIED

SQLSTATE: none assigned

Request to update metadata snapshots from outside of Databricks Unity Catalog enabled compute environment is denied for security.

WRITE_CREDENTIALS_NOT_SUPPORTED_FOR_LEGACY_MANAGED_ONLINE_TABLE

SQLSTATE: none assigned

Invalid request to get write credentials for managed online table in an online catalog.

Files API

FILES_API_API_IS_NOT_ENABLED

SQLSTATE: none assigned

<api_name> API is not enabled

FILES_API_API_IS_NOT_ENABLED_FOR_CLOUD_PATHS

SQLSTATE: none assigned

Requested method of Files API is not supported for cloud paths

FILES_API_AWS_ACCESS_DENIED

SQLSTATE: none assigned

Access to the storage bucket is denied by AWS.

FILES_API_AWS_ALL_ACCESS_DISABLED

SQLSTATE: none assigned

All access to the storage bucket has been disabled in AWS.

FILES_API_AWS_BUCKET_DOES_NOT_EXIST

SQLSTATE: none assigned

The storage bucket does not exist in AWS.

FILES_API_AWS_FORBIDDEN

SQLSTATE: none assigned

Access to the storage bucket is forbidden by AWS.

FILES_API_AWS_INVALID_AUTHORIZATION_HEADER

SQLSTATE: none assigned

The workspace is misconfigured: it must be in the same region as the AWS workspace root storage bucket.

FILES_API_AWS_INVALID_BUCKET_NAME

SQLSTATE: none assigned

The storage bucket name is invalid.

FILES_API_AWS_KMS_KEY_DISABLED

SQLSTATE: none assigned

The configured KMS keys to access the storage bucket are disabled in AWS.

FILES_API_AWS_UNAUTHORIZED

SQLSTATE: none assigned

Access to AWS resource is unauthorized.

FILES_API_AZURE_ACCOUNT_IS_DISABLED

SQLSTATE: none assigned

The storage account is disabled in Azure.

FILES_API_AZURE_CONTAINER_DOES_NOT_EXIST

SQLSTATE: none assigned

The Azure container does not exist.

FILES_API_AZURE_FORBIDDEN

SQLSTATE: none assigned

Access to the storage container is forbidden by Azure.

FILES_API_AZURE_HAS_A_LEASE

SQLSTATE: none assigned

Azure responded that there is currently a lease on the resource. Try again later.

FILES_API_AZURE_INSUFFICIENT_ACCOUNT_PERMISSION

SQLSTATE: none assigned

The account being accessed does not have sufficient permissions to execute this operation.

FILES_API_AZURE_INVALID_STORAGE_ACCOUNT_NAME

SQLSTATE: none assigned

Cannot access storage account in Azure: invalid storage account name.

FILES_API_AZURE_KEY_BASED_AUTHENTICATION_NOT_PERMITTED

SQLSTATE: none assigned

Key-based authentication is not permitted. Check your customer-managed keys settings.

FILES_API_AZURE_KEY_VAULT_KEY_NOT_FOUND

SQLSTATE: none assigned

The Azure Key Vault key is not found in Azure. Check your customer-managed keys settings.

FILES_API_AZURE_KEY_VAULT_VAULT_NOT_FOUND

SQLSTATE: none assigned

The Azure Key Vault vault is not found in Azure. Check your customer-managed keys settings.

FILES_API_AZURE_MI_ACCESS_CONNECTOR_NOT_FOUND

SQLSTATE: none assigned

Azure Managed Identity Credential with Access Connector not found. This could be because IP access controls rejected your request.

FILES_API_AZURE_PATH_INVALID

SQLSTATE: none assigned

The requested path is not valid for Azure.

FILES_API_AZURE_PATH_IS_IMMUTABLE

SQLSTATE: none assigned

The requested path is immutable.

FILES_API_CATALOG_NOT_FOUND

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_CLOUD_RESOURCE_EXHAUSTED

SQLSTATE: none assigned

<message>

FILES_API_COLON_IS_NOT_SUPPORTED_IN_PATH

SQLSTATE: none assigned

the ‘:’ character is not supported in paths

FILES_API_CONTROL_PLANE_NETWORK_ZONE_NOT_ALLOWED

SQLSTATE: none assigned

Databricks Control plane network zone not allowed.

FILES_API_DIRECTORIES_CANNOT_HAVE_BODIES

SQLSTATE: none assigned

A body was provided, but directories cannot have a file body.

FILES_API_DIRECTORY_IS_NOT_EMPTY

SQLSTATE: none assigned

The directory is not empty. This operation is not supported on non-empty directories.

FILES_API_DIRECTORY_IS_NOT_FOUND

SQLSTATE: none assigned

The directory being accessed is not found.

FILES_API_DUPLICATED_HEADER

SQLSTATE: none assigned

The request contained multiple copies of a header that is only allowed once.

FILES_API_DUPLICATE_QUERY_PARAMETER

SQLSTATE: none assigned

Query parameter ‘<parameter_name>’ must be present exactly once but was provided multiple times.

FILES_API_EMPTY_BUCKET_NAME

SQLSTATE: none assigned

The DBFS bucket name is empty.

FILES_API_EXPIRATION_TIME_MUST_BE_PRESENT

SQLSTATE: none assigned

expiration time must be present

FILES_API_EXPIRE_TIME_MUST_BE_IN_THE_FUTURE

SQLSTATE: none assigned

ExpireTime must be in the future

FILES_API_EXPIRE_TIME_TOO_FAR_IN_FUTURE

SQLSTATE: none assigned

Requested TTL is longer than supported (1 hour)

FILES_API_EXTERNAL_LOCATION_PATH_OVERLAP_OTHER_UC_STORAGE_ENTITY

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_FILE_ALREADY_EXISTS

SQLSTATE: none assigned

The file being created already exists.
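
If replacing the existing file is intended, the upload can request an overwrite; a minimal sketch with the Databricks SDK for Python (volume path hypothetical):

    import io

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()
    w.files.upload(
        "/Volumes/main/default/my_volume/report.csv",
        io.BytesIO(b"id,amount\n1,9.99\n"),
        overwrite=True,  # replace the existing file instead of failing
    )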

FILES_API_FILE_NOT_FOUND

SQLSTATE: none assigned

The file being accessed is not found.

FILES_API_FILE_OR_DIRECTORY_ENDS_IN_DOT

SQLSTATE: none assigned

Files or directories ending in the ‘.’ character are not supported.

FILES_API_FILE_SIZE_EXCEEDED

SQLSTATE: none assigned

File size shouldn’t exceed <max_download_size_in_bytes> bytes, but <size_in_bytes> bytes were found.

FILES_API_GCP_ACCOUNT_IS_DISABLED

SQLSTATE: none assigned

Access to the storage bucket has been disabled in GCP.

FILES_API_GCP_BUCKET_DOES_NOT_EXIST

SQLSTATE: none assigned

The storage bucket does not exist in GCP.

FILES_API_GCP_FORBIDDEN

SQLSTATE: none assigned

Access to the bucket is forbidden by GCP.

FILES_API_GCP_KEY_DISABLED_OR_DESTROYED

SQLSTATE: none assigned

The customer-managed encryption key configured for that location is either disabled or destroyed.

FILES_API_GCP_REQUEST_IS_PROHIBITED_BY_POLICY

SQLSTATE: none assigned

Requests to the GCP bucket are prohibited by policy; check the VPC Service Controls.

FILES_API_HOST_TEMPORARILY_NOT_AVAILABLE

SQLSTATE: none assigned

Cloud provider host is temporarily not available; please try again later.

FILES_API_INVALID_CONTINUATION_TOKEN

SQLSTATE: none assigned

The provided page token is not valid.

FILES_API_INVALID_PAGE_TOKEN

SQLSTATE: none assigned

invalid page token

FILES_API_INVALID_PATH

SQLSTATE: none assigned

Invalid path: <validation_error>

FILES_API_INVALID_RANGE

SQLSTATE: none assigned

The range header is invalid.
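
A minimal sketch of a well-formed ranged download against the raw Files API endpoint, assuming it honors standard RFC 7233 Range headers (host, token, and path are hypothetical):

    import os

    import requests

    host = os.environ["DATABRICKS_HOST"]
    token = os.environ["DATABRICKS_TOKEN"]
    resp = requests.get(
        f"{host}/api/2.0/fs/files/Volumes/main/default/my_volume/report.csv",
        headers={
            "Authorization": f"Bearer {token}",
            "Range": "bytes=0-1048575",  # first 1 MiB; a malformed value triggers this error
        },
    )
    resp.raise_for_status()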

FILES_API_INVALID_RESOURCE_FULL_NAME

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_INVALID_SESSION_TOKEN

SQLSTATE: none assigned

Invalid session token

FILES_API_INVALID_SESSION_TOKEN_TYPE

SQLSTATE: none assigned

Invalid session token type. Expected ‘<expected>’ but got ‘<actual>’.

FILES_API_INVALID_TIMESTAMP

SQLSTATE: none assigned

The timestamp is invalid.

FILES_API_INVALID_UPLOAD_TYPE

SQLSTATE: none assigned

Invalid upload type. Expected ‘<expected>’ but got ‘<actual>’.

FILES_API_INVALID_URL_PARAMETER

SQLSTATE: none assigned

The URL passed as a parameter is invalid.

FILES_API_INVALID_VALUE_FOR_OVERWRITE_QUERY_PARAMETER

SQLSTATE: none assigned

Query parameter ‘overwrite’ must be one of: true, false, but was: <got_values>
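
A minimal sketch against the raw endpoint, showing the parameter spelled as the literal string true (host, token, and path are hypothetical):

    import os

    import requests

    host = os.environ["DATABRICKS_HOST"]
    token = os.environ["DATABRICKS_TOKEN"]
    resp = requests.put(
        f"{host}/api/2.0/fs/files/Volumes/main/default/my_volume/report.csv",
        params={"overwrite": "true"},  # any value other than true/false triggers this error
        headers={"Authorization": f"Bearer {token}"},
        data=b"id,amount\n1,9.99\n",
    )
    resp.raise_for_status()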

FILES_API_INVALID_VALUE_FOR_QUERY_PARAMETER

SQLSTATE: none assigned

Query parameter ‘<parameter_name>’ must be one of: <expected>, but was: <actual>

FILES_API_MALFORMED_REQUEST_BODY

SQLSTATE: none assigned

Malformed request body

FILES_API_MANAGED_CATALOG_FEATURE_DISABLED

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_METASTORE_NOT_FOUND

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_METHOD_IS_NOT_ENABLED_FOR_JOBS_BACKGROUND_COMPUTE_ARTIFACT_STORAGE

SQLSTATE: none assigned

Requested method of Files API is not supported for Jobs Background Compute Artifact Storage.

FILES_API_MISSING_CONTENT_LENGTH

SQLSTATE: none assigned

The content-length header is required in the request.

FILES_API_MISSING_QUERY_PARAMETER

SQLSTATE: none assigned

Query parameter ‘<parameter_name>’ is required but is missing from the request.

FILES_API_MISSING_REQUIRED_PARAMETER_IN_REQUEST

SQLSTATE: none assigned

The request is missing a required parameter.

FILES_API_MODEL_VERSION_IS_NOT_READY

SQLSTATE: none assigned

Model version is not ready yet

FILES_API_NOT_ENABLED_FOR_PLACE

SQLSTATE: none assigned

Files API for <place> is not enabled for this workspace/account

FILES_API_NOT_SUPPORTED_FOR_INTERNAL_WORKSPACE_STORAGE

SQLSTATE: none assigned

Requested method of Files API is not supported for Internal Workspace Storage

FILES_API_OPERATION_MUST_BE_PRESENT

SQLSTATE: none assigned

operation must be present

FILES_API_PAGE_SIZE_MUST_BE_GREATER_OR_EQUAL_TO_ZERO

SQLSTATE: none assigned

page_size must be greater than or equal to 0

FILES_API_PATH_END_WITH_A_SLASH

SQLSTATE: none assigned

Paths ending in the ‘/’ character represent directories. This API does not support operations on directories.

FILES_API_PATH_IS_A_DIRECTORY

SQLSTATE: none assigned

The given path points to an existing directory. This API does not support operations on directories.

FILES_API_PATH_IS_A_FILE

SQLSTATE: none assigned

The given path points to an existing file. This API does not support operations on files.
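
Directories have their own operations; a minimal Databricks SDK for Python sketch (volume path hypothetical) that uses the directory methods instead of the file ones:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()
    w.files.create_directory("/Volumes/main/default/my_volume/reports")
    for entry in w.files.list_directory_contents("/Volumes/main/default/my_volume/reports"):
        print(entry.path, entry.is_directory)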

FILES_API_PATH_IS_NOT_A_VALID_UTF8_ENCODED_URL

SQLSTATE: none assigned

the given path was not a valid UTF-8 encoded URL

FILES_API_PATH_IS_NOT_ENABLED_FOR_DATAPLANE_PROXY

SQLSTATE: none assigned

Given path is not enabled for data plane proxy

FILES_API_PATH_MUST_BE_PRESENT

SQLSTATE: none assigned

path must be present

FILES_API_PATH_NOT_SUPPORTED

SQLSTATE: none assigned

<rejection_message>

FILES_API_PATH_TOO_LONG

SQLSTATE: none assigned

Provided file path is too long.

FILES_API_PRECONDITION_FAILED

SQLSTATE: none assigned

The request failed due to a precondition.

FILES_API_PRESIGNED_URLS_FOR_MODELS_NOT_SUPPORTED

SQLSTATE: none assigned

The Files API for presigned URLs for models is not supported at the moment.

FILES_API_R2_CREDENTIALS_DISABLED

SQLSTATE: none assigned

R2 is unsupported at the moment.

FILES_API_RANGE_NOT_SATISFIABLE

SQLSTATE: none assigned

The range requested is not satisfiable.

FILES_API_RECURSIVE_LIST_IS_NOT_SUPPORTED

SQLSTATE: none assigned

Recursively listing files is not supported.

FILES_API_REQUEST_GOT_ROUTED_INCORRECTLY

SQLSTATE: none assigned

Request got routed incorrectly

FILES_API_REQUEST_MUST_INCLUDE_ACCOUNT_INFORMATION

SQLSTATE: none assigned

Request must include account information

FILES_API_REQUEST_MUST_INCLUDE_USER_INFORMATION

SQLSTATE: none assigned

Request must include user information

FILES_API_REQUEST_MUST_INCLUDE_WORKSPACE_INFORMATION

SQLSTATE: none assigned

Request must include workspace information

FILES_API_RESOURCE_IS_READONLY

SQLSTATE: none assigned

Resource is read-only.

FILES_API_RESOURCE_NOT_FOUND

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_SCHEMA_NOT_FOUND

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_SECURE_URL_CANT_BE_ACCESSED

SQLSTATE: none assigned

The URL can’t be accessed.

FILES_API_SIGNATURE_VERIFICATION_FAILED

SQLSTATE: none assigned

The signature verification failed.

FILES_API_STORAGE_CONTEXT_IS_NOT_SET

SQLSTATE: none assigned

Storage configuration for this workspace is not accessible.

FILES_API_STORAGE_CREDENTIAL_NOT_FOUND

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_TABLE_TYPE_NOT_SUPPORTED

SQLSTATE: none assigned

Files API is not supported for <table_type>

FILES_API_UC_IAM_ROLE_NON_SELF_ASSUMING

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_MODEL_INVALID_STATE

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_PERMISSION_DENIED

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_RESOURCE_EXHAUSTED

SQLSTATE: none assigned

<message>

FILES_API_UC_UNSUPPORTED_LATIN_CHARACTER_IN_PATH

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UC_VOLUME_NAME_CHANGED

SQLSTATE: none assigned

<unity_catalog_error_message>

FILES_API_UNEXPECTED_ERROR_WHILE_PARSING_URI

SQLSTATE: none assigned

Unexpected error when parsing the URI

FILES_API_UNEXPECTED_QUERY_PARAMETERS

SQLSTATE: none assigned

Unexpected query parameters: <unexpected_query_parameters>

FILES_API_UNKNOWN_METHOD

SQLSTATE: none assigned

Unknown method <method>

FILES_API_UNKNOWN_SERVER_ERROR

SQLSTATE: none assigned

Unknown server error.

FILES_API_UNKNOWN_URL_HOST

SQLSTATE: none assigned

The URL host is unknown.

FILES_API_UNSUPPORTED_PATH

SQLSTATE: none assigned

The provided path is not supported by the Files API. Make sure the provided path does not contain instances of ‘../’ or ‘./’ sequences. Make sure the provided path does not use multiple consecutive slashes (e.g. ‘///’).

FILES_API_URL_GENERATION_DISABLED

SQLSTATE: none assigned

Presigned URL generation is not enabled for <cloud>.

FILES_API_VOLUME_TYPE_NOT_SUPPORTED

SQLSTATE: none assigned

Files API is not supported for <volume_type>.

FILES_API_WORKSPACE_IS_CANCELED

SQLSTATE: none assigned

The workspace has been canceled.

FILES_API_WORKSPACE_IS_NOT_FOUND

SQLSTATE: none assigned

Storage configuration for this workspace is not accessible.

Miscellaneous

ABAC_ROW_COLUMN_POLICIES_NOT_SUPPORTED_ON_ASSIGNED_CLUSTERS

SQLSTATE: none assigned

Query on table <tableFullName> with row filter or column mask assigned through policy rules isn’t supported on assigned clusters.

AZURE_ENTRA_CREDENTIALS_MISSING

SQLSTATE: none assigned

Azure Entra (aka Azure Active Directory) credentials missing.

Ensure you are either logged in with your Entra account or have set up an Azure DevOps personal access token (PAT) in User Settings > Git Integration.

If you are not using a PAT and are using Azure DevOps with the Repos API, you must use an Azure Entra access token. See https://docs.microsoft.com/azure/databricks/dev-tools/api/latest/aad/app-aad-token for steps to acquire an Azure Entra access token.
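
A minimal sketch that registers an Azure DevOps PAT with the Databricks SDK for Python (username and token are hypothetical placeholders):

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()
    w.git_credentials.create(
        git_provider="azureDevOpsServices",
        git_username="user@example.com",
        personal_access_token="<azure-devops-pat>",
    )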

AZURE_ENTRA_CREDENTIALS_PARSE_FAILURE

SQLSTATE: none assigned

Encountered an error with your Azure Entra (Azure Active Directory) credentials. Please try logging out of Entra (https://portal.azure.com) and logging back in.

Alternatively, you may also visit User Settings > Git Integration to set up an Azure DevOps personal access token.

AZURE_ENTRA_LOGIN_ERROR

SQLSTATE: none assigned

Encountered an error with your Azure Active Directory credentials. Please try logging out of Azure Active Directory (https://portal.azure.com) and logging back in.

CLEAN_ROOM_DELTA_SHARING_ENTITY_NOT_AUTHORIZED

SQLSTATE: none assigned

Credential generation for clean room delta sharing securable cannot be requested.

CLEAN_ROOM_HIDDEN_SECURABLE_PERMISSION_DENIED

SQLSTATE: none assigned

Securable <securableName> with type <securableType> and kind <securableKind> is managed by the clean room system; the user does not have access.

CONSTRAINT_ALREADY_EXISTS

SQLSTATE: none assigned

Constraint with name <constraintName> already exists; choose a different name.

CONSTRAINT_DOES_NOT_EXIST

SQLSTATE: none assigned

Constraint <constraintName> does not exist.

COULD_NOT_READ_REMOTE_REPOSITORY

SQLSTATE: none assigned

Could not read remote repository (<repoUrl>).

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Please ensure that:

  1. Your remote Git repo URL is valid.
  2. Your personal access token or app password has the correct repo access.

Error from Git: <gitErrorMessage>

COULD_NOT_RESOLVE_REPOSITORY_HOST

SQLSTATE: none assigned

Could not resolve host for <repoUrl>.

CSMS_BEGINNING_OF_TIME_NOT_SUPPORTED

SQLSTATE: none assigned

Parameter beginning_of_time cannot be true.

CSMS_CONTINUATION_TOKEN_EXPIRED

SQLSTATE: none assigned

Requested objects could not be found for the continuation token.

CSMS_INVALID_CONTINUATION

SQLSTATE: none assigned

Provided both ‘beginning_of_time=true’ and a ‘continuation_token’. When ‘beginning_of_time’ is set to ‘true’, ‘continuation_token’ should not be provided.

CSMS_INVALID_CONTINUATION_TOKEN

SQLSTATE: none assigned

Continuation token invalid. Cause: <msg>

CSMS_INVALID_MAX_OBJECTS

SQLSTATE: none assigned

Invalid value <value> for parameter max_objects, expected value in [<minValue>, <maxValue>]

CSMS_INVALID_URI_FORMAT

SQLSTATE: none assigned

Invalid URI format. Expected a volume (e.g. “/Volumes/catalog/schema/volume”) or cloud storage path (e.g. “s3://some-uri”)

CSMS_LOCATION_ERROR

SQLSTATE: none assigned

Failed to list objects. There are problems on the location that need to be resolved. Details: <msg>

CSMS_LOCATION_NOT_KNOWN

SQLSTATE: none assigned

No location found for URI <path>

CSMS_METASTORE_RESOLUTION_FAILED

SQLSTATE: none assigned

Unable to determine a metastore for the request.

CSMS_SERVICE_DISABLED

SQLSTATE: none assigned

Service is disabled

CSMS_UNITY_CATALOG_ENTITY_NOT_FOUND

SQLSTATE: none assigned

Unity Catalog entity not found. Ensure that the catalog, schema, volume and/or external location exists.

CSMS_UNITY_CATALOG_EXTERNAL_LOCATION_DOES_NOT_EXIST

SQLSTATE: none assigned

Unity Catalog external location does not exist.

CSMS_UNITY_CATALOG_EXTERNAL_STORAGE_OVERLAP

SQLSTATE: none assigned

URI overlaps with other volumes

CSMS_UNITY_CATALOG_METASTORE_DOES_NOT_EXIST

SQLSTATE: none assigned

Unable to determine a metastore for the request. Metastore does not exist

CSMS_UNITY_CATALOG_PERMISSION_DENIED

SQLSTATE: none assigned

Permission denied

CSMS_UNITY_CATALOG_TABLE_DOES_NOT_EXIST

SQLSTATE: none assigned

Unity Catalog table does not exist.

CSMS_UNITY_CATALOG_VOLUME_DOES_NOT_EXIST

SQLSTATE: none assigned

Unity Catalog volume does not exist.

CSMS_URI_MISSING

SQLSTATE: none assigned

Must provide a URI

CSMS_URI_TOO_LONG

SQLSTATE: none assigned

The provided URI is too long. The maximum permitted length is <maxLength>.

DMK_CATALOGS_DISALLOWED_ON_CLASSIC_COMPUTE

SQLSTATE: none assigned

Databricks Default Storage cannot be accessed using Classic Compute. Please use Serverless compute to access data in Default Storage

GITHUB_APP_COULD_NOT_REFRESH_CREDENTIALS

SQLSTATE: none assigned

Operation failed because linked GitHub app credentials could not be refreshed.

Please try again or go to User Settings > Git Integration and try relinking your Git provider account.

If the problem persists, please file a support ticket.

GITHUB_APP_CREDENTIALS_NO_ACCESS

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. An admin of the repository must go to https://github.com/apps/databricks/installations/new and install the Databricks GitHub app on the repository.

Alternatively, a GitHub account owner can install the app on the account to give access to the account’s repositories.

  2. If the app is already installed, have an admin ensure that if they are using scoped access with the ‘Only select repositories’ option, they have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GITHUB_APP_EXPIRED_CREDENTIALS

SQLSTATE: none assigned

Linked GitHub app credentials expired after 6 months of inactivity.

Go to User Settings > Git Integration and try relinking your credentials.

If the problem persists, please file a support ticket.

GITHUB_APP_INSTALL_ON_DIFFERENT_USER_ACCOUNT

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. GitHub user <gitCredentialUsername> should go to https://github.com/apps/databricks/installations/new and install the app on the account <gitCredentialUsername> to allow access.
  2. If user <gitCredentialUsername> already installed the app and they are using scoped access with the ‘Only select repositories’ option, they should ensure they have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GITHUB_APP_INSTALL_ON_ORGANIZATION

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. An owner of the GitHub organization <organizationName> should go to https://github.com/apps/databricks/installations/new and install the app on the organization <organizationName> to allow access.
  2. If the app is already installed on GitHub organization <organizationName>, have an owner of this organization ensure that if using scoped access with the ‘Only select repositories’ option, they have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GITHUB_APP_INSTALL_ON_YOUR_ACCOUNT

SQLSTATE: none assigned

The link to your GitHub account does not have access. To fix this error:

  1. Go to https://github.com/apps/databricks/installations/new and install the app on your account <gitCredentialUsername> to allow access.
  2. If the app is already installed, and you are using scoped access with the ‘Only select repositories’ option, ensure that you have included access to this repository by selecting it.

Refer to https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html#link-github-account-using-databricks-github-app for more information.

If the problem persists, please file a support ticket.

GIT_CREDENTIAL_GENERIC_INVALID

SQLSTATE: none assigned

Invalid Git provider credentials for repository URL <repoUrl>.

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Go to User Settings > Git Integration to view your credential.

Please go to your remote Git provider to ensure that:

  1. You have entered the correct Git user email or username with your Git provider credentials.
  2. Your token or app password has the correct repo access.
  3. Your token has not expired.
  4. If you have SSO enabled with your Git provider, be sure to authorize your token.

GIT_CREDENTIAL_INVALID_PAT

SQLSTATE: none assigned

Invalid Git provider Personal Access Token credentials for repository URL <repoUrl>.

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Go to User Settings > Git Integration to view your credential.

Please go to your remote Git provider to ensure that:

  1. You have entered the correct Git user email or username with your Git provider credentials.
  2. Your token or app password has the correct repo access.
  3. Your token has not expired.
  4. If you have SSO enabled with your Git provider, be sure to authorize your token.

GIT_CREDENTIAL_MISSING

SQLSTATE: none assigned

No Git credential configured, but credential required for this repository (<repoUrl>).

Go to User Settings > Git Integration to set up your Git credentials.

GIT_CREDENTIAL_NO_WRITE_PERMISSION

SQLSTATE: none assigned

Write access to <gitCredentialProvider> repository (<repoUrl>) not granted.

Make sure you (<gitCredentialUsername>) have write access to this remote repository.

GIT_CREDENTIAL_PROVIDER_MISMATCHED

SQLSTATE: none assigned

Incorrect Git credential provider for repository.

Your current Git credential’s provider (<gitCredentialProvider>) does not match that of the repository’s Git provider (<repoUrl>).

Try a different repository or go to User Settings > Git Integration to update your Git credentials.

HIERARCHICAL_NAMESPACE_NOT_ENABLED

SQLSTATE: none assigned

The Azure storage account does not have hierarchical namespace enabled.

INVALID_FIELD_LENGTH

SQLSTATE: none assigned

<rpcName> <fieldName> is too long. The maximum length is <maxLength> characters.

INVALID_PARAMETER_VALUE

SQLSTATE: none assigned

<msg>

For more details see INVALID_PARAMETER_VALUE

JOBS_TASK_FRAMEWORK_TASK_RUN_OUTPUT_NOT_FOUND

SQLSTATE: none assigned

Task Framework: Task Run Output for Task with runId <runId> and orgId <orgId> could not be found.

JOBS_TASK_FRAMEWORK_TASK_RUN_STATE_NOT_FOUND

SQLSTATE: none assigned

Task Framework: Task Run State for Task with runId <runId> and orgId <orgId> could not be found.

JOBS_TASK_REGISTRY_TASK_CLIENT_CONFIG_DOES_NOT_EXIST

SQLSTATE: none assigned

RPC ClientConfig for Task with ID <taskId> does not exist.

JOBS_TASK_REGISTRY_TASK_DOES_NOT_EXIST

SQLSTATE: none assigned

Task with ID <taskId> does not exist.

JOBS_TASK_REGISTRY_UNSUPPORTED_JOB_TASK

SQLSTATE: none assigned

Task Registry: Unsupported or unknown JobTask with class <taskClassName>.

PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_EXTERNAL_SHALLOW_CLONE

SQLSTATE: none assigned

Path-based access to external shallow clone table <tableFullName> is not supported. Please use table names to access the shallow clone instead.

PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_FABRIC

SQLSTATE: none assigned

Fabric table located at URL ‘<url>’ is not found. Please use the REFRESH FOREIGN CATALOG command to populate Fabric tables.

PATH_BASED_ACCESS_NOT_SUPPORTED_FOR_TABLES_WITH_ROW_COLUMN_ACCESS_POLICIES

SQLSTATE: none assigned

Path-based access to table <tableFullName> with row filter or column mask not supported.

PERMISSION_DENIED

SQLSTATE: none assigned

User does not have <msg> on <resourceType> ‘<resourceName>’.

REDASH_DELETE_ASSET_HANDLER_INVALID_INPUT

SQLSTATE: none assigned

Unable to parse delete object request: <invalidInputMsg>

REDASH_DELETE_OBJECT_NOT_IN_TRASH

SQLSTATE: none assigned

Unable to delete object <resourceName> that is not in trash

REDASH_PERMISSION_DENIED

SQLSTATE: none assigned

Could not find resource <resourceId>, or you do not have permission to access it.

REDASH_QUERY_NOT_FOUND

SQLSTATE: none assigned

Unable to find the resource from query id <queryId>

REDASH_QUERY_SNIPPET_CREATION_FAILED

SQLSTATE: none assigned

Unable to create new query snippet

REDASH_QUERY_SNIPPET_QUOTA_EXCEEDED

SQLSTATE: none assigned

The quota for the number of query snippets has been reached. The current quota is <quota>.

REDASH_QUERY_SNIPPET_TRIGGER_ALREADY_IN_USE

SQLSTATE: none assigned

The specified trigger <trigger> is already in use by another query snippet in this workspace.

REDASH_RESOURCE_NOT_FOUND

SQLSTATE: none assigned

The requested resource <resourceName> does not exist

REDASH_RESTORE_ASSET_HANDLER_INVALID_INPUT

SQLSTATE: none assigned

Unable to parse restore object request: <invalidInputMsg>

REDASH_RESTORE_OBJECT_NOT_IN_TRASH

SQLSTATE: none assigned

Unable to restore object <resourceName> that is not in trash

REDASH_TRASH_OBJECT_ALREADY_IN_TRASH

SQLSTATE: none assigned

Unable to trash already-trashed object <resourceName>

REDASH_UNABLE_TO_GENERATE_RESOURCE_NAME

SQLSTATE: none assigned

Could not generate resource name from id <id>

REDASH_VISUALIZATION_CREATION_FAILED

SQLSTATE: none assigned

Unable to create new visualization

REDASH_VISUALIZATION_NOT_FOUND

SQLSTATE: none assigned

Could not find visualization <visualizationId>

REDASH_VISUALIZATION_QUOTA_EXCEEDED

SQLSTATE: none assigned

The quota for the number of visualizations on query <query_id> has been reached. The current quota is <quota>.

REPOSITORY_URL_NOT_FOUND

SQLSTATE: none assigned

Remote repository (<repoUrl>) not found.

Your current Git credentials provider is <gitCredentialProvider> and username is <gitCredentialUsername>.

Please ensure that:

  1. Your remote Git repo URL is valid.
  2. Your personal access token or app password has the correct repo access.

RESOURCE_ALREADY_EXISTS

SQLSTATE: none assigned

<resourceType> ‘<resourceIdentifier>’ already exists

RESOURCE_DOES_NOT_EXIST

SQLSTATE: none assigned

<resourceType> ‘<resourceIdentifier>’ does not exist.
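
The Databricks SDK for Python maps this condition to a typed exception, so it can be handled without parsing the message text; a minimal sketch (table name hypothetical):

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.errors import NotFound

    w = WorkspaceClient()
    try:
        info = w.tables.get("main.sales.orders")
    except NotFound as e:
        print("missing:", e.error_code)  # e.g. RESOURCE_DOES_NOT_EXIST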

ROW_COLUMN_ACCESS_POLICIES_NOT_SUPPORTED_ON_ASSIGNED_CLUSTERS

SQLSTATE: none assigned

Query on table <tableFullName> with row filter or column mask not supported on assigned clusters.

ROW_COLUMN_SECURITY_NOT_SUPPORTED_WITH_TABLE_IN_DELTA_SHARING

SQLSTATE: none assigned

Table <tableFullName> is being shared with Delta Sharing, and cannot use row/column security.

SERVICE_TEMPORARILY_UNAVAILABLE

SQLSTATE: none assigned

The <serviceName> service is temporarily under maintenance. Please try again later.

TABLE_WITH_ROW_COLUMN_SECURITY_NOT_SUPPORTED_IN_ONLINE_MODE

SQLSTATE: none assigned

Table <tableFullName> cannot have both row/column security and online materialized views.

TOO_MANY_ROWS_TO_UPDATE

SQLSTATE: none assigned

Too many rows to update, aborting update.