Hi,
Thanks for reaching out to Microsoft Q&A.
The error suggests a data type mismatch between the table schema and the CSV file.
- Even though the column names match, the data types inferred from the CSV file might differ from the table schema.
- The table defines transfer_month as STRING, but the CSV file might be inferred as another type (e.g., DATE or INT).
Try explicitly casting the transfer_month column in the COPY INTO query. If the issue persists, use the following inferSchema check to see how Databricks reads the CSV:

df = spark.read.option("header", "true").option("inferSchema", "true").csv("dbfs:/Volumes/catalogA/schemaA/data.csv")
df.printSchema()
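Once you have the inferred schema, compare it against the target table's schema column by column. A minimal stand-alone sketch of that comparison (the column names and types below are illustrative stand-ins for what df.dtypes and DESCRIBE TABLE would return, not values from your actual tables):

```python
# Illustrative schemas: in practice, build these dicts from dict(df.dtypes)
# and from the target table's DESCRIBE TABLE output.
csv_schema = {"transfer_month": "date", "amount": "int"}      # inferred from the CSV
table_schema = {"transfer_month": "string", "amount": "int"}  # target table definition

# Collect columns whose types disagree between the two schemas.
mismatches = {
    col: (csv_schema[col], table_schema[col])
    for col in csv_schema
    if col in table_schema and csv_schema[col] != table_schema[col]
}

print(mismatches)  # → {'transfer_month': ('date', 'string')}
```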
- Modify your query to force all columns into the correct data type before merging.
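A sketch of what that casting could look like in the COPY INTO statement (the table name and file path are placeholders carried over from the question; add one CAST per column in your real schema):

```sql
COPY INTO catalogB.schemaB.tableB
FROM (
  -- Cast each column explicitly so the inferred CSV types
  -- cannot conflict with the table schema.
  SELECT CAST(transfer_month AS STRING) AS transfer_month
  FROM 'dbfs:/Volumes/catalogA/schemaA/data.csv'
)
FILEFORMAT = CSV
FORMAT_OPTIONS ('header' = 'true');
```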
- Databricks might interpret hidden special characters (e.g., non-breaking spaces) in the column names. To inspect the table's column names and types, run:
DESCRIBE TABLE catalogB.schemaB.tableB;
If there’s a mismatch, rename the column:
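For example (the table name comes from the question; the old column name with a trailing non-breaking space is hypothetical, and RENAME COLUMN on Delta tables requires column mapping to be enabled first):

```sql
-- Enable column mapping once per table (required for RENAME COLUMN on Delta):
ALTER TABLE catalogB.schemaB.tableB
  SET TBLPROPERTIES ('delta.minReaderVersion' = '2',
                     'delta.minWriterVersion' = '5',
                     'delta.columnMapping.mode' = 'name');

-- Rename the column whose name contains the hidden character:
ALTER TABLE catalogB.schemaB.tableB
  RENAME COLUMN `transfer_month ` TO transfer_month;
```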
Please feel free to click the 'Upvote' (Thumbs-up) button and 'Accept as Answer'. This helps the community by allowing others with similar queries to easily find the solution.