@Orowole, Ayebogbon-XT -- I understand the problem. By design and by default, the Delta framework uses optimistic concurrency control when writing data back to the table. However, the error you are facing is not about the table's data files themselves; it happens when the Delta framework writes an underlying transaction log file that another thread may already have created.
- I don't think there is a straightforward solution to this.
- I believe this is related to optimized write. This feature lets the Delta framework write fewer, larger files instead of many small ones. By default it is off, which means that every time a thread writes something it has to invalidate a file (if it is an update) and create a new data file, which is then recorded in the checkpoint/transaction log. In your case it looks like two threads are trying to create the same checkpoint-numbered log file at the same time. You can turn this feature on with the table property delta.autoOptimize.optimizeWrite, and it can also be enabled through the DataFrameWriter option; see the first sketch just below this list.
- This might also be caused by the auto compaction (autoCompact) feature within the Delta framework, which automatically merges lots of small files into fewer larger ones to improve performance. You can play around with this feature and see if it helps. If you turn it off, you need to maintain the table yourself by manually compacting and vacuuming; see the second sketch below.
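Here is a minimal sketch of enabling optimized write with PySpark, assuming a Delta-enabled Spark session and a placeholder table name my_delta_table (the session-level config key shown is the one used on Databricks runtimes, so verify it for your Delta version):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Persist optimized write as a table property on the Delta table.
# "my_delta_table" is a placeholder for your actual table name.
spark.sql("""
    ALTER TABLE my_delta_table
    SET TBLPROPERTIES ('delta.autoOptimize.optimizeWrite' = 'true')
""")

# Alternatively, enable it at the session level. This config key is the
# Databricks one; check the exact key for your runtime/Delta version.
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")
```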
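And if you do turn auto compaction off, the manual maintenance could look roughly like this (table name and retention window are placeholders; OPTIMIZE needs Delta Lake 2.0+ or a Databricks runtime):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact many small files into fewer, larger ones.
spark.sql("OPTIMIZE my_delta_table")

# Remove data files no longer referenced by the transaction log.
# 168 hours (7 days) matches the default retention threshold.
spark.sql("VACUUM my_delta_table RETAIN 168 HOURS")
```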
Regarding the Parquet format: this won't happen if you use plain Parquet. However, you will lose all the nice features that come with a Delta table, such as schema evolution, statistics, SQL-based querying, the change data feed, and so on.
I don't know much about the data set you are dealing with. Another option is to land the data as Parquet and then have a separate process load that Parquet into a Delta table (which might be your silver layer); a rough sketch of that pattern is below.
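A possible shape for that landing pattern, assuming PySpark with Delta Lake available and placeholder paths and sample data (swap in your own locations and schema):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Step 1 (landing): each concurrent thread writes plain Parquet, so there is
# no Delta transaction log to contend on. Path and sample data are placeholders.
incoming_df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
incoming_df.write.mode("append").parquet("/mnt/landing/my_dataset")

# Step 2 (separate process, e.g. your silver layer): a single writer reads the
# landed Parquet and appends it to the Delta table, so only one process touches
# the Delta log.
landed_df = spark.read.parquet("/mnt/landing/my_dataset")
(landed_df.write
    .format("delta")
    .mode("append")
    .save("/mnt/silver/my_dataset"))
```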
Mark as answer if this helps!