Loading a partitioned Parquet file into an Azure database
I inserted a Parquet file into an Azure database, but the throughput was low, so I thought that if I partitioned the file I could load the partitions in parallel.
I partitioned the file on DefaultRating using PySpark and tried the insert again, but I can't get the settings right: nothing is copied at all anymore.
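Roughly, the write looked like this; the storage account, container, and output folder names here are placeholders, so this is a minimal sketch rather than the exact script:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-by-rating").getOrCreate()

# Read the original unpartitioned file (paths are placeholders)
df = spark.read.parquet(
    "abfss://data@myaccount.dfs.core.windows.net/Output_25_10_24.parquet"
)

# Writes one subfolder per DefaultRating value, each holding
# snappy-compressed part files, e.g.:
#   Output/DefaultRating=A/part-00000-<uuid>.c000.snappy.parquet
#   Output/DefaultRating=B/part-00000-<uuid>.c000.snappy.parquet
(df.write
   .mode("overwrite")
   .partitionBy("DefaultRating")
   .parquet("abfss://data@myaccount.dfs.core.windows.net/Output/"))
```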
Below is one of the partition folders based on DefaultRating:
Below are the source dataset settings for a simple Copy Data activity, with the Parquet dataset pointing at the snappy part files in each partition folder.
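In JSON terms, the copy activity source I'm describing would look roughly like the sketch below; the wildcard values are my assumption about how to match the part files, and the sink type and store settings are placeholders for my actual dataset:

```json
{
    "source": {
        "type": "ParquetSource",
        "storeSettings": {
            "type": "AzureBlobFSReadSettings",
            "recursive": true,
            "wildcardFolderPath": "Output/DefaultRating=*",
            "wildcardFileName": "*.snappy.parquet"
        }
    },
    "sink": {
        "type": "AzureSqlSink"
    }
}
```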
This returned a "path not found" error. When I set the file name to Output_25_10_24.parquet instead, nothing was written.
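To double-check what Spark actually wrote under the output path, I can list the blobs directly; this is a sketch with a hypothetical connection string and container name:

```python
from azure.storage.blob import BlobServiceClient

# Connection string and container name are placeholders
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("data")

# Print every blob under the partitioned output folder; I expect
# part files inside DefaultRating=<value> subfolders rather than
# a single file named Output_25_10_24.parquet
for blob in container.list_blobs(name_starts_with="Output/"):
    print(blob.name)
```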