Hi,
Thanks for reaching out to Microsoft Q&A.
Below are some approaches you can take to troubleshoot and potentially resolve the performance issue without altering your existing code.
- Match or exceed your Synapse compute resources: Verify that the Spark environment in Fabric actually matches your previous Medium (8 vCore) pool in Synapse; capacity-based resources can be allocated differently. The first sketch after this list shows one way to inspect the session's effective configuration.
- Review concurrency and scheduling: Fabric capacity is shared among all workloads and workspaces assigned to it, so you may not get the same dedicated cluster experience you had in Synapse.
- Monitor Spark job stages: Identify bottlenecks or skewed data that may have been introduced by changes in the underlying Fabric environment or OneLake file structures (see the second sketch after this list).
- Check for environment overheads: Warm-up times, autoscaling, and concurrency limits can dramatically affect total runtime if they are not tuned (see the third sketch after this list).
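
To sanity-check the first point, you can print the effective session configuration from a Fabric notebook and compare it against your old Synapse Medium pool. This is a minimal sketch that assumes the built-in `spark` session; which keys are actually set depends on your pool and environment settings.

```python
# Compare the effective Fabric session settings against the old Synapse pool.
# Keys not explicitly set by the pool/environment fall back to "(not set)".
for key in [
    "spark.executor.memory",
    "spark.executor.cores",
    "spark.executor.instances",
    "spark.dynamicAllocation.enabled",
    "spark.sql.shuffle.partitions",
]:
    print(key, "=", spark.conf.get(key, "(not set)"))

print("defaultParallelism =", spark.sparkContext.defaultParallelism)
```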
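For the skew check, counting rows per partition on one of the slow tables quickly shows whether a handful of partitions carry most of the data. The table path below is a placeholder for illustration; substitute a table involved in the slow job.

```python
from pyspark.sql import functions as F

# Placeholder path: point this at one of the tables involved in the slow job.
df = spark.read.format("delta").load("Tables/your_table")

# Row counts per Spark partition; a few partitions dwarfing the rest
# matches the long-tail stages you would also see in the Spark UI.
(df.withColumn("pid", F.spark_partition_id())
   .groupBy("pid")
   .count()
   .orderBy(F.desc("count"))
   .show(10))
```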
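To separate session warm-up from actual job time, time a trivial action twice: the first run absorbs any cold-start cost, while the second reflects steady-state scheduling. A rough sketch:

```python
import time

def timed_count(label):
    # Trivial action; its runtime is dominated by scheduling overhead.
    t0 = time.time()
    spark.range(1).count()
    print(f"{label}: {time.time() - t0:.2f}s")

timed_count("first action (includes warm-up)")
timed_count("second action (steady state)")
```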
By validating capacity usage, examining Spark settings, keeping data partitioning consistent, and monitoring concurrency, you should be able to approach the performance you saw in Synapse, all without modifying your existing code. If the issue persists, engaging Microsoft support during this early phase of Fabric could yield targeted solutions.
Please feel free to click 'Upvote' (thumbs-up) and 'Accept as Answer'. This helps the community by allowing others with similar queries to find the solution easily.