Why are incorrect v-cores allocated?
I created and ran a notebook with one executor.
In the Monitor section, I see that 48 cores are allocated, and I am unable to use more cores in other notebooks. I also have to wait almost 30 minutes after stopping the Spark session before I can create a new one.
What is the issue here? Am I doing something wrong?
Azure Synapse Analytics
-
Chandra Boorla • 8,560 Reputation points • Microsoft Vendor
2025-02-18T07:23:27.6366667+00:00 Hi @M Saad
Thank you for posting your query!
It seems that you are experiencing an issue with vCore allocation in your Spark session. In Azure Synapse, the cores available to a workspace can be affected by several factors, including dynamic allocation conflicts, a misconfigured `spark.executor.cores`, the number of active jobs, and the configuration of your Spark pool. When you create a Spark pool, it defines a quota for the number of vCores that can be used. If your notebook shows 48 cores allocated, it may be due to the pool's configuration or the way resources are being reserved. If other notebooks are unable to obtain cores, it could be because the workspace's total available cores are being consumed by the active job or held for other operations.
Here are some troubleshooting steps to resolve vCore allocation issues:

- **Adjust the Spark configuration** - Set `spark.executor.cores` to match the available cores per node (e.g., `spark.executor.cores=4` for a 4-core worker). Disable dynamic allocation with `spark.dynamicAllocation.enabled=false` if precise control is needed, and explicitly define `spark.executor.instances` to avoid over-provisioning.
- **Tune the cluster manager** - On YARN, ensure `yarn.nodemanager.resource.cpu-vcores` matches the node's actual cores and aligns with Spark's `spark.executor.cores`. On Kubernetes, set `spark.kubernetes.executor.limit.cores` and `spark.kubernetes.executor.request.cores` to match node capacity.
- **Manually stop sessions after use** - Run `spark.stop()` in the last cell of your notebook to free resources immediately, rather than waiting for the session's idle timeout.
- **Limit default cores** - In standalone mode, set `spark.deploy.defaultCores` to restrict the total cores available to applications.

For additional information, please refer to the following Microsoft documentation:
- Configure Apache Spark settings
- Apache Spark in Azure Synapse Analytics Core Concepts
- Reservation of Executors as part of Dynamic Allocation in Synapse Spark Pools
- Quickstart: Deploy a Managed Apache Spark Cluster with Azure Databricks
For more detail, please also refer to the following link, as it might offer additional insight:
- How no. of cores and amount of memory of the executors can impact the performance of the Spark jobs?
Disclaimer: This response contains a reference to a third-party World Wide Web site. Microsoft is providing this information as a convenience to you. Microsoft does not control these sites and has not tested any software or information found on these sites; therefore, Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. There are inherent dangers in the use of any software found on the Internet, and Microsoft cautions you to make sure that you completely understand the risk before retrieving any software from the Internet.
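As a concrete illustration of the first step, a Synapse notebook session can be sized explicitly with the `%%configure` magic in the first cell before the session starts. This is only a sketch: the field names follow the documented Livy session properties, and the specific numbers below are assumptions that should be adjusted to your pool's node size.

```
%%configure -f
{
    "driverCores": 4,
    "driverMemory": "16g",
    "executorCores": 4,
    "executorMemory": "16g",
    "numExecutors": 1,
    "conf": {
        "spark.dynamicAllocation.enabled": "false"
    }
}
```

With these settings, a session should request 8 vCores in total (one 4-core driver plus one 4-core executor), which makes any larger allocation in the Monitor view easier to spot.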
I hope this information helps. Please do let us know if you have any further queries.
Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.
Thank you.
-
M Saad • 36 Reputation points
2025-02-18T07:52:14.97+00:00 Thank you for your answer. However,
I am using the default configuration (as seen in the image above), and I also checked the Spark UI and confirmed that only one executor was used.
The executor/driver size is "Small" (4 vCores, 16 GB memory), so the monitoring screen should show 4 × 2 = 8 cores allocated (one driver plus one executor). But that is not the case here: all of the workspace's vCores are being allocated.
-
Chandra Boorla • 8,560 Reputation points • Microsoft Vendor
2025-02-18T14:15:21.0366667+00:00 Since the Spark UI confirms that only 8 vCores (one driver plus one executor) are active, but the monitoring screen still reports 48 vCores allocated, there might be an issue with how resources are tracked or released in Synapse. Could you please try the following steps:
- **Delayed resource release** - Even after you stop your session, Azure Synapse can take time (typically ~30 minutes) to fully release the allocated vCores. Try running `spark.stop()` at the end of your notebook to free up resources faster.
- **Cluster-wide allocation behavior** - Even if your session is using fewer cores, Synapse may pre-allocate vCores based on the pool settings. Check "Monitor > Active Applications" to verify whether other sessions are consuming resources.
- **UI sync issue** - The monitoring UI may not immediately reflect actual vCore usage. Try refreshing the UI or restarting the Spark pool to update resource tracking.
If the issue still persists, could you try running another session after stopping the current one and see if the allocated vCores drop?
I hope this information helps.
-
Chandra Boorla • 8,560 Reputation points • Microsoft Vendor
2025-02-19T12:43:23.8633333+00:00 We haven't heard back from you on the last response and wanted to check whether you have found a resolution yet. If you have, please share it with the community, as it can be helpful to others. Otherwise, let us know and we will follow up with more details.
-
Rakesh Govindula • 5 Reputation points • Microsoft Vendor
2025-02-20T04:00:22.8466667+00:00 Hi @M Saad,
This might be due to the tasks that are running in the first notebook. I suggest you first test by running the two Synapse notebooks in parallel using Notebook activities in a Synapse pipeline: point both at the same Spark pool and run them inside a ForEach activity. Then repeat with the ForEach activity set to run sequentially. Observe how the Spark run time changes as you increase the vCores in the Spark pool. This way, you can find the right Spark pool configuration for your requirement, which you can then use going forward.
I hope this information helps. Please do let us know if you have any further queries.
Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.
-
M Saad • 36 Reputation points
2025-02-20T04:23:02.07+00:00 I already tried the steps that you mentioned.

**Delayed resource release** - This is not the issue, because I am only using 8 vCores; even if they were not released, there should still be 42 vCores available, i.e., 50 (workspace limit) - 8 (in use).

**Cluster-wide allocation behavior** - I already checked in the monitoring section; only one Spark application was running.

**UI sync issue** - This is not a UI sync issue, because I am not able to create a new Spark session. I get an error saying that 48 vCores are allocated out of the 50-vCore workspace limit.
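The quota arithmetic in this reply can be written out explicitly. All the numbers below (the 50-vCore workspace limit, the "Small" node size of 4 vCores, and one driver plus one executor) are taken from the thread itself:

```python
# Quota arithmetic for the scenario described in this thread.
WORKSPACE_LIMIT = 50   # workspace vCore quota
NODE_VCORES = 4        # "Small" node size: 4 vCores per node
NODES_IN_SESSION = 2   # one driver + one executor

# What the session should consume, and what should remain free.
expected_in_use = NODE_VCORES * NODES_IN_SESSION   # 4 * 2 = 8
expected_free = WORKSPACE_LIMIT - expected_in_use  # 50 - 8 = 42

# What the error message actually reports.
reported_allocated = 48
reported_free = WORKSPACE_LIMIT - reported_allocated  # only 2 left

# A new "Small" session also needs 8 vCores, so with 48 already
# allocated it cannot start -- matching the error the poster sees.
new_session_needs = NODE_VCORES * NODES_IN_SESSION
print(expected_in_use, expected_free, reported_free,
      new_session_needs > reported_free)
```

The gap between the expected 8 vCores in use and the reported 48 is the unexplained allocation the thread is trying to track down.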