Azure Synapse Apache Spark pool in Current state: DeleteError. How do I get it into a Succeeded state?

Kajetan Bocek 40 Reputation points
2025-03-06T12:56:23.7466667+00:00

Hi,

I tried to delete an Apache Spark pool. Deletion failed.

Jobs are still running, but I cannot make any changes to the Spark pool's configuration, nor can I delete it.

I read that "unlinking all notebooks from the Spark pool" may help, but given the number of notebooks I would much prefer to find another way, unless there is a way to automate the relinking of the notebooks...

Many thanks in advance.

Azure Synapse Analytics

Accepted answer
  1. J N S S Kasyap 860 Reputation points Microsoft External Staff
    2025-03-06T18:32:44.2933333+00:00

    Hi @Kajetan Bocek
    Thanks for posting your query!

    The DeleteError state on an Azure Synapse Apache Spark pool typically means the pool got stuck partway through deletion, most often because of active or pending jobs: if Spark applications or sessions are still running against the pool, Azure blocks the deletion to avoid disrupting ongoing workloads.

    Orphaned resources tied to the pool, such as lingering Livy sessions or temporary files, can also prevent a clean removal. If the metadata service or job queue is still processing commands related to the pool, the resulting conflict can leave the system unable to recognize the pool as fully deletable.

    A third cause is unresolved dependencies: if notebooks, pipelines, or jobs still reference the Spark pool, Azure may refuse the deletion to avoid impacting those workloads. Unlinking the notebooks is sometimes recommended, but manually detaching a large number of them is time-consuming.

    Possible Solutions:
    Navigate to Synapse Studio > Monitor > Apache Spark applications and manually stop any active jobs. If manual stopping doesn't work, you can use the Azure REST API or PowerShell to forcefully terminate them; a sketch follows.
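
    A minimal PowerShell sketch, assuming the Az.Synapse module is installed and you are signed in via Connect-AzAccount. The workspace and pool names are placeholders, and it assumes the returned job/session objects expose the Livy id as .Id (inspect the objects in your environment if they differ); you may be prompted to confirm each cancellation.

    # Cancel every Spark batch job and interactive session still attached to the pool.
    $ws   = "<workspace-name>"
    $pool = "<spark-pool-name>"
    Get-AzSynapseSparkJob -WorkspaceName $ws -SparkPoolName $pool |
        ForEach-Object { Stop-AzSynapseSparkJob -WorkspaceName $ws -SparkPoolName $pool -LivyId $_.Id }
    Get-AzSynapseSparkSession -WorkspaceName $ws -SparkPoolName $pool |
        ForEach-Object { Stop-AzSynapseSparkSession -WorkspaceName $ws -SparkPoolName $pool -LivyId $_.Id }
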
    If the pool remains undeletable, use the following PowerShell command to force deletion:

    # Requires the Az.Synapse module; -Force skips the confirmation prompt.
    Remove-AzSynapseSparkPool -WorkspaceName "<workspace-name>" -Name "<spark-pool-name>" -Force
    
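    If the cmdlet fails as well, retrying the delete directly against the Azure Resource Manager endpoint sometimes clears a stuck provisioning state. A hedged sketch using Invoke-AzRestMethod; the subscription, resource group, workspace, and pool names are placeholders:

    # Issue the ARM DELETE for the Big Data (Spark) pool resource directly.
    Invoke-AzRestMethod -Method DELETE -Path (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Synapse/workspaces/<workspace-name>" +
        "/bigDataPools/<spark-pool-name>?api-version=2021-06-01")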

    Ensure that no notebooks, pipelines, or other resources are referencing the Spark pool. You may need to detach these dependencies before attempting deletion.

    If notebooks are still referencing the Spark pool, automating the unlinking with the REST API or PowerShell can save time; one possible approach is sketched below.
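
    A hedged sketch, again assuming Az.Synapse: it exports each affected notebook's definition and re-imports it attached to a replacement pool. The property path used to detect the current pool reference (Properties.BigDataPool.ReferenceName) and the exported file name (<name>.ipynb) are assumptions — verify both against the objects and files you actually get back before running this at scale.

    # Re-point every notebook that references the old pool at a replacement pool.
    $ws      = "<workspace-name>"
    $oldPool = "<spark-pool-name>"
    $newPool = "<replacement-pool-name>"
    $tmp     = New-Item -ItemType Directory -Force -Path (Join-Path $env:TEMP "nb-export")
    Get-AzSynapseNotebook -WorkspaceName $ws |
        Where-Object { $_.Properties.BigDataPool.ReferenceName -eq $oldPool } |  # assumed property path
        ForEach-Object {
            Export-AzSynapseNotebook -WorkspaceName $ws -Name $_.Name -OutputFolder $tmp
            Set-AzSynapseNotebook -WorkspaceName $ws -SparkPoolName $newPool `
                -DefinitionFile (Join-Path $tmp "$($_.Name).ipynb")  # assumed export file name
        }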

    Please refer to the threads below:
    https://learn.microsoft.com/en-us/answers/questions/1643121/how-to-bring-spark-pool-in-succeed-state-from-dele
    https://learn.microsoft.com/en-us/answers/questions/1848656/azure-apache-spark-pool-stuck-in-resourceinunupdat
    https://learn.microsoft.com/en-gb/answers/questions/2039099/i-cant-delete-a-synapse-spark-pool
    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, please click Accept Answer and Yes for "Was this answer helpful".

