Session failed. Run the notebook to start a new session.

Roberto Ebratt 5 Reputation points
2024-12-19T19:11:33.6933333+00:00

I've recently created a Spark pool in Azure Synapse Analytics. However, when I try to run a notebook, I get this error:

Spark_Ambiguous_SparkSubmit_SparkSubmitProcessFailedExitCode1: Livy session has failed. Session state: Dead. Error code: Spark_Ambiguous_SparkSubmit_SparkSubmitProcessFailedExitCode1. Job failed during run time with state=[dead]. TSG:spark-submit process failed with exit code 1. Check Livy logs for more information. Source: Unknown.

In the monitoring view, these logs are shown:

	at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:315)
	at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:185)
	at com.microsoft.azure.storage.blob.CloudBlob.exists(CloudBlob.java:1994)
	at com.microsoft.azure.storage.blob.CloudBlob.exists(CloudBlob.java:1981)
	at org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.exists(StorageInterfaceImpl.java:333)
	at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:2200)
	... 17 more

stderr: 

YARN Diagnostics: 
org.apache.livy.utils.submit.exceptions.SparkSubmitProcessFailedException: spark-submit exited with non-zero status : exit code 1
stdout:
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/mnt/tmp
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/mnt/tmp
Exception in thread "main" org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: This request is not authorized to perform this operation.
	at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:2265)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getAncestor(NativeAzureFileSystem.java:3017)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:3043)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:3030)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2388)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:750)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:526)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:1026)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:223)
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1381)
	at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1837)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1027)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:193)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:216)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:92)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1118)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1127)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: com.microsoft.azure.storage.StorageException: This request is not authorized to perform this operation.
	at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:87)
	at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:315)
	at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:185)
	at com.microsoft.azure.storage.blob.CloudBlob.exists(CloudBlob.java:1994)
	at com.microsoft.azure.storage.blob.CloudBlob.exists(CloudBlob.java:1981)
	at org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.exists(StorageInterfaceImpl.java:333)
	at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:2200)
	... 17 more
stderr:
YARN Diagnostics:

1 answer

  1. Ketsha 250 Reputation points Microsoft Employee
    2024-12-19T19:33:43.5366667+00:00

    Hi Roberto -

    The error message indicates that there is a StorageException due to unauthorized access. Ensure that the storage account you are trying to access has the correct permissions. Specifically, make sure that the service principal or managed identity used by your Synapse workspace has the Storage Blob Data Contributor role assigned to it.
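    If you want to confirm whether the role is already in place before changing anything, here is a minimal sketch that lists the role assignments on the storage account scope, using the azure-identity and azure-mgmt-authorization packages. The subscription, resource group, storage account, and managed identity object ID below are placeholders you would replace with your own values:

        # Sketch: check whether the workspace managed identity already holds
        # "Storage Blob Data Contributor" on the storage account.
        # All IDs below are placeholders.
        from azure.identity import DefaultAzureCredential
        from azure.mgmt.authorization import AuthorizationManagementClient

        subscription_id = "<subscription-id>"
        resource_group = "<resource-group>"
        storage_account = "<storage-account>"
        workspace_msi_object_id = "<workspace-managed-identity-object-id>"

        # Built-in role definition ID for "Storage Blob Data Contributor"
        BLOB_CONTRIBUTOR_ROLE_ID = "ba92f5b4-2d11-453d-a403-e96b0029c9fe"

        scope = (
            f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Storage/storageAccounts/{storage_account}"
        )

        client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
        has_role = any(
            a.principal_id == workspace_msi_object_id
            and a.role_definition_id.endswith(BLOB_CONTRIBUTOR_ROLE_ID)
            for a in client.role_assignments.list_for_scope(scope)
        )
        print("Workspace MSI has Storage Blob Data Contributor:", has_role)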

    Here are the steps to assign the "Storage Blob Data Contributor" role on the storage account to the Synapse workspace's managed identity:

    Navigate to the Azure Portal: Open the Azure portal and go to the storage account that you want to grant access to.

    Access Control (IAM): In the storage account's menu, select "Access control (IAM)."

    Add Role Assignment: Click on "+ Add" and then select "Add role assignment."

    Select Role: In the "Role" dropdown, select "Storage Blob Data Contributor."

    Assign Access to Managed Identity:

    • In the "Assign access to" dropdown, select "Managed identity."
    • Click on "Select members" and choose the managed identity associated with your Azure Synapse workspace. The managed identity name is typically the same as your Synapse workspace name.

    Review and Assign: Click on "Review + assign" to complete the role assignment.

    This will grant the managed identity the necessary permissions to access the storage account and perform operations such as reading, writing, and deleting blobs.
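    For completeness, the same assignment can be scripted. The sketch below assumes the current azure-mgmt-authorization package (which exposes RoleAssignmentCreateParameters in its models namespace) and reuses the placeholder IDs from the earlier sketch; role assignment names must be fresh GUIDs:

        # Sketch: assign "Storage Blob Data Contributor" on the storage account
        # to the workspace managed identity (placeholder IDs as in the sketch above).
        import uuid

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.authorization import AuthorizationManagementClient
        from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

        subscription_id = "<subscription-id>"
        resource_group = "<resource-group>"
        storage_account = "<storage-account>"
        workspace_msi_object_id = "<workspace-managed-identity-object-id>"

        scope = (
            f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.Storage/storageAccounts/{storage_account}"
        )
        role_definition_id = (
            f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
            "/roleDefinitions/ba92f5b4-2d11-453d-a403-e96b0029c9fe"  # Storage Blob Data Contributor
        )

        client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
        client.role_assignments.create(
            scope,
            str(uuid.uuid4()),  # role assignment names must be new GUIDs
            RoleAssignmentCreateParameters(
                role_definition_id=role_definition_id,
                principal_id=workspace_msi_object_id,
                principal_type="ServicePrincipal",  # principal type used for managed identities
            ),
        )

    Note that role assignments can take a few minutes to propagate. Once they have, stop the current session and run the notebook again so a new Livy session is created with the updated permissions.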

    Reference: https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-handle-livy-error

