Unable to Access Databricks Serverless Compute from CLI and VS Code

Isaac Huang 5 Reputation points
2025-01-21T15:57:00.0333333+00:00

The following conditions exist while using Azure Databricks on the Premium plan:

  • Unity Catalog is enabled.
  • The compliance security profile is not enabled.
  • The workspace is located in a region supported for serverless notebook compute (West US 2).
  • The workspace satisfies the requirements for serverless compute.

All suggested methods from the documentation have been attempted, but serverless compute is still not visible among the available compute options in the VS Code Databricks extension or the Databricks CLI. The attempted configurations include:

  • Setting the local environment variable DATABRICKS_SERVERLESS_COMPUTE_ID
  • Modifying the Databricks config file
  • Initializing the Spark session from Python
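The two local settings above can be inspected with a short script before retrying the connection. This is a minimal sketch using only the standard library; the key name `serverless_compute_id` and the value `auto` follow the Databricks Connect documentation, and the `DEFAULT` profile name is an assumption (adjust to your own profile):

```python
import configparser
import os
from pathlib import Path

def check_serverless_config(cfg_path: Path, profile: str = "DEFAULT") -> dict:
    """Report which of the two local serverless settings are present."""
    findings = {
        # Databricks Connect reads this environment variable first.
        "env_var": os.environ.get("DATABRICKS_SERVERLESS_COMPUTE_ID"),
        # The equivalent key inside ~/.databrickscfg.
        "config_entry": None,
    }
    if cfg_path.exists():
        cfg = configparser.ConfigParser()
        cfg.read(cfg_path)
        if profile in cfg:
            findings["config_entry"] = cfg[profile].get("serverless_compute_id")
    return findings

# Example: check_serverless_config(Path.home() / ".databrickscfg")
```

If both values come back as None, neither of the first two configuration routes is actually in effect for the process you are running.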
Azure Databricks

1 answer

  1. Sina Salam 16,526 Reputation points
    2025-01-22T13:50:31.9633333+00:00

    Hello Isaac Huang,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that you are unable to access Databricks Serverless Compute from the CLI and VS Code despite meeting all the requirements and following the documentation.

    To troubleshoot this, kindly follow the steps below:

    1. Ensure that serverless compute is enabled for your workspace in the account console under the Feature enablement tab.
    2. Verify that you have the necessary permissions to access serverless compute. Sometimes, specific roles or permissions are required.
    3. Ensure you are using the latest versions of the Databricks CLI and the VS Code extension. Note that pip installs the legacy CLI; newer features such as serverless compute generally require the new Databricks CLI (version 0.205 or above), which is distributed as a standalone binary rather than via pip:
         # Upgrade the legacy pip-installed CLI
         pip install --upgrade databricks-cli
         # Verify the CLI configuration
         databricks configure --token
      
    4. Double-check that the local environment variable DATABRICKS_SERVERLESS_COMPUTE_ID is set correctly (the Databricks Connect documentation uses the value auto).
    5. Ensure that the modifications in the Databricks config file are correctly applied and saved.
    6. Verify that there are no cluster policies in place that might be restricting the visibility of serverless compute clusters.
    7. Verify that your network allows outbound traffic to Databricks and that firewall settings are not blocking traffic.
    8. Ensure you are using the correct syntax for initializing the Spark session from Python (the original snippet aliased the import but then referenced the un-aliased name):
              from databricks.connect import DatabricksSession
              spark = DatabricksSession.builder.serverless(True).getOrCreate()
      
    9. Refer to the official Databricks documentation and community forums for any similar issues and potential solutions.
    10. If none of the above steps resolve the issue, contact Databricks support for further assistance.
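Parts of the checklist above can be automated. The sketch below covers steps 3 and 8: a version gate for the CLI (the 0.205 threshold, where the new CLI line begins, is my assumption based on the Databricks CLI documentation; the legacy pip package tops out at 0.18.x), plus the corrected session initialization guarded so the script degrades gracefully when databricks-connect is not installed or not yet configured:

```python
import re

def is_new_cli(version_output: str, minimum: tuple = (0, 205, 0)) -> bool:
    """Parse the output of `databricks -v`, e.g. 'Databricks CLI v0.230.0'."""
    m = re.search(r"v?(\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        raise ValueError(f"unrecognized version string: {version_output!r}")
    return tuple(int(x) for x in m.groups()) >= minimum

# Step 8, guarded so the snippet still runs on machines where
# databricks-connect is absent or misconfigured:
try:
    from databricks.connect import DatabricksSession
    spark = DatabricksSession.builder.serverless(True).getOrCreate()
except Exception:  # ImportError locally; configuration errors surface here too
    spark = None
```

Running `is_new_cli` on your local `databricks -v` output quickly tells you whether you are still on the legacy CLI, which would explain serverless compute not appearing as an option.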

    I hope this is helpful! Do not hesitate to let me know if you have any other questions.


    Please don't forget to close the thread by upvoting and accepting this as the answer if it is helpful.

