Configure MLflow for Azure Machine Learning

This article explains how to configure MLflow to connect to an Azure Machine Learning workspace for tracking, registry management, and deployment.

Azure Machine Learning workspaces are MLflow-compatible, which means they can act as MLflow servers without any extra configuration. Each workspace has an MLflow tracking URI that MLflow can use to connect to the workspace.

However, if you work outside Azure Machine Learning, you need to configure MLflow to point to the workspace. Affected environments include your local machine, Azure Synapse Analytics, and Azure Databricks.

Important

When you use Azure compute infrastructure, you don't have to configure the tracking URI. It's automatically configured for you. Environments with automatic configuration include Azure Machine Learning notebooks, Jupyter notebooks that are hosted on Azure Machine Learning compute instances, and jobs that run on Azure Machine Learning compute clusters.

Prerequisites

  • The MLflow SDK (the mlflow package) and the Azure Machine Learning plugin for MLflow (the azureml-mlflow package). You can use the following command to install both:

    pip install mlflow azureml-mlflow
    

    Tip

    Instead of mlflow, consider using mlflow-skinny. This package is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. It's recommended for users who primarily need MLflow tracking and logging capabilities but don't want to import the full suite of features, including deployments.

  • An Azure Machine Learning workspace. To create a workspace, see Create resources you need to get started.

  • Access permissions for performing MLflow operations in your workspace. For a list of operations and required permissions, see MLflow operations.

Configure the MLflow tracking URI

To track experiments that run outside Azure Machine Learning (remote tracking), configure MLflow to point to the tracking URI of your Azure Machine Learning workspace. Each workspace has its own tracking URI, which starts with the protocol azureml://.
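
For orientation, the azureml:// tracking URI typically follows a predictable pattern built from your subscription, resource group, and workspace names. The following sketch illustrates that pattern; the region hostname and path segments are assumptions based on commonly seen workspace URIs, and the helper function and all values are hypothetical. Always prefer the URI that the az ml workspace show command returns.

```python
# Sketch: builds the typical azureml:// tracking URI for a workspace.
# The exact format can vary; prefer the value returned by
# `az ml workspace show`. All values below are placeholders.

def build_tracking_uri(region: str, subscription_id: str,
                       resource_group: str, workspace: str) -> str:
    return (
        f"azureml://{region}.api.azureml.ms/mlflow/v1.0"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.MachineLearningServices"
        f"/workspaces/{workspace}"
    )

uri = build_tracking_uri("eastus", "00000000-0000-0000-0000-000000000000",
                         "my-resource-group", "my-workspace")
print(uri)
```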

  1. Get the tracking URI for your workspace:

    APPLIES TO: Azure CLI ml extension v2 (current)

    1. Sign in and configure your workspace:

      az account set --subscription <subscription-ID>
      az configure --defaults workspace=<workspace-name> group=<resource-group-name> location=<location> 
      
    2. Get the tracking URI by using the az ml workspace command:

      az ml workspace show --query mlflow_tracking_uri
      
  2. Configure the tracking URI:

    Use the set_tracking_uri() method to set the MLflow tracking URI to the value that you retrieved in the previous step:

    import mlflow
    
    # Replace with the azureml:// URI returned by `az ml workspace show`.
    mlflow_tracking_uri = "<MLflow-tracking-URI>"
    mlflow.set_tracking_uri(mlflow_tracking_uri)
    

    Tip

    Some scenarios involve working in a shared environment like an Azure Databricks cluster or an Azure Synapse Analytics cluster. In these cases, it's useful to set the MLFLOW_TRACKING_URI environment variable at the cluster level rather than for each session. Setting the variable at the cluster level automatically configures the MLflow tracking URI to point to Azure Machine Learning for all sessions in the cluster.
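
For a single session, you can get the same effect in code by setting the environment variable before MLflow is used. This is a minimal sketch; the URI value is a hypothetical placeholder.

```python
import os

# MLflow reads MLFLOW_TRACKING_URI when no tracking URI is set
# explicitly. The value below is a placeholder; use the azureml://
# URI of your own workspace.
os.environ["MLFLOW_TRACKING_URI"] = (
    "azureml://<region>.api.azureml.ms/mlflow/v1.0/<workspace-path>"
)
```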

Configure authentication

After you set up tracking, you also need to configure the authentication method for the associated workspace.

By default, the Azure Machine Learning plugin for MLflow performs interactive authentication by opening the default browser to prompt for credentials. The plugin also supports several other authentication mechanisms through the azure-identity package, which is installed as a dependency of the azureml-mlflow plugin.

The authentication process tries the following methods, one after another, until one succeeds:

  1. Environment: Account information that's specified via environment variables is read and used for authentication.
  2. Managed identity: If the application is deployed to an Azure host with a managed identity enabled, the managed identity is used for authentication.
  3. Azure CLI: If you use the Azure CLI az login command to sign in, your credentials are used for authentication.
  4. Azure PowerShell: If you use the Azure PowerShell Connect-AzAccount command to sign in, your credentials are used for authentication.
  5. Interactive browser: The user is interactively authenticated via the default browser.

For interactive jobs where there's a user connected to the session, you can rely on interactive authentication. No further action is required.

Warning

Interactive browser authentication blocks code execution when it prompts for credentials. This approach isn't suitable for authentication in unattended environments like training jobs. We recommend that you configure a different authentication mode in those environments.

For scenarios that require unattended execution, you need to configure a service principal to communicate with Azure Machine Learning. For information about creating a service principal, see Configure a service principal.

Use the tenant ID, client ID, and client secret of your service principal in the following code:

import os

os.environ["AZURE_TENANT_ID"] = "<Azure-tenant-ID>"
os.environ["AZURE_CLIENT_ID"] = "<Azure-client-ID>"
os.environ["AZURE_CLIENT_SECRET"] = "<Azure-client-secret>"

Tip

When you work in shared environments, we recommend that you configure these environment variables at the compute level. As a best practice, manage them as secrets in an instance of Azure Key Vault.

For instance, in an Azure Databricks cluster configuration, you can use secrets in environment variables in the following way: AZURE_CLIENT_SECRET={{secrets/<scope-name>/<secret-name>}}. For more information about implementing this approach in Azure Databricks, see Reference a secret in an environment variable, or refer to documentation for your platform.

If you'd rather use a certificate than a secret, you can configure the following environment variables:

  • Set AZURE_CLIENT_CERTIFICATE_PATH to the path of a file that contains the certificate and private key pair in Privacy Enhanced Mail (PEM) or Public-Key Cryptography Standards 12 (PKCS #12) format.
  • Set AZURE_CLIENT_CERTIFICATE_PASSWORD to the password of the certificate file, if it uses a password.
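
Sketched in code, certificate-based configuration looks like the following. The path and password values are hypothetical placeholders; certificate authentication also still needs the tenant ID and client ID environment variables shown earlier.

```python
import os

# Path to a PEM or PKCS #12 file containing the certificate and
# private key pair. Both values below are placeholders.
os.environ["AZURE_CLIENT_CERTIFICATE_PATH"] = "/path/to/certificate.pem"
os.environ["AZURE_CLIENT_CERTIFICATE_PASSWORD"] = "<certificate-password>"
```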

Configure authorization and permission levels

Some default roles like AzureML Data Scientist and Contributor are already configured to perform MLflow operations in an Azure Machine Learning workspace. If you use a custom role, you need the following permissions:

  • To use MLflow tracking:

    • Microsoft.MachineLearningServices/workspaces/experiments/*
    • Microsoft.MachineLearningServices/workspaces/jobs/*
  • To use the MLflow model registry:

    • Microsoft.MachineLearningServices/workspaces/models/*/*

To see how to grant access to your workspace to a service principal that you create or to your user account, see Grant access.
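
For example, a custom role that includes these permissions might be defined as follows. This is a sketch: the role name, description, and assignable scope are hypothetical, and the subscription ID is a placeholder.

```json
{
  "Name": "MLflow User (custom)",
  "Description": "Allows MLflow tracking and model registry operations.",
  "Actions": [
    "Microsoft.MachineLearningServices/workspaces/experiments/*",
    "Microsoft.MachineLearningServices/workspaces/jobs/*",
    "Microsoft.MachineLearningServices/workspaces/models/*/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-ID>"
  ]
}
```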

Troubleshoot authentication issues

MLflow tries to authenticate to Azure Machine Learning on the first operation that interacts with the service, like mlflow.set_experiment() or mlflow.start_run(). If you experience issues or unexpected authentication prompts during the process, you can increase the logging level to get more details about the error:

import logging

logging.getLogger("azure").setLevel(logging.DEBUG)
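
If the debug messages don't appear, your process might not have a logging handler attached. The following minimal sketch, using only the standard library, attaches a stream handler so that DEBUG records from the azure logger are actually printed.

```python
import logging

# basicConfig attaches a stream handler to the root logger so that
# DEBUG records from the "azure" logger are printed to the console.
logging.basicConfig(level=logging.DEBUG,
                    format="%(name)s %(levelname)s %(message)s")
logging.getLogger("azure").setLevel(logging.DEBUG)
```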

Set experiment name (optional)

All MLflow runs are logged to the active experiment. By default, runs are logged to an experiment named Default that's automatically created for you. You can configure the experiment that's used for tracking.

Tip

When you use the Azure Machine Learning CLI v2 to submit jobs, you can set the experiment name by using the experiment_name property in the YAML definition of the job. You don't have to configure it in your training script. For more information, see YAML: display name, experiment name, description, and tags.

Use the mlflow.set_experiment() function to configure your experiment:

experiment_name = "experiment_with_mlflow"
mlflow.set_experiment(experiment_name)

Configure support for a nonpublic Azure cloud

The Azure Machine Learning plugin for MLflow is configured by default to work with the global Azure cloud. However, you can target a nonpublic cloud, such as Azure China or Azure Government, by setting the AZUREML_CURRENT_CLOUD environment variable:

import os

os.environ["AZUREML_CURRENT_CLOUD"] = "AzureChinaCloud"

You can identify the cloud you're using with the following Azure CLI command:

az cloud list

The current cloud has the value IsActive set to True.
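
If you want to read the active cloud programmatically, you can parse the JSON output of the same command (az cloud list -o json). The following sketch uses an abbreviated, hypothetical sample of that output; real output contains more fields and more clouds.

```python
import json

# Abbreviated sample of `az cloud list -o json` output. In JSON
# output the field is camelCase ("isActive"); the table view shown
# by the CLI displays it as "IsActive".
sample_output = """
[
  {"name": "AzureCloud", "isActive": true},
  {"name": "AzureChinaCloud", "isActive": false}
]
"""

clouds = json.loads(sample_output)
active = next(c["name"] for c in clouds if c["isActive"])
print(active)  # AzureCloud in this sample
```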

Now that your environment is connected to your workspace in Azure Machine Learning, you can start to work with it.