Connect Grafana to Azure Monitor Prometheus metrics

The most common way to analyze and present Prometheus data is with a Grafana dashboard.

This article explains how to configure Azure-hosted Prometheus metrics as a data source for Azure Managed Grafana, self-hosted Grafana running on an Azure virtual machine, or a Grafana instance running outside of Azure.

Azure Monitor workspace query endpoint

In Azure, Prometheus data is stored in an Azure Monitor workspace. When configuring the Prometheus data source in Grafana, you use the Query endpoint for your Azure Monitor workspace. To find the query endpoint, open the Overview page for your Azure Monitor workspace in the Azure portal.
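The query endpoint serves the standard Prometheus HTTP API, which is what Grafana calls under the hood. As a sketch of how that endpoint gets used (the workspace URL below is a hypothetical placeholder, and real requests must also carry a Microsoft Entra bearer token), an instant query URL is built like this:

```python
from urllib.parse import urlencode

# Hypothetical query endpoint copied from the Azure Monitor
# workspace Overview page in the Azure portal.
query_endpoint = "https://myamw-abcd.eastus.prometheus.monitor.azure.com"

def instant_query_url(endpoint: str, promql: str) -> str:
    """Build a standard Prometheus instant-query URL against a query endpoint."""
    return f"{endpoint}/api/v1/query?{urlencode({'query': promql})}"

print(instant_query_url(query_endpoint, "up"))
```

You never construct these URLs yourself when using Grafana; you only paste the base query endpoint into the data source configuration, and Grafana appends the API paths.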

A screenshot showing the query endpoint URL for an Azure Monitor workspace.

Configure Grafana

Azure Managed Grafana

When you create an Azure Managed Grafana instance, it's automatically configured with a managed system identity. The identity has the Monitoring Data Reader role assigned to it at the subscription level. This role allows the identity to read any monitoring data in the subscription. This identity is used to authenticate Grafana to Azure Monitor. You don't need to do anything to configure the identity.

Create the Prometheus data source in Grafana

To configure Prometheus as a data source, follow these steps:

  1. Open your Azure Managed Grafana workspace in the Azure portal.
  2. Select the Endpoint link to open the Grafana workspace.
  3. Select Connections and then Data sources.
  4. Select Add data source.
  5. Search for and select Prometheus.
  6. Paste the query endpoint from your Azure Monitor workspace into the Prometheus server URL field.
  7. Under Authentication, select Azure Auth.
  8. Under Azure Authentication, select Managed Identity from the Authentication dropdown.
  9. Scroll to the bottom of the page and select Save & test.
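If you manage data sources with Grafana's file-based provisioning instead of the UI, the same settings might look like the following sketch. The workspace URL is a hypothetical placeholder, and the `azureCredentials` fields are an assumption based on Grafana's Azure authentication support; verify the exact keys against your Grafana version before relying on them.

```yaml
apiVersion: 1
datasources:
  - name: Azure Managed Prometheus
    type: prometheus
    access: proxy
    # Query endpoint from the Azure Monitor workspace Overview page (hypothetical value).
    url: https://myamw-abcd.eastus.prometheus.monitor.azure.com
    jsonData:
      httpMethod: POST
      azureCredentials:
        authType: msi   # Managed Identity, matching the Azure Auth selection in the UI
```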

Screenshot of configuration for Prometheus data source.

Frequently asked questions

This section provides answers to common questions.

I am missing all or some of my metrics. How can I troubleshoot?

See the troubleshooting guide for ingesting Prometheus metrics from the managed agent.

Why am I missing metrics that have two labels with the same name but different casing?

Azure Monitor managed service for Prometheus is a case-insensitive system. It treats strings, such as metric names, label names, and label values, as belonging to the same time series if they differ only by case. For more information, see Prometheus metrics overview.
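The effect of this case-insensitive identity can be sketched as follows: two series whose label names differ only in case collapse to the same key, so only one of them is kept (the metric and label names here are hypothetical examples).

```python
# Two series that a case-sensitive Prometheus would treat as distinct,
# because the label name differs only in case.
series = [
    {"__name__": "my_metric", "ExampleLabel": "a"},
    {"__name__": "my_metric", "examplelabel": "a"},
]

def case_insensitive_key(labels: dict) -> tuple:
    # Lowercase label names and values, mirroring case-insensitive identity.
    return tuple(sorted((k.lower(), v.lower()) for k, v in labels.items()))

# Later series with the same case-insensitive key overwrite earlier ones.
unique = {case_insensitive_key(s): s for s in series}
print(len(unique))  # the two series collapse into one
```

This is why metrics that differ from another series only by casing can appear to be missing: they are merged into a single stored series.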

I see some gaps in metric data, why is this occurring?

During node updates, you might see a 1-minute to 2-minute gap in metric data for metrics collected by our cluster-level collectors. This gap occurs because the node that the collector runs on is being updated as part of the normal update process, whether the cluster is updated manually or via autoupgrade. The gap affects cluster-wide targets such as kube-state-metrics and any custom application targets that are specified. This behavior is expected and doesn't affect any of our recommended alert rules.

Next steps