Hello Andrea ESPOSITO,
Greetings and welcome to Microsoft Q&A! Thank you for posting the question.
I understand that you are encountering an issue with Azure OpenAI metrics. I attempted to reproduce the issue but did not run into any problems; the metrics appear to be displayed correctly. Here are the screenshots:
This is for the Azure AI Foundry Chat Playground:
This is for the Azure AI Foundry Assistant Playground:
Has there been a recent change in how Azure tracks metrics for Assistant models?
As of now, there has been no official announcement regarding changes in how Azure tracks metrics for Assistant models. However, discrepancies in metric reporting can occur between different tools, such as the Azure AI Foundry Chat Playground and the Assistant Playground. These differences may result from variations in how usage data is logged and reported by the platforms.
Is there a configuration or workaround to continue monitoring inference tokens?
Here are a few workarounds:
- Ensure diagnostic settings are enabled for your Azure OpenAI resource and include the relevant metrics, such as "Processed Inference Tokens", in the collected data.
- Leverage Log Analytics integration by connecting your Azure OpenAI resource to a Log Analytics workspace. This allows you to capture detailed logs and use custom Kusto (KQL) queries to track token usage.
- Implement API-level monitoring by capturing request and response payloads when you call the API and calculating token usage locally with OpenAI's token estimation methods; see the sketch after this list.
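As an illustration of the API-level approach, here is a minimal Python sketch. It assumes the openai (v1+) and tiktoken packages are installed, and the endpoint, key, and deployment name shown are placeholders you would replace with your own:

```python
import os
import tiktoken
from openai import AzureOpenAI

# Placeholders: set AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY in your environment.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

prompt = "Summarize the benefits of monitoring token usage."
response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": prompt}],
)

# Preferred: the service reports exact usage on each response.
usage = response.usage
print(f"prompt={usage.prompt_tokens} completion={usage.completion_tokens} total={usage.total_tokens}")

# Fallback: estimate locally with tiktoken when usage is not returned
# (for example, in some streaming scenarios). Counts are approximate.
enc = tiktoken.get_encoding("cl100k_base")
print(f"estimated prompt tokens: {len(enc.encode(prompt))}")
```

The usage object on the response is the authoritative count; the tiktoken estimate is only a fallback for cases where usage is not returned.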
Could this be a known bug, or does it require configuration changes on my end?
The issue might be a configuration error rather than a known bug. Ensure diagnostic settings are set up correctly to track "Processed Inference Tokens" and are routed to Log Analytics or another destination, then query the collected data as sketched below.
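Once the data is flowing into a Log Analytics workspace, a query along these lines can surface hourly token totals. This is a sketch assuming the azure-monitor-query and azure-identity packages, that the AllMetrics diagnostic category lands in the AzureMetrics table, and that "ProcessedInferenceTokens" is the internal metric name; verify the exact name in the Azure OpenAI monitoring data reference linked below:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: replace with your Log Analytics workspace ID (GUID).
WORKSPACE_ID = "<your-workspace-id>"

# KQL: hourly sum of processed inference tokens over the last day.
# Assumes the AllMetrics diagnostic category lands in the AzureMetrics table.
QUERY = """
AzureMetrics
| where MetricName == "ProcessedInferenceTokens"
| summarize TotalTokens = sum(Total) by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in result.tables:
    for row in table.rows:
        print(row)
```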
Kindly refer to these docs: Monitor Azure OpenAI and Azure OpenAI monitoring data reference.
I hope this helps. If you have any further questions, do let us know.
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". Thank you!