Unable to chat with the assistant playground in Azure OpenAI Service and the agent playground in an AI Foundry project with an Azure for Students subscription

Trg 0 Reputation points
2025-02-26T04:29:42.9633333+00:00

I am using Azure OpenAI Service and Azure AI Foundry with an Azure for Students subscription. I'm able to deploy and interact with a model via the chat playground, but when I use the assistant playground, it always hits the rate limit even when the prompt is very short (like "hi" with a blank instruction). I know there is a quota allocation for models with Azure for Students, but that threshold is unlikely to have been met, and I have waited and tried again many times. The same problem also occurs when I use the chat playground with "Add your data" in Azure OpenAI Service, or the agent playground in Azure AI Foundry.

Is there any restriction on using these features of Azure OpenAI or Azure AI Foundry with an Azure for Students subscription?


1 answer

  1. SriLakshmi C 2,845 Reputation points Microsoft Vendor
    2025-02-26T13:10:22.1933333+00:00

    Hello Trg,

    Greetings and Welcome to Microsoft Q&A!

    I understand that you are using Azure OpenAI Service and Azure AI Foundry with an Azure for Students subscription. You can deploy and interact with models in the chat playground without any issues, but when you try to use the assistant playground or the agent playground, you keep hitting a rate limit.

    Although the chat playground works for you, the assistant playground is still in preview, which can occasionally lead to rate limit errors.

    For free trials or Student Subscriptions, Azure offers only 1,000 TPM (Tokens Per Minute) across all models, so you may have reached this limit.

    Please refer to Quotas and limits.

    Since you are using a Student subscription, the quotas are very limited. To manage your usage effectively:

    Monitor your Requests per Minute (RPM) and Tokens per Minute (TPM) in the Azure portal under Usage + quotas for your Azure OpenAI resource.

    Adjust your deployment settings by lowering the Tokens per Minute (TPM) allocation in Azure AI Foundry or Azure OpenAI Service so that you stay within your quota.
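
    If you prefer to adjust the allocation from code rather than the portal, below is a minimal sketch using the azure-mgmt-cognitiveservices Python package. The subscription ID, resource group, account name, deployment name, and model name/version are placeholders (assumptions you would replace with your own values), and for Standard deployments the SKU capacity is expressed in units of 1,000 TPM.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
    from azure.mgmt.cognitiveservices.models import (
        Deployment, DeploymentModel, DeploymentProperties, Sku,
    )

    # Placeholder values - replace with your own subscription and resource names.
    subscription_id = "<your-subscription-id>"
    client = CognitiveServicesManagementClient(DefaultAzureCredential(), subscription_id)

    # Re-apply the deployment with a lower capacity; 1 capacity unit = 1,000 TPM (Standard).
    poller = client.deployments.begin_create_or_update(
        resource_group_name="my-resource-group",   # placeholder
        account_name="my-openai-resource",         # placeholder
        deployment_name="gpt-4o-mini",             # placeholder
        deployment=Deployment(
            properties=DeploymentProperties(
                model=DeploymentModel(format="OpenAI", name="gpt-4o-mini", version="2024-07-18"),
            ),
            sku=Sku(name="Standard", capacity=1),  # 1,000 TPM
        ),
    )
    poller.result()  # wait for the update to complete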

    If you encounter rate limit errors, implement retry logic with exponential backoff instead of sending multiple requests at once.
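
    Below is a minimal sketch of such retry logic using the openai Python SDK (v1.x) against an Azure OpenAI deployment. The endpoint, key, API version, and deployment name ("gpt-4o-mini") are placeholders you would replace with your own values.

    import os
    import random
    import time

    from openai import AzureOpenAI, RateLimitError

    # Placeholder endpoint/key/deployment - replace with your own values.
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",
    )

    def chat_with_backoff(messages, deployment="gpt-4o-mini", max_retries=5):
        """Call the chat completions API, backing off exponentially on rate limit errors."""
        delay = 2.0  # initial wait in seconds
        for attempt in range(max_retries):
            try:
                response = client.chat.completions.create(model=deployment, messages=messages)
                return response.choices[0].message.content
            except RateLimitError:
                if attempt == max_retries - 1:
                    raise  # give up after the final attempt
                # Wait, then double the delay; the jitter keeps retries from aligning.
                time.sleep(delay + random.uniform(0, 1))
                delay *= 2

    print(chat_with_backoff([{"role": "user", "content": "hi"}]))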

    If the current limits are insufficient for your needs, consider upgrading to a Pay-As-You-Go subscription for higher quotas and better access to features.

    Also, please refer to Azure subscription and service limits, quotas, and constraints.

    I hope this helps. Do let me know if you have any further queries.


    If this answers your query, please click Accept Answer and select Yes for "Was this answer helpful".

    Thank you!

