Why can't I deploy more than one model on Azure AI Foundry?

Marc Aguilar · 0 reputation points
2025-03-12T12:50:14.3866667+00:00

I am trying to deploy two models for a personal project I am developing, but when I deploy the second model, the first one is removed from the deployed models list. How can I solve this?
[Screenshot: 2025-03-12 13.39.10]

After adding the embedding model, the gpt-35 deployment is deleted:
[Screenshot: 2025-03-12 13.49.00]

Microsoft Q&A

1 answer

Sort by: Most helpful
  1. Gao Chen · 8,735 reputation points · Microsoft external staff
    2025-03-12T15:50:06.2633333+00:00

    Hello Marc Aguilar,

    Welcome to Microsoft Q&A!

    It seems like you are encountering an issue with deploying multiple models on Azure AI Foundry. Please verify the following:

    Deployment Configuration:

    • Ensure that your Azure AI Foundry project is correctly configured to support multiple model deployments. You might need to check the deployment settings and ensure that each model is assigned a unique deployment name and configuration.
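    As a sketch of what "unique deployment names" looks like in practice, here is how two deployments can coexist on one Azure OpenAI resource via the Azure CLI. The resource name, resource group, model versions, and capacity below are placeholders; substitute values valid for your subscription and region:

    ```shell
    # Placeholder resource and group names; pick model versions your region offers.
    az cognitiveservices account deployment create \
      --name my-foundry-resource \
      --resource-group my-rg \
      --deployment-name gpt-35-turbo-deploy \
      --model-name gpt-35-turbo \
      --model-version "0125" \
      --model-format OpenAI \
      --sku-name Standard \
      --sku-capacity 1

    # A second deployment with a *different* --deployment-name is created
    # alongside the first instead of replacing it.
    az cognitiveservices account deployment create \
      --name my-foundry-resource \
      --resource-group my-rg \
      --deployment-name text-embedding-deploy \
      --model-name text-embedding-ada-002 \
      --model-version "2" \
      --model-format OpenAI \
      --sku-name Standard \
      --sku-capacity 1
    ```

    If the second command reused the deployment name of the first, it would update that deployment in place, which matches the symptom you describe.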

    Resource Limitations:

    • There might be limitations on the number of models that can be deployed simultaneously based on your subscription or resource allocation. Verify if your Azure subscription supports multiple model deployments and if there are any resource constraints.

    Model Inference Service Configuration:

    • Make sure that the Azure AI model inference service is properly configured in your project. You may need to create connections to the desired resources and ensure that the models are available in the region of your Azure AI Services resource.

    Enterprise Agent Support:

    • If you are using enterprise agents, note that there might be restrictions on the number of Azure OpenAI connections associated with a project. Ensure that your project is set up to support multiple connections if needed.

    If your configuration checks out, you can resolve the issue with these steps:

    Check Deployment Settings:

    • Go to the Model catalog section in the Azure AI Foundry portal.
    • Select the model you want to deploy and configure the deployment settings, ensuring each model has a unique deployment name.

    Verify Resource Allocation:

    • Check your Azure subscription and resource allocation to ensure it supports multiple model deployments.
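    To verify this from the command line (again with placeholder names), you can list the deployments currently on the resource and inspect quota usage in its region:

    ```shell
    # List existing deployments on the resource; both models should appear
    # if the second deployment did not replace the first.
    az cognitiveservices account deployment list \
      --name my-foundry-resource \
      --resource-group my-rg \
      --output table

    # Check quota consumption for the region hosting the resource.
    az cognitiveservices usage list --location eastus --output table
    ```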

    Configure Model Inference Service:

    • Ensure that the Azure AI model inference service is properly configured in your project. Create connections to the desired resources if needed.

    For reference, here is the information I used: https://github.com/Azure/azure-sdk-for-python/issues/38921 and https://learn.microsoft.com/en-us/azure/ai-foundry/model-inference/how-to/create-model-deployments?pivots=ai-foundry-portal

    I hope the information provided was useful!

    Regards,

    Gao


    If the answer is the right solution, please click "Accept Answer" and kindly upvote it. If you have extra questions about this answer, please click "Comment".

