
Models supported by Azure AI Agent Service

Agents are powered by a diverse set of models with different capabilities and price points. Model availability varies by region and cloud, and certain tools and capabilities require the latest models. The models below are available through the supported SDKs. The following table covers pay-as-you-go deployments; for Provisioned Throughput Unit (PTU) availability, see provisioned throughput in the Azure OpenAI documentation. You can use global standard models if they're supported in the regions listed below.

Azure OpenAI models

Azure AI Agent Service supports the same models as the chat completions API in Azure OpenAI, in the following regions.

Supported regions: eastus, francecentral, japaneast, uksouth, westus.

Model versions covered: gpt-4o (2024-05-13, 2024-08-06); gpt-4o-mini (2024-07-18); gpt-4 (0613, 1106-Preview, 0125-Preview, vision-preview, turbo-2024-04-09); gpt-4-32k (0613); gpt-35-turbo (0613, 1106, 0125); gpt-35-turbo-16k (0613). Availability of each model version differs by region.

More models

The Azure AI Agent Service also supports the following models from the Azure AI Foundry model catalog.

  • Llama 3.1-70B-instruct
  • Mistral-large-2407
  • Cohere command R+

To use these models, create a deployment in the Azure AI Foundry portal, and then reference the deployment in your agent.

  1. Go to the Azure AI Foundry portal, select Model catalog in the left navigation menu, and scroll down to Meta-Llama-3-70B-Instruct. You can also find and deploy one of the other models listed previously.

  2. Select Deploy.

  3. In the Deployment options screen that appears, select Serverless API with Azure AI Content Safety.


  4. Select your project and then select Subscribe and deploy.


  5. Add the serverless connection to your hub/project. The deployment name you choose is the one that you reference in your code.

  6. When calling the agent creation API, set the model parameter to your deployment name. For example:

    agent = project_client.agents.create_agent(
        model="llama-3",
        name="my-agent",
        instructions="You are a helpful agent",
    )
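Whichever SDK you call it from, the agent-creation request reduces to the same three fields shown in the snippet above. The helper below is a hypothetical illustration of how the deployment name flows into the request body; it is not part of the Azure SDK:

```python
import json

def agent_payload(deployment_name: str, name: str, instructions: str) -> str:
    """Build the JSON body for an agent-creation request.

    `deployment_name` is the serverless deployment name chosen in step 5,
    not the model's catalog name (for example "llama-3", not
    "Llama 3.1-70B-instruct").
    """
    return json.dumps({
        "model": deployment_name,
        "name": name,
        "instructions": instructions,
    })

print(agent_payload("llama-3", "my-agent", "You are a helpful agent"))
```

If the creation call fails with a model-not-found error, check that you passed the deployment name rather than the catalog name.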

Next steps

Create a new Agent project