Use the Azure AI model inference endpoint to consume models
Azure AI model inference in Azure AI services allows customers to consume the most powerful models from flagship model providers using a single endpoint and credentials. This means that you can switch between models and consume them from your application without changing a single line of code.
This article explains how to use the inference endpoint to invoke the models deployed in your resource.
Endpoints
Azure AI services expose multiple endpoints depending on the type of work you're looking for:
Azure AI model inference endpoint
Azure OpenAI endpoint
The Azure AI inference endpoint (usually with the form https://<resource-name>.services.ai.azure.com/models) allows customers to use a single endpoint with the same authentication and schema to generate inference for the deployed models in the resource. All the models support this capability. This endpoint follows the Azure AI model inference API.
Azure OpenAI models deployed to AI services also support the Azure OpenAI API (usually with the form https://<resource-name>.openai.azure.com). This endpoint exposes the full capabilities of OpenAI models and supports more features like assistants, threads, files, and batch inference.
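To make the distinction concrete, here's a minimal sketch that creates one client per endpoint in Python; using the openai package for the Azure OpenAI endpoint and the specific api-version value are assumptions for illustration:
import os
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential
from openai import AzureOpenAI

# Azure AI model inference endpoint: one endpoint, schema, and credential
# for all models deployed in the resource
inference_client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
)

# Azure OpenAI endpoint: exposes the full capabilities of OpenAI models
openai_client = AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com",
    api_key=os.environ["AZUREAI_ENDPOINT_KEY"],
    api_version="2024-06-01",
)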
Using the routing capability in the Azure AI model inference endpoint
The inference endpoint routes requests to a given deployment by matching the value of the model parameter inside the request to the name of the deployment. This means that deployments work as aliases of a given model under certain configurations. This flexibility allows you to deploy a given model multiple times in the service, but under different configurations if needed.
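As an illustration, here's a minimal sketch that selects between two deployments of the same model purely through the model field; both deployment names, and the idea that a second, differently configured deployment exists, are hypothetical:
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
)

# Same endpoint and credentials for both calls; only the deployment
# name carried in the model field changes.
for deployment in ["mistral-large", "mistral-large-creative"]:
    response = client.complete(
        messages=[UserMessage(content="Say hello")],
        model=deployment,  # routes the request to the deployment with this name
    )
    print(deployment, "->", response.choices[0].message.content)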
For example, if you create a deployment named Mistral-large, you can invoke that deployment as follows:
Install the package azure-ai-inference using your package manager, like pip:
pip install "azure-ai-inference>=1.0.0b5"
Warning
An Azure AI Services resource requires version azure-ai-inference>=1.0.0b5 of the Python package.
Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
import os
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential
# Create a client that talks to the Azure AI model inference endpoint
client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
)
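If your resource is configured for Microsoft Entra ID authentication, you can pass a token credential instead of a key. A minimal sketch, assuming the azure-identity package is installed:
from azure.identity import DefaultAzureCredential

client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=DefaultAzureCredential(),
    credential_scopes=["https://cognitiveservices.azure.com/.default"],
)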
For JavaScript, install the package @azure-rest/ai-inference using npm:
npm install @azure-rest/ai-inference
Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
import ModelClient from "@azure-rest/ai-inference";
import { isUnexpected } from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";
const client = new ModelClient(
    "https://<resource>.services.ai.azure.com/models",
    new AzureKeyCredential(process.env.AZUREAI_ENDPOINT_KEY)
);
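For C#, install the Azure.AI.Inference package from NuGet; the --prerelease flag is needed while the library is in preview:
dotnet add package Azure.AI.Inference --prerelease
Then add the required using directives: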
using Azure;
using Azure.Identity;
using Azure.AI.Inference;
Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
ChatCompletionsClient client = new ChatCompletionsClient(
    new Uri("https://<resource>.services.ai.azure.com/models"),
    new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_INFERENCE_CREDENTIAL"))
);
Use the reference section to explore the API design and which parameters are available. For example, the reference section for Chat completions details how to use the route /chat/completions to generate predictions based on chat-formatted instructions. Notice that the path /models is included in the root of the URL:
Request
POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
Authorization: Bearer <bearer-token>
Content-Type: application/json
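As a minimal sketch, you can also call this route directly over HTTP, shown here with Python's requests library; the bearer-token environment variable name is hypothetical:
import os
import requests

url = "https://<resource>.services.ai.azure.com/models/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['AZUREAI_BEARER_TOKEN']}",
    "Content-Type": "application/json",
}
payload = {
    "messages": [{"role": "user", "content": "Say hello"}],
    "model": "mistral-large",  # deployment name to route to
}

# The api-version query parameter mirrors the request shown above
response = requests.post(
    url,
    params={"api-version": "2024-05-01-preview"},
    headers=headers,
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])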
For a chat model, you can create a request as follows:
from azure.ai.inference.models import SystemMessage, UserMessage
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain Riemann's conjecture in 1 paragraph"),
    ],
    model="mistral-large"
)
print(response.choices[0].message.content)
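The same client can also stream tokens as they're generated. A minimal sketch, reusing the client and message types from above:
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain Riemann's conjecture in 1 paragraph"),
    ],
    model="mistral-large",
    stream=True,  # yields incremental updates instead of a single response
)
for update in response:
    if update.choices:
        # Each update carries the newly generated fragment in its delta
        print(update.choices[0].delta.content or "", end="")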
The following example shows the same request in JavaScript:
var messages = [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Explain Riemann's conjecture in 1 paragraph" },
];
var response = await client.path("/chat/completions").post({
    body: {
        messages: messages,
        model: "mistral-large"
    }
});

// Surface service errors instead of treating an error payload as a result
if (isUnexpected(response)) {
    throw response.body.error;
}

console.log(response.body.choices[0].message.content);
In C#, the request looks like this:
var requestOptions = new ChatCompletionsOptions()
{
    Messages = {
        new ChatRequestSystemMessage("You are a helpful assistant."),
        new ChatRequestUserMessage("Explain Riemann's conjecture in 1 paragraph")
    },
    Model = "mistral-large"
};

Response<ChatCompletions> response = client.Complete(requestOptions);
Console.WriteLine($"Response: {response.Value.Content}");
And in Java, where setting the model name on the options selects the deployment:
List<ChatRequestMessage> chatMessages = new ArrayList<>();
chatMessages.add(new ChatRequestSystemMessage("You are a helpful assistant"));
chatMessages.add(new ChatRequestUserMessage("Explain Riemann's conjecture in 1 paragraph"));

ChatCompletions chatCompletions = client.complete(
    new ChatCompletionsOptions(chatMessages).setModel("mistral-large"));

for (ChatChoice choice : chatCompletions.getChoices()) {
    ChatResponseMessage message = choice.getMessage();
    System.out.println("Response:" + message.getContent());
}
Using the REST API directly, the same request looks as follows:
Request
POST https://<resource>.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview
Authorization: Bearer <bearer-token>
Content-Type: application/json
{
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant"
        },
        {
            "role": "user",
            "content": "Explain Riemann's conjecture in 1 paragraph"
        }
    ],
    "model": "mistral-large"
}
If you specify a model name that doesn't match any model deployment, you get an error indicating that the model doesn't exist. You can control which models are available to users by creating model deployments, as explained in Add and configure model deployments.
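A minimal sketch of handling that error in Python, reusing the client and message types from earlier; the nonexistent deployment name is hypothetical:
from azure.core.exceptions import HttpResponseError

try:
    client.complete(
        messages=[UserMessage(content="Hello")],
        model="deployment-that-does-not-exist",
    )
except HttpResponseError as ex:
    # The service rejects requests whose model field matches no deployment
    print(ex.status_code, ex.message)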