Consume serverless API endpoints from a different Azure AI Foundry project or hub
In this article, you learn how to configure an existing serverless API endpoint in a different project or hub than the one that was used to create the deployment.
Important
Models that are in preview are marked as preview on their model cards in the model catalog.
Certain models in the model catalog can be deployed as serverless APIs. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
The need to consume a serverless API endpoint in a different project or hub than the one that was used to create the deployment might arise in situations such as these:
- You want to centralize your deployments in a given project or hub and consume them from different projects or hubs in your organization.
- You need to deploy a model in a hub in a particular Azure region where serverless deployment for that model is available, but consume it from another region where serverless deployment isn't available for that model.
Prerequisites
An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a paid Azure account to begin.
A model deployed to a serverless API endpoint. This article assumes that you previously deployed the Meta-Llama-3-8B-Instruct model. To learn how to deploy this model as a serverless API, see Deploy models as serverless APIs.
To work with Azure AI Foundry, you don't need to install any software; any compatible web browser is sufficient.
Create a serverless API endpoint connection
Follow these steps to create a connection:
Connect to the project or hub where the endpoint is deployed:
Go to Azure AI Foundry and navigate to the project where the endpoint you want to connect to is deployed.
Get the URL and credentials for the endpoint you want to connect to. In this example, you get the details for an endpoint named meta-llama3-8b-qwerty.
From the left sidebar of your project in AI Foundry portal, go to My assets > Models + endpoints to see the list of deployments in the project.
Select the deployment you want to connect to.
Copy the values for Target URI and Key.
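The Target URI and Key are all a client needs to call the endpoint directly. As an illustration of what these values are used for, the following is a minimal sketch using only the Python standard library; the example URI, placeholder key, and the bearer-token authentication scheme shown here are assumptions, so check your deployment's details page for the exact values:

```python
import json
import urllib.request

def build_chat_request(target_uri: str, key: str, messages: list) -> urllib.request.Request:
    """Build a chat-completions request for a serverless API endpoint.

    Assumes the chat-completions route and bearer-token authentication;
    verify both against your endpoint's details page.
    """
    url = target_uri.rstrip("/") + "/chat/completions"
    body = json.dumps({"messages": messages}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Hypothetical values standing in for the ones copied from the portal:
req = build_chat_request(
    "https://meta-llama3-8b-qwerty.eastus2.models.ai.azure.com",  # example Target URI
    "<your-key>",
    [{"role": "user", "content": "Hello"}],
)
# urllib.request.urlopen(req) would send the request; it's omitted here
# because it requires a live endpoint and a valid key.
```

A connection, as created in the next steps, stores exactly these two values so that consuming projects don't have to handle the key directly.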
Now, connect to the project or hub where you want to create the connection:
Go to the project where the connection needs to be created.
Create the connection in the project:
From your project in AI Foundry portal, go to the bottom part of the left sidebar and select Management center.
From the left sidebar of the management center, select Connected resources.
Select New connection.
Select Serverless Model.
For the Target URI, paste the value you copied previously.
For the Key, paste the value you copied previously.
Give the connection a name, in this case meta-llama3-8b-connection.
Select Add connection.
At this point, the connection is available for consumption.
To validate that the connection is working:
Return to your project in AI Foundry portal.
From the left sidebar of your project, go to Build and customize > Prompt flow.
Select Create to create a new flow.
Select Create in the Chat flow box.
Give your Prompt flow a name and select Create.
Select the chat node from the graph to go to the chat section.
For Connection, open the dropdown list to select the connection you just created, in this case meta-llama3-8b-connection.
Select Start compute session from the top navigation bar to start an automatic runtime for the prompt flow.
Select the Chat option. You can now send messages and get responses.
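The steps above validate the connection interactively. The same smoke test can be done from code by sending one message and checking that a reply comes back. The sketch below shows only the response-parsing half, using the standard library and a sample payload in place of a live call; it assumes the common chat-completions response shape (choices[0].message.content), which you should confirm against your endpoint's documentation:

```python
import json

def extract_reply(response_body: str) -> str:
    """Pull the assistant's message out of a chat-completions response.

    Assumes the common chat-completions response shape
    (choices[0].message.content); confirm against your endpoint's docs.
    """
    payload = json.loads(response_body)
    return payload["choices"][0]["message"]["content"]

# A sample response body in that shape, standing in for a live call:
sample = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "Hello! How can I help?"}}]
})
print(extract_reply(sample))  # → Hello! How can I help?
```

If this check succeeds against a real response from the endpoint, the connection's Target URI and Key are valid and the deployment is reachable from the consuming project.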