How to use the GPT-4o Realtime API for speech and audio (Preview)
Note
This feature is currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Azure OpenAI GPT-4o Realtime API for speech and audio is part of the GPT-4o model family that supports low-latency, "speech in, speech out" conversational interactions. The Realtime API is designed to handle real-time, low-latency conversational interactions, making it a great fit for use cases involving live interactions between a user and a model, such as customer support agents, voice assistants, and real-time translators.
Most users of the Realtime API need to deliver and receive audio from an end-user in real time, including applications that use WebRTC or a telephony system. The Realtime API isn't designed to connect directly to end user devices and relies on client integrations to terminate end user audio streams.
Supported models
Currently only gpt-4o-realtime-preview version 2024-10-01-preview supports real-time audio.
The gpt-4o-realtime-preview model is available for global deployments in East US 2 and Sweden Central regions.
Important
The system stores your prompts and completions as described in the "Data Use and Access for Abuse Monitoring" section of the service-specific Product Terms for Azure OpenAI Service, except that the Limited Exception does not apply. Abuse monitoring will be turned on for use of the gpt-4o-realtime-preview API even for customers who otherwise are approved for modified abuse monitoring.
API support
Support for the Realtime API was first added in API version 2024-10-01-preview.
Note
For more information about the API and architecture, see the Azure OpenAI GPT-4o real-time audio repository on GitHub.
Get started
Before you can use GPT-4o real-time audio, you need:
- An Azure subscription - Create one for free.
- An Azure OpenAI resource created in a supported region. For more information, see Create a resource and deploy a model with Azure OpenAI.
- A deployment of the gpt-4o-realtime-preview model in a supported region as described in the supported models section. You can deploy the model from the Azure AI Foundry portal model catalog or from your project in the Azure AI Foundry portal.
Here are some of the ways you can get started with the GPT-4o Realtime API for speech and audio:
- For steps to deploy and use the gpt-4o-realtime-preview model, see the real-time audio quickstart.
- Download the sample code from the Azure OpenAI GPT-4o real-time audio repository on GitHub.
- The Azure-Samples/aisearch-openai-rag-audio repo contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT-4o realtime API for audio.
Connection and authentication
The Realtime API (via /realtime) is built on the WebSockets API to facilitate fully asynchronous streaming communication between the end user and model.
Important
Device details like capturing and rendering audio data are outside the scope of the Realtime API. It should be used in the context of a trusted, intermediate service that manages both connections to end users and model endpoint connections. Don't use it directly from untrusted end user devices.
The Realtime API is accessed via a secure WebSocket connection to the /realtime endpoint of your Azure OpenAI resource.
You can construct a full request URI by concatenating:
- The secure WebSocket (wss://) protocol
- Your Azure OpenAI resource endpoint hostname, for example, my-aoai-resource.openai.azure.com
- The openai/realtime API path
- An api-version query string parameter for a supported API version such as 2024-10-01-preview
- A deployment query string parameter with the name of your gpt-4o-realtime-preview model deployment
The following example is a well-constructed /realtime request URI:
wss://my-eastus2-openai-resource.openai.azure.com/openai/realtime?api-version=2024-10-01-preview&deployment=gpt-4o-realtime-preview-deployment-name
To authenticate:
- Microsoft Entra (recommended): Use token-based authentication with the /realtime API for an Azure OpenAI Service resource with managed identity enabled. Apply a retrieved authentication token using a Bearer token with the Authorization header.
- API key: An api-key can be provided in one of two ways:
  - Using an api-key connection header on the prehandshake connection. This option isn't available in a browser environment.
  - Using an api-key query string parameter on the request URI. Query string parameters are encrypted when using https/wss.
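The following is a minimal Python sketch of the connection step, using the api-key query string option described above. The resource name, deployment name, and the third-party websockets package are illustrative assumptions, not part of the service itself.

# Minimal connection sketch (assumes the third-party "websockets" package:
# pip install websockets). The resource and deployment names are placeholders.
import asyncio
import json
import os
import websockets

RESOURCE = "my-eastus2-openai-resource"                  # your Azure OpenAI resource name
DEPLOYMENT = "gpt-4o-realtime-preview-deployment-name"   # your model deployment name
API_VERSION = "2024-10-01-preview"

URI = (
    f"wss://{RESOURCE}.openai.azure.com/openai/realtime"
    f"?api-version={API_VERSION}"
    f"&deployment={DEPLOYMENT}"
    f"&api-key={os.environ['AZURE_OPENAI_API_KEY']}"     # API key as a query string parameter
)

async def main():
    async with websockets.connect(URI) as ws:
        # Print the first server event received after the connection is established.
        print(json.loads(await ws.recv()))

asyncio.run(main())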
Realtime API architecture
Once the WebSocket connection to /realtime is established and authenticated, the functional interaction takes place via events for sending and receiving WebSocket messages. These events each take the form of a JSON object.
Events can be sent and received in parallel, and applications should generally handle them concurrently and asynchronously.
- A client-side caller establishes a connection to /realtime, which starts a new session.
- A session automatically creates a default conversation. Multiple concurrent conversations aren't supported.
- The conversation accumulates input signals until a response is started, either via a direct event by the caller or automatically by voice activity detection (VAD).
- Each response consists of one or more items, which can encapsulate messages, function calls, and other information.
- Each message item has content_part elements, allowing multiple modalities (text and audio) to be represented across a single item.
- The session manages configuration of caller input handling (for example, user audio) and common output generation handling.
- Each caller-initiated response.create can override some of the output response behavior, if desired.
- Server-created items and the content_part elements in messages can be populated asynchronously and in parallel, for example, receiving audio, text, and function information concurrently in a round-robin fashion.
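As a rough illustration of this event-driven flow, the following sketch reads server events from an open WebSocket connection (the ws object from the connection sketch earlier) and dispatches on the type field. Only event types described in this article are handled explicitly; everything else falls through to a catch-all branch.

# Sketch of an asynchronous receive loop over an open /realtime connection.
import json

async def receive_events(ws):
    async for raw in ws:
        event = json.loads(raw)
        event_type = event.get("type")

        if event_type == "conversation.created":
            # The default conversation for this session was created.
            print("Conversation created.")
        elif event_type == "conversation.item.created":
            # A new item (message, function call, and so on) was added.
            print("Conversation item created.")
        else:
            # Other events (audio deltas, transcripts, errors, and so on)
            # arrive here; handle them as your application requires.
            print("Received event:", event_type)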
Session configuration
Often, the first event sent by the caller on a newly established /realtime session is a session.update payload. This event controls a wide set of input and output behavior, with output and response generation properties then later overridable using the response.create event.
The session.update event can be used to configure the following aspects of the session:
- Transcription of user input audio is opted into via the session's input_audio_transcription property. Specifying a transcription model (whisper-1) in this configuration enables the delivery of conversation.item.audio_transcription.completed events.
- Turn handling is controlled by the turn_detection property. This property can be set to none or server_vad as described in the input audio buffer and turn handling section.
- Tools can be configured to enable the server to call out to external services or functions to enrich the conversation. Tools are defined as part of the tools property in the session configuration.
An example session.update that configures several aspects of the session, including tools, follows. All session parameters are optional and can be omitted if not needed.
{
"type": "session.update",
"session": {
"voice": "alloy",
"instructions": "Call provided tools if appropriate for the user's input.",
"input_audio_format": "pcm16",
"input_audio_transcription": {
"model": "whisper-1"
},
"turn_detection": {
"threshold": 0.4,
"silence_duration_ms": 600,
"type": "server_vad"
},
"tools": [
{
"type": "function",
"name": "get_weather_for_location",
"description": "gets the weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state such as San Francisco, CA"
},
"unit": {
"type": "string",
"enum": [
"c",
"f"
]
}
},
"required": [
"location",
"unit"
]
}
}
]
}
}
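As a small sketch, the payload above can be sent as the first client event on the open WebSocket connection; session_update below is assumed to be a Python dictionary with the same structure as the JSON example.

# Send the session.update payload shown above over an open /realtime connection.
import json

async def configure_session(ws, session_update):
    # "session_update" is a dict shaped like the JSON example above,
    # including the "type": "session.update" field.
    await ws.send(json.dumps(session_update))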
Input audio buffer and turn handling
The server maintains an input audio buffer containing client-provided audio that has not yet been committed to the conversation state.
One of the key session-wide settings is turn_detection, which controls how data flow is handled between the caller and model. The turn_detection setting can be set to none or server_vad (to use server-side voice activity detection).
Without server decision mode
By default, the session is configured with the turn_detection type effectively set to none.
The session relies on caller-initiated input_audio_buffer.commit and response.create events to progress conversations and produce output. This setting is useful for push-to-talk applications or situations that have external audio flow control (such as a caller-side VAD component). These manual signals can still be used in server_vad mode to supplement VAD-initiated response generation.
- The client can append audio to the buffer by sending the input_audio_buffer.append event.
- The client commits the input audio buffer by sending the input_audio_buffer.commit event. The commit creates a new user message item in the conversation.
- The server responds by sending the input_audio_buffer.committed event.
- The server responds by sending the conversation.item.created event.
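A minimal sketch of this push-to-talk style turn follows. The audio_chunks argument is a hypothetical iterable of base64-encoded audio in the session's input audio format (pcm16 in the earlier example); the audio field on the append event follows the Realtime API event reference.

# Manual flow (turn_detection effectively "none"): append audio, commit it,
# then explicitly request a response. "audio_chunks" is a hypothetical iterable
# of base64-encoded audio in the session's input audio format.
import json

async def push_to_talk_turn(ws, audio_chunks):
    for chunk in audio_chunks:
        await ws.send(json.dumps({
            "type": "input_audio_buffer.append",
            "audio": chunk,                       # base64-encoded audio bytes
        }))
    # Commit the buffered audio; the server creates a user message item.
    await ws.send(json.dumps({"type": "input_audio_buffer.commit"}))
    # Ask the model to generate output for this turn.
    await ws.send(json.dumps({"type": "response.create"}))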
Server decision mode
The session can be configured with the turn_detection type set to server_vad. In this case, the server evaluates user audio from the client (as sent via input_audio_buffer.append) using a voice activity detection (VAD) component. The server automatically uses that audio to initiate response generation on applicable conversations when an end of speech is detected. Silence detection for the VAD can be configured when specifying server_vad detection mode.
- The server sends the input_audio_buffer.speech_started event when it detects the start of speech.
- At any time, the client can optionally append audio to the buffer by sending the input_audio_buffer.append event.
- The server sends the input_audio_buffer.speech_stopped event when it detects the end of speech.
- The server commits the input audio buffer by sending the input_audio_buffer.committed event.
- The server sends the conversation.item.created event with the user message item created from the audio buffer.
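A sketch of server decision mode follows: the client opts into server_vad, streams audio continuously, and leaves turn boundaries to the server events listed above. The capture_audio_chunks argument is a hypothetical async generator that yields base64-encoded audio.

# Server decision mode sketch: stream audio continuously and let server-side
# VAD decide turn boundaries. "capture_audio_chunks" is a hypothetical async
# generator that yields base64-encoded audio in the session's input format.
import json

async def stream_with_server_vad(ws, capture_audio_chunks):
    # Opt into server-side voice activity detection for this session.
    await ws.send(json.dumps({
        "type": "session.update",
        "session": {"turn_detection": {"type": "server_vad"}},
    }))
    async for chunk in capture_audio_chunks():
        # Keep appending audio; the server emits speech_started, speech_stopped,
        # committed, and conversation.item.created events on its own (see the
        # receive loop sketch earlier in this article).
        await ws.send(json.dumps({
            "type": "input_audio_buffer.append",
            "audio": chunk,
        }))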
Conversation and response generation
You can have one active conversation per session. The conversation accumulates input signals until a response is started, either via a direct event by the caller or automatically by voice activity detection (VAD).
- The server conversation.created event is returned right after session creation.
- The client adds new items to the conversation with a conversation.item.create event.
- The server conversation.item.created event is returned when the client adds a new item to the conversation.
Optionally, the client can truncate or delete items in the conversation:
- The client truncates an earlier assistant audio message item with a conversation.item.truncate event.
- The server conversation.item.truncated event is returned to sync the client and server state.
- The client deletes an item in the conversation with a conversation.item.delete event.
- The server conversation.item.deleted event is returned to sync the client and server state.
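The following sketch shows the client side of these operations: adding a user text message item and deleting an item by its ID. The item payload shape follows the Realtime API event reference; the identifiers are placeholders your application tracks from earlier conversation.item.created events.

# Conversation item management sketch over an open /realtime connection.
import json

async def add_text_item(ws, text):
    # Add a user text message to the default conversation.
    await ws.send(json.dumps({
        "type": "conversation.item.create",
        "item": {
            "type": "message",
            "role": "user",
            "content": [{"type": "input_text", "text": text}],
        },
    }))

async def delete_item(ws, item_id):
    # Remove an earlier item; the server confirms with conversation.item.deleted.
    await ws.send(json.dumps({
        "type": "conversation.item.delete",
        "item_id": item_id,
    }))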
Response interruption
The client response.cancel event is used to cancel an in-progress response.
A user might want to interrupt the assistant's response or ask the assistant to stop talking. Because the server produces audio faster than real time, the client can send a conversation.item.truncate event to truncate audio that hasn't been played yet.
- Truncation synchronizes the server's understanding of the audio with the client's playback.
- Truncating audio deletes the server-side text transcript to ensure there isn't text in the context that the user doesn't know about.
- The server responds with a conversation.item.truncated event.
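A sketch of handling a user interruption follows: cancel the in-progress response, then truncate the assistant audio item at the point where client playback actually stopped. The item ID, content index, and playback position are placeholders that your application tracks from earlier server events and local audio playback.

# Interruption sketch: stop generation, then trim unplayed assistant audio.
import json

async def interrupt(ws, assistant_item_id, played_ms):
    # Cancel the in-progress response.
    await ws.send(json.dumps({"type": "response.cancel"}))
    # Truncate the assistant audio to what the user actually heard.
    await ws.send(json.dumps({
        "type": "conversation.item.truncate",
        "item_id": assistant_item_id,      # ID of the assistant audio message item
        "content_index": 0,                # index of the audio content part
        "audio_end_ms": played_ms,         # client playback position, in milliseconds
    }))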
Related content
- Try the real-time audio quickstart
- See the Realtime API reference
- Learn more about Azure OpenAI quotas and limits