Quickstart: Server-side Audio Streaming

Important

Functionality described in this article is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Get started with using audio streams through Azure Communication Services Audio Streaming API. This quickstart assumes you're already familiar with Call Automation APIs to build an automated call routing solution.

Prerequisites

Set up a websocket server

Azure Communication Services requires your server application to set up a WebSocket server to stream audio in real-time. WebSocket is a standardized protocol that provides a full-duplex communication channel over a single TCP connection.

To learn more about WebSockets and how to use them, review the WebSocket documentation.

Receiving and sending audio streaming data

There are multiple ways to start receiving audio streams, which can be configured using the startMediaStreaming flag in the mediaStreamingOptions setup. You can also specify the desired sample rate for receiving or sending audio data using the audioFormat parameter. Currently supported formats are PCM 24 kHz mono and PCM 16 kHz mono; the default is PCM 16 kHz mono.

To enable bidirectional audio streaming, where you're sending audio data into the call, you can enable the EnableBidirectional flag. For more details, refer to the API specifications.

Start streaming audio to your webserver at time of answering the call

Enable automatic audio streaming when the call is established by setting the flag startMediaStreaming: true.

This setting ensures that audio streaming starts automatically as soon as the call is connected.

var mediaStreamingOptions = new MediaStreamingOptions(
  new Uri("wss://YOUR_WEBSOCKET_URL"),
  MediaStreamingContent.Audio,
  MediaStreamingAudioChannel.Mixed,
  startMediaStreaming: true) {
  EnableBidirectional = true,
  AudioFormat = AudioFormat.Pcm24KMono
};

var options = new AnswerCallOptions(incomingCallContext, callbackUri) {
  MediaStreamingOptions = mediaStreamingOptions,
};

AnswerCallResult answerCallResult = await client.AnswerCallAsync(options);

When Azure Communication Services receives the URL for your WebSocket server, it establishes a connection to it. Once the connection is successfully made, streaming is initiated.

Start streaming audio to your webserver while a call is in progress

To start media streaming during the call, set the startMediaStreaming parameter to false (the default) when answering the call. Later in the call, use the start media streaming API to begin streaming.

var mediaStreamingOptions = new MediaStreamingOptions(
  new Uri("wss://YOUR_WEBSOCKET_URL"),
  MediaStreamingContent.Audio,
  MediaStreamingAudioChannel.Mixed,
  startMediaStreaming: false) {
  EnableBidirectional = true,
  AudioFormat = AudioFormat.Pcm24KMono
};
var options = new AnswerCallOptions(incomingCallContext, callbackUri) {
  MediaStreamingOptions = mediaStreamingOptions,
};

AnswerCallResult answerCallResult = await client.AnswerCallAsync(options);

Start media streaming via API call

StartMediaStreamingOptions options = new StartMediaStreamingOptions() {
  OperationContext = "startMediaStreamingContext"
};

await callMedia.StartMediaStreamingAsync(options);

Stop audio streaming

To stop receiving audio streams during a call, use the Stop streaming API. You can stop audio streaming at any point in the call. There are two ways that audio streaming can be stopped:

  • Triggering the Stop streaming API: Use the API to stop receiving audio streaming data while the call is still active.
  • Automatic stop on call disconnect: Audio streaming automatically stops when the call is disconnected.
StopMediaStreamingOptions options = new StopMediaStreamingOptions() {
  OperationContext = "stopMediaStreamingContext"
};

await callMedia.StopMediaStreamingAsync(options);

Handling audio streams in your websocket server

This sample demonstrates how to listen to audio streams using your websocket server.

private async Task StartReceivingFromAcsMediaWebSocket(WebSocket webSocket) {

  while (webSocket.State == WebSocketState.Open) {
    byte[] receiveBuffer = new byte[2048];
    WebSocketReceiveResult receiveResult = await webSocket.ReceiveAsync(
      new ArraySegment<byte>(receiveBuffer), CancellationToken.None);

    if (receiveResult.MessageType != WebSocketMessageType.Close) {
      string data = Encoding.UTF8.GetString(receiveBuffer).TrimEnd('\0');
      var input = StreamingData.Parse(data);
      if (input is AudioData audioData) {
        // Add your code here to process the received audio chunk
      }
    }
  }
}

The first packet you receive contains metadata about the stream, including audio settings such as encoding, sample rate, and other configuration details.

{
  "kind": "AudioMetadata",
  "audioMetadata": {
    "subscriptionId": "89e8cb59-b991-48b0-b154-1db84f16a077",
    "encoding": "PCM",
    "sampleRate": 16000,
    "channels": 1,
    "length": 640
  }
}

After sending the metadata packet, Azure Communication Services (ACS) will begin streaming audio media to your WebSocket server.

{
  "kind": "AudioData",
  "audioData": {
    "timestamp": "2024-11-15T19:16:12.925Z",
    "participantRawID": "8:acs:3d20e1de-0f28-41c5…",
    "data": "5ADwAOMA6AD0A…",
    "silent": false
  }
}

Sending audio streaming data to Azure Communication Services

If bidirectional streaming is enabled using the EnableBidirectional flag in the MediaStreamingOptions, you can stream audio data back to Azure Communication Services, which plays the audio into the call.

Once Azure Communication Services begins streaming audio to your WebSocket server, you can relay the audio to your AI services. After your AI service processes the audio content, you can stream the audio back to the ongoing call in Azure Communication Services.

The example demonstrates how another service, such as Azure OpenAI or other voice-based Large Language Models, processes and transmits the audio data back into the call.

string jsonData = OutStreamingData.GetAudioDataForOutbound(audioData);
byte[] jsonBytes = Encoding.UTF8.GetBytes(jsonData);

// Send the serialized PCM audio chunk over the WebSocket
await m_webSocket.SendAsync(new ArraySegment<byte>(jsonBytes), WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);

You can also control the playback of audio in the call when streaming back to Azure Communication Services, based on your logic or business flow. For example, when voice activity is detected and you want to stop the queued up audio, you can send a stop message via the WebSocket to stop the audio from playing in the call.

var stopData = OutStreamingData.GetStopAudioForOutbound();
byte[] jsonBytes = Encoding.UTF8.GetBytes(stopData);

// Send the stop message to ACS over the WebSocket
await m_webSocket.SendAsync(new ArraySegment<byte>(jsonBytes), WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);

Prerequisites

Set up a websocket server

Azure Communication Services requires your server application to set up a WebSocket server to stream audio in real-time. WebSocket is a standardized protocol that provides a full-duplex communication channel over a single TCP connection.

To learn more about WebSockets and how to use them, review the WebSocket documentation.

Receiving and sending audio streaming data

There are multiple ways to start receiving audio streams, which can be configured using the startMediaStreaming flag in the mediaStreamingOptions setup. You can also specify the desired sample rate for receiving or sending audio data using the audioFormat parameter. Currently supported formats are PCM 24 kHz mono and PCM 16 kHz mono; the default is PCM 16 kHz mono.

To enable bidirectional audio streaming, where you're sending audio data into the call, you can enable the EnableBidirectional flag. For more details, refer to the API specifications.

Start streaming audio to your webserver at time of answering the call

Enable automatic audio streaming when the call is established by setting the flag startMediaStreaming: true.

This setting ensures that audio streaming starts automatically as soon as the call is connected.

MediaStreamingOptions mediaStreamingOptions = new MediaStreamingOptions(appConfig.getTransportUrl(), MediaStreamingTransport.WEBSOCKET, MediaStreamingContent.AUDIO, MediaStreamingAudioChannel.MIXED, true)
    .setEnableBidirectional(true)
    .setAudioFormat(AudioFormat.PCM_24K_MONO);

options = new AnswerCallOptions(data.getString(INCOMING_CALL_CONTEXT), callbackUri)
    .setCallIntelligenceOptions(callIntelligenceOptions)
    .setMediaStreamingOptions(mediaStreamingOptions);

Response answerCallResponse = client.answerCallWithResponse(options, Context.NONE);

When Azure Communication Services receives the URL for your WebSocket server, it establishes a connection to it. Once the connection is successfully made, streaming is initiated.

Start streaming audio to your webserver while a call is in progress

To start media streaming during the call, set the startMediaStreaming parameter to false (the default) when answering the call. Later in the call, use the start media streaming API to begin streaming.

MediaStreamingOptions mediaStreamingOptions = new MediaStreamingOptions(appConfig.getTransportUrl(), MediaStreamingTransport.WEBSOCKET, MediaStreamingContent.AUDIO, MediaStreamingAudioChannel.MIXED, false)
    .setEnableBidirectional(true)
    .setAudioFormat(AudioFormat.PCM_24K_MONO);

options = new AnswerCallOptions(data.getString(INCOMING_CALL_CONTEXT), callbackUri)
    .setCallIntelligenceOptions(callIntelligenceOptions)
    .setMediaStreamingOptions(mediaStreamingOptions);

Response answerCallResponse = client.answerCallWithResponse(options, Context.NONE);

StartMediaStreamingOptions startMediaStreamingOptions = new StartMediaStreamingOptions()
    .setOperationContext("startMediaStreamingContext");

callConnection.getCallMedia().startMediaStreamingWithResponse(startMediaStreamingOptions, Context.NONE);     

Stop audio streaming

To stop receiving audio streams during a call, use the Stop streaming API. You can stop audio streaming at any point in the call. There are two ways that audio streaming can be stopped:

  • Triggering the Stop streaming API: Use the API to stop receiving audio streaming data while the call is still active.
  • Automatic stop on call disconnect: Audio streaming automatically stops when the call is disconnected.
StopMediaStreamingOptions stopMediaStreamingOptions = new StopMediaStreamingOptions()
    .setOperationContext("stopMediaStreamingContext");
callConnection.getCallMedia().stopMediaStreamingWithResponse(stopMediaStreamingOptions, Context.NONE);

Handling audio streams in your websocket server

This sample demonstrates how to listen to audio streams using your websocket server.

@OnMessage
public void onMessage(String message, Session session) {
  System.out.println("Received message: " + message);
  var parsedData = StreamingData.parse(message);
  if (parsedData instanceof AudioData) {
    var audioData = (AudioData) parsedData;
    sendAudioData(session, audioData.getData());
  }
}

The first packet you receive contains metadata about the stream, including audio settings such as encoding, sample rate, and other configuration details.

{
  "kind": "AudioMetadata",
  "audioMetadata": {
    "subscriptionId": "89e8cb59-b991-48b0-b154-1db84f16a077",
    "encoding": "PCM",
    "sampleRate": 16000,
    "channels": 1,
    "length": 640
  }
}

After sending the metadata packet, Azure Communication Services (ACS) will begin streaming audio media to your WebSocket server.

{
  "kind": "AudioData",
  "audioData": {
    "timestamp": "2024-11-15T19:16:12.925Z",
    "participantRawID": "8:acs:3d20e1de-0f28-41c5…",
    "data": "5ADwAOMA6AD0A…",
    "silent": false
  }
}

Sending audio streaming data to Azure Communication Services

If bidirectional streaming is enabled using the EnableBidirectional flag in the MediaStreamingOptions, you can stream audio data back to Azure Communication Services, which plays the audio into the call.

Once Azure Communication Services begins streaming audio to your WebSocket server, you can relay the audio to your AI services. After your AI service processes the audio content, you can stream the audio back to the ongoing call in Azure Communication Services.

The example demonstrates how another service, such as Azure OpenAI or other voice-based Large Language Models, processes and transmits the audio data back into the call.

private void sendAudioData(Session session, byte[] binaryData) {
    System.out.println("Data buffer---> " + binaryData.getClass().getName());
    if (session.isOpen()) {
        try {
            var serializedData = OutStreamingData.getStreamingDataForOutbound(binaryData);
            session.getAsyncRemote().sendText(serializedData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

You can also control the playback of audio in the call when streaming back to Azure Communication Services, based on your logic or business flow. For example, when voice activity is detected and you want to stop the queued up audio, you can send a stop message via the WebSocket to stop the audio from playing in the call.

private void stopAudio(Session session) {
    if (session.isOpen()) {
        try {
            var serializedData = OutStreamingData.getStopAudioForOutbound();
            session.getAsyncRemote().sendText(serializedData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Prerequisites

Set up a websocket server

Azure Communication Services requires your server application to set up a WebSocket server to stream audio in real-time. WebSocket is a standardized protocol that provides a full-duplex communication channel over a single TCP connection.

To learn more about WebSockets and how to use them, review the WebSocket documentation.

Receiving and sending audio streaming data

There are multiple ways to start receiving audio streams, which can be configured using the startMediaStreaming flag in the mediaStreamingOptions setup. You can also specify the desired sample rate for receiving or sending audio data using the audioFormat parameter. Currently supported formats are PCM 24 kHz mono and PCM 16 kHz mono; the default is PCM 16 kHz mono.

To enable bidirectional audio streaming, where you're sending audio data into the call, you can enable the EnableBidirectional flag. For more details, refer to the API specifications.

Start streaming audio to your webserver at time of answering the call

Enable automatic audio streaming when the call is established by setting the flag startMediaStreaming: true.

This setting ensures that audio streaming starts automatically as soon as the call is connected.

const mediaStreamingOptions: MediaStreamingOptions = {
	transportUrl: transportUrl,
	transportType: "websocket",
	contentType: "audio",
	audioChannelType: "mixed",
	startMediaStreaming: true,
	enableBidirectional: true,
	audioFormat: "Pcm24KMono"
}
const answerCallOptions: AnswerCallOptions = {
	mediaStreamingOptions: mediaStreamingOptions
};

answerCallResult = await acsClient.answerCall(
	incomingCallContext,
	callbackUri,
	answerCallOptions
);

When Azure Communication Services receives the URL for your WebSocket server, it establishes a connection to it. Once the connection is successfully made, streaming is initiated.

Start streaming audio to your webserver while a call is in progress

To start media streaming during the call, set the startMediaStreaming parameter to false (the default) when answering the call. Later in the call, use the start media streaming API to begin streaming.

const mediaStreamingOptions: MediaStreamingOptions = {
	transportUrl: transportUrl,
	transportType: "websocket",
	contentType: "audio",
	audioChannelType: "unmixed",
	startMediaStreaming: false,
	enableBidirectional: true,
	audioFormat: "Pcm24KMono"
}
const answerCallOptions: AnswerCallOptions = {
	mediaStreamingOptions: mediaStreamingOptions
};

answerCallResult = await acsClient.answerCall(
	incomingCallContext,
	callbackUri,
	answerCallOptions
);

const startMediaStreamingOptions: StartMediaStreamingOptions = {
	operationContext: "startMediaStreaming"
}

await answerCallResult.callConnection.getCallMedia().startMediaStreaming(startMediaStreamingOptions);

Stop audio streaming

To stop receiving audio streams during a call, use the Stop streaming API. You can stop audio streaming at any point in the call. There are two ways that audio streaming can be stopped:

  • Triggering the Stop streaming API: Use the API to stop receiving audio streaming data while the call is still active.
  • Automatic stop on call disconnect: Audio streaming automatically stops when the call is disconnected.
const stopMediaStreamingOptions: StopMediaStreamingOptions = {
	operationContext: "stopMediaStreaming"
}
await answerCallResult.callConnection.getCallMedia().stopMediaStreaming(stopMediaStreamingOptions);

Handling audio streams in your websocket server

This sample demonstrates how to listen to audio streams using your websocket server.

wss.on('connection', async (ws: WebSocket) => {
	console.log('Client connected');
	await initWebsocket(ws);
	await startConversation();
	ws.on('message', async (packetData: ArrayBuffer) => {
		try {
			if (ws.readyState === WebSocket.OPEN) {
				await processWebsocketMessageAsync(packetData);
			} else {
				console.warn(`ReadyState: ${ws.readyState}`);
			}
		} catch (error) {
			console.error('Error processing WebSocket message:', error);
		}
	});
	ws.on('close', () => {
		console.log('Client disconnected');
	});
});

async function processWebsocketMessageAsync(receivedBuffer: ArrayBuffer) {
	const result = StreamingData.parse(receivedBuffer);
	// Get the streaming data kind
	const kind = StreamingData.getStreamingKind();

	if (kind === StreamingDataKind.AudioData) {
		const audioData = (result as AudioData);
		// process your audio data
	}
}

The first packet you receive contains metadata about the stream, including audio settings such as encoding, sample rate, and other configuration details.

{
  "kind": "AudioMetadata",
  "audioMetadata": {
    "subscriptionId": "89e8cb59-b991-48b0-b154-1db84f16a077",
    "encoding": "PCM",
    "sampleRate": 16000,
    "channels": 1,
    "length": 640
  }
}

After sending the metadata packet, Azure Communication Services (ACS) will begin streaming audio media to your WebSocket server.

{
  "kind": "AudioData",
  "audioData": {
    "timestamp": "2024-11-15T19:16:12.925Z",
    "participantRawID": "8:acs:3d20e1de-0f28-41c5…",
    "data": "5ADwAOMA6AD0A…",
    "silent": false
  }
}

Sending audio streaming data to Azure Communication Services

If bidirectional streaming is enabled using the EnableBidirectional flag in the MediaStreamingOptions, you can stream audio data back to Azure Communication Services, which plays the audio into the call.

Once Azure Communication Services begins streaming audio to your WebSocket server, you can relay the audio to your AI services. After your AI service processes the audio content, you can stream the audio back to the ongoing call in Azure Communication Services.

The example demonstrates how another service, such as Azure OpenAI or other voice-based Large Language Models, processes and transmits the audio data back into the call.

async function receiveAudioForOutbound(data: string) { 
    try {
        const jsonData = OutStreamingData.getStreamingDataForOutbound(data);
        if (ws.readyState === WebSocket.OPEN) {
            ws.send(jsonData);
        } else {
            console.log("socket connection is not open.");
        }
    } catch (e) {
        console.log(e);
    }
}

You can also control the playback of audio in the call when streaming back to Azure Communication Services, based on your logic or business flow. For example, when voice activity is detected and you want to stop the queued up audio, you can send a stop message via the WebSocket to stop the audio from playing in the call.

async function stopAudio() {
	try {
		const jsonData = OutStreamingData.getStopAudioForOutbound();
		if (ws.readyState === WebSocket.OPEN) {
			ws.send(jsonData);
		} else {
			console.log("socket connection is not open.");
		}
	} catch (e) {
		console.log(e);
	}
}

Prerequisites

Set up a websocket server

Azure Communication Services requires your server application to set up a WebSocket server to stream audio in real-time. WebSocket is a standardized protocol that provides a full-duplex communication channel over a single TCP connection.

To learn more about WebSockets and how to use them, review the WebSocket documentation.

Receiving and sending audio streaming data

There are multiple ways to start receiving audio streams, which can be configured using the startMediaStreaming flag in the mediaStreamingOptions setup. You can also specify the desired sample rate for receiving or sending audio data using the audioFormat parameter. Currently supported formats are PCM 24 kHz mono and PCM 16 kHz mono; the default is PCM 16 kHz mono.

To enable bidirectional audio streaming, where you're sending audio data into the call, you can enable the EnableBidirectional flag. For more details, refer to the API specifications.

Start streaming audio to your webserver at time of answering the call

Enable automatic audio streaming when the call is established by setting the flag startMediaStreaming: true.

This setting ensures that audio streaming starts automatically as soon as the call is connected.

media_streaming_configuration = MediaStreamingOptions(
    transport_url=TRANSPORT_URL,
    transport_type=MediaStreamingTransportType.WEBSOCKET,
    content_type=MediaStreamingContentType.AUDIO,
    audio_channel_type=MediaStreamingAudioChannelType.MIXED,
    start_media_streaming=True,
    enable_bidirectional=True,
    audio_format=AudioFormat.PCM24_K_MONO,
)
answer_call_result = call_automation_client.answer_call(
    incoming_call_context=incoming_call_context,
    media_streaming=media_streaming_configuration,
    callback_url=callback_uri,
)

When Azure Communication Services receives the URL for your WebSocket server, it establishes a connection to it. Once the connection is successfully made, streaming is initiated.

Start streaming audio to your webserver while a call is in progress

To start media streaming during the call, set the startMediaStreaming parameter to false (the default) when answering the call. Later in the call, use the start media streaming API to begin streaming.

media_streaming_configuration = MediaStreamingOptions(
    transport_url=TRANSPORT_URL,
    transport_type=MediaStreamingTransportType.WEBSOCKET,
    content_type=MediaStreamingContentType.AUDIO,
    audio_channel_type=MediaStreamingAudioChannelType.MIXED,
    start_media_streaming=False,
    enable_bidirectional=True,
    audio_format=AudioFormat.PCM24_K_MONO
)

answer_call_result = call_automation_client.answer_call(
    incoming_call_context=incoming_call_context,
    media_streaming=media_streaming_configuration,
    callback_url=callback_uri
)

call_automation_client.get_call_connection(call_connection_id).start_media_streaming(
    operation_context="startMediaStreamingContext"
)

Stop audio streaming

To stop receiving audio streams during a call, use the Stop streaming API. You can stop audio streaming at any point in the call. There are two ways that audio streaming can be stopped:

  • Triggering the Stop streaming API: Use the API to stop receiving audio streaming data while the call is still active.
  • Automatic stop on call disconnect: Audio streaming automatically stops when the call is disconnected.
call_automation_client.get_call_connection(call_connection_id).stop_media_streaming(operation_context="stopMediaStreamingContext") 

Handling audio streams in your websocket server

This sample demonstrates how to listen to audio streams using your websocket server.

async def handle_client(websocket):
    print("Client connected")
    try:
        async for message in websocket:
            json_object = json.loads(message)
            kind = json_object["kind"]
            if kind == "AudioData":
                audio_data = json_object["audioData"]["data"]
                # process your audio data
    except websockets.exceptions.ConnectionClosedOK:
        print("Client disconnected")
    except websockets.exceptions.ConnectionClosedError as e:
        print(f"Connection closed with error: {e}")
    except Exception as e:
        print(f"Unexpected error: {e}")

The first packet you receive contains metadata about the stream, including audio settings such as encoding, sample rate, and other configuration details.

{
  "kind": "AudioMetadata",
  "audioMetadata": {
    "subscriptionId": "89e8cb59-b991-48b0-b154-1db84f16a077",
    "encoding": "PCM",
    "sampleRate": 16000,
    "channels": 1,
    "length": 640
  }
}
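The `length` field ties directly to the other metadata values: with 16-bit PCM, 640 bytes at 16 kHz mono works out to 20 ms of audio per packet. A quick sanity check (the helper below is illustrative, not part of any SDK):

```python
# 16-bit PCM means 2 bytes per sample per channel.
BYTES_PER_SAMPLE = 2

def packet_duration_ms(sample_rate: int, channels: int, length_bytes: int) -> float:
    """Duration of one audio packet, derived from the AudioMetadata fields."""
    samples_per_channel = length_bytes / (BYTES_PER_SAMPLE * channels)
    return samples_per_channel / sample_rate * 1000

# 640 bytes at 16 kHz mono -> 20 ms of audio per packet
print(packet_duration_ms(16000, 1, 640))  # 20.0
```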

After sending the metadata packet, Azure Communication Services (ACS) will begin streaming audio media to your WebSocket server.

{
  "kind": "AudioData",
  "audioData": {
    "timestamp": "2024-11-15T19:16:12.925Z",
    "participantRawID": "8:acs:3d20e1de-0f28-41c5…",
    "data": "5ADwAOMA6AD0A…",
    "silent": false
  }
}
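The `data` field is the audio itself: base64-encoded 16-bit PCM samples, per the encoding announced in the metadata packet. A small decoding sketch using only the standard library (the function name is ours, not an SDK API; `array` uses native byte order, which is little-endian on common platforms):

```python
import array
import base64

def decode_audio_chunk(b64_data: str) -> array.array:
    """Decode a base64 AudioData payload into signed 16-bit PCM samples."""
    raw = base64.b64decode(b64_data)
    samples = array.array("h")  # 'h' = signed 16-bit integers
    samples.frombytes(raw)
    return samples

# Round-trip a tiny synthetic chunk to show the shape of the data
chunk = base64.b64encode(array.array("h", [228, 240, -227, 232]).tobytes()).decode()
print(list(decode_audio_chunk(chunk)))  # [228, 240, -227, 232]
```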

Sending audio streaming data to Azure Communication Services

If bidirectional streaming is enabled using the EnableBidirectional flag in the MediaStreamingOptions, you can stream audio data back to Azure Communication Services, which plays the audio into the call.

Once Azure Communication Services begins streaming audio to your WebSocket server, you can relay the audio to your AI services. After your AI service processes the audio content, you can stream the audio back to the ongoing call in Azure Communication Services.

The example demonstrates how another service, such as Azure OpenAI or other voice-based Large Language Models, processes and transmits the audio data back into the call.

async def send_data(websocket, buffer):
    if websocket.open:
        data = {
            "Kind": "AudioData",
            "AudioData": {
                "Data": buffer
            },
            "StopAudio": None
        }
        # Serialize the server streaming data
        serialized_data = json.dumps(data)
        print(f"Out Streaming Data ---> {serialized_data}")
        # Send the chunk over the WebSocket
        await websocket.send(serialized_data)
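The `buffer` passed to `send_data` above is the base64-encoded PCM payload. If you're starting from raw PCM bytes, a helper along these lines builds the complete outbound envelope (the function name and chunk size are illustrative; only the JSON shape comes from the sample above):

```python
import base64
import json

def make_outbound_audio_message(pcm_bytes: bytes) -> str:
    """Wrap raw 16-bit PCM bytes in the outbound AudioData envelope."""
    return json.dumps({
        "Kind": "AudioData",
        "AudioData": {"Data": base64.b64encode(pcm_bytes).decode("ascii")},
        "StopAudio": None,
    })

# 960 bytes corresponds to one 20 ms chunk at 24 kHz mono 16-bit PCM
message = make_outbound_audio_message(b"\x00\x01" * 480)
print(json.loads(message)["Kind"])  # AudioData
```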

You can also control the playback of audio in the call when streaming back to Azure Communication Services, based on your logic or business flow. For example, when voice activity is detected and you want to stop the queued up audio, you can send a stop message via the WebSocket to stop the audio from playing in the call.

async def stop_audio(websocket):
    if websocket.open:
        data = {
            "Kind": "StopAudio",
            "AudioData": None,
            "StopAudio": {}
        }
        # Serialize the server streaming data
        serialized_data = json.dumps(data)
        print(f"Out Streaming Data ---> {serialized_data}")
        # Send the chunk over the WebSocket
        await websocket.send(serialized_data)

Clean up resources

If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about cleaning up resources.

Next steps