WebSocket Connection Error 1006 in Azure OpenAI Speech to Speech Chat Example

郭銍恩 25 Reputation points
2024-09-15T13:29:19.0033333+00:00

The .env file has been created, the application has been registered, and the role "Cognitive Services OpenAI Contributor" has been assigned in the resource group. The sample in azure-sdk-for-js/sdk/openai/openai/samples/cookbook/simpleCompletionsPage runs without any issues. However, the following error appears in the azure-sdk-for-js/sdk/openai/openai/samples/cookbook/speechToSpeechChat sample:

WebSocket connection to 'wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=simple&Authorization=Bearer%XXX-ConnectionId=3CF8CD64E4814885AEEB87E67749B54B' failed:

{
    "privSessionId": "3CF8CD64E4814885AEEB87E67749B54B",
    "privReason": 0,
    "privErrorDetails": "Unable to contact server. StatusCode: 1006, undefined Reason: undefined",
    "privErrorCode": 4
}
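
For reference, the .env was filled in per the sample's README; a redacted sketch of its shape (the variable names below are illustrative and may not match the sample's exact ones):

    # Illustrative .env layout; the sample's README has the authoritative variable names
    AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com"
    REGION="eastus"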
Tags: Azure AI Speech, JavaScript API, Azure OpenAI Service

Accepted answer

romungi-MSFT 45,036 Reputation points, Microsoft Employee
2024-09-16T12:12:30.53+00:00

    @郭銍恩 I think you are referring to the sample in this repo.

    Looking at this sample, I see the .env file is only updated with the Azure OpenAI endpoint and the speech region, but the speech endpoint and keys are not taken as input in either the HTML page or the .env file. So, whenever the current project sends a request, it does not send a valid token for the correct speech endpoint to authenticate and get a response. I think this is what is causing the above error from the Speech SDK.
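
    To illustrate, an authorization token for the Speech service is normally minted against the Speech resource's own key and regional token endpoint, roughly as below (a sketch; the env variable name is an assumption, not the sample's):

    // Sketch: minting a Speech authorization token.
    // The key and region must belong to the Speech resource itself; a missing or
    // mismatched token makes the WebSocket handshake fail with code 1006 as above.
    const region = "eastus";
    const speechKey = process.env.SPEECH_KEY; // assumed env name for the Speech resource key
    const tokenResponse = await fetch(
      `https://${region}.api.cognitive.microsoft.com/sts/v1.0/issueToken`,
      { method: "POST", headers: { "Ocp-Apim-Subscription-Key": speechKey } }
    );
    const authToken = await tokenResponse.text(); // short-lived bearer token (about 10 minutes)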

    Looking at the same repo, the commit history shows that an earlier version had the option to enter the speech keys and region when using the Azure Speech SDK, but the current sample does not have this option. See the commit history here. The code at that point uses the keys and region of the Speech resource when the option to use Azure Speech is selected in the HTML page, as sketched below.
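
    For reference, the key-and-region path from that commit looks roughly like this with the Speech SDK (a sketch; the env variable names are illustrative):

    import {
      AudioConfig,
      SpeechConfig,
      SpeechRecognizer,
    } from "microsoft-cognitiveservices-speech-sdk";

    // Configure the SDK directly with the Speech resource's key and region.
    const speechConfig = SpeechConfig.fromSubscription(
      process.env.SPEECH_KEY,    // assumed env name for the Speech resource key
      process.env.SPEECH_REGION  // e.g. "eastus"
    );
    speechConfig.speechRecognitionLanguage = "en-US";

    // Recognize a single utterance from the default microphone.
    const recognizer = new SpeechRecognizer(
      speechConfig,
      AudioConfig.fromDefaultMicrophoneInput()
    );
    recognizer.recognizeOnceAsync((result) => {
      console.log(`Recognized: ${result.text}`);
      recognizer.close();
    });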

    You might want to clone the code from that point and retry, or raise an issue on the azure-sdk-for-js repo so the sample is corrected to use the right configuration for the speech resource. Thanks!!

    If this answers your query, do click "Accept Answer" and "Yes" for "Was this answer helpful". And, if you have any further query, do let us know.

    1 person found this answer helpful.

0 additional answers
