Unable to Get Reasonable Results with Azure Pronunciation Assessment
I'm trying to use the pronunciationAssessment feature in the Azure Speech SDK, but I cannot get reasonable results. I've tested this with the word "school" and other words as well, but I always get a result of 0, no matter whether the word was…
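For comparison, here is a minimal Python sketch of the pronunciation assessment flow this question refers to; the key, region, and WAV file name are placeholders, and checking result.reason first helps separate "the audio was never recognized" from "the assessment genuinely scored 0":

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own key, region, and a 16 kHz mono WAV file.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="school.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        language="en-US",
                                        audio_config=audio_config)

# The assessment config must be applied to the recognizer before recognition runs.
pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="school",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=False)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    scores = speechsdk.PronunciationAssessmentResult(result)
    print("accuracy:", scores.accuracy_score,
          "fluency:", scores.fluency_score,
          "pronunciation:", scores.pronunciation_score)
else:
    # If recognition itself fails (wrong audio format, silence, etc.),
    # the assessment scores will not be meaningful.
    print("recognition failed:", result.reason)
```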
How to set the speech sensitivity of Speech to Text to ignore all noise.
I need to adjust the speech sensitivity so that I can tune it for noisy environments. How do I set the speech sensitivity of Speech to Text so that it ignores noise?
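As far as I know the Speech SDK does not expose a single noise-sensitivity setting; the closest built-in knobs are the silence and segmentation timeouts, and actual noise suppression has to happen before the audio reaches the service. A hedged Python sketch of those timeout properties (key, region, and values are placeholders; property availability depends on SDK version):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# How long the recognizer waits for speech before returning NoMatch (milliseconds).
speech_config.set_property(
    speechsdk.PropertyId.SpeechServiceConnection_InitialSilenceTimeoutMs, "10000")

# How much trailing silence ends a phrase; smaller values close segments sooner,
# which can help in noisy rooms. Assumes an SDK version that exposes this property.
speech_config.set_property(
    speechsdk.PropertyId.Speech_SegmentationSilenceTimeoutMs, "500")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US")
print(recognizer.recognize_once().text)
```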
IPA phoneme for "Herrera" doesn't sound right
Hi, here's what I'm using as the IPA phoneme for the Spanish name "Herrera": /eˈreɾa/. However, the first "r" isn't rolled and the second "r" sounds like a T. Is there another phoneme element I can use to get the rolled…
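For context, this is the kind of SSML involved, driven from Python; es-ES-ElviraNeural is just an example Spanish voice, and whether the trill is actually produced depends on the voice, so treat this as a sketch to experiment with rather than a fix:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# <phoneme alphabet='ipa'> pins the pronunciation; in IPA, r is the trill and
# ɾ is the single tap. A Spanish voice is more likely to realise the trill.
ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='es-ES'>
  <voice name='es-ES-ElviraNeural'>
    <phoneme alphabet='ipa' ph='eˈreɾa'>Herrera</phoneme>
  </voice>
</speak>
"""
result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```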
Realtime Recognizer not utilising Semantic Segmentation
Hi all! I'm using the Azure speechsdk.SpeechRecognizer to transcribe streamed real-time audio. While the transcription works, continuous talking results in large paragraphs being output rather than sentence-by-sentence results. I included the…
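A sketch of what I believe the configuration looks like in Python; this assumes a recent SDK build that exposes PropertyId.Speech_SegmentationStrategy (the property name is from memory, so check your SDK version and the current docs):

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Assumption: recent SDKs expose a segmentation-strategy property; "Semantic"
# asks the service to split results at sentence-like boundaries instead of
# waiting only for silence.
speech_config.set_property(speechsdk.PropertyId.Speech_SegmentationStrategy, "Semantic")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US")
recognizer.recognized.connect(lambda evt: print("FINAL:", evt.result.text))

recognizer.start_continuous_recognition()
time.sleep(30)  # let some streamed audio flow
recognizer.stop_continuous_recognition()
```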
Azure Speech Service Batch Synthesis
The Azure Speech Service Batch Synthesis API is not creating the file as MP3 even though the output format is set correctly (audio-24khz-160kbitrate-mono-mp3). The speech is created as a WMA file instead.
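A hedged sketch of a batch synthesis request in Python against what I believe is the 2024-04-01 REST surface; the job ID, key, region, and field names are from memory and should be checked against the current API reference:

```python
import requests

region, key = "YOUR_REGION", "YOUR_KEY"
job_id = "my-batch-job-001"  # hypothetical job name
url = (f"https://{region}.api.cognitive.microsoft.com/texttospeech/"
       f"batchsyntheses/{job_id}?api-version=2024-04-01")

body = {
    "inputKind": "PlainText",
    "inputs": [{"content": "Hello from batch synthesis."}],
    "synthesisConfig": {"voice": "en-US-JennyNeural"},
    "properties": {
        # The format requested in the question; the result files should be MP3.
        "outputFormat": "audio-24khz-160kbitrate-mono-mp3"
    },
}

resp = requests.put(url, json=body,
                    headers={"Ocp-Apim-Subscription-Key": key})
print(resp.status_code, resp.json())
```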
Speech to text with Twilio: the Telugu transcript comes back empty and the system initially does not respond
async def receive_json(self, text_data):
    try:
        event = text_data.get('event')
        if event == 'connected':
            logger.info("WebSocket connected event received")
        elif event == 'start':
            …
How to fix Exception with an error code: 0xe (SPXERR_MIC_NOT_AVAILABLE)
I have built a chatbot with the Bot Framework and am now looking to integrate speech functionality for the bot. I am trying to run the code below from the Microsoft Learn quickstart for the Speech SDK using Python. …
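The quickstart defaults to the system microphone, which is typically what raises SPXERR_MIC_NOT_AVAILABLE on machines without one (servers, containers, WSL). A small Python sketch showing how to point the recognizer at a WAV file instead; key, region, and file name are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# No microphone available: read from a WAV file instead of the default mic.
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
# With a working microphone, the quickstart equivalent would be:
# audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)
result = recognizer.recognize_once_async().get()
print(result.reason, result.text)
```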
Inconsistencies in IPA Pronunciation in Text to Speech
Hi, I'm using SSML to ensure specific pronunciation; however, I'm experiencing some inconsistencies. For example, here's the word 'would': <speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'> <voice…
Issue with Continuous Speech Recognition Omitting Words in Azure Speech Service
Dear Azure Technical Support, I’m using the Azure Speech Service for continuous speech recognition, following the official JavaScript sample from the cognitive-services-speech-sdk repository. I’ve encountered a behavior I’d like to clarify. When using…
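The original sample is JavaScript; as a language-neutral way to see where words go missing, a Python sketch that logs both the interim hypotheses and the final results for comparison (key and region are placeholders):

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US")

# Interim hypotheses: words may still change or disappear here.
recognizer.recognizing.connect(lambda evt: print("PARTIAL:", evt.result.text))
# Final per-utterance results: this is what the service actually commits to.
recognizer.recognized.connect(lambda evt: print("FINAL:  ", evt.result.text))
recognizer.canceled.connect(
    lambda evt: print("CANCELED:", evt.cancellation_details.reason))

recognizer.start_continuous_recognition()
time.sleep(60)
recognizer.stop_continuous_recognition()
```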
Will the word boundary event always be triggered before the Synthesizing event?
We are using the Speech SDK for text to speech, and we need to highlight the word being spoken by leveraging the word boundary event. From…
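One way to check the ordering empirically is to log both events with their offsets; a Python sketch (this does not assert any guaranteed ordering, the SDK documentation is the authority on that, and key/region are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# audio_config=None keeps the audio in memory instead of playing it.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)

def on_word_boundary(evt):
    # audio_offset is in 100-nanosecond ticks.
    print(f"WordBoundary: {evt.text!r} at {evt.audio_offset / 10_000:.0f} ms")

def on_synthesizing(evt):
    print(f"Synthesizing: {len(evt.result.audio_data)} bytes received so far")

synthesizer.synthesis_word_boundary.connect(on_word_boundary)
synthesizer.synthesizing.connect(on_synthesizing)

synthesizer.speak_text_async("Highlight each word as it is spoken.").get()
```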
Bug Report: Mispronunciation of Isolated Hungarian Words in Azure Neural TTS (hu-HU-NoemiNeural), but not in context
Description: The Azure Neural TTS system mispronounces specific Hungarian words when using the hu-HU-NoemiNeural voice. The issue affects more than half of the vocabulary words in a recent production run (full SSML shared at the bottom of this…
How to disable the default "Disfluency Removal" of filler words after STT transcription in Azure AI Speech?
Azure AI Speech Services defaults to removing many filler words (uh, eh, etc.) via post-transcription "Disfluency Removal". My use case includes presentation analysis for filler words, which requires a verbatim transcript. Is there a…
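One thing worth checking is whether the lexical form in the detailed output keeps the fillers that the display text drops; I am not certain it does, but comparing the two forms is a quick test. A Python sketch (key, region, and file name are placeholders):

```python
import json
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Detailed output returns N-best alternatives with lexical / ITN / display forms.
speech_config.output_format = speechsdk.OutputFormat.Detailed

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, language="en-US",
    audio_config=speechsdk.audio.AudioConfig(filename="presentation.wav"))

result = recognizer.recognize_once()
payload = json.loads(result.json)  # raw service response

print("Display:", payload.get("DisplayText"))      # post-processed text
print("Lexical:", payload["NBest"][0]["Lexical"])   # raw word sequence
```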
Can Pronunciation assessment be used with REST API?
Is it possible to use Pronunciation Assessment with the REST API, and if so, what are the necessary steps to make it work?
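As far as I know, yes: the short-audio STT REST endpoint accepts pronunciation assessment parameters as a base64-encoded JSON header. A Python sketch (key, region, file, and parameter values are placeholders; check the current REST reference for the exact fields):

```python
import base64
import json
import requests

region, key = "YOUR_REGION", "YOUR_KEY"

# Assessment parameters travel base64-encoded in the Pronunciation-Assessment header.
params = {
    "ReferenceText": "school",
    "GradingSystem": "HundredMark",
    "Granularity": "Phoneme",
    "Dimension": "Comprehensive",
}
pron_header = base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")

url = (f"https://{region}.stt.speech.microsoft.com/speech/recognition/"
       "conversation/cognitiveservices/v1?language=en-US&format=detailed")
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
    "Pronunciation-Assessment": pron_header,
    "Accept": "application/json",
}

with open("school.wav", "rb") as audio:
    resp = requests.post(url, headers=headers, data=audio.read())
print(resp.json())
```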
Speech service SDK usage and issues
I am trying to connect Azure Speech with my Azure OpenAI resource so that I have the option to ask queries either by text or by voice. Currently, I have issues connecting Azure AI Speech with my Node.js backend. I am…
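The backend here is Node.js, but the flow is the same in any language: recognize speech to text, send the text to Azure OpenAI, and optionally synthesize the answer. A Python sketch of that pipeline, with keys, endpoint, and deployment name as placeholders:

```python
import azure.cognitiveservices.speech as speechsdk
from openai import AzureOpenAI

speech_config = speechsdk.SpeechConfig(subscription="SPEECH_KEY", region="SPEECH_REGION")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US")

client = AzureOpenAI(api_key="AOAI_KEY",
                     azure_endpoint="https://YOUR_RESOURCE.openai.azure.com",
                     api_version="2024-02-01")

# 1) Voice in -> text.
question = recognizer.recognize_once().text

# 2) Text -> Azure OpenAI (deployment name is a placeholder).
answer = client.chat.completions.create(
    model="YOUR_DEPLOYMENT",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# 3) Text out -> voice.
speechsdk.SpeechSynthesizer(speech_config=speech_config).speak_text_async(answer).get()
```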
Azure TTS Error 404
I get a 404 error when trying to fetch the MP3 file via fetch. I am using Node.js in the backend. More details: I built functionality in my app that creates an XML document containing all the SSML tags as specified by Microsoft Azure. Is it possible some…
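One common cause of a 404 on the TTS REST call is a wrong host or path for the resource's region. For reference, a Python sketch of the single-request endpoint; the same URL and headers apply from Node.js fetch, and key, region, voice, and output format are placeholders:

```python
import requests

region, key = "YOUR_REGION", "YOUR_KEY"
url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"

headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/ssml+xml",
    "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
    "User-Agent": "my-tts-client",
}
ssml = ("<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>"
        "<voice name='en-US-JennyNeural'>Hello from the REST endpoint.</voice></speak>")

resp = requests.post(url, headers=headers, data=ssml.encode("utf-8"))
resp.raise_for_status()  # a 404 here generally points at the URL/region, not the SSML body

with open("speech.mp3", "wb") as f:
    f.write(resp.content)
```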
Issue with Continuous Language Identification in Azure Speech SDK for Angular Application
We are currently using the "microsoft-cognitiveservices-speech-sdk" in our Angular application (version 14) for speech transcription and translation. The transcription and translation functionality is working as expected. However, we are…
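The app here uses the JavaScript SDK; as a point of comparison, this is roughly how continuous language identification is wired up in Python. The LanguageIdMode property assumes a reasonably recent SDK, and the language list and keys are placeholders:

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
# Continuous (rather than at-start) language identification; assumes an SDK
# version that exposes this property.
speech_config.set_property(
    speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous")

auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE", "fr-FR"])

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect)

def on_recognized(evt):
    detected = speechsdk.AutoDetectSourceLanguageResult(evt.result).language
    print(detected, "->", evt.result.text)

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()
time.sleep(60)
recognizer.stop_continuous_recognition()
```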
Azure Speech Studio Andrew Multilingual voice sounds glitchy
I'm having some issues with the Andrew Multilingual (en-US-AndrewMultilingualNeural) voice in the Azure Speech Studio. There are a few instances in which the voice sounds raspy and somewhat glitchy. It seems to have a lot of trouble with the word…
SpeakSsmlAsync Result always Canceled
Hello, I am building a project using Azure's SpeechSynthesizer (log attached: SpeechLog.txt). I am running into the following problem: when calling SpeakSsmlAsync(ssmlText), the result always has a Canceled state, and I am having a hard time understanding why. When I…
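The post is presumably using SpeakSsmlAsync from C#, but the diagnostic step is the same in any SDK: read the cancellation details off the result, which usually names the cause (wrong key or region, malformed SSML, unknown voice, throttling). A Python equivalent sketch with placeholder key, region, and SSML:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

ssml_text = ("<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
             "xml:lang='en-US'><voice name='en-US-JennyNeural'>Test.</voice></speak>")

result = synthesizer.speak_ssml_async(ssml_text).get()

if result.reason == speechsdk.ResultReason.Canceled:
    details = result.cancellation_details
    print("Cancellation reason:", details.reason)
    if details.reason == speechsdk.CancellationReason.Error:
        # error_details typically spells out the underlying problem.
        print("Error details:", details.error_details)
```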
I need to know whether this "Post-call transcription and analytics" API can work with Node.js
I need to know whether this "Post-call transcription and analytics" API can work with Node.js. If not, where can I get a proper conversation transcription API with multi-user and multi-language detection that returns text with given…
When using batch speech transcription, the ITN feature only applies to the first option of the nBest results.
When using batch transcription, the ITN feature only applies to the first option of the nBest results, which is not necessarily the one with the highest confidence. The batch transcription service returns a JSON result with the following structure…
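For anyone parsing these files, a small Python sketch that walks a batch transcription result and picks the highest-confidence nBest entry per phrase instead of blindly taking index 0; the file name is a placeholder and the field names follow the structure described above:

```python
import json

# Placeholder name for one recognized-phrases file produced by batch transcription.
with open("transcription_result.json", encoding="utf-8") as f:
    transcript = json.load(f)

for phrase in transcript["recognizedPhrases"]:
    # Choose by confidence rather than by position in the nBest list.
    best = max(phrase["nBest"], key=lambda alt: alt["confidence"])
    print(f'{best["confidence"]:.3f}  display={best.get("display")!r}  '
          f'lexical={best["lexical"]!r}')
```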