Microsoft TTS reads English oddly when set to Hebrew
When the Hebrew TTS reads a sentence containing separated English letters, it reads them oddly. For example: "השם שלך הוא P A Z" doesn't say the letters "P", "A", "Z"; instead it pronounces the sound of each letter. How can…
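A common workaround for this is SSML's `<say-as interpret-as="characters">` element, which forces a voice to spell letters out rather than sound them. A minimal sketch in Python that builds such SSML (the `he-IL-HilaNeural` voice name is an assumption; any Hebrew neural voice should work the same way):

```python
def spell_out(letters: str, lang: str = "en-US") -> str:
    """Wrap letters in <say-as interpret-as="characters"> inside a <lang>
    element, so the voice spells them ("pee", "ay", "zee") instead of
    pronouncing their sounds."""
    return (f'<lang xml:lang="{lang}">'
            f'<say-as interpret-as="characters">{letters}</say-as>'
            f'</lang>')

def build_ssml(hebrew_text: str, letters: str) -> str:
    """Build a full SSML document: Hebrew sentence + spelled-out letters."""
    # Voice name is an assumption for illustration; substitute your own.
    return ('<speak version="1.0" '
            'xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="he-IL">'
            '<voice name="he-IL-HilaNeural">'
            f'{hebrew_text} {spell_out(letters)}'
            '</voice></speak>')

ssml = build_ssml("השם שלך הוא", "P A Z")
```

The resulting string can be passed to the synthesizer's `speak_ssml_async` method instead of plain text.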
Azure language identification is not detecting the language reliably
I am using Azure Continuous Language Identification in the Azure Speech SDK with three languages: en-IN, hi-IN, mr-IN. It behaves inconsistently: when I say something in English, it transcribes it as Hindi or Marathi. In short, it is detecting the wrong…
Video translation in Azure AI Speech: supported regions
Dear Sir, Currently, video translation in Azure AI Speech is only supported in the East US region. Could you please let me know when we can expect video translation in Azure AI Speech to be supported in other regions as well, such as West…
Multilingual voice returns the wrong language when synthesizing numbers (text to speech)
Hello, when I use your API with the voice "en-US-AndrewMultilingualNeural", I also use "es-ES" and "en-US" to specify when to use each language. It works when using text, but when just a number like…
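One mitigation is to wrap bare numbers in `<lang>` plus `<say-as interpret-as="cardinal">` elements so the multilingual voice keeps the intended language for digits. A small sketch of such pre-processing (the regex-based tagging is an illustrative helper, not part of any official API):

```python
import re

def tag_numbers(text: str, lang: str) -> str:
    """Wrap every bare digit run in <lang> + <say-as interpret-as="cardinal">
    so a multilingual voice reads numbers in the intended language."""
    def repl(match: re.Match) -> str:
        return (f'<lang xml:lang="{lang}">'
                f'<say-as interpret-as="cardinal">{match.group(0)}</say-as>'
                f'</lang>')
    return re.sub(r'\d+', repl, text)

tagged = tag_numbers("Tengo 42 manzanas", "es-ES")
```

The tagged fragment then goes inside the `<voice>` element of the SSML document sent to the synthesizer.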
Why does Azure Speech-to-Text detect French accurately in a standalone Python script but perform poorly in a real-time video call integration?
I'm working on a real-time translation project using Azure Speech Services. When I run my translation code in a standalone Python script, it accurately recognizes and translates French and English speech. However, when the same Speech-to-Text…
Slovak Text-to-Speech Pronunciation Issues with 'r' and 'l' Since September Update
Hello, We are using your Text-to-Speech service in the Slovak language. Since around September, we’ve noticed that certain words containing the letters "r" or "l" are being pronounced incorrectly. For example, the word prvý is…
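Until the voice itself is corrected, a typical stopgap is an inline IPA override via SSML's `<phoneme>` element. A sketch of the fragment builder (the IPA transcription /ˈpr̩viː/ for *prvý* is an assumption to verify against a Slovak dictionary):

```python
def with_phoneme(word: str, ipa: str) -> str:
    """Return an SSML fragment forcing `word` to be read with the given
    IPA transcription, overriding the voice's default pronunciation."""
    return f'<phoneme alphabet="ipa" ph="{ipa}">{word}</phoneme>'

# IPA for "prvý" (syllabic r) is an assumption -- verify before use.
fragment = with_phoneme("prvý", "ˈpr̩viː")
```

For many affected words, a custom lexicon file referenced via `<lexicon>` may scale better than per-word overrides.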
Custom Speech training stuck on "processing" for 4 days
I trained four custom speech recognition models for the locales de-DE, fr-FR, en-GB, pl-PL (on Friday the 15th) in Speech Studio, but they have been stuck on "processing" since then. How can I solve this issue? It's really an urgent…
How to sign up for the STT service from NeuralSpace on Azure?
I want to sign up for the STT SaaS service. However, after configuring it on Azure, when I click "Configure Account" I am redirected to https://azmarketplace.neuralspace.ai/?token=X, where I get a 503 error with no option to ask for help there. How to…
How to identify filler words in Azure AI Speech
Hi team. Is there any feature in Azure Speech that can help us identify filler words? Please point me to the right documentation if there is any. Thanks, Sai Vishnu Soudri
Issues Accessing Azure Speech to Text REST API Version 2024-11-15
How can the latest Azure Speech to Text REST API, version 2024-11-15, be used? The documentation states this version is generally available, but an attempt to access the API following the migration guide results in a "resource not found" error.…
Azure AI Speech Studio TextToSpeech with voice "fr-FR-RemyMultilingualNeural" shows "Error 400 Synthesis failed. StatusCode: NotFound"
Hello, So we've started getting the following error: "HTTPError: 400 Client Error: Synthesis failed. StatusCode: NotFound, Details: service does not exist: service endpoint…
Are there any avatars besides the Asian figures for text to speech? How do I access them?
I would like the option to select different avatar figures besides the Asian ones shown.
Project Collaborator Cannot Access Voices in Speech Playground Voice Gallery
An AI studio resource and project have been created to experiment with text-to-speech functionality. While I have access to all sample voices in the voice gallery, my collaborator assigned as an Owner on both the resource and the project cannot see any…
Pronunciation Assessment fails to recognize several individual words
Several single words fail to process when using the Pronunciation Assessment service in our code or in the portal tool. Many do work but we're not able to determine why some words work and others don't. We use this service in a classroom setting to…
Real-time recognizer not using semantic segmentation
Hi all! I'm using the Azure speechsdk.SpeechRecognizer to transcribe streamed real-time audio. The transcription works, but continuous talking results in large paragraphs being output rather than sentence-by-sentence results. I included the…
[nnnn].word.json file not found in results when wordBoundaryEnabled: true
I am using Azure AI Batch Synthesis, something like this …
Audio to Audio translation
All of the information shows how to do speech to text OR text to speech. Supposedly Microsoft Azure can do speech to speech, generating an AI voice that sounds natural to the person delivering the message but delivers it in a different language. Where is…
Bug Report: Mispronunciation of Isolated Hungarian Words in Azure Neural TTS (hu-HU-NoemiNeural), but not in context
Description: The Azure Neural TTS system is mispronouncing specific Hungarian words when using the hu-HU-NoemiNeural voice. The issue affects more than half of the vocabulary words in a recent production run of words (full SSML shared at bottom of this…
AI text-to-speech is misreading a word in Catalan (tomàquets) but it reads perfectly its singular form (tomàquet), can you fix it?
Hello, I am using the text-to-speech service with Catalan. The plural word tomàquets is not read properly, whereas the singular tomàquet is. The accent is misplaced. What can I do to get that fixed? Thank you,
Using Managed Identity to connect to Speech Service with NPM in Angular - Example needed
What are the steps to use a managed identity in an Angular application to connect to the Speech Service without API keys? An example implementation would be really helpful.
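For keyless auth, Microsoft's documentation describes passing a Microsoft Entra ID (AAD) access token to the Speech SDK in the form `aad#{resourceId}#{token}` (via `SpeechConfig.fromAuthorizationToken` in the JS SDK, or the `auth_token` parameter in Python). A minimal helper sketching the token-string format, in Python for brevity (the scope and exact format should be verified against the current docs):

```python
def speech_aad_token(resource_id: str, aad_token: str) -> str:
    """Build the authorization token string the Speech SDK expects when
    authenticating with Microsoft Entra ID instead of an API key.

    resource_id: the full ARM resource ID of the Speech resource.
    aad_token:   an access token for the Cognitive Services scope
                 (https://cognitiveservices.azure.com/.default), obtained
                 e.g. via MSAL in Angular or a managed identity server-side.
    """
    return f"aad#{resource_id}#{aad_token}"
```

In Angular the token would typically come from MSAL (or from a small backend using the managed identity), and the resulting string is handed to the SDK in place of a subscription key.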