Language identification is used to identify languages spoken in audio when compared against a list of supported languages.
Language identification (LID) use cases include:
- Speech to text recognition, when you need to identify the language in an audio source and then transcribe it to text.
- Speech translation, when you need to identify the language in an audio source and then translate it to another language.
For speech recognition, the initial latency is higher with language identification. Only include this optional feature if you need it.
Set configuration options
Whether you use language identification with speech to text or with speech translation, there are some common concepts and configuration options.
Then you make a recognize once or continuous recognition request to the Speech service.
This article provides code snippets to describe the concepts. Links to complete samples for each use case are provided.
Candidate languages
You specify candidate languages with the AutoDetectSourceLanguageConfig object. You expect that at least one of the candidates is in the audio. You can include up to four languages for at-start LID, or up to 10 languages for continuous LID. The Speech service returns one of the candidate languages provided even if those languages aren't in the audio. For example, if fr-FR (French) and en-US (English) are provided as candidates but German is spoken, the service returns either fr-FR or en-US.
You must provide the full locale with a dash (-) separator, but language identification only uses one locale per base language. Don't include multiple locales (such as en-US and en-GB) for the same language.
var autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
auto autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
auto_detect_source_language_config = \
speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE", "zh-CN"));
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE", "zh-CN"]);
NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
[[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
For more information, see supported languages.
At-start and continuous language identification
The Speech service supports both at-start and continuous language identification (LID).
Note
Continuous language identification is only supported with Speech SDKs in C#, C++, Java (for speech to text only), JavaScript (for speech to text only), and Python.
- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio doesn't change. With at-start LID, a single language is detected and returned in less than 5 seconds.
- Continuous LID can identify multiple languages during the audio. Use continuous LID if the language in the audio could change. Continuous LID doesn't support changing languages within the same sentence. For example, if you're primarily speaking Spanish and insert some English words, it doesn't detect the language change per word.
You implement at-start LID or continuous LID by calling methods for recognize once or continuous recognition. Continuous LID is only supported with continuous recognition.
Recognize once or continuously
Speech recognition is done with recognition objects and operations. You make a request to the Speech service to recognize audio.
Note
Don't confuse recognition with identification. Recognition can be used with or without language identification.
Either call the recognize once method, or the start and stop continuous recognition methods. You choose from:
- Recognize once with at-start LID. Continuous LID isn't supported for recognize once.
- Continuous recognition with at-start LID.
- Continuous recognition with continuous LID.
The SpeechServiceConnection_LanguageIdMode property is only required for continuous LID. Without it, the Speech service defaults to at-start LID. The supported values are AtStart for at-start LID or Continuous for continuous LID.
// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
var result = await recognizer.RecognizeOnceAsync();
// Start and stop continuous recognition with At-start LID
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();
// Start and stop continuous recognition with Continuous LID
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();
// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
auto result = recognizer->RecognizeOnceAsync().get();
// Start and stop continuous recognition with At-start LID
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();
// Start and stop continuous recognition with Continuous LID
speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();
// Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
// Start and stop continuous recognition with At-start LID
recognizer.startContinuousRecognitionAsync().get();
recognizer.stopContinuousRecognitionAsync().get();
// Start and stop continuous recognition with Continuous LID
speechConfig.setProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
recognizer.startContinuousRecognitionAsync().get();
recognizer.stopContinuousRecognitionAsync().get();
# Recognize once with At-start LID. Continuous LID isn't supported for recognize once.
result = recognizer.recognize_once()
# Start and stop continuous recognition with At-start LID
recognizer.start_continuous_recognition()
recognizer.stop_continuous_recognition()
# Start and stop continuous recognition with Continuous LID
speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
recognizer.start_continuous_recognition()
recognizer.stop_continuous_recognition()
Use speech to text
You use speech to text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see the speech to text overview.
Note
Speech to text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech to text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, Java, JavaScript, and Python.
Currently, for speech to text recognition with continuous language identification, you must create a SpeechConfig from the wss://{region}.stt.speech.microsoft.com/speech/universal/v2 endpoint string, as shown in the code examples. In a future SDK release you won't need to set it.
See more examples of speech to text recognition with language identification on GitHub.
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey","YourServiceRegion");
var autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig.FromLanguages(
new string[] { "en-US", "de-DE", "zh-CN" });
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using (var recognizer = new SpeechRecognizer(
speechConfig,
autoDetectSourceLanguageConfig,
audioConfig))
{
var speechRecognitionResult = await recognizer.RecognizeOnceAsync();
var autoDetectSourceLanguageResult =
AutoDetectSourceLanguageResult.FromResult(speechRecognitionResult);
var detectedLanguage = autoDetectSourceLanguageResult.Language;
}
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
var region = "YourServiceRegion";
// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
var endpointUrl = new Uri(endpointString);
var config = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
var stopRecognition = new TaskCompletionSource<int>();
using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
{
using (var recognizer = new SpeechRecognizer(config, autoDetectSourceLanguageConfig, audioInput))
{
// Subscribes to events.
recognizer.Recognizing += (s, e) =>
{
if (e.Result.Reason == ResultReason.RecognizingSpeech)
{
Console.WriteLine($"RECOGNIZING: Text={e.Result.Text}");
var autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
Console.WriteLine($"DETECTED: Language={autoDetectSourceLanguageResult.Language}");
}
};
recognizer.Recognized += (s, e) =>
{
if (e.Result.Reason == ResultReason.RecognizedSpeech)
{
Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
var autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.FromResult(e.Result);
Console.WriteLine($"DETECTED: Language={autoDetectSourceLanguageResult.Language}");
}
else if (e.Result.Reason == ResultReason.NoMatch)
{
Console.WriteLine($"NOMATCH: Speech could not be recognized.");
}
};
recognizer.Canceled += (s, e) =>
{
Console.WriteLine($"CANCELED: Reason={e.Reason}");
if (e.Reason == CancellationReason.Error)
{
Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
}
stopRecognition.TrySetResult(0);
};
recognizer.SessionStarted += (s, e) =>
{
Console.WriteLine("\n Session started event.");
};
recognizer.SessionStopped += (s, e) =>
{
Console.WriteLine("\n Session stopped event.");
Console.WriteLine("\nStop recognition.");
stopRecognition.TrySetResult(0);
};
// Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
// Waits for completion.
// Use Task.WaitAny to keep the task rooted.
Task.WaitAny(new[] { stopRecognition.Task });
// Stops recognition.
await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
}
}
See more examples of speech to text recognition with language identification on GitHub.
using namespace std;
using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;
auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey","YourServiceRegion");
auto autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
auto recognizer = SpeechRecognizer::FromConfig(
speechConfig,
autoDetectSourceLanguageConfig
);
auto speechRecognitionResult = recognizer->RecognizeOnceAsync().get();
auto autoDetectSourceLanguageResult =
AutoDetectSourceLanguageResult::FromResult(speechRecognitionResult);
auto detectedLanguage = autoDetectSourceLanguageResult->Language;
// Creates an instance of a speech config with specified subscription key and service region.
// Note: For multi-lingual speech recognition with language id, it only works with speech v2 endpoint,
// you must use FromEndpoint api in order to use the speech v2 endpoint.
// Replace YourServiceRegion with your region, for example "westus", and
// replace YourSubscriptionKey with your own speech key.
string speechv2Endpoint = "wss://YourServiceRegion.stt.speech.microsoft.com/speech/universal/v2";
auto speechConfig = SpeechConfig::FromEndpoint(speechv2Endpoint, "YourSubscriptionKey");
// Set the mode of input language detection to either "AtStart" (the default) or "Continuous".
// Please refer to the documentation of Language ID for more information.
// https://aka.ms/speech/lid?pivots=programming-language-cpp
speechConfig->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
// Define the set of languages to detect
auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "zh-CN" });
// Creates a speech recognizer using file as audio input.
// Replace with your own audio file name.
auto audioInput = AudioConfig::FromWavFileInput("en-us_zh-cn.wav");
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioInput);
// promise for synchronization of recognition end.
promise<void> recognitionEnd;
// Subscribes to events.
recognizer->Recognizing.Connect([](const SpeechRecognitionEventArgs& e)
{
auto lidResult = AutoDetectSourceLanguageResult::FromResult(e.Result);
cout << "Recognizing in " << lidResult->Language << ": Text =" << e.Result->Text << std::endl;
});
recognizer->Recognized.Connect([](const SpeechRecognitionEventArgs& e)
{
if (e.Result->Reason == ResultReason::RecognizedSpeech)
{
auto lidResult = AutoDetectSourceLanguageResult::FromResult(e.Result);
cout << "RECOGNIZED in " << lidResult->Language << ": Text=" << e.Result->Text << "\n"
<< " Offset=" << e.Result->Offset() << "\n"
<< " Duration=" << e.Result->Duration() << std::endl;
}
else if (e.Result->Reason == ResultReason::NoMatch)
{
cout << "NOMATCH: Speech could not be recognized." << std::endl;
}
});
recognizer->Canceled.Connect([&recognitionEnd](const SpeechRecognitionCanceledEventArgs& e)
{
cout << "CANCELED: Reason=" << (int)e.Reason << std::endl;
if (e.Reason == CancellationReason::Error)
{
cout << "CANCELED: ErrorCode=" << (int)e.ErrorCode << "\n"
<< "CANCELED: ErrorDetails=" << e.ErrorDetails << "\n"
<< "CANCELED: Did you update the subscription info?" << std::endl;
recognitionEnd.set_value(); // Notify to stop recognition.
}
});
recognizer->SessionStopped.Connect([&recognitionEnd](const SessionEventArgs& e)
{
cout << "Session stopped.";
recognitionEnd.set_value(); // Notify to stop recognition.
});
// Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
recognizer->StartContinuousRecognitionAsync().get();
// Waits for recognition end.
recognitionEnd.get_future().get();
// Stops recognition.
recognizer->StopContinuousRecognitionAsync().get();
See more examples of speech to text recognition with language identification on GitHub.
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE"));
SpeechRecognizer recognizer = new SpeechRecognizer(
speechConfig,
autoDetectSourceLanguageConfig,
audioConfig);
Future<SpeechRecognitionResult> future = recognizer.recognizeOnceAsync();
SpeechRecognitionResult result = future.get(30, TimeUnit.SECONDS);
AutoDetectSourceLanguageResult autoDetectSourceLanguageResult =
AutoDetectSourceLanguageResult.fromResult(result);
String detectedLanguage = autoDetectSourceLanguageResult.getLanguage();
recognizer.close();
speechConfig.close();
autoDetectSourceLanguageConfig.close();
audioConfig.close();
result.close();
// Shows how to do continuous speech recognition on a multilingual audio file with continuous language detection. Here, we assume the
// spoken language in the file can alternate between English (US), Spanish (Mexico) and German.
// If specified, speech recognition will use the custom model associated with the detected language.
public static void continuousRecognitionFromFileWithContinuousLanguageDetectionWithCustomModels() throws InterruptedException, ExecutionException, IOException
{
// Continuous language detection with speech recognition requires the application to set a V2 endpoint URL.
// Replace the service (Azure) region with your own service region (e.g. "westus").
String v2EndpointUrl = "wss://" + "YourServiceRegion" + ".stt.speech.microsoft.com/speech/universal/v2";
// Creates an instance of a speech config with specified endpoint URL and subscription key. Replace with your own subscription key.
SpeechConfig speechConfig = SpeechConfig.fromEndpoint(URI.create(v2EndpointUrl), "YourSubscriptionKey");
// Change the default from at-start language detection to continuous language detection, since the spoken language in the audio
// may change.
speechConfig.setProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
// Define a set of expected spoken languages in the audio, with an optional custom model endpoint ID associated with each.
// Update the below with your own languages. Please see https://docs.microsoft.com/azure/cognitive-services/speech-service/language-support
// for all supported languages.
// Update the below with your own custom model endpoint IDs, or omit it if you want to use the standard model.
List<SourceLanguageConfig> sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
sourceLanguageConfigs.add(SourceLanguageConfig.fromLanguage("en-US", "YourEnUsCustomModelID"));
sourceLanguageConfigs.add(SourceLanguageConfig.fromLanguage("es-MX", "YourEsMxCustomModelID"));
sourceLanguageConfigs.add(SourceLanguageConfig.fromLanguage("de-DE"));
// Creates an instance of AutoDetectSourceLanguageConfig with the above 3 source language configurations.
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs(sourceLanguageConfigs);
// We provide a WAV file with English and Spanish utterances as an example. Replace with your own multilingual audio file name.
AudioConfig audioConfig = AudioConfig.fromWavFileInput( "es-mx_en-us.wav");
// Creates a speech recognizer using file as audio input and the AutoDetectSourceLanguageConfig
SpeechRecognizer speechRecognizer = new SpeechRecognizer(speechConfig, autoDetectSourceLanguageConfig, audioConfig);
// Semaphore used to signal the call to stop continuous recognition (following either a session ended or a cancelled event)
final Semaphore doneSemaphone = new Semaphore(0);
// Subscribes to events.
/* Uncomment this to see intermediate recognition results. Since this is verbose and the WAV file is long, it is commented out by default in this sample.
speechRecognizer.recognizing.addEventListener((s, e) -> {
AutoDetectSourceLanguageResult autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.fromResult(e.getResult());
String language = autoDetectSourceLanguageResult.getLanguage();
System.out.println(" RECOGNIZING: Text = " + e.getResult().getText());
System.out.println(" RECOGNIZING: Language = " + language);
});
*/
speechRecognizer.recognized.addEventListener((s, e) -> {
AutoDetectSourceLanguageResult autoDetectSourceLanguageResult = AutoDetectSourceLanguageResult.fromResult(e.getResult());
String language = autoDetectSourceLanguageResult.getLanguage();
if (e.getResult().getReason() == ResultReason.RecognizedSpeech) {
System.out.println(" RECOGNIZED: Text = " + e.getResult().getText());
System.out.println(" RECOGNIZED: Language = " + language);
}
else if (e.getResult().getReason() == ResultReason.NoMatch) {
if (language == null || language.isEmpty() || language.toLowerCase().equals("unknown")) {
System.out.println(" NOMATCH: Speech Language could not be detected.");
}
else {
System.out.println(" NOMATCH: Speech could not be recognized.");
}
}
});
speechRecognizer.canceled.addEventListener((s, e) -> {
System.out.println(" CANCELED: Reason = " + e.getReason());
if (e.getReason() == CancellationReason.Error) {
System.out.println(" CANCELED: ErrorCode = " + e.getErrorCode());
System.out.println(" CANCELED: ErrorDetails = " + e.getErrorDetails());
System.out.println(" CANCELED: Did you update the subscription info?");
}
doneSemaphone.release();
});
speechRecognizer.sessionStarted.addEventListener((s, e) -> {
System.out.println("\n Session started event.");
});
speechRecognizer.sessionStopped.addEventListener((s, e) -> {
System.out.println("\n Session stopped event.");
doneSemaphone.release();
});
// Starts continuous recognition and wait for processing to end
System.out.println(" Recognizing from WAV file... please wait");
speechRecognizer.startContinuousRecognitionAsync().get();
doneSemaphone.tryAcquire(30, TimeUnit.SECONDS);
// Stop continuous recognition
speechRecognizer.stopContinuousRecognitionAsync().get();
// These objects must be closed in order to dispose underlying native resources
speechRecognizer.close();
speechConfig.close();
audioConfig.close();
for (SourceLanguageConfig sourceLanguageConfig : sourceLanguageConfigs)
{
sourceLanguageConfig.close();
}
autoDetectSourceLanguageConfig.close();
}
See more examples of speech to text recognition with language identification on GitHub.
auto_detect_source_language_config = \
speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE"])
speech_recognizer = speechsdk.SpeechRecognizer(
speech_config=speech_config,
auto_detect_source_language_config=auto_detect_source_language_config,
audio_config=audio_config)
result = speech_recognizer.recognize_once()
auto_detect_source_language_result = speechsdk.AutoDetectSourceLanguageResult(result)
detected_language = auto_detect_source_language_result.language
import azure.cognitiveservices.speech as speechsdk
import time
import json
speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
weatherfilename="en-us_zh-cn.wav"
# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
speech_config = speechsdk.SpeechConfig(subscription=speech_key, endpoint=endpoint_string)
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
languages=["en-US", "de-DE", "zh-CN"])
speech_recognizer = speechsdk.SpeechRecognizer(
speech_config=speech_config,
auto_detect_source_language_config=auto_detect_source_language_config,
audio_config=audio_config)
done = False
def stop_cb(evt):
"""callback that signals to stop continuous recognition upon receiving an event `evt`"""
print('CLOSING on {}'.format(evt))
nonlocal done
done = True
# Connect callbacks to the events fired by the speech recognizer
speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
# stop continuous recognition on either session stopped or canceled events
speech_recognizer.session_stopped.connect(stop_cb)
speech_recognizer.canceled.connect(stop_cb)
# Start continuous speech recognition
speech_recognizer.start_continuous_recognition()
while not done:
time.sleep(.5)
speech_recognizer.stop_continuous_recognition()
NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
[[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
SPXSpeechRecognizer* speechRecognizer = \
[[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
autoDetectSourceLanguageConfiguration:autoDetectSourceLanguageConfig
audioConfiguration:audioConfig];
SPXSpeechRecognitionResult *result = [speechRecognizer recognizeOnce];
SPXAutoDetectSourceLanguageResult *languageDetectionResult = [[SPXAutoDetectSourceLanguageResult alloc] init:result];
NSString *detectedLanguage = [languageDetectionResult language];
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE"]);
var speechRecognizer = SpeechSDK.SpeechRecognizer.FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioConfig);
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult) => {
var languageDetectionResult = SpeechSDK.AutoDetectSourceLanguageResult.fromResult(result);
var detectedLanguage = languageDetectionResult.language;
},
{});
Speech to text custom models
Note
Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for default base models.
This sample shows how to use language detection with a custom endpoint. If the detected language is en-US, then the default model is used. If the detected language is fr-FR, then the custom model endpoint is used. For more information, see Deploy a Custom Speech model.
var sourceLanguageConfigs = new SourceLanguageConfig[]
{
SourceLanguageConfig.FromLanguage("en-US"),
SourceLanguageConfig.FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR")
};
var autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig.FromSourceLanguageConfigs(
sourceLanguageConfigs);
This sample shows how to use language detection with a custom endpoint. If the detected language is en-US, then the default model is used. If the detected language is fr-FR, then the custom model endpoint is used. For more information, see Deploy a Custom Speech model.
std::vector<std::shared_ptr<SourceLanguageConfig>> sourceLanguageConfigs;
sourceLanguageConfigs.push_back(
SourceLanguageConfig::FromLanguage("en-US"));
sourceLanguageConfigs.push_back(
SourceLanguageConfig::FromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));
auto autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig::FromSourceLanguageConfigs(
sourceLanguageConfigs);
This sample shows how to use language detection with a custom endpoint. If the detected language is en-US, then the default model is used. If the detected language is fr-FR, then the custom model endpoint is used. For more information, see Deploy a Custom Speech model.
List<SourceLanguageConfig> sourceLanguageConfigs = new ArrayList<SourceLanguageConfig>();
sourceLanguageConfigs.add(
SourceLanguageConfig.fromLanguage("en-US"));
sourceLanguageConfigs.add(
SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR"));
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs(
sourceLanguageConfigs);
This sample shows how to use language detection with a custom endpoint. If the detected language is en-US, then the default model is used. If the detected language is fr-FR, then the custom model endpoint is used. For more information, see Deploy a Custom Speech model.
en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
fr_language_config = speechsdk.languageconfig.SourceLanguageConfig("fr-FR", "The Endpoint Id for custom model of fr-FR")
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
sourceLanguageConfigs=[en_language_config, fr_language_config])
This sample shows how to use language detection with a custom endpoint. If the detected language is en-US, then the default model is used. If the detected language is fr-FR, then the custom model endpoint is used. For more information, see Deploy a Custom Speech model.
SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
SPXSourceLanguageConfiguration* frLanguageConfig = \
[[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"fr-FR"
endpointId:@"The Endpoint Id for custom model of fr-FR"];
NSArray *languageConfigs = @[enLanguageConfig, frLanguageConfig];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
[[SPXAutoDetectSourceLanguageConfiguration alloc]initWithSourceLanguageConfigurations:languageConfigs];
var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US");
var frLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("fr-FR", "The Endpoint Id for custom model of fr-FR");
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromSourceLanguageConfigs([enLanguageConfig, frLanguageConfig]);
Run speech translation
Use speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see the speech translation overview.
Note
Speech translation with language identification is only supported with Speech SDKs in C#, C++, and Python.
Currently, for speech translation with language identification, you must create a SpeechConfig from the wss://{region}.stt.speech.microsoft.com/speech/universal/v2 endpoint string, as shown in the code examples. In a future SDK release you won't need to set it.
See more examples of speech translation with language identification on GitHub.
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;
public static async Task RecognizeOnceSpeechTranslationAsync()
{
var region = "YourServiceRegion";
// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
var endpointUrl = new Uri(endpointString);
var speechTranslationConfig = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
// Source language is required, but currently ignored.
string fromLanguage = "en-US";
speechTranslationConfig.SpeechRecognitionLanguage = fromLanguage;
speechTranslationConfig.AddTargetLanguage("de");
speechTranslationConfig.AddTargetLanguage("fr");
var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using (var recognizer = new TranslationRecognizer(
speechTranslationConfig,
autoDetectSourceLanguageConfig,
audioConfig))
{
Console.WriteLine("Say something or read from file...");
var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
if (result.Reason == ResultReason.TranslatedSpeech)
{
var lidResult = result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={result.Text}");
foreach (var element in result.Translations)
{
Console.WriteLine($" TRANSLATED into '{element.Key}': {element.Value}");
}
}
}
}
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;
public static async Task MultiLingualTranslation()
{
var region = "YourServiceRegion";
// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/v2";
var endpointUrl = new Uri(endpointString);
var config = SpeechTranslationConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
// Source language is required, but currently ignored.
string fromLanguage = "en-US";
config.SpeechRecognitionLanguage = fromLanguage;
config.AddTargetLanguage("de");
config.AddTargetLanguage("fr");
// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
config.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
var stopTranslation = new TaskCompletionSource<int>();
using (var audioInput = AudioConfig.FromWavFileInput(@"en-us_zh-cn.wav"))
{
using (var recognizer = new TranslationRecognizer(config, autoDetectSourceLanguageConfig, audioInput))
{
recognizer.Recognizing += (s, e) =>
{
var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
Console.WriteLine($"RECOGNIZING in '{lidResult}': Text={e.Result.Text}");
foreach (var element in e.Result.Translations)
{
Console.WriteLine($" TRANSLATING into '{element.Key}': {element.Value}");
}
};
recognizer.Recognized += (s, e) => {
if (e.Result.Reason == ResultReason.TranslatedSpeech)
{
var lidResult = e.Result.Properties.GetProperty(PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult);
Console.WriteLine($"RECOGNIZED in '{lidResult}': Text={e.Result.Text}");
foreach (var element in e.Result.Translations)
{
Console.WriteLine($" TRANSLATED into '{element.Key}': {element.Value}");
}
}
else if (e.Result.Reason == ResultReason.RecognizedSpeech)
{
Console.WriteLine($"RECOGNIZED: Text={e.Result.Text}");
Console.WriteLine($" Speech not translated.");
}
else if (e.Result.Reason == ResultReason.NoMatch)
{
Console.WriteLine($"NOMATCH: Speech could not be recognized.");
}
};
recognizer.Canceled += (s, e) =>
{
Console.WriteLine($"CANCELED: Reason={e.Reason}");
if (e.Reason == CancellationReason.Error)
{
Console.WriteLine($"CANCELED: ErrorCode={e.ErrorCode}");
Console.WriteLine($"CANCELED: ErrorDetails={e.ErrorDetails}");
Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
}
stopTranslation.TrySetResult(0);
};
recognizer.SpeechStartDetected += (s, e) => {
Console.WriteLine("\nSpeech start detected event.");
};
recognizer.SpeechEndDetected += (s, e) => {
Console.WriteLine("\nSpeech end detected event.");
};
recognizer.SessionStarted += (s, e) => {
Console.WriteLine("\nSession started event.");
};
recognizer.SessionStopped += (s, e) => {
Console.WriteLine("\nSession stopped event.");
Console.WriteLine($"\nStop translation.");
stopTranslation.TrySetResult(0);
};
// Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
Console.WriteLine("Start translation...");
await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
Task.WaitAny(new[] { stopTranslation.Task });
await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
}
}
}
See more examples of speech translation with language identification on GitHub.
auto region = "YourServiceRegion";
// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE" });
// Sets source and target languages
// The source language will be detected by the language detection feature.
// However, the SpeechRecognitionLanguage still need to set with a locale string, but it will not be used as the source language.
// This will be fixed in a future version of Speech SDK.
auto fromLanguage = "en-US";
config->SetSpeechRecognitionLanguage(fromLanguage);
config->AddTargetLanguage("de");
config->AddTargetLanguage("fr");
// Creates a translation recognizer using microphone as audio input.
auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig);
cout << "Say something...\n";
// Starts translation, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognized text as well as the translation.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
auto result = recognizer->RecognizeOnceAsync().get();
// Checks result.
if (result->Reason == ResultReason::TranslatedSpeech)
{
cout << "RECOGNIZED: Text=" << result->Text << std::endl;
for (const auto& it : result->Translations)
{
cout << "TRANSLATED into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
}
}
else if (result->Reason == ResultReason::RecognizedSpeech)
{
cout << "RECOGNIZED: Text=" << result->Text << " (text could not be translated)" << std::endl;
}
else if (result->Reason == ResultReason::NoMatch)
{
cout << "NOMATCH: Speech could not be recognized." << std::endl;
}
else if (result->Reason == ResultReason::Canceled)
{
auto cancellation = CancellationDetails::FromResult(result);
cout << "CANCELED: Reason=" << (int)cancellation->Reason << std::endl;
if (cancellation->Reason == CancellationReason::Error)
{
cout << "CANCELED: ErrorCode=" << (int)cancellation->ErrorCode << std::endl;
cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
}
}
using namespace std;
using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;
using namespace Microsoft::CognitiveServices::Speech::Translation;
void MultiLingualTranslation()
{
auto region = "YourServiceRegion";
// Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
auto endpointString = std::format("wss://{}.stt.speech.microsoft.com/speech/universal/v2", region);
auto config = SpeechTranslationConfig::FromEndpoint(endpointString, "YourSubscriptionKey");
// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
config->SetProperty(PropertyId::SpeechServiceConnection_LanguageIdMode, "Continuous");
auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
promise<void> recognitionEnd;
// Source language is required, but currently ignored.
auto fromLanguage = "en-US";
config->SetSpeechRecognitionLanguage(fromLanguage);
config->AddTargetLanguage("de");
config->AddTargetLanguage("fr");
auto audioInput = AudioConfig::FromWavFileInput("whatstheweatherlike.wav");
auto recognizer = TranslationRecognizer::FromConfig(config, autoDetectSourceLanguageConfig, audioInput);
recognizer->Recognizing.Connect([](const TranslationRecognitionEventArgs& e)
{
std::string lidResult = e.Result->Properties.GetProperty(PropertyId::SpeechServiceConnection_AutoDetectSourceLanguageResult);
cout << "Recognizing in Language = "<< lidResult << ":" << e.Result->Text << std::endl;
for (const auto& it : e.Result->Translations)
{
cout << " Translated into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
}
});
recognizer->Recognized.Connect([](const TranslationRecognitionEventArgs& e)
{
if (e.Result->Reason == ResultReason::TranslatedSpeech)
{
std::string lidResult = e.Result->Properties.GetProperty(PropertyId::SpeechServiceConnection_AutoDetectSourceLanguageResult);
cout << "RECOGNIZED in Language = " << lidResult << ": Text=" << e.Result->Text << std::endl;
}
else if (e.Result->Reason == ResultReason::RecognizedSpeech)
{
cout << "RECOGNIZED: Text=" << e.Result->Text << " (text could not be translated)" << std::endl;
}
else if (e.Result->Reason == ResultReason::NoMatch)
{
cout << "NOMATCH: Speech could not be recognized." << std::endl;
}
for (const auto& it : e.Result->Translations)
{
cout << " Translated into '" << it.first.c_str() << "': " << it.second.c_str() << std::endl;
}
});
recognizer->Canceled.Connect([&recognitionEnd](const TranslationRecognitionCanceledEventArgs& e)
{
cout << "CANCELED: Reason=" << (int)e.Reason << std::endl;
if (e.Reason == CancellationReason::Error)
{
cout << "CANCELED: ErrorCode=" << (int)e.ErrorCode << std::endl;
cout << "CANCELED: ErrorDetails=" << e.ErrorDetails << std::endl;
cout << "CANCELED: Did you set the speech resource key and region values?" << std::endl;
recognitionEnd.set_value();
}
});
recognizer->Synthesizing.Connect([](const TranslationSynthesisEventArgs& e)
{
auto size = e.Result->Audio.size();
cout << "Translation synthesis result: size of audio data: " << size
<< (size == 0 ? "(END)" : "");
});
recognizer->SessionStopped.Connect([&recognitionEnd](const SessionEventArgs& e)
{
cout << "Session stopped.";
recognitionEnd.set_value();
});
// Starts continuous recognition. Use StopContinuousRecognitionAsync() to stop recognition.
recognizer->StartContinuousRecognitionAsync().get();
recognitionEnd.get_future().get();
recognizer->StopContinuousRecognitionAsync().get();
}
See more examples of speech translation with language identification on GitHub.
import azure.cognitiveservices.speech as speechsdk
import time
import json
speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
weatherfilename="en-us_zh-cn.wav"
# set up translation parameters: source language and target languages
# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
translation_config = speechsdk.translation.SpeechTranslationConfig(
subscription=speech_key,
endpoint=endpoint_string,
speech_recognition_language='en-US',
target_languages=('de', 'fr'))
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
# Creates a translation recognizer using an audio file as input.
recognizer = speechsdk.translation.TranslationRecognizer(
translation_config=translation_config,
audio_config=audio_config,
auto_detect_source_language_config=auto_detect_source_language_config)
# Starts translation, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. The task returns the recognition text as result.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
result = recognizer.recognize_once()
# Check the result
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
print("""Recognized: {}
German translation: {}
French translation: {}""".format(
result.text, result.translations['de'], result.translations['fr']))
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("Recognized: {}".format(result.text))
detectedSrcLang = result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
print("Detected Language: {}".format(detectedSrcLang))
elif result.reason == speechsdk.ResultReason.NoMatch:
print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
print("Translation canceled: {}".format(result.cancellation_details.reason))
if result.cancellation_details.reason == speechsdk.CancellationReason.Error:
print("Error details: {}".format(result.cancellation_details.error_details))
import azure.cognitiveservices.speech as speechsdk
import time
import json
speech_key, service_region = "YourSubscriptionKey","YourServiceRegion"
weatherfilename="en-us_zh-cn.wav"
# Currently the v2 endpoint is required. In a future SDK release you won't need to set it.
endpoint_string = "wss://{}.stt.speech.microsoft.com/speech/universal/v2".format(service_region)
translation_config = speechsdk.translation.SpeechTranslationConfig(
subscription=speech_key,
endpoint=endpoint_string,
speech_recognition_language='en-US',
target_languages=('de', 'fr'))
audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
# Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, value='Continuous')
# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
# Creates a translation recognizer using an audio file as input.
recognizer = speechsdk.translation.TranslationRecognizer(
translation_config=translation_config,
audio_config=audio_config,
auto_detect_source_language_config=auto_detect_source_language_config)
def result_callback(event_type, evt):
"""callback to display a translation result"""
print("{}: {}\n\tTranslations: {}\n\tResult Json: {}".format(
event_type, evt, evt.result.translations.items(), evt.result.json))
done = False
def stop_cb(evt):
"""callback that signals to stop continuous recognition upon receiving an event `evt`"""
print('CLOSING on {}'.format(evt))
nonlocal done
done = True
# connect callback functions to the events fired by the recognizer
recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
# event for intermediate results
recognizer.recognizing.connect(lambda evt: result_callback('RECOGNIZING', evt))
# event for final result
recognizer.recognized.connect(lambda evt: result_callback('RECOGNIZED', evt))
# cancellation event
recognizer.canceled.connect(lambda evt: print('CANCELED: {} ({})'.format(evt, evt.reason)))
# stop continuous recognition on either session stopped or canceled events
recognizer.session_stopped.connect(stop_cb)
recognizer.canceled.connect(stop_cb)
def synthesis_callback(evt):
"""
callback for the synthesis event
"""
print('SYNTHESIZING {}\n\treceived {} bytes of audio. Reason: {}'.format(
evt, len(evt.result.audio), evt.result.reason))
if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
print("RECOGNIZED: {}".format(evt.result.properties))
if evt.result.properties.get(speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult) == None:
print("Unable to detect any language")
else:
detectedSrcLang = evt.result.properties[speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult]
jsonResult = evt.result.properties[speechsdk.PropertyId.SpeechServiceResponse_JsonResult]
detailResult = json.loads(jsonResult)
startOffset = detailResult['Offset']
duration = detailResult['Duration']
if duration >= 0:
endOffset = duration + startOffset
else:
endOffset = 0
print("Detected language = " + detectedSrcLang + ", startOffset = " + str(startOffset) + " nanoseconds, endOffset = " + str(endOffset) + " nanoseconds, Duration = " + str(duration) + " nanoseconds.")
global language_detected
language_detected = True
# connect callback to the synthesis event
recognizer.synthesizing.connect(synthesis_callback)
# start translation
recognizer.start_continuous_recognition()
while not done:
time.sleep(.5)
recognizer.stop_continuous_recognition()
Run and use a container
Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use a container, you need to change the initialization method: use a container host URL instead of a key and region.
When you run language ID in a container, use the SourceLanguageRecognizer object instead of SpeechRecognizer or TranslationRecognizer.
For more information about containers, see the language identification speech containers how-to guide.
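As an illustration, here's a minimal Python sketch of that initialization: it builds the speech configuration from a container host instead of a key and region, and uses a SourceLanguageRecognizer for standalone language identification. The host ws://localhost:5003, the candidate languages, and the file name sample.wav are placeholder assumptions; substitute values that match your own container and audio.
import azure.cognitiveservices.speech as speechsdk

# Placeholder host and port for this sketch; point this at your own container.
speech_config = speechsdk.SpeechConfig(host="ws://localhost:5003")
# Candidate languages for identification (placeholders; adjust as needed).
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])
# Placeholder audio file name.
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

# Use SourceLanguageRecognizer instead of SpeechRecognizer or TranslationRecognizer in a container.
recognizer = speechsdk.SourceLanguageRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_source_language_config,
    audio_config=audio_config)
result = recognizer.recognize_once()
detected_language = speechsdk.AutoDetectSourceLanguageResult(result).language
print("Detected language: {}".format(detected_language))
The same pattern applies in the other SDK languages: create the speech configuration from the container host URL, and pass it to SourceLanguageRecognizer together with the candidate language configuration.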
Implement speech to text batch transcription
To identify languages with the Batch transcription REST API, use the languageIdentification property in the body of your Transcriptions_Create request.
Warning
Batch transcription only supports language identification for default base models. If both language identification and a custom model are specified in the transcription request, the service falls back to use the base models for the specified candidate languages. This might result in unexpected recognition results.
If your speech to text scenario requires both language identification and custom models, use real-time speech to text instead of batch transcription.
The following example shows the usage of the languageIdentification property with four candidate languages. For more information about request properties, see Create a batch transcription.
{
<...>
"properties": {
<...>
"languageIdentification": {
"candidateLocales": [
"en-US",
"ja-JP",
"zh-CN",
"hi-IN"
]
},
<...>
}
}
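For illustration only, here's a Python sketch that posts a Transcriptions_Create request containing the languageIdentification property shown above. The v3.1 REST path and the placeholder region, key, display name, and audio URL are assumptions; replace them with your own values and the API version you target.
import requests

# Placeholder values for this sketch; replace them with your own.
region = "YourServiceRegion"
subscription_key = "YourSubscriptionKey"
url = "https://{}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions".format(region)

body = {
    "displayName": "Batch transcription with language identification",
    "locale": "en-US",  # fallback locale for the transcription job
    "contentUrls": ["https://example.com/multilingual-audio.wav"],  # placeholder audio URL
    "properties": {
        "languageIdentification": {
            "candidateLocales": ["en-US", "ja-JP", "zh-CN", "hi-IN"]
        }
    }
}

# Create the transcription job; the response includes the transcription ID and its status URL.
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": subscription_key}, json=body)
print(response.status_code)
print(response.json())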
Related content