SpeechRecognitionEngine.RecognizeAsync Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Starts an asynchronous speech recognition operation.
Overloads
RecognizeAsync() | Performs a single, asynchronous speech recognition operation.
RecognizeAsync(RecognizeMode) | Performs one or more asynchronous speech recognition operations.
Remarks
These methods perform single or multiple asynchronous recognition operations. The recognizer performs each operation against its loaded and enabled speech recognition grammars.
During a call to this method, the recognizer can raise the following events:
SpeechDetected. Raised when the recognizer detects input that it can identify as speech.
SpeechHypothesized. Raised when input creates an ambiguous match with one of the active grammars.
SpeechRecognitionRejected or SpeechRecognized. Raised when the recognizer finalizes a recognition operation.
RecognizeCompleted. Raised when the RecognizeAsync operation completes.
To retrieve the result of an asynchronous recognition operation, attach an event handler to the recognizer's SpeechRecognized event. The recognizer raises this event whenever it successfully completes a synchronous or asynchronous recognition operation. If recognition was not successful, the Result property on the RecognizeCompletedEventArgs object, which you can access in the handler for the RecognizeCompleted event, will be null.
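A minimal sketch of this pattern follows (assuming recognizer is a SpeechRecognitionEngine with at least one grammar loaded and audio input configured; the handlers shown are illustrative only):
// Sketch only: attach result handlers, then start an asynchronous operation.
recognizer.SpeechRecognized += (sender, e) =>
{
    // Raised for each successful recognition.
    Console.WriteLine("Recognized: {0}", e.Result.Text);
};
recognizer.RecognizeCompleted += (sender, e) =>
{
    // Result is null when recognition was not successful.
    if (e.Result == null)
    {
        Console.WriteLine("Recognition was not successful.");
    }
};
recognizer.RecognizeAsync();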
An asynchronous recognition operation can fail for the following reasons:
Speech is not detected before the timeout interval expires for the BabbleTimeout or InitialSilenceTimeout property.
The recognizer detects speech but finds no matches in any of its loaded and enabled Grammar objects.
The SpeechRecognitionEngine must have at least one Grammar object loaded before it performs recognition. To load a speech recognition grammar, use the LoadGrammar or LoadGrammarAsync method.
To modify how the recognizer handles the timing of speech or silence with respect to recognition, use the BabbleTimeout, InitialSilenceTimeout, EndSilenceTimeout, and EndSilenceTimeoutAmbiguous properties, as shown in the sketch below.
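For example, a minimal sketch of adjusting these properties (the values shown are illustrative assumptions, not recommendations):
// Illustrative timeout values; tune them for your application.
recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(5);
recognizer.BabbleTimeout = TimeSpan.FromSeconds(3);
recognizer.EndSilenceTimeout = TimeSpan.FromSeconds(1);
recognizer.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(1.5);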
To perform synchronous recognition, use one of the Recognize methods.
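A minimal sketch of the synchronous counterpart (assuming the same configured recognizer):
// Blocks until a recognition attempt completes; returns null if nothing was recognized.
RecognitionResult result = recognizer.Recognize();
if (result != null)
{
    Console.WriteLine("Recognized: {0}", result.Text);
}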
RecognizeAsync()
- Source:
- SpeechRecognitionEngine.cs
Performs a single, asynchronous speech recognition operation.
public:
void RecognizeAsync();
public void RecognizeAsync ();
member this.RecognizeAsync : unit -> unit
Public Sub RecognizeAsync ()
Examples
The following example shows part of a console application that demonstrates basic asynchronous speech recognition. The example creates a grammar for choosing the cities for a flight, loads it into an in-process speech recognizer, and performs one asynchronous recognition operation. Event handlers are included to demonstrate the events that the recognizer raises during the operation.
using System;
using System.Globalization;
using System.Speech.Recognition;
using System.Threading;
namespace AsynchronousRecognition
{
class Program
{
// Indicate whether asynchronous recognition is complete.
static bool completed;
static void Main(string[] args)
{
// Create an in-process speech recognizer.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
// Create a grammar for choosing cities for a flight.
Choices cities = new Choices(new string[]
{ "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });
GrammarBuilder gb = new GrammarBuilder();
gb.Append("I want to fly from");
gb.Append(cities);
gb.Append("to");
gb.Append(cities);
// Construct a Grammar object and load it to the recognizer.
Grammar cityChooser = new Grammar(gb);
cityChooser.Name = ("City Chooser");
recognizer.LoadGrammarAsync(cityChooser);
// Attach event handlers.
recognizer.SpeechDetected +=
new EventHandler<SpeechDetectedEventArgs>(
SpeechDetectedHandler);
recognizer.SpeechHypothesized +=
new EventHandler<SpeechHypothesizedEventArgs>(
SpeechHypothesizedHandler);
recognizer.SpeechRecognitionRejected +=
new EventHandler<SpeechRecognitionRejectedEventArgs>(
SpeechRecognitionRejectedHandler);
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(
SpeechRecognizedHandler);
recognizer.RecognizeCompleted +=
new EventHandler<RecognizeCompletedEventArgs>(
RecognizeCompletedHandler);
// Assign input to the recognizer and start an asynchronous
// recognition operation.
recognizer.SetInputToDefaultAudioDevice();
completed = false;
Console.WriteLine("Starting asynchronous recognition...");
recognizer.RecognizeAsync();
// Wait for the operation to complete.
while (!completed)
{
Thread.Sleep(333);
}
Console.WriteLine("Done.");
}
Console.WriteLine();
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
}
// Handle the SpeechDetected event.
static void SpeechDetectedHandler(object sender, SpeechDetectedEventArgs e)
{
Console.WriteLine(" In SpeechDetectedHandler:");
Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
}
// Handle the SpeechHypothesized event.
static void SpeechHypothesizedHandler(
object sender, SpeechHypothesizedEventArgs e)
{
Console.WriteLine(" In SpeechHypothesizedHandler:");
string grammarName = "<not available>";
string resultText = "<not available>";
if (e.Result != null)
{
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name;
}
resultText = e.Result.Text;
}
Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
grammarName, resultText);
}
// Handle the SpeechRecognitionRejected event.
static void SpeechRecognitionRejectedHandler(
object sender, SpeechRecognitionRejectedEventArgs e)
{
Console.WriteLine(" In SpeechRecognitionRejectedHandler:");
string grammarName = "<not available>";
string resultText = "<not available>";
if (e.Result != null)
{
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name;
}
resultText = e.Result.Text;
}
Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
grammarName, resultText);
}
// Handle the SpeechRecognized event.
static void SpeechRecognizedHandler(
object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine(" In SpeechRecognizedHandler.");
string grammarName = "<not available>";
string resultText = "<not available>";
if (e.Result != null)
{
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name;
}
resultText = e.Result.Text;
}
Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
grammarName, resultText);
}
// Handle the RecognizeCompleted event.
static void RecognizeCompletedHandler(
object sender, RecognizeCompletedEventArgs e)
{
Console.WriteLine(" In RecognizeCompletedHandler.");
if (e.Error != null)
{
Console.WriteLine(
" - Error occurred during recognition: {0}", e.Error);
return;
}
if (e.InitialSilenceTimeout || e.BabbleTimeout)
{
Console.WriteLine(
" - BabbleTimeout = {0}; InitialSilenceTimeout = {1}",
e.BabbleTimeout, e.InitialSilenceTimeout);
return;
}
if (e.InputStreamEnded)
{
Console.WriteLine(
" - AudioPosition = {0}; InputStreamEnded = {1}",
e.AudioPosition, e.InputStreamEnded);
}
if (e.Result != null)
{
Console.WriteLine(
" - Grammar = {0}; Text = {1}; Confidence = {2}",
e.Result.Grammar.Name, e.Result.Text, e.Result.Confidence);
Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
}
else
{
Console.WriteLine(" - No result.");
}
completed = true;
}
}
}
Remarks
This method performs a single, asynchronous recognition operation. The recognizer performs the operation against its loaded and enabled speech recognition grammars.
During a call to this method, the recognizer can raise the following events:
SpeechDetected. Raised when the recognizer detects input that it can identify as speech.
SpeechHypothesized. Raised when input creates an ambiguous match with one of the active grammars.
SpeechRecognitionRejected or SpeechRecognized. Raised when the recognizer finalizes a recognition operation.
RecognizeCompleted. Raised when the RecognizeAsync operation completes.
To retrieve the result of an asynchronous recognition operation, attach an event handler to the recognizer's SpeechRecognized event. The recognizer raises this event whenever it successfully completes a synchronous or asynchronous recognition operation. If recognition was not successful, the Result property on the RecognizeCompletedEventArgs object, which you can access in the handler for the RecognizeCompleted event, will be null.
To perform synchronous recognition, use one of the Recognize methods.
This method stores in the task it returns all non-usage exceptions that the method's synchronous counterpart can throw. If an exception is stored into the returned task, that exception will be thrown when the task is awaited. Usage exceptions, such as ArgumentException, are still thrown synchronously. For the stored exceptions, see the exceptions thrown by Recognize().
See also
- BabbleTimeout
- InitialSilenceTimeout
- EndSilenceTimeout
- EndSilenceTimeoutAmbiguous
- SpeechDetected
- SpeechHypothesized
- SpeechRecognitionRejected
- SpeechRecognized
- RecognizeCompleted
- Recognize()
- EmulateRecognizeAsync(String)
Applies to
RecognizeAsync(RecognizeMode)
- Source:
- SpeechRecognitionEngine.cs
Performs one or more asynchronous speech recognition operations.
public:
void RecognizeAsync(System::Speech::Recognition::RecognizeMode mode);
public void RecognizeAsync (System.Speech.Recognition.RecognizeMode mode);
member this.RecognizeAsync : System.Speech.Recognition.RecognizeMode -> unit
Public Sub RecognizeAsync (mode As RecognizeMode)
Parameters
- mode
- RecognizeMode
Indicates whether to perform one or more recognition operations.
Examples
The following example shows part of a console application that demonstrates basic asynchronous speech recognition. The example creates a grammar for choosing the cities for a flight, loads it into an in-process speech recognizer, and performs multiple asynchronous recognition operations. The asynchronous operation is canceled after 30 seconds. Event handlers are included to demonstrate the events that the recognizer raises during the operations.
using System;
using System.Globalization;
using System.Speech.Recognition;
using System.Threading;
namespace AsynchronousRecognition
{
class Program
{
// Indicate whether asynchronous recognition is complete.
static bool completed;
static void Main(string[] args)
{
// Create an in-process speech recognizer.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
// Create a grammar for choosing cities for a flight.
Choices cities = new Choices(new string[] { "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });
GrammarBuilder gb = new GrammarBuilder();
gb.Append("I want to fly from");
gb.Append(cities);
gb.Append("to");
gb.Append(cities);
// Construct a Grammar object and load it to the recognizer.
Grammar cityChooser = new Grammar(gb);
cityChooser.Name = ("City Chooser");
recognizer.LoadGrammarAsync(cityChooser);
// Attach event handlers.
recognizer.SpeechDetected +=
new EventHandler<SpeechDetectedEventArgs>(
SpeechDetectedHandler);
recognizer.SpeechHypothesized +=
new EventHandler<SpeechHypothesizedEventArgs>(
SpeechHypothesizedHandler);
recognizer.SpeechRecognitionRejected +=
new EventHandler<SpeechRecognitionRejectedEventArgs>(
SpeechRecognitionRejectedHandler);
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(
SpeechRecognizedHandler);
recognizer.RecognizeCompleted +=
new EventHandler<RecognizeCompletedEventArgs>(
RecognizeCompletedHandler);
// Assign input to the recognizer and start asynchronous
// recognition.
recognizer.SetInputToDefaultAudioDevice();
completed = false;
Console.WriteLine("Starting asynchronous recognition...");
recognizer.RecognizeAsync(RecognizeMode.Multiple);
// Wait 30 seconds, and then cancel asynchronous recognition.
Thread.Sleep(TimeSpan.FromSeconds(30));
recognizer.RecognizeAsyncCancel();
// Wait for the operation to complete.
while (!completed)
{
Thread.Sleep(333);
}
Console.WriteLine("Done.");
}
Console.WriteLine();
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
}
// Handle the SpeechDetected event.
static void SpeechDetectedHandler(object sender, SpeechDetectedEventArgs e)
{
Console.WriteLine(" In SpeechDetectedHandler:");
Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
}
// Handle the SpeechHypothesized event.
static void SpeechHypothesizedHandler(
object sender, SpeechHypothesizedEventArgs e)
{
Console.WriteLine(" In SpeechHypothesizedHandler:");
string grammarName = "<not available>";
string resultText = "<not available>";
if (e.Result != null)
{
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name;
}
resultText = e.Result.Text;
}
Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
grammarName, resultText);
}
// Handle the SpeechRecognitionRejected event.
static void SpeechRecognitionRejectedHandler(
object sender, SpeechRecognitionRejectedEventArgs e)
{
Console.WriteLine(" In SpeechRecognitionRejectedHandler:");
string grammarName = "<not available>";
string resultText = "<not available>";
if (e.Result != null)
{
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name;
}
resultText = e.Result.Text;
}
Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
grammarName, resultText);
}
// Handle the SpeechRecognized event.
static void SpeechRecognizedHandler(
object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine(" In SpeechRecognizedHandler.");
string grammarName = "<not available>";
string resultText = "<not available>";
if (e.Result != null)
{
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name;
}
resultText = e.Result.Text;
}
Console.WriteLine(" - Grammar Name = {0}; Result Text = {1}",
grammarName, resultText);
}
// Handle the RecognizeCompleted event.
static void RecognizeCompletedHandler(
object sender, RecognizeCompletedEventArgs e)
{
Console.WriteLine(" In RecognizeCompletedHandler.");
if (e.Error != null)
{
Console.WriteLine(
" - Error occurred during recognition: {0}", e.Error);
return;
}
if (e.InitialSilenceTimeout || e.BabbleTimeout)
{
Console.WriteLine(
" - BabbleTimeout = {0}; InitialSilenceTimeout = {1}",
e.BabbleTimeout, e.InitialSilenceTimeout);
return;
}
if (e.InputStreamEnded)
{
Console.WriteLine(
" - AudioPosition = {0}; InputStreamEnded = {1}",
e.AudioPosition, e.InputStreamEnded);
}
if (e.Result != null)
{
Console.WriteLine(
" - Grammar = {0}; Text = {1}; Confidence = {2}",
e.Result.Grammar.Name, e.Result.Text, e.Result.Confidence);
Console.WriteLine(" - AudioPosition = {0}", e.AudioPosition);
}
else
{
Console.WriteLine(" - No result.");
}
completed = true;
}
}
}
Remarks
If mode is Multiple, the recognizer continues performing asynchronous recognition operations until the RecognizeAsyncCancel or RecognizeAsyncStop method is called, as in the sketch below.
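A minimal sketch of continuous recognition with an explicit stop (assuming a configured recognizer; the stop point is illustrative):
// Start continuous recognition; the recognizer keeps recognizing until told to stop.
recognizer.RecognizeAsync(RecognizeMode.Multiple);
// ... later: let any in-progress recognition finish, then stop,
// or call recognizer.RecognizeAsyncCancel() to terminate immediately.
recognizer.RecognizeAsyncStop();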
During a call to this method, the recognizer can raise the following events:
SpeechDetected. Raised when the recognizer detects input that it can identify as speech.
SpeechHypothesized. Raised when input creates an ambiguous match with one of the active grammars.
SpeechRecognitionRejected or SpeechRecognized. Raised when the recognizer finalizes a recognition operation.
RecognizeCompleted. Raised when the RecognizeAsync operation completes.
To retrieve the result of an asynchronous recognition operation, attach an event handler to the recognizer's SpeechRecognized event. The recognizer raises this event whenever it successfully completes a synchronous or asynchronous recognition operation. If recognition was not successful, the Result property on the RecognizeCompletedEventArgs object, which you can access in the handler for the RecognizeCompleted event, will be null.
An asynchronous recognition operation can fail for the following reasons:
Speech is not detected before the timeout interval expires for the BabbleTimeout or InitialSilenceTimeout property.
The recognizer detects speech but finds no matches in any of its loaded and enabled Grammar objects.
To perform synchronous recognition, use one of the Recognize methods.
See also
- BabbleTimeout
- InitialSilenceTimeout
- EndSilenceTimeout
- EndSilenceTimeoutAmbiguous
- SpeechDetected
- SpeechHypothesized
- SpeechRecognitionRejected
- SpeechRecognized
- RecognizeCompleted
- Recognize()
- EmulateRecognizeAsync(String)