System.Speech.Recognition Namespace

Contains Windows Desktop Speech technology types for implementing speech recognition.

Classes

AudioLevelUpdatedEventArgs

Provides data for the AudioLevelUpdated event of the SpeechRecognizer or the SpeechRecognitionEngine class.

AudioSignalProblemOccurredEventArgs

Provides data for the AudioSignalProblemOccurred event of a SpeechRecognizer or a SpeechRecognitionEngine.

AudioStateChangedEventArgs

Provides data for the AudioStateChanged event of the SpeechRecognizer or the SpeechRecognitionEngine class.

Choices

Represents a set of alternatives in the constraints of a speech recognition grammar.

DictationGrammar

Represents a speech recognition grammar used for free text dictation.

EmulateRecognizeCompletedEventArgs

Provides data for the EmulateRecognizeCompleted event of the SpeechRecognizer and SpeechRecognitionEngine classes.

Grammar

A runtime object that references a speech recognition grammar, which an application can use to define the constraints for speech recognition.

GrammarBuilder

Provides a mechanism for programmatically building the constraints for a speech recognition grammar.

LoadGrammarCompletedEventArgs

Provides data for the LoadGrammarCompleted event of a SpeechRecognizer or SpeechRecognitionEngine object.

RecognitionEventArgs

Provides information about speech recognition events.

RecognitionResult

Contains detailed information about input that was recognized by instances of SpeechRecognitionEngine or SpeechRecognizer.

RecognizeCompletedEventArgs

Provides data for the RecognizeCompleted event raised by a SpeechRecognitionEngine or a SpeechRecognizer object.

RecognizedAudio

Represents audio input that is associated with a RecognitionResult.

RecognizedPhrase

Contains detailed information, generated by the speech recognizer, about the recognized input.

RecognizedWordUnit

Provides the atomic unit of recognized speech.

RecognizerInfo

Represents information about a SpeechRecognizer or SpeechRecognitionEngine instance.

RecognizerUpdateReachedEventArgs

Returns data from a SpeechRecognizer.RecognizerUpdateReached or a SpeechRecognitionEngine.RecognizerUpdateReached event.

ReplacementText

Contains information about a speech normalization procedure that has been performed on recognition results.

SemanticResultKey

Associates a key string with SemanticResultValue values to define SemanticValue objects.

SemanticResultValue

Represents a semantic value and optionally associates the value with a component of a speech recognition grammar.

SemanticValue

Represents the semantic organization of a recognized phrase.

SpeechDetectedEventArgs

Returns data from the SpeechRecognizer.SpeechDetected or SpeechRecognitionEngine.SpeechDetected events.

SpeechHypothesizedEventArgs

Returns notification from the SpeechRecognizer.SpeechHypothesized or SpeechRecognitionEngine.SpeechHypothesized events.

This class supports the .NET Framework infrastructure and is not intended to be used directly from application code.

SpeechRecognitionEngine

Provides the means to access and manage an in-process speech recognition engine.

SpeechRecognitionRejectedEventArgs

Provides information for the SpeechRecognizer.SpeechRecognitionRejected and SpeechRecognitionEngine.SpeechRecognitionRejected events.

SpeechRecognizedEventArgs

Provides information for the SpeechRecognizer.SpeechRecognized, SpeechRecognitionEngine.SpeechRecognized, and Grammar.SpeechRecognized events.

SpeechRecognizer

Provides access to the shared speech recognition service available on the Windows desktop.

SpeechUI

Provides text and status information on recognition operations to be displayed in the Speech platform user interface.

StateChangedEventArgs

Returns data from the StateChanged event.

Enums

AudioSignalProblem

Contains a list of possible problems in the audio signal coming in to a speech recognition engine.

AudioState

Contains a list of possible states for the audio input to a speech recognition engine.

DisplayAttributes

Lists the options that the SpeechRecognitionEngine object can use to specify white space for the display of a word or punctuation mark.

RecognizeMode

Enumerates values of the recognition mode.

RecognizerState

Enumerates values of the recognizer's state.

SubsetMatchingMode

Enumerates values of subset matching mode.

Remarks

The Windows Desktop Speech Technology software offers a basic speech recognition infrastructure that digitizes acoustical signals, and recovers words and speech elements from audio input.

Applications use the System.Speech.Recognition namespace to access and extend this basic speech recognition technology by defining algorithms for identifying and acting on specific phrases or word patterns, and by managing the runtime behavior of this speech infrastructure.

Create Grammars

You create grammars, which consist of a set of rules or constraints, to define words and phrases that your application will recognize as meaningful input. Using a constructor for the Grammar class, you can create a grammar object at runtime from GrammarBuilder or SrgsDocument instances, or from a file, a string, or a stream that contains a definition of a grammar.

Using the GrammarBuilder and Choices classes, you can programmatically create grammars of low to medium complexity that can be used to perform recognition for many common scenarios. To create grammars programmatically that conform to the Speech Recognition Grammar Specification 1.0 (SRGS) and take advantage of the authoring flexibility of SRGS, use the types of the System.Speech.Recognition.SrgsGrammar namespace. You can also create XML-format SRGS grammars using any text editor and use the result to create GrammarBuilder, SrgsDocument, or Grammar objects.
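For example, a small command grammar can be sketched as follows; the phrase and the color values are hypothetical, chosen only to illustrate how Choices and GrammarBuilder combine into a Grammar object.

```csharp
using System.Speech.Recognition;

class GrammarExample
{
    static void Main()
    {
        // Hypothetical command pattern: "set background to <color>".
        Choices colors = new Choices(new string[] { "red", "green", "blue" });
        GrammarBuilder builder = new GrammarBuilder("set background to");
        builder.Append(colors);

        // Wrap the constraints in a runtime Grammar object.
        Grammar colorGrammar = new Grammar(builder);
        colorGrammar.Name = "backgroundColor";
    }
}
```

The resulting Grammar object can then be passed to the LoadGrammar method of a SpeechRecognizer or SpeechRecognitionEngine instance.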

In addition, the DictationGrammar class provides a special-case grammar to support a conventional dictation model.
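As a sketch, a default dictation grammar and its spelling-oriented variant might be constructed like this:

```csharp
using System.Speech.Recognition;

class DictationExample
{
    static void Main()
    {
        // Default dictation grammar for free-text input.
        DictationGrammar dictation = new DictationGrammar();

        // Dictation grammar that emphasizes letter-by-letter spelling.
        DictationGrammar spelling = new DictationGrammar("grammar:dictation#spelling");
    }
}
```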

See Create Grammars in the System Speech Programming Guide for .NET Framework for more information and examples.

Manage Speech Recognition Engines

Instances of SpeechRecognizer and SpeechRecognitionEngine supplied with Grammar objects provide the primary access to the speech recognition engines of the Windows Desktop Speech Technology.

You can use the SpeechRecognizer class to create client applications that use the speech recognition technology provided by Windows, which you can configure through the Control Panel. Such applications accept input through a computer's default audio input mechanism.
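A minimal sketch of such a client application follows; the handler body is illustrative only.

```csharp
using System;
using System.Speech.Recognition;

class SharedRecognizerExample
{
    static void Main()
    {
        // The shared recognizer uses the audio input and settings
        // configured in the Windows Speech Recognition Control Panel.
        SpeechRecognizer recognizer = new SpeechRecognizer();
        recognizer.LoadGrammar(new DictationGrammar());
        recognizer.SpeechRecognized += (sender, e) =>
            Console.WriteLine("You said: " + e.Result.Text);
    }
}
```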

For more control over the configuration and type of recognition engine, build an application using SpeechRecognitionEngine, which runs in-process. Using the SpeechRecognitionEngine class, you can also dynamically select audio input from devices, files, or streams.
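The in-process pattern can be sketched as follows; the culture and the .wav file path are hypothetical placeholders.

```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;

class InProcessExample
{
    static void Main()
    {
        // In-process engine for a specific recognizer culture.
        using (SpeechRecognitionEngine engine =
               new SpeechRecognitionEngine(new CultureInfo("en-US")))
        {
            // Input can come from a device, a file, or a stream;
            // this path is a hypothetical example.
            engine.SetInputToWaveFile(@"C:\audio\input.wav");
            engine.LoadGrammar(new DictationGrammar());

            RecognitionResult result = engine.Recognize();
            if (result != null)
                Console.WriteLine(result.Text);
        }
    }
}
```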

See Initialize and Manage a Speech Recognition Engine in the System Speech Programming Guide for .NET Framework for more information.

Respond to Events

SpeechRecognizer and SpeechRecognitionEngine objects generate events in response to audio input to the speech recognition engine. The AudioLevelUpdated, AudioSignalProblemOccurred, and AudioStateChanged events are raised in response to changes in the incoming signal. The SpeechDetected event is raised when the speech recognition engine identifies incoming audio as speech. The speech recognition engine raises the SpeechRecognized event when it matches speech input to one of its loaded grammars, and raises the SpeechRecognitionRejected event when speech input does not match any of its loaded grammars.

Other types of events include the LoadGrammarCompleted event, which a speech recognition engine raises when it has loaded a grammar. The StateChanged event is exclusive to the SpeechRecognizer class, which raises it when the state of Windows Speech Recognition changes.

You can register to be notified of the events that the speech recognition engine raises, and create handlers using the EventArgs classes associated with each of these events, to program your application's behavior when an event is raised.
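Wiring up handlers can be sketched as follows; the handler bodies are illustrative, and each receives the EventArgs class associated with its event.

```csharp
using System;
using System.Speech.Recognition;

class EventsExample
{
    static void Main()
    {
        SpeechRecognitionEngine engine = new SpeechRecognitionEngine();
        engine.SetInputToDefaultAudioDevice();
        engine.LoadGrammar(new DictationGrammar());

        engine.SpeechDetected += (sender, e) =>
            Console.WriteLine("Speech detected at " + e.AudioPosition);
        engine.SpeechRecognized += (sender, e) =>
            Console.WriteLine("Recognized: " + e.Result.Text);
        engine.SpeechRecognitionRejected += (sender, e) =>
            Console.WriteLine("Input did not match any loaded grammar.");

        // Recognize continuously until RecognizeAsyncStop is called.
        engine.RecognizeAsync(RecognizeMode.Multiple);
    }
}
```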

See Using Speech Recognition Events in the System Speech Programming Guide for .NET Framework for more information.

See also