Note

Please see the Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

Add Content Using GrammarBuilder Methods (Microsoft.Speech)

An application can use one of the Append() overloaded methods to add content incrementally to an existing GrammarBuilder instance. A number of types can be appended to a GrammarBuilder instance, including a Choices instance, a SemanticResultKey instance, a SemanticResultValue instance, a String, or another GrammarBuilder instance.
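
For example, the following lines sketch how several of these types can be appended to a single GrammarBuilder instance. The phrase and the color names are illustrative only; they are not part of the samples later in this topic.

Choices colors = new Choices(new string[] { "red", "green", "blue" });

GrammarBuilder builder = new GrammarBuilder();
builder.Append("pick");                         // Append a String.
builder.Append(colors);                         // Append a Choices instance.
builder.Append(new GrammarBuilder("please"));   // Append another GrammarBuilder instance.

The resulting grammar matches phrases such as "pick red please".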

In addition to the Append() overloaded methods, the GrammarBuilder class provides five static Add() methods. Each of these methods returns a new GrammarBuilder instance created from its two arguments: a Choices instance and a GrammarBuilder instance (in either order), a String and a GrammarBuilder instance (in either order), or two GrammarBuilder instances. The GrammarBuilder class also defines five static overloads of the addition operator, which permit the same combinations of arguments as the Add() methods.
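
The following sketch (again with illustrative words only) builds the same two-element sequence three ways: with Append(), with the static Add() method, and with the addition operator.

Choices items = new Choices(new string[] { "file", "folder" });

// Using Append().
GrammarBuilder appended = new GrammarBuilder("open");
appended.Append(items);

// Using the static Add() method.
GrammarBuilder added = GrammarBuilder.Add(new GrammarBuilder("open"), items);

// Using the addition operator.
GrammarBuilder combined = new GrammarBuilder("open") + items;

All three GrammarBuilder instances produce grammars that match phrases such as "open file" or "open folder".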

The following example creates a grammar in a console application that recognizes input phrases such as "I would like to fly from New York to Dallas". The example first appends the string "I would like to fly from" to an empty GrammarBuilder instance, then appends the Choices instance cities, then the string "to", and finally appends the cities object a second time. A handler for the SpeechRecognized event can obtain the sentence spoken by the application user, but it cannot parse the component parts of the sentence, in particular the "from" and "to" cities.

using System;
using Microsoft.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    static void Main(string[] args)
    {
      // Initialize an in-process speech recognition engine.
      using (SpeechRecognitionEngine recognizer =
         new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
      {

        // Create a grammar.
        Choices cities = new Choices(new string[] { 
          "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

        GrammarBuilder gb = new GrammarBuilder();
        gb.Append("I want to fly from");
        gb.Append(cities);
        gb.Append("to");
        gb.Append(cities);

        // Create a Grammar object and load it into the recognizer.
        Grammar g = new Grammar(gb);
        g.Name = "City Chooser";
        recognizer.LoadGrammarAsync(g);

        // Attach a handler for the SpeechRecognized event.
        recognizer.SpeechRecognized +=
          new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

        // Set the input to the recognizer.
        recognizer.SetInputToDefaultAudioDevice();

        // Start recognition.
        recognizer.RecognizeAsync();
        Console.WriteLine("Starting asynchronous speech recognition... ");

        // Keep the console window open.
        Console.ReadLine();
      }
    }

    // Handle the SpeechRecognized event.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("Speech recognized: " + e.Result.Text);
    }
  }
}

The next example is nearly identical to the previous one, but presents a more sophisticated grammar that can parse out the departure city and the destination city. This example uses SemanticResultKey instances to associate the Choices instance (cities) with separate semantic keys that represent the departure city and the destination city. The example uses the Append() method to incorporate the SemanticResultKey objects into the GrammarBuilder instance. A handler for the SpeechRecognized event can extract information about the two cities involved in a flight by using the semantic keys "DepartureCity" and "DestinationCity". For more information, see Add Semantics to a GrammarBuilder Grammar (Microsoft.Speech) and Use a SemanticResultKey to Extract a SemanticResultValue (Microsoft.Speech).

using System;
using Microsoft.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    static void Main(string[] args)
    {
      // Initialize an in-process speech recognition engine.
      using (SpeechRecognitionEngine recognizer =
         new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
      {

        // Create a grammar.
        Choices cities = new Choices(new string[] { 
          "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

        GrammarBuilder gb = new GrammarBuilder();
        gb.Append("I would like to fly from");
        gb.Append(new SemanticResultKey("DepartureCity", cities));
        gb.Append("to");
        gb.Append(new SemanticResultKey("DestinationCity", cities));

        // Create a Grammar object and load it into the recognizer.
        Grammar g = new Grammar(gb);
        g.Name = "City Chooser";
        recognizer.LoadGrammarAsync(g);

        // Attach a handler for the SpeechRecognized event.
        recognizer.SpeechRecognized +=
          new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

        // Set the input to the recognizer.
        recognizer.SetInputToDefaultAudioDevice();

        // Start recognition.
        recognizer.RecognizeAsync();
        Console.WriteLine("Starting asynchronous speech recognition... ");

        // Keep the console window open.
        Console.ReadLine();
      }
    }

    // Handle the SpeechRecognized event.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("Speech recognized: " + e.Result.Text);
      Console.WriteLine();
      Console.WriteLine("Semantic results:");
      Console.WriteLine("  The departure city is: " + 
        e.Result.Semantics["DepartureCity"].Value);
      Console.WriteLine("  The destination city is: " + 
        e.Result.Semantics["DestinationCity"].Value);
    }
  }
}
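
Because the handler above indexes into e.Result.Semantics directly, it assumes that both semantic keys are always present in the result. A minimal defensive variant (a sketch, not part of the original sample) can verify each key before reading its value, because SemanticValue provides a ContainsKey method:

    // A defensive version of the handler; checks that both semantic keys
    // are present before accessing their values.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("Speech recognized: " + e.Result.Text);

      if (e.Result.Semantics.ContainsKey("DepartureCity") &&
          e.Result.Semantics.ContainsKey("DestinationCity"))
      {
        Console.WriteLine("  The departure city is: " +
          e.Result.Semantics["DepartureCity"].Value);
        Console.WriteLine("  The destination city is: " +
          e.Result.Semantics["DestinationCity"].Value);
      }
    }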

See Also

Concepts

Create Grammars Using GrammarBuilder (Microsoft.Speech)