Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

SpeechRecognitionEngine.SetInputToWaveFile Method

Configures the SpeechRecognitionEngine object to receive input from a Waveform audio format (.wav) file.

Namespace:  Microsoft.Speech.Recognition
Assembly:  Microsoft.Speech (in Microsoft.Speech.dll)

Syntax

Visual Basic

'Declaration
Public Sub SetInputToWaveFile ( _
    path As String _
)

'Usage
Dim instance As SpeechRecognitionEngine
Dim path As String

instance.SetInputToWaveFile(path)

C#

public void SetInputToWaveFile(
    string path
)

Parameters

path
  Type: System.String
  The path of the Wave file to use as input to the recognizer.

Remarks

If the recognizer reaches the end of the input file during a recognition operation, the recognition operation finalizes with the available input. Any subsequent recognition operation can generate an exception unless you first update the input to the recognizer.
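
For example, the following sketch (not part of the original sample; the file names first.wav and second.wav are hypothetical placeholders) calls SetInputToWaveFile a second time to supply fresh input before another recognition pass.

using System;
using System.Globalization;
using Microsoft.Speech.Recognition;

class ResetInputSketch
{
  static void Main()
  {
    using (SpeechRecognitionEngine recognizer =
      new SpeechRecognitionEngine(new CultureInfo("en-US")))
    {
      recognizer.LoadGrammar(
        new Grammar(new GrammarBuilder("testing testing one two three")));

      // Recognize against the first file until its input is consumed.
      recognizer.SetInputToWaveFile(@"C:\test\first.wav");  // hypothetical path
      RecognitionResult first = recognizer.Recognize();
      Console.WriteLine(first != null ? first.Text : "(no result)");

      // The previous input is now exhausted; calling Recognize again without
      // new input can throw. Point the recognizer at new input first.
      recognizer.SetInputToWaveFile(@"C:\test\second.wav");  // hypothetical path
      RecognitionResult second = recognizer.Recognize();
      Console.WriteLine(second != null ? second.Text : "(no result)");
    }
  }
}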

The audio file can also be in Windows Media Audio (WMA) format, provided that both of the following conditions are met (a usage sketch follows the installation procedure below).

  • The WMA codec is available.

  • The target operating system has the appropriate Windows features installed or enabled.

    • A computer running Microsoft Windows Server 2003 x64 Editions must have Windows Media Format 9.5 Software Development Kit (SDK) x64 Edition installed.

    • A computer running Microsoft Windows Server 2008 or Windows Server 2008 R2 must have the Desktop Experience feature installed, as described in the following procedure.

    • Windows Vista and Windows 7 already have the equivalent of the Desktop Experience feature enabled by default.

To install the Desktop Experience feature (Windows Server 2008, Windows Server 2008 R2)

  1. On the Start menu, click Server Manager.

  2. In Server Manager, in the Features Summary pane, click Add Features.

  3. In the Select Features dialog box, select the Desktop Experience check box, and then click Install.
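
The call itself does not change for a WMA file; only the environment requirements above differ. The following sketch (the file name C:\test\testing123.wma is a hypothetical placeholder, and the broad exception handling is an assumption, since the exact exception raised when the codec is unavailable is not documented here) illustrates the pattern.

using System;
using System.Globalization;
using Microsoft.Speech.Recognition;

class WmaInputSketch
{
  static void Main()
  {
    using (SpeechRecognitionEngine recognizer =
      new SpeechRecognitionEngine(new CultureInfo("en-US")))
    {
      recognizer.LoadGrammar(
        new Grammar(new GrammarBuilder("testing testing one two three")));

      try
      {
        // SetInputToWaveFile also accepts a WMA file when the codec and
        // required Windows features are available.
        recognizer.SetInputToWaveFile(@"C:\test\testing123.wma");  // hypothetical path
        RecognitionResult result = recognizer.Recognize();
        Console.WriteLine(result != null ? result.Text : "(no result)");
      }
      catch (Exception ex)
      {
        // If the WMA codec or Desktop Experience feature is missing,
        // configuring or using the input can fail.
        Console.WriteLine("Could not use WMA input: {0}", ex.Message);
      }
    }
  }
}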

Examples

The following example shows part of a console application that demonstrates basic speech recognition. The example uses input from an audio file, testing123.wav, which contains the phrase "testing testing one two three". The example generates the following output.

Starting asynchronous recognition...
  Recognized text =  Testing testing 123
  End of stream encountered.
Done.

Press any key to exit...

using System;
using System.Globalization;
using System.IO;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;
using System.Threading;

namespace InputExamples
{
  class Program
  {
    // Indicate whether asynchronous recognition is complete.
    static bool completed;

    static void Main(string[] args)
    {
      using (SpeechRecognitionEngine recognizer =
        new SpeechRecognitionEngine(new CultureInfo("en-US")))
      {

        // Create a grammar, construct a Grammar object, and load it to the recognizer.
        GrammarBuilder test = new GrammarBuilder("Testing testing 1,2,3");
        Grammar testing = new Grammar(test);
        testing.Name = "123";

        recognizer.LoadGrammar(testing);

        // Configure the input to the recognizer.
        recognizer.SetInputToWaveFile(@"C:\test\testing123.wav");

        // Attach event handlers.
        recognizer.SpeechRecognized +=
          new EventHandler<SpeechRecognizedEventArgs>(
            SpeechRecognizedHandler);
        recognizer.RecognizeCompleted +=
          new EventHandler<RecognizeCompletedEventArgs>(
            RecognizeCompletedHandler);

        // Perform recognition of the whole file.
        Console.WriteLine("Starting asynchronous recognition...");
        completed = false;
        recognizer.RecognizeAsync(RecognizeMode.Multiple);

        while (!completed)
        {
          Thread.Sleep(333);
        }
        Console.WriteLine("Done.");
      }

      Console.WriteLine();
      Console.WriteLine("Press any key to exit...");
      Console.ReadKey();
    }

    // Handle the SpeechRecognized event.
    static void SpeechRecognizedHandler(
      object sender, SpeechRecognizedEventArgs e)
    {
      if (e.Result != null && e.Result.Text != null)
      {
        Console.WriteLine("  Recognized text =  {0}", e.Result.Text);
      }
      else
      {
        Console.WriteLine("  Recognized text not available.");
      }
    }

    // Handle the RecognizeCompleted event.
    static void RecognizeCompletedHandler(
      object sender, RecognizeCompletedEventArgs e)
    {
      if (e.Error != null)
      {
        Console.WriteLine("  Error encountered, {0}: {1}",
          e.Error.GetType().Name, e.Error.Message);
      }
      if (e.Cancelled)
      {
        Console.WriteLine("  Operation cancelled.");
      }
      if (e.InputStreamEnded)
      {
        Console.WriteLine("  End of stream encountered.");
      }

      completed = true;
    }
  }
}

See Also

Reference

SpeechRecognitionEngine Class

SpeechRecognitionEngine Members

Microsoft.Speech.Recognition Namespace

SetInputToAudioStream

SetInputToDefaultAudioDevice

SetInputToNull

SetInputToWaveStream

RecognizeCompleted