Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

SpeechRecognitionEngine.AudioPosition Property

Gets the current location in the audio stream being generated by the device that is providing input to the SpeechRecognitionEngine.

Namespace:  Microsoft.Speech.Recognition
Assembly:  Microsoft.Speech (in Microsoft.Speech.dll)

Syntax

Visual Basic

'Declaration
Public ReadOnly Property AudioPosition As TimeSpan
    Get

'Usage
Dim instance As SpeechRecognitionEngine
Dim value As TimeSpan

value = instance.AudioPosition

C#

public TimeSpan AudioPosition { get; }

Property Value

Type: System.TimeSpan
The current location in the audio stream being generated by the input device.

Remarks

The AudioPosition property references the input device's position in its generated audio stream. By contrast, the RecognizerAudioPosition property references the recognizer's position within its audio input. These positions can differ. For example, if the recognizer has received input for which it has not yet generated a recognition result, the value of the RecognizerAudioPosition property is less than the value of the AudioPosition property.
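
The gap between the two positions can be observed directly. The following is a minimal sketch; the ReportRecognizerLag helper is illustrative, not part of the API, and it assumes a SpeechRecognitionEngine that has already been configured and started, as in the complete example below.

using System;
using Microsoft.Speech.Recognition;

static class AudioPositionSketch
{
  // Hypothetical helper: reports how far the recognizer's position trails
  // the input device's position. Call it from an event handler, such as
  // SpeechDetected, while recognition is running.
  public static void ReportRecognizerLag(SpeechRecognitionEngine recognizer)
  {
    TimeSpan devicePosition = recognizer.AudioPosition;               // device's position in its generated stream
    TimeSpan recognizerPosition = recognizer.RecognizerAudioPosition; // recognizer's position in its input

    // Audio the device has generated but the recognizer has not yet consumed.
    TimeSpan lag = devicePosition - recognizerPosition;

    Console.WriteLine("Device position:     " + devicePosition);
    Console.WriteLine("Recognizer position: " + recognizerPosition);
    Console.WriteLine("Recognizer lag:      " + lag);
  }
}

Because the recognizer typically processes audio slightly behind the device that generates it, the reported lag is usually small but nonzero while speech is being processed.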

Examples

The following example creates a simple grammar, constructs a Grammar object from it, and loads it into the SpeechRecognitionEngine. A handler for the SpeechDetected event writes the AudioPosition, RecognizerAudioPosition, and AudioLevel values to the console when the speech recognizer detects speech at its input.

using System;
using Microsoft.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    private static SpeechRecognitionEngine recognizer;
    public static void Main(string[] args)
    {
      // Initialize a SpeechRecognitionEngine object for US English.
      recognizer = new SpeechRecognitionEngine(
        new System.Globalization.CultureInfo("en-US"));
      recognizer.SetInputToDefaultAudioDevice();

      // Create a grammar for finding services in different cities.
      Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });
      Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });

      GrammarBuilder findServices = new GrammarBuilder("Find");
      findServices.Append(services);
      findServices.Append("near");
      findServices.Append(cities);

      // Create a Grammar object from the GrammarBuilder and load it to the recognizer.
      Grammar servicesGrammar = new Grammar(findServices);
      recognizer.LoadGrammarAsync(servicesGrammar);

      // Add handlers for events.
      recognizer.SpeechRecognized +=
        new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
      recognizer.SpeechDetected +=
        new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);

      // Start asynchronous recognition.
      recognizer.RecognizeAsync();
      Console.WriteLine("Starting asynchronous recognition...");

      // Keep the console window open.
      Console.ReadLine();
    }

    // Gather information about detected speech and write it to the console.
    static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
    {
      Console.WriteLine();
      Console.WriteLine("Speech detected:");
      Console.WriteLine("  Audio level: " + recognizer.AudioLevel);
      Console.WriteLine("  Audio position at the event: " + e.AudioPosition); 
      Console.WriteLine("  Current audio position: " + recognizer.AudioPosition);
      Console.WriteLine("  Current recognizer audio position: " + recognizer.RecognizerAudioPosition);
    }

    // Write the text of the recognition result to the console.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("\nSpeech recognized: " + e.Result.Text);

      // Add event handler code here.
    }
  }
}

See Also

Reference

SpeechRecognitionEngine Class

SpeechRecognitionEngine Members

Microsoft.Speech.Recognition Namespace