ActivationSignalDetectionConfiguration.ApplyTrainingDataAsync Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Asynchronously provides input data in the specified format and attempts to complete a training step (if a training process is available for the signal detector of this configuration).
C++/CLI
public:
 virtual IAsyncOperation<DetectionConfigurationTrainingStatus> ^ ApplyTrainingDataAsync(ActivationSignalDetectionTrainingDataFormat trainingDataFormat, IInputStream ^ trainingData) = ApplyTrainingDataAsync;
C++/WinRT
[Windows.Foundation.Metadata.RemoteAsync]
IAsyncOperation<DetectionConfigurationTrainingStatus> ApplyTrainingDataAsync(ActivationSignalDetectionTrainingDataFormat const& trainingDataFormat, IInputStream const& trainingData);
C#
[Windows.Foundation.Metadata.RemoteAsync]
public IAsyncOperation<DetectionConfigurationTrainingStatus> ApplyTrainingDataAsync(ActivationSignalDetectionTrainingDataFormat trainingDataFormat, IInputStream trainingData);
JavaScript
function applyTrainingDataAsync(trainingDataFormat, trainingData)
Visual Basic
Public Function ApplyTrainingDataAsync (trainingDataFormat As ActivationSignalDetectionTrainingDataFormat, trainingData As IInputStream) As IAsyncOperation(Of DetectionConfigurationTrainingStatus)
Parameters
- trainingDataFormat
- ActivationSignalDetectionTrainingDataFormat
The format of the voice training data; must be one of the ActivationSignalDetectionTrainingDataFormat values supported by the ActivationSignalDetector for the digital assistant.
- trainingData
- IInputStream
The voice training data.
Returns
An asynchronous operation that, on completion, returns the DetectionConfigurationTrainingStatus reported by the ActivationSignalDetector for the digital assistant.
Attributes
RemoteAsyncAttribute
Remarks
Digital assistant applications can train keyword detectors to recognize an individual user's voice more accurately by algorithmically applying customizations to the detector based on speech data. For example, a spoken keyword detector can be trained to detect the keyword only when it is spoken by a specific person.
This is achieved through a series of ActivationSignalDetectionConfiguration training steps, where each step consumes a logical fragment of speech input data.
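The multi-step training loop described above could be sketched as follows. This is a minimal, unverified sketch: the configuration object, the fragment files, and the specific DetectionConfigurationTrainingStatus values checked (Completed, InProgress) are assumptions for illustration; consult the DetectionConfigurationTrainingStatus enumeration for the full set of statuses your detector may report.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.ApplicationModel.ConversationalAgent;
using Windows.Storage;
using Windows.Storage.Streams;

static class KeywordTraining
{
    // Hypothetical helper: feeds speech fragments to the detector one step at
    // a time until training completes, more data is requested, or a failure
    // status is reported. "config" and "fragmentFiles" are assumed inputs.
    public static async Task<bool> TrainDetectorAsync(
        ActivationSignalDetectionConfiguration config,
        IEnumerable<StorageFile> fragmentFiles)
    {
        foreach (StorageFile fragment in fragmentFiles)
        {
            // Each file holds one logical fragment of speech input data.
            using IInputStream stream = await fragment.OpenSequentialReadAsync();

            DetectionConfigurationTrainingStatus status =
                await config.ApplyTrainingDataAsync(
                    ActivationSignalDetectionTrainingDataFormat.Voice, stream);

            if (status == DetectionConfigurationTrainingStatus.Completed)
            {
                return true;  // training finished; no further fragments needed
            }
            if (status != DetectionConfigurationTrainingStatus.InProgress)
            {
                return false; // unsupported format or other failure status
            }
            // InProgress: the detector expects another training step.
        }
        return false; // fragments exhausted before training completed
    }
}
```

A caller would typically gather the fragments from a guided enrollment flow (prompting the user to repeat the keyword several times) before invoking a helper like this.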