
Integrating Cortana in your Universal Windows App using Visual Studio 2015

Microsoft introduced Cortana at the 2014 Build event. In just one year, Cortana has become one of the best personal assistants. Initially, users could only activate system commands with Cortana, but now users can activate any application with a foreground or background voice command. In this blog, let's see how to launch a Universal Windows app using Cortana.

Let's first understand what foreground and background voice commands are. With a foreground voice command, the application is launched by Cortana. With a background voice command, Cortana activates a background task of the application, and the results are shown in Cortana's canvas.

Foreground Voice Command Architecture

Step 1: The VCD (Voice Command Definition) file is an XML file which contains all the commands used to activate the app. When the user runs the app in the foreground, the VCD file gets installed in Cortana.

Step 2: The user can use any command defined in the VCD file to launch the app.

Steps 3, 4, 5, 6: The speech spoken by the user is sent to the Windows speech platform and the Microsoft speech recognition service in the cloud; together they identify what the user tried to say.

Step 7: Cortana receives the text spoken by the user, decides which application has to be launched, and passes the recognized string to the application's OnActivated event.

Making this happen involves three important steps:

  1. Create the voice commands in the VCD (Voice Command Definition) file
  2. Register the VCD XML file when the application starts
  3. Handle the voice command on app activation

Creating the VCD (Voice Command Definition) file is the first and foremost step.

Let's see what a VCD file is and how to create commands.

Step 1: Add an XML file to the solution and name it VoiceCommandDefinition.xml.

Step 2: Add a CommandSet element and specify the language the commands target.

E.g. "en-us" for English; this CommandSet language should match the language set on the user's device.

Step 3: Add a CommandPrefix to the CommandSet, which is simply a unique name you give your application. This name is used as a prefix or suffix to the voice command to activate the app.

E.g       <CommandPrefix> Universal Messenger, </CommandPrefix>

                 <Example> Text Sam "Hello World" </Example>

Step 4: Add a Command to the CommandSet and name it.

 E.g          <Command Name="showConversation">

Step 5: Add an example to the command, which will be displayed in the Cortana canvas.

E.g         <Example> show my conversation with Sam </Example>

Step 6: Add the ListenFor text, the phrase Cortana listens for. The ListenFor elements in the command may contain optional words, which are wrapped in square brackets; the user may or may not include them in the command.

E.g       <ListenFor> show [my] conversation with {user} </ListenFor>

Step 7: Add the Feedback text, which Cortana speaks when launching the app.

E.g      <Feedback> Showing conversation with {user} </Feedback>

Step 8: Finally, add a Navigate element (this element specifies that the command uses foreground activation).

E.g               <Navigate/>

 

A sample VCD file is shown below.

<?xml version="1.0" encoding="utf-8"?>

<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">

  <CommandSet xml:lang="en-us" Name="UniversalAppCommandSet_en-us">

    <CommandPrefix> Universal Messenger, </CommandPrefix>

    <Example> Text Sam "Hello World" </Example>

 

    <Command Name="showConversation">

      <Example>  show my conversation with sam  </Example>

      <ListenFor> show [my] conversation with {user} </ListenFor>

      <Feedback> Showing conversation with {user} </Feedback>

      <Navigate/>

    </Command>

 

    <PhraseList Label="user">

      <Item>Sam</Item>

      <Item>John</Item>

    </PhraseList>

  </CommandSet>

</VoiceCommands>

 

The PhraseList is a list of strings used along with the commands; the {user} placeholder in the ListenFor and Feedback elements refers to this list.
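Because the PhraseList in the XML is static, it can also be updated at runtime, for example when the user's contact list changes. A minimal sketch using the SetPhraseListAsync API, assuming the CommandSet name from the sample above; the contact names here are illustrative:

```csharp
using Windows.ApplicationModel.VoiceCommands;

// Look up the installed command set by the Name given in the VCD file.
VoiceCommandDefinition commandDefinition;
if (VoiceCommandDefinitionManager.InstalledCommandDefinitions
        .TryGetValue("UniversalAppCommandSet_en-us", out commandDefinition))
{
    // Replace the static "user" PhraseList with the current contact names.
    await commandDefinition.SetPhraseListAsync(
        "user", new[] { "Sam", "John", "Maria" });
}
```

This should run after the VCD file has been installed, for example at the end of OnLaunched.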

Now that the VCD file is created, we can register it with Cortana.

To install the VCD file we use the InstallCommandDefinitionsFromStorageFileAsync method, which takes the VCD file as a parameter and registers it with Cortana.

Add the following lines of code to the OnLaunched method in App.xaml.cs, so that Cortana can register and listen for the commands.

 var storageFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///VoiceCommandDefinition.xml"));

await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager.InstallCommandDefinitionsFromStorageFileAsync(storageFile);

Once the VCD file is installed in Cortana, the application name and the example text provided in the VCD file appear in Cortana's Help section. Now the user is ready to activate the app with a voice command.

The user opens Cortana and speaks the voice command; Cortana understands the command and activates the app.

In my sample application I have created a voice command to show the conversation with a given user. When I trigger the voice command, Cortana understands it and launches my application.

So far we have created the VCD file and registered it with Cortana, but we haven't handled the OnActivated event. We can still open our app using the command defined in the VCD file, but we cannot do anything specific to the command.

When we invoke the command, Cortana displays the application in the canvas.

 

The third and final step is to handle the voice command in the application's OnActivated event.

The IActivatedEventArgs has a property called Kind which specifies the kind of activation. We can check whether the application was launched by voice command or by other means; if it was launched by voice command, we need the text spoken by the user. To get that text, we cast the IActivatedEventArgs to VoiceCommandActivatedEventArgs.

The VoiceCommandActivatedEventArgs has a property called Result which will provide the SpeechRecognitionResult.

The SpeechRecognitionResult has a property called Text, which provides the text spoken by the user. Using this text we can decide which page has to be displayed.
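The handling described above can be sketched as follows in App.xaml.cs. The "ConversationPage" type and the root frame plumbing are illustrative assumptions; adapt them to your app's navigation.

```csharp
using Windows.ApplicationModel.Activation;
using Windows.Media.SpeechRecognition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

protected override void OnActivated(IActivatedEventArgs args)
{
    if (args.Kind == ActivationKind.VoiceCommand)
    {
        var voiceArgs = (VoiceCommandActivatedEventArgs)args;
        SpeechRecognitionResult result = voiceArgs.Result;

        // The Command Name from the VCD file, e.g. "showConversation"
        string commandName = result.RulePath[0];
        // The full text spoken by the user
        string spokenText = result.Text;

        if (commandName == "showConversation")
        {
            // ConversationPage is a hypothetical page in this sample app.
            var rootFrame = Window.Current.Content as Frame;
            rootFrame?.Navigate(typeof(ConversationPage), spokenText);
        }
        Window.Current.Activate();
    }
}
```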

That's it: our application is now integrated with Cortana, and we can launch it using voice commands.
In my next blog, let's see how to integrate Cortana with a background task.

UniversalMessanger.zip

Comments

  • Anonymous
    July 09, 2015
    Hi, Can we please get demo code for this. I wish to play with Cortana feature before, start implementing in my app. Thank you, Harish.

  • Anonymous
    July 09, 2015
    Hi Harish, I have attached the code, please comment here, if you are facing any issues. Thanks, Arun

  • Anonymous
    July 17, 2015
    Hi, how do you load all the files into Visual Studio 2015?

  • Anonymous
    July 21, 2015
    Hi Sean, You can open by double clicking the solution (.Sln) file.

  • Anonymous
    August 09, 2015
    Would the same code work on Windows 10 Mobile?

  • Anonymous
    August 25, 2015
    Hi Andy, yes the same code works in windows phone as well.

  • Anonymous
    October 19, 2015
    Hello, I can't manage to have my app showed when I ask Cortana: "What can I say?". Apparently, everything goes well: microphone capability, vcd file, install command set... I'm using Windows 10, Visual Studio 2015 (VSO synchronized), Emulator 8.1 WVGA 4 inch 512 MB. This is a Universal App written in Javascript with WinJS. VCD is set with xml:lang="en-us"; the emulator too; my laptop language setting too. Cortana is activated (sign in to Microsoft account done). What is missing in here? It's been a few days since I started troubleshooting...