

How to use the combined Motion API for Windows Phone 8

[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]

 

You can use the Motion API to create Windows Phone applications that use the device’s orientation and movement in space as an input mechanism. The Windows Phone platform includes APIs to obtain raw sensor data from the device’s Compass, Gyroscope, and Accelerometer sensors. However, the Motion API handles the complex math necessary to combine the data from these sensors and produces easy-to-use values for the device’s attitude and motion.

This topic walks you through creating two different applications that use the Motion API.

  1. The first application is very simple: it rotates a triangle on the screen in response to changes in the device’s rotation.

  2. The second application is an augmented reality application that uses the device’s camera and the Motion API to allow the user to label points in space around the device.

The Motion API used by these samples requires all of the supported Windows Phone sensors. Therefore, these sample applications fail gracefully, but they will not work properly on the emulator or on devices that don’t have all the expected hardware sensors.

Note

In addition to the APIs described in this topic from the Microsoft.Devices.Sensors namespace, you can also program the phone’s sensors by using the similar classes in the Windows.Devices.Sensors namespace.
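As a rough sketch of that alternative (the members shown — OrientationSensor.GetDefault, ReportInterval, ReadingChanged — are the closest WinRT counterparts; verify their availability in the Windows.Devices.Sensors reference for your target OS version), the WinRT API can be used like this:

```csharp
using Windows.Devices.Sensors;

// OrientationSensor is the WinRT counterpart of the Motion API: it fuses the
// compass, gyroscope, and accelerometer into a rotation matrix and quaternion.
OrientationSensor orientation = OrientationSensor.GetDefault();
if (orientation != null)   // null when the required sensors are missing
{
    orientation.ReportInterval = 20;  // in milliseconds
    orientation.ReadingChanged += (sender, args) =>
    {
        // args.Reading.Quaternion and args.Reading.RotationMatrix
        // describe the device's current attitude.
        OrientationSensorReading reading = args.Reading;
    };
}
```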


 

Creating a simple motion-based app

The application described in this section uses the RenderTransform property of a Polygon object to rotate the polygon. The Angle property of the RenderTransform is updated with the value of the Yaw property of the AttitudeReading class. This causes the triangle to rotate as the device is rotated.

To create a simple motion-based app

  1. In Visual Studio, create a new Windows Phone App project. This template is in the Windows Phone category.

  2. This application requires references to the assemblies containing the sensor APIs and the XNA Framework. From the Project menu, click Add Reference…, select Microsoft.Devices.Sensors and Microsoft.Xna.Framework, and then click OK.

  3. In the MainPage.xaml file, place the following XAML code in the Grid element named “ContentPanel”.

    <StackPanel>
      <TextBlock Text="attitude" Style="{StaticResource PhoneTextLargeStyle}"/>
      <Grid Margin="12 0 12 0">
        <TextBlock Height="30" HorizontalAlignment="Left"  Name="yawTextBlock" Text="YAW: 000" VerticalAlignment="Top" Foreground="Red" FontSize="25" FontWeight="Bold"/>
        <TextBlock Height="30" HorizontalAlignment="Center"  Name="pitchTextBlock" Text="PITCH: 000" VerticalAlignment="Top" Foreground="Green" FontSize="25" FontWeight="Bold"/>
        <TextBlock Height="30" HorizontalAlignment="Right"   Name="rollTextBlock" Text="ROLL: 000" VerticalAlignment="Top"  Foreground="Blue" FontSize="25" FontWeight="Bold"/>
      </Grid>
      <Grid Height="200">
        <Polygon Name="yawtriangle"
          Points="205,135 240,50 275,135"
          Stroke="Red"
          StrokeThickness="2" >
          <Polygon.Fill>
            <SolidColorBrush Color="Red" Opacity="0.3"/>
          </Polygon.Fill>
          <Polygon.RenderTransform>
            <RotateTransform CenterX="240" CenterY="100"></RotateTransform>
          </Polygon.RenderTransform>
        </Polygon>
        <Polygon Name="pitchtriangle"
          Points="205,135 240,50 275,135"
          Stroke="Green"
          StrokeThickness="2" >
          <Polygon.Fill>
            <SolidColorBrush Color="Green" Opacity="0.3"/>
          </Polygon.Fill>
          <Polygon.RenderTransform>
            <RotateTransform CenterX="240" CenterY="100"></RotateTransform>
          </Polygon.RenderTransform>
        </Polygon>
        <Polygon Name="rolltriangle"
          Points="205,135 240,50 275,135"
          Stroke="Blue"
          StrokeThickness="2" >
          <Polygon.Fill>
            <SolidColorBrush Color="Blue" Opacity="0.3"/>
          </Polygon.Fill>
          <Polygon.RenderTransform>
            <RotateTransform CenterX="240" CenterY="100"></RotateTransform>
          </Polygon.RenderTransform>
        </Polygon>
      </Grid>
      <TextBlock Text="acceleration" Style="{StaticResource PhoneTextLargeStyle}"/>
      <Grid Margin="12 0 12 0">
        <TextBlock Height="30" HorizontalAlignment="Left"  Name="xTextBlock" Text="X: 000" VerticalAlignment="Top" Foreground="Red" FontSize="25" FontWeight="Bold"/>
        <TextBlock Height="30" HorizontalAlignment="Center"  Name="yTextBlock" Text="Y: 000" VerticalAlignment="Top" Foreground="Green" FontSize="25" FontWeight="Bold"/>
        <TextBlock Height="30" HorizontalAlignment="Right"   Name="zTextBlock" Text="Z: 000" VerticalAlignment="Top"  Foreground="Blue" FontSize="25" FontWeight="Bold"/>
      </Grid>
      <Grid Height="300">
        <Line x:Name="xLine" X1="240" Y1="150" X2="340" Y2="150" Stroke="Red" StrokeThickness="4"></Line>
        <Line x:Name="yLine" X1="240" Y1="150" X2="240" Y2="50" Stroke="Green" StrokeThickness="4"></Line>
        <Line x:Name="zLine" X1="240" Y1="150" X2="190" Y2="200" Stroke="Blue" StrokeThickness="4"></Line>
      </Grid>
    </StackPanel>
    

    This code creates three TextBlock controls to display the numeric values for the yaw, pitch, and roll of the device, in degrees. Also, three triangles are created to graphically show the values. A RotateTransform is added to each triangle and the point to use as the center of rotation is specified. The angle of the RotateTransform will be set in the C# code-behind page to animate the triangles according to the orientation of the phone.

    Next, three more TextBlock controls are used to show the acceleration of the device along each axis numerically. Then, three lines are added to show the acceleration graphically.

  4. Now open the MainPage.xaml.cs code-behind page and add using directives for the sensor and XNA Framework namespaces to the other using directives at the top of the page.

    using Microsoft.Devices.Sensors;
    using Microsoft.Xna.Framework;
    
  5. Declare a variable of type Motion at the top of the MainPage class definition.

    public partial class MainPage : PhoneApplicationPage
    {
      Motion motion;
    
  6. Next, override the OnNavigatedTo(NavigationEventArgs) method of the Page class. This method is called whenever the user navigates to the page. In this method, the IsSupported property is checked. Not all devices have the necessary sensors to use this feature, so you should always check this value before using the API. Next, the Motion object is initialized and an event handler is attached to the CurrentValueChanged event. This event is raised at the interval specified by the TimeBetweenUpdates property; the default value is 2 milliseconds, but this example uses 20 milliseconds. Finally, the acquisition of data is started by calling the Start method. This method can throw an exception, so place the call within a try block.

    protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
    {
      // Check to see whether the Motion API is supported on the device.
      if (! Motion.IsSupported)
      {
        MessageBox.Show("the Motion API is not supported on this device.");
        return;
      }
    
      // If the Motion object is null, initialize it and add a CurrentValueChanged
      // event handler.
      if (motion == null)
      {
        motion = new Motion();
        motion.TimeBetweenUpdates = TimeSpan.FromMilliseconds(20);
        motion.CurrentValueChanged +=
            new EventHandler<SensorReadingEventArgs<MotionReading>>(motion_CurrentValueChanged);
      }
    
      // Try to start the Motion API.
      try
      {
        motion.Start();
      }
    catch (Exception)
      {
        MessageBox.Show("unable to start the Motion API.");
      }
    }
    
  7. The CurrentValueChanged event is raised periodically to provide new sensor data to the application. The event handler is called from a background thread that does not have access to the application’s UI. Use BeginInvoke to call the CurrentValueChanged method on the UI thread. This method, which accepts a MotionReading object as a parameter, will be defined next.

    void motion_CurrentValueChanged(object sender, SensorReadingEventArgs<MotionReading> e)
    {
      // This event arrives on a background thread. Use BeginInvoke to call
      // CurrentValueChanged on the UI thread.
      Dispatcher.BeginInvoke(() => CurrentValueChanged(e.SensorReading));
    }
    
  8. Finally, create the CurrentValueChanged method. This method sets the Text property of the TextBlock objects to the yaw, pitch, and roll attitude reading values. Next, the Angle property of each triangle’s RenderTransform is set to rotate each triangle according to the related attitude value. The MathHelper class from the XNA Framework is used to convert from radians to degrees. Next, the acceleration TextBlock objects are updated to show the current acceleration value along each axis. Finally, the lines are updated to graphically illustrate the acceleration of the device.

    private void CurrentValueChanged(MotionReading e)
    {
      // Check to see if the Motion data is valid.
      if (motion.IsDataValid)
      {
        // Show the numeric values for attitude.
        yawTextBlock.Text =
            "YAW: " + MathHelper.ToDegrees(e.Attitude.Yaw).ToString("0") + "°";
        pitchTextBlock.Text =
            "PITCH: " + MathHelper.ToDegrees(e.Attitude.Pitch).ToString("0") + "°";
        rollTextBlock.Text =
            "ROLL: " + MathHelper.ToDegrees(e.Attitude.Roll).ToString("0") + "°";
    
        // Set the Angle of the triangle RenderTransforms to the attitude of the device.
        ((RotateTransform)yawtriangle.RenderTransform).Angle =
            MathHelper.ToDegrees(e.Attitude.Yaw);
        ((RotateTransform)pitchtriangle.RenderTransform).Angle =
            MathHelper.ToDegrees(e.Attitude.Pitch);
        ((RotateTransform)rolltriangle.RenderTransform).Angle =
            MathHelper.ToDegrees(e.Attitude.Roll);
    
        // Show the numeric values for acceleration.
        xTextBlock.Text = "X: " + e.DeviceAcceleration.X.ToString("0.00");
        yTextBlock.Text = "Y: " + e.DeviceAcceleration.Y.ToString("0.00");
        zTextBlock.Text = "Z: " + e.DeviceAcceleration.Z.ToString("0.00");
    
        // Show the acceleration values graphically.
        xLine.X2 = xLine.X1 + e.DeviceAcceleration.X * 100;
        yLine.Y2 = yLine.Y1 - e.DeviceAcceleration.Y * 100;
        zLine.X2 = zLine.X1 - e.DeviceAcceleration.Z * 50;
        zLine.Y2 = zLine.Y1 + e.DeviceAcceleration.Z * 50;
      }
    }
    
  9. Make sure your device is connected to your computer and start debugging by pressing F5 in Visual Studio. Rotate the device into various positions and notice how the yaw, pitch, and roll values change depending on the orientation of the device. Next, wave the device up and down and from side to side to see how the acceleration values change. Unlike the Accelerometer API, the acceleration of gravity is filtered out of the reading so that when the device is still, the acceleration is zero along all axes.
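The gravity/linear-acceleration split can be inspected directly: MotionReading also exposes a Gravity vector, and adding it to DeviceAcceleration approximates the raw value the Accelerometer API reports. The following helper is a sketch (ShowTotalAcceleration is not part of the walkthrough) that you could call from the CurrentValueChanged method above:

```csharp
// Sketch: compare the filtered and gravity components of a MotionReading.
// DeviceAcceleration has gravity removed; Gravity isolates the gravity vector.
private void ShowTotalAcceleration(MotionReading e)
{
    Vector3 user = e.DeviceAcceleration;  // near zero when the device is still
    Vector3 gravity = e.Gravity;          // roughly 1 g, pointing toward the ground
    Vector3 rawEquivalent = user + gravity;

    System.Diagnostics.Debug.WriteLine(
        "user: {0:0.00}  gravity: {1:0.00}  total: {2:0.00}",
        user.Length(), gravity.Length(), rawEquivalent.Length());
}
```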

Creating an augmented reality app

Augmented reality is a term that refers to applications that overlay enhancements, such as graphics or audio, over a live view of the world. This example application uses the PhotoCamera API to display the video feed from the device’s camera. Text labels are placed on top of the video feed to label points in space. The Motion API is used to dynamically position the text labels so that they move through the camera viewfinder as the device orientation changes. This creates the effect that the labels are pinned to points in space around the device. This application allows the user to click on a point in the camera viewfinder, which is then transformed from screen space into world space, and enter text to label the point.

The application described in the following procedure uses XNA Framework libraries for the computations that project points on the screen into 3D space and back.

To create an augmented reality app

  1. In Visual Studio, create a new Windows Phone App project. This template is in the Windows Phone category.

  2. Add references to the assemblies containing the sensor APIs and the XNA Framework. From the Project menu, click Add Reference…, select Microsoft.Devices.Sensors, Microsoft.Xna.Framework, and Microsoft.Xna.Framework.Graphics, and then click OK.

  3. Create the user interface in XAML. This application uses a Rectangle object with a VideoBrush to display the video stream from the device’s camera. This is the same technique described in the topic How to create a base camera app for Windows Phone 8.

    The XAML code also creates a TextBox control that allows the user to specify a name that will label the point in space they selected. The TextBox is placed within a Canvas control. The Canvas object is hidden until the user taps somewhere on the screen. At that point it will be shown so that the user can input text. When the user presses Enter, the Canvas is hidden again.

    In the MainPage.xaml file, place the following XAML code in the Grid element named “ContentPanel”.

    <Rectangle Width="640" Height="480" Canvas.ZIndex="1">
      <Rectangle.Fill>
        <VideoBrush x:Name="viewfinderBrush" />
      </Rectangle.Fill>
      <Rectangle.RenderTransform>
        <RotateTransform Angle="90" CenterX="240" CenterY="240"></RotateTransform>
      </Rectangle.RenderTransform>
    </Rectangle>
    
    <Canvas Name="TextBoxCanvas" Background="#BB000000" Canvas.ZIndex="99" Visibility="Collapsed">
      <TextBlock Text="name this point" Margin="20,130,0,0"/>
      <TextBox Height="72" HorizontalAlignment="Left" Margin="8,160,0,0" Name="NameTextBox"
        VerticalAlignment="Top" Width="460" KeyUp="NameTextBox_KeyUp" />
    </Canvas>
    
  4. In MainPage.xaml.cs, add the following using statements to the existing using statements at the top of the file. The Microsoft.Devices.Sensors namespace provides access to the Motion API. Microsoft.Devices is used to access the PhotoCamera API. This application does not use the XNA Framework to render graphics, but these namespaces expose helper functions that will be used to do the math needed to project points from screen space to world space and back. The final using directive is used to disambiguate the XNA Framework Matrix type, making the rest of the code easier to read.

    using Microsoft.Devices.Sensors;
    using Microsoft.Devices;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Matrix = Microsoft.Xna.Framework.Matrix;
    
  5. Declare class member variables at the top of the MainPage class definition. First, the Motion and PhotoCamera objects are declared. Then lists are declared to store Vector3 objects, which represent points in world space, and TextBlock objects, which are used to display a label for each point. Next, a Point variable is created. This will store the point that the user touches on the device’s screen. The final variables are a Viewport object and some Matrix objects that will be used to project points from world space to screen space and back.

    public partial class MainPage : PhoneApplicationPage
    {
      Motion motion;
      PhotoCamera cam;
    
      List<TextBlock> textBlocks;
      List<Vector3> points;
      System.Windows.Point pointOnScreen;
    
      Viewport viewport;
      Matrix projection;
      Matrix view;
      Matrix attitude;
    
  6. In the page’s constructor, initialize the lists of TextBlock and Vector3 objects.

    // Constructor
    public MainPage()
    {
      InitializeComponent();
    
      // Initialize the list of TextBlock and Vector3 objects.
      textBlocks = new List<TextBlock>();
      points = new List<Vector3>();
    }
    
  7. The next method is a helper method that initializes the Viewport and Matrix objects that are used to transform points from screen space to world space and back. A Viewport defines a rectangle onto which a 3D volume projects. To initialize it, pass in the width and height of the render surface – in this case, this is the width and height of the page. The Viewport structure exposes the methods Project and Unproject, which do the math of projecting the point between screen space and world space. These methods also require a projection matrix and a view matrix, which are also initialized here.

    public void InitializeViewport()
    {
      // Initialize the viewport and matrixes for 3d projection.
      viewport = new Viewport(0, 0, (int)this.ActualWidth, (int)this.ActualHeight);
      float aspect = viewport.AspectRatio;
      projection = Matrix.CreatePerspectiveFieldOfView(1, aspect, 1, 12);
      view = Matrix.CreateLookAt(new Vector3(0, 0, 1), Vector3.Zero, Vector3.Up);
    }
    
  8. In OnNavigatedTo(NavigationEventArgs), the camera is initialized and set as the source for the VideoBrush that was defined in XAML. Next, the Motion object is initialized, after checking to make sure that the API is supported on the device. Finally, an event handler for the MouseLeftButtonUp event is registered. This handler will be called when the user touches the screen.

    protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
    {
      // Initialize the camera and set the video brush source.
      cam = new Microsoft.Devices.PhotoCamera();
      viewfinderBrush.SetSource(cam);
    
      if (!Motion.IsSupported)
      {
        MessageBox.Show("the Motion API is not supported on this device.");
        return;
      }
    
      // If the Motion object is null, initialize it and add a CurrentValueChanged
      // event handler.
      if (motion == null)
      {
        motion = new Motion();
        motion.TimeBetweenUpdates = TimeSpan.FromMilliseconds(20);
        motion.CurrentValueChanged +=
            new EventHandler<SensorReadingEventArgs<MotionReading>>(motion_CurrentValueChanged);
      }
    
      // Try to start the Motion API.
      try
      {
        motion.Start();
      }
      catch (Exception)
      {
        MessageBox.Show("unable to start the Motion API.");
      }
    
      // Hook up the event handler for when the user taps the screen.
      this.MouseLeftButtonUp +=
        new MouseButtonEventHandler(MainPage_MouseLeftButtonUp);
    
      base.OnNavigatedTo(e);
    }
    
  9. In the MouseLeftButtonUp event handler, the Visibility property of the Canvas containing the TextBox control is checked. If the Canvas is visible, then the user is in the process of entering text and the event should be ignored. If the Canvas is not visible, the point where the user touched the screen is saved in the pointOnScreen variable. The CurrentValue property of the Motion object is queried for the current attitude of the device, which is saved in the class variable attitude. Finally, the Canvas containing the TextBox is made visible and the TextBox is given focus.

    void MainPage_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
      // If the Canvas containing the TextBox is visible, ignore
      // this event.
      if (TextBoxCanvas.Visibility == Visibility.Visible)
      {
        return;
      }
    
      // Save the location where the user touched the screen.
      pointOnScreen = e.GetPosition(LayoutRoot);
    
      // Save the device attitude when the user touched the screen.
      attitude = motion.CurrentValue.Attitude.RotationMatrix;
    
      // Make the Canvas containing the TextBox visible and
      // give the TextBox focus.
      TextBoxCanvas.Visibility = Visibility.Visible;
      NameTextBox.Focus();
    }
    
  10. The event handler for the CurrentValueChanged event of the Motion class is called on a background thread. Use BeginInvoke to call another handler method on the UI thread.

    void motion_CurrentValueChanged(object sender, SensorReadingEventArgs<MotionReading> e)
    {
      // This event arrives on a background thread. Use BeginInvoke
      // to call a method on the UI thread.
      Dispatcher.BeginInvoke(() => CurrentValueChanged(e.SensorReading));
    }
    
  11. The CurrentValueChanged method, defined below, is where the list of points in 3D space is projected into screen space, according to the current attitude of the device. First, the Viewport structure is checked and the InitializeViewport method defined above is called if necessary. Next, the attitude of the device is obtained from the MotionReading object. The coordinate system of the Motion API is different from the one used by the XNA Framework, so to make sure the points are transformed correctly, the attitude matrix is rotated by 90 degrees around the X axis.

    Next, the method loops over each point in the application’s list of points. For each one, a world matrix is created that represents the offset in world space to the point. This matrix, along with the view and projection matrices defined earlier, is passed into the Project method of the Viewport structure. This method returns a Vector3 object for which the X and Y values are the screen coordinates of the projected point. The Z value indicates the depth of the point. If this value is less than zero or greater than one, then the point is “behind” the camera, and therefore the TextBlock for the point is hidden. If the point is in front of the camera, a TranslateTransform object is created using the X and Y values of the projected point and then assigned to the TextBlock associated with the point.

    private void CurrentValueChanged(MotionReading reading)
    {
      // If the viewport width is 0, it needs to be initialized.
      if (viewport.Width == 0)
      {
        InitializeViewport();
      }
    
      // Get the RotationMatrix from the MotionReading.
      // Rotate it 90 degrees around the X axis
      //   to put it in the XNA Framework coordinate system.
      Matrix attitude =
        Matrix.CreateRotationX(MathHelper.PiOver2) * reading.Attitude.RotationMatrix;
    
      // Loop through the points in the list.
      for (int i = 0; i < points.Count; i++)
      {
        // Create a World matrix for the point.
        Matrix world = Matrix.CreateWorld(points[i], new Vector3(0, 0, 1), new Vector3(0, 1, 0));
    
        // Use Viewport.Project to project the point from 3D space into screen coordinates.
        Vector3 projected = viewport.Project(Vector3.Zero, projection, view, world * attitude);
    
        if (projected.Z > 1 || projected.Z < 0)
        {
          // If the point is outside of this range, it is behind the camera.
          // So hide the TextBlock for this point.
           textBlocks[i].Visibility = Visibility.Collapsed;
        }
        else
        {
          // Otherwise, show the TextBlock.
          textBlocks[i].Visibility = Visibility.Visible;
    
          // Create a TranslateTransform to position the TextBlock.
          // Offset by half of the TextBlock's RenderSize to center it on the point.
          TranslateTransform tt = new TranslateTransform();
          tt.X = projected.X - (textBlocks[i].RenderSize.Width / 2);
          tt.Y = projected.Y - (textBlocks[i].RenderSize.Height / 2);
          textBlocks[i].RenderTransform = tt;
        }
      }
    }
    
  12. Next, implement the KeyUp event handler that was assigned to the TextBox control in XAML. This handler is called as the user enters text into the TextBox. For this application, the handler is used to add new points when the user presses the Enter key, so the first piece of code exits the handler if any other key was pressed. If Enter was pressed, the Canvas is hidden again. Next, the handler checks whether any of the objects that are needed for this operation are missing, and if so, exits the method.

    Next, the point that was obtained previously in the MouseLeftButtonUp event handler is transformed into the format that is required by the Viewport structure’s Unproject method. The attitude value obtained in MouseLeftButtonUp is transformed into XNA coordinate space by rotating it 90 degrees around the X axis. Then, Unproject is called to transform the point in screen space into a point in 3D space. The unprojected point is then normalized and scaled and the AddPoint helper method is called to add the point and accompanying TextBox to the application’s lists.

    private void NameTextBox_KeyUp(object sender, KeyEventArgs e)
    {
      // If the key is not the Enter key, don't do anything.
      if (e.Key != Key.Enter)
      {
        return;
      }
    
      // Enter was pressed, so hide the Canvas containing the TextBox.
      TextBoxCanvas.Visibility = Visibility.Collapsed;
    
      // If the name is empty or the Motion object was never created,
      // exit the event handler. (pointOnScreen is a value type, so a
      // null check on it would always be false and is omitted.)
      if (NameTextBox.Text == "" || motion == null)
      {
        return;
      }
    
      // Translate the point before projecting it.
      System.Windows.Point p = pointOnScreen;
      p.X = LayoutRoot.RenderSize.Width - p.X;
      p.Y = LayoutRoot.RenderSize.Height - p.Y;
      p.X *= .5;
      p.Y *= .5;
    
      // Use the attitude Matrix saved in the OnMouseLeftButtonUp handler.
      // Rotate it 90 degrees around the X axis
      // to put it in the XNA Framework coordinate system.
      attitude = Matrix.CreateRotationX(MathHelper.PiOver2) * attitude;
    
    
      // Use Viewport.Unproject to translate the point on the screen to 3D space.
      Vector3 unprojected =
        viewport.Unproject(new Vector3((float)p.X, (float)p.Y, -.9f), projection, view, attitude);
      unprojected.Normalize();
      unprojected *= -10;
    
      // Call the helper method to add this point.
      AddPoint(unprojected, NameTextBox.Text);
    
      // Clear the TextBox.
      NameTextBox.Text = "";
    }
    
  13. AddPoint is a helper method that takes a point in 3D space and a string, and adds them to the UI and to the application’s lists. After a Vector3 and TextBlock are added to the lists, they are displayed by the CurrentValueChanged method defined previously.

    private void AddPoint(Vector3 point, string name)
    {
      // Create a new TextBlock. Set the Canvas.ZIndexProperty to make sure
      // it appears above the camera rectangle.
      TextBlock textblock = new TextBlock();
      textblock.Text = name;
      textblock.FontSize = 124;
      textblock.SetValue(Canvas.ZIndexProperty, 2);
      textblock.Visibility = Visibility.Collapsed;
    
      // Add the TextBlock to the LayoutRoot container.
      LayoutRoot.Children.Add(textblock);
    
      // Add the TextBlock and the point to the List collections.
      textBlocks.Add(textblock);
      points.Add(point);
    }
    
  14. Finally, in the OnNavigatedFrom(NavigationEventArgs) method of the PhoneApplicationPage, you should call the Dispose() method of the PhotoCamera object to minimize the camera’s power consumption while the application is inactive and to expedite the camera shutting down.

    protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
    {
      // Dispose camera to minimize power consumption and to expedite shutdown.
      cam.Dispose();
    }
    

Now you should be able to run the application on your device. Hold the device up in a portrait orientation. Look for an object, such as a door or a window, in the camera viewfinder. Touch the object to bring up the naming text box. Type in a name for the point in space and press the Enter key. You should be able to rotate the device around and see that the label is always over the same point in space. Keep in mind that this application doesn’t use the movement of the device through space, only its orientation, so the labeled points will not line up properly if you move the device significantly.

The following helper method can be added to the application to label the front, back, left, right, top, and bottom of 3D space relative to the device’s sensor. This can be helpful to visualize how the application is working.

private void AddDirectionPoints()
{
  AddPoint(new Vector3(0, 0, -10), "front");
  AddPoint(new Vector3(0, 0, 10), "back");
  AddPoint(new Vector3(10, 0, 0), "right");
  AddPoint(new Vector3(-10, 0, 0), "left");
  AddPoint(new Vector3(0, 10, 0), "top");
  AddPoint(new Vector3(0, -10, 0), "bottom");
}
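One way to wire up this helper (a suggestion, not part of the original sample) is to call it from the page constructor after the lists are initialized, so the six direction labels are in place before the first sensor reading arrives:

```csharp
// Constructor, extended to add the six debug direction labels at startup.
public MainPage()
{
  InitializeComponent();

  // Initialize the list of TextBlock and Vector3 objects.
  textBlocks = new List<TextBlock>();
  points = new List<Vector3>();

  // Optional: label front/back/left/right/top/bottom for debugging.
  AddDirectionPoints();
}
```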