Responding to touch input (DirectX and C++)

[ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ]

Touch events are handled in the same way as mouse and stylus events: by a generic input type called a pointer. This pointer represents screen position data from the current active input source. Here we talk about how you can support touch input in your Windows Runtime app using DirectX with C++.

Introduction

Here are the four basic pointer events on the CoreWindow that your app can handle. For our purposes here, we'll treat these pointer events as touch input events.

  • PointerMoved. The user has moved their finger over the input surface.
  • PointerPressed. The user's finger has made contact with the input surface.
  • PointerReleased. The user has stopped contact with the input surface.
  • PointerExited. The user's finger has moved outside the bounding box of the window.

If you use DirectX and XAML interop, use the touch events provided by the XAML framework, which operate on individual XAML elements. These events are provided as part of the Windows::UI::Xaml::UIElement type.

There are also more advanced input types defined in Windows::UI::Input that deal with gestures and manipulations, which are sequences of events interpreted by the GestureRecognizer type. These input types include dragging, sliding, cross-sliding, and holding gestures.

Handling the basic touch events

Let's look at handlers for the three most common basic touch events: PointerPressed, PointerMoved, and PointerReleased.

In this section, we assume that you've created a view provider for your Windows Runtime app using DirectX. If you haven't done this, see How to set up your Windows Runtime app to display a DirectX view.

First, let's populate the touch pointer event handlers. In the first event handler, OnPointerPressed, we get the x-y coordinates of the pointer from the CoreWindow that manages our display when the user touches the screen or clicks the mouse. Set the handlers up in your implementations of IFrameworkView::Initialize or IFrameworkView::SetWindow. (The example here uses SetWindow.)

void MyTouchApp::SetWindow(
    _In_ CoreWindow^ window
    )
{
    // ... Other window event initialization here ...

    window->PointerPressed +=
        ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MyTouchApp::OnPointerPressed);
    window->PointerReleased +=
        ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MyTouchApp::OnPointerReleased);
    window->PointerMoved +=
        ref new TypedEventHandler<CoreWindow^, PointerEventArgs^>(this, &MyTouchApp::OnPointerMoved);
}

Note  If you're using DirectX and XAML interop, XAML provides its own view (FrameworkView). Register for the pointer events provided on the UIElement types for the XAML elements in your layout instead. For more info, see Quickstart: Handling pointer input.


Now, create the corresponding callbacks to obtain and handle the pointer data. If your app doesn't use GestureRecognizer to interpret these pointer events and data, track the pointer ID values to distinguish between pointers on multi-touch devices. (Most touch surfaces are multi-touch input, which means that many pointers can be in play at the same time.) Track the ID and position of the pointer for each event so you can take the correct action for the movement or gesture associated with that pointer.
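
For example, a minimal way to track multiple pointers is a map from pointer ID to the last known position. The following is a sketch, not part of the original sample; the m_activePointers member and the two helper methods are assumed names:

#include <map>           // std::map
#include <DirectXMath.h> // DirectX::XMFLOAT2

// Assumed member on MyTouchApp: one entry per pointer currently in contact.
std::map<uint32, DirectX::XMFLOAT2> m_activePointers;

void MyTouchApp::TrackPointer(uint32 pointerID, DirectX::XMFLOAT2 position)
{
    // Call from OnPointerPressed (inserts) and OnPointerMoved (updates).
    m_activePointers[pointerID] = position;
}

void MyTouchApp::StopTrackingPointer(uint32 pointerID)
{
    // Call from OnPointerReleased so stale pointer IDs don't accumulate.
    m_activePointers.erase(pointerID);
}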

OnPointerPressed

void MyTouchApp::OnPointerPressed(
    _In_ CoreWindow^ sender,
    _In_ PointerEventArgs^ args)
{
    // Get the current pointer position.
    uint32 pointerID = args->CurrentPoint->PointerId;
    XMFLOAT2 position = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);

    // Check which kind of device generated the event (touch, pen, or mouse).
    auto device = args->CurrentPoint->PointerDevice;
    auto deviceType = device->PointerDeviceType;

    // If the pointer press event is in a specific control, set the control to active
    // if the control is activated by a press.
    // Set the tracking variables for this control to the pointer ID and starting position.
}

Note  XMFLOAT2 is defined in DirectXMath.h.


The OnPointerMoved event handler fires whenever the pointer moves, on every tick in which the user drags it across the screen. Use the next code example to keep the app aware of the current location of the moving pointer:

OnPointerMoved

void MyTouchApp::OnPointerMoved(
    _In_ CoreWindow^ sender,
    _In_ PointerEventArgs^ args)
{
    uint32 pointerID = args->CurrentPoint->PointerId;
    XMFLOAT2 position = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);

    // Update the position in any tracking variables for this pointer ID.
}

Finally, we need to stop recording the touch input and possibly interpret the movement when the user stops touching the screen.

OnPointerReleased

void MyTouchApp::OnPointerReleased(
    _In_ CoreWindow^ sender,
    _In_ PointerEventArgs^ args)
{
    uint32 pointerID = args->CurrentPoint->PointerId;
    XMFLOAT2 position = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);

    // Process the touch input results here. Set the tracking variable pointer ID value
    // to 0 when you're done.
}

In this example, the code simply retrieves the current pointer ID value and the position for that pointer. It's up to you to interpret that data. For control activation, handling the PointerPressed event suffices. For more complex touch actions, you must create some variables or methods to capture the pointer ID and initial position for the touch gesture, and to update those variables or call those methods as the pointer moves and is released.
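
For example, a control that follows a single finger might capture the pointer on press, apply the drag delta on move, and stop tracking on release. This is a sketch only; the m_trackedPointerID and m_trackedStartPosition members are illustrative, with 0 meaning "no pointer tracked", matching the convention above:

void MyTouchApp::OnPointerPressed(_In_ CoreWindow^ sender, _In_ PointerEventArgs^ args)
{
    if (m_trackedPointerID == 0) // not tracking a pointer yet
    {
        m_trackedPointerID = args->CurrentPoint->PointerId;
        m_trackedStartPosition = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);
    }
}

void MyTouchApp::OnPointerMoved(_In_ CoreWindow^ sender, _In_ PointerEventArgs^ args)
{
    if (args->CurrentPoint->PointerId == m_trackedPointerID)
    {
        // Movement of this pointer relative to where it started.
        float deltaX = args->CurrentPoint->Position.X - m_trackedStartPosition.x;
        float deltaY = args->CurrentPoint->Position.Y - m_trackedStartPosition.y;
        // Apply deltaX and deltaY to the control here.
    }
}

void MyTouchApp::OnPointerReleased(_In_ CoreWindow^ sender, _In_ PointerEventArgs^ args)
{
    if (args->CurrentPoint->PointerId == m_trackedPointerID)
    {
        m_trackedPointerID = 0; // done with this pointer
    }
}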

Turning touch events into gestures

Now that your app can detect touch input with callbacks for the basic touch events, your app must interpret those events. One way to do this is to use the GestureRecognizer class. A GestureRecognizer object takes pointer events (such as PointerPressed, PointerMoved, and PointerReleased), processes them, and provides events of its own (such as Tapped, Holding, or Dragging). These higher-level events are called gestures, and they are designed to fulfill apps’ most common user-interaction requirements.

Note  If your scene has multiple objects or controls that need to be independently manipulated at the same time, consider using a separate GestureRecognizer for each of them.


Here's how to use GestureRecognizer with the touch events you processed in the previous section.

  1. Create at least one GestureRecognizer object for your app and initialize its GestureSettings for each gesture you want to support. Some gestures include:

    • Tap. The user single-taps or double-taps the touch surface.
    • Hold. The user presses on the surface and holds the press for some length of time.
    • Drag. The user presses on the surface and moves the press in some direction.
    • Manipulate. The user makes a slide, pinch, or stretch gesture to scale, zoom, or rotate the display or an object.

    There is a set of events on the GestureRecognizer that you can handle for these gestures, including Tapped, RightTapped, Holding, Dragging, CrossSliding, and the manipulation events (ManipulationStarted, ManipulationUpdated, ManipulationInertiaStarting, and ManipulationCompleted).

    Let's look at how to register for these events and provide the GestureRecognizer with the touch input data it needs to identify them. In this example, you add the following code to your implementation of IFrameworkView::SetWindow to create a GestureRecognizer for the double tap gesture and to register a handler for the GestureRecognizer::Tapped event. The gesture recognizer is declared as:

    Platform::Agile<Windows::UI::Input::GestureRecognizer> m_gestureRecognizer;

    m_gestureRecognizer = ref new GestureRecognizer();

    m_gestureRecognizer->GestureSettings = GestureSettings::DoubleTap;

    m_gestureRecognizer->Tapped +=
        ref new TypedEventHandler<GestureRecognizer^, TappedEventArgs^>(this, &MyTouchApp::OnTapped);
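
    GestureSettings is a flags enumeration, so one recognizer can listen for several gestures if you combine values with the bitwise OR operator. As a sketch, assuming OnHolding and OnManipulationUpdated handlers that this example doesn't define:

    m_gestureRecognizer->GestureSettings =
        GestureSettings::DoubleTap |
        GestureSettings::Hold |
        GestureSettings::ManipulationTranslateX |
        GestureSettings::ManipulationTranslateY;

    m_gestureRecognizer->Holding +=
        ref new TypedEventHandler<GestureRecognizer^, HoldingEventArgs^>(this, &MyTouchApp::OnHolding);

    m_gestureRecognizer->ManipulationUpdated +=
        ref new TypedEventHandler<GestureRecognizer^, ManipulationUpdatedEventArgs^>(this, &MyTouchApp::OnManipulationUpdated);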
    
    
  2. Create handlers in your app for the gestures you want to interpret, and hook them up to the GestureRecognizer’s events. In the example, you create a single handler on the view provider class, OnTapped, to handle the tap event:

    void MyTouchApp::OnTapped(
        _In_ GestureRecognizer^ gestureRecognizer,
        _In_ TappedEventArgs^ args
        )
    {
        if (args->TapCount == 2) // the tap event is a double tap
        {
            HandlePointerDoubleTapped(args->Position);
        }
    }

    void MyTouchApp::HandlePointerDoubleTapped(Point position)
    {
        // Recenter the object around the screen location of the first tap of the
        // double tap gesture.
        m_recenter = true;
        m_recenterStartPosition.x = m_viewPosition.X;
        m_recenterStartPosition.y = m_viewPosition.Y;
        m_recenterStartZoom = m_zoom;
    }
    
  3. For the GestureRecognizer object to get the touch input data for these events, you must pass the pointer data to it from your basic touch input event handlers (like OnPointerPressed in our previous example).

    To do that, instruct your app’s handlers for the basic input events (PointerPressed, PointerReleased, and PointerMoved) to call the corresponding ProcessDownEvent, ProcessUpEvent, and ProcessMoveEvents methods on your GestureRecognizer.

    void MyTouchApp::OnPointerPressed( 
        _In_ CoreWindow^ window, 
        _In_ PointerEventArgs^ args 
        ) 
    { 
        m_gestureRecognizer->ProcessDownEvent(args->CurrentPoint); 
    } 
    
    void MyTouchApp::OnPointerReleased( 
        _In_ CoreWindow^ window, 
        _In_ PointerEventArgs^ args 
        ) 
    { 
        m_gestureRecognizer->ProcessUpEvent(args->CurrentPoint); 
    } 
    
    // We don't need to provide pointer move event data if we're just looking for double
    // taps, but we'll do so anyway because many apps will need this data.
    void MyTouchApp::OnPointerMoved( 
        _In_ CoreWindow^ window, 
        _In_ PointerEventArgs^ args 
        ) 
    { 
        m_gestureRecognizer->ProcessMoveEvents(args->GetIntermediatePoints()); 
    } 
    
    

    Now, the GestureRecognizer can track the sequence of inputs and, when it encounters a set of inputs that match a gesture, raise an event—in this case, the GestureRecognizer::Tapped event.

In more complex scenarios, especially ones that have more than one GestureRecognizer, you need to determine which GestureRecognizer instances are active based on touch-hit testing.
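
For example, one approach is to route the press to the recognizer of the first object whose bounds contain the pointer position. This is a sketch under assumed names; m_sceneObjects, HitTest, and Recognizer are hypothetical members of your own types, not Windows Runtime APIs:

void MyTouchApp::OnPointerPressed(_In_ CoreWindow^ sender, _In_ PointerEventArgs^ args)
{
    Point position = args->CurrentPoint->Position;

    // Walk the scene objects front to back and give the event to the first one hit.
    for (auto& object : m_sceneObjects)
    {
        if (object->HitTest(position)) // hypothetical bounds test
        {
            object->Recognizer->ProcessDownEvent(args->CurrentPoint);
            break;
        }
    }
}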

Dispatching event messages

In the IFrameworkView::Run method you implemented on the view provider for your app, you created your main processing loop. Inside that loop, call CoreEventDispatcher::ProcessEvents for the event dispatcher on your app's CoreWindow, like this:

CoreWindow::GetForCurrentThread()->Dispatcher->ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);

The CoreProcessEventsOption::ProcessAllIfPresent value tells ProcessEvents to dispatch the callbacks for every event in the message queue at the time ProcessEvents is called. This keeps your pointer data synced with rendering, so touch feedback stays smooth. Use this value if your app has a constantly running, timer-driven render loop and you want to process input events on each timer interval or on every iteration of the loop.

Alternatively, your app might go into a pause or background state where rendering is suspended. In this case, when you enter the state, call ProcessEvents with CoreProcessEventsOption::ProcessOneAndAllPending, which processes any current and pending event messages as your app is paused or suspended.

The following code sample for an implementation of IFrameworkView::Run chooses between the two CoreProcessEventsOption values for ProcessEvents based on app state:

void MyTouchApp::Run()
{
    while (!m_coreWindowClosed)
    {
        switch (m_updateState)
        {
            case UpdateEngineState::Deactivated: // the app's process is not active (it's suspended)
            case UpdateEngineState::Snapped: // the app's window is snapped
                if (!m_renderNeeded)
                {
                    CoreWindow::GetForCurrentThread()->Dispatcher->
                        ProcessEvents(CoreProcessEventsOption::ProcessOneAndAllPending);
                    break;
                }
                // A render is pending, so fall through and draw the frame.
            default: // the app is active and not snapped
                CoreWindow::GetForCurrentThread()->Dispatcher->
                    ProcessEvents(CoreProcessEventsOption::ProcessAllIfPresent);
                m_myGameObject->Render();
                m_myGameObject->PresentSwapChain(); // calls IDXGISwapChain::Present()
                m_renderNeeded = false;
        }
    }
}

For more info about dispatching event messages, see Working with event messaging and CoreWindow (DirectX and C++).

Touch input DirectX samples

Here are some complete code samples that walk you through touch input and gesture support for your Windows Runtime app using DirectX:

Quickstart: Handling pointer input

Responding to user interaction (DirectX and C++)

Working with event messaging and CoreWindow (DirectX and C++)