Chapter 13: Enhancing the Hilo Browser User Interface

In the final version of Hilo, the Annotator and Browser applications provide a number of enhanced user interface (UI) features. For example, the Hilo Browser now provides buttons to launch the Annotator application and to share photos via Flickr, as well as touch screen gestures to pan and zoom images. In this chapter we will see how these features were implemented.

Launching the Share Dialog and Hilo Annotator

The Hilo Browser application now allows you to share selected photos through an online photo sharing application. It also allows you to edit selected photos by launching the Hilo Annotator application. The Hilo Browser has been extended to make it easier to perform these two actions. In the first version of the Browser, double-clicking (or double-tapping) a photo launched the Annotator to edit the photo. The Browser now uses the double-click gesture to enter slide show mode, in which the carousel is hidden and the selected photo is shown at a larger scale (Figure 1).

Figure 1 Browser in slideshow mode

The Hilo Browser provides two buttons: one to open the Share dialog and the other to start the Hilo Annotator application (Figure 2). Normally these two buttons appear as icons in the top right-hand corner of the Browser, but when you hover the mouse cursor over an icon the Browser shows the edge of the button and its caption. These two buttons are not button controls; in fact, they are not even child windows. Instead, the images are Direct2D bitmaps and the click action is performed through hit testing.

Figure 2 Hilo Browser showing the buttons to launch Annotator and Share

The top half of the client area for Browser is implemented by the CarouselPaneMessageHandler class. This class has two members called m_annotatorButtonImage and m_sharingButtonImage which are references to bitmap objects that implement the ID2D1Bitmap interface. Both members are initialized in the CarouselPaneMessageHandler::CreateDeviceResources method by loading the image from a bitmap resource bound to the Browser process (Figure 3).

Figure 3 Icons for Browser's buttons

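Hilo's actual resource-loading helper is not reproduced in this chapter, but the general pattern for decoding an embedded image resource into an ID2D1Bitmap uses the Windows Imaging Component (WIC). The following sketch shows that pattern; the helper name, its parameters, and the use of a WIC factory are assumptions made for illustration rather than Hilo's actual code, and error handling is omitted for brevity.

// Sketch: decode an image resource bound to the executable into an ID2D1Bitmap.
// The helper name, parameters, and resource type are illustrative; error handling is omitted.
HRESULT LoadBitmapFromResource(
   HMODULE module, PCWSTR resourceName, PCWSTR resourceType,
   IWICImagingFactory* wicFactory, ID2D1RenderTarget* renderTarget, ID2D1Bitmap** bitmap)
{
   // Locate the raw image data in the module's resources
   HRSRC resource = ::FindResourceW(module, resourceName, resourceType);
   HGLOBAL resourceData = ::LoadResource(module, resource);
   void* imageData = ::LockResource(resourceData);
   DWORD imageSize = ::SizeofResource(module, resource);

   // Wrap the resource memory in a WIC stream and decode the first frame
   ComPtr<IWICStream> stream;
   wicFactory->CreateStream(&stream);
   stream->InitializeFromMemory(static_cast<BYTE*>(imageData), imageSize);

   ComPtr<IWICBitmapDecoder> decoder;
   wicFactory->CreateDecoderFromStream(stream, NULL, WICDecodeMetadataCacheOnLoad, &decoder);

   ComPtr<IWICBitmapFrameDecode> frame;
   decoder->GetFrame(0, &frame);

   // Convert to the 32-bit premultiplied BGRA format that Direct2D expects
   ComPtr<IWICFormatConverter> converter;
   wicFactory->CreateFormatConverter(&converter);
   converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
      WICBitmapDitherTypeNone, NULL, 0.0, WICBitmapPaletteTypeMedianCut);

   // Create the device-dependent Direct2D bitmap from the WIC source
   return renderTarget->CreateBitmapFromWicBitmap(converter, NULL, bitmap);
}

Because the resulting ID2D1Bitmap is a device-dependent resource, it is created in CreateDeviceResources so that it can be recreated whenever the render target has to be recreated.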

When the window is resized, the message handler calls the CarouselPaneMessageHandler::CalculateApplicationButtonRects method to determine the position of each button image and a rectangle for the selection. When the window is redrawn, the CarouselPaneMessageHandler::DrawClientArea method is called, and this draws the images at the calculated positions using the ID2D1RenderTarget::DrawBitmap method. However, an image on its own neither provides user feedback nor handles mouse clicks.
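The exact layout code is not shown in this chapter, but conceptually CalculateApplicationButtonRects only needs the current size of the render target to anchor each icon to the top right-hand corner. A minimal sketch of that calculation follows; the icon size, margin, and corner radius are illustrative values rather than Hilo's actual constants.

// Sketch: anchor the annotator button to the top right-hand corner of the client area.
// The icon size, margin, and corner radius below are illustrative values, not Hilo's constants.
const float iconSize = 32.0f;
const float margin = 8.0f;

D2D1_SIZE_F size = m_renderTarget->GetSize();

// Rectangle where the annotator icon is drawn
m_annotateButtonImageRect = D2D1::RectF(
   size.width - margin - iconSize,    // left
   margin,                            // top
   size.width - margin,               // right
   margin + iconSize);                // bottom

// Slightly larger rounded rectangle used for hit testing and the hover highlight
m_annotateButtonSelectionRect = D2D1::RoundedRect(
   D2D1::RectF(
      m_annotateButtonImageRect.left - margin,
      m_annotateButtonImageRect.top - margin,
      m_annotateButtonImageRect.right + margin,
      m_annotateButtonImageRect.bottom + margin),
   4.0f,     // x corner radius
   4.0f);    // y corner radius

Recomputing these rectangles whenever the window is resized keeps the icons anchored to the corner regardless of the window size.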

The user feedback is provided by testing whether the mouse is hovering over the button and then drawing the selection rectangle. The following discussion is for the Annotator button, but it applies equally to the Share button. The first action, testing whether the mouse is hovering over the button, is carried out in the CarouselPaneMessageHandler::CheckForMouseHover method in response to a mouse move message; the code in CheckForMouseHover is shown in Listing 1. This code calls Direct2DUtility::HitTest, which simply tests whether the mouse position is within the selection rectangle for the button. The result of the hit test is saved in the Boolean member m_isAnnotatorButtonMouseHover, which is used later in the code.

Listing 1 Hit testing for the Annotator button

if (Direct2DUtility::HitTest(m_annotateButtonSelectionRect.rect, mousePosition))
{
   // The mouse is over the button; redraw only if it has just entered
   if (!m_isAnnotatorButtonMouseHover)
   {
      needsRedraw = true;
      m_isAnnotatorButtonMouseHover = true;
      CalculateApplicationButtonRects();
   }
}
else
{
   // The mouse is not over the button; redraw only if it has just left
   if (m_isAnnotatorButtonMouseHover)
   {
      needsRedraw = true;
      m_isAnnotatorButtonMouseHover = false;
      CalculateApplicationButtonRects();
   }
}
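Direct2DUtility::HitTest itself is nothing more than a point-in-rectangle test. A minimal sketch of such a helper follows; the exact signature Hilo uses may differ.

// Sketch of a point-in-rectangle hit test; the exact Hilo signature may differ
bool HitTest(const D2D1_RECT_F& rect, const D2D1_POINT_2F& point)
{
   return point.x >= rect.left && point.x <= rect.right &&
          point.y >= rect.top && point.y <= rect.bottom;
}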

When the carousel window is redrawn, the m_isAnnotatorButtonMouseHover member variable is checked; if it indicates that the mouse is hovering over the button, the selection rectangle is outlined in a solid color and then filled with the same color at 25% opacity, as shown in Listing 2. There is no code to remove this rectangle when the mouse moves away from the button; in that situation the entire carousel window is simply redrawn without the selection.

Listing 2 Code to draw the selection box around a button image

// Draw selection box for annotate button
if (m_isAnnotatorButtonMouseHover)
{
   // Outline the selection rectangle in the solid selection color
   m_selectionBrush->SetOpacity(1.0f);
   m_renderTarget->DrawRoundedRectangle(m_annotateButtonSelectionRect, m_selectionBrush);

   // Fill the rectangle with the same color at 25% opacity
   m_selectionBrush->SetOpacity(0.25f);
   m_renderTarget->FillRoundedRectangle(m_annotateButtonSelectionRect, m_selectionBrush);

   // Draw the button caption below the selection rectangle
   m_renderTarget->DrawTextLayout(
      D2D1::Point2(m_annotateButtonSelectionRect.rect.left, m_annotateButtonSelectionRect.rect.bottom),
      m_textLayoutAnnotate,
      m_fontBrush);
}

// The button image itself is always drawn, whether or not the mouse is hovering
m_renderTarget->DrawBitmap(m_annotatorButtonImage, m_annotateButtonImageRect);

Mouse clicks are handled in a similar way. Listing 3 shows an excerpt from the handler for the WM_LBUTTONUP mouse message. If the Browser history stack is expanded, button clicks are ignored; if it is not expanded, the handler checks whether the mouse is over one of the buttons and, if so, calls either the MediaPaneMessageHandler::LaunchAnnotator or the MediaPaneMessageHandler::ShareImages method.

Listing 3 Handling Browser button mouse clicks

if (!clickProcessed)
{
   if (m_isHistoryExpanded)
   {
      // other code
   }
   else
   {
      // Check if the user clicked the share or annotate application button
      if (m_isAnnotatorButtonMouseHover || m_isSharingButtonMouseHover)
      {
         ComPtr<IMediaPane> mediaPane;
         hr = m_mediaPane->QueryInterface(&mediaPane);

         if (SUCCEEDED(hr))
         {
            if (m_isAnnotatorButtonMouseHover)
            {
               mediaPane->LaunchAnnotator();
            }
            else
            {
               mediaPane->ShareImages();
            }
         }
      }
   }
}

The MediaPaneMessageHandler::LaunchAnnotator method was covered in an earlier chapter (it simply calls the Windows API function CreateProcess to start the Annotator application). The code to share photos is implemented in the Browser: the MediaPaneMessageHandler::ShareImages method calls a static method on the ShareDialog class to show a modal dialog. The ShareDialog class will be covered in Chapter 15.
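For reference, launching Annotator boils down to a single CreateProcess call, along the lines of the following sketch. The executable path and the photo file name shown here are placeholders; the way Hilo builds the real command line from the selected photos is not shown.

// Sketch: launch the Annotator application with CreateProcess.
// The executable and photo file names below are placeholders, not Hilo's actual values.
STARTUPINFOW startupInfo = { sizeof(startupInfo) };
PROCESS_INFORMATION processInfo = { 0 };

// CreateProcess requires a writable command-line buffer
wchar_t commandLine[MAX_PATH] = L"Annotator.exe photo.jpg";

if (::CreateProcessW(
      NULL,           // derive the executable from the command line
      commandLine,    // application name followed by its arguments
      NULL, NULL,     // default process and thread security attributes
      FALSE,          // do not inherit handles
      0, NULL, NULL,  // default creation flags, environment, and current directory
      &startupInfo,
      &processInfo))
{
   // The Browser does not need the process or thread handles, so close them immediately
   ::CloseHandle(processInfo.hProcess);
   ::CloseHandle(processInfo.hThread);
}

Closing the returned handles does not terminate the new process; it simply releases the Browser's references to it.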

Touch Screen Gestures

Windows 7 provides support for gestures on touch screen computers. Gestures are movements of one or more fingers on the touch screen. Some gestures have corresponding mouse movements, but because a multitouch screen can respond to two-finger movements, there are some gestures that cannot be replicated with the mouse.

Touch screen gestures are relayed to an application through the WM_GESTURE message. The lParam parameter of this message is a handle that is passed to the GetGestureInfo function to return information about the gesture in a GESTUREINFO structure. The caller of the GetGestureInfo function allocates the GESTUREINFO structure (and must set its cbSize member), and the function fills in the rest of the structure. Any code that handles the WM_GESTURE message must close the handle by calling the CloseGestureInfoHandle function. The members of the GESTUREINFO structure that you can use are shown in the following list.

cbSize: The size of the structure, in bytes.

dwFlags: The state of the gesture, such as begin, inertia, and end.

dwID: An identifier to indicate the gesture that is happening.

ptsLocation: A POINTS structure containing the coordinates associated with the gesture. These coordinates are always relative to the origin of the screen.

ullArguments: A 64-bit unsigned integer that contains the arguments for gestures that fit into eight bytes. This is the extra information that is unique for each gesture type and is also passed through the message wParam parameter.

The type of gesture is identified through the dwID field of the GESTUREINFO structure (these values are shown in the following list), and the status of the gesture (whether the gesture has started or finished) is passed through the dwFlags field.

GID_ZOOM (the zoom gesture): ullArguments is the distance between the two points; ptsLocation is the center of the zoom.

GID_PAN (the pan gesture): ullArguments is the distance between the two points; ptsLocation is the current position of the pan.

GID_ROTATE (the rotation gesture): ullArguments is the angle of rotation if the GF_BEGIN flag is set, otherwise the angle change since the rotation started; ptsLocation is the center of the rotation.

GID_TWOFINGERTAP (the two-finger tap gesture): ullArguments is the distance between the two fingers; ptsLocation is the center of the two fingers.

GID_PRESSANDTAP (the press and tap gesture): ullArguments is the delta between the first finger and the second finger, stored in the lower 32 bits as a POINT structure; ptsLocation is the position that the first finger comes down on.

Hilo provides processing for just two of these gestures: GID_PAN and GID_ZOOM. To handle them, the WindowMessageHandler class, the base class of the message handler class hierarchy, declares two virtual methods, OnPan and OnZoom. The WindowMessageHandler::OnMessageReceived method handles messages sent to a window, and Listing 4 shows how the gesture message is handled. At the start of the code you can see where the GetGestureInfo function is called to obtain information about the gesture, and at the end is the call to the CloseGestureInfoHandle function to clean up the resources if the message was handled.

Once the gesture information has been obtained, the handler code tests for the two gestures that can be handled. The handler then decodes the parameters appropriately before calling the virtual method to allow the child window message handler class to respond to the gesture.

Listing 4 Code to handle gesture messages

case WM_GESTURE:
   {
      bool handled = false;

      // Retrieve the details of the gesture from the handle passed in lParam
      GESTUREINFO info;
      info.cbSize = sizeof(info);
      if (::GetGestureInfo((HGESTUREINFO)lParam, &info))
      {
         switch(info.dwID)
         {
         case GID_PAN:
            {
               // Convert the screen coordinates to the current DPI before panning
               D2D1_POINT_2F panLocation = Direct2DUtility::GetPositionForCurrentDPI(info.ptsLocation);
               hr = OnPan(panLocation, info.dwFlags);
               if (SUCCEEDED(hr))
               {
                  if (S_OK == hr)
                  {
                     handled = true;
                  }
               }
               break;
            }

         case GID_ZOOM:
            {
               static double previousValue = 1;
               switch(info.dwFlags)
               {
               case GF_BEGIN:
                     // The gesture is starting, so report no change in scale yet
                     hr = OnZoom(1.0f);
                     break;
               case 0:
                     // Zoom by the ratio of the current finger distance to the previous one
                     hr = OnZoom(static_cast<float>(LODWORD(info.ullArguments) / previousValue));
                     break;
               }

               if (SUCCEEDED(hr))
               {
                  // Remember the current distance for the next zoom message
                  previousValue = LODWORD(info.ullArguments);
                  if (S_OK == hr)
                  {
                     handled = true;
                  }
               }

               break;
            }
         }
      }

      if (handled)
      {
         // The message was handled, so the gesture handle must be closed here
         ::CloseGestureInfoHandle((HGESTUREINFO)lParam);
         *result = 0;
      }
      else
      {
         *result = 1;
      }

      break;
   }

Implementing the Pan Gesture

The pan gesture occurs when you sweep your finger across the touch screen. Typically an application will animate an item on screen to mirror this movement. Windows 7 provides inertia notifications so that an application can respond accordingly. After you take your finger off the touch screen at the end of a pan gesture, Windows 7 calculates the trajectory based on the velocity and angle of motion. It continues to send WM_GESTURE messages of type GID_PAN that are flagged with GF_INERTIA. Windows 7 keeps sending these messages while reducing the speed of the movement, so that eventually the gesture messages stop. In effect, the Windows 7 inertia engine provides the positions for a deceleration animation.

The Browser application handles touch gestures in the media pane. The pan gesture is used to move between photos and is handled by the MediaPaneMessageHandler::PanImage method. This method scrolls only one photo position for each individual pan gesture; to scroll further you have to repeat the gesture. The PanImage method handles the start of the pan gesture by storing the start position. When the next message for the gesture is received, PanImage uses the previous position to determine how far the photo should pan.

The Browser interprets a pan gesture to mean "move to the next photo in the direction of the pan." Once the gesture has started, the Browser will always complete the action, even if you change the finger movement. For example, if you pan to the left and then, before removing your finger, pan to the right, the Browser will still pan to the left photo.

The panning movement is carried out through an animation and occurs after the gesture has finished. Since the Browser will only pan one photo position at a time, it does not respond to the inertia messages for the gesture; in fact, it treats them as indicating that the gesture has ended. The reason is that the Windows 7 inertia engine calculates the inertia response from the speed of the finger movement, which could result in scrolling several photo positions. When a message is received indicating that the pan gesture has ended (or has begun generating inertia messages), the media pane completes the scrolling of the photos by rendering an acceleration-deceleration animation from the current position to the final position.
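The following sketch shows a pan handler built along these lines. It is not Hilo's actual PanImage implementation; the class name, member variables, and the ScrollToNextPhoto and ScrollToPreviousPhoto helpers are illustrative. It records the start position and initial direction of the gesture, and starts the scrolling animation once the gesture ends or begins producing inertia messages.

// Sketch of a pan handler following the pattern described above; the class name,
// members, and the ScrollToNextPhoto/ScrollToPreviousPhoto helpers are illustrative.
HRESULT MediaPaneSketch::OnPan(D2D1_POINT_2F location, DWORD flags)
{
   if (flags & GF_BEGIN)
   {
      // Remember where the gesture started and reset its state
      m_panStart = location;
      m_panDirection = 0;
      m_panScrolled = false;
   }
   else if ((flags & GF_END) || (flags & GF_INERTIA))
   {
      // The gesture has ended (or has started producing inertia messages):
      // complete the scroll once, in the direction recorded when the pan began
      if (!m_panScrolled && m_panDirection != 0)
      {
         if (m_panDirection < 0)
         {
            ScrollToNextPhoto();
         }
         else
         {
            ScrollToPreviousPhoto();
         }
         m_panScrolled = true;
      }
   }
   else if (m_panDirection == 0)
   {
      // Record the direction of the first movement; later changes of direction are ignored
      float delta = location.x - m_panStart.x;
      if (delta < 0)
      {
         m_panDirection = -1;
      }
      else if (delta > 0)
      {
         m_panDirection = 1;
      }
   }

   return S_OK;
}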

Implementing the Zoom Gesture

The zoom gesture is a pinch between two fingers: if the distance between the fingers decreases, the item size should decrease (zoom out); if the distance between the fingers increases, the item size should increase (zoom in).

Listing 4 shows that when the zoom gesture starts, the dwFlags member is GF_BEGIN; this is handled by saving the current distance between the two fingers. This value is used when handling subsequent zoom gesture messages to determine the change in the size of the image. The subsequent messages in the gesture have a value of 0 for the dwFlags member and are handled by zooming by the proportional change in the distance between the fingers since the previous message.
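Because Listing 4 already converts the raw finger distances into a relative factor, the media pane's zoom handler only needs to apply that factor to its current zoom level, along the lines of the following sketch. The member names, the clamping limits, and the repaint helper are illustrative rather than Hilo's actual implementation.

// Sketch: apply the relative zoom factor computed in Listing 4.
// The member names, zoom limits, and repaint helper are illustrative.
HRESULT MediaPaneSketch::OnZoom(float zoomFactor)
{
   // zoomFactor is the ratio of the current finger distance to the previous one,
   // so multiplying accumulates the overall zoom for the gesture
   m_currentZoom *= zoomFactor;

   // Keep the zoom within sensible limits
   if (m_currentZoom < 0.25f)
   {
      m_currentZoom = 0.25f;
   }
   else if (m_currentZoom > 4.0f)
   {
      m_currentZoom = 4.0f;
   }

   // Redraw the selected photo at the new scale
   RedrawSelectedPhoto();   // illustrative helper that triggers a repaint

   return S_OK;
}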

Conclusions

The Hilo Browser provides several UI features. This chapter explained how the Browser implements the buttons that launch the Annotator application and the Share dialog, and how the touch screen gestures are implemented. In the next chapter we will see how the Hilo applications provide jump lists and taskbar thumbnail images.
