SoCalDevGal wants you to be an expert in Windows 7 Touch and Multi Touch capabilities


My curiosity often takes me down new pathways.  Like a typical dev, after stating the business case, writing out the problem, listing the key use cases and ‘hiring’ a designer, I’m ready to open up Visual Studio and start coding my first multi-touch Windows 7 application, right?  Nope, wrong – way wrong.  As I read through the sample code that I’ve managed to find, or to get from the product team, I realized a little problem, which turns out to be a really big thing: just what exactly constitutes a touch gesture anyway?  Coming from the simple world of mouse clicks (a single left click, a single right click, a double left click, and so on), gestures have to be TRANSLATED in a way that is intuitive and natural.  Given my background and general interest in linguistics and translation problems, I am, of course, now totally fascinated with understanding more.

I’ve also been studying the details behind the different types of hardware that support touch, what is native to Windows 7, and what I can choose to code myself.  Before I write that code, I think it will be beneficial to share what I’ve learned with you, so here goes.  First, on the hardware side:

1. Capacitive touch – found in devices such as the HP TouchSmart tx2 and Dell Latitude XT.  It works by a finger (and it must be a finger or, less commonly, a specialized stylus) disrupting an electrical field that covers the entire screen of the device.  It can support single, dual, or multi-touch, and can be either built into the device or added as an overlay (N-Trig).  Be aware that you may need to install additional drivers to make maximum use of your hardware.  This is the more expensive multi-touch option.

2. Infrared touch – found in devices such as Microsoft Surface and the HP TouchSmart.  It works via an infrared field of light (usually built into the corners or the frame of the device); anything (i.e. any object) can disrupt that field, and the ‘touch’ is captured on or slightly above the surface of the device.  NextWindow sells both overlays and integrated systems.  Infrared is cheaper to scale and is often used in over-sized wall displays.
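
Before targeting one of these hardware types, it’s worth checking at run time what the machine actually reports.  Here’s a minimal C++ sketch (my own illustration, not from the product team) that queries the digitizer capabilities via GetSystemMetrics, assuming the Windows 7 SDK headers:

```cpp
#define _WIN32_WINNT 0x0601   // pull in the Windows 7 definitions
#include <windows.h>
#include <iostream>

// Report which touch digitizer capabilities Windows sees on this machine.
int main()
{
    int digitizer = GetSystemMetrics(SM_DIGITIZER);

    if (digitizer & NID_READY)
        std::cout << "Digitizer is ready for input\n";
    if (digitizer & NID_INTEGRATED_TOUCH)
        std::cout << "Integrated touch digitizer present\n";
    if (digitizer & NID_EXTERNAL_TOUCH)
        std::cout << "External touch digitizer (overlay) present\n";
    if (digitizer & NID_MULTI_INPUT)
        std::cout << "Multi-touch supported, max contacts: "
                  << GetSystemMetrics(SM_MAXIMUMTOUCHES) << "\n";

    return 0;
}
```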

Wikipedia on the topic ‘Touchscreen’ has a bit more detail if you are interested – here.

Next, on the software side – I have found that a key learning area is to take a look at what we’ve done with Microsoft Surface.  Yes, I know we are NOT here to talk about programming the Surface table, but let’s face it, those folks have a few years of practice with this whole natural gesture thing, and we shouldn’t just blow that off.  To that end, on my weekly geekSpeak show last week we hosted developer Brad Cunningham from InterKnowlogy.  On the show, he talked about the whole ‘move from clicks to touches’ application development migration, and I learned a bunch.  During his presentation, Brad had a great suggestion to help end users understand supported gestures: in a newly created application, his company is going to include an opening simulation that teaches users the gestures used in the game.  Brad also talked about the paradigm shift from GUI to NUI (natural user interface).  He recommended the book ‘Designing Gestural Interfaces’ by Dan Saffer.

Continuing this discussion, I am going to attempt to list the levels of gestural support built into Windows 7 (and some pre-loaded applications).  I will continue by listing the types of gestures that you can code, and I will also translate some gestures to mouse actions.  Brad made a great point when he noted that a finger isn’t as precise as a mouse pointer, so some actions that are relatively easy with a mouse, such as drag and drop, really don’t translate well from a usability standpoint to a touch-only device.  Below is a starter list translating touch gestures to mouse actions (most of which Brad provided; I added some items from other documentation), followed by a sketch of how some of these gestures arrive in code.  I’d be really interested to hear any feedback you might have on this list as well!

| Touch Gesture(s) | Mouse Action(s) |
| --- | --- |
| Single finger tap | Mouse up, mouse down (left click) |
| Double tap with single finger | Double left click |
| Flick single finger | Click and toss |
| Fling single finger, tap to stop | Click and toss to scroll, click to stop |
| Fling double finger, tap to stop | None |
| Press and hold with single finger, tap with second finger | Right click |
| Slide | Click and drag to scroll |
| Double finger pinch to shrink or drill in on a specific location | Roll the mouse wheel, but cannot select a specific area (of a photo, for example) |
| Double finger spread to enlarge or drill out | Roll the mouse wheel |
| Single finger drag to move or drop | Click and drag |
| Multiple finger drag to move or drop | None |
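
To make the translation concrete, here is a rough Win32 C++ sketch of how several of the gestures above arrive at a window as WM_GESTURE messages.  The gesture IDs are real Windows 7 SDK names; the empty case bodies are placeholders for your own handling, and the comments note the approximate mouse equivalents from my table:

```cpp
#define _WIN32_WINNT 0x0601   // pull in the Windows 7 definitions
#include <windows.h>

// Minimal window procedure showing WM_GESTURE decoding.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_GESTURE:
    {
        GESTUREINFO gi = {};
        gi.cbSize = sizeof(GESTUREINFO);
        if (GetGestureInfo(reinterpret_cast<HGESTUREINFO>(lParam), &gi))
        {
            switch (gi.dwID)
            {
            case GID_PAN:          break; // slide: click and drag to scroll
            case GID_ZOOM:         break; // pinch/spread: roughly the mouse wheel
            case GID_PRESSANDTAP:  break; // press and hold + second-finger tap: right click
            case GID_TWOFINGERTAP: break; // no direct mouse equivalent
            }
            // Tell Windows we consumed the gesture.
            CloseGestureInfoHandle(reinterpret_cast<HGESTUREINFO>(lParam));
            return 0;
        }
        break;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```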

Reading this list should start to give you an idea of the potential complexity around touch application architecture.  Another interesting consideration is whether you will make any changes to your UI depending on whether a user interacts with a mouse or with touch.  An example of how this has been implemented in Windows 7 is in the new jumplists (for an example, look – here).  If you aren’t familiar with them, a jumplist is a list of the most commonly chosen ‘next’ items, invoked by right-clicking an application in the taskbar.  If you invoke a jumplist with a mouse click, the spacing between the items is narrower than if you invoke it via touch; the thinking is that you need more space to select items with your finger than with a mouse.  Below is an example from Windows Media Player.

Windows 7 Windows Media Player Jumplist
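
The jumplist behavior raises a practical question for your own applications: how do you know whether a given click originated from a finger or from a mouse?  Windows 7 tags mouse messages that were synthesized from touch with a documented signature in the message’s extra info, and the check looks roughly like this minimal C++ sketch:

```cpp
#include <windows.h>

// Documented Windows 7 signature: mouse messages synthesized from touch
// carry this value in the upper 24 bits of the message extra info.
#define MOUSEEVENTF_FROMTOUCH 0xFF515700

// Call from inside a mouse message handler (e.g. WM_LBUTTONDOWN) to decide
// whether to present the roomier, finger-friendly layout.
bool IsCurrentMessageFromTouch()
{
    return (GetMessageExtraInfo() & 0xFFFFFF00) == MOUSEEVENTF_FROMTOUCH;
}
```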

 

Another consideration when creating touch-enabled applications is understanding the core Windows 7 OS settings.  There is a new applet in the Windows 7 Control Panel called ‘Pen and Touch’ that lets you configure these core touch settings.  You’ll note that ‘Double-tap’ is set by default to be equivalent to a mouse double-click, and that ‘Press and hold’ equals a mouse right-click, but also that these defaults are adjustable at the OS level.

Touch section of Windows 7 Control Panel

It’s probably time to take a look at touch in action.  Below are a couple of short videos; in each I demonstrate a couple of touch-enabled applications.  Some applications ship with Windows 7 (Paint); others are samples that you can get from either the Windows 7 SDK or the Windows 7 Touch Pack.  First, the Lagoon screen saver:

The next demo video shows how built-in touch capabilities work with some applications that ship with Windows 7.  I show IE8, the XPS Viewer and Paint as examples.  Here I also mention the concept of ‘good, better or best’ touch applications; I’ll explain this idea in more detail later in this blog post.

The last video shows some examples from the Windows 7 SDK of manually coded applications that support either single gestures or multi-touch.  For this last section, you can easily get the source code as well.

Now that you are as intrigued as I am (which I assume is true, since you are STILL reading this long blog post!), let’s get to some architecture concepts.  The first thing to understand is the idea of ‘Good, Better or Best’ touch applications.  Of course, the hardware capabilities must support whatever level you wish to implement.  Here’s a chart to get us started, followed by a sketch of what the ‘Best’ level looks like in code.

| Type | Support | Built-in App | Description |
| --- | --- | --- | --- |
| Good | built in | IE8 | single finger: simple select, scroll |
| Better | you code / WM_GESTURE | XPS Viewer | simple gestures: zoom, pan |
| Best | you code / WM_TOUCH | Paint | multi-touch, possibly also inertia |
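
As a sketch of that ‘Best’ level, here is roughly what opting into raw WM_TOUCH looks like in Win32 C++, assuming the Windows 7 SDK headers and an already-created window.  Note that once a window registers for touch it no longer receives WM_GESTURE; the two message streams are mutually exclusive per window:

```cpp
#define _WIN32_WINNT 0x0601   // pull in the Windows 7 definitions
#include <windows.h>

// Call once after creating the window to opt into raw touch messages:
//   RegisterTouchWindow(hwnd, 0);

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_TOUCH:
    {
        UINT count = LOWORD(wParam);   // number of simultaneous contact points
        TOUCHINPUT inputs[16] = {};    // plenty for current hardware
        if (count <= 16 &&
            GetTouchInputInfo(reinterpret_cast<HTOUCHINPUT>(lParam),
                              count, inputs, sizeof(TOUCHINPUT)))
        {
            for (UINT i = 0; i < count; ++i)
            {
                // Touch coordinates arrive in hundredths of a screen pixel.
                LONG x = inputs[i].x / 100;
                LONG y = inputs[i].y / 100;
                if (inputs[i].dwFlags & TOUCHEVENTF_DOWN) { /* contact down at (x, y) */ }
                if (inputs[i].dwFlags & TOUCHEVENTF_MOVE) { /* contact moved */ }
                if (inputs[i].dwFlags & TOUCHEVENTF_UP)   { /* contact lifted */ }
            }
            // Tell Windows we consumed the touch input.
            CloseTouchInputHandle(reinterpret_cast<HTOUCHINPUT>(lParam));
            return 0;
        }
        break;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```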

One thing to be aware of is that I am talking here about coding with .NET 4.0.  Some of these gestures can be captured using .NET 3.5, but not in exactly the same way.  Even within .NET 4.0 there are variations between beta 1 and beta 2, such as those around Windows 7 WM_TOUCH manipulations or raw gestures.  For example, beta 1 supports manipulations only (which can also be done in .NET 3.5 by working with stylus events), while beta 2 can pick up raw gestures.  As I’ve mentioned previously, there are some managed wrappers available already.  Also of note is that our teams are working on converging the Surface and .NET 4.0 codebases – for more detail, see the Channel9 videos on the basics of Windows touch applications – here.

We’ve obviously got lots more to think about in the new world of touch-based application programming.  Hopefully this blog post has got you thinking.  More detail to come as I start to work on my first application over the next few weeks…

Comments

  • Anonymous
    June 07, 2009
    Great job with this Lynn!! Probably the single best post on Windows 7 I have seen to date :-)

  • Anonymous
    June 08, 2009
    Funny.. we just talked about this for at least an hour in labs, about who will set the definition for the way touch works. It's so new that there is very little in the way of a uniform command interface for the users.

  • Anonymous
    July 19, 2009
    Thanks for the descriptive post. I have had a tough time finding a general overview, and until this post, I didn't realize that WM_TOUCH and WM_GESTURE were mutually exclusive.