How creating Surface apps is different, part 1

In winter/spring of this year I was asked to do some application-development training for our internal teams in China and Japan as well as our external partners. In addition to drawing on my own experience developing Surface apps, I talked with many team members about what they thought future developers should know. One result of this process was a list of the ways developing for Surface is different from developing for the desktop. Over the next two posts I'll share this list. I admit some of these things seem pretty obvious once you hear them. That's often the mark of a very usable piece of information (at other times it can be the mark of restating the obvious). Here goes:

No screen orientation

The assumption that computer displays have one orientation starts high in the system and goes down deep. Even the computer in the projector has what it thinks is “up.” The OS, UI frameworks, and development tools all think you want an application where everything is oriented the same way.

We rely heavily on WPF's ability to rotate user interface elements in any direction you want. This ability to set a "transform" at any level in your UI is one of the primary reasons we decided to use WPF as the main platform for Surface application development. In developing for Surface we often put the bulk of the UI in a "UserControl" so it can be replicated for multiple users and oriented to face each of them. The Photos demo is a good example: each photo or video is a UserControl, and a transform is set on each one so that the photo is scaled, positioned, and rotated however the user wants.
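To make that concrete, here is a minimal sketch of the technique, assuming a hypothetical PhotoItem control standing in for one of the Photos demo's tiles (the names and values below are illustrative, not from the actual demo):

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;

    // Hypothetical control standing in for a photo or video tile.
    public class PhotoItem : UserControl { }

    public static class OrientationExample
    {
        // Scale, rotate, and position an item so it faces a user standing at an
        // arbitrary edge of the table. Angle, scale, and position come from
        // wherever the application tracks the user's manipulation.
        public static void FaceUser(PhotoItem item, double angleInDegrees,
                                    double scale, Point position)
        {
            var transform = new TransformGroup();
            transform.Children.Add(new ScaleTransform(scale, scale));
            transform.Children.Add(new RotateTransform(angleInDegrees));
            transform.Children.Add(new TranslateTransform(position.X, position.Y));

            // Rotate and scale around the item's center rather than its top-left corner.
            item.RenderTransformOrigin = new Point(0.5, 0.5);
            item.RenderTransform = transform;
        }
    }

Because the transform can be applied at any level, the same approach works whether the rotated element is a single photo or an entire per-user region of the UI.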

Designing a user interface without an orientation is also quite difficult. It usually takes a few iterations before none of the UI elements imply an orientation. A close look at the demo applications will reveal some places where the design still assumes the user is on one side of the Surface.

Multiple “mouse pointers”

There is a lot of similarity between a single “contact” on the Surface and the mouse pointer on a regular PC screen. Dragging your finger across the Surface is very similar to dragging a mouse. Unfortunately, the conventional computing system is built to expect just one mouse pointer. Even if you connect multiple mice to a single PC, you still just get one mouse pointer on the screen.

Fortunately, WPF is flexible enough to allow us to put Surface "contact" events into its event stream. So in addition to seeing mouse events, your UI will see events generated from Surface interaction. WPF does not do much more for you at this point, though. What your UI does with a bunch of contacts moving over it is up to the UI. A paint application can simply draw lines on the screen that follow the positions of the contacts it sees. An application like Photos has to do some math with all the contacts it sees in order for the photo to respond intuitively to the user. This can be a lot more complicated than in the single-mouse world. A goal of the SDK is to simplify this for Surface application developers by providing controls that give you the behavior you want without your having to handle all the events directly. Robert Levy will talk more about this in his posts.
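As a rough illustration of the kind of math involved, here is a generic two-contact sketch (not the SDK's actual manipulation code) that derives scale and rotation deltas from two contacts that have moved between frames:

    using System.Windows;

    public static class TwoContactMath
    {
        // Given the previous and current positions of two contacts on a photo,
        // compute how much the photo should additionally scale and rotate so it
        // keeps tracking the user's fingers. Translation can be derived in a
        // similar way from the movement of the midpoint between the contacts.
        public static void GetDeltas(Point oldA, Point oldB, Point newA, Point newB,
                                     out double scaleDelta, out double angleDeltaInDegrees)
        {
            Vector oldSpan = oldB - oldA;   // line joining the contacts last frame
            Vector newSpan = newB - newA;   // line joining the contacts this frame

            // Scale: ratio of the distances between the two contacts.
            scaleDelta = newSpan.Length / oldSpan.Length;

            // Rotation: change in the angle of the line joining the two contacts.
            angleDeltaInDegrees = Vector.AngleBetween(oldSpan, newSpan);
        }
    }

The resulting deltas can then be composed with the photo's existing transform, which is exactly the sort of bookkeeping the SDK's controls are meant to take off your hands.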

More in Part 2.

Comments

  • Anonymous
    November 21, 2007
    "WPF is flexible enough to allow us to put Surface "contact" events into its event stream." How do you make this? Can you give me example?

  • Anonymous
    November 21, 2007
    Over at the Surface blog they're talking about what goes into creating an application for Surface. Two

  • Anonymous
    November 28, 2007
    The comment has been removed

  • Anonymous
    November 28, 2007
I have found a class named InputManager. It looks like it is responsible for all input in WPF. It has support for Mouse, Keyboard, and Stylus. I wonder if it could be used to implement "touch" devices? I understand that controls like ScatterView are custom controls. But what about a Button? Do I have to inherit from Button and write my own code so that it responds to contacts? Or is it possible to emulate mouse input and use standard WPF controls?

  • Anonymous
    November 29, 2007
    that's a great question, nesher.  i'll try to cover that in a more detailed post about our WPF layer in a few weeks. -r

  • Anonymous
    November 30, 2007
    great news rlevy, I'm waiting for your post :)

  • Anonymous
    December 01, 2007
    "Unfortunately, the conventional computing system is built to expect just one mouse pointer." Unfortunately, the conventional computing system uses only one mouse pointer by default. Support for multiple mouse pointers or generally, multiple input devices is present in the form of low level raw-input in Win32.

  • Anonymous
    December 02, 2007
    Microsoft does support multiple mice in Managed apps via the MultiPoint SDK - http://channel9.msdn.com/Showpost.aspx?postid=266221 Download at: http://www.microsoft.com/downloads/details.aspx?FamilyID=a137998b-e8d6-4fff-b805-2798d2c6e41d&DisplayLang=en adam...

  • Anonymous
    December 06, 2007
Adamhill, unfortunately MultiPoint does not support the touchpad and stylus on my Tablet PC (Toshiba Portege m400).

  • Anonymous
    December 19, 2007
When can we expect Part 2? Also, I would like to read more about the SDK.

  • Anonymous
    January 03, 2008
    How creating Surface apps is different, part 2 Continuing the list from my previous post Multiple simultaneous

  • Anonymous
    June 30, 2008
The information you provided was really helpful; I am waiting for your part 2 post.