How did WPF become the primary platform for Surface?

When I joined the Surface team at the start of 2004 it was already clear that we were doing something different, and that the traditional Windows UI platforms were not going to work.  The UI platforms of the day were designed to produce apps that are operated by a single user with a keyboard and mouse on a desktop PC.  A table-top screen with users sitting on any side wielding the equivalent of a mouse per finger was not something these frameworks were designed for.

Our first plan was to go the game-development route and create our own UI framework on top of DirectX.  Being the scrappy group we are, we actually created three different frameworks to try out different ideas.  For me it was also a chance to learn DirectX.  While developing the frameworks, we also developed applications to try them out.  The Paint program you’ve seen in the demos is one of these apps that is still going strong.

When we were ready to consolidate our frameworks in the spring of ’04 we started hearing more about a GPU-accelerated UI framework called Avalon (now called Windows Presentation Foundation, or WPF).  Since it was from the Windows group, we only expected a nicer User/GDI, but we nonetheless did the due diligence and explored it.  Even at that early stage everyone was pretty amazed at the demos and what the API provided.  The “RenderTransform” in Avalon allowed you to rotate parts of your app to face arbitrary directions.  Another part of Avalon that encouraged us to use it was “RoutedEvents.”  These let us send events from the Surface input hardware right to the appropriate application UI components.  It was almost as if they had Surface in mind before anyone had ever heard of Surface.
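For readers unfamiliar with these APIs, here is a minimal XAML sketch of the kind of thing described above: a RotateTransform assigned to a control's RenderTransform so its contents face a user sitting on the opposite edge of the table.  The specific control and angle are illustrative choices, not taken from the Surface SDK.

```xml
<!-- Illustrative sketch: rotate a control 180 degrees so it faces a user
     on the opposite side of a table-top display.  RenderTransformOrigin
     is set to the control's center so it rotates in place. -->
<Button Content="Tap me"
        RenderTransformOrigin="0.5,0.5">
    <Button.RenderTransform>
        <RotateTransform Angle="180" />
    </Button.RenderTransform>
</Button>
```

Because RenderTransform applies at render time, layout and hit-testing still work on the rotated element, which is what makes arbitrary orientations practical for a multi-sided screen.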

After I did some proof-of-concept work, the Surface dev team self-hosted on Longhorn and started doing all of our app development on WPF (Surface trivia: the first WPF app running on Surface was a kids' “memory” game or what Dr. Neil called "Match the Pairs".)  This was before the Longhorn reset, and things were not painless; but we remained productive.  We followed the WPF team as they designed a brand new UI framework, implemented it, tested it, supported developers using it, created tools for it, and continue to maintain it.  Had we gone ahead and implemented our own UI framework we would have had to do a lot of the same.  I’d much rather be writing cool Surface applications.

P.S. I've been slacking off on the blog for a while doing my day job.  I now have a bunch of articles in the pipeline on development topics, but tell me if there is anything development-wise you would like to hear more about. - Kevin

Comments

  • Anonymous
    March 16, 2009
On the Surface team we’ve done a lot of investigation into 3D in the UI.  I’ll ping one of our UX people to see if they can blog about it.

    A 3D UI to replace or augment the traditional desktop has been a dream for some time.  In 1995, when I was working on Windows 95, Ian Ellison-Taylor made a rough prototype of a 3D desktop using a 3D library that was a precursor to Direct3D.  It was fun to watch, but it would have been a tough sell as a replacement for the Windows desktop millions of people were used to even then.  There were also a lot of other efforts before this to move into the 3D realm.  (Anyone remember the Doom Unix admin console?)

    To me the challenges with 3D in the UI are not around implementation but around producing a functional and satisfying user experience.  You can go to YouTube and find a lot of great-looking and fun 3D user interfaces and desktop replacements.  However, these things have been around for many years and none have really caught on.  I often wonder how they would pass the test of “grab an average computer user, replace their desktop with this, give them a few tasks, and see how they do.”
