Surface Developer Tip: The Core Interaction Framework

The Surface platform supports two UI frameworks: WPF & XNA.  When the Microsoft Surface SP1 platform was launched earlier this year, we mainly talked about the improvements made for WPF developers.  Today I’d like to talk about a big addition we made to the Surface SDK for XNA developers: the “Core Interaction Framework” (or CIF).

The Core Interaction Framework aims to speed up common UI implementation tasks and provide users with a more consistent experience.  Before describing the CIF, though, I need to explain what led us to create it.

The WPF programming model consists primarily of developers laying out a set of controls to define their UI structure, setting properties to customize the look and feel of those controls, and then writing code to handle specific events that the controls raise.
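
To make that concrete, here’s a tiny code-only sketch of that model using a plain WPF Button (a real Surface app would use the Surface WPF controls, and most developers would define this in XAML rather than code):

    using System.Windows;
    using System.Windows.Controls;

    public class DemoWindow : Window
    {
        public DemoWindow()
        {
            // Lay out a control and set properties to customize its look and feel...
            var button = new Button
            {
                Content = "Tap me",
                FontSize = 24,
                Margin = new Thickness(16)
            };

            // ...then handle the events the control raises.
            button.Click += (sender, e) => button.Content = "Thanks!";

            Content = button;
        }
    }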

XNA, on the other hand, is a much lower-level framework where the programming model is made up of two primitive stages: 1) apps are asked to “update” their internal state – this is usually when they’ll ask each input device (mouse, keyboard, gamepad, or Surface) for its state and then do a bunch of processing to interpret that input and decide how to respond to it.  2) apps are asked to “draw” onto the screen based on their state.  These two stages happen really, really fast, over and over again, enabling high quality UI where the developer has a ton of flexibility.
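
In code, that loop is just the two overrides on XNA’s Game class.  Here’s a bare skeleton, with the actual input reading and drawing left as comments:

    using Microsoft.Xna.Framework;

    public class MyApp : Game
    {
        GraphicsDeviceManager graphics;

        public MyApp()
        {
            graphics = new GraphicsDeviceManager(this);
        }

        protected override void Update(GameTime gameTime)
        {
            // 1) Ask each input device for its state and update the app's
            //    internal state accordingly.
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            // 2) Render to the screen based on that state.
            base.Draw(gameTime);
        }
    }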

(Yes, this is a gross oversimplification of both frameworks, but it’s enough background for the topic of this post :)

For Surface v1, we decided to enable the development of Surface apps with XNA but put most of our investment into the WPF side of things.  We went this way because the control-based model of WPF enabled us to work on figuring out good ways to handle common interaction design problems (through research/experimentation/iteration) and then wrap our “best practices” up into reusable controls that apps could build upon and customize.  For a simple example, if a user puts 2 fingers down on a button and only lifts 1 finger up, should the button “click” or wait until the other finger also lifts up?  We came to a good solution to this problem (it’s more nuanced than a simple yes or no, so I’ll save that for a later post) and then built those best practices into our Button, RepeatButton, ToggleButton, RadioButton, and CheckBox controls.  So now whenever you use Surface WPF applications, their buttons all have the same behaviors and nuances.

Meanwhile, XNA developers were simply empowered to leverage Surface input in their Update & Draw loops.  Even without a controls framework, several of our early partners decided to go with XNA instead of WPF.  Some because their developers had more familiarity with the XNA programming model (coming from a DirectX background, for example) and some because they needed the powerful graphics capabilities only provided by XNA.  The apps these partners built were quite compelling & well received.  However, back in Redmond we noticed inconsistencies between how our WPF controls behaved and how the equivalents our partners had to create from scratch in XNA behaved.  Seeing these subtle differences made us realize that there was an opportunity for us to improve the Surface experience for both developers and end users: 1) our partners were having to invest time in solving these basic problems rather than working on the unique scenarios of their apps, and 2) users had to discover/remember behavioral differences for common interactions when switching between apps.

For the SP1 release of the Microsoft Surface platform we decided to take on these problems and mitigate them with what we call the “Core Interaction Framework” or CIF.  Many XNA developers want low-level control of things like how their UI is rendered and how hit testing is performed – but very few want to get into the nitty gritty of how buttons and scrollbars actually behave.  So what we did with the CIF is create a bunch of reusable infrastructure classes that handle just the nitty gritty of how common controls behave – without making any assumptions or requirements regarding how the app does rendering or hit testing.  These classes are essentially configurable “state machines” – you set them up, delegate certain touch inputs to them, and then have your UI render based on the state they report.  Some controls have relatively simple state: a button’s state consists of “am I currently being pressed” and “was I just clicked”.  Other controls are a little more complicated: a scrollbar reports back what part of it (track vs. thumb) is currently being touched, where along the track the thumb should be rendered and how big it should be, and what the currently selected ‘value’ is.

The basic steps to leveraging CIF are pretty simple (there’s a rough code sketch after the list):

  1. Construct a UIController.  This is the object that will orchestrate passing input between your app and the various control state machines.
  2. Construct state machine objects to represent each of your controls (ButtonStateMachine, ListBoxStateMachine, …).
  3. Create a hit testing callback method and provide it to the UIController.  This delegate will be passed info about each touch input.  Your job is to call Set*HitTestDetails on each of those inputs to tell the controller which state machine the input corresponds to.
  4. In your Update method, call UIController.Update().  This will trigger the controller to do some internal processing and invoke your hit testing callback with the latest Surface input data. 
  5. Also in your Update method, check your state machine objects for things you need to react to; for example:
        if (myButtonStateMachine.GotClicked) { // ... }
  6. In your Draw method, render each of your controls according to their state.  For example, you’d use different visuals to represent a button state machine when its IsPressed property is true.
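
Put together (building on the bare Game skeleton from earlier), the result looks something like the sketch below.  To be clear, this is only a rough sketch: it sticks to the members named in the steps above (UIController.Update(), GotClicked, IsPressed) and leaves the construction of the controller, the state machines, and the hit testing callback as a comment, since the exact constructor and callback signatures are documented in CoreFramework.chm and shown in the samples listed at the end of this post.  The using directive for the CIF namespace may also need adjusting to match the .\Core\Framework project.

    using Microsoft.Xna.Framework;
    using CoreInteractionFramework;   // adjust to match the CIF project in .\Core\Framework

    public class MyApp : Game
    {
        // Steps 1-3: construct these at startup (e.g. in Initialize), along with
        // the hit testing callback you provide to the UIController.  See
        // CoreFramework.chm for the exact constructor and callback signatures.
        UIController controller;
        ButtonStateMachine myButtonStateMachine;

        // (GraphicsDeviceManager and content loading omitted; same as any XNA app.)

        protected override void Update(GameTime gameTime)
        {
            // Step 4: let the controller process the latest Surface input and
            // invoke the hit testing callback.
            controller.Update();

            // Step 5: check the state machines for things to react to.
            if (myButtonStateMachine.GotClicked)
            {
                // ... respond to the click
            }

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            // Step 6: render each control based on its state, e.g. a "pressed"
            // visual while myButtonStateMachine.IsPressed is true.
            base.Draw(gameTime);
        }
    }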

And that’s it!  You can get really creative about how these controls look on the screen (a curved scrollbar, for example) while staying confident that their interaction model is consistent with other Surface apps and that you’re leveraging all of the research done by the Surface team.

The Surface SDK also has sample code which shows how to use these state machines in XNA with visuals that align with our WPF controls, but that’s a totally optional add-on.

Everything you need to get started with this is in the Samples directory of the Surface SDK:

.\Core\Framework – The code for the CIF itself.  You can compile and use this as-is or make any changes/additions to meet your app’s needs.

.\Core\Framework\CoreFramework.chm – This help file contains API reference documentation for all public classes in the CIF.  You can refer to this as a quick reference to what the CIF contains and how to use it.

.\Core\Cloth – A new sample we created for the SP1 release which demonstrates nearly every feature of the CIF.  It’s fun to play with too :)

.\Core\XnaScatter – An updated version of the sample from Surface v1 which now uses a CIF-based implementation of ScatterView for much simpler code.

If you use these things, we’d love to get your feedback.  We’d also love feedback if you’re a Surface XNA developer and decide not to use the CIF for new projects.

-Robert