Context-Aware Dialogue with Kinect: Part 2 of our Q&A with Leland Holmquest

Earlier this month I posted part of an interview with Leland Holmquest, who wrote our fascinating April issue exploration of the Kinect SDK (Context-Aware Dialogue with Kinect), in which he creates a Kinect-enabled Windows Presentation Foundation application as part of his PhD coursework at George Mason University. The result is a virtual office assistant driven by context-aware dialogue and multimodal communication. Here is the conclusion of our conversation about his experience working with the Kinect SDK.

Michael Desmond: Any advice for readers who might be interested in developing Kinect-aware applications of their own?

Leland Holmquest: The best piece of advice I can give is to go through the samples that Microsoft provides and literally play with them first. Understand what each sample project demonstrates and how it relates to the project you want to create. Don’t read other people’s comments or tips (especially from sources that were hacking prior to the SDK release) until you are comfortable and understand the samples. It’s not that there isn’t good material out there – there is a lot of good material. But there is also some bad advice, and it can quickly lead you down bad roads. I mention this briefly in my article.

The other tip I can offer is to share with the community what you have done. Kinect can be used in so many different ways and in so many different domains. That’s one of the things that I think make Kinect such an exciting product. Just think of the movie Minority Report. We can do all of those cool UI things that were done in that movie. We really are limited only by our imaginations.

MD: What value have you personally gotten out of writing this feature? How does the process of writing an article like this one help refine and extend your own skill sets?

LH: I really like writing about this stuff. I find that describing a technical solution forces you to think about it more carefully and precisely than you normally would, which in turn forces you to develop a better understanding of the solution and its technical details. Plus, we don’t want to put anything out for public consumption that is wrong and exposes our own ignorance, so we do a more thorough job of researching.

For example, in my article I mention how you can succinctly build objects by declaring property values inline with the constructor call. I thought this was new to the .NET Framework 4. Fortunately, I looked it up and found that it had actually been around since C# 3.0 -- I just didn’t know about it.
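The feature in question is C#'s object initializer syntax. As a quick illustration (the `Assistant` class here is hypothetical, not taken from the article's code), properties can be set inline at the point of construction without writing a matching constructor overload:

```csharp
// Hypothetical example class; the article's own types are not shown here.
class Assistant
{
    public string Name { get; set; }
    public int TaskCount { get; set; }
}

class Program
{
    static void Main()
    {
        // Object initializer: assign properties in the same expression
        // as the constructor call, instead of a separate statement per property.
        var helper = new Assistant { Name = "Kate", TaskCount = 3 };

        System.Console.WriteLine("{0}: {1}", helper.Name, helper.TaskCount);
    }
}
```

The compiler expands this to a default-constructor call followed by ordinary property assignments, which is why it works on any type with settable properties.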

In addition, programming the Kinect is very rewarding. It opened my mind and got my creativity really flowing. Everywhere I look now I see software applications crying out for the kind of NUI that Kinect can provide. While it may not be appropriate in every scenario, where it is appropriate, it is amazing and easy to incorporate. It’s basically limitless with a creative mind.

MD: Any thoughts going forward? What might you consider exploring next?

LH: We can talk about incorporating gestures, depth processing, and 3D modeling. There is so much potential for what one can do with Kinect that we could keep writing for the next several years and still just scratch the surface. What will be the most fun is responding to others’ comments and questions. I am certain we will see things that no one else even thought about doing. I can’t wait!