Charles Petzold on his DirectX Factor Column

Charles Petzold this month launches his new DirectX Factor column, which will explore the opportunities developers have in Windows 8 to write fast, native code for both traditional client and mobile/tablet form factors. I caught up with Charles and asked him about his plans with the new column.

Michael Desmond: Your new column is called DirectX Factor. Why DirectX? Why now?

Charles Petzold: The short answer is: Windows 8.

For almost a decade now, I’ve been coding for various .NET platforms using C# and XAML, so when Microsoft unveiled Windows 8, of course it was great to see a XAML-based framework in the Windows 8 API. At the same time, I was quite intrigued to see support of C++, not as a managed language but to generate native code, and moreover, to see an extensive DirectX API that can be accessed only from C++.

Well, of course, it all makes sense. This is Windows, and we want to be able to write high-performance code for Windows. We want an option to use a language that’s been designed to generate fast native code, and we want the option of targeting an API that’s been designed for high-performance utilization of video and sound hardware.

Windows 8 gives us these options, and this is essential: As Windows runs on smaller and less hefty processors on tablets and other mobile devices, and as we strive to make the user interfaces of our Windows applications faster and more fluid, C++ and DirectX have become more important than ever. This is what I’d like to explore in this column.

Desmond: So should we be looking to move away from managed platforms and languages?

Petzold: No, no, no, not at all. But we need to choose our languages and platforms intelligently. For many applications, XAML in combination with a managed language is fine. But if you need high-performance graphics, or text, or sound, don’t kid yourself into thinking that C# and XAML represent the peak of efficiency. Get serious.

I think we’ll be seeing more solutions where applications are coded in multiple languages. Programmers will identify and isolate the time-critical and graphics-intensive stuff, and code that in C++ in the form of a Windows Runtime Component library, which can then be accessed by C# or JavaScript code.

Sure, it would be great to use one universal language and platform. But programming is not some abstract mathematical exercise. This is the real world, and we need to be very alert to performance issues. We need to know when managed code is fine, and when native code is essential.

Desmond: You've been focused on Windows Phone and mobile/touch development for a couple of years. How does the new column affect that focus? Will we still see Windows Phone coverage here?

Petzold: Windows 8 actually highlights a problem with Windows Phone 7. The Windows Phone 7 API supports managed code only, with two application platforms. If you can’t get your XAML-based Windows Phone 7 program to run fast enough, you can try rewriting it for XNA, but with XNA you’re still using a managed language and you’re still going through a managed API. And if you can’t get your Windows Phone 7 XNA program to run fast enough, you’re stuck. You’ve hit the wall.

The Windows Phone 8 API corrects this deficiency in the same way as Windows 8 -- by supporting native C++ code and DirectX. Naturally that provokes the question: What’s involved in developing DirectX code that can run on both Windows 8 and Windows Phone 8 hardware? That’s something I’d definitely like to explore in this column.

Desmond: What topics can readers expect the DirectX Factor column to address in the near future?

Petzold: I’ve decided to begin exploring DirectX under Windows 8 with the sound-generation component, which has the rather unsexy name XAudio2. But within a few months I expect to get into 3D graphics. As we developers work with more intimate touch-based user interfaces, I am particularly interested in how we can use touch to manipulate complex graphical objects. I actually think this is one of the keys to giving rather bulky fingers more precision on the screen.

All the sample programs I’ll be describing in DirectX Factor will combine DirectX with a XAML-based user interface. That’s something we really haven’t been able to do before, and I’m thrilled to have the opportunity to write about these techniques.

Comments

  • Anonymous
    January 10, 2013
I remember reading some time back, when WPF first came out, that it sat on top of DirectX for its graphics. Now you're saying that this wasn't the case and it was slow all along ... what's the truth here? Was Direct3D only used for WPF 3D graphics? How about WPF 2D graphics, then? Now they're saying that if you're serious about graphics speed you have to develop in C++. Seems like a bit of a mess.

  • Anonymous
    January 10, 2013
It looks like WPF was a layer sitting on top of DX -- a thin layer, possibly, but a layer nonetheless. C++/DirectX (or, to be fair, OpenGL) were always going to be the best route for pure 3D performance, but a lot of it is going to depend on how much performance you need. If you're trying to write a triple-A-level engine, then you're going to need to go that route. If you're writing something more casual, or you're more interested in writing a game than an engine, then use a premade engine.

  • Anonymous
    January 11, 2013
Blah, should have signed in before posting -- it looks like WPF was a layer on DX, and a layer is going to lose something in translation. If you're writing an engine, such a thing may be unacceptable. But for (most) app-level performance it's probably not an issue. Even from the gamedev perspective it may not be worth it to write your own DX rather than use a scripting language in one of the popular engine choices. YMMV.


  • Anonymous
    February 06, 2013
Figuring out how much "speed" is needed is important. Last year, I developed a complex 3D graphic UI that mirrored all of the operations of a large industrial tool. The original graphic models were the same ones the hardware team used to build the stuff [autotest inventor], and there was plenty of realtime animation. This was pretty much pushing the limit [many "experts" told me it would fail] ... but the development was MUCH simpler than if it had been done at a lower level.