Making the wheel just a little more rounded...

I've been busy the past few weeks ramping up on Windows Communication Foundation and, more importantly, helping the team decide what we will actually do with it in Visual Studio Orcas.

The next release of Visual Studio will be coming out 'sometime after' Windows Vista, and the idea is to help developers take advantage of the new features in the platform (Avalon, Indigo, etc.).  So Vista developers need our next release to be on time.  Everyone wants it to be bug free.  And I am sure Soma would like it to be on-budget too.

So where does that leave me and my team?  Doing two things: figuring out the right set of features we can deliver for Orcas, and exploring new ways we can do more with less.

I'll save the 'how we decided to close down on features' until I can publicly talk about the great things that we will do in Orcas to support WCF.  For now, let me talk about number two briefly.

Perhaps the greatest thing we (the VS Indigo QA team) have going for us is that we don't have any testcase baggage.  No maintenance, no existing infrastructure.  A perfect tabula rasa for automation.  So right now we are deciding how we can innovate in that space to do more with our testcases.

Each of these topics deserves its own blog post, but I'd like to give you a hint at the tricks we have up our sleeves for testing VS, which will ultimately lead to a more stable product for you.

1. Enabling more testhooks within the Visual Studio product
In Whidbey, only a few testcases used white-box approaches and invoked Visual Studio APIs and objects directly.  Sure, running a test by driving the UI and interacting with the product just like a developer would is great, but it is slow and fragile.

By designing our test infrastructure with testhooks in mind, we can write our automation faster and enjoy the speed and reliability benefits as well.
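
To make that concrete, here is a minimal sketch of the pattern.  It's in Python purely for illustration (the real hooks are internal .NET APIs inside Visual Studio), and names like ServiceReferenceModel and add_reference are made up for this example: the test calls the product object directly instead of scripting menus and dialogs.

```python
# Hypothetical illustration; ServiceReferenceModel and add_reference are
# invented names standing in for a product object exposed via a testhook.

class ServiceReferenceModel:
    """Stand-in for a product object a testhook would hand to the test."""

    def __init__(self):
        self.references = []

    def add_reference(self, address):
        # The product logic under test, reached without any UI automation.
        self.references.append(address)
        return address in self.references


def test_add_reference_via_hook():
    # White-box: exercise the API directly instead of driving menus and
    # dialogs, which is faster and far less fragile.
    model = ServiceReferenceModel()
    assert model.add_reference("http://localhost/Service1.svc")


if __name__ == "__main__":
    test_add_reference_via_hook()
    print("add-reference hook test passed")
```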

2. Testcase maintenance tools
For folks working on the VB compiler, it isn't uncommon to maintain more than 3,000 automated tests.  For UI-designer features, ~300 is more reasonable, but that is still a lot.

Something we are working on is structuring our testcases so we can put them all into a single project within Visual Studio (rather than each testcase being its own, separate entity).  That way, if I need to refactor an interface in some base support layer, all my testcases will be updated at the same time.
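
Here is a rough sketch of that shape, again in Python with invented names (DesignerTestBase, open_designer): every testcase in the project derives from one shared support layer, so changing that layer's interface updates every test in one place.

```python
# Hypothetical illustration of one project of testcases over a shared
# support layer; all names here are made up for the sketch.

class DesignerTestBase:
    """Shared support layer; refactor this once and every test picks it up."""

    def open_designer(self, file_name):
        # A real suite would launch the designer for file_name; here we just
        # return a message so the sketch stays runnable.
        return f"designer opened for {file_name}"


class AddServiceReferenceTest(DesignerTestBase):
    def run(self):
        return self.open_designer("Service1.svc")


class RenameContractTest(DesignerTestBase):
    def run(self):
        return self.open_designer("IService1.cs")


if __name__ == "__main__":
    for test in (AddServiceReferenceTest(), RenameContractTest()):
        print(test.run())
```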

3. How we measure test coverage and product quality
Metrics are definitely helpful when trying to understand the state of the product, but finding the right set of metrics can be extremely difficult.  Code coverage?  Active bugs?  Closed/Resolved bugs?  What about automation stability? 

I'm currently working on a tool to normalize existing data to make it more actionable.  For example, in Whidbey we said, "Developer X has Y bugs assigned to her, which is bad."  In Orcas we can say, "Developer X has a bug temperature of Y, so no worries."

What is ‘bug temperature’ you ask?  I don’t know yet, but it might look something like this:

bug temperature = severity * A + priority * B + days active * C + blocking status * D

Basically, it's a way to take our existing bugs and, rather than reporting a flat count, measure our outstanding bug debt with a little more perspective.  One horrible bug weighs more than five minor issues.
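
If you want to play with the idea, a toy calculation might look like the following.  The weights (a through d) and the field scales are placeholders, since the real formula isn't settled yet.

```python
# A toy 'bug temperature' calculation; the weights (a-d) and field scales
# are placeholders, not the real formula.

def bug_temperature(severity, priority, days_active, is_blocking,
                    a=3.0, b=2.0, c=0.1, d=8.0):
    """Weighted score so one horrible bug outweighs several minor issues."""
    blocking = 1 if is_blocking else 0
    return severity * a + priority * b + days_active * c + blocking * d


if __name__ == "__main__":
    horrible = bug_temperature(severity=4, priority=4, days_active=10, is_blocking=True)
    minor = bug_temperature(severity=1, priority=1, days_active=2, is_blocking=False)
    print(f"one horrible bug:  {horrible}")
    print(f"five minor issues: {5 * minor}")
```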

So that's just a taste of what I have been up to.  In the coming weeks I'll post more about the specifics of each of those new testing projects I am spearheading, as well as give you a preview of what Indigo/WCF features Visual Studio will support in our next release.

Until next time, code on!