'Wall Time' as a way of determining correctness

Quite a few years back, I joined the Windows Server 2003 team as a developer.  The Windows Server 2003 organization was massive – I was part of the UDDI team, which resided under Directory Services.

Our QA team had a suite of about 300 automated tests.  They collected the ‘wall time’ of each test – literally, how many minutes and seconds each test took to run.  The per-test ‘wall time’ amounts were aggregated into a total run time for the test suite.

When I first joined, that suite of tests was taking about 2 hours to complete.  That was a concern because the test suite used to take only about 20 minutes.  We used a homegrown solution to store our test data, and this was 2002, so there was no SQL Server 2005 Business Intelligence to take advantage of.

My first task when I joined was to determine if there was a performance problem – the number of tests in the suite had increased, and so had our functionality – and if there was one, to fix it.  I spent a long while in our profiling tools (crude compared to what we have today) and tracked the problem down to a Thread.Sleep() that someone had accidentally left in the code.

What was ironic was that we had several unit tests that targeted that particular piece of code.  Looking back at the test results for those unit tests, I could see that the ‘wall time’ for those tests had almost tripled.  On their own, these slower unit tests added only a little time to the test run.  But when the Thread.Sleep() code those unit tests were targeting was hit by our load/scalability tests, the slowdown became exponential.
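To see why a delay that is barely noticeable in a unit test can wreck a load test, here is a minimal sketch – the helper, the 50 ms delay, and the call count are all hypothetical numbers, not the actual UDDI code:

```csharp
using System;
using System.Threading;

class SleepAmplification
{
    // Hypothetical helper with an accidental Thread.Sleep() left in.
    // A 50 ms pause is easy to miss in a single unit test run.
    static void ProcessRequest()
    {
        Thread.Sleep(50);   // the accidental delay
        // ...the real work would go here...
    }

    static void Main()
    {
        // A unit test calls the helper once: ~50 ms of extra wall time.
        // A load test calls it thousands of times, so the same delay
        // turns into minutes of extra suite run time.
        const int loadTestCalls = 10000;
        Console.WriteLine("Extra time per unit test: ~50 ms");
        Console.WriteLine($"Extra time under load: ~{50L * loadTestCalls / 1000} s");
    }
}
```

A per-call cost that a unit test can't distinguish from noise gets multiplied by every caller in a load run – which is why tracking per-test wall time over time matters.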

What we didn’t have then was a way to express how long a test was supposed to take.  We checked for return values, the presence of exceptions, etc., but we couldn’t say that a test should fail if it took longer than some amount of time.

This ‘timeout’ is something that is built into Visual Studio Team System.  If you look at the properties of a test, you’ll see a timeout property.

[Screenshot: a test’s Properties window, showing the Timeout property]

The value of this property is a handy way to define the performance characteristics of a given test.  Run times will of course differ based on hardware, but this property is a good sanity check.
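The same timeout can also be expressed in code, via the Timeout attribute in the VSTS unit-testing framework.  A sketch – the test class, method name, and the 2000 ms budget are hypothetical; pick a bound comfortably above the test’s normal run time on your hardware:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PublishServiceTests
{
    // Fail this test if it takes longer than 2 seconds of wall time,
    // even if all of its assertions would otherwise pass.
    [TestMethod]
    [Timeout(2000)]
    public void SaveEntity_CompletesWithinBudget()
    {
        // ...exercise the code under test here...
    }
}
```

With a timeout in place, a reintroduced Thread.Sleep() shows up as a failing test instead of a slowly creeping suite run time.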

Thanks,

Eric.