Challenges in Test – Impeccable Infrastructure

In order to test a product there is always a fair amount of infrastructure needed to keep the wheels turning.  Just a few examples of the types of infrastructure that test requires (with a small sketch of the shared-code piece after the list) are:

  • Test harness
  • Logger
  • Common code for building test automation
  • Reporting websites
  • Databases for storing various results
  • Machines for running tests in the lab
  • Topology for your product to simulate customer deployments
  • Tools for managing your infrastructure
  • Tools for testing & possibly for customers
  • Etc.
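
To make the "common code" bullet concrete, here is a minimal sketch of the kind of shared plumbing a harness might provide: a common logger plus a result record that tests can hand off to a results database. All names here are hypothetical, not taken from any real harness.

```python
# A minimal sketch of the "common code" layer: a shared logger and a result
# record that every automated test can reuse. All names are hypothetical.
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("harness")

@dataclass
class TestResult:
    name: str
    passed: bool
    duration_secs: float
    started_at: str      # ISO timestamp, for the results database

def run_test(name, test_fn):
    """Run one test function, log the outcome, and return a TestResult."""
    start = datetime.now(timezone.utc)
    try:
        test_fn()
        passed = True
    except Exception:
        log.exception("%s raised", name)
        passed = False
    duration = (datetime.now(timezone.utc) - start).total_seconds()
    log.info("%s: %s (%.2fs)", name, "PASS" if passed else "FAIL", duration)
    return TestResult(name, passed, duration, start.isoformat())
```

Most of the rest of the list (reporting sites, results databases, lab management) tends to grow up around a core like this.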

As you might imagine, this can become quite an undertaking, and in larger teams it is sometimes delegated to a dedicated lab team to support.

On top of all this work sits the set of tests that your team is running, & many times the same infrastructure also supports your development team for their needs.  High-quality infrastructure is something that can be taken for granted until there is a problem, which at best slows down your team’s efficiency & at worst invalidates your testing due to some bad assumption baked into it.

Bugs are an equal opportunity employer, so just as the previous post argued that you need to prove your product is correct, the same obligation extends to your test infrastructure. When you find a bug, are you ready & able to prove that it resides in the product code & is not a problem with your test infrastructure?  Here are a couple of real-world examples to further demonstrate the need to focus on how you build your test infrastructure.

Will this thing ever finish?

While working on the sync engine for Microsoft Identity Integration Server (MIIS), we were investigating the product’s ability to scale.  A tester had instrumented some counters to track the rate at which sync occurred as it ran through a couple million objects.  This process was taking a couple of days, which was expected given the scale being tested.  The tester was leveraging the data about the rate of sync to gauge the performance of the synchronization engine & then extrapolate an ETA for the test to complete.  As he plotted this out each day, the rate would drop & he would give an updated ETA.  Below is a hypothetical table for the data that was being collected.

Day   Rate (objs/day)   Objects remaining   Est. days remaining
1     50,000            2,000,000           40.0
2     50,000            1,950,000           39.0
3     49,000            1,900,000           38.8
4     48,000            1,851,000           38.6
5     45,000            1,803,000           40.1
6     42,000            1,758,000           41.9
7     40,000            1,716,000           42.9

Note: This data is completely made up just for illustration; it was years ago & I don’t have the actual data.
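
For concreteness, here is the "at today’s rate" arithmetic from the table as a small Python sketch (same made-up numbers); you can watch the estimate drift out instead of down:

```python
# Naive ETA extrapolation: remaining work divided by today's observed rate.
# Numbers are the illustrative ones from the table above, not real data.
samples = [
    # (day, rate in objects/day, objects remaining)
    (1, 50_000, 2_000_000),
    (2, 50_000, 1_950_000),
    (3, 49_000, 1_900_000),
    (4, 48_000, 1_851_000),
    (5, 45_000, 1_803_000),
    (6, 42_000, 1_758_000),
    (7, 40_000, 1_716_000),
]

for day, rate, remaining in samples:
    eta = remaining / rate  # assumes today's rate holds, which it didn't
    print(f"day {day}: est. {eta:.1f} days remaining")
```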

After several days of this, we realized that the rate was dropping enough every day that the job would never complete. Sound the alarms, we have a bug!

Now you spin up your developer & proudly point to a newly found bug.  After some investigation, though, lo & behold, yes there is a bug, but it is the test infrastructure causing it.  What is that you say?  How can my test cause a product bug? In this case it turned out that the counters the tester had implemented to monitor the product were issuing an expensive query via WMI on a VERY aggressive schedule. This query was tying up system resources & thereby causing the slowdown in the product.

The lesson learned is to be aware of the observer effect & to ensure your test code is as unobtrusive as possible when measuring things like performance.
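
One cheap guard, sketched below under the assumption that you can time your own probe: measure what the monitoring query itself costs & throttle the polling so the probe stays a tiny fraction of wall-clock time. `read_counter` is a hypothetical stand-in for the expensive WMI query.

```python
# Keep the observer cheap: time each probe and back off so monitoring stays
# around 1% of wall-clock time. read_counter is a hypothetical placeholder
# for the expensive WMI-style query in the story above.
import time

def read_counter():
    time.sleep(0.05)   # simulate a costly query against the system under test
    return 42

def sample(num_samples, overhead_budget=0.01):
    """Poll the counter, sleeping long enough that the probe's share of
    total time stays under overhead_budget (1% by default)."""
    for _ in range(num_samples):
        start = time.perf_counter()
        value = read_counter()
        probe_cost = time.perf_counter() - start
        print(f"counter={value} (probe took {probe_cost:.3f}s)")
        time.sleep(probe_cost / overhead_budget)  # 0.05s probe -> ~5s gap

sample(3)
```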

Test A says pass, but Test B says fail?

In another example, while working on performance for Forefront Identity Manager (FIM), we were in the midst of a major performance push.  We had some very talented developers on the project &, as great developers do, they had written some excellent unit tests to track their progress close to the API level.  Similarly, we had tests covering end-to-end UI scenarios for performance. The developers reported great progress in improving our performance, so we kicked off our tests to see how we were looking end to end.  It so happened that our management was meeting to discuss our project’s progress with their managers, so they were eager for a performance update. We kicked off the UI performance tests & found that everything was timing out!

We had two very different results, so which should we tell our managers to report? We had to figure out which test was correct & why there was a difference. A good start is to minimize the variables, so we took the tests out of the picture by doing some manual testing.  Sure enough, our manual results were in line with what the UI tests were showing, but why were the two sets of tests showing different results?  We then started profiling the SQL Server using both the unit tests & the UI test, & sure enough the queries accounted for the difference in time taken.  This goes back to knowing your product: based on our knowledge of the product we had a good idea of where to start looking.  We ultimately debugged into the unit test & realized that it had a config switch that enabled some additional optimizations which the dev had forgotten to enable in the product.
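
A cheap way to catch this class of bug is to assert parity between the switches your test harness flips & the product’s shipping defaults. Here is a hedged sketch, with hypothetical flag names & loader functions standing in for the real config system:

```python
# Guard against test/product config drift: fail loudly if the unit-test
# harness enables anything the product does not ship with. The flag names
# and loader functions here are hypothetical.
def product_default_config():
    return {"query_optimizations": False, "batch_size": 100}

def unit_test_config():
    # In the story above, the unit tests had an optimization switched on
    # that the dev had forgotten to enable in the product itself.
    return {"query_optimizations": True, "batch_size": 100}

def assert_config_parity(test_cfg, product_cfg):
    drift = {key: (test_cfg[key], product_cfg.get(key))
             for key in test_cfg if test_cfg[key] != product_cfg.get(key)}
    assert not drift, f"test config diverges from product defaults: {drift}"

try:
    assert_config_parity(unit_test_config(), product_default_config())
except AssertionError as err:
    print(err)  # -> {'query_optimizations': (True, False)}
```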

Build it & they will come

All this to say: whenever you build any piece of code, there are sure to be bugs in it. Teams must not view something as "just test code" & therefore hold it to a lower quality bar; otherwise it will quickly cost them time & invalidate their results.

Similarly, as a tester you get to build all kinds of infrastructure to help make the team more efficient. This also means you get to learn all kinds of new technology.  Need to build a new report?  What tech do you want to use to build it? Why not write it in Silverlight or WPF to learn something new?  You get to be the PM, Dev, & Tester for these types of items.  Just remember to test it thoroughly, otherwise there might be a bug that will bite you later.