

Just because you can test it doesn’t mean you should

A wise test lead once said to me, “do as little as possible while still ensuring quality.”  He wasn’t giving me tips on how to be a slacker =)  What he meant was that there always seems to be more work than there are people, so it pays to be as efficient as possible, especially when prolonged testing can hold up shipping a product.

 

One thing I’ve seen at Microsoft is the occasional tendency to want to test everything, which isn’t a bad thing as long as you don’t have to ship =)  Hence the title – “just because you can test it doesn’t mean you should.”  If you have several components in a stack calling each other, it makes sense to test at the highest layer and automatically get coverage of the lower components (there’s a small sketch of this after the list below).  Some people also call this end-to-end testing.  When you test each component separately, you may run into the following issues:

  • Duplication of effort.  Tests of lower-level components end up covering the same ground as end-to-end tests.
  • You may not be testing a component the way it will be used in the product.  For example, your tests call an API passing it X, but the caller in the product will only ever call the API with Y.
  • Holes in coverage.  Because you’re testing components in isolation, you may miss mainstream customer scenarios.
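
To make the layering idea concrete, here’s a minimal sketch (all class names are hypothetical, not from any real product): one test through the top layer exercises every layer beneath it, using the same call pattern the product uses.

```python
# Hypothetical three-layer stack: Api -> AccountService -> Storage.

class Storage:
    """Lowest layer: a trivial in-memory store."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key, default=0):
        return self._rows.get(key, default)


class AccountService:
    """Middle layer: business logic on top of Storage."""
    def __init__(self, storage):
        self._storage = storage

    def deposit(self, account, amount):
        balance = self._storage.load(account) + amount
        self._storage.save(account, balance)
        return balance


class Api:
    """Top layer: the entry point the product actually calls."""
    def __init__(self):
        self._service = AccountService(Storage())

    def handle_deposit(self, account, amount):
        return self._service.deposit(account, amount)


def test_deposit_end_to_end():
    # One call at the highest layer covers Api, AccountService,
    # and Storage together, the way the product really uses them.
    api = Api()
    assert api.handle_deposit("alice", 50) == 50
    assert api.handle_deposit("alice", 25) == 75
```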

 

So my recommendation?

  • Test at the highest layer whenever possible.
  • If the top layer is not yet available, then temporarily test the lower layers, with the intention that those tests will be replaced by end-to-end tests.
  • Focus testing on main customer scenarios first, taking a top-down approach.  This maximizes coverage while focusing on the scenarios customers will hit first.
  • Use code coverage tools to drill down on each component’s coverage.  If code is not being exercised by top-level components, then question whether that code should be there at all.
  • Push devs to do unit testing of their components (a minimal example follows this list).
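
For that last point, here’s a sketch of the kind of unit test a dev might check in alongside a feature.  The parse_amount helper is hypothetical, and the coverage.py commands in the trailing comment are one way to act on the coverage recommendation above.

```python
import pytest


def parse_amount(text):
    """Hypothetical helper: parse a string like '12.50' into cents."""
    dollars, _, cents = text.partition(".")
    if not dollars.isdigit() or (cents and not cents.isdigit()):
        raise ValueError(f"not a valid amount: {text!r}")
    return int(dollars) * 100 + int(cents or "0")


def test_parse_amount_whole_and_fractional():
    assert parse_amount("12.50") == 1250
    assert parse_amount("7") == 700


def test_parse_amount_rejects_garbage():
    with pytest.raises(ValueError):
        parse_amount("12.5x")

# To see which lines the top-level suites never touch, run the tests
# under a coverage tool such as coverage.py:
#   coverage run -m pytest && coverage report -m
```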

 

Of course, there are times when component testing is called for:

  • Component code cannot easily be hit by end-to-end testing.  An example is negative or error testing.
  • A component is consumed by multiple clients, or it's a public API that anybody can consume.  In this case, you should treat this component as the highest layer.
  • Performance tests should ideally go through the top layer, with each component logging its own performance numbers.  But if you want to test things like throughput, it might make sense to go through the component directly.
  • Any testing where going through the top layer would just be too slow or not possible, like security testing, stress testing, fault injection, etc.  For example, you may need to ensure an app can open 100K different files for fuzz testing, but going through the UI would be painfully slow.  (A small sketch of component-level negative testing follows this list.)
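
As an illustration of that first bullet, here’s a hypothetical sketch of component-level negative testing: imagine the UI validates file extensions before ever calling the loader, so the loader’s error path can’t be reached end to end and has to be tested directly.

```python
import pytest


class ConfigLoader:
    """Hypothetical component; the UI filters paths before calling it."""
    def load(self, path):
        if not path.endswith(".cfg"):
            # Error path the UI never triggers -- hit it directly instead.
            raise ValueError(f"unsupported config file: {path!r}")
        with open(path, encoding="utf-8") as f:
            return f.read()


def test_loader_rejects_unknown_extension():
    with pytest.raises(ValueError):
        ConfigLoader().load("settings.txt")
```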

 

What about MVVM (Model View ViewModel), you say?  This is where you have a layer (the ViewModel) that encapsulates the data and functionality needed by the UI, allowing you to have a very thin UI layer, even making it easy to replace the UI layer if needed.  Some folks prefer testing against the ViewModel as opposed to going through the UI.  As somebody who has done both, I can tell you that testing against the ViewModel is much easier, and UI testing can be a pain.  But in my opinion, the easy way is not worth the risk.  We saw several bugs slip through because the thin UI layer that supposedly didn’t have any bugs of course did.  Going through the ViewModel has its uses, like expediting certain operations, but I don’t recommend exclusively testing against it.
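
To illustrate the risk (a hypothetical Python sketch, not an actual WPF app): the ViewModel test below is fast and easy to debug, but it never executes the View’s wiring code, which is exactly where a “bug-free” thin layer can bite.

```python
class CounterViewModel:
    """Encapsulates the state and commands the UI binds to."""
    def __init__(self):
        self.count = 0

    @property
    def label_text(self):
        return f"Clicks: {self.count}"

    def increment_command(self):
        self.count += 1


class CounterView:
    """Thin View: its only job is wiring UI events to the ViewModel.
    A typo here (say, calling a misspelled command name) would pass
    every ViewModel test below and still ship broken."""
    def __init__(self, view_model):
        self._vm = view_model

    def on_button_click(self):
        self._vm.increment_command()


def test_view_model_increments():
    # Fast and debuggable -- but note CounterView is never exercised.
    vm = CounterViewModel()
    vm.increment_command()
    assert vm.label_text == "Clicks: 1"
```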

 

So to close: test end to end first, and component test only if you need to.  Lazy testers are efficient testers, unless they’re trying to outsource their work on Craigslist =)  Feel free to comment if you have any thoughts on this topic.

 

-Samson Tanrena

Comments

  • Anonymous
    December 13, 2011
    I disagree with you on the MVVM part. Tests which target the ViewModel can be executed orders of magnitude faster and be much more thorough than tests of the View, plus they are debuggable in ways that are impossible with View testing. Yes, there need to be some tests which ensure that the View is wired up correctly to the ViewModel, but that's it. The only thing the View should be doing is wiring itself up to the ViewModel; anything that's actually interesting to test should live in the ViewModel or lower.

  • Anonymous
    December 13, 2011
    If, as you say, we should test at the highest layer whenever possible, then how do we unit test and do TDD?

  • Anonymous
    December 14, 2011
    "the developer who should've already done unit testing." It's this attitude which causes a lot of bit rot. If the developers tests aren't part of the daily test runs they become bloat, overhead and dead code. The unit testing should never be done. It needs to constantly happen.

  • Anonymous
    December 14, 2011
    @jader3rd, I wasn't implying that unit tests are only written or run once.  In fact, my team runs them daily and with every checkin.  A commendable practice is to check in unit tests at the same time as features.  Of course, this doesn't preclude devs from adding more unit tests later on. Regards, Samson

  • Anonymous
    December 14, 2011
    As the code grows older, it starts to rot. The only way we can improve it is to change it. How do we know our changes won't break it and cause regressions? Because we tested it. How can we trust our tests? Because we only wrote production code to make a test pass. When you start being selective, you lose this confidence and have holes where regressions could occur. If you need to take the short-term gain to ship something ASAP, then at least be aware you have taken on a long-term compromise. Just be aware of the trade-offs.

  • Anonymous
    December 14, 2011
    Perhaps this might be the explanation for Microsoft's perennial QC issues...

  • Anonymous
    December 14, 2011
    Hmm! Actually I'm a bit disappointed that you're an MS employee. I really like MS software and make a living developing mainly in .NET. It is well known that low-level continuous unit testing, and preferably TDD, is the best way of ensuring quality, reducing the number of bugs, and thus speeding up the whole release cycle. Microsoft, through MSDN, conference attendance, Visual Studio testing frameworks, etc., pushes this idea itself, so I'm more than a little surprised at your post - I disagree with you on almost every point, you're just plain wrong, and I'm wondering whether this career-limiting post was well considered.

  • Anonymous
    December 15, 2011
    Kraiven, I've been a professional developer for 21 years and have had good success with the principles I cited.  BTW, Windows Live Tester's position is strong, defensible, and well-articulated.  Even if it's provocative, it should not be in any way career-limiting.

  • Anonymous
    December 15, 2011
    I am a mature developer, and I believe the author is a mature developer.  TDD is not a panacea, nor is a software project doomed without it. I would also point out that I said I agree software should be tested at the highest level possible.  That doesn't mean you wait for a complete vertical slice of functionality.  (But, if a vertical slice IS available, you use it.) More importantly, I think the O&M phase of the project life-cycle needs to be taken into consideration.  Automated integration tests don't die when the software goes into production.  They actually become even more important in maintenance mode, when the software is updated/enhanced, often by developers who aren't familiar with it. In my opinion, unit tests usually only need to be run once and so are of limited value.  You only need to rerun a unit test if you modify the specific piece of code that it is testing.  A suite of automated integration tests is far more useful and should be run at every check-in.

  • Anonymous
    December 15, 2011
    To be able to refactor code, unit/programmer tests need to be independent of the software architecture.  To allow this, the tests need to be written at the highest level practical.  Anything is possible, but intelligent people can weigh trade-offs and determine what is most practical.  To be able to refactor, to do evolutionary design, one needs to be able to change the underlying structure of the code and rely on the existing test suites to validate that nothing has changed.  If one needs to change both the code and the tests, then the cost goes up and the benefit of the tests goes down.
