Myth Busting Testing #3: Test Automation is like car insurance.

Ok, Myth #3: An automated test must find bugs to be useful.

Here's a scenario for you: you have a car, and you have car insurance, right? Do you really need that insurance? If you're a safe driver, maybe you file a claim every 5 years, and maybe you pay more in premiums than you ever get back (which must be the case, generally speaking, otherwise insurance companies would be out of business, right?). You haven't had a ticket in 5 years, and you haven't had an accident in 10. But aren't you glad you have the insurance? You know that if something happens, you'll be covered.

So think of test automation like that insurance policy.

Reality: Tests don't have to "find bugs" to be useful. Test automation's real value lies in validating that defects have not been introduced into previously working code. Tests that pass provide meaningful data about the state of the codebase under development and about the development process.

Just as insurance gives you peace of mind, so too does test automation.
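To make "validating" concrete, here's a trivial sketch in Python (the function and test are hypothetical, not anything from our lab): this test "finds" nothing on the day it's written, and that's the point. Its job is to keep passing, and the first day it fails is the day someone reintroduces a defect into working code.

```python
# test_discount.py -- a regression test pinned to behavior that already works.
# (Hypothetical example; any unit-test framework would do.)

def apply_discount(price, percent):
    """Previously working code: discount a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_discount_does_not_regress():
    # These assertions pass today. Their job is to keep passing --
    # they only fire if a future check-in breaks the working behavior.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0
```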

Let me give you another example.

My team has been in a coding milestone for 10 weeks now. Monday, we start a stabilization period. When we started the coding milestone, all our automated testing was passing 100%. Our smoke tests, for the most part, have been at 100% throughout the 10 weeks. Up until about 4 weeks ago, our set of daily automated tests was passing at 100%. But then we took a partner drop, and sure enough, we had 2 failures in that larger set right after it. The next week brought another pickup, and with it 4 failures; the week after that, 7; and the week after that, 9. We had fixed many of the previous weeks' failures and filed bugs against our partner, but we couldn't get ahead of the incoming breaking changes.
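For illustration, here's a minimal sketch of tracking that kind of trend (the numbers mirror the story above; the function name and threshold are made up): a small script over weekly failure counts that flags when incoming breaks are outpacing the fixes.

```python
# Hypothetical weekly failure counts from our larger daily test set.
# (Numbers mirror the story above; illustrative, not real lab data.)
weekly_failures = {
    "week 6": 0,   # before the first partner drop
    "week 7": 2,   # first partner drop picked up
    "week 8": 4,
    "week 9": 7,
    "week 10": 9,
}

def flag_rising_failures(history, threshold=2):
    """Warn when failures grow week over week despite ongoing fixes."""
    weeks = list(history)  # insertion order is chronological
    for prev, curr in zip(weeks, weeks[1:]):
        delta = history[curr] - history[prev]
        if delta >= threshold:
            print(f"{curr}: +{delta} failures since {prev} -- "
                  f"incoming breaks may be outpacing the fixes")

flag_rising_failures(weekly_failures)
```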

So here I am, using my test automation finding bugs as an example of why a test's entire purpose in life is not necessarily finding bugs. Huh? Adam, didn't you just say the real value lies in validating that defects haven't been introduced? Well, what useful data can you derive from the tests passing consistently?

  • Confidence: higher confidence in giving my partners drops of our code.
  • Where to spend time: less time testing in those more stable areas, and more focus on the areas where we've seen defects.
  • Our Churn: If things aren't failing at the rate we expected, we're probably not seeing as many dev check-ins as we anticipated.
  • Partner Churn: If things aren't failing when we pick up partner drops, then we need to find out why the breaking changes we expected aren't causing failures. Did they actually make it in? Or are we just doing an amazing job of fixing all the breaks before we pick their code up?
  • Assessment of process: I can look at the test check-ins and see whether the reason we aren't seeing failures is that our dev and test teams are communicating efficiently about breaking changes and addressing them before they fail in our test lab (see the sketch after this list).
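As a rough sketch of what those signals can look like in code (assumed names, numbers, and thresholds throughout; this isn't a real system), a nightly job could compare the observed failure count against what you'd expect from check-in churn and flag the mismatches called out above:

```python
def assess_run(results, checkins, expected_fail_per_checkin=0.05):
    """Compare observed failures against a rough expectation from churn.

    results: list of (test_name, passed) tuples from the nightly run
    checkins: number of dev check-ins since the last run
    expected_fail_per_checkin: assumed historical rate (made up here)
    """
    failures = sum(1 for _, passed in results if not passed)
    expected = checkins * expected_fail_per_checkin

    if failures == 0 and checkins == 0:
        print("All green, but no check-ins either -- is churn lower than planned?")
    elif failures == 0 and expected >= 1:
        print("All green despite churn -- did the partner drop actually make it in, "
              "or are breaks being fixed before they hit the lab?")
    elif failures > expected:
        print(f"{failures} failures vs ~{expected:.0f} expected -- "
              f"focus testing on the churning areas.")
    else:
        print(f"{failures} failures, in line with churn -- "
              f"confidence to give partners a drop.")

# Example: 120 tests, 3 failures, 40 check-ins since the last run.
results = [(f"test_{i}", i % 40 != 0) for i in range(120)]
assess_run(results, checkins=40)
```

Note that the interesting branches are the quiet ones: all-green runs that should have failed are exactly the "why aren't we breaking?" questions in the list above.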

There's really a lot of valuable data to be had from a well-designed test automation system with well-designed tests that "don't find bugs" because they pass regularly.

One more time: 

Tests don't have to "find bugs" to be useful. Test automation's real value lies in validating that defects have not been introduced into previously working code. Tests that pass provide meaningful data about the state of the codebase under development and about the development process.
