Stop writing bad tests. Write only the tests that you can do great.

I've been working on a talk on ways to make unit testing easier. It has not been going well; I'd come up with an approach I liked, do most of the slides for it, come back to it, and be unhappy with what I had written.

This happened – and I am not exaggerating – 4 times in a row.

On the fifth try, as I was working through the techniques I was going to talk about, I realized something. But let me back up a bit first…

Pretty much every introduction to unit testing starts with a very simple scenario using a very simple class; the flow is something like:

  1. Figure out what a method does
  2. Write a test for it
  3. Repeat

Or, if you are doing TDD, you swap the order and write the test before the method.
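
For concreteness, here's roughly what that introductory flow produces. This is a hypothetical sketch in Python (the Calculator class and test names are mine, not from any particular course), but the shape is what most introductions show:

```python
# A minimal sketch of the "intro to unit testing" flow, using Python's
# built-in unittest and a hypothetical Calculator class.
import unittest


class Calculator:
    """The kind of tiny, dependency-free class introductory material uses."""

    def add(self, a: int, b: int) -> int:
        return a + b


class CalculatorTests(unittest.TestCase):
    def test_add_returns_sum_of_operands(self):
        # Step 1: figure out what the method does; step 2: write a test for it.
        self.assertEqual(5, Calculator().add(2, 3))


if __name__ == "__main__":
    unittest.main()
```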

With a small class and small problem space, this works well; it's simple for developers to understand, and you can therefore present it to a group and they walk out thinking that they understand unit testing.

There is one *tiny* problem, however.

It doesn't work on real classes – and by that, I mean the classes that are in most real systems. We all know that the testability of existing codebases is low, and we also know that most developers do not have the design skills or experience to take that sort of code and write good unit tests for it.

But that is what developers try to do, because THAT IS WHAT OUR INTRODUCTION TOLD THEM UNIT TESTING IS ABOUT.

So, they take their existing code, add in a bunch of interfaces so they can inject dependencies, pull in their favorite mocking library, shake it around a bit, and end up with a unit test.
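
To make that concrete, here's a sketch of the kind of test that comes out the other end. The OrderProcessor class and its collaborators are hypothetical, and I'm using Python's built-in unittest.mock to stand in for "your favorite mocking library":

```python
# A sketch of a heavily-mocked test of a tightly coupled class
# (all names hypothetical). Three dependencies get mocked just to reach
# one line of business logic, and most of the test is wiring, not intent.
import unittest
from unittest.mock import Mock


class OrderProcessor:
    """A typical 'real' class: business logic tangled up with dependencies."""

    def __init__(self, repository, pricing_service, notifier):
        self._repository = repository
        self._pricing_service = pricing_service
        self._notifier = notifier

    def process(self, order_id):
        order = self._repository.get(order_id)
        total = self._pricing_service.total_for(order)
        if total > 100:
            total *= 0.9  # the one rule we actually care about testing
        self._notifier.send(order_id, total)
        return total


class OrderProcessorTests(unittest.TestCase):
    def test_process_applies_discount_over_100(self):
        # Arrange: mock every collaborator just to reach the discount rule.
        repository = Mock()
        repository.get.return_value = {"id": 42}
        pricing_service = Mock()
        pricing_service.total_for.return_value = 200
        notifier = Mock()
        processor = OrderProcessor(repository, pricing_service, notifier)

        # Act
        total = processor.process(42)

        # Assert: the actual rule, plus verification of the plumbing.
        self.assertEqual(180.0, total)
        notifier.send.assert_called_once_with(42, 180.0)


if __name__ == "__main__":
    unittest.main()
```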

Hmm…

  • There is a lot of test code that takes time to write
  • Because of the high coupling, changes in many areas of the system will require changes in the test
  • The test is very hard to understand, and it's often not clear whether the test is actually testing what it says it is testing
  • Developers do not feel like they are being successful in writing unit tests.

And – AND – there is very little chance of the test driving improvements, which is one of the main reasons we are advocating for a unit-testing approach.

We have been going about this in the wrong way.

We should focus on teaching developers how to look at code and figure out what different units are lurking in a single class, and also teaching them how to extract those units out so that simple tests can be written for them.
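
Continuing the hypothetical OrderProcessor sketch from above, "extracting the unit" might look something like this: the pricing rule comes out as a small, dependency-free function, and the test needs no interfaces, no mocks, and no setup:

```python
# A sketch of the alternative (again, hypothetical names): the pricing rule
# lurking inside OrderProcessor.process is pulled out into its own unit.
import unittest


def discounted_total(total: float) -> float:
    """The unit that was hiding inside OrderProcessor.process."""
    return total * 0.9 if total > 100 else total


class DiscountedTotalTests(unittest.TestCase):
    def test_orders_over_100_get_ten_percent_off(self):
        self.assertEqual(180.0, discounted_total(200))

    def test_orders_at_or_under_100_are_unchanged(self):
        self.assertEqual(100, discounted_total(100))


if __name__ == "__main__":
    unittest.main()
```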

And… we should lighten up on the "you should write tests for everything" advice, because these expensive, complex tests aren't doing anybody any good.

Comments

  • Anonymous
    October 10, 2016
You are right, Eric. One way I've heard it described is "we introduce TDD by teaching people how to wrap tests around bad designs. We can get away with that because the introductory problems are so small."
  • Anonymous
    October 18, 2016
I worked in a hellish legacy code base. Michael Feathers' "Working Effectively with Legacy Code" showed me the light at the end of the tunnel. He discusses how to get a grip on the things discussed here.
  • Anonymous
    October 21, 2016
    That's the beauty of integration tests. Pulling out discrete chunks of logic into unit testable pieces isn't a completely safe operation, and thus, especially with very complex code, how are you going to prove you didn't introduce a regression without a test suite that has decent coverage before refactoring?
  • Anonymous
    February 19, 2017
Divide and conquer - not everything makes sense or is easiest to test via unit tests. Separation into a few well-defined buckets, each with its own rules, makes a developer's life easier. Like any self-imposed limitation: it simplifies & helps.