Unit testing session

During TechEd'04 in Amsterdam I attended a Birds of a Feather session on how to integrate unit testing into the Software Development Life-Cycle. For those unfamiliar with the format of these sessions, it's a speaker-moderated discussion, where attendees can ask questions and have them answered and commented on by the speaker and the other attendees. I enjoyed this session format very much, as it's highly interactive.

This session was moderated by James Newkirk. The audience raised a few questions that led to various discussions on unit testing and the right way to do it (or not to do it), unit testing techniques, and personal experiences, both positive and negative, with those techniques.

Below are the topics discussed during the session, along with some of my personal impressions on each subject.

  1. Testing abstract classes with different implementations

How do you test the implementation of an abstract class, when different implementations can appear at any time?

James Newkirk was of the opinion that you should abstract the tests when a new implementation appears.

My opinion on the subject follows along the same lines. If I'm doing TDD, I want to write test code for all my implementation code (this is my opinion on testing privates as well, BTW). So, I'm a firm believer in testing the implementation, not the contract, because that's where my logic will be. It's also my opinion that I should treat my unit test code just like my production code. So, when there's a need to do some refactoring in my unit test code, I'll do it. This includes doing an Extract Interface, an Extract Class, or any other abstraction refactoring.
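
To make this concrete, here's a minimal sketch of what I mean, using NUnit. All the names (ICalculator, BasicCalculator) are hypothetical and invented just for illustration: the tests live in an abstract fixture, and each implementation only needs a thin concrete subclass that says how to create the object under test.

```csharp
using NUnit.Framework;

// Hypothetical abstraction with one (of possibly many) implementations.
public interface ICalculator
{
    int Add(int a, int b);
}

public class BasicCalculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

// The tests live in an abstract fixture; they never name a concrete class.
public abstract class CalculatorTests
{
    protected abstract ICalculator CreateCalculator();

    [Test]
    public void AddReturnsTheSumOfItsArguments()
    {
        Assert.AreEqual(5, CreateCalculator().Add(2, 3));
    }
}

// When a new implementation appears, it only needs its own subclass;
// every inherited [Test] then runs against it.
[TestFixture]
public class BasicCalculatorTests : CalculatorTests
{
    protected override ICalculator CreateCalculator()
    {
        return new BasicCalculator();
    }
}
```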

  2. How much to test / testing privates

This is a topic that has always had strong arguments for and against, as was also evident in this session. While James Newkirk's opinion is not to test privates, I'm not so sure we shouldn't. James doesn't like to test private methods because it hinders refactoring. But if you don't have a test for a piece of logic, how can you refactor it? (This seems like a catch-22 to me.)

As I've stated above, I'm not so sure we shouldn't test private members. I guess this depends on how I do TDD. There are two different approaches here: write code test-first considering only the publicly visible members (the public interface), or write tests for all your logic and, during refactoring, if a given method turns out not to be used by any class other than the test class, apply the Hide Method refactoring. I use the latter approach, for two reasons:

  1. If I only code tests for the public interface, I'll have no idea how much logic lies "under the hood", beneath the public interface. But typically, as someone pointed out during this discussion, that is where most of the logic will be;
  2. The other reason is that this allows me to code this (soon-to-be) "internal" logic using a test-driven approach, which gives me confidence in the implementation and in how I refactor it.

This approach, however, leaves me with a bunch of orphaned test methods, since they can no longer access the methods they should be exercising. This is when I refactor the test code as well, replacing direct method invocation with reflective method discovery and invocation, as in the sketch below.
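
For illustration, here's a rough sketch of that refactoring, with a hypothetical OrderProcessor whose ComputeDiscount method was hidden during refactoring. The orphaned test discovers the now-private method via System.Reflection and invokes it:

```csharp
using System.Reflection;
using NUnit.Framework;

public class OrderProcessor
{
    // Hidden during refactoring; no class other than the test exercises it.
    private decimal ComputeDiscount(decimal total)
    {
        return total > 100m ? total * 0.1m : 0m;
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void ComputeDiscountGivesTenPercentOverOneHundred()
    {
        // Reflective method discovery and invocation.
        MethodInfo method = typeof(OrderProcessor).GetMethod(
            "ComputeDiscount", BindingFlags.Instance | BindingFlags.NonPublic);

        object result = method.Invoke(new OrderProcessor(), new object[] { 150m });

        Assert.AreEqual(15m, (decimal)result);
    }
}
```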

  3. Legacy code without tests

The scenario is the following: you have a large code-base for which you don't have any unit tests. You'd like to add unit tests, but where do you start?

James answered this question in a very pragmatic way, IMO: "Don't add unit tests for anything you aren't about to change," he said.

Well, this seems like a very sensible decision. He then elaborated by suggesting that you establish a virtual "boundary" of unit tests around the things about to be changed. Tests are written against this "boundary", and the changes are made inside it.
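
To make the idea concrete, here's a hedged sketch of a test on that boundary, assuming a hypothetical LegacyPriceCalculator that is about to be changed. The test simply pins down whatever the code does today, so any change inside the boundary that alters observable behaviour gets caught:

```csharp
using NUnit.Framework;

// Stand-in for untested legacy code that is about to be changed.
public class LegacyPriceCalculator
{
    public decimal Price(int quantity)
    {
        return quantity * 9.99m;
    }
}

[TestFixture]
public class LegacyPriceCalculatorBoundaryTests
{
    [Test]
    public void PriceOfThreeItemsMatchesTodaysBehaviour()
    {
        // The expected value is whatever the code produces today,
        // recorded before any change is made inside the boundary.
        Assert.AreEqual(29.97m, new LegacyPriceCalculator().Price(3));
    }
}
```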

  4. How to convince people to code test-first?

I see this issue as a very hot topic, primarily because I have been trying to convince people to adopt this way of developing, and I know from personal experience that it's hard to find the right arguments. Strangely enough, I've found that direct management (PMs or dev leads) are more receptive to the message of TDD than the actual developers.

James Newkirk stated that developing software "...is not about the individual developer, it's about the team and the continuous maintainability of the code". Someone also pointed out that "blinking lights changing from red to green help a lot!" I couldn't agree more.

Additionally, James pointed out a "technique" that I've also used with some success, which is to ask someone to test their own code. Most of the time, people find it hard to write tests against their own code, because they usually don't write their code to be testable.

This debate spawned another thread of discussion: if developers write unit tests, which used to be a big chunk of the work done by QA teams (white-box testing), then QA teams must start testing at a different (higher) level.

  5. Mock Objects

Just like Benjamin posted, "James described mock objects as the situation where object A interacts with object B and you need some way of 'switching out' Object B to create a 'dummy response' from the calls of Object A."

True, but it's necessary to determine why we would want to 'switch out' Object B. I usually 'switch out' Object B if it isn't part of my domain code (maybe it's an object belonging to a third-party API). If my Object B is, for instance, a DAL component that uses ADO.NET to connect to the database, I would use (dynamic) mock objects to replace the ADO.NET data access components inside it, using the "Inversion of Control" pattern. It may be somewhat complex, but it promotes decoupling and testability, which rate very high on my trade-off scale, so I usually think it's worth it and end up implementing it.
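
Here's a minimal sketch of that 'switching out', with entirely hypothetical names: object A depends on an interface rather than on a concrete object B, so the test can inject a hand-rolled mock that returns a canned response. Constructor injection like this is about the simplest form of Inversion of Control.

```csharp
using NUnit.Framework;

// The role played by "Object B".
public interface ICustomerGateway
{
    string GetName(int customerId);
}

// "Object A"; the dependency is injected, so a test can substitute a mock.
public class GreetingService
{
    private readonly ICustomerGateway gateway;

    public GreetingService(ICustomerGateway gateway)
    {
        this.gateway = gateway;
    }

    public string Greet(int customerId)
    {
        return "Hello, " + gateway.GetName(customerId);
    }
}

// A hand-rolled mock returning a canned "dummy response".
public class FakeCustomerGateway : ICustomerGateway
{
    public string GetName(int customerId) { return "Alice"; }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void GreetUsesTheGatewayResponse()
    {
        GreetingService service = new GreetingService(new FakeCustomerGateway());
        Assert.AreEqual("Hello, Alice", service.Greet(42));
    }
}
```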

  6. Testing against a DB

Sometimes it's hard to mock out the data access layer components, someone pointed out. This is true, and there's usually a tendency to end up testing against the database.

Actually, James Newkirk said that there are situations where he prefers to test against the database: "When writing unit tests for DAL components, use the actual database. When writing unit tests for components that use DAL components, mock out the DAL components."

I have a different point of view. If the DAL components were written by me, I'd like to test not only the components that use them, but also the interaction between the two. On the other hand, data connectivity components supplied by the underlying API (system or third-party) are not part of my domain code, and I can (and probably should) mock them out.
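
Either way, the first half of James' advice might look roughly like this sketch of a DAL test against an actual test database. The connection string, the Customers table, and the CustomerDal class are all assumptions made up for this example:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

// Hypothetical DAL component under test.
public class CustomerDal
{
    private readonly SqlConnection connection;

    public CustomerDal(SqlConnection connection) { this.connection = connection; }

    public string GetName(int id)
    {
        SqlCommand command = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @id", connection);
        command.Parameters.AddWithValue("@id", id);
        return (string)command.ExecuteScalar();
    }
}

[TestFixture]
public class CustomerDalTests
{
    private SqlConnection connection;

    [SetUp]
    public void OpenConnectionAndSeedKnownData()
    {
        // Assumed local test database; reset to a known state before each test.
        connection = new SqlConnection(
            "Server=(local);Database=TestDb;Integrated Security=true");
        connection.Open();
        Execute("DELETE FROM Customers");
        Execute("INSERT INTO Customers (Id, Name) VALUES (1, 'Alice')");
    }

    [TearDown]
    public void CloseConnection()
    {
        connection.Close();
    }

    [Test]
    public void GetNameReadsTheSeededRow()
    {
        Assert.AreEqual("Alice", new CustomerDal(connection).GetName(1));
    }

    private void Execute(string sql)
    {
        new SqlCommand(sql, connection).ExecuteNonQuery();
    }
}
```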

  7. How long should tests run?

This was an interesting point, as people have very different experiences. Someone said they had tests that took 6 hours to complete; others have them built into the build process and run the whole suite nightly, for as long as it takes. Another delegate's tests took about 2 hours to complete.

In James' opinion, 1 hour is too long, because tests should provide rapid feedback to the developer and promote a very fast turnaround in the development lifecycle. The recommendation that came out of this debate was that, in large projects, we should separate out the fast developer unit tests and run the complete set (all test suites from all developers) during the daily build cycle. Some integration problems will only surface during the build, but it's a trade-off.
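
One way to make that separation, sketched here with NUnit's [Category] attribute: slow integration tests get tagged so the everyday developer run can exclude them, while the daily build runs the whole suite. The test names and bodies are placeholders.

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    // Fast, in-memory test: runs on every developer build.
    [Test]
    public void FastInMemoryCheck()
    {
        Assert.AreEqual("abc", "a" + "b" + "c");
    }

    // Tagged so the quick run can skip it; the nightly build includes it.
    [Test]
    [Category("Integration")]
    public void OrderRoundTripsThroughTheDatabase()
    {
        // ...slow test against real infrastructure would go here...
    }
}
```

The test runner can then be told to include or exclude categories, so the everyday run stays fast while the daily build still exercises everything.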

  8. Automatic test generation

Commercial products don't take into account the actual intention of the methods being tested and so usually create 'poor' tests. On the other hand, Visual Studio Team System has built-in functionality to generate a boilerplate test for any given method.

  9. How do you unit test GUIs?

In any discussion on unit testing this topic usually comes up. In this session, people mentioned some commercial products that they've used with some success, although the tests couldn't survive more than a few builds without breaking. A pretty "standard" response to this issue is to use the Model View Controller design pattern to separate the presentation from the presentation control logic and test them individually.

The problem is that when we're coding unit tests against an API, the interface is programmatic, whilst when we're coding unit tests for a GUI, the interface is visual and designed for human, not machine, interaction.
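
Here's a small sketch of the MVC-style split, again with entirely hypothetical names: the controller only knows the view through an interface, so the unit test can drive it with a fake, programmatic view instead of a real form.

```csharp
using NUnit.Framework;

public interface ILoginView
{
    string UserName { get; }
    void ShowError(string message);
}

// Presentation control logic, with no reference to any real GUI control.
public class LoginController
{
    private readonly ILoginView view;

    public LoginController(ILoginView view) { this.view = view; }

    public void Submit()
    {
        if (view.UserName.Length == 0)
            view.ShowError("User name is required.");
    }
}

// A fake view gives the test a programmatic interface to the GUI logic.
public class FakeLoginView : ILoginView
{
    private readonly string userName;
    public string LastError;

    public FakeLoginView(string userName) { this.userName = userName; }

    public string UserName { get { return userName; } }
    public void ShowError(string message) { LastError = message; }
}

[TestFixture]
public class LoginControllerTests
{
    [Test]
    public void SubmitWithEmptyUserNameShowsAnError()
    {
        FakeLoginView view = new FakeLoginView("");
        new LoginController(view).Submit();
        Assert.AreEqual("User name is required.", view.LastError);
    }
}
```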

I've used NUnitForms and NUnitASP with some success to get around this and exercise the visual elements of the GUI individually. The main drawback with both of these NUnit extensions is the relatively small number of controls supported (granted, they are the most commonly used ones).


Well, a few other questions and discussions came up, but they were marginal to the ones above, which formed the main thread of the session.

Comments

  • Anonymous
    July 05, 2004
    good posting.
  • Anonymous
    July 05, 2004
    Hi Jose

    It's been a long time since we've seen each other (tip: cgey - tranq). I think I was in the same session as you, although I didn't see you.

    Nice post.

    I would just like to comment on some topics. In my opinion, TDD should be considered a must-have best practice. After adopting TDD I could never roll back. I think the best argument is “Try it”. BTW, to facilitate this approach, VS2005 should come with unit testing in all versions (I’m being repetitive, because everybody talks about the same thing), encouraging developers to use it as a practice.
    Also, I was a little surprised by the “6 hours duration testing”, and I think unit testing is being driven too far in some organizations. I guess sometimes people are confusing unit tests with load and functional tests. In my opinion, tests should run as fast as possible to verify the correct and expected implementation, and they should also be granular, encouraging developers to run them. If tests take too long, the developer will avoid them. If some longer tests must be run, then there should be two test runs: the developer’s tests and the daily build tests.

    Cheers
  • Anonymous
    July 05, 2004
    Hey, Hugo.

    Sorry I missed you.

    It's also my opinion that tests shouldn't take very long to run, or we might be discouraged from running them. I believe test-driven development is still in its infancy. We don't have all the answers yet (as was evident from the questions on mock objects, testing databases, class internals, and so forth). It's my opinion that the "industry" (I don't particularly like this metaphor, but I'll leave that for a future posting) is starting to absorb these techniques and that it'll take a while for them to be adopted widely. As more and more developers incorporate the coding of unit tests into the development lifecycle, I believe some of the issues discussed in this session will reach consensus and, perhaps, we'll even see some unit testing patterns for them ;-)

    Cheers