Do you use a debugger when you develop?

I posted an earlier blog entry about wanting support for managed coroutines in the next version of VS. Interestingly enough, that didn't garner much interest. Instead, what seemed to pique people was the fact that I really don't use a debugger when developing managed code. It wasn't a boast, just something I've observed. The single highest contributing factor is probably that I develop in a completely different way in managed versus unmanaged code. Because unmanaged code (C++) is so high maintenance, I tend to code in a way that lets me implement the logic I want as quickly as possible, without unnecessary overhead. High maintenance here refers not to conceptual complexity, but purely to coding complexity: the need to maintain header/implementation files, the enormous amount of error handling, and the constant tracking of memory. All of that leads me to write large methods with a lot of logic in them, which I then end up needing a debugger to understand.

Refactoring that code into something simpler and more maintainable is extremely hard, because you can't just extract methods (or apply any number of other refactorings); in general you end up doing a lot of extra work yourself to handle the nasty stuff that the runtime now does for you automatically.

With managed code I tend to write smaller, simpler objects with simple messages that are easy to understand and verify. Tracking a bug down is usually quite simple because you'll either have an exception, or you can pretty easily figure out where the error occurred. Fixing the bug is just a matter of asking "ok, who could have called this with the wrong values?", sitting and mulling on it, and then, in a gestalt flash, realizing the issue, fixing it, rerunning, and having everything work fine.

I've also noticed this when coding in a functional language, except the difference is much more marked there. Generally, if the code compiles then it's correct. And if it's not, it's very, very obviously not: on the first input to the system it will be clear the whole thing is borked.

Maybe I do actually use a debugger on my code, except that the debugger is more a function of a good compiler, runtime, and unit tests than an actual tool I fire up. What does the debugger get me? A call stack... which is probably not useful, because my methods shouldn't really care about who called them. Values of variables... which is usually not too useful, because I should already understand what the values could be through the use of immutable objects that validate at construction time and methods that validate their arguments on entry.
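To make that concrete, here's a rough sketch of the kind of object I have in mind (a made-up example; the types and names are purely illustrative, not code from any real project):

```csharp
using System;

// An immutable value that validates itself at construction time, so any
// code that receives one never has to wonder what state it is in.
public sealed class Temperature
{
    private readonly double celsius;

    public Temperature(double celsius)
    {
        if (celsius < -273.15)
            throw new ArgumentOutOfRangeException("celsius", "Below absolute zero.");
        this.celsius = celsius;
    }

    public double Celsius
    {
        get { return celsius; }
    }
}

public static class Thermostat
{
    // The method validates its arguments on entry, so a bad value fails
    // loudly at the call site instead of having to be hunted down later.
    public static bool ShouldHeat(Temperature current, Temperature target)
    {
        if (current == null) throw new ArgumentNullException("current");
        if (target == null) throw new ArgumentNullException("target");
        return current.Celsius < target.Celsius;
    }
}
```

Once a Temperature exists its invariant holds, and ShouldHeat rejects bad arguments at the door, so by the time I'm looking at a bug there's very little hidden state a debugger could reveal.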

Is it just me, or are there others out there who have found that they use a debugger less and less (or not at all)?

Comments

  • Anonymous
    June 09, 2004
Presumably there are plenty of others. I don't use a debugger myself when developing - a compiler and a printf do just fine.
  • Anonymous
    June 09, 2004
I always single-step through all new code I write. Gets me most of the benefits of a code inspection with a minimal time investment.

    This is a well-known technique. It slows you down enough to ensure that you see what you have written, instead of what you think you have written.

Similar techniques have been used by people in other fields. People proofreading texts, for example, read each word in reverse, two letters at a time, thus avoiding the brain's tendency to auto-correct misspelled words.
  • Anonymous
    June 09, 2004
It has been my consistent impression that I use the debugger much less than those around me, but that when I do use it I am much more proficient in its use.

    When I am doing TDD, I almost completely stop using the debugger.


  • Anonymous
    June 10, 2004
I'm in the same camp with the TDD guys. I use the debugger much less often these days because of test-first development and liberal use of log4net, since my test failures typically point out where any bugs are.

    I also occasionally fire up NCover to show me where I don't have test coverage to give me a sense of where I should spend more time manually stepping through the code.
  • Anonymous
    June 10, 2004
    Developing using the debugger
  • Anonymous
    June 10, 2004
    c# - not at all :)
  • Anonymous
    June 10, 2004
I use the debugger only in exceptional situations: like when I get a segfault :) I use the debugger to find out where it happened and then I close it. Assertions/exceptions are a much better way to ensure correctness since they also act as code documentation. Logging helps too. And testing is a must: "If you didn't test it then it doesn't work" (Bjarne Stroustrup, I think).

Now, one question: what does functional / non-functional have to do with how many bugs are caught by the compiler? The type system is probably the most important factor here.
  • Anonymous
    June 10, 2004
In C# and ASP.NET, when it fails from the compiler, the stack trace is usually enough to pinpoint the problem and refactor. Runtime exceptions which are uncaught can be traced from cordbg, which seems to be automatically invoked from the CLR. Runtime anomalies (features, not bugs!) which don't throw exceptions show up during testing and require serious tinkering. The .NET Reflector is a valuable tool for this.
    I dunno, this seems to count as debugging... As programming technique evolves and the development environment gets more sophisticated, someone will invent a new terminology to describe what we used to know as #if dbg assert...
  • Anonymous
    June 12, 2004
Radu: Good point. I think I meant "a strongly typed functional language with static type checking" - as in, at compile time all type problems are identified.
  • Anonymous
    June 13, 2004
On the contrary, I find myself in the debugger more often than in C++, but I suspect it's a function of experience.

    In native C++, I write a whole lot of code, run it through a whole lot of tests that I've also written, and fix it if necessary. I rarely use the debugger until something comes back from QA (fortunately not that often, but inversely proportional to the number of unit tests I have). I know the language and libraries so well that they rarely surprise me.

    In C#, I write small amounts of code, run it because it's quick to do so, then I run it again under the debugger to find out where the exception was thrown. I curse, read another chunk of the documentation, fix it, and continue. The library is vast, intimidating and often surprises me (the language is ... not).

The main problem is that the documentation is so busy giving a wealth of examples that it isn't concise about anything. It's hard to pick up the preconditions for calling a function or the invariants of a class, and even harder to select which overload to use from the plethora of alternatives. Basically, the framework became too rich in an attempt to make it easy to use, and the documentation is verbose but not coherent or precise. The result is that I feel forced into experimental programming, backed up by the debugger.

    I feel dirty.
  • Anonymous
    June 13, 2004
James: Could you provide an example along with this? I'd like to send this feedback along. Thanks!