
Always write a spec, part one

Joel had a great series of articles many years ago about the benefits of writing functional specifications, that is, specifications of how the product looks to its users. I want to talk a bit about technical specifications, that is, specifications of how something actually works behind the scenes. A while back, I described how I wrote a seemingly trivial piece of code by first writing the technical spec, then writing the code to exactly implement the spec, and the test cases to test each statement of the spec. I use this technique frequently; it almost always saves me time and effort. If you don't write a spec first, it's very easy to get bogged down in a morass of bugs, and hard to know when you're actually done writing the code.

Here's an example of another seemingly trivial function that smart people gave a good half-dozen completely wrong answers to because they didn't write a spec. Notice how, once I state what the spec is, the code follows very naturally.

I thought I might go through a recent example of a slightly less trivial problem. I needed a function in the compiler which could take a valid list of arguments A1 to an applicable method M, and give me back two lists. The first result list is a list of local variable declarations, L, the second, another argument list, A2. Here's the kicker: evaluating each member of A2 must have no side effects. And, the combination of executing L followed by M(A2) must give exactly the same results as executing M(A1). (I also knew that the members of A1 had already had all the conversions generated to make them match the types of M's parameter list, and so on. Overload resolution was completely done at this point.)

Why I needed such a function is not particularly important; perhaps you can speculate as to why.

So I started writing a spec, starting with the description above: the function T takes a valid list... blah blah blah.

That's not enough of a spec to actually write code.

I realized that some expressions in A1 might have side effects, and some might not. If an expression had side effects, then I could declare a temp local, execute the side effects, assign the result to the local, and then use the local as the expression in the same place in A2.

Super. Now I can write more sentences in my spec.


For each argument x in A1, we potentially generate a temporary value as follows: First, if x has no side effects, then simply keep x in the appropriate place in A2. Otherwise, if x does have side effects, then generate a temporary variable declaration in L, var temp = x. Then put an expression that evaluates the temporary variable in the appropriate place in A2.
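This first cut of the spec can be sketched as a toy model. Everything below is hypothetical and in Python only for brevity: the real compiler works on typed bound trees, not strings. An argument is modeled here as a (text, has_side_effects) pair.

```python
# Toy model of the first-cut spec: hoist each impure argument into a temp.
# All names are hypothetical; the real compiler does not work on strings.

def rewrite_arguments(a1):
    """Split A1 into declarations L and a side-effect-free list A2."""
    decls, a2 = [], []
    for i, (text, has_side_effects) in enumerate(a1):
        if not has_side_effects:
            a2.append(text)  # keep x in the appropriate place in A2
        else:
            temp = f"temp{i}"
            decls.append(f"var {temp} = {text};")  # "var temp = x", added to L
            a2.append(temp)  # reading the temp has no side effects
    return decls, a2

decls, a2 = rewrite_arguments([("x", False), ("GetY()", True)])
print(decls)  # ['var temp1 = GetY();']
print(a2)     # ['x', 'temp1']
```

Note that this transcribes the spec exactly as stated; as the rest of the post shows, the spec itself still has problems.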


Then I thought "can I correctly implement this spec?"

I started thinking about the possible cases for the input, A1. The members of A1 could be passed by value, out, or ref; I already knew they were correct, since the particular method M was an applicable method of argument list A1. At this particular point in the compilation process, there was no need to worry about whether there were "params" arguments, whether there were missing arguments with default values, and so on; that had already been taken care of. I also noted that whether they were "ref" or "out" was irrelevant; ref and out are the same thing behind the scenes. As far as the compiler is concerned, the only difference between them is that the definite assignment rules are different for each. So we had two cases for each argument: it's being passed by value, or by ref/out.

If it's being passed by ref/out, then what we actually do is we pass the managed address of a variable to the method. In M(ref int x), the type of x is actually a special type int&, "managed reference to variable of type int", which is not a legal type in C#. That's why we hide it from you by requiring a goofy "ref" syntax any time you want to talk about a managed reference to a variable.

Unfortunately, our IL generation pass was written with the assumption that all local variables are never of these magic managed-reference-to-variable types. I was then faced with a choice. Either come up with a technique for working with this limitation, or rewrite the most complicated code in the IL gen to support this scenario. (The code which figures out how to deal with managed references is quite complex.) I decided to opt for the former strategy. Which meant more spec work, since our spec now doesn't handle these cases!


For each argument x in A1, we potentially generate a temporary value as follows:

First, if x has no side effects, then simply keep x in the appropriate place in A2.

Otherwise, if x does have side effects and corresponds to a value parameter, then generate ... etc.

Otherwise, if x does have side effects and corresponds to a ref/out parameter, then a miracle happens.


Clearly that last bit needed some work.

So I wrote down all the possible cases I could think of. Clearly x has to be a variable if its type is "reference to variable", so that makes for a small number of cases:


Otherwise, if x does have side effects and corresponds to a ref/out parameter, then the possible cases are:

* x is a local variable
* x is a value parameter
* x is an out parameter
* x is a ref parameter
* x is a field of an instance
* x is a static field
* x is an array element
* x is a pointer dereferenced with *
* x is a pointer dereferenced with [ ]

and for each, a miracle happens.


Again, clearly this is not good enough. I re-examined my list to see if I could eliminate any of these cases. Locals, parameters and static fields never have side effects, so we can eliminate them.

Also, at this point in our analysis, I knew that pointer dereferences of the form "pointer[index]" had already been rewritten into the form "*(pointer + index)", which meant that in practice, we would never actually hit that last case in this algorithm; the second-last case would take care of both.


Otherwise, if x does have side effects and corresponds to a ref/out parameter, then the possible cases are:

* x is a field of an instance
* x is an array element
* x is a pointer dereferenced with *

and for each, a miracle happens.


I then started to think of what side effects could happen for each. We could have "ref instance.field", "ref array[index]", or "ref *pointer", and "instance", "array", "index" and "pointer" can all be expressions that have a side effect. ("field" cannot, it merely names a field.) So now we can use the same specification as before:


Otherwise, if x does have side effects and corresponds to a ref/out parameter, then the possible cases are:

* x is ref/out instance.field: in this case, add var temp=instance to L and ref/out temp.field to A2.
* x is ref/out array[index]: in this case, add var t1 = array and var t2 = index to L and ref/out t1[t2] to A2.
* x is ref/out *pointer: in this case, add var temp = pointer to L and ref/out *temp to A2.
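To make the three cases concrete, here is a rough sketch over a toy tuple representation of arguments, again in Python for brevity. Everything here (names, shapes) is hypothetical; the real compiler works on typed bound nodes. The sketch deliberately transcribes the proposed spec as written, including its bugs.

```python
# Hypothetical sketch of the three ref/out cases from the proposed spec.
# An argument is a tuple tagging its shape; "fresh" generates temp names.
# This transcribes the spec verbatim - bugs and all.

def make_fresh():
    n = 0
    def fresh():
        nonlocal n
        n += 1
        return f"t{n}"
    return fresh

def rewrite_ref_arg(arg, decls, a2, fresh):
    kind = arg[0]
    if kind == "field":                     # ref/out instance.field
        _, instance, field = arg
        t = fresh()
        decls.append(f"var {t} = {instance};")
        a2.append(f"ref {t}.{field}")
    elif kind == "element":                 # ref/out array[index]
        _, array, index = arg
        t1, t2 = fresh(), fresh()
        decls.append(f"var {t1} = {array};")
        decls.append(f"var {t2} = {index};")
        a2.append(f"ref {t1}[{t2}]")
    elif kind == "deref":                   # ref/out *pointer
        _, pointer = arg
        t = fresh()
        decls.append(f"var {t} = {pointer};")
        a2.append(f"ref *{t}")
    else:
        raise ValueError(f"unexpected argument shape: {kind}")

decls, a2, fresh = [], [], make_fresh()
rewrite_ref_arg(("element", "GetArray()", "GetIndex()"), decls, a2, fresh)
print(decls)  # ['var t1 = GetArray();', 'var t2 = GetIndex();']
print(a2)     # ['ref t1[t2]']
```

Faithfully implementing this spec still produces observable behavior changes, which is exactly the point of the exercise.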

And now we have something I can implement. So I sent this proposed spec around for review, while I started plugging away at the implementation.

This spec is wrong. Can you spot the bugs in my implementation that my coworkers found by reading my spec?

Next time, the thrilling conclusion.

Comments

  • Anonymous
    November 19, 2009
    > x is ref/out instance.field: in this case, add var temp=instance to L and ref/out temp.field to A2.

    What if "instance" is a struct? Copying that for ref/out passing would change observable behavior.

    That's one bug. There is a second (and more general) bug in the spec too. Can you find it as well? -- Eric

  • Anonymous
    November 19, 2009
    You've changed the order of evaluation. A parameter with no side effects could be affected by the side effects of a later parameter - and after the rewrite, the later parameter will be evaluated first. Bingo! -- Eric

  • Anonymous
    November 19, 2009
    On the whole, the scheme doesn't seem to account well for mutual dependencies between argument expressions, when one expression mutates values used by another expression. For example, this call:

        int x = 0;
        F(x, ++x); // (0, 1)

    According to the rules above, the first argument doesn't have side effects, while the second one does. So the rewrite would be:

        int temp1 = ++x;
        F(x, temp1); // (1, 1)

    > x is ref/out array[index]: in this case, add var t1 = array and var t2 = index to L and ref/out t1[t2] to A2.

    This needs generalization to multidimensional arrays, though I don't see why that would be a problem (since this requires us to always generate a local for "index", regardless of whether it has side effects or not, it seems that it doesn't run into the problem mentioned above).

    Nice! No one caught this one, and in fact, the implementation is wrong. -- Eric

  • Anonymous
    November 19, 2009
    Eric, This thread should be REQUIRED reading for anyone developing software. It clearly illustrates the potential for disaster when one starts implementing code on an "ad hoc" basis.... David

  • Anonymous
    November 19, 2009
    What about passing in an assignment expression, like 'M(t = 5)'? Am I reading your spec correctly if I think it'll do "var temp = t = 5;" then?

    Correct. That's perfectly legal. -- Eric

    In the same vein, what happens if I do 'M(() => t)'? Are you then doing "var temp = () => t;" which would try to capture a nonexistent variable t?

    At this point in the process we have already done full overload resolution. If there is no outer variable t then compilation has already halted with an error. -- Eric

    And if we're talking lambdas, something like "var temp = x => x" isn't valid at all anyway. Or is this already a stage in the compiler where lambdas and anonymous delegates have been appropriately converted?

    Correct. This is happening after overload resolution has already converted all the arguments to their correct types, provided missing arguments, worked out the param array rewritings, and so on. -- Eric

  • Anonymous
    November 19, 2009
    Here's a very interesting corner case that has to do with array covariance:

        class Program
        {
            static void Foo(object[] a)
            {
                Bar(out a[0], Baz());
            }
            static void Bar(out object x, int n)
            {
                x = "Bar";
            }
            static int Baz()
            {
                Console.WriteLine("Baz");
                return 0;
            }
            static void Main()
            {
                Uri[] a = { null };
                Foo(a);
            }
        }

    If you run this, you'll see that it throws ArrayTypeMismatchException, and does not print "Baz". This is due to the rules of argument evaluation (7.4.1 in the spec), which specify that, when an array element is used as an out or ref argument, the runtime type check to ensure that the array is of the correct type to be assigned to is performed immediately after the argument expression is evaluated, and before the next argument is evaluated. If rewritten following the rules you've specified, we get this:

        static void Foo(object[] a)
        {
            var arg1_array = a;
            var arg1_index = 0;
            var arg2 = Baz();
            Bar(out arg1_array[arg1_index], arg2);
        }

    So now Baz() will run with all its side effects before the exception occurs. If code calling Foo() catches the exception, it can observe those side effects.

    Holy goodness. That's awesome! -- Eric

  • Anonymous
    November 19, 2009
    > x is ref/out array[index]: in this case, add var t1 = array and var t2 = index to L and ref/out t1[t2] to A2.

    One other problem here is that "index" may need an implicit conversion before it can be used as an array index, and that conversion can have side effects. As defined - "var t2 = index" - the type of t2 will be the same type as index, and so the conversion will only happen inside the function call at the point where we do "t1[t2]", and after all other arguments are evaluated.

    At this point in the compilation we already have inserted all the conversions necessary into the program tree to correctly represent its semantics and exactly match all the types. So the index expressions are already of integer type. But good thought. -- Eric

    During a normal call, the conversion would be performed before the following argument is evaluated. So the source:

        class Foo
        {
            public static implicit operator int (Foo foo)
            {
                Console.WriteLine("Foo");
                return 0;
            }
        }

        class Program
        {
            static int Bar()
            {
                Console.WriteLine("Bar");
                return 0;
            }

            static void Baz(int x, int y) {}

            static void Main()
            {
                int[] a = { 1 };
                Baz(a[new Foo()], Bar());
            }
        }

    prints "Foo" then "Bar", but the expansion:

        var arg1_array = a;
        var arg1_index = new Foo();
        var arg2 = Bar();
        Baz(arg1_array[arg1_index], arg2);

    prints "Bar" then "Foo". This also means that I was wrong earlier, and multidimensional array expansion is not trivial, because there the conversions happen for each index in turn, from left to right; so we must do a conversion for each index before evaluating the next one in the expansion as well.

  • Anonymous
    November 19, 2009
    I have been reading your posts for a long time now. Thank you for sharing your experience with the rest of us. I am working for a company that is trying to develop its best practices right now. They have been an ad hoc shop for about 5 years. There are two recommendations that I will make to them based on this article. One - write the spec and two - get it reviewed. You have demonstrated that even an expert can have their work improved upon when a peer reviews the work with a slightly different perspective and experience.

  • Anonymous
    November 19, 2009
    I only really understood what this was all about after reading the other comments. I mistakenly thought you were automatically translating a method call to a set of variable declarations plus a method call in some other context than the context where the original method call resided. Thanks for putting up with my confusion, Eric. :-)

  • Anonymous
    November 19, 2009
    I'm wracking my brain trying to figure out what this code could be doing -- is it preparing to inline the target method?

  • Anonymous
    November 19, 2009
    After all the things the others have discovered, I really wonder if one can do (much) better than evaluating all arguments in order into local variables, including all checks that will be performed when the arguments are passed to the method. Conceptually it's something like calling the method twice - once to evaluate all the side effects and checks, then capture whatever arrives inside the (proxy) method into locals, and finally call the method with the values of the locals.

  • Anonymous
    November 19, 2009
    @Daniel: Actually, the thing I had in mind here would be to try to avoid rewriting it as a method call + set of locals completely, and instead generate an extra method that simply propagates the call and use it to capture the bindings. The obvious benefit is that all ref/out semantics, and everything else related to the call, is preserved exactly. So, say, if we have:

        M1(Mod1 e1, Mod2 e2, ...)

    where (e1, e2, ...) are expressions of types (T1, T2, ...) and ref/out modifiers (Mod1, Mod2, ...), we generate a private method:

        _M1(Mod1 T1 a1, Mod2 T2 a2, ...)
        {
            // Do whatever it is we want to do with arguments before the call
            ...
            M1(Mod1 a1, Mod2 a2, ...);
        }

    and then call _M1(e1, e2, ...) rather than M1(e1, e2, ...). The only problem here is that the code that does something to arguments before the call can need some locals from the scope of the original method; but those we can easily pass to _M1 as additional out/ref arguments. This will break down for locals of type TypedReference and ArgIterator, though, so if we need to be able to reference any random local, this is not the way to go.

  • Anonymous
    November 19, 2009
    That is exactly what I meant - _M1 is what I called the proxy method. But because I don't know what Eric wants to do, I could not tell if it is possible to generate a new method or if one must simulate all effects of the call in the local variable assignments. I actually thought about using twice as many parameters in the proxy method, passing the arguments in and out again. Something like:

        T1 l1; T2 l2;
        _M1(Mod1 e1, Mod2 e2, out l1, out l2);
        M1(Mod1 l1, Mod2 l2);

    and

        _M1(Mod1 T1 a1, Mod2 T2 a2, out T1 a1evaluated, out T2 a2evaluated)
        {
            a1evaluated = a1;
            a2evaluated = a2;
        }

    Somehow this seems not right or good to me, but because of the lack of context I cannot really judge it.

  • Anonymous
    November 19, 2009
    uh, does anybody else happen to see this kind of a thing as an argument that complexity kills? (i guess another way of saying it is, never buy in to a feature that is at anything less than version 3.1?)

  • Anonymous
    November 19, 2009
    In some ways, I think this is why TDD is so good. Building a set of tests is essentially the same as building a "spec" - you have to do the same thinking and analysis, with the built-in advantage that you end up with a tool to guarantee the code conforms to the spec.

  • Anonymous
    November 19, 2009
    I had a similar problem recently (not for writing a compiler, but for writing the code "by hand") when I had to capture the argument list in a Tuple, and later call the method (in fact I would like to build the expression tree). My technique is not so smart: just call Tuple.Create instead of the method. It can't handle the ref parameters, so no miracle is needed. But I have to allocate memory for each call. If Tuple were a struct instead of a class, I think the two problems would be almost equivalent. Will it be allowed to have a Tuple<string,int&>? I think I tried and it did not work. Is it a limitation of C# or of the CLR? (e.g. fields cannot be of type int&)

  • Anonymous
    November 20, 2009
    I'm wondering exactly how TDD would help here. If you forget about a case in the spec, how are you going to write a test case for it?

  • Anonymous
    November 20, 2009
    Pavel, I would agree with what you said and considered modifying what darren said to say a tool that "HELPS guarantee...". I decided to leave it as it was as I thought about an analogy with a hammer.  A hammer is a tool for driving nails.  It is of course possible to use a hammer incorrectly and bend a nail or smash your thumb.  It is not necessarily the failure of the hammer that caused your thumb to be smashed.  You could possibly even argue it did exactly what it was supposed to do, apply a concentrated force to a small area (sadly, your thumb). In the same way it is not the test's failure to account for all cases. A tool's performance is limited by other factors such as the user's skill level and correct application.  In the specific case here, the tests' performances are limited by spec accuracy.  It is almost impossible to get a 100% accurate spec, and therefore the tests will almost always be limited by it.  I guess my point is that you may have guaranteed the code worked to the spec, but it was unfortunately the wrong spec which isn't the test's fault. To your point about regression testing, I would agree with one caveat, and that is that you can only know that the TESTED PORTIONS work at least as well as they did previously.  It is possible that there is another untested corner case which worked as expected previously (perhaps by chance/luck), which could be broken after modification to correct an unidentified corner case. Just my take though.  Also, sorry about the all caps above, I wasn't sure how to provide emphasis otherwise. Is html accepted in the comments?

  • Anonymous
    November 20, 2009
    Eric, out of curiosity, how do you determine that "x has no side effects"?

  • Anonymous
    November 20, 2009
    Is this a partial spec for a function that determines where the IL tail. optimization may be output by the compiler for recursive methods? I'm also interested to know where you put this spec so that other developers can find it.  Code comments?  Project artifacts?  Documentation share?

  • Anonymous
    November 20, 2009
    A few things just keep popping up in my mind:

      • SCRUM, XP (as in eXtreme Programming), Agile Manifesto, and the rest of it ("people, not processes" is the slogan there);
      • The term "Analysis Paralysis". No comments on that one;
      • A popular saying, "the devil is in the details".

    That's enough for now. :-)

    "If you don't write a spec first, it's very easy to get bogged down in a morass of bugs, and hard to know when you're actually done writing the code."

    I agree, as would anyone who has written more than a hundred lines of code in their lives; but, as anyone who has had a big stick whacked over them for missing an important deadline will tell you, I just do not see real-life developers writing practically all of their code in English before doing it in C#. For sure, a spec should state the key functional requirements, but if it tries to specify just about everything, down to every implementation detail, it will definitely help to produce bug-free code that will become totally obsolete by the time it is completed.

    "Speed of action is becoming more important than the quality of action." This is a statement I ran into on one of the Big Biz mags' websites (maybe even Forbes itself -- not sure). If the bosses have this mindset, well... behold Windows Vista, I'd say! :-)