
Writing code is the easy part

Having been at Microsoft for a little over four months now, I've started to get into the swing of the development process here.

Something I've noticed is that writing code, performance tuning, and debugging are the easy part. They're often not the most time-consuming part of my day, however.

So what sort of non-coding, non-debugging things take time?

  • Syncing your machine to get the latest builds and code updates (although most people start this before heading home at night).
  • If the latest build didn't install correctly, figuring out what went wrong.
  • Getting your various IDE & tool settings back after installing the latest build, which completely wipes out the previous installation.
  • Finding what you're looking for in the vast source trees that are Whidbey and the CLR.
  • Running the same 24 step process to recreate a bug scenario for the 1019th time.
  • Updating pre-checkin tests so that they'll continue to run after your code goes into source control.
  • Figuring out why a particular pre-checkin test fails on your machine, but not for anybody else.
  • Shepherding your submitted code changes through the gauntlet system that ensures build breakers don't get into source control.

In short, there's a lot of process involved in contributing code to a product. I don't see any substantial way it could be made better without sacrificing quality, but so much of it could be done by a reasonably trained monkey. It could be worse, though. A recent email from my fiancée (who's an interior designer) had this to say: “Meanwhile I am using my two college degrees and 17 years of work experience to make labels for tile.”

Comments

  • Anonymous
    August 04, 2004
    The comment has been removed
  • Anonymous
    August 04, 2004
    The comment has been removed
  • Anonymous
    August 04, 2004
    Other time consumers

    * Design reviews. Necessary, but the team might waste hours making up their mind on which way to go...

    * Getting consensus from project stakeholders in multiple departments. This could take months...

    * Analyzing & fixing a corrupt VSS database...
  • Anonymous
    August 04, 2004
    Sounds like this MS project is set up so that builds don't break: you simply can't check in a change that breaks the build.

    There are benefits and drawbacks both ways, but I've seen the "hate mail" approach lead to night after night of broken builds; the bigger the team, the worse the problem.
  • Anonymous
    August 04, 2004
    Bob, it would be the same if you worked on the runtime, because you don't have to use the same runtime for all your code. If/when Sun opens up the Java source, we will write the Ant build files and bring it into the bootstrap process. It already has the challenge of bringing up Ant and the XML parser without each other.

    The Gump nightly build is not a language thing; the modern languages just make linking so much easier. Gump is actually written in Python, and as of this morning is running on Mono, which currently tests the Ant .NET tasks and will soon test Axis interop.

    What it is, is process, and here is the process:

    - you make your source public in a good SCM repository.

    - you use a decent test framework, and add tests everywhere.

    - you test all the time.

    What Gump delivers is integration testing. It ensures that changes to one bit of code propagate across all dependencies in the chain. It isn't perfect: it doesn't do a thing for backwards compatibility, and it doesn't bring closed-source projects into the loop. But it is one of the best examples of distributed software integration I know of, CPAN being the other.
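The core idea of that kind of integration testing can be sketched in a few lines: build every project against the latest sources of its dependencies, in dependency order, so a breaking change surfaces in everything downstream on the next run. This is a toy illustration, not Gump's actual implementation; the project names and build callback are invented.

```python
# Toy sketch of Gump-style integration testing: each project is built
# against the LATEST sources of its dependencies, so one breaking
# change shows up across the whole downstream chain.
from graphlib import TopologicalSorter

def integration_run(deps, build):
    """deps: {project: [its dependencies]}.
    build: callable(project) -> bool (did the build/tests pass?).
    Returns {project: 'success' | 'failed' | 'skipped (dependency failed)'}."""
    results = {}
    for project in TopologicalSorter(deps).static_order():
        if any(results.get(d) != "success" for d in deps.get(project, [])):
            # No point building against a broken dependency.
            results[project] = "skipped (dependency failed)"
        elif build(project):
            results[project] = "success"
        else:
            results[project] = "failed"
    return results
```

With a chain like xml-parser → ant → axis, a failure in ant marks axis as skipped, which is exactly the "propagate across all dependencies" behavior described above.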
  • Anonymous
    August 04, 2004
    One process speed up we did at a previous company was to have the build machine zip up the intermediate files (.obj, .res, .pdb). When individual developers synced with the source control (using the label for the overnight build), they also grabbed the corresponding zips. Thus they didn't have to recompile, just re-link. This saved a lot of time, since a full build took two to three hours. Fetching the zip files and linking took ten to fifteen minutes.

    Linking on the local machine seemed to be enough to get the paths right for debugging symbols.

    Furthermore, you could sync up with any official build or any recent overnight build, so jumping back to last Friday wasn't an expensive experiment.

    We used the hate-mail approach for broken builds, and we never went more than a day without a working overnight build. The developer responsible for the breakage had to fix it first. Meanwhile, everyone else could stay productive with the latest successful build and quickly sync with the corrected build when it became available.
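The zip-and-relink shortcut described in the comment above could look roughly like this. The share layout, label naming, and returned step lists are all invented for illustration; the point is only the decision: if the overnight build published intermediates for the label you synced to, extract and re-link; otherwise fall back to a full build.

```python
# Rough sketch of the "sync, grab intermediates, re-link" shortcut.
# Hypothetical layout: the build machine publishes
# <build_share>/<label>-intermediates.zip containing .obj/.res/.pdb files.
from pathlib import Path
import zipfile

def fast_sync(label, build_share, workdir):
    """Return the steps needed to get a working build for `label`."""
    zip_path = Path(build_share) / f"{label}-intermediates.zip"
    if zip_path.exists():
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(workdir)          # drop intermediates in place
        return ["sync", "extract", "link"]  # minutes, not hours
    return ["sync", "full-build"]           # no intermediates published
```

Linking locally, as the commenter notes, also keeps the debug-symbol paths pointing at the local machine.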
  • Anonymous
    August 05, 2004
    How much does a build-break cost in doughnuts? -- :o)
  • Anonymous
    August 05, 2004
    We don't do donuts here (at least within the VS teams that I'm part of). Why? Because you can't check in a change that breaks the build (in theory, at least).

    There's a whole system that takes your changed files, compiles them, and runs them through a series of tests. Only if everything succeeds does the system make the actual check-in for you.
  • Anonymous
    August 05, 2004
    That sounds like a good safety measure if it doesn't get in the way. Does it take more time on your part, or is it automated enough that it feels like ordinary source control?
  • Anonymous
    August 06, 2004
    Mike: It doesn't really get in our way, except for the time factor.

    From a web page, we submit a set of files (or a change list). The system takes the changes, applies them to the most recent version of the checked-in sources, and builds the whole shebang.

    Next, the system runs through the same set of tests that devs are supposed to run before they check in anything. If everything works OK, the system then makes the check-in on your behalf.

    Put another way, we don't check in directly to the VCS. Rather, our changes go through an automated gauntlet that makes the check-in for us.

    In theory, nothing that breaks the build will ever make it into the VCS. The biggest downside is that all check-ins are serialized through a set of dedicated machines. Sometimes you'll wait hours to get your changes into the VCS.
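The gauntlet flow described above reduces to a simple loop: submissions are serialized through one queue, each is applied to the latest checked-in tree, built and tested, and only a clean run is committed on the submitter's behalf. This is a minimal sketch with invented names, modeling the source tree as a dict rather than a real VCS.

```python
# Minimal sketch of a check-in "gauntlet": serialized submissions,
# each applied to the latest sources; only passing changes land.
def run_gauntlet(submissions, build_and_test):
    """submissions: list of (author, changes) where changes maps
    file -> contents. build_and_test(candidate_tree) -> bool.
    Returns (final tree, per-author verdicts)."""
    tip, verdicts = {}, {}
    for author, changes in submissions:   # strictly serialized queue
        candidate = {**tip, **changes}    # apply to latest checked-in sources
        if build_and_test(candidate):
            tip = candidate               # the system checks in for you
            verdicts[author] = "checked in"
        else:
            verdicts[author] = "rejected"  # never reaches the VCS
    return tip, verdicts
```

The serialization is both the guarantee and the bottleneck: a rejected change never touches the tip, but everyone behind it in the queue waits.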
  • Anonymous
    August 06, 2004
    The comment has been removed
  • Anonymous
    August 06, 2004
    A reader: Some comments on your points

    1) A gauntlet-like system may be overkill for small to medium dev teams. If you don't have the resources to run one, don't. As for being a bottleneck, you're right.

    2) In my (admittedly limited so far) experience, I haven't seen many problems of this nature.

    3) We use a nightly build system. A dev decides when they want to grab a particular build. The tools used to create the "official" builds don't change frequently.

    4/5) This comes down to writing good tests, as well as good inter-team communication. The system we use doesn't solve every issue, but it saves us from many small, stupid mistakes. On the whole, I think the additional effort is worth it.