I wish I had written these...

From Eric Sink, via Dana Epp:

"My Life as a Code Economist"

And Dana's original article:

"The cost of fixing bugs..."

Comments

  • Anonymous
    November 27, 2005
    The comment has been removed

  • Anonymous
    November 27, 2005
    The comment has been removed

  • Anonymous
    November 27, 2005
    Economics? I've had enough previous employers where the cost of fixing a bug was blamed on the person who reported the bug. After seeing a posting by one of your colleagues who was penalized for helping to improve bug reporting, plus a few comments on MiniMsft's site, it sort of looks like Microsoft is par for that course.

    There are more than two groups of software engineers. Some have different opinions than others about what kind of bug should be considered a show-stopper. Some have different opinions than others about whether warranty service should be provided.

  • Anonymous
    November 27, 2005
    The comment has been removed

  • Anonymous
    November 27, 2005
    till_pero, clearly you've never worked as a professional software developer (or at least not on any project of reasonable complexity).

There are NO projects that ship without bugs. None. Even mission-critical ones like the projects that run airplanes, nuclear power plants, and space ships. Those applications have a very low tolerance for bugs, and have a massive amount of overhead associated with ensuring that the project has as few defects as possible, but they still ship with bugs.

  • Anonymous
    November 27, 2005
    The comment has been removed

  • Anonymous
    November 27, 2005
    The comment has been removed

  • Anonymous
    November 28, 2005
People who compare engineering to software engineering, and claim that since bridges don't fall down on a regular basis software should be flawless, bother me.

    Tell you what, when you're told to design a car that one month later has to be able to go under water, then 6 months later, right before it's due to ship, also has to be able to go into outer space, then we'll talk.

    Engineering has a set number of variables whereas software has a (practically) unlimited number of code paths.

Given a limited amount of resources, one has to determine where efforts should be directed.

  • Anonymous
    November 28, 2005
    The comment has been removed

  • Anonymous
    November 28, 2005
    The comment has been removed

  • Anonymous
    November 28, 2005
    The comment has been removed

  • Anonymous
    November 28, 2005
    Larry,
    there's a difference between:
    - shipping products with bugs.
    and
    - shipping products with known bugs.

It is possible to ship even large systems without known bugs. I'm not talking about the kind of bugs you get due to a defective (or even non-existent) QA, but the kind of bugs that slip by even extensive (but provably not extensive enough) QA.

    There are many reasons products ship with known bugs, but usually it boils down to engineers and developers being overruled by the ones wanting to make a quick buck. But whatever the reason, it's always a deviation from good business ethics - at best.

  • Anonymous
    November 28, 2005
    The comment has been removed

  • Anonymous
    November 28, 2005
    Maurits, that's a good point. Personally, I classify this in the same category as off-by-one overruns and heap overruns.

    Two years ago, if you'd asked a security professional, they would have told you that a one byte overrun of the stack wasn't exploitable. Then the hackers showed how it was exploitable. Similarly with heap overruns.

    Our knowledge of what is exploitable and what is not exploitable constantly changes over time. This is why Microsoft requires that developers take ANNUAL security training - because the exploit landscape constantly changes.

    From what little I've seen about this particular problem, it appears that something like that happened here - a condition that was previously thought to be unexploitable was shown to be exploitable.

  • Anonymous
    November 28, 2005
    Monday, November 28, 2005 11:10 AM by vince

    > and for reasons I don't understand no one
    > ever sues.

Legal proof is far, far harder than engineering proof. Even when you can accomplish legal proof, it's rare for awards to even cover the expenses. The outcome shown in "A Civil Action" is more common than "Erin Brockovich", but more common than both is for the case to just be abandoned because victims can't afford to produce legal proof.

    Monday, November 28, 2005 5:52 PM by Maurits

    > http://www.fordham.edu/halsall/ancient/hamcode.html
    > "... if a man builds a house badly, and it
    > falls and kills the owner, the builder is to
    > be slain..."

    Yeah, but if the builder can afford better lawyers, and if the builder isn't stupid enough to do something like telling the truth in court, then how are you going to prove it...

  • Anonymous
    November 28, 2005
    The comment has been removed

  • Anonymous
    November 29, 2005
Sometimes I wonder if people just need a target to rant against! Seriously ... relax, people. Speaking from 20 years in IT, MS is by no means the worst bug shipper/admit-er/fix-er. All other vendors provide shoddy support from time to time ... in my opinion the cumulative worst is another major sw/hw/services vendor.

  • Anonymous
    November 30, 2005
    The comment has been removed

  • Anonymous
    November 30, 2005
    The comment has been removed

  • Anonymous
    November 30, 2005
    The comment has been removed

  • Anonymous
    December 01, 2005
    Andrew,

    As some of my comments on my blog point out in the original article, I agree that we need to place pressure on the vendors from time to time. My point though is that pressure needs to be applied through a responsible workflow. If security researchers really wish to protect the safety and security of their clients while elevating their own credibility in the industry, they must follow responsible disclosure practices.

Researchers have every right to be able to disclose their findings. The balance is doing so while respecting the well-being of the rest of the Internet. This wasn't the case. They didn't even make an effort to notify Microsoft beforehand.

And I am not absolving Microsoft from responsibility here. They have a TERRIBLE track record when it comes to responding to some threats (see eEye's Upcoming Advisories for just a few examples of vulnerabilities going on over 200 days now). But it's hard to work on those when they have to respond to new attack patterns that are in the wild.

Further to this, Microsoft showed their human frailty in their security response practices with this incident. During triage of the original bug a threat model would have been performed, and it is apparent this attack vector wasn't even considered. And it should have been. But now it's a moot point. Now they are in defensive response mode in an effort to protect all their clients.

    How does the irresponsible disclosure benefit us as the user? It doesn't. It actually put us all at MORE risk. And that's not acceptable.

    Naive? Perhaps. But that's because I believe in the disclosure process. It requires both sides to work. When either side collapses, the whole thing is shot. This incident is proof of that.
