My mom doesn't care about space
Now, before I go on, in the interest of not filling this article with assorted exceptions for perfect accuracy (which I could never possibly achieve anyway), kindly allow me to make the general point, bearing in mind that, as usual, what I write is only approximately correct.
Hardly any customers actually care about space. My mom is definitely in the “I don't care” camp, and I bet your mom is too. Never once has she called me and complained that “Word is too big”. So, my friends, it turns out speed is everything.
But wait, why then do I devote all this attention to measuring space, whole articles in fact, if nobody cares? Well, the reason is that, as often as not, space is speed. Yup, you heard it here first, folks: all that stuff they taught you about time-space tradeoffs is bunk. Your default assumption should be that smaller is faster. Which is why we pay so much attention to space: because often the primary cause of slowness turns out to be bigness. Whether it's page faults, cache misses, or just tons of IO, space kills.
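To make that concrete, here's a minimal sketch of the effect I mean (my own toy example, not anything from Word or Windows; the struct names and sizes are invented for illustration). The loop does identical work per record either way, but the fat layout drags many times as many bytes through the cache and across pages, and the timings show it.

```cpp
// Toy demo of "smaller is faster": same traversal, same number of records,
// but one record layout wastes memory, so it touches far more cache lines.
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Small { int32_t key; };                 // 4 bytes per record
struct Big   { int32_t key; char pad[60]; };   // 64 bytes per record

template <typename T>
long long sum_keys(const std::vector<T>& v) {
    long long total = 0;
    for (const T& r : v) total += r.key;       // identical work per record
    return total;
}

template <typename T>
double time_sum(const std::vector<T>& v) {
    auto start = std::chrono::steady_clock::now();
    volatile long long sink = sum_keys(v);     // keep the optimizer honest
    (void)sink;
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    const std::size_t n = 5'000'000;           // arbitrary demo size; shrink if memory is tight
    std::vector<Small> small(n, Small{1});
    std::vector<Big>   big(n, Big{1, {}});
    std::printf("small records: %.1f ms\n", time_sum(small));
    std::printf("big records:   %.1f ms\n", time_sum(big));
}
```

Exact numbers will vary with your compiler and hardware, of course, but on a typical machine the bloated layout loses badly even though the instruction count per record is the same.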
I love my mom, but she wouldn't know a soft-fault if it hit her on the head. She just knows it's too slow.
Comments
I think you and I are both saying that space matters, even though Mom doesn't think about it much. And sometimes it matters not in the way you were told :) - Anonymous
March 15, 2004
here's a good company/team motto:
"We care about space so your mom doesn't have to."
:) - Anonymous
March 16, 2004
That's very true. Space matters less every day, too. People who consider themselves good with computers may complain (though the numbers who will are dwindling :) about how horribly big Windows XP is, for instance. ;) But as technology progresses it matters less and less.
As far as smaller programs being faster, that may be true in general, but it's not always true. If you save 10% of your space by using an archive format that decompresses using ten times as many CPU cycles, it's a judgment call. Well, it depends on how many CPU cycles you already have, but if your decompressor was already slow I don't think it's clear-cut. Usually, though, yes, hard drives are far slower than even the worst mistakes you can make that don't touch them. - Anonymous
March 16, 2004
Now that CPUs are so fast compared to memory, reducing the memory footprint is often the best way to increase speed. Time is becoming proportional to space, more and more, every year.
I used to optimize for speed by focusing on the code executed. Now I tend to focus on the size of repeated data structures. - Anonymous
March 16, 2004
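To illustrate the point in the comment above (an editorial sketch, not the commenter's code; the Order structs and field names are invented for the example): shrinking a record you keep millions of copies of is often the cheapest speed win there is. Pick smaller field types and group the small members, and the working set can halve without changing a line of the algorithm.

```cpp
// A hypothetical "repeated data structure" trimmed down: same information,
// smaller types and better ordering, so millions of copies fit in far
// fewer cache lines and pages.
#include <cstdint>
#include <cstdio>

// Before: careless field sizes and ordering.
struct OrderBefore {
    bool     rush;        // 1 byte + 7 bytes padding before the double
    double   total;       // 8 bytes
    int64_t  customer_id; // 8 bytes, even though ids fit in 32 bits here
    bool     gift_wrap;   // 1 byte + 7 bytes tail padding
};                        // typically 32 bytes

// After: smaller types, flags packed together, big members first.
struct OrderAfter {
    double   total;       // 8 bytes
    uint32_t customer_id; // assumption: ids fit in 32 bits
    bool     rush;
    bool     gift_wrap;   // 2 flag bytes + 2 bytes tail padding
};                        // typically 16 bytes

int main() {
    std::printf("before: %zu bytes per record\n", sizeof(OrderBefore));
    std::printf("after:  %zu bytes per record\n", sizeof(OrderAfter));
    // With 10 million in-memory orders that's roughly 320 MB vs 160 MB
    // of working set, the kind of halving the comment above is about.
}
```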
James writes that "Space matters less each day" but I don't think I can agree with that. I think I like Frank's position better: "CPUs are so fast compared to memory..."
Though main memory is getting more abundant, and disk is also getting more abundant, I think from that we can only conclude that disk footprint matters less. For example, I think there are theme sets we ship with Windows XP (bitmaps, sounds, etc.) that rival the size of the entire Windows 3.1 distribution. Is this a bad thing? I don't think so... I think it's what people want and they can afford the disk footprint nowadays.
But when it comes down to memory used for core processing activities... space is the leading indicator of speed, which is why we have to care about it so much, even though my mom doesn't.
Time-space tradeoffs are out there, but it's only by being very clever that you can actually gain speed by using space. Lacking that careful plan, bigger is slower and smaller is faster. And it's likely to become more so in the future. - Anonymous
April 03, 2004
When you say "time-space tradeoffs is bunk", I am a bit surprised. Early algorithmics (what you will find in Introduction to Algorithms, Cormen et al.) just assumes independence between the time to access one memory unit and overall memory consumption. But many recent algorithmic works (see the whole Data Streams theory) do not make such an assumption any more. There is still a time-space tradeoff, but with a dependency between the time to access one memory unit and the overall memory consumption. - Anonymous
April 29, 2004
If your routine can fit into the L1 cache on your CPU of choice, then fine, small is good. However, there are times when bigger is better -- unrolling loops, replacing functions with inline code, etc., can all speed things up under certain circumstances (and indeed these are often done by compilers in the name of optimisation). There are no hard and fast rules, and it all depends on what the code in question is doing.
Also, given that (once the startup cost is paid) most apps spend 99% of their time waiting for the user to do something, speed is not really an issue. As the process that does 99% of the work on my PC during the day is the "System Idle Process", I'd rather Word was 50% slower and 50% smaller! - Anonymous
June 16, 2005
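And to sketch the loop-unrolling point above (again an illustration, not the commenter's code), here's the classic trade of code size for fewer branches. Whether it actually wins depends on the compiler and the CPU, and modern compilers will frequently do this for you, so measure before hand-rolling it.

```cpp
// Manual loop unrolling: bigger code, fewer loop-control branches per element,
// and independent partial sums that the CPU can overlap.
#include <cstddef>
#include <cstdio>

// Straightforward version: one add and one branch per element.
long long sum_simple(const int* data, std::size_t n) {
    long long total = 0;
    for (std::size_t i = 0; i < n; ++i) total += data[i];
    return total;
}

// Unrolled by four.
long long sum_unrolled(const int* data, std::size_t n) {
    long long t0 = 0, t1 = 0, t2 = 0, t3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        t0 += data[i];
        t1 += data[i + 1];
        t2 += data[i + 2];
        t3 += data[i + 3];
    }
    for (; i < n; ++i) t0 += data[i];   // leftover elements
    return t0 + t1 + t2 + t3;
}

int main() {
    int data[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    std::printf("%lld %lld\n", sum_simple(data, 10), sum_unrolled(data, 10));
}
```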
At Microsoft you can't say you're excited about anything; you have to say that you're "super excited".