Productivity -- but to what extreme?
By far the most important feature of the CLR (and WinFX, and managed code generally) is developer productivity. We want developers to be able to build better applications, faster, on the managed platform than anywhere else. As a side note, I once saw a presentation that demonstrated how this principle actually helped "solve world hunger" by letting developers get projects done faster and spend the extra time on social issues… Now this does not mean that stuff like security, web services, performance, etc. are not important, they certainly are, but productivity is our reason for being.
Now I am not sure we are exactly solving world hunger, but I do think we are letting developers focus on their core business problem while they leave the plumbing to the platform. As much as I’d love to say this is all because of the greatness of the CLR or the consistency of the API set, it also has a ton to do with the quality of the tools support we get with Visual Studio.
I think that is all motherhood and apple pie, right? (I'd love to hear your feedback.) So where am I going with this?
Well, today I had a heated discussion with some of the smartest people on the CLR about the balance between developer productivity and performance. The debate goes something like this:
Me: Feature X will make developers more productive, so we should do it!
Other Guy: Feature X will make developers' apps so slow it will not matter; they will use something else…
Now clearly, both positions are a little extreme… but there is a lot of room in between. Where should we fall in that gray area? Environments such as VBRun clearly prioritized developer productivity above performance (although VBRun perf was actually pretty good). Other environments, such as the C runtime library, clearly prioritized performance ahead of productivity.
So, what advice would you give my team? Say you had $100 to spend, abstractly, on "more productivity features" or "better performance": where would you spend your money? And if any of you hardnosed folks still eat and drink unmanaged (Win32) code all day long, I'd love to hear from you as well.
Thanks!
Comments
Anonymous
June 09, 2004
I can buy more hardware, I can't buy more hours in the day.
Anonymous
June 09, 2004
Productivity... Upgrading machines is cheap compared to hiring new people and training them to get them up to speed just so that they can help implement the backlog of features we need in the next version of our app, because we spend too long with verbose code to do what we need instead of spending more time chasing the random bugs and quirks that always seem to pop up. (Yes, that was spit out as one big sentence. :-p)
Anonymous
June 09, 2004
I've always liked a tiered approach. That is, some underlying feature is fast but the way to access it and get all that performance may not be particularly easy. Then you build some sort of easy access layer on top of it that makes the feature easier to use but you take a perf hit. That way, everybody is happy.
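To make that concrete, here's a rough sketch of the kind of two-tier layering I mean. The names and types here are hypothetical, just for illustration, not real Framework APIs:

    // Low-level tier: fast, but the caller manages the buffer.
    public class FastReader
    {
        private System.IO.Stream stream;

        public FastReader(System.IO.Stream stream) { this.stream = stream; }

        // Fills a caller-supplied buffer; no per-call allocations.
        public int Read(byte[] buffer, int offset, int count)
        {
            return stream.Read(buffer, offset, count);
        }
    }

    // Easy tier built on top: one call does it all, at the cost of
    // extra allocations and copying behind the scenes.
    public class EasyReader
    {
        public static byte[] ReadAll(System.IO.Stream stream)
        {
            System.IO.MemoryStream result = new System.IO.MemoryStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                result.Write(buffer, 0, read);
            }
            return result.ToArray(); // the convenience copy you pay for
        }
    }

Most callers take the easy tier and move on; the few who measure a real problem can drop down to the fast tier without switching platforms.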
Now it may turn out in your case this layered approach won't work. If so, it's really hard to say because first of all my performance goals aren't likely to be the same as someone else's. What your opponent believes is a perf issue might not matter at all to me but it might be critical to someone else. Obviously MS believes that garbage collection was worth the perf hit in order to make development easier (and more reliable). But at some point in the past, the perf hit was bad enough to make that solution not commercially viable.
I really seem to be dancing around on this reply. :-( OK, to answer your question based on the assumed truth of the statement "so slow it will not matter", I'd put $80 on making it fast now, with $20 to make it as easy to use as makes sense. Later on you can think about a simpler API, if needed, to improve developer productivity, kind of like VB's My classes concept. BTW, for other scenarios where perf isn't such a critical issue I switch that to $70 dev productivity and $30 perf.
Anonymous
June 09, 2004
The comment has been removed
Anonymous
June 09, 2004
The answer to your question is glaringly obvious. The ongoing increases in hardware speed mean that the people on your team worrying about nebulous performance issues simply aren't paying attention. Developer time is vastly more expensive than better hardware. I'd be willing to bet that the people who claim that the productivity-adding features are "too expensive" have never actually attempted to profile them and find out…
Anonymous
June 09, 2004
The comment has been removed
Anonymous
June 09, 2004
Machines are a finite resource. While we spend all of our money buying new machines, we could just as easily have implemented the same algorithms on our old machines with a focus on performance. It is always sad to see a one-year-old machine get kicked to the side because it was perceived to be too slow to complete a crucial computation.
What we don't realize is that we are at a crucial juncture where we are enabling people through computing who can't buy a new machine every six months. We are still making promises to have broadband in every home and a laptop in the hands of every grade school student. These are individuals who need to run our software on old machines, cheap machines, hand-me-downs if you will. We need to focus on performance to enable broader adoption.
People still have PIIs out there and old Athlon boxes, and they get online every day and run apps from all over the place. You start throwing .NET apps on those same machines that operate completely fine without it, and you find the .NET apps run slow and sluggish by comparison. That just isn't a good way to introduce your new platform, no matter how bound you are to disk-based I/O, and no matter how well you are tuned to operate with 256 megs of RAM or more.
How would I spend the money? Split it down the middle at 50/50. If I don't need to go out and buy new machines all the time, I'll have more money in my pocket to spend on your software, so you can hire more devs and have better overall feature implementation. I don't care how you split $100 when you could be spending $200 instead, had you not forced me to buy a new machine.
Anonymous
June 09, 2004
I think the issue is one of superficial productivity gains.
If an interface is easy to use, folks will use it and move on with life. Later, when it comes time to "get real" with their product, their QA department may complain that the software isn't clearing the performance bar. The developer then faces the difficult and expensive task of poring over code to discover what's slow and understand why. To me this is a potentially disastrous loss of productivity, especially if the interface in question is large and the developer has built a large amount of functionality around it.
Claiming that hardware resources are cheap and developer time is expensive is not living in the real world. For most commercial non-proprietary scenarios, software performance remains critical.
Anonymous
June 09, 2004
The comment has been removed
Anonymous
June 09, 2004
Many years ago, when I first started programming, I used assembler. Then I moved on to C, then C++, then VB. I made these moves for productivity, although I lost a certain amount of control/speed.
Currently there are very few people who use assembler. Therefore, the majority of people to one extent or another have made that compromise. This leads to 2 points:
1. Your question is too coarsely grained. It depends on the productivity gain and the amount of speed lost.
2. It also depends on the point in time at which the question is asked. The compromise might not be appropriate now, but at some point in the future it might be. As technology improves there is a natural bias in favour of productivity. If one is designing something that will last a long time, it is important that this bias is borne in mind and that thought is given to when the crossover (in favour of productivity) is likely to occur for most people.
Alex Kazovic
Anonymous
June 09, 2004
Although it might be easy to conclude that productivity should always be the first priority, because you can buy faster hardware anyway and for less than you can get developer time, that's a bit too simplistic.
First of all, for performance to really matter we're talking about client/server scenarios (performance in desktop apps is unimportant; or rather, let the game developers write assembler to differentiate themselves!).
On the server, it's not always true that you can just keep buying more hardware to speed things up. At a certain point you've hit the top, and to increase speed from there on you need to get into clustering and other complicated things, which in turn requires people to set it up, maintain it, or even adjust software to run correctly on it. And judging by how slow websites are becoming and how internet usage is still growing, I'd say that's turning more and more into a major issue.
So for typical desktop applications, $100 on productivity. For server-side applications, $50/$50. We need productivity there as well, because without it, we won't have anything worthwhile to run quickly anytime soon anyway.
Anonymous
June 09, 2004
I'm with Pavel on this.
In many (but not all) situations developers could afford to be less productive but be forced to write more sustainable code.
Anonymous
June 09, 2004
The comment has been removed
Anonymous
June 09, 2004
Productivity = lower cost of development
Performance = higher usability benefit to the end user
ASP.Net = lower deployment cost to the user
Windows Forms = much higher usability benefit to the user
Cost vs benefit: Cost is easy to measure, benefit is very hard to measure.
And in corporate environments, the decision-maker is seldom the end-user.
And places much higher emphasis on quantifiable factors, not fuzzy factors.
So, in this context, cost affects the sale, but usability doesn't. So you should optimise for productivity rather than performance.
When we developers sell more software, MS makes more $ to plow back into better performance in the future (e.g. when investment in productivity becomes marginal, and/or hardware advances slow down).
Also: time spent on better performance is often not an investment since hardware advances soon provide the same speed benefits "for free".
Anonymous
June 09, 2004
The comment has been removed
Anonymous
June 09, 2004
You can't always buy better hardware. If you deal with hundreds of millions of records (terabytes of data), there just isn't any hardware available to handle it. You have to code for performance, or at least for scalability: your code has to be written so it can run distributed. Easy code ("written" using your mouse) just doesn't cut it here. Microsoft's website Backstage articles offer some insight: start with simple, mouse- and wizard-generated code, but then tweak it so it can actually be used in a production environment.
As for my personal value ladder:
1. Readability
2. Performance
3. Productivity
Most of the time the productive code will have to be tweaked anyway, so all the productivity gains go down the drain, especially when the changes have to be done by a different developer who knows little more than how to drag a database connection onto a form.
Anonymous
June 09, 2004
As far as .NET changing the world, you'll likely be inspired by the NxOpinion application. For a quick ramp-up, check out the 4 1/2 minute video case study linked in the upper-right corner of the Microsoft PressPass story: http://www.microsoft.com/presspass/features/2004/Jan04/01-21NxOpinion.asp
Anonymous
June 09, 2004
Productivity!
In 18-24 months, on state-of-the-art hardware, my app'll be twice as fast anyway.
I'm going to get more performance gains from adjusting my programming habits than I'll get from any "tweak". And if the programming environment lets me change habits without having to kick dead whales down the beach, then I'll be more likely to make drastic and necessary changes.
Anonymous
June 09, 2004
I don't believe the developer. There is no trade-off between productivity and performance, in reality. I have never seen a single situation where a true trade-off exists.
Developers who say there is a trade-off are thinking about a fundamental design flaw that produces the perceived "trade-off". The problem is the design flaw -- perhaps it goes so deep that it will take enormous effort to eliminate. But if a design produces a trade-off between productivity and performance, it is, by definition, flawed. Fix the flaw, and the trade-off goes away -- you can have both productivity and performance.
Post the "trade-off" and let us dissect it!
Anonymous
June 09, 2004
The comment has been removed
Anonymous
June 10, 2004
I'll side with those on the "Productivity means writing GOOD code, not just the number of lines" side.
Brad, your comment, "Now this does not mean that stuff like security, web services, performance, etc. are not important, they certainly are, but productivity is our reason for being," is very revealing about the bias at Microsoft as to what "productivity" means. Writing lots of insecure code isn't productive. Writing lots of code that isn't maintainable because it is poorly written isn't productive in the long run. Productivity is not lines of code.
I'm very excited about what I've seen in VS2005 in terms of productivity. And I generally do agree that productivity is more important than performance. Many times I have spoken at developer events and gotten questions about how Windows and/or SQL Server and/or ASP.NET performs. I never answer the question directly; I just ask how many of them are currently working on a project where, ignoring security and server administration and focusing purely on performance, a single dual-proc server with a couple gigs of RAM couldn't handle the load. I load balance all my servers just so I can do application upgrades without downtime, but none of my apps need to be load balanced to handle the load. Very few people can say otherwise.
We do have several 3rd party ISV apps that have to be load balanced, but that is because they were written with memory leaks and lose track of database connections, not because of the amount of traffic.
Well, that pretty much answers it for me. We need better code, not more code. And we need better code, not faster code. We need tools like FxCop and NUnit presented in a way that helps developers who aren't experienced in a disciplined environment learn about the tools and how to use them. And then have VS provide very easy but tight integration with those tools. Those of us who use tools like these all the time find them easy enough, but if I hadn't had someone teaching me NUnit for a couple of days several years ago, I would have given up. FxCop was easy, but it had to be run separately, and until recently you couldn't make changes to your code while you had it open because it locked the DLL!
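For anyone who hasn't tried it, a minimal NUnit test is only a few lines. Here's a sketch using the NUnit 2.x attribute style (the Calculator class is a made-up example, not anyone's real code):

    using NUnit.Framework;

    public class Calculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    [TestFixture]
    public class CalculatorTests
    {
        // NUnit finds this method via the [Test] attribute and
        // reports a pass/fail result in the test runner.
        [Test]
        public void AddReturnsSum()
        {
            Calculator calc = new Calculator();
            Assert.AreEqual(5, calc.Add(2, 3));
        }
    }

The hard part was never the code; it was getting the runner, the project setup, and the habit in place, and that's exactly where IDE integration would pay off.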
Anonymous
June 10, 2004
Unfortunately there's no Moore's law for software development. Therefore, any productivity gains we can get are valuable. I figure you guys are smart enough to keep making the engine run fast under the hood.
Of course, that's balanced against any performance hit so bad it makes the app unusable. There's no point in being able to build even MORE unusable apps.
Anonymous
June 10, 2004
The comment has been removed
Anonymous
June 10, 2004
Here are a couple of quotes that I think apply:
"The ability to simplify means to eliminate the unnecessary so that the necessary may speak."
- Hans Hofmann (quoted in An Introduction to the Bootstrap, 1993)
"Things should be made as simple as possible, but not any simpler."
- Einstein
Here are some questions to ask yourself:
Why did Java displace a lot of COM in the marketplace?
What's J2EE's chief complaint among developers?
What would .NET adoption look like if it were twice as fast, as opposed to half as hard?
Anonymous
June 10, 2004
The comment has been removed
Anonymous
June 10, 2004
I'm going to toss in one final comment even though it will get buried. eAndy has pointed out that documentation may be the key to a true reliability story. Proper documentation can lead users to write better code, and to find the code that is most ideal for what they are trying to do. Crosslinks and crossrefs can confuse some users, but I think the ability to turn on such a powerful feature for the dev who doesn't mind reading through 5 different options so he can choose the best is indispensable.
Brian Grunkemeyer recently commented on why some cancellation code I had written was flawed. I knew it was flawed the moment I put it out there; however, it solved a problem that needed to be solved, and so I used it. Now, he mentioned that there were a number of things that could go wrong that made my code bad, and my response was quite clear: 1) tell me, through documentation, what I can look for to minimize my risk; 2) fix those things that I couldn't work around without being the BCL. I highly recommend reading any posts by Brian Grunkemeyer, as he is an extremely intelligent guy and is responsible for quite a bit of the under-the-hood performance that everyone talks about. However, recognizing the trade-offs revealed in his comments is key to understanding why performance at the developer level is so important, since it exists to augment every place in the BCL where they had to make a trade-off.
http://weblogs.asp.net/justin_rogers/archive/2004/05/22/139649.aspx
That forces me to decide to take another $10 from the productivity crew and give it to the UE crew so they can make better documentation. I hate hearing things like, well, we didn't have enough time to focus on perf for the first version, so we punted it to the second version. Why bother punting it to the second version, when you know that all of the productivity features being added are going to again put pressure on the ability to focus on performance?
Stop making that trade-off, as Frank Hileman wrote, and instead find ways to remove the trade-off. If that means providing one less feature to improve 4 others, the improvement on the 4 others is also a feature of the system.
Anonymous
June 10, 2004
Hard to give a meaningful answer without more information, but in general my opinion is to go for performance, with the caveat that caution is advised. Sure, for some developers hardware upgrades are no problem, but there are many who, for reasons of personal finance or corporate IT policy, won't be able to upgrade so quickly; and we all know how quickly the experience of using performance-hamstrung software palls. It depends on how badly the latter group will be affected, and also on the relative size of the two groups. Annoying 50% of developers is going to cause a brouhaha. Annoying 10% while pleasing most of the others is not.
Anonymous
June 10, 2004
"You know you've achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away."Anonymous
June 12, 2004
I want people to write tools and frameworks spending $100 on performance so that the rest of the world can spend $100 on productivity.
Anonymous
June 12, 2004
The comment has been removed
Anonymous
June 12, 2004
Correction:
If there is an easy, slow-performing way of doing a task, there had better be a more performant way of doing it too, even though it may be more difficult.
Anonymous
June 12, 2004
The comment has been removed
Anonymous
June 16, 2004
I don't envy having to make this determination. What I will say is that for my own part, I'd probably spend the money 50/50.
My bias: I have worked for a lot of small-medium sized companies for whom Time-To-Market was every bit as important as overall performance when the product got there.
If I was approached to trim my $100 budget by 10%, I'd probably (after a lengthy budget battle) take the $10 completely off the performance side. Though rarely ideal, it's likely easier to explain to a customer that they need an extra server afterwards than it is to explain why their software will take 6 months longer to deliver.