VMware – the economics of falling skies … and disk footprints.

There’s a phrase that has been going through my head recently. Before coming to Microsoft I ran a small business; I thought our bank manager was OK, but one of my fellow directors – someone with greater experience in finance than I’ll ever have – sank the guy with seven words: “I have a professional disregard for him.” I think of “professional disregard” when hearing people talk about VMware. It’s not that the people I’m meeting simply want to see another product – Hyper-V – displace VMware (well, those people would, wouldn’t they?), but that nothing they see from VMware triggers the feelings of “professional regard” you have for some companies – often your toughest competitor.

When you’ve had a sector to yourself for a while, having Microsoft show up is scary. Maybe that’s why Paul Maritz was appointed to the top job at VMware. His rather sparse entry on Wikipedia says that Maritz was born in Zimbabwe in 1955 (the same year as Bill Gates and Ray Ozzie, not to mention Apple’s Steve Jobs and Eric Schmidt – the man who made Novell the company it is today) and that in the 1990s he was often said to be the third-ranking executive at Microsoft (behind Gates and Steve Ballmer, born in early 1956). The late 90s was when people came to see us as “the nasty company”. It’s a role that VMware seem to be sliding into: even people I thought of as being aligned with VMware now seem inclined to kick them.

Since the beta of Hyper-V last year, I’ve been saying that the position is very like the one with Novell in the mid-1990s. The first point of similarity is economics. Novell NetWare was an expensive product, with the kind of market share that makes a certain kind of person talk of “monopoly”. That’s a pejorative word, as well as one with special meanings to economists and lawyers. It isn’t automatically illegal, or even bad, to have a very large share (just as a very large majority in parliament can make you either Nelson Mandela or Robert Mugabe). The market mechanisms which act to ensure “fair” outcomes rely on buyers being able to change to another seller (and vice versa – some say farmers are forced to sell to supermarkets on unfair terms); if one party is locked in, then terms can be dictated.

Microsoft usually gets accused of giving too much to customers for too little money. Economists would say that if a product is overpriced, other players will step in – regulators wanting to reduce the amount customers get from Microsoft argue they are preserving such players. Economists don’t worry so much about that side; they point out that a new supplier needs to win buyers over, which means lower prices, so a new entrant must expect to make money at a lower price. On that logic, if Microsoft makes a serious entry into an existing market dominated by one product, that product is overpriced. Interestingly, I’ve seen the VMware side claim that Hyper-V, Xen and other competitors are not taking market share and that VMware’s position is as dominant as ever.

The second point of similarity is that when Windows NT went up against entrenched NetWare it was not our first entry into networking – I worked for RM, where we OEM’d MS-NET (a.k.a. 3Com 3+, IBM’s PC LAN Program) and OS/2 LAN Manager (a.k.a. 3+Open). Though not bad products for their time – like Virtual Server – they did little to shift things away from the incumbent. The sky did not fall in on Novell when we launched NT, but that was when people stopped seeing NetWare as the only game in town. Worse – and this is the third point of similarity – new customers began to dismiss its differentiators as irrelevant, and that marks the beginning of the end.
Having used that analogy for a while, it’s nice to see no less a person than a Gartner Vice President, David Cappuccio, envisaging a Novell-like future for VMware. In a piece entitled “Is the sky falling on VMware”, SearchServerVirtualization.com also quotes him as saying that “‘good enough’ always wins out in the long run”. I hate “good enough”, because so often it is used to mean “lowest common denominator”. I’ve kept the words of a Honda TV ad with me for several years:

Ever wondered what the most commonly used word in the world is?
“OK”
Man's favourite word is one which means all right, satisfactory, not bad.
So why invent the light bulb, when candles are OK?
Why make lifts, if stairs are OK?
Earth's OK, why go to the moon?
Clearly, not everybody believes OK is OK.
We don't.

Some people advance the idea that we don’t need desktop apps because web apps are “good enough”. Actually, for a great many purposes, they aren’t. Why have a bulky laptop when a netbook is “good enough”? Actually, for many purposes it is not. Why pay for Windows if Linux is ‘free’ … I think you get the pattern here. But it is our constant challenge to explain why one should have a new version of Windows or Office when the old version was “good enough”. The answer – as any economist will tell you – is that when people choose to spend extra money, whatever differentiates one product from the other is relevant to them and outweighs the cost (monetary or otherwise, real or perceived): the differentiators redefine “good enough”, and the old version is not good enough any more. If we don’t persuade customers of that, we can’t make them change. [Ditto people who opt for Apple: they’d be spectacularly ignorant not to know a Mac costs more, so unless they are acting perversely they must see differentiators, relevant to them, which justify both the financial cost and the cost of forgoing Windows’ differentiators. Most people, of course, see no such thing.]

One of the earliest business slogans to get imprinted on me was “quality is meeting the customer’s needs”: pointless gold-plating is not “quality”. In that sense “good enough” wins out: not everything one product offers over and above another is a meaningful improvement. The car that leaves you stranded at the roadside isn’t meeting your needs, however sophisticated its air conditioning; the camera you don’t carry with you isn’t meeting your needs, even if it could shoot 6 frames a second; the computer system which is down when you need it is (by definition) not meeting your needs. A product which meets more of your needs is worth more.

A supplier can charge more in a market with choices (VMware, Novell, Apple) only if they persuade enough people that the differentiators in their products meet real needs and are worth a premium. In the end Novell didn’t persuade enough; Apple have not persuaded a majority, but enough for a healthy business; and VMware? Who knows yet what “enough” is, never mind whether they will get that many. If people don’t see the price as a premium but as a legacy of being able to overcharge when there was no choice, then it becomes the “VMware tax”, as Zane Adam calls it in our video interview. He talked about mortgaging everything to pay for VMware: the product which costs more than you can afford doesn’t meet your needs either, whatever features it may have.

I’ll come back to cost another time – there’s some great work Matt has done which I want to borrow rather than plagiarize; it needs a long post, and I can already see lots of words scrolling up my screen, so I want to give the rest of this post to one of VMware’s irrelevant feature claims: disk footprint. Disk space is laughably cheap these days, and in case you missed the announcement, Hyper-V Server now boots from flash – hence the video above. Before you run off to do this for yourself, check what set-ups are supported in production, and note it is only Hyper-V Server, not Windows Server or client versions of Windows. The steps are all on this blog already: see How to install an image onto a VHD file (I used a fixed-size VHD of 4 GB), then boot from the VHD stored on a bootable USB stick. Simples. A rough scripted version of those steps is sketched below.
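Here’s a minimal sketch of those steps, assuming the Windows 7 / Server 2008 R2-era diskpart (which understands VHDs) and imagex from the Windows AIK. The file paths, the V: drive letter and the image index are my own illustrative choices, not the blog’s, and this is not the supported procedure – check the linked walkthrough for the real thing:

```python
# Sketch: build a 4 GB fixed-size VHD and apply a Hyper-V Server image to it.
# Run elevated, on Windows. Paths and drive letters below are assumptions.
import os
import subprocess
import tempfile

DISKPART_SCRIPT = """\
create vdisk file=C:\\hypervboot.vhd maximum=4096 type=fixed
select vdisk file=C:\\hypervboot.vhd
attach vdisk
create partition primary
active
format fs=ntfs quick
assign letter=V
"""

# diskpart takes its commands from a script file passed with /s.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(DISKPART_SCRIPT)
    script_path = f.name

subprocess.run(["diskpart", "/s", script_path], check=True)
os.unlink(script_path)

# Apply the first image in the Hyper-V Server install.wim to the attached VHD;
# D:\sources\install.wim (the install media) is an assumed location.
subprocess.run(["imagex", "/apply", r"D:\sources\install.wim", "1", "V:\\"], check=True)

# Making the USB stick bootable (bcdboot/bcdedit pointing at the VHD) follows
# the "How to install an image onto a VHD file" walkthrough linked above.
```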

I’ve never met a customer who cares about a small footprint: VMware want you to believe that a tiny little piece of code must need less patching, give better uptime, and be more trustworthy than a whole OS – even a pared-down one like Windows Server Core or Hyper-V Server. Now Jeff, who writes on the virtualization team blog, finally decided he’d heard enough of this and that it was time to sink it once and for all. It’s a great post (with a follow-up). If you want to talk about patching and byte counts, argues Jeff, let’s count the bytes in patches over a representative period. Microsoft Hyper-V Server 2008 had 26 patches, not all of which required reboots, and many were delivered as combined updates; they totalled 82 MB. VMware ESXi 3.5 had 13 patches, totalling over 2.7 GB. That’s not a misprint – 2,700 MB against 82 (see, VMware sometimes does give you more) – and it’s because VMware releases a whole new ESXi image every time they release a patch, so every ESXi patch requires a reboot (the arithmetic is sketched below). Could that be why VMotion (Live Migration, as now found in R2 of Hyper-V) seemed vital to them and merely important to us? When we didn’t have it, it was the most relevant feature.

Jeff goes to town on VMware’s software quality – including the “Update 2” debacle – but that wasn’t the worst thing. The very worst thing that can happen on a virtualized platform is VMs breaking out of containment and running code on the host: since the host needs to access the VMs’ memory for snapshots, saving and migration, a VM that can run code on the host can impact all the other VMs. So CVE-2009-1244 – “a critical vulnerability in the virtual machine display function allows guest operating system users to execute arbitrary code on the host OS” – is very alarming reading.
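The arithmetic promised above, as a trivial sketch – the totals are as quoted from Jeff’s post; the per-patch averages and the ratio are just my own division:

```python
# Patch payloads over the period Jeff surveyed, as quoted above.
hyperv_patches, hyperv_total_mb = 26, 82     # Hyper-V Server 2008, delta patches
esxi_patches, esxi_total_mb = 13, 2700       # ESXi 3.5, each patch a full image

print(f"Hyper-V: ~{hyperv_total_mb / hyperv_patches:.0f} MB per patch")   # ~3 MB
print(f"ESXi:    ~{esxi_total_mb / esxi_patches:.0f} MB per patch")       # ~208 MB
print(f"ESXi payload is ~{esxi_total_mb / hyperv_total_mb:.0f}x larger")  # ~33x
```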

And that’s the thing – how can you have professional regard for a competitor who doesn’t meet the customer’s needs on security or reliability, and who cites things like disk space to justify costing customers far, far more money?

Comments

  • Anonymous
    January 01, 2003
    David, I did say that live migration was important; it's the feature most often thought of as "missing" in the first version of Hyper-V, and the most important addition in R2 according to most third parties. But seriously, do VMs break out of their confinement, VMware-style, if you have the wrong firmware on a disk controller? Those updates are few and far between. As for fault tolerance, in my view it's protecting against the wrong thing, in too restricted a way. If Windows is as unreliable as VMware want you to believe, then it is more likely to blue-screen than the hardware is to fail. I've just got this laptop back from having faulty memory replaced, and that memory caused Windows to blue-screen. So you wouldn't get a nice neat fail-over, but a blue screen on two nodes. An application which wasn't designed to be clustered in this way is more likely still to fail, and the failure gets replicated. If the app or guest OS needs patching, FT doesn't keep it up; only clustering at the application level can do that. And then the supported configurations are really restricted...   If a customer thinks FT is valuable, there are products which add it to Windows and to Hyper-V (Marathon have the best known).

  • Anonymous
    January 01, 2003
    The comment has been removed

  • Anonymous
    January 01, 2003
    David, point taken. But:

    • VMware rolled out an update which took servers down. Our record might not be perfect, but we've never done that.

    • VMware have had a vulnerability which breaks the most important rule of virtualization. We haven't had a security issue like that either.

    • In terms of total patches required, all the tracking sites show Microsoft need fewer and fix quicker. Yes, you are right that if you look at the virtualization stack you can live with "normal" patching, provided you don't have to take VMs down. However, if you are running Windows workloads and you need to reboot the Windows host OS, you've got to plan for downtime when you apply the self-same patch to the guests. And I agree people are more interested in the overall management question of how to keep everything patched, and of knowing the impact of patching any given OS, than in just being able to patch virtualization (or any directory, or file serving, or database).

  • Anonymous
    August 17, 2009
    I think you'll find vmotion / live migration are actually rather important because hardware/firmware needs patching, not just the OS. Oh and as for your definition of "good enough", VMware vSphere raised the bar with their fault tolerance which means a few clicks for a fault tolerant system instead of having to worry about setting up application clustering (even assuming the app can be clustered).  I think it's Hyper-V that still isn't "good enough" at the moment :)

  • Anonymous
    August 24, 2009
    The comment has been removed