Amazon has 5 times the compute power of its nearest 14 rivals combined – yeah right…

Just spotted a few tweets saying words to the effect of “AWS has five times the power of its nearest 14 rivals combined”. And it’s all validated and indisputable because it’s from a Gartner report.

So do I really have to believe that Amazon has datacenters that huge? I mean, we all know where they are, and we can use the satellite imagery and aerial photography in tools like Google Maps and Bing Maps to get a feel for how big these things are.

Well, it must therefore mean they are getting incredible server density into their average-sized (compared to their biggest competitors – Google and Microsoft, I mean) datacenters. I’m sure the 14th-placed rival is a fairly small outfit – I don’t know who they are – but to my mind that means Amazon is managing to fit somewhere between 5 and 14 times as many servers per square foot into its datacenters. As an ex-boss of mine used to say, “…do those numbers sound reasonable?”. I mean, are we really saying that where one operator manages to squeeze, say, 250,000 servers into a section of its datacenter, Amazon is squeezing somewhere between 1.25 million and 3.5 million into the same space? No, my ex-boss is right – those numbers just don’t sound reasonable…
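For what it’s worth, here’s that back-of-the-envelope arithmetic as a quick Python sketch. The 250,000-server figure is just the illustrative number from the paragraph above, not a real datacenter count.

```python
# Back-of-the-envelope check of the density claim above.
# The 250,000 figure is purely illustrative, taken from the hypothetical in the post.
rival_servers_per_section = 250_000   # what "one operator" squeezes into a section

for multiple in (5, 14):              # the range argued in the post
    servers = multiple * rival_servers_per_section
    print(f"{multiple}x density -> {servers:,} servers in the same floor space")

# 5x density  -> 1,250,000 servers in the same floor space
# 14x density -> 3,500,000 servers in the same floor space
```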

So I obviously had to investigate a little deeper. And it turns out that, according to Gartner’s analysis, AWS offers five times the utilized compute capacity of the other 14 cloud providers in the Gartner Magic Quadrant. Combined. One commentator at websitedevelopmentny.org posited that this is “in terms of total productive capacity - that is, resources seeing utilization. Implication: MSFT has lots of idle CPUs” (see https://websitedevelopmentny.org/blog/wievblog.php?id=5812).

Are lots of idle CPUs a measure of how bad a cloud service is, or how good it is? I guess that depends on the aims the cloud operator had when they provisioned that amount of physical resource in the first place. If their plan was that all the capacity would be used immediately, then it’s bad: it shows that their own assessment of what customers want and the customers’ own assessment of what they want are highly divergent. But having spare capacity does mean being in a position to satisfy customer demand. A cloud operator that is running at 99.9% utilisation is not in a great place really: it’s not in a position to respond to new demand. A cloud operator at, say, 70% utilisation has 30% spare capacity. So in my book, as long as capacity is purchased with the long-term aim of being agile and able to respond to customers’ demands for compute, then it’s a good thing. A really good thing. I mean a really, really, really good thing. One of the aims of cloud computing is to give the appearance of infinite resources. We all know resources are finite in reality. The closer a cloud operator is to 100% utilisation, the less it’s able to live up to one of the tenets of cloud computing. It really means they become “less cloudy”.
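To put some illustrative numbers on that (entirely made up, not from Gartner or any provider): a provider can have far more compute in use than a rival and still have far less headroom left for customers who suddenly need more.

```python
# Illustrative only: made-up figures, not real provider data.
# "utilized" is gross compute in use; "total" is provisioned physical capacity.
providers = {
    "Provider A": {"total": 1_000_000, "utilized": 950_000},  # big, nearly full
    "Provider B": {"total": 400_000,   "utilized": 280_000},  # smaller, 70% utilised
}

for name, p in providers.items():
    utilisation = p["utilized"] / p["total"]
    spare = p["total"] - p["utilized"]
    print(f"{name}: {p['utilized']:,} units in use "
          f"({utilisation:.0%} utilisation, {spare:,} units of headroom)")

# Provider A has roughly 3.4x the utilized compute of Provider B,
# yet Provider B has 2.4x the spare headroom to absorb new demand.
```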

So in my book, the accurate Gartner statement (“AWS offers five times the utilized compute capacity of the other 14 cloud providers in the Gartner Magic Quadrant”) is something to be celebrated, certainly by Microsoft, one of those 14 rivals. If you’ve entrusted some of your technology to the care of an operator and, let’s say, 12 months down the line you need some extra capacity but they can’t give it to you when you need it - that puts you in an awkward situation.

How about a headline like “14 of AWS’s rivals offer 5 times more spare compute capacity to their customers than Amazon” if we’re going to take things out of context from that Gartner report…

Planky - @plankytronixx

Comments

  • Anonymous
    September 22, 2014
    I don't think the logic in this article works.  'Gross compute utilization' is a measurement independent of a 'percent utilization'.  In other words, Gartner seems to be speaking to a gross measurement of what's in use, not a rate of what's in use.