Changing Data Center Behavior Based on Chargeback Metrics

On July 8, 2008, 150 attendees joined the Microsoft-hosted National Data Center Energy Efficiency Strategy Workshop. The sidebar opposite summarizes the overall aims of this workshop.

During the workshop, I delivered a presentation on “Incenting the Right Behaviors in the Data Center.” If you want to see my presentation, you can review the content at https://www.energetics.com/datacenters08/pdfs/Belady_Microsoft.pdf.

And if you would like to see the response from industry observers, check out this link: https://www.networkworld.com/news/2008/070908-good-incentives-boost-data-center-energy.html?page=1.

The two main points in my presentation were:

  • Costs in the data center are proportional to power usage rather than space.
  • Power efficiency is more of a behavior problem than it is a technology problem.

I then went on to discuss the background to these claims.

Charging for Space

Historically, data centers have charged for space. As a consequence, organizations charged for space in the data center did everything they could to increase server compute density, cramming more processor cores, memory, and I/O into each U in the data center. Server manufacturers responded (as they occasionally have been known to do) by providing just that: servers that are space efficient when measuring processing power against rack space. The downside of these space-efficient designs was that power consumption in the racks increased significantly, making them much more difficult to cool.

Coping with Energy Cost Increases

In my Electronics Cooling article from 2007 (https://electronics-cooling.com/articles/2007/feb/a3/), I wrote about the fact that data center infrastructure and energy costs have increased substantially, to the point where they actually cost more than the IT they support. The article notes that a decade ago these costs were negligible and didn’t even show up on the radar relative to the IT costs. However, an inflection has occurred, and they have now become the primary cost drivers in the data center. Since publishing the article, this effect has been compounded even further by significant increases in the cost of energy. Who could have predicted just a year ago that energy costs would rise so sharply (although I did, seven years ago: https://www.greenm3.com/2008/03/christian-belad.html)? With oil spiking to $140 a barrel and electricity costs on the rise, the unpredictability in business costs is around energy, not space. So wouldn’t it make sense for businesses to incent their organizations for efficiency?

Data centers must charge customers in a way that more closely reflects the overall costs of running a data center.

Breaking Down Data Center Costs

A breakdown of US data center costs at Microsoft produces what Mike Manos calls the data center “PacMan” – a pie chart that bears a certain resemblance to the ghost-gobbling game I remember from my student days. You can see the “PacMan” chart on page four of my presentation.

What this chart shows is the following cost ratios:

Area                      Percentage
Land                      2%
Architectural             7%
Core and Shell Costs      9%
Mechanical/Electrical     82%

Analyzing the Figures

Analyzing this chart further, we see that over 80% of the costs for a data center scale with power consumption and less than 10% scale with space. So, on this basis, why the heck were we charging our customers for space? Our unambiguous conclusion was that our charging models were driving the wrong type of behavior. Basically, dense was dumb. We needed a charging model that reflected the costs that we experienced, and this charging model would then change the behavior of the users of IT in the data center.

Changing the Charging Model

In my presentation, I described how Microsoft now charges for data center services based on a function of kW used. If someone upgrades to a high-density blade server, they do not reduce their costs unless they also save power. This change created a significant shift in thinking among our customers, together with quite a bit of initial confusion, requiring us to answer the stock question “You’re charging for WHAT?” with “No, we’re charging for WATTS!”
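To make this concrete, here is a minimal sketch (in Python) comparing a space-based and a power-based chargeback for a hypothetical blade consolidation. The rates, server counts, and wattages below are invented for illustration; the only thing stated above is that charges are a function of kW used, not any specific formula.

```python
# Hypothetical illustration of space-based vs. power-based chargeback.
# All rates and server figures are made-up examples for illustration only.

def space_based_charge(rack_units_used, rate_per_u_month=100.0):
    """Monthly charge proportional to rack space consumed."""
    return rack_units_used * rate_per_u_month

def power_based_charge(kw_used, rate_per_kw_month=200.0):
    """Monthly charge proportional to power drawn."""
    return kw_used * rate_per_kw_month

# Scenario: consolidating ten 1U servers (300 W each) onto a blade chassis
# that packs the same workload into 7U but draws 3.5 kW.
before = {"rack_units": 10, "kw": 10 * 0.300}   # 10U, 3.0 kW
after  = {"rack_units": 7,  "kw": 3.5}          # 7U, 3.5 kW

for label, cfg in (("before", before), ("after", after)):
    print(f"{label}: space-based ${space_based_charge(cfg['rack_units']):,.0f}/mo, "
          f"power-based ${power_based_charge(cfg['kw']):,.0f}/mo")

# Under space-based charging, the denser blades look cheaper (7U vs. 10U)
# even though they draw more power; under power-based charging, the bill
# only goes down if the consolidation also saves watts.
```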

Recording the Changes

From our perspective, our charging model is now more closely aligned with our costs. When our customers consider the power they use rather than the space they occupy, power efficiency becomes their guiding light. This new charging model has already resulted in the following changes:

  • Optimizing the data center design
    • Implement best practices to increase power efficiency.
    • Adopt newer, more power efficient technologies.
    • Optimize code for reduced load on hard disks and processors.
    • Engineer the data center to reduce power consumption.
  • Sizing equipment correctly
    • Drive to eliminate stranded compute by:
      • Increasing utilization through virtualization and power management technologies.
      • Selecting servers based on application throughput per watt (see the sketch after this list).
      • Right-sizing the number of processor cores and memory chips for the application’s needs.
    • Drive to eliminate stranded power and cooling by ensuring that the total capacity of the data center is used. Another name for this is data center utilization: you had better be using all of your power and cooling capacity before you build your next data center. Otherwise, why did you provision the extra power or cooling capacity in the first place? These are all costs you didn’t need.
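As a hedged illustration of the “application throughput per watt” selection criterion above, the following sketch ranks a few hypothetical servers by work delivered per watt. The model names, throughput figures, and power draws are made up for illustration only.

```python
# Hypothetical sketch of selecting servers by application throughput per watt.
# Server names, throughput numbers, and wattages are invented for illustration.

servers = [
    # (model, application throughput in transactions/sec, power draw in watts)
    ("dense-blade",  9000, 450),
    ("standard-1u",  6000, 280),
    ("low-power-1u", 5000, 200),
]

def throughput_per_watt(throughput, watts):
    """Application work delivered per watt of power drawn."""
    return throughput / watts

ranked = sorted(servers, key=lambda s: throughput_per_watt(s[1], s[2]), reverse=True)

for model, tps, watts in ranked:
    print(f"{model}: {throughput_per_watt(tps, watts):.1f} tps/W "
          f"({tps} tps at {watts} W)")

# Once power, not rack space, is the metric, the densest server is not
# necessarily the winner; the top of this ranking is.
```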

I will be discussing the concepts of stranded compute, power, and cooling in greater detail in later posts.

Moving the Goalposts

I think it will take quite a bit of time for manufacturers to realize that the goalposts have moved. At present, it is quite difficult to get an answer to questions such as “What is the processing capacity of your servers per kilowatt of electricity used?” However, I do believe this change will come, and it will drive rapid innovation along an entirely different vector, where system builders compete to create the most energy-efficient designs. The benchmarking body SPEC has already started down this path with its SPECpower benchmark, but the same needs to be done with real applications.

Summarizing the Vision

I would like to end with a quote from my friend James Hamilton, who wrote in his blog about a forum he participated in around the time of the EPA workshop.

“Our conclusion from the session was that power savings of nearly 4x were both possible and affordable using only current technology.”

James’s comment about his session is exactly right: the technology is already there; we just need to grab it. This supports my initial point that what we need is a change in behavior. Today’s charging models, which align costs with the space a customer occupies in the data center, provide no motivation to save power. However, with the right incentives, power-based charging could drive a new and dazzling era of change in the computing industry. We should do what we can to help achieve this vision.

Author

Christian Belady, P.E., Principal Power and Cooling Architect
