Watt Matters in Energy Efficiency!

By Dileep Bhandarkar, Ph.D.
Distinguished Engineer
Global Foundation Services, Microsoft

I recently had the privilege to deliver a keynote speech titled “Watt Matters in Datacenters” at the Server Design Summit in Santa Clara, CA on December 1, 2010. My keynote focused on energy efficiency in datacenters and the need for holistic optimization of the server hardware and datacenter infrastructure. I have covered this topic in a previous blog but wanted to take this opportunity to discuss some of the work that we have been doing with our industry hardware partners to optimize server design for cloud computing.

As part of our commitment to driving efficiencies in our datacenters, we have been working actively with server OEMs, microprocessor manufacturers, disk drive vendors, memory suppliers, and other component suppliers to increase the efficiency of server designs. We share our detailed requirements, develop concept designs, and work with our partners on optimized designs that are then made available to the industry at large. We firmly believe that this strong partnership approach, combined with sharing our best practices, is the best way to advance the state of the datacenter industry and meet our cloud computing needs.

I also presented some opportunities that server designers should consider to further optimize for cloud computing, including:

  • Better Alignment with Datacenter Technologies
  • 480V 3-phase power supplies at the rack level
  • In-rack UPS instead of the current central UPS
  • Rightsizing of platforms for major workloads
  • Low Power DIMMs and Processors
  • Tiny CPU cores – Atom? Bobcat? ARM?
  • System on a Chip for lower platform power
  • Dynamic Power Capping (see the sketch after this list)
  • Designs for higher temperature operation
  • Rack level power and cooling
  • Enhanced support of virtualization
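
To make the dynamic power capping item above a bit more concrete, here is a minimal sketch of the kind of feedback loop involved. The read_power_watts() and set_cpu_power_limit() helpers are simulated stand-ins for whatever platform interface (a baseboard management controller, for example) actually exposes the power telemetry and the limit knob, and all wattage figures are assumed values chosen purely for illustration.

    # Minimal power-capping control loop (illustrative sketch only).
    # read_power_watts() and set_cpu_power_limit() simulate a real
    # platform interface such as a baseboard management controller.
    import random
    import time

    POWER_CAP_WATTS = 250            # assumed power budget for this server
    MIN_LIMIT, MAX_LIMIT = 80, 130   # assumed CPU package limit range (watts)
    STEP = 5                         # watts adjusted per control interval

    def read_power_watts(cpu_limit):
        # Stand-in for real telemetry: model server draw as a fixed
        # platform floor plus load-dependent CPU power.
        return 140 + cpu_limit * random.uniform(0.5, 1.0)

    def set_cpu_power_limit(watts):
        # Stand-in for programming a real CPU package power limit.
        print(f"CPU package limit set to {watts} W")

    def capping_loop(iterations=10, interval_s=0.1):
        limit = MAX_LIMIT
        for _ in range(iterations):
            draw = read_power_watts(limit)
            if draw > POWER_CAP_WATTS and limit > MIN_LIMIT:
                limit -= STEP        # over budget: throttle the CPU down
            elif draw < POWER_CAP_WATTS - STEP and limit < MAX_LIMIT:
                limit += STEP        # headroom available: relax the limit
            set_cpu_power_limit(limit)
            time.sleep(interval_s)

    capping_loop()

The same idea can be applied one level up, with a rack manager dividing a shared power budget among the servers beneath it.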

After the holidays, I will discuss these topics in greater detail in an upcoming white paper. There was a great deal of interest in what I shared at the Server Design Summit, so I have made my presentation available here.

Wishing you and yours a very happy holiday season! 

- Dileep

Find more information on our datacenter strategies on the GFS website at www.globalfoundationservices.com

Comments

  • Anonymous
    January 12, 2011
    Interesting graph.  A few comments:

    • The relative performance/watt degradation when going to the faster and higher-power CPUs depends on the actual workload run. If the workload utilizes the CPU heavily, the optimal performance/watt can shift to the 80W SKU. This also affects the perf/W/$.
    • Tiny CPU cores are beneficial if they are tightly integrated and the back-end hardware infrastructure is amortized across the maximum number of CPU cores. In other words, 6 loosely coupled dual-core Atom compute nodes are not as efficient as one 12-core Intel X567x compute node.
    • Another important item not mentioned is autonomic power management of the hardware. The more difficult it is for customers to enable the power management features, the less likely they are to use them. If a power management feature can be configured at POST time and then run automatically in the background, the system is more likely to be optimized when the customer uses it.
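
    To put the first point in back-of-the-envelope terms (all numbers below are made up purely for illustration, not measurements), a quick script comparing perf/W and perf/W/$ across hypothetical SKUs might look like this:

        # Back-of-the-envelope perf/W and perf/W/$ comparison.
        # All figures are invented purely for illustration.
        # "perf" is relative throughput on a fixed workload, "watts" is
        # total server power at that load, "price" is server cost in $.
        skus = {
            "60W SKU":  {"perf": 1.00, "watts": 220, "price": 2000},
            "80W SKU":  {"perf": 1.25, "watts": 250, "price": 2200},
            "130W SKU": {"perf": 1.40, "watts": 310, "price": 2800},
        }

        for name, s in skus.items():
            perf_per_watt = s["perf"] / s["watts"]
            perf_per_watt_per_dollar = perf_per_watt / s["price"]
            print(f"{name}: perf/W = {perf_per_watt:.4f}, "
                  f"perf/W/$ = {perf_per_watt_per_dollar:.2e}")

    With these particular made-up numbers the 80W SKU comes out ahead on perf/W, which is exactly the kind of shift under heavy utilization described above.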

  • Anonymous
    January 15, 2011
    Bob W., loved your final paragraph, very true. I would love for the servers I manage to automatically clock themselves down without me worrying about it. To actually have them sleep at prescribed hours would be interesting: setting an "alarm clock" to wake them back up when it is time for work, or having them wake up when a request is made via a network call from a user connecting in late. It seems there is more that can be done in this area, and it would save a lot of power worldwide.
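
    The "wake up when a request is made via a network call" idea is roughly what Wake-on-LAN provides today, and the "alarm clock" part is typically handled by the platform's RTC wake alarm (rtcwake on Linux, for example). As a minimal sketch of a Wake-on-LAN sender (the MAC address below is just a placeholder, not a real machine):

        # Minimal Wake-on-LAN sender: broadcasts the standard magic packet
        # (6 x 0xFF followed by the target MAC repeated 16 times) over UDP.
        import socket

        def send_wol(mac, broadcast="255.255.255.255", port=9):
            mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
            packet = b"\xff" * 6 + mac_bytes * 16
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(packet, (broadcast, port))

        # Placeholder MAC address for illustration only.
        send_wol("00:11:22:33:44:55")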