

What’s the Upside to a Downturn? Recessions Heighten Focus on Efficiency!

 By Christian Belady

 

What’s the upside to a downturn? As the economy shrinks and budget and program cutting expands throughout our industry, how can data center facility and IT infrastructure teams identify opportunities that create value for their companies? In other words, how do we emerge stronger from a recession? For most business services professionals (indeed, for people in any organization), acknowledging inefficiencies is the first step toward taking action, and with action comes insight, evidence, and real options. Like any well-managed business, we routinely check our assumptions and planning needs against our assessment of the economic environment. As part of this process, we look at many scenarios and options to improve efficiencies, reduce costs, and increase the return on our investments.

 

We have spoken at many conferences in the past couple of years and shared our experiences and ideas on how this industry could collectively improve to become more efficient, in part by reducing our carbon footprint. The uncertainty presented in our industry today – regarding the environment, the economy, and how people react to change – reconfirms the importance of developing a data center strategy that can react to unpredictable and dynamic business needs. In this blog we would like to share some of our experiences over the past couple years and the lessons we’ve learned about how to measure and increase efficiency.

 

We’d also like to turn up the dial on our invitation to the rest of the industry to work together with us in these areas. An important example involves our belief that data centers could operate at much higher temperatures, which would eliminate much of the need for expensive, energy-inefficient cooling equipment. To that end, we urge all companies to encourage ASHRAE to open its recommended operating temperature ranges further, beyond the adjustments it recently made. Additionally, if we all work toward recommending improvements to hardware specifications, server manufacturers could increase the efficiency of their products. With so much at stake, both in helping our companies through difficult financial times and in protecting our global environment, it’s time to work together. We should be striving to run more efficiently as an industry, not just as separate corporations.

 

Sharing Microsoft’s Energy Efficiency Best Practices for Data Center Operations

 

So what are some of the elements in our efficiency program that have reduced our needs? Here are some examples:

 

1) To begin with, we applied our own best practices on energy efficiency that we published about a year ago. We consider all of these foundational to improving the efficiency of our data center operations. They included:

1. Engineer the data center for cost and energy efficiency

2. Optimize the design to assess multiple factors

3. Optimize provisioning for maximum efficiency and productivity

4. Monitor and control data center performance in real time

5. Make data center operational excellence part of organizational culture

6. Measure power usage effectiveness (PUE)

7. Use temperature control and airflow distribution

8. Eliminate the mixing of hot and cold air

9. Use effective air-side or water-side economizers

10. Share and learn from industry partners
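To make practice 6 concrete: PUE is simply total facility power divided by IT equipment power, where 1.0 is the theoretical ideal. A minimal sketch in Python (the meter readings are hypothetical, not measurements from our facilities):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (no overhead for cooling, power
    distribution losses, lighting, etc.); lower is better.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw


# Hypothetical readings: 1,500 kW at the utility meter, 1,000 kW at the IT load.
print(round(pue(1500.0, 1000.0), 2))  # 1.5
```

Tracking this ratio over time, rather than as a one-off audit, is what makes practice 4 (real-time monitoring) pay off.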

 

2) We also invested in a team led by computer industry veteran Dr. Dileep Bhandarkar (a distinguished engineer in our organization), to optimize our server performance, cost, and power. These technologists are turning over every stone to squeeze efficiency out of our hardware. The focus has been on working with hardware suppliers to increase the efficiency of power supplies and right-size the system, including the processor, memory, and storage. More importantly, the team has focused on selecting hardware based not only on total cost of ownership but also on the best performance per watt for the particular application. As a result we have seen a 13% drop in average server power in operations, driven by an even steeper drop in power for the servers we added in the past year. During that same period, interestingly, ASHRAE power curves actually showed an increase in server power by 4%.
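The selection criterion the team uses can be sketched as ranking candidate hardware by performance per watt for the target workload, alongside total cost of ownership. The server names and figures below are entirely hypothetical, for illustration only:

```python
# Sketch of ranking candidate servers by performance per watt for a given
# workload. All names and numbers are made-up illustrations, not real SKUs.
servers = [
    {"name": "A", "perf_units": 900,  "watts": 300, "tco_usd": 5200},
    {"name": "B", "perf_units": 700,  "watts": 200, "tco_usd": 4800},
    {"name": "C", "perf_units": 1100, "watts": 420, "tco_usd": 6000},
]

for s in servers:
    s["perf_per_watt"] = s["perf_units"] / s["watts"]

# The raw performance leader (C) is not the efficiency leader (B).
best = max(servers, key=lambda s: s["perf_per_watt"])
print(best["name"], round(best["perf_per_watt"], 2))  # B 3.5
```

The point of the exercise: the fastest box is often not the most efficient one for a given application, so the ranking has to be done per workload.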

 

3) But perhaps the biggest driver behind the elements above was changing our internal chargeback model about 1.5 years ago, from charging for space to charging for power allocated and consumed. We first presented this approach to the EPA in July 2008. It helped reduce both energy use and carbon emissions. We later blogged about this and about how efficiency is not only a technology problem but perhaps even more a behavior problem. If you are interested in more details on how we do our chargeback models, visit our Power of Software blog on this topic.
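The incentive mechanism is simple to sketch: each internal tenant is billed in proportion to the power it has allocated, not the floor space it occupies, so shrinking your power footprint directly shrinks your bill. The rate and allocations below are invented for illustration; they are not our actual chargeback figures:

```python
# Sketch of a power-based chargeback: tenants pay for allocated power,
# not floor space. The rate and team allocations are hypothetical.
RATE_PER_KW_MONTH = 150.0  # assumed $/kW/month, for illustration only

allocations_kw = {
    "team_web": 40.0,
    "team_search": 120.0,
    "team_mail": 25.0,
}

invoices = {team: kw * RATE_PER_KW_MONTH for team, kw in allocations_kw.items()}
for team, amount in sorted(invoices.items()):
    print(f"{team}: ${amount:,.2f}")
```

Under a space-based model, a team that halves its power draw sees no change in its bill; under this model it halves its bill, which is what turns efficiency from a facilities concern into every team’s concern.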

 

4) Also, it’s in our DNA to use software—especially Microsoft software—to achieve our goals. Virtualization technology such as Hyper-V in Windows Server, along with our new Windows Azure platform, is being used to reduce energy consumption, and we use Microsoft System Center management products to improve efficiency throughout our operations. The purpose of this blog isn’t to promote these products, and we realize many data centers are built on other platforms. But the point is that smart use of software can give you a lot more bang for your hardware buck, and at the same time help you make significant gains in reducing your carbon footprint.
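The savings from virtualization come largely from consolidation: many lightly loaded physical servers become virtual machines on far fewer hosts. A back-of-the-envelope estimate, with purely illustrative numbers (real consolidation ratios depend heavily on workload mix and headroom policy):

```python
# Back-of-the-envelope consolidation estimate. All figures are assumptions
# for illustration, not measured data from any real deployment.
import math

physical_servers = 100
watts_per_server = 250.0
vms_per_host = 10          # assumed consolidation ratio

hosts_needed = math.ceil(physical_servers / vms_per_host)
before_kw = physical_servers * watts_per_server / 1000
after_kw = hosts_needed * watts_per_server / 1000

print(f"{before_kw:.1f} kW -> {after_kw:.1f} kW "
      f"({100 * (1 - after_kw / before_kw):.0f}% reduction)")
```

In practice the consolidated hosts run hotter than the idle boxes they replace, so the real saving is smaller than this naive arithmetic suggests, but the direction of the effect is the same.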

 

So our key takeaways from this experience are:

 

1) All of the best practices and technology we are pursuing help improve efficiency and reduce carbon footprint, but only if adoption occurs. As we have seen within the industry, adoption has been slow. However, with the right incentives, adoption can and will happen.

 

2) Our chargeback models provided incentives that improved efficiency far beyond our expectations and drove the total megawatt needs of our operations much lower than projected.

 

3) New software solutions are enabling data center teams to make dramatic efficiency improvements, primarily in the areas of virtualization, automation, and streamlined management functions. Moving to new software may be a tough sell in this economic climate, but the right products will make the purchases well worth the investment.

 

We feel it’s important for us to openly share information and best practices around energy efficiency because we believe the data center industry as a whole needs to work together in order to make the dramatic gains needed to make a difference for our companies and the planet. We invite other data center teams to explore our best practices and approaches, enhance them with your own, and apply them in your situation as you see fit.

 

We certainly don’t pretend to have all the answers, and we’re very interested in considering approaches that have worked for other companies, or ideas that may bear fruit in the future. The data center industry is an exciting place to work right now because there is so much focus on what we do and so much opportunity for our work to make the world a better place—whether that’s by reducing our carbon footprint and the amount of water we use, or by enabling further advances in online services that take our societies to new levels of collaboration and opportunities for all.

 

Please join us in sharing information about what your company or team is doing to increase efficiency, or comment on the opinions we’ve expressed here, so that we can engage in a dialogue that benefits us all.

 

Thank you, in advance, for helping to advance the industry as a whole.

 

Also, be sure to visit our team’s blog in a couple of weeks when Daniel Costello, director of Research & Engineering, will be sharing more information on how we are advancing our efficiency best practices within our Generation 4 Modular Data Center plans.

 

/cb

 

Christian Belady, power and cooling architect, Global Foundation Services, Microsoft

Comments

  • Anonymous
    February 06, 2009
    Christian, Well said! You and Microsoft are constantly challenging the institutionalized paradigms of today's data center design, operations, and especially IT hardware/Software composition. This will only help to make all of us better at what we do. We too believe that software has a significant role to play in the very near future as we seek to maximize the utilization rates of our IT assets. As a class these are typically running in the 5 to 10% utilization rate which if they were employees or industrial manufacturing machines would be subject to immediate reductions to achieve a 90% utilization rate. Let's see the industry drive towards the objective of increased productivity (how about starting with a measurement of IT productivity) from our already deployed IT assets. Until we have a real DCeP tool we believe the CUPS (compute units per second) model is an easy to use proxy to help IT and facility managers drive towards improved productivity. http://www.emerson.com/edc/docs/EnergyLogicMetricPaper.pdf As always - we appreciate your fine efforts in this area. JP

  • Anonymous
    February 17, 2009
    Christian, You mention wanting to operate data centers at higher temperatures than is the common practice today (even beyond the slight ASHRAE changes last year).  What temperature would you like to see for data centers?  Are there different short-term and long-term maximum ambient temperatures? --kb