After you exhaust the quick-fix methods of reducing energy consumption, consider other longer-term cost-saving measures. In this section, we discuss some of the most promising opportunities.
Calculate performance metrics. Power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE) are two common metrics used to characterize a data center’s energy consumption. The Green Grid, a global consortium of organizations collaborating to improve the resource efficiency of information technology and data centers, developed both metrics. You can find more information on calculation methods in the white paper The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE.
In general, DCiE is the ratio of IT equipment power to the total power of the data center, and PUE is its reciprocal (the ratio of total data center power to IT equipment power). Variations of PUE have been proposed to help compare data center performance during specific stages of operation, but they haven't been widely adopted. Other performance metrics that take server productivity into account include The Green Grid's data center energy productivity metric and the performance-per-watt metric from JouleX (now part of Cisco).
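As a quick illustration, the following Python sketch computes both metrics from metered power readings; the 1,200 kW and 750 kW figures are hypothetical examples, not values from the white paper:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data center infrastructure efficiency: the reciprocal of PUE."""
    return it_equipment_kw / total_facility_kw

# Hypothetical meter readings: 1,200 kW for the whole facility,
# 750 kW of which reaches the IT equipment.
print(f"PUE:  {pue(1200, 750):.2f}")   # 1.60
print(f"DCiE: {dcie(1200, 750):.1%}")  # 62.5%
```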
Install infrastructure-management software. These tools can simplify the benchmarking and commissioning process, and they can continuously monitor system performance via network sensors. Their real-time benchmarking can notify users of any system failures and validate efficiency improvements. Additionally, the data the infrastructure-management software collects will help improve the effectiveness of other measures, including airflow management.
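To make the idea concrete, here's a minimal sketch of the kind of real-time benchmarking these tools perform; the sensor readings and the simple drift threshold are stand-ins for a real product's logic, not any vendor's implementation:

```python
import statistics

# Hypothetical rolling window of PUE readings reported by network sensors.
pue_history = [1.58, 1.61, 1.57, 1.60, 1.59]

def check_reading(pue_now: float, history: list[float],
                  tolerance: float = 0.10) -> None:
    """Flag readings that drift well outside the recent baseline."""
    baseline = statistics.mean(history)
    if pue_now > baseline * (1 + tolerance):
        print(f"ALERT: PUE {pue_now:.2f} exceeds baseline "
              f"{baseline:.2f} by more than {tolerance:.0%}")
    history.append(pue_now)

check_reading(1.85, pue_history)  # triggers the alert
```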
Create a facility energy model and complete an analysis. A growing number of tools are available for data center operators to perform energy-modeling analysis at different scales and for different systems. Before a new data center facility is built, some engineering analysis is completed, but for many facilities that's the first and last analysis. By modeling energy use throughout facility operation and before any expansion or upgrade, you can keep energy consumption optimized. For a review of available energy-modeling tools for data centers, refer to the IEEE article Data Center Energy Consumption Modeling: A Survey (PDF).
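Even a back-of-the-envelope model can show what's at stake before you invest in full modeling tools. All of the inputs below (IT load, PUE, and tariff) are hypothetical:

```python
# Back-of-the-envelope annual energy model (all inputs hypothetical).
it_load_kw = 750       # average IT equipment load
facility_pue = 1.6     # facility PUE from benchmarking
tariff = 0.12          # blended electricity rate, $/kWh
hours_per_year = 8760

total_kwh = it_load_kw * facility_pue * hours_per_year
print(f"Annual consumption: {total_kwh:,.0f} kWh")
print(f"Annual cost:        ${total_kwh * tariff:,.0f}")

# Rerunning the model with an improved PUE shows the value of upgrades.
improved_kwh = it_load_kw * 1.4 * hours_per_year
print(f"Savings at PUE 1.4: ${(total_kwh - improved_kwh) * tariff:,.0f}")
```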
Efficient IT equipment
Buy energy-efficient servers. Standards for server efficiency are still in their infancy, so it can be difficult to know which servers will use less energy before you buy. Many blade servers (thin, minimalist hardware) use low-power processors to save on energy and hardware requirements. Microsoft engineers report that low-power processors offer the greatest value for the cost, measured as computing performance per dollar per watt. The Standard Performance Evaluation Corporation (SPEC) website contains a list of Published SPEC Benchmark Results of energy use across a range of server models.
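As a rough illustration of the performance-per-dollar-per-watt comparison, the sketch below ranks two hypothetical servers; real comparisons should use published SPEC results and vendor specifications:

```python
# Rank candidate servers by computing performance per dollar per watt.
# All figures are hypothetical placeholders.
servers = [
    {"model": "Server A", "ops_per_sec": 900_000, "price_usd": 6_000, "watts": 350},
    {"model": "Server B", "ops_per_sec": 700_000, "price_usd": 4_000, "watts": 220},
]

def value_metric(s: dict) -> float:
    """Computing performance per dollar per watt."""
    return s["ops_per_sec"] / s["price_usd"] / s["watts"]

for s in sorted(servers, key=value_metric, reverse=True):
    print(f'{s["model"]}: {value_metric(s):.3f} ops/s per $ per W')
```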
Look to professional consortiums or trade organizations for recommendations and comparisons of server models according to energy use. For example, The Green Grid’s Library and Tools web page provides content and tools to assist in benchmarking, evaluating, and comparing equipment and facility performance. And for larger facilities, unbranded or white-box servers are a popular option. These servers let you tailor their performance to fit data center computing applications.
Spin fewer disks. Implementing a massive array of idle disks (MAID) system can save up to 85% of storage power and cooling costs, according to manufacturers' claims. Typically, data is stored on hard disk drives (HDDs) that must remain spinning (and therefore constantly consuming energy) for their information to be retrieved. MAID systems, though, catalog information according to how often it's retrieved and place seldom-accessed data on disks that are spun down or left idle until the data is needed. The lower operating costs and reduced energy consumption of MAID systems come at the expense of higher latency and limited redundancy: The HDDs must spin up again before their data is accessible, so if a MAID system is used for backup data storage, the data it contains may not be immediately available when another server fails. Tape storage has similar drawbacks but, like MAID, can save data centers energy and money on long-term storage.
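The sketch below illustrates the core MAID idea of spinning down disks whose data hasn't been accessed recently; the Disk class and the 30-minute threshold are illustrative assumptions, not any vendor's implementation:

```python
import time

SPIN_DOWN_AFTER_S = 30 * 60  # park disks idle for 30 minutes (hypothetical policy)

class Disk:
    def __init__(self, disk_id: str):
        self.disk_id = disk_id
        self.spinning = True
        self.last_access = time.time()

    def read(self, block: int) -> None:
        if not self.spinning:
            self.spin_up()            # the latency penalty is paid here
        self.last_access = time.time()
        # ... actual read would happen here ...

    def spin_up(self) -> None:
        print(f"{self.disk_id}: spinning up (adds seconds of latency)")
        self.spinning = True

def idle_sweep(disks: list[Disk]) -> None:
    """Periodic task: spin down disks whose data hasn't been touched recently."""
    now = time.time()
    for d in disks:
        if d.spinning and now - d.last_access > SPIN_DOWN_AFTER_S:
            d.spinning = False        # stop consuming spindle power
```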
Replace spinning disks with solid-state drives. Although experts disagree about the exact amount of energy savings associated with flash-based solid-state drives (SSDs), there’s a general consensus that SSD energy consumption is less than that of HDDs. SSDs also offer a faster read time, delivering energy savings for storage systems without the latency and redundancy limitations of MAID storage. However, because SSDs only hold a competitive advantage over HDDs in specific server applications, you should carefully consider their attributes relative to your computing needs before making the switch.
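A rough annual comparison makes the trade-off concrete; the per-drive wattages below are hypothetical placeholders, so substitute datasheet values for your own hardware:

```python
# Rough annual energy comparison for a 100-drive array (hypothetical wattages).
HOURS_PER_YEAR = 8760
drives = 100
hdd_active_w, ssd_active_w = 8.0, 3.0

hdd_kwh = drives * hdd_active_w * HOURS_PER_YEAR / 1000
ssd_kwh = drives * ssd_active_w * HOURS_PER_YEAR / 1000
print(f"HDD array: {hdd_kwh:,.0f} kWh/yr, SSD array: {ssd_kwh:,.0f} kWh/yr")
print(f"Savings:   {hdd_kwh - ssd_kwh:,.0f} kWh/yr")
```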
Separate hot and cold air streams. Most data centers suffer from poor airflow management, which has two detrimental effects: First, the mixed air recirculates around and above servers, warming as it rises and making the higher servers less reliable. Second, data center operators must set supply-air temperatures lower and airflows higher to compensate for air mixing, thus wasting energy.
Setting up servers in alternating hot and cold aisles is one of the most effective ways to manage airflow (Figure 3). This allows delivery of cold air to the fronts of the servers, while waste heat concentrates and collects behind the racks. As part of this configuration, operators can close off gaps within and between server racks to minimize the flow and mixing of air between hot and cold aisles. LBNL researchers found that with hot-cold isolation, air-conditioner fans could maintain proper temperature while operating at a lower speed, resulting in 75% energy savings for the fans alone.
Figure 3: How to set up a hot aisle–cold aisle configuration
The hot aisle–cold aisle concept confines hot and cold air to separate aisles. Limiting the mixing of hot and cold air means it takes less energy to cool the servers.
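The large fan savings LBNL observed follow from the fan affinity laws, under which fan power scales roughly with the cube of fan speed. A quick check shows why a modest speed reduction cuts power so sharply:

```python
# Fan affinity law: power scales roughly with the cube of fan speed.
def fan_power_fraction(speed_fraction: float) -> float:
    return speed_fraction ** 3

# Running fans at about 63% speed cuts fan power by roughly 75%.
speed = 0.63
print(f"Power at {speed:.0%} speed: {fan_power_fraction(speed):.0%} of full power")
```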
Reduce bypass-airflow losses. In data centers that have poor airflow-management strategies or significant air leakage, bypass airflow can occur when cold conditioned air cycles back to the computer room air conditioner before it cools any equipment, resulting in wasted energy. In fact, an LBNL study found that up to 60% of the total cold air supply can be lost via bypass airflow. The main causes of bypass-airflow losses are unsealed cable cutout openings and poorly placed perforated tiles in hot aisles. You can eliminate this type of problem by identifying bypasses during a study of the cooling system’s airflow patterns. A white paper by Schneider Electric, How to Fix Hot Spots in the Data Center (PDF), provides guidance and links to additional resources related to airflow analysis and troubleshooting for cooling systems.
Bring in more fresh air. When the temperature and humidity outside are mild, economizers can save energy by bringing in cool outside air rather than using refrigeration or other cooling equipment to cool the building’s return air. Economizers have two benefits: They have lower capital costs than many conventional systems, and they reduce energy consumption by making use of free cooling when ambient outside temperatures are sufficiently low. In northern climates, this may be the case the majority of the year. It’s important to put economizers on a recurring maintenance schedule to ensure that they remain operational. Remember to pay attention to the dampers: If they’re stuck open, they can go unnoticed for a long time and increase HVAC energy consumption.
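Conceptually, the economizer decision is a simple comparison of outside conditions against the return air. The setpoints below are hypothetical, and real controllers also weigh humidity, enthalpy, and equipment limits:

```python
# Simplified air-side economizer decision (setpoints are hypothetical).
def economizer_enabled(outside_temp_c: float, return_air_temp_c: float,
                       outside_dewpoint_c: float) -> bool:
    cool_enough = outside_temp_c < return_air_temp_c - 2.0  # useful free cooling
    dry_enough = outside_dewpoint_c < 15.0                  # limit moisture risk
    return cool_enough and dry_enough

print(economizer_enabled(outside_temp_c=12.0, return_air_temp_c=30.0,
                         outside_dewpoint_c=8.0))  # True: use outside air
```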
Use evaporative cooling. When conditions are right, you can use water that’s been evaporatively cooled in a cooling tower to cool the data center facility, rather than using a refrigerant-based cooling system. In northern climates, the opportunity to use this so-called “free cooling” with a tower/coil approach (also referred to as a water-side economizer) can exceed 75% of the total annual operating hours. In southern climates, free cooling may only be available during 20% of operating hours. While this type of economizer is operating, the free cooling that it provides can reduce the energy consumption of a chilled-water plant by up to 75%. The National Snow and Ice Data Center (NSIDC) provides a successful case study—NSIDC Green Data Center: Overview—for the use of direct-indirect evaporative cooling in a data center.
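Combining the two 75% figures from this paragraph with a hypothetical chiller-plant load gives a rough sense of the savings available in a northern climate:

```python
# Rough annual savings from a water-side economizer.
chiller_kw = 400              # chiller plant draw when mechanically cooling (hypothetical)
free_cooling_fraction = 0.75  # share of annual hours with free cooling (northern climate)
savings_while_free = 0.75     # chilled-water plant energy reduction during free cooling

annual_kwh_saved = chiller_kw * 8760 * free_cooling_fraction * savings_while_free
print(f"Estimated savings: {annual_kwh_saved:,.0f} kWh/yr")
```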
Upgrade your chiller. Many general best practices for chilled-water systems (including the use of centrifugal and screw chillers to optimize chilled-water temperatures) also apply to cooling systems for data centers. If your facility isn’t already taking advantage of these techniques, consult an HVAC expert—you may find that highly cost-effective savings are available.
Install ultrasonic humidification. Ultrasonic humidifiers use less energy than other humidification technologies because they don’t boil the water or lose hot water when flushing the reservoir. Additionally, the cool mist they emit absorbs energy from the supply air and causes a secondary cooling effect. This is particularly effective in a data center application with concurrent humidification and cooling requirements. One US Department of Energy case study demonstrated that when a data center’s humidification system was retrofitted with an ultrasonic humidifier, it reduced humidification energy use by 96%. However, ultrasonic humidification systems in data centers generally require pure water, so it’s important to factor in the cost and energy consumption of a high-quality water filtration system, such as one that uses reverse osmosis. If a water filter isn’t used, a thin layer of minerals can build up on server components and short out the electronics. For an example of a real incentivized project’s cost savings, see Ryan Hammond’s presentation—SMUD Custom Incentive Program (PDF, slide 14)—for the Emerging Technologies Coordinating Council quarterly meeting.
Cool server cabinets directly. Although some facility managers have a reflexive aversion to having water near their computer rooms, some cooling systems bring water to the base of the server cabinets or racks. This practice cools the cabinets directly, which is more efficient than cooling the entire room (Figure 4). Many vendors offer direct-cooled server racks, and several industry observers have speculated that direct-cooling techniques will dominate the future of heat management in data centers. This is typically a new-construction measure.
Figure 4: Direct-cooled server racks increase cooling efficiency
Rather than cooling the servers indirectly by cooling the entire room, direct-cooled server racks circulate cool liquid below the server cabinet—a more efficient approach.
Give the servers a bath. Liquid-immersion server cooling is another direct-cooling approach, albeit a relatively new technique. Thanks to early demonstrations that have yielded exceptional energy savings and system performance, the approach is quickly gaining interest. Liquid-immersion cooling is usually a new-construction measure, and it's rarely used outside hyperscale environments, where its significant energy-savings potential keeps it under active development.
One approach promoted by Green Revolution Cooling (GRC), a leader in immersion cooling for data centers, submerges high-performance blade servers in a vat of inexpensive nonconductive mineral oil held at a specific temperature and slowly circulated around the servers. This technique saves energy for two reasons: First, the mineral oil holds more than 1,000 times as much heat per unit volume as air, resulting in improved heat transfer. Second, the oil pulls heat directly off the electrical components instead of just removing heat from the air around the server. These factors greatly improve cooling efficiency, and they allow the working fluid to operate at a warmer temperature than would otherwise be possible with air cooling. A 2014 Submersion Cooling Evaluation (PDF) from Pacific Gas and Electric Co.'s Emerging Technologies Program found that GRC's system was able to yield more than 80% savings in energy consumption and peak demand for cooling.
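To see where the 1,000-times figure comes from, compare volumetric heat capacities (density times specific heat). The property values below are approximate textbook numbers, and exact properties vary by oil formulation:

```python
# Volumetric heat capacity (J per cubic meter per kelvin) = density * specific heat.
air_vol_cp = 1.2 * 1005   # ~1.2 kg/m^3 * ~1,005 J/(kg*K)
oil_vol_cp = 850 * 1670   # ~850 kg/m^3 * ~1,670 J/(kg*K)

ratio = oil_vol_cp / air_vol_cp
print(f"Mineral oil stores ~{ratio:,.0f}x more heat per unit volume than air")
# -> on the order of 1,200x, consistent with the "more than 1,000 times" figure
```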
Let servers heat the building. The National Renewable Energy Laboratory (NREL) uses a warm-water liquid-heat-recovery approach to cool its Peregrine supercomputer and simultaneously heat the building. Sealed dry-disconnect heat pipes circulate water past the cores, then capture and reuse the recovered waste heat as the primary heat source for the data center, offices, and laboratory space. NREL also uses the recovered waste heat to condition ventilation makeup air. NREL estimates that its liquid-based cooling technique saves $1 million annually in avoided energy costs.