News

Ten Tips to Make a Legacy Data Center More Energy Efficient
February 02, 2009

By Staff
Appeared in Building Operating Management/FacilitiesNet

Many companies are aggressively pursuing energy-saving strategies, coalescing around a new marketplace paradigm of going green and buying and selling green, a message that often comes from the boardroom. Until recently, however, data centers remained a special case. The typical mission critical facility is a large consumer of energy, representing the lion's share of facility operating costs, but reliability and availability requirements have hampered efforts to improve energy efficiency.

In reality, these goals are not mutually exclusive. In fact, as power densities push the limits of what cooling technologies can support, energy efficiency can now be a first-step solution.

Many data center energy-efficiency strategies are easiest to implement in new construction, where they have the potential to substantially reduce the total cost of ownership. But facility executives responsible for a 10- to 15-year-old data center facility can also win in the efficiency game by pursuing a variety of energy-saving techniques. Doing so also positions facility executives as valued partners to top management in meeting the company's wider goal of promoting sustainability.

Because data centers run 24/7/365, even small improvements can achieve significant energy savings. Substantial capital commitments or massive budgets are not required, and modest investments can often be recouped quickly. The following 10 items make up a starting point, concentrating on systems, equipment and controls that are common to legacy data center operations.

1. Maintain Underfloor Pressure. To begin, seal all unnecessary openings in the raised floor. Common uncontrolled openings include structured cabling cutouts underneath cabinets, openings at the building columns and gaps at the perimeter building envelope. Properly sealing these areas will make it easier to maintain underfloor air pressure and reduce the strain on the mechanical systems, while conserving the cold supply air for its intended use: cooling the IT equipment.
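
As a rough, back-of-the-envelope illustration (not from the article), the leakage through an unsealed opening can be estimated with the standard orifice relation for air at standard density, in which velocity in feet per minute is 4005 times the square root of the pressure difference in inches of water column. The opening size, underfloor pressure and discharge coefficient below are assumed values:

    import math

    def bypass_cfm(area_sqft, dp_inwc, discharge_coeff=0.6):
        """Estimate air leakage (CFM) through a raised-floor opening.

        Uses the standard-air relation V = 4005 * sqrt(dP), with dP in
        inches of water column, derated by an assumed discharge
        coefficient for a sharp-edged opening.
        """
        velocity_fpm = 4005.0 * math.sqrt(dp_inwc)
        return discharge_coeff * area_sqft * velocity_fpm

    # Hypothetical 6 in. x 12 in. cable cutout (0.5 sq ft) at a typical
    # 0.05 in. w.c. underfloor pressure:
    print(f"{bypass_cfm(0.5, 0.05):.0f} CFM lost")  # roughly 270 CFM

Even at these modest assumed figures, a handful of unsealed cutouts can bleed off more air than a perforated tile delivers, which is why sealing comes first.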

2. Properly Implement Hot Aisle/Cold Aisle Concepts. In a data center's hot aisle/cold aisle configuration, cold supply air from the underfloor plenum is brought into the cold aisle at the front of each server rack via perforated floor tiles. These cold aisles should be kept separate from the hot discharge air in the hot aisle at the back side of each rack.

To optimize performance, the proper quantity of perforated tiles must be placed only in the cold aisles. It is also important to provide a direct path for return air from above each hot aisle to the computer room air conditioning (CRAC) units. Proper tile placement, coupled with sealing openings in the raised floor, will keep the hot rack discharge air from mixing with the cold supply air entering the racks. It is not advisable to mix perforated tiles with varied percentages of free area in an attempt to generate higher supply airflow rates. In fact, too much flow (typically from grate tiles) may actually cause the supply air to bypass the server inlets. Instead, standard free-area perforated tiles, with airflow rates governed by underfloor air pressure, should be sufficient for most rack densities.

Placing return duct extensions on the CRAC units or return grilles over the hot aisles will provide a path for hot air to return to the CRAC units without elevating the cold air supply temperature delivered to the racks. This strategy costs little to implement because return grilles are inexpensive and appropriate perforated tile placement only requires periodically reviewing arrangements and identifying optimum placement for current space loads.
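
To put numbers to tile placement, a common sizing relation (not spelled out in the article) is CFM = (3,412 x rack kW) / (1.085 x temperature rise in degrees F). The short sketch below applies it with hypothetical figures for rack load, temperature rise and per-tile airflow:

    import math

    def rack_cfm(rack_kw, delta_t_f=20.0):
        """Airflow needed to remove a rack's sensible heat.

        CFM = (kW * 3412 BTU/hr) / (1.085 * deltaT), the standard
        sensible-heat relation for air near sea level.
        """
        return rack_kw * 3412.0 / (1.085 * delta_t_f)

    def tiles_per_rack(rack_kw, cfm_per_tile=550.0, delta_t_f=20.0):
        """Perforated tiles needed per rack, rounding up."""
        return math.ceil(rack_cfm(rack_kw, delta_t_f) / cfm_per_tile)

    # Hypothetical 5 kW rack with a 20 F rise across the servers, and a
    # 25%-open tile assumed to deliver ~550 CFM at typical underfloor
    # pressure:
    print(f"{rack_cfm(5):.0f} CFM needed")   # ~786 CFM
    print(tiles_per_rack(5), "tiles")        # 2 tiles

The per-tile delivery figure depends entirely on the actual underfloor pressure, which is why the periodic review of tile arrangements mentioned above matters.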

3. Re-evaluate HVAC System Operating Fundamentals. Adjusting existing controls procedures can be among the most economical and easily implemented energy efficiency strategies in a legacy data center. A common computer room mantra of the past had been "the colder, the better." When originally designed, many data center computer rooms aimed to maintain a uniform temperature of 72 degrees F. Until about 10 years ago, there was no distinction between hot aisles and cold aisles. Data center users operated under the mistaken assumption that a uniform temperature should — and could — be maintained throughout the space.

Thanks to the ongoing efforts of industry organizations to develop standards (for example, ASHRAE Technical Committee TC 9.9 and its “Thermal Guidelines for Data Processing Environments,” or Telcordia GR-3028-CORE), facility executives have gradually recognized that the only place in the computer room where temperature matters is at the supply air inlet to the computer equipment itself. The thermal conditions at the server inlet currently recommended by TC 9.9 fall between 64.4 degrees F and 80.6 degrees F.

Working back from this range, it becomes crucial to consider how mechanical systems are configured to deliver cool air to the computer load. With the new emphasis on maintaining the cold aisle temperature within the wider thermal range, supply air temperatures can be raised and the chilled water supply temperature set points can be increased as well. Raising the chilled water loop’s supply temperature, and thereby reducing the difference between the chilled water and condenser water loop temperatures, decreases the chiller’s workload while still cooling the same amount of fluid. For example, raising the chilled water supply temperature on a generic chiller from 44 degrees F to 54 degrees can reduce the chiller power requirement from 0.62 kW/ton to 0.48 kW/ton.
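
Applying the article's generic chiller figures, the annual impact scales directly with the connected load. The sketch below assumes a hypothetical 400-ton load, year-round operation and a $0.10/kWh utility rate, none of which come from the article:

    def annual_chiller_savings(tons, kw_per_ton_before, kw_per_ton_after,
                               hours=8760, rate_per_kwh=0.10):
        """Annual savings from a lower chiller kW/ton at a constant load."""
        kwh_saved = (kw_per_ton_before - kw_per_ton_after) * tons * hours
        return kwh_saved, kwh_saved * rate_per_kwh

    # Article's figures (0.62 -> 0.48 kW/ton) on an assumed 400-ton load:
    kwh, dollars = annual_chiller_savings(400, 0.62, 0.48)
    print(f"{kwh:,.0f} kWh, ${dollars:,.0f} per year")
    # ~490,560 kWh, ~$49,056 per year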

4. Optimize Mechanical Systems. Old mechanical equipment can be a significant energy hog. But purchasing more efficient equipment may not fit into the coming year’s capital plan. An alternative option: optimize the performance of existing systems.

Many data centers employ constant-volume equipment such as CRAC units, pumps, fans and chillers that run continuously 24/7/365, consuming enormous amounts of energy. If the motors on such equipment can be retrofitted with variable frequency drives (VFDs), and control sequences can be adjusted so that equipment groups operate at part load while maintaining set points, it is possible to realize significant energy savings. A decrease of just 10 percent in fan speed may result in as much as a 27 percent reduction in energy use while prolonging the life of the equipment.
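
The 27 percent figure follows from the fan affinity laws, under which fan power varies roughly with the cube of fan speed. A quick check:

    def fan_power_fraction(speed_fraction):
        """Fan affinity laws: power scales with the cube of speed."""
        return speed_fraction ** 3

    # A 10 percent speed reduction:
    remaining = fan_power_fraction(0.90)
    print(f"power drops to {remaining:.1%}, a {1 - remaining:.1%} reduction")
    # power drops to 72.9%, a 27.1% reduction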

A common first step is to retrofit the CRAC unit supply fan motors with VFDs and control all the computer room CRAC units in unison so that only the airflow needed to meet the underfloor pressure set point is supplied, conserving energy while still meeting cooling requirements. In many parts of the country, the savings can provide a payback in less than a year.

5. Identify Strategic Locations for Thermostats. In most legacy data centers, thermostats are installed in the CRAC return air stream, where airflow may be unpredictable. This can result in uneven CRAC unit loading, which in turn leads to variations in server inlet temperatures. Relocating the thermostats to the supply air stream, where discharge air can be controlled directly, will provide more uniform underfloor and server inlet temperatures. It also enables discharge air temperatures to be raised with more control and accuracy. Pairing this strategy with the airside and waterside shifts in Tip No. 3 will further conserve chiller energy while still providing acceptable supply air temperatures to the servers.
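
As a minimal sketch of the underlying control idea (with illustrative gains and set points, not any vendor's actual logic), a loop that acts on supply air temperature regulates exactly what the servers receive, rather than reacting to mixed and unpredictable return conditions:

    def supply_air_control(supply_temp_f, setpoint_f=65.0, gain=0.05,
                           valve_position=0.5):
        """One step of a proportional loop on CRAC discharge temperature.

        Opens the chilled water valve when supply air runs warm, closes
        it when supply air runs cold. Gains and limits are illustrative.
        """
        error = supply_temp_f - setpoint_f          # positive = too warm
        valve_position += gain * error              # proportional action
        return min(max(valve_position, 0.0), 1.0)   # clamp to 0..100%

    # Supply air reading 68 F against a 65 F set point:
    print(supply_air_control(68.0))  # valve opens from 0.5 toward 0.65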

Read more on this article>>