News

How Is Your Data Center Performing?
June 30, 2006

By Staff
Appeared in Processor

When you hear the words “data center performance,” you might first envision a super server in the middle of it all, making the data center run like a well-oiled machine. But according to Syska Hennessy (www.syska.com), a New York-based engineering, technology, and consulting firm, data center performance depends on more than what your server is pumping out. In fact, there is a precise science to measuring how your data center is performing. The Syska Hennessy solution examines 11 different aspects of data center performance and measures them on a scale of 1 to 10.
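
Syska Hennessy does not publish the mechanics of its scorecard, but the general shape of such a multi-aspect assessment is easy to sketch. The aspect names, ratings, and equal weighting below are hypothetical illustrations, not the firm’s actual 11 categories or scoring method:

    # Illustrative sketch of a multi-aspect data center scorecard (Python).
    # Aspect names and ratings are hypothetical, not Syska Hennessy's actual
    # 11 assessment categories or methodology.
    scores = {
        "electrical": 8,               # each aspect rated on a 1-to-10 scale
        "cooling": 6,
        "disaster_preparedness": 4,
        # ...remaining hypothetical aspects would be listed here
    }

    def overall_score(ratings):
        """Average the per-aspect ratings into a single 1-to-10 figure."""
        return sum(ratings.values()) / len(ratings)

    print(f"Overall performance score: {overall_score(scores):.1f} / 10")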

Today’s Focus
Christopher M. Johnston, vice president and critical facilities chief engineer at Syska Hennessy, says the data center performance industry downsized considerably after dismal market conditions post-Y2K, when a number of the major firms either went out of business or changed their market focus to other areas. Johnston says, “Since that time, practically all of the available data center space has been absorbed by the market, and a large amount of new space is being designed and built. All of the firms that focus on this market are very busy, and we are no exception.”

John Passanante, vice president of technology at Syska Hennessy, says reliability continues to be a major focus in the data center performance industry. He says IT managers who have been pointing out the vulnerability of their critical facilities to upper management are starting to be taken more seriously. Passanante says, “More clients are looking for qualified firms such as Syska Hennessy Group to perform gap assessments to identify major reliability deficiencies and assist IT and facilities managers in building a business case for investment dollars to be allocated to upgrading deficient systems.”

Passanante says technology trends such as high-density servers and blades, grid computing, and the growth of SANs are introducing new challenges in determining future space requirements. He describes a “sandwich effect” that compresses more equipment into the same cabinet space while increasing depth requirements. Passanante says, “The exponential increase in heat and power loads within a data center is transforming the industry’s thinking toward watts per cabinet footprint rather than watts per overall data center square footage. The 25-year data center has become obsolete. The challenge has become to think in a maximum of three-year increments while leaving systems capable of future growth.”
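
The shift Passanante describes, from facility-average density to cabinet-level density, comes down to simple arithmetic. The sketch below uses assumed cabinet and floor figures, chosen only to show how far apart the two metrics can sit:

    # Comparing watts per cabinet footprint with watts per overall square foot.
    # All load and area figures here are assumptions for illustration only.
    cabinet_load_w = 8000            # assumed IT load in one cabinet, watts
    cabinet_footprint_sqft = 8       # assumed footprint incl. its share of aisle, sq ft

    floor_area_sqft = 10_000         # assumed equipment-floor area
    total_it_load_w = 1_000_000      # assumed total IT load on that floor

    watts_per_cabinet_footprint = cabinet_load_w / cabinet_footprint_sqft
    watts_per_floor_sqft = total_it_load_w / floor_area_sqft

    print(f"Cabinet-level density:    {watts_per_cabinet_footprint:.0f} W per sq ft of footprint")
    print(f"Facility-average density: {watts_per_floor_sqft:.0f} W per sq ft of floor")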

Meeting The Challenges
Johnston says the major performance challenge today is high-density cooling. He says, “Until the last couple of years, we rarely encountered a major data center with an average computer load density over the equipment floor greater than 50 watts per square foot. Today, many clients routinely want 100 to 125 watts average with the flexibility to have spot loads of 200 watts, and some clients want 400 watts average or higher.” Add the need for adequate cold air 6 feet above the floor in the cold aisle to cool the top server in a cabinet, Johnston says, and the challenge increases. He says Syska Hennessy has numerous solutions to this challenge, ranging from increased raised-floor depth and closer cooling unit placement to overhead distribution and liquid cooling of cabinets.
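
To put those densities in perspective, the heat load they imply for a whole equipment floor can be estimated directly. The floor area in the sketch below is an assumed figure; the conversion uses the standard 3.517 kW per ton of cooling:

    # Rough cooling-load arithmetic for the densities Johnston cites.
    # The 10,000 sq ft floor area is an assumption for illustration.
    floor_area_sqft = 10_000
    densities_w_per_sqft = [50, 125, 400]     # legacy average, common today, high end

    for density in densities_w_per_sqft:
        load_kw = density * floor_area_sqft / 1000
        cooling_tons = load_kw / 3.517        # 1 ton of cooling = 12,000 BTU/hr = 3.517 kW
        print(f"{density:>3} W/sq ft -> {load_kw:,.0f} kW of heat, roughly {cooling_tons:,.0f} tons of cooling")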

A second challenge, in Johnston’s opinion, is to maintain or increase the efficiency of the equipment floor. With traditional methods, increased computer load density requires that a growing percentage of the available space be occupied by cooling and electrical equipment, reducing the share usable for computer equipment.

Johnston notes, “We are employing numerous solutions for this second challenge. One is to space the columns much farther apart than in the past; 56- x 40-foot spacing is not uncommon. Another is to use higher-capacity cooling units and to install them and the PDUs in service galleries outside the equipment room, leaving only 2- x 2-foot remote power panels in the equipment room. A third is to have a two-story facility with the equipment room and service galleries upstairs and the electrical and mechanical infrastructure below.”

Johnston says a third challenge is the often-ignored problem of harmonic voltage distortion. “Computer equipment cannot function reliably if its input voltage is distorted beyond a certain level. Normally, a UPS system keeps the distortion below that level. When a UPS transfers to bypass, the distortion seen by the computer equipment it supplies increases. This distortion is produced by the other UPS equipment that is online, plus the variable speed drives now being used in the cooling system to reduce operating costs.” He says it’s not uncommon today to find that 85% of the total electrical load is the type that produces harmonic voltage distortion.

Syska Hennessy is dealing with this third challenge by modeling the entire electrical system for harmonic voltage distortion and by taking increased care in selecting UPS and variable speed drive equipment that will maintain the distortion within limits during a worst-case event.
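
The modeling Johnston describes revolves around keeping total harmonic distortion (THD) of the voltage within the tolerance of the computer equipment under worst-case conditions, such as a UPS transfer to bypass. A minimal sketch of the underlying THD calculation follows; the harmonic magnitudes are made-up example values, not measurements from any study:

    # Minimal total-harmonic-distortion (THD) calculation (Python).
    # Harmonic voltage magnitudes below are illustrative, not real data.
    import math

    fundamental_v = 480.0                              # assumed 60 Hz fundamental, volts
    harmonic_v = {5: 14.0, 7: 9.0, 11: 5.0, 13: 3.0}   # assumed harmonic order -> volts

    def voltage_thd(v1, harmonics):
        """THD = sqrt(sum of squared harmonic voltages) / fundamental voltage."""
        return math.sqrt(sum(v * v for v in harmonics.values())) / v1

    print(f"Voltage THD: {100 * voltage_thd(fundamental_v, harmonic_v):.1f}%")

A full study would repeat this kind of calculation at each bus and for each operating scenario, then compare the results against the distortion limit the installed computer equipment can tolerate.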

Made Stronger
Passanante says data centers are also encountering cabling infrastructure challenges. As bandwidth requirements increase to support 10 Gigabit Ethernet and beyond, he says, cabling manufacturers and standards committees are working to develop higher-grade copper cabling, which remains more cost-effective than fiber-optic cabling. Passanante says, “In order to satisfy the higher bandwidth requirements using copper cabling, cable diameters must be increased to deal with alien crosstalk interference. It is now critical that new testing procedures be developed that can account for the effects of alien crosstalk. It is no longer possible to simply modify the existing testing equipment to include this additional test.”

Passanante says current testing only accounts for one cable at a time. The challenge, he says, is to estimate the effect that multiple cables surrounding any one cable will have. That is what has delayed the release of an approved standard for 10 Gigabit Ethernet-capable copper cabling.
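
The testing problem Passanante outlines is essentially one of combining many small noise contributions into a single figure. One common way to do that for crosstalk measured in decibels is a power sum, sketched below; the cable count and dB values are illustrative assumptions, not numbers from any published standard:

    # Power-sum combination of alien-crosstalk contributions (Python).
    # The per-cable loss values are illustrative assumptions only.
    import math

    # Assumed alien-crosstalk loss from each of six surrounding cables, in dB
    # (larger values mean less coupled noise).
    anext_db = [62.0, 60.5, 64.0, 61.0, 63.5, 59.0]

    def power_sum_db(losses_db):
        """Convert each loss to linear power, sum, and convert back to dB."""
        total_power = sum(10 ** (-loss / 10) for loss in losses_db)
        return -10 * math.log10(total_power)

    print(f"Power-sum alien crosstalk: {power_sum_db(anext_db):.1f} dB")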

Johnston says, “Most data centers have infrastructure that is more reliable in some systems (such as electrical) than in others (such as disaster preparedness). The need for balance between systems is a lesson that we have learned well during our 78-year history as consulting engineers. The old adage that a chain is only as strong as its weakest link is especially true in data centers.”