Length of blackout tests critical facilities
September 01, 2003

By Staff
Appeared in Building Operating Management/FacilitiesNet

For telecommunications companies like AT&T, loss of utility power in a blackout is only half of the problem. “The first thing people want to do in an emergency is pick up the phone,” says Alan Abrahamson, AT&T real estate operations director. The surge of additional calls strains the system just when it is most vulnerable.

AT&T maintained service throughout the blackout. But the company’s mission-critical facilities and strategies were tested.

The company has portable generators with quick-connect capability that can be trucked to critical locations if an on-site unit fails. But on Aug. 14, the backup plan itself needed a backup.

One generator had to be moved into New York City over the George Washington Bridge. But traffic was at a standstill. “The truck got stuck for hours before a police escort got it through,” says Abrahamson. “Fortunately, we had battery life to carry us through.”

Most data center and telecommunications spaces remained up and running. But some reportedly did go down.

One Achilles’ heel was equipment failure. “Given the length of the blackout, the likelihood of having a component failure is fairly high,” says Peter Gross, chief technology officer, EYP Mission Critical Facilities.

To minimize the risk of failure, continuous maintenance is important. So is ongoing testing, such as running generators at full load. At AT&T, says Abrahamson, “tests are done on a regular basis. It’s a very rigorous schedule.”

Human action — or inaction — can undermine reliability. In an emergency, operators can be overwhelmed by alarms. “They have to be able to discern which are critical and respond properly,” Gross says. Training is crucial.

The outage exposed the way “equipment creep” threatens reliability, says Rob Friedel, senior vice president, Syska Hennessy Group. Equipment creep occurs as more power-hungry, heat-generating gear is crammed into space not designed for the added load. “That cut into backup time and system reliability,” Friedel says. “People thought they were well-covered but they weren’t.”

Even when the infrastructure performed well, the length of the outage raised questions. Consider a data center located within a larger facility. “You have a little box within a big box,” says Donald Sposato, director, corporate engineering and technical services, Becton Dickinson. The data center cooling system may be designed to maintain 70 F, while the building itself is kept at 74 F. “If the temperature of the big box starts to creep up, the little box has to work harder,” Sposato says. A long outage on a very hot day could strain the capacity of the data center cooling system.
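The “little box within a big box” effect can be sketched with a toy sensible-heat calculation. All of the figures below (U-value, envelope area, IT load) are hypothetical illustrations, not values from the article; the point is only that conduction gain into the data center grows with the temperature difference, so a warming building adds load on top of the fixed server heat.

```python
# Illustrative sketch only: how a data center's cooling load might grow as the
# surrounding building warms during a long outage. All numbers are assumed,
# not from the article or any real facility.

def envelope_gain_btu_hr(u_value, area_sqft, t_building_f, t_room_f):
    """Conduction gain through the data center envelope: Q = U * A * dT."""
    return u_value * area_sqft * (t_building_f - t_room_f)

IT_LOAD_BTU_HR = 170_000   # assumed constant server heat load (roughly 50 kW)
U = 0.25                   # assumed envelope U-value, Btu/hr-ft2-F
AREA = 4_000               # assumed envelope surface area, ft2
T_ROOM = 70                # data center setpoint from the article, F

# Building temperature creeping up from its normal 74 F as cooling lapses
for t_building in (74, 80, 86, 92):
    q_env = envelope_gain_btu_hr(U, AREA, t_building, T_ROOM)
    total = IT_LOAD_BTU_HR + q_env
    print(f"building at {t_building} F: envelope gain {q_env:,.0f} Btu/hr, "
          f"total load {total:,.0f} Btu/hr")
```

Even in this toy model the envelope gain more than quintuples as the building drifts from 74 F to 92 F, which is the “little box has to work harder” effect Sposato describes.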