Data center epitomizes 'mission critical'
The Indiana University Data Center in Bloomington features Tier III, N+1 redundancy, cost-saving flywheel UPS technology, and switchgear with full bypass capability to help ensure power reliability.
The $32.7 million, 82,700-sq-ft Indiana University Data Center in Bloomington is critical to the institution of higher learning. The data center bears a heavy responsibility: ensuring the safety and security of the university’s most prized networking, computer-processing, and data-storage equipment.
The data center supports teaching, research, and administration at all eight university campuses, and enhances the university’s competitiveness for research grants. It secures the university’s digital and technological assets, including the supercomputers Big Red and Quarry, and the Bloomington hub of Indiana’s statewide I-Light network.
Finally, the data center and its staff provide backup data space, network capability, and critical redundancy to the state of Indiana.
As one of the largest data centers in the region and the largest for higher education in the state, it currently holds 2.8 petabytes of information and supports computing on thousands of servers. Designed for longevity and resiliency, the data center’s exterior is a concrete structure that was built bunker-style to help defend against power outages, flooding, and even an F5 tornado.
The facility is a Tier III, N+1 configuration that provides the redundancy needed to keep powering the data center through all but the most severe catastrophes. Any component in the power distribution system can be taken offline during a planned outage without interrupting power to compute equipment. On-site fuel storage enables 24 hours of continuous power, meeting that and other requirements of a Tier III design.
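As a rough illustration of what a 24-hour runtime requirement implies for on-site fuel, the sketch below multiplies generator load by runtime and an assumed full-load diesel consumption rate. The article gives neither the tank size nor the generators' actual fuel rate, so the 0.07 gal/kWh figure is an assumption for illustration only.

```python
# Rough sizing check for 24 hours of on-site fuel (illustrative only;
# the article does not state tank size or the generators' fuel rate).
GEN_RATING_KW = 1_500          # each of the two on-site generators
FUEL_RATE_GAL_PER_KWH = 0.07   # assumed full-load diesel consumption

def fuel_for_runtime(load_kw: float, hours: float,
                     rate: float = FUEL_RATE_GAL_PER_KWH) -> float:
    """Gallons of diesel needed to carry `load_kw` for `hours`."""
    return load_kw * hours * rate

# Both generators at full load for the Tier III 24-hour requirement:
total_gal = fuel_for_runtime(2 * GEN_RATING_KW, 24)
print(round(total_gal))  # 5040
```

Actual tank sizing would also account for derating, fuel polishing losses, and refueling logistics, none of which are covered in the article.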
Three 12-kV circuits provide power to the facility. The main feed is from Duke Energy, a primary utility source for Indiana, Ohio, and Kentucky. Another feed is from the campus’ distribution system, which is situated away from the data center. If an event interrupts the circuit from Duke Energy, power automatically switches to the distribution feed. The third circuit, which requires manual switching in the event of an emergency, provides additional redundancy.
All three circuits connect to a 15-kV switchgear lineup that acts as the brain of the data center’s on-site power system. Power travels from the switchgear to four substations: two for compute, or server, rooms—CS1 and CS2, and two for mechanical rooms—MS1 and MS2.
Substation CS1 powers the enterprise compute room and is a dual main breaker configuration, which keeps it connected to the normal utility source or the on-site generation. In the event of a power disruption, substation CS1 links directly to a pair of 1.5-MW generators via ASCO generator paralleling switchgear.
If a utility source interruption occurs, two 750-kVA uninterruptible power supply (UPS) units provide interim power for up to 25 secs, or until the generator main breaker closes, rerouting the load to the generators. These UPS units utilize flywheel energy storage to maintain the load until the generators are online.
The short ride-through provided by flywheel technology underscored the need for fast, reliable switchgear.
“We carefully selected our switchgear knowing that power must be transferred in under 25 secs,” said Robert Lowden, director, Enterprise Infrastructure for the university. “Our ASCO switchgear reliably transfers power within 10 secs, relieving the flywheel energy storage that would otherwise drop the load after 25 secs.”
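The timing budget described above can be checked with simple arithmetic: the flywheel must carry the load only until the transfer completes. The power factor below is an assumption (the article states only the 750-kVA rating), so the energy figure is illustrative, not a manufacturer specification.

```python
# Back-of-envelope ride-through check for one flywheel UPS (illustrative;
# the power factor is an assumption, not a figure from the article).
UPS_RATING_KVA = 750
ASSUMED_PF = 0.9       # assumed load power factor
RIDE_THROUGH_S = 25    # flywheel support window cited by the university
TRANSFER_S = 10        # ASCO switchgear transfer time cited by the university

load_kw = UPS_RATING_KVA * ASSUMED_PF
energy_needed_kj = load_kw * RIDE_THROUGH_S   # kW x s = kJ
margin_s = RIDE_THROUGH_S - TRANSFER_S

print(f"{energy_needed_kj:.0f} kJ of usable flywheel energy "
      f"covers the {TRANSFER_S}-s transfer with {margin_s} s of margin")
```

The 15-second margin is the whole design point: the transfer must finish well before the flywheels reach their discharge limit.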
Specifying flywheel technology rather than conventional batteries saved a significant amount of floor space and decreased the university’s carbon footprint. Flywheel technology also eliminates the cost of replacing batteries every five years and of safely disposing of lead and hazardous chemicals.
Both CS1 UPS units are paralleled on another switchgear lineup. This configuration enables feeds to the enterprise room, which houses the critical compute loads.
Substation CS2 powers the research compute room. A 1,300-kVA UPS powers the noncritical compute nodes, and an additional 500-kVA UPS powers the critical management nodes and storage arrays. This 500-kVA UPS is also connected to substation CS1 via an ASCO 7000 series automatic transfer switch, allowing the critical research load to run on a generator if necessary. To maintain N+1 redundancy for enterprise computing in the event of a generator failure, the ASCO generator paralleling switchgear automatically shuts down the 500-kVA UPS and sheds the research load.
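The load-shed rule above can be sketched as a small state machine: losing one of the two generators drops the system below N+1 for the enterprise load, so the research UPS is de-energized. This is a hypothetical model for illustration; the real logic lives in the ASCO paralleling switchgear's controls.

```python
# Simplified sketch of the N+1 load-shed rule described above (hypothetical
# model; the actual logic resides in the paralleling switchgear controls).
from dataclasses import dataclass

@dataclass
class ParallelingSwitchgear:
    generators_online: int = 2           # two 1.5-MW units
    research_ups_energized: bool = True  # the 500-kVA research UPS

    def on_generator_failure(self) -> None:
        """Shed the research load so enterprise computing keeps its margin."""
        self.generators_online -= 1
        if self.generators_online < 2:
            # Below N+1 for the enterprise load: drop the research UPS.
            self.research_ups_energized = False

sw = ParallelingSwitchgear()
sw.on_generator_failure()
print(sw.research_ups_energized)  # False
```

The priority ordering is the key design choice: enterprise computing keeps redundant generation, and research computing absorbs the shortfall.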
The first of the two mechanical substations, MS1, is also a dual main breaker configuration that’s integrated with the generator switchgear. Only substations MS1 and CS1 connect to the two generators.
Substation MS1 supplies power to one of the three data center chillers, the cooling towers, and all associated pumps and cooling devices required to keep the data center cool.
In addition to its cooling responsibility, MS1 also powers emergency lighting and the university’s building management system (BMS). The BMS manages all critical building controls and temperature.
Substation MS2 provides power to chillers two and three. However, chiller two is fed from an ASCO 7000 series automatic transfer switch, enabling it to switch from MS2 to MS1 during a power outage or to cover cooling duties during maintenance if chiller one is taken offline.
Programmable logic controllers (PLCs) in all switchgear help facilities staff manage building power through the BMS. The PLCs report all generator start-ups and shutdowns, power transfers due to outages and maintenance, and breaker status.
The university’s operations center monitors everything, including fuel capacity and consumption. Everything reported to the operations center is recorded for later historical analysis.
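A minimal sketch of recording PLC events for later review might look like the following. The record structure is hypothetical; the article does not describe the BMS data model, only that events such as generator starts, transfers, and breaker status changes are logged with enough context to reconstruct them later.

```python
# Minimal sketch of logging PLC events for historical review (hypothetical
# record format; the article does not describe the actual BMS data model).
import csv
import io
from datetime import datetime, timezone

def log_event(writer, source, event):
    """Record one switchgear or generator event with a UTC timestamp."""
    writer.writerow([datetime.now(timezone.utc).isoformat(), source, event])

buf = io.StringIO()
w = csv.writer(buf)
log_event(w, "GEN-1", "start")
log_event(w, "15kV-SWGR", "transfer-to-generator")
print(buf.getvalue())
```

In practice such records would land in a historian or database rather than a CSV buffer, but the principle is the same: every event carries a timestamp and a source so outages can be reconstructed after the fact.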
Staff monitor, but cannot control, the system via remote computer. University officials decided that system control should only be available at the equipment, so when an alarm sounds, personnel must go to the source of each alarm to take corrective action.
A 1,500-kW load bank located on-site facilitates testing. It connects to the same switchgear lineup as the generators, allowing staff to perform annual full-load generator tests without transferring the facility load off the utility. Because the load bank is on-site, the university saves the time, manpower, and expense of renting test equipment.
Transferring power intelligently is an integral part of any backup and emergency power system. If a generator fails to start or a load fails to transfer, the flywheel UPS units will reach their discharge limit and drop the load. To protect against this scenario, the power system features bypass capability: most equipment within the facility is linked to switchgear that can bypass or reroute power in case of an emergency or power disruption.
To help maintain the data center, a service vendor visits the facility semiannually to ensure the generators, switchgear, UPS units, and other equipment are functioning properly. The vendor changes filters and oil, replaces belts, and satisfies other maintenance requirements annually, or as needed when spot checks occur.
Facilities staff also received hands-on training during the commissioning process, including actually transferring power to the generators and restarting UPS units. All in all, the Hoosiers continue to strive for excellence, and the data center sits at the top of the class.
Information provided by Indiana University.