Meeting electrical infrastructure demands in data centers
Wiring types, methods
Proper specification of data center wiring supports base electrical infrastructure design and equipment selection. From the type of wiring chosen to its installation methods, voltages, and support, wiring serves as the veins of the data center's body.
Copper is the preferred conductor material for its ease of use, low historical risk, and ability to work in tight places. That said, aluminum conductors can be used for large feeders when first-cost reduction is necessary, even though aluminum wires are more challenging to terminate into circuit breakers or a bus, because aluminum expands and contracts more than copper as the load changes. The larger aluminum conductors often require more space within switchgear, switchboards, and panelboards, and aluminum connections require more testing and maintenance. A best practice with aluminum conductors is to thermoscan the joints and terminations annually during peak loading conditions; tightening substandard joints and connections is then performed at a time when the risk of a critical load outage is minimized.
A variety of wiring methods are used in data centers. Data centers are primarily filled with overhead and underground conductors in conduits and ducts, while bus ducts, cable trays, and cable buses are also employed.
Electrical contractors often prefer underground conductors because they perceive that installed costs will be reduced: roughly 5 ft of run is saved at each end, and hanging expense is eliminated. This assumes that the same number and size of conductors are installed underground as overhead. Proper design using the Neher-McGrath calculations, however, often requires more and larger conductors underground than overhead, reducing or eliminating the perceived advantage. Underground conductors must be sized larger to counter the thermal insulation naturally provided by the earth, whereas overhead conductors dissipate their heat more easily.
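The Neher-McGrath effect described above can be illustrated with a simplified form of the classic ampacity equation. The numeric inputs below are illustrative placeholders, not design data; a real calculation derives the effective thermal resistance from duct bank geometry, soil thermal resistivity, and load factor.

```python
import math

def ampacity_ka(tc, ta, delta_td, rdc, yc, rca):
    """Simplified Neher-McGrath ampacity, in kiloamperes:
    I = sqrt((Tc - Ta - dTd) / (Rdc * (1 + Yc) * Rca))
    tc, ta:   conductor rating / ambient earth temperature (deg C)
    delta_td: temperature rise from dielectric losses (deg C)
    rdc:      dc conductor resistance (micro-ohm per ft)
    yc:       ac resistance increment (skin/proximity effects)
    rca:      effective thermal resistance, conductor to ambient
              (thermal ohm-ft)."""
    return math.sqrt((tc - ta - delta_td) / (rdc * (1 + yc) * rca))

# Illustrative comparison: a higher effective thermal resistance
# (as in a heavily loaded underground duct bank) lowers the current
# a given conductor can carry, which is why underground runs often
# need more or larger conductors than equivalent overhead runs.
air_like = ampacity_ka(tc=90, ta=20, delta_td=0, rdc=12.9, yc=0.02, rca=4.0)
buried_like = ampacity_ka(tc=90, ta=20, delta_td=0, rdc=12.9, yc=0.02, rca=9.0)
```

The point of the sketch is the monotonic relationship: every added thermal ohm-foot between conductor and ambient earth comes directly out of allowable current.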
Additionally, scalable, modular data center construction can make proper installation of underground ducts for future equipment challenging, as there’s no 100% accurate way to know where the ducts should surface for future build-out.
Care must be taken to size conductors appropriately for the elevated ambient temperatures found in computer equipment racks, data hall hot aisles, and electrical equipment rooms. NEC Table 310.15(B)(16) assumes an ambient temperature of 86 F; where the ambient temperature is higher, a conductor cannot continuously carry the load current for which it is rated at 86 F and must be derated for the actual ambient temperature.
While sometimes used in the data center's electrical infrastructure, bus ducts face both reliability and maintainability issues because of their many joints. Bus duct joints typically occur every 10 ft in straight runs, so a 100-ft straight run of 10-ft sections has nine section-to-section joints plus a termination at each end, or as many as 11 connection points (fittings, elbows, etc., add more). This makes bus ducts more susceptible to failure and harder to maintain. Additionally, bus ducts are factory-assembled products made to fit field measurements. If any measurement is wrong or a piece doesn't fit, it cannot be field-modified; a new piece must be ordered from the factory, often with a considerable wait attached.
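The joint count above reduces to simple arithmetic, sketched below (the function name and end-connection count are illustrative assumptions for a plain straight run):

```python
def bus_duct_connection_points(run_ft, section_ft=10, end_connections=2):
    """Count connection points in a straight bus duct run:
    joints between adjacent sections plus the termination at
    each end. Fittings and elbows would add more."""
    sections = -(-run_ft // section_ft)   # ceiling division
    return (sections - 1) + end_connections

# 100 ft of 10-ft sections: 9 section joints + 2 end terminations
print(bus_duct_connection_points(100))    # 11
```

Each of those connection points is a torque-sensitive bolted interface that must be inspected and maintained, which is the reliability argument against long bus duct runs.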
Cable trays, typically used overhead, resemble a ladder hanging from the ceiling and are used in a data center’s electrical design for their reliable, flexible, low-cost installation. Single conductor and multi-conductor cables can be installed in cable tray, and armored cables are often specified to provide increased fault tolerance. Cable tray can be easily modified in the field to fit conditions, so precise measurement is not as critical as it is for bus duct. It is important to realize that every cable in a cable tray can be lost if just one faults and burns, unless all of the cables are armored. Another crucial point is that stacking cable trays one above another can lead to cascading failures. If a cable faults in the bottom tray, it could cause a fire that burns every cable in that tray as well as in the ones above.
Cable bus is an alternative to bus duct that has many advantages. Assembled like a cable tray with large single-conductor power cables run within it, including spacer blocks between the cables, it can be easily modified in the field to fit field conditions. Unlike bus ducts, cable buses typically have only two terminations (one at each end, with solid cable in between) and no joints, making them more reliable. The reduced number of terminations and joints also reduces maintenance.
Voltage and installation
Both low and medium voltages are used in today's data centers. Proper voltage selection is beyond the scope of this article. Selection of appropriate insulation types is essential for providing desired reliability. Low-voltage (600 V or below) insulation on conductors is usually rated at 194 F (90 C), with NEC type THHN (thermoplastic high heat-resistant, nylon-coated) insulation used overhead in dry locations and NEC type RHW-2 (heat- and moisture-resistant thermoset) or XLP-2 (cross-linked polyethylene) in damp, wet, or underground locations. Medium-voltage cables (1,000 V or more) are usually shielded, with 194 F (90 C) or 221 F (105 C) rated ethylene propylene rubber (EPR) or XLP insulation, and with a 100%, 133%, or 173% insulation level selected based on the system neutral grounding.
If the system neutral is solidly grounded, a 100% insulation level is normally specified. If the system neutral is impedance grounded and the system will be allowed to operate for up to one hour with a phase grounded, a 133% insulation level is normally specified; if it will be allowed to operate for more than one hour with a phase grounded, a 173% insulation level is normally specified. (High voltage, 69,000 V or greater, is not normally used inside data centers and is typically installed outdoors by utilities.)
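The selection rules above reduce to a small lookup. The function below is only a restatement of the three cases as stated in the text, with hypothetical argument names, not a substitute for the governing cable standard:

```python
def insulation_level(neutral_grounding, ground_fault_duration_hr=0.0):
    """Medium-voltage cable insulation level (percent):
    solidly grounded neutral            -> 100%
    impedance grounded, fault cleared
      within 1 hour                     -> 133%
    impedance grounded, operated with a
      phase grounded beyond 1 hour      -> 173%"""
    if neutral_grounding == "solid":
        return 100
    if neutral_grounding == "impedance":
        return 133 if ground_fault_duration_hr <= 1.0 else 173
    raise ValueError("unknown grounding method: " + neutral_grounding)

print(insulation_level("solid"))             # 100
print(insulation_level("impedance", 1.0))    # 133
print(insulation_level("impedance", 8.0))    # 173
```

The longer a ground fault is allowed to persist, the longer the unfaulted phases see elevated phase-to-ground voltage stress, which is what the thicker insulation wall compensates for.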
Data centers require a highly robust and reliable electrical infrastructure that far surpasses that of typical commercial and industrial facilities. In addition, elevated temperatures are encountered in many areas as operators raise operating temperatures to improve PUE and operational efficiency. Meeting these efficiency goals requires proper specification of equipment and wiring, and implementation of design methods with appropriate voltages and systems. A coordinated effort is needed to ensure the data center's electrical infrastructure will be built to last.
Christopher M. Johnston is a senior vice president and the chief engineer for Syska Hennessy Group's critical facilities team. Johnston specializes in the planning, design, construction, testing, and commissioning of mission critical 7x24 facilities, and leads team research and development efforts to address current and impending technical issues in critical and hypercritical facilities. With more than 40 years of engineering experience, he has served as quality assurance officer and supervising engineer on many projects.