Designing data center electrical distribution systems

Designing efficient and reliable data center electrical systems requires looking through the eyes of the electrical engineer—and the owner.

By Eduard Pacuku, PE, Jacobs, Philadelphia | September 17, 2014
Learning objectives
  • Understand the preliminary considerations of designing data center electrical distribution systems.
  • Know how to design efficient data centers that can also accommodate growth.
  • Identify the codes and standards that apply to designing data center electrical distribution systems. 

Data centers are among the hottest developments in the technology world. The growing needs of the Internet of Things have forced the biggest players in the computing world to spend billions of dollars on new multi-megawatt data centers. This boom in data center construction is largely fueled by the growing use of cloud services, which has put a strain on server capacity (see Figure 1). Data centers are considered mission critical when their operation is essential to an organization's economic or functional needs; even a disruption of a few seconds in certain types of mission critical data centers can cost millions of dollars.

This article explores data center design through the eyes of both the owner and the electrical engineer. It also discusses the key components of data centers and touches on the codes and standards that apply to data centers and their components.

Preliminary considerations

Data centers, whose main components are servers, need electrical power to operate. It is therefore only natural that any discussion about building a data center should begin with determining its electrical needs and how to satisfy them.

Capacity: Before anything else, the owner must determine the capacity of the data center (in megawatts). In previous planning efforts, it was common to express capacity in W/sq ft. Today, however, it is more common to discuss kW per rack, which may vary from 5 to 60 kW. This power concentration per rack also drives the cooling system type and capacity, which must be accounted for in the overall capacity plan. The owner also needs to consider future capacity.
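
As a rough illustration of the capacity arithmetic, the short Python sketch below totals the IT load from rack counts and kW-per-rack densities and applies a planning margin for growth. The rack counts, densities, and 25% margin are illustrative assumptions, not recommended design values.

```python
# Minimal sketch: total IT capacity from rack counts and per-rack densities.
# All numbers are illustrative assumptions, not recommended design values.

def it_capacity_mw(racks_by_density_kw: dict, growth_factor: float = 1.25) -> float:
    """Sum kW-per-rack times rack count, apply a planning margin, return MW."""
    base_kw = sum(kw_per_rack * count for kw_per_rack, count in racks_by_density_kw.items())
    return base_kw * growth_factor / 1000.0

# Example: 200 racks at 8 kW and 40 high-density racks at 30 kW,
# with 25% headroom held for future growth (hypothetical planning margin).
racks = {8.0: 200, 30.0: 40}
print(f"Planned IT capacity: {it_capacity_mw(racks):.2f} MW")  # 3.50 MW
```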

Another major decision is the level of redundancy. Reliability is very important for data centers, and disruptions are costly. But the cost of building a data center increases significantly with higher reliability. Therefore, the owner should decide where to draw the line and determine how much risk is acceptable.

Auxiliary power: After the data center capacity is decided, the facility power must be computed. The facility power includes data center heating and cooling. A focus of recent years has been to make the facility (non-data) power as low as possible to improve efficiency and lower operating costs. To quantify this, the term "power usage effectiveness" (PUE) was coined: the ratio of the total power delivered to the facility to the power consumed by the IT equipment. The closer the PUE is to unity, the smaller the nonproducing facility power is.

Years ago, it was normal for the facility and cooling load to account for half of the total power delivered to the data center. That means that if a data center had an IT capacity of 10 MW, the facility and cooling load would also be 10 MW, for a PUE of 2. A PUE of 2 is deemed average efficiency, but it is not satisfactory to many data center owners. New technologies have pushed PUE very close to unity.
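
The PUE arithmetic above can be captured in a few lines. The first example reproduces the 10 MW case discussed here; the second uses a hypothetical facility load for a more modern plant, not a value from the article.

```python
# Minimal sketch of the PUE arithmetic described above.

def pue(it_load_mw: float, facility_load_mw: float) -> float:
    """PUE = total power delivered to the facility / power used by the IT equipment."""
    return (it_load_mw + facility_load_mw) / it_load_mw

# The example above: 10 MW of IT load plus 10 MW of facility and cooling load.
print(pue(10.0, 10.0))  # 2.0 -> average efficiency
# A hypothetical modern facility: 10 MW of IT load, 2 MW of facility load.
print(pue(10.0, 2.0))   # 1.2 -> considered very efficient
```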

Reliability and Tiers: To classify data centers in terms of reliability, the Uptime Institute created standards referred to as Tiers (see Table 1). Data centers are classified in four Tiers. Tier I data centers do not have a redundant electrical distribution system, and their components do not have redundant capacity. Tier II data centers differ from Tier I in that they have redundant-capacity components. Tier III data centers have dual-powered IT equipment and more than one distribution path to the servers. Tier IV data centers have all the features of Tier III data centers; in addition, they are fault tolerant, with more than one independent, active electrical power distribution path, dual-powered HVAC equipment, and energy storage capacity.
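
As a quick reference, the sketch below restates the Tier distinctions described in this paragraph as a simple lookup table. It is a paraphrase of the text above, not a complete restatement of the Uptime Institute standard.

```python
# Paraphrase of the Tier distinctions described above as a simple lookup table
# (based on this article's summary, not the full Uptime Institute standard).
TIER_FEATURES = {
    "Tier I":   {"redundant_components": False, "multiple_paths": False, "fault_tolerant": False},
    "Tier II":  {"redundant_components": True,  "multiple_paths": False, "fault_tolerant": False},
    "Tier III": {"redundant_components": True,  "multiple_paths": True,  "fault_tolerant": False},
    "Tier IV":  {"redundant_components": True,  "multiple_paths": True,  "fault_tolerant": True},
}

print(TIER_FEATURES["Tier III"])
```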

Determining which Tier to select depends on numerous factors. Many organizations used to operate large, consolidated data centers, which led them to choose Tier III or Tier IV systems. Many organizations in the financial industry also choose Tier III and Tier IV systems. Other organizations choose to operate multiple data centers that can pick up the load when another center goes down, which allows them to use lower Tier systems.

Usage: Data centers are also categorized according to their usage. These include data centers serving a private domain, such as a corporation or a government entity; data centers serving a public domain, such as Internet providers; and multi-user data centers.

Power distribution: Currently, there is debate about what kind of electrical power to use to feed data centers. Should it be ac or dc? Each has merits. Recently, dc power has received increasing consideration because data center computing equipment uses dc power. Having dc power distribution eliminates the need for transformers and ac-to-dc converters on the server floor. Using dc also eliminates harmonics because there is no switching of power. In addition, using dc eliminates conversion steps, which leads to higher efficiency (each conversion step introduces losses), thereby decreasing cost.
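
To see why eliminating conversion steps matters, the sketch below multiplies per-stage efficiencies for a hypothetical ac chain and a hypothetical dc chain. The stage efficiencies are assumed values chosen for illustration, not measurements of real equipment.

```python
# Illustrative comparison of cumulative conversion losses. The per-stage
# efficiencies below are assumed values, not measurements of real equipment.
from math import prod

# Hypothetical ac chain: double-conversion UPS, PDU transformer, server power supply.
ac_chain = [0.94, 0.985, 0.92]
# Hypothetical dc chain: one rectification stage, then a dc-dc server supply.
dc_chain = [0.96, 0.96]

print(f"ac chain efficiency: {prod(ac_chain):.1%}")  # ~85%
print(f"dc chain efficiency: {prod(dc_chain):.1%}")  # ~92%
```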

However, ac has been the dominant form of power distribution for many years (see Figure 2). The benefits of ac include readily available equipment, lower costs, and easier maintenance (maintenance crews already know the equipment, and spare parts are readily available). Historically, most ac power distribution systems were designed at 208/120 V. Evolving technologies have made the case for higher ac voltages of 400/415 V, and even 480 V, because of higher power demands and the efficiencies delivered by newer electrical equipment.

PUE: Another important factor in data center design and construction is PUE. The closer the PUE is to unity, the better. A PUE of 1.5 is considered the dividing line of efficiency: a PUE above that number indicates an inefficient data center, while a data center with a PUE below 1.5 is considered efficient. A data center with a PUE of 1.2 is considered very efficient. The most important part of a data center is the IT equipment; if there were no supporting (auxiliary) loads, the PUE would be 1. Because the auxiliary loads are necessary, the PUE is always greater than 1. The auxiliary loads include HVAC loads and small electrical loads, such as lighting and receptacles.
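
For completeness, the snippet below encodes the efficiency thresholds cited above (1.5 as the dividing line, 1.2 as very efficient) as a simple classification function.

```python
# Classification of a measured PUE using the thresholds cited in this article.

def classify_pue(pue: float) -> str:
    if pue <= 1.2:
        return "very efficient"
    if pue < 1.5:
        return "efficient"
    if pue == 1.5:
        return "average"
    return "inefficient"

print(classify_pue(1.18))  # very efficient
print(classify_pue(1.8))   # inefficient
```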

Electrical design

After the owner decides on the above considerations, the work of the design professionals begins, especially for the electrical engineers. Electrical engineers have to come up with a design that is efficient, has enough capacity for future growth, and avoids unnecessary frills.

Power distribution elements: There are many parts to electrical power distribution. It starts with the utility transformers, which in large data centers are owned by the data center's owner. After the power is stepped down from the utility transmission voltage to the distribution level, it goes through distribution switchgear that directs power to where it is needed. Typically, the power must be stepped down again, most often via substation transformers and through more than one path. Standby power, usually present in today's data centers, is often introduced at this level, bringing with it automatic transfer switch (ATS) equipment. From the ATS, the power goes to the servers (often via a UPS system), where it is converted from ac to dc for use by the servers. The next layer of distribution includes the switchboards and panelboards that feed the auxiliary, HVAC, and regular house loads. Power monitoring systems can also be employed at this point, providing valuable information on how different pieces of equipment are performing and how power is being used.

Going through so many pieces of equipment requires meticulous work. The design professional must be mindful of the cost of equipment and cables, as well as the losses introduced by each piece of equipment. Having so many pieces of electrical and mechanical equipment also means the engineer must be mindful of the many codes and regulations associated with these designs.
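
As an illustration of how per-equipment losses accumulate along the path described above, the sketch below works backward from an IT load to the power that must be drawn at the service entrance. The component list and efficiencies are placeholder assumptions, not data for any particular product.

```python
# Hypothetical loss budget for the distribution path described above; the
# component efficiencies are placeholder assumptions used for illustration.
DISTRIBUTION_CHAIN = [
    ("substation transformer",   0.990),
    ("switchgear, cabling, ATS", 0.995),
    ("UPS (double conversion)",  0.940),
    ("PDU transformer",          0.985),
]

def required_input_kw(it_load_kw: float) -> float:
    """Work backward from the IT load to the power drawn at the service entrance."""
    power = it_load_kw
    for _name, efficiency in reversed(DISTRIBUTION_CHAIN):
        power /= efficiency
    return power

it_load = 1000.0  # kW, illustrative
print(f"Power drawn for {it_load:.0f} kW of IT load: {required_input_kw(it_load):.0f} kW")
```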

Relevant codes: The relevant codes for data center design professionals include ANSI/TIA-942-2005: Telecommunications Infrastructure Standard for Data Centers, NFPA 70: National Electrical Code, and ASHRAE Standard 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings. Other very important codes include the International Building Code, International Mechanical Code, International Plumbing Code, International Fire Code, International Fuel Gas Code, International Energy Conservation Code, NFPA 72: National Fire Alarm and Signaling Code, and NFPA 90A: Standard for the Installation of Air-Conditioning and Ventilating Systems.

Depending on the size of the data center and the type of building hosting it, other codes such as NFPA 13: Standard for the Installation of Sprinkler Systems, NFPA 30: Flammable and Combustible Liquids Code, NFPA 10: Standard for Portable Fire Extinguishers, NFPA 101: Life Safety Code, NFPA 110: Standard for Emergency and Standby Power Systems, NFPA 780: Standard for the Installation of Lightning Protection Systems, and NFPA 20: Standard for the Installation of Stationary Pumps for Fire Protection may apply.

Utility service: As with any other project, designers start by considering the utility service. Because of the importance of reliability, owners must engage early on with the utility company to discuss the service. Depending on the size of the data center, the service options include a separate dedicated utility line or an existing, very reliable line.

The electrical designer, in close collaboration with the owner, must decide how many layers of equipment the distribution system will have. The more equipment introduced, the more potential points of failure there are. In mission critical facilities, it is important to avoid single points of failure.

The utility service will most likely be at medium voltage. Depending on the size and location of the data center, the service could be between 13.8 and 345 kV. The next step is to step down the voltage to a level usable by the servers. Most data center IT equipment accepts dual voltages, 100 to 120 V ac and 200 to 240 V ac. Using the higher voltage, 208 or 240 V, reduces losses and increases efficiency. Powering servers at 415 V ac further increases data center efficiency, making for a better PUE. If the designer decides to use the higher voltage, 415 V, the auxiliary mechanical load would then be at 480 V, which means autotransformers must be used to step the power from 415 V up to 480 V.
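
The efficiency argument for higher voltages comes down to current: for the same load, a higher line-to-line voltage draws less current, and conductor I²R losses fall with the square of the current. The sketch below shows the comparison for a hypothetical 100 kW branch load; the load and power factor are illustrative assumptions.

```python
# Feeder current at different three-phase voltages for the same load.
# The load and power factor are illustrative, not design values.
from math import sqrt

def line_current_a(load_kw: float, line_to_line_v: float, power_factor: float = 0.95) -> float:
    """Three-phase line current: I = P / (sqrt(3) * V_LL * pf)."""
    return load_kw * 1000.0 / (sqrt(3) * line_to_line_v * power_factor)

load_kw = 100.0  # hypothetical branch load
for volts in (208, 415, 480):
    print(f"{volts} V: {line_current_a(load_kw, volts):.0f} A")
# Conductor I^2*R losses scale with the square of the current, so the higher
# voltages draw proportionally less current and lose less in the conductors.
```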

At what point does one decide to convert the medium voltage to low voltage (below 600 V)? The answer depends on the size of the data center and the distance from the service drop. If the data center is part of a campus, it can be quite far from the service drop. In that case, it is preferable to distribute the electrical power at as high a voltage as possible, typically 13.8 kV. If the service voltage is higher than 13.8 kV, the first transformation takes place at the service entrance, stepping the voltage down from the utility voltage to 13.8 kV. This power is delivered to the data center, where the second transformation takes place, stepping the voltage down to 480 V or 415 V.
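
A rough feeder-loss comparison shows why long campus runs favor medium voltage. In the sketch below, the load, run length, and conductor resistance are placeholder assumptions; at 480 V the resulting current would in practice require many parallel conductor sets, which is exactly the cost penalty the higher distribution voltage avoids.

```python
# Why long campus runs favor medium voltage: a rough I^2*R feeder-loss comparison.
# Load, run length, and conductor resistance are placeholder assumptions.
from math import sqrt

def feeder_loss_kw(load_kw: float, v_ll: float, ohm_per_km: float, run_km: float,
                   power_factor: float = 0.95) -> float:
    """Three-phase conductor loss: 3 * I^2 * R, with R = ohm/km * km per phase."""
    current = load_kw * 1000.0 / (sqrt(3) * v_ll * power_factor)
    resistance = ohm_per_km * run_km
    return 3.0 * current ** 2 * resistance / 1000.0

load_kw, run_km, ohm_per_km = 2000.0, 0.8, 0.08  # 2 MW load, 800 m run, assumed cable R
for volts in (480, 13_800):
    print(f"{volts} V feeder: {feeder_loss_kw(load_kw, volts, ohm_per_km, run_km):.1f} kW lost")
# At 480 V the current (and loss) is impractical with one conductor set per phase;
# the many parallel sets needed in practice are the cost the 13.8 kV run avoids.
```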

Redundancy: What sets data centers apart is the level of redundancy. But everything comes at a price. The more layers of redundancy that are added, the more expensive construction of the data center becomes. Granted, having a data center blackout (or brownout) is very expensive as well.

The servers, by design, come with two power supplies. In addition, they are backed up by batteries. Therefore, there are two normal power feeds to each server, which means the servers should be served from two different substations. To be fully redundant, the substations need to be fed from two different utility lines. In the best-case scenario, the utility lines are tied together at some point in the electrical distribution system, and each utility line has enough capacity to carry the entire load of the data center. This scenario describes a fully redundant, normal power data center (see Figure 3).
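
A simplified availability calculation illustrates the benefit of two independent utility feeds. The single-feed availability figure below is a hypothetical assumption, and the result ignores common-mode failures such as a shared right-of-way or a regional outage.

```python
# Simplified 2N availability sketch: two independent utility sources in parallel.
# The single-feed availability figure is a hypothetical assumption.

def parallel_availability(a1: float, a2: float) -> float:
    """Probability that at least one of two independent sources is available."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

single_feed = 0.999  # assumed availability of one utility line
print(f"Single feed: {single_feed}")
print(f"Dual feed:   {parallel_availability(single_feed, single_feed):.7f}")
# Assumes fully independent failures; common-mode events (shared right-of-way,
# regional outages) reduce the benefit in practice.
```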

Normal power redundancy is very important, but it is not enough by itself. Normal power is often backed up by a standby system, generally composed of generators, which could be diesel, natural gas, or hybrid units. Diesel generators are the preferred type of generation because they are reliable machines and can be easily maintained. Depending on the type of building the data center is housed in, the generators may or may not be part of the life safety system. Nevertheless, the generators are usually set to be ready to back up the power system very quickly, typically within 10 to 30 seconds. The allowable time depends on how long the server backup batteries can last.
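
The relationship between battery ride-through and generator start time can be sanity-checked with a quick calculation. The runtime, start time, transfer time, and margin values below are illustrative assumptions, not design criteria.

```python
# Quick check that UPS battery runtime covers generator start and transfer time.
# Runtime, start time, transfer time, and margin are illustrative assumptions.

def ride_through_ok(battery_minutes: float, gen_start_s: float,
                    transfer_s: float = 10.0, margin: float = 2.0) -> bool:
    """True if stored energy covers (start + transfer) time with a safety margin."""
    return battery_minutes * 60.0 >= (gen_start_s + transfer_s) * margin

print(ride_through_ok(battery_minutes=5.0, gen_start_s=30.0))   # True
print(ride_through_ok(battery_minutes=0.5, gen_start_s=30.0))   # False
```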

Final thoughts

Although designing a data center’s electrical distribution system may seem straightforward, there are inherent challenges. The electrical engineer must:

  • Work closely with the owner to determine current and future data center capacity.
  • Work with the owner to decide which data center Tier would be appropriate for the client’s needs.
  • Work closely with the owner to determine the level of redundancy.
  • Design a system simple enough to be easy to operate, but one that is also robust.
  • Eliminate single points of failure.
  • Design a very efficient system with the goal of achieving a PUE under 1.5.
  • Apply the relevant industry codes and regulations.

Designing data centers is complex (see Figure 4). Building data centers is very expensive, as is their operation and maintenance. Continuous collaboration with the owner is extremely important—more so than in any other type of project. The successful completion and implementation of the design depends on that collaboration.


Eduard Pacuku is electrical project engineer at Jacobs, where he spends the majority of his time designing electrical distribution systems for universities (including laboratories), health care facilities, and data centers.