Modular data centers

Using a modular design approach in data centers will help lower capital spending while at the same time increasing efficiency, reliability, and expandability.


Modular design, or the construction of an object by joining together standardized units to form larger compositions, plays an essential role in the planning, design, and construction of a data center. The notion of modularity is all around us, ranging from cellular organisms to computer programming to the construction of commercial office buildings. Modular design in commercial buildings is not new and has been used for many reasons, for example in building zoning, manufacturing modules off-site, facility expansion, and high-rise requirements.

When comparing the modularity of power and cooling systems of standard commercial office buildings with those of data centers, certainly both will include multiples of certain types of equipment (chiller, pump, air handling unit, transformer, UPS, power distribution unit, etc.). What differs in a data center compared to an office building is the amount of equipment in the facility on a per-unit area basis.

Consider that a 15,000-sq-ft data center will have as much basic power and cooling infrastructure as a 400,000-sq-ft office building. On top of this, the data center will have reliability requirements that far exceed those of the office building, resulting in even more power and cooling equipment. (The “extra” equipment is used as standby in case of equipment failure and for planned maintenance activities, keeping the overall system operational. This equipment will be configured in blocks with arrangements such as N+1, N+2, 2(N+1), etc. based on the reliability requirements.) Why is modular design an important strategy when attempting to optimize the reliability, efficiency and scalability of a data center? This article will explore some of the answers to that question.

Hallmarks of modular design

As a new data center goes live, there is typically a period of time when the IT equipment is ramping up to its full potential. This will most often come in the form of empty raised floor space that is waiting for build-out with servers, networking, and storage gear. Even after the IT gear is installed, there will be a period of time before utilization drives the power consumption (and heat dissipation) of the IT gear above its minimum. In some data centers this might take a few months, and in others even longer. Many data centers contain IT equipment that is designed to never hit 100% of its computing ability (this is done for capacity and redundancy considerations). This exemplifies why a data center facility needs to be planned and designed with the flexibility to react to shifts, expansions, and contractions in power use as the business needs of the organization drive the IT equipment requirements.

So what does a modular data center look like? The needs of the end user will drive the specific type of design approach (see Figure 1), but all approaches will have similar characteristics that will help in achieving the user’s optimization goals:

  1. Container: This is typically what one might think of first in discussing modular data centers. Containerized data centers were first introduced using standard 20- and 40-ft shipping containers. Newer designs now use custom-built containers with insulated walls and other features that are better suited for housing computing equipment. Because the containers will need central power and cooling systems, they will typically be grouped and fed from a central source. Expansion is accomplished by installing additional containers along with the required additional power and cooling source.
  2. Industrialized data center: The idea behind this type of data center comes from a hybrid model of a traditional brick-and-mortar data center and the containerized data center. The data center is built in increments like the container, but the process allows for much more customization of power and cooling system choices. The modules are connected to a central spine containing “people spaces,” while the power and cooling equipment is located adjacent to the data center modules. Expansion is accomplished by mirroring the installation and requires additional power and cooling sources.
  3. Traditional data center: Using traditional design and construction techniques for data centers is a very viable approach when planning a modular data center. There are different approaches to building in modularity. The entire shell of the building can be built leaving space for future data center growth. The infrastructure area needs to be carefully planned to ensure sufficient space for future installation of power and cooling equipment. Also, the central plant will need to continue to operate and support the IT loads during expansion. A second method is to leave space on the site for future expansion of a new data center (mega)module. This allows for an isolated construction process, but if the intention is to interconnect the power and cooling systems, the same issues regarding unintentional failures will come into play.

Optimizing the modular design

Within the context of needing additional power and cooling equipment due to reliability concerns, and the need to increase, decrease, or shift power for the IT equipment, applying a modularized approach will also reduce energy consumption in the data center. Using a monolithic approach for power and cooling systems yields a smaller number of larger pieces of equipment (as compared to a modular design approach). For smaller, less complex data centers this approach is entirely acceptable. However, for large data centers with multiple raised floors, possibly having different reliability requirements, a monolithic approach would produce significant difficulties in optimizing the reliability, scalability, and efficiency of the data center.

To demonstrate this idea, consider a data center that is designed to be expanded from the day-one build of one data hall to a total of three data halls. To achieve concurrent maintainability, the power and cooling systems will be designed to an N+2 topology. To optimize the system design and equipment selection, the operating efficiencies of the electrical distribution system and the chiller equipment need to be obtained. At a minimum, the operating efficiencies should be calculated at four points: 25%, 50%, 75%, and 100% of total operating capacity. The following parameters are to be used in the analysis:

  1. Electrical/UPS system: For the purposes of the analysis, a double-conversion UPS was used. The unloading curves were generated using a three-parameter analysis model and capacities defined in accordance with the European Commission “Code of Conduct on Energy Efficiency and Quality of AC Uninterruptible Power Systems (UPS).” The system was analyzed at 25%, 50%, 75%, and 100% of total IT load.
  2. Chillers: Water-cooled chillers were modeled using the ASHRAE minimum energy requirements (for kW/ton) and a biquadratic-in-ratio-and-ΔT equation for modeling the compressor power consumption. The system was analyzed at 25%, 50%, 75%, and 100% of total IT load.
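As a rough sketch of the UPS unloading curve described above, a three-parameter model can be written as a no-load term, a load-proportional term, and a square-law term. The coefficient values below are illustrative assumptions chosen to resemble a typical double-conversion UPS, not values taken from the analysis:

```python
# Illustrative three-parameter UPS loss model (coefficients are assumptions):
#   absolute loss = capacity * (a + b*(L/C) + c*(L/C)^2)
# where a = no-load loss, b = proportional loss, c = square-law loss,
# all expressed per unit of UPS capacity.

def ups_loss(load, capacity, a=0.02, b=0.01, c=0.015):
    """Absolute losses of a double-conversion UPS at a given load."""
    ratio = load / capacity
    return capacity * (a + b * ratio + c * ratio ** 2)

def ups_efficiency(load, capacity):
    """Efficiency = useful output / (output + losses)."""
    return load / (load + ups_loss(load, capacity))

# Evaluate the unloading curve at the four loading points used in the analysis.
for pct in (0.25, 0.50, 0.75, 1.00):
    print(f"{pct:.0%} load: efficiency = {ups_efficiency(pct, 1.0):.1%}")
```

Because the no-load term is fixed regardless of load, efficiency drops sharply at light loading, which is the behavior the modular comparison exploits.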

Analysis approach

The goal of the analysis is to build a mathematical model defining the relationship between the electrical losses at the four loading points, comparing two system types (see Figure 2). This same approach is used to determine the chiller consumption. The following two system types are the basis for the analysis:

  1. Monolithic design: The approach used in this design is that 100% of the IT is covered by one monolithic system. Also, it is assumed that the monolithic system has the ability to modulate (power output or cooling capacity) to match the four loading points.
  2. Modular design: This approach consists of providing four equal-sized units that correspond to the four loading points.

It is important to understand that this analysis demonstrates how to go about developing a numerical relationship between energy efficiency of a monolithic versus a modular system type. There are many additional variables that will change the output and may have a significant effect on the comparison of the two system types.

For the electrical system, the efficiency losses of a monolithic system were calculated at the four loading points. The resulting data points were then compared to the efficiency losses of four modular systems each loaded to one-quarter of the IT load (mimicking how the power requirements increase over time). Using the modular system efficiency loss as the denominator, and the efficiency losses of the monolithic system as the numerator, a multiplier was developed.

For the chillers, the same approach is taken, comparing chiller compressor power of monolithic and modular systems. A monolithic chiller system was modeled at the four loading points to determine the peak power at each point. Then four modular chiller systems were modeled, each at one-quarter of the IT load. Using the modular system power as the denominator, and the monolithic system power as the numerator, a multiplier was developed. The electrical and chiller system multipliers can be used as an indicator during the process of optimizing energy use, expandability, first cost, and reliability.
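The multiplier calculation above can be sketched numerically. The loss curve and its coefficients here are illustrative assumptions (the same three-parameter form described for the UPS); what matters is the ratio structure: the modular units run at or near their full capacity as they are brought online, while the single monolithic unit part-loads:

```python
# Sketch of the monolithic-vs-modular loss multiplier (coefficients are
# illustrative assumptions, not the article's measured data).

def unit_loss(load, capacity, a=0.02, b=0.01, c=0.015):
    """Absolute losses of one unit: capacity-scaled three-parameter curve."""
    ratio = load / capacity
    return capacity * (a + b * ratio + c * ratio ** 2)

def monolithic_loss(load_fraction):
    # One unit sized for 100% of the IT load, modulating down to the load.
    return unit_loss(load_fraction, 1.0)

def modular_loss(load_fraction, modules=4):
    # Equal-sized units brought online as the load grows; each active
    # unit runs at its full capacity (e.g., 2 of 4 modules at 50% load).
    cap = 1.0 / modules
    active = round(load_fraction / cap)
    return active * unit_loss(cap, cap)

# Multiplier = monolithic losses / modular losses at each loading point.
for f in (0.25, 0.50, 0.75, 1.00):
    print(f"{f:.0%} load: multiplier = {monolithic_loss(f) / modular_loss(f):.2f}")
```

With these assumed coefficients the multiplier falls from roughly 2 at 25% load toward 1.0 at full load, the same general shape the analysis reports; different equipment curves will shift the exact values.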


Data centers are planned to meet the requirements of the IT equipment, both what is planned for day one and what is planned for future installation. The rate at which the IT equipment “goes live” in the data center will also shape the solution for the building design, power, and cooling systems. For many industry sectors, using a modular design approach will help in lowering capital spending while at the same time increasing efficiency, reliability, and expandability. The analysis techniques demonstrated in this article can be used as a component of the overall planning and design of the data center.

Kosik is principal data center energy technologist with HP Enterprise Business, Technology Services. He is a member of the Consulting-Specifying Engineer editorial advisory board.

Modeling electrical and chiller systems 

Modeling the electrical and chiller systems and comparing monolithic and modular design approaches resulted in the following findings:

  1. At 25% load, the electrical and chiller systems demonstrated significant power inefficiencies. The electrical losses are more than 2 times those of the modular system, while the chiller system will draw more than 1.5 times the power of a modular solution.
  2. Between 25% and 50% of total load, the multipliers of both the electrical and chiller systems started to decrease, down to 1.3 for the electrical systems and 1.2 for the chiller systems.
  3. Between 50% and 100%, the rate of increase in efficiencies slows such that between 75% and 100% loading, there is little change (relative to the other loading points).
  4. There is a close correlation (based on the shape of the two curves) between the electrical and chiller system efficiency multipliers.
