Modular data centers

Using a modular design approach in data centers will help lower capital spending while at the same time increasing efficiency, reliability, and expandability.


Modular design, or the construction of an object by joining together standardized units to form larger compositions, plays an essential role in the planning, design, and construction of a data center. The notion of modularity is all around us, ranging from cellular organisms to computer programming to the construction of commercial office buildings. Modular design in commercial buildings is not new and has been used for many reasons, for example in building zoning, manufacturing modules off-site, facility expansion, and high-rise requirements.

When comparing the modularity of power and cooling systems of standard commercial office buildings with those of data centers, both will include multiples of certain equipment types (chillers, pumps, air handling units, transformers, UPSs, power distribution units, etc.). What differs in a data center compared to an office building is the amount of equipment in the facility on a per-unit-area basis.

Consider that a 15,000-sq-ft data center will have as much basic power and cooling infrastructure as a 400,000-sq-ft office building. On top of this, the data center will have reliability requirements that far exceed those of the office building, resulting in even more power and cooling equipment. (The “extra” equipment is used as standby in case of equipment failure and for planned maintenance activities, keeping the overall system operational. This equipment will be configured in blocks with arrangements such as N+1, N+2, 2(N+1), etc. based on the reliability requirements.) Why is modular design an important strategy when attempting to optimize the reliability, efficiency and scalability of a data center? This article will explore some of the answers to that question.
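The redundancy arrangements mentioned above (N+1, N+2, 2(N+1), etc.) translate directly into installed equipment counts. As a minimal sketch of that arithmetic, the hypothetical helper below computes how many units a given arrangement requires; the load and unit sizes in the example are illustrative assumptions, not figures from the article:

```python
import math

def units_required(load_kw, unit_kw, redundancy="N+1", dual_path=False):
    """Total installed units for a given redundancy arrangement.

    "N" is the number of units needed to carry the full load; the extra
    units are standby capacity for equipment failures and planned
    maintenance. 2(N+1) duplicates a full N+1 block on a second path.
    """
    n = math.ceil(load_kw / unit_kw)             # units needed for the load
    extra = {"N": 0, "N+1": 1, "N+2": 2}[redundancy]
    per_path = n + extra
    return per_path * (2 if dual_path else 1)

# Example: a 1,200 kW IT load served by 500 kW UPS modules
print(units_required(1200, 500, "N+1"))          # 3 + 1 = 4 units
print(units_required(1200, 500, "N+1", True))    # 2(N+1) = 8 units
```

The same bookkeeping applies to chillers, pumps, and air handling units, which is why the per-unit-area equipment density of a data center so far exceeds that of an office building.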

Hallmarks of modular design

As a new data center goes live, there is typically a period of time when the IT equipment is ramping up to its full potential. This will most often come in the form of empty raised floor space that is waiting for build-out with servers, networking, and storage gear. Even after the IT gear is installed, there will be a period of time before utilization drives the power consumption (and heat dissipation) of the IT gear beyond its minimum. In some data centers this might take a few months, and in others even longer. Many data centers contain IT equipment that is designed to never hit 100% of its computing ability (this is done for capacity and redundancy considerations). This exemplifies why a data center facility needs to be planned and designed with malleability and the capability to react to shifts, expansions, and contractions in power use as the business needs of the organization drive the IT equipment requirements.

So what does a modular data center look like? The needs of the end user will drive the specific type of design approach (see Figure 1), but all approaches will have similar characteristics that will help in achieving the user’s optimization goals:

  1. Container: This is typically what one might think of first in discussing modular data centers. Containerized data centers were first introduced using standard 20- and 40-ft shipping containers. Newer designs now use custom-built containers with insulated walls and other features that are better suited for housing computing equipment. Because the containers will need central power and cooling systems, they will typically be grouped and fed from a central source. Expansion is accomplished by installing additional containers along with the required additional power and cooling source.
  2. Industrialized data center: The idea behind this type of data center comes from a hybrid model of a traditional brick-and-mortar data center and the containerized data center. The data center is built in increments like the container, but the process allows for much more customization of power and cooling system choices. The modules are connected to a central spine containing “people spaces,” while the power and cooling equipment is located adjacent to the data center modules. Expansion is accomplished by mirroring the installation and requires additional power and cooling sources.
  3. Traditional data center: Using traditional design and construction techniques for data centers is a very viable approach when planning a modular data center. There are different approaches to building in modularity. The entire shell of the building can be built leaving space for future data center growth. The infrastructure area needs to be carefully planned to ensure sufficient space for future installation of power and cooling equipment. Also, the central plant will need to continue to operate and support the IT loads during expansion. A second method is to leave space on the site for future expansion of a new data center (mega)module. This allows for an isolated construction process, but if the intention is to interconnect the power and cooling systems, the same issues regarding unintentional failures will come into play.

Optimizing the modular design

Within the context of needing additional power and cooling equipment due to reliability concerns, and the need to increase, decrease, or shift power for the IT equipment, applying a modularized approach will also reduce energy consumption in the data center. A monolithic approach to power and cooling systems yields fewer, larger pieces of equipment than a modular design approach would. For smaller, less complex data centers this approach is entirely acceptable. However, for large data centers with multiple raised floors, possibly having different reliability requirements, a monolithic approach would produce significant difficulties in optimizing the reliability, scalability, and efficiency of the data center.

To demonstrate this idea, consider a data center that is designed to be expanded from the day-one build of one data hall to a total of three data halls. To achieve concurrent maintainability, the power and cooling systems will be designed to an N+2 topology. To optimize the system design and equipment selection, the operating efficiencies of the electrical distribution system and the chiller equipment need to be obtained. At a minimum, the operating efficiencies should be calculated at four points: 25%, 50%, 75%, and 100% of total operating capacity. The following parameters are to be used in the analysis:

  1. Electrical/UPS system: For the purposes of the analysis, a double conversion UPS was used. The unloading curves were generated using a three-parameter analysis model, with capacities defined in accordance with the European Commission “Code of Conduct on Energy Efficiency and Quality of AC Uninterruptible Power Systems (UPS).” The system was analyzed at 25%, 50%, 75%, and 100% of total IT load.
  2. Chillers: Water-cooled chillers were modeled using the ASHRAE minimum energy requirements (for kW/ton) and a biquadratic-in-ratio-and-ΔT equation for modeling the compressor power consumption. The system was analyzed at 25%, 50%, 75%, and 100% of total IT load.
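A three-parameter UPS loss model of the kind referenced above splits losses into a fixed (no-load) term, a term proportional to load, and a square-law (I²R) term. The sketch below illustrates the shape of such an unloading curve; the coefficients are illustrative assumptions, not values from the EU Code of Conduct or from the article's analysis:

```python
# Illustrative three-parameter UPS loss model: fixed, proportional,
# and square-law loss terms, all expressed as fractions of rated capacity.
# Coefficients a, b, c are assumed for illustration only.

def ups_loss_fraction(load_frac, a=0.04, b=0.01, c=0.02):
    """Losses as a fraction of rated capacity at a given load fraction."""
    return a + b * load_frac + c * load_frac ** 2

def ups_efficiency(load_frac):
    """Output power divided by input power at a given load fraction."""
    if load_frac == 0:
        return 0.0
    return load_frac / (load_frac + ups_loss_fraction(load_frac))

for lf in (0.25, 0.50, 0.75, 1.00):
    print(f"{lf:.0%} load -> efficiency {ups_efficiency(lf):.1%}")
```

Because the fixed term does not scale with load, efficiency falls off sharply at the low load fractions that dominate a data center's early life, which is what makes partial loading the central question of the analysis.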

Analysis approach

The goal of the analysis is to build a mathematical model defining the relationship between the electrical losses at the four loading points, comparing two system types (see Figure 2). This same approach is used to determine the chiller consumption. The following two system types are the basis for the analysis:

  1. Monolithic design: The approach used in this design is that 100% of the IT is covered by one monolithic system. Also, it is assumed that the monolithic system has the ability to modulate (power output or cooling capacity) to match the four loading points.
  2. Modular design: This approach consists of providing four equal-sized units that correspond to the four loading points.

It is important to understand that this analysis demonstrates how to go about developing a numerical relationship between energy efficiency of a monolithic versus a modular system type. There are many additional variables that will change the output and may have a significant effect on the comparison of the two system types.

For the electrical system, the efficiency losses of a monolithic system were calculated at the four loading points. The resulting data points were then compared to the efficiency losses of four modular systems each loaded to one-quarter of the IT load (mimicking how the power requirements increase over time). Using the modular system efficiency loss as the denominator, and the efficiency losses of the monolithic system as the numerator, a multiplier was developed.
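The multiplier described above can be sketched numerically. The code below, a minimal illustration using an assumed three-parameter loss model and an assumed 1,000 kW IT load (neither taken from the article), divides the monolithic system's losses at each loading point by the losses of a modular build-out in which quarter-sized units are energized as the load grows:

```python
# Sketch of the electrical-loss multiplier: monolithic losses divided by
# modular losses at each loading point. Loss coefficients and the IT load
# are illustrative assumptions.

def loss_kw(rated_kw, load_kw, a=0.04, b=0.01, c=0.02):
    """Losses in kW for a unit of a given rating at a given load."""
    lf = load_kw / rated_kw
    return rated_kw * (a + b * lf + c * lf ** 2)

IT_LOAD = 1000.0  # kW, illustrative total IT load

for frac in (0.25, 0.50, 0.75, 1.00):
    load = IT_LOAD * frac
    mono = loss_kw(IT_LOAD, load)              # one full-size system
    n_units = int(frac * 4)                    # quarter-size units energized
    modular = n_units * loss_kw(IT_LOAD / 4, load / n_units)
    print(f"{frac:.0%} load: multiplier = {mono / modular:.2f}")
```

With these assumed coefficients the multiplier is largest at 25% load and converges toward 1.0 as the load approaches 100%, reflecting the fact that a lightly loaded monolithic system carries the full fixed losses of its oversized equipment.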

For the chillers, the same approach is taken comparing chiller compressor power of monolithic and modular systems. A monolithic chiller system was modeled at the four loading points to determine the peak power at each point. Then four modular chiller systems were modeled, each at one-quarter of the IT load. Using the modular system power draw as the denominator, and the monolithic system power draw as the numerator, a multiplier was developed. The electrical and chiller system multipliers can be used as an indicator during the process of optimizing energy use, expandability, first cost, and reliability.
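The chiller comparison follows the same pattern. As a simplified stand-in for the biquadratic compressor model, the sketch below uses a quadratic part-load ratio (PLR) curve; the kW/ton rating, cooling load, and curve coefficients are all illustrative assumptions rather than values from the article's analysis:

```python
# Sketch of the chiller power multiplier: monolithic compressor power
# divided by modular compressor power at each loading point, using a
# simple quadratic part-load curve (coefficients assumed for illustration).

RATED_KW_PER_TON = 0.56   # illustrative water-cooled chiller efficiency
TOTAL_TONS = 300.0        # illustrative peak cooling load

def chiller_kw(rated_tons, load_tons, c=(0.30, 0.20, 0.50)):
    """Compressor power in kW at a given part-load ratio."""
    plr = load_tons / rated_tons
    power_frac = c[0] + c[1] * plr + c[2] * plr ** 2  # fraction of rated power
    return rated_tons * RATED_KW_PER_TON * power_frac

for f in (0.25, 0.50, 0.75, 1.00):
    load = TOTAL_TONS * f
    mono = chiller_kw(TOTAL_TONS, load)
    n = int(f * 4)                            # quarter-size chillers energized
    modular = n * chiller_kw(TOTAL_TONS / 4, load / n)
    print(f"{f:.0%} load: chiller multiplier = {mono / modular:.2f}")
```

As with the electrical system, the penalty for the monolithic chiller is greatest at light load and disappears at full load, so the two multipliers can be weighed together when trading off energy use, expandability, first cost, and reliability.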


Data centers are planned to meet the requirements of the IT equipment, both what is planned for day one and what is planned for a future installation. The rate at which the IT equipment “goes live” in the data center will also shape the solution for the building design, power, and cooling systems. For many industry sectors, using a modular design approach will help lower capital spending while at the same time increasing efficiency, reliability, and expandability. The analysis techniques demonstrated in this article can be used as a component of the overall planning and design of the data center.

Kosik is principal data center energy technologist with HP Enterprise Business, Technology Services. He is a member of the Consulting-Specifying Engineer editorial advisory board.

Modeling electrical and chiller systems 

Modeling the electrical and chiller systems and comparing monolithic and modular design approaches resulted in the following findings:

  1. At 25% load, both the electrical and chiller systems demonstrated significant power inefficiencies. The electrical losses are more than 2 times those of the modular system, while the chiller system draws more than 1.5 times the power of a modular solution.
  2. Between 25% and 50% of total load, the multipliers of both the electrical and chiller systems started to decrease, down to 1.3 for the electrical system and 1.2 for the chiller system.
  3. Between 50% and 100%, the rate of efficiency improvement slows such that between 75% and 100% loading there is little change (relative to the other loading points).
  4. There is a close correlation (based on the shapes of the two curves) between the electrical and chiller system efficiency multipliers.
