Data Center Power Use Effectiveness (PUE): Understanding the Contributing Factors

12/01/2008


If you are interested in data center design or performance, you’ve undoubtedly been seeing sales sheets, white papers, and news reports of products and techniques for increasing data center energy efficiency.

 

The metrics most often quoted for data center energy efficiency are Power Use Effectiveness (PUE) and its reciprocal, Data Center Infrastructure Efficiency (DCIE). PUE and DCIE compare the data center facility’s overall power consumption to the power consumed by the information and communication technology (ICT) equipment. These metrics, which are primarily facilities-based, will probably serve as the framework for a future standard; however, they are still being developed. Whether the metrics are being used prematurely depends on how they are being used (i.e., for self-promotion or for encouraging discussion) and whether the testing procedures or simulation algorithms are disclosed.
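In arithmetic terms, PUE is the facility's total power (or annual energy) divided by the ICT power, and DCIE is simply the reciprocal expressed as a percentage. The short Python sketch below illustrates the relationship; the kWh figures are placeholders, not data from any particular facility.

# Minimal sketch of the PUE/DCIE relationship described above.
# The kWh figures are placeholders, not measured data.

def pue(total_facility_kwh: float, ict_kwh: float) -> float:
    """Power Use Effectiveness: total facility energy divided by ICT energy."""
    return total_facility_kwh / ict_kwh

def dcie(total_facility_kwh: float, ict_kwh: float) -> float:
    """Data Center Infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
    return 100.0 * ict_kwh / total_facility_kwh

annual_facility_kwh = 10_000_000   # placeholder annual facility consumption
annual_ict_kwh = 5_500_000         # placeholder annual ICT consumption
print(f"PUE  = {pue(annual_facility_kwh, annual_ict_kwh):.2f}")    # ~1.82
print(f"DCiE = {dcie(annual_facility_kwh, annual_ict_kwh):.1f}%")  # ~55.0%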

 

Until a formal, recognized standard with a rigorous, engineering-based process to judge data center energy efficiency is developed, presented for public review, and formally adopted by a widespread consensus, published efficiency ratings will continue to be based on the interpretation of those who report the figures.

 

The good news is that there already exist some established energy standards, such as ANSI/ASHRAE/IESNA Standard 90.1-2007, Energy Standard for Buildings Except Low-Rise Residential Buildings, for other areas of the building that impact the data center’s efficiency (see Figure 1). This article examines these interdependencies, enabling designers, owners, and facility decision makers to ask tough questions when faced with new construction and renovation projects for data centers.

 

BUILDING ENVELOPE

Relative to the energy required to cool and power ICT equipment, the energy impact from the building envelope is small. However, the importance of code compliance and the effects of moisture migration cannot be overlooked.

 

For data centers, the integrity of the building’s vapor barrier is extremely important, as it safeguards against air leakage caused by the forces of wind and differential air pressure. It also minimizes migration of moisture driven by differential vapor pressure. Most data center cooling equipment is designed for sensible cooling only (no moisture removal from the air). Higher than expected moisture levels in the data center will result in greater energy consumption and possible operational problems caused by excessive condensate forming on the cooling coils in the air handling equipment.

 

HVAC, LIGHTING, AND POWER SYSTEMS

Standard 90.1 addresses, in great detail, the energy performance of HVAC and lighting systems, including control strategies, economizer options, and climate-specific topics. However, it provides little guidance on power distribution systems as they are applied to data centers. The lack of developed standards covering UPS efficiency and the efficiency of the overall power delivery chain (from incoming utility power down to the individual piece of ICT equipment) is a major gap that needs to be filled.

 

HVAC is the biggest non-ICT energy consumer in a data center facility. ASHRAE Standard 90.1 presents minimum energy performance of individual components such as chillers, direct-expansion (DX) systems, pumps, fans, motors, and heat-rejection equipment. To be in compliance with the standard, equipment must meet the specified energy-use metrics.

 

To determine whole-building energy performance, 90.1 sets forth a procedure that prescriptively defines how a given building’s energy performance (the “proposed” building) compares to the calculated theoretical energy performance (the “budget” building). This same method—with some augmentation to address data center-specific design and operations issues—can and should be used to determine the budget PUE and DCIE to benchmark data center energy use. (Incidentally, this method is used for the Energy and Atmosphere credit category in the U.S. Green Building Council LEED rating system, so as more data centers look to become LEED certified, this process needs to be used anyway.)
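As a rough illustration of how that comparison might be assembled, the sketch below assumes annual energy results are already available from both the proposed and budget models; the function names and kWh values are hypothetical.

# Hypothetical sketch: comparing a "proposed" data center design against an
# Appendix G-style "budget" (baseline) model, and deriving a budget PUE to
# benchmark against. All inputs are assumed annual simulation outputs.

def percent_improvement(proposed_kwh: float, budget_kwh: float) -> float:
    """Percent energy reduction of the proposed design relative to the budget model."""
    return 100.0 * (budget_kwh - proposed_kwh) / budget_kwh

def budget_pue(budget_facility_kwh: float, ict_kwh: float) -> float:
    """Benchmark PUE implied by the budget (baseline) building model."""
    return budget_facility_kwh / ict_kwh

# Placeholder annual results from two energy models of the same facility
ict_kwh = 5_500_000
budget_facility_kwh = 11_000_000    # baseline systems per 90.1 Appendix G (assumed result)
proposed_facility_kwh = 9_350_000   # proposed design with higher-efficiency plant (assumed result)

print(f"Budget PUE:   {budget_pue(budget_facility_kwh, ict_kwh):.2f}")   # 2.00
print(f"Proposed PUE: {proposed_facility_kwh / ict_kwh:.2f}")            # 1.70
print(f"Improvement:  {percent_improvement(proposed_facility_kwh, budget_facility_kwh):.0f}%")  # 15%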

 

BUILDING USE AND CONTROL STRATEGIES

This is an area where existing energy standards for buildings need to be adapted to accommodate the parameters that are unique to data centers. Commercial office buildings and energy-intensive facilities, such as laboratories and hospitals, do not have the same characteristics as data centers, so there are no available templates to use as a starting point:

 

  • The magnitude of process load (the ICT equipment) as compared to power load attributable to the occupants, lighting, and building envelope heat transfer is far greater in data centers than in any other commercial-building type.
  • Because data centers and other technology-intensive facilities can be gradually populated with technology equipment over time, energy usage must be examined at full- and part-load conditions. The estimated energy use of both the 90.1 baseline and the proposed data center design should be modeled according to the schedule of technology equipment implementation. Also, the energy modeling must include the level of reliability and equipment redundancy at both part-load and full build-out conditions. (If dual paths of critical power and multiple pieces of cooling equipment are used to achieve concurrent maintainability and fault tolerance, the same strategy must be used as the critical load scales up.) A simplified illustration of this part-load effect appears after this list.

The purpose of demonstrating energy use at part-load conditions is to raise awareness during system design and during the specification of major power distribution equipment, cooling equipment, and control sequences. This part-load performance must be documented using the energy simulation procedures described in 90.1.

 

  • Energy codes for commercial buildings assume that the primary function of the building is for human occupancy. Therefore, all of the energy-efficiency ratings in existing building energy performance standards are based on the ability to maintain an indoor environment that is comfortable and safe for humans. The indoor environmental conditions required for ICT equipment are far different than those required for human comfort and safety.

The baseline building should be designed to maintain the maximum recommended ASHRAE Class 1 environmental conditions of 78 F at the inlet to the technology equipment, as described in ASHRAE’s Thermal Guidelines for Data Processing Environments. Also, per the guidelines, the supply air dew point shall correspond to the minimum and maximum dew point temperatures at 78 F and 40% RH (51.7 F) and at 78 F and 55% RH (60.5 F), respectively. The baseline system shall use the same temperature rise across the technology equipment to maintain the same air quantities, fan motor power, and sensible heat gain.

 

  • Most energy-efficiency strategies for commercial buildings rely on an economizer cycle to reduce power required for cooling by using the ambient conditions either directly (as with an outdoor air economizer) or indirectly (as with a water-side economizer or heat reclaim). Using outside air for cooling in data centers is a viable concept, although researchers are still in the early phases of studying the long-term effects of outdoor air pollutants and temperature/moisture swings on ICT equipment.
  • Reliability and operational continuity are still the cornerstones for judging the success of a data center. These requirements can result in multiple paths of active power and cooling and central plant equipment running at low loads. Consider a data center that is brought online and populated with servers, storage devices, and networking gear over two to three years before the facility is running at full load—the facility would have multiple power and cooling distribution paths and central plant equipment (chillers, pumps, UPS) running at 10% to 20% of capacity for several months. In addition to inefficient power use, this type of scenario can cause operational problems if not addressed during the design phase.
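To make the part-load concern concrete, the following sketch estimates PUE at several build-out stages. The ICT ramp-up percentages and the UPS and cooling part-load curves are illustrative assumptions only, not values taken from ASHRAE 90.1 or any manufacturer data.

# Hypothetical sketch of why part-load modeling matters during build-out.
# The ICT ramp-up schedule and the part-load loss curves for the UPS and the
# cooling plant are assumptions for illustration only.

def ups_loss_fraction(load_fraction: float) -> float:
    """Assumed UPS losses as a fraction of rated UPS capacity: a fixed
    no-load loss plus a component proportional to load."""
    return 0.04 + 0.05 * load_fraction  # 4% fixed + 5% load-proportional (assumption)

def hvac_kw_per_ict_kw(load_fraction: float) -> float:
    """Assumed cooling power drawn per kW of ICT load; efficiency degrades
    at low plant loading because chillers, pumps, and fans run far from
    their design points (assumption for illustration)."""
    return 0.45 + 0.15 / max(load_fraction, 0.1)

RATED_ICT_KW = 1000.0  # design (full build-out) ICT load, placeholder value

for pct in (10, 25, 50, 75, 100):
    frac = pct / 100.0
    ict_kw = RATED_ICT_KW * frac
    ups_loss_kw = RATED_ICT_KW * ups_loss_fraction(frac)  # losses tied to rated capacity
    hvac_kw = ict_kw * hvac_kw_per_ict_kw(frac)
    other_kw = 0.02 * RATED_ICT_KW                        # lighting, controls, etc. (assumed)
    pue = (ict_kw + ups_loss_kw + hvac_kw + other_kw) / ict_kw
    print(f"{pct:>3d}% build-out: PUE = {pue:.2f}")

Even with these rough assumptions, the part-load penalty is clear: PUE in this sketch falls from about 3.6 at 10% build-out to roughly 1.7 at full load.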

CLIMATE

Climate has the greatest influence on a data center’s energy efficiency, followed closely by HVAC system type, electrical distribution system reliability, and part-loading conditions. Because the primary effect of climate is on HVAC system power use, a climate-based energy analysis is required to accurately determine PUE and DCIE.

 

The energy use of the HVAC system (DX, chilled-water, air-cooled, water-cooled) will vary based on the outdoor air dry-bulb and wet-bulb temperatures, the chilled-water supply temperature (for chilled-water systems only), and the wet-bulb temperature entering the cooling coil (DX systems). The annual energy consumption of a particular HVAC system is determined by the use of biquadratic formulas developed to estimate electrical usage of vapor compression cooling equipment. Depending on the type of HVAC system being considered, the variables used in these equations represent temperatures for outdoor wet bulb, outdoor dry bulb, chilled-water supply, and condenser water supply.
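As an illustration, the sketch below evaluates that biquadratic curve using the water-cooled chilled-water coefficients and rated kW/ton listed in Table 4; the chilled-water and condenser-water temperatures chosen are arbitrary example conditions.

# Sketch of the biquadratic curve described above, using the water-cooled
# chilled-water coefficients and rated kW/ton from Table 4. The operating
# temperatures below are arbitrary example conditions, not design values.

def kw_per_ton(rated_kw_per_ton, coeffs, t1, t2):
    """Adjusted kW/ton: rated value scaled by the curve
    f(T1, T2) = c1 + c2*T1 + c3*T1^2 + c4*T2 + c5*T2^2 + c6*T1*T2."""
    c1, c2, c3, c4, c5, c6 = coeffs
    fraction = c1 + c2 * t1 + c3 * t1**2 + c4 * t2 + c5 * t2**2 + c6 * t1 * t2
    return rated_kw_per_ton * fraction

# Water-cooled chilled-water plant (Table 4): T1 = chilled-water supply temperature (F),
# T2 = entering condenser-water temperature (F)
WC_CHW_COEFFS = (1.15362, -0.03068, 0.00031, 0.00671, 0.00005, -0.00009)
RATED_KW_PER_TON = 0.79

# Example operating point: 45 F chilled-water supply, 75 F entering condenser water
print(f"{kw_per_ton(RATED_KW_PER_TON, WC_CHW_COEFFS, 45.0, 75.0):.2f} kW/ton")  # ~0.70

At this example operating point the curve evaluates to about 0.88, or roughly 0.70 kW/ton, illustrating how the rated value shifts with operating temperatures.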

 

If we were to stitch all of this together into a process to determine PUE and DCIE, the energy use components would break down as shown in Table 5.

 

Using this process, we can develop a profile of how PUE and DCIE vary by HVAC system type and climate. Running an analysis in accordance with the requirements of 90.1, with specific parameters such as building size, construction materials, computer equipment load, and reliability configuration, will result in a minimum energy performance profile for a particular data center. We can then use the profile to create minimum PUE and DCIE requirements for different HVAC systems in different climate zones.
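The sketch below shows how the Table 5 components roll up into PUE and DCIE once annual energy figures are available for each category; every kWh value shown is a placeholder.

# Sketch of rolling up the Table 5 components into PUE and DCIE.
# Every kWh figure below is a placeholder, not a measured or simulated result.

annual_kwh = {
    # HVAC loads
    "chiller": 2_600_000, "fans": 900_000, "pumps": 300_000,
    "heat_rejection": 250_000, "humidification": 100_000, "other_hvac": 50_000,
    # Electrical losses
    "ups_loss": 500_000, "pdu_loss": 150_000, "generator_standby": 60_000,
    "distribution_loss": 90_000, "lighting": 80_000, "other_electrical": 40_000,
    # ICT
    "ict": 5_500_000,
}

total_kwh = sum(annual_kwh.values())
pue = total_kwh / annual_kwh["ict"]
dcie = 100.0 * annual_kwh["ict"] / total_kwh

for name, kwh in annual_kwh.items():
    print(f"{name:<18} {kwh:>10,} kWh  {100.0 * kwh / total_kwh:5.1f}% of total")
print(f"PUE = {pue:.2f}   DCiE = {dcie:.1f}%")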

 

Judging a facility’s energy performance without putting it in context of the climate and HVAC system type will result in skewed data that is not truly representative.

 

Table 1: Allowable chilled- and condenser-water pump power, based on paragraphs G3.1.3.10 and G3.1.3.11 of 90.1. Source for all tables: William Kosik.

Pumping equipment type     ASHRAE allowable gpm/ton
Chilled water              2.4
Condenser water            3.0

 

 

Table 2: Allowable fan power, based on paragraph G3.1.2.9 and the associated table in ASHRAE 90.1.

Supply air volume    Constant volume (systems 1-4)         Variable volume (systems 5-8)
>20,000 cfm          17.25 + (cfm - 20,000) x 0.0008625    24 + (cfm - 20,000) x 0.0012
≤20,000 cfm          17.25 + (cfm - 20,000) x 0.000825     24 + (cfm - 20,000) x 0.001125

 

 

Table 3: Allowable power for heat-rejection equipment, based on Table 6.8.1G of ASHRAE 90.1.

Heat rejection equipment                 Minimum gpm/hp
Propeller or axial fan cooling towers    38.2
Centrifugal fan cooling towers           20.0

 

 

Table 4: Minimum energy performance of the four primary data center HVAC system types and the variables for the biquadratic equation used to estimate annual energy consumption, based on data from ASHRAE 90.1 and DOE-2.2 energy analysis simulation algorithms.

Curve equation: % of kW/ton = f(T1, T2) = c1 + c2*T1 + c3*T1² + c4*T2 + c5*T2² + c6*T1*T2

Air-cooled DX
  T1 = DX cooling coil entering wet-bulb temperature; T2 = outdoor dry-bulb temperature
  Coefficients: c1 = -1.06393, c2 = 0.03066, c3 = -0.00013, c4 = 0.01542, c5 = 0.00005, c6 = -0.00021
  kW/ton by capacity: 5.4–11.3 tons: 1.09; 11.3–20.0 tons: 1.11; 20.0–63.3 tons: 1.22; >63.3 tons: 1.26

Water-cooled DX
  T1 = DX cooling coil entering wet-bulb temperature; T2 = outdoor dry-bulb temperature
  Coefficients: c1 = -1.06393, c2 = 0.03066, c3 = -0.00013, c4 = 0.01542, c5 = 0.00005, c6 = -0.00021
  kW/ton by capacity: 5.4–11.3 tons: 0.99; 11.3–20.0 tons: 1.06; 20.0–63.3 tons: 1.11

Air-cooled chilled water
  T1 = chilled-water supply temperature; T2 = outdoor dry-bulb temperature
  Coefficients: c1 = 0.93631, c2 = -0.01016, c3 = 0.00022, c4 = -0.00245, c5 = 0.00014, c6 = -0.00022
  kW/ton by capacity: 5.4–11.3 tons: 1.35; 11.3–20.0 tons: 1.35; 20.0–63.3 tons: 1.35; >63.3 tons: 1.35

Water-cooled chilled water
  T1 = chilled-water supply temperature; T2 = entering condenser-water temperature
  Coefficients: c1 = 1.15362, c2 = -0.03068, c3 = 0.00031, c4 = 0.00671, c5 = 0.00005, c6 = -0.00009
  kW/ton by capacity: 5.4–11.3 tons: 0.79; 11.3–20.0 tons: 0.79; 20.0–63.3 tons: 0.79

 

 

Table 5: A breakdown of the components required to determine PUE and DCIE. Each component is reported in kWh and as a percentage of the facility total.

HVAC loads
  Chiller/CDPR unit power
  Fan power
  Pump power
  Heat rejection
  Humidification
  Other HVAC
Electrical losses
  UPS loss
  PDU loss
  Backup generator standby loss (heaters, chargers, fuel system, controls)
  Power distribution loss
  Lighting
  Other electrical
ICT
  ICT equipment

LOOKING FORWARD

The Green Grid, an organization whose mission is to promote and develop technical standards on data center power efficiency, has made the most progress in attempting to use metrics to develop a detailed, repeatable approach for reporting efficiency ratings. Until standards are in place, the Green Grid’s yardsticks will help level the playing field for both new data center design and existing facility energy-efficiency reporting.

 

Author Information
Kosik is energy and sustainability director of HP Critical Facilities Services, A Company of HP, Chicago. He is a member of the Editorial Advisory Board of Consulting-Specifying Engineer.

