Data Center Power Usage Effectiveness (PUE): Understanding the Contributing Factors

12/01/2008


If you are interested in data center design or performance, you’ve undoubtedly been seeing sales sheets, white papers, and news reports of products and techniques for increasing data center energy efficiency.

 

The metrics most often quoted for data center energy efficiency are Power Usage Effectiveness (PUE) and its reciprocal, Data Center Infrastructure Efficiency (DCIE). PUE and DCIE compare the data center facility’s overall power consumption to the power consumed by the information and communication technology (ICT) equipment. These primarily facilities-based metrics will probably serve as the framework for a future standard; however, they are still being developed. Whether or not they are being used prematurely depends on how they are used (i.e., for self-promotion or for encouraging discussion) and on whether the testing procedures or simulation algorithms are disclosed.
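In code, the two metrics reduce to a single ratio. The sketch below is an illustration (not from the article) that treats total facility power and ICT power as given meter readings:

```python
def pue(total_facility_kw: float, ict_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by ICT power."""
    if ict_kw <= 0:
        raise ValueError("ICT load must be positive")
    return total_facility_kw / ict_kw

def dcie(total_facility_kw: float, ict_kw: float) -> float:
    """Data Center Infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
    return 100.0 * ict_kw / total_facility_kw

# Example: 1,500 kW at the utility meter, 1,000 kW delivered to ICT equipment
print(pue(1500, 1000))   # 1.5
print(dcie(1500, 1000))  # ~66.7%
```

A lower PUE (closer to 1.0) or a higher DCIE (closer to 100%) means less infrastructure overhead per unit of ICT power.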

 

Until a formal, recognized standard with a rigorous, engineering-based process to judge data center energy efficiency is developed, presented for public review, and formally adopted by a widespread consensus, published efficiency ratings will continue to be based on the interpretation of those who report the figures.

 

The good news is that some established energy standards already exist, such as ANSI/ASHRAE/IESNA Standard 90.1-2007, Energy Standard for Buildings Except Low-Rise Residential Buildings, covering other areas of the building that affect the data center’s efficiency (see Figure 1). This article examines these interdependencies so that designers, owners, and facility decision makers can ask tough questions when faced with new construction and renovation projects for data centers.

 

BUILDING ENVELOPE

Relative to the energy required to cool and power ICT equipment, the energy impact from the building envelope is small. However, the importance of code compliance and the effects of moisture migration cannot be overlooked.

 

For data centers, the integrity of the building’s vapor barrier is extremely important, as it safeguards against air leakage caused by the forces of wind and differential air pressure. It also minimizes migration of moisture driven by differential vapor pressure. Most data center cooling equipment is designed for sensible cooling only (no moisture removal from the air). Higher than expected moisture levels in the data center will result in greater energy consumption and possible operational problems caused by excessive condensate forming on the cooling coils in the air handling equipment.

 

HVAC, LIGHTING, AND POWER SYSTEMS

Standard 90.1 addresses, in great detail, the energy performance of HVAC and lighting systems, including control strategies, economizer options, and climate-specific topics. However, it provides little guidance on power distribution systems as they are applied to data centers. The lack of developed standards on UPS efficiency and on the efficiency of the overall power delivery chain (from incoming utility power down to the individual piece of ICT equipment) is a major gap that needs to be filled.

 

HVAC is the biggest non-ICT energy consumer in a data center facility. ASHRAE Standard 90.1 presents minimum energy performance of individual components such as chillers, direct-expansion (DX) systems, pumps, fans, motors, and heat-rejection equipment. To be in compliance with the standard, equipment must meet the specified energy-use metrics.

 

To determine whole-building energy performance, 90.1 sets forth a procedure that prescriptively defines how a given building’s energy performance (the “proposed” building) compares to the calculated theoretical energy performance (the “budget” building). This same method—with some augmentation to address data center-specific design and operations issues—can and should be used to determine the budget PUE and DCIE to benchmark data center energy use. (Incidentally, this method is used for the Energy and Atmosphere credit category in the U.S. Green Building Council LEED rating system, so as more data centers look to become LEED certified, this process needs to be used anyway.)

 

BUILDING USE AND CONTROL STRATEGIES

This is an area where existing energy standards for buildings need to be adapted to accommodate the parameters that are unique to data centers. Commercial office buildings and energy-intensive facilities, such as laboratories and hospitals, do not have the same characteristics as data centers, so there are no available templates to use as a starting point:

 

  • The magnitude of process load (the ICT equipment) as compared to power load attributable to the occupants, lighting, and building envelope heat transfer is far greater in data centers than in any other commercial-building type.
  • Because data centers and other technology-intensive facilities can be gradually populated with technology equipment over time, energy usage must be examined at full- and part-load conditions. The estimated energy use of both the 90.1 baseline and the proposed data center design should be modeled according to the schedule of technology equipment implementation. Also, the energy modeling must include the level of reliability and equipment redundancy at both part-load and full build-out conditions. (If dual paths of critical power and multiple pieces of cooling equipment are used to achieve concurrent maintainability and fault tolerance, the same strategy must be used as the critical load scales up.)

The concept behind demonstrating energy at part-load conditions is to raise awareness during system design and while specifying major power distribution and cooling equipment and equipment control sequences. This part-load performance must be documented using the energy simulation procedures described in 90.1.

 

  • Energy codes for commercial buildings assume that the primary function of the building is for human occupancy. Therefore, all of the energy-efficiency ratings in existing building energy performance standards are based on the ability to maintain an indoor environment that is comfortable and safe for humans. The indoor environmental conditions required for ICT equipment are far different than those required for human comfort and safety.

The baseline building should be designed to maintain the maximum recommended ASHRAE Class 1 environmental conditions of 78 F at the inlet to the technology equipment, as described in ASHRAE’s Thermal Guidelines for Data Processing Environments. Also, per the guideline, the supply air dew point shall fall between the minimum and maximum dew point temperatures at 78 F and 40% RH (51.7 F) and at 78 F and 55% RH (60.5 F). The baseline system shall use the same temperature rise across the technology equipment to maintain the same air quantities, fan motor power, and sensible heat gain.

 

  • Most energy-efficiency strategies for commercial buildings rely on an economizer cycle to reduce the power required for cooling by using the ambient conditions either directly (as with an outdoor air economizer) or indirectly (as with a water-side economizer or heat reclaim). Using outside air for cooling in data centers is a viable concept, although researchers are still in the early phases of studying the long-term effects of outdoor air pollutants and temperature/moisture swings on ICT equipment.
  • Reliability and operational continuity are still the cornerstones for judging the success of a data center. These requirements can result in multiple paths of active power and cooling and central plant equipment running at low loads. Consider a data center that is brought online and populated with servers, storage devices, and networking gear over two to three years before the facility is running at full load—the facility would have multiple power and cooling distribution paths and central plant equipment (chillers, pumps, UPS) running at 10% to 20% of capacity for several months. In addition to inefficient power use, this type of scenario can cause operational problems if not addressed during the design phase.
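The Class 1 dew-point limits cited above (51.7 F at 78 F/40% RH and 60.5 F at 78 F/55% RH) can be reproduced with a standard psychrometric approximation. The sketch below uses the Magnus formula with the common coefficients a = 17.27 and b = 237.7 C; it is an illustrative stand-in for a full ASHRAE psychrometric routine, not part of the guideline itself:

```python
import math

def dew_point_f(dry_bulb_f: float, rh_percent: float) -> float:
    """Dew point via the Magnus approximation (good to a fraction of a degree
    over typical indoor conditions)."""
    a, b = 17.27, 237.7  # Magnus coefficients, degrees C
    t_c = (dry_bulb_f - 32.0) / 1.8
    gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
    td_c = b * gamma / (a - gamma)
    return td_c * 1.8 + 32.0

print(round(dew_point_f(78, 40), 1))  # 51.7
print(round(dew_point_f(78, 55), 1))  # 60.5
```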

CLIMATE

Climate has the greatest influence on a data center’s energy efficiency, followed closely by HVAC system type, electrical distribution system reliability, and part-loading conditions. Because the primary effect of climate is on HVAC system power use, a climate-based energy analysis is required to accurately determine PUE and DCIE.

 

The energy use of the HVAC system (DX, chilled-water, air-cooled, water-cooled) will vary based on the outdoor air dry-bulb and wet-bulb temperatures, the chilled-water supply temperature (for chilled-water systems only), and the wet-bulb temperature entering the cooling coil (DX systems). The annual energy consumption of a particular HVAC system is determined by the use of biquadratic formulas developed to estimate electrical usage of vapor compression cooling equipment. Depending on the type of HVAC system being considered, the variables used in these equations represent temperatures for outdoor wet bulb, outdoor dry bulb, chilled-water supply, and condenser water supply.
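As an illustration, evaluating such a curve takes only a few lines. The coefficients and the 1.22 kW/ton rating below are the air-cooled DX values from Table 4; the 67 F entering wet-bulb / 95 F outdoor dry-bulb operating point is an assumed example condition:

```python
def pct_of_rated_kw_per_ton(t1: float, t2: float, c: tuple) -> float:
    """Biquadratic performance curve: % of rated kW/ton = f(T1, T2)."""
    c1, c2, c3, c4, c5, c6 = c
    return c1 + c2*t1 + c3*t1**2 + c4*t2 + c5*t2**2 + c6*t1*t2

# Air-cooled DX coefficients from Table 4;
# T1 = DX coil entering wet-bulb (F), T2 = outdoor dry-bulb (F)
AIR_COOLED_DX = (-1.06393, 0.03066, -0.00013, 0.01542, 0.00005, -0.00021)

rated_kw_per_ton = 1.22  # 20.0-63.3 ton air-cooled DX rating, from Table 4
frac = pct_of_rated_kw_per_ton(t1=67.0, t2=95.0, c=AIR_COOLED_DX)
print(round(rated_kw_per_ton * frac, 3))  # 1.203 kW/ton at these conditions
```

Summing this operating kW/ton over every hour of a climate-specific weather year yields the annual HVAC energy figure the analysis needs.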

 

If we were to stitch all of this together into a process to determine PUE and DCIE, the energy use components would break down as shown in Table 5.

 

Using this process, we can develop a profile of how PUE and DCIE vary by HVAC system type and climate. Running an analysis in accordance with the requirements of 90.1, with specific parameters such as building size, construction materials, computer equipment load, and reliability configuration, will result in a minimum energy performance profile for a particular data center. We can then use the profile to create minimum PUE and DCIE requirements for different HVAC systems in different climate zones.

 

Judging a facility’s energy performance without putting it in context of the climate and HVAC system type will result in skewed data that is not truly representative.

 

Table 1: Allowable chilled- and condenser-water pump power, based on paragraphs G3.1.3.10 and G3.1.3.11 of 90.1. Source for all tables: William Kosik.

Pumping equipment type    ASHRAE allowable gpm/ton
Chilled water             2.4
Condenser water           3.0

 

 

Table 2: Allowable fan power, based on the table in paragraph G3.1.2.9 of ASHRAE 90.1.

Supply air volume    Constant volume (systems 1-4)         Variable volume (systems 5-8)
>20,000 cfm          17.25 + (cfm - 20,000) x 0.0008625    24 + (cfm - 20,000) x 0.0012
≤20,000 cfm          17.25 + (cfm - 20,000) x 0.000825     24 + (cfm - 20,000) x 0.001125
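The Table 2 formulas can be sketched as a small lookup function. Note that the table does not restate units here; 90.1 expresses the fan allowance in brake horsepower, which is assumed below:

```python
def allowable_fan_power(cfm: float, variable_volume: bool) -> float:
    """Allowable supply-fan power per Table 2 (assumed brake hp, per 90.1 G3.1.2.9)."""
    if cfm > 20_000:
        slope = 0.0012 if variable_volume else 0.0008625
    else:
        slope = 0.001125 if variable_volume else 0.000825
    base = 24.0 if variable_volume else 17.25
    return base + (cfm - 20_000) * slope

# Example: a 50,000 cfm system under both supply-air strategies
print(allowable_fan_power(50_000, variable_volume=False))  # 43.125
print(allowable_fan_power(50_000, variable_volume=True))   # 60.0
```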

 

 

Table 3: Allowable power for heat-rejection equipment, based on Table 6.8.1G of ASHRAE 90.1.

Heat rejection equipment                 Minimum gpm/hp
Propeller or axial fan cooling towers    38.2
Centrifugal fan cooling towers           20.0
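Tables 1 and 3 combine naturally: the condenser-water flow allowance (3.0 gpm/ton) sets the flow, and the tower's gpm/hp rating then caps the allowable fan power. A sketch, with the 500-ton plant size as an assumed example:

```python
def condenser_flow_gpm(tons: float, gpm_per_ton: float = 3.0) -> float:
    """Baseline condenser-water flow from Table 1 (3.0 gpm/ton)."""
    return tons * gpm_per_ton

def tower_fan_hp(flow_gpm: float, gpm_per_hp: float = 38.2) -> float:
    """Allowable cooling-tower fan power from Table 3 (propeller/axial: 38.2 gpm/hp)."""
    return flow_gpm / gpm_per_hp

flow = condenser_flow_gpm(500)        # 1500.0 gpm for a 500-ton plant
print(round(tower_fan_hp(flow), 1))   # ~39.3 hp for a propeller/axial tower
```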

 

 

Table 4: Minimum energy performance of the four primary data center HVAC system types and the variables for the biquadratic equation used to estimate annual energy consumption, based on data from ASHRAE 90.1 and DOE-2.2 energy analysis simulation algorithms.

Curve equation: % of kW/ton = f(T1, T2) = c1 + c2*T1 + c3*T1² + c4*T2 + c5*T2² + c6*T1*T2

Air-cooled DX (kW/ton)
T1 = DX cooling coil entering wet-bulb temperature; T2 = outdoor dry-bulb temperature
Coefficients: c1 = -1.06393, c2 = 0.03066, c3 = -0.00013, c4 = 0.01542, c5 = 0.00005, c6 = -0.00021
  5.4 to 11.3 tons:   1.09 kW/ton
  11.3 to 20.0 tons:  1.11 kW/ton
  20.0 to 63.3 tons:  1.22 kW/ton
  >63.3 tons:         1.26 kW/ton

Water-cooled DX (kW/ton)
T1 = DX cooling coil entering wet-bulb temperature; T2 = outdoor dry-bulb temperature
Coefficients: c1 = -1.06393, c2 = 0.03066, c3 = -0.00013, c4 = 0.01542, c5 = 0.00005, c6 = -0.00021
  5.4 to 11.3 tons:   0.99 kW/ton
  11.3 to 20.0 tons:  1.06 kW/ton
  20.0 to 63.3 tons:  1.11 kW/ton

Air-cooled chilled water (kW/ton)
T1 = chilled-water supply temperature; T2 = outdoor dry-bulb temperature
Coefficients: c1 = 0.93631, c2 = -0.01016, c3 = 0.00022, c4 = -0.00245, c5 = 0.00014, c6 = -0.00022
  5.4 to 11.3 tons:   1.35 kW/ton
  11.3 to 20.0 tons:  1.35 kW/ton
  20.0 to 63.3 tons:  1.35 kW/ton
  >63.3 tons:         1.35 kW/ton

Water-cooled chilled water (kW/ton)
T1 = chilled-water supply temperature; T2 = entering condenser-water temperature
Coefficients: c1 = 1.15362, c2 = -0.03068, c3 = 0.00031, c4 = 0.00671, c5 = 0.00005, c6 = -0.00009
  5.4 to 11.3 tons:   0.79 kW/ton
  11.3 to 20.0 tons:  0.79 kW/ton
  20.0 to 63.3 tons:  0.79 kW/ton

 

 

Table 5: A breakdown of the components required to determine PUE and DCIE. Each component is tallied in kWh and as a percentage of the total.

HVAC loads
  Chiller/CDPR unit power
  Fan power
  Pump power
  Heat rejection
  Humidification
  Other HVAC

Electrical losses
  UPS loss
  PDU loss
  Backup generator standby loss (heaters, chargers, fuel system, controls)
  Power distribution loss
  Lighting
  Other electrical

ICT
  ICT equipment
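Once the Table 5 kWh tallies are in hand, PUE and DCIE follow directly. The sketch below uses placeholder annual figures (illustrative only, not measured data) to show the bookkeeping:

```python
# Hypothetical annual kWh tallies for the Table 5 components (illustrative only)
hvac_kwh = {"chiller": 1_800_000, "fans": 600_000, "pumps": 200_000,
            "heat_rejection": 150_000, "humidification": 50_000, "other_hvac": 40_000}
electrical_kwh = {"ups_loss": 450_000, "pdu_loss": 120_000, "generator_standby": 30_000,
                  "distribution_loss": 60_000, "lighting": 90_000, "other": 20_000}
ict_kwh = 5_000_000

total_kwh = sum(hvac_kwh.values()) + sum(electrical_kwh.values()) + ict_kwh
pue = total_kwh / ict_kwh           # total facility energy / ICT energy
dcie = 100.0 * ict_kwh / total_kwh  # reciprocal of PUE, as a percentage

# Report each component in kWh and as a % of total, mirroring Table 5
for name, kwh in {**hvac_kwh, **electrical_kwh, "ICT equipment": ict_kwh}.items():
    print(f"{name:>20}: {kwh:>9,} kWh  {100 * kwh / total_kwh:5.1f}% of total")
print(f"PUE = {pue:.2f}, DCIE = {dcie:.1f}%")
```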

LOOKING FORWARD

The Green Grid, an organization whose mission is to promote and develop technical standards on data center power efficiency, has made the most progress in attempting to use metrics to develop a detailed, repeatable approach for reporting efficiency ratings. Until standards are in place, the Green Grid’s yardsticks will help level the playing field for both new data center design and existing facility energy-efficiency reporting.

 

Author Information

Kosik is energy and sustainability director of HP Critical Facilities Services, A Company of HP, Chicago. He is a member of the Editorial Advisory Board of Consulting-Specifying Engineer.


