Project success is the sum of all parts

In a data center, uptime is more than just the sum of its pieces.

09/27/2012


In most cases, following best practices can resolve nearly any engineering issue. Let me give you an example.

On the surface, the data center had all the pieces: good engineering, commissioning, proactive maintenance, training, procedures, monitoring, and redundancy. Uptime should be assured. Yet in one reported outage event earlier this summer, it all came crashing down. What happened? 

Several things happened. A double substation utility power loss from a cable fault started the mess; uncommon, perhaps, but not rare. The site went to generator backup; OK, so far so good. But then a generator that had been maintained earlier went offline due to overheating; the culprit was a cooling fan that failed to operate, perhaps because of human error following the maintenance. Next, the distributed loads failed over to another protected power feed, which immediately opened because of an incorrectly configured circuit breaker. Finally, the IT software attempted to transfer the affected loads transparently to another site, except that the transfer failed, causing a catastrophic crash. On the surface, none of the four events alone should have affected the loads. In fact, no combination of two or even three of them should have. 
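As a rough, back-of-the-envelope illustration (the probabilities below are assumptions for the sake of the sketch, not data from the incident), here is why four layered protections should not all fail on the same demand if the failures were truly independent:

```python
# Back-of-the-envelope sketch: the per-demand failure probabilities below are
# assumptions for illustration, not figures from the incident.
p_generator_fails = 0.01        # backup generator trips (e.g., overheats) when called on
p_breaker_misconfigured = 0.01  # alternate feed opens on an incorrectly set breaker
p_it_failover_fails = 0.05      # software transfer to another site does not complete

# The utility loss is the initiating event; load is lost only if every
# remaining layer of protection fails on the same demand.
p_load_loss = p_generator_fails * p_breaker_misconfigured * p_it_failover_fails
print(f"P(load loss | utility loss) ~ {p_load_loss:.0e}")  # ~5e-06 if independent
```

A number that small is the tell: when a cascade like this actually happens, the failures were not independent at all, but were linked by latent maintenance and configuration errors.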

This story is not unique. Several other major data center load losses have occurred in the past year alone, all at sites with some kind of redundant configuration. Yet in a recent study I did on data center availability, we discovered dozens of data centers with less than Tier IV resiliency that experienced no load losses, in some cases for periods of more than 15 years. So what lesson, if any, is behind all of this? 

Data center availability, or uptime, is more than the sum of its pieces. The sites we found that were successful routinely employed “best practices” at virtually all phases of design, construction, and operation. Engineering took into account lessons learned from early grid collapses, technology improvements, and cumulative industry failure assessments. Monitoring was excellent and done in real time, often with trending information. Configurations, even if less than 2N+1, were sound for the mission and more redundant in the areas known to be weakest; where budgets were an issue, that meant batteries rather than magnetics. Labeling was clear and complete. 
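To see why putting redundancy where the weakness is pays off, consider a minimal sketch with assumed availability figures (illustrative only, not measured data):

```python
# Illustrative, assumed availabilities for a simple series chain of subsystems.
a_switchgear = 0.9999
a_ups_magnetics = 0.9999
a_battery_string = 0.999   # assumed to be the weakest single element

# In a series chain, overall availability is the product of the pieces,
# so the weakest element dominates.
series = a_switchgear * a_ups_magnetics * a_battery_string

# Adding one redundant battery string (parallel, assumed independent failures):
# the battery subsystem fails only if both strings fail.
a_battery_redundant = 1 - (1 - a_battery_string) ** 2
improved = a_switchgear * a_ups_magnetics * a_battery_redundant

print(f"Single battery string:     {series:.6f}")    # ~0.998800
print(f"Redundant battery strings: {improved:.6f}")  # ~0.999799
```

The same dollars spent on the already-strong magnetics would barely move the product.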

Commissioning was not just done, but done correctly. Maintenance was reliability-centered, proactive, and managed. Security and change procedures were rigid. Vendor technicians were not allowed to go off script. If any component with system impact was changed out, testing followed on that subsystem before it was relied upon. There were double sign-offs on methods of procedure (MOPs/MAPs) before any actions on critical gear. Escalation procedures were in place and instantly available on the building management system/network management system (BMS/NMS). And in the optimal case, almost every conceivable failure eventuality was brainstormed by stakeholders, with simulations scripted and rehearsed for both system effect and personnel training. All of these separate considerations must be integrated, not just into the formal plan, but into the mind-set. In large measure, it is attitude. And commissioning and continued testing are particularly critical. 

Commissioning, or testing in a broad sense, provides peace of mind to data center management. “We passed Cx” sounds good, but what does it mean? You load-banked a UPS or genset for hours. You checked harmonics. Great, but what happens in a real-world transfer scenario when phase displacement exists, or poor power factor (PF), or high capacitance, or cumulative inrushes, or improperly high recharge, or something else? A 15-minute multi-string sealed battery was rundown tested, but was it tested with one string out? The replacement circuit breaker (CB) arrived, but did you notice that it was set to its fastest trip setting for legal reasons? 
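The battery question is easy to quantify. A minimal sketch with hypothetical numbers shows why the rundown test with one string out is the one that matters:

```python
# Hypothetical figures: a "15-minute" UPS battery plant built from 4 parallel strings.
rated_runtime_min = 15.0
total_strings = 4
strings_in_service = 3   # one string out for maintenance, or failed

# First-order estimate: the remaining strings share the full load, so each
# discharges faster and runtime drops at least proportionally.
naive_runtime = rated_runtime_min * strings_in_service / total_strings
print(f"Naive runtime with one string out: {naive_runtime:.1f} min")  # about 11 min

# Real cells fare worse than this at higher discharge rates (Peukert effect),
# so the only trustworthy number is the one measured in an actual rundown
# test performed with the string out.
```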

Every subsystem should be individually wrung out at worst-case conditions, and then the entire system should be tested in the worst-case anticipated loading and transfer scenarios. Any major change or modification to a critical element afterward signals the need for a reasonable retest for confirmation. This requires been-there, done-that experience to execute well, but these steps might have prevented a crash one summer day. 
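The retest rule lends itself to simple bookkeeping. Here is a hypothetical sketch (not any particular site's system): any change to a critical element invalidates the prior test evidence until that subsystem has been re-verified.

```python
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    """Tracks whether a critical subsystem's test evidence is still current."""
    name: str
    verified: bool = True                            # last test result still stands
    pending_changes: list = field(default_factory=list)

    def record_change(self, description: str) -> None:
        # Any modification with system impact invalidates earlier testing.
        self.pending_changes.append(description)
        self.verified = False

    def record_retest(self, passed: bool) -> None:
        # Only a fresh test on the changed subsystem restores confidence.
        self.verified = passed
        if passed:
            self.pending_changes.clear()

ups_a = Subsystem("UPS A")
ups_a.record_change("replaced output circuit breaker")
assert not ups_a.verified        # must not be relied upon until retested
ups_a.record_retest(passed=True)
assert ups_a.verified
```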

Murphy’s Law says if something can go wrong, it will. To combat Murphy’s Law when engineering building systems, follow best practices.

 


Dennis DeCoster is CEO of Mission Critical West. He is an acknowledged expert in critical facilities infrastructure, and has consulted for hundreds of data centers and satellite earth stations over his 30-year career.


