Beyond the hype: Data will drive predictive maintenance
Merging the virtual world with real plant floor issues is the biggest challenge surrounding Big Data, artificial intelligence (AI) and the Internet of Things (IoT).
Today’s manufacturing news is full of stories about Big Data, artificial intelligence (AI), the Internet of Things (IoT) and cloud computing. It’s often hard to distinguish between hype and reality amidst this noise, but there is a reason these topics keep coming up. We’re seeing more application of these technologies in our daily lives, from speech recognition and driverless cars to intelligent assistants and greater connectivity.
While consumers reap the immediate benefits of these technologies as they become available, the industrial world is slower to adopt new technologies. Mistakes are costly, investments are large and existing assets are not easy to replace. The result is a slower, prove-it-to-me technology adoption cycle, and often a willingness to let others go first.
The use of data and analytics in maintenance and machine operations, however, has been expanding, particularly over the last few years. The introduction of advanced control systems, data historians, predictive maintenance (PdM) and advanced pattern recognition technologies, along with computerized maintenance management systems (CMMS) and enterprise asset management (EAM) systems, has significantly improved asset availability and reduced downtime.
The current technology trends of cloud computing, Big Data, and machine learning will drive the next wave of improvement in asset management, but we shouldn’t expect that methods developed for consumer markets will translate fluidly to industrial markets. The nature of the data is different and the problems are different. As the industrial world begins to leverage Big Data analytics, use cases are emerging as success stories for other organizations. One growing area is data analytics around machine maintenance.
Three factors in downtime
Let’s start with the reasons why machines break. It boils down to basic physics. Here are three key facts regarding machine degradation and how machines fail:
- All machines face multiple sources of degradation: chemical, fatigue, abrasion, friction, etc.
- The rates of these core degradation mechanisms will vary depending on the design, usage, and environment of the machine.
- Most machines are complex and consist of many components, and the various sources of degradation affect components at different rates.
The implication of these basic facts is that predicting failure is like predicting the weather. Many factors interact in a “chaotic” process that makes exact, long-term prediction of failure difficult or even impossible.
Now, traditional PdM sounds like a good idea. The basic concept behind PdM is to intervene and intercept degradation before it causes ultimate failure. The catch comes from the complexity and interaction of the factors mentioned above: how do we know when and where to intervene? For the most part, setting traditional PdM schedules is educated guesswork based on OEM recommendations or what was done in the past. Detailed, data-based analysis is difficult and does not happen frequently. That said, time-based PdM is better than nothing.
For these reasons, most maintenance professionals have recognized that some form of “condition-based maintenance” is better. Various PdM technologies have been developed, such as vibration analysis for rotating equipment, thermography for electrical systems, and ultrasound for piping and vessel thickness. However, the “predictive” name is a bit of a misnomer. These technologies don’t actually predict failure; rather, they detect and bring to light signs of deterioration that can drive a maintenance action before failure results. These methods are focused on different failure modes and sensor approaches, and are built around specific technologies to detect specific kinds of problems.
PdM approaches are well established across most industries and have proven valuable, but they are inherently limited to the types of failure modes they are designed to detect. Within the power and process industries, the prevalence of digital control systems, increased instrumentation and standardization of data historians has fed growing interest in how to make sense of this much larger set of data.
In the early 2000s, technology emerged to facilitate better use of sensor data. Mathematical methods generally called “anomaly detection” or “advanced pattern recognition” were developed to detect equipment degradation like traditional PdM technologies, but they were based on generally available analog data such as pressures, temperatures, and flows. Sensor management systems that use a wide variety of data are now accepted and being deployed in the power generation and oil and gas industries.
These anomaly detection techniques are empirical (meaning data-based), and analyze typical sensor data by building models of “normal” operation and then alerting when abnormal conditions occur (hence the term “anomaly detection”). These methods are near-real-time or “streaming” analyses, and essentially cancel out the variation of normal machine operation to allow the detection of more subtle variation caused by deterioration. Application of these technologies has proven return on investment and made catastrophic failure of monitored equipment very rare.
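To make the idea concrete, here is a minimal sketch of this kind of empirical anomaly detection: a model of “normal” operation is learned from correlated sensor data, and readings that break the learned pattern are flagged. The sensor values, the Mahalanobis-distance model, and the alert threshold are all invented for illustration; commercial advanced pattern recognition systems are considerably more sophisticated.

```python
# A minimal residual-style anomaly detection sketch (illustrative only).
# Assumes three correlated analog sensors (pressure, temperature, flow);
# the data, model choice, and threshold are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# --- Build a model of "normal" operation from historical data ---
# Simulate 1,000 samples of healthy operation: temperature and flow
# track pressure plus noise, mimicking correlated plant sensors.
pressure = rng.normal(100.0, 5.0, 1000)
temp = 0.8 * pressure + rng.normal(0.0, 1.0, 1000)
flow = 1.2 * pressure + rng.normal(0.0, 1.5, 1000)
train = np.column_stack([pressure, temp, flow])

mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def distance(sample):
    """Mahalanobis distance of a sample from the 'normal' envelope."""
    d = sample - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Alert threshold: the 99.9th percentile of training distances.
threshold = np.percentile([distance(row) for row in train], 99.9)

# --- Streaming check: flag readings that break the learned pattern ---
healthy = np.array([102.0, 81.5, 122.0])   # consistent with training
degraded = np.array([102.0, 95.0, 122.0])  # temperature running hot

for label, sample in [("healthy", healthy), ("degraded", degraded)]:
    score = distance(sample)
    flag = "ANOMALY" if score > threshold else "normal"
    print(f"{label}: score={score:.2f} threshold={threshold:.2f} -> {flag}")
```

Note how the model alerts on the degraded sample even though every individual reading is within its normal single-sensor range; it is the broken correlation between sensors that gives the deterioration away.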
The next frontier for data
So where can we expect data analytics technologies to be applied next? Now that sensor data is being archived in data historians, and most customers have EAM systems for work execution, there is growing interest in “data-mining” this information and combining it with anomaly detection methods to improve real-time diagnostics (most anomaly detection methods still require some level of expert interpretation) and, eventually, to forecast time-to-failure (prognostics). This process starts with advanced asset performance management systems.
However, a big challenge in the industrial analytics world is that most machine learning and AI techniques are data-driven. While the industrial world has a lot of data, the type of data needed for analysis from a reliability and maintenance perspective is hard to access because these systems were not designed for this type of analysis. The data needed to improve diagnostics and prognostics is around failures and fault cases, and since modern equipment is already very reliable, this kind of information can be harder to come by than one might think.
Even though most industrial companies are sensitive about their operational data, many will begin to understand that sharing their de-identified fault and failure data with others to gain access to a larger pool of shared information will be beneficial. This is one area where there are significant differences between the industrial and consumer worlds. In the industrial world, failure can be highly varied and rare. Since there is currently no industrial equivalent to Google or Amazon to aggregate machine data across enterprises, the pools of data available to develop these kinds of analytics are limited to large enterprises or OEMs.
With pooled, de-identified performance and fault case data, the next wave of machine data analytics becomes possible. Using fault cases and modern analysis techniques, it will be possible to match emerging fault patterns to historic cases. With a “library” of previous similar cases to compare against, automated diagnostics can provide a description of the problem and, based on past deterioration rates, provide meaningful statistical forecasts of potential time-to-failure.
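As a rough illustration of the case-matching idea, the sketch below compares an emerging fault signature against a small hypothetical case library using nearest-neighbor distance. The case names, signatures, and time-to-failure figures are invented; a real system would draw on a far larger pool of pooled fault cases.

```python
# A hypothetical sketch of matching an emerging fault signature against a
# "library" of historic cases using nearest-neighbor similarity.
import numpy as np

# Each historic case: a normalized fault signature (residual per sensor)
# plus the observed days from first detection to failure.
case_library = {
    "bearing wear":     (np.array([0.1, 0.9, 0.2]), 45),
    "seal leak":        (np.array([0.8, 0.1, 0.7]), 20),
    "fouled exchanger": (np.array([0.2, 0.6, 0.9]), 90),
}

def diagnose(signature):
    """Return the closest historic case and its time-to-failure history."""
    best, best_dist = None, np.inf
    for name, (pattern, days) in case_library.items():
        dist = np.linalg.norm(signature - pattern)
        if dist < best_dist:
            best, best_dist = (name, days), dist
    return best, best_dist

emerging = np.array([0.15, 0.85, 0.25])   # today's residual pattern
(match, ttf), dist = diagnose(emerging)
print(f"Closest case: {match} (distance {dist:.2f}); "
      f"historic time-to-failure ~{ttf} days")
```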
Even for equipment that is not especially well-instrumented, larger pools of data will support better statistical analysis based on equipment placed in similar operating conditions, rather than the broader generic analysis that is typical today. Better decisions can be made when establishing maintenance strategy if the engineer understands the true failure rate of a component based on similar equipment running in similar conditions. Current practice typically relies on OEM recommendations or industry studies that were often conducted many years ago.
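The statistics involved here can be quite simple. The sketch below estimates a component’s failure rate from a hypothetical fleet of similar units, counting units retired without failing as censored exposure time; all operating hours and failure flags are invented for illustration.

```python
# A simple sketch of estimating a component's failure rate from pooled
# fleet data, including units retired without failing (censored).
# The operating hours and failure flags below are invented.

# (hours_in_service, failed?) for pumps in similar service conditions
fleet = [
    (8_000, True), (12_500, False), (9_300, True), (15_000, False),
    (7_200, True), (11_000, False), (13_400, True), (10_800, False),
]

failures = sum(1 for _, failed in fleet if failed)
total_hours = sum(hours for hours, _ in fleet)

# Maximum-likelihood estimate for a constant (exponential) failure rate:
# observed failures divided by total exposure time.
rate = failures / total_hours          # failures per operating hour
mtbf = 1.0 / rate                      # mean time between failures

print(f"Estimated failure rate: {rate:.2e} per hour (MTBF ~{mtbf:,.0f} h)")
```

The point is that with pooled data from genuinely similar equipment, even this basic calculation beats a decades-old generic industry figure.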
Objective maintenance
For most companies today, the reality is that maintenance strategy development is a subjective, experience-driven process. The data needed to make objective decisions is often sparse, nonexistent, or difficult to access. Moving to condition-based approaches skirts much of this problem by basing activities on the true current condition of an asset, but even these techniques often require significant expertise and can be improved.
The potential for continued application of advanced data analysis to machine operations is bright, but the challenges are more than just knowing AI or machine learning methods. Access to enough of the right kind of data will be key, and for many companies, that may mean being willing to share their data in a give-for-get quid pro quo. Hopefully, companies will recognize that the benefits of sharing information and improving operations outweigh the concerns over helping the competition.
Pooling of anonymized performance, fault, and failure data will be key to providing the “raw material” needed for application of advanced data analysis methods. Better on-line diagnostics, prognostics, and all the tools needed to support optimized, condition-based maintenance are all possible if proper mechanisms emerge that allow companies to share their data in appropriate ways.
Back 2 Basics: Maintenance strategies
There are four essential maintenance strategies in place today:
- Reactive maintenance: Often called ‘run to failure’, this strategy is fairly self-explanatory. Maintenance is only performed when the asset fails.
- Preventive maintenance: In this strategy, maintenance is performed on a calendar or counting basis; for example, replacing a part every three months or every 10,000 cycles. It does not take into account the relative health of the asset.
- Predictive maintenance: In this case, the asset’s performance is monitored and maintenance is scheduled when that performance falls out of a pre-set range (for example, a motor that runs between 150 and 180 degrees F is considered in range); a minimal sketch of this kind of range check follows this list.
- Computerized maintenance management system (CMMS): This strategy combines the asset management information from predictive maintenance with trending tools to provide a better sense of overall plant floor operational information.
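As promised in the predictive maintenance bullet above, here is a toy sketch of the range check described there. The 150 to 180 degrees F band comes from the example in the list; the sample readings are invented.

```python
# A minimal sketch of a predictive maintenance range check: flag a work
# order when a motor temperature leaves the pre-set band. Readings invented.

LOW_F, HIGH_F = 150.0, 180.0   # acceptable operating band, degrees F

def needs_maintenance(temp_f: float) -> bool:
    """True when the reading falls outside the pre-set range."""
    return not (LOW_F <= temp_f <= HIGH_F)

for reading in (165.0, 184.5, 149.0):
    status = "schedule maintenance" if needs_maintenance(reading) else "in range"
    print(f"{reading:.1f} F -> {status}")
```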
By the Numbers:
The March 2017 Plant Engineering Maintenance Survey looked at the prevailing maintenance strategies used in manufacturing:
- 78% Preventive Maintenance
- 61% Reactive Maintenance
- 59% CMMS
- 47% Predictive Maintenance
David Bell is senior product manager for APM at GE Digital.