Calculating control benefits

A verifiable approach to calculations will result in greater project success.

By Merle Likins, Principal Consultant, Yokogawa. October 1, 2012

A recent study by McKinsey and Company called “Outpacing Change” found that productivity gains actually improved product quality, with the most productive companies having a 5% higher yield of quality products than the underperforming ones.

The other interesting finding was that the better-performing companies had a markedly different approach to operations and continuous improvement. One of the key differences was a well-planned strategy for improvements that began with locating the areas with the largest potential payback.

A simple hunch that replacing a control system or re-engineering a process will yield significant financial or other benefits won’t get approval, as management typically must see numbers and careful estimates before allocating money for new projects. Therefore, a well-planned strategy is needed, one that provides a realistic expectation of the potential benefits from a process automation or equipment upgrade.

The first phase of such a strategy is a benefits study to show expected improvements in throughput, quality, reliability, and/or other areas. A benefits study requires mathematical calculations, and the personnel performing the study must possess the requisite engineering knowledge, along with the skills and experience needed to make informed decisions.

Planning a benefits study

Defining goals is a prerequisite to performing a successful benefits study, and this requires a thorough understanding of the specific operations and the overall business objectives. Goal definition will determine the areas that should be targeted for improvement.

A benefits study will yield estimates of possible outcomes, with the accuracy of those estimates varying based on a number of factors. Therefore, it’s best to err on the side of caution and present conservative figures. In the end, the estimates will only be as good as the data used and the guidance from those who understand the process.

The first step in any benefits study is becoming thoroughly familiar with the systems and processes under study. This step is called unit familiarization, and it begins with examination of overall process drawings and documents such as process flow diagrams (PFDs), piping/process and instrumentation diagrams (P&IDs), and written process descriptions. After this step is completed, the next activity is a meeting with the relevant parties that comprise the project benefits team.

The purpose of the meeting is to review the processes and ensure that nothing was overlooked in the system assessment during unit familiarization. In addition to reviewing the information gathered from unit familiarization, an agreement should be reached on where benefit opportunities can be realized. These opportunities can include improving energy efficiency, increasing throughput, minimizing raw materials consumed, or other parameters. After the agreement is reached, all the information should be documented as part of the project baseline.

The next step involves collecting data relevant to the unit. This usually comes from the plant historian. The historical process data should include information about temperature, pressure, flow, compositions, and other relevant process data. Utility costs such as steam, electricity, cooling water, etc.—and prices for the final products—are required to determine the cost/benefit ratio of implementing improvements.

Defining CTQ trees

The project benefits team will define the critical-to-quality (CTQ) variables, common to Six Sigma methodology, that will be used to extract the key quantifiable measurements. The CTQ variables are important in aligning improvement efforts with expectations of final results.

For example, if an improvement in throughput is the goal, the CTQ would be the measurable estimate of what percentage gain can be expected in throughput. In addition to defining the CTQs, an agreement should be reached on how the economic calculations will be performed.

After determining the CTQs and setting expectations, the number crunching begins. In addition to reviewing the raw data and trends, some type of filtering of the data will be required. Correlations between CTQs and manipulated variables will also need to be made.
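As a first pass, a simple correlation check can rank candidate variables against a CTQ. The sketch below, in Python with pandas, assumes the historian data have already been exported to a CSV file; the file name and column names (ctq_yield, reactor_temp, feed_rate, steam_flow) are hypothetical placeholders for the actual plant tags.

```python
# Sketch of a first-pass correlation check between a CTQ and candidate
# manipulated variables. Column names are hypothetical; substitute the
# tags from your historian export.
import pandas as pd

data = pd.read_csv("historian_export.csv", parse_dates=["timestamp"],
                   index_col="timestamp")

# Pearson correlation of each candidate variable against the CTQ
candidates = ["reactor_temp", "feed_rate", "steam_flow"]
correlations = data[candidates].corrwith(data["ctq_yield"])
print(correlations.sort_values(key=abs, ascending=False))
```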

After preliminary numerical analysis, a preliminary results consultation is conducted with the project team. During this meeting, the potential benefits discovered are discussed, and the team determines if any potential areas for improvement were overlooked. A proposal might also be made for purchases of new equipment or components, such as an analyzer or a multivariable controller.

Establishing a valid statistical basis

A benefits estimate based on historical data defines the process variability of the CTQs, which in turn enables an estimate of the potential benefits. Variability is characterized by the standard deviation (SD), calculated on two bases: overall and pooled.

The overall SD is the measure of the current operation made by using a snapshot, or the hourly averages of snapshots. It shows the total variability of the unit, which can be caused by a variety of disturbances.

The pooled SD is calculated over subgroups, such as operating shifts of 8 or 12 hours, and shows how well the operators and the control system are capable of controlling the unit. This is the measurement of the process capability. Think of the process capability as the top performance of the best operator on his or her best day under ideal conditions: an optimal performance.
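Both statistics can be computed directly from the historical data. The following is a minimal sketch, assuming the CTQ is available as a pandas Series of hourly averages indexed by timestamp and that subgroups correspond to 12-hour shifts; adjust the subgrouping to match the actual operating schedule.

```python
# Minimal sketch: overall vs. pooled standard deviation of a CTQ.
import numpy as np
import pandas as pd

def overall_and_pooled_sd(ctq: pd.Series, shift_hours: int = 12):
    overall_sd = ctq.std()  # total variability of the unit

    # Pooled SD: combine the within-subgroup variances, weighted by
    # their degrees of freedom, then take the square root.
    groups = ctq.groupby(pd.Grouper(freq=f"{shift_hours}h"))
    variances = groups.var()
    counts = groups.count()
    dof = (counts - 1).clip(lower=0)
    pooled_sd = np.sqrt((variances * dof).sum() / dof.sum())
    return overall_sd, pooled_sd
```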

With overall variability established, operating constraints can be added, such as the amount of raw materials available for use. In order not to violate the constraints, one or more setpoints must include a margin of error due to process variability.

However, if variability can be reduced, the margin of error for the setpoints can be reduced. Reducing the variability and moving the setpoints closer to the optimal values is the goal, as this is where the maximum payoff is achieved.
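The arithmetic behind this argument is straightforward. The sketch below assumes the constraint is an upper limit and that the setpoint is backed off from it by a multiple of the standard deviation; the two-sigma multiplier and the numbers are illustrative only.

```python
# Illustration of the setpoint-margin argument, assuming the operating
# constraint is an upper limit and a 2-sigma back-off from it (the
# multiplier is a judgment call; 2 or 3 sigma are common choices).
def setpoint_with_margin(upper_limit: float, sigma: float, k: float = 2.0) -> float:
    """Return the highest setpoint that keeps roughly k-sigma of headroom."""
    return upper_limit - k * sigma

current = setpoint_with_margin(upper_limit=150.0, sigma=4.0)   # 142.0
improved = setpoint_with_margin(upper_limit=150.0, sigma=2.0)  # 146.0
print(f"Setpoint can move {improved - current:.1f} units closer to the limit")
```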

Preparing and reviewing the data

Data preparation typically requires review of large amounts of information. For example, the historical data for a particular process might be captured in 15-minute snapshots over a 6-month period, yielding roughly 17,000 data points.

Graphical representations of the data will typically contain trends and spikes. The data should be judiciously filtered to remove spikes caused by malfunctions such as a transmitter failure or a unit trip, but not filtered to a degree that removes normal process variations. Once the data is filtered, hourly averages of the 15-minute snapshots can be created to yield smoother trends.
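One way to carry out this preparation is sketched below, assuming the 15-minute snapshots have been exported to a CSV file. The rolling three-sigma spike filter is a simple illustrative choice; known bad values and unit-trip periods should be excluded using whatever quality flags the historian provides.

```python
# Sketch of the data-preparation step: remove obvious spikes, then form
# hourly averages of the 15-minute snapshots.
import pandas as pd

raw = pd.read_csv("snapshots_15min.csv", parse_dates=["timestamp"],
                  index_col="timestamp")

def despike(series: pd.Series, window: int = 16, n_sigma: float = 3.0) -> pd.Series:
    rolling_mean = series.rolling(window, center=True, min_periods=4).mean()
    rolling_std = series.rolling(window, center=True, min_periods=4).std()
    spikes = (series - rolling_mean).abs() > n_sigma * rolling_std
    return series.mask(spikes)  # spikes become NaN and drop out of averages

filtered = raw.apply(despike)
hourly = filtered.resample("1h").mean()  # smoother trends for the analysis
```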

Capability analysis shows possible improvements

The next step is to undertake a capability analysis, which is a set of calculations performed to determine if the system is able to meet the specifications or requirements outlined in the beginning of the project.

A data set is required to perform the calculations and to create a control chart that demonstrates whether the process is statistically stable. The control chart shows if the desired improvement is feasible, and the specifications or requirements from unit familiarization provide the numerical values within which the system is expected to operate.

The capability analysis will predict the extent of possible process improvements by looking at current and past operating states. The goal of the capability analysis is to show where improvements can be made by shrinking the deviations in the process.
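A minimal version of the capability calculation is the familiar Cp/Cpk pair, computed against the specification limits gathered during unit familiarization and using the pooled SD as the measure of capability. The numbers below are illustrative only.

```python
# Minimal capability calculation (Cp / Cpk) against specification limits,
# using the pooled SD as the measure of short-term capability.
def capability(mean: float, pooled_sd: float, lsl: float, usl: float):
    cp = (usl - lsl) / (6 * pooled_sd)
    cpk = min(usl - mean, mean - lsl) / (3 * pooled_sd)
    return cp, cpk

# Hypothetical numbers for illustration only
cp, cpk = capability(mean=97.2, pooled_sd=0.4, lsl=95.0, usl=100.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```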

Different approach required for batch

Finding areas for improvement in a batch process usually involves time as the most important CTQ variable. Shorter batch times lead to more batches being produced, which translates into more usable throughput if quality can be maintained at sufficiently high levels.

Historical data is necessary to calculate the standard deviation for the batch CTQs. In general, the reduction of one standard deviation is an achievable goal. Operators need to be consulted to see if the goal is realistic, and data from past batch operations must be examined.

In addition, sales and marketing should be consulted to see if there’s a monetary gain for increasing throughput, and to determine acceptable levels of quality. It’s important to know if a quality improvement will result in more sales, or if market share will be lost if the variability among batches increases.

The historian should contain the cycle time for each batch, from which the standard deviation of the cycle times can be calculated. Once again, a reduction of one standard deviation is a realistic basis for determining how many extra batches can be achieved if the process is improved.
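The extra-batch arithmetic can be sketched as follows; the cycle times and the 8,000 available operating hours per year are illustrative assumptions, not data from any particular plant.

```python
# Extra-batch estimate from cycle-time data, using the one-standard-deviation
# reduction discussed above. All numbers are illustrative.
import numpy as np

cycle_times_hr = np.array([11.8, 12.4, 13.1, 12.0, 14.2, 12.7, 11.9, 13.5])
mean_cycle = cycle_times_hr.mean()
sd_cycle = cycle_times_hr.std(ddof=1)

hours_per_year = 8000  # assumed available operating hours
current_batches = hours_per_year / mean_cycle
improved_batches = hours_per_year / (mean_cycle - sd_cycle)
extra = improved_batches - current_batches
print(f"~{extra:.0f} extra batches/year from a one-SD cycle-time reduction")
```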

If there is no historian and large amounts of data can’t be viewed, as few as two operators can be interviewed to estimate average and best times. To achieve best results, each operator should be interviewed separately, and all the steps and required times to produce a batch should be written down. These data can then be used to produce the cycle average.

Each operator should then be asked how much time would be needed in optimal conditions for each step, and the sum of these step times will provide the minimum batch cycle time. To be conservative, one should assume a fraction of the difference between “average” and “best,” and then use this number to determine the number of possible extra batches that can be run in a given time period.
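A worked example of this interview-based estimate, with illustrative numbers and a conservative credit of half the gap between average and best, might look like this:

```python
# Interview approach: average cycle from the step list vs. the sum of
# "best case" step times. Numbers and the credit fraction are illustrative.
average_cycle_hr = 14.0   # sum of step times the operators report as typical
best_cycle_hr = 11.0      # sum of step times under optimal conditions
credit_fraction = 0.5     # conservative: claim only half the gap

improved_cycle_hr = average_cycle_hr - credit_fraction * (average_cycle_hr - best_cycle_hr)
hours_per_year = 8000     # assumed available operating hours
extra_batches = hours_per_year / improved_cycle_hr - hours_per_year / average_cycle_hr
print(f"Improved cycle {improved_cycle_hr:.1f} h -> ~{extra_batches:.0f} extra batches/year")
```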

Summary

The last step is preparing a report for review with the project team. Past improvement practices may have focused on reducing the percentage of nonconforming products, but this approach shifts the focus to reducing the variations of the process to help move the bar closer to optimal performance.

Once benefits are quantified and reviewed with the project team, costs must be calculated for the method of control needed to achieve those benefits. Those methods may range from simply enforcing controller modes, to implementing advanced regulatory controls, to more advanced methods such as multivariable control techniques. Costs of hardware, software, engineering, and even process equipment may need to be included in order to calculate the corresponding return on investment.

But the key is determining the benefits using a consistent, understood, and accepted calculation methodology built on sound statistical methods. This helps engineers validate their intuition about the improvements that can be made to their process.

Using a statistically verifiable approach to calculate benefits from improved process control will result in a greater likelihood of success for your requested project by providing management with the objective evidence of benefits needed for allocating funds and resources.

Dr. Merle Likins, PE, is principal consultant, Yokogawa. 


Reactor benefit study

A simple reactor where a feed stream of raw materials is converted to a product by controlling the reaction temperature and other variables is a good example of how process improvements can be determined with a benefit study. In this case, the exit composition of the stream shows a significant amount of variability, indicating the process can be improved.

In the capability analysis, the dotted black line shows the overall variability, and the middle red line shows the pooled standard deviation, or process capability. The significant difference between the two indicates that a better job can be done controlling the process.

The variability in the outlet concentration could be caused by a variety of factors. In this case, inlet composition, reactor temperature, and feed rate are considered, and a simple regression analysis of the three variables is performed. This isn’t intended to be a precise correlation, but rather to show the opportunity for improvement and which variable(s) contribute to the variability.

The importance of each variable is indicated by the sequential sum of squares that comes from the regression analysis. More specifically, the regression analysis helps explain how the value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.

In the case of the simple reactor, the factors are the inlet composition, the reactor temperature, and the feed rate. A regression analysis is performed for each variable. The inlet composition is important but can’t be controlled, so that variable is eliminated. The feed rate isn’t important to reducing the variability of the output stream, but the reactor temperature is. Therefore, controlling the temperature is where the most significant improvements can be achieved.
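One way to carry out such a regression and obtain the sequential sum of squares is sketched below using statsmodels; the file and column names are hypothetical stand-ins for the reactor’s historian tags.

```python
# Sketch of the regression step using statsmodels, which reports the
# sequential (Type I) sum of squares used to rank the factors.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.read_csv("reactor_hourly.csv")

model = smf.ols(
    "outlet_concentration ~ inlet_composition + reactor_temp + feed_rate",
    data=data,
).fit()

# typ=1 gives the sequential sum of squares; larger values indicate a
# factor that explains more of the outlet-concentration variability.
print(anova_lm(model, typ=1))
```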

The capability analysis showed that the temperature could in fact be controlled, but that it wasn’t being controlled to the minimum acceptable value. The correlation analysis performed earlier indicates that lowering the temperature five degrees, which will still keep it above the minimum specifications, will yield a savings of $237,809 a year, primarily through lower energy consumption.