Feedback controllers do their best
But meeting a specific performance measure is harder than it looks.
A feedback controller measures the output of a process and then manipulates the input as needed to drive the process variable toward the desired setpoint. A controller reacts to setpoint changes initiated by the operators as well as random disturbances to the process variable caused by external forces. It repeats this measure-decide-actuate sequence over and over until the process variable matches the setpoint.
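This measure-decide-actuate loop takes only a few lines of code to sketch. The following is purely illustrative, not anything from the article: it assumes a hypothetical first-order process with time constant tau and a proportional-only controller, with all gains chosen arbitrarily just to show the repeated sequence.

```python
# Minimal sketch of the measure-decide-actuate loop (illustrative only).
# The first-order process model and the gains are assumptions, not taken
# from the article.
def simulate(setpoint=1.0, kp=2.0, tau=5.0, dt=0.1, steps=200):
    pv = 0.0                        # process variable starts at rest
    history = []
    for _ in range(steps):
        error = setpoint - pv       # measure: compare output to setpoint
        u = kp * error              # decide: proportional control effort
        pv += (u - pv) * dt / tau   # actuate: first-order process response
        history.append(pv)
    return history

trace = simulate()
# A proportional-only loop like this one settles short of the setpoint
# (at kp/(1+kp) of it here), illustrating "close enough."
```

Note that this proportional-only loop never quite reaches the setpoint; eliminating that residual offset is one reason practical controllers add integral action.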
Unfortunately, that’s easier said than done. The inertia of the process prevents the controller from altering the process variable instantaneously, so the controller has to settle for “close enough,” at least in the near term. Exactly how close depends on how the controller is designed, and the design is typically based on how much slack between the process variable and the setpoint is acceptable for a particular application.
Case in point
Consider, for example, a baking process where the dough in the oven must be heated without scorching. When an operator bumps up the oven temperature at the beginning of a batch, the controller must turn up the heat enough to reach the required baking temperature but without overdoing it. In this application, close equates to limited overshoot, or “as near as possible to the setpoint without exceeding it by too much.”
In contrast, the process of warming up the interior of a luxury car on a cold day requires more speed than accuracy. The driver typically wants the interior temperature to reach a comfortable range as soon as possible, even if that means the car’s temperature controller has to raise the temperature so quickly that it overshoots the setpoint. Here, close equates to minimum rise time, or “as near as possible to the setpoint as soon as possible.”
Closeness can be quantified in terms of an objective function or a performance measure that shows with a single statistic how well the process variable has matched the setpoint. There is a variety of performance measures to choose from, depending on which definition of “close” is most appropriate for a particular application. The “Basic performance measures” graphic shows four of the more straightforward performance measures often used in applications where the setpoint changes frequently.
Conversely, the chosen performance measure dictates how the controller should be designed in order to achieve the desired closed-loop results. The minimal overshoot necessary for the baking application would require that the controller be tuned to be especially conservative about how it manipulates the process. For the car temperature application, the controller would have to be tuned to be more aggressive in order to achieve the fastest possible rise time.
This is a fairly straightforward calculation when the chosen performance measure comes with a set of formulae that translate observable characteristics of the process directly into controller settings. For example, the classic Ziegler-Nichols tuning rules use the ultimate gain and ultimate period of the process to generate PID parameters that will endow the controller with the ability to produce a ¼ decay ratio in response to a setpoint change.
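As a concrete illustration (a sketch, not the article’s own code), the classic Ziegler-Nichols closed-loop rules map the measured ultimate gain Ku and ultimate period Pu directly to PID settings:

```python
# Classic Ziegler-Nichols closed-loop (ultimate) tuning rules for a PID
# controller, aimed at a quarter-wave decay ratio.
def ziegler_nichols_pid(ku, pu):
    kp = 0.6 * ku      # controller gain
    ti = pu / 2.0      # integral time, in the same units as pu
    td = pu / 8.0      # derivative time
    return kp, ti, td

# Example: a process found to oscillate steadily at Ku = 10, Pu = 4 s
kp, ti, td = ziegler_nichols_pid(10.0, 4.0)   # -> (6.0, 2.0, 0.5)
```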
Unfortunately, it’s not usually that easy. Most performance measures require more extensive calculations to determine the tuning parameters that will provide optimal results. The integrated performance measures shown in the second graphic are prime examples.
Trial and error
Designing a controller to optimize any of these performance measures typically involves a mathematical model of the process, an off-line simulation program, and an iterative trial-and-error test. The designer first guesses at a set of tuning parameters that could conceivably yield optimal results and then configures a simulation in which a controller tuned with those parameters operates on a model of the process.
The designer next exercises the simulated closed-loop system with a prescribed series of setpoint changes and disturbances chosen to mimic whatever conditions the real closed-loop system is expected to experience. At the end of the first iteration, the designer computes the performance measure using the values of the process variable generated by the simulated test. For the second iteration, the designer adjusts the initial set of tuning parameters in such a way as to produce an even smaller value for the performance measure when the test is repeated under the same simulated conditions.
That’s the tricky part. It’s not always obvious how the tuning parameters can be improved. Doing so requires either considerable experience with loop tuning or some sort of optimization algorithm that can make an educated guess based on the results of the previous test(s).
Either manually or with computer guidance, the designer repeats this trial-and-error procedure until no further reductions in the performance measure are forthcoming. The final set of tuning parameters can then be loaded into the real controller to keep the real process variable close to the setpoint under real operating conditions.
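In code, the whole trial-and-error procedure boils down to “simulate, score, adjust, repeat.” The toy below is a sketch under stated assumptions (a hypothetical first-order process, a PI controller, and a crude grid search standing in for the optimizer): it scores each candidate tuning by its integrated absolute error and keeps the best.

```python
# Toy trial-and-error tuning: simulate a PI controller on an assumed
# first-order process, score each candidate tuning by its IAE, keep the best.
# Model, gains, and grid are illustrative assumptions.
def iae_for_tuning(kp, ki, tau=5.0, dt=0.1, steps=300, setpoint=1.0):
    pv, integral, iae = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        pv += (u - pv) * dt / tau        # first-order process model
        iae += abs(error) * dt           # accumulate the performance measure
    return iae

candidates = [(kp, ki) for kp in (0.5, 1.0, 2.0, 4.0)
              for ki in (0.1, 0.2, 0.5, 1.0)]
best = min(candidates, key=lambda t: iae_for_tuning(*t))
```

A real design would replace the grid search with a proper optimization algorithm and the first-order model with one identified from the actual process.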
Basic performance measures
This strip chart shows a process variable (black line) responding to a setpoint change (green line) in a case where the controller has been tuned to be particularly aggressive about eliminating slack between the two as soon as possible. The value of four different performance measures can be gleaned from this chart by measuring how far the process variable’s first and second peaks have exceeded the setpoint (A and B, respectively), the amount by which the setpoint has changed (C), the time (D) required for the process variable to reach the setpoint for the first time after the setpoint change, and the time (E) required for the process variable to settle into a range within 5% of the setpoint (the shaded area). To wit:
Overshoot (in percent) = 100 A/C
Decay ratio = B/A
Rise time = D
Settling time = E
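Given a sampled step response, the same four measures can be computed programmatically. The helper below is a sketch with hypothetical names: it takes A and B as the heights of the first two local maxima above the new setpoint, D as the first setpoint crossing, and E as one sample past the last exit from the ±5% band. It assumes the response actually overshoots at least twice and eventually settles.

```python
def basic_measures(trace, new_sp, old_sp, dt, band=0.05):
    c = new_sp - old_sp                     # C: size of the setpoint change
    # Local maxima of the sampled response; the first two are peaks A and B
    peaks = [trace[i] for i in range(1, len(trace) - 1)
             if trace[i - 1] < trace[i] >= trace[i + 1]]
    a = peaks[0] - new_sp                   # A: first peak above the setpoint
    b = peaks[1] - new_sp                   # B: second peak above the setpoint
    # D: first time the process variable reaches the setpoint
    rise = next(i for i, v in enumerate(trace) if v >= new_sp) * dt
    # E: one sample after the last excursion outside the 5% band
    settle = (max(i for i, v in enumerate(trace)
                  if abs(v - new_sp) > band * abs(c)) + 1) * dt
    return 100.0 * a / c, b / a, rise, settle   # overshoot %, decay ratio, D, E
```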
Integrated performance measures
IAE = ∫₀ᵀ |e(t)| dt (integral of absolute errors)
ISE = ∫₀ᵀ e(t)² dt (integral of squared errors)
ITAE = ∫₀ᵀ t·|e(t)| dt (integral of time-weighted absolute errors)
These performance measures all take into account the errors e(t) between the process variable and the setpoint that have occurred at every time t within a sampling period of T seconds. Tuning the controller to minimize each of these quantities produces different performance characteristics for the closed-loop system. For example, minimizing ITAE produces a controller that fights extra hard to eliminate errors that occur toward the end of the T-second sampling period, especially the tail end of the process variable’s response to a setpoint change or disturbance.
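Numerically, each of these integrals is just a weighted sum of the sampled errors. A minimal sketch (the function name is assumed, not from the article), using a rectangle-rule approximation over samples taken every dt seconds:

```python
# Rectangle-rule approximations of IAE, ISE, and ITAE from errors e(t)
# sampled every dt seconds over the T-second period.
def integrated_measures(errors, dt):
    iae = sum(abs(e) for e in errors) * dt                # ∫|e(t)| dt
    ise = sum(e * e for e in errors) * dt                 # ∫e(t)² dt
    itae = sum(i * dt * abs(e)                            # ∫t·|e(t)| dt
               for i, e in enumerate(errors)) * dt
    return iae, ise, itae
```

Because ITAE weights each error by the time at which it occurs, late errors dominate the sum, which is why minimizing it emphasizes the tail end of the response.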
Is the answer essentially technical, or commercial?
While there are many technical aspects of control strategy, let us not forget that there are many commercial or business elements also driving the discussion. Peter Martin, vice president of business value solutions for Invensys Operations Management, keeps this top of mind and encourages process people to consider their activities in light of what really matters in the executive suite. He says, “A process is in control when each loop produces a response that meets the objectives of the business on an ongoing basis. It typically involves a conceptual cascade strategy with the primary loop focused on maximizing profitability and other business objectives, and the secondary loops the traditional process loops of the plant. Traditionally, optimizers were used to meet the needs of the primary loops, but the increased speed of business has made this impractical. Going forward, real-time control theory will be applied to both business and process variables in an overall real-time business control system.”
Perhaps it’s something of a letdown to think of the loops running the process as secondary to the business considerations, but such is the reality of the world of manufacturing, and Martin has always encouraged engineers to avoid losing sight of that fact. The question you have to ask yourself is whether you know what those business objectives are. There is hardly a process manufacturing facility in the world that is not subject to global variations in commodity prices, energy source availability, and capricious markets for finished products. Such things are indeed driving the speed of business.
- Peter Welander
- Deciding whether a loop is running well or not depends on what that loop is expected to do in the larger process context.
- Different types of processes demand different control strategies. Matching them can be tricky.
- Ultimately you have to determine if the answer is really technical, or if there are larger drivers at work.
Vance VanDoren, PhD, PE, is Control Engineering contributing content specialist. Reach him at firstname.lastname@example.org.
Read more on control strategy from Vance VanDoren at