Fixing PID, Part 2

Proportional-integral-derivative controllers may be ubiquitous, but they’re not perfect.

04/28/2014


Part 1 of this series (Control Engineering, Nov. 2012) looked at several issues that limit the performance of the theoretical PID algorithm in real-world applications of feedback control.

All of these problems are exacerbated by uncertainty. Sometimes the controller lacks sufficient information about the controlled process to know how much and how long to apply a control effort. Sometimes the controller can't even tell whether it has done a good job, much less how to do a better one in the future, because sensor placement, physical limitations of the sensing technology, or measurement noise make the process variable hard to measure.

Measurement noise is particularly troublesome for a PID controller's derivative action. To compute the "D" component of its next control effort, the controller computes the latest change in the error (the difference between the process variable and the setpoint) and multiplies that by the derivative gain or rate parameter.

When random electrical interference or other glitches in the sensor's output cause the controller to see fictitious changes in the process variable, the controller's derivative action increases or decreases unnecessarily. If the noise is particularly severe or the derivative gain is particularly high, the controller's subsequent chaotic control efforts may be not only unnecessary, but also damaging to the actuator and perhaps even to the controlled process itself.
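For illustration, here is a minimal sketch of the derivative calculation, assuming derivative-on-error with a fixed sample time; the gain, sample time, and numbers are hypothetical. It shows how a single fictitious blip in the measured error turns into a large, equally fictitious control move.

Kd = 2.0    # derivative gain (rate parameter), illustrative value
dt = 0.1    # sample time in seconds, illustrative value

def derivative_action(error, prev_error):
    """Return the D component of the control effort for one sample."""
    return Kd * (error - prev_error) / dt

# A 0.5-unit noise blip on the process variable becomes a 0.5-unit jump in
# the error, which the derivative term amplifies by Kd / dt:
print(derivative_action(0.5, 0.0))   # -> 10.0, a large but purely fictitious kick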

Filtering

The simplest solution to this problem is to reduce the derivative gain when measurement noise is high, but doing so also limits the derivative action's effectiveness. The measurement noise itself can sometimes be reduced by fixing the sensor or by filtering the process variable measurement mathematically. A process variable filter essentially averages the sensor's most recent outputs to produce a better estimate of the process variable's actual value.
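A minimal sketch of such a filter, assuming a simple moving average over the N most recent readings; the window length and sample readings are illustrative, not drawn from the article.

from collections import deque

class MovingAverageFilter:
    def __init__(self, n=10):
        self.window = deque(maxlen=n)          # the n most recent sensor readings

    def update(self, raw_measurement):
        """Add a new sensor reading and return the filtered process variable."""
        self.window.append(raw_measurement)
        return sum(self.window) / len(self.window)

pv_filter = MovingAverageFilter(n=10)
for raw in [50.2, 49.7, 50.4, 49.9, 50.1]:     # noisy readings near 50
    pv = pv_filter.update(raw)                 # smoothed estimate fed to the PID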

However, process variable filters have their limitations. They only work if the measurement noise is truly random, sometimes increasing and sometimes decreasing the sensor's output in equal measure. If those positive and negative blips also occur with equal frequency, then the filter's averaging operation will tend to cancel them out. But if the measurement noise tends to skew the sensor's output consistently in one direction or the other, the filtered process variable will tend to run consistently too high or too low, thereby deceiving the controller into working too hard or too little.

A process variable filter also slows the controller's reaction time. If the filter is configured to average a particularly long sequence of sensor outputs, it will do a better job of cancelling out random blips, but it will also tend to miss the most recent changes in the actual process variable. The filter needs to see a sustained change in the sensor's output before it can report a new value of the process variable to the controller. Until then, the controller can't even see, let alone react to, rapid, short-term changes in the process variable.

As a compromise, some PID controllers can be configured to filter the process variable to differing degrees when computing the proportional, integral, and derivative actions. The derivative action requires the most filtering since that's where measurement noise causes the most problems. The proportional action may benefit from less filtering (that is, a filter incorporating a shorter sequence of sensor outputs) in order to remain responsive to short-term changes in the process variable. And since the integral action itself serves as a filter, it may require no process variable filtering at all.
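What that compromise might look like is sketched below, assuming first-order low-pass filters and illustrative coefficients rather than any particular vendor's implementation: the derivative term sees a heavily filtered measurement, the proportional term a lightly filtered one, and the integral term the raw value.

class FilteredPID:
    def __init__(self, Kp, Ki, Kd, dt, alpha_p=0.5, alpha_d=0.1):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.alpha_p = alpha_p          # light filtering for the P term
        self.alpha_d = alpha_d          # heavy filtering for the D term (smaller = heavier)
        self.pv_p = None                # filtered process variable seen by the P term
        self.pv_d = None                # filtered process variable seen by the D term
        self.prev_error_d = None
        self.integral = 0.0

    def _lowpass(self, previous, raw, alpha):
        # First-order low-pass filter of the raw sensor reading.
        return raw if previous is None else previous + alpha * (raw - previous)

    def update(self, setpoint, raw_pv):
        self.pv_p = self._lowpass(self.pv_p, raw_pv, self.alpha_p)
        self.pv_d = self._lowpass(self.pv_d, raw_pv, self.alpha_d)

        self.integral += self.Ki * (setpoint - raw_pv) * self.dt       # I: unfiltered measurement
        p = self.Kp * (setpoint - self.pv_p)                           # P: lightly filtered measurement

        error_d = setpoint - self.pv_d                                 # D: heavily filtered measurement
        d = 0.0 if self.prev_error_d is None else self.Kd * (error_d - self.prev_error_d) / self.dt
        self.prev_error_d = error_d
        return p + self.integral + d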

Alternatively, a filter can be applied to the control effort instead of the process variable. Doing so permits the measurement noise to enter into the PID calculations (especially the "D"), but the noisy control effort that results is smoothed by the filter before reaching the actuator. A filter can also help slow down the control effort to prevent overly dramatic fluctuations in the process's behavior when the process is particularly sensitive to the actuator's movements.
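A sketch of that arrangement under the same assumptions, with an illustrative smoothing factor: the PID works from the raw measurement, and its output is smoothed before it reaches the actuator.

class OutputFilter:
    def __init__(self, alpha=0.2):
        self.alpha = alpha              # smaller alpha = heavier smoothing, slower actuator moves
        self.filtered = None

    def update(self, raw_control_effort):
        """Smooth the controller's output before sending it to the actuator."""
        if self.filtered is None:
            self.filtered = raw_control_effort
        else:
            self.filtered += self.alpha * (raw_control_effort - self.filtered)
        return self.filtered

# Usage: actuator_command = output_filter.update(pid.update(setpoint, raw_pv))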

On the other hand, a filter on the control effort can make the process appear more sluggish than it really is. An operator looking for faster closed-loop performance might try re-tuning the controller to be more aggressive without realizing that the problem is the damping effect of the filter, not the process. The controller tuning and the control effort filter sometimes end up battling each other unnecessarily when different operators have implemented one without checking for the other.

Deadband

The effects of measurement noise can also be mitigated by simply ignoring insignificant changes in the sensor's output under the assumption that they're probably just artifacts of the measurement noise and are too small to make a difference in the controller's choice of control efforts anyway. So long as the error between the process variable and the setpoint remains within a range known as the deadband, the controller simply does nothing.
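A minimal sketch of that logic, with an illustrative band width and a hypothetical pid_update routine standing in for the full PID law:

class DeadbandController:
    def __init__(self, pid_update, deadband=0.5):
        self.pid_update = pid_update    # callable that computes a control effort from an error
        self.deadband = deadband        # errors smaller than this are treated as noise
        self.last_output = 0.0

    def update(self, setpoint, process_variable):
        error = setpoint - process_variable
        if abs(error) <= self.deadband:
            return self.last_output     # inside the deadband: do nothing, hold the last effort
        self.last_output = self.pid_update(error)
        return self.last_output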

The trick is determining how much of a change in the error is small enough to ignore. If the deadband is set too large, significant changes in the behavior of the process may be overlooked. But if it is set too small, the controller will react unnecessarily to every fictitious blip in the sensor's output, even if the actual process variable has already reached the setpoint.

Unfortunately, a deadband also glosses over small changes in the setpoint. If an operator tries to move the process into a higher or lower operating range that falls within the current deadband, the resulting change in the error will be ignored and the controller will do nothing. If the deadband is too large, the controller's precision will suffer. That is, it may be able to make a refrigerated space five degrees colder, but not just one.



WILLIAM, CA, United States, 05/16/14 01:56 PM:

The most effective addition is "feed-forward" action (FFD), also known as "bias calculation." Given the current input conditions, you calculate the output required to hit the setpoint; even better if you add lead/lag to each input. The PID then acts only to slowly fine-tune toward the setpoint. It often requires more sensors, but those are usually already available with a PLC or SCADA system. The simplest form is the "setpoint windage" found in some panel PID controllers. Automotive and turbine engines have long used FFD (the accelerator pump in carburetors; MAP, MAF, TPS, IAT, and crankshaft sensors).
Jouko, Finland, 05/20/14 07:42 AM:

The reason PI control is commonly used instead of PID is the noise. When you have to filter the D-part heavily, you are effectively canceling the derivative action, so it is better not to use the derivative part at all.
Regarding feed-forward: it is definitely what you should use whenever possible.
LARRY, WA, United States, 05/21/14 05:06 PM:

One of the reasons that "D-filters" tend to have rather poor performance is that they are typically "model free" -- their results are based on the past history of raw inputs, which can be very noisy. A short history leads to high noise sensitivity, while a long history leads to a delayed estimate indicative of past rather than present conditions. Model based approaches (Kalman filters, Active Disturbance Rejection state observers, alpha-beta filters, etc.) do not suffer as much from this. Model updates continuously enforce consistency among model state variables, which provides "smoothing" benefits similar to frequency-selective filters. Then, feedback is calculated from the current-time model state, avoiding the worst of the time delay problems.
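For readers unfamiliar with the model-based estimators mentioned in the last comment, here is a minimal sketch of one of them, an alpha-beta filter, with illustrative gains and sample time: it predicts ahead from a simple value/rate model each sample, then corrects the prediction with the new measurement, producing both a smoothed process variable and an estimate of its rate of change that can feed the derivative action directly.

class AlphaBetaFilter:
    def __init__(self, x0=0.0, v0=0.0, dt=0.1, alpha=0.85, beta=0.005):
        self.x, self.v, self.dt = x0, v0, dt    # estimated value, rate of change, sample time
        self.alpha, self.beta = alpha, beta     # correction gains, illustrative values

    def update(self, measurement):
        # Predict from the model, then correct with the measurement residual.
        predicted = self.x + self.v * self.dt
        residual = measurement - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / self.dt) * residual
        return self.x, self.v                   # smoothed value and its rate of change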