Models Aid Controller Design

In order to maintain a process variable at some desired setpoint, a feedback controller must be configured to accommodate the behavior of the specific process to which it has been applied. It needs to know how much and how long to react to errors between the process variable and setpoint.

In academic circles, the traditional method for endowing the controller with the necessary insight is to create a mathematical model of the process that can be translated into a suitable controller design. Doing so allows the controller to compute control efforts in the present that will eliminate errors in the future.

Modeling the process involves quantifying the relationship between each series of inputs applied by the controller and the resulting series of process variable measurements. That is typically accomplished by means of a mathematical equation that can numerically duplicate the behavior of the process.

A simple example

Consider the simple mechanical process originally detailed in “From Math to Models” (Control Engineering, November 2006) and reproduced in the graphic “Inverted pendulum model no. 1” (page IP2). It consists of a spring-mounted load that oscillates left and right when forced by a disturbance or a controller’s efforts. The process variable is the resulting angle of deflection θ. Equation [1] gives a simplified model for this process.

Like any model, its accuracy depends on determining the form of the equation that best represents the process’s behavior and the values of the equation’s parameters that best represent the process’s physical characteristics. The equation’s form and parameters must all be deduced, measured, estimated, or otherwise identified correctly for the model to be useful for controller design.

Model identification can generally be accomplished either theoretically or empirically. First-principles modeling is a theoretical technique where both the model’s form and parameters are derived from an analysis of fundamental chemical, physical, and thermodynamic phenomena that govern the behavior of the process.

Inverted pendulum model no. 1 was derived according to the first principles of gravity and spring forces. Gravity will tend to topple any column tilted at an angle of θ with a force of mg·sin(θ). An angular spring bent by an equal amount will tend to straighten itself with a force of kθ. The combination of these two opposing forces determines the process’s angular acceleration θ'' according to Newton’s Second Law of Motion.

The resulting model takes the form of a sine function which relates the angular position θ to time t as in equation [1]. The model’s parameters are the mass of the load m, the height of its center of mass h, the spring constant k, the acceleration due to gravity g, and the initial angular velocity θ'₀ imparted by the force that set the process in motion.
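As a concrete sketch of how a model like equation [1] could be evaluated numerically, the snippet below assumes the small-angle linearization and treats the load as a point mass at height h, so the motion reduces to a pure sine at natural frequency ω = √((k − mgh)/mh²). The function name and the numeric values in the comments are illustrative, not from the article.

```python
import math

def pendulum_angle(t, m, h, k, g=9.81, theta_dot0=0.1):
    """First-principles model of the spring-mounted inverted pendulum.

    Assumes small angles, so the net restoring torque is (k - m*g*h)*theta
    and the motion is an undamped sine, as in equation [1]. Parameter names
    follow the article; theta_dot0 is the initial angular velocity.
    """
    # Natural frequency: the spring constant k must exceed m*g*h,
    # or the spring is too weak and the pendulum simply topples.
    omega = math.sqrt((k - m * g * h) / (m * h ** 2))
    # Solution of theta'' = -omega**2 * theta with theta(0) = 0:
    return (theta_dot0 / omega) * math.sin(omega * t)
```

Note the model predicts oscillation at constant amplitude θ'₀/ω forever, which is exactly the shortcoming the empirical model discussed later will expose.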

Not that easy

Unfortunately, the principles that govern the behavior of most real-life processes aren’t generally so simple to identify, especially when the process involves a mix of chemical, physical, and thermodynamic phenomena. Large processes with multiple input and output variables also tend to be too complex to identify by means of first principles.

An energy balance analysis is often a more practical alternative for modeling a complex process. The model’s form and parameters are both derived from adding up all the energy flows that enter the process and equating that total with all the energy flows that leave the process plus all the energy that’s stored within the process.

The same can be done with a mass balance analysis predicated on a similar principle: material flowing into a process must either flow out or be stored. The individual principles that are responsible for all those mass or energy flows need not be identified.
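To illustrate the balance idea, here is a minimal sketch of a mass balance on a hypothetical liquid tank: accumulation equals inflow minus outflow, with outflow assumed proportional to the liquid level. All names and numbers are invented for illustration; no attempt is made to identify why the outflow behaves that way.

```python
def simulate_tank_level(inflow, outflow_coeff, area, level0, dt, steps):
    """Mass-balance model of liquid level in a tank.

    Accumulation = inflow - outflow, with outflow approximated as
    outflow_coeff * level (a common linear assumption). Integrated
    with simple Euler steps of size dt.
    """
    level = level0
    levels = []
    for _ in range(steps):
        # d(level)/dt = (mass in - mass out) / tank cross-section area
        level += dt * (inflow - outflow_coeff * level) / area
        levels.append(level)
    return levels
```

Run long enough, the level settles where inflow equals outflow (inflow / outflow_coeff), which is the steady state the balance equation predicts.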

Empirical modeling

Simpler still are the various empirical modeling techniques that identify the process’s behavior through experimentation rather than theoretical analysis. The graphic “Inverted pendulum model no. 2” shows how an actual inverted pendulum could be modeled empirically without any references to chemistry, physics, thermodynamics, or mass/energy balance.

If the pendulum is given a sharp push and its subsequent oscillations are measured over time, the resulting trend chart shows the process’s impulse response, and the mathematical equivalent of that trend is the process model.

In this case, the impulse response appears to be a decaying sinusoid, so the most appropriate form for the process model is equation [4]. The model’s parameters are the decay rate α (the inverse of the time constant), the amplitude A, and the frequency ω. The values of these parameters can all be extracted directly from the trend chart either by inspection or by numerical curve-fitting.
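The inspection method can be sketched in a few lines: find two successive positive peaks in the measured trend, take ω from the peak spacing (one full period) and α from the ratio of the peak heights. This is a rough illustration assuming clean, densely sampled data; real measurements would need filtering first, and the function name is invented.

```python
import math

def estimate_decay_and_frequency(times, samples):
    """Read alpha and omega of a decaying sinusoid
    A*exp(-alpha*t)*sin(omega*t) off a sampled trend chart."""
    # Locate local maxima above zero (the positive peaks of the trend).
    peaks = []
    for i in range(1, len(samples) - 1):
        if samples[i] > samples[i - 1] and samples[i] > samples[i + 1] and samples[i] > 0:
            peaks.append((times[i], samples[i]))
    (t1, y1), (t2, y2) = peaks[0], peaks[1]
    omega = 2 * math.pi / (t2 - t1)        # successive peaks are one period apart
    alpha = math.log(y1 / y2) / (t2 - t1)  # amplitude ratio gives the decay rate
    return alpha, omega
```

A numerical curve-fitter would refine these same two numbers (plus A) by minimizing the error between equation [4] and every sample, not just the peaks.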


Note that this strictly empirical technique yielded a different model than the first-principles analysis, which neglected viscous forces such as friction and wind resistance. These forces oppose the motion of an object in proportion to its current speed and would tend to dampen the pendulum’s oscillations over time. This damping effect is represented in the empirical model as the negative exponential term in equation [4]. The absence of a similar term in equation [1] causes the theoretical model to predict incorrectly that the pendulum will oscillate with the same amplitude forever.

Such discrepancies are the rule rather than the exception when it comes to modeling real-life processes. Even when the governing principles are well known, additional tweaking is almost always required to match the model’s predictions with actual experimental results. The only way to verify the accuracy of a theoretical model is to compare its behavior to the real-life process when both are exercised with the same series of inputs.

Theoretical modeling can also benefit from real-life experimentation when it comes to determining the values of the model’s parameters. In the case of the theoretical pendulum model, the surest way to determine the value of m is to physically weigh whatever structure the pendulum represents. And the value of the spring constant k can be determined most readily by measuring how far the structure deflects when subjected to a sustained force of known magnitude.
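The static test for k described above amounts to a one-line Hooke’s-law calculation; this trivial helper (hypothetical, not from the article) just makes the arithmetic explicit.

```python
def spring_constant(applied_force, measured_deflection):
    """Estimate the spring constant k from a static test: apply a known
    sustained force and measure how far the structure deflects.
    Units are whatever the two measurements were taken in."""
    return applied_force / measured_deflection
```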

Controller design

But by far the most difficult challenge of model-based controller design is translating the completed model into a suitable controller. Some model-based design techniques are so mathematically intensive that they require years of graduate study to comprehend fully.

The first theoretical hurdle is understanding how a model that depicts an impulse response like equation [4] can be used to predict the process’s response to inputs that aren’t impulses. After all, very few processes are controlled by smacking them with abrupt impulses. If a controller is going to be able to move a structure that acts like an inverted pendulum, it will need more information about the process than equation [4] can provide.

Or so it would seem. In reality, the impulse response for a process that happens to be linear contains all the information necessary to characterize its behavior completely under any circumstances. That is, an impulse response model like equation [4] can be used to predict the process’s response to any sequence of inputs, not just individual impulses.
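One way to see this in discrete time: every sample of an arbitrary input sequence can be treated as its own scaled impulse, and the predicted output is the sum of the correspondingly scaled, time-shifted copies of the impulse response, i.e., a discrete convolution. The sketch below assumes the impulse response has already been sampled into a list; it illustrates the idea rather than any production filter.

```python
def predict_response(impulse_response, inputs):
    """Predict a linear process's output for an arbitrary input sequence
    by superposing scaled, shifted copies of its sampled impulse response
    (discrete convolution, truncated to the length of the input record)."""
    n = len(inputs)
    output = [0.0] * n
    for k, u in enumerate(inputs):                # each input sample acts
        for j, h in enumerate(impulse_response):  # like its own impulse
            if k + j < n:
                output[k + j] += u * h
            else:
                break
    return output
```

Feed it a single unit impulse and the impulse response comes back unchanged; feed it any other sequence and the superposed copies predict the full response.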

Superposition makes it possible

Understanding exactly how that works requires a mind-numbingly arcane mathematical analysis involving integral calculus, but some insight can be gleaned from considering the remarkably useful characteristic that defines a process as linear: superposition. A process that obeys the Superposition Principle doubles its output when its input is doubled. Moreover, if two separate input sequences are added together point by point and applied to a linear process, the resulting output sequence will equal the sum of the outputs generated by the two input sequences when they were applied individually.
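Both properties are easy to demonstrate numerically. The snippet below models a hypothetical linear process by its sampled impulse response and checks that adding two input sequences point by point produces the same output as adding the two individual responses. The impulse response values are invented for illustration.

```python
def process_output(inputs, impulse_response=(1.0, 0.6, 0.36, 0.216)):
    """A hypothetical linear process: output is the convolution of the
    input sequence with a fixed, decaying impulse response."""
    out = [0.0] * len(inputs)
    for k, u in enumerate(inputs):
        for j, h in enumerate(impulse_response):
            if k + j < len(out):
                out[k + j] += u * h
    return out

u1 = [1.0, 0.0, 2.0, 0.0]
u2 = [0.0, 3.0, 0.0, 1.0]
combined = process_output([a + b for a, b in zip(u1, u2)])
separate = [a + b for a, b in zip(process_output(u1), process_output(u2))]
# For a linear process the two results match sample for sample.
assert all(abs(a - b) < 1e-12 for a, b in zip(combined, separate))
```

A nonlinear process (say, one whose output saturates) would fail this check, which is precisely why its impulse response alone cannot characterize it.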

The Superposition Principle is what makes the impulse response so useful for modeling and controlling a linear process. A controller can, in effect, use the impulse response model and the superposition principle to construct any proposed sequence of control efforts out of individual impulses, then compute how the process would likely react to it. This is the essence of a model-based controller’s predictive abilities that allow it to choose the current control efforts required to achieve the desired results in the future. For more details about how the Superposition Principle works, see “Process Controllers Predict the Future” on page 42 of this issue.

Better still, a remarkably wide variety of real-life processes are linear, such as electrical processes composed of resistors, capacitors, and inductors, and mechanical processes composed of springs, masses, and dampers. In fact, any process that can be effectively controlled by a simple PID loop must be linear, since the PID algorithm’s effectiveness depends on the process’s adherence to the Superposition Principle.

Furthermore, even processes that aren’t inherently linear can sometimes be linearized. If the control effort and the resulting process variable are restricted to sufficiently narrow ranges around their steady-state values, then the process will appear to behave in a linear fashion. This is how a pH controller can get away with dividing up its complete operating range into several subranges and using gain scheduling to modify its tuning parameters according to which subrange the process variable is in at the moment. Each set of tuning parameters constitutes a separate controller that has been designed to accommodate the more-or-less linear behavior of the process within that particular subrange.
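The gain-scheduling idea can be sketched as a simple lookup: each subrange of the process variable carries its own set of PID tunings. The subrange boundaries and gain values below are invented for illustration, not real pH-loop tunings.

```python
def select_tuning(pv, schedule):
    """Gain scheduling sketch: return the PID tuning assigned to whichever
    subrange the process variable pv currently falls in. Each set of gains
    is, in effect, a separate controller designed for the roughly linear
    behavior within its subrange."""
    for low, high, gains in schedule:
        if low <= pv < high:
            return gains
    raise ValueError("process variable outside the scheduled range")

# Hypothetical schedule: (low, high, (Kp, Ki, Kd)).
PH_SCHEDULE = [
    (0.0,  5.5, (2.0, 0.10, 0.1)),
    (5.5,  8.5, (0.4, 0.02, 0.0)),  # detuned where the titration curve is steepest
    (8.5, 14.0, (2.0, 0.10, 0.1)),
]
```

The middle subrange is deliberately detuned in this sketch because a pH process typically has its highest gain near neutrality, where the titration curve is steepest.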

For more reading on process models, go to:

Select the best process control, January 2007

From math to models, November 2006

Profitable process control, March 2006

Model predictive controller, January 2006

Author Information

Vance VanDoren is consulting editor to Control Engineering.
