Appropriate automation: Human system dynamics for control systems

Technology Update: Distilling a basic control problem to its essence can show what cannot be automated, which is induction. Automation can facilitate induction; induction cannot (yet) be automated.

By Steven D. Harris and Jennifer Narkevicius, PhD February 28, 2014

Distilling a basic control problem to its logical essence can shed light on what cannot be automated. The answer, in a word, is induction. And while induction can be facilitated by appropriate automation, it cannot (yet) be automated.

How automation relates to process control

Process control is usually construed to include architectures and algorithms for maintaining the output of a specific industrial process within a desired range. Automation, in this context, concerns the implementation of the control process inside a machine. Examining the control theory of large complex processes can determine what cannot be automated in process control applications. A canonical model can be applied at any useful level of abstraction of process control systems. The question of "correctness" can illustrate why this approach is essential.
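To make the canonical view concrete, here is a minimal sketch of a closed-loop controller (illustrative only; the function names and the proportional rule are assumptions of this sketch, not a reference design). Everything inside the loop is rule-following, which is to say deductive; the desired range itself is supplied from outside the loop.

```python
# Minimal closed-loop control sketch (illustrative only; the names and the
# proportional rule are assumptions of this sketch, not a reference design).

def decide(setpoint: float, measurement: float, gain: float = 0.5) -> float:
    """Control law: a deductive rule, and therefore trivially automated."""
    return gain * (setpoint - measurement)

def control_loop(plant, sense, actuate, setpoint: float, steps: int = 100):
    """Run the sense-decide-actuate cycle.

    Note that the setpoint (the desired range, the "limit") is not computed
    anywhere in the loop; it is supplied by the human purpose the system serves.
    """
    for _ in range(steps):
        measurement = sense(plant)                   # sense
        command = decide(setpoint, measurement)      # decide (deduction)
        actuate(plant, command)                      # actuate
```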

Paul Fitts, a well-known U.S. Air Force experimental psychologist, observed in 1951 that humans are better than machines at induction, and machines are better at deduction. Machines, in fact, can deduce circles around humans. Humans, on the other hand, can induce circles around machines, which can’t induce at all (at least not yet). John Boyd, a well-known fighter pilot, echoed the sentiment when he introduced the OODA loop (1976). OODA stands for observe, orient, decide, and act [which is very close to the control loop: sense, decide, actuate]. And physicist David Deutsch has rekindled the discussion in a recent article in Aeon Magazine. So, perhaps the most fundamental questions to be addressed in the course of engineering a control system are:

  • Where are deduction and induction in the control problems?
  • How can the machine’s powers of deduction be harnessed to facilitate human induction?
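One way to see where those two questions bite is to annotate the classic loop itself. The sketch below is a hypothetical rendering of an OODA-style (or sense-decide-actuate) cycle, with comments marking which steps are deductive and therefore automatable, and which step is inductive; none of the names come from Boyd or the article.

```python
# Hypothetical OODA-style loop (observe, orient, decide, act), annotated with
# where deduction and induction live. All names are illustrative.

def observe(sensors):
    """Machine: collect raw measurements (automatable)."""
    return [sense() for sense in sensors]

def orient(observations, candidate_models):
    """Machine: deduce what each candidate explanation implies about the
    observations and score it (automatable deduction)."""
    return {name: model(observations) for name, model in candidate_models.items()}

def decide(scores, human_judgment):
    """Human: induce which explanation or goal to act on. The machine can rank
    options, but committing to the one that serves the purpose is induction."""
    return human_judgment(scores)

def act(actuators, choice):
    """Machine: carry out the chosen action (automatable)."""
    for actuate in actuators:
        actuate(choice)
```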

The concept of hybrid intelligence offers a perspective on complex human-machine systems that addresses these questions explicitly, though the term is used here in a different sense than in computer science.

Human-system integration

In large complex systems, such as national infrastructure and defense systems, the human component accounts for more than half of the lifecycle cost of the system (GAO Report, 2003), and those costs are increasing. Yet, even though human-system integration (HSI) is an increasingly important component of systems engineering, few systems engineers are exposed to its issues in their academic training.

The International Council on Systems Engineering (INCOSE) defines system engineering as "an interdisciplinary approach and means to enable the realization of successful systems. Successful systems must satisfy the needs of its [sic] customers, users, and other stakeholders … the systems engineer often serves to elicit and translate customer needs into specifications that can be realized by the system development team … and supports a set of lifecycle processes beginning early in conceptual design and continuing throughout the life cycle of the system through its manufacture, deployment, use, and disposal" (Haskins, 2011).

In case anyone slept through Psychology 101, here’s the missing part: There is no such thing as a fully automated system. All engineered systems serve some human purpose, and they all have a human-system interface. In every definition of process control, you will find the term "limit" or a synonym. Where does the limit come from? It arises from the intended purpose of the process control system.

That purpose and the human-system interface are often implicit and unspecified, yet they are crucial to the success of any process control system, even those with very little actual human contact.

Fundamental cognition research has interesting (perhaps even disturbing) implications for the future of control engineering that have only recently begun to be recognized (cf Helbing, 2013).

Consider a quintessential process control problem: the national electric power grid infrastructure. The Smart Grid is a U.S. government program designed to incentivize conversion of the electric power grid control infrastructure from primarily electromechanical devices to so-called intelligent systems: software-controlled components designed to protect elements of a national network. These new components are capable of communicating among themselves and with the supervisory control and data acquisition (SCADA) systems and their human operators at major control centers.

The emerging Smart Grid system may, in fact, be much more vulnerable to catastrophic failure than the obsolescent electromechanical system that it will replace. Two insights from this research help illustrate the point:

1. The Smart Grid will not behave according to the laws of physics. The old grid behaves largely in accordance with Kirchhoff’s and Ohm’s (established, physical) laws. The Smart Grid, in contrast, will exhibit dynamical properties that are largely unknown. Its failure modes will follow (as-yet unknown) logical structures and will not be bound by physical structures.

2. When it fails, the Smart Grid will implode at the speed of light. Failures in the old grid propagate at an estimated 300 miles per second, limited by the time required for the electromechanical devices to detect and react to the environment. This usually provides sufficient time for SCADA operators to recognize, diagnose, and react to contain the outage and minimize damage to the grid. Failures in the Smart Grid will propagate at near the speed of light across a largely random logical network, becoming part of the operating environment of other systems connected logically rather than physically.
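A back-of-the-envelope comparison makes the loss of reaction time concrete. It uses the article's 300-miles-per-second figure for the old grid, the vacuum speed of light (about 186,000 miles per second) as an upper bound for the new one, and an arbitrary 1,000-mile span chosen purely for illustration:

```python
# Back-of-the-envelope comparison of failure propagation times over a span.
# The span length is arbitrary and chosen only for illustration.
SPAN_MILES = 1_000.0
OLD_GRID_MI_PER_S = 300.0        # electromechanical propagation, per the article
SMART_GRID_MI_PER_S = 186_000.0  # near speed of light (vacuum value, upper bound)

old_grid_seconds = SPAN_MILES / OLD_GRID_MI_PER_S      # time available to operators
smart_grid_seconds = SPAN_MILES / SMART_GRID_MI_PER_S  # effectively no reaction time

print(f"Old grid:   {old_grid_seconds:.2f} s")
print(f"Smart Grid: {smart_grid_seconds:.4f} s")
```

Roughly 3 seconds versus roughly 5 milliseconds: the former is within human reaction range, the latter is not.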

Complex systems, designs

The prospect of a catastrophic failure in the Smart Grid can be glimpsed in the infamous lights-out incident at the Superdome in New Orleans on Feb. 4, 2013 (Palmer, 2013). The failure occurred in the newly refurbished and upgraded power distribution system, where a "design defect" in a device that has been in production and use since the "early 1990s" was identified as the proximal cause. The control components were apparently configured to function as specified in the documentation, and there were seemingly no conventional human operator errors. The "design defect" was undocumented and was observed in all similar devices installed there.

In short, the new system did exactly what was expected of it, and the lights went out. This is an example of what human factors specialists call latent human error: error in the system that remains after all the other sources of error have been removed. Error that was, it turns out, designed into the system. Latent human error emerges in those situations in which the dynamical properties of the system are not sufficiently understood by the operators, often because they are not even understood by the system’s designers.

Emergence in complex systems is not a new phenomenon. It reveals itself in numerous ways, but one of the most challenging is in the realm of latent human error. For example, HSI specialists talk about design-induced human error: error that occurs because some property of the system didn’t become evident until humans interacted with it in its natural (or simulated) real-world context.

Latent human errors are examples of emergent properties: those unexpected interactions (behaviors) among components and their environments. Cognition is an emergent property.

Model-based system engineering

In effect, the Smart Grid and other complex grids currently in development represent a new kind of cognitive system, one for which few design principles exist. Model-based system engineering (MBSE), an initiative of INCOSE, is really our best hope for identifying and predicting the behavior of the massive infrastructure systems under construction.

MBSE is the only approach appropriate to the issues at hand. The mathematical models and statistical data needed to predict the behavior of these new cognitive systems do not exist. The emergent properties of these systems, which arise from interactions, can only be exposed to observation and analysis through some sort of dynamical generation.

MBSE is not new. There are many research and test systems, some illustrated in Figure 1, being used in research programs looking at exactly these issues (Smullen and Harris, 1988; Jones, Plott, Jones, Olthoff, and Harris, 2010). But this approach is not yet used extensively in the design and development of components of the next generation of the Smart Grid infrastructure.

MBSE: Predictive limits

Recent research (Aledo, Martinez, and Valverde, 2013; Doyle and Csete, 2011) has raised the specter that even the biggest and best computer models have fundamental flaws that limit the ability to predict behavior. Research by several labs (cf Mortveit and Reidys, 2008) illustrates that even a very simple network can exhibit disjoint phase spaces. The mathematics needed to analyze and predict the behavior of even simple systems does not yet exist, let alone that for large complex systems.

In other words, the only way to find out is to generate the dynamics and observe what happens. More elegantly: simulate, and test the systems with human cognition in the loop.
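A toy example, in the spirit of the sequential dynamical systems literature cited above but not taken from it, shows why simulation is the practical recourse: three nodes on a ring, each applying a NOR rule to itself and its neighbors, trace out visibly different trajectories depending only on the update schedule.

```python
# Toy dynamical-network sketch (illustrative; not from the cited work):
# three nodes on a ring, each updating to NOR(self, left neighbor, right neighbor).
# Comparing a synchronous schedule with a sequential one shows how strongly the
# dynamics depend on details that are hard to predict without simulating.

def nor(*bits):
    return 0 if any(bits) else 1

def step_synchronous(state):
    n = len(state)
    return tuple(nor(state[i], state[(i - 1) % n], state[(i + 1) % n])
                 for i in range(n))

def step_sequential(state, order):
    s = list(state)
    n = len(s)
    for i in order:
        s[i] = nor(s[i], s[(i - 1) % n], s[(i + 1) % n])
    return tuple(s)

state_a = state_b = (0, 0, 0)
for t in range(6):
    print(t, "sync:", state_a, " seq(0,1,2):", state_b)
    state_a = step_synchronous(state_a)
    state_b = step_sequential(state_b, order=(0, 1, 2))
```

Running the sketch shows the synchronous schedule oscillating between (0,0,0) and (1,1,1), while the sequential schedule cycles through (0,0,0) and the three single-1 states; nothing in the identical local rules announces that difference in advance.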

Back to basics: IO

Figure 2a depicts a simple control system architecture. It has inputs (S) and outputs (R) and not much else. Early experimental psychologists (really early, 1800s early) invented the concept in Figure 2a as a way of thinking about human and animal behavior. They set about cataloging the patterns of relations among inputs (stimuli) and outputs (responses) in what became known as the S-R school of psychology, also known as behaviorism.

The system in Figure 2a responds to information from its environment and acts upon that environment. It is a component in a simple closed-loop control process. Yet closer examination of the process reveals that it is considerably more complex than it may appear.

In the 1930s, researchers began to argue that there must be some transformational process between the S and the R. This new school of psychology referred to the intervening processes as the "organismic" processes (hence the "O"), illustrated in Figure 2b. For many years there was a raging controversy as to whether S-R or S-O-R was the right model, but the controversy largely died with the advent of the digital computer and its rise as a metaphor for studying behavior. The S-O-R research perspective embodied in computer models of cognitive functions eventually provided impetus for the emergence of cognitive science, which asserts that something interesting must be going on between the S and the R in a complex system. That interesting process is induction.
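The S-R versus S-O-R distinction can be put loosely in software terms (an analogy of ours, not the article's figures): an S-R system is a stateless stimulus-to-response mapping, while an S-O-R system interposes an internal, history-dependent transformation, which is where the interesting processing lives.

```python
# Loose software analogy for Figures 2a and 2b (illustrative only).

def s_r_system(stimulus):
    """S-R: a fixed, stateless stimulus-to-response mapping."""
    return "withdraw" if stimulus == "hot" else "ignore"

class SORSystem:
    """S-O-R: the 'O' is an internal, history-dependent transformation
    between stimulus and response."""
    def __init__(self):
        self.history = []          # internal state the S-R view leaves out

    def respond(self, stimulus):
        self.history.append(stimulus)
        # The response depends on accumulated experience, not the stimulus alone.
        if self.history.count(stimulus) > 3:
            return "habituate"     # same stimulus, different response over time
        return s_r_system(stimulus)
```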

There are two criteria for "solving" an induction problem, and both presume that a solution can be generated and recognized:

  1. Can a found solution be confirmed through a deductive test?
  2. Is the found solution useful to the problem solver?

The first criterion is critical to mathematical induction. A solution is generated (often by guessing), and then the solution is tested by deduction. The second criterion is the essence of real-world induction. A solution is generated (by guessing, luck, genius, etc.), and tested by interaction in the real world. It is the confusion between these two criteria that has led many to attempt inappropriate automation; that is, using computers to do something they are incapable of. The history of so-called expert systems is littered with research successes that were, in fact, real-world failures (cf Harris and Helander, 1984; Harris and Owens, 1986).
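The first criterion is easy to mechanize once a candidate exists; what cannot be mechanized is producing the candidate. A minimal sketch (our illustration, with a deliberately finite check that is evidence rather than proof):

```python
# Generate-and-test sketch for criterion 1 (illustrative only).
# The inductive step is the guess; the deductive step is the mechanical check.

def guessed_formula(n: int) -> int:
    # Inductive leap (supplied by a human): the sum 1 + 2 + ... + n is n*(n+1)/2.
    return n * (n + 1) // 2

def deductive_check(limit: int = 1000) -> bool:
    # Deduction a machine excels at: verify the guess against the definition.
    # (A finite check only; a real proof would verify the base case and the
    # inductive step symbolically.)
    return all(guessed_formula(n) == sum(range(1, n + 1)) for n in range(limit))

print(deductive_check())   # True -- but the machine never produced the guess
```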

Examples of inherently inductive problems are: any form of diagnosis (medical, mechanical, financial, forensic, or software failure); sensor fusion and target recognition; any action selection (because it must of necessity be guided by goals); nearly anything that requires creativity, invention, or synthesis of a novel approach, or that relates to acting on goals.

Deduction, on the other hand, is the process of reasoning from the general to the specific. Deduction can be automated. In truth, computing as known in the automation world is the veritable embodiment of deduction.

No inductive automation

While there is a closed-form solution to each deduction problem, induction problems have no closed-form solutions. There is no such thing as a "correct" answer to a non-trivial induction problem in the sense that all observers will compute the same answer from the same evidence. The goodness of the result depends on the purpose of the induction and on the utility of the outcome: the result of an induction process is good if it effectively fulfills its purpose. That, in turn, depends on the intentionality of the control system in which the induction process is embedded. Intentionality, by its very nature, cannot be observed; it can only be inferred, that is, induced. This is the reason induction cannot, by itself, be automated.

Intentionality introduces a messy term into the concept of "correctness" in the context of process control. Denman (2013) and related work (cf Goodloe and Muñoz, 2013) seek to exploit automated theorem proving techniques to establish the correctness of models of non-polynomial dynamical systems. In terms that are relevant to their work, intentionality is an extra term in the system model that has an indeterminate impact on system behavior, and, as discussed above, it cannot be observed. Hence, the system equation can’t be reduced in a way that removes the intentionality term. Complex dynamical systems can and will enter disjoint phase spaces (Mortveit and Reidys, 2008).

In real-world process control applications, human intentionality acts as a quantum parameter that can change the system phase space. The implication: Any system with humans in it can be neither proven correct nor computed definitively. Since one can’t know if a real-world process control system model is "correct," the only recourse is to build it (or a surrogate for it) and iterate the design until it appears to do what you want. Hence, the requirement for MBSE.

Engineering process control systems

There is no such thing as a fully automated system. The human-system interface is fundamental to process control. Most system development efforts focus on two aspects of the interface:

  1. Trying to automate most/all of some complex function, and/or
  2. Focusing on "modalities" of the interface (visual display formatting, voice control, brain waves, etc.).

The key is to identify the part that cannot be automated (that is, the induction part), "program" humans to do that part, and program the machines to support them. The usual approach is the other way around: automate as much as you think you can, then program the humans to take up the slack. It’s the obsolete and wrong-headed "manual override" idea. INCOSE recognizes this conundrum, and the immense costs of failures in HSI. The nonprofit organization has established an HSI working group, included a section on HSI in its Systems Engineering Handbook, and is planning a "caucus" on the need for transforming systems engineering to address the issue.

Hybrid for appropriate automation

An approach is needed to guide process control engineering in the quest for appropriate automation. Sometimes insights can come from unlikely places. Early research in psychology may have something to offer here.

Figure 3 depicts a model of a closed-loop control system that embodies the first principles discussed. The ideas in Figure 3 are not entirely new, nor purely theoretical. It incorporates ideas such as search, proposed by Newell and Simon (1976) as a requisite for intelligence; an earlier version of the figure has been used as a reference design for military and commercial sensor management systems; and similar models abound in the human factors and HSI research literature (cf Harris, Ballard, Girard and Gluckman, 1993; Wickens and Carswell, 2007).

Note that the words "human," "software," and "hardware" do not appear in the figure. The architecture can be applied to any intelligent control system, whether it be purely silicon-based, mixed silicon-organic, or even purely organic. Systems that are organized as in Figure 3, and that comprise human and machine components, exhibit capabilities and limitations as a system that are not evident, and may not even exist, in either component alone, because they are emergent. Thus such systems are hybrid intelligence systems. Such systems can routinely "solve" induction problems.

This processor is canonical because it entails the essential components of a real-time, closed-loop control system capable of intelligent behavior; yet, it does not specify how the processes are implemented. As a simple rule of thumb, at least for the foreseeable future, the induction authority should be human, and the deductive engine should be a machine, but those allocations of function might change.
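That rule of thumb can be written down as an allocation of function. The sketch below is ours, not Figure 3: the deduction engine enumerates and scores candidate interpretations, and the induction authority, supplied here as a human-provided callback, commits to one.

```python
from typing import Callable, Dict, List

# Hedged sketch of the rule of thumb above (names are illustrative; the
# article's Figure 3 is the authoritative version). The deduction engine is a
# machine; the induction authority is, for now, a human-supplied callback.

class HybridController:
    def __init__(self,
                 deduction_engine: Callable[[List[float]], Dict[str, float]],
                 induction_authority: Callable[[Dict[str, float]], str]):
        self.deduce = deduction_engine        # machine: enumerate and score options
        self.induce = induction_authority     # human: commit to one interpretation

    def step(self, evidence: List[float]) -> str:
        options = self.deduce(evidence)       # deduction: automatable
        return self.induce(options)           # induction: not (yet) automatable
```

The point of writing it this way is that replacing the human callback with a machine one becomes an explicit, reviewable design decision rather than a default.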

IBM’s Watson system, for example, may be capable of induction (Fan, Ferrucci, Gondek and Kalyanpur, 2010). Watson is the first of an emerging generation of machine intelligence systems intended to operate in real-world environments to solve real-world problems. Yet such intelligent machines can’t operate in a completely automated fashion. Their recommended solutions to induction problems are just one more source of information for the human control authority. Watson is designed to be a team member, not the final decision-maker. Such machines must be integrated with human cognition in a coherent fashion to augment and enhance human cognitive performance. Hence, with the advent of Watson, new principles for integrating human and machine intelligence into a cohesive whole are needed. We trust that improved understanding of the fundamentals of induction and human cognition will contribute measurably to the next generation of control engineering.

What cannot be automated? From the perspective of first principles in control engineering, at least one answer is: Induction cannot be automated, at least not yet. Failure to understand the implications of that answer can be disastrous. New technology—IBM’s Watson being the progenitor—may force us to revisit this principle in the near future.

– Steven D. Harris is president of Rational, LLC. Jennifer McGovern Narkevicius, PhD, is the managing director of Jenius, LLC and co-chair of INCOSE’s Human Systems Integration Working Group. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering, mhoske@cfemedia.com.

Key concepts

  • Automation, in this context, concerns the implementation of the control process inside a machine.
  • Even with closed-loop control, humans are involved.
  • Inductive logic devices aren’t available in process control, yet.

Consider this

What efficiencies will be available when computing power and programming allow machine-based inductive reasoning with closed loop control?

ONLINE

www.incose.org 

This is a full-length version of an article scheduled to appear under the same headline in the March 2014 Control Engineering print and digital edition.

References

Aledo, J.A., Martinez, S., and Valverde, J.C. (2013). Parallel discrete dynamical systems on independent local functions. Journal of Computational and Applied Mathematics, 237, pp. 335-339.

Boy, G.A. (2013). Orchestrating Human Centered Design. London: Springer-Verlag.

Boyd, J.R. (1976). Destruction and creation. Unpublished lecture notes.

Becker, B., Greenberg, R.J., Haimovich, A.M., Parisi, M., and Harris, S. (1991). F-14 sensor fusion and integrated mode requirements. Technical Proceedings of the 1991 Joint Service Data Fusion Symposium, pp. 629-647. Laurel, MD: Johns Hopkins University Applied Physics Laboratory.

Denman, W. (2013). MetiTarski and the Verification of Dynamical Systems. Presentation at National Institute of Aerospace / Langley Research Center, 8 July.

Doyle, J.C., and Csete, M. Architecture, constraints and behavior. PNAS, vol. 108, 13 Sep 2011, pp. 15624-15630.

Fan, J., Ferrucci, D., Gondek, D., and Kalyanpur, A. (2010). PRISMATIC: Inducing knowledge from a large scale lexicalized relation resource. Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pp. 122-127. Los Angeles, CA: Association for Computational Linguistics.

Fitts, P.M. (1951). Human Engineering for an Effective Air Navigation and Traffic Control System. Washington, DC: National Research Council.

Goodloe, A., and Muñoz, C. (2013). Compositional Verification of a Communication Protocol for a Remotely Operated Aircraft, Science of Computer Programming, Volume 78, Issue 7, pp. 813-827.

General Accounting Office (2003). Military Personnel: Navy Actions Needed to Optimize Ship Crew Size and Reduce Total Ownership Costs, GAO-03-520. Washington, DC.

Harris, S.D., and Helander, M.G. Machine intelligence in real systems: Some ergonomics issues. In G. Salvendy (Ed.), Proceedings of the First International Conference on Human-Computer Interaction, Amsterdam: Elsevier, 1984.

Harris, S.D., and Owens, J.M. Some critical factors that limit the effectiveness of machine intelligence in military systems applications. Journal of Computer-Based Instruction, 13-2, 1986, pp. 30-33.

Harris, S.D., Ballard, L., Girard, R. and Gluckman, J. (1993). Sensor fusion and situation assessment: Future F/A-18. In Levis A. and Levis, I. S. (Eds.), Science of Command and Control: Part III Coping with Change. Fairfax, VA: AFCEA International Press.

Haskins (2011). INCOSE Systems Engineering Handbook v. 3.2. San Diego, CA: International Council on Systems Engineering.

Helbing, D. Globally networked risks and how to respond. Nature, 497, pp. 51-59, 2 May 2013.

Jones, M., Plott, C., Jones, M., Olthoff, T., and Harris, S. The Cab Technology Integration Lab: A locomotive simulator for human factors research. In Proceedings of Human Factors and Ergonomics Society 54th Annual Meeting. Human Factors and Ergonomics Society, Oct 27, 2010.

Mortveit, H., and Reidys, C. (2008). An Introduction to Sequential Dynamical Systems. New York: Springer.

Newell, A., and Simon, H.A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), pp. 113-126.

Palmer, J.A. Superdome partial power outage. Report prepared for Entergy New Orleans, Inc., SMG, Louisiana Stadium and Exposition District. Louisville, KY: Palmer Engineering and Forensics, 21 Mar 2013.

Reason, J. (1990). Human Error, Cambridge: Cambridge University Press.

Smullen, R.R., and Harris, S.D. (1988). Air Combat Environment Test and Evaluation Facility (ACETEF). In North Atlantic Treaty Organization Advisory Group for Aerospace Research and Development (AGARD) Conference Proceedings No. 452, "Flight Test Techniques." Papers presented at the Flight Mechanics Panel Symposium, Edwards Air Force Base, CA, 17-20 October 1988.

Tolman, E. C., and Brunswik, E. (1935). The Organism and the Causal Texture of the Environment. Psychological Review, 42, pp. 43-77.

Wickens, C.D., and Carswell, C.M. (2007). Human Information Processing. In G. Salvendy (Ed.), Handbook of Human Factors. New York: John Wiley & Sons.