Appropriate automation: Human system dynamics for control systems
Back to basics: IO
Figure 2a depicts a simple control system architecture. It has inputs (S) and outputs (R) and not much else. Early experimental psychologists (really early, 1800s early) invented the concept in Figure 2a as a way of thinking about human and animal behavior. They set about cataloging the patterns of relations among inputs (stimuli) and outputs (responses) in what became known as the S-R school of psychology, also known as behaviorism.
The system in Figure 2a responds to information from its environment and acts upon that environment. It is a component in a simple closed-loop control process. Yet, closer examination of the process reveals that it is considerably more complex than it may appear. In the 1930s, researchers began to argue that there must be some transformational process between the S and the R. This new school of psychology referred to the intervening processes as the "organismic" processes (hence the "O"), illustrated in Figure 2b. For many years there was a raging controversy as to whether S-R or S-O-R was the right model, but this controversy largely died with the advent of the digital computer and its rise as a metaphor for studying behavior. The S-O-R research perspective embodied in computer models of cognitive functions eventually provided impetus for the emergence of cognitive science, which asserts that something interesting must be going on between the S and the R in a complex system. That interesting process is induction.
There are two criteria for "solving" an induction problem, and both presume that a solution can be generated and recognized:
- Can a found solution be confirmed through a deductive test?
- Is the found solution useful to the problem solver?
The first criterion is critical to mathematical induction. A solution is generated (often by guessing), and then tested by deduction. The second criterion is the essence of real-world induction. A solution is generated (by guessing, luck, genius, etc.) and tested by interaction with the real world. Confusion between these two criteria has led many to attempt inappropriate automation; that is, to use computers to do something they are incapable of doing. The history of so-called expert systems is littered with research successes that were, in fact, real-world failures (cf. Harris and Helander, 1984; Harris and Owens, 1986).
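The first criterion can be sketched in code. The following is an illustrative Python example (the names `conjecture`, `poly_eval`, and `shift` are mine, not from the article): a closed form for 1 + 2 + ... + n is generated by guessing, then confirmed by a deductive test, with the inductive step checked as an exact polynomial identity rather than by sampling values.

```python
from fractions import Fraction
from math import comb

# Step 1 -- induction: guess that sum(1..n) = n/2 + n^2/2, represented
# as exact polynomial coefficients [c0, c1, c2].
conjecture = [Fraction(0), Fraction(1, 2), Fraction(1, 2)]

def poly_eval(p, n):
    """Evaluate a coefficient list p at integer n."""
    return sum(c * n**i for i, c in enumerate(p))

def shift(p):
    """Coefficients of p(n+1), via binomial expansion of each term."""
    out = [Fraction(0)] * len(p)
    for i, c in enumerate(p):
        for k in range(i + 1):
            out[k] += c * comb(i, k)
    return out

# Step 2 -- deduction: mathematical induction reduces the claim to two
# exact checks.  Base case: the formula gives 1 at n = 1.  Inductive
# step: p(n) + (n + 1) = p(n + 1) is a polynomial identity, so
# comparing coefficients proves it for all n at once.
base_case = poly_eval(conjecture, 1) == 1
lhs = [conjecture[0] + 1, conjecture[1] + 1, conjecture[2]]  # p(n) + (n + 1)
step = lhs == shift(conjecture)

print(base_case and step)  # True: the guessed formula is deductively confirmed
```

The guessing in step 1 is the part a computer cannot do in general; the checking in step 2 is pure deduction and automates cleanly.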
Examples of inherently inductive problems include:
- Any form of diagnosis (medical, mechanical, financial, forensic, or software failure)
- Sensor fusion and target recognition
- Any action selection (because it must of necessity be guided by goals)
- Nearly anything that requires creativity, invention, or synthesis of a novel approach, or that relates to acting on goals.
Deduction, on the other hand, is the process of reasoning from the general to the specific. Deduction can be automated. In truth, computing as known in the automation world is the veritable embodiment of deduction.
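To make the contrast concrete, here is a minimal forward-chaining sketch in Python (the rule and fact names are invented for illustration): deduction is just the mechanical application of general rules to specific facts until nothing new follows, which is exactly what computers do well.

```python
# Each rule is (set of premises, conclusion): if all premises are known
# facts, the conclusion may be deduced (modus ponens).
rules = [
    ({"valve_open", "pump_on"}, "flow_present"),
    ({"flow_present", "heater_on"}, "temp_rising"),
]
facts = {"valve_open", "pump_on", "heater_on"}

# Forward chaining: apply rules repeatedly until no new fact is added.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # general rule -> specific new fact
            changed = True

print("temp_rising" in facts)  # True
```

Nothing in this loop guesses or invents; it only grinds out consequences already implicit in the rules. That is why deduction automates and induction does not.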
No inductive automation
While every deduction problem has a closed-form solution, induction problems have none. There is no such thing as a "correct" answer to a non-trivial induction problem in the sense that all observers will compute the same answer from the same evidence. The goodness of a result depends instead on the purpose of the induction and on the utility of the result: the result is good if it effectively fulfills that purpose, which in turn depends on the intentionality of the control system in which the induction process is embedded. Intentionality, by its very nature, cannot be observed; it can only be inferred; that is, induced. This is why induction cannot, by itself, be automated.
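The underdetermination of induction can be demonstrated in a few lines. In this Python sketch (an illustration of my own, not from the article), two different "laws" fit the evidence 1, 2, 4, 8, 16 perfectly, yet they predict different values for the very next observation. The second rule counts the regions formed by chords joining n points on a circle, a classic counterexample to naive pattern-guessing.

```python
from math import comb

def doubling(n):
    """Hypothesis A: each term doubles the last, i.e. 2^(n-1)."""
    return 2 ** (n - 1)

def circle_regions(n):
    """Hypothesis B: regions cut by chords of n circle points,
    1 + C(n,2) + C(n,4)."""
    return 1 + comb(n, 2) + comb(n, 4)

evidence = [1, 2, 4, 8, 16]
fits_a = [doubling(n) for n in range(1, 6)] == evidence
fits_b = [circle_regions(n) for n in range(1, 6)] == evidence

print(fits_a, fits_b)                  # True True -- both fit all the data
print(doubling(6), circle_regions(6))  # 32 31 -- same evidence, different futures
```

No amount of computation over the five data points can decide between the hypotheses; only the purpose the induction serves can.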
Intentionality introduces a messy term into the concept of "correctness" in the context of process control. Denman (2013) and related work (cf Goodloe and Muñoz, 2013) seek to exploit automated theorem proving techniques to establish the correctness of models of non-polynomial dynamical systems. In terms that are relevant to their work, intentionality is an extra term in the system model that has an indeterminate impact on system behavior, and, as we discussed above, it cannot be observed. Hence, the system equation can't be reduced in a way that removes the intentionality term. Complex dynamical systems can and will enter disjoint phase spaces (Mortveit et al., 2008).
In real-world process control applications, human intentionality acts as a quantum parameter that can change the system phase space. The implication: Any system with humans in it can be neither proven correct nor computed definitively. Since one can't know if a real-world process control system model is "correct," the only recourse is to build it (or a surrogate for it) and iterate the design until it appears to do what you want. Hence the requirement for model-based systems engineering (MBSE).
Engineering process control systems
There is no such thing as a fully automated system. The human-system interface is fundamental to process control. Most system development efforts focus on two aspects of the interface:
- Trying to automate most/all of some complex function, and/or
- Focusing on "modalities" of the interface (visual display formatting, voice control, brain waves, etc.).
The key is to identify the part that cannot be automated (that is, the induction part), "program" humans to do that part, and program your machines to support them. The usual approach is the other way around: automate as much as you think you can, then program the humans to take up the slack. It's the obsolete and wrong-headed "manual override" idea. INCOSE recognizes this conundrum, and the immense costs of failures in human-systems integration (HSI). The nonprofit organization has established an HSI working group, included a section on HSI in its Systems Engineering Handbook, and is planning a "caucus" on the need for transforming systems engineering to address the issue.