Benefits of digitizing reality for workers in manufacturing

Digitizing reality is now possible for workers thanks to technology advances such as the Internet of Things (IoT). This new reality allows workers to benefit from augmented reality (AR), mixed reality (MR) and virtual reality (VR) to solve old problems in new and better ways.

By Michael D. Thomas July 10, 2019

Tools and machines have been central to workers' realities through all the industrial revolutions. However, large portions of a worker's reality only recently could be digitized with Internet of Things (IoT) devices and approaches. In 2015, Henning Kagermann, former CEO of SAP AG, argued that this "digitization – the continuing convergence of the real and the virtual worlds – will be the main driver of innovation and change in all sectors of our economy."[1]

This simple act of creating digital streams produces information that can be expressed in many different ways, on many different types of materials, and in many different systems.[2]

Such reality presentation technologies include the eXtended Reality (XR) family of technologies[3] – augmented reality (AR), mixed reality (MR) and virtual reality (VR) – as well as more mature and accepted technologies, such as smartphones, tablets, and PC flatscreens. When combined with the IoT, analytics and artificial intelligence (AI), applications can be created to aid workers by making their realities more intelligent.

Intelligent reality for manufacturing

An intelligent reality is defined as a technologically enhanced reality that aids human cognitive performance and judgment. As compared to the base reality, an intelligent reality can have much greater dimensionality, reduced occlusion, transcendence of distance, better guidance, and improved communication with other actors. While finance and cybersecurity domains may be considered, the focus of this article is on intelligent realities based on physical realities and fed by the IoT.

For example, consider a technician wearing an AR head-mounted display (HMD) who, while looking at a machine, can see its service history along with predictions of future failures. This gives the worker a view along the fourth dimension, time, both backwards and forwards. Instead of having to take the machine apart, the worker can see an IoT-driven MR rendering projected on the outside casing. By glancing away from the machine, the technician can see a virtual rendering of the operations of the same type of machine at a distant location.

The technician can interface with artificial and human remote experts about next steps, which could include the expert driving virtual overlays of the AR view. The worker can communicate with appropriate management systems through the same HMD without having to use a phone or laptop. As a wearable computer, the HMD brings distant resources into the worker’s operational reality.

An intelligent reality may be near a worker, like a machine on a factory floor. It also might be halfway around the world, like a factory understood by the user through a 3-D model. AR or VR headsets may be involved, but do not have to be – smartphone screens or desktop flatscreens may be a better option. The worker may be mobile and use an AR head-mounted display or smartphone, or may be stationed in a command center at company headquarters. They may be observing a reality in real time, or performing a data-driven review of a past event. In all cases, though, the context dominates – both visually and in the design of the presentation.

Intelligent reality can be achieved today with off-the-shelf technologies spanning IoT, analytics, XR and more traditional user interface technologies. Understanding how these pieces fit together can help decision makers and architects navigate the expansive terrain of technologies and enable intelligent realities for workers.

Comparing the XR space with more traditional mobile and desktop flatscreens leads to consideration of intelligent reality architecture and the development of intelligent reality applications. Specific use cases then suggest combinations of reality presentation technologies, IoT and AI.

Reality-virtuality continuum, modern eXtended reality

In 1995, Paul Milgram et al. published "Augmented Reality: A class of displays on the reality-virtuality continuum," which introduced the reality-virtuality continuum.[4] This article discusses the current state of XR and the role of mobile and stationary flatscreens within that continuum.

The user starts with a normal view of physical surroundings and can progress to a completely digital view. In between the extremes is MR, which is the mixing of the user’s physical reality with one or more digital realities. MR assumes an AR device is capable of stereoscopic rendering of dynamic 3-D scenes on top of a physical view of the world.

A virtual environment is completely digital, but not necessarily completely immersive. Options include the immersive experience of a VR HMD and large flatscreens not worn by the user. Both VR HMDs and virtual environments rendered on flatscreens can provide a user with a dynamic, real-time 3-D rendering of a remote or abstract 3-D reality.

AR does not exclude mobile flatscreens. In 1995, the terms "smartphone" and "tablet" didn't exist, but Milgram and his co-authors described monitor-based (non-immersive) video displays – that is, window-on-the-world (WoW) displays – upon which computer-generated images are electronically or digitally overlaid.

Intelligent reality architecture

An architecture for an intelligent reality should be centered on aiding a worker's cognition and performance. For workers in an IoT-enabled reality, the cornerstone of intelligent reality architecture is the integration and sense-making of raw IoT data. With the data and analytical foundations in place, an architectural view of the reality presentation technologies supports making the best tactical last-mile user interface (UI) decisions for rendering to workers.

IoT data pipeline

A streaming analytics engine is necessary for real-time perception.[5] It analyzes data streams in motion as the atomic events of the stream pass by. In addition to applying analytical methods, it can provide inferences derived from machine learning (ML) models and contribute to the training of such models.
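The event-at-a-time pattern can be sketched in a few lines of Python. This is a minimal illustration rather than a production engine; the sensor field name, window size and 3-sigma threshold are all hypothetical:

```python
from collections import deque

class StreamingAnalyzer:
    """Minimal sketch of streaming analytics: score each event as it
    passes by, against a sliding window of recent history."""

    def __init__(self, window_size=100):
        self.window = deque(maxlen=window_size)  # bounded history

    def process(self, event):
        """Return True if the event's reading is anomalous relative
        to the current window, then absorb the event."""
        value = event["temperature_c"]  # hypothetical sensor field
        anomalous = False
        if len(self.window) >= 10:  # need some history before scoring
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) > 3 * std
        self.window.append(value)
        return anomalous

# Ten ordinary readings, then a spike that the engine flags in flight.
analyzer = StreamingAnalyzer()
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 19.8, 20.0, 20.2, 20.1, 95.0]
flags = [analyzer.process({"temperature_c": t}) for t in readings]
```

A real deployment would replace the 3-sigma test with inferences from trained ML models, as the text describes, but the stream-in, score-per-event shape stays the same.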

General model of device and reality interaction

On the UI front, intelligent reality has been enabled by wearable and portable computers, including XR HMDs and smartphones, and by high-performance graphics that can faithfully render realities. While HMDs represent an important shift in computing, they are still wearables that may not be acceptable for many users and use cases. The following architectural view deliberately de-emphasizes HMDs in intelligent reality architecture.

The general concept of reality includes physical and abstract realities. A machine is a physical reality while the supply chain that created the machine is an abstract reality derived from data – a data reality. Data realities, at the most abstract, can be de-coupled from any physical reality. For example, large volumes of live streaming data from a commodities market could be used to form a data reality, which a user could explore in VR.

Due to the see-through nature of AR devices, proximate physical reality is always part of an AR experience. However, an implementor may choose an MR device for a use case that has nothing to do with the proximate physical reality – for example, rendering a 3-D model of a supply chain. They may make this choice because users find MR more comfortable than VR, since they remain aware of their physical surroundings. Thus, the reality presented in MR could be remote or local.

Digital twin overlay

A good digital twin takes information about the design, production and operational life of an asset and virtualizes it into a digital asset. The twin can be tested and modified in ways that would never be applied to an operating physical asset. Instead of one expensive crash test of a car, millions of crashes can be performed virtually. Rather than a couple of turns around a test track, a car can be virtually driven for millions of miles across multiple tests with different service histories. Such tests can feed ML neural networks, which are then queried when servicing the real asset.

With an intelligent reality application, the digital twin can be overlaid onto the physical twin. When a bus rolls into the garage, a fleet manager can view important output from the bus's digital twin as an AR overlay, such as an alert because the bus is overdue for an oil change. The real power of the digital twin comes from more nuanced cases that aren't simple violations of established single-dimensional rules. Perhaps the bus is within the accepted ranges across several aspects of maintenance, but the digital twin sees that a combination of near violations greatly increases the risk of a mission-critical failure. The AR device can communicate this information to the manager, who can act appropriately.
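The difference between single-dimensional rules and a composite judgment can be sketched as follows. The maintenance metrics, limits and the 0.9 alert threshold are invented for illustration, and a real digital twin would use a trained model rather than a simple average of limit ratios:

```python
def rule_check(metrics, limits):
    """Single-dimension rule check: flag only outright violations."""
    return [name for name, value in metrics.items() if value > limits[name]]

def combined_risk(metrics, limits):
    """Illustrative composite score: average how close each metric
    sits to its limit. A trained model would replace this average."""
    ratios = [metrics[name] / limits[name] for name in metrics]
    return sum(ratios) / len(ratios)

# Hypothetical bus readings: every metric is within its own limit...
metrics = {"miles_since_oil": 4800, "brake_wear_pct": 72, "coolant_hours": 950}
limits  = {"miles_since_oil": 5000, "brake_wear_pct": 80, "coolant_hours": 1000}

violations = rule_check(metrics, limits)   # no single rule fires
risk = combined_risk(metrics, limits)      # ...but everything is near its limit
alert = risk > 0.9                         # composite alert for the AR overlay
```

Here no individual rule fires, yet the composite score crosses the alert threshold, which is exactly the "combination of near violations" case the overlay is meant to surface.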

Feeding AI from XR devices

When a factory deploys a thousand AR HMDs to workers, it is also deploying at least a thousand head-mounted cameras. Those cameras are well-positioned to provide a rich set of video content, which can be streamed to ML and other analytical models. In addition to video from the cameras, HMDs can transmit precise information about the position and orientation of the wearer's head.

For manufacturing, an AR-enabled workplace could generate ML models trained on head position, gaze, placement of components in the workspace, and quality outcomes. Once trained, such a model could detect small movements and practices that lead to poor quality outcomes and immediately suggest better practices through the HMD. Such learning can be deployed back into workers' intelligent realities.
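As a rough sketch of the idea, features derived from HMD telemetry can be paired with quality outcomes and used to flag practices associated with defects. The feature names, data values and nearest-centroid "model" below are purely illustrative stand-ins for a real ML pipeline:

```python
# Each record: (head_pitch_deg, gaze_dwell_s, part_offset_cm) -> 1 pass / 0 fail.
# All values are hypothetical telemetry, for illustration only.
training = [
    ((12.0, 1.8, 0.5), 1), ((10.5, 2.1, 0.4), 1), ((11.2, 1.9, 0.6), 1),
    ((35.0, 0.4, 3.2), 0), ((32.5, 0.6, 2.8), 0), ((38.0, 0.3, 3.5), 0),
]

def centroid(rows):
    """Average each feature across a set of records."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(3))

# Nearest-centroid "model": one prototype per outcome class.
good = centroid([f for f, label in training if label == 1])
bad  = centroid([f for f, label in training if label == 0])

def predict(features):
    """1 = resembles good-outcome practice; 0 = flag for coaching."""
    d_good = sum((a - b) ** 2 for a, b in zip(features, good))
    d_bad  = sum((a - b) ** 2 for a, b in zip(features, bad))
    return 1 if d_good <= d_bad else 0

# Live telemetry resembling the poor-outcome pattern triggers a suggestion.
flag_worker = predict((34.0, 0.5, 3.0)) == 0
```

The point is the feedback loop: telemetry captured by the HMD trains a model, and the model's predictions flow back to the same HMD as in-context suggestions.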

While VR doesn’t offer the same connection to the physical world, a VR HMD can communicate position and orientation of the worker’s head. Eye tracking also is making it into XR products. Such information can be used to improve the simulation and strengthen the understanding of how humans would react in the physical analog of the virtual environment.

Smart facility maintenance, digital twin, AI

As sensor-laden facilities become smarter, digital twins can be developed to aid in the care of smart facilities. Ideally, a digital twin for a facility would include operational models of machine behavior provided by manufacturers. When manufacturers deploy machines that "phone home" with operational data, they can build a strong digital twin with ML based on many cases of production usage.

An individual facility can combine those digital twins into a composite digital twin for the entire facility. The facility digital twin would also absorb building plans, such as computer-aided design (CAD) drawings, and encapsulate all of it into a game engine project that reaches out to the included models. The game engine app front-ends the digital twin for human consumption over AR or VR.

For performing the maintenance work, the digital twin provides a powerful asset. An AI expert could either completely replace a human expert, or, more likely, significantly aid the human expert.

For the field technician, a handheld computer such as a smartphone or tablet with an AR app remains a valid choice, but an HMD is a much more plausible fit than in the previous example. For many field technicians, being heads-up and hands-free is advantageous enough to justify the cost of an AR HMD. An HVAC technician, for example, can enter a machine room and fix an air handler without ever consulting a manual, occupying their hands with a tablet or smartphone, or carrying a checklist beforehand. Before leaving, the HMD could even direct them to address several other issues so they would not have to return to the room for a while.

Combining XR, AI and IoT should lead to greater penetration of data science into the world of work. When these technologies operationalize analytics, cost savings can be realized across an enterprise and manufacturing can improve in many ways.

Michael D. Thomas, senior systems architect, SAS Institute, and Industrial Internet Consortium (IIC) Member. The Industrial Internet Consortium is a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media.


Keywords: augmented reality, mixed reality, virtual reality, eXtended reality

The idea of eXtended reality (XR) is that devices can digitize a worker's reality to provide a new perspective on the surrounding world and help workers see solutions they couldn't see in the "real" world.

An architecture for an intelligent reality should be centered on aiding a worker’s cognition and performance.

Taken together, XR, artificial intelligence (AI) and the Internet of Things (IoT) can enhance manufacturing and provide new solutions to old problems.

Consider this

What other benefits could XR provide and how would it change manufacturing operations?



[1] H. Kagermann, "Change Through Digitization – Value Creation in the Age of Industry 4.0," Management of Permanent Change, p. 23, 2015.

[2] J. Brennen and D. Kreiss, "Digitization," The International Encyclopedia of Communication Theory and Philosophy, p. 557, 2016.

[3] C. Fink, "War of AR/VR/MR/XR Words," Forbes, Oct. 2017.

[4] P. Milgram et al., "Augmented Reality: A class of displays on the reality-virtuality continuum," Proceedings Volume 2351, Telemanipulator and Telepresence Technologies, Dec. 1995.

[5] B. Klenz, "How to Use Streaming Analytics to Create a Real-Time Digital Twin," SAS Global Forum 2018, Mar. 2018.

[6] L. Belli et al., "A Scalable Big Stream Cloud Architecture for the Internet of Things," International Journal of Systems and Service-Oriented Engineering, 5(4), 26-53, October-December 2015.

[7] R. Hackathorn and T. Margolis, "Immersive Analytics: Building Virtual Data Worlds for Collaborative Decision Support," IEEE VR2016 Workshop, March 2016.

Original content can be found at Control Engineering.
