Remote intelligence enables many-to-many solutions
As remote automation increasingly involves multiple data producers and many distributed consumers, intelligence at the edge becomes essential.
Significant and permanent changes to the workforce are requiring companies to accelerate adoption of remote access technologies in general and improved remote intelligence in particular. Traditional remote solutions were adopted out of necessity due to geographically distributed industrial assets, but today the workforce is increasingly geographically dispersed due to the social and business impacts of COVID-19.
Employers are faced with difficulties recruiting qualified workers or finding people willing to relocate to any specific location, regardless of whether it is in a city or outside heavily populated areas. Instead of the classic approach of bringing workers to the work, it is now necessary to bring the work to the workers, wherever they are located.
This paradigm shift is enabled by remote intelligence: advanced computing and communication systems installed in the field, which are key for connecting many assets with many workers.
For these reasons, end users are finding it important to enable remote access to automation systems, and accomplishing this requires a greater degree of edge intelligence in those remote systems. Intelligent edge solutions must communicate with industrial smart devices, pre-process data, and provide connectivity to higher-level computing and visualization systems, often while overcoming bandwidth and network reliability constraints (Figure 1).
Creating practical remote solutions with acceptable performance is difficult, though not impossible, with traditional technologies. A better approach combines modern edge-capable software and computing platforms to accelerate automation and connectivity by delivering comprehensive remote intelligence.
Many-to-many remote asset connections
Traditional remote automation implementations can often be thought of as “many-to-one” propositions, with many geographically distributed assets connected to one centralized location. For example, numerous oil wells or pipeline pumping stations may be spread over great distances but integrated with a single main control room, or a fleet of automated machines installed across the globe may report back to the original equipment manufacturer (OEM) over virtual private network (VPN) connections so the OEM can monitor performance and provide support.
Today, remote automation needs have shifted toward “many-to-many” scenarios. There was already a drive for technologies enabling employees to work from home, or anywhere, to minimize the risks, costs, and delays associated with travel. The COVID-19 pandemic and the resulting “remote work” and “work from home” movements have only accelerated this trend.
Beyond the evolving business drivers for remote automation, there are ever-present technical challenges that must be addressed:
- Performance: Connectivity calls for acceptable bandwidth (for large data volumes), responsiveness (low latency), and uptime (minimal outages).
- Context: Data from many disparate sources must carry contextualization within the communications, so higher-level applications can identify the source and meaning of each value, enabling them to sort, filter, assign, display, and otherwise act on the data as needed.
- Pre-processing: The growing number of sensors and rapid poll rates mean very high-resolution data is available. This data must be pre-processed into acceptable averages to reduce network traffic and keep storage from escalating from a manageable data “lake” into an unwieldy data “ocean.”
- Scalability: Solutions need to work for both small and large controller sizes, and for one or many controllers. It is important to monitor individual remote equipment, but it can be even more valuable to analyze and compare a fleet of assets.
- Computing and control: Some remote projects are justified with basic monitoring, but more often there must be significant computing and control capabilities at the edge.
- Cybersecurity: Most remote solutions leverage the internet in some manner, so cybersecurity becomes a key concern. Some projects involve the cloud and a bidirectional path (data out, response/control back), but even monitor-only projects are vulnerable, and any man-in-the-middle attack is problematic.
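The context and pre-processing requirements above can be sketched together. The following is a minimal illustration, not any vendor's implementation: high-resolution readings are averaged per time window at the edge, then wrapped in a payload carrying the metadata (site, asset, tag, units) a higher-level application would need. All names and values are hypothetical.

```python
import json
import statistics
from datetime import datetime, timezone

def summarize(readings, window=60):
    """Pre-process: reduce high-resolution readings to one average per window."""
    return [
        statistics.mean(readings[i:i + window])
        for i in range(0, len(readings), window)
    ]

def contextualize(site, asset, tag, units, values):
    """Context: wrap pre-processed values with the metadata that lets
    higher-level applications identify their source and meaning."""
    return json.dumps({
        "site": site,          # hypothetical site identifier
        "asset": asset,        # hypothetical asset identifier
        "tag": tag,
        "units": units,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "values": values,
    })

# Two minutes of 1 Hz pressure samples, averaged per minute before upload.
raw = [101.3 + 0.01 * i for i in range(120)]
payload = contextualize("plant-7", "pump-12", "discharge_pressure",
                        "kPa", summarize(raw))
```

Sending two averages instead of 120 raw samples cuts traffic by roughly 60:1 while the attached metadata lets any consumer sort, filter, and display the values without prior knowledge of the sender.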
Autonomy at the edge is necessary to address each of these requirements and challenges for remote intelligence.
Establishing remote intelligence
Developing modern remote intelligence often requires more than can be expected from traditional programmable logic controllers (PLCs), programmable automation controllers (PACs), and human-machine interfaces (HMIs). These devices will continue to operate in legacy applications and are a good fit for many new projects, but a range of strategies is needed to address various use cases, such as:
- Upgrading existing PLC installations with improved monitoring.
- For new or existing manufacturing, adding closed-loop, real-time supervisory control.
- Developing higher-level analytics.
For existing installations with many PLC-operated pieces of equipment arranged to produce in parallel or as a sequential production line, it is useful to install a small supervisory control and data acquisition (SCADA) system as a line supervisor monitoring all points of interest. The right software suite, with appropriate connectivity, will run autonomously in a facility on an industrial PC (IPC), and enable remote users to log into it with appropriate credentials to monitor operations and make manual adjustments if needed.
In many cases, it is desirable to have this remote monitoring and adjustment capability supplemented with more of a closed-loop control scheme. For this role, an edge controller is most appropriate because it provides the general-purpose computing needed for data collection, processing, visualization, and sharing, supplemented by the real-time deterministic control necessary to execute control logic in the most reliable manner.
An edge controller compresses the hardware architecture because analytics are performed at the edge of the process (and may be integrated with cloud resources), yet can be applied for autonomous control (Figure 2).
Higher-level analytics are also often needed at the edge for a variety of reasons. Operational improvements, especially those based on analyses of widely distributed assets, are possible when data is collected, stored, processed, analyzed, and visualized both locally and in the cloud, with results returned to edge-located systems for implementation.
Application example: Energy pricing, costs
For example, when production demands and capabilities at various sites are analyzed in concert with energy pricing, it can be determined what times of day or night a production line should run to minimize operating costs. For this role, an IPC, or sometimes an edge controller, running software for edge communication provides the connectivity fabric required for an IoT implementation (Figure 3).
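The scheduling logic described above can be sketched simply. This is a hypothetical illustration, not a product feature: given day-ahead hourly energy prices and a required run length, scan every contiguous window to find the cheapest time to run the line. The price figures are invented for the example.

```python
def cheapest_run_window(hourly_prices, run_hours):
    """Return (start_hour, total_energy_price) of the cheapest
    contiguous window in which to schedule a production run."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(hourly_prices) - run_hours + 1):
        cost = sum(hourly_prices[start:start + run_hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Illustrative day-ahead prices in $/kWh: cheap overnight,
# expensive during the day, moderate in the evening.
prices = [0.08] * 6 + [0.15] * 12 + [0.10] * 6
start, cost = cheapest_run_window(prices, 4)  # 4-hour production run
```

With these assumed prices, the overnight hours win; in practice the same comparison would also weigh production demands and capabilities at each site, as the article notes.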
Application example: Applying remote intelligence
One consumer goods manufacturer, with operations worldwide, already had many production lines in service. Those involved knew there were production inefficiencies on the lines, and suspected there were opportunities for improving energy usage, but they needed to obtain and analyze data to prove it. The manufacturer needed to evaluate and optimize production speeds, determine and resolve stoppage root-causes, detect air leaks, and perform other tasks. These are the types of low-hanging fruit that can be harvested by remote intelligence initiatives.
At one site, the company installed an IPC with an edge/IoT software suite to collect pertinent sensor data, such as airflow, power, and other equipment status information. The data was stored locally in full resolution, and then pre-processed so it could efficiently be made available in the cloud infrastructure. The data was aggregated across multiple machines and production lines to find opportunities for improvements. The results of these analyses provided the company with the information needed to manually make operational changes.
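The kind of fleet-wide aggregation described above can be sketched as follows. This is a hypothetical example, not the manufacturer's actual system: stoppage events from several machines are combined and root causes ranked by total lost minutes, surfacing which problem is worth fixing first. Machine names, causes, and durations are all invented.

```python
from collections import Counter

def rank_stoppage_causes(event_logs):
    """Aggregate stoppage events from many machines and rank
    root causes by total lost minutes across the fleet."""
    lost = Counter()
    for machine, events in event_logs.items():
        for cause, minutes in events:
            lost[cause] += minutes
    return lost.most_common()  # most costly cause first

# Hypothetical stoppage logs: {machine: [(root_cause, lost_minutes), ...]}
logs = {
    "line1-filler": [("jam", 12), ("air_leak", 5)],
    "line2-filler": [("jam", 30), ("sensor_fault", 8)],
    "line1-capper": [("air_leak", 22)],
}
ranking = rank_stoppage_causes(logs)
```

A cause that looks minor on any single machine (such as the air leaks here) can rank high once totals are compared across lines, which is exactly the insight per-machine monitoring alone would miss.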
The initial implementation was proven out at one site, but the concept can readily be installed at other sites and scaled up so analyses can be performed on operations worldwide. Note that this was not a closed-loop solution because operators still needed to make some decisions. However, automation improvements often do not require full autonomy; the best solutions frequently result from providing better information to human operators so they can act to drive overall efficiency.
Remote intelligence as a platform
For many manufacturers, “remote” can mean operating across a building or around the world, and often network connectivity and edge processing capability are the limiting factors. The multiple users and computing resources that need to connect with remote sites can be located anywhere. In years past, remote connections were often many-to-one, but today many-to-many is much more the norm and is driving the need for better technologies.
Traditional PLCs and HMIs can be used for some forms of remote connectivity, but the most capable and secure solutions require improved remote intelligence in the form of edge controllers and IPCs running modern edge-capable software. Today, software and computing platforms can help users create new and retrofit solutions for addressing remote connectivity, monitoring, and control—addressing the challenges of many-to-many implementations.
Original content can be found at Control Engineering.