Across a range of industries, and specifically in the industrial automation vertical, there is broad agreement that the deployment of modern computing resources with cloud-native models of software life-cycle management will become ever more pervasive. Placing virtualized computing resources nearer to where multiple streams of data are created is well established as the way to address the system latency, privacy, cost, and resiliency challenges that a pure cloud computing approach cannot meet. This paradigm shift was initiated at Cisco Systems around 2010 under the label “fog computing” and has progressively morphed into what is now known as edge computing.
Mission-critical industrial requirements
That said, the full potential of this transformation in both computing and data analytics is far from being realized. Mission-critical requirements are much more stringent than what cloud-native paradigms alone can deliver, because mission-critical applications bring demands of their own:
• Heterogeneous hardware. Typical industrial automation settings have different architectures (x86, Arm), as well as a variety of compute configurations on the floor.
• Security. Security requirements, and the appropriate mitigations, vary from device to device and must be handled carefully.
• Innovation. While some industrial applications can continue with the legacy paradigm of running unchanged for a decade or more, most of the industrial world now also requires modern data analytics and application monitoring in its installations.
• Data privacy. As in other areas of IT, managing data permissions is increasingly complex within connected machines, and permissions need to be managed right from the point where the data originates.
• Real-time determinism. The real-time determinism provided by controllers remains critical to the safety and security of the operation.
For these reasons, the market is seeking what Lynx Software Technologies calls the mission-critical edge. The concept is born of incorporating requirements typical of embedded computing (security, real-time operation, and safe, deterministic behavior) into modern networked, virtualized, containerized life-cycle management and data- and intelligence-rich computing.
The role of mission-critical edge
Without a fully manifested mission-critical edge, we will not be able to address the many pain points characterizing the current industrial electronic infrastructure. In particular, we will not be able to securely consolidate and orchestrate the many poorly connected, fragmented, and aging subsystems controlling today’s industrial environments, nor enrich them with the fruits of data analytics and artificial intelligence.
Figure 1: A distributed system of systems is intended to address the challenges of the many poorly connected, fragmented, and aging subsystems controlling today’s industrial environments.
Lynx has identified the evolution of the industrial operational architecture (the architecture of the infrastructure on the industrial automation floor) as one of the most appropriate targets for the realization of the full mission-critical edge paradigm.
The broad architecture shown in Figure 1 illustrates our vision for enabling this:
• Distributed and interconnected, mixed-criticality-capable, virtualized multicore computing nodes (system of systems)
• Networking support that includes traditional IT communications (e.g., Ethernet, Wi-Fi) as well as deterministic legacy field buses, which are moving toward IEEE time-sensitive networking (TSN), and public and private 4G/5G, which is also moving toward determinism
• Support for data distribution within and across nodes, based on standard middleware (OPC UA, MQTT, DDS, and more), that also strives toward determinism (e.g., OPC UA over TSN); a minimal publishing sketch follows this list
• Distributed nodes that are remotely managed and software that is delivered and orchestrated as virtual machines (VMs) and containers – the model of modern cloud-native micro-services
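To make the data-distribution layer more concrete, here is a minimal sketch of an edge node publishing telemetry over MQTT, one of the standard middleware options listed above. It is not tied to any particular Lynx product; the broker address, topic layout, and payload fields are illustrative assumptions, and the example uses the open-source paho-mqtt client library.

```python
# Minimal sketch: an edge device publishing telemetry over MQTT.
# The broker host, topic layout, and payload fields are illustrative only.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "edge-broker.local"  # hypothetical on-premises broker
TOPIC = "plant/cell3/weld-controller/telemetry"  # hypothetical topic

try:  # paho-mqtt 2.x requires an explicit callback API version
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="weld-controller-17")
except AttributeError:  # paho-mqtt 1.x
    client = mqtt.Client(client_id="weld-controller-17")

client.connect(BROKER_HOST, port=1883, keepalive=60)
client.loop_start()  # network I/O handled in a background thread

try:
    while True:
        reading = {
            "timestamp": time.time(),
            "voltage_v": 2.1,   # placeholder values; real data would come from
            "current_ka": 9.8,  # the welding controller's fieldbus interface
        }
        client.publish(TOPIC, json.dumps(reading), qos=1)
        time.sleep(0.1)  # 10 Hz publish rate, chosen arbitrarily
except KeyboardInterrupt:
    client.loop_stop()
    client.disconnect()
```

Deterministic variants of this data path (e.g., OPC UA over TSN, as noted above) would replace the best-effort transport used here with time-aware scheduling on the network.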
Paired with new, more powerful, and scalable multicore platforms, a mission-critical edge computing approach can provide a unified, uniform infrastructure going from the machine to the industrial floor and into the telco edge and cloud, thereby enabling a fundamental decoupling between hardware and software. Applications, packaged as VMs and, increasingly, as containers, can be life-cycle–managed and orchestrated across all the layers of this infrastructure.
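As a rough illustration of that decoupling, the sketch below uses the Docker SDK for Python to start a containerized analytics workload on a remote edge node. The node address, image name, and environment variable are hypothetical placeholders, and a real deployment would typically go through an orchestrator and a TLS-secured API rather than direct calls like these.

```python
# Minimal sketch: remotely starting a containerized analytics app on an
# edge node through the Docker Engine API. Host, image, and settings are
# hypothetical; production setups would secure the API and use an orchestrator.
import docker

EDGE_NODE_API = "tcp://edge-node-07.plant.local:2375"  # hypothetical node address
IMAGE = "registry.local/analytics/weld-monitor:1.4"    # hypothetical image

client = docker.DockerClient(base_url=EDGE_NODE_API)

container = client.containers.run(
    IMAGE,
    name="weld-monitor",
    detach=True,                        # return immediately; run in background
    restart_policy={"Name": "always"},  # restart automatically after node reboots
    environment={"MQTT_BROKER": "edge-broker.local"},
)
print(f"Started {container.name} ({container.short_id}) on {EDGE_NODE_API}")
```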
Integration into today’s fragmented industrial environments
Many of the poorly connected, fragmented, and aging subsystems controlling today’s physical environments can be effectively and securely consolidated, orchestrated, and enriched with the fruits of data analytics and artificial intelligence.
Figure 2 shows how the infrastructure would look when the mission-critical edge is deployed and embedded into the operational technology (OT) area of the factory. There is a distributed set of nodes – some very close to the plant, some far away. Effectively, this is like a distributed data center, but one containing a far more heterogeneous, interconnected, virtualized set of computing resources, which can host applications where and when they are needed. These will be deployed in the form of virtual machines and containers orchestrated from the cloud or locally.
Figure 2: How the infrastructure would look when the mission-critical edge is deployed and embedded into the operational technology area of the factory, with a distributed set of nodes, some very close to the plant, some far away.
Consider a specific use case at an Audi manufacturing plant, involving assembly of the Audi A3. Audi’s plant in Neckarsulm, Germany, has 2,500 autonomous robots on its production line. Each robot is equipped with a tool of some kind, from glue guns to screwdrivers, and performs a specific task required to assemble an Audi automobile.
Audi assembles up to roughly 1,000 vehicles every day at the Neckarsulm factory, and there are 5,000 welds in each car. To ensure weld quality, Audi performs manual quality-control inspections. It is impossible to inspect 1,000 cars manually every day, however, so Audi uses the industry’s standard sampling method: pulling one car off the line each day and using ultrasound probes to test the welding spots and record the quality of every spot. Sampling is costly, labor-intensive, and error-prone. The objective, therefore, was to inspect all 5,000 welds per car inline and infer the result of each weld within microseconds.
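A quick back-of-the-envelope calculation using only the figures above shows why sampling cannot come close to the inline objective:

```python
# Back-of-the-envelope calculation from the figures quoted above.
cars_per_day = 1_000          # vehicles assembled per day (upper bound)
welds_per_car = 5_000         # welds in each car
sampled_cars_per_day = 1      # one car pulled off the line per day

total_welds_per_day = cars_per_day * welds_per_car            # 5,000,000
sampled_welds_per_day = sampled_cars_per_day * welds_per_car  # 5,000
coverage = sampled_welds_per_day / total_welds_per_day        # 0.001

print(f"Welds produced per day:  {total_welds_per_day:,}")
print(f"Welds inspected per day: {sampled_welds_per_day:,}")
print(f"Sampling coverage:       {coverage:.1%}")  # 0.1%
```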
A machine-learning algorithm was created and trained for accuracy by comparing the predictions it generated with actual inspection data that Audi provided. Remember that there is a rich set of data at the edge that can be accessed. The machine-learning model used data generated by the welding controllers, which showed electric voltage and current curves during the welding operation. The data also included such parameters as the weld configurations, the types of metal used, and the health of the electrodes.
The models were then deployed at two levels: at the line itself and at the cell level. The systems were able to predict poor welds before they were performed (Figure 3). This exercise has substantially raised the bar on quality. Central to its success was collecting and processing the data relating to a mission-critical process at the edge (i.e., on the production line) rather than in the cloud. As a consequence, adjustments to the process could be made in real time.
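To illustrate the shape of such a model (this is not Audi’s actual implementation, whose details are not given here), the sketch below trains a classifier on stand-in features of the kind described above, such as summary statistics of the voltage and current curves plus weld configuration and electrode health, and then scores a candidate weld inline. The feature set, the random-forest choice, and the scikit-learn dependency are all illustrative assumptions.

```python
# Minimal sketch of a weld-quality classifier; not Audi's actual model.
# Features and labels are stand-ins: real features would summarize the
# welding controller's voltage/current curves, weld configuration, metal
# types, and electrode health, and labels would come from ultrasound checks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is one historical weld:
# [mean voltage, peak current, weld duration, electrode wear index]
X = rng.normal(size=(5_000, 4))
y = (rng.random(5_000) > 0.98).astype(int)  # 1 = poor weld (rare); placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# Inline scoring of an upcoming weld from its configuration and electrode state.
candidate_weld = np.array([[0.1, -0.3, 0.05, 1.2]])
poor_weld_probability = model.predict_proba(candidate_weld)[0, 1]
print(f"Estimated probability of a poor weld: {poor_weld_probability:.2%}")
```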
Harvesting the benefits of integration
Considerable progress remains to be made in a number of technical areas. The focus at Lynx is primarily on two of them.
The first area is delivering deterministic behavior in multicore systems. As multiple systems are consolidated to operate on a single multicore processor, the sharing of resources such as memory and I/O causes interference, which means that guaranteeing the behavior of time-critical functionality becomes problematic.
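As a small illustration of the problem space (not of Lynx’s solution), the sketch below pins a periodic loop to a single core on a Linux-based node and measures its wake-up jitter; interference from co-located workloads sharing caches, memory bandwidth, or I/O shows up directly as larger jitter values. The core number and the 1 ms period are arbitrary assumptions, and os.sched_setaffinity is Linux-specific.

```python
# Minimal sketch: pin a periodic loop to one core and measure wake-up jitter.
# Interference from other workloads on the same multicore processor shows up
# as growth in these jitter figures. Linux-only; core 3 and the 1 ms period
# are arbitrary assumptions.
import os
import time

PERIOD_NS = 1_000_000  # 1 ms nominal period

os.sched_setaffinity(0, {3})  # pin this process to core 3 (Linux-specific)

jitters_ns = []
next_deadline = time.perf_counter_ns() + PERIOD_NS
for _ in range(1_000):
    # Busy-wait until the deadline; a real-time OS would use a precise timer.
    while time.perf_counter_ns() < next_deadline:
        pass
    jitters_ns.append(time.perf_counter_ns() - next_deadline)
    next_deadline += PERIOD_NS

print(f"max wake-up jitter: {max(jitters_ns) / 1_000:.1f} µs")
print(f"avg wake-up jitter: {sum(jitters_ns) / len(jitters_ns) / 1_000:.1f} µs")
```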
The other area of focus is delivering strict isolation for applications to ensure high levels of system reliability and security.
Other topics include providing time-sensitive data management, edge analytics, and networking functionality for these complex connected systems. For example, what will be the right approach for deploying the orchestration and scheduling for these deterministic, time-sensitive systems?
In conclusion, the mission-critical edge is here and is starting to realize the original intent of fog computing. We are beginning to harvest the great benefits that result from real integration at the boundary between embedded technology and information technology.
Much more work is needed, however, and it will take a village. A broad set of ecosystem partners will be required to simplify how this technology is delivered to the marketplace.
This article is based on a keynote talk for the IoT, connectivity, and security session during the Embedded Forum at electronica 2020. View the full talk at embedded-electronics-forum.com (free registration required). This article was originally published on EE Times Europe.
Flavio Bonomi is a board technology adviser for Lynx Software Technologies. He co-founded Silicon Valley fog computing startup Nebbiolo Technologies and was a fellow, vice president, and head of the advanced architecture and research organization at Cisco Systems.