ISA Interchange

A Reference Architecture for Intelligence at the Edge

Written by Sukanta Kumar Rout | Oct 10, 2023 5:44:50 PM

Background:

In one of my previous articles, Edge SW Stack, we discussed its components and architecture. A robust and flexible edge software stack is a necessity for implementing any use case on an edge device.

Many smart factory and Industry 4.0 use cases and implementations are happening at the edge layer nowadays, and for good reason. Customers are carefully evaluating the best options for implementation based on the business capability they wish to achieve.

Certain use cases need a cloud-first approach, while some need to be on-premise. A third strategy is to deploy them at the edge, close to the shopfloor machines. There are many reasons why edge deployment is the best choice for most customers. For example, some use cases need near real-time action based on data collected from the machines, and safety-critical machines need command/control to work in real time.

The cloud certainly introduces some latency when it comes to real-time analysis, machine control, and actions that need to be taken based on analysis of the collected data.

Most of the use cases deployed on edge devices are analytics workloads, where, based on certain input parameters, either a rule engine or a machine learning algorithm takes some action and provides notifications and alerts to the operator.

In this article we will thoroughly discuss a reference implementation of an edge analytics architecture and some of its building blocks.

 

The Architecture:

Let's discuss this in greater detail.

1- Device data collectors (connector) apps:

When it comes to analytics, we need proper data, and we need to acquire that data from machines and sensors. For this we need connector apps which can talk to the devices and collect the required data. The connectors are basically implementations of various industrial protocols such as Modbus, OPC UA, S7, MELSEC, EtherNet/IP, etc. The main job of these connector apps is to connect to the physical devices and collect the data tags configured for analysis.

After the data is collected, the other job of the connector is to send this data to a module. This module can be called a MessageBus, DataBus, or EventBus. Its purpose is to decouple communication between the various components deployed on the edge device.

A messaging-based approach in this framework provides a decoupled architecture and a uniform communication interface between the various modules of the framework.
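The decoupling idea can be sketched with a minimal in-process publish-subscribe bus. This is a toy stand-in for the Redis-based DataBus used in this implementation; the class name, topic, and tag values are illustrative only:

```python
from collections import defaultdict

class MessageBus:
    """A minimal in-process publish-subscribe bus.

    Illustrative only: the article's implementation uses Redis Pub/Sub,
    but the decoupling idea is the same. Publishers and subscribers
    only know topic names, never each other.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to be invoked for every message on `topic`.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of `topic`.
        for callback in self._subscribers[topic]:
            callback(topic, message)

# A connector publishes, a rule engine subscribes; neither knows the other.
bus = MessageBus()
received = []
bus.subscribe("machine1/process1/tightening", lambda t, m: received.append(m))
bus.publish("machine1/process1/tightening", {"Tag1": 42.0, "Tag2": 7.5})
```

Swapping this toy bus for a real broker such as Redis changes the transport, not the pattern: modules still only agree on topic names and message shapes.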

2- MessageBus/DataBus/EventBus module:

This is the backbone for the data exchange between various modules of the framework, and works on the Publish-Subscribe concept. The device data collectors (Connectors) are basically the data publishers. Once the data is collected from the machine, it is then ingested (published) with a topic to the DataBus.

Each connector publishes data with a defined topic to the DataBus at a user-defined frequency (for example, every 1 second or every 10 milliseconds). For messaging architectures, we need a message broker. In this reference implementation, I have decided to use Redis as the broker.

I wanted a solution that can act as a message broker (Pub-Sub architecture), handle high-speed data ingestion suitable for ML and analytics, and store data at the edge with a minimal memory footprint.

I have found that Redis is a good fit here, as its memory and CPU requirements are minimal, unlike many other message brokers which are heavyweight and cannot run on all kinds of edge devices, given that edge devices in general have limited computing resources. Redis is fast, and the data storage platform can act as a message broker at the same time.

The configuration sample below is for the connector app to publish two data values, "Tag1" and "Tag2", to a topic called "machine1/process1/tightening".

    "DataBus": {
      "Type": "Data",
      "Name": "Redis Pub/Sub",
      "Enabled": true,
      "Server": "localhost:6379",
      "PublishTopic": "machine1/process1/tightening",
      "PublishTags": [
        { "Id": "Tag1" },
        { "Id": "Tag2" }
      ]
    }
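As an illustration of how a connector might act on such a configuration, here is a small Python sketch (the reference implementation is .NET-based, so this is a hedged stand-in; the `publish_tags` helper and the stub client are hypothetical, and the client only needs a Redis-style `publish(channel, message)` method):

```python
import json

def publish_tags(client, databus_config, tag_values):
    """Build the payload from the configured tags and publish it.

    `client` only needs a Redis-style `publish(channel, message)` method,
    so the connector logic stays testable without a live broker.
    """
    if not databus_config["Enabled"]:
        return None
    topic = databus_config["PublishTopic"]
    # Only the tags listed under "PublishTags" are included in the payload.
    payload = {tag["Id"]: tag_values[tag["Id"]]
               for tag in databus_config["PublishTags"]}
    client.publish(topic, json.dumps(payload))
    return payload

class StubClient:
    """Captures published messages in place of a real Redis connection."""
    def __init__(self):
        self.messages = []
    def publish(self, channel, message):
        self.messages.append((channel, message))

config = {
    "Enabled": True,
    "Server": "localhost:6379",
    "PublishTopic": "machine1/process1/tightening",
    "PublishTags": [{"Id": "Tag1"}, {"Id": "Tag2"}],
}
client = StubClient()
publish_tags(client, config, {"Tag1": 41.7, "Tag2": 12.3})
```

With a real deployment, `StubClient` would be replaced by a Redis client pointed at the configured `Server`, and the same function body would apply unchanged.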

3- Rule Engine:

This module is implemented for limit/threshold validation: creating simple business rules in JSON to validate and evaluate certain machine conditions. For example, suppose there is a boiler and we need to monitor its temperature every 60 seconds. Once it has reached certain limits, we need to trigger alerts to the user, and once it has crossed a certain point (for example 90 ℃), we need to send a command to shut down the boiler.

For this, we can create a simple rule as a JSON input to the rule engine, as below.

For details on implementing a rule engine, please refer to RulesEngine:

BoilerTemperature <= 30 or BoilerTemperature >= 90

The rule engine continuously evaluates the input data published by the data collectors, and if the condition is satisfied, it publishes a command message to the DataBus for the control applications to act on.
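A minimal sketch of how the boiler rule above could be evaluated in code (Python for illustration; the function name is an assumption, while the thresholds and the returned command topic follow the article's example):

```python
def evaluate_rule(boiler_temperature, low=30.0, high=90.0):
    """Evaluate the rule: BoilerTemperature <= 30 or BoilerTemperature >= 90.

    Returns the command topic to publish when the rule fires, else None.
    Thresholds and topic name follow the article's boiler example.
    """
    if boiler_temperature <= low or boiler_temperature >= high:
        return "command/boiler1/temperature/critical"
    return None

critical = evaluate_rule(95.0)   # above the high limit: rule fires
normal = evaluate_rule(60.0)     # within limits: no command
```

A JSON-driven rule engine such as RulesEngine generalizes this: the expression comes from configuration rather than code, so rules can be changed without redeploying the module.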

4- Control App(s) :

The control app is an application which subscribes to the command message topics on the DataBus. Once a command message is generated and published by the Rule Engine, the control app takes the command and, in turn, generates and sends a control message to the machine to take action.

For example, when the boiler temperature has crossed 90 ℃, the rule engine generates a command message and publishes it to the DataBus as defined below.

    "DataBus": {
      "Type": "Command",
      "Name": "Redis Pub/Sub",
      "Enabled": true,
      "Server": "localhost:6379",
      "PublishTopic": "command/boiler1/temperature/critical"
    }
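A sketch of how a control app might dispatch such a command topic to a device action (illustrative Python; `handle_command` and the `shutdown_boiler` callback are hypothetical names, and the callback stands in for the protocol-specific write to the sensor/PLC):

```python
def handle_command(topic, shutdown_boiler):
    """Dispatch a command topic to a device action.

    `shutdown_boiler` stands in for the protocol-specific write to the
    machine controller (Modbus, S7, etc. in a real control app).
    Returns True when the topic was recognized and acted on.
    """
    if topic == "command/boiler1/temperature/critical":
        shutdown_boiler()
        return True
    return False

actions = []
handled = handle_command("command/boiler1/temperature/critical",
                         lambda: actions.append("boiler1 shutdown"))
```

In the full architecture this function would be registered as a subscriber callback on the DataBus, so the rule engine never needs to know which app (or protocol) carries out the shutdown.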

Once the above command is received by the DataBus, the control app which subscribes to this particular topic "command/boiler1/temperature/critical" takes the message and issues a device command to the machine controller to shut down the boiler. The control app basically implements the device communication protocol to send the command to the sensor/PLC to take action.

5- Machine Learning App:

Machine learning algorithms are important for taking preventive actions and predicting failures before they happen, as well as for forecasting, image analysis, and many more such cases. Here I have implemented a simple anomaly detection algorithm. We can use any ML framework or library we are familiar with. Here, I chose to use the Microsoft machine learning framework ML.NET.

There are a few reasons I chose it: coming from a .NET programming background, it was easier for me to learn and implement quickly, it provides high-level APIs and easy deployment features, and it is a cross-platform library. It's your choice if you want to use R, Python, etc. The only thing to keep in mind is that your app should support the publish-subscribe pattern.

I have created a simple anomaly detection ML algorithm, which takes the past datasets of boiler temperature (which I stored in the Redis DB on the edge device) and creates ML models from them. Based on those locally created ML models and training data, it can then predict anomalies in real time. The actual and predicted values generated by this algorithm can be visualized near the machine.
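As a simplified stand-in for the ML.NET anomaly detector, the core idea can be sketched as a plain z-score check over the stored temperature history (illustrative Python; the function name, threshold, and data values are made up for the example):

```python
import statistics

def detect_anomalies(history, new_values, threshold=3.0):
    """Flag values that deviate strongly from the historical distribution.

    A simplified stand-in for a trained anomaly-detection model:
    compute a z-score against the stored history and flag readings
    more than `threshold` standard deviations from the mean.
    """
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    flags = []
    for value in new_values:
        z = abs(value - mean) / stdev if stdev else 0.0
        flags.append(z > threshold)
    return flags

# Historical boiler temperatures pulled from the edge datastore (made-up data).
history = [70.1, 69.8, 70.4, 70.0, 69.9, 70.2, 70.3, 69.7]
flags = detect_anomalies(history, [70.1, 95.0])
```

A real deployment would replace this with a trained model (for example ML.NET's time-series spike detection), but the contract is the same: consume readings from the DataBus, emit actual and predicted/flagged values back for visualization.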

If connecting to a cloud platform is possible, ML models trained in the cloud can also be brought down to the edge device.

6- Visualization App:

For visualization, we have many options, including custom-developed web apps or ready-to-deploy apps like Grafana. There are also many other open-source visualization frameworks to choose from based on your needs.

I have used Grafana as it has features like built-in rule-based data visualization, notifications and alerts, and drag-and-drop UI creation for displaying the data. In a matter of minutes, I can create different charts and visualization screens as needed, with no coding knowledge required. Even a machine operator with some training can use this tool.

The detailed implementation of this architecture is beyond the scope of this article, but with the references provided here you should be able to create one.

EndNote:

The main reason for discussing this analytics engine/framework is to give a direction on how, with minimal knowledge, open-source technology can be put to use. I have used only open-source technology in this implementation. We can develop an edge analytics platform that is useful for customers and users with minimal CapEx. We don't have to spend on high-end edge devices or proprietary technology to develop an analytics platform. The above reference architecture can even run on devices like a Raspberry Pi with a Linux OS.

Do let me know your thoughts on this, and in case you have similar (or better) thoughts, I would be very interested in a further discussion on this topic.