
How to Use Pattern Recognition Software to Automate the Analysis of Plant Historian Data

In this information age, data is everywhere. How can we improve efficiency and organize this information into usable nuggets? Plant and operations managers receive vast amounts of both structured and unstructured data every day. This article explains how that information can be accessed quickly and affordably to improve performance.

Historians are repositories for data from many systems, making them a good source for advanced analytics. However, process historian tools are not ideal for automating data analysis or search queries: they are "write" optimized, not "read/analytics" optimized. Finding a relevant historical event and building its process context is usually a time-consuming, laborious task.


A level of operational intelligence and an understanding of the data are required to improve process performance and overall efficiency. Process engineers and other personnel must be able to search time-series data over a specific timeline and visualize all related plant events quickly and efficiently. This includes the time-series data generated by process control and automation systems, lab systems, and other plant systems, as well as the annotations and observations made by operators and engineers.

Predicting process performance today

To run a plant smoothly, process engineers and operators need to accurately predict process performance or the outcome of a batch process while eliminating false positives. This requires effective process historian or time-series search tools and the ability to apply meaning to the patterns identified within the process data.

Although there are a variety of process analytics solutions in the industrial software market, these largely historian-based tools often require a great deal of interpretation and manipulation and are not automated. They produce backward-looking trends or export raw data into Microsoft Excel. The tools used to visualize and interpret process data are typically trending applications, reports, and dashboards. These can be helpful, but they are not particularly good at predicting outcomes.

Predictive analytics, a relatively new dimension of analytics tools, can give valuable insights about what will happen in the future based on historical data, both structured and unstructured. Many predictive analytics tools start with an enterprise approach and require sophisticated distributed computing platforms, such as Hadoop or SAP HANA. These are powerful and useful for many analytics applications, but they represent a more complex approach to managing both plant and enterprise data. Companies using this enterprise data management approach often must employ specialized data scientists to help organize and cleanse the data. In addition, data scientists are not as intimately familiar with the process as engineers and operators are, which limits their ability to achieve the best results.

Furthermore, many of these advanced tools are perceived as engineering-intensive "black boxes" in which the user knows only the inputs and expected outcome, without any insight into how the result was determined. Understandably, for many operational and asset-related issues, this approach is too expensive and time-consuming. This is why many vendors target only the 1 percent of critical assets, ignoring many other opportunities for process improvement.

Traditional advanced analytics tools share several drawbacks:

  • Requires significant engineering: data cleaning, filtering, modeling, validating, and iterating on results and models are all necessary.
  • Sensitive to change: users need continual training.
  • Requires a data scientist: plants must hire additional staff, or engineers spend too much time trying to be data scientists.
  • Not plug and play: installation and deployment require significant time and money.
  • Black box engineering: users cannot see how results are determined.

Managing big data without a data scientist

Just a handful of solution suppliers are taking a different approach to industrial process data analytics, leveraging multidimensional search capabilities for stakeholders. This approach combines visualizing process historian time-series data, overlaying similar matched historical patterns, and providing context from data captured by engineers and operators.

The ideal pattern recognition solution provides on-premises, packaged virtual-server deployment. It integrates easily with a local copy of the plant historian database archives and evolves over time toward a scalable architecture that communicates with the available enterprise distributed computing platforms. This newer technology uses "pattern search-based discovery and predictive-style process analytics" targeting the average user. It is typically deployed in under two hours, without requiring a data modeling solution or a data scientist. Often called "self-service analytics," this software puts the power of extensive search and analytics into the hands of the process experts, engineers, and operators who can best identify areas for improvement.

Another problem with historian time-series data is the lack of a robust search mechanism and of an effective way to annotate the data. By combining search across structured time-series process data with data captured by operators and other subject-matter experts, users can predict more precisely what is occurring, or will likely occur, within their continuous and batch industrial processes. A sketch of this combination follows.
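As a rough illustration, the minimal Python sketch below pairs historian time-series data with free-text operator annotations and returns the window of sensor data around every note that mentions a given keyword. All names here (Annotation, find_annotated_events, the two-hour window) are hypothetical, not any vendor's actual API.

```python
# A minimal sketch: search operator annotations by keyword, then pull the
# surrounding historian data so the note is seen in its process context.
from dataclasses import dataclass
from datetime import datetime, timedelta

import pandas as pd


@dataclass
class Annotation:
    """A free-text note an operator attached to a moment in the process."""
    timestamp: datetime
    author: str
    text: str


def find_annotated_events(
    series: pd.Series,              # time-indexed sensor values from the historian
    annotations: list[Annotation],
    keyword: str,
    window: timedelta = timedelta(hours=2),
) -> list[pd.Series]:
    """Return the sensor-data window around each annotation mentioning `keyword`."""
    matches = [a for a in annotations if keyword.lower() in a.text.lower()]
    return [series[a.timestamp - window : a.timestamp + window] for a in matches]
```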

According to Peter Reynolds, senior consultant at ARC Advisory Group, "The new platform is built to make operator shift logs searchable in the context of historian data and process information. In a time when the process industries may face as much as a 30 percent decline in the skilled workforce through retiring workers, knowledge capture is a key imperative for many industrial organizations."

Self-service analytics delivers:

  • cost-efficient virtualized deployment ("plug and play") within the available infrastructure
  • analytics that encode deep knowledge of both process operations and data analytics techniques, avoiding the need for specialized data scientists
  • easy scalability for corporate big data initiatives and environments
  • a model-free predictive process analytics (discovery, diagnostic, and predictive) tool that complements and augments, rather than replaces, existing historian information architectures

Better way to search

Unlike traditional historian desktop tools, pattern recognition and machine learning algorithms let users search process trends for specific events or detect process anomalies. Much like the music app Shazam, self-service analytics works by identifying significant patterns ("high-energy content") in the data and matching them to similar patterns in its database, rather than trying to match each note of a song. This technique lets Shazam identify songs quickly and accurately; if getting an answer took too long, the user would abandon the search.
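To make the analogy concrete, here is a minimal Python sketch of the core sliding-window idea, assuming the historian data is already loaded as a NumPy array. Commercial tools use far more sophisticated indexing; znorm and top_matches are illustrative names, not any product's API.

```python
# A minimal sketch of pattern-based search: z-normalize every window so the
# match is based on shape rather than absolute level, then rank windows by
# their distance to the query pattern.
import numpy as np


def znorm(x: np.ndarray) -> np.ndarray:
    """Z-normalize a window so matching compares shapes, not magnitudes."""
    std = x.std()
    return (x - x.mean()) / std if std > 0 else x - x.mean()


def top_matches(series: np.ndarray, query: np.ndarray, k: int = 3) -> list[int]:
    """Return start indices of the k windows most similar to `query`.

    Brute force for clarity; real tools index the series instead of scanning,
    and would also exclude trivially overlapping matches.
    """
    m = len(query)
    q = znorm(query)
    dists = np.array([
        np.linalg.norm(znorm(series[i : i + m]) - q)
        for i in range(len(series) - m + 1)
    ])
    return list(np.argsort(dists)[:k])
```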

These technologies form the critical base layer of the new systems technology stack. The stack makes use of the existing historian databases and creates a data layer that uses a column store to index the time-series data. These next-generation systems also work well with the leading process historians and are typically designed to be simple to install and deploy via a virtual machine, without affecting the existing historian infrastructure.
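Here is a minimal sketch of the column-store idea, using pandas as a stand-in for a purpose-built engine: row-oriented historian records (timestamp, tag, value) are pivoted into one contiguous, time-indexed column per tag, the layout that makes analytic scans and pattern searches fast. The tag names and values are invented for illustration.

```python
# A minimal sketch of column-oriented indexing of historian data.
import pandas as pd

# Row-oriented records, as a historian might export them.
rows = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2024-01-01 00:00", "2024-01-01 00:00", "2024-01-01 00:01"]
        ),
        "tag": ["TI-101", "FI-203", "TI-101"],
        "value": [87.2, 14.5, 87.9],
    }
)

# Column-oriented layout: one time-indexed column per tag, ready for
# vectorized scans, in-memory indexing, and window queries.
columns = rows.pivot_table(index="timestamp", columns="tag", values="value")
print(columns["TI-101"].dropna())
```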

Key capabilities of these systems include:

  • a column store with in-memory indexing of historian data
  • search technology based on pattern matching and machine learning algorithms, empowering users to find historical trends that define process events and conditions
  • diagnostic capabilities to quickly find the cause of detected anomalies and process situations
  • knowledge and event management and process data contextualization
  • identification, capture, and sharing of important process analyses among billions of process data points
  • capture capabilities that support event frames or bookmarks created manually by users or generated automatically by third-party applications, with these annotations visible within the context of specific trends
  • monitoring capabilities that integrate predictive analytics and early warning detection of abnormal process events based on saved historical patterns or searches and that leverage live process data; operators have a live view to determine whether recent process changes match the expected behavior and can proactively adjust settings when they do not (see the sketch below)
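As a sketch of that monitoring capability: compare the most recent window of live data against a saved historical pattern and raise an early warning when the deviation exceeds a threshold. The function names, the root-mean-square metric, and the threshold value are illustrative assumptions, not how any particular product scores matches.

```python
# A minimal sketch of early-warning monitoring against a saved pattern.
import numpy as np


def deviation_from_pattern(live_window: np.ndarray, pattern: np.ndarray) -> float:
    """Root-mean-square deviation between the live window and the saved pattern."""
    return float(np.sqrt(np.mean((live_window - pattern) ** 2)))


def early_warning(
    live_window: np.ndarray, pattern: np.ndarray, threshold: float
) -> bool:
    """True when recent behavior no longer matches the expected pattern."""
    return deviation_from_pattern(live_window, pattern) > threshold


# Usage: compare the latest live samples to the saved "good run" profile.
expected = np.array([50.0, 52.0, 55.0, 57.0, 58.0])
live = np.array([50.0, 53.0, 58.0, 63.0, 69.0])
if early_warning(live, expected, threshold=3.0):
    print("Process is diverging from the expected pattern")
```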

Shift in the way analytics are accessed

The technology playing field for manufacturers and other industrial organizations has changed. To remain competitive, companies must use analytics tools to uncover areas for efficiency improvements.

"There is an immediate need to search time-series data and analyze these data in context with the annotations made by both engineers and operators to be able to make faster, higher quality process decisions. If users want to predict process degradation or an asset or equipment failure, they need to look beyond time-series and historian data tools and be able to search, learn by experimentation, and detect patterns in the vast pool of data that already exists in their plant," added Reynolds.

Fortunately, this new process analytics model can support the necessary "retooling" of traditional process historian visualization tools at a low investment of both time and money.

A version of this article was also published in InTech magazine.

Bert Baeck

Bert Baeck is senior vice president of self-service analytics at Software AG. Previously, he was co-founder and chief executive officer of TrendMiner. His professional experience includes more than 10 years in big data, analytics, and the manufacturing industry. Before starting TrendMiner, he was a process optimization engineer for Bayer MaterialScience (now Covestro). Baeck is an engineer with a twist: an analytical, lateral thinker with business savvy and a strong can-do mentality. He holds a master’s degree in computer science and a master’s degree in microelectronics from the University of Ghent. His personal motto is, “Failure is not the worst outcome. Mediocrity is.”
