
Ask the Automation Pros: How Can You Get the Most Knowledge from a Data Historian?

The following discussion is part of an occasional series, “Ask the Automation Pros,” authored by Greg McMillan, industry consultant, author of numerous process control books and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel; you can view the playlist here. You can read posts from this series here.

Erik Cornelsen’s Question

In a continuous-process manufacturing plant, in which data is cyclically recorded, for example every 0.5, 1 or 2 seconds, what interesting insights can we get from this large amount of historical data, and what are the main challenges in extracting value from it?

George Buckbee’s Response

First of all, what a great question! In my experience, historical data is a tremendous asset that is often highly under-utilized. Some studies have shown that less than 0.5% of data is analyzed. Not too surprising, considering the massive amount of data streaming in from a production process. A typical oil refinery might have 3000-5000 control loops, each updating PV, setpoint, output and mode every 0.5 to 2 seconds. Uncompressed, that could be as much as 15 MB per hour.

Insights can be developed using software tools and analysis. Control loop performance monitoring (CLPM) software is an absolute must in today’s world. It will help you identify which loops are performing poorly and direct you to corrective actions. Most plants do not have enough staff to address every issue, and CLPM software can help you prioritize the areas that need your attention most.

With CLPM software, you can identify poorly tuned controllers, instrumentation issues, valve problems, operator overload and even changes in the process. CLPM tools are not just for control engineers, but can also deliver insights to operations, process engineers and management. When CLPM results are benchmarked against peer operations, tremendous insights can be gained.

There are many challenges in extracting value. First, many historians are not gathering data at the frequencies you suggest. Many plants are still defaulting to once/minute data, which is simply too slow to be used for analysis of process dynamics and control. Furthermore, data compression settings in historians can eliminate even more information. In today’s world, data storage is extremely cheap, and you should not risk losing insights by compressing data.

Secondly, historical data must be appropriately organized to allow you to extract related information as a group. Each controller’s PV, SP, OP and MODE should be grouped and/or named similarly.
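As a rough illustration, a simple script can exploit a consistent naming convention to pull a controller’s tags together as a group. This is only a sketch; the tag names (e.g., FIC-101.PV) and the dot-separated convention are assumptions to be replaced with your site’s actual standard.

    from collections import defaultdict

    # Hypothetical historian tags following a "<loop>.<attribute>" convention
    tags = ["FIC-101.PV", "FIC-101.SP", "FIC-101.OP", "FIC-101.MODE",
            "TIC-205.PV", "TIC-205.SP", "TIC-205.OP", "TIC-205.MODE"]

    loops = defaultdict(dict)
    for tag in tags:
        loop, attribute = tag.split(".")   # e.g., "FIC-101", "PV"
        loops[loop][attribute] = tag       # map attribute to its historian tag

    # loops["FIC-101"] now holds all four tags for that controller, so its
    # PV, SP, OP and MODE can be retrieved and analyzed as a group.
    print(dict(loops))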

Third, data historians do not always capture the relevant context of the process. For example, the historian should contain flags or methods to determine if the process is in startup, shutdown, product transition or some other abnormal state of operation. You should always analyze process and controller data in the context of the overall operation. A statistical average for the month might be useful to the accounting department, but it doesn’t really show what is happening minute-to-minute.

Fourth, the historian and CLPM system should be monitored to ensure they are functioning properly. Too many things can go wrong: Server updates, communication losses and permissions issues can cause a loss of continuity between the control system, the historian and analytical systems. The data is useless if it is not communicated.

Michel Ruel’s Response

I often explained to my clients that they are "data-rich but information-poor." While they may have a wealth of stored data, it's crucial to recognize that this data can serve as a goldmine of valuable insights.

First, it’s important to distinguish between raw data and actionable insights. Raw data can be extensive and overwhelming, but effectively transforming that data into meaningful information can significantly enhance decision-making.

Additionally, I emphasize the power of algorithms in identifying critical issues such as oscillations, valve stiction and performance problems within control loops. By deploying these advanced tools, organizations can proactively address inefficiencies and improve overall system performance.

Continuous real-time monitoring of key variables (such as process variable [PV], setpoint [SP], controller output [CO], mode and alarms) is essential. This ongoing vigilance not only ensures optimal operations but also leads to quicker response times and more efficient management of control processes.

Moreover, while automation plays a crucial role in data analysis, it is equally important to have skilled personnel overseeing this process; these persons should have an excellent understanding of the process. Human oversight helps to capture nuances that might otherwise be overlooked, ensuring that the insights generated are contextually relevant and actionable.

By focusing on these aspects, organizations can better leverage their data to derive meaningful insights and drive improved performance.

Greg McMillan’s Response

Data historians often have too much compression and too slow an update rate, which hides the intricacies of the process response and prevents accurate identification of the total loop dead time, especially for fast loops. In a conversation I just had with Peter Morgan, we are both on the same page that the compression and update rate set by IT (information technology) are often too coarse and too slow, which is particularly concerning for the investigation of incidents. Obviously, the knowledge that is lost cannot be recovered, so we need to be proactively involved in setting the compression and update rate before commissioning an automation system. Additionally, I advocate for measurement spans to be as small as possible, since the accuracy of most measurements other than Coriolis and magnetic flow meters is a percent of span with limited rangeability. Also, control valves must be precise throttling valves and not oversized, since resolution is a percent of span and friction near the closed position is much higher. Operation near the closed position suffers from greater nonlinearity, poorer resolution and more lost motion. Limit cycles from a poor valve response are confusing to say the least, hiding incidents and other problems, particularly in terms of tuning and interactions.

The actions of on-off valves associated with the control loops need to be included in the data history. Since I have personally experienced stroking times of more than 100 seconds for some large valves, the historization of limit switch feedback is needed for state-based control to automatically deal with incidents and for procedure automation to automate startups and product grade and type transitions.

Online loop performance metric programs need to be employed to automatically compute, from the data history, metrics such as maximum absolute integrated error and peak error for load disturbances, and rise time, overshoot, undershoot and settling time for setpoint changes. To generate good data for data analytics and artificial neural networks, the manipulated flows and all sources of change (e.g., raw material and recycle stream flow, temperature and composition) need to be included. Hopefully, there are setpoint changes to capture setpoint response metrics. Ideally, tests are conducted where the controller is put in manual and a small output change is made to identify the open loop and closed loop load response metrics. The controller is only momentarily put in manual to see the closed loop response, but traditionally needs to remain in manual for the 98% response time to identify the open loop response time of a self-regulating process.

I created a method to identify the essential open loop dynamics within a test duration of 6 dead times by identifying the dead time, slope and inflection point. I developed for the digital twin a Mimic Rapid Modeler block that can identify the essential dynamics of a self-regulating, integrating and runaway process in less than 6 dead times. Processes with large time constants are treated as near-integrating, enabling a transition to integrating process tuning rules that has proven to be advantageous. I also developed for the digital twin a Mimic PID Performance block that can automatically capture the metrics for the PID responses to load disturbances and setpoint changes. I provided a user’s guide and shared test results via Word and PowerPoint files. Send me a message on LinkedIn with your email if you would like me to send you these supporting files.
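For readers who want to experiment before adopting a commercial or digital twin tool, the sketch below shows one plausible way to compute a few of these metrics with NumPy. It is not Greg’s Mimic blocks; the arrays, sample time and the 2% settling band are assumptions.

    import numpy as np

    def load_disturbance_metrics(pv, sp, dt):
        """Integrated absolute error and peak error over a load upset,
        for uniformly sampled pv and sp arrays with sample time dt (s)."""
        error = np.asarray(sp) - np.asarray(pv)
        iae = np.sum(np.abs(error)) * dt      # integrated absolute error
        peak = np.max(np.abs(error))          # peak error
        return iae, peak

    def setpoint_step_metrics(pv, sp_final, dt, band=0.02):
        """Overshoot (%) and settling time (s) for an upward setpoint step
        that starts at the first sample of pv."""
        pv = np.asarray(pv, dtype=float)
        step = abs(sp_final - pv[0])
        overshoot = max(0.0, (np.max(pv) - sp_final) / step * 100.0)
        outside = np.abs(pv - sp_final) > band * step
        idx = np.nonzero(outside)[0]
        settling_time = (idx[-1] + 1) * dt if idx.size else 0.0
        return overshoot, settling_time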

There is a concern about too much transmitter damping and signal filtering affecting data integrity, as discussed in my October 2025 Control Talk column and my February 2012 Control Talk blog post:

https://www.controlglobal.com/home/blog/11340194/what-is-the-best-transmitter-damping-or-signal-filter-setting

https://www.controlglobal.com/control-talk-column/article/55327625/ensuring-data-integrity-in-process-control-the-critical-role-of-signal-filtering-and-damping

The online metrics from key performance indicators (KPI) in the virtual plant and actual plant should be converted to dollars of revenue and dollars of cost and totalized over a month. Online metrics on process efficiency, often expressed as raw material mass or energy used per unit mass of product, must be multiplied by the production rate at the time and the cost per unit mass or energy used to get dollars cost per unit time. Online metrics on process capacity must have production rate multiplied by the price of product(s) sold. This is best accomplished by getting an accounting representative to participate in the development and use of the metrics.
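As a simple illustration of the arithmetic, the sketch below converts an efficiency metric and a capacity metric into dollars per hour. All numbers are made up; the actual prices and costs should come from the accounting representative mentioned above.

    # Hypothetical values for illustration only
    energy_per_ton  = 1.8     # GJ of energy used per ton of product (efficiency metric)
    production_rate = 25.0    # tons of product per hour (capacity metric)
    energy_cost     = 12.0    # dollars per GJ
    product_price   = 400.0   # dollars per ton of product sold

    cost_per_hour    = energy_per_ton * production_rate * energy_cost  # $/h of energy cost
    revenue_per_hour = production_rate * product_price                 # $/h of revenue

    print(f"Energy cost: ${cost_per_hour:,.0f}/h, revenue: ${revenue_per_hour:,.0f}/h")
    # Totalizing these hourly values over a month gives the dollar figures
    # for production unit accounting described above.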

The running averages are started at the beginning of the representative part of a continuous or batch operation. Time periods could be about 8 hours to capture shift performance. Shorter time periods are used to provide prediction of batch end points and the procedure automation capability to deal with startup and abnormal conditions.

Dollars per unit mass or unit energy used and produced should be applied as factors in the computation of the metrics to show financial impact. The dollars can be totalized to show the benefit for each month providing online data for production unit accounting. Nothing impresses management more than dollars generated.

While there are many key performance indicators (KPI) used in industry, the most impressive are those that show the benefits in dollars from increases in process efficiency, capacity and flexibility on a dynamic ongoing basis.

Here are some insights on the challenges of using data via a series of 2010 Control Talk Columns “Drowning in Data; Starving for Information.”

https://www.controlglobal.com/measure/calibration/article/11381621/automation-professionals-data-drowning-in-data-starving-for-information-1-control-global

https://www.controlglobal.com/manage/systems-integration/article/11381659/automation-data-drowning-in-data-starving-for-information-2-control-global

Data Analytics Systems: Drowning in Data, Starving for Information, Part 3 (Control Global)

Process Analyzers: Drowning in Data, Starving for Information, Part 4 (Control Global)

Pat Dixon’s Response

Historians are an essential part of any industrial operation. Those that do not learn from history are doomed to repeat it.

Our historians are full of data, yet much of it is rarely seen. That effectively turns historians into digital landfills. Data can decompose if it isn’t used when produced. While past history can inform present day decisions, aging data will not be as pertinent as recent data.

Data sampled faster than once per second can be useful for analyzing machine health, but it is not typically stored in a historian. Process data at a 1-second sample rate or slower can be useful for process analysis, but of course faster is better if you want to ensure accurate dynamic correlation and identification.

It is not necessary to store every piece of data in a historian. Data that rarely changes, such as configuration or tuning changes, does not need to fill the historian with 1-second samples. However, failure to include an important process value means the data is lost and cannot be retrieved.

Accumulation of data in a historian can cause storage demands to grow while much of the information becomes stale. To manage this load, historians rely on exceptions, compression and separation of short- and long-term data so that fidelity is preserved without overwhelming memory. Exceptions filter out unimportant samples near the source, although these settings must be updated as instruments age. Compression removes low-value or repetitive data using algorithms such as swinging-door compression, and long-term averages further reduce storage requirements.
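To make the swinging-door idea concrete, here is a minimal sketch of the algorithm in Python. It is not any vendor’s implementation; it assumes strictly increasing timestamps and a fixed compression deviation in engineering units.

    def swinging_door(times, values, dev):
        """Return the indices of points a historian would archive using
        swinging-door compression with deviation 'dev'."""
        archived = [0]                        # always keep the first point
        t0, v0 = times[0], values[0]
        hi = float("inf")                     # upper "door" slope
        lo = float("-inf")                    # lower "door" slope
        for i in range(1, len(times)):
            dt = times[i] - t0
            hi = min(hi, (values[i] + dev - v0) / dt)
            lo = max(lo, (values[i] - dev - v0) / dt)
            if lo > hi:                       # doors crossed: archive previous point
                archived.append(i - 1)
                t0, v0 = times[i - 1], values[i - 1]
                dt = times[i] - t0
                hi = (values[i] + dev - v0) / dt
                lo = (values[i] - dev - v0) / dt
        if archived[-1] != len(times) - 1:
            archived.append(len(times) - 1)   # always keep the last point
        return archived

Archiving only these points and linearly interpolating between them reproduces the original signal to within the chosen deviation, which is how compression trades fidelity for storage.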

Today, storage cost is less of a constraint than in the past, but historians must still support backfilling and future forecasting. Most facilities have far too many tags for humans to monitor, so contextualization tools like Asset Framework organize signals into equipment- or process-based structures that make information easier to use. Data also gains value when combined across multiple facilities, making cloud connectivity essential.

Despite abundant data, most facilities and companies do not have people with the time or skillsets to do the data analytics work. Increasing headcount to bring those people in is never an easy sell. Even if you do add such people, making it cost effective means keeping them fully loaded with data analytics work. The problem is that the demand is not constant. There are times when urgent problems require putting everything else aside to see what the data is telling you. If you are fully staffed for that occasional demand condition, you will have idle resources when those demand conditions don’t exist.

That is why outsourcing is a good option. With Industry 4.0 connectivity, you do not have to have people on premises. An outsourcing arrangement allows a partner to scale up or down for your demands without impacting your headcount. Firms that specialize in data analytics work and have paper industry domain knowledge can remotely connect to a system and get the historical data. While there can be reluctance to enable access to the outside, COVID made this a practical necessity and set a new expectation in the workplace. There are always cybersecurity concerns, but data analytics work is not writing to your system; it is a read-only case where the data is pulled into analysis to yield a discovery. Even in the case where security expectations cannot be met, exporting to external media or sending CSV files can get the job done.

Artificial intelligence is becoming a critical tool in Industry 4.0 for forecasting, maintenance, diagnostics and control. Achieving these capabilities, however, requires overcoming challenges such as overfitting, limited datasets and black-box models. Techniques like physics-informed neural networks improve accuracy and interpretability, while sensitivity analysis helps expose model behavior — though correlated inputs must be considered. High-quality datasets remain a major challenge, as industrial data contains noise, sparse lab samples and limited process variation.

Ultimately, the value of data depends on having a complete data analytics ecosystem. This includes connectivity, visualization, pre-processing, analysis and collaboration. A robust solution must connect to diverse data sources (process data, lab data, analyzers, databases), visualize large datasets effectively, provide powerful tools for cleansing and pre-processing, support a wide range of analytical methods and enable collaboration across teams and sites. With these capabilities, organizations can turn their “digital landfills” into strategic assets. In the 4th industrial era, those who understand how to use data — not just collect it — will lead.

Julie Smith’s Response

I agree with everyone that there is a lot of data out there, and it’s a challenge to turn it into usable information. Performance metrics are one way to mine the data, but they require training to interpret properly. Sometimes a simpler tool works better. I’ve found graphical methods can be helpful in many cases: not only the standard variable versus time or metric over time, but unconventional graphs like histograms or PV/OP scatter plots. Scatter plots are particularly useful for finding hidden oscillators caused by overtuned loops. I recall one process had an overly aggressive loop that, over a 24-hour period, generated a beautiful Fibonacci spiral! That made it easy to convince people that a change was needed.
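For anyone who wants to try the PV/OP scatter plot idea, the sketch below simulates an oscillating loop and plots PV against OP; the elliptical pattern traced by a sustained oscillation is what gives hidden oscillators away. The data here is simulated, not pulled from a real historian.

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.arange(0, 24 * 3600, 2)                    # 24 hours of 2-second samples
    pv = 50 + 3 * np.sin(2 * np.pi * t / 600)         # simulated oscillating PV
    op = 40 + 5 * np.sin(2 * np.pi * t / 600 + 1.0)   # OP with a phase lag

    plt.scatter(op, pv, s=1)
    plt.xlabel("Controller output (%)")
    plt.ylabel("Process variable (%)")
    plt.title("PV vs. OP: an elliptical pattern suggests a sustained oscillation")
    plt.show()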

Michael Taube’s Response

Two (2) issues I see:

  1. To paraphrase the situation, we have “data, data everywhere, but none of it’s useful!” Long term historians (governed and managed by IT) contain years — decades? — of data: all of it compressed, averaged… USELESS. AI/machine/deep learning requires two (2) things: 1) LOTS of uncompressed, high frequency data that includes 2) lots of “anomalies;” this is the only way for the statistics (that’s all AI really is!) to find the “edge of the cliff.” So anyone promoting that “years of training data” is available from these historians should be shown the door (and none too gently)!
  2. Quoting Admiral H. G. Rickover: “People get things done.” Sadly, there aren’t enough (skilled and knowledgeable) people with the time and aptitude to monitor and analyze all the potential data sources within most/all organizations. Outsourcing this to second or third parties carries risks in terms of cybersecurity, as well as self-serving incentives (e.g., the insurance salesman telling you that you need MORE insurance), in addition to these companies lacking sufficient engineering knowledge and expertise in/with YOUR facility. 

From a process control/optimization consulting perspective, I spend a fair amount of time collating/organizing data — long term and short term — to look for patterns: large/frequent deviations, valves at limits, nonlinearity, valve stiction, periodic changes in behavior — at SP, smooth v. cycling — as well as looking for cause and effect relationships based on hard-earned (and mostly nontransferable) engineering knowledge and experience. This takes time, and most organizations have long been under the “Do More With Less” More-Bad-Advice mantra, so engineering/technical staff are unable to undertake this kind of analysis because they’re too lean and overloaded with other more urgent work (i.e., “firefighting”). And, while control monitoring software is ostensibly available to provide some type(s) of control loop analysis as described above, based on my recon, successful implementations have been “hit and miss” — mostly miss! 

Mark Darby’s Response

A key functionality of a historian is its trending capability, something we take for granted today. Imagine tuning or troubleshooting a control loop with just a few trend recorders and the error indicators on board-mounted PID controllers. Time series trends are useful for cause and effect analysis (i.e., who dunnit?) and for assessing dynamic behavior. XY plots are helpful in assessing static relationships and nonlinearities. And one tool I believe is not used enough is the histogram (or frequency plot), which is useful for assessing variance and the shape of the distribution (e.g., similar to Gaussian, multiple peaks, non-symmetric, etc.) as opposed to just reporting the mean and standard deviation.
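A histogram takes only a few lines to produce from historian data. The sketch below uses simulated samples from two operating regimes to show the kind of two-peaked distribution that a mean and standard deviation alone would hide.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    # Simulated samples from two operating regimes for one tag
    pv = np.concatenate([rng.normal(48.0, 0.5, 5000),
                         rng.normal(52.0, 0.8, 5000)])

    plt.hist(pv, bins=80)
    plt.xlabel("Process variable (%)")
    plt.ylabel("Sample count")
    plt.title("Two peaks suggest two operating regimes, not one Gaussian")
    plt.show()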

Others have reported on the problem of using compressed data. Of course, non-compressed data is always preferred, but if the compression parameters are correctly set, it will not be a problem. But if the compression is overdone, it can lead to problems of causality — such as output responses appearing before input changes.

Historians often include calculation capability and some may include access to programming languages like Python for performing more sophisticated calculations and analytics. Historians are often accessed to bring data into third party tools for fitting models. While normal operating data is often not informative (think of noisy data imposed on a nearly constant signal), careful “slicing” out of “bad” data can yield good results. Slicing criteria include data that represents loops not in the proper mode, saturation of valves (at 0% or 100%), upsets resulting in large deviations and sections in which there is little movement in setpoints and measured disturbance variables.
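A sketch of such slicing with pandas is shown below. The column names, mode string and thresholds are all assumptions to be adjusted for the actual loop and units.

    import pandas as pd

    def slice_good_data(df: pd.DataFrame) -> pd.DataFrame:
        """Keep only rows suitable for model fitting, per the criteria above."""
        good = (
            (df["mode"] == "AUTO")                                 # loop in the proper mode
            & df["op"].between(1.0, 99.0)                          # valve not saturated at 0% or 100%
            & ((df["sp"] - df["pv"]).abs() < 5.0)                  # no large upset deviations
            & (df["sp"].rolling(600, min_periods=1).std() > 0.1)   # some movement in the setpoint
        )
        return df[good]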

For loop analysis, especially flow and pressure loops, one-minute data is not sufficient. Fortunately, modern historians support faster data collection. If faster collection cannot be achieved in the historian, one needs an external data collection routine, which can be applied to a limited number of user-specified variables on a temporary basis.
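A temporary external collection routine can be as simple as the sketch below, which polls a short list of tags at a fixed period and appends them to a CSV file. The read_tag() function is a hypothetical stand-in for whatever read call your control system or OPC interface actually provides.

    import csv
    import time
    from datetime import datetime

    TAGS = ["FIC-101.PV", "FIC-101.OP"]   # limited, user-specified variables
    PERIOD = 0.5                          # seconds between samples

    def read_tag(tag: str) -> float:
        raise NotImplementedError("replace with your system's read call")

    with open("fast_collection.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + TAGS)
        for _ in range(7200):             # one hour of 0.5-second samples
            writer.writerow([datetime.now().isoformat()] +
                            [read_tag(tag) for tag in TAGS])
            time.sleep(PERIOD)            # simple pacing; will drift slightly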

Historians are often the source of data from an executed plant test, not just a repository of longer-term past history. One needs to be concerned about extrapolation from old data.

Russ Rhinehart’s Response

If the process changes, then old data from the historian may be irrelevant, and should not be used in process modeling or analysis. For instance, if the piping path changes due to repairs or improvements, then delays might be substantially different. If tank levels are changed, then lags between variables or variance on mixed composition could substantially change. If catalyst is regenerated, heat exchanger fouling removed, raw material changed, distillation trays collapse…  Be sure the old and new data represent the same behavior being investigated.

Ed Farmer’s Response

In the early days of automatic process control, the data involved was limited to that associated with control loops. It was selected and the monitoring of it was designed around process control needs. Sometimes the “big picture” emerged, but often it didn’t — those with lots of experience in a plant might “see” it, but less experienced people struggled to see how “this” might be impacting “that.” A lot of things have happened since then — beginning with programmable controllers and migrating to modern equipment capable of providing the “all things everywhere” view of the plant and process. In pipeline systems, there were often multiple states or countries involved, all accompanied by a tremendous need to see “the big picture.” Understanding all that stuff was facilitated by displays that showed and told “the story.” When there appeared to be developing problems, nearly everyone could near-instantly see when, where and how much.

Harder still, though, were problems involving seemingly unrelated parts of a system that somehow became involved with each other. Sometimes this might be related to feed stock, or perhaps a failure in some “pumping up” or “drawing down” equipment that moved an “operating point.” A search was often undertaken using archived data and supplementary analysis on combinations of it. Sometimes, a combination of items that should answer a question just “asks” a couple more. Again, a “big picture” data set facilitates resolution. Sometimes a mystery needs broader thinking and analysis techniques (e.g., Fourier analysis) to help identify a source.
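As one example of the kind of supplementary analysis Ed mentions, the short sketch below uses NumPy’s FFT to find the dominant oscillation period in an archived signal; comparing dominant periods across seemingly unrelated tags can point to a common source. The array and sample time are assumptions.

    import numpy as np

    def dominant_period(pv, dt):
        """Return the period (s) of the strongest non-DC frequency component
        in a uniformly sampled signal pv with sample time dt."""
        pv = np.asarray(pv, dtype=float)
        spectrum = np.abs(np.fft.rfft(pv - pv.mean()))
        freqs = np.fft.rfftfreq(len(pv), d=dt)
        k = np.argmax(spectrum[1:]) + 1    # skip the zero-frequency bin
        return 1.0 / freqs[k]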

Simply put, data archives can make disturbing issues understandable, and process or instrumentation issues findable.

About the Authors

Erik Cornelsen is an automation and process control engineer at DPS Group, a leading system integrator based in Scotland. With over a decade of experience, Erik has worked and lived in six countries, contributing to diverse industrial sectors, including food and beverages, logistics and construction materials. He holds a master’s degree in mechanical engineering from INSA de Lyon (France) and is a Chartered Engineer, a member of the Institution of Mechanical Engineers (UK) and an active member of ISA.

George Buckbee is a P.E. and ISA Fellow, and is now president of Sage Feedback LLC. With over 40 years of practical industry experience, George has worked across many process industries all over the globe. Since the early 2000s, he has been at the forefront of developing control loop performance monitoring and other software tools. The author of two books published by ISA and dozens of articles about process control, George is also a well-known instructor and presenter at conferences and in webinars. He holds a B.S. in chemical engineering from Washington University in St. Louis, and an M.S. in chemical engineering from the University of California at Santa Barbara.

Michel Ruel, a retired engineer and ISA Fellow, is a recognized expert in process control and control performance monitoring. He led a team that implemented innovative and highly effective control strategies across a wide range of industries, including mining and metals, aerospace, energy, pulp and paper and petrochemicals. An accomplished author of numerous books and publications, Ruel is also a software designer specializing in instrumentation and process control. He is the founding president of Top Control Inc. and has contributed to projects in multiple countries. In addition, Ruel is a sought-after lecturer for various professional associations.

Gregory K. McMillan retired as a Senior Fellow from Solutia Inc. in 2002 and retired as a senior principal software engineer in Emerson Process Systems and Solutions simulation R&D in 2023. Greg is an ISA Fellow and the author of more than 200 articles and papers, 100 Q&A posts, 80 blogs, 200 columns and 20 books. He was one of the first inductees into the Control Global Process Automation Hall of Fame in 2001, and received the ISA Lifetime Achievement Award in 2010, ISA Mentor Award in 2020 and ISA Standards Achievement Award in 2023. His LinkedIn profile is: https://www.linkedin.com/in/greg-mcmillan-5b256514/ 

Pat Dixon, PE, PMP is president of www.DPAS-INC.com, a system integrator with expertise in data analytics and advanced control. Pat has experience in industrial automation beginning in 1984, having worked for SD Warren Paper, Honeywell, Pavilion Technologies, DuPont and Emerson as well as system integrators. Pat is a professional engineer in four states and a certified project manager. His LinkedIn profile is: https://www.linkedin.com/in/dixonpatrick/

Julie F. Smith retired as the global automation and process control leader for DuPont. She has 35 years of experience in the process industry, having been part of numerous engineering and operations activities across the globe. She has written several papers and columns highlighting the value of modeling and simulation. Julie has a BS in chemical engineering from Rensselaer Polytechnic Institute and an MChE from the University of Delaware.

Michael Taube is a principal consultant at S&D Consulting, Inc. Serving the greater process industries as an independent consultant since 2002, he pursues his passion to make things better than they were yesterday by identifying the problems no one else sees or is willing to admit to and willingly “gets his hands dirty” to solve the problems no one else can. Due to the continued occurrence of individual injuries and fatalities as well as large-scale industrial incidents, he collaborates with operational excellence and safety culture experts to promote a real and lasting cultural shift in the process industries to help make ZERO incidents a reality. He graduated from Texas A&M University in 1988 with a Bachelor of Science degree in chemical engineering. His LinkedIn profile is: https://www.linkedin.com/in/michaeltaube/

Russell Rhinehart has experience in both the process industry (13 years) and academe (31 years). He is a fellow of ISA and AIChE, and a CONTROL Automation Hall of Fame inductee. He served as president of the American Automatic Control Council and editor-in-chief of ISA Transactions. Now “retired,” Russ is working to disseminate engineering techniques with his web site (www.r3eda.com), short courses, books and monthly articles. His 1968 B.S. in ChE and M.S. in NucE are both from the U of Maryland. His 1985 Ph.D. in ChE is from North Carolina State U.

Mark Darby is an independent consultant with CMiD Solutions. He provides process control-related services to the petrochemical, refining and mid/upstream industries in the design and implementation of advanced regulatory and multivariable predictive controls. Mark is an ISA senior member. He served on the TR5.9 committee that produced the PID technical report and has presented at ISA technical conferences. Mark frequently publishes and presents on topics related to process control and real-time optimization. He is a contributing author to the McGraw-Hill Process/Industrial Instruments and Controls Handbook, Sixth Edition. His LinkedIn profile is: www.linkedin.com/in/mark-darby-5210921

Ed Farmer completed a BSEE and a master’s degree in physics at California State University, Chico. He retired in 2018 after 50 years of electrical and control systems engineering. Much of his work involved oil industry automation projects around the world and the application of the LeakNet pipeline leak detection and location system he patented. His publications include three ISA books, short courses and numerous periodical articles and blogs. He is an ISA Fellow and Mentor.

Greg McMillan
Greg McMillan has more than 50 years of experience in industrial process automation, with an emphasis on the synergy of dynamic modeling and process control. He retired as a Senior Fellow from Solutia and a senior principal software engineer from Emerson Process Systems and Solutions. He was also an adjunct professor in the Washington University Saint Louis Chemical Engineering department from 2001 to 2004. Greg is the author of numerous ISA books and columns on process control, and he has been the monthly Control Talk columnist for Control magazine since 2002. He is the leader of the monthly ISA “Ask the Automation Pros” Q&A posts that began as a series of Mentor Program Q&A posts in 2014. He started and guided the ISA Standards and Practices committee on ISA-TR5.9-2023, PID Algorithms and Performance Technical Report, and he wrote “Annex A - Valve Response and Control Loop Performance, Sources, Consequences, Fixes, and Specifications” in ISA-TR75.25.02-2000 (R2023), Control Valve Response Measurement from Step Inputs. Greg’s achievements include the ISA Kermit Fischer Environmental Award for pH control in 1991, appointment to ISA Fellow in 1991, the Control magazine Engineer of the Year Award for the Process Industry in 1994, induction into the Control magazine Process Automation Hall of Fame in 2001, selection as one of InTech magazine’s 50 Most Influential Innovators in 2003, several ISA Raymond D. Molloy awards for bestselling books of the year, the ISA Life Achievement Award in 2010, the ISA Mentoring Excellence award in 2020, and the ISA Standards Achievement Award in 2023. He has a BS in engineering physics from Kansas University and an MS in control theory from Missouri University of Science and Technology, both with emphasis on industrial processes.

Books:

Advances in Reactor Measurement and Control
Good Tuning: A Pocket Guide, Fourth Edition
New Directions in Bioprocess Modeling and Control: Maximizing Process Analytical Technology Benefits, Second Edition
Essentials of Modern Measurements and Final Elements in the Process Industry: A Guide to Design, Configuration, Installation, and Maintenance
101 Tips for a Successful Automation Career
Advanced pH Measurement and Control: Digital Twin Synergy and Advances in Technology, Fourth Edition
The Funnier Side of Retirement for Engineers and People of the Technical Persuasion
The Life and Times of an Automation Professional - An Illustrated Guide
Advanced Temperature Measurement and Control, Second Edition
Models Unleashed: Virtual Plant and Model Predictive Control Applications
