The following discussion is part of an occasional series, “Ask the Automation Pros,” authored by Greg McMillan, industry consultant, author of numerous process control books and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel; you can view the playlist here. You can read posts from this series here.
What are the prospects for an autonomous process manufacturing unit? What processes are more likely to have a “lights-out” plant? Is a semi-autonomous plant more reasonable? What are the potential problems and solutions for different manufacturing units?
Discrete manufacturing is more likely to operate in a lights-out, fully automated manner with no human in the loop. Machines with repetitive operations can be predictable, and the number of process disturbances tends to be minimal.
Continuous processes (of which batch is a variant) have inherent variability because the number of disturbance variables is large and in some cases unknown. An example is the strength of a sheet of paper. There are well-known relationships to fiber mass and moisture content, but many factors such as fiber quality, contaminants and ambient conditions can have unpredictable influences. Instrumentation advances now make such predictions more feasible, but we are nowhere near taking humans out of the loop. The same could be said about molecular weight out of a reactor, biological oxygen demand in effluent and other such analytical properties. Also consider that continuous manufacturing involves high-energy sources and materials that can be flammable, explosive and toxic. A nuclear power plant is not an operation to take your eyes off of. While predictions and artificial intelligence advances can lead to more automation and less labor, we are nowhere near taking humans out of the loop in continuous manufacturing. I will conclude by saying that any facility hanging on to obsolete control systems and relying on eBay for spare parts has no business thinking AI can take humans out of its operations.
For the process industries, remote monitoring is a more realistic goal. This can take on many forms, often centered around predictive maintenance or process optimization. AI can definitely be of help here, given enough properly curated data.
One of the loftiest variants is elimination of the night shift, with an unmanned control room monitored from another location during the wee hours. It can be done, but there are several important prerequisites. The first is having everything in electronic format. This may seem obvious, but it is not a given in many older facilities. The other important prerequisite is alarm management. If your alarm rate per operator is above the ISA-recommended limit, then remote operation is not possible. Alarms need to be infrequent and meaningful if you are going to trust another location to respond to them in a timely manner. Alarm responses also need to be well-defined and capable of remote execution (i.e., not requiring a 3 a.m. phone call).
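As a rough illustration of that assessment, the minimal sketch below counts alarms per clock hour from an event log and flags the hours that would overload a remote operator. The log format and the nominal limit of six alarms per hour are illustrative assumptions loosely based on commonly cited alarm management guidance; your own ISA-18.2 rationalization targets should govern.

```python
# Minimal sketch (assumptions: alarm log as ISO-8601 timestamp strings, a single
# operator position, and an illustrative limit of ~6 alarms/hour; consult
# ISA-18.2 and your own rationalization for real limits).
from collections import Counter
from datetime import datetime

def hourly_alarm_rates(timestamps):
    """Count alarms per clock hour from a list of ISO-8601 timestamp strings."""
    hours = [datetime.fromisoformat(t).replace(minute=0, second=0, microsecond=0)
             for t in timestamps]
    return Counter(hours)

def flag_overloaded_hours(timestamps, limit_per_hour=6):
    """Return the hours whose alarm count exceeds the chosen limit."""
    rates = hourly_alarm_rates(timestamps)
    return {hour: n for hour, n in rates.items() if n > limit_per_hour}

if __name__ == "__main__":
    log = ["2024-01-15T02:05:00", "2024-01-15T02:07:30", "2024-01-15T02:12:10",
           "2024-01-15T02:20:45", "2024-01-15T02:31:00", "2024-01-15T02:44:20",
           "2024-01-15T02:58:05", "2024-01-15T03:10:00"]
    print(flag_overloaded_hours(log))   # the 02:00 hour exceeds the limit
```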
There is much improvement to be had, but it starts with a realistic assessment of what is possible, and how far away the current state lies from that goal.
I’ll be blunt: “Workerless factories” are pure science fiction. Quoting C-3PO from “Star Wars: Attack of the Clones”: “Machines making machines?! How perverse!”
The ugly truth is that, regardless of what is being made or manufactured, equipment requires consistent maintenance. It wears out, breaks down, etc. And ONLY people — living, breathing human beings — are capable of assessing, diagnosing and physically repairing that equipment. Automation is capable of operating the equipment without workers, but designing and implementing that automation is another human-only capability… for the time being.
One day, it may be possible for AI to generate control code and configuration when provided with some specifications, but the specifications will still come from people. Monitoring is one area where AI might be a useful tool — emphasis on tool: a tool used by people to preemptively diagnose potential or actual failures. But I’m reminded of another science fiction movie, “2001: A Space Odyssey,” where the HAL 9000 computer misdiagnoses a pending failure of some comms gear, before going on a killing spree, no less!
Bottom line: We still need people to design, operate, maintain and repair manufacturing facilities; we’re still a very long way from “autonomous manufacturing.”
I heard secondhand of someone who runs his machine shop “lights out.” In the process industries, automation and improved reliability have resulted in fewer “operators,” but as long as the industry is handling flammable, hazardous chemicals, there will need to be human intervention — someone in charge. Would you want to live close to a plant processing ethylene oxide “lights out”?
I recall reading about a dairy milking barn that was “lights out,” so to speak; cows came in to eat on their own, and robots cleaned the udders and did the milking. They even had an automated system to clean out the waste. There are lots of automation examples on the farm.
The previous responses capture the challenges with autonomous plant operation, especially plants with the potential for runaway or other types of unstable operation. I think it is worth mentioning prerequisites that are needed to move in the autonomous direction, which would also benefit operations monitored on-site. The following come to mind.
A key one is a properly functioning safety instrumented system, separate from the basic controls, which automatically detects hazardous conditions and takes appropriate action. In the most extreme case, this would be the shutdown of the affected unit(s) or, in less severe cases, bringing the process to a safe state.
For startup and shutdown, this includes interlocks to prevent unsafe conditions. To cover cases where not all steps can be automated, procedural automation ensures sequential steps are performed in the proper order, reducing human error and enforcing best practice.
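As a hedged illustration of procedural automation with interlock checks, the sketch below uses hypothetical step and permissive names; a real implementation would live in the control system, following ISA-88/ISA-106 style procedures rather than a script.

```python
# Minimal sketch of procedural automation: steps run in a fixed order, and a
# step is refused if its interlock/permissive is not satisfied. Step names,
# permissives and actions are illustrative assumptions.
def run_procedure(steps, permissives):
    """Execute steps in order; hold if a required permissive is not met."""
    for name, required_permissive, action in steps:
        if not permissives.get(required_permissive, False):
            print(f"HOLD at '{name}': permissive "
                  f"'{required_permissive}' not satisfied")
            return False
        action()
        print(f"Step '{name}' executed")
    return True

if __name__ == "__main__":
    permissives = {"cooling_water_on": True, "vent_valve_closed": False}
    steps = [
        ("start cooling water check", "cooling_water_on", lambda: None),
        ("pressurize reactor",        "vent_valve_closed", lambda: None),
    ]
    run_procedure(steps, permissives)   # holds at the second step
```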
Intelligent alarming and monitoring enable quicker response to process deviations when human involvement may be necessary. Monitoring could include anomaly detection and predictive analytics, which could recommend or potentially trigger automatic action.
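As a minimal sketch of the monitoring idea, the example below flags deviations of a process variable from an exponentially weighted moving average. The variable, filter factor and deviation band are illustrative assumptions; production analytics would use validated models and engineered thresholds.

```python
# Minimal sketch of simple anomaly detection on a process variable using an
# exponentially weighted moving average (EWMA) and a fixed deviation band.
def detect_anomalies(samples, alpha=0.1, band=3.0):
    """Flag samples that deviate from the running EWMA by more than 'band' units."""
    ewma = samples[0]
    flagged = []
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - ewma) > band:
            flagged.append((i, x, ewma))
        ewma = alpha * x + (1 - alpha) * ewma
    return flagged

if __name__ == "__main__":
    reactor_temp = [150.0, 150.2, 149.9, 150.1, 150.3, 156.5, 150.0, 150.2]
    for index, value, baseline in detect_anomalies(reactor_temp):
        print(f"sample {index}: {value} deviates from baseline {baseline:.1f}")
```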
I wanted to follow up on our earlier discussions around the prospects for autonomous manufacturing and share a set of real-world examples that I believe directly reinforce many of the points raised thus far in the ISA “Ask the Pros” conversation.
I intentionally focused on facilities that truly operate lights-out, meaning no routine human presence on the manufacturing floor, rather than loosely automated plants. This eliminated the Tesla Gigafactory in Nevada (90% automated) and a few other well-known semi-autonomous examples. The referenced examples span discrete manufacturing, electronics, robotics, machine tools and appliances, and they exist today at commercial scale.
What stood out to me is how closely these implementations align with the feedback shared by all and already articulated in the article:
Factories such as Fanuc’s robot plants, Philips’ razor facility, Foxconn’s and Xiaomi’s lights-out electronics factories and Okuma’s Dream Site all demonstrate that when variability is engineered out, procedures are automated and state-based control is well-defined, true autonomy is achievable and sustainable.
Rather than contradicting the case for semi-autonomous operations in process industries, these examples strengthen it. They show that autonomy is not theoretical, but conditional. Where variability, energy hazards and material risks dominate, the semi-autonomous model you described remains the most responsible path forward. Where physics, materials and workflows can be tightly bounded, autonomy already exists.
I see these facilities as proof points that help sharpen the broader discussion. Not as an argument that all manufacturing should be autonomous, but as evidence that the principles you outlined do work when applied in the right context and with the right intent.
I included links to each case study for further reading in case any are useful for future writing, teaching or discussion. I would enjoy the community’s perspectives on which of these you all see as transferable lessons versus industry-specific outliers.
Always appreciate the depth and realism everyone brings to these conversations.
While a batch operation without high pressures or temperatures, recycle streams, raw material variations, aging equipment, operation pushed to design limits, instrumentation failures or errors, or flammable or hazardous materials could be a candidate for autonomous control, it would be a rare find.
People often act at a lower level, chasing problems instead of solving them, due to a lack of knowledge. I have seen a plant with operators bogged down dealing with continual upsets from recycle streams, equipment pushed way beyond design limits and instrumentation problems, causing five or more trips per year. Installing triple measurements with middle signal selection and using dynamic simulations and procedure automation reduced the trips to fewer than two per decade.
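For readers unfamiliar with middle signal selection, here is a minimal sketch of middle-of-three selection on triplicated transmitters (tag names are illustrative): a single failed or drifting transmitter cannot pull the selected value away from the two healthy measurements.

```python
# Minimal sketch of middle-of-three (median) signal selection for triplicated
# transmitters; the selected value ignores a single bad measurement.
def middle_of_three(a, b, c):
    """Return the median of three redundant measurements."""
    return sorted((a, b, c))[1]

if __name__ == "__main__":
    # Transmitter B has failed low; the selected value ignores it.
    pt_101a, pt_101b, pt_101c = 12.4, 0.0, 12.6
    print(middle_of_three(pt_101a, pt_101b, pt_101c))   # 12.4
```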
Much more realistic than autonomous control is a semi-autonomous operation in which the roles of operators and of the professionals responsible for automation, process, mechanical and safety system design, maintenance and continual improvement have been elevated to proactively address possible operational problems, creating what Michael Taube detailed as a high reliability organization (HRO) in the three-part Control Talk column series “Improving safety performance: competence vs. compliance” (May, June, July 2024). Michael says data shows that failures are more likely due to system errors than human errors. I have seen where a lack of continually updated training has led to operators trying to run controllers in manual, creating problems for themselves and the next shift.
I see a semi-autonomous future where engineers, technicians and operators are continually learning and sharing expertise, enhanced by an updated digital twin and dynamic simulation for education and innovation. Rather than dealing with trips, they are continually developing, testing and implementing process control improvements such as: adaptive control for the inevitable nonlinearities; instrumentation with much better 5Rs (resolution, repeatability, rangeability, response time and reliability); procedure automation to automate startups and product transitions; state-based control to address equipment problems; override control to preemptively correct for process conditions approaching process and equipment limits; and model predictive control to recognize and address interactions and promote optimization based on future values of constraints and controlled variables.
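As one hedged illustration of the override idea, the sketch below selects the lower of a normal flow controller output and a pressure override controller output, so the override takes command as the constraint is approached. The proportional-only controllers, gains and setpoints are simplified stand-ins, not a recommended design.

```python
# Minimal sketch of override (low signal select) control: the valve receives
# the lower of the flow controller output and the pressure override output.
def p_only(setpoint, measurement, gain, bias=50.0):
    """Proportional-only controller output in percent, clamped to 0-100."""
    out = bias + gain * (setpoint - measurement)
    return max(0.0, min(100.0, out))

def override_select(flow_out, pressure_out):
    """Low signal select: the more conservative (lower) output wins."""
    return min(flow_out, pressure_out)

if __name__ == "__main__":
    flow_out = p_only(setpoint=80.0, measurement=75.0, gain=2.0)     # asks for 60%
    press_out = p_only(setpoint=10.0, measurement=9.8, gain=20.0)    # near the limit
    print(override_select(flow_out, press_out))  # pressure override limits the valve
```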
A realistic understanding of PID tuning problems, instrumentation 5Rs, variable lags and unmeasured disturbances is essential, because these are a severe limitation of neural networks (NN) and artificial intelligence (AI) developed from field data. Dynamic first-principles simulations are needed to provide cause-and-effect analysis and much more data, which can greatly reduce the risk of gain reversals during NN interpolation and bizarre results during NN extrapolation. See my January Control feature article “Digital Twin key to prosperous process control” to learn much more about the potential role of dynamic simulation in improving plant reliability and performance.
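One simple way to screen a data-driven model for gain reversals is to step an input across its range and check that the finite-difference gain never changes sign. The sketch below does this for a hypothetical fitted response standing in for a trained neural network; the model, range and step count are assumptions.

```python
# Minimal sketch of a gain-sign check on a fitted data-driven model: sweep one
# input and verify the local finite-difference gain never reverses sign.
def gain_signs(model, lo, hi, steps=20, delta=1e-3):
    """Return the sign of the local gain d(output)/d(input) across the range."""
    signs = []
    for i in range(steps + 1):
        u = lo + (hi - lo) * i / steps
        gain = (model(u + delta) - model(u - delta)) / (2 * delta)
        signs.append(1 if gain > 0 else -1 if gain < 0 else 0)
    return signs

def has_gain_reversal(model, lo, hi):
    """True if the gain sign flips anywhere across the swept range."""
    nonzero = [s for s in gain_signs(model, lo, hi) if s != 0]
    return len(set(nonzero)) > 1

if __name__ == "__main__":
    def fitted(u):
        # Hypothetical fitted response whose apparent gain reverses above u ~ 67.
        return 4.0 * u - 0.03 * u ** 2
    print(has_gain_reversal(fitted, 0.0, 100.0))   # True: gain reverses
```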
To greatly improve data integrity, I suggest the ongoing series of Control Talk columns with Mike Glass. In the January issue, I conclude the column with advice, based on my personal experience, on how to maximize measurement performance and minimize maintenance through device selection and installation. Lower maintenance costs and the financial benefits from increases in process efficiency and capacity often outweigh increased initial hardware costs.
Next time, let us discuss how final control elements — valves, actuators and positioners — can create further data integrity issues. Even with perfect measurements, valve problems can introduce their own forms of data corruption.
Here are some thoughts by Ed Farmer, retired electrical and control systems engineer with over a half-century of wandering in Moore-land, on the evolution of control systems.
Back in the mid-‘60s, my trip through electrical engineering college was interrupted by being drafted for the Vietnam War. As it turned out, I did well on the Army’s aptitude testing and went through some special training, followed by assignment to a unit that did a bunch of secret stuff with electronics technology. There were various teams, some focused on particular equipment and others configured for special operations involving it. The diversity was intended to ensure someone capable of dealing with the equipment or mission was always where needed, while being careful to ensure no one knew very much about the “big picture,” which was “classified.”
This, of course, enabled control by means of workforce task and knowledge structure, which required more people and more knowledge organization to keep everyone’s knowledge acceptably limited in breadth and depth. In the years since, electronics has dramatically increased the capabilities of the equipment, automating much of what it does beyond visibility or even a need to be understood. This illustrates “intelligent” technology improving mission capability while dramatically reducing the number of people and the visible complexity involved, and increasing the uniformity of the “work” produced.
I don’t think about it much anymore, but it is interesting to consider the quantity and quality of the work produced by machines aided by humans, as opposed to what could be done by even more humans doing the best they could with the primitive machines. Of course, back in the old days, a team could quickly adapt for unusual circumstances, but the more modern approach involves contracting out some complex programming and equipment restructuring.
I finished engineering school at the beginning of the ‘70s. A great deal of our program moved beyond basic circuits and devices and into the exploding world of electronic system design. Moore’s Law (from Gordon E. Moore, co-founder of Intel) foretold a doubling of the capability of electronic hardware every couple of years. This motivated understanding more of engineering from “first principles” and more of system design from “operations research.” Before long, we could hold something in our hand that, back at our beginning, needed four floors of a large building. Design objectives expanded from simple functions to the edge of possibility. An engineering project that used to involve controlling a single valve from a particular measurement grew into systematic solutions that were, in their own right, far more complex than the purpose for which they were applied. The world seen in Moore’s Law became even more vivid. Increases in control system science and technology seemed to motivate greater projects with even more built-in science and convenience. Applying it motivated ever-deeper dives into multivariable partial differential equations and ever-broader thinking.
My biggest market grew from some early success in detecting leaks from pipelines. The technology used improved sensitivity by two orders of magnitude, but that, as I’m sure Moore would have noticed, launched attention to all kinds of improvements. The more we succeeded the more we found to do next. A need for some technology twist, slide or expansion motivated a sea of thought, mostly rooted in science. The more we did, the more there was to do. Networking helped a lot.
Eventually, one of our best customers collected quite a few pipelines spread over half a dozen states and consolidated their operation onto a nice air-conditioned office floor in a downtown high-rise in a large city. That was quite a few years ago… hmmm. I wonder where Moore-like thinking might be taking it now…
All this happened because experience triggered creative thought based in science, which motivated the transition into technology and operations research. One has to wonder what the end of this might look like. A “solution” depends on an idea and the ability to implement it. History suggests there may be a lot of growth and adjustment before finding the next big moment. What may be needed there depends on a lot of things that humans have become good at finding and implementing. The technology, and the ability of the humans involved, has to be pertinent to the journey.
Moore described a basic characteristic of “the path,” but…
Gregory K. McMillan retired as a Senior Fellow from Solutia Inc. in 2002 and retired as a senior principal software engineer in Emerson Process Systems and Solutions simulation R&D in 2023. Greg is an ISA Fellow and the author of more than 200 articles and papers, 100 Q&A posts, 80 blogs, 200 columns and 20 books. He was one of the first inductees into the Control Global Process Automation Hall of Fame in 2001, and received the ISA Lifetime Achievement Award in 2010, ISA Mentor Award in 2020 and ISA Standards Achievement Award in 2023. His LinkedIn profile is: https://www.linkedin.com/in/greg-mcmillan-5b256514/
Pat Dixon, PE, PMP is president of www.DPAS-INC.com, a system integrator with expertise in data analytics and advanced control. Pat has experience in industrial automation beginning in 1984, having worked for SD Warren Paper, Honeywell, Pavilion Technologies, DuPont and Emerson as well as system integrators. Pat is a professional engineer in four states and a certified project manager. His LinkedIn profile is: https://www.linkedin.com/in/dixonpatrick/
Julie F. Smith retired as the global automation and process control leader for DuPont. She has 35 years of experience in the process industry, having been part of numerous engineering and operations activities across the globe. She has written several papers and columns highlighting the value of modeling and simulation. Julie has a BS in chemical engineering from Rensselaer Polytechnic Institute and an MChE from the University of Delaware.
Michael Taube is a principal consultant at S&D Consulting, Inc. Serving the greater process industries as an independent consultant since 2002, he pursues his passion to make things better than they were yesterday by identifying the problems no one else sees or is willing to admit to and willingly “gets his hands dirty” to solve the problems no one else can. Due to the continued occurrence of individual injuries and fatalities as well as large-scale industrial incidents, he collaborates with operational excellence and safety culture experts to promote a real and lasting cultural shift in the process industries to help make ZERO incidents a reality. He graduated from Texas A&M University in 1988 with a Bachelor of Science degree in chemical engineering. His LinkedIn profile is: https://www.linkedin.com/in/michaeltaube/
Bob Heider has over 50 years of experience as a process control engineer, with an emphasis on the design of advanced process controls and process development. He spent 33 years with Monsanto in various plant and corporate engineering roles and worked with Greg on the first PROVOX system installation.
Bob has 16 years as an adjunct professor at Washington University's department of chemical engineering.
Bob also worked for five years at Confluence Solar, providing control expertise to support the company mission to develop premium quality single crystal silicon substrates for solar applications. Presently, he is an independent engineering consultant for various confidential clients. He is a Fellow of the International Society of Automation.
Mark Darby is an independent consultant with CMiD Solutions. He provides process control-related services to the petrochemical, refining and mid/upstream industries in the design and implementation of advanced regulatory and multivariable predictive controls. Mark is an ISA senior member. He served on the TR5.9 committee that produced the PID technical report and has presented at ISA technical conferences. Mark frequently publishes and presents on topics related to process control and real-time optimization. He is a contributing author to the McGraw-Hill Process/Industrial Instruments and Controls Handbook, Sixth Edition. His LinkedIn profile is: www.linkedin.com/in/mark-darby-5210921
Edin Rakovic started as an application engineer supporting advanced simulation and control technologies, and eventually built and led teams from 1 to 60+, driving digital transformation and operational readiness initiatives across the chemical, biotech, pharmaceutical, oil and gas and refining sectors. In 2024, Edin founded Prosera to productize the services and frameworks that consistently solved manufacturers’ biggest workforce and operational challenges. https://www.linkedin.com/in/edinrakovic/
Ed Farmer completed a BSEE and a master’s degree in physics at California State University, Chico. He retired in 2018 after 50 years of electrical and control systems engineering. Much of his work involved oil industry automation projects around the world and application of the LeakNet pipeline leak detection and location system he patented. His publications include three ISA books, short courses, numerous periodical articles and blogs. He is an ISA Fellow and Mentor.