The following discussion is part of an occasional series, “Ask the Automation Pros,” authored by Greg McMillan, industry consultant, author of numerous process control books and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel; you can view the playlist here. You can read posts from this series here.
Greg McMillan’s Question
What are your thoughts on the consequences of an overemphasis on reducing project cost? What are some examples of what is done to reduce project cost, and how has it affected instrumentation specification and testing, control strategy design and testing, process improvement, and ultimately process and control system performance and reliability?
Julie Smith’s Thoughts
A classic example is the elimination of tankage to reduce capital investment. Typically, a “value engineering” exercise identifies a make tank between two unit operations as simply a wide spot in the line. It doesn’t do anything to materially transform the product, so, the reasoning goes, it can be eliminated. Now the outflow of the upstream operation feeds directly into the downstream operation.
This lack of buffering wreaks havoc on process control, where capacitance is our friend. A well-sized tank allows the attenuation of disturbances before they impact the next operation. Control strategies can be simple and easily understood, like feedback control. Operation is relatively easy.
Without tankage, the process is much less robust, and any disturbance is immediately passed along, sometimes even magnified, as it reaches the next unit. Thus, the controls need to be more complex, incorporating elements such as feed forward and override control. These are much harder to understand and maintain, and over time are likely to be found in manual. Operators complain that the process is too sensitive, requiring more of their time and attention. Product quality may suffer.
Tankage is especially critical when doing pH control, where multiple tanks in series are often required to achieve the target. Project teams are under immense pressure to collapse these into a single tank to save capital. Such a reduction will almost always result in subpar performance, unless more complex controls are implemented. Other design modifications such as jet mixers and eductors may also be needed.
A simple dynamic process model can go a long way in helping project teams understand the need for tanks. Transient and steady state operation are different! Value engineering is a worthy exercise, but it needs to have more appreciation for dynamics and how they impact operation. Save the tanks!
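Julie’s attenuation point can be made quantitative with a first-order lag model of a well-mixed tank. The sketch below is illustrative only — the tank size, disturbance period and `tank_attenuation` helper are hypothetical, not from her examples: a tank with residence time tau passes only 1/sqrt(1 + (w*tau)^2) of a sinusoidal inlet disturbance.

```python
import math

def tank_attenuation(residence_time_min, disturbance_period_min):
    """Amplitude ratio of a well-mixed tank (first-order lag) for a
    sinusoidal inlet-composition disturbance: 1 / sqrt(1 + (w*tau)^2)."""
    w = 2 * math.pi / disturbance_period_min  # disturbance frequency, rad/min
    return 1.0 / math.sqrt(1.0 + (w * residence_time_min) ** 2)

# A 30-minute residence-time tank vs. a 10-minute-period upstream swing:
print(f"with tank: {tank_attenuation(30.0, 10.0):.1%} of the disturbance passes")

# Eliminate the tank (tau -> ~0) and the disturbance passes essentially
# unfiltered to the next unit operation:
print(f"no tank:   {tank_attenuation(0.01, 10.0):.1%} passes")
```

The same calculation run in reverse is a quick screening tool: given the fastest disturbance the upstream unit produces and the attenuation the downstream unit can tolerate, it suggests the minimum residence time worth defending in a value engineering review.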
Russ Rhinehart’s Thoughts
Yes, we should be seeking the lowest cost solution, but cost is much more than just the capital price of the equipment. The control system reduces variability which permits operating at set points that are closer to specifications and constraints. This minimizes quality giveaway, reduces energy utilization, conserves resources and increases throughput. A control system minimizes specification violations, which reduces waste, down-graded product or material that needs to be recycled. Although cascade might add cost to the control system and its maintenance, it could have a substantial payback (on whatever is the profitability metric used to evaluate projects).
To assess the impact of control system options, we should not be looking at steady state simulation. We should use dynamic simulators that have continually changing environmental influences, raw material property drifts and internal process attributes (such as catalyst reactivity). Dynamic simulations also model any internal blending that damps out short-term perturbations. After a year (or some such) of simulation time, the output data can reveal the distributions of process throughput, energy utilization, waste, specification violations, etc. These can be converted to costs in a DCFROI, LTROA or PBT analysis for a proposed control strategy. Then alternate strategies can be compared and the comprehensive best chosen.
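The “operate closer to spec” economics Russ describes can be sketched with a toy Monte Carlo run. The setpoints, standard deviations and spec limit below are hypothetical numbers chosen only to show the shape of the argument: tighter control buys a setpoint shift toward the limit at the same violation rate.

```python
import random

random.seed(1)  # repeatable illustrative run

def violation_fraction(setpoint, sigma, spec_limit=100.0, n=20000):
    """Fraction of samples above an upper spec limit when the controlled
    variable behaves as setpoint + Gaussian variability (toy model)."""
    return sum(random.gauss(setpoint, sigma) > spec_limit for _ in range(n)) / n

# Loose control (sigma = 2.0) forces the setpoint ~3 sigma below the limit:
loose = violation_fraction(94.0, 2.0)
# Tight control (sigma = 0.5) holds a similar violation rate 4.5 units
# closer to the limit -- that setpoint shift is recovered giveaway:
tight = violation_fraction(98.5, 0.5)
# Running the loose loop at the tight setpoint shows why it can't be done:
unsafe = violation_fraction(98.5, 2.0)
print(loose, tight, unsafe)
```

In a real study the Gaussian toy would be replaced by the dynamic simulator output Russ describes, and the violation and giveaway distributions converted to dollars in the project’s profitability metric.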
Patrick Dixon’s Thoughts
I agree with Julie’s comments that tanks provide attenuation of disturbances (filtering), but there are also cases where tanks introduce deadtime, which of course leads to more challenging control implementation. I think when considering greenfield design the project cost should not be considered in isolation to the long-term economic benefits. Tanks may cost money now but save money over the long term.
This leads to the fundamental problem with short-sighted decision-making. When project costs are evaluated in isolation, the low bidder gets the project. I lost projects like this because I will not bid an approach that is stupid. In some cases, those same clients come back to me when they find the low bidder can’t do the job. There are also cases where the low cost project delivers an unsafe process that can result in massive costs if an industrial accident occurs. The “Causality” podcast on The Engineered Network hosted by John Chidgey is a great resource.
Successful projects have the long-term benefits of sustainability, safety and environmental impact seriously considered. When I say sustainability, that includes economic sustainability. It is not economically sustainable to waste raw materials and energy and limp along with suboptimal control while competitors achieve higher OEE. When there are competitive bids, clients need to seriously evaluate the design and competency of the firms providing the bid. When it is sole source, a team approach where client and vendor work together to design the right solution will make all parties happier than an adversarial battle over pennies.
Firms like mine want to use tools that reduce our effort and costs. Artificial Intelligence can help in a big way. To an extent, migration tools can save lots of money. In one of my projects, I developed a migration tool for all tags, logic, graphics and other configuration from Honeywell Data Highway (DH) to Universal Control Network (UCN) that saved a lot of money. Tools like this can lower project costs without compromising good engineering design.
Greg McMillan’s Thoughts
I have seen so many problems caused by efforts to reduce measurement, valve and variable frequency drive (VFD) cost. The first that comes to mind was a project that used thermocouple (TC) input cards instead of TC transmitters. The TC card range was so large that the A/D noise prevented me from using derivative action and the higher gain action I wanted. A signal filter would have had to be so large that the additional loop dead time from the filter (a filter time this large converts nearly 100% to equivalent dead time) deteriorated the load disturbance rejection capability. Another, much more prevalent example is the use of on-off valves posing as throttling control valves. The on-off valves are in the piping spec and offer much larger capacity and tighter shutoff. Often they have piston actuators with link-arm, rack-and-pinion or scotch-yoke actuator connections and keylock shaft connections that introduce poor resolution and excessive lost motion, including shaft windup from excessive ball or disk friction near the closed position. Furthermore, the smart positioner is effectively dumb because it is being lied to: the readback shows motion when the actual internal closure member has not moved. I have seen lies as large as 8% in so-called “high performance” rotary valves. Not as big a problem, but still disheartening, is undersized actuators. I selected a good throttling valve for a pH loop but was notified that the actuator size I wanted was larger than the size paired with the stocked valve, a pairing chosen to make bids look better. It was acknowledged that the resolution would degrade from 0.25% to 0.6%, which for a sensitive pH loop was a disaster. Getting a larger actuator would have taken two months.
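The resolution numbers in the actuator story translate directly into limit-cycle amplitude. With integral action, a valve that can only move in steps of its resolution produces a sustained cycle of roughly resolution times process gain or larger; the helper and the pH process gain below are a hypothetical sketch, chosen only to show why 0.25% versus 0.6% matters so much on a steep titration curve.

```python
def ph_limit_cycle_amplitude(resolution_pct, process_gain_ph_per_pct):
    """Lower-bound amplitude of the resolution-induced limit cycle:
    with integral action, a valve moving in steps of `resolution_pct`
    produces a sustained oscillation of at least step size * process gain.
    (Hypothetical sketch -- real cycles also depend on lost motion.)"""
    return resolution_pct * process_gain_ph_per_pct

# Hypothetical steep titration curve near neutrality: 2 pH per % reagent.
print(ph_limit_cycle_amplitude(0.25, 2.0))  # ~0.5 pH swing
print(ph_limit_cycle_amplitude(0.6, 2.0))   # ~1.2 pH swing
```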
I went to a plant having problems with most of its loops and found the PID controllers all had a gain setting of 1 and a reset setting of 1 minute per repeat. The controllers had never been tuned using a dynamic simulation, auto tuning or some intelligence as to tuning settings. The plant was kept operational by the operators putting output limits on the controllers that held the output nearly constant, giving essentially manual control. I have also seen engineers doing the control system configuration who did not realize that the PID algorithm in the programmable logic controller (PLC) worked in engineering units, so that traditional tuning was not workable. Fortunately, there was a dynamic simulation, and I developed a quick and robust auto tuner that enabled us to retune all the PID controllers. Some academics do studies and publications on PID algorithms working in engineering units. When one protege tried to explain to his former professor that 99.9% of industry PID algorithms work with percent signals, he was so criticized by the professor that the conversation could not go forward.
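The engineering-units trap comes down to span scaling. Assuming a percent-based form where error and output are both expressed as percent of span (the common industrial convention described above), an equivalent engineering-units gain must be rescaled by the ratio of spans; the function name and the loop spans below are a hypothetical sketch.

```python
def gain_percent_to_eu(kc_percent, pv_span_eu, out_span_eu):
    """Convert a dimensionless (%/%) PID gain to an engineering-units
    gain (output EU per PV EU). In the percent form,
    out% = Kc * (error_eu / pv_span * 100), so
    out_eu = Kc * error_eu * out_span / pv_span."""
    return kc_percent * out_span_eu / pv_span_eu

# Hypothetical loop: PV is a 0-200 degC temperature, output is a
# 0-50 kg/min flow setpoint. A traditional percent-based gain of 2.0:
kc_eu = gain_percent_to_eu(2.0, pv_span_eu=200.0, out_span_eu=50.0)
print(kc_eu)  # 0.5 (kg/min)/degC
```

Entering the percent-based number 2.0 directly into an engineering-units algorithm on this loop would be a factor-of-four tuning error, which is how “traditional tuning is not workable” shows up in practice.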
Dynamic simulation has been an essential part of achievement in process control improvement (PCI). I can’t imagine being able to deal with the challenges I have faced, most notably in compressor surge, exothermic reactor and furnace control, without using dynamic simulation to explore, develop, tune and test control strategies. Fortunately, I came from an engineering technology department where simulation and PCI were fostered and promoted. I am concerned that the engineers I have mentored now seem to have to focus on project budget and schedule and are not given free time for PCI. The inclusion of key performance indicators (KPIs) in dynamic simulations that show the dollar value of increased process efficiency and capacity from PCI in before and after cases could open management’s eyes and lead to support for PCI innovation and implementation. Section 11.10, “Virtual Plant and Online Process Metrics,” in my McGraw-Hill Process/Industrial Instruments and Control Systems Handbook, Sixth Edition, details benefits, real-time accounting, “before” and “after” PCI, opportunity assessment, challenges and guidance.
See my Chemical Processing article Find Missed Opportunities in Process Control and Control Talk Blogs - Missed Opportunities in Process Control, Parts 1–6 to open minds to the possibilities.
Michael Taube’s Thoughts
Too often I’ve seen project managers, most of whom have ZERO knowledge or understanding of control and instrumentation (C&I), while presumably well-meaning, lose sight of the fact that ALL of the C&I scope (instrumentation, bulk materials, DCS/SIS hardware, engineering, the whole of it!) for a greenfield project constitutes less than 2% of the total project cost. Making any change to that scope, plus or minus, only affects the noise: it is insignificant! Thus, for example, when the PM dictates removing control valves around preheat exchangers (for temperature/enthalpy control), a common approach, it really saves nothing for the project but results in operating issues for the plant after it’s running. Similarly, eliminating spare I/O, marshalling and/or rack room/MCC capacity may make for good PR, but it yields little real cost savings and creates bigger issues later when the plant needs to add that capacity back.
The other area where large capital projects set themselves up for operational issues is engaging process control too late in the project. As others have pointed out, once the base steady-state model of the plant is settled (which defines basic equipment type and count), a dynamic assessment of the proposed regulatory controls could, and often would, reveal poor MV-CV pairing and/or controllability issues rooted in equipment mechanical details. I have witnessed that these issues show themselves only after the plant is built and commissioned, and they usually center on inventory controls (levels and pressures). For example, a new tower was built for a large refinery, and presumably to satisfy NPSH requirements, the tower bottoms accumulator (sump) had to be quite tall; the tower diameter at that location was also quite large, making for a very large liquid volume. However, the steady-state model indicated that the bottoms flow was quite small relative to the volume in the tower sump. As a result, the level controller, which manipulated the bottoms flow, was unable to adequately maintain the sump level in the face of any disturbances affecting it. A simple Relative Gain Analysis (RGA) would have identified this issue before the tower detailed design was approved for fabrication. In this case, if the sump internals or design couldn’t be modified, then using reboiler duty as the MV for the level control would have been the better (more effective) choice, with the bottoms flow then used to control a temperature (I’ve done the latter quite successfully in such cases). This concept is covered extensively in the book I wrote with Brent Young and Isuru Udugama, “A Real-Time Approach to Distillation Process Control” (Wiley 2023), which also provides the mathematical derivations for an a priori assessment.
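A Relative Gain Analysis needs only steady-state gains. For a 2x2 system, the relative gain for the (1,1) pairing is lambda11 = 1 / (1 - g12*g21/(g11*g22)); a negative or very large lambda11 flags a poor pairing. The gain values below are hypothetical, loosely patterned on the sump-level example rather than taken from it.

```python
def rga_2x2(g11, g12, g21, g22):
    """Relative gain array for a 2x2 steady-state gain matrix.
    lambda11 near 1 favors the u1-y1 / u2-y2 pairing; a negative or
    very large lambda11 flags severe interaction or a wrong pairing.
    Rows and columns of a 2x2 RGA each sum to 1."""
    lam11 = 1.0 / (1.0 - (g12 * g21) / (g11 * g22))
    return [[lam11, 1.0 - lam11],
            [1.0 - lam11, lam11]]

# Hypothetical gains: y1 = sump level, y2 = tray temperature;
# u1 = bottoms flow (weak effect on level, g11 small), u2 = reboiler duty.
rga = rga_2x2(g11=-0.05, g12=-0.8, g21=0.9, g22=0.3)
print(rga)  # lambda11 is negative: level-to-bottoms-flow is a poor pairing
```

With gains like these, lambda11 comes out negative, which is exactly the kind of before-fabrication warning that would have redirected the level control to reboiler duty.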
Similar to the MV-CV pairing issue, other equipment/controls designs also require a controllability review, particularly distillation tower condensers. Again, too often I’ve seen plant designers take the “that’s the way it’s always been done” approach without really thinking through how the proposed design will function (or understanding why it has “always been done that way”). This is particularly evident in recent refinery designs that use a hot vapor bypass (HVB) pressure control design on a distillation tower condenser that is situated well above grade, as well as above the tower’s overhead accumulator. The plant designer puts on blinders and simply applies a textbook design without understanding why such designs were used in the past or considering whether that design applies to the current project. This is also covered in the book.
If a process control engineer (NOT to be confused with a control systems engineer) were engaged in the earlier stage of the plant’s mechanical and controls design, then such issues would be avoided.
Peter Morgan’s Thoughts
It is a fact of life that project costs for the procurement, design, testing and commissioning of a control system are candidates (targets!) for cost reduction, whether by limiting scope, accepting reduced system reliability, constraining engineering effort or foregoing best practices for testing and documenting the control system. If the long-term operating costs of performance shortfalls in terms of product quality or production loss are not understood, any economies at the project implementation stage may be eroded or overwhelmed by the cost of delays in commissioning or operating losses. In extreme cases, a cascading outage due to a control system malfunction (one that could have been avoided by incorporating redundancy, separation and robustness for upset situations) could incur production losses far outweighing any saving in project costs.
Identifying abnormal operating situations and plant upsets early in the project can suggest the performance criteria of the control system and the mitigating control actions. This can be accomplished by allocating resources, including operational and control specialists, to facilitated HAZOP (hazard and operability) studies that lead to the development of control system functional requirements (for example, dynamic performance requirements) as well as to identifying special control provisions such as override control, action augmentation, etc. Using resources in this way does require a commitment by project management but is relatively easy to justify by the rewards in performance and reliability.
The omission of factory acceptance testing of implemented control strategies is a false economy since it can add to commissioning delays, and in some cases, production loss and impact on safety integrity. Essential to the execution of the factory acceptance test is a detailed written procedure with expected behavior described. The control narrative and Functional Logic diagrams should be used as reference documents for the tests, and form, together with the procedure and recorded results, the record of compliance with the design intent.
In the implementation (configuration) of complex control strategies, even with the best intent, errors can occur in the selection of configurable options at the function block level that affect behavior. Issues with controller initialization, for example in ganged or split range control and timing issues in Boolean logic to name just a few, if not discovered during testing can be a challenge to resolve during commissioning and can add to commissioning cost or even delays in productive operation. Functional testing also provides the opportunity to verify the design of the operator interface, and with the participation of a control room operator, can provide the opportunity for refinement and early acceptance of the operator interface.
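The configuration errors Peter describes in split-range control are exactly the kind a scripted FAT check catches. The split-range map and its assertion-style checks below are a hypothetical sketch, not a specific vendor configuration: 0–50% of controller output drives the cooling valve closed, 50–100% drives the heating valve open.

```python
def split_range(controller_out_pct):
    """Hypothetical heat/cool split-range map: controller output 0-50%
    drives the cooling valve from 100% to 0%; 50-100% drives the
    heating valve from 0% to 100%."""
    out = max(0.0, min(100.0, controller_out_pct))
    cooling = max(0.0, (50.0 - out) / 50.0 * 100.0)
    heating = max(0.0, (out - 50.0) / 50.0 * 100.0)
    return cooling, heating

# FAT-style checks: both valves closed at the split point, no overlap,
# correct direction at each extreme of controller output.
assert split_range(50.0) == (0.0, 0.0)
assert split_range(0.0) == (100.0, 0.0)
assert split_range(100.0) == (0.0, 100.0)
cooling, heating = split_range(25.0)
assert heating == 0.0 and 0.0 < cooling < 100.0
print("split-range map passes its checks")
```

The same style of scripted checks extends naturally to initialization states and Boolean timing cases; the written FAT procedure then records which checks ran and what behavior was expected.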
Corporate and plant-specific reliability requirements should be recognized in the choice of control system hardware, including process measurements. It should be recognized that reliability requirements based on the potential cost of a production loss due to control system component failure may significantly increase project costs where target reliability calls for redundancy in controllers and field instruments. This suggests that the performance requirements be identified and agreed to early in the project.
I have had the good fortune to have been involved in the development and use of wide-scope physics-based dynamic models for the validation of control systems performance from an early stage of my career. Not every project can justify control system performance studies using dynamic models, but the option should be investigated on a cost benefit basis using the costs of lost production or product quality as metrics for justification. Where the project can justify an operator training simulator which models the process and mimics the control system, or better yet, allows the control system configuration to be installed on the simulator, simulator testing can be extended to accommodate control performance assessment for critical conditions and upsets prior to commissioning, and after hand-over of the project, provide the opportunity for further improvement of control system performance.
George Buckbee’s Comments
We need to recognize that capital projects are usually managed by a temporary project team — a team with limited staffing and a relatively short life, made up largely of third parties, that will dissolve or move on to another project after the current project is completed. As such, they are driven by metrics that are very different than the ongoing operations. Other stakeholders — such as plant operations, engineering and maintenance — are driven by different, and somewhat conflicting metrics (production, reliability, quality, etc.).
The most important project team metrics are typically “on time and under budget.” They may also have key completion milestones such as “Produce XXX tons per day by YY/YY/YYYY.” Once these metrics are met, everything else is secondary. So, it is critically important that all stakeholders are represented on the project team, to speak up for the needs of the ongoing operation.
With all that being said, I will add some thoughts to those provided by others. Project teams will often cut corners in the following areas:
Spare parts: Projects are often given a limited budget for spare parts. Spares are often one of the later purchases for capital projects, and this part of the budget becomes a target for ransacking, especially if the overall project is running over budget. Spares are critical to the ongoing reliability of the operation, and adaptability to future needs. Process control personnel may need to advocate for spare instruments and valves, as well as for “hot spare” I/O.
Commissioning: In addition to what Peter mentioned above, all commissioning activities and testing come after construction completion and before turnover to operations. As such, commissioning will face tremendous schedule pressure. It is important that process control personnel stand firm for FAT and SAT test completion, especially to confirm proper controller tuning, and exercise/testing of all sequences, protections and interlocks.
Training: Starting up a process with experts from vendors and corporate offices may help you achieve the initial production milestones, but it won’t keep the process running that way afterward. Operations, maintenance and engineering personnel must be properly trained. This includes training in process control, especially for handling startup, shutdown, abnormal situations, failures, alarms and any advanced controls.
Project teams are not the enemy: They are just adjacent teams with different metrics. Process control team members can help by getting involved early, and supporting the long term goals of the project, including project start up, as well as ongoing operation and maintenance.
Ed Farmer’s Thoughts
First, assess the needed features and quality: Automation engineers have natural motivation to achieve a good end result. The objective of the assignment may be avoiding certain situations, providing stability, providing output with a specific characteristic (e.g., rate), meeting a “finished product” quality standard or … ?
Consider approaches adequate for needs: Usually, performance is at or near the top of the objectives. Depending on the greater “situation,” getting the control loop “right” can require such features as fast response, long-term stability, appropriate accuracy and high reliability. These objectives, in a specific application, must be met at least “well enough.”
Familiarity is important: Servicing new process control equipment and systems usually involves technicians with plant or company training and experience with a narrow range of concepts and equipment vendors. Stellar service results from matching the new equipment with the capability of those available to provide it.
Can the new project “just fit in?”: Sometimes a new application is similar enough to in-use equipment and designs to enable the new equipment to match. Occasionally, the goal of the new equipment is elevating some feature or parameter of the involved process which can require fresher thinking about how to set it up and use it.
Maybe it fits with existing systems: Sometimes a seemingly simple new project motivates broader thinking, which can lead to implementing the new function as an addition to a larger system covering more of the plant or process. This approach may involve a simple addition to the “big system” that can be done without spending much.
Start a new project by thinking about how best to fit it into what’s already there: Evaluate each “idea” around its impact on meeting the needs motivating the new requirement. After 50 years of doing this kind of thing, I usually feel most comfortable with the approach that meets the new requirement with the most familiar approach.
About the Authors
Julie F. Smith is the global automation and process control leader for DuPont. She has 35 years of experience in the process industry, having been part of numerous engineering and operations activities across the globe. She has written several papers and columns highlighting the value of modeling and simulation. Julie has a BS in chemical engineering from Rensselaer Polytechnic Institute and an MChE from the University of Delaware.
Russell Rhinehart has experience in both the process industry (13 years) and academe (31 years). He is a fellow of ISA and AIChE, and a CONTROL Automation Hall of Fame inductee. He served as president of the American Automatic Control Council and editor-in-chief of ISA Transactions. Now “retired,” Russ is working to disseminate engineering techniques with his web site (www.r3eda.com), short courses, books and monthly articles. His 1968 B.S. in ChE and M.S. in NucE are both from the U of Maryland. His 1985 Ph.D. in ChE is from North Carolina State U.
Pat Dixon, PE, PMP is president of DPAS Inc (www.DPAS-INC.com). He is a certified project management professional and a professional engineer in four states. His company, DPAS, provides system integration and commissioning for SCADA and DCS, with specialties in advanced control, data analytics and cybersecurity. His career includes positions with SD Warren Paper (now SAPPI), Honeywell, DuPont, Pavilion Technologies, Emerson, Advanced System Integration and Global Process Automation.
Gregory K. McMillan retired as a Senior Fellow from Solutia Inc. in 2002 and retired as a senior principal software engineer in Emerson Process Systems and Solutions simulation R&D in 2023. Greg is an ISA Fellow and the author of more than 200 articles and papers, 100 Q&A posts, 80 blogs, 200 columns and 20 books. He was one of the first inductees into the Control Global Process Automation Hall of Fame in 2001, and received the ISA Lifetime Achievement Award in 2010, ISA Mentor Award in 2020 and ISA Standards Achievement Award in 2023. His LinkedIn profile is: https://www.linkedin.com/in/greg-mcmillan-5b256514/
Michael Taube is a principal consultant at S&D Consulting, Inc. Serving the greater process industries as an independent consultant since 2002, he pursues his passion to make things better than they were yesterday by identifying the problems no one else sees or is willing to admit to and willingly “gets his hands dirty” to solve the problems no one else can. Due to the continued occurrence of individual injuries and fatalities as well as large-scale industrial incidents, he collaborates with operational excellence and safety culture experts to promote a real and lasting cultural shift in the process industries to help make ZERO incidents a reality. He graduated from Texas A&M University in 1988 with a Bachelor of Science degree in chemical engineering. His LinkedIn profile is: https://www.linkedin.com/in/michaeltaube/
Peter Morgan is an ISA senior member with more than 40 years of experience designing and commissioning control systems for the power and process industries. He was a contributing member of the ISA 5.9 PID committee for which he won the ISA Standards Achievement Award and has had a number of feature articles published in Control magazine.
George Buckbee retired as general manager of ExperTune in 2021 and is now president of Sage Feedback LLC. He is an ISA Fellow and longtime member of ISA's Publications/Content Steering Committee. George's experience in process control led to the development of control loop performance monitoring software. He has written dozens of articles, several books and has hosted hundreds of talks and educational webinars. George holds a Bachelor of Science in chemical engineering from Washington University in St. Louis and a Master of Science in chemical engineering from the University of California at Santa Barbara.
Ed Farmer completed a BSEE and a Physics Master degree at California State University - Chico. He retired in 2018 after 50 years of electrical and control systems engineering. Much of his work involved oil industry automation projects around the world and application of the LeakNet pipeline leak detection and location system he patented. His publications include three ISA books, short courses, numerous periodical articles and blogs. He is an ISA Fellow and Mentor.