The following discussion is part of an occasional series, "Ask the Automation Pros," authored by Greg McMillan, industry consultant, author of numerous process control books and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel; you can view the playlist here. You can read posts from this series here.
Looking for additional career guidance, or to offer support to those new to automation? Sign up for the ISA Mentor Program.
Russ Rhinehart’s Questions
What are the best ways to understand where cascade control is needed, and how does one learn the implementation issues that yield the greatest performance?
I ask this question because most folks starting, or shifting to, engineering careers in process control have only had one mathematically-oriented course (or perhaps a few courses) in process control, and often that was years ago. Further, there are many aspects of implementation practice that were never addressed in the mathematical underpinning of college texts or courses, and the control engineer must scramble to catch up. In my experience, it was a mentor (Harold Wade) who revealed the issues. I also found product bulletins on devices, postings from experts on their websites and some of the ISA books to be very helpful. I had a short course on process control, but the content was too much like that of a college course to be of much utility. The content of the ISA CAP program seems relevant, but I have not taken it. Are ISA standards valuable for learning? Are training courses by device vendors?
I’d like to know how others learned the practice. I think that it would be a service to novices if they are clued into the best methods to learn.
David De Sousa’s Response
Here is an excerpt from my LinkedIn post.
In a cascade control strategy, there are two (or more) controllers. The main controller’s setpoint (SP) is set by an operator, while its output drives the SP of another controller.
When to Use It
Cascade control is used when you have a process with relatively slow dynamics (such as level, temperature, composition or humidity) and a faster process (such as a liquid or gas flow) that must be manipulated to control the slow one.
It can be applied in a useful way to any process in which a measurable secondary variable directly influences the main controlled variable.
An example of this is a level controller driving a control valve to manipulate the flow rate in or out of the vessel in order to keep the level at its set point, where the line pressure might also affect the flow rate.
Here you will be able to have better level control if you can also control the flow rate to a target setpoint. The primary control loop (Level) provides the setpoint for a secondary control loop (Flow).
In this example, the level controller will be driving the set point of the flow controller to keep the level at its set point. The flow controller, in turn, drives the control valve to match the flow with the set point the level controller is requesting.
The purpose of cascade control is to achieve greater stability of the primary process variable by regulating a secondary process variable in accordance with the needs of the first.
An essential requirement of cascaded control is that the secondary process variable shall be faster-responding (shorter lag and dead times) than the primary process variable.
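As a rough illustration, the level-to-flow cascade described above can be sketched in a few lines of discrete-time simulation. The process models, gains and tuning values below are made-up assumptions for illustration, not recommendations:

```python
# Minimal sketch of a level (primary) -> flow (secondary) cascade.
# The flow responds as a fast first-order process; the level integrates
# the flow. All numbers are illustrative assumptions.

def pi_controller(kc, ti, dt):
    """Return a stateful PI control function: (setpoint, pv) -> output."""
    state = {"integral": 0.0}
    def step(sp, pv):
        error = sp - pv
        state["integral"] += kc * error * dt / ti
        return kc * error + state["integral"]
    return step

dt = 0.1
level_pi = pi_controller(kc=2.0, ti=50.0, dt=dt)   # slow outer (level) loop
flow_pi = pi_controller(kc=2.0, ti=1.0, dt=dt)     # fast inner (flow) loop

level, flow, level_sp = 0.0, 0.0, 1.0
for _ in range(3000):                       # 300 s of simulated time
    flow_sp = level_pi(level_sp, level)     # outer output is inner setpoint
    valve = flow_pi(flow_sp, flow)          # inner output drives the valve
    flow += (dt / 0.5) * (valve - flow)     # fast first-order flow response
    level += dt * 0.05 * flow               # integrating level response
```

The key structural point is the first line inside the loop: the outer controller's output is not a valve signal but the inner loop's setpoint.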
Advantages
✷ Better control of the primary variable
✷ Faster recovery from disturbances
✷ Improved dynamic performance
✷ Compensation for nonlinearities
Disadvantages
✷ Requires an additional measurement
✷ Additional controller to be tuned
✷ Added complexity
The disadvantages have to be weighed up against the benefits of the expected improvement to decide if cascade control should be implemented.
In the following very informative articles, a group of expert process control practitioners (Greg McMillan, R. Russell Rhinehart, Béla Lipták, Brian Hrankowsky, Michel Ruel and James Beall) explores cascade control.
International Society of Automation (ISA): Ask the Automation Pros: Can a Primary Loop Manipulate a Secondary Loop Valve?
https://blog.isa.org/can-a-primary-loop-manipulate-a-secondary-loop-valve
The Benefits and Challenges of Cascade Control:
https://www.controlglobal.com/control/article/33005627/the-benefits-and-challenges-of-cascade-control
Optimizing Cascade Control for Exothermic Batch Reactors:
https://www.controlglobal.com/measure/temperature/article/33012930/optimizing-cascade-control-for-exothermic-batch-reactors
Cascade Control Recommendation Tips:
https://www.controlglobal.com/home/blog/11333134/cascade-control-recommendation-tips
Cascade Control Perspective Tips:
https://www.controlglobal.com/home/blog/11339304/checklist-for-cascade-control
Brian Hrankowsky’s Response
The first thing I tell people to do when designing a control strategy is to understand the process. What is the applicable physics/chemistry? What are the setpoint changes expected? What are the load disturbances expected? What are the measurement system specifications?
The next thing is to know the criticality of the process parameter to be controlled and to determine the performance specs. How much deviation from setpoint matters? Is it better for recovery to be slow or fast? Does it even matter?
Then you’re ready to think about the control strategy design.
Depending on the audience, I find it can be helpful when teaching cascade from scratch to point out that people use cascade a lot more than they realize, because we have lots of valve positioners. The “single loop” in the DCS or PLC is really the outer loop of a cascade: it asks the positioner for a specific valve % open, and the positioner has its own control loop that adjusts pressure to achieve that position repeatably. The positioner is helpful because it senses a change in position, caused by either process side pressure changes or air supply pressure changes, FASTER than the control loop in the DCS or PLC will see the effect of those changes in the process measurement. So the positioner can adjust the position and potentially “hide” the disturbance from the DCS or PLC control loop entirely. The key is that the positioner senses and responds to load changes FASTER than the process and process measurement system can. If the positioner is poorly tuned and sluggish, it’s easy to see why the PID loop in the DCS or PLC would have to be slowed down and “be gentle” with the positioner, which would defeat most of the reason for having the positioner in the first place.
Cascade is a good idea when:
There are expected load changes with enough magnitude to worry about their impact on the process.
These load changes can be sensed 3x or more faster than their effect on the primary process variable can be sensed (5x if you need it really smooth).
Cascade is not something to simply copy and paste for a unit operation because “the last time we built one of those reactors, we used cascade.” A typical misapplication of cascade is to use it on all jacketed reactors with tempered loops regardless of size and piping. As the reactors get smaller, the ratio of jacket volume to tank volume increases, as does the ratio of the recirculation loop recycle time to the tank temperature process time constant. This leads to the situation where the jacket process dynamics are too slow compared with the tank process dynamics. On small vessels, the jacket dynamics are slower than the tank itself. Using cascade for these actually results in needing to detune the primary loop; you’re better off just using single loop control.
Interestingly, when the vessels get really small, you are better off not controlling their temperature at all and instead just controlling the temperature of the jacket. Because the jacket is so dominant, the vessel will be locked into a specific offset from the jacket temperature.
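Brian's 3x/5x speed heuristic can be captured in a tiny helper. The ratios and the example response times below are rules of thumb and made-up numbers, not hard limits:

```python
# Encode the heuristic: cascade pays off when the secondary variable
# senses load changes several times faster than the primary does.

def cascade_advised(primary_response_s, secondary_response_s, need_smooth=False):
    """Return True if the secondary senses disturbances fast enough
    (3x faster by default, 5x if very smooth control is needed)."""
    required_ratio = 5.0 if need_smooth else 3.0
    return primary_response_s >= required_ratio * secondary_response_s

# Jacket loop on a large reactor: jacket responds in 30 s, tank in 300 s.
print(cascade_advised(300, 30))     # well past the 3x ratio -> True
# Small vessel: jacket dynamics nearly as slow as the tank itself.
print(cascade_advised(120, 90))     # ratio < 3 -> use single loop instead
```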
Mark Darby’s Response
I believe most sources (textbooks and short courses) do a reasonable job explaining, at a high level, where cascade control can be beneficial and how to implement it, but there is still a lot of best practice that is not covered. I think the same is true of the coverage of many control techniques.
I want to comment on the “cascade rule,” which addresses the necessary time scale separation between the primary and secondary loops in a two-loop cascade. It is normally expressed as the ratio of time constants between the two loops. Shinskey, I believe, recommends a ratio of at least 4. Skogestad has recently summarized recommendations, beginning with a recommended ratio of at least 5. At a ratio of about 3 (or less) there is interaction between the loops and performance significantly degrades (even if the models are exact). It should be stressed that the ratio really applies to the closed-loop responses, and a tuning technique based on a desired closed loop response can be helpful.
In practice, the desired time scale separation may not be achievable. What to do? This is another situation where external reset feedback is helpful. Here the PV of the secondary loop is fed back as the external reset feedback to the primary. In this way, the primary loop is aware of the slower response of the secondary. This approach is very similar to internal model control (IMC) for a cascade, if the inner loop PV is used as the model input to the internal model. BTW — I don’t believe this has been generally recognized in the control community. I’ve used this IMC technique for analyzer-temperature cascades in distillation columns, where very often the time scale separation is not met and the dynamic difference is mainly due to dead time.
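One common way to sketch external-reset feedback is the positive-feedback (filter) implementation of reset: the integral term becomes a first-order filter of the secondary loop's PV, so the primary output cannot wind up past what the secondary has actually achieved. The function names and tuning values below are illustrative assumptions:

```python
# Primary PI with external-reset feedback (ERF), positive-feedback form.
# The reset term tracks a filtered copy of the secondary loop's PV
# instead of integrating the error directly.

def primary_pi_with_erf(kc, ti, dt):
    state = {"reset": 0.0}
    def step(sp, pv, secondary_pv):
        # First-order filter (time constant ti) of the external-reset
        # signal, here the secondary PV.
        state["reset"] += dt / ti * (secondary_pv - state["reset"])
        # Output = proportional action + filtered reset. When the
        # secondary keeps pace this behaves like an ordinary PI; when it
        # stalls, the reset term stops winding up beyond the secondary PV.
        return kc * (sp - pv) + state["reset"]
    return step
```

With zero error and a stalled secondary, the output settles at the secondary PV rather than ramping away from it, which is the behavior Mark describes.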
There are instances where the “cascade rule” may not be apparent and is violated, such as when the primary loop is controlling a calculated variable that uses the PV of the secondary in its calculation. An example is a duty controller cascaded to a flow controller. We’ll assume no phase change of the heating medium, so the duty is proportional to the delta temperature of the heating medium. If the duty is calculated based on the measured flow of the heating medium, it is clear that the cascade rule is violated, as there is no dynamic difference. Ideally, one can simply divide the required duty by the enthalpy change (c_p ∆T). However, it is often easier to calculate the duty using the flow setpoint and to use an integral controller in the primary (or a PI with low proportional gain).
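The duty example can be made concrete with illustrative numbers (water assumed as the heating medium): the required flow comes from Q = F · c_p · ∆T, and the achieved duty fed back to the primary is computed from the flow setpoint rather than the measured flow, as suggested above:

```python
# Duty <-> flow conversion for a no-phase-change heating medium.
# CP and all operating numbers are assumptions for illustration.

CP = 4.18   # kJ/(kg K), heat capacity of the heating medium (water assumed)

def flow_sp_for_duty(duty_kw, dt_kelvin):
    """Required mass flow (kg/s) of heating medium for a duty target."""
    return duty_kw / (CP * dt_kelvin)

def duty_from_flow_sp(flow_sp_kg_s, dt_kelvin):
    """Duty implied by the flow SETPOINT (not the measured flow),
    avoiding feeding the secondary PV straight back into the primary."""
    return flow_sp_kg_s * CP * dt_kelvin

sp = flow_sp_for_duty(duty_kw=500.0, dt_kelvin=20.0)
print(round(sp, 2))                             # -> 5.98 kg/s
print(round(duty_from_flow_sp(sp, 20.0), 1))    # -> 500.0 kW, round trip
```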
It may seem obvious to cascade a pressure controller to a flow controller if both measurements are available. But if the pressure is that of a gas, it can be expected that the cascade rule is not met and the cascade will make performance worse. The pressure controller should therefore output directly to the valve. Further, since it is not needed, the flow controller introduces a point of failure that will affect reliability.
Héctor Torres’s Response
A process can sometimes have two processes in series, where the output of the first process is an input to the second one. To keep the output of the second process at the desired level, arranging two feedback control loops in cascade is recommended to enhance performance, especially if unmeasured disturbances are present. In this setup, the output of the second process’s control loop becomes the setpoint of the first one’s. This allows disturbances in the inner loop (first process) to be corrected with minimal impact on the controlled parameter (second process). The inner loop should be able to quickly handle disturbances so that the outer loop can focus on controlling the primary process variable. As already mentioned, it is crucial that the open loop response in the outer loop is slow compared to the inner loop’s open loop response. If the opposite is the case, cascade control is not recommended as it leads to cycling issues.
Most has already been covered to this point; however, to help complement the answers to Russ’s question, I am listing a few extra implementation issues that can keep cascade control from achieving the greatest performance:
- Control Variables and Minimum Dead Time Readings: Ensure that the variables chosen for the inner and outer loops are appropriate and that the inner loop variable responds quickly to changes in the manipulated variable. Accurate and reliable sensors are critical. The sensors for the inner loop need to be placed where they can quickly detect changes thus minimizing dead time. Excessive dead time degrades the performance of cascade control (and that of any other control strategy).
- Loop Interaction: In cascade control, it's crucial to ensure that the outer and inner control loops do not interfere with each other. Lambda tuning is a method that addresses the interaction issue in a more explicit manner: the tuning parameter (λ), the desired closed loop time constant, is set for each controller in a way that prevents interaction. The desired closed loop response for the outer control loop should be set much larger (slower) than that for the inner control loop. By doing this, the inner loop responds much faster than the outer loop, ensuring minimal interaction and better overall control performance. Another way of reducing interaction caused by an unexpectedly slow secondary response is to enable external-reset feedback. See Mark’s and Greg’s responses for more details.
- Maintenance and Calibration: Regular maintenance and calibration of sensors and actuators are necessary to ensure the continued accuracy and reliability of the control system. It is common for operators and production engineers to blame the control system for a decay in control loop performance. There is only so much a control system can do when acting on a sensor or a final control element that is underperforming.
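The lambda tuning arithmetic mentioned above can be sketched for a self-regulating first-order-plus-dead-time model (Kc = τ / (Kp(λ + θ)), Ti = τ), with the outer lambda forced to at least five times the inner lambda. All process models and numbers here are invented for illustration:

```python
# PI lambda tuning for a first-order-plus-dead-time (FOPDT) process:
#   Kc = tau / (Kp * (lambda + theta)),  Ti = tau
# with the outer loop's lambda held >= 5x the inner loop's lambda.

def lambda_pi(kp, tau, theta, lam):
    """Return (controller gain, reset time) for a self-regulating process."""
    kc = tau / (kp * (lam + theta))
    return kc, tau

# Inner flow loop: fast model, lambda chosen near the open-loop tau.
inner_lam = 2.0
kc_i, ti_i = lambda_pi(kp=1.2, tau=2.0, theta=0.5, lam=inner_lam)

# Outer temperature loop: lambda at least 5x the inner lambda to keep
# the loops decoupled.
outer_lam = max(5 * inner_lam, 8.0)
kc_o, ti_o = lambda_pi(kp=0.8, tau=40.0, theta=5.0, lam=outer_lam)
```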
Greg McMillan’s Response
Cascade control is widely used to improve the performance of primary (upper or outer) loops by the use of secondary (lower or inner) loops that rapidly compensate for disturbances in the secondary loop (e.g., stream pressure or temperature) and for nonlinearities (e.g., installed valve characteristic and temperature process gain). The most common lower loop is flow. Jacket or coil temperature lower loops are often used for reactor temperature control, where the process gain is inversely proportional to coil or jacket flow. A constant jacket recirculation flow can help eliminate this nonlinearity. There can be a triple cascade, where a reactor temperature controller sets a jacket or coil temperature controller, which sets a makeup coolant flow controller.
The lower loop should be tuned to provide a fast closed loop response using proportional and derivative action on error and minimal, if possible no, integral action. Integral action reduces the allowable gain setting. Offsets in the lower loop are inconsequential except in ratio control for reactor and column temperature control. Note that the introduction of more than one integrator can cause a limit cycle from backlash (deadband). Integral action in valve positioners is not recommended because it also reduces maximum proportional action and creates limit cycles.
The open loop dead time and time constant of each lower loop should be five times smaller (faster) than the open loop dead time and time constant of the loop above it. Similar dead times can result in resonance, and a slow lower loop time constant can cause interaction between the loops. The consequent oscillations can get quite severe as the lower loop dead time and time constant approach those of the upper loop.
Detuning the upper loop so that its closed loop time constant (63% closed loop response time) is 5 times larger than the lower loop closed loop time constant can help suppress oscillations. The resulting deterioration in process performance for both primary and secondary loop disturbances is undesirable. A much better solution is the use of external-reset feedback to prevent the upper controller output from changing faster than the lower controller loop can respond to. Of course, the best solution is to make sure the lower loop valve, measurement and process response is as fast and precise as possible. Do not use on-off or isolation (tight shutoff) valves. Use valves originally designed for throttling to avoid the severe consequences of great stiction and lost motion.
Note that while the following list of recommendations talks about the use of lambda, which is the closed loop time constant, users should use the tuning method that provides the most aggressive rejection of load disturbances for each loop. If lambda tuning is used, it should be modified to allow derivative action and more aggressive proportional action when the process dead time is greater than the process time constant. For process time constants approaching and exceeding 4 times the dead time, lambda integrating process tuning rules should be used, where lambda is a closed loop arrest time, the time needed to stop (arrest) a disturbance. My book Tuning and Control Loop Performance, Fourth Edition (available for free online) offers many different tuning rules in Chapter 1, and Chapter 11 gives the following recommendations for cascade control along with extensive test results.
Tune the valve positioner first for fastest response without integral action. Note that valve positioners should be used even on fast loops with diaphragm actuators. The substitution of a booster for a positioner, per a decades-old rule, can cause instability from positive feedback. A booster with a slightly open bypass valve can be added to the positioner output to make valve response faster. Tune the lowest loop for fastest rejection of load disturbances with no or minimal integral action. Similarly, tune the next lowest loop and finally the upper loop for fastest rejection of load disturbances. Use setpoint lead-lag or setpoint weight factors to get the desired setpoint response.
So that reactants arrive at the same time in ratio control, it was often advocated that the closed loop time constants of the reactant flow controllers be made equal. I now prefer to tune each flow loop for the fastest possible rejection of pressure and valve disturbances and use a minimal setpoint filter on the fastest loop to better coincide the arrival of reactants.
When poor flow measurement rangeability causes noisy and irregular response at low flows (common problem for differential head and vortex flow meters), I advocate bumplessly switching to an inferential measurement of flow based on the valve’s installed flow characteristic and stream operating conditions.
- Use smart valve positioners (DVCs) on all control valves.
- If the valve stroking time (time for 100 percent stroke) is significantly greater than the reset time of the PID manipulating the valve, add volume booster(s) on valve positioner output(s). Open the booster bypass enough when stroke testing the valve to prevent high frequency cycling of valve position (e.g., 1 cps).
- Tune valve positioner for fast response avoiding the use of integral action.
- Use lower flow loops wherever possible to compensate for nonlinear installed flow characteristics and to provide the measurements needed for mass, mole and energy balances and cost analysis.
- Use jacket and coil temperature control loops for bioreactor, crystallizer and chemical reactors. If boiler feedwater is used for high temperature highly exothermic reactors, use a pressure control loop on coil outlet.
- Make the measurement and execution of the lower loop as fast as possible. Ensure the sensor type, condition and installation minimizes the sensor delay and lag. Minimize transmitter damping and signal filters. Use a PID execution rate that is at least five times faster in the lower loop than in the upper loop.
- Tune the lowest loop first, then the next lowest loop and ending up tuning the upper most loop last. For example, tune the valve positioners first, then the flow loop, then the jacket temperature loop and finally the reactor temperature loop.
- Tune loops to be as fast as possible emphasizing proportional and derivative action rather than integral action. Use a PID structure that has PID action on error.
- Minimize setpoint filters on lower loops. Use a minimal setpoint filter on fastest flow loop for coordination of flow loops in ratio control, particularly for inline blending and exothermic reactors to prevent composition unbalances (e.g., maintaining stoichiometry).
- If the lower loop cannot be made five times faster than the upper loop, tune the upper loop slower by making the upper loop lambda five times larger than the lower loop lambda. Consider abandoning cascade control, using either the lower or upper loop PV for single loop control, depending upon whether the disturbance size and speed is more problematic in the lower or upper process. The temperature control of some bioreactors is best done by simple jacket temperature control, because the jacket volume is comparable to the process volume and process disturbances from cell growth are incredibly small and slow.
- Use external-reset feedback of lower loop PV to upper loop PID so the upper loop PID output does not try to change faster than the lower loop can respond.
- If flow measurement rangeability is insufficient, there may need to be a switch to direct throttling of the control valve, a common practice in boiler drum level control at low steam rates and start up. A better solution is a computed flow from the valve installed flow characteristic and stream conditions with a bumpless transition to an inferential flow measurement maintaining cascade control.
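The inferential flow idea in the last recommendation can be sketched for a liquid using the standard valve sizing relation F = Cv · √(∆P/SG) (gpm, psi). The equal-percentage characteristic, valve capacity and rangeability below are assumed values for illustration, not data for any real valve:

```python
import math

# Assumed valve data (illustrative only).
CV_MAX = 50.0        # full-open valve flow coefficient
RANGEABILITY = 50.0  # inherent equal-percentage rangeability

def cv_equal_percentage(position_frac):
    """Inherent equal-percentage characteristic, position in 0..1."""
    return CV_MAX * RANGEABILITY ** (position_frac - 1.0)

def inferred_flow_gpm(position_frac, dp_psi, sg=1.0):
    """Liquid flow inferred from valve position and pressure drop:
    F = Cv(x) * sqrt(dP / SG)."""
    return cv_equal_percentage(position_frac) * math.sqrt(dp_psi / sg)

print(round(inferred_flow_gpm(1.0, 25.0), 1))   # -> 250.0 (full open, 25 psi)
```

In practice the installed (not inherent) characteristic and measured stream conditions would be used, with a bumpless transfer between the real and inferred flow measurements.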
For much more on advanced regulatory control, including the effect of instrumentation, process dynamics and tuning, see the free online version of Tuning and Control Loop Performance, Fourth Edition. The book does not advocate a specific tuning method, but it does offer recent updates to the details and advantages of external-reset feedback and the 3 degrees of freedom (3DOF) PID structure.
Russ Rhinehart’s Follow-Up
Here is how I told students in my process control class to recognize the justification for cascade.
If you notice either of the following:
- The process input manipulated by a controller can also be affected by other influences: control that process input. For instance, if the controller adjusts the valve to change the flow rate, but line pressure might also affect the flow rate, control the flow rate to a set point.
- Another process variable is an early (leading) indicator of what will eventually happen to the primary controlled variable (CV): control that leading indicator variable. For instance, tray temperature in a distillation column is a leading indicator of product purity. Instead of manipulating reflux to control the delayed and lagged purity measurement, manipulate reflux to control a tray temperature.
But in any case, you still need feedback from the primary CV to determine the set point for the secondary CV, because of calibration errors of secondary sensors and unmodeled and nonideal effects.
In cascade, the output of the primary (supervisory) controller is the set point for a secondary (inner) controller.
The secondary loop needs to be faster than the primary loop for this to have an advantage. Some use the rule that (θ + 3τ)_inner < (1/5)(θ + 3τ)_outer.
Proportional-only control is often fully satisfactory for the inner loop. The integrator in the primary controller can compensate for moderate offset.
Tune the inner loop first, then tune the primary with the secondary in AUTO. To the primary controller, the inner loop is just part of the process.
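The rule of thumb above can be written as a one-line check, using θ + 3τ as a proxy for each loop's open-loop settling time. The example numbers are invented:

```python
# Russ's heuristic: the inner loop's settle-time proxy (theta + 3*tau)
# should be under one-fifth of the outer loop's.

def settle_proxy(theta, tau):
    return theta + 3.0 * tau

def inner_fast_enough(theta_in, tau_in, theta_out, tau_out):
    return settle_proxy(theta_in, tau_in) < settle_proxy(theta_out, tau_out) / 5.0

# Inner: theta=1, tau=2 -> proxy 7.  Outer: theta=10, tau=30 -> proxy 100.
print(inner_fast_enough(1.0, 2.0, 10.0, 30.0))   # 7 < 20 -> True
```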
Ed Farmer’s Thoughts
Back in 1971, I finished my BSEE which had been interrupted by the Vietnam war buildup. The program had involved three semesters of mathematics with a huge focus on calculus followed by an “engineering mathematics” course that involved multivariable differential equations, among a lot of other things. I took a course in “analog computing” as a junior and taught it the next year. Looking at engineering projects through mathematics always felt good and confidence-building to me.
The traditional “process controller” is essentially a very simple analog computer, optimized and configured for controlling some process variable on the basis of data from another. Its capability is focused on solving a single differential equation and provides appropriate adjustment ranges for the parameters, such as “gain.” Essentially, the user gets to “tweak” a control element to produce a desired process value. This presumes, of course, that the dynamics of what’s being controlled behaves consistently with the assumed math describing it.
I remember days when enhanced control of some refinery loop produced amazement. Naturally, someone observed that this present control issue could certainly be even better if we could get rid of a seemingly random change in pressure or temperature in some of the peripheral systems. That brought us to writing more partial differential equations and figuring out how one thing was linked to another. We would draw process functional diagrams of the steps involved. We would then develop equations for what happened in each block. Sometimes this involved a simple input-process-output approach; sometimes it identified sources of interference from other factors.
Sometimes it was apparent that a “setpoint” for a control loop in the system depended on some other unknown parameter, such as the concentration of a particular component in the incoming stream, or the temperature in some mixer or reactor. That knowledge would result in another control loop, or a way to adjust some factor(s) in an existing one. Making those adjustments involved figuring out what to control and how to do it, all resulting in another control loop for stabilizing some intermediate value. As those various loops get strung together, we notice one changes the expectations of the next, eventually compensating for all those disturbances into even better control. One parameter adjustment cascades into another control loop, which can cascade into another, and so on.
Those of us who had taken “analog computation” in college came quickly to terms with the fact that a conventional process control loop involves three elements: a gain amplifier, an integrator and a rate-of-change amplifier. Strung together, they solved, or helped solve, some application-pertinent differential equations.
Digital computers came along and could be configured to solve these equations from things one could do on a keyboard. This was much better than stringing together chains of Intel IC-745 “Operational Amplifiers,” along with lots of carefully selected capacitors and resistors. Configuring solutions with digital systems was generally much simpler, and far easier to experiment with. The equation-solving knowledge was built-in, but it remained important to be able to understand the behavioral “messages” an errant process was trying to explain to you.
There’s a lot of equipment, and a lot of manuals, that help with putting these control applications together but there is still no way to avoid understanding what needs to be done – it’s just a lot easier to implement. Can AI help? The last one I worked on involved digesting the impact of some hard-to-measure temperatures and pressures in complex fluids. That quickly led to things like state equations involving characteristics there was no way to dependably measure. Put a simpler way, there is nearly always a new frontier just past the present one that seemed so far away at the beginning.
All these things begin with understanding what you’re trying to accomplish. If it’s “ordinary” enough, there will be equipment (with instruction manuals) that can help you make it happen. Sometimes, if you have some experience and insight, it’s possible to “stretch” what’s available just a bit farther. Sometimes a bit of creativity can take you even farther. Sometimes knowledge level needs elevation — I remember appreciating days when someone from the research department would explain a couple of persistent “whys.” There’s always something to learn!
About the Automation Pros
Russ Rhinehart is a ChE professor emeritus at Oklahoma State University. He has experience in both the process industry (13 years) and academe (31 years). He is a fellow of the International Society of Automation (ISA) and the American Institute of Chemical Engineers (AIChE), a former editor-in-chief of ISA Transactions and president of the American Automatic Control Council, and was inducted into the Control Global Process Automation Hall of Fame. Visit www.r3eda.com.
David De Sousa is a control and instrumentation advisor and Technical Authority 1 (Energy Division) at OMV Aktiengesellschaft, Vienna, Austria. David is an ISA Senior Member and part of the team contributing to ISA-TR5.9-2023. David participated in a joint industry project (JIP) for establishing flow meter traceability along the CO2 value chain for low-carbon businesses. He has extensive oil and gas industry experience, downstream and upstream (onshore, offshore, subsea developments), in technical leadership positions covering engineering, projects, operation, optimization, functional safety, process control, metrology and OT security, providing solutions to business needs in a timely, cost-effective manner with the highest quality and reliability levels.
https://www.linkedin.com/in/daviddesousadrumond/
Brian Hrankowsky is a senior advisor – engineering with more than 23 years of process control experience in the pharmaceutical and animal health industries. Brian has experience in large and small molecule synthesis and purification, continuous utility, discrete assembly and packaging automation with various DCS, PLC, vision and single loop control platforms.
Mark Darby is an independent consultant with CMiD Solutions. He provides process control-related services to the petrochemical, refining and mid/upstream industries in the design and implementation of advanced regulatory and multivariable predictive controls. Mark is an ISA Senior Member. He served on the TR5.9 committee that produced the PID technical report and has presented at ISA technical conferences. Mark frequently publishes and presents on topics related to process control and real-time optimization. He is a contributing author to the McGraw-Hill Process/Industrial Instruments and Controls Handbook Sixth Edition.
www.linkedin.com/in/mark-darby-5210921
Héctor H. Torres is an Associate Control Systems Engineer at Eastman Chemical, with 28 years of experience in process and control engineering. He holds a master’s degree in Industrial Engineering and a Six Sigma Black Belt Certification. Héctor has participated in capital projects globally, including DCS implementations in the USA, México, Belgium, and China. He is a founding protégé of the ISA Mentor Program, winner of the ISA John McCamey Award in 2013, and an ISA5.9 Committee member. He is a contributing author to the McGraw-Hill Process/Industrial Instruments and Controls Handbook Sixth Edition and a presenter at major conferences, including the Emerson Exchange and ISA Automation Week.
Gregory K. McMillan retired as a Senior Fellow from Solutia Inc in 2000 and retired as a senior principal software engineer in Emerson Process Systems and Solutions simulation R&D in 2023. Greg is an ISA Fellow and the author of more than 100 articles and papers, 100 Q&A posts, 80 blogs, 200 columns and 20 books. He was one of the first inductees into the Control Global Process Automation Hall of Fame in 2001. He received the ISA Life Achievement Award in 2010, ISA Mentor Award in 2020 and ISA Standards Achievement Award in 2023.
https://www.linkedin.com/in/greg-mcmillan-5b256514/
Ed Farmer completed a BSEE and a master’s degree in physics at California State University, Chico. He retired in 2018 after 50 years of electrical and control systems engineering. Much of his work involved oil industry automation projects around the world, and application of the LeakNet pipeline leak detection and location system he patented. His publications include three ISA books, short courses, numerous periodical articles and blog posts. He is an ISA Fellow and Mentor.