ISA Interchange

Ask the Automation Pros: Achieving the Best Feedforward Control

Written by Greg McMillan | Jan 17, 2025 12:00:00 PM

The following discussion is part of an occasional series, "Ask the Automation Pros," authored by Greg McMillan, industry consultant, author of numerous process control books and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel; you can view the playlist here. You can read posts from this series here.

Looking for additional career guidance, or to offer support to those new to automation? Sign up for the ISA Mentor Program.

Russ Rhinehart’s Questions

What are the best practices for 1) understanding where feedforward control is needed, and 2) implementing it?

In my experience, feedforward (FF) control added to feedback (FB) control does a good job of pre-compensating the control action. It is a powerful technique. Control texts typically use Laplace transforms to derive a dynamic compensator from additive signals in block diagrams, representing first order plus dead time (FOPDT) models of how the manipulated variable (MV) and the disturbance affect the process. But:

  1. The classic Laplace derivation does not seem to stick with undergraduate students or help them implement it.
  2. Describing what it is does not reveal when it is justified.
  3. The instruction does not indicate the need for fine-tuning the feedforward coefficients (for instance, if the FF delay comes out negative, or when process dynamics change).
  4. There is no discussion of the practicality of its maintenance by novices.
  5. There is no indication of how it is digitally implemented (what the code is, how to choose reference values for deviation variables).
  6. There is no indication that alternate models (such as multiplicative) might better represent the process.
  7. There is no discussion of whether it trims the feedback signal or vice versa.
  8. There is no discussion of how to turn it on or off bumplessly.
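As a minimal sketch of what the digital implementation in item 5 can look like, the code below shows an additive feedforward built on deviation variables around chosen reference values. The gain, reference values and function names are hypothetical, purely for illustration:

```python
# Minimal additive-feedforward sketch using deviation variables.
# KFF, D_REF and OUT_REF are hypothetical design-point values.

KFF = 0.8      # feedforward gain, % output per unit of disturbance (assumed)
D_REF = 50.0   # reference (design-point) value of the disturbance variable
OUT_REF = 40.0 # loop output at the reference disturbance, % (for context)

def feedforward(disturbance: float) -> float:
    """Additive FF contribution in % output, a deviation from D_REF."""
    return KFF * (disturbance - D_REF)

def loop_output(fb_output: float, disturbance: float) -> float:
    """Combine the feedback controller output (%) with the feedforward
    contribution, clamped to the 0-100% output range."""
    return min(100.0, max(0.0, fb_output + feedforward(disturbance)))
```

At the reference disturbance the feedforward contributes nothing, so the feedback controller alone positions the valve; only deviations from the reference produce feedforward action.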

I ask this question because there are many aspects of implementation practice that were never addressed in the mathematical underpinning of college texts or courses, and the control engineer must scramble to catch up. In my experience, it was a mentor (Harold Wade) who revealed the issues. I also found product bulletins on devices, postings from experts on their websites and some of the ISA books to be very helpful. I came to understand FF with FB through simulation, where I wrote the code and watched how variables changed in time.

I’d like to know how others learned the practice. I think it would be a service to novices if they were clued into the best methods to learn.

Brian Hrankowsky’s Response

My degree is in systems and control engineering, and I had more exposure than most to feedforward. Most of what I remember of it was from classes that used state space and stochastic modeling — maybe from a discrete systems controls course too. Russ’s question about how I learned to put it in practice made me think for quite a while. The most significant aha moments came from learning and understanding how three element drum level control works. It is a fairly straightforward example that relies on a simple mass balance and being smart about cascade design and transmitter calibration units, leading to a robust and easy-to-maintain feedforward solution. It did take some time to work through, but it was worth it. That example doesn’t necessarily have significant timing components. To understand those, I used a block diagram to see how the disturbance and output response effects combine on their way into the process transfer function. Knowing that the idea is to cancel out those effects makes it clear where the lead and lag terms I mention below come into play.

There are three basic cases where I start to consider feedforward:

  • When there are closely coupled loops with similar closed-loop dynamics in a process, such as back pressure and flow on the same process line.
  • When there are expected common/frequent large load changes that can be measured/estimated (or sometimes setpoint changes).
  • When the cost of a deviation from setpoint is high.

When identifying sources to use in the feedforward calculation, they really need to be sensed ahead of any significant effect on the loop the feedforward is intended to augment. If I can’t find such signals, then I’m no longer considering feedforward as a solution (unless we can install more sensors), because feedforward needs to be able to predict.

Feedforward signals can be actual measurements, such as a steam demand flow change, but don’t necessarily need to be. I’ve used simple ideas such as forcing the controller output to a certain percent every time the loop transitions from manual to auto, converting an estimate of demand (based on which block valves are open on a distribution loop) to a VFD speed, and using a characterization from a flow loop setpoint to its output for servo-driven positive displacement pump speed.

The feedforward calculation needs to account for the time dynamics of the loop measurement’s response to a change in the signal(s) used in the feedforward calculation. One key idea of feedforward is that it tries to negate the effect of the disturbance on the primary measurement by manipulating the loop output before feedback action is required to react. If the feedforward signal is too early or too late, it will cause a process deviation that the feedback controller will need to compensate for. For this reason, it is a safe assumption that the feedforward signal will need to be passed through first order lead and first order lag functions and, if there is significant dead time in the response, a delay or dead time block. The lag should be set to the lag in the process response to the disturbance. The lead should be set to the lag in the process response to a change in the loop output. The dead time should be set to the dead time in the process response to the disturbance minus the dead time in the process response to a loop output change (don’t let this be negative). When feedforward signal handling is built into the control system platform, the vendor typically provides a gain that is used to convert from feedforward signal units to loop output units, e.g., percent output change per measured flow (lpm). Whether provided by the vendor or custom programmed, make sure you’ve accounted for the unit change!
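The lead, lag and dead time handling above can be sketched as a simple discrete filter run once per scan. This is a minimal backward-Euler sketch with a fixed scan time and hypothetical class and parameter names, not any vendor’s function block:

```python
from collections import deque

class LeadLagDeadtime:
    """Discrete (lead*s + 1)/(lag*s + 1) lead-lag filter preceded by a
    dead-time delay, in backward-Euler form with a fixed scan time dt.
    A sketch for illustration; real platforms provide equivalent blocks."""

    def __init__(self, lead, lag, deadtime, dt, x0=0.0):
        self.lead, self.lag, self.dt = lead, lag, dt
        self.y = x0       # filter output state
        self.x_prev = x0  # previous (delayed) input, for the lead term
        n = max(0, round(deadtime / dt))      # dead time in whole scans
        self.buf = deque([x0] * n) if n else None

    def update(self, x):
        # Dead time: emit the value from n scans ago, store the new one.
        if self.buf is not None:
            self.buf.append(x)
            x = self.buf.popleft()
        # Backward-Euler lead-lag on the (possibly delayed) input.
        self.y = (self.lag * self.y + self.dt * x
                  + self.lead * (x - self.x_prev)) / (self.lag + self.dt)
        self.x_prev = x
        return self.y
```

With lead equal to lag the filter passes the signal straight through, which is a handy sanity check; setting lead larger than lag makes the feedforward act early and strong, and the reverse makes it act late and soft.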

As we all know, process dynamics change at different operating points and loads. I find that feedforward is the most straightforward to implement and maintain when time is spent understanding the first principles driving the gain, lead, lag and deadtime components of the feedforward calculation; e.g., if the process lag to the disturbance is proportional to a flow rate, then make the feedforward lag term a function of that flow rate. Cascade control combined with feedforward can alleviate this need in some cases. The three element boiler drum level control I mentioned above is a really good example: to keep the level constant, we need to put in as much water as we take out. If our level controller cascades to an inlet feedwater flow controller, then our demand flow measurement can be added to the level controller output directly (flow plus flow). Flow loops are fast and easy to make robust; the inner flow loop will handle lots of variability quickly and leads to a level control that is highly robust and requires little long-term care and feeding. While I know feedforward in other industries can be very complicated to calculate, I have not had to deal with any that required more than understanding the transport lags, residence times, valve response times and other basic engineering models. When the math starts getting hard to relate to the physics and chemistry, it’s probably getting too complicated and/or fragile.
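The "flow plus flow" idea can be sketched in a few lines, with a one-step drum mass balance to show why it works. The function names, tags and units below are hypothetical:

```python
# Three element drum level control sketch (hypothetical tags and units).
# The measured steam demand flow is added directly to the level
# controller's output, both in the same flow units, to form the
# feedwater flow controller's setpoint; the fast inner loop does the rest.

def feedwater_flow_sp(level_trim_kgph: float, steam_flow_kgph: float) -> float:
    """Cascade setpoint: level-controller trim plus measured steam demand."""
    return level_trim_kgph + steam_flow_kgph

def drum_level_step(level_m: float, feedwater_kgph: float, steam_kgph: float,
                    area_m2: float, dt_h: float,
                    rho_kgpm3: float = 1000.0) -> float:
    """One-step mass balance: level change = net mass flow / (rho * area)."""
    return level_m + (feedwater_kgph - steam_kgph) / (rho_kgpm3 * area_m2) * dt_h
```

If the inner flow loop tracks its setpoint, the feedwater flow equals steam demand plus the level trim, so the level moves only in response to the level controller’s trim, regardless of steam demand swings.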

When tuning, I take the approach of tuning the primary loop first without any feedforward. I tend to think of the feedforward as my operator’s initial guess at the output; feedback control then closes the gap to setpoint and handles process variability due to things not accounted for in the feedforward calculation, so the loop still needs to work as well as I can make it work on its own to handle all the setpoint and load changes expected. You’ll want this as well so the loop can stay in control if a signal used to calculate the feedforward fails. I have not had to spend much time actually tuning the feedforward, as the use of bump tests, existing process data and engineering design data has typically produced initial gain, lead and lag values that were good enough (I haven’t needed deadtime yet). As with all tuning, I will do multiple bump tests to explore the operating space and ensure the feedforward works across the intended range of disturbances and process operating points. When I do need to tune the feedforward, my inclination is to determine why it did not accomplish what I intended: the wrong amount of action points to the gain; action too late or too early points to what part of the process I modeled wrong and therefore what to adjust among lead, lag and deadtime.

Russ asked about turning it on and off bumplessly. Fortunately, I have not had to program that, as the control systems I’ve used feedforward on handle it for me. If it were not handled for me, then I’d expect to need to implement an internal bias term similar to what a PID controller does for transitions from manual to auto. My key thought here is that feedforward REALLY is calculating an output change to make, not an absolute output value, e.g., how much should pump speed increase if this user’s valve opens.

I have not used multiplicative feedforward. The applications I have used feedforward on were not ratio control related, and additive was the right method. If feedforward is built into the control system, it is usually additive with no option for multiplicative. However, this may be due to the availability of separate ratio functions that can be integrated into the control strategy, or of gains on the downstream output functions. It is likely a good idea to ask the vendor for advice on implementing multiplicative feedforward in their control system; it may be covered in the vendor’s control training course as well. I’d expect that which one to use falls out of the first principles analysis of the process and how the loop is expected to change its output with respect to the disturbance.

Peter Morgan’s Response

The only reference to feedforward in the recommended reading for my undergraduate degree (which I have to this day) is disappointingly misleading, since it actually describes setpoint augmentation by the addition of a lead term, which, by its action and as described in the text, might be called anticipatory action but is not strictly feedforward. Then, just as today, feedforward strategies were implemented in a wide variety of applications, so it is perhaps surprising that there was no reference to the application of feedforward for response improvement. That the book was written by an electrical engineer (one of my kind) focusing on frequency domain analysis and servosystems might be the reason for this neglect.

ISA Technical Report 5.9 (2023) does cover the subject of feedforward and readers are recommended to take a look at it. Greg’s highly instructive book “Tuning and Control Loop Performance, Fourth Edition” covers the subject well and is a very handy reference.

To answer Russ’s question regarding enabling and disabling the feedforward, and to concur with Brian’s response: when feedforward is offered as a user configurable option at the PID block by the DCS vendor, in my experience the vendor makes provision to allow the feedforward to be enabled or disabled with the PID controller in auto or manual without disturbance. How this is accomplished in practice depends on the method of implementing the feedforward. When the feedforward sum is implemented incrementally, it is sufficient to suppress the incremental change when feedforward is disabled. When the feedforward sum is implemented as an absolute value component of the controller output, then when the feedforward is taken out of service, the controller internal position demand value is momentarily aligned with the last auto/manual station output; when the feedforward is enabled, the controller internal position demand value is momentarily aligned with the difference between the last auto/manual station output and the feedforward signal value. It is of course important to verify the vendor’s claimed behavior before relying on it. Where feedforward is not provided as a user configurable option and must be implemented using custom configuration/coding, the method of initialization just described can be implemented by using output tracking at the PID (or writing the track value directly to the PID internal value) for just one scan when the feedforward is enabled or disabled.
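The initialization described above can be sketched with an internal bias term that is realigned at the moment of enabling. The class below is a hypothetical structure for an absolute (non-incremental) feedforward sum, not any vendor’s implementation:

```python
class FeedforwardSum:
    """Sketch of bumpless feedforward enable using an internal bias:
    out = fb + (ff - bias) when enabled, out = fb when disabled.
    Hypothetical names; a real DCS would also track the PID internal
    value for one scan on enable/disable."""

    def __init__(self):
        self.enabled = False
        self.bias = 0.0

    def enable(self, ff_now: float) -> None:
        # Align the bias with the current feedforward value so the
        # combined output does not step at the instant of enabling.
        self.bias = ff_now
        self.enabled = True

    def disable(self) -> None:
        # On disable, a real system would momentarily align the PID
        # internal position demand with the last station output.
        self.enabled = False

    def output(self, fb_out: float, ff_now: float) -> float:
        if self.enabled:
            return fb_out + (ff_now - self.bias)
        return fb_out
```

Because the bias captures the feedforward value at the enable instant, only subsequent changes in the feedforward signal move the output, which is exactly the incremental behavior Brian described.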

Brian mentioned three element boiler drum level control as an example of a feedforward application. Another application is boiler furnace pressure control where, perhaps uniquely, NFPA requirements for the prevention of furnace implosion mandate a feedforward strategy. In this case, the forced draft (FD) damper/louver position is fed forward to position the induced draft (ID) fan inlet guide vanes (IGVs), so that the ID fan IGVs move in sympathy with the FD fan louvers, minimizing the action of the PI controller that acts to maintain furnace pressure. Here, recording the FD louver position and ID IGV position at various loads with the furnace pressure in auto, at setpoint and in the steady state is sufficient to establish the required characteristic of the feedforward calculation.

Greg McMillan’s Response

There are many rarely recognized aspects of feedforward control, a number of which were addressed in articles and Control Talk columns with Mark Darby, Peter Morgan and Michel Ruel.

I have mostly used feedforward summers because the volumes had a process time constant. This includes distillation columns, evaporators, neutralizers, bioreactors and liquid chemical reactors. These volumes have some degree of back mixing, which translates to a process time constant proportional to residence time. The time constant increases with a decrease in feed rate, which in the PID tuning offsets the increase in process gain with a decrease in feed rate. The use of a feedforward multiplier introduces an undesirable gain proportional to feed flow. I have used a feedforward multiplier a few times on sheet lines to correct a speed ratio control to offset the change in process gain. It is suggested for plug flow volumes where there is essentially no process time constant.
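The summer-versus-multiplier distinction can be shown side by side. The numbers and names below are hypothetical; the point is that the multiplier scales the PID output (and hence the loop gain) with load, while the summer only adds a correction:

```python
# Additive vs. multiplicative feedforward on the same PID output
# (hypothetical values; load_ref is the design-point load).

def ff_summer(pid_out: float, kff: float, load: float, load_ref: float) -> float:
    """Feedforward summer: a load deviation adds a fixed correction."""
    return pid_out + kff * (load - load_ref)

def ff_multiplier(pid_out: float, load: float, load_ref: float) -> float:
    """Feedforward multiplier: the correction (and the gain seen by the
    PID) scales with load, which suits plug flow volumes."""
    return pid_out * (load / load_ref)
```

At the reference load both forms leave the PID output untouched; away from it, the multiplier’s effect grows in proportion to load, which is the undesirable extra gain mentioned above for back-mixed volumes.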

Ratio control can be considerably beneficial if there are measurements with good rangeability, resolution, repeatability, reliability and response time (the 5Rs). The visibility and the options for the operator or procedure automation to set the ratio during startup and abnormal operation can be essential. Blocks that allow a ratio setpoint and a bias setpoint to be set manually by the operator, or remotely by procedure automation, feedback correction or online adaptation, are used in many distributed control systems (DCS). The setting of the ratio setpoint by a primary process controller is equivalent to a feedforward multiplier, and the setting of the bias setpoint is equivalent to a feedforward summer. An adaptive feedforward controller with a setpoint equivalent to zero feedback correction can be used to optimize the ratio setpoint when the primary process controller sets the bias setpoint, and vice versa. This adaptive controller advocated by Shinskey was tuned with slow integral-only control to minimize interaction with the process controller. Today we could use directional move suppression enabled by external-reset feedback to proactively help the process controller, like what is advocated for valve position control.

The dynamic compensation of the feedforward should enable the correction to arrive at the same point in the process at the same time as the load disturbance. If it arrives too soon or is too large, inverse response occurs that can be extremely disruptive. Also, if there is an unmeasured disturbance driving the process variable in the opposite direction from the measured load disturbance (e.g., feed composition decreases as feed flow increases to a distillation column), the feedforward action needs to be decreased. In addition, process and measurement noise can be amplified by the lead-lag used for dynamic compensation. For these and many other reasons, the feedforward gain is usually set smaller and the feedforward lag set larger than identified.

A big, often unrecognized, opportunity is the use of setpoint feedforward to get to the setpoint faster and smoother. For production rate changes, the feedforwards can move in concert for plant wide control. To minimize batch cycle time and continuous process startup and transition time, the primary controller output can be set to its output limit and then proactively set to the final resting value found in previous batches, startups and transitions when the predicted future value (the current process value plus the process rate of change multiplied by the loop dead time) approaches the desired process setpoint.

For steam header pressure control, the cascade of pressure to flow control is too slow. The pressure controllers are used to set fast steam header letdown, user, and supply valves with linear installed flow characteristics. An application with 32 feedforwards was found to greatly increase plant steam header system performance by half-decoupling and proactively dealing with changes in steam supply from cogeneration and waste heat boilers and steam usage by production units.  

It is important to realize that the PID algorithm works in percent signals and that the feedback correction by the PID done internally is in terms of percent of the feedforward scale. The feedforward scale is normally the scale of the feedforward disturbance variable. The span of the feedforward scale and PID output scale consequently enter into the feedforward gain calculations.

Unfortunately, many software tuning packages do not readily enable the setup of automated testing to automatically identify the feedforward gain, delay and lead-lag. The testing and calculations often must be done manually. For example, with the PID in manual, separate tests are done where the feedforward disturbance variable and PID output are stepped to identify the gain, deadtime and time constant in the percent process variable (PV) response to the percent change in the feedforward disturbance variable and the percent PID output. The feedforward lag is normally set equal to the lag time constant in the PV response to the disturbance, and the feedforward lead is normally set equal to the lag time constant of the PV response to the PID output. The lag time should be at least 1/10 the lead time to prevent erratic action. The feedforward deadtime is the disturbance response deadtime minus the PID output response deadtime. The feedforward gain is simply the dimensionless gain of the PV response to the disturbance variable divided by the dimensionless gain of the PV response to the PID output. This assumes that the feedforward scale was set properly. Also, feedforward implementation methods vary from one supplier to another. Careful testing is necessary, starting with a very low feedforward gain.
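The manual procedure above reduces to a few lines of arithmetic. The function below is a sketch assuming the two bump tests have been fit to FOPDT models in percent-of-scale units (so the gains are dimensionless, %PV per %input); the function name and argument names are hypothetical:

```python
# Turn two bump-test FOPDT fits into feedforward settings, per the
# rules above. All times in the same units; gains dimensionless.

def ff_settings(k_dist, tau_dist, td_dist, k_out, tau_out, td_out):
    """Return (gain, lead, lag, deadtime) for the FF compensator.

    k_dist, tau_dist, td_dist: gain, time constant, dead time of the
        PV response to the disturbance variable.
    k_out, tau_out, td_out: same for the PV response to the PID output.
    """
    gain = k_dist / k_out                  # ratio of dimensionless gains
    lead = tau_out                         # lag of PV response to PID output
    lag = max(tau_dist, lead / 10.0)       # lag >= 1/10 lead, per the text
    deadtime = max(0.0, td_dist - td_out)  # never allow negative dead time
    return gain, lead, lag, deadtime
```

In practice the identified gain would then be reduced and the lag increased somewhat, as noted earlier, before commissioning with a very low initial feedforward gain.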

Here are the links to many of my articles and columns on feedforward and ratio control.

I did an ISA Mentor Program Webex presentation on feedforward and ratio control (see slide 6 for reasons for prevalence of feedforward summer and addendum for feedforward gain calculation):

https://www.youtube.com/playlist?list=PLC3fVVaSjwYFSYiSSUJuIUL4vHMiQ6hT1

Michael Taube’s Response

While my undergraduate process control course covered feedforward control much as Russ Rhinehart describes (Laplace transforms — yuck!), my true grasp and understanding of HOW to implement it, in particular the need (or assessment of same) for dynamic compensation of the feedforward measurement, was due to a two-week-long course provided by Dr. Robert Bartman (www.procontrol.net), which he refers to simply as the “Revelations” course. I credit him (and his course) for “pulling back the veil” and converting the “black art” of process control into an easily grasped science (or technical art). Sadly, Dr. Bartman has now officially retired, so his course is no longer available. However, a few of his more entrepreneurial students are investigating how they can bring the course back into existence.

Ed Farmer’s Thoughts

Back in the late ‘60s, I was finishing college, partly financed with a part-time job with an electric utility where my family had worked for two generations. It had a huge and diverse customer base. Most electrical generation came from fossil-fueled plants and there were several rivers with chains of hydroelectric plants. The fossil-fueled plants (we referred to them as “thermal plants”) were fueled with purchased oil or natural gas — and they were HUGE. Changing generation on one of them involved changing the operating point of one or more boilers that created the steam that spun the turbine generators; all of which could take hours. The hydroelectric plants could change load quickly but all together amounted to a small fraction of total required generation. Fuel was cheap, though — snow melted into Sierra Nevada Mountain lakes which flowed down the mountains in rivers. Water was a cheap fuel, but in any year, there was only so much of it, and that depended on the amount of snow the mountains experienced over the last winter. That made it more valuable for modulating our generation capability to follow customer loads that varied with season, time of day and weather.

As you might guess, at any specific moment of any specific day, it was not a simple job to economically and functionally optimize generation objectives for each of the many plants. The controlled variable was the customer load on the total system. The economics and locations of the various thermal plants guided generation distribution to meet overall needs and the hydroelectric system was used to smooth out load changes that came with events like “dinner.” Hydroelectric–thermal coordination was always essential. The base load was handled by infrequent changes on the thermal plants, and the variability was handled simply by tweaking a valve in hydroelectric plants. Needing to change our system load was motivated by customer demand, but starting from an observed change in load was an easy way to have a bad day. We needed to plan around what would happen, not wait to see what actually did. We weren’t there to watch for a change in load around lunch time — we were there to have a working plan for dealing with it.

One of the most intelligent people I’ve ever met had developed the basics of a system that was rapidly expanding into a daily routine that not only met the process needs but provided a way to optimize the overall generation economics. I remember realizing that the one-day results of his work probably saved many times his annual salary.

There was a lot of accounting involved with regionally allocating resources in the most economical way. There were things it was easy to know, but perhaps the biggest factor was weather. In those days, it was hard to predict tomorrow’s conditions over a large area — and getting it wrong had far greater impact than an afternoon spent watching rain through a window.

Computers had recently been invented, and we had a huge one in the basement under our home office. Weather data was selected and prepared for the computer during the day, along with other factors that could influence human needs and behaviors. When the business day concluded, the computer run began. Early the next morning, operating point changes were communicated to each plant. The computer had earned a reputation for doing as it was told, and what it had been told was always reviewed afterward. This wasn’t just a smart computer; this was a smart and active system that used the available knowledge and information to do the best job we could imagine possible. Clearly, though, the success in any day depended on more than just a measured variable involving the changing load, minute by minute. That was certainly the system telling us how it was doing. All that was happening, though, because we’d been able to teach it a lot about itself and what was about to happen before it actually did. We had found a way to “feed-forward” in a useful way, what was needed to produce another good day.

Another insight from that period — and hydroelectric engineering — involves dead time. Some processes, like water flowing down a river, happen slowly. Opening a dam spillway up in the foothills is unnoticeable all those miles away down in the valley, but a few hours later (or maybe next Tuesday) a flooded road certainly raises attention. Thinking ahead might have provided some forward-thinking about what to do with other water control points along the way — if only the event had been fed forward.

Perhaps this can be thought of as AI stuff. Centuries ago, it would have been stuff observant smart people knew or could figure out. In any case, some of what we know comes from what we actively see. Sometimes it comes from things we know will (or can) happen before they do. In any case, sometimes, it adds an important dimension to process control. Modern technology enables monitoring lots of related things at the same time thus improving visibility and perspective. When but one thing was tightly mathematically related to another, a single-loop PID controller was fantastic. When confronted with an array of things, a thoughtful analysis might involve several of them, perhaps resulting in a “magic number” that indicates “the way.” Process automation involves a lot of well-known stuff, but is also on the path toward greater understanding of difficult problems and the conditions that make them so.

About the Automation Pros

Russ Rhinehart is ChE professor emeritus at Oklahoma State University and has experience in both the process industry (13 years) and academe (31 years). He is a fellow of the International Society of Automation (ISA) and the American Institute of Chemical Engineers (AIChE), a former Editor-in-Chief of ISA Transactions, a past President of the American Automatic Control Council, and an inductee of the Control Global Process Automation Hall of Fame. Visit www.r3eda.com.

Brian Hrankowsky is a Senior Advisor – Engineering with more than 23 years of process control experience in the pharmaceutical and animal health industries. Brian has experience in large and small molecule synthesis and purification, continuous utility, and discrete assembly and packaging automation with various DCS, PLC, vision and single loop control platforms.

Peter Morgan is an ISA senior member with more than 40 years of experience designing and commissioning control systems for the power and process industries. He was a contributing member of the ISA 5.9 PID committee for which he won the ISA Standards Achievement Award and has had a number of feature articles published in Control magazine.

Gregory K. McMillan retired as a Senior Fellow from Solutia Inc in 2002 and retired as a Senior Principal Software Engineer in Emerson Process Systems and Solutions simulation R&D in 2023. Greg is an ISA Fellow and the author of more than 200 articles and papers, 100 Q&A posts, 80 blogs, 200 columns and 20 books. He was one of the first inductees into the Control Global Process Automation Hall of Fame in 2001, and received the ISA Lifetime Achievement Award in 2010, ISA Mentor Award in 2020 and ISA Standards Achievement Award in 2023.

https://www.linkedin.com/in/greg-mcmillan-5b256514/

Michael Taube is a Principal Consultant at S&D Consulting Inc. Serving the greater process industries as an independent consultant since 2002, he pursues his passion to make things better than they were yesterday by identifying the problems no one else sees or is willing to admit to and willingly “gets his hands dirty” to solve the problems no one else can. Due to the continued occurrence of individual injuries and fatalities as well as large-scale industrial incidents, he collaborates with operational excellence and safety culture experts to promote a real and lasting cultural shift in the process industries to help make ZERO incidents a reality. He graduated from Texas A&M University in 1988 with a Bachelor of Science degree in Chemical Engineering. 

https://www.linkedin.com/in/michaeltaube/

Ed Farmer completed a BSEE and a Physics Master degree at California State University – Chico. He retired in 2018 after 50 years of electrical and control systems engineering. Much of his work involved oil industry automation projects around the world and application of the LeakNet pipeline leak detection and location system he patented. His publications include three ISA books, short courses, numerous periodical articles and blogs. He is an ISA Fellow and Mentor.