*The following technical discussion is part of an occasional series showcasing the **ISA Mentor Program**, authored by **Greg McMillan**, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient, and retired Senior Fellow from Solutia, Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.*

In the ISA Mentor Program, I provide guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Brian Hrankowsky, an associate senior consultant engineer at Eli Lilly and Company.

## Brian Hrankowsky’s Question

Derivative amplifies noise in the measured process value, causes the control loop output to reverse direction before the process has reached setpoint, and causes excessively fast process oscillation. Those are some of the reasons I’ve been given for never using derivative. In control theory, derivative is taught as a tool to reduce overshoot, improve phase margin (make the loop more robust), and cancel or lead secondary lags. In many tuning algorithms, derivative is set as a function of only the process dead time—implying that it is a good tool for mitigating that, too. I’ve worked on older (or sometimes just cheaper) systems that had poor PID implementations, which makes me think there were some real control issues solved by setting the derivative to 0. In some conversations, I find the person really just doesn’t understand what derivative does. In fact, I had a long discussion with an individual who was convinced derivative was a position controller. A bit of understanding, a modern PID implementation with a correctly set derivative filter, and an ability to control derivative kick enable the use of derivative to get all of the benefits with none of the drawbacks. Why are some engineers so against using derivative action?

## Michel Ruel’s Answer

This is a simple question with many facets:

**Poor implementation:** If the derivative function does not include a filter to limit high gain at high frequencies, do not use derivative, since noise will be amplified and sent to the controller output. If this filter is adjustable, it should be set to about one-tenth of the derivative time, which corresponds to limiting the derivative gain to 10. In some systems this filter is a second-order Butterworth filter, which greatly improves the usefulness of derivative when noise is present. Most loops should be configured with derivative acting only on PV changes (not on error changes) to avoid a kick at the controller output on a SP change. In some PLCs, when switching from manual to automatic, the derivative calculation is wrong and generates a large change at the controller output (the rate of change is computed using the last scan in automatic). In some implementations the rate-of-change calculation is simply done poorly. In others, an improper derivative function interferes with the anti-reset-windup algorithm and becomes a disaster!

**Improving robustness:** Derivative improves phase margin, true. Derivative can also cancel a small time constant: if the model is second-order plus dead time, set the derivative time equal to the small time constant to cancel it. If the model has multiple time constants, set the derivative time equal to the largest of the small time constants. If the model is first-order plus dead time, set the derivative time to about 0.5 of the equivalent dead time (usually in fact a real dead time plus small time constants).

**Derivative reacts to rate of change:** When a loop is tuned for disturbance rejection, derivative will react before the error grows, since it “sees” the error increasing by reacting to the slope.

**Improving response on SP change:** If the SP is manipulated by another system (or MPC), properly adjusted derivative will improve the response. Avoid a SP ramp; I prefer using a SP filter.

**Impacts of derivative:** When using derivative, you can use more proportional gain (derivative improves phase margin) and more integral action. The benefits are less overshoot, a faster response, and a more robust loop.
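The points above about the derivative filter and derivative-on-PV can be sketched in a few lines. This is a minimal illustrative discrete PID (not any vendor's algorithm, and class and variable names are my own): the derivative term acts on a first-order-filtered PV, with filter time Td/N where N = 10 is the derivative gain limit Michel mentions.

```python
class FilteredPID:
    """Sketch of a PID with I on error and D on filtered PV (illustrative only)."""

    def __init__(self, kc, ti, td, dt, n=10):
        self.kc, self.ti, self.td, self.dt, self.n = kc, ti, td, dt, n
        self.integral = 0.0
        self.pv_filt = None  # filtered PV, used only by the derivative term

    def update(self, sp, pv):
        error = sp - pv
        # Reset (integral) action on error
        self.integral += (self.kc / self.ti) * error * self.dt
        if self.pv_filt is None:
            self.pv_filt = pv  # initialize filter so there is no kick at start
        prev = self.pv_filt
        if self.td > 0:
            tf = self.td / self.n  # filter time limits derivative gain to ~n
            self.pv_filt += (self.dt / (tf + self.dt)) * (pv - self.pv_filt)
        # Derivative acts on the filtered PV, so a SP change causes no kick
        deriv = -self.kc * self.td * (self.pv_filt - prev) / self.dt
        return self.kc * error + self.integral + deriv
```

Because the derivative term sees only the PV, a setpoint step with an unchanged PV moves only the proportional and integral contributions; there is no derivative kick.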

## Greg McMillan’s Answer

In migration projects going from a *Series* or *Real* Form to newer systems that have an *Ideal* or *Standard* Form, people may not know that a rate time setting greater than ¼ the reset time setting will cause oscillations, which get quite bad if the rate time is larger than the reset time. They also don’t realize that the Series Form inherently protects against an effective rate time greater than ¼ the reset time. This confusion can lead to people turning off derivative action.
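The standard conversion from Series (interacting) to Ideal (non-interacting) settings makes this protection concrete: the interaction factor F = 1 + Td/Ti maps Series settings into Ideal ones, and the resulting Ideal rate/reset ratio can never exceed ¼. A small sketch (function name is my own):

```python
def series_to_ideal(kc, ti, td):
    """Convert Series (interacting) PID settings to the equivalent Ideal settings.

    The interaction factor F = 1 + Td/Ti gives:
      Kc' = Kc * F,  Ti' = Ti * F = Ti + Td,  Td' = Td / F = Ti*Td / (Ti + Td)
    so Td'/Ti' = Ti*Td / (Ti + Td)**2, which peaks at 1/4 when Ti = Td.
    """
    f = 1.0 + td / ti  # interaction factor
    return kc * f, ti * f, td / f


# Even the worst case in the Series form (rate time equal to reset time)
# yields an Ideal rate time of exactly 1/4 the Ideal reset time:
kc_i, ti_i, td_i = series_to_ideal(kc=2.0, ti=5.0, td=5.0)
print(td_i / ti_i)  # 0.25
```

Entering the old Series numbers unconverted into an Ideal-form controller loses this built-in limit, which is why the oscillations Greg describes appear after migration.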

People don’t like sudden changes in controller output, even though such action is preemptive and reduces the dead time from dead band and resolution limits. People prefer the gradual action of the integral mode, which always moves in the direction that eliminates the error seen on a digital display, no matter how small the error and regardless of whether the trajectory shows overshoot is imminent. Amplification of noise by derivative action also turns people off. Instead of reducing noise with a better installation, filtering, or rate limiting, users play the blame game and outlaw derivative. Education and demonstration with a digital twin can do a lot to show the value of derivative and how to enable derivative to do its job without upsetting other loops or wearing out the control valve.

If the controller output signal moves too fast, a simple setpoint rate limit on the analog output block or secondary PID, combined with the PID external-reset feedback option (e.g., dynamic reset limit) and a proper connection of the manipulated variable, can prevent the fast change without the need to retune the PID controller.
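The rate-limit half of this scheme is simple to sketch. The helper below is hypothetical (not a specific DCS block): each scan, the working setpoint moves toward the target by at most `max_rate * dt`. External-reset feedback itself, which requires the actual manipulated variable read back so the PID reset follows what the slowed output is really doing, is not modeled here.

```python
def rate_limited_sp(target, current, max_rate, dt):
    """Move `current` toward `target` by at most `max_rate` units per unit time.

    Applied each scan to the setpoint of an analog output block or secondary
    PID, this slows the manipulated-variable change without retuning the PID
    (illustrative sketch; assumes external-reset feedback handles the rest).
    """
    step = max_rate * dt
    return min(max(target, current - step), current + step)
```

For example, with a limit of 2 %/s and a 1 s scan, a 10 % step request advances only 2 % per scan, while a request already within the allowed step passes through unchanged.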

A PID structure of PI on error and D on PV, or a two-degrees-of-freedom (2DOF) structure, can eliminate the bump from derivative action on a setpoint change. However, in some cases this bump is useful in cascade control: it helps a secondary PID respond faster, particularly in getting through valve dead band and stiction, which in turn helps the primary controller. Oscillation in the secondary loop is usually filtered by the larger process time constant in the primary loop, which normally occurs if the cascade rule is adhered to (the secondary loop at least five times faster than the primary loop).

The elimination of noise by a better filter or a better installation, as described in the Control article “Why and how of signal filtering,” can help much more. In that article, seemingly insignificant noise caused a lot of valve movement because of the high PID gain and rate action used for an integrating process. What was not considered is that fluctuations in controller output with a peak amplitude less than the dead band or resolution limit normally cause no movement. Pneumatically actuated control valves have a dead band of at least 0.2% for the best true throttling valves with sensitive diaphragm actuators and positioners. The dead band can be as large as 20% for on-off valves posing as throttling valves. The strange story here is that the “poser” will not wear out quickly, when it would be better if it did, so that you would buy a true throttling valve.
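The dead band effect can be sketched with a simple backlash model (an illustrative assumption, not a model of any particular valve): the stem only follows the command once it escapes the dead band window, so output fluctuations that stay inside the window produce no movement at all.

```python
class DeadBandValve:
    """Illustrative backlash model of valve dead band (percent-of-span units)."""

    def __init__(self, dead_band, position=50.0):
        self.db = dead_band          # full dead band width, e.g. 0.2 (%)
        self.position = position     # current stem position (%)

    def command(self, u):
        # The stem moves only when the command leaves the dead band window
        if u > self.position + self.db / 2:
            self.position = u - self.db / 2
        elif u < self.position - self.db / 2:
            self.position = u + self.db / 2
        return self.position
```

With a 0.2% dead band, controller-output noise of less than 0.1% peak amplitude around the current position never moves the stem, which is why such noise wears out neither the valve nor the process.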

If the process variable (PV) reverses direction on its approach to setpoint, before reaching it, the derivative action is most likely too large.

I have used derivative action in most temperature loops, where it helps cancel a thermowell or heat transfer surface lag. For highly exothermic batch reactors, derivative action is essential to prevent acceleration and shutdown. In some chemical plants I worked in, the temperature controllers were proportional plus derivative (PD on error, no integral) because derivative action was critical and integral action was detrimental.

I have seen specialists in refinery controls concentrate more on minimizing movement of the manipulated variable (MV) than movement of the PV, possibly because of the interactions in heat integration, particularly for inline systems (smooth, gradual, minimal MV changes being the goal).

See the ISA Mentor Q&A posts “When and How to Use Derivative Action in a PID Controller” and “Key Insights to Control System Dynamics.”

## Mark Darby’s Answer

One of the challenges with derivative comes from a pure trial-and-error tuning approach: three tuning parameters must be determined, the derivative time in addition to the gain and integral time. But the task is simplified with the advice given above. Derivative is most useful when applied to lag-dominated, higher-order processes (with or without dead time). When tight control is important, PID is worth pursuing; it can reduce the IAE (integral absolute error) by half or more. One can set the derivative time, as Greg suggests in our recent article on filtering, equal to the secondary time constant or ½ of the dead time, whichever is larger. Be aware that if the process gain is overestimated, the controller gain will be underestimated, possibly making the derivative action, as well as the proportional action, ineffective. For this reason, when using a tuning rule, a tuning package result, or trial-and-error tuning, it can be useful to determine whether the controller gain is set high enough for the derivative action to be effective. To test, turn off derivative action (set the derivative time to zero): one should see a damped oscillatory response that is not there with derivative action; if not, the gain can be increased. Alternatively, one can start with the derivative time at zero, increase the gain until a damped cycle is observed, and then dial in derivative to eliminate the cycle. This approach assumes the integral time is not set too small. If the settling time is much larger than 10 times the dead time, with an oscillation period about three to four times the dead time, the derivative time may be too small.
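The starting-point rule quoted above can be written as a one-liner (the function name is illustrative): set the derivative time to the secondary time constant or half the dead time, whichever is larger.

```python
def initial_derivative_time(tau_secondary, dead_time):
    """Starting derivative time per the rule above: the larger of the
    secondary (smaller) time constant and half the process dead time.
    Time units must match (e.g., both in seconds)."""
    return max(tau_secondary, 0.5 * dead_time)


# A process with a 3 s secondary lag and 4 s dead time:
print(initial_derivative_time(tau_secondary=3.0, dead_time=4.0))  # 3.0
```

This is only an initial value; the gain test Mark describes (remove derivative, look for the damped cycle) then confirms whether the derivative action is actually contributing.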

## Additional Mentor Program Resources

See the ISA book *101 Tips for a Successful Automation Career* that grew out of this Mentor Program to gain concise and practical advice. See the *InTech magazine* feature article “Enabling new automation engineers” for candid comments from some of the original program participants. See the *Control Talk* column “How to effectively get engineering knowledge” with ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column “How to succeed at career and project migration” with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).