
What Factors Affect Dead Time Identification for a PID Loop?

The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient, and retired Senior Fellow from Solutia, Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.


In the ISA Mentor Program, I provide guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Adrian Taylor, an industrial control systems engineer at Phillips 66.

 

Adrian Taylor’s Question

One would expect dead time to be the easiest parameter to estimate, yet when using software tools that identify the process model in closed loop, I find that identification of dead time is inconsistent. Furthermore, when using software identification tools on simulated processes where the exact dead time is actually known, I find that on occasion the estimate of dead time is very inaccurate. What factors affect dead time identification for a PID loop?

 

Russ Rhinehart’s Answer

When we identify dead time associated with tuning a PID loop, it is normally part of a model such as First Order Plus Dead Time (FOPDT) or a slightly more complicated second-order model (SOPDT). Normally, and traditionally, we generate the data by step testing: start at steady state (SS), make a step-and-hold in the manipulated variable, MV (the controller output), and then observe the response of the process controlled variable, CV. We pretend that there were no uncontrolled disturbances and make the simple linear model best fit the data. This procedure has served us well for the 80 years or so that we’ve used models for tuning PID feedback controllers or setting up feedforward devices, but there are many issues that can lead to inconsistent or unexpected results.
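For reference, here is a minimal sketch in Python of the FOPDT step response that such a fit assumes; the gain, time constant, dead time, and step size are illustrative values of my own choosing, not data from any particular loop:

```python
import numpy as np

def fopdt_step(t, Kp, tau, theta, dMV):
    """FOPDT response of the CV (as a deviation from its initial steady state)
    to a step of size dMV in the MV at t = 0."""
    y = np.zeros_like(t, dtype=float)
    active = t > theta                      # the CV does not move until the dead time has elapsed
    y[active] = Kp * dMV * (1.0 - np.exp(-(t[active] - theta) / tau))
    return y

t = np.linspace(0, 60, 601)                                # seconds
cv = fopdt_step(t, Kp=2.0, tau=10.0, theta=3.0, dMV=5.0)   # illustrative values only
```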

One of the issues is that these models do not exactly match the process behavior. The process may be of higher order than the model. Consider a simple flow rate response. If the I/P device driving the valve has a first-order response, the valve has a first-order response, and there is a noise filter on the measurement, then the flow rate measurement has a third-order response to the controller output. The distributed nature of heat exchangers and thermowells, and the multiple trays in a distillation column, all lead to high-order responses.

So, the FOPDT model will not exactly represent the process; the optimization algorithm in the modeling approach seeks the simple model that best fits the overall process response. For a zero-dead-time, high-order process, the best model will delay the modeled response so that the subsequent first-order part of the model can best fit the remaining data. The best model will report a dead time even if there is none. The model does not report the process dead time; it provides a pseudo-delay that makes the rest of the model best fit the process response. The model dead time is not the time point where one can first observe the CV change.
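To see this pseudo-delay effect, one can simulate a zero-dead-time, third-order response (for example, three equal first-order lags in series, as in the I/P, valve, and filter example above) and fit the FOPDT model to it by least squares. The sketch below is my own illustration using scipy’s curve_fit; the lag values are assumptions, not data from a real loop:

```python
import numpy as np
from scipy.optimize import curve_fit

def fopdt_unit_step(t, Kp, tau, theta):
    """FOPDT response to a unit step at t = 0 (deviation form)."""
    return np.where(t > theta,
                    Kp * (1.0 - np.exp(-np.maximum(t - theta, 0.0) / tau)),
                    0.0)

# Zero-dead-time third-order process: three equal first-order lags of tau0 = 5 s each.
tau0 = 5.0
t = np.linspace(0, 100, 1001)
x = t / tau0
y3 = 1.0 - np.exp(-x) * (1.0 + x + 0.5 * x**2)   # analytic unit step response of 1/(tau0*s + 1)^3

# Best-fit FOPDT by least squares, starting from a rough initial guess.
(Kp, tau, theta), _ = curve_fit(fopdt_unit_step, t, y3, p0=[1.0, 10.0, 1.0])
print(f"fitted gain = {Kp:.2f}, time constant = {tau:.1f} s, pseudo dead time = {theta:.1f} s")
```

If the optimizer converges, the fitted dead time comes out well above zero even though the simulated process has none, which is exactly the pseudo-delay Russ describes.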

A second issue is that processes are usually nonlinear, and the linear FOPDT model cannot match the process. Accordingly, steps up or down from a nominal MV value, or testing at alternate operating conditions, will experience different process gains and dynamics, which will lead to linear models of different pseudo-dead time values.

A third issue is that the best fit might be in a least squares sense over all the response data, or it might be a two-point fit of mid-response data. The classic hand-calculated “reaction curve” models use the point of highest slope of the response to get the delay and time constant by extrapolating the slope from that point to where it intersects the initial and final CV values. A “parametric” method might use the points when the CV rose one-quarter and three-quarters of the way from the initial to the final steady state values and estimate the delay and time constant from those two points. By contrast, a least squares approach would seek to make the model best fit all the response data, not just a few points. The two-point methods will be more sensitive to noise or uncontrolled disturbances. My preference is to use regression to best fit the model over all the data to minimize the confounding aspects of process noise.
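As an illustration of one two-point method, the 28.3 percent/63.2 percent fit uses the times at which the CV reaches those fractions of its total change. A minimal sketch, assuming a clean single step response already stored in arrays t and cv and a known step size dMV (the names are my own):

```python
import numpy as np

def two_point_fopdt(t, cv, dMV):
    """Classic 28.3%/63.2% two-point FOPDT estimate from a single step response.
    Assumes the response starts at steady state and settles at a new steady state."""
    cv0, cv_inf = cv[0], cv[-1]
    frac = (cv - cv0) / (cv_inf - cv0)       # fraction of the total CV change
    t28 = np.interp(0.283, frac, t)          # time to reach 28.3% of the change
    t63 = np.interp(0.632, frac, t)          # time to reach 63.2% of the change
    tau = 1.5 * (t63 - t28)                  # time constant
    theta = t63 - tau                        # dead time
    Kp = (cv_inf - cv0) / dMV                # process gain
    return Kp, tau, theta
```

Because only two points are used, any noise or disturbance near those points passes straight into the estimates, which is the sensitivity Russ notes.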

A fourth issue is that the step testing might not have started at steady state (SS), nor ended at SS. If the process was initially changing because of its response to prior adjustments, then the step test CV response might initially be moving up or down. This will confound estimating the pseudo-delay and time constant of any modeling approach. If the process does not settle to a SS, but continues to slowly rise, then the gain will be in error, and if the gain is used in the estimation procedure for the pseudo-delay, it will also carry that error. If replicate trials have a different background at the beginning, a different residual trend, then the models will be inconsistent.

A fifth issue relates to the assumption of no disturbances. If a disturbance is affecting the process, then, similar to the case of not starting at SS, the model will be affected by the disturbance, not just the MV.

Here is a sixth. The delay enters the model nonlinearly, and in sampled data it is an integer number of intervals. If the best value for the pseudo-delay were 8.7 seconds, but the sample interval was 1 second, the delay would be either rounded or truncated; it might be reported as 8 or as 9 seconds. That alone is a bit inconsistent. Further, even if the model is linear in differential equation terminology, the search for an optimum pseudo-delay is nonlinear. Most optimizers end up in a local minimum, which depends on the initialization values. In my explorations, the 8.7-second ideal value might be reported anywhere within a 0- to 10-second range on any one particular optimization trial. Optimizers need to be run from many initial values to find the global optimum.
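One practical way to cope with both the integer nature of the delay and the local-minimum problem is to sweep the delay over an integer grid of samples and do an ordinary least squares fit at each candidate, keeping the candidate with the lowest sum of squared deviations. A minimal sketch in Python (my own illustration, not Russ’s software; mv and cv are assumed to be deviation variables sampled at a fixed interval dt):

```python
import numpy as np

def fit_delay_by_grid(mv, cv, dt, max_delay_samples=30):
    """Sweep the dead time over an integer grid of samples. For each candidate delay d,
    fit the discrete first-order model cv[k+1] = a*cv[k] + b*mv[k-d] by least squares
    and keep the delay with the smallest sum of squared deviations (SSD).
    mv and cv are deviation variables (measured minus the initial steady state)."""
    best = None
    for d in range(max_delay_samples + 1):
        y = cv[d + 1:]                                   # cv[k+1]
        X = np.column_stack([cv[d:-1], mv[:len(y)]])     # [cv[k], mv[k-d]]
        coeffs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        ssd = np.sum((y - X @ coeffs) ** 2)
        if best is None or ssd < best[0]:
            best = (ssd, d, coeffs)
    _, d, (a, b) = best
    tau = -dt / np.log(a) if 0.0 < a < 1.0 else float("nan")  # discrete pole -> time constant
    Kp = b / (1.0 - a)                                        # steady-state gain
    return d * dt, tau, Kp                                    # dead time (s), time constant (s), gain
```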

So, there are many reasons for the inconsistent and inaccurate results.

You might sense that I don’t particularly like the classic single step response approach. But I have to admit that it is fully functional. Even if a control action is only 70 percent right because the model was in error, the next controller correction will reduce the 30 percent error by 70 percent. And, after several control actions, the feedback aspect will get the controller on track.
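The arithmetic behind that statement is worth seeing once: if every correction removes 70 percent of whatever error remains, the error shrinks geometrically.

```python
# If each control action removes 70% of the remaining error, the residual error
# after n actions is 0.3**n of the original: roughly 30%, 9%, 2.7%, 0.8%, ...
error = 1.0
for n in range(1, 5):
    error *= 0.30
    print(f"after action {n}: {error:.1%} of the original error remains")
```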

Although fully functional, I think that the classic step-and-hold modeling approach can be improved. I used to recommend four MV steps (up-down-down-up). This keeps the CV in the vicinity of the nominal value, and the four steps temper the effect of noise, nonlinearity, disturbances, and a not-at-SS beginning. However, it takes time to complete four steps. Production usually gets upset with the extended CV deviations, and it requires an operator to monitor the process and determine when to start each new test.

My preference now is to use a “skyline” MV sequence, which is patterned after the MV sequences used to develop models for model predictive control (MPC), also termed advanced process control (APC). In skyline testing, the MV makes steps to random values within a desired range, at random time intervals ranging from about one-half to two time constants. In this way, in the same time interval as the four-step up-down-down-up response, the skyline generates about 10 responses, can be automated, and does not push the process as far from the nominal value, or for as extended a period, as traditional step testing. The large number of responses does a better job of tempering noise and disturbances, while requiring less attention and causing smaller process upsets.
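A minimal sketch of generating such a skyline sequence (the function and parameter names are my own, and the hold-time range follows the one-half to two time-constant guideline above):

```python
import numpy as np

def skyline_sequence(mv_nominal, amplitude, tau_est, dt, duration, seed=None):
    """Random 'skyline' MV test sequence: steps to random values within
    mv_nominal +/- amplitude, each held for a random interval of about
    0.5 to 2 estimated time constants."""
    rng = np.random.default_rng(seed)
    t, mv = [], []
    time = 0.0
    while time < duration:
        hold = rng.uniform(0.5 * tau_est, 2.0 * tau_est)        # random hold length
        level = mv_nominal + rng.uniform(-amplitude, amplitude) # random MV level
        n = max(1, int(round(hold / dt)))
        t.extend(time + dt * np.arange(n))
        mv.extend([level] * n)
        time += n * dt
    return np.array(t), np.array(mv)

# Example: 1-second sampling, ~100 s estimated time constant, +/- 5% MV moves for 2000 s.
t, mv = skyline_sequence(mv_nominal=50.0, amplitude=5.0, tau_est=100.0, dt=1.0, duration=2000.0)
```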

Because the skyline input sequence does not create step-and-hold responses from one SS to another, the two-point methods for reaction curve modeling cannot be used—but regression certainly can be used. What is needed is an approach to nonlinear regression (to find the global minimum in the presence of local optima), and a nonlinear optimizer that can handle the integer aspects of the delay. I offer open-code software on my web site in Visual Basic for Applications, free to any visitor. Visit r3eda.com and use the menu item “Regression,” then the sub-item “FOPDT Modeling.”

You can enter your data in the Excel spreadsheet and press the run button to let the optimizer find the best model. The model includes both the reference values for the MV and CV (FOPDT models are deviations from a reference) and initial values (in case the data does not start at an ideal SS). The optimizer is Leapfrogging, one of the newer multiplayer direct search algorithms that can cope with multiple optima, nonlinearity, and discontinuities. It seeks to minimize the sum of squared deviations (SSD) over all the data. The optimizer is reinitialized as many times as you wish to ensure that the global optimum is found, and the software reports the cumulative distribution of SSD values to reveal confidence that the global best has been found.
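Russ’s spreadsheet uses the Leapfrogging optimizer; as a rough, generic stand-in for readers who prefer Python, the sketch below minimizes the same kind of SSD objective with scipy’s differential_evolution, which also copes reasonably well with the multi-optima behavior of the delay. It simulates the FOPDT model against the recorded MV trajectory, so it works on skyline data as well as steps; the parameter bounds and variable names are my own assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate_fopdt(params, t, mv, mv_ref, cv_ref):
    """Simulate the FOPDT model against an arbitrary recorded MV trajectory
    (simple Euler integration; t must be evenly spaced)."""
    Kp, tau, theta = params
    dt = t[1] - t[0]
    d = int(round(theta / dt))                       # dead time as whole samples
    y = np.empty_like(t)
    y[0] = cv_ref
    for k in range(len(t) - 1):
        u = mv[k - d] if k >= d else mv[0]           # delayed MV
        y[k + 1] = y[k] + dt * (Kp * (u - mv_ref) - (y[k] - cv_ref)) / tau
    return y

def ssd(params, t, mv, cv, mv_ref, cv_ref):
    """Sum of squared deviations between the data and the simulated model."""
    return np.sum((cv - simulate_fopdt(params, t, mv, mv_ref, cv_ref)) ** 2)

# Usage, once t, mv, cv hold the recorded test data (the bounds below are illustrative):
# result = differential_evolution(ssd, bounds=[(0.1, 10.0), (1.0, 300.0), (0.0, 60.0)],
#                                 args=(t, mv, cv, mv[0], cv[0]), seed=0)
# Kp, tau, theta = result.x
```

Running it from several seeds gives the same kind of multi-start confidence check that Russ’s software reports.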


ISA Mentor Program Posts & Webinars

Is this information valuable to you? Want more? Click here to view more ISA Mentor Program blog posts, technical discussions, and educational webinars—or keep reading to learn Adrian's next steps, as well as how other mentors addressed his question.


Adrian Taylor’s Reply to Russ Rhinehart

Many thanks for your very detailed response. I look forward to having a play with your skyline test + regression method. I have previously set up a spreadsheet to carry out the various two-point methods using points at 25 percent/75 percent, 35.3 percent/85.3 percent, and 28.3 percent/63.2 percent. As your regression method minimizes the error and should always give the best fit, it will also be interesting to compare the various two-point methods to see which of them most closely matches your best-fit regression method for different types of process dynamics. For the example code given in your guidance notes: I presume r is a random number between 0 and 1? I note the open loop settling time is required. Is the procedure still to carry out an open loop step test initially to establish the open loop settling time, and then in turn use this to generate the skyline test?

 

Russ Rhinehart’s Reply to Adrian Taylor

Yes, RND is a uniformly distributed random number on the 0-1 interval. It is not necessary to have an exact number for the settling time. In a nonlinear process, it changes with operating conditions, and the choice of where the process settles depends on the user’s interpretation of a noisy or slowly changing signal. An intuitive estimate from past experience is fully adequate. If you have any problems with the software, let me know.

 

Michel Ruel’s Answer

See the ISA Mentor Program webinar Loop Tuning and Optimization for tips. Usually the dead time is easily identified by closed loop techniques, but in open loop you can miss a chunk of it. Most modern tools analyze the process response in the frequency domain, and in that case dead time corresponds to high frequencies. Tests using a series of pulses (or double pulses) are rich in high frequencies, so the dead time is well identified. If we use a first- or second-order plus dead time model, remember that the model dead time represents the real dead time plus the small time constants.
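One common rule of thumb along these lines (Skogestad’s half rule is one version, not necessarily the one Michel has in mind) lumps the small lags into the model dead time:

```python
def lumped_fopdt(true_dead_time, time_constants):
    """Rough FOPDT approximation of a high-order process using Skogestad's half rule:
    keep the largest time constant (plus half of the second largest) as the model
    time constant, and lump the rest of the lags into the model dead time."""
    tcs = sorted(time_constants, reverse=True)
    tau = tcs[0] + (tcs[1] / 2 if len(tcs) > 1 else 0.0)
    theta = true_dead_time + (tcs[1] / 2 if len(tcs) > 1 else 0.0) + sum(tcs[2:])
    return tau, theta

# Example: no true dead time, lags of 10, 3, and 1 seconds ->
# model time constant ~11.5 s, model dead time ~2.5 s.
print(lumped_fopdt(0.0, [10.0, 3.0, 1.0]))
```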

 

Adrian Taylor’s Reply to Michel Ruel

Many thanks for your response. While I have experienced the problem with various identification tools, a relay test-based identification tool seemed to be particularly inconsistent at identifying dead time. My understanding now is that while the relay test method is very good at identifying the ultimate gain and ultimate period, attempts to convert the result to an FOPDT model can be more problematic.

 

Mark Darby’s Answer

Model identification results can be poor due to the quality of the test/data as well as the capabilities of the model identification technology/software. Insufficient step sizes can lead to poor results: for example, not making big enough test moves relative to valve limitations (dead band and stick/slip) and the noise level of the measurements you want to model. Also, to get good results, multiple steps may be needed to minimize the impact of unmeasured disturbances.

Another factor is the identification algorithm itself and capabilities of the software. Not all are equivalent, and there is a wide range of approaches used, including how dead time is estimated. One needs to know if the identification approach works with closed-loop data. Not all do. Some include provisions for pre-filtering the data to minimize the impact of unmeasured disturbances by removing slow trends. This is known as high-pass filtering, in contrast to low-pass filtering, which removes higher frequency disturbances.
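A minimal illustration of that kind of pre-filtering, using a simple first-order high-pass applied identically to the MV and CV records so the model relating them is preserved (the cutoff value is my own assumption):

```python
import numpy as np

def high_pass(x, dt, tau_hp):
    """First-order high-pass filter: removes slow trends (periods much longer than
    tau_hp) while passing the faster dynamics used for identification."""
    alpha = tau_hp / (tau_hp + dt)
    y = np.zeros_like(x, dtype=float)
    for k in range(1, len(x)):
        y[k] = alpha * (y[k - 1] + x[k] - x[k - 1])
    return y

# Apply the *same* filter to both records so the MV-to-CV relationship is unchanged:
# mv_f = high_pass(mv, dt=1.0, tau_hp=600.0)   # remove trends slower than ~10 minutes
# cv_f = high_pass(cv, dt=1.0, tau_hp=600.0)
```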

If a sufficient number of steps are done, most identification approaches will obtain good model estimates, including dead time. Dead time estimates can usually be improved by making higher frequency moves (e.g., at fractions of the estimated steady-state response time).

As indicated in my response to the question by Vilson, the user will often need to specify whether the process is integrating. Estimates of process model parameters can be used to check or constrain the identification. As mentioned, the user may be able to obtain model estimates from historical data, either by eyeball or by using selected historical data in the model identification, thereby avoiding a process test.

 

Greg McMillan’s Answer

Digital devices, including historians, create a dead time equal to one-half the scan or execution interval plus latency. If the devices are executing in series and the test signal is introduced as a change in controller output, then you can simply add up the dead times. Often test setups do not have the same latency, order of execution, or process change injection point as the actual field application. If the arrival time is different within a digital device’s execution, the dead time can vary by as much as the scan or execution interval.

If there is compression, backlash, or stiction, there is also a dead time equal to the dead band or resolution limit divided by the rate of change of the signal, assuming the signal change is larger than the dead band or resolution limit. If there is noise or a disturbance, the dead time estimate could be smaller or larger, depending upon whether the induced change is in the same or the opposite direction, respectively.
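These contributions are easy to tally. A small worked sketch with illustrative numbers of my own adds the half-scan-plus-latency terms for devices in series and the dead band term described above:

```python
# Illustrative tally of automation-system dead time contributions (all values assumed).
scan_times = {"transmitter": 0.5, "controller": 0.1, "historian": 1.0}   # seconds
latencies  = {"transmitter": 0.1, "controller": 0.0, "historian": 0.2}   # seconds

digital_dead_time = sum(0.5 * scan_times[d] + latencies[d] for d in scan_times)

# Dead time from a valve dead band: dead band divided by the signal's rate of change,
# valid when the signal change is larger than the dead band.
dead_band = 0.5          # % of span
signal_rate = 0.25       # % of span per second
valve_dead_time = dead_band / signal_rate

print(f"digital dead time ~ {digital_dead_time:.2f} s, dead band dead time ~ {valve_dead_time:.1f} s")
```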

Some systems have a slow execution or large latency compared to the process dead time. Identification is particularly problematic for fast systems (e.g., flow, pressure) and for any loop where the largest sources of dead time are in the automation system, resulting in errors of several hundred percent. Electrode and thermowell lags can be incredibly large, varying with velocity, direction of change, and fouling of the sensor. Proven fast software directly connected to the signals and designed to identify the open loop response (e.g., the Entech Toolkit), along with multiple tests using different perturbation sizes and directions at different operating conditions (e.g., production rates, setpoints, and degrees of fouling), works best.

I created a simple module in Mimic that offers a rough, fast estimate of the dead time, ramp rate, and integrating process gain for near-integrating and true integrating processes within six dead times; it is accurate to about 20 percent if the process dead time is much larger than the software execution interval. While the relay method is not able to identify the open loop gain and time constant, it can identify the dead time. I have done this in the Mimic “Rough ‘n Ready” tuner I developed. Some auto-tuning software may be too slow, or may take a conservative approach using the largest observed delay between an MV change and a PV change plus a maximum assumed update rate, and possibly use a deranged algorithm thinking larger is better.
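For the near-integrating short cut Greg mentions, the usual idea (sketched below with hypothetical names; this is not the Mimic module itself) is to compare the CV ramp rate before and after an MV step: the integrating process gain is the change in ramp rate divided by the change in MV, and the dead time is judged from when the ramp rate first departs from its pre-step value.

```python
import numpy as np

def near_integrator_estimate(t, cv, dMV, t_step, settle=30.0):
    """Short-cut estimate for a near-integrating process: compare the CV ramp rate
    over a window before the MV step with the ramp rate well after it.
    Returns the integrating process gain (CV units per second per unit of MV).
    The dead time is judged separately, from when the ramp rate first changes."""
    before = (t > t_step - settle) & (t < t_step)
    after = t > t_step + settle
    rate_before = np.polyfit(t[before], cv[before], 1)[0]   # CV slope before the step
    rate_after = np.polyfit(t[after], cv[after], 1)[0]      # CV slope after the step
    return (rate_after - rate_before) / dMV
```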

 

For Additional Reference

McMillan, Gregory K., Good Tuning: A Pocket Guide.

 

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

Greg McMillan
Greg McMillan has more than 50 years of experience in industrial process automation, with an emphasis on the synergy of dynamic modeling and process control. He retired as a Senior Fellow from Solutia and a senior principal software engineer from Emerson Process Systems and Solutions. He was also an adjunct professor in the Washington University Saint Louis Chemical Engineering department from 2001 to 2004. Greg is the author of numerous ISA books and columns on process control, and he has been the monthly Control Talk columnist for Control magazine since 2002. He is the leader of the monthly ISA “Ask the Automation Pros” Q&A posts that began as a series of Mentor Program Q&A posts in 2014. He started and guided the ISA Standards and Practices committee on ISA-TR5.9-2023, PID Algorithms and Performance Technical Report, and he wrote “Annex A - Valve Response and Control Loop Performance, Sources, Consequences, Fixes, and Specifications” in ISA-TR75.25.02-2000 (R2023), Control Valve Response Measurement from Step Inputs. Greg’s achievements include the ISA Kermit Fischer Environmental Award for pH control in 1991, appointment to ISA Fellow in 1991, the Control magazine Engineer of the Year Award for the Process Industry in 1994, induction into the Control magazine Process Automation Hall of Fame in 2001, selection as one of InTech magazine’s 50 Most Influential Innovators in 2003, several ISA Raymond D. Molloy awards for bestselling books of the year, the ISA Life Achievement Award in 2010, the ISA Mentoring Excellence award in 2020, and the ISA Standards Achievement Award in 2023. He has a BS in engineering physics from Kansas University and an MS in control theory from Missouri University of Science and Technology, both with emphasis on industrial processes.

Books:

Advances in Reactor Measurement and Control
Good Tuning: A Pocket Guide, Fourth Edition
New Directions in Bioprocess Modeling and Control: Maximizing Process Analytical Technology Benefits, Second Edition
Essentials of Modern Measurements and Final Elements in the Process Industry: A Guide to Design, Configuration, Installation, and Maintenance
101 Tips for a Successful Automation Career
Advanced pH Measurement and Control: Digital Twin Synergy and Advances in Technology, Fourth Edition
The Funnier Side of Retirement for Engineers and People of the Technical Persuasion
The Life and Times of an Automation Professional - An Illustrated Guide
Advanced Temperature Measurement and Control, Second Edition
Models Unleashed: Virtual Plant and Model Predictive Control Applications
