
Ask the Automation Pros: What Scan Time Is Needed for Process Control Applications?

The following discussion is part of an occasional series, "Ask the Automation Pros," authored by Greg McMillan, industry consultant, author of numerous process control books and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel; you can view the playlist here. You can read posts from this series here.

Looking for additional career guidance, or to offer support to those new to automation? Sign up for the ISA Mentor Program.

Russ Rhinehart’s Question

What are your thoughts about what scan time is needed in different process control applications?

I recall a rule from the early digital controller applications (circa 1975) that the control frequency (or scan time) should be chosen to give 10 samples/actions within a process time constant, or 30 within the process settling time. This was a balance of seeking perfection in control against device cost and practicality. A balance of perfection and sufficiency. (It also matches the rule in finite-difference methods for solving differential equations that, for stability and goodness of fit, the time increment should be one-tenth of the time constant.) I also recall that early horizon-predictive control models had about 30 coefficients in the finite impulse response (FIR) vector over the process settling time. Today, computer power (speed, capacity, etc.) permits many more coefficients in the FIR model and much higher control frequencies.
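As a quick illustration of those two rules, here is a minimal sketch (the 100 s time constant is a hypothetical value, and the function names are mine):

```python
# A minimal sketch of the two circa-1975 rules of thumb: ~10 samples per
# process time constant, or ~30 samples per settling time (taken here as
# roughly three time constants for a first order process).

def scan_time_from_time_constant(tau: float, samples: int = 10) -> float:
    """Scan time giving ~10 samples within one process time constant."""
    return tau / samples

def scan_time_from_settling_time(t_settle: float, samples: int = 30) -> float:
    """Scan time giving ~30 samples within the process settling time."""
    return t_settle / samples

tau = 100.0  # hypothetical 100 s process time constant
print(scan_time_from_time_constant(tau))      # 10.0 s
print(scan_time_from_settling_time(3 * tau))  # 10.0 s, the two rules agree
```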

What is today’s balance of perfection to sufficiency? For instance: Is 10 Hz a detectable benefit on a distillation column over 0.1 Hz? Is the optimization of 180 possible future control actions in model predictive control (MPC) a real benefit over, say, 10?

Brian Hrankowsky’s Responses

I am fairly certain that the factors of 10 and 30 show up in a couple of my discrete control texts from college, although I thought the rationale for at least one of them (the 30?) was that the high scan rate avoided the issues that arise from applying tuning algorithms based on analog (continuous) system analysis to discrete controllers. The other thing I remember is that the focus was on the primary time constant, ignoring deadtime as a consideration (a perspective from electrical engineering texts and publications that did not take into account process industry applications, which tend to have more deadtime).

Another item that should be considered is the timestamp precision desired in the historized data. Scanning slower often degrades the historian data. In batch processes, or even continuous processes with automatic startup and shutdown sequences, it gets very hard to figure out what happened when. While I don’t need to scan every second for good control, I may need to in order to see when measurements move, and by how much they initially move, relative to discrete device and sequence commands and statuses.

Sometimes we need to take action based on a process condition NOW, and waiting is not great. This could be a feedforward (FF) input, an output tracking condition or an interlock. How well this works with a slowly scanning PID depends on the system.

Some unexpected behaviors may be observed if the PID block scans much more slowly than the output function or other cascaded downstream block, due to the handshaking required to achieve true cascade control. Could the required cascade status handshaking become a problem? I’m not sure what happens with a loop running entirely in Foundation Fieldbus devices. You need to test that the system behaves as desired.

Peter Morgan’s Response

Scan time issues for PID control in the process industry are addressed in Section 4.2.6.5 of the ISA-TR5.9-2023 technical report, “Proportional-Integral-Derivative (PID) Algorithms and Performance.” ISA members can view ISA standards and technical reports for free.

Just as I enjoyed firsthand the transformation in my car’s performance when switching from cross-ply to radial tires in the 60s, in the 80s I welcomed the opportunities afforded by the transition from analog to digital control systems for process control. The transition to digital control didn’t come without a price: specifically, yet another parameter, the execution interval, to be considered in getting the best out of the control system. Selecting the execution interval for the PID algorithm and associated logic is a matter of balancing loop performance against the controller’s capacity to execute the code in the interval between scans.

Figure 1 illustrates the impact of scan time on closed loop performance for an unmeasured load disturbance applied to a process modelled as a 20 second deadtime followed by a 100 second lag. The controller is tuned for maximum disturbance rejection. Increasing scan time increases the peak error due to the delay in controller action, and reduces damping due to the additional phase lag at all frequencies introduced by the added delay in response. With the least favorable timing of the disturbance (just after the controller scan), the theoretical maximum disturbance in the PV would be increased by the factor (θp + Δt)/θp, although in practice it would typically be lower.

Figure 1. Impact of scan time on closed loop performance.

As demonstrated by the responses shown in Figure 1, an execution interval equal to one fifth of the effective process deadtime (Δt = 4 sec) can be selected without compromising loop performance, but an execution interval of 10 seconds (equal to one tenth of the process time constant) may be less than acceptable for a given application.
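As a rough check on the worst-case factor above, here is a minimal sketch using the Figure 1 process values (the function name is mine):

```python
def peak_inflation(theta_p: float, scan_time: float) -> float:
    """Worst-case theoretical increase in peak PV disturbance when the
    disturbance arrives just after a controller scan: (theta_p + dt)/theta_p."""
    return (theta_p + scan_time) / theta_p

theta_p = 20.0   # Figure 1 process deadtime, seconds
for dt in (1.0, 4.0, 10.0):
    print(f"scan {dt:4.1f} s -> factor {peak_inflation(theta_p, dt):.2f}")
# scan  1.0 s -> factor 1.05
# scan  4.0 s -> factor 1.20
# scan 10.0 s -> factor 1.50
```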

Although wider ranging studies (as reported in ISA-TR5.9-2023) indicate that longer sampling times might be allowed for slower acting loops, the plant operator’s reaction to the delay in controller action with long execution intervals should not be overlooked. Note also that Boolean logic for plant control or equipment protection may require the scan time for configured logic to be based on the acceptable latency for discrete action.

The precision with which a first order filter is implemented in the controller is also affected by scan time, to an extent dependent on the method of implementing the filter. Figure 2 compares the response of a first order filter (lag) for various implementations. The filter time constant is 10 seconds and the filter execution interval is 3 seconds. Notably, the method described under note 2 follows the reference response more closely than the other methods and can be considered more tolerant of longer scan times.
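Since Figure 2 is not reproduced here, the following minimal sketch shows how the discretization method changes a first order filter's fidelity. The two methods are common textbook discretizations and are my assumptions, not necessarily the exact implementations compared in the figure:

```python
import math

# Two common discretizations of a first order lag (T = 10 s, dt = 3 s).
# The exact zero-order-hold form reproduces the continuous response at
# the sample instants; the simple forward-Euler form accumulates error
# that grows with dt/T (and goes unstable for dt > 2T).

T, dt = 10.0, 3.0
alpha_euler = dt / T                    # forward Euler coefficient
alpha_exact = 1.0 - math.exp(-dt / T)   # exact ZOH coefficient

def step_response(alpha: float, n: int = 10) -> list[float]:
    """Filter output for a unit step input, starting from zero."""
    y, out = 0.0, []
    for _ in range(n):
        y += alpha * (1.0 - y)          # y += alpha * (u - y) with u = 1
        out.append(round(y, 4))
    return out

print("euler:", step_response(alpha_euler))
print("exact:", step_response(alpha_exact))
# continuous reference y(t) = 1 - exp(-t/T) sampled at t = dt, 2*dt, ...
print("ref:  ", [round(1 - math.exp(-(k + 1) * dt / T), 4) for k in range(10)])
```

The exact form matches the reference at every sample instant regardless of scan time, which is the sense in which one implementation can be "more tolerant of longer scan times" than another.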

For a PID implementing integral action using filtered positive feedback, if the execution interval is one fifth of the closed loop arrest time (for example), in practice there should be no discernible difference in response owing to the vendor’s favored method of implementing the filter.

Figure 2. First order filter response comparison.

To answer Brian’s question regarding the impact of scan rate on the propagation of mode shedding status from the secondary to the primary in a cascade: whether the PID implementation is conventional or via filtered positive feedback with external feedback, it should not matter that the execution interval of the primary loop PID is longer than that of the secondary loop PID, since the mode shedding status is maintained rather than momentary, and the primary will pick up the status when it executes. Scan rate and block execution order are of course a concern and must be accounted for when states are momentary (transitional) rather than maintained.

Greg McMillan’s Response

Section 7.3 in ISA-TR5.9-2023 provides a detailed view of the effect of scan time on control loop performance. It is important to note that equations by me and by Shinskey on the effect of scan time on the practical limits to integrated error and peak error show that what matters is the size of the scan time relative to the PID tuning settings. Note that the I/O scan time is usually much faster than the PID execution rate, so the term “scan time” in my discussion refers to the slower of the two digital update times.

A word of caution: as a PID is detuned, larger scan times come to be viewed as permissible, which degrades the ultimate limit to loop performance set by the open loop deadtime and time constant. Also, the traditional focus in the literature on setpoint response (tuning the PID by making setpoint changes), rather than on load disturbance rejection (emphasized by Shinskey and me by momentarily putting the PID in manual and making output changes), can lead one to think a slower scan time is permissible. A consequence is that if the tuning needs to be more aggressive for tighter control of unmeasured load disturbances, the scan time may become an overlooked and significant limiting factor on control loop performance. Both Shinskey and I tried to make users aware that the PID must first be tuned to ensure the peak and integrated errors for unmeasured load disturbances in process inputs are acceptable; the PID structure beta and gamma factors, or a setpoint lead-lag, can subsequently be used to provide the desired setpoint response (fast with minimum rise time, or slow and smooth).

The following is adapted from Appendix C of my ISA book “Essentials of Modern Measurements and Final Elements in the Process Industry” and provides some additional guidance. The appendix has been modified to place more emphasis on load disturbance rejection, and its table has been omitted.

There is considerable confusion as to when scan times affect the ability of a control system to compensate for unmeasured disturbances. The purpose of this appendix is to provide concepts and examples to sort out fact from fiction and offer some guidance.

Setting appropriate scan times and minimizing the effect of scan times on process control is increasingly important for the following reasons:

  • Since we live in a digital world, sampled data is the norm. Just from the volume of applications, the opportunity for setting and dealing with scan times is large.
  • There are no clear guidelines for various types of process control applications.
  • In some applications, conventional scan times can cause severe safety and performance issues.
  • In most cases, the tuning of the controller dictates that scan times could be significantly slower. If distributed control system (DCS) module execution times and wireless communication time intervals can be increased, controller loading is reduced and wireless battery life is prolonged, respectively.
  • If we want more at-line analyzers to provide measurements of stream compositions that tell us what is really going on in the process and offer the opportunity for a more advanced level of control, we need to understand and address sample processing and analyzer cycle times.
  • If we want to move to more wireless measurements that give us the flexibility and intelligence for process control improvement, we need to understand and address wireless communication intervals.

This appendix uses the term “scan time” as the time between updates in sampled data from digital measurements and controllers and from analyzers in the broadest sense. The following discussion should be useful for determining whether DCS scan or module execution times, wireless communication time intervals, model predictive control execution time, and at-line analyzer cycle time will affect control system performance.

A scan time less than 10% of the total loop deadtime has a negligible effect on load disturbance rejection for extremely aggressive tuning (the ultimate limit on scan time). For any loop with a control valve, the minimum loop deadtime is about 1 second for an unmeasured disturbance, so the ultimate limit on scan time is about 0.1 second. Note that actual PID tuning rarely approaches the ultimate limit to performance, both because of the tradeoff between performance and robustness and because tuning for the ultimate limit is too oscillatory. A scan time less than 10% of the PID reset time has a negligible effect on load disturbance rejection for self-regulating processes with robust tuning and a smooth response (the practical limit on scan time).
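A minimal sketch of these two limits as a helper function (the names and example values are mine, not from the appendix):

```python
def scan_time_limits(total_deadtime_s: float, reset_time_s: float) -> dict:
    """Ultimate limit: ~10% of total loop deadtime (extremely aggressive
    tuning). Practical limit: ~10% of the PID reset time (robust tuning,
    smooth response, self-regulating process)."""
    return {"ultimate_s": 0.10 * total_deadtime_s,
            "practical_s": 0.10 * reset_time_s}

# e.g., a valve loop with ~1 s minimum deadtime and a 40 s reset time:
print(scan_time_limits(1.0, 40.0))   # {'ultimate_s': 0.1, 'practical_s': 4.0}
```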

Greg McMillan’s Theoretical Basis and Background

The performance of a control loop depends upon the tuning. Specifically, the peak and integrated errors are inversely proportional to the controller gain. The peak error is not affected much by the integral time setting; however, the integrated error is proportional to the integral time. Thus, a loop with good dynamics can be made to perform as poorly as a process with bad dynamics by sluggish tuning. The effect of slow scan times is hidden by large integral times or small controller gains. It is therefore critical for any comparison that the tuning criteria be specified. In fact, there is an implied deadtime as a result of the tuning of the loop [1]. The tuning of the controller puts a practical limit on how fast the scan time must be for its effect to be negligible.

If a controller is tuned for maximum load disturbance rejection performance, the peak error is proportional to the ratio of total loop deadtime to open loop time constant, and the integrated error is proportional to the deadtime squared. These statements are strictly true only when the process time constant is large compared to the loop deadtime. The total loop deadtime is the sum of final element deadtime (e.g., valve pre-stroke time delay, deadband and stiction), process deadtime (e.g., mixing, thermal and transportation delays), automation deadtime (e.g., sensor lag, transmitter damping and scan times) and small process time constants. All of the time constants smaller than the largest time constant effectively become deadtime in the first order plus deadtime approximation used in industry. Process and automation system dynamics place an ultimate limit on loop performance, and there is a corresponding ultimate limit on the scan time.
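A minimal sketch of that deadtime budget (all component values are hypothetical):

```python
# Total loop deadtime as the sum of its contributors, per the text above.
# All values in seconds and purely illustrative.
deadtime_budget_s = {
    "final element (pre-stroke, deadband, stiction)": 0.5,
    "process (mixing, thermal, transportation)": 10.0,
    "automation (sensor lag, transmitter damping)": 1.0,
    "scan time contribution (~half the sample period)": 0.5,
    "small process time constants": 2.0,
}
print(sum(deadtime_budget_s.values()))   # 14.0 s total loop deadtime
```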

The effect of scan times can be assessed in terms of practical and ultimate limits on performance. For critical loops where peak errors can cause destruction or environmental releases, such as compressor surge control, furnace pressure control, exothermic reactor temperature control and RCRA pH control, the tuning is necessarily aggressive. As a result, the practical limit is much closer to the ultimate limit. For a discussion of cases where exceptionally fast scan times are needed, see reference [2]. I have personally seen applications (e.g., furnace pressure control and polymer line pressure control) where analog control had to be used because the control loop deadtime was so small due to fast variable speed drives and fast measurements.

For many digital devices, the update is available near the beginning of the scan time (latency is negligible), which means the average deadtime from the scan time is about half the sample period. For at-line analyzers (field analyzers with automated sample systems), the result is not available until the end of the sample processing and analyzer cycle time, which translates to an average effective deadtime of about 1.5 times the time interval between updates in the analyzer output signal.
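A minimal sketch of those two effective-deadtime rules (the function name and example values are mine):

```python
def effective_deadtime(update_interval_s: float,
                       at_line_analyzer: bool = False) -> float:
    """Average effective deadtime contributed by sampled data:
    ~0.5x the update interval for digital devices with negligible latency,
    ~1.5x the cycle time for at-line analyzers (result at cycle end)."""
    return (1.5 if at_line_analyzer else 0.5) * update_interval_s

print(effective_deadtime(2.0))                            # 1.0 s for a 2 s scan
print(effective_deadtime(600.0, at_line_analyzer=True))   # 900 s for a 10 min cycle
```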

The detrimental effect of scan time is greater than that of continuous deadtime. For continuous sources of deadtime, such as process transportation and mixing time delays and small process time constants, there is a continuous train of updates; for sampled data, there are no intervening values. Consequently, the effects can be worse. For example, oscillations can be aliased so that the indicated amplitude is smaller and the period larger than actual. There can also be jitter due to variations in latency and lack of synchronization of digital data, which introduce variable time delays and noise for rapidly changing signals.
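A minimal sketch demonstrating the aliasing effect just described (the 50 s oscillation and 40 s scan are illustrative values):

```python
import math

# A 50 s period oscillation sampled every 40 s, well under the Nyquist
# rate: the samples repeat every 5 scans, tracing an apparent period of
# 1/|1/50 - 1/40| = 200 s, four times the actual period, and the peaks
# between samples are missed, so the indicated amplitude is also low.
period_s, scan_s = 50.0, 40.0
samples = [math.sin(2 * math.pi * k * scan_s / period_s) for k in range(10)]
print([round(s, 2) for s in samples])
# approximately [0.0, -0.95, -0.59, 0.59, 0.95, ...] repeating every
# 5 samples (a 200 s apparent period) with peaks of 0.95 instead of 1.0
```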

References

  1. McMillan, Greg, “Effect of Sample Delay on Standard PID Loop Tuning and Performance”, Advanced Application Note 005, August 28, 2008.
  2. McMillan, Greg, “Analog Control Holdouts”, April 2, 2007.

Note that if thermocouple or RTD input/output (I/O) cards are used instead of thermocouple or RTD transmitters, the consequent poor I/O resolution can cause excessive noise from derivative action in slow processes if faster-than-necessary scan rates are used. The problem also exists for scan rates that are much faster than wireless update rates or analyzer cycle times. The enhanced PID using external-reset feedback, which sets the PID time interval for the derivative calculation equal to the time between updates seen in the PV, greatly reduces the noise from the derivative mode.
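A minimal sketch of the derivative-interval idea, assuming a simple ideal-form derivative term; this is my illustration, not vendor code for the enhanced PID:

```python
def derivative_on_update(pv_new: float, pv_old: float,
                         elapsed_s: float, td_s: float, kc: float) -> float:
    """Derivative contribution computed only when the PV actually updates,
    using the true elapsed time between updates instead of the scan time.
    This avoids amplifying quantization steps from coarse I/O resolution
    or from wireless/analyzer updates slower than the PID execution."""
    if elapsed_s <= 0.0 or pv_new == pv_old:
        return 0.0   # no fresh update yet: contribute no derivative action
    return kc * td_s * (pv_new - pv_old) / elapsed_s
```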

Greg McMillan’s Simplified Guidance

Scan time must be intelligently selected and tested. To provide some simplified guidance, the scan time can start out at about 10% to 20% of the total loop deadtime. The total loop deadtime is the easiest parameter to estimate, assuming steps are much larger than the valve resolution or lost motion. After tuning for acceptable load disturbance rejection and robustness, the scan time can be increased to about 10% of the PID reset time. However, for cases where reset action is minimized due to an integrating response (e.g., level or gas pressure control) or a runaway response (e.g., temperature control of exothermic reactors), the reset time is much larger than four times the deadtime, since the positive feedback action of the integral mode adds to the process positive feedback; here the scan time should be set based on deadtime, not reset time. Some polymerization reactors use proportional-plus-derivative (PD) controllers. Finally, for tight pressure control with variable speed drives, the scan time may need to be less than 0.1 second, requiring a special fast-acting controller.
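A minimal sketch of that selection sequence (a sketch of the guidance above, not a vendor algorithm; names are mine):

```python
def select_scan_time(total_deadtime: float, reset_time: float | None = None,
                     integrating_or_runaway: bool = False) -> float:
    """Start at ~10% to 20% of total loop deadtime; after tuning, relax
    toward ~10% of the reset time, except for integrating or runaway
    responses, where the limit stays deadtime-based."""
    start = 0.15 * total_deadtime            # middle of the 10% to 20% range
    if reset_time is None or integrating_or_runaway:
        return start
    return max(start, 0.10 * reset_time)     # relax only once tuning is known

print(select_scan_time(10.0))                    # 1.5 s before tuning
print(select_scan_time(10.0, reset_time=40.0))   # 4.0 s after tuning
```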

Mark Darby’s Thoughts

Early on with MPC, it was common to use 30 coefficients (step response or impulse response). This was due to the limited processor power at the time. Note that the total execution time includes both the control computation time and the time associated with the I/O; the I/O can be the rate limiting step.

I look at 30 coefficients per response time as a minimum. Further, the rule should be applied to the fastest responding CV over all process inputs, including both manipulated and measured disturbance variables. Over time, as processors became faster, 30 × 3 = 90 coefficients became more of a minimum standard.
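A minimal sketch of the bookkeeping implied by that rule, using numbers from the 120 min example later in this response:

```python
def fir_coefficients(settling_time_s: float, scan_time_s: float) -> int:
    """Number of FIR/step-response coefficients spanning the settling time."""
    return int(settling_time_s / scan_time_s)

# 120 min settling time executed at 30 s:
print(fir_coefficients(120 * 60, 30))   # 240 coefficients
# Holding to the old minimum of 30 coefficients would instead have
# forced a scan time of 120 * 60 / 30 = 240 s.
```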

The scan time will also depend on the underlying regulatory control strategy for the MPC and its ability to reject disturbances. For example, if a TC loop is used, one might be able to use a longer scan time for the MPC. Another consideration is dealing with upsets: how much can an unmeasured disturbance variable change over several scan times, and what is its impact?

A recent example: I applied MPC to a process with a settling time of 120 min. Given the nature of the disturbances (feed rate and composition), it executed at 30 sec. It was based on a state-space model, which requires far fewer coefficients. Even if the controller were based on an impulse response model, the 30 sec execution time would normally not be a problem for a modern server (with the number of coefficients being 500 or more). There is obviously a point at which a faster scan time does not provide a meaningful increase in control performance. However, I believe the majority of MPC applications fall on the slower side rather than being set faster than needed.

Regarding the number of future control actions (or moves) in MPC: in many of the commercial packages, the number is often not set independently but is influenced by other settings. Also, internal “blocking” of the inputs is often used. Blocking means that future moves are not calculated at every future time step (i.e., future MV values may be held constant for multiple time steps), but instead follow a pre-specified pattern in the MPC. So 10 future moves may be fine, but more than that can be helpful if there is a wide range in the dynamics (fast vs. slow). In a distillation column, for example, pressure drop, levels (if controlled in the MPC) and valve positions will have much faster dynamics than product compositions.
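A minimal sketch of move blocking (the pattern shown is illustrative, not from any particular MPC package):

```python
# Move blocking: future MV moves are computed only at pre-specified
# future steps and held constant in between, shrinking the optimization.
horizon = 30                        # future time steps in the horizon
blocking = [0, 1, 2, 4, 8, 16]      # steps at which a new move is allowed

moves_without_blocking = horizon    # one decision variable per step
moves_with_blocking = len(blocking) # only 6 decision variables
print(moves_without_blocking, "->", moves_with_blocking)
```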

Greg has covered the typical cases for PID execution period. I’ll just add that a useful lower limit for tight control with frequent disturbances is for the loop scan time to be 10%-25% of the loop deadtime.

Michel Ruel’s Response

What should be the scan time for a PID loop?

  1. For a loop tuned tightly to reject disturbances, think of it like a plane taking off — you want quick action. The key parameter here is the deadtime, which determines the strength of the initial response. The scan time should ideally be a fraction of the deadtime, with 10% being a good estimate.
  2. For a loop tuned to smoothly reach a new setpoint, liken it to a plane landing — you want a smooth descent. In this case, the time constant is crucial, representing the end of the transient response. The scan time should be a fraction of the time constant, but at least as long as the deadtime (see the sketch after this list).
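A minimal sketch combining the two rules above (the function name and the 10% fractions are my reading of “a fraction”):

```python
def ruel_scan_time(deadtime: float, time_constant: float,
                   objective: str = "disturbance") -> float:
    """Scan time per the two rules above:
    'disturbance' -> ~10% of the deadtime (quick initial action),
    'setpoint'    -> ~10% of the time constant, but at least as long
                     as the deadtime (smooth approach to target)."""
    if objective == "disturbance":
        return 0.10 * deadtime
    return max(deadtime, 0.10 * time_constant)

print(ruel_scan_time(20.0, 100.0))                        # 2.0 s
print(ruel_scan_time(20.0, 100.0, objective="setpoint"))  # 20.0 s
```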

Ed Farmer’s Thoughts

I used to get projects that involved matching the capability of the control system to the optimization of the process. The idea was to have a data interval “comfortably” faster than the process could wander, which suggests comparing the measurement interval to the process dynamics. Electrical engineers would look worried for a minute or so and then say, “Nyquist-Shannon sampling theorem!” We could usually get enough data to understand the process’s behavior and apply Shannon’s theorem to the sampling interval. As computers became faster and larger, getting this “just right” mattered less, as long as sampling was faster than the process.
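A minimal sketch of applying the sampling theorem the way Ed describes (the 60 s period and margin factor are hypothetical):

```python
# Nyquist-Shannon: sample at more than twice the highest frequency of
# interest. If the fastest meaningful process wander has a ~60 s period,
# the sampling interval must be under 30 s; a margin factor keeps it
# "comfortably" faster, as Ed puts it.
fastest_period_s = 60.0
nyquist_limit_s = fastest_period_s / 2    # 30 s absolute maximum interval
comfortable_s = fastest_period_s / 10     # 6 s with margin
print(nyquist_limit_s, comfortable_s)
```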

Wikipedia has a section on Nyquist-Shannon sampling theorem at: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem

It covers aliasing extremely well, as well as bandwidth. It also bumps up against more esoteric techniques like Fourier analysis as a way of tracking down changes related to the process versus the mechanics of its implementation (such as deadtime in the measurement concept).


The following recent ISA books detail the use and value of simulations, including breakthroughs in charge balance and bioreaction kinetics (use promo code ISAGM10 for a 10% discount on Greg’s ISA books):

  1. Advanced pH Measurement and Control: Digital Twin Synergy and Advances in Technology, Fourth Edition

  2. New Directions in Bioprocess Modeling and Control: Maximizing Process Analytical Technology Benefits, Second Edition

Greg McMillan
Greg McMillan has more than 50 years of experience in industrial process automation, with an emphasis on the synergy of dynamic modeling and process control. He retired as a Senior Fellow from Solutia and a senior principal software engineer from Emerson Process Systems and Solutions. He was also an adjunct professor in the Washington University Saint Louis Chemical Engineering department from 2001 to 2004. Greg is the author of numerous ISA books and columns on process control, and he has been the monthly Control Talk columnist for Control magazine since 2002. He is the leader of the monthly ISA “Ask the Automation Pros” Q&A posts that began as a series of Mentor Program Q&A posts in 2014. He started and guided the ISA Standards and Practices committee on ISA-TR5.9-2023, PID Algorithms and Performance Technical Report, and he wrote “Annex A - Valve Response and Control Loop Performance, Sources, Consequences, Fixes, and Specifications” in ISA-TR75.25.02-2000 (R2023), Control Valve Response Measurement from Step Inputs. Greg’s achievements include the ISA Kermit Fischer Environmental Award for pH control in 1991, appointment to ISA Fellow in 1991, the Control magazine Engineer of the Year Award for the Process Industry in 1994, induction into the Control magazine Process Automation Hall of Fame in 2001, selection as one of InTech magazine’s 50 Most Influential Innovators in 2003, several ISA Raymond D. Molloy awards for bestselling books of the year, the ISA Life Achievement Award in 2010, the ISA Mentoring Excellence award in 2020, and the ISA Standards Achievement Award in 2023. He has a BS in engineering physics from Kansas University and an MS in control theory from Missouri University of Science and Technology, both with emphasis on industrial processes.

Books:

Advances in Reactor Measurement and Control
Good Tuning: A Pocket Guide, Fourth Edition
New Directions in Bioprocess Modeling and Control: Maximizing Process Analytical Technology Benefits, Second Edition
Essentials of Modern Measurements and Final Elements in the Process Industry: A Guide to Design, Configuration, Installation, and Maintenance
101 Tips for a Successful Automation Career
Advanced pH Measurement and Control: Digital Twin Synergy and Advances in Technology, Fourth Edition
The Funnier Side of Retirement for Engineers and People of the Technical Persuasion
The Life and Times of an Automation Professional - An Illustrated Guide
Advanced Temperature Measurement and Control, Second Edition
Models Unleashed: Virtual Plant and Model Predictive Control Applications
