ISA Interchange

Welcome to the official blog of the International Society of Automation (ISA).

This blog covers numerous topics on industrial automation such as operations & management, continuous & batch processing, connectivity, manufacturing & machine control, and Industry 4.0.

The material and information contained on this website is for general information purposes only. ISA blog posts may be authored by ISA staff and guest authors from the automation community. Views and opinions expressed by a guest author are solely their own, and do not necessarily represent those of ISA. Posts made by guest authors have been subject to peer review.


Is Data Noise in Process Control Degrading Performance?

The following discussion is part of an occasional series, "Ask the Automation Pros," authored by Greg McMillan, industry consultant, author of numerous process control books, and 2010 ISA Life Achievement Award recipient. Program administrators collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel. View the playlist here. You can read all posts from this series here.

Looking for additional career guidance, or to offer support to those new to automation? Sign up for the ISA Mentor Program.

Ed Farmer’s Questions

Actual conditions in a process are observed with equipment that comes with inherent measurement and timing limitations and distortion. In many cases these issues can be easily managed with minimal impact. Sometimes, though, it becomes essential to differentiate the underlying reality from the corrupted measurement data. What kinds of things in equipment, piping, and instrument design and installation produce such “noise”? Have you ever seen such a situation? Could it be “fixed”?


Mark Darby’s Responses

We usually think of noise as high-frequency fluctuations imposed on a measurement signal from the process. But fluctuations can be generated in the process itself. Examples include mixing, flashing, and condensation/vaporization. Noise can also come from improper installation of a measurement device, for example, not enough straight-run pipe length upstream of a flowmeter. Some measurements are also prone to high noise relative to the signal, such as an orifice differential pressure (DP) measurement for measuring flow, particularly at the low end of the flow range. Sometimes the noise may be due to electrical interference, where a 60 Hz cycle may be observed.


A question to answer is whether the noise is normal, which might be assessed by comparing similar measurements in the plant. An experienced technician is a good source for answering this question.


Not all noise is a problem. For example, a flow controller is normally tuned with low gain so that noise is not amplified and does not cause significant proportional-integral-derivative (PID) controller output (OP) changes. Filtering is often used on the process variable (PV) of a PID, but it is often overdone, introducing significant lag in the loop and making matters worse. Sometimes one is left with the challenging problem of a noisy signal in a loop that needs tight tuning for performance. Done right, filtering can still allow significant proportional and derivative action without introducing significant lag. I have looked at this problem and found that a second-order filter works much better than a first-order filter for minimizing filter lag and OP movement, with a second-order Butterworth filter providing the best results. See the ISA5.9 Technical Report. This result is consistent with results published by Karl Åström that considered similar criteria.
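
The filter-order point can be illustrated numerically. The sketch below (my own, not from the ISA5.9 report) applies a first-order lag and a discrete second-order Butterworth low-pass, built via the bilinear transform, to a noisy step; the cutoff, sample rate, and noise level are illustrative assumptions.

```python
import math
import numpy as np

fs = 10.0          # sampling frequency, Hz (assumed)
fc = 0.5           # filter cutoff, Hz (assumed)
rng = np.random.default_rng(1)

t = np.arange(0.0, 30.0, 1.0 / fs)
pv_true = np.where(t >= 10.0, 1.0, 0.0)              # unit step at t = 10 s
pv_noisy = pv_true + 0.05 * rng.standard_normal(t.size)

def first_order(x, fc, fs):
    """Classic exponential (first-order lag) filter."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, x.size):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def butterworth2(x, fc, fs):
    """Second-order Butterworth low-pass via the bilinear transform."""
    k = math.tan(math.pi * fc / fs)
    q = 1.0 / math.sqrt(2.0)                         # Butterworth damping
    norm = 1.0 / (1.0 + k / q + k * k)
    b0 = k * k * norm
    b1, b2 = 2.0 * b0, b0
    a1 = 2.0 * (k * k - 1.0) * norm
    a2 = (1.0 - k / q + k * k) * norm
    y = np.zeros_like(x)
    for i in range(x.size):
        xm1 = x[i - 1] if i >= 1 else 0.0
        xm2 = x[i - 2] if i >= 2 else 0.0
        ym1 = y[i - 1] if i >= 1 else 0.0
        ym2 = y[i - 2] if i >= 2 else 0.0
        y[i] = b0 * x[i] + b1 * xm1 + b2 * xm2 - a1 * ym1 - a2 * ym2
    return y

y1 = first_order(pv_noisy, fc, fs)
y2 = butterworth2(pv_noisy, fc, fs)
```

Both filters attenuate the pre-step noise and settle to the new value; the Butterworth form rolls off faster above the cutoff, which is why the same noise reduction can be had with less lag in the loop.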


Another approach to dealing with noise is to develop a model between the OP and the PV and use it for control. An example: as mentioned, flow through an orifice operating in the lower range will have a high noise-to-signal ratio (i.e., a low signal-to-noise ratio). If operation in the low range is frequently required and the orifice cannot be changed, a possible solution is to instead control to a calculated flow inferred from the OP when in the low range (triggered when the FI or OP is below a preset value). The flow gain (slope) can be determined from an X-Y plot of FI vs. OP.
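
As a sketch of this inferred-flow fallback (with hypothetical data, trigger value, and units chosen purely for illustration):

```python
import numpy as np

# Historical steady-state (OP %, FI) pairs for the slope fit -- hypothetical
op_hist = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
fi_hist = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# A least-squares line through the X-Y plot of FI vs. OP gives the flow gain
slope, bias = np.polyfit(op_hist, fi_hist, 1)

def pv_for_control(fi_measured, op, fi_trigger=3.0):
    """Use a flow inferred from the OP in the noisy low range; otherwise use the meter."""
    if fi_measured < fi_trigger:
        return slope * op + bias   # inferred flow from the OP
    return fi_measured             # trust the flowmeter
```

In the normal range the meter is used directly; below the trigger, the controller sees the smooth inferred flow instead of the noisy DP-based reading.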


Greg McMillan’s Responses

I agree with Mark Darby and Greg Shinskey that a second-order Butterworth filter is best. A first-order signal filter time or transmitter damping setting should be less than 20% of the PID’s integral time setting and preferably less than 20% of the total loop dead time. The filter should keep noise from causing PID output fluctuations larger than the control valve or variable frequency drive (VFD) dead band, to avoid wearing out valve seating and packing and VFD motors and seals.
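
The two guidelines reduce to a one-line helper (a sketch of the rule of thumb, not a tuning method):

```python
def max_filter_time(integral_time_s, loop_deadtime_s):
    """Upper bound on a first-order filter time or transmitter damping:
    less than 20% of the PID integral time and preferably less than
    20% of the total loop dead time."""
    return 0.2 * min(integral_time_s, loop_deadtime_s)

# e.g., a 30 s integral time and a 5 s total loop dead time
# suggest keeping the filter time below 1 s
```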


For very slow responding processes, intelligent signal rate-of-change limits can screen rapid extraneous fluctuations without introducing a delay. The rate limit is turned off when the PID is in manual. Middle signal selection (MSS) can eliminate noise, drift, and erratic, frozen, and slow responses from a single sensor without a delay. MSS is essential for pH electrodes since they are prone to be individually problematic. Signal characterization to linearize PVs (e.g., pH) can reduce noise from amplification by high process gains (e.g., steep titration curves) and offsets after filtering.
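
Middle signal selection is simple enough to show in full; this minimal sketch takes three redundant readings and returns the middle one, rejecting a single bad sensor with no added lag:

```python
def middle_signal(a, b, c):
    """Middle signal selection (MSS): the median of three redundant
    sensor readings rejects one failed, frozen, or noisy sensor."""
    return sorted((a, b, c))[1]

# One pH electrode reading absurdly high does not disturb the selected PV
```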


The following Control Talk blogs offer more knowledge on fast oscillations:

https://www.controlglobal.com/home/blog/11338665/what-are-good-signal-filtering-tips

https://www.controlglobal.com/home/blog/11340194/what-is-the-best-transmitter-damping-or-signal-filter-setting

https://www.controlglobal.com/home/blog/11339523/what-are-the-alternatives-to-reduce-noise

https://www.controlglobal.com/home/blog/11337698/the-causes-and-fixes-for-fast-oscillations-tips

https://www.controlglobal.com/home/blog/11306955/how-to-prevent-acceleration-jumping-and-oscillation-of-unstable-processes


As for slower oscillations, the causes are often too much integral action, or limit cycles due to resolution limits (e.g., stiction) when there is one or more integrators in the control loop, or due to lost motion (e.g., backlash) when there are two or more integrators (positioners, controllers, process). Resolution limits and lost motion occur principally in control valves but can occur in VFDs due to insufficient bits in VFD signal input cards and dead band settings in the VFD setup.


An unrecognized opportunity is to use dynamic first principle models employing material and energy balances possibly supplemented by artificial intelligence and adapted by model predictive control to rate the integrity of measurements. Coriolis mass flowmeters, resistance temperature detectors (RTDs) spring loaded in a tapered thermowell, non-contacting radar level measurements, and narrow range direct mounted smart pressure transmitters would have the highest initialized ratings.


These measurements tend to have the 5Rs (resolution, repeatability, rangeability, reliability, and response time) an order of magnitude better than alternative measurements. Coriolis mass flow meters also offer an incredibly accurate density measurement and have recently been improved to handle different phases (bubbles and solids) that cause erratic measurements.


https://www.controlglobal.com/measure/flow/article/11301941/knowing-the-best-is-the-best


First-principle calculations of measurements with the lowest ratings could be done to alert users to suspected problems. The results could include more intelligent maintenance and justification for investment in better measurements. This methodology can be extended to provide ratings, and possibly bias corrections, of analytical measurements, especially if Coriolis flowmeters are prevalently used. The elimination of oscillations by more precise valves and VFDs, better filtering and tuning, and good control strategies is essential for this methodology to be proficient.


People tend to use too much integral action because it behaves the way they would: not proactively reversing the direction of the manipulated variable until the error changes sign, to deal with the inevitable dead time in the loop response. If there is an oscillation, people tend to think the cause is too much gain action. If the oscillation period in the process variable is more than 10 times the dead time, the cause is most likely excessive integral action. If the amplitude is relatively constant and the period is between 10 and 30 times the dead time, the cause is most likely a resolution limit or lost motion.

If the limit cycle amplitude can be decreased by increasing the PID gain, the cause is most likely lost motion. If the period is approaching or exceeding 40 times the dead time in a lag-dominant, integrating, or runaway process, the cause is most likely the product of the PID gain and reset time being too small, most often because the integral time is too small but in some cases because the PID gain is too small. There is effectively a window of allowable gains, which can pose severe safety issues for runaway processes with positive feedback.

If the PID gain is too small, the controlled variable in a runaway process can accelerate to a point of no return causing a shutdown and relief devices to blow endangering equipment and people. I have often seen where the integral time is more than an order of magnitude too large for these processes.
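
The diagnostic rules of thumb above can be collected into a rough decision aid. The function below is my illustrative encoding of those period-to-dead-time ratios, not a published algorithm, and is no substitute for proper loop analysis:

```python
def likely_oscillation_cause(period_s, deadtime_s,
                             amplitude_constant=False,
                             gain_increase_helps=False,
                             near_integrating=False):
    """Rough classification of a sustained oscillation by its period
    relative to the loop dead time, per the rules of thumb above."""
    ratio = period_s / deadtime_s
    if near_integrating and ratio >= 40:
        return "PID gain times reset time too small (often integral time too small)"
    if amplitude_constant and 10 <= ratio <= 30:
        if gain_increase_helps:
            return "lost motion (backlash)"
        return "resolution limit (stiction) or lost motion"
    if ratio > 10:
        return "excessive integral action"
    return "period near the loop's natural period: check gain and tuning"
```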


Russ Rhinehart’s Responses

Yes, noise is a real issue. Noise is usually considered to be normally and independently distributed (NID) perturbations to a steady signal; most often, the perturbations are Gaussian distributed with a mean of zero and a noise amplitude characterized by the standard deviation.


This assumption has permitted mathematical analysis of many techniques for tempering noise, such as the first-order filter or a moving average. The NID(0,σ) assumption has also grounded the development of statistical filters, such as Kalman or SPC-based methods. Legitimized by theoretical analysis, these methods dominate applications.


Even so, methods to “remove” noise cannot actually remove it: they temper the perturbations and reduce the variance, but still leave some uncertainty. Further, filtering methods cause either a lag or a delay in the filtered signal catching up to the true signal. At steady state, this just takes some time to see the average (within the residual uncertainty), but during transients or a sequence of real changes, the filtered signal can remain aggravatingly behind, misdirecting action.


Point 1: Filtering to remove noise can’t. The greater the noise reduction, the greater the lag or delay. You must choose the filter parameter values to balance precision with lag. But noise is not really symmetric (plus perturbations may have a different amplitude from minus perturbations). Then filtering, based on minimizing the squared deviation, will shift the average toward the larger-amplitude side, creating a bias.
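
The bias claim is easy to demonstrate. In this sketch (with an assumed, deliberately asymmetric noise shape), upward perturbations are twice the size of downward ones, and the long-run average lands above the true value:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 50.0
n = 100_000

# Asymmetric noise: up and down perturbations are equally likely,
# but the upward ones have twice the amplitude.
sign = rng.choice([-1.0, 1.0], size=n)
amplitude = np.where(sign > 0.0, 2.0, 1.0)
signal = true_value + sign * amplitude * rng.random(n)

filtered_mean = signal.mean()
# The average sits about +0.25 above the true value: minimizing squared
# deviation shifts the estimate toward the larger-amplitude side.
```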


Point 2: The theoretical best is just idealized theory. Noise is generally considered to be perturbation additions to the true measured value. Such mechanisms would include thermal noise in a sensor, or random electromagnetic influences on the transmission lines, or mechanical vibrations on the sensor.


But noise could be internally generated by the process. For example, fluid flow is turbulent, and the random pressure pulses in the turbulence cause flow measurement noise. As another example, in-line mixing is imperfect, and as packets of hot/cold or high/low pH material pass by a sensor, the measurement represents the temporal packet, not the perfectly blended material. These in-process sources of noise cause perturbations that seem independent (no correlation to the prior perturbation) at the sampling frequency of normal process instrumentation.


However, there are “random” process influences that happen on a slower scale and cause process measurement variation. For instance, pulsing in piston pumps, on-off level or PSA switching, changing properties of raw material feed, BTU content of a shovel-full of coal, or process device issues such as stick-slip (stiction) in a valve.


These influences eventually reach the measurement location and lead to up-and-down drifts in a measurement, even when the process is nominally at steady state. The variation may appear to be random noise at a longer sampling interval, but at normal measurement frequency the signal tracks the persisting ups and downs, making it look like the real deviation it represents.


Similarly, external environmental influences on the process can cause perturbations that have persistence. Consider wind gusts or rain showers that create intermittent cooling of external units. Again, these on-off disturbances will cause temporary measurement changes with some persistence.


Point 3: Instead of using the instrument and control system to cover-up (filter) or to control the noise, eliminate the source. Improve feed material blending, use pulse dampers, or put positioners on valves.


Point 4: Are these sorts of influences noise or real process changes? From a data-based view, it depends on the sampling frequency of the observation. “Noise” is uncorrelated. From a process control view, it also depends on the speed with which the control system can correct the deviation. If the effect passes before the controller can correct it, then the controller would simply be tampering.


The perturbation will be gone before the controller “correction” is observed, and the controller will just be amplifying the “noise” by adding a source of deviations. Noise is a perturbation with short persistence relative to the time a controller can make a change.
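
Deming’s funnel experiment (cited in Point 5 below) can be reproduced in a few lines. In this sketch the “controller” moves its aim point by the full last deviation each cycle, even though the deviations are pure noise; this rule exactly doubles the output variance:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
noise = rng.standard_normal(n)

# Hands-off: the output is just the noise itself
untouched = noise

# Tampering: shift the aim point by the full observed deviation each cycle
adjusted = np.empty(n)
offset = 0.0
for i in range(n):
    adjusted[i] = offset + noise[i]
    offset -= adjusted[i]     # "correct" the deviation just seen

# adjusted[i] = noise[i] - noise[i-1], so its variance is twice the noise variance
```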


Point 5: Don’t try to control noise. (See W. Edwards Deming’s funnel experiment.) But “noise” also refers to distraction or degradation from a message, background nonsense or confusion, such as promulgated by social media. Relative to a control system, errors that should be ignored are missed/dropped signals or other spurious signals. Here, median filters are a sensible solution. In a middle-of-three filter, the middle value of the last three signals is accepted as the representative value.


If one signal is absurdly high or low, one of the other two will have the middle value. The middle value might be the most recent sample, the second past, or the third past, so on average the reported value is delayed by about one sample. If you think a spurious or outlier event might persist for three sequential signals, use the middle of the past five.
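
A middle-of-three filter is only a few lines of code; this sketch reports the median of the last three samples, dropping a lone spike at the cost of roughly one sample of delay:

```python
from collections import deque

class MedianOfThree:
    """Temporal median filter over the last three samples."""
    def __init__(self):
        self.window = deque(maxlen=3)

    def update(self, x):
        self.window.append(x)
        return sorted(self.window)[len(self.window) // 2]

f = MedianOfThree()
reported = [f.update(x) for x in [5.0, 5.1, 99.0, 5.2]]
# The spurious 99.0 never appears in the reported values
```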


Point 6: Methods to remove outliers are different from those intended to reduce noise by filtering data. However, Ed Farmer suggests an even broader context: “Sometimes, though, it becomes essential to differentiate the underlying reality from the corrupted measurement data.” Certainly, an improperly calibrated sensor/transmitter can introduce a systematic error. For instance, time, heat, degradation of internals, or erosion may have caused a once-calibrated device to drift out of calibration.


Additionally, if the sensor-to-transmitter response is nonlinear (for example, temperature to thermocouple mV) yet it was calibrated assuming linearity with a 2-point calibration, then between the two calibrated points the sensor will report a systematic error. I think that nearly all sensors are nonlinear, but within a reasonable range the linear assumption is acceptable.
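
To make the systematic error concrete, here is a sketch with a toy quadratic sensor curve (an assumption, not real thermocouple data): a 2-point calibration is exact at the endpoints and biased in between:

```python
def sensor_mv(temp_c):
    """Toy nonlinear sensor: mV output with slight curvature (assumed)."""
    return 0.04 * temp_c + 2e-5 * temp_c ** 2

# Two-point calibration at 0 C and 100 C, assuming a linear response
lo_t, hi_t = 0.0, 100.0
lo_mv, hi_mv = sensor_mv(lo_t), sensor_mv(hi_t)
slope = (hi_t - lo_t) / (hi_mv - lo_mv)

def reported_temp(mv):
    """Temperature the linear calibration reports for a given mV reading."""
    return lo_t + slope * (mv - lo_mv)

# Exact at the calibration points, systematically low near mid-range
err_mid = reported_temp(sensor_mv(50.0)) - 50.0
```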


Point 7: Be sure that instruments are still in calibration and that calibration procedures match the instrument reality. Signal discretization, whether due to digital transmission devices or analog-to-digital conversion, can give the appearance of noise. This is like a digital time display: the seconds or minutes jump from the past value to the present, then hold that value for an interval until jumping to the next.


However, for a process signal that is progressively drifting about a value, signal discretization generates either 1) a flatline hold until enough change makes it jump, which is a false indication of a noiseless steady state, or 2) a square-wave signal that seems to randomly jump between certain values. The clue to visible discretization is that the signal only jumps between certain values. I would not label this discretization signal as noise; however, filtering could minimize its amplitude (while also introducing a lag).
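
A short sketch of that telltale behavior (the drift amplitude and converter resolution are assumed values): quantizing a slow drift produces a signal that only occupies a few fixed levels:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 201)
drift = 0.05 * np.sin(0.8 * t)        # slow drift about a value (assumed)
resolution = 0.04                     # converter step size (assumed)
quantized = np.round(drift / resolution) * resolution

levels = sorted(set(np.round(quantized, 6)))
# The quantized trace jumps between only a few fixed values -- the
# signature of discretization rather than true process noise.
```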


Point 8: To eliminate the appearance of noise due to signal discretization, use higher-bit converters.


Point 9: There are many definitions for noise. It might mean random fluctuations, systematic error, or a one-time erroneous value. The solution must match the “noise” character.


One last thought. Noise is not just a part of process control systems. Humans lie to cover-up or distract attention from things they do not want known. Humans preserve traditions as reality even when they are folklore. Humans claim certainty whether they are certain or not, because uncertainty does not engender followership action or their future leadership-based promotion.


Point 10: Use your understanding of filters to reject “leadership” nonsense.

Greg McMillan
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including "New Directions in Bioprocess Modeling and Control Second Edition 2020" and "Advanced pH Measurement and Control Fourth Edition 2023." Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Greg has recently retired as a part-time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the digital twin for exploring new opportunities. Greg received the ISA Mentoring Excellence Award in 2020 and the ISA Standards Achievement Award in 2023.
