The following discussion is part of an occasional series, "Ask the Automation Pros," authored by Greg McMillan, industry consultant, author of numerous process control books, and 2010 ISA Life Achievement Award recipient. Program administrators will collect submitted questions and solicit responses from automation professionals. Past Q&A videos are available on the ISA YouTube channel. View the playlist here. You can read all posts from this series here.
Looking for additional career guidance, or to offer support to those new to automation? Sign up for the ISA Mentor Program.
Erik Cornelsen’s Question:
When implementing loops using at-line and lab analyzers in a continuous process application, how can this new timestamped process information help to improve process control? Note that the test duration varies, and it takes between 15 and 25 minutes to get the new sample result.
Russ Rhinehart’s Responses:
If the analyzer delay is on the order of (or longer than) the process response time to the controller action, and disturbance events are frequent, then the controller will not be able to control that measurement deviation.
For example, if the process open loop settling time (the process response time to the controller output, the process FOPDT delay plus three time-constants) is 15 to 25 minutes, then the process will take that long to respond to control action. If a changing disturbance causes a deviation, then by the time the controller learns of the deviation, it will be too late to fix it.
A new deviation will be affecting the process. By the time the controller can implement a fix, that disturbance event will have passed, and the fix for the past event will be superimposed on a new disturbance event.
In this case the controller will be responding to noise, tampering with the process, and I would suggest implementing an SPC (statistical process control) filter on the controller output. See Muthiah, N., and R. Russell Rhinehart, “Evaluation of a Statistically-Based Controller Override on a Pilot-Scale Flow Loop”, ISA Transactions, Vol. 49, No. 2, pp 154-166, 2010. Alternatively, one could detune the controller.
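To make the idea concrete, the minimal Python sketch below shows one simple, Shewhart-style way to hold the controller output unless the deviation is statistically significant. It is only an illustration of the concept, not the filter developed in the cited paper; the noise estimate, the 3-sigma limit, and the class/variable names are assumptions.

```python
# Simplified sketch of a statistically based override on a PID output.
# Illustration of the concept only; thresholds and names are assumed.

class SPCOverride:
    def __init__(self, noise_sigma, n_sigma=3.0):
        self.noise_sigma = noise_sigma   # estimated std. dev. of measurement noise
        self.n_sigma = n_sigma           # control-limit multiplier (e.g., 3-sigma)
        self.held_output = None          # last output actually sent to the valve

    def filter(self, pid_output, pv, sp):
        """Pass the new PID output only when the deviation from set point is
        statistically significant; otherwise hold the previous output."""
        if self.held_output is None:
            self.held_output = pid_output
        elif abs(pv - sp) > self.n_sigma * self.noise_sigma:
            # Deviation exceeds the control limit: accept the new controller move.
            self.held_output = pid_output
        # Inside the limits the deviation is treated as noise and the output is held.
        return self.held_output
```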
If the disturbances are infrequent, persist for a long time relative to the analyzer cycle, and the changes in analyzer delay are also infrequent, then classic tuning for dead time dominant processes should work. Use the current analyzer delay to adjust the dead time in, for instance, Cohen and Coon tuning. I do not know of a solution if the delay time cannot be forecast, except to accept a detuned conventional controller.
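As a sketch of using the current analyzer delay in the tuning, the snippet below recomputes classic Cohen-Coon PI settings for a first order plus dead time model whenever the total dead time (process dead time plus analyzer delay) changes. The gain, time constant, and delays in the example call are illustrative, not taken from the question.

```python
# Sketch: recompute Cohen-Coon PI tuning when the analyzer delay changes.
# Total dead time is assumed to be process dead time plus current analyzer delay.

def cohen_coon_pi(process_gain, time_constant, dead_time):
    """Classic Cohen-Coon PI settings for a first order plus dead time model."""
    r = dead_time / time_constant
    kc = (1.0 / process_gain) * (1.0 / r) * (0.9 + r / 12.0)
    ti = dead_time * (30.0 + 3.0 * r) / (9.0 + 20.0 * r)
    return kc, ti

# Example: 15 min process dead time plus a 20 min analyzer delay (illustrative values)
kc, ti = cohen_coon_pi(process_gain=2.0, time_constant=10.0, dead_time=15.0 + 20.0)
print(f"Kc = {kc:.3f}, Ti = {ti:.1f} min")
```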
A standard solution is to consider cascade control. If there is an easy-to-measure secondary variable (temperature, pressure, differential pressure, conductivity, etc.) that could provide an early indication of the primary analyzer variable value, then control that secondary variable, and use feedback from the delayed analysis to adjust the set point for the secondary variable. The primary controller feedback could even use heuristic rules to adjust the secondary set point.
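The following is a minimal sketch of that cascade idea: the fast secondary loop holds its set point continuously, while each new (delayed) analysis trims that set point. The integral-only trim below is one simple heuristic; the gain, limits, and function name are assumptions.

```python
# Minimal sketch: delayed analyzer feedback trims the secondary set point.
# Call only when a fresh analysis is reported, not every controller scan.

def trim_secondary_setpoint(secondary_sp, analyzer_pv, analyzer_sp,
                            trim_gain=0.1, sp_low=50.0, sp_high=90.0):
    """Adjust the secondary set point by a fraction of the primary error."""
    error = analyzer_sp - analyzer_pv
    new_sp = secondary_sp + trim_gain * error
    # Clamp to a safe operating window for the secondary variable.
    return max(sp_low, min(sp_high, new_sp))
```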
The soft sensor uses process signals (e.g., temperature, density, pressure) to infer the measurement produced by the analyzer. The soft sensor generates a continuous signal.
A multivariable autoregressive (ARX) model may also help identify unsuspected relationships between process inputs and process outputs.
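As a sketch of how such a model could be identified from historical data, the snippet below fits a single-output ARX model by ordinary least squares with NumPy. The data arrays, model orders, and input choices are placeholders for whatever the application provides.

```python
# Sketch of fitting a single-output ARX model by least squares.
# Model: y[k] = a1*y[k-1] + ... + b1*u1[k-1] + b2*u2[k-1] + ...

import numpy as np

def fit_arx(y, u, na=2, nb=1):
    """y: output series (analyzer or lab value held between samples),
    u: 2-D array of input series (columns = process inputs),
    na/nb: output and input orders."""
    n = len(y)
    start = max(na, nb)
    rows, targets = [], []
    for k in range(start, n):
        past_y = [y[k - i] for i in range(1, na + 1)]
        past_u = [u[k - i, j] for j in range(u.shape[1]) for i in range(1, nb + 1)]
        rows.append(past_y + past_u)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # a-coefficients first, then b-coefficients per input
```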
There are dynamics associated with both the analyzer, such as a gas chromatograph (GC), and the process itself. For the analyzer, the dominant contribution is the delay associated with the length of tubing between the sample location and the analyzer. The process part includes delays and lags (time constants) that reflect the dynamics of flows and volumes/holdups between the measurement location and the analyzer sample location. The process measurements we normally think of are temperatures or flows.
To minimize dead time associated with the analyzer, a “fast loop” that continuously circulates process fluid is normally used. Samples are then “pulled” from the fast loop to the analyzer. This is standard practice. The analyzer cycle time determines how often an analysis result is reported; for control purposes, it is obviously best to minimize it.
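For a rough feel of the transport contribution, the short calculation below estimates the tubing dead time as tubing volume divided by sample flow. The tubing dimensions and flow rate are purely illustrative.

```python
# Rough sketch: dead time from sample transport = tubing volume / sample flow.

import math

tube_id_m = 0.004              # 4 mm inner diameter (illustrative)
tube_length_m = 60.0           # run from tap to analyzer shelter (illustrative)
fast_loop_flow_m3_s = 2.0e-5   # ~1.2 L/min fast-loop flow (illustrative)

tube_volume_m3 = math.pi / 4.0 * tube_id_m**2 * tube_length_m
transport_delay_s = tube_volume_m3 / fast_loop_flow_m3_s
print(f"Transport delay ~ {transport_delay_s:.0f} s")

# A new result is also only available once per analyzer cycle, so the effective
# reporting delay is often taken as the transport delay plus roughly one cycle.
```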
Both analyzer and process dynamics are present when relating the analyzer to a process measurement. This is often modeled with a first order plus dead time model, although the effect of multiple lags is usually observed. With an online process analyzer, the time stamp of a new analyzer result is not needed for control because of the dynamic relationship (normally assumed fixed) that exists between the process measurement and the analyzer result, although this relationship can only be observed when a new analysis is reported. Note that the dead time does not change with the cycle time of the analyzer.
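The sketch below illustrates that point: a first order plus dead time response relates the process measurement to the composition, but the analyzer only reports that response once per cycle (sample and hold). All parameter values are illustrative assumptions.

```python
# Sketch: continuous FOPDT relationship, observed only at each analyzer cycle.

import numpy as np

dt = 0.5                                 # simulation step, min
t = np.arange(0, 120, dt)
tau, theta, gain = 8.0, 5.0, 1.5         # FOPDT parameters (illustrative)
cycle = 20.0                             # analyzer cycle time, min

u = np.where(t >= 10.0, 1.0, 0.0)        # step change in the process measurement
y = np.zeros_like(t)
delay_steps = int(theta / dt)
for k in range(1, len(t)):
    u_delayed = u[k - delay_steps] if k >= delay_steps else 0.0
    y[k] = y[k - 1] + dt / tau * (gain * u_delayed - y[k - 1])

# Analyzer output: latest y value held until the next reported analysis.
reported = np.zeros_like(t)
last = 0.0
for k in range(len(t)):
    if (t[k] % cycle) < dt:              # a new analysis is reported this step
        last = y[k]
    reported[k] = last
```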
The other situation is when a process analyzer is not present and instead samples are collected manually and taken to a lab. For this case, an accurate time stamp is necessary to match or sync up the sample result with process conditions. In my experience, sample times are not sufficiently accurate for control applications and sample collection procedures must be changed.
As an example, it is common that a sample is routinely taken at the start of a shift; however, the actual time of the sample could easily vary by 30 minutes, perhaps longer than one hour. One solution to this problem is for the lab technician or outside operator to notify the console operator when the sample is taken. The console operator then manually changes a digital point or switch in the DCS, which then records the current time or stores current values (often averaged) of the corresponding process measurements.
An automatic way to do this is to install a raw thermocouple in the sample collection line. The resulting temperature spike from a sample collection is used to trigger a switch to record the current time or store current values.
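A minimal sketch of that spike-trigger logic is shown below: when the sample-line thermocouple jumps by more than a threshold, the time and the (averaged) related process values are stored. The threshold, tag handling, and function name are placeholders for whatever the DCS or historian actually provides.

```python
# Sketch: detect the temperature spike from a manual sample pull and log
# the time plus averaged process values. Thresholds and names are assumed.

from datetime import datetime

SPIKE_THRESHOLD = 10.0      # deg C rise that indicates a sample was pulled
sample_log = []

def on_new_tc_reading(tc_now, tc_previous, process_averages):
    """Call at each scan with current/previous thermocouple values and a
    dict of averaged process measurements to be stored with the sample."""
    if tc_now - tc_previous > SPIKE_THRESHOLD:
        sample_log.append({
            "timestamp": datetime.now(),
            "process_values": dict(process_averages),
        })
```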
Implications for control. There are two ways to use a process analyzer for control. One is as the PV to a PID loop. For the best performance, the controller should only execute upon a new analyzer result. If the analyzer comes into the DCS via a digital interface, there is often a digital signal associated with a new analysis result that can be used to trigger the PID loop. If the signal comes into the DCS as an I/O point (4-20 mA), logic is normally required to infer a new analysis based on a change in the analyzer signal.
For MPC, most products have a new-result switch associated with each controlled variable that will need to be triggered in the same way. When brought in as an analog input, there is usually some noise superimposed on the analyzer signal that must be accounted for in the detection logic, as sketched below. There may also be additional analyzer information that can be quite helpful to use, including status/validity information and when the analyzer is being calibrated.
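This is a minimal sketch of that detection logic for an analog (4-20 mA) analyzer signal: a change larger than the noise band is treated as a fresh analysis, which can then trigger the PID execution or set the MPC new-result switch. The noise band value and function name are assumptions.

```python
# Sketch: infer a new analysis from an analog analyzer signal using a noise band.

NOISE_BAND = 0.05           # change smaller than this is treated as noise (assumed)
last_reported = None

def check_for_new_result(analyzer_value):
    """Return True when the analog value has moved by more than the noise
    band since the last accepted result."""
    global last_reported
    if last_reported is None or abs(analyzer_value - last_reported) > NOISE_BAND:
        last_reported = analyzer_value
        return True         # trigger PID execution / MPC new-result switch
    return False
```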
The other way is to use the analyzer result to update an inferential or soft sensor model using continuously measured values like temperature and pressure. Similar logic is required to update the model upon a new analysis. This approach is also applicable to lab measurements. When a new lab analysis is reported, the time stamp is used to retrieve historical values, or stored values are used with the new lab result to update the model.
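Below is a minimal sketch of a common bias-update scheme for such a model: the inferential prediction runs continuously, and each new lab result corrects its bias using process values retrieved at the sample time stamp. The linear model form, coefficients, filter factor, and the historian lookup function are all placeholders, not a specific product's method.

```python
# Sketch: filtered bias update of a soft sensor from a time-stamped lab result.

FILTER = 0.3                # fraction of the new error applied to the bias (assumed)
bias = 0.0

def soft_sensor(temperature, pressure, a=0.8, b=0.02, c=1.5):
    """Illustrative linear inferential model plus bias correction."""
    return a * temperature + b * pressure + c + bias

def update_bias(lab_value, sample_time, get_historical):
    """get_historical(sample_time) returns (temperature, pressure) at the lab
    sample time stamp, or the stored values captured when the sample was pulled."""
    global bias
    temperature, pressure = get_historical(sample_time)
    predicted = soft_sensor(temperature, pressure)
    bias += FILTER * (lab_value - predicted)
```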
Note that unless accurate time stamps are already in place, it may be difficult to develop an inferential model. In addition, there may not be sufficient data and/or movement in the process, and a dedicated test may be required.
When using analyzers in control, it is worthwhile to consider how they can be sped up. Options include changing the stream sequence or reallocating analyzers to a different service.
It is also important to consider whether the insulation of the sample line is adequate to avoid condensation and/or evaporation that can occur at ambient temperature.