
How to Test and Validate a Pipeline Leak Detection System


This guest blog post is part of a series written by Edward J. Farmer, PE, author of the new ISA book Detecting Leaks in Pipelines. To read all the posts in this series, scroll to the bottom of this post for the link archive.


As in all process control work, some understanding of the underlying science is involved, and experience with what actually happens in the monitored system can be very useful. That suggests it is prudent to test over a range of conditions so that performance and application limits can be reliably assessed.

Over the years, the technology I’ve developed and the systems in which it is implemented have been tested thousands of times, many of those tests supervised by industry or government agencies as well as by the organizations that deployed the systems. While the core system capabilities are well understood, there are often differences, sometimes subtle and sometimes huge, in the characteristics of a particular application.

Most of the time, good analysis coupled with some on-line measurements and observations moves the system design into the “ballpark,” so in-place performance testing is more of a quantification exercise than a verification that it all really does work.


For in-situ testing, I’ve settled over the years on a process that begins with a really big leak, perhaps 2 to 5 percent of the line flow rate. If the concept is working correctly, it should be detected in a few seconds and located precisely. From there, the leak size is reduced, usually by halves, until detection fails. That establishes the performance limit. Results are plotted, and the curves through the test points are examined, considering the algorithms involved, to assess the quality and dependability of the results.
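To make the halving procedure concrete, here is a minimal sketch in Python. The `fake_field_test` stand-in and all of its numbers are invented for illustration; in practice, each call to `detect` would be a full field test through a test manifold.

```python
# Sketch of the halving test procedure described above. Everything here
# is illustrative; a real test replaces fake_field_test() with an actual
# field test at the given leak rate.

def find_detection_limit(line_flow, detect, start_fraction=0.05):
    """Halve the test leak until detection fails.

    detect(rate) runs one test and returns the detection time in
    seconds, or None if no alarm occurred. Returns the smallest
    detected rate and a log of {rate: detection_time}.
    """
    results = {}
    rate = start_fraction * line_flow   # start big: ~5% of line flow
    smallest_detected = None
    while True:
        t = detect(rate)
        results[rate] = t
        if t is None:                   # detection failed: limit found
            break
        smallest_detected = rate
        rate /= 2.0                     # halve and repeat
    return smallest_detected, results

if __name__ == "__main__":
    LINE_FLOW = 1000.0  # e.g., barrels per hour (toy number)

    def fake_field_test(rate):
        # Toy model: leaks below 0.2% of line flow go undetected,
        # and smaller leaks take longer to detect.
        return None if rate < 0.002 * LINE_FLOW else 500.0 / rate

    limit, log = find_detection_limit(LINE_FLOW, fake_field_test)
    print("smallest detected leak rate:", limit)
    for r, t in sorted(log.items(), reverse=True):
        print(f"  {r:8.3f} -> {t if t is not None else 'no alarm'}")
```

Plotting the logged points gives the performance curves the following paragraphs refer to.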

This process covers the range of applicability and is easily repeated to confirm and re-confirm results over time. Many customers test periodically to ensure that everything that needs to work, and was working at the last test, is still providing the necessary information: the observations that make the algorithms work. Testing that isn’t repeatable is not scientific, nor is it especially valuable in establishing dependable performance.

Many users install test manifolds, or taps for them, so leaks with specific orifice sizes can be reliably tested. Timing usually runs from the opening of the leak to the alarm that it has been detected. When multiple algorithms are involved, separate alarms can occur for each of them, and each is timed to verify that all components of the system are performing “on the curves” as expected.
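As one sketch of what checking “on the curves” might look like, the snippet below compares each algorithm’s measured detection time against the value read off its performance curve. The algorithm names, times, and 25 percent tolerance are assumptions made for illustration only.

```python
# Compare measured detection times against expected curve values.
# Names and numbers are hypothetical; the expected values would come
# from the plotted test curves described above.

def within_curve(measured_s, expected_s, tolerance=0.25):
    """True if a measured detection time is within 25% of the curve value."""
    return measured_s is not None and measured_s <= expected_s * (1 + tolerance)

expected = {"pressure_wave": 8.0, "volume_balance": 45.0}   # seconds
measured = {"pressure_wave": 7.2, "volume_balance": 51.0}   # one test's results

for algo, exp_t in expected.items():
    status = "on curve" if within_curve(measured.get(algo), exp_t) else "investigate"
    print(f"{algo}: measured {measured.get(algo)} s, expected ~{exp_t} s -> {status}")
```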

Unless there is some specific objective that requires otherwise, all tests are initiated from a stable line. Sometimes that means the readings are all stable; sometimes it means the line has been running in its normal manner for some time. Unusual or artificially induced transient conditions may affect the system’s response characteristics and will probably be hard to reproduce. More testing, rather than artificial conditions, normally produces the more pertinent and dependable results.

In any case, the algorithms automatically tune and adapt to the actual conditions observed on the pipeline. When conditions change, e.g., from a leak, the algorithms assess, in a configurable manner, why the change occurred and where it originated. That provides an opportunity to optimize sensitivity, specificity, and detection time. Automatic tuning generally provides excellent results and can be “tweaked” for special conditions when necessary. Essentially, the system leaves very little to the operator and can be started up in a few hours.
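The author’s tuning algorithms are not described here, but the general idea of adapting to observed conditions can be illustrated with a common, generic pattern: an alarm threshold that tracks an exponentially weighted running mean and variance of a measurement, with a sensitivity knob that can be “tweaked.” This is a sketch of the idea, not the actual method.

```python
# Generic illustration of automatic tuning (not the author's proprietary
# method): an alarm threshold that adapts to the mean and variability
# actually observed on the line.

class AdaptiveThreshold:
    def __init__(self, alpha=0.01, k_sigma=4.0, warmup=100):
        self.alpha = alpha      # tuning rate: smaller = slower adaptation
        self.k_sigma = k_sigma  # sensitivity knob: alarm on k-sigma deviations
        self.warmup = warmup    # readings to observe before arming alarms
        self.mean = None
        self.var = 0.0
        self.n = 0

    def update(self, reading):
        """Feed one reading; return True if it deviates enough to alarm."""
        self.n += 1
        if self.mean is None:
            self.mean = reading
            return False
        dev = reading - self.mean
        alarm = self.n > self.warmup and abs(dev) > self.k_sigma * self.var ** 0.5
        if not alarm:
            # Adapt only while conditions look normal, so a developing
            # leak is not quietly "tuned in" as the new baseline.
            self.mean += self.alpha * dev
            self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return alarm
```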

There are some products based on the idea of pattern recognition, generally implemented using Bayesian statistics. The core idea is that if a data set that looks like the pattern a leak would produce is created and compared against blocks of actual data, it becomes possible to detect the event through statistical pattern comparison. There are two problems with this. First, except for some highly unusual situations, there is no dependable pattern, no “fingerprint,” for a leak.

The observable (measurable) characteristics are related by Newton’s momentum equation: how the relevant data appear is a direct result of the ratio of the force (e.g., from the difference between the pipeline pressure and the outside pressure at the leak location) to the mass of the fluid involved. Second, exacerbating the problem, there are often issues with the volatility (vapor pressure) of the fluid in the pipeline. Solving this problem normally involves using many Bayesian test data sets and comparing each of them to conditions in the pipeline, all while still risking that the next leak might not have a “signature” on file in the inference engine.
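In rough terms (the symbols below are generic, not notation from the book), the driving force at the opening is the pressure difference across it times the orifice area, and the resulting fluid acceleration, which governs how quickly an observable transient develops, is that force divided by the mass involved:

$$F = \left(P_{\text{line}} - P_{\text{ambient}}\right) A_{\text{orifice}}, \qquad a = \frac{F}{m}$$

The same orifice can therefore produce very different transients in, say, a heavy crude line and a light, volatile products line, which is one reason a single stored “fingerprint” rarely transfers between applications.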

Simply put, the detection methodology, and the testing of it, must reflect realistic process-monitoring conditions as closely as possible. Years ago, many users did impromptu testing in which a team would set up and conduct a test without informing the operators. Results were assessed on the basis of how long it took to implement the response plan. That is less common today but is worth thinking about.

Good test records are useful for identifying trends and the appearance of unusual conditions. We have on-line and off-line analysis systems that support careful analysis and review, not only of leaks but also of unusual operating occurrences and situations. The algorithms store data for every significant event and archive all selected data over time. This allows investigation of why, for example, a situation that was not a problem a few weeks ago is a problem today.
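A minimal sketch of what such archived records might look like follows; the field names are assumptions, not the archive format of any particular product. The point is simply that structured, timestamped records make trend questions answerable.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative test/event record for trend analysis; the fields are
# assumptions, not any product's actual archive format.

@dataclass
class LeakTestRecord:
    timestamp: datetime
    orifice_size_mm: float
    leak_rate_pct: float                # percent of line flow
    algorithm: str
    detection_time_s: Optional[float]   # None means "not detected"
    line_conditions: dict = field(default_factory=dict)  # pressures, flows, etc.

def detection_trend(records, algorithm):
    """Detection times over calendar time for one algorithm.

    Plotting this makes drift visible, e.g., a leak size that alarmed
    in seconds a few weeks ago now taking a minute.
    """
    return sorted((r.timestamp, r.detection_time_s) for r in records
                  if r.algorithm == algorithm and r.detection_time_s is not None)
```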

Testing provides an assessment of how a system can work and how it is working now, tools for analyzing unusual occurrences, and a vehicle for training and testing operators. Knowing how well things should work, along with whether and how well they are working now, is an important component of safety monitoring.

Link Archive

How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
What Is the Impact of Theft, Accidents, and Natural Losses From Pipelines?
Can Risk Analysis Really Be Reduced to a Simple Procedure?
Do Government Pipeline Regulations Improve Safety?
What Are the Performance Measures for Pipeline Leak Detection?
What Observations Improve Specificity in Pipeline Leak Detection?
Three Decades of Life with Pipeline Leak Detection
How to Test and Validate a Pipeline Leak Detection System
Does Instrument Placement Matter in Dynamic Process Control?
Condition-Dependent Conundrum: How to Obtain Accurate Measurement in the Process Industries
Are Pipeline Leaks Deterministic or Stochastic?
How Differing Conditions Impact the Validity of Industrial Pipeline Monitoring and Leak Detection Assumptions
How Does Heat Transfer Affect Operation of Your Natural Gas or Crude Oil Pipeline?
Why You Must Factor Maintenance Into the Cost of Any Industrial System
Raw Beginnings: The Evolution of Offshore Oil Industry Pipeline Safety
How Long Does It Take to Detect a Leak on an Oil or Gas Pipeline?
Book Excerpt + Author Q&A: Detecting Leaks in Pipelines


About the Author
Edward Farmer has more than 40 years of experience in the “high tech” part of the oil industry. He graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master’s program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and worked at length in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications, where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.
