This guest blog post is part of a series written by Edward J. Farmer, PE, author of the new ISA book Detecting Leaks in Pipelines. To read all the posts in this series, scroll to the bottom of this post for the link archive.
As with most complex, multidimensional process management ventures, a good result usually begins with good science and robust methodology. Consequently, most successful operators have developed proprietary methods of identifying, assessing, and mitigating risk. This methodology is usually based on experience and the developed wisdom of people who have "been there and done that."
People with such breadth of experience and depth of practice are becoming scarce as specialization within an ever more complex industry is compounded by the inevitable retirements. The ability to anticipate, assess, and mitigate risks becomes more difficult to exercise with confidence year by year.
Situations such as this spawn ventures into science and the development of concepts and processes that formulate methodical processes that aid in evaluation and ensure a path toward dependable results. In pipeline safety two divergent approaches have emerged that are referred to as the “deductive method” and the “inductive method.”
The deductive method, popularized by the tales of Sherlock Holmes, begins with an occurrence of a certain character at a certain place and time, from which one deduces how it could have happened. In risk assessment language, an event and its cause are linked by a series of happenings, a "causality chain," that somehow connects them. Often the chain is brutally short: a backhoe hits a buried line. Sometimes it is fascinatingly complex, such as a pressure safety valve failing to protect some item of vulnerability, resulting in a sort of cascading failure.
The inductive method asks what causality chain or initiating occurrence could cause a presumed or actual accident or leak. The analysis is often based on causal experience, such as that captured in U.S. Department of Transportation accident statistics. It works backward from the leak, considering the potential linkages and branches in the causality chain to identify a source, or set of events, that could produce the subject outcome. This is the traditional approach to safety analysis, and it flows from the experience, wisdom, insight, intuition, and sometimes the creativity of the investigators.
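To make the statistics-driven side of this concrete, here is a minimal sketch, in Python, of how reported incident counts might be turned into a ranking of likely leak causes. The cause categories echo those used in DOT incident reporting, but the counts and the code itself are illustrative placeholders, not actual statistics or any particular operator's method.

```python
# Illustrative sketch only: the cause categories mirror those used in DOT/PHMSA
# incident reporting, but the counts below are placeholders, not real statistics.
from collections import Counter

# Hypothetical incident tallies by reported cause (placeholder values).
incident_counts = Counter({
    "excavation damage": 120,
    "corrosion": 95,
    "material/weld failure": 70,
    "equipment failure": 55,
    "incorrect operation": 30,
    "natural force damage": 20,
})

total = sum(incident_counts.values())

# Rank causes by observed frequency -- the inductive starting point for deciding
# which causality chains deserve the most attention on a given pipeline.
for cause, count in incident_counts.most_common():
    print(f"{cause:25s} {count:4d}  ({count / total:.1%} of incidents)")
```

In practice an operator would substitute its own segment-specific history and judgment for these placeholder numbers before drawing any conclusions.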
The consequences of some accidents or failures are far too serious to leave to intuition. The deductive method was formalized by the U.S. Nuclear Regulatory Commission (NRC) in NUREG-0492, the "Fault Tree Handbook." It applies the systematic methodology of fault tree analysis to the often-complex causality chains that lead to accidents. The process involves identifying the information, events, or circumstances along the path from normal operation to failure.
The process begins with the observation that a system is a deterministic entity comprising an interacting collection of discrete elements. Analyzing a particular output of the system means examining the discrete elements involved in causing, facilitating, or enabling the occurrence. From this analysis it is possible to assess causality and even occurrence probability. It is a correct, fundamental, documentable, and highly pedantic method of producing repeatable analyses. This has made it attractive in many important or high-risk ventures with potentially serious or even catastrophic consequences.
The main drawback is the work involved. It requires a thorough understanding of the process and a detailed understanding of its components, as well as some grasp of the statistical behavior and impact each of these systems, subsystems, and occurrences contributes to the likelihood of a particular result. With all that done, however, the method supports a quantitative determination of the potential outcome of a particular occurrence or combination of occurrences.
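As a rough illustration of what that quantitative determination looks like, the sketch below propagates hypothetical basic-event probabilities through AND and OR gates to a top event. The event names, probability values, and tree structure are invented for illustration, and the gate formulas assume independent basic events, an assumption a real fault tree analysis would have to justify.

```python
# A minimal sketch of the quantitative side of fault tree analysis.
# Basic-event probabilities and the tree structure are hypothetical, and the
# gate formulas assume independent basic events, which a real analysis must verify.
from math import prod

def p_and(*probs):
    """Probability that all inputs of an AND gate occur (independence assumed)."""
    return prod(probs)

def p_or(*probs):
    """Probability that at least one input of an OR gate occurs (independence assumed)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical basic events for an overpressure-rupture top event.
p_relief_valve_fails = 1e-3     # pressure safety valve fails to open on demand
p_operator_misses_alarm = 5e-2  # operator does not intervene in time
p_pressure_excursion = 1e-2     # upset that drives pressure above design limits
p_third_party_strike = 2e-4     # e.g., a backhoe hits the buried line

# Rupture occurs if an excursion happens AND both protections fail, OR the line is struck.
p_protection_fails = p_and(p_relief_valve_fails, p_operator_misses_alarm)
p_top_event = p_or(p_and(p_pressure_excursion, p_protection_fails), p_third_party_strike)

print(f"Estimated top-event probability: {p_top_event:.2e}")
```

Even this toy tree shows where the effort goes: every basic-event probability has to be defended with data or expert judgment before the top-event number means anything.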
These two approaches have been described using words like "vastly different," "night and day," "conjecture vs. determinism," and many others along the same lines. One skilled and experienced person described the inductive method as the "wild-ass guess" and the deterministic method as "a lifetime's work." The NRC's interest in the pedantic and deterministic yet highly repeatable deductive approach is understandable. There are a small number of systems that require such analysis and a large cast of people to perform it.
The potential consequences are huge, justifying the cost and inconvenience of assembling these people into analysis teams that spend months, perhaps years, achieving a precise understanding of the systems and the interactions of their component parts. This sometimes (often) requires subject-matter experts on various equipment and process features who can be hard to find, combine, and schedule.
On the other hand, many industrial systems are simpler and less opaque than the complex systems the NRC envisioned. An adequate understanding of them, along with sufficient information, may not be as rare, and the logic that comes from experience and even "common sense" may be not only pertinent but adequate.
Once upon a time, a critical project was deemed to require the best possible risk analysis. It required several subject-matter experts, lots of hard-to-obtain data, and an analysis team capable of understanding, assembling, and presenting the results. All that work provided very useful insights, but nothing that the experienced people did not intuit. It took about three times as long and cost about five times as much as the usual company approach.
In the end, the results were judged by experienced reviewers to be "as expected." This seemed to indicate that the method was a slow and expensive way to reach the same conclusion that experience and logic might predict. On the other hand, the results were clearly methodical, thorough, and repeatable, which probably facilitated getting construction and operating permits.
As with many inherently good ideas, the optimal implementation may involve some synthesis, which seems to be the case with some of the techniques used by industry leaders. All of this is worth thinking about, and perhaps re-thinking every now and again. My book, Detecting Leaks in Pipelines, includes a discussion of risk assessment and analysis involving each of these methods, along with some additional insight. In any case, the road to where you are trying to go begins with knowing where that is and how to get there. This kind of analysis can be a big help in making an optimal start on your way.
How to Optimize Pipeline Leak Detection: Focus on Design, Equipment and Insightful Operating Practices
What You Can Learn About Pipeline Leaks From Government Statistics
Is Theft the New Frontier for Process Control Equipment?
What Is the Impact of Theft, Accidents, and Natural Losses From Pipelines?
Can Risk Analysis Really Be Reduced to a Simple Procedure?
Do Government Pipeline Regulations Improve Safety?
What Are the Performance Measures for Pipeline Leak Detection?
What Observations Improve Specificity in Pipeline Leak Detection?
Three Decades of Life with Pipeline Leak Detection
How to Test and Validate a Pipeline Leak Detection System
Does Instrument Placement Matter in Dynamic Process Control?
Condition-Dependent Conundrum: How to Obtain Accurate Measurement in the Process Industries
Are Pipeline Leaks Deterministic or Stochastic?
How Differing Conditions Impact the Validity of Industrial Pipeline Monitoring and Leak Detection Assumptions
How Does Heat Transfer Affect Operation of Your Natural Gas or Crude Oil Pipeline?
Why You Must Factor Maintenance Into the Cost of Any Industrial System
Raw Beginnings: The Evolution of Offshore Oil Industry Pipeline Safety
How Long Does It Take to Detect a Leak on an Oil or Gas Pipeline?
Book Excerpt + Author Q&A: Detecting Leaks in Pipelines
About the Author
Edward Farmer has more than 40 years of experience in the "high tech" part of the oil industry. He originally graduated with a bachelor of science degree in electrical engineering from California State University, Chico, where he also completed the master's program in physical science. Over the years, Edward has designed SCADA hardware and software, practiced and written extensively about process control technology, and worked extensively in pipeline leak detection. He is the inventor of the Pressure Point Analysis® leak detection system as well as the Locator® high-accuracy, low-bandwidth leak location system. He is a Registered Professional Engineer in five states and has worked on a broad scope of projects worldwide. His work has produced three books, numerous articles, and four patents. Edward has also worked extensively in military communications, where he has authored many papers for military publications and participated in the development and evaluation of two radio antennas currently in U.S. inventory. He is a graduate of the U.S. Marine Corps Command and Staff College. He is the owner and president of EFA Technologies, Inc., manufacturer of the LeakNet family of pipeline leak detection products.