ISA Interchange

Welcome to the official blog of the International Society of Automation (ISA).

This blog covers numerous topics in industrial automation, such as operations & management, continuous & batch processing, connectivity, manufacturing & machine control, and Industry 4.0.

The material and information contained on this website are for general information purposes only. ISA blog posts may be authored by ISA staff and guest authors from the automation community. Views and opinions expressed by a guest author are solely their own, and do not necessarily represent those of ISA. Posts made by guest authors have been subject to peer review.


What Is a Meaningful Sample for a Statistical Analysis?

We wonder about conditions on our processes, and we try to assess them with sampled data. Every so often, we take a fresh reading of some hopefully relevant parameter, analyze it in some way, and draw a conclusion about the state or condition of the process.

A pressure reading of 150 psi, for example, tells us the pressure at a specific time at a specific location within the process system. We have been informed that in “normal” operation, 150 psi is the common value. Further investigation might reveal that no reading other than 150 psi had ever been recorded, even when the process was down for turnaround.

More frequent sampling might have led to the conclusion that the monitored process was extremely stable, that process noise was unobservably small at that point in the system, and that there were virtually no process excursions of any kind. In fact, the process was always at its operating point, whether it was turned on or not. Replacing the pressure gauge with a recently tested and calibrated one might have led to diverse speculation, and to different outcomes.
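The underlying lesson generalizes: a live measurement should show at least a little noise. As a rough illustration (the function name and threshold below are mine, not anything from the systems described here), a few lines of Python can flag a signal that is suspiciously flat:

```python
import numpy as np

def looks_flatlined(readings, min_std=0.01):
    """Flag a window of readings whose spread is implausibly small.

    A live process signal almost always carries some noise; a window
    with essentially zero spread suggests a stuck gauge or a dead
    input channel. The threshold is illustrative and instrument-specific.
    """
    return float(np.std(readings)) < min_std

print(looks_flatlined([150.0] * 60))                            # True: suspiciously quiet
print(looks_flatlined(150.0 + np.random.normal(0.0, 0.2, 60)))  # False: ordinary noise
```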

Even when the reading we get is absolutely correct, we get it at some interval. What can happen between those measurements? The values we see for a parameter can, and often do, change at least a bit with every reading. If they didn’t, we might wonder if they were real. Generally, they can also change between readings.
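To make that concrete, here is a minimal sketch, with illustrative numbers of my own choosing, of how a brief process upset can fall entirely between samples taken at a comfortable-looking interval:

```python
import numpy as np

# Illustrative only: a "true" pressure trace at 1 ms resolution,
# nominally 150 psi, with a brief 2-second upset to 180 psi.
t = np.arange(0.0, 600.0, 0.001)               # 10 minutes of process time, in seconds
pressure = np.full_like(t, 150.0)
pressure[(t >= 310.0) & (t < 312.0)] = 180.0   # upset between two sample instants

# Poll the gauge once per minute, as a leisurely scan rate might.
sample_times = np.arange(0.0, 600.0, 60.0)
samples = np.interp(sample_times, t, pressure)

print(samples.min(), samples.max())   # 150.0 150.0 -- every reading looks normal
print(pressure.max())                 # 180.0 -- the upset happened, unobserved
```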

One colleague, frustrated by a signal bandwidth problem with a digital measuring system (now frighteningly primitive), remarked that the event he was looking for “could finish college, raise kids, and go on vacation during this measurement interval.”

In the 1960s and 1970s, a lot of process control work was done in analog because the necessary performance of digital systems just wasn’t there yet. An important tool in every designer’s bag was the 741 Operational Amplifier. With a few external components, it could perform many interesting functions into the kilohertz region. Doing the same thing digitally in those days was difficult, expensive, and perhaps just the product of a dream.



If you would like more information on how to purchase the author's book, Detecting Leaks in Pipelines, click this link.


In those days, the frontier for process control was getting the inherently analog process data into some digital form that could be easily transmitted and analyzed. Analog-to-digital converters (we called them ADCs) with useful bandwidth were available, but the more useful they were, the more they cost, an impediment to prolific use in industrial products.

We could see the future, though. We were constantly reminded of Moore’s law and would doodle on our calculation pads, working out how many more two-year intervals stood between us and the equipment necessary for some amazing ideas. Of course, as the equipment became more amazing, so did the ideas envisioned for it.
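The doodle itself is simple arithmetic: at a doubling every two years, the number of intervals to wait is the base-2 logarithm of the speedup you need. A small sketch, with purely hypothetical numbers:

```python
import math

def doubling_intervals(target_speedup, period_years=2.0):
    """How many Moore's-law doubling periods until a target speedup?

    Assumes the popular reading of Moore's law: capability doubles
    every `period_years`. Back-of-the-calculation-pad math only.
    """
    doublings = math.ceil(math.log2(target_speedup))
    return doublings, doublings * period_years

# An idea that needs hardware 1000x more capable than today's:
doublings, years = doubling_intervals(1000)
print(doublings, years)   # 10 intervals, about 20 years of waiting
```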

If knowing a value was useful, knowing a trend was even better. The things you could do, and the conditions you could anticipate just by looking at the statistics of an emerging population, were truly remarkable. Before long, the process noise we had worked so hard to eliminate in our analog equipment demanded more sophisticated methods in our digital systems. It was easier to do the math and develop a solution methodology than it was to find a way for a computer to solve all that stuff. Fortunately, John F. Kennedy’s 1962 vision of going to the moon and coming back again by the end of the decade made a lot of things happen.
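As a flavor of what the statistics of an emerging population can buy, here is a minimal sketch, with made-up data and an arbitrary threshold, of a rolling-window check that flags a reading straying outside control limits:

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up data: readings around 150 psi with modest noise,
# plus a sustained 2.5 psi upset starting at sample 150.
readings = 150.0 + rng.normal(0.0, 0.3, 200)
readings[150:] += 2.5

window = 30   # the "emerging population" we keep statistics on
for i in range(window, len(readings)):
    recent = readings[i - window:i]
    mean, std = recent.mean(), recent.std()
    # Conservative 4-sigma check against the recent population
    # (the multiplier is illustrative, not a recommendation).
    if abs(readings[i] - mean) > 4.0 * std:
        print(f"sample {i}: {readings[i]:.2f} psi is outside 4-sigma of {mean:.2f} psi")
        break
```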

At our present level of process knowledge, the science behind the tasks we now undertake is within the capability of computing equipment we can afford to employ. One outcome of this work is our deepening understanding of what is required to get where we are trying to go. That may well take us to the next horizon, from which we can see the next big thing.

I have to say, I love science and our journey into it.



Edward J. Farmer
Edward J. Farmer, PE, ISA Fellow, is the author of the ISA book "Detecting Leaks in Pipelines." He has more than 40 years of experience in the "high tech" part of the oil industry.
