If you are a manufacturing operations leader, making data-driven decisions in near real time is most likely a key goal. If you are a maintenance leader, access to asset data to drive your condition-based and predictive maintenance initiatives is essential to the future success of your team. All of these scenarios depend on access to, and the readability of, manufacturing operational data in a way that does not interrupt operations or put critical control systems at risk.
This element sounds basic, but it is the critical foundation, and it needs to be done correctly and efficiently. It starts with the process data: the time-series data generated by sensors and instrumentation, along with the calculated values produced within the process systems. This data is captured within the automation controls layer and is mission critical.
No one except the automation control experts should be accessing it. Therefore, this data needs to be extracted into a repository that sits in a different layer, keeping the automation controls layer isolated and secure.
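As a rough illustration of that separation, the sketch below polls tag values (simulated here, since the real source would be a PLC, OPC UA server, or similar gateway) and appends them to a separate time-series store outside the controls layer. The tag names, polling interval, and SQLite storage are hypothetical stand-ins for whatever historian or collector is actually in place.

```python
import sqlite3
import time
import random  # stands in for reads coming through a controls-layer gateway

# Hypothetical tag names; real ones would come from the PLC/OPC UA address space.
TAGS = ["FT-101.PV", "TT-205.PV", "PT-310.PV"]

def read_tag(tag: str) -> float:
    """Placeholder for a read via a gateway or OPC UA client.
    Downstream users never touch the controls layer directly."""
    return round(random.uniform(0.0, 100.0), 2)

def collect(db_path: str = "process_history.db", cycles: int = 3) -> None:
    """Poll each tag and append timestamped samples to a separate store."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, tag TEXT, value REAL)")
    for _ in range(cycles):
        now = time.time()
        rows = [(now, tag, read_tag(tag)) for tag in TAGS]
        con.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)
        con.commit()
        time.sleep(1)  # polling interval; a real collector would be scan- or event-driven
    con.close()

if __name__ == "__main__":
    collect()
```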
The preferred way of doing this is via a process data historian. Process data is only half of the collection element; the other half is the transactional data. These are the all-important operational event header records (i.e., batch numbers, lot numbers, order numbers, quality samples, downtime events, work orders, etc.). To be useful and manageable, this data needs to be kept in a relational database format, such as a SQL-based database.
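To give a sense of what "relational" means here, the sketch below defines a minimal event-header table in SQLite and inserts one sample batch record. The column names and values are illustrative assumptions only; an actual MES or event schema would be considerably richer.

```python
import sqlite3

# Minimal illustrative schema for operational event header records.
# Column names are assumptions, not a reference design.
SCHEMA = """
CREATE TABLE IF NOT EXISTS operation_events (
    event_id     INTEGER PRIMARY KEY,
    event_type   TEXT NOT NULL,      -- e.g. batch, downtime, quality_sample
    batch_number TEXT,
    lot_number   TEXT,
    order_number TEXT,
    work_order   TEXT,
    start_ts     TEXT,
    end_ts       TEXT
);
"""

con = sqlite3.connect("operations.db")
con.executescript(SCHEMA)

# A hypothetical batch record, keyed by the same identifiers the
# process historian can later be queried against.
con.execute(
    "INSERT INTO operation_events "
    "(event_type, batch_number, order_number, start_ts, end_ts) "
    "VALUES (?, ?, ?, ?, ?)",
    ("batch", "B-2024-0117", "SO-55821", "2024-01-17T06:00", "2024-01-17T14:00"),
)
con.commit()
con.close()
```

Keeping these event records relational makes it straightforward to join them back to the time-series data by batch, order, or time window when context is needed for analysis.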
Basic analysis of process data is accomplished with native Microsoft Excel connectivity tools or the historical trending tools found in many human-machine interface/supervisory control and data acquisition (HMI/SCADA) platforms.
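For a flavor of what such basic analysis looks like outside of an Excel sheet or an HMI trend screen, the sketch below uses pandas to turn raw readings for a single tag into hourly averages, a rolling trend, and simple summary statistics. The data is generated in-line for illustration; in practice it would come from the historian's query interface.

```python
import numpy as np
import pandas as pd

# Simulated day of 1-minute flow readings; a real query would pull these
# from the historian for the tag and time range of interest.
idx = pd.date_range("2024-01-17", periods=24 * 60, freq="1min")
raw = pd.Series(
    50 + 5 * np.sin(np.arange(len(idx)) / 90.0) + np.random.normal(0, 1, len(idx)),
    index=idx,
    name="FT-101.PV",
)

hourly_avg = raw.resample("1h").mean()        # hourly averages for reporting
smoothed = raw.rolling(window=15).mean()      # 15-minute rolling trend
daily_stats = raw.agg(["min", "max", "mean"]) # simple shift/day summary

print(hourly_avg.head())
print(daily_stats)
```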
Typical examples of analysis are:
It is now time to socialize the results. A good socialization solution needs to address all of these interests and must have access to the data that supports these divergent needs. The interest at the plant management level is naturally going to be a mixture of both high-level and detailed views.
Hence, when selecting a specific technology layer, consider the data sources the system needs to draw from, the enterprise analysis and visualization tools already in place, the needs of the entire user community, and the client hardware platforms that will require support.