Operationalizing analytics - simple and advanced (data-science) models with the PI System

 

Audience for this lab

  • An experienced PI System user
  • It’s okay if you’re not a data scientist or a developer, or if you’re new to open-source tools such as R and Python
  • If you are a subject-matter expert in industrial operations working with a separate group of data engineers and data scientists, you are encouraged to review this lab content as a team

 

This blog is an extract from the Summary pages of the Lab Manual - enough details are included here to give you a feel for the lab content even if you skip the hands-on portions of the lab. 

 

Lab description

PI World 2020 - Hands-on Lab - San Francisco

This event was cancelled due to COVID-19. However, an on-line version of this lab is at: https://learning.osisoft.com (must be purchased separately, or you must have a PI Dev Club subscription; the lab will be available in late September/October 2020).

 

Attend this lab to learn about the best practices for operationalizing diagnostic/predictive data models with the PI System infrastructure.


Out of the box, the PI System AF component lets you deploy simple analytics and predictions, such as those from linear and polynomial curve fits. Advanced models can be developed using open-source tools such as R and Python, or commercial libraries in Azure ML, AWS, and GCP. Once such models have been developed, you can operationalize them by posting the predictions and forecasted results back to the PI System. With illustrative use cases, this lab provides a walk-through for putting simple and advanced models into play using the PI System infrastructure.


This is not a developer/coding lab; however, prepared scripts and R/Python snippets with PI Web API will be used.
No coding or prior experience with open-source R or Python is necessary, but familiarity with the PI System is a prerequisite.

 

Who should attend? Experienced PI user

 

 

Summary

Operationalizing analytics, also known as model deployment or ML (machine learning) production, is part of MLOps and a critical step in data science projects. These projects have several steps, as shown below:

 

This lab’s objective is to review ML Production – specifically, the predictive/diagnostic model deployment patterns in the PI System as you go from a data scientist’s workbench to real-time operations. 

 

 

As part of the lab, you will also see how the various PI System components are used, and get an understanding of the user experience when consuming the diagnostic/predictive models – for process anomaly, product quality, machinery failure, etc. The diagnostic/predictive models may be based on regression, support vector machines, principal components, decision trees, or any such machine learning algorithm.

 

It is assumed that you have already developed suitably trained/validated models. The “how to” of machine-learning-based model development is out of scope for this lab, but it has been covered in previous PI World labs, listed below:

 

  • Feed dryer (silica gel/molecular sieve) - identify the preferred and stable/capable temperature profile to guide future operations… more
  • Gasoline RVP - predict RVP based on stabilizer column process conditions… more
  • HVAC air-handler operation - anomaly detection… more
  • Engine failure – predict RUL (remaining useful life) or failure within a time window from operations data and failure history… more
  • Bio-reactor – yeast manufacturing - identify golden batch to guide future operations… more
  • Power – load forecasting at distribution transformers from aggregated electricity meter data… more
  • Wind turbine – power generation forecasting based on wind speed, air temperature… more

 

While the datasets from the previous labs are being repurposed, this lab's focus is on the "model deployment" portion of a data science project. We will cover two common model deployment patterns:

  • Models as algebraic equations via PI System asset analytics

When diagnostic/predictive models can be expressed as simple algebraic equations, you can use PI System asset analytics to write the model results to PI. Models based on regression, PCA, PLS, etc. may be developed in R, Python, etc., but the model itself can often be expressed as a simple algebraic equation and hence deployed easily with PI asset analytics.
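As a minimal illustration of this pattern (the attribute names and training data below are hypothetical), the Python sketch fits a linear regression and prints the algebraic expression you could paste into an AF analysis:

```python
# A minimal sketch: fit a linear regression in Python, then express it as the
# algebraic equation you would type into an AF Asset Analytics expression.
# Attribute names ('Pressure', 'TrayTemp') and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[150.2, 310.5], [148.9, 312.1], [151.7, 309.8], [149.5, 311.0]])  # training inputs
y = np.array([7.8, 8.1, 7.6, 7.9])                                              # lab measurements

model = LinearRegression().fit(X, y)
b0, (b1, b2) = model.intercept_, model.coef_

# The deployable "model" is just this expression string:
print(f"{b0:.4f} + {b1:.4f}*'Pressure' + {b2:.4f}*'TrayTemp'")
```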

 

  • Complex models via API calls to the model hosted externally and integrated with PI using PI WebAPI

More complex models based on ANNs (artificial neural networks), SVMs (support vector machines), decision trees, etc. involve advanced math such as matrix operations and iterative calculations. To deploy such complex models, it is easier to access the same libraries used during model development in R, Python, etc. Alternatively, in cloud-based systems such as Microsoft AzureML, trained models are deployed as web services, which requires passing real-time data to the web services and writing the predictions back to PI.
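A hedged sketch of this read–score–write round trip is shown below, using PI Web API stream endpoints; the server URL, WebIds, credentials, and the scoring-service URL are placeholders:

```python
# A hedged sketch of the external-model pattern: read a current value from PI via
# PI Web API, score it against a model hosted as a web service, and write the
# prediction back to PI. All names/URLs/WebIds here are illustrative placeholders.
import requests

PIWEBAPI = "https://myserver/piwebapi"          # hypothetical PI Web API endpoint
IN_WEBID, OUT_WEBID = "<input-WebId>", "<output-WebId>"
auth = ("user", "password")                     # or Kerberos, per your security setup

# 1. Read the latest input value from PI
val = requests.get(f"{PIWEBAPI}/streams/{IN_WEBID}/value", auth=auth).json()["Value"]

# 2. Call the externally hosted model (e.g., R-Plumber, AzureML, a Flask app...)
pred = requests.post("https://mymodelhost/score", json={"x": val}).json()["prediction"]

# 3. Write the prediction back to a PI tag
requests.post(f"{PIWEBAPI}/streams/{OUT_WEBID}/value", auth=auth, json={"Value": pred})
```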

 

In the lab, the model deployment patterns are illustrated using some of the examples in the table below:

 

Topic | Model development | Model deployment | Comment
Overview | – | – | Lab overview
Exercise 1 – Feed dryer | Statistical techniques to extract the golden temperature profile | R (Plumber) web services to evaluate the actual vs. target temperature profile (shape) and alert the operator; write temperature profile data to a future data tag to guide the operator | more
Exercise 2 – Gasoline RVP – predict RVP property | Python, regression | PI Integrator for streaming data, Kafka, Python, PI Web API; Python Kafka consumer | more
Exercise 3 – HVAC air handler – anomaly detection | R, principal components (PCA); AzureML, support vector (SVM) | PI asset analytics; AzureML web services, Windows script with PI Web API | more
Exercise 4 – Engine failure – early detection | – | – | Discussion only. Quantify the benefits of moving from reactive to preventive to predictive maintenance
Extra – Power: load forecasting at distribution transformers from aggregated electricity meter data | Python, regression | PI asset analytics | more
Extra – Engine failure: predict RUL (remaining useful life) from operations and failure history | R, PCA | PI asset analytics | more
Extra – Wind turbine: power generation forecasting based on wind speed, air temperature | AzureML, regression | AzureML web services | more
Extra – Bio-reactor: yeast manufacturing, golden batch | R, PCA, PLS | PI asset analytics | more
Q & A and discussion | – | – | –

 

 

 

 

Exercise 1 – Feed dryer - golden temperature profile

The picture shows twin feed dryers – each contains a bed of desiccant (silica gel/molecular sieve) that removes moisture from a hydrocarbon stream flowing through it at ambient conditions. The adsorbed water is then removed from the bed during the regeneration process by passing the hydrocarbon stream through at an elevated temperature, 450-500 °F.

The process alternates between the two dryers – one removes moisture from the feed stream while the other is in regeneration, i.e. having moisture removed from its bed. Feed drying/regeneration takes about 12 hours, though some runs can last 16 hours or more. Over several months of operation, you will also find some aborted runs – aborted after an hour, 2 hours, 4 hours, etc.

 

The data used for this Exercise is from an actual processing facility and covers year 2017 at a six-minute interval.

 

The objective is to generate a temperature profile representing proper regeneration of the dryer bed based on historical operations data, and then use this temperature profile to guide the operator during future dryer regeneration operations.

 

In the hands-on portion, you:

  • Review the model; details regarding model development are available… more
  • Review the model deployment
    • The model is deployed via web services (R – Plumber) and consumed via PI Notifications – web service delivery channel. The web services in R also use PI WebAPI to write the temperature profile (as future data) and the actual vs. golden profile (shape) deviation to PI.

For model development, PI Integrator publishes about 13 months of historical feed dryer operations data. It is consumed in R for model development, i.e. extracting the golden temperature profile – “golden” implying a statistically stable and capable regeneration operation.

 

And, for model deployment, the PI Notifications web service delivery channel triggers a web service call (when a new regeneration cycle starts) to:

  • write the golden temperature profile to a future data tag via PI Web API (see the sketch after this list), and

 

as the regeneration cycle continues, periodically - say, every 30 minutes:

  • read the actual temperature profile from PI (via PI Web API) and write back to PI the calculated shape deviation between the actual and golden temperature profiles
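To make the future-data write concrete, here is a minimal Python sketch of the first step; the lab itself performs this from R, and the server URL, WebId, credentials, and profile values below are placeholders:

```python
# A minimal sketch (assumed tag, server and values) of writing a golden temperature
# profile into a PI future data tag via PI Web API's batched 'recorded' endpoint.
import requests
from datetime import datetime, timedelta, timezone

PIWEBAPI = "https://myserver/piwebapi"
FUTURE_TAG_WEBID = "<webid-of-future-data-tag>"    # placeholder

start = datetime.now(timezone.utc)
golden_profile = [452.0, 461.5, 470.2, 478.9, 486.0]   # illustrative values, deg F

# One timestamp/value pair per future interval (30-minute steps assumed)
payload = [
    {"Timestamp": (start + timedelta(minutes=30 * i)).isoformat(), "Value": v}
    for i, v in enumerate(golden_profile)
]
r = requests.post(f"{PIWEBAPI}/streams/{FUTURE_TAG_WEBID}/recorded",
                  json=payload, auth=("user", "password"))
r.raise_for_status()
```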

 

End-to-end data flow pattern

The data flow sequence is shown below: 

 

The graph below shows the actual dryer Outlet Temperature (blue) and the desired (golden) Outlet Temperature, labeled Outlet Temperature Forecast. The purple trace shows the start/end of the regeneration cycle.

 

The shape deviation between actual vs. golden temperature profile is calculated periodically during the regeneration cycle and the operator is alerted if it exceeds limits.
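One simple way to compute such a shape deviation is a point-by-point RMSE between the two profiles; this is an illustrative assumption, as the lab may use a different similarity metric:

```python
# An illustrative shape-deviation calculation (RMSE) between the actual and
# golden temperature profiles; the data points are synthetic.
import numpy as np

actual = np.array([450.1, 458.0, 465.3, 479.5, 488.2])   # deg F, from PI
golden = np.array([452.0, 461.5, 470.2, 478.9, 486.0])   # deg F, golden profile

deviation = float(np.sqrt(np.mean((actual - golden) ** 2)))
print(f"Shape deviation (RMSE): {deviation:.1f} degF")    # alert operator if over limit
```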

 

 

 

The PI System components and external software used are: 

 

Application | Use case | PI software | Other software | Comment
Silica-gel/molecular sieve feed dryer | Prescriptive guidance for the operator – golden temperature profile during dryer bed regeneration | PI, AF, Asset Analytics, PI Integrator, PI Web API, PI Vision, PI Notifications | R | R is used as a web host for receiving the web service call from PI Notifications; R is also used as a PI Web API client to integrate (read/write) with PI

 

Exercise 2 – Gasoline RVP - predict RVP property

In a crude oil refinery, gasoline is produced in a stabilizer (distillation) column. The gasoline RVP property is a key measurement used to adjust column operations.

 

Refineries without a continuous on-line analyzer have to wait for lab measurements, usually available a couple of times a day.  As such, column process parameters (pressure, temperature, flow etc.) are used to predict RVP more frequently, in real-time, to guide the operator running the column.

 

The objective is to use column operations data with suitable algorithms to predict RVP in real-time so that it can be used to guide the operator when running the Stabilizer column.

 

In the hands-on portion, you:

  • Review the model; details regarding the model development... more
  • Review the model deployment
    • Real-time stabilizer data is streamed to Apache Kafka via PI Integrator. A Python client consumes the Kafka payloads, calculates the model-predicted RVP, and writes it back to PI via PI Web API.

 

In model deployment, PI Integrator streams the stabilizer column operations data to a Kafka endpoint every 6 minutes, and a Python Kafka consumer uses a regression model to predict RVP and writes the results back to PI via PI Web API, as sketched below.
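A hedged Python sketch of this consumer loop follows; the topic name, payload field names, regression coefficients, and WebId are assumptions for illustration:

```python
# A hedged sketch of the Exercise 2 pattern: consume PI Integrator payloads from
# Kafka, score a regression model, and write the predicted RVP back to PI.
# Topic, field names, coefficients and WebId are illustrative assumptions.
import json
import requests
from kafka import KafkaConsumer   # pip install kafka-python

PIWEBAPI = "https://myserver/piwebapi"
RVP_WEBID = "<webid-of-RVP-prediction-tag>"
COEF = {"intercept": 2.1, "ColumnPressure": 0.031, "TrayTemp": -0.012}  # trained offline

consumer = KafkaConsumer("stabilizer-column", bootstrap_servers="kafka:9092",
                         value_deserializer=lambda m: json.loads(m.decode("utf-8")))

for msg in consumer:
    row = msg.value
    # Score the algebraic regression model against the streamed process values
    rvp = (COEF["intercept"]
           + COEF["ColumnPressure"] * row["ColumnPressure"]
           + COEF["TrayTemp"] * row["TrayTemp"])
    # Write the prediction back to PI at the payload's timestamp
    requests.post(f"{PIWEBAPI}/streams/{RVP_WEBID}/value",
                  json={"Timestamp": row["TimeStamp"], "Value": rvp},
                  auth=("user", "password"))
```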

 

End-to-end data flow pattern

The data flow sequence is shown below:

 

 

The PI System components and other software used are: 

 

Application | Use case | PI software | Other software | Comment
Gasoline RVP | Quality prediction | PI, AF, Asset Analytics, PI Integrator – Advanced, PI Web API, PI Vision | Python, Apache Kafka | Python/Jupyter for model development; a Python client uses the Kafka consumer to receive data from PI Integrator, scores the model, and writes the results to PI via PI Web API

 

Exercise 3 – HVAC air handler - anomaly detection

A typical HVAC (Heating Ventilation & Air Conditioning) system with an air handler unit (AHU) is shown below.

During the day, the AHU operating conditions change continuously as the outside air temperature rises and falls, along with changing relative humidity, changing wind speed and direction, changing thermostat set-points, building occupancy level, and others. The building management system (BMS) adjusts the supply air flow rate, chilled water flow rate, damper position etc. to provide the necessary heating or cooling to the rooms to ensure tenant comfort.

 

However, fault conditions such as incorrect/drifting sensor measurements (temperature, flow, pressure…), dampers stuck at open/close/partial-open position, stuck chilled water valve, and others, can waste energy, or lead to tenant complaints from other malfunctions causing rooms to get too hot or too cold.

 

The objective is to develop and deploy a suitable model based on historical data to detect anomalies (sensor faults, damper open when it should be closed etc.) in day-to-day operation.

 

Sensor data available from the AHU via the BMS include:

  • Outside air temperature
  • Relative Humidity
  • Mixed air temperature
  • Supply air temperature
  • Damper position
  • Chilled water flow
  • Supply air flow
  • Supply air fan VFD (variable frequency drive) power
  • …   

 

 

In the hands-on portion, you:

  • Review the air handler model; details regarding model development…more
  • Review the model deployment for anomaly detection
    • The PCA (principal component analysis) based model is deployed using PI asset analytics – clock-scheduled to run every 10 minutes.
    • The SVM (support vector machine) based model is deployed using AzureML web services. A Windows script (with Task Scheduler) runs the model every 10 minutes and writes the normal/fault status to PI via PI Web API.

In model deployment:

  • The R-PCA model consists of algebraic equations and is deployed via PI Asset Analytics

 

 

 

  • The AzureML model is published as a web service (hosted in Azure). Using a Windows script, you retrieve PI data via PI Web API, call the AzureML web service for scoring, and write the results to PI, as sketched below.
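The sketch below outlines the scheduled scoring step in Python rather than Windows scripting; the AzureML request/response schema depends on how the web service was published, so the endpoint, key, tag names, and JSON shapes are placeholders:

```python
# A hedged sketch of the scheduled scoring step for the SVM model: read AHU
# measurements from PI, call the AzureML web service, write status back to PI.
# The AzureML JSON schema below is an assumption; all names/URLs are placeholders.
import requests

PIWEBAPI = "https://myserver/piwebapi"
TAGS = {"SupplyAirTemp": "<webid1>", "ChilledWaterFlow": "<webid2>"}  # hypothetical
STATUS_WEBID = "<webid-of-fault-status-tag>"
AML_URL, AML_KEY = "https://<workspace>.azureml.net/score", "<api-key>"
auth = ("user", "password")

# 1. Read current AHU measurements from PI
features = {}
for name, webid in TAGS.items():
    r = requests.get(f"{PIWEBAPI}/streams/{webid}/value", auth=auth)
    features[name] = r.json()["Value"]

# 2. Score against the AzureML web service (request/response shape assumed)
resp = requests.post(AML_URL, json={"Inputs": {"input1": [features]}},
                     headers={"Authorization": f"Bearer {AML_KEY}"})
label = resp.json()["Results"]["output1"][0]["Scored Labels"]   # e.g. "Normal"/"Fault"

# 3. Write the normal/fault status back to PI
requests.post(f"{PIWEBAPI}/streams/{STATUS_WEBID}/value",
              json={"Value": label}, auth=auth)
```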

 

The PI System components and other software used are: 

 

Application | Use case | PI software | Other software | Comment
HVAC – Air Handler | Anomaly detection | PI, AF, Asset Analytics, PI Integrator, PI Web API | R, AzureML, Windows scripting | Windows scripting is used to orchestrate between PI (PI Web API) and AzureML web services

 

Exercise 4 – Engine failure - CM, PM and PdM - early fault detection via machine learning

In this Exercise, we use engine operations data and failure history to guide maintenance decisions, and quantify the benefits when moving from CM to PM to PdM:

  • CM - Corrective maintenance - break-fix
  • PM - Preventive maintenance - run-hours based
  • PdM - Predictive maintenance - early fault detection (via machine learning - multi-variate condition assessment)

 

In a deployment with about 100 similar engines, sensor data such as rpm, burner fuel/air ratio, pressure at fan inlet, and twenty other measurements plus settings for each engine – for a total of about 2000 tags – are available. On average, an engine fails after 206 cycles, but it varies widely - from about 130 to 360 cycles – each cycle is about one hour.

 

With a given failure history for the engines and known costs for PMs vs. repairs, we calculate the benefits in moving from CM to PM to PdM.
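As a back-of-the-envelope illustration of the kind of arithmetic involved (all costs below are assumed for illustration; the lab's spreadsheet carries the actual numbers):

```python
# A back-of-the-envelope sketch (all costs assumed) comparing maintenance cost
# per engine-cycle under CM, PM and PdM for the fleet described above.
REPAIR_COST, PM_COST = 100_000, 20_000     # hypothetical $ per repair / per PM
MEAN_FAILURE = 206                          # average cycles to failure (from the text)
PM_INTERVAL = 130                           # PM at the ~130-cycle observed minimum
PDM_LEAD = 20                               # PdM detects failure ~20 cycles ahead

cm = REPAIR_COST / MEAN_FAILURE            # run to failure, then repair
pm = PM_COST / PM_INTERVAL                 # conservative fixed-interval PM
pdm = PM_COST / (MEAN_FAILURE - PDM_LEAD)  # PM just before each predicted failure

for name, cost in [("CM", cm), ("PM", pm), ("PdM", pdm)]:
    print(f"{name}: ${cost:,.0f} per engine-cycle")
```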

 

As part of the lab, we discuss:

  • Can you quantify the $ spent on maintenance with the break-fix strategy (corrective maintenance)?
  • A sister company with similar operations, failure history and repair/PM costs uses the median failure rate of 199 cycles for PMs. Should you adopt this?
  • Can you do better? If so, after how many cycles will you do the PMs?
  • Can you quantify the benefits in moving from corrective to run-hours based PMs?

 

  • If engine operations data can be used for early detection of failure – say, within 20 cycles of a failure with 100% certainty – how much, if anything, will you save by using PdM vs. the cycle-count-based PM approach?

 

For PdM model development details, i.e. early fault detection via machine learning for predicting failure within a time-window, see more.

 

Amazon - AWS

The PI System components and the data flows used with R and Python also apply to Amazon - AWS and Google - GCP.

 

For AWS, PI Integrator supports:

  • Amazon S3
  • Amazon Redshift
  • Amazon Kinesis Data Streams

 

And, the model deployment patterns are similar to those shown earlier for open-source R/Python.

 

For an animated sequence of similar data flows in R/Python, see Exercise 1 (Feed Dryer). Interactions among AWS components are not shown below.

The pattern below includes PI Integrator - Amazon Kinesis Data Streams for use cases where streaming data is required.

 

For an animated sequence of similar data flows in R/Python, see Exercise 2 (RVP). Interactions among AWS components are not shown below.

The first pattern, which includes the PI Notifications web service delivery channel (and PI Web API), is preferable for data flows where an external workflow or calculation is triggered based on a PI event/alert. The first pattern also gives you more control for reading PI System data via PI Web API – as in the Feed Dryer example, where the full temperature profile (shape) for the regeneration cycle is compared periodically.

 

Google - GCP

For GCP, PI Integrator supports:

  • Cloud Storage
  • BigQuery
  • Cloud Pub/Sub

 

For data flows in R/Python with full payloads etc., see Exercise 1 (Feed Dryer). Interactions among GCP components are not shown below, and the animation does not have audio.

 

The pattern below includes PI Integrator streaming for Google Cloud Pub/Sub.

For data flows in R/Python with full payloads etc., see Exercise 2 (RVP). Interactions among GCP components are not shown below, and the animation does not have audio.

 

The first pattern, which includes the PI Notifications web service delivery channel (and PI Web API), is preferable for data flows where an external workflow or calculation is triggered based on a PI event/alert. The first pattern also gives you more control for reading PI System data via PI Web API – as in the Feed Dryer example, where the full temperature profile (shape) for the regeneration cycle is compared periodically.

 

Microsoft - Azure

For Microsoft, PI Integrator supports:

  • SQL
  • Azure SQL
  • Azure Event Hub

 

Also, see the end-to-end data flow in Exercise 3 for an Azure ML example. 

 

Recap

This Lab’s objective was to review diagnostic/predictive model deployment patterns in the PI System as you move from a data scientist’s workbench to real-time operations. 

 

The lab exercises illustrated the following patterns:

 

  • Models that can be expressed as algebraic equations – deploy in PI Asset Analytics (Air handler)

  • More complex models (SVM, regression, neural network, decision tree, etc.) – deploy via web services and integrate with PI using PI Web API, with PI Integrator (streaming or continuous publication) as necessary:

    • deploy via the PI Notifications web service delivery channel to trigger a workflow or calculation in an external model, with PI Web API to integrate (read/write) with PI for additional periodic scoring (Feed dryer)

    • deploy via PI Integrator streaming and an external client application (Python-Kafka consumer, R-Kafka consumer, AWS-Kinesis consumer, GCP-Pub/Sub consumer, etc.) for scoring (Stabilizer RVP)

 

The patterns shown for R and Python also apply to Amazon-AWS and Google-GCP.

 

Also, we reviewed the use of various PI System components and the user experience as the diagnostic/predictive models are consumed in PI for real-time operations.

 

Quiz

The quiz is not intended to test “recall” – but is more about synthesizing the concepts from the lab.

 

1. When starting a data science initiative, it is best to use advanced machine learning models for quick results.
_True _False

 

2. In the engine failure use case (Exercise 4), the median failure of 199 cycles is the recommended threshold for PMs.
_True _False

 

3. A data scientist’s predictions from a Jupyter/R notebook must be operationalized (put into practice in real-time operations) to see benefits/claim success.
_True _False

 

4. The feed dryer use case (Exercise 1) illustrates:
(check all that apply)
_ Descriptive analytics
_ Diagnostic analytics
_ Predictive analytics
_ Prescriptive analytics
_ None of the above

 

5. In the air handler use case (Exercise 3), the PCA model correctly identifies the anomalous operation for 18th May 2016.
_True _False

 

6. In predictive modeling, in addition to the external R/Python libraries, the following built-in features in AF can be leveraged for several analytical use cases
(check all that apply)
_ Linear regression (including piece-wise linear) and extrapolation
_ Polynomial curve-fit (AF Curve function)
_ AF table look-up (single and double look-up)
_ Moving average, avg +/- 2 sigma, SQC rules ...
_ None of the above

 

7. For predictive models that can be expressed as algebraic equations, AF analytics with the backfilling feature provides ways to test and then fully deploy the model for real-time operations.
_True _False

 

8. Streaming data – as shown in the gasoline RVP use case (Exercise 2) – is a pre-requisite for deploying all machine learning models for real-time operations.
_True _False

 

9. PI Integrator for Business Analytics - Event Views can be of two types - Summarized Values and Sampled Values. For the Feed Dryer use case, the Summarized Values was used to prepare the data for the model development.
_True _False

 

10. It is a common practice to use AF analytics for tens of thousands of expression-based calculations and models for real-time streaming data.
_True _False

 

11. In the feed dryer use case (Exercise 1), the Processing/Regeneration status measurement is directly available from the DCS (distributed control system).
_True _False

 

12. In the lab, the use cases cover:
(check all that apply)
_ Regression model
_ Dimensionality reduction - PCA model
_ Anomaly detection - SVM model
_ Shape similarity and other statistical techniques
_ None of the above

 

13. For the Gasoline RVP use case (Exercise 2), the regression model could have also been deployed using AF analytics.
_True _False

 

14. When using the future data tag with forecasting models, you can overwrite the prediction values as the forecast improves.
_True _False

 

15. In the engine failure use case (Exercise 4), the calculations for profit per cycle, such as those shown in EngineFailureAnalysis.xlsx – tab “Predictive implies all PMs” – are useful to justify investing in the development of machine-learning-based failure prediction models.
_True _False

A layered approach to maintenance

 

 

 

 

Intro video 

(watch in full screen mode)


Audience for this lab

  • An experienced PI System user
  • You are a subject-matter expert in industrial operations – and have functional responsibilities in process operations and equipment maintenance/reliability
  • If your role is strictly “process operations” but you work with a separate group of reliability/plant maintenance engineers, you are encouraged to review this lab content as a team


This blog is an extract from the Summary pages of the Lab Manual - enough details are included in this blog to give you a feel for the lab content even if you skip the hands-on portions of the lab. 

 

Lab description

PI World 2020 - Hands-on Lab - San Francisco

This event was cancelled due to COVID-19. However, an on-line version of this lab is at:

https://learning.osisoft.com/series/power-user/asset-maintenance-using-a-layered-approach

 (must be purchased separately, or you must have a PI Dev Club subscription).

 

In this lab, we walk through scenarios that illustrate the use of process data and machine condition data in a layered approach to maintenance: usage-based, condition-based, and predictive.

By the end of the course, you will be able to:

  • Discuss the PI System’s role in maintenance and reliability – and explain the layered approach to usage-based, condition-based, and predictive (simple and advanced) maintenance.
  • Apply the techniques learned in the lab, using PI System tools, to implement usage-based, condition-based, and predictive (simple and advanced) maintenance.


Data sources include traditional plant instrumentation such as PLCs and SCADA, newer IoT devices, and machine condition monitoring systems such as vibration, oil analysis, etc.


Usage-based maintenance utilizes operational metrics such as motor run-hours, compressor starts/stops, grinder tonnage, etc. Condition-based maintenance utilizes measurements such as filter deltaP, bearing temperature, valve stroke travel, and others. Predictive maintenance can use simple analytics, such as monitoring vibration (RMS, peak, etc.) to predict RUL (remaining useful life), or tracking heat-exchanger fouling to schedule cleaning.

 

In this lab, we will also discuss predictive maintenance use cases that require advanced analytics, including machine learning, such as APR (advanced pattern recognition), anomaly detection, and others.

 

Who should attend? Experienced PI user

 

Summary

This lab’s objective is to walk through the use of equipment and process data for a layered approach to uptime and reliability via usage-based, condition-based, and predictive – simple and advanced (machine learning) – maintenance.

 

  • Exercise 1: Usage-based maintenance – motor run-hours and valve actuation counts

 

  • Exercise 2: Condition-based maintenance – bearing temperature high alert

 

  • Exercise 3a: Predictive maintenance (simple) – univariate (single variable) – increasing bearing vibration trend extrapolated to predict time to maintenance

 

  • Exercise 3b: CM, PM and PdM - Using engine failure history to support the decision criteria and quantify the benefits for moving from corrective maintenance (CM) to preventive maintenance (PM) to predictive maintenance (PdM)

For details of PdM multivariate model development, i.e. early fault detection via machine learning for predicting failure within a time window, see...more.

 

  • Exercise 4: Asset health score - utilize multiple condition assessment rules with appropriate weighting factors to process/equipment indicators to calculate an overall asset health score

 

Exercise 1: Usage-based Maintenance - motor run-hours and valve actuation counts

In this exercise, motor run-hours and valve actuation counts are calculated to serve as a basis for usage-based maintenance.

We use an ice-cream factory running two process lines – Line 1 and Line 2, with two mixers on each line.

 

The hands-on portion includes building the run-hours calculations in AF, and the relevant PI Vision displays as shown below.
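As a minimal illustration of the run-hours logic (the lab builds this in AF Asset Analytics; the Python below uses synthetic status events):

```python
# A small sketch of the run-hours idea behind the AF analysis: accumulate time
# while the motor status is 'Running'. The event history here is synthetic.
from datetime import datetime

events = [  # (timestamp, status) -- illustrative status-tag history
    (datetime(2020, 5, 1, 8, 0), "Running"),
    (datetime(2020, 5, 1, 14, 30), "Stopped"),
    (datetime(2020, 5, 1, 16, 0), "Running"),
    (datetime(2020, 5, 1, 22, 0), "Stopped"),
]

# Sum the durations of each 'Running' interval
run_hours = sum(
    (t2 - t1).total_seconds() / 3600
    for (t1, s1), (t2, _) in zip(events, events[1:])
    if s1 == "Running"
)
print(f"Run-hours: {run_hours:.1f}")   # 6.5 + 6.0 = 12.5
```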

Exercise 2: Condition-based maintenance - high bearing temperature alert

In this exercise, we assess equipment condition by calculating metrics that can serve as leading indicators of equipment failure or loss of efficiency – for example, bearing temperature to evaluate the pump bearing condition.

 

We track the alerts for the bearing temperature and then discuss the use of PI Notifications to send an email, or the use of the web service delivery channel to notify a system (e.g., triggering a work order in a work management system such as SAP or IBM Maximo) for follow-up action.

 

The bearing temperature events are viewed in a watchlist in PI Vision – see screens below.

 

 

 

 

 

Exercise 3a: Predictive Maintenance (PdM) - bearing vibration trend 


For certain classes of process equipment, condition can be evaluated by monitoring a key metric, such as efficiency for a compressor, fouling for a heat exchanger, or bearing vibration on a pump. Often, these metrics show a pattern over time, and a linear, piece-wise linear, or non-linear trend can be extrapolated to estimate remaining useful life.


The screen below shows increasing vibration over time (100+ days). The trend can be extrapolated to estimate when vibration will reach a defined threshold, so maintenance can be scheduled.
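A minimal Python sketch of the idea, using synthetic vibration data and an assumed alarm threshold:

```python
# A minimal sketch (synthetic data) of trend extrapolation: fit a line to the
# vibration history and solve for when it crosses an assumed alarm threshold.
import numpy as np

np.random.seed(0)
days = np.arange(0, 100)                                          # days since baseline
vib = 2.0 + 0.03 * days + np.random.normal(0, 0.1, days.size)    # mm/s RMS, synthetic

slope, intercept = np.polyfit(days, vib, 1)   # linear trend
THRESHOLD = 7.1                               # assumed alarm level, mm/s
days_to_threshold = (THRESHOLD - intercept) / slope

print(f"Projected threshold crossing ~{days_to_threshold:.0f} days from baseline")
```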

 

Exercise 3b: Engine failure - CM, PM and PdM - early fault detection via machine learning

In this Exercise, we use engine operations data and failure history to guide maintenance decisions, and quantify the benefits when moving from CM to PM to PdM:

 

  • CM - Corrective maintenance - break-fix
  • PM - Preventive maintenance - run-hours based
  • PdM - Predictive maintenance - early fault detection (via machine learning - multi-variate condition assessment)

 

In a deployment with about 100 similar engines, sensor data such as rpm, burner fuel/air ratio, pressure at fan inlet, and twenty other measurements plus settings for each engine – for a total of about 2000 tags – are available. On average, an engine fails after 206 cycles, but it varies widely - from about 130 to 360 cycles – each cycle is about one hour.

 

With a given failure history for the engines and known costs for PMs vs. repairs, we calculate the benefits in moving from CM to PM to PdM.

 

As part of the lab, we discuss:

  • Can you quantify the $ spent on maintenance with the break-fix strategy (corrective maintenance)?
  • A sister company with similar operations, failure history and repair/PM costs uses the median failure rate of 199 cycles for PMs. Should you adopt this?
  • Can you do better? If so, after how many cycles will you do the PMs?
  • Can you quantify the benefits in moving from corrective to run-hours based PMs?

 

  • If engine operations data can be used for early detection of failure – say, within 20 cycles of a failure with 100% certainty – how much, if anything, will you save by using PdM vs. the PM approach?

 

For details of PdM model development, i.e. early fault detection via machine learning for predicting failure within a time window, see...more.

 

Exercise 4: Multiple condition assessment rules and asset health score

In this Exercise, you apply the appropriate condition assessment rules and corresponding weighting factors to process/equipment measurements to calculate an overall asset health score.

 

It uses AF Analytics to convert a “Raw Value” (sensor data) to a normalized “Case Value”. Then, by applying a Weight%, the case value is transformed into a Score.

 

Each measurement gets a normalized weighted score (0 to 10) by applying a condition assessment rule. The normalized scores are then rolled up to arrive at a composite asset health score. The Weight% applied to each attribute depends on its contribution to the overall asset health.

 

The composite asset health score ranges from 0 to 10 (0 = Good, 10 = Bad).
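A small Python sketch of this roll-up, with hypothetical rules, thresholds, and weights:

```python
# A hedged sketch of the health-score roll-up: each raw measurement is normalized
# to a 0-10 "case value" by a condition rule, then weighted into a composite score
# (0 = good, 10 = bad). Rules, thresholds and weights are illustrative assumptions.
def case_value(raw, lo, hi):
    """Map a raw reading onto 0-10, clamped: lo -> 0 (good), hi -> 10 (bad)."""
    return max(0.0, min(10.0, 10.0 * (raw - lo) / (hi - lo)))

# (normalized case value, Weight%) per indicator -- hypothetical thresholds
indicators = {
    "DGA acetylene (ppm)":       (case_value(1.2, 0.0, 35.0), 0.40),
    "High water (ppm)":          (case_value(18.0, 5.0, 30.0), 0.35),
    "LTC operations (count/yr)": (case_value(900, 0, 4000), 0.25),
}

composite = sum(cv * w for cv, w in indicators.values())
print(f"Composite asset health score: {composite:.1f} (0=Good, 10=Bad)")
```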

 

A Transformer asset health score example is used with the following measurements:

 

  • LTC (Load Tap Changer) counter operations
  • LTC through neutral count
  • DGA (dissolved gas analysis) detectable acetylene
  • DGA high gas rate of change
  • Low dielectric
  • High water
  • Low nitrogen pressure

 

An example Transformer template is shown below:

And, as you configure transformers using these templates, the composite health score is periodically calculated by PI System Asset Analytics.

The composite health score for transformer TR01 is 2, i.e. the asset is in good health (0=Good, 10=Bad).

 

Recap

 

 

 

In this lab, we covered scenarios illustrating the use of process data and machine condition data in a layered approach to maintenance: usage-based, condition-based, and predictive.

 

Quiz

The quiz is not intended to test “recall” – but is more about synthesizing the concepts from the lab and the PI System's role in the maintenance decision process.

 

1. To implement run-hours or similar counter-based logic, it is a prerequisite to have such counter measurements directly from the SCADA or PLC or other such instrumentation for the equipment.
_ True _ False

 

2. APM (asset performance management), CBM (condition-based maintenance), CM (condition monitoring), etc. all broadly refer to the use of continuous monitoring and analytics in the maintenance decision process.

_ True _ False

 

3. Oil analysis is a form of predictive maintenance that focuses on evaluating the condition of an equipment’s lubricant… (extract from https://www.onupkeep.com/learning/maintenance-types/oil-analysis). The term “predictive maintenance” used on the web page is in line with that used in the lab. 

_ True _ False

 

4. In Exercise 3b, the median failure of 199 cycles is the recommended threshold for PMs.
_ True _ False

 

5. Asset health score (Exercise 4) is more useful for comparison among peer-group assets than for comparing across different asset classes (motors, transformers, heat-exchangers…).

_ True _ False

 

6. In a mature maintenance setting, you will typically have a mix of all the different layers – reactive, usage-based, condition-based, and predictive deployed.
_ True _ False

 

7. For usage-based, it is recommended to do lifetime total calculations instead of doing meter reset in PI after every PM.
_ True _ False

 

8. When adopting predictive maintenance, you must start with advanced machine learning models for quick wins.

_ True _ False


9. Exercise 2 shows PI Notification via email for high bearing temperature alert. Event frames and PI Notification (web service delivery channel) can also be utilized as a basis for triggering a service request in work management systems such as SAP PM, IBM Maximo, and others.
_ True _ False

 

10. To incorporate the layered approach to maintenance, you can simultaneously deploy one or more of the appropriate techniques (reactive, usage-based, condition-based, and predictive) for an equipment and its components.

_ True _ False


11. To adopt the linear extrapolation example illustrated in Exercise 3a, you must fit the entire vibration history to ensure that all available data is used.
_ True _ False


12. Generically, the reason code manual entry feature for an event (Exercise 2) in PI Vision allows a pick list based on a hierarchical list of text entries.
_ True _ False


13. For usage-based, it is a good practice to use year-to-date calculations as illustrated by:

 

 

_ True _ False

 

14. AF calculations can include machine condition data such as oil analysis, vibration data etc. that may not be natively in PI tags but are available externally and referenced via table look-ups or other data reference methods.
_ True _ False

 

15. Integration with work management systems such as SAP-PM, IBM Maximo etc. is a prerequisite to adopting the layered approach to maintenance shown in this lab.
_ True _ False

 

Lab Manual

Lab manual is attached.

Link to the lab in OSIsoft Learning

 

Other resources

Incorporating Condition Monitoring Data - Vibration, Infrared and Acoustic for Condition-based Maintenance 

Getting started with IIoT sensor deployment 

Operationalizing analytics - simple and advanced (data-science) models with the PI System

A layered approach to maintenance - PI World 2020 Q&A session slides

Introduction to Asset Monitoring and Condition-based Maintenance with the PI System - PI World 2019 video and slides 

Transformer monitoring and maintenance at Alectra Utilities

Condition Monitoring Using Statistical & Machine Learning Models in PI AF at TC Energy

 

This Lab was part of TechCon 2017 in San Francisco. The Lab manual used during the instructor-led interactive workshop is attached. The Lab VM is available via OSIsoft Learning.

 

Traditionally, the PI System has worked with process data from plant instrumentation such as PLC and SCADA. However, newer IoT and edge device capability allows you to bring data from condition monitoring systems such as vibration, infrared (thermography), acoustic etc. to the PI System. Take this lab to learn how to use condition monitoring data along with process data in your condition-based maintenance programs to improve equipment uptime. The lab will also include the use of alert roll-ups, watch lists, KPIs and others for a holistic view of asset health and reliability.

The learning objectives for this lab are:

  • Understand the condition monitoring (CM) data collection process – with a live demo of a hand-held device used for collecting vibration, infrared and acoustic data for a motor
  • Understand how the condition monitoring data is transformed and written to PI
  • Configure condition assessment calculations for the CM data and recognize the use of dynamic thresholds for the CM data
  • Incorporate CM data such as oil analysis (that may reside in external databases) in the condition assessment calculations
  • Incorporate manual input CM data (aka operator rounds/inspections) from PI Manual Logger
  • Create displays, notifications, alerts watch list etc. using various PI System capabilities and tools
  • Review the PSE&G customer use case on combining PI System data with CM data for calculating asset health score

 

In this lab, we will use a hand-held device to collect vibration, infrared, and acoustic data.  The device is a Windows 10 based unit (it can also be an iPad) with suitable attachments based on National Instruments technologies. Please see AR-C10 for hardware details.

We will also incorporate CM measurements via devices from Fluke (Fluke Connect).

 

The lab book is attached.
