
The following is from the lab notes for the hands-on lab "Exploring PI AF Analytics for Advanced Analysis and Prediction" at PI T&D + Power Generation Users Group Meeting, 2018.  

Lab VM is available via OSIsoft Learning

The Lab manual is attached; the manual is intended for an instructor led interactive workshop in a classroom setting.


In this lab, we explore several data access methods in the PI System for extracting contextualized datasets for data science projects. PI AF plays an important role in shaping the data. In each case, a simple statistical model is developed; the models are then evaluated, tested and operationalized using PI AF.


The lab includes:
Example 1 – Single Asset Predictive Model using Python and the PI Integrator for Business Analytics
Example 2 – Multiple Asset Predictive Models using Python and PI SQL Client

The following is from the lab notes for the hands-on lab "Operational Forecasting" at OSIsoft Users Conference 2017, San Francisco, CA.  Lab VM is available via OSIsoft Learning

The Lab manual is attached; the manual is intended for an instructor led interactive workshop in a classroom setting.


The lab's objective is to step through an end-to-end data science/machine learning task - collect data, publish historical data, develop a predictive model, and deploy the model in real time for wind turbine operations.

The predictive model forecasts power generation for each turbine in our fleet, as shown below.


Operational Forecasting - Wind Farm

Figure shows a graph of Active Power vs. Time - actual power in purple and forecasted power in yellow.

The predictive model is based on forecasted wind speed and air temperature.
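A minimal sketch of that kind of model, fit by ordinary least squares on made-up training data (the values and feature set here are illustrative only - the lab itself builds and deploys the model in Azure ML):

```python
import numpy as np

# Hypothetical training data: forecasted wind speed (m/s), air temperature (degC),
# and observed active power (kW) for one turbine. Values are invented for the sketch.
X = np.array([[4.0, 10.0], [6.0, 12.0], [8.0, 11.0], [10.0, 9.0], [12.0, 8.0]])
y = np.array([180.0, 420.0, 760.0, 1150.0, 1500.0])

# Fit a linear model y ~ b0 + b1*wind_speed + b2*temperature via least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def forecast_power(wind_speed, temperature):
    """Predict active power (kW) from a weather forecast."""
    return coef[0] + coef[1] * wind_speed + coef[2] * temperature

print(forecast_power(9.0, 10.0))
```

In the lab, the same idea is realized as an Azure ML web service fed with forecasted weather data rather than a hand-rolled regression.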


The tools used are: 

  • PI Integrator - publish historical turbine operations data to a SQL endpoint
  • Power BI and its built-in support for R scripts - data munging, data diagnostics and exploring the features
  • Azure ML - develop and deploy the model (as web services)
  • A Windows script (or, alternatively, .NET C# code via the AF SDK) to read/write forecast data to PI


Wind turbine Power vs Windspeed, also correlation plot

Figure shows a graph of Active Power vs. Wind Speed from operations data. 

For additional details, please see the Lab  Manual.

We use PI Integrator to publish the data in a row-column format for the next steps.


Feed Dryer PI Integrator output


Power BI is then used for descriptive analytics on this large dataset, which covers several months of minute-resolution data.


Feed Dryers - Power BI  screen


Next, R is used for more data munging and to extract the golden temperature profile.


Feed Dryer Golden Temperature Profile

The model is then validated to confirm that it can flag bad runs using shape metrics.
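A minimal sketch of such a shape check, assuming a hypothetical golden profile and an RMSE threshold (the lab derives the real profile and shape metrics in R/MATLAB; the numbers below are invented):

```python
import math

# Hypothetical golden regeneration temperature profile (degF) sampled at fixed
# intervals, plus a deviation threshold chosen from known-good runs.
GOLDEN = [150, 220, 310, 390, 430, 450, 455]
THRESHOLD = 25.0  # max allowed RMSE deviation, illustrative value

def rmse(profile, golden):
    """Root-mean-square deviation between a run's profile and the golden one."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile, golden)) / len(golden))

def is_bad_run(profile):
    """Flag a regeneration run whose shape deviates too far from the golden profile."""
    return rmse(profile, GOLDEN) > THRESHOLD

good_run = [148, 225, 305, 395, 428, 452, 454]
bad_run = [150, 180, 240, 300, 350, 380, 400]   # never reaches regeneration temperature
print(is_bad_run(good_run), is_bad_run(bad_run))
```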


Feed Dryer Shape is not OK


And, after it is validated, we deploy it for real-time operations by writing to a PI future tag.


Feed Dryer Operationalize expected temperature profile


During operation, deviation from the expected temperature profile is continuously evaluated and it triggers a Notification to take corrective action. 

Feed Dryer Notification

Feed Dryer PI Vision



Go to Part 1

Go to Part 2

This Lab was part of PI World 2019 in San Francisco. The Lab manual used during the instructor led interactive workshop is attached.  Lab VM is available via OSIsoft Learning 


In previous years, we have explored the use of advanced analytics and machine learning for:

  • Anomaly detection in an HVAC air-handler - more
  • RUL (remaining useful life) prediction based on engine operations and failure data - more
  • Golden-run identification for the temperature profile from a feed dryer (silica gel/molecular sieve) in an oil refinery - more


Additionally, as part of the above labs, we have used analytical methods such as PCA (principal component analysis), SVM (support vector machines), and shape similarity measures. In other similar labs, we have covered well-known algorithms for regression, classification etc., and reviewed the use of Azure Machine Learning - more - and open source platforms such as R and Python.


In this year's lab, we explore the use of historical process data to predict quality and yield for a product (yeast) in batch manufacturing. We use multivariate PCA modeling to walk through the diagnostics for monitoring the 14-hour evolution of each batch, and to alert you when a batch may go "bad" as critical operating parameters violate "golden batch" criteria (high pH, low molasses, etc.). We then utilize PLS - projection to latent structures - to predict product quality and yield at batch completion.
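As a rough sketch of the PCA monitoring idea, here is a toy Hotelling's T² calculation on synthetic data. The variables, values and limits are illustrative only, not the lab's actual model:

```python
import numpy as np

# Synthetic "good batch" data: three correlated process variables (think pH,
# molasses feed, temperature). The lab builds the real PCA model from
# historical fermenter batches.
rng = np.random.default_rng(0)
common = rng.normal(size=200)                       # shared process driver
good = np.column_stack([
    common + 0.1 * rng.normal(size=200),
    common + 0.1 * rng.normal(size=200),
    0.5 * rng.normal(size=200),
])

# PCA via SVD on mean-centered data; keep two principal components.
mean = good.mean(axis=0)
U, s, Vt = np.linalg.svd(good - mean, full_matrices=False)
P = Vt[:2].T                                # loadings (3 variables x 2 PCs)
score_var = (s[:2] ** 2) / (len(good) - 1)  # variance of each score

def hotelling_t2(obs):
    """Hotelling's T^2 of one observation in the 2-PC score space."""
    t = (obs - mean) @ P
    return float(np.sum(t ** 2 / score_var))

normal_obs = np.array([0.2, 0.1, 0.1])      # consistent with good operation
upset_obs = np.array([5.0, 5.0, 0.0])       # e.g. a large pH/feed excursion
print(hotelling_t2(normal_obs) < hotelling_t2(upset_obs))
```

In production, the T² statistic (with a control limit fitted from the good batches) would be evaluated continuously as the batch evolves, which is what triggers the "golden batch" alerts.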


The lab illustrates the end-to-end tasks in a typical data science project – from data preparation, conditioning, cleansing etc. to model development using training data, testing/validation using unseen data, and finally, deployment for production use with real-time data.


The techniques explored in the lab are not limited to batch manufacturing; they can be applied to several industries and to numerous processes that are multivariate.


No coding or prior experience with open source R or Python is necessary but familiarity with the PI System is a prerequisite.


Who should attend? Power User and Intermediate

Duration: 3 hours


Problem statement

In this lab, we review Yeast manufacturing operations – specifically, the fermenter. A typical batch fermenter cultivation takes 13 to 14 hours. Raw material variability in molasses or operational issues related to other feeds such as air and ammonia can cause ethanol (a byproduct) to exceed limits or the pH in the fermenter tank to become too acidic resulting in “bad” batch runs.


We want to use historic operations data with known "good" runs as a basis for alerts when current production parameters deviate from "golden batch" conditions. We also want to predict quality parameters, referred to generically as QP1 and QP2, and the expected yield for each batch.


In the hands-on portion, you:

  • Review the AF model
  • Use PI Integrator to publish the process values and lab data for available batches - this is used for model development
  • Use R for model development - golden tunnel and control limits 
  • Review model deployment
    • Model is deployed using PI asset analytics
    • Use PI Vision displays and PI Notifications to monitor a batch in  real-time using the golden tunnel criteria


Yeast AF Model


Yeast golden tunnel

Yeast PCA equation for Asset Analytics in AF

Yeast PI Vision golden tunnel

This Lab was part of PI World 2018 in San Francisco. The Lab manual used during the instructor led interactive workshop is attached.  Lab VM is available via OSIsoft Learning 


In a crude oil refinery, gasoline is produced in the stabilizer (distillation) column. Gasoline RVP is one of the key measurements used to run and adjust the column operations. Refineries that do not have an online RVP analyzer have to rely on lab measurements, available only a few times - say, a couple of samples - in a 24-hour operation.


As such, column process values (pressure, temperature, flow etc.) and historical RVP lab measurements can be used via machine learning models to predict RVP more often (say, every 15 minutes or even more frequently) to guide the operator.


Stabilizer (distillation) column producing gasoline in an oil refinery

Figure: Stabilizer column


AF data model

Figure: Stabilizer column - AF data model


In the hands-on portion, you:

  • Review the AF model
  • Use PI Integrator to prepare and publish historical data (to a SQL table) - this data is used for model development
  • Review the step-by-step machine learning model development process in Python/Jupyter
  • Deploy the model for real-time operations
    • Use PI Integrator to stream real-time stabilizer process data to Kafka; then, using Python and a Kafka consumer, calculate the model-predicted RVP and write it back to PI via PI Web API
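The scoring step above can be sketched in plain Python. Here the Kafka consumer is simulated with a list of JSON strings, and the PI Web API write is shown only as the payload that would be POSTed to the prediction tag's streams endpoint; the model coefficients and field names are hypothetical (in the lab, a real Kafka client and an HTTP client do this work):

```python
import json

# Hypothetical linear model mapping stabilizer process values to RVP (psi);
# the real coefficients come from the Jupyter model-development step.
COEF = {"top_temp": -0.05, "pressure": 0.8, "reflux_flow": 0.01}
INTERCEPT = 12.0

def predict_rvp(values):
    """Apply the (hypothetical) trained model to one record of process values."""
    return INTERCEPT + sum(COEF[k] * values[k] for k in COEF)

def to_webapi_payload(timestamp, rvp):
    """Body for a POST to /piwebapi/streams/{webId}/value on the RVP prediction tag."""
    return {"Timestamp": timestamp, "Value": round(rvp, 2)}

# Simulated Kafka messages: one JSON record per poll from the PI Integrator stream.
messages = [
    '{"Timestamp": "2018-06-01T00:00:00Z", "top_temp": 180.0, "pressure": 9.5, "reflux_flow": 250.0}',
]
for raw in messages:
    rec = json.loads(raw)
    payload = to_webapi_payload(rec["Timestamp"], predict_rvp(rec))
    print(payload)   # in the lab, this dict is POSTed to PI Web API instead
```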


Stabilizer historical process data and lab RVP used for model development

Figure: Stabilizer column - historical process data and lab RVP measurements


RVP Jupyter Python kafka consumer

Figure: Python Jupyter notebook - shows Kafka consumer and WriteValuesToPI  snippet


The data flow sequence is as below: (to pause/play animation, save the GIF file to a local folder and open in Windows Media Player)



Gasoline RVP predicted values

Figure: Stabilizer column - historical lab RVP measurements overlaid with predicted RVP

Oil refinery process unit operation – Alkylation feed dryer (Exercise 1)

This exercise uses an oil refinery Alkylation feed dryer process to illustrate the layers of analytics - descriptive, diagnostic, predictive and prescriptive.

First, the descriptive and diagnostic portions are reviewed below.




The process consists of twin dryers – Dryer A and Dryer B - each with stacked beds of desiccant and molecular sieve to remove moisture from a hydrocarbon feed.  The dryers are cycled back and forth i.e. when one is removing moisture from the feed, the other is in a regeneration mode where the bed is heated to dry out the moisture from a previous run.


The modelling objective is to create a temperature profile representing proper regeneration of the dryer bed.  This profile is analyzed via AF Analytics and then a golden profile is extracted via R/MATLAB and subsequently operationalized again using AF Analytics, PI Notifications and PI Vision. 


The data used for this exercise comes from an actual oil refinery and covers a year (2017) of data at six-minute intervals.


PI Vision displays below show the Dryers in Process (green) and Regeneration (red) states.




The descriptive analytics consists of calculations using sensor data for temperatures, flows, valve positions etc. to identify the dryer status i.e. Operations vs. Regeneration.




The process piping configuration (via valve open/close) and the measurement instruments generating the sensor data are such that you have to perform several calculations similar to those shown above to prepare the data for subsequent steps i.e. diagnostic, predictive and prescriptive.


Also, event frames are constructed to track the start and end of each regeneration cycle for Dryers A and B.



More calculations with the flow sensor data are done for the dryer processing age, defined as:

Dryer processing age = Lifetime volume of feed dried by a bed (bbls) / Molecular sieve load in dryer (lbs)


Since the feed flow rate varies, additional analysis is done to calculate the volume (bbls) of feed processed before each regeneration cycle.
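As a sketch of that rollup, here is a trapezoidal integration of a varying flow rate over six-minute samples; the sample values are invented, and the lab performs the equivalent calculation with AF Analytics:

```python
# Volume (bbls) of feed processed between regenerations, from a varying flow
# rate sampled at six-minute intervals.
SAMPLE_HOURS = 6.0 / 60.0  # six-minute interval, expressed in hours

def volume_processed(flow_bbl_per_hr):
    """Trapezoidal integration of flow rate (bbl/hr) over one processing cycle."""
    total = 0.0
    for a, b in zip(flow_bbl_per_hr, flow_bbl_per_hr[1:]):
        total += (a + b) / 2.0 * SAMPLE_HOURS
    return total

flows = [100.0, 120.0, 110.0, 90.0, 95.0]  # bbl/hr across four intervals
print(volume_processed(flows))
```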






Event frames with the requisite data for additional diagnostics are exported using PI Integrator for Business Analytics.




Fit for Purpose - Layers of Analytics using the PI System


Continue reading

The following is from the lab notes for the hands-on lab "Fit for Purpose - Layers of Analytics using the PI System: AF, MATLAB, and Machine Learning" at PI World 2018, San Francisco, CA

Lab VM is available via OSIsoft Learning 

Part 1 Introduction

Part 2 Alky feed dryer – process analytics - descriptive and diagnostic (Exercise 1)

Part 3 Alky feed dryer – process analytics - diagnostic/predictive/prescriptive (Exercise 1 continued)

Part 4 Motor/Pump – maintenance analytics – usage based, condition based and predictive (Exercise 2)


Layers of analytics can be viewed through many lenses.  Frequently, it refers to the levels of complexity and the kinds of computations required to transform “raw data” to “actionable information/insight.”  It is often categorized into:

  • descriptive analytics - what happened
  • diagnostic analytics - why did it happen
  • predictive analytics - what can/will happen
  • prescriptive analytics - what should I do, i.e. prescribing a course of action based on an understanding of historical data (what happened and why) and future events (what might happen)

The purpose of the analytics i.e. whether it is for descriptive or diagnostic or predictive or prescriptive will influence the “raw data” calculations and transforms.  The following graph shows “value vs. difficulty” as you traverse the layers.



Layers of analytics can also be viewed through a “scope of a business initiative” lens – for example, in asset maintenance and reliability, the layers are:

  • UbM – Usage-based Maintenance  -  AF
  • CbM – Condition-based Maintenance -  AF
  • PdM – Predictive Maintenance - AF plus third party libraries




Layers of analytics can also be categorized by where the analytics is done, such as:

  • Edge analytics
  • Server based analytics
  • Cloud-based analytics


Analytics at the edge include those done immediately with the collected data. This lessens network load by reducing the amount of data forwarded to a server - for example, performing a Fast Fourier Transform (FFT) on vibration waveforms to extract frequency spectra. Edge analytics also applies when an action must be taken immediately based on the collected data, without waiting for a round-trip to a remote analytics server.
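As a minimal illustration of that FFT reduction, using a synthetic vibration waveform (the sample rate and component frequencies are made up for the example):

```python
import numpy as np

# Reduce a simulated vibration waveform to its dominant frequency --
# the kind of summarization an edge device might do before sending data on.
fs = 1000.0                           # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)       # one second of samples
wave = np.sin(2 * np.pi * 60.0 * t) + 0.3 * np.sin(2 * np.pi * 120.0 * t)

spectrum = np.abs(np.fft.rfft(wave))              # magnitude spectrum
freqs = np.fft.rfftfreq(len(wave), d=1.0 / fs)    # frequency axis, Hz
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)
```

Instead of shipping 1,000 raw samples per second, the edge device forwards only the peak frequency (and perhaps a handful of spectral bands) to the server.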


In the hands-on portion of this Lab:

  • Exercise 1 uses an oil refinery process unit operation (Alky feed dryer) to walk through the layers, i.e. descriptive, diagnostic, predictive and prescriptive
  • Exercise 2 uses a maintenance/reliability scenario (pump/motor assembly) to illustrate the layers, i.e. UbM, CbM, and PdM


Items not included in the detailed hands-on portion will be covered as discussion topics during the Lab.


Continue reading:

Part 2 Alky feed dryer – process analytics - descriptive and diagnostic

On Dec 4th, 2019, we had a one-day "Data & Analytics to Support Knowledge Management in Life Sciences" event at the MIT Samberg Center in Cambridge, MA.  


The presentations were not recorded, but the links to the slides are below.

If you have questions, please ask in the Comments section below. 







Every day, you work with different OSIsoft software products to make data-driven decisions, monitor the health of your assets, build dashboards, or support others in getting access to data. As you interact with these products, there may be features you wish a product had that are not available in the latest released version. OSIsoft product managers want to know what these missing features are and how they could help you be more efficient.

A few years ago, we created a site so that you could communicate with OSIsoft product managers directly and share your enhancement ideas, suggestions and feedback about OSIsoft products and services:

This feedback site is the system of record of OSIsoft’s product management; product managers review the feedback daily, and the received feedback impacts the decisions they make in product development and how they prioritize different features.

In the early months of implementing the feedback site, we made it possible for the product-related "Ideas" collected on PI Square to get synced to the feedback site. This was helpful in continuing to collect ideas while users got familiar with the feedback site. We are now a few years past those early months, and it is time to stop the sync between PI Square and the feedback site.

This means all product-related feedback needs to be shared on the feedback site or through the feedback widgets embedded with OSIsoft web-based applications (e.g. PI Vision, OSIsoft Cloud Services).

As a result, "Ideas" are no longer available on the public spaces within PI Square (e.g. All Things PI, PI DevClub). Please note that for any private groups on PI Square, the "Ideas" option will remain active to support the needs of the group. However, these ideas do not get synced with our feedback page, so any product-related idea should be posted directly on the feedback site.

What makes event frames great is that they are an extremely rich bookmarking feature for your real-time data. They can be templatized, have a security model, store computed results, etc. Event frames are stored in the PIFD SQL database that underlies the AF database, and because of this large feature set, they also take up a lot of space there. This means it is possible to run into scaling problems when using event frames.

Whenever you have a scaling problem, you typically want to do the following things:

1. Size appropriately

2. Monitor

3. Limit growth

4. Offloading

5. Retain only what is needed


In this blog post, I want to talk about 2. Monitor.


How to do this manually

Within PI System Explorer (PSE), under the property page for a database, you can view a count of the objects contained in that database - in particular, event frames.



As event frames of the same AF server are all stored in the same SQL database, perhaps a better number to monitor is the global number of event frames, which is found in the counts for the PI AF Server.


It is also possible to get this information using the AF SDK, via the GetObjectCounts method:


// Connect to the AF server and query object counts for the entire server
var af = new PISystems()[afServerName];
var counts = af.GetObjectCounts(null);
Console.WriteLine($"Total number of event frames: {counts[AFIdentity.EventFrame]}");


Obtain the same information using an SQL Query


As Event Frames are stored in an SQL database, one could hope that there is a table that contains only event frames and the number of rows should be related to the number of event frames.

And that is indeed the case.


Here is the query:

SELECT t.name AS [Table Name],
       p.rows AS [Row Counts]
FROM sys.tables t
JOIN sys.partitions p ON t.object_id = p.object_id
WHERE t.name LIKE 'AFEventFrame'
  AND p.index_id IN (0, 1);


Monitor and historize this metric

To retrieve a large amount of data out of a SQL database, the PI Interface for RDBMS is of course the way to go.

But, to historize only a few values at a slow rate, we can combine the AF table lookup feature with the analysis service.

First, I would create a connection to PIFD using a linked table.

This allows us to retrieve this information via a table data reference


We can then historize this data using a periodic AF analysis to store the values into a tag.


This now helps you monitor Event Frame growth.


Of course, this number is of limited use unless you actually have an idea of how many event frames you want to keep in your PI System. This typically comes from an initial sizing based on your system specs, querying needs, etc., and is refined based on your actual experience. But that is a topic for another time.


Did You Know?  Rubik's Cube

Posted by chuck Jul 31, 2019

From time to time I am reminded of little-known bits from OSIsoft's history. This is one I just have to share with you.


A young OSIsoft support/developer engineer named Dan Knights was crowned World Champion in speed-solving the Rubik's Cube on 24 August 2003. Yes, true fact. The World Championships were held in Toronto, Ontario that year, with over 100 challengers from over 20 countries - all competing for various titles requiring solving the colorful cube. 24-year-old Dan won with a Guinness world-record-beating time of 20.2 seconds, surpassing the previous record of 22.95 seconds. The previous world record had been set at the World Championships in 1982 in Budapest. Knights had only been solving the cube for about four years - he started playing with the cube after reading on the Internet about a woman who had solved the puzzle in 17 seconds.


The woman was Jessica Fridrich, a professor from New York who developed the widely used Fridrich method of solving the cube after first having mastered the cube in her native Czechoslovakia back in 1981.  (Ms Fridrich was 39 in 2003.)  Ms Fridrich placed second in the finals where Dan found himself in first place.


Dan Knights said he trained for the competition by practicing as much as he could and by using hypnotherapy to overcome stage fright. He was amazed that he and the event drew so much attention. "I just kept thinking, 'It's only a toy.'" Following the competition, Dan took it easy for a while: "I'll just sit on the couch and veg." He was thinking of using the prize money for a trip to Hawaii.


Shortly after winning his championship, Dan left OSIsoft for other endeavors.  

Here is Dan's own page on cubing:

Want to watch a much younger Dan solve the cube?


Reference: Toronto Star (25 August 2003), "American 'knighted' Rubik's cube champ"

OSIsoft is pleased to announce the release of PI Web API 2018 SP1. This is a standalone installation kit, available on the OSIsoft Customer Portal and grouped together with PI Server downloads. The PI Web API is a suite of REST services that provide access to PI System data. The product is a member of the Developer Technologies suite of products and is targeted at providing cross-platform, multi-user programmatic access. Some highlights of this release are:

  • Notifications
    • Read for child NotificationContactTemplates
    • Read for NotificationPlugIn
    • Read, create, update, and delete for SecurityEntries on NotificationRules, NotificationRuleTemplates and NotificationContactTemplates
    • Create, update and delete for NotificationRules, NotificationRulesTemplates, NotificationContactTemplates and NotificationRuleSubscribers
    • GetByPath endpoints for NotificationContactTemplates, NotificationRules, and NotificationRuleTemplates
    • Search endpoint for NotificationContactTemplates
  • Stream Sets GetJoined Endpoint
    • Returns a set of recorded values (x-axis) with another set of data for any number of streams (Y, Y', Y''... axis) that are interpolated based on the points returned for the x-axis
  • Stream Updates (CTP)
    • Client code will be notified of changes in AF metadata through an Exception item in the response payload.
    • The selectedFields parameter is now honored in both registration and poll for updates.
    • Responses now include PreviousEventAction information with each data value.
    • Error messages are returned for markers that are no longer valid with every poll using that marker. Previously, the error message would be returned only once.
  • Expose 'Paths' and 'DataReference' properties on attribute objects
  • Expose 'Paths' property on element objects
  • The version of Web ID returned by PI Web API can be configured
    • PI Web API instances that run 2018 SP1 can now work together with older versions behind a load balancer since they can be configured to return the same version of Web ID


To download, visit the OSIsoft Customer Portal. Installation kits are grouped with PI Server downloads.

With the new release of PI Web API, OSIsoft is pleased to announce some updates  to our getting started material for developers.  

To facilitate developer learning and best practice use of the latest PI Web API, we will be supporting the release with:

1)      New code samples on Github

2)      New private online course for PI Web API developers - available on PI Square in mid-June 2019

These additions will replace the previous client libraries for PI Web API, removed from GitHub in November of 2018, with new code samples developed and approved by OSIsoft engineering.

In addition, with the release of PI Web API 2018 SP1, we will be removing the "Open API Specification" (formerly known as the "Swagger™ Specification") from PI Web API. While the Open API Specification facilitated the rapid generation of client code libraries, it had not been officially tested and validated with the wide diversity of code-generation tools available to developers. In some cases, code generators could create suboptimal code.

To follow best practices, and help developers learn how to build optimal code for PI Web API, OSIsoft decided to remove the Open API Specification from the latest PI Web API installation.  The new, approved code samples on GitHub are the preferred approach to learn, and ensure optimal coding practices.

If you have downloaded, and have access to, the client libraries or the Open API Specification, feel free to continue to use these for learning needs; however, please do not incorporate the sample client libraries into your production applications. For those using the Open API Specification, please recognize that generated code may not be optimized and may require further review and optimization.

If you have any questions about the new code samples or changes, please contact Frank Garriel, Technical Product Manager.

Hard to imagine but this time next week we will be gathering in San Francisco for PI World 2019.  Seems like just yesterday we started reviewing proposals for talks and labs and deciding what to include in our agenda!  Here are a few of my favorite talks you can look forward to.  (Remember we publish most PI World talks to our website within just a day or a few days after the talk is given at PI World.)


Session Code US19NA-D2MM02

Title: Artificial Intelligence-enabled autonomous operations at CEMEX with Petuum Industrial AI Autopilot

Description:  CEMEX Manufacturing processes are concerned with quality, production costs from energy, fuels and materials and equipment efficiency, which in turn are often simultaneous tradeoffs, to be made in real time.

Standardized repeatable processes easily replicated day after day, require understanding of process variables, forecasts and recommendations on changes that can be incorporated into workflows. CEMEX will show how companies working together created a data flow using PI infrastructure between plant control systems.

CEMEX speakers will guide the audience on how actionable prescriptions in real-time for plant subsystems were validated and implemented into the operational control systems in supervised operations.

Track Day 2: Mining, Materials, Supply Chain

Speakers Rodrigo Javier Quintero De la Garza (Cemex), Prabal Acharyya (Petuum)



Session Code US19NA-D2FB03

Title: Small effort with a big Payoff: Using PI Event Frames to drive Pack Line productivity (Cargill)

Description:  Cargill is one of the largest privately held businesses in the world, with a diverse portfolio ranging from Agriculture and Food to Industrial and Financial businesses. Working across so many varied businesses means standardization can be a challenge.

Cargill wants to share how localized efforts in the use of PI AF, Event Frames, and Data Link can lead to new insights that drive tangible action to improve pack line productivity. In many Cargill facilities, downtime is tracked for events greater than 1 minute and sites record context around these events. But what about micro stops? During daily production meetings, Cargill Fullerton consistently tracked micro stops as a top contributor to production loss. As a result, the site utilized PI Event Frames to gain more granular insight to those losses and drive the team from a reactive to a proactive culture. This talk will take you through the journey of solution implementation, quantifying the data, the findings and the value realized.

Track Day 2: Food and Beverage

Speakers Lauren Vahle (Cargill Global Edible Oil Solutions); Monica Varner-Pierson (Cargill)



Session Code US19NA-D2TT04

Title: Concurrent Programming for PI Developers

Description:  All too often projects fail because the capabilities of your programming toolstack are not being exploited to their fullest. We will show you how to break out of this vertical-stack prison by demonstrating how concurrent programming works. You will be exposed to Google's Go, which is a high-performance language and toolchain specifically geared towards concurrent programming. We will show how you can take advantage of Go in IoT projects and within your datacenter so that your projects may unlock the full potential of your existing hardware investment.

Track Day 2: Tech Talks

Speakers Christopher Sawyer (OSIsoft)



Session Code US19NA-D2MA03

Title: Integration & Transformation of Data for Analysis & Quality Control in Real Time (Kimball)

Description: Presentation shows how we started collecting data to optimize our processes and improve our quality costs.

Track Day 2: Manufacturing

Speakers Josue Fernandez (Kimball Electronics)

Chuck's notes: The abstract doesn't do this talk justice. The talk is a good introduction to the challenges of discrete manufacturing (versus continuous). PI System infrastructure was used in combination with existing and new instrumentation (IoT) to accomplish a unified set of data for consumption by personnel, as well as an "in place" upgrade of manufacturing assets using data. The solution brings data into the PI System from more than a dozen different special-purpose machines, each with their own data tracking and status monitoring software. The customer uses these unified data to track process and quality across their operations.



Session Code US19NA-D2PG07

Title: How OSIsoft PI supports Uniper's Maintenance Strategy Planning

Description: Uniper started its 3-year digitization journey in 2016. A major element of the program is the consistent and harmonized introduction of the OSIsoft PI System for our asset fleet. Now, 2/3 of the way through the program, Uniper leverages the central availability of machine data to optimize its maintenance CAPEX budget, which it reduced by 16% for 2019ff.

Track Day 2: Power Generation

Speakers Stephan Dr.-Ing. van Aaken (Uniper SE)



Session Code US19NA-D2P101

Title: Selecting the Right Analytics Tool (Omicron)

Description: There are several analytics tools and approaches available for working with PI data: Performance Equations, AF analytics, custom data references, PI ACE, PI DataLink and Business Intelligence (BI) tools. It can be a quandary to determine which tool you should use for what. Should you focus on only one tool or use a mix? As it turns out, the answer is not as simple as basing it on the specific analytic. Other considerations should factor into the decision, including scalability, reliability, maintainability, and future-proofing, to name a few.

This talk will discuss the various tools available for performing analytics on PI data and their strengths and weaknesses, including their scalability, reliability, maintainability, and future-proofing. The tools will be separated into two major classes - server-side (persistent) analytics and client-side (query-time) analytics - and the general differences between the two classes. Attendees will learn practical guidelines for selecting analytics tools.

Track Day 2: PI Geek Track

Speakers David Soll (Omicron)



Session Code UC19NA-D2FW03

Title: Using operating data to enhance operations and spark sustainable innovation (UC Davis)

Description: The UC Davis campus operates as a mini-city, with its own utilities serving about 1,200 buildings. Operating data from these systems is stored in a PI database. Two teams within Facilities Management use PI data to generate operational improvements and enhance collaborations. This enables a culture of sustainable innovation.

The Buildings Energy Engineering team implements projects to save energy in campus buildings and recoups financial savings to fund its operations. The talk will present innovative optimizations implemented in HVAC systems, and measurement & verification methods used to demonstrate financial savings.

The Utilities Data and Engineering team supports the operations and growth of the utilities systems. The team automated a process for identifying and solving energy meter issues. The team also developed a visualization that brings together the operation of the chilled water plant with the 100+ buildings connected to the chilled water loop.

Track Day 2: Water, Facilities, and Data Centers

Speakers Nicolas Fauchier-Magnan (University of California, Davis), Joseph Yonkoski (University of California, Davis)



We hope you find this teaser interesting and hope you will join us for PI World San Francisco 2019.  Remember:

  • We will live stream our morning keynote sessions from Tuesday, 9 April 2019 to the internet.  You can join with us live even if you can't attend in person.
  • The  talks above and over a hundred other sessions from PI World will be published to our website - check back later to see these talks!

We are working hard to continuously make your experiences with OSIsoft better and the next major milestone towards that goal will happen in Q1 2019. We’ll be reconstructing the majority of the Tech Support site into a portal experience that we’re calling myOSIsoft.


Don't worry, we're not changing our phone support or our PI Square communities. We are adding better ways for you to find KB articles, look at old cases, initiate new cases, search for solutions that span from your own cases to PI Square, and more!


See updates and early screenshots at


Sign up for monthly updates on features we’re building

