
The PI Geek Talks are generally presented by partners and customers.  The target audience is PI Admins and Developers.  All the talks will be on Wednesday, Day 2, at the Parc 55 Hotel in the Powell room located on Level 3.  You are invited to read the PI World Agenda for more information.  Along with Day 2 Tech Talks, these are great reasons to take the walk from the Hilton to the Parc 55.

 

 

Selecting the Right Analytics Tool

David Soll, Omicron

There are several analytics tools and approaches available for working with PI data: Performance Equations, AF analytics, custom data references, PI ACE, PI DataLink, and Business Intelligence (BI) tools. It can be a quandary to determine which tool to use for what. Should you focus on only one tool or use a mix? As it turns out, the answer is not as simple as basing it on the specific analytic. Other considerations should factor into the decision, including scalability, reliability, maintainability, and future-proofing, to name a few.

This talk will discuss the various tools available for performing analytics on PI data, along with their strengths and weaknesses, scalability, reliability, maintainability, and future-proofing. The tools will be separated into two major classes, server-side (persistent) analytics and client-side (query-time) analytics, and the general differences between the two classes will be covered. Attendees will learn practical guidelines for selecting analytics tools.

David Soll

 

 

Providing Enterprise Level Visibility for Snowflakes Using PI and AF

David Rodriguez, EDF Renewables, and Lonnie Bowling, Diemus

As part of a larger project to monitor a large number of distributed wind farms throughout the US and Canada, the customer wanted visibility into substation status information. This included showing substation one-line diagrams, voltage regulation status, breaker status, and events to notify them of any issues. Each wind project was designed and installed by others, which resulted in large differences between sites, including variability in networking, communications, and tag configuration. In other words, each project was like a snowflake. Using PI, AF Analytics, and Event Frames, a solution was developed to normalize all wind projects. Once standardization was achieved, we then defined substation one-line circuits using an AF hierarchy. Data visualization was developed to provide on-demand, real-time rendering of circuits, voltage regulation trends, events, and supporting information. This was implemented enterprise-wide and allowed easy access and visibility for everyone in the organization.

David Rodriguez   Lonnie Bowling

 

Just Another Weather Application – Evaluating the OSIsoft Cloud System

Lonnie Bowling, Diemus

This session will showcase a weather application designed using the new OSIsoft Cloud System (OCS).

A backyard weather station was used as the source of live and historical data. Forecast data was then added to provide a complete picture of historical, current, and forecasted weather. Once all the data was streaming into an OCS sequential data store, a full-stack front-end solution was developed. This included an API layer in C#, Angular for the UI, and D3 for data visualization. A complete solution was developed to fully evaluate how OCS could be used in a real-life, purpose-built application. Key takeaways, including challenges, an architectural review, and source-code highlights, will be shared.

Lonnie Bowling

 

Data Analytics to enhance Advanced Energy Communities planning and operation

John Rogers and Alberto Colombo, DERNetSoft

In today’s energy marketplace, poor energy awareness and a lack of data visibility, coupled with the technical complexities of DER integration, lead to a gap in local Advanced Energy Community development. DERNetSoft provides a scalable solution to this issue, making it possible to build advanced energy communities by increasing energy awareness, enabling Distributed Energy Resources (DER) planning, and supporting their operational optimization. We transform data into actionable insight and value through advanced analytics and machine learning techniques applied in the energy industry at the community level.

 

 

Data Quality & Shaping: Two Keys to Enabling Advanced Analytics & Data Science for the PI System

Camille Metzinger and Kleanthis Mazarakis, OSIsoft

Data quality is critical to the success of data-driven decisions. Issues with data will impact users across the organization, from operators and engineers to data scientists and leaders. Answering business intelligence questions such as “which assets are performing well and which are under-performing” requires a birds-eye view of the data, which may require (re)shaping of the data within the PI System. This talk and demo will explore the aspects of data quality and data shaping using PI System infrastructure and illustrate why they are so critical for success. We will also demonstrate the steps to improve data quality in the PI System and shape PI System data to give it the right context for your advanced analytics.

Camille Metzinger   Kleanthis Mazarakis

Machine Learning Pipeline 2: Writing Machine Learning Output back to the PI System

In this blog post I will show how to write machine learning output, produced in Python, back to the PI System.

 

This blog post is preceded by Machine Learning Pipeline 1: Importing PI System Data into Python.

 

Output of machine learning

The output of the machine learning model is expected to be a numpy array. The number of output features and the output length determine the dimensions of this numpy array.

 

Dimension: (output length, output features)

 

where:

output length = number of predicted time steps

output features = number of predicted features (for example: Temperature, Humidity, ...)

 

Example:

Dimensions of this numpy array are: (192, 9)

9 columns and 192 rows
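As a minimal illustration (random numbers stand in for real model output here, and the name predict is just an example):

import numpy as np

# hypothetical model output: 192 predicted time steps x 9 predicted features
predict = np.random.rand(192, 9)

print(predict.shape)   # (192, 9)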

 

These values do not yet have a timestamp.

 

Generating a timestamp

Depending on how the predicted values were generated, a timestamp must be generated for each of them before they can be written to the PI System.

The timestamp format for the PI System is:

 

"YYYY-MM-DDThh:mm:ssZ"

 

Python's datetime module can be used to generate timestamps:

 

from datetime import datetime
from datetime import timedelta

timestamp = datetime.now()

print(timestamp)

now = datetime.now() # current date and time

year = now.strftime("%Y")
print("year:", year)

month = now.strftime("%m")
print("month:", month)

day = now.strftime("%d")
print("day:", day)

time = now.strftime("%H:%M:%S")
print("time:", time)

date_time = now.strftime("%Y-%m-%dT%H:%M:%SZ")
print("date and time:",date_time)

(This is just an example of how the individual parts of the timestamp can be generated.)

 

Python's timedelta can be used to add time to a timestamp. We will use timedelta to generate the timestamps for our predicted values. In our case we know that the sampling interval of our values is 1 hour. (This is by design, as we earlier imported events with the same sampling frequency.)
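As a sketch, assuming the prediction starts one hour after the last known event and reusing the 192 time steps from the example above (the start time is hypothetical):

from datetime import datetime, timedelta

# hypothetical timestamp of the last known historical event
last_known = datetime(2019, 3, 1, 0, 0, 0)

# one timestamp per predicted time step, 1 hour apart
timestamps = []
for i in range(192):
    ts = last_known + timedelta(hours=i + 1)
    timestamps.append(ts.strftime("%Y-%m-%dT%H:%M:%SZ"))

print(timestamps[0])    # 2019-03-01T01:00:00Z
print(timestamps[-1])   # 2019-03-09T00:00:00Z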

 

Posting output to the PI System

 

The following code uses the Python requests library to send an HTTP POST request to the PI Web API endpoint:

Requests: HTTP for Humans™ — Requests 2.21.0 documentation

 

import requests   # timedelta is already imported above

# 'predict' is the numpy array of predicted values, 'timestamp' holds the timestamp of the
# last known event, and 'b64Val' is the Base64-encoded credential string for basic authentication
for event in predict:

    # build timestamp of format "YYYY-MM-DDThh:mm:ssZ" (1 hour delta between predicted events)
    timestamp = timestamp + timedelta(hours=1)
    pi_timestamp = timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")

    # take only the first column
    value = event[0]

    # write back to the PI System
    response = requests.post(
        'https://<PIWebAPI_host>/piwebapi/streams/<webID_of_target_PIPoint>/value?updateOption=InsertNoCompression',
        data={'Timestamp': pi_timestamp, 'UnitsAbbreviation': '', 'Good': 'true',
              'Questionable': 'false', 'Value': value},
        headers={"Authorization": "Basic %s" % b64Val},
        verify=True)

 

Here the UpdateValue method of PI Web API is used:

UpdateValue POST streams/{webId}/value

 

Efficiency can be improved by first creating all the JSON objects for the events that are to be posted to the PI System, per PI Point, and sending them in bulk using the UpdateValues method:

UpdateValues POST streams/{webId}/recorded
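A minimal sketch of that bulk approach for a single PI Point, assuming the same predict array, timestamp starting point, and b64Val credentials used in the loop above (host and WebID placeholders left as-is):

import requests
from datetime import timedelta

# build one value object per predicted event (first output column only)
items = []
for event in predict:
    timestamp = timestamp + timedelta(hours=1)
    items.append({'Timestamp': timestamp.strftime("%Y-%m-%dT%H:%M:%SZ"),
                  'Value': float(event[0]),
                  'Good': True,
                  'Questionable': False})

# send all events for this PI Point in a single request
response = requests.post(
    'https://<PIWebAPI_host>/piwebapi/streams/<webID_of_target_PIPoint>/recorded',
    json=items,
    headers={"Authorization": "Basic %s" % b64Val},
    verify=True)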

Machine Learning Pipeline 1: Importing PI System Data into Python

With this blog post series I want to enable data scientists to quickly get started doing data science in Python, without worrying about how to get the data out of the PI System.

 

Specifically, I want to highlight two options for getting PI System data into Python for use in data science:

 

  1. Writing PI System data into a .csv file and using the .csv file as the data source in Python.
  2. Directly accessing the PI System using HTTP requests in Python.

 

Approach 1: Extracting PI System Data into a .csv file

Please check out these 3 ways to extract PI System data into .csv files:

 

Extracting PI System data in C# with AFSDK:

Extracting PI System Data to file using AFSDK in .NET

 

Extracting PI System data in C# using PI SQL Client OLEDB:

Extracting PI System Data to file using PI SQL Client OLEDB via PI SQL DAS RTQP in .NET

 

Extracting PI System Data in Python using PI Web API

Extracting PI System Data to file using PI Web API in Python

 

In each of the above approaches, all events for the requested PI Points are extracted, no matter how far apart the events are in time.

This may not be desirable, especially when using the data for time series prediction. In that case, you would have to replace the "RecordedValues" method with the "Interpolated" method in order to define a sampling frequency:

 

PI Web API:

GetInterpolated GET streams/{webId}/interpolated

 

AFSDK:

AFData.InterpolatedValues Method

 

  • PI DataLink can also be used to create the .csv file, but the focus here is on programmatic approaches.

 

Reading data from .csv file in Python

Sample .csv file:

The events are stripped of their timestamps; since the events have a fixed sampling frequency, a timestamp is unnecessary.
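For illustration, a file of the shape described above might look like this (column names and values are hypothetical):

Temperature,Humidity
21.4,45.2
21.6,44.9
21.9,44.5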

 

 

import numpy as np
import csv

dataset = np.loadtxt(open('filepath_csv', "rb"), delimiter=",", skiprows=1)

 

skiprows=1 will skip the first row of the .csv file. This is useful when the header of the file contains column descriptions.

The columns of the .csv file are stored in a numpy array, which can be further used for machine learning.
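For example, the shape of the loaded array can be checked and individual columns selected like this (the column meanings are hypothetical):

# 'dataset' is the array loaded above
print(dataset.shape)          # (number of rows, number of columns)

temperature = dataset[:, 0]   # first .csv column
humidity = dataset[:, 1]      # second .csv column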

 

Approach 2: Directly accessing the PI System using HTTP requests in Python

For this approach we make use of the requests library in Python.

Requests: HTTP for Humans™ — Requests 2.21.0 documentation

 

The PI Web API GetInterpolated method is used to extract constantly sampled values of a desired PI Point:

GetInterpolated GET streams/{webId}/interpolated

 

In order to retrieve data for a certain PI Point, we need its WebID as a reference. It can be retrieved via the built-in search of PI Web API.

In this case the WebID can be found here:

 

 

 

Using the requests library of Python and the GetInterpolated method of PI Web API, we retrieve the sampled events of the desired PI Point as a JSON HTTP response:

 

import requests

response = requests.get(
    'https://<PIWebAPI_host>/piwebapi/streams/<webID_of_PIPoint>/interpolated?startTime=T-10d&endTime=T&Interval=1h',
    headers={"Authorization": "Basic %s" % b64Val},
    verify=True)
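Here b64Val is assumed to be the Base64-encoded "username:password" string used for basic authentication. It can be built, for example, like this (the credentials are placeholders):

import base64

b64Val = base64.b64encode(b"<username>:<password>").decode("ascii")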

 

The response is in JSON format and will look something like this:

 

 

Parsing the JSON HTTP response:

We only need the values of the events. As they are interpolated, we do not care about the quality attributes. The timestamp information is implied by the sampling interval that we specified earlier in the GetInterpolated method of PI Web API.

We assume that we have two JSON responses, r1 and r2, for two different PI Points, both generated with the GetInterpolated method, with the same sampling interval, over the same time range.

 

 

import json
import numpy as np

json1_data = r1.json()
json2_data = r2.json()

data_list_1 = list()

for j_object in json1_data["Items"]:
    value = j_object["Value"]
    if type(value) is float:  # important, so as not to include elements of type "dict" (e.g. bad values)
        data_list_1 = np.append(data_list_1, float(value))

data_list_2 = list()

for j_object in json2_data["Items"]:
    value = j_object["Value"]
    if type(value) is float:
        data_list_2 = np.append(data_list_2, float(value))

# Stack both 1-D lists into a 2-D array:
array_request_values = np.array(np.column_stack((data_list_1, data_list_2)))

 

This Python code parses the JSON HTTP responses and writes the values into two separate lists. These are then stacked into a numpy array:

 

Example:

 

 

This numpy array can be used as input for machine learning.
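As a quick sanity check before feeding the array into a model, its shape can be inspected (the row count depends on the chosen time range and interval; startTime=T-10d, endTime=T and Interval=1h gives 241 rows, since both endpoints are included):

print(array_request_values.shape)   # e.g. (241, 2)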

 

Please check out Machine Learning Pipeline 2 for an easy way to write machine learning output back to the PI System.


Using VS Code with AF SDK

Posted by rborges Employee Mar 6, 2019

From time to time we hear several excuses for not jumping into software development with PI data. Today we will discuss the top two that I get and how to address them with a very simple solution. Here they are:

 

  1. I must be an administrator to install development applications;
  2. The tools required for developing applications in a commercial environment are expensive;

 

1. Free IDEs and Licensing

 

Let's start by talking about the cost of applications used for software development. Yes, they are usually expensive, but there are pretty good free alternatives. The problem is that you have to be really careful with the license each one uses and what your responsibilities are. For instance, the first version of Eclipse's EPL was so vague that auto-generated code could be interpreted as derivative work, which could make it mandatory for the developer to open the source code. Since EPL 2.0 this has been fixed and Eclipse is now business-friendly.

 

Now, if you want to stick with Microsoft tools, there are two free alternatives. The first one is Visual Studio Community, a slim version of the full-blown Visual Studio. But it has a proprietary license that is not suitable for enterprise use, as it states:

If you are an enterprise, your employees and contractors may not use the software to develop or test your applications, except for: (i ) open source; (ii) Visual Studio extensions; (iii) device drivers for the Windows operating system; and, (iv) education purposes as permitted above.

 

So if you are writing a tool to process your corporate data, it's a no-go. This pushes us to the other alternative, VS Code, a free, yet very powerful, community-driven IDE that's suitable for enterprise development. It uses a simple license that allows you to use the application for commercial purposes and in an enterprise environment, as clearly stated in item 1.a: "you may use any number of copies of the software to develop and test your applications, including deployment within your internal corporate network".

 

2. You don't need to be an Admin

 

This item is really easy to fix, as Microsoft offers a standalone version of VS Code. You just have to download the .zip version from here and run the executable. If you are not able to execute the file, the company you work for may be blocking it; in this case, you should talk to your manager or IT department.

 

Well, what now? Is it that simple? Unfortunately, no. Due to Microsoft's increasing participation in the open-source world, several of its products now target the open-source audience. More specifically for the .NET platform, Microsoft created the .NET Foundation to handle the usage of the .NET products under OSI (Open Source Initiative, not OSIsoft) rules. Because of this, VS Code is designed to work natively with .NET Core rather than the standard .NET Framework. So, in order to use AF SDK, you need some tweaks. But fear not, they are really simple!

 

3. Adding the C# Extension to VS Code

 

In this step, we will install the C# extension that allows the IDE to process .cs and .csproj files. It's very simple: just go to the Extensions tab (highlighted below, or press Ctrl+Shift+X), type "C# for Visual Studio Code" and hit install.

 

 

4. Creating a .NET Standard project in VS Code

 

First, we start VS Code and go to File -> Open Folder so we can select a folder that will be the base of our demo project. Once you see the welcome screen, go to View -> Terminal so you can use the integrated terminal and the .NET CLI to create a new .NET project. Because we are creating a console application, we type:

 

PS C:\Users\rafael\afdemo> dotnet new console

 

If you really need to work with VB.NET, you can append "-lang VB" at the end (but please, don't).

 

You should end up with a structure similar to this:

 

 

If by any chance your structure is missing the .vscode folder, press Ctrl+Shift+B to compile the project and VS Code will ask if you want to add a default build task. Just say yes.

 

5. Converting from .NET Core to the .NET Framework

 

Now that we have our project, we must change it to target the full .NET Framework so we can later reference the AF SDK assembly. We have to change two files. The first one is the .csproj file. You have to change it from this:

 

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.2</TargetFramework>
  </PropertyGroup>
</Project>

 

To this:

 

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net472</TargetFramework>
    <PlatformTarget>x64</PlatformTarget>
    <DebugType>full</DebugType>
  </PropertyGroup>
</Project>

 

Here we see the first caveat of this approach: the application must be 64-bit. Also, note that I'm explicitly referencing the .NET Framework version that I want to work with. The list of available frameworks (as well as how to write them in the XML file) can be found here.

 

Now we move on to the recently created launch.json inside the .vscode folder. If you pay close attention, you will see that it references the type "coreclr". You have to change it to type "clr". Also, the program path must match the referenced framework version. So, if everything goes as planned, you will change from this:

 

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Launch (console)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/netcoreapp2.1/afdemo2.dll",
      "args": [],
      "cwd": "${workspaceFolder}",
      "console": "internalConsole",
      "stopAtEntry": false,
      "internalConsoleOptions": "openOnSessionStart"
    },
    {
      "name": ".NET Core Attach",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:pickProcess}"
    }
  ]
}

 

To this:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Framework Launch (console)",
      "type": "clr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/net472/afdemo.exe",
      "args": [],
      "cwd": "${workspaceFolder}",
      "console": "internalConsole",
      "stopAtEntry": false,
      "internalConsoleOptions": "openOnSessionStart"
    },
    {
      "name": ".NET Framework Attach",
      "type": "clr",
      "request": "attach",
      "processId": "${command:pickProcess}"
    }
  ]
}

 

And that's it! If everything went all right, you can now test your Hello World and see if it's working as it should be.

 

6. Referencing OSIsoft.AFSDK.dll

 

Now, in order to get PI data into your project, you just have to add a reference to the AF SDK assembly. This is just a matter of changing your .csproj file and manually including the reference by adding an <ItemGroup> element to your <Project>. Also, it's important to mention that HintPath can also handle relative paths; I'm hardcoding the full path here so unobservant copy-pasters don't break the code:

 

<Project Sdk="Microsoft.NET.Sdk">
  <ItemGroup>
    <Reference Include="OSIsoft.AFSDK, Version=4.0.0.0, Culture=neutral, PublicKeyToken=6238be57836698e6, processorArchitecture=MSIL">
      <SpecificVersion>False</SpecificVersion>
      <HintPath>C:\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0\OSIsoft.AFSDK.dll</HintPath>
    </Reference>
  </ItemGroup>
</Project>

 

7. Code Time

 

Now let's write some simple code that lists all PI Systems and their respective AF databases:

 

using System;
using OSIsoft.AF;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

namespace afdemo
{
  class Program
  {
    static void Main(string[] args)
    {
      Console.WriteLine("This is a VS Code Project!");
      PISystems piSystems = new PISystems();
      foreach (var piSystem in piSystems)
      {
        Console.WriteLine(piSystem.Name);
        foreach (var database in piSystem.Databases)
        {
          Console.WriteLine(" ->" + database.Name);
        }
      }
    }
  }
}

 

You can build the code by pressing Ctrl+Shift+B (Visual Studio users, rejoice!) and debug it by pressing F5. Now you can easily debug your code the same way you do in the full-blown version of Visual Studio!

 

 

You can also execute the code by typing dotnet run in the integrated terminal. Here is the output for this specific code on my machine:

 

 

8. Conclusion

 

By using VS Code you have no more excuses not to jump headfirst into developing custom tools that can make your workflow simpler and easier. Oh, and one more thing: VS Code is suitable for a myriad of languages, including web development, so you can also use it to create PI Vision custom symbols (leave a comment if you would like a guide on how to configure an environment for PI Vision custom symbol development).

In about a month, PI World 2019 will kick off in San Francisco.  As in past years, the events will be spread over three hotels, and also as in past years, the Parc 55 hotel is where you will find events catering specifically to developers and the data science community.  A year ago we introduced "Live Coding" sessions and "How To" walk-throughs to offer more in-depth talks (more steak, less sizzle).  This year we have collectively rebranded these formats into a common track: Tech Talks.  These 90-minute talks hit the sweet spot for many.  If you leave a traditional 45-minute PowerPoint talk wishing for more details, then the longer 90-minute Tech Talk is for you.  If you feel a 3-hour lab is too slow, or you would rather be shown the material than type it yourself, then the shorter 90-minute Tech Talk is for you too.

 

 

We have expanded the Tech Talks to begin on Day 2 rather than waiting until Day 3.  Here's what you can find on the agenda.

 

Day 2 Tech Talks (Parc 55)

  • Using Stream Views for Real Time Analytics
  • Using PI Web API and PowerApps to Build Real World Apps With Your PI Data
  • Leveraging the Power of PI Vision Extensibility
  • Concurrent Programming for PI Developers
  • Generating API Clients with OpenAPI 2.0 (Swagger) specifications and interacting with REST endpoints
  • Effortlessly deploying a PI System in Azure or AWS

 

Day 3 Tech Talks (Parc 55)

  • OSIsoft Cloud Services for Developers
  • Writing High Performance Applications with AF SDK
  • Modernizing PI SQL For the Future
  • Create Dashboards to monitor PI Analysis Service

 

Check out the agenda for exact times and room locations.  While you are peeking at the agenda take a look at Day 2 PI Geek Talks and Day 3 Developer Talks too.  All are offered at the Parc 55.

 

And join us from 4:30-6:00 PM on Day 3 for the Developers Reception.
