
PI Developers Club

9 Posts authored by: rborges Employee



Last week, just after publishing the blog post about how to send data from Node-RED to PI Web API, I got a call from a friend asking how he could use a very similar setup to send raw binary data from a sensor to the PI System and process it there. The answer is surprisingly simple and, after a quick look at several PI Square questions, I realised this is a common question. So let's see how we can accomplish it in a very simple way, shall we?




The setup we will be using today is very similar to the one in my last post, so I strongly recommend giving it a look before proceeding. I will only go deeper into the details of the sensor I'm using, because we will need this information later on.


For this blog post, I will use my old and faithful BME280, a small temperature, humidity, and pressure sensor with a built-in calibration mechanism. Today we will only retrieve the temperature and calibration information, so we can keep the code simple and readable, but the procedure is pretty much the same for every sensor out there.


Because we will be dealing with binary data, it's important to start by taking a look at the sensor's datasheet so we understand how the data is organized and where to get the information we need.



So here it is. Raw sensor data is available from 0xF7 to 0xFE (with the temperature in the fourth through sixth bytes) and the calibration is a long sequence starting at 0x88 (if you read the datasheet you will see we only need the first 6 bytes, as the rest is for the other readings). The compensation formula is given in Appendix A, and we will need to implement it in AF in order to process this data.


Sending the Data to PI


Once again, it's the same configuration from my last post, so I won't waste your time explaining all the details, but here's the Node-RED flow we will be using for this example:

This is pretty simple to follow. We start with an inject node that triggers the flow every five seconds. From there we go to an I2C node that reads data straight off the I2C bus. Then we make some small adjustments to make it PI Web API friendly, and finally we send it through a simple HTTP POST. The big difference from the last time we did this is that now the I2C node creates a byte array as our payload:



So how do we POST an array to PI Web API? It's actually pretty simple. Assuming the PI tag and AF attribute are configured properly (we will see how to do that in the next section), the JSON body should simply contain an array as the Value key. Then you POST to the Stream controller's UpdateValue method and you are good to go. Here's an example using Postman:
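If you prefer to script the request instead of using Postman, here is a minimal Python sketch. The server URL, Web ID, and credentials below are placeholders for illustration, not values from my setup:

```python
import json

def build_blob_body(values, timestamp="*"):
    """Wrap a byte array in the JSON shape PI Web API's UpdateValue expects."""
    return {"Timestamp": timestamp, "Value": list(values)}

# A three-byte sample payload, as it would come off the I2C node
body = build_blob_body(bytes([0x80, 0x49, 0x00]))
print(json.dumps(body))

# POSTing it (requires the 'requests' package and a valid Web ID):
#   import requests
#   url = "https://my-server/piwebapi/streams/" + webid + "/value"
#   requests.post(url, json=body, auth=("user", "pass"), verify=False)
```

The POST itself is left as a comment so the sketch stays runnable without a server.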



PI Tag and AF Attribute Configuration


There are two things we must consider in our configuration: the PI tag type that will be used and the AF attribute. Let's start with the AF attribute configuration, where things are straightforward, as the engine already exposes native array types. For this demo, we will use a byte array. Here's the config string of my attribute template, already set for tag creation: \\%Server%\%Element%.%Attribute%;ReadOnly=False;pointtype=Blob.




Because the sensor we are using sends data as a byte array, this is the data type we will use. Keep in mind that it's not uncommon to see sensors sending data as int arrays.


Now, on the Data Archive side of our project, let's address a very common question: how do we store array data in the PI System? How should I configure a PI tag to store array data? Here's a quote from LiveLibrary:


BLOB is the PI point type typically chosen for an arbitrary unstructured array of bytes. The value of a single event for a BLOB point is limited to binary data of up to 976 bytes in length. Data for larger binaries must be split into multiple events for the BLOB point or the data must be stored as an annotation.


So here's our answer: we must configure our PI tag as a BLOB (by the way, BLOB stands for binary large object).
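Note the 976-byte limit mentioned in the quote: larger binaries must be split across multiple events. A minimal sketch of such a splitter (illustrative, not part of any PI library):

```python
BLOB_LIMIT = 976  # max bytes in a single event for a BLOB PI point

def split_for_blob(data):
    """Split a binary payload into chunks that each fit in one BLOB event."""
    return [data[i:i + BLOB_LIMIT] for i in range(0, len(data), BLOB_LIMIT)]

chunks = split_for_blob(bytes(2000))
print([len(c) for c in chunks])  # → [976, 976, 48]
```

Each chunk would then be sent as its own event (our 8-byte sensor payload fits comfortably in one).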


Processing the Data


At this point we already have data flowing in: an 8-byte array for our raw data and a 6-byte array for the compensation factors:




Now we have to extract meaningful information out of it by implementing some transformations that will be able to convert the binary data into our final temperature value. In order to do this, we first have to check the sensor's datasheet to see how we convert the calibration factors into actual numbers that will go in the conversion formula. We will start with our calibration parameters and here's the info from the sensor documentation:




As I said before, we only need the first six bytes of the calibration information, so we now need to convert those bytes into actual numbers. From this table, we know T1 is an unsigned short and the other two are signed shorts, so the transformation is simple. Here's how it's done in Python:


dig_T1 = cal[1] << 8 | cal[0]
dig_T2 = cal[3] << 8 | cal[2]
dig_T3 = cal[5] << 8 | cal[4]

# T2 and T3 are signed shorts: values above 32767 wrap around to negative
if dig_T2 > 32767:
    dig_T2 = dig_T2 - 65536

if dig_T3 > 32767:
    dig_T3 = dig_T3 - 65536


To do this in AF, you have to use a little math, because AF doesn't offer the bitwise operators available in Python or C. The bitwise left shift (<< n) is equivalent to multiplying your number by 2^n, while the bitwise OR ( | ) reduces to a simple sum here, because the operands have no overlapping bits. Finally, we have to check whether T2 and T3 are above 32767, because this is how signed ints work. This is how our final implementation looks in AF (important: arrays in AF use one-based indexing, so to access the first two elements we use [1] and [2]):
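As a sanity check, this Python snippet shows that the arithmetic version produces exactly the same numbers as the bitwise version. The calibration bytes are illustrative sample values, not from my sensor:

```python
# Sample calibration bytes (illustrative): dig_T1=27504, dig_T2=26435, dig_T3=-1000
cal = [0x70, 0x6B, 0x43, 0x67, 0x18, 0xFC]

# Bitwise version (Python / C style, zero-based indexing):
dig_T1_bits = cal[1] << 8 | cal[0]
# Arithmetic version, as you would write it in an AF Analysis
# (AF uses one-based indexing, so cal[1] and cal[0] become [2] and [1] there):
dig_T1_math = cal[1] * 256 + cal[0]
assert dig_T1_bits == dig_T1_math  # both give 27504

# Signed conversion: values above 32767 wrap around to negative
dig_T3 = cal[5] << 8 | cal[4]
if dig_T3 > 32767:
    dig_T3 -= 65536
print(dig_T1_math, dig_T3)  # → 27504 -1000
```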



Now we have to go back to the sensor's datasheet to see how can we convert the raw data into an actual number. Here's the information we need: 



In my Node-RED flow, I'm requesting data from 0xF7 to 0xFE so I don't need to make several requests to the I2C bus. This is important because, in our one-based AF array, the MSB will be in position [4], the LSB in [5], and the XLSB in [6]. The Python script that does the bitwise operations to convert it into a decimal number is quite simple (note that Python arrays are zero-based):


temp_raw = (block[3] << 16 | block[4] << 8 | block[5]) >> 4


In a similar fashion to before, we convert it to an AF analysis expression using simple math:
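Again as a sanity check, in Python the bitwise and arithmetic versions agree. The block values are an illustrative sample read, not real output from my sensor:

```python
# Illustrative 8-byte sample read from 0xF7 to 0xFE (zero-based indexing)
block = [0x65, 0x5A, 0xC0, 0x7E, 0xED, 0x00, 0x80, 0x00]

# Bitwise version from the Python snippet above:
temp_raw_bits = (block[3] << 16 | block[4] << 8 | block[5]) >> 4

# Arithmetic version for an AF Analysis (one-based there: [4], [5], [6]).
# For non-negative integers, >> 4 is the same as floor division by 16:
temp_raw_math = (block[3] * 65536 + block[4] * 256 + block[5]) // 16

assert temp_raw_bits == temp_raw_math
print(temp_raw_bits)  # → 519888
```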




We are almost there! We have all our readings as numbers, and we can finally apply the conversion formula available in the sensor's documentation. Here's the C code they've provided:


double BME280_compensate_T_double(double adc_T)
{
    double var1, var2, T;
    const double K1 = 1024;
    const double K5 = K1 * 5;     // 5120
    const double K8 = K1 * 8;     // 8192
    const double K16 = K1 * 16;   // 16384
    const double K128 = K1 * 128; // 131072

    var1 = ((adc_T / K16) - (dig_T1 / K1)) * dig_T2;
    var2 = ((adc_T / K128) - (dig_T1 / K8)) * ((adc_T / K128) - (dig_T1 / K8)) * dig_T3;
    T = (var1 + var2) / K5;
    return T;
}


This is easy peasy lemon squeezy for AF, and I'm sure you will have no problem implementing this logic. Here's my complete analysis, where final is the output temperature in degrees Celsius:
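Translated to Python, the whole compensation step looks like this. The calibration and raw values are illustrative samples, not data from my sensor:

```python
def compensate_temperature(adc_T, dig_T1, dig_T2, dig_T3):
    """BME280 temperature compensation (Appendix A of the datasheet), in degrees Celsius."""
    var1 = (adc_T / 16384.0 - dig_T1 / 1024.0) * dig_T2
    var2 = ((adc_T / 131072.0 - dig_T1 / 8192.0) ** 2) * dig_T3
    return (var1 + var2) / 5120.0

# Illustrative inputs: adc_T from a raw reading, dig_T* from the calibration bytes
t = compensate_temperature(519888, 27504, 26435, -1000)
print(round(t, 2))  # → 25.08
```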




Do I need this?


I reckon this is not for everyone. Most of the time we only need the final sensor reading. But some modern sensors and instruments are able to send more meaningful and important data, such as maintenance flags, reading status, and other parameters that may be useful for some teams, such as instrumentation and maintenance.


PI Web API and Node-RED

Posted by rborges Employee May 21, 2019

1. Introduction

1.1 What is Node-RED?

Before getting to the juicy bits of this blog post, let me start by explaining what Node-RED is, for those who are not familiar with the tool. Quoting their own website, Node-RED is a flow-based programming tool for the Internet of Things. What this means, in plain English, is that you can drag and drop functional boxes and wire together devices and online services.


This is what a Node-RED flow looks like:

A group of nodes is called a flow. Each node is responsible for manipulating an object called a message. In this flow, Node-RED is listening for UDP packets and, based on the destination port, it can either pull GPIO 15 high or ground GPIO 13 while sending an HTTP request to a given API (and it keeps retrying until it succeeds).


Another selling point is the community. Because it's an open-source project (Apache 2.0, if you are curious), community engagement has skyrocketed and you can find thousands of different nodes: from OPC UA and Modbus nodes to Instagram and other social media platforms, from FFT to sentiment analysis. Most likely you won't even need to write a single line of code.


1.2 Why should I use it?


Rafa, usually graphical programming is just a dumbed-down version of actual programming. Why should I use it if I can write my own application? Because it's easy and fast. As a software engineer, I agree that tailor-made firmware in C is more reliable and efficient. But can you create it in seconds? Can you easily include new features or make changes to it? This is an old debate where, on one hand, we have practicality and, on the other, robustness.


Let's just remember that we are talking about IoT devices, frequently located outside the boundaries of the main network and sending data through a non-wired channel. Unpredictability and unreliability are a given, and your logic can easily go sideways under an unforeseen condition. So would you prefer to update your code through a web interface, or to go into the field with a computer and a USB UART dongle to reflash your firmware?


1.3 Caveats

Everything sounds great, but it's not all roses. Although it has a very small footprint, you can't deploy Node-RED on small microcontrollers (the ESP8266, if you are wondering) or limited hardware like the Arduino. Right now, the only "IoT hardware" capable of running it is the Raspberry Pi and the BeagleBone Black. This poses a challenge if you are considering deploying the platform in the field, but there are alternatives if you need to use minimalist hardware.


Another caveat is the engine itself. Its capabilities are only as big as those of the device hosting it. If you deploy it on a small Raspberry Pi, don't expect the performance of a deployment on a full-blown server. It may sound like an obvious observation, but because IoT devices can scale up easily on a mesh network, sometimes we forget that the host doesn't scale up that easily. So your mileage may vary when it comes to performance.


1.4 Architectural Considerations


In today's example, we will have multiple sensors sending data directly through an access point (a Raspberry PI 3 B+ in our case). A rough representation is this:


For the geeks out there who are curious about the setup, here's more info: I'm using a BME280 for temperature, humidity, and pressure and a DS1307 for the real-time clock. All of them use the I2C bus to send data through an ESP-01 breakout. The ESP8266 is running custom firmware that creates a mesh network. The Raspberry Pi host is connected to the main corporate network through its own wi-fi module, but it's also connected to the mesh network through its own ESP-01 breakout. Each sensor sends its data to a specific MQTT topic on a Mosquitto broker running on the same Raspberry Pi. I have three sets of sensors: one sitting on my desk, a second one in the office's kitchen, and the last one in a meeting room about 20 m (65 ft) from my desk.



Although the hardware side is not related to PI, leave a comment below if you would like to hear a little more about what I'm doing. By the way, this is preparation for an ongoing project, where a more professional-looking version of this will eventually be deployed around our office here in London.


Keep in mind that this is not a suitable production-grade architecture, as there is no data buffer, no redundancy, no scalability, and no fail-safe mechanism. If you need any of these (and you do if the data is critical to your operation!), you should consider taking a look at EDS (Edge Data Store), our answer to data management on the edge (watch this presentation about EDS, you won't regret it).


2. Sample Flows


2.1 Reading Values from the Sensors


This is not a Node-RED tutorial, but let me just show how I'm capturing the data. After all, this is how everything starts. As I mentioned before, I'm running an MQTT broker and each sensor reading has its own topic. Because I like to sort my data by data domain, my topics are the following:


Sensor               Measurement   Topic
Desk Sensor          Temperature   office/temperature/desk
Desk Sensor          Humidity      office/humidity/desk
Desk Sensor          Pressure      office/pressure/desk
Kitchen Sensor       Temperature   office/temperature/kitchen
Kitchen Sensor       Humidity      office/humidity/kitchen
Kitchen Sensor       Pressure      office/pressure/kitchen
Meeting Room Sensor  Temperature   office/temperature/meetingroom
Meeting Room Sensor  Humidity      office/humidity/meetingroom
Meeting Room Sensor  Pressure      office/pressure/meetingroom


This allows me to easily get all temperatures by subscribing to office/temperature/#, or all office data by subscribing to office/#. I can also get only my desk data by subscribing to office/+/desk. Here's an example of how to get the data using an MQTT node:

Dead simple, right? The green node is a debug node that outputs all the content of the payload. Here's the output for a single topic. Keep in mind that we receive several messages like this (one for each topic we are subscribed to).
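For those curious how the + and # wildcards behave, here is a minimal illustrative matcher in Python (a sketch of the semantics, not the broker's actual implementation):

```python
def topic_matches(pattern, topic):
    """Minimal MQTT wildcard matching: '+' spans one level, '#' the remainder."""
    p_parts, t_parts = pattern.split('/'), topic.split('/')
    for i, p in enumerate(p_parts):
        if p == '#':
            return True  # '#' matches everything at and below this level
        if i >= len(t_parts):
            return False
        if p != '+' and p != t_parts[i]:
            return False
    return len(p_parts) == len(t_parts)

print(topic_matches("office/temperature/#", "office/temperature/desk"))  # → True
print(topic_matches("office/+/desk", "office/humidity/desk"))            # → True
print(topic_matches("office/+/desk", "office/humidity/kitchen"))         # → False
```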



Because I'm trying to keep things as streamlined as possible, the sensors already stream an output that is pretty much what we need to send to PI, including the AF path we are sending data to. In my custom ESP8266 firmware I have this hardcoded:


const String baseAFPath = "\\\\RBORGES-AF\\IoTDemo\\Rafael Desk Environmental Data|";
const String tempAFPath = baseAFPath + "temperature";
const String humiAFPath = baseAFPath + "humidity";
const String pressAFPath = baseAFPath + "pressure";
const String dpointAFPath = baseAFPath + "dewpoint";


In a more real-life scenario, instead of a hardcoded string, you should do something like this:


const String chipID = String(ESP.getChipId()); // getChipId() returns a uint32_t
const String tempTopic = "office/temperature_" + chipID + "/desk";
const String humiTopic = "office/humidity_" + chipID + "/desk";
const String presTopic = "office/pressure_" + chipID + "/desk";
const String vccTopic = "office/vcc_" + chipID + "/desk";
const String ipTopic = "office/ip_" + chipID + "/desk";


Then, on your Node-RED server, you would correlate the chip ID with the attributes you are sending data to. That has the extra benefit of easy maintenance in case you move the sensor to a different location: there's no need to rewrite the firmware, just update the reference table.
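Such a reference table can be as simple as a dictionary on the Node-RED host. Here's an illustrative Python sketch; the chip IDs and AF paths are made up:

```python
# Illustrative reference table correlating ESP8266 chip IDs with AF base paths
SENSOR_TABLE = {
    "16061234": "\\\\RBORGES-AF\\IoTDemo\\Rafael Desk Environmental Data|",
    "16065678": "\\\\RBORGES-AF\\IoTDemo\\Kitchen Environmental Data|",
}

def af_path_for(chip_id, measurement):
    """Resolve the AF attribute path for a given chip ID and measurement name."""
    return SENSOR_TABLE[chip_id] + measurement

print(af_path_for("16061234", "temperature"))
```

Moving a sensor then only means updating the table, exactly as described above.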


By the way, speaking of real-life scenarios: if you are using the PI System infrastructure, you should consider OMF as your format standard. In the near future, you will be able to send OMF data straight to PI Web API, which will free you from the transformations we have to do in the next section.


2.2 Sending Values to PI

Now that we have the data, we need to send it to PI. As the blog title suggests, we will use PI Web API as our data entry point. In order to do this, we have to make some transformations to our data. First, we have to add the Web ID of the attribute we want to write to, and then we have to execute an HTTP POST with a JSON body containing the data itself.


The first thing is to get the Web ID. Using Web ID 2.0, it's actually pretty easy to encode the AF path into a valid Web ID. I strongly suggest you read Christopher Sawyer's excellent post on how to encode, decode, and some of the basic concepts behind it.


Going back to Node-RED: in order to encode the path into a valid Web ID, we have to execute custom code. This is easily done with the function node, where you can run arbitrary JavaScript. It exposes the whole message as a plain JavaScript object and allows you to manipulate it programmatically.


var path = msg.path;
// Strip the leading "\\" from the AF path
if (path.substring(0, 2) == "\\\\") {
    path = path.substring(2);
}
// Base64-encode the uppercased path
var encoded_path = Buffer.from(path.toUpperCase()).toString('base64');
// Make it URL-safe
encoded_path = encoded_path.replace(/\+/g, '-').replace(/\//g, '_');
// Count and remove the trailing '=' padding
var count = 0;
for (var i = encoded_path.length - 1; i > 0; i--) {
    if (encoded_path[i] == "=") {
        count++;
    } else {
        break;
    }
}
if (count > 0) {
    encoded_path = encoded_path.slice(0, -count);
}
msg.webid = "P1AbE" + encoded_path;
return msg;

Do you get the idea? I'm just getting the path variable from our message, encoding it in base64, and concatenating it with "P1AbE". The P1AbE Web ID header means it encodes the path and refers to an AF Attribute that is a child of an AF Element. Once again, if you have not checked Chris' blog post, stop reading this article now and go read it! A final note for the JavaScript nerds wondering why I'm not using btoa(): Node-RED, as the name suggests, runs on Node.js, which doesn't expose btoa() / atob().
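For reference, the same path-to-Web-ID encoding can be sketched in Python. The P1AbE prefix follows the post's example; treat this as an illustration of the encoding steps, not a complete Web ID 2.0 implementation:

```python
import base64

def encode_webid(path, prefix="P1AbE"):
    """Path-based Web ID: strip leading backslashes, uppercase, URL-safe base64, drop padding."""
    if path.startswith("\\\\"):
        path = path[2:]
    encoded = base64.urlsafe_b64encode(path.upper().encode()).decode()
    return prefix + encoded.rstrip("=")

webid = encode_webid("\\\\RBORGES-AF\\IoTDemo\\Rafael Desk Environmental Data|temperature")
print(webid)
```

Python's urlsafe_b64encode already performs the '+' → '-' and '/' → '_' substitutions done by hand in the JavaScript version.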


At this point, this is our message object:


Our payload is ready and we have the Web ID encoded, so we are pretty much ready to send the data to PI. To do this, we now need the HTTP Request node.


It works by getting the payload content and sending it as a JSON body to a given URL. Here's the configuration for our example:

We define it as a POST to the Stream controller's UpdateValue method, set the URL, enable SSL so the certificate is handled properly, and finally select basic authentication. Let me just call your attention to the URL I'm using: https://rborges-af/piwebapi/streams/{{webid}}/value. See the {{webid}}? That's a template system called mustache notation. It allows us to take a value from the message object and use it to fill the template.


Here's the full flow:

A recap: we subscribe to an MQTT topic, convert the string into valid JSON, do some housekeeping by moving the path information out of the payload, use the path to encode our Web ID, and finally post the data. I added a debug node on the output of the request, so we can inspect the HTTP response object and see if everything is working properly. Here's the output:



The 202 status code in our response means the data was accepted by PI Web API. We can now see it in AF:



2.3 A Periodic Calculation Engine


Another cool thing we can do with Node-RED is deploy flows that are triggered periodically. The inject node allows us not only to inject an arbitrary JSON but also to do it on a regular schedule. Here's an example. We first inject a JSON with two important tags that will be used for a given analysis:



Then we configure it to fire periodically. You can define a frequency (e.g., every 10 minutes), an interval between times (e.g., every 10 minutes between 8 AM and 3 PM), or specific times (e.g., noon on weekdays). For our example, every minute.


Now we can use it as the start trigger for our flow.


In this example, we inject the JSON every minute, split it to get data from PI Web API for each PI tag, join the messages into a single message, pass it through a generic splitter where we prepare it for FFT, and send the result to our maintenance database. I'm using our beloved CDT158 and SINUSOID, but it could easily be vibration data, so we could log vibration information from a maintenance perspective.


Keep in mind that the inject node is just one way to do it. There are plenty of other ways to trigger a flow. Another possibility is to listen to multiple tags and start it only when a condition is met. A couple of years ago I helped a customer wire up some electronics and trigger a flow when a door was opened and closed. This was used to log in PI when the lab door was open.


3. Conclusion


3.1 The PI System and IoT data


Today we saw how easy it is to wire up sensor data with PI using nothing other than PI Web API requests. It's simple, cost-effective, and fun to execute. I actually use this same architecture for my home automation system with PI (let me know in the comments if you would like to know why I use PI at home).


But let me stress once again that this is just a proof of concept. An enterprise-grade project would never send data directly like this, as it's a security risk and the lack of buffering makes it very unreliable. Once again, take a look at EDS if you need to send edge data to your PI System.


3.2 Reference Material


Using VS Code with AF SDK

Posted by rborges Employee Mar 6, 2019

From time to time we hear several excuses for not jumping into software development with PI data. Today we will discuss the top two that I get and how to fix them with a very simple solution. Here they are:


  1. I must be an administrator to install development applications;
  2. The tools required for developing applications in a commercial environment are expensive;


1. Free IDEs and Licensing


Let's start by talking about the cost of applications used for software development. Yes, they are usually expensive, but there are pretty good free alternatives. The catch is that you have to be really careful about the license being used and what your responsibilities are. For instance, the first version of Eclipse's EPL was so vague that auto-generated code could be interpreted as derivative work, which could make it mandatory for the developer to open the source code. Since EPL 2.0 they have fixed this, and Eclipse is now business-friendly.


Now, if you want to stick with Microsoft tools, there are two free alternatives. The first one is Visual Studio Community, a slim version of the full-blown Visual Studio. But it has a proprietary license that's not suitable for enterprise use, as it states:

If you are an enterprise, your employees and contractors may not use the software to develop or test your applications, except for: (i ) open source; (ii) Visual Studio extensions; (iii) device drivers for the Windows operating system; and, (iv) education purposes as permitted above.


So if you are writing a tool to process your corporate data, it's a no-go. This pushes us to the other alternative, VS Code, a free yet very powerful, community-driven IDE that is suitable for enterprise development. It uses a simple license that allows you to use the application for commercial purposes and in an enterprise environment, as clearly stated in item 1.a: "you may use any number of copies of the software to develop and test your applications, including deployment within your internal corporate network".


2. You don't need to be an Admin


This item is really easy to fix, as Microsoft offers a standalone version of VS Code. You just have to download the .zip version from here and run the executable. If you are not able to execute the file, the company you work for may be blocking it, and in that case you should talk to your manager or IT department.


Well, what now? Is it that simple? Unfortunately, no. Due to Microsoft's increasing participation in the open-source world, several of its products now target the open-source audience. More specifically for the .NET platform, Microsoft created the .NET Foundation to handle the usage of .NET products under OSI (Open Source Initiative, not OSIsoft) rules. Because of this, VS Code is designed to work natively with .NET Core rather than the standard .NET Framework. So, in order to use the AF SDK, you need some tweaks. But fear not, they are really simple!


3. Adding the C# Extension to VS Code


In this step, we will install the C# extension that allows the IDE to process .cs and .csproj files. It's very simple: just go to the Extensions tab (highlighted below, or press Ctrl+Shift+X), type "C# for Visual Studio Code", and hit install.



4. Creating a .NET Standard project in VS Code


First, we start VS Code and go to File -> Open Folder to select a folder that will be the base of our demo project. Once you see the welcome screen, go to View -> Terminal so you can use the integrated terminal and the .NET CLI to create a new .NET project. Because we will create a console application, we type:


PS C:\Users\rafael\afdemo> dotnet new console


If you really need to work with VB.NET, you can append "-lang VB" at the end (but please, don't).


You should end up with a structure similar to this:



If, by any chance, your structure is missing the .vscode folder, press Ctrl+Shift+B to compile and VS Code will ask if you want to add a default build task. Just say yes.


5. Converting from .NET Core to .NET Standard


Now that we have our project, we must change it to .NET Standard, so we can later reference the AF SDK assembly. We have to change two files. The first one is the .csproj file. You have to change from this:


<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.2</TargetFramework>
  </PropertyGroup>

</Project>

To this:


<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net472</TargetFramework>
    <PlatformTarget>x64</PlatformTarget>
  </PropertyGroup>

</Project>
Here we see the first caveat of this approach: the application must be 64-bit. Also, note that I'm explicitly referencing the .NET Framework version I want to work with. The list of available frameworks (as well as how to write them in the XML file) can be found here.


Now we move on to the recently created launch.json inside the .vscode folder. If you pay close attention, you will see that it references the type "coreclr". You have to change it to type "clr". Also, the program path must match the referenced framework version. So, if everything goes as planned, you will change from this:






    "name":".NET Core Launch (console)",












    "name":".NET Core Attach",








To this:





    "name":".NET Standard Launch (console)",












  "name":".NET Standard Attach",








And that's it! If everything went all right, you can now test your Hello World and see if it's working as it should be.


6. Referencing OSIsoft.AFSDK.dll


Now, in order to get PI data into your project, you just have to add a reference to the AF SDK assembly. This is just a matter of changing your .csproj file and manually including the reference by adding an <ItemGroup> element to your <Project>. It's also worth mentioning that HintPath can handle relative paths; I'm hardcoding the full path here so the unobservant copy-paster doesn't break the code:


<Project Sdk="Microsoft.NET.Sdk">

  <!-- PropertyGroup as configured in the previous section -->

  <ItemGroup>
    <Reference Include="OSIsoft.AFSDK, Version=, Culture=neutral, PublicKeyToken=6238be57836698e6, processorArchitecture=MSIL">
      <HintPath>C:\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0\OSIsoft.AFSDK.dll</HintPath>
    </Reference>
  </ItemGroup>

</Project>


7. Code Time


Now let's write some simple code that lists all PI Systems and their respective AF databases:


using System;
using OSIsoft.AF;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

namespace afdemo
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is a VS Code Project!");
            PISystems piSystems = new PISystems();
            foreach (var piSystem in piSystems)
            {
                Console.WriteLine(piSystem.Name);
                foreach (var database in piSystem.Databases)
                {
                    Console.WriteLine(" ->" + database.Name);
                }
            }
        }
    }
}

You can build the code by pressing Ctrl+Shift+B (Visual Studio users, rejoice!) and debug it by pressing F5. Now you can easily debug your code the same way you do in the full-blown version of Visual Studio!



You can also execute the code by typing dotnet run in the integrated terminal. Here's the output of this specific code on my machine:



8. Conclusion


By using VS Code, you have no more excuses not to jump headfirst into developing custom tools that can make your workflow simpler and easier. Oh, and one more thing: VS Code is suitable for a myriad of languages, including web development, so you can also use it to create PI Vision custom symbols (leave a comment if you would like a guide on how to configure an environment for PI Vision custom symbol development).

A new version of PI AF was released last week along with a new version of the AF SDK, so let me show you a feature that has been long requested by the community and is now available: the AFSession structure. This structure represents a session on the AF Server and exposes the following members:


Public Property       Description
AuthenticationType    The authentication type of the account which made the connection.
ClientHost            The IP address of the client host which made the connection.
ClientPort            The port number of the client host which made the connection.
EndTime               The end time of the connection.
GracefulTermination   A boolean that indicates if the end time was logged for graceful client termination.
StartTime             The start time of the connection.
UserName              The username of the account which made the connection.


In order to get session information for a given PI System, the PISystem class now exposes a function called GetSessions(AFTime? startTime, AFTime? endTime, AFSortOrder sortOrder, int startIndex, int maxCount), which returns an array of AFSession objects. AFSortOrder is an enumeration defining whether you want the start times in ascending or descending order. Note that you can pass AFTime.MaxValue as the endTime to search only for sessions that are still open.


From the documentation's remarks: the returned session data can be used to determine information about clients that are connected to the server and to identify active clients. Then, from the client machine, you can use the GetClientRpcMetrics() method (for AF Server) to determine what calls the clients are making to the server. Session information is not replicated in PI AF collective environments; in those setups, make sure you connect to the member you want to retrieve session info from.


Shall we see it in action? The code I'm using is very simple:


var piSystem = (new PISystems()).DefaultPISystem;
var sessions = piSystem.GetSessions(new AFTime("*-1d"), null, AFSortOrder.Descending);
foreach (var session in sessions)
{
    Console.WriteLine($"---- {session.ClientHost}:{session.ClientPort} ----");
    Console.WriteLine($"Username: {session.UserName}");
    Console.WriteLine($"Start time: {session.StartTime}");
    Console.WriteLine($"End time: {session.EndTime}");
    Console.WriteLine($"Graceful: {session.GracefulTermination}");
}


A cropped version of the result can be seen below:


---- ----

Username: OSI\rborges

Start time: 07/02/18 13:18:54

End time:



---- ----

Username: OSI\rborges

Start time: 07/02/18 13:06:36

End time: 07/02/18 13:11:51

Graceful: True


---- ----

Username: OSI\rborges

Start time: 07/02/18 13:06:17

End time: 07/02/18 13:06:19

Graceful: True


As you can see, you can now easily monitor sessions in your PI System. Share your thoughts in the comments and tell us how you are planning to use it.


Happy coding!



Rick's post on how to use metrics with AF SDK.

1. Introduction

Every day, more and more customers get in contact with us asking how PI could be used to leverage their GIS data and how their geospatial information could be used in PI. Our answer is the PI Integrator for Esri ArcGIS. If your operation involves any sort of georeferenced data, geofencing, or any kind of geospatial data, I encourage you to take a look at what the PI Integrator for Esri ArcGIS is capable of. But this is PI Developers Club, a haven for DIY PI nerds and curious data-driven minds. So, is it possible to create a custom data reference that provides access to some GIS data and functionality? Let's do it using an almost-real-life example.


1.1 Scenario

The manager of a mining company has to monitor some trucks that operate at the northernmost deposit of their open-pit mine. Due to recent rains, their geotechnical engineering team has mapped an unsafe area that should have no more than three trucks inside it. They have also provided a shapefile with a polygon delimiting a control zone (you can download the shapefile at the end of this post). The manager wants to be notified whenever the number of trucks inside the control area is above a given limit.


Caveat lector: I'm not a mining engineer, so please excuse any inaccuracy or misrepresentation of the operations at a zinc mine. It's also important to state that the mine I'm using as an example has no relation to this blog post or to the data I'm using.


1.2 Premises

If you are familiar with GIS data, you know it's an endless world of file formats, coordinate systems, and geodetic models. Unless you have a full-featured GIS platform, it's very complicated to handle all possible combinations of data characteristics. So, for the sake of simplicity, this article uses Esri's Shapefile as its data repository and EPSG:4326 as our coordinate system.


1.3 A Note on CDRs

As the name implies, a CDR should be used to get data from an external data source that we don't support out of the box. Simple calculations can be performed, but you should exercise caution: depending on how intensive your mathematical operations are, you can degrade the performance of any analysis that uses the CDR. For our example, shapefiles, GeoJSON, and GeoPackage files can be seen as standalone data sources (as they contain the geographic information themselves), and the math behind them is simple enough not to affect server performance.


1.4 The AF Structure

Following the diagram on 1.1, our AF structure renders pretty simply: a Zinc Mine element with Trucks as children. The mine element has three attributes: (a) the number of trucks inside the control area (a roll-up analysis), (b) the maximum number of trucks allowed in the control area (a static attribute) and (c) the control area itself.


2018-06-06 09_37_25-__RBORGES-AF_GISToolBox - PI System Explorer.png


The control area is a static attribute with several supporting child attributes holding the shapefile's files. Per the shapefile specification, together with the SHP file you also need the other three supporting files.


2018-06-06 09_39_00-__RBORGES-AF_GISToolBox - PI System Explorer.png


Finally, the truck element has two historical attributes for its position and one using our CDR to tell whether it's currently inside the control area (this is the attribute used by the roll-up analysis at the zinc mine element). Here I'm using latitude and longitude as separate attributes, but if you have AF 2017 R2 or newer, I encourage you to store this data as a location attribute trait.


2018-06-06 16_55_42-__RBORGES-AF_GISToolBox - PI System Explorer.png



2. The GISToolBox Data Reference

The best way to present a new CDR is by showing its config string:


Shape=..\|control area;Method=IsInside;Latitude=Latitude;Longitude=Longitude


Breaking it down: Shape is the attribute that holds the shapefile and its supporting files. It's actually just a string with a random name; what matters are the children underneath it, which are file attributes holding the shape files. Method is the method we want to execute. Latitude and Longitude are self-explanatory, and they should also point to attributes. If you don't provide lat/long attributes, the CDR will use the location attribute trait defined for the element. There are also two other parameters that I will present later.
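To make the format concrete, here is a hypothetical sketch (in Python, not the CDR's actual C# parser) of how such a key=value;key=value config string can be split into its parameters; the parse_config name is mine, not from the GISToolBox code:

```python
def parse_config(config: str) -> dict:
    """Split a 'Key=Value;Key=Value' config string into a dict."""
    pairs = (item.split("=", 1) for item in config.split(";") if item)
    return {key.strip(): value.strip() for key, value in pairs}

cfg = parse_config("Shape=..\\|control area;Method=IsInside;"
                   "Latitude=Latitude;Longitude=Longitude")
print(cfg["Method"])  # IsInside
```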


The code is available here and I encourage you to go through it and read the comments. If you want to learn how to create a custom data reference, please check the useful links section at the end of this post.


2.1 Dataflow

The CDR starts by overriding the GetInputs method. There we use the values passed by the config string to get the proper attributes. Pay close attention to the way the shapefile is organized, as there are some child attributes holding the files (these child attributes are AFFiles). Once this is done, GetValue is called. It starts by downloading the shapefile from the AF server to a local temporary folder and creating a Shapefile object. Although Esri's specification is open, I'm using DotSpatial to handle the file parsing and all the spatial analysis we do. Once we have the shapefile, it goes through some verifications, and we finally call the method that gets the data we want: GISHelper.EvaluateFunction(). For performance reasons, I'm also overriding the GetValues method, because we don't need to recreate the files for every iteration of the AFValues array sent by the SDK.


2.2 Available Methods

Taking into account what I mentioned in 1.3, we should avoid sophisticated calculations so the CDR doesn't slow down the analysis engine. To keep it simple and performant, I have implemented the following methods:

  • IsInside: Determines whether a coordinate is inside a polygon in the shapefile. If your shapefile contains several polygons, it will check all of them. Returns 1 if inside, 0 if outside.
  • IsOutside: Determines whether a coordinate is outside a polygon in the shapefile. If your shapefile contains several polygons, it will check all of them. Returns 1 if outside, 0 if inside.
  • MinDistance: Determines the minimum distance from a coordinate to a polygon in the shapefile. If your shapefile contains several polygons, it will check all of them and return the shortest distance. Returns a double in the units defined by the CRS in use.
  • CentroidDistance: Determines the distance from a coordinate to a polygon's centroid in the shapefile. If your shapefile contains several polygons, it will check all of them and return the shortest distance. Returns a double in the units defined by the CRS in use.

mindist.png
centdist.png
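For intuition, the point-in-polygon test behind IsInside can be sketched with the classic ray-casting algorithm: cast a horizontal ray from the point and count how many polygon edges it crosses. This is an illustrative Python version, not the actual implementation (the CDR delegates the geometry work to DotSpatial):

```python
def is_inside(point, polygon):
    """Ray-casting test: odd number of edge crossings means inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the ray's y can be crossed
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(is_inside((2.0, 2.0), square))  # True  (the CDR would return 1)
print(is_inside((5.0, 2.0), square))  # False (the CDR would return 0)
```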


2.3 CRS Conversion

The GISToolbox assumes that both the lat/long coordinates and the shapefile use the same CRS. If your coordinates use a different base from your shapefile, you can use two other configuration parameters (FromEPSGCode and ToEPSGCode) to convert the coordinates to the CRS used by the shapefile.


Let's say you have a shapefile using EPSG:4326, but your GPS data comes in EPSG:3857. For this case, you can use:

Shape=..\|control area;Method=IsInside;Latitude=Latitude;Longitude=Longitude;FromEPSGCode=3857;ToEPSGCode=4326
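Under the hood, DotSpatial performs the reprojection for the CDR. Just to illustrate what the conversion does for this particular pair, here is the well-known spherical formula for EPSG:3857 meters to EPSG:4326 degrees, sketched in Python (an illustration only, not the GISToolbox code):

```python
import math

R = 6378137.0  # Web Mercator sphere radius in meters

def epsg3857_to_4326(x, y):
    """Convert EPSG:3857 (x, y) meters to EPSG:4326 (lon, lat) degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(math.atan(math.sinh(y / R)))
    return lon, lat

print(epsg3857_to_4326(0.0, 0.0))  # (0.0, 0.0)
```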


2.4 Limitations

  • It doesn't implement an AF data pipe, so it can't be used with event-triggered analyses (only periodic ones).
  • Because it handles temporary files, the user running your AF services must have read/write permissions on the temporary folder.
  • It only supports EPSG CRSs.
  • It only supports shapefiles.


3. Demo

Let's go back to our manager who needs to monitor the trucks inside that specific control area.


3.2 Truck Simulation

In order to make our demo more realistic, I have created a small simulation. You can download the simulation files at the end of this post. Here's a gif showing the vehicles' positions:




The trucks start outside of the control area and they slowly move towards it. Here's a table showing if a given truck is inside the polygon at a specific timestamp:

TS | Truck 001 | Truck 002 | Truck 003 | Truck 004 | Total


The simulation continues until the 14ᵗʰ iteration, but note how the limit is exceeded at timestamp 5, so we should get a notification right after entering the 5ᵗʰ iteration.


3.3 Notification

The notification is dead simple: every 4 seconds I check the Active Trucks attribute against the maximum allowed. As I mentioned before, Active Trucks is a roll-up analysis counting the IsInside attribute of each truck.
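Conceptually (leaving AF's roll-up and notification machinery aside), the whole check reduces to a few lines. This Python sketch only illustrates the logic; the function names are mine:

```python
def trucks_inside(is_inside_flags):
    """Roll-up: sum the IsInside output (1/0) of every truck."""
    return sum(is_inside_flags)

def limit_exceeded(is_inside_flags, max_allowed=3):
    """Fire the notification when the roll-up goes above the limit."""
    return trucks_inside(is_inside_flags) > max_allowed

print(limit_exceeded([1, 1, 1, 0]))  # False: 3 trucks inside, limit not exceeded
print(limit_exceeded([1, 1, 1, 1]))  # True: 4 trucks inside, notification fires
```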


2018-06-07 16_05_09-__RBORGES-AF_GISToolBox - PI System Explorer.png


Shall we see it in action?



Et voilà!


The simulation files are available at the end of this post. Feel free to download and explore them.


4. Conclusion

This proof of concept demonstrates how powerful a custom data reference can be. Of course, it doesn't even come close to what the PI Integrator for Esri ArcGIS is capable of, but it shows that for simple tasks we can mimic functionality from bigger platforms, and it can serve as an alternative while a more robust platform is not available.


If you like this topic and think that AF should have some basic support for GIS, please chime in on the User Voice entry I've created to collect ideas from you.



Recently, during PI World 2018, I was surprised by the number of people asking me if it's possible to list all PI Points and AF Attributes used in PI Vision displays. The good news is that it's possible; the bad news is that it's not that straightforward.


I will show two different ways to achieve this: the first one uses PowerShell and the second one queries PI Vision's database directly. Warning: we strongly recommend that you don't mess around with PI Vision's database unless you know what you are doing. If you have questions, please contact tech support or leave a comment on this post.




The PowerShell method is the simplest and safest. In order to understand how it works, let's first do a quick recap of PI's architecture.


In a very high-level description, PI uses a producer-consumer pattern: multiple producers (interfaces, connectors, AF SDK writes, etc.) send data to a central repository, while consumers subscribe to updates on a set of PI Points. Whenever new data comes in, the Update Manager Subsystem notifies subscribers that fresh data is available.
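The pattern the Update Manager implements can be sketched as a tiny publish/subscribe hub. This Python toy (class and method names are mine, not PI's) only illustrates the producer-consumer idea described above:

```python
from collections import defaultdict
from queue import Queue

class UpdateManager:
    """Toy pub/sub hub: producers publish values for a point name,
    and every consumer signed up for that point gets a copy."""
    def __init__(self):
        self._signups = defaultdict(list)  # point name -> consumer queues

    def signup(self, point):
        """A consumer registers interest in a point and gets a queue back."""
        q = Queue()
        self._signups[point].append(q)
        return q

    def publish(self, point, value):
        """A producer sends a new value; fan it out to all signups."""
        for q in self._signups[point]:
            q.put((point, value))

um = UpdateManager()
pending = um.signup("sinusoid")
um.publish("sinusoid", 42.0)
print(pending.get())  # ('sinusoid', 42.0)
```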


If you open your PI System Management Tools and navigate to Operation -> Update Manager, you will see a list of all processes consuming and producing data.


2018-05-07 09_19_19-Update Manager - PI System Management Tools.png


Now, if you filter by *w3wp* (the name of the IIS process) you can drill down the data and get the list of tags being consumed by that specific signup.


2018-05-07 09_37_34-Update Manager - PI System Management Tools.png


But hey, this is PI DevClub! What about doing it programmatically? Unfortunately, the Update Manager information is not available in the AF SDK, but we have the PowerShell Tools to help us with this task:


$conn = Connect-PIDataArchive -PIDataArchiveMachineName "emiller-vm2";
$pointIds = @();
While ($true) {
     $consumers = Get-PIUpdateManagerSignupStatistics -Connection $conn -ConsumerName "*w3wp*";
     $consumers | ForEach-Object -Process {
          $pointId = $_.Qualifier;
          if ($pointIds -notcontains $pointId -And $pointId -ne 0) {
               $pointIds += $pointId;
               $piPoint = Get-PIPoint -Connection $conn -WhereClause "pointid:=$pointId" -AllAttributes;
               $printObj = New-Object PSObject;
               $printObj | Add-Member Name $piPoint.Point.Name;
               $printObj | Add-Member Description $piPoint.Attributes.descriptor;
               $printObj | Add-Member Changer $piPoint.Attributes.changer;
               Write-Output $printObj;
          }
     }
}


If you run this script, it will keep listening for every call to your PI Server originating from IIS:


2018-05-07 17_09_02-Windows PowerShell.png


By now you may have noticed the two problems with this method: (1) it only shows a new entry when somebody requests data for a given PI Point (i.e., opens a display), and (2) we are just listing tags and totally ignoring AF attributes. A workaround for the first one is to leave the script running for a while and pipe the result to a text file.


PI Vision's Database


Let me say this once again before we proceed: we strongly recommend users not to touch PI Vision's database. That said...


There are two ways to extract this information from the database. The first one is dead simple, but only works if you don't have displays imported from ProcessBook:


SELECT
     E.Name [DisplayName],
     E.Owner [DisplayOwner],
     D.FullDatasource [Path]
FROM
     BrowseElements E,
     DisplayDatasources D
WHERE
     E.ID = D.DisplayID


The result of this SELECT is a table with all AF Attributes and PI Points used by PI Vision.


2018-05-07 17_05_28-SQLQuery1.sql - bmoura-vm3.PIVisualization (OSI_rborges (53))_ - Microsoft SQL S.png


This may work for you, but one person who approached me during PI World also asked if it was possible to list not only the data sources but also the symbols using them. Also, most of their displays were imported from ProcessBook. And that's when things get tricky:


SELECT
     E.Name as [DisplayName],
     E.Owner as [DisplayOwner],
     S.c.value('../@Id', 'nvarchar(128)') as [Symbol],
     D.c.value('local-name(.)', 'nvarchar(2)') as [Source],
     CASE -- Constructing the path according to the data source
          WHEN D.c.value('local-name(.)', 'nvarchar(2)') = 'PI' -- The data comes from a PI Point
          THEN '\\' +
               CASE WHEN CHARINDEX('?',D.c.value('@Node', 'nvarchar(128)')) > 0 -- Here we check if the server ID is present
               THEN LEFT(D.c.value('@Node', 'nvarchar(128)'), CHARINDEX('?',D.c.value('@Node', 'nvarchar(128)'))-1)
               ELSE D.c.value('@Node', 'nvarchar(128)')
               END
               + '\' + 
               CASE WHEN CHARINDEX('?',T.c.value('@Name', 'nvarchar(128)')) > 0 -- Here we check if the point ID is present
               THEN LEFT(T.c.value('@Name', 'nvarchar(128)'), CHARINDEX('?',T.c.value('@Name', 'nvarchar(128)'))-1)
               ELSE T.c.value('@Name', 'nvarchar(128)')
               END
          WHEN D.c.value('local-name(.)', 'nvarchar(2)') = 'AF' -- The data comes from an AF attribute
               THEN '\\' + D.c.value('@Node', 'nvarchar(128)')  + '\' + D.c.value('@Db', 'nvarchar(256)') +  '\' 
               + CASE 
               WHEN T.c.value('@ElementPath', 'nvarchar(128)') IS NOT NULL 
               THEN T.c.value('@ElementPath', 'nvarchar(128)') + '|' + T.c.value('@Name', 'nvarchar(128)')
               ELSE O.c.value('@ElementPath', 'nvarchar(128)') + T.c.value('@Name', 'nvarchar(128)')
               END
     END as [Path]
FROM
     BaseDisplays B
     CROSS APPLY B.COG.nodes('/*:COG/*:Datasources/*/*') T(c)
     CROSS APPLY B.COG.nodes('/*:COG/*:Databases/*') D(c)
     CROSS APPLY B.COG.nodes('/*:COG/*:Symbols/*:Symbol/*') S(c)
     LEFT JOIN BaseDisplays B2 OUTER APPLY B2.COG.nodes('/*:COG/*:Contexts/*:AFAttributeParameter') O(c) 
          ON T.c.value('../@Id', 'nvarchar(128)') = O.c.value('@Datasource', 'nvarchar(128)'),
     BrowseElements E
WHERE
     E.ID = B.BrowseElementID
     AND E.DeleteFlag = 'N'
     AND D.c.value('@Id', 'nvarchar(128)') = T.c.value('../@DbRef', 'nvarchar(128)')
     AND T.c.value('../@Id', 'nvarchar(128)') = S.c.value('@Ref', 'nvarchar(128)')


The result is a little more comprehensive than the previous script:


2018-05-07 17_05_47-SQLQuery2.sql - bmoura-vm3.PIVisualization (OSI_rborges (55))_ - Microsoft SQL S.png


These queries were made for the latest version available (2017 R2 Update 1) and are not guaranteed to be future-proof. It's known that PI Vision 2018 will use a different data model, so, if needed, I will revisit this post after the launch of the 2018 version.


I'm not going to dig into the specifics of this script, as it has a lot of T-SQL going on to deal with the XML information stored in the database. If you have specific questions about how it works, leave a comment. Also keep in mind that this query is a little expensive, so you should consider running it during off-peak hours or on a dev database.




Listing all tags and attributes used by PI Vision is a valid use case, and most PI admins will agree that it helps them understand their current tag usage. We have been increasing our efforts on system usage awareness and, with this post, I hope to contribute to that goal.


C#7 & AF SDK

Posted by rborges Employee Apr 17, 2018

If you have Visual Studio 2017 and the .NET Framework 4.6.2, you can benefit from new features available in the language specification. Some of them are pure syntactic sugar, yet still useful. The full list can be found in this post, and here I have some examples of how you can use them to leverage your AF SDK usage.


1) Out variables

We use out variables by declaring them before a function assigns a value to them:


AFDataPipe pipe = new AFDataPipe();
var more = false;
pipe.GetUpdateEvents(out more);
if (more) { ... }


Now you can inline the variable declaration, so there's no need to explicitly declare it beforehand. The variable will be available throughout your current execution scope:


AFDataPipe pipe = new AFDataPipe();
pipe.GetUpdateEvents(out bool more);
if (more) { ... }


2) Pattern Matching

C# now has the idea of patterns: elements that can test whether an object conforms to a given shape and extract information out of it. Right now, the two most useful applications are is-expressions and switch statements.


2.1) Is-expressions

This one is very simple and straightforward. What used to be:


if (obj.GetType() == typeof(AFDatabase))
{
    var db = (AFDatabase)obj;
    // ... use db ...
}


Can now be simplified to:


if (obj is AFDatabase db)


Note that we only assign the db variable if obj actually is an AFDatabase.


2.2) Switch Statements

So far this is my favorite because it completely changes flow control in C#. For me, it's the end of if / else if, as it allows you to test variable types and values on the go with the when keyword:


public AFObject GetParent(AFObject obj)
{
    switch (obj)
    {
        case PISystem system:
            return null;
        case AFDatabase database:
            return database.PISystem;
        case AFElement element when element.Parent == null:
            return element.Database;
        case AFElement element when element.Parent != null:
            return element.Parent;
        case AFAttribute attribute when attribute.Parent == null:
            return attribute.Element;
        case AFAttribute attribute when attribute.Parent != null:
            return attribute.Parent;
        default:
            return null;
    }
}


The when keyword is a game changer for me. It will make the code simpler and way more readable.


3) Tuples

As a Python programmer who has been using tuples for years, I've always felt that C# could benefit from using more of them across the language specification. Well, the time is now! This new feature is not available out of the box; you have to install a missing assembly from NuGet:


PM> Install-Package System.ValueTuple


Once you do, you not only get access to new ways to deconstruct a tuple but can also use tuples as function returns. Here's an example of a function that returns the value and the timestamp for a given AFAttribute and AFTime:


private (double? Value, DateTime? LocalTime) GetValueAndTimestamp(AFAttribute attribute, AFTime time)
{
    var afValue = attribute?.Data.RecordedValue(time, AFRetrievalMode.Auto, null);
    var localTime = afValue?.Timestamp.LocalTime;
    var value = (afValue?.IsGood ?? false) ? afValue.ValueAsDouble() : (double?)null;
    return (value, localTime);
}


Then you can use it like this:


public void PrintLastTenMinutes(AFAttribute attribute)
{
    // First we get a list with the last 10 minutes
    var timestamps = Enumerable.Range(0, 10)
        .Select(m => new AFTime(DateTime.Now.AddMinutes(-m))).ToList();
    // Then, for each timestamp ...
    timestamps.ForEach(t => {
        // We get the attribute value
        var (val, time) = GetValueAndTimestamp(attribute, t);
        // and print it
        Console.WriteLine($"Value={val} at {time} local time.");
    });
}


Note how we can unwrap the tuple directly into separate variables. It's the end of out variables!


4) Local Functions

Have you ever been in a situation where a method exists only to support another method and you don't want other team members using it? That happens frequently when you are dealing with recursion or some very specific data transformations. A good example is in our last snippet, where GetValueAndTimestamp is specific to the method that uses it. In this case, we can move the function declaration inside the method that uses it:


public void PrintLastTenMinutes(AFAttribute attribute)
{
    // First we get the last 10 minutes
    var timestamps = Enumerable.Range(0, 10)
        .Select(m => new AFTime(DateTime.Now.AddMinutes(-m))).ToList();
    // Then, for each timestamp ...
    timestamps.ForEach(t => {
        // We get the attribute value
        var (val, time) = GetValueAndTimestamp(t);
        // and print it
        Console.WriteLine($"Value={val} at {time} local time.");
    });
    // Here we declare our GetValueAndTimestamp
    (double? Value, DateTime? LocalTime) GetValueAndTimestamp(AFTime time)
    {
        var afValue = attribute?.Data.RecordedValue(time, AFRetrievalMode.Auto, null);
        var localTime = afValue?.Timestamp.LocalTime;
        var value = (afValue?.IsGood ?? false) ? afValue.ValueAsDouble() : (double?)null;
        return (value, localTime);
    }
}


As you can see, we are declaring GetValueAndTimestamp inside PrintLastTenMinutes and blocking it from external calls. This increases encapsulation and helps you keep your code DRY. Note how the attribute is accessible from within the local function without being passed as a parameter. Just keep in mind that local variables are passed by reference to the local function (more info here).


There are other new features, but those are my favorites so far. I hope you put them to good use and, please, let me know if you have a good example of C# 7.0 features.


The AF SDK provides two different ways to get live data updates and I recently did some stress tests on AFDataPipes, comparing the observer pattern (GetObserverEvents) with the more traditional GetUpdateEvents. My goal was to determine if there is a preferred implementation.


The Performance Test

The setup is simple: listen to 5332 attributes, each updated at a rate of 20 events per second. This produces over 100k events per second that we must process. I agree that this is not a challenging stress test, but it is on par with what we usually see at customers around the globe. The server is very modest, with only 8GB of RAM and around 1.2GHz of processor speed (it's an old spare laptop we have here at the office). Here is the code I used to fetch data using GetUpdateEvents (DON'T USE IT; later in this article, I will show the code I used to test the observer pattern implementation):


var dataPipe = new AFDataPipe();
CancellationTokenSource source = new CancellationTokenSource();
Task.Run(async () =>
{
    try
    {
        while (!source.IsCancellationRequested)
        {
            // Here we fetch new data
            var updates = dataPipe.GetUpdateEvents();
            foreach (var update in updates)
                Console.WriteLine("{0}, Value {1}, TimeStamp: {2}",
                    update.Value.Attribute.GetPath(), update.Value.Value, update.Value.Timestamp);
            await Task.Delay(500);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine("Server sent an error: {0}", exception.Message);
    }
}, source.Token);


After several hours running the application, I noticed that GetUpdateEvents was falling behind and sometimes leaving some data for the next iteration. This is not a problem per se as, eventually, it would catch up with current data. I suspected this would happen, but I decided to investigate what was going on. After some bit twiddling, I noticed something weird. Below we have charts with the memory used by the application: on the top, the one using GetObserverEvents; on the bottom, GetUpdateEvents. They both use the same amount of memory, but look closely at the number of GC calls executed by the .NET Framework.


2018-04-09 11_39_55-ObservableTest (Running) - Microsoft Visual Studio.png

(using GetObserverEvents)

2018-04-09 11_37_51-ObservableTest (Running) - Microsoft Visual Studio.png

(Using GetUpdateEvents)



Amazingly, this is expected, as we are running the code on a server with a limited amount of memory and GetUpdateEvents has extra code to deal with. Honestly, I was expecting increased memory usage, and the GC kicking in like this was a surprise. Ultimately, the .NET Framework is trying to save my bad code by constantly freeing resources back to the system.


Can this be fixed? Absolutely, but it is a waste of time, as you could spend this effort implementing the observer pattern (which handles all of this natively) and get some extra benefits:

  • It allows you to decouple your event handling from the code responsible for fetching the new data.
  • It is OOP and easier to encapsulate.
  • It makes the flow easier to control.


Observer Pattern Implementation for AF SDK

In this GitHub file, you can find the full implementation of a reusable generic class that listens to AF attributes and executes a callback when new data arrives. It's very simple, efficient, and has a minimal memory footprint. Let's break down the most important aspects of it so I can explain what's going on and show how it works.


The class starts by implementing the IObserver interface. This allows it to subscribe itself to receive notifications of incoming data. I also implement IDisposable because the observer pattern can cause memory leaks when you fail to explicitly unsubscribe observers. This is known as the lapsed listener problem, and it is a very common cause of memory issues:


public class AttributeListener : IObserver<AFDataPipeEvent>, IDisposable


Then comes our constructor:


public AttributeListener(List<AFAttribute> attributes, Action<AFDataPipeEvent> onNewData, Action<Exception> onError, Action onFinish)
{
      _dataPipe = new AFDataPipe();
      // ... store the attributes and callbacks, then subscribe to the pipe ...
}


Here I expect some controversy. First, because we are moving the subject inside the observer, breaking the traditional structure of the pattern. Secondly, by using Action callbacks I'm going against the event pattern that Microsoft has been using since the first version of the .NET Framework, which has a lot of fans. It's a matter of preference, and there are no performance differences. I personally don't like events because they are too verbose, and we usually don't remove the handler (i.e., call -=), which can cause memory leaks. By the way, I'm not alone in this preference for reactive design, as even the folks from Redmond think that reactive code is more suitable for the observer pattern. The takeaway here is how we subscribe the class to the AFDataPipe while keeping the data handling oblivious to it, giving us maximum encapsulation and modularity.


Now comes the important stuff, the code that does the polling:


public void StartListening()
{
    if (Attributes.Count > 0)
    {
        Task.Run(async () =>
        {
            while (!_source.IsCancellationRequested)
            {
                // Ask the pipe for new events; they are delivered through OnNext
                _dataPipe.GetObserverEvents();
                await Task.Delay(500);
            }
        }, _source.Token);
    }
}


There is not much to say about this code. It starts a background thread with a cancellable loop that polls for new data every 500 milliseconds. The await operator (together with the async modifier) allows our anonymous function to run fully asynchronously. Additionally, note how the cancellation token is used twice: as a regular token for the thread created by Task.Run(), but also as a loop breaker, ensuring that there will be no more calls to the server. To see how the cancellation is handled, take a look at the StopListening method of the class.
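The polling-with-cancellation shape described above is language-agnostic. As a quick illustration (a Python sketch with hypothetical names, not the AF SDK class itself), the loop boils down to:

```python
import threading

def start_listening(poll, cancelled, interval=0.5):
    """Run poll() on a background thread until the cancellation flag is set."""
    def loop():
        while not cancelled.is_set():
            poll()                    # ask the server for new events
            cancelled.wait(interval)  # sleep, but wake up early if cancelled
    worker = threading.Thread(target=loop)
    worker.start()
    return worker

# Usage: the caller cancels, much like StopListening would
cancelled = threading.Event()
worker = start_listening(lambda: None, cancelled, interval=0.1)
cancelled.set()   # request cancellation...
worker.join()     # ...and wait for the loop to exit cleanly
```

Using the cancellation flag both as the loop condition and as an interruptible sleep mirrors how the token is used twice in the C# version.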


When a new AFDataPipeEvent arrives, the AF SDK calls the OnNext method of the IObserver. In our case, it's simple code that only executes the callback provided to the constructor:


// Simplified version: just run the callback that was provided to the constructor
public void OnNext(AFDataPipeEvent pipeEvent) => _onNewData(pipeEvent);


Caveat lector: this is an oversimplified version of the actual implementation. In the final version of the class, the IObserver implementation actually pipes data to a BufferBlock that fires your Action whenever a new AFDataPipeEvent comes in. I'm using a producer-consumer pattern based on Microsoft's Dataflow library.
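That buffered design can be sketched language-agnostically: OnNext only enqueues, and a separate consumer fires the callback. Here is a minimal Python analogue (a queue plus a worker thread standing in for the BufferBlock; all names are mine, not the actual class's):

```python
import threading
from queue import Queue

class BufferedListener:
    """OnNext enqueues the event; a consumer thread dequeues it
    and fires the user callback, decoupling producer from consumer."""
    _STOP = object()  # sentinel that tells the consumer to exit

    def __init__(self, on_new_data):
        self._buffer = Queue()
        self._worker = threading.Thread(target=self._consume, args=(on_new_data,))
        self._worker.start()

    def on_next(self, pipe_event):
        """Called by the producer: cheap, never blocks on the callback."""
        self._buffer.put(pipe_event)

    def _consume(self, callback):
        while True:
            event = self._buffer.get()
            if event is self._STOP:
                break
            callback(event)

    def dispose(self):
        """Drain remaining events, then stop the consumer thread."""
        self._buffer.put(self._STOP)
        self._worker.join()

received = []
listener = BufferedListener(received.append)
listener.on_next({"value": 42})
listener.dispose()
print(received)  # [{'value': 42}]
```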


Finally, here’s an example of how this class should be used. The full code is available in this GitHub repo:


static void Main(string[] args)
{
    // We start by getting the database that we want to get data from
    AFDatabase database = (new PISystems())["MySystem"].Databases["MyDB"];
    // Defining our callbacks
    void newDataCallback(AFDataPipeEvent pipeEvent)
    {
        Console.WriteLine("{0}, Value {1}, TimeStamp: {2}",
            pipeEvent.Value.Attribute.GetPath(), pipeEvent.Value.Value, pipeEvent.Value.Timestamp.ToString());
    }
    void errorCallback(Exception exception) => Console.WriteLine("Server sent an error: {0}", exception.Message);
    void finishCallback() => Console.WriteLine("Finished");
    // Then we search for the attributes that we want
    IEnumerable<AFAttribute> attributes = null;
    using (AFAttributeSearch attributeSearch =
        new AFAttributeSearch(database, "ROPSearch", @"Element:{ Name:'Rig*' } Name:'ROP'"))
    {
        attributeSearch.CacheTimeout = TimeSpan.FromMinutes(10);
        attributes = attributeSearch.FindAttributes();
    }
    // We proceed by creating our listener
    var listener = new AttributeListener(attributes.Take(10).ToList(), finishCallback);
    // Now we inform the user that a key press cancels everything
    Console.WriteLine("Press any key to quit");
    // Now we consume new data arriving
    listener.ConsumeDataAsync(newDataCallback, errorCallback);
    // Then we wait for a user key press to end it all
    Console.ReadKey();
    // User pressed a key, let's stop everything
    listener.Dispose();
}


Simple and straightforward. I hope you like it. And please, let me know whether you agree or disagree with me. This is an important topic and everybody benefits from this discussion!


UPDATE: Following David's comments, I updated the class to offer both async and sync data handling. Take a look at my final code to see how to use the two methods. Keep in mind that the sync version will run on the main thread and block it, so I strongly suggest you use the async call ConsumeDataAsync(). If you need to update your GUI from a separate thread, use Control.Invoke.


Related posts

Barry Shang's post on Reactive extensions for AF SDK

Patrice Thivierge 's post on how to use DataPipes

Marcos Loeff's post on observer pattern with PI Web API channels

David Moler's comment on GetObserverEvents performance

Barry Shang's post on async observer calls

Hello everybody,


I would like to share with you an app I've been working on. I call it AF Bash. It allows you to interact with AF the same way you interact with your files through CMD:

2018-03-13 14_48_38-C__Users_rborges_Documents_Projects_afbash_afbash_bin_Debug_afbash.exe.png


From a code perspective, it's a framework that provides a quick and easy way for you to implement your own set of commands, so I encourage you to write custom commands and send a pull request to the main repository! But first, let's break it down into topics and explain the architecture and implementation details.


1) Architecture

Here is a very simple implementation diagram:

2018-03-13 16_12_05-Drawing1 - Visio Professional.png


AFBash is a console application that uses Autofac as its IoC container and exposes an interface called ICommand that is implemented by BaseCommand, an abstract class for all commands to derive from. It provides a context class full of goodies that should be used to access AF SDK data. The console is wrapped around a custom version of ReadLine, where I manage command history, autocompletion, and command cancelling.


I strongly encourage you to follow the comments in the main entry point, because they will make it easier to understand how the main loop works and what it expects from your custom command.


2) Adding a new Command

So let's stop the chit-chat and get to the fun part: how to create a custom command. For this example, let me show you how to implement a dir / ls command.


First, you need a class that derives from BaseCommand.


class Dir : BaseCommand 


Because we are using Autofac's IoC container, we just need to declare a constructor that receives a Context as a parameter.


public Dir(Context context) : base(context)


Now we have to take care of 4 functions:


public override ConsoleOutput Execute(CancellationToken token)
public override List<string> GetAliases()
public override string GetHelp()
public override (bool, string) ProcessArgs(string args)


The GetAliases() function must provide the aliases you want for the command you are implementing. GetHelp() must return simple help information.


public override List<string> GetAliases()
{
     return new List<string> { "dir", "ls" };
}

public override string GetHelp()
{
     return "Lists all children of an element";
}


The ProcessArgs() function is where you get the arguments your command must parse and store any variables that will be used later by Execute(). Note that BaseCommand exposes a global variable called BaseElement where you can store the AFObject output of your parsing. Getting back to our example, a dir command can be executed with or without parameters. A parameter-less dir implies that you want to list everything from the current element, while a parameter may be a full or relative path. So how do we take care of it? Simple: the AppContext has a function called GetElementRelativeToCurrent(string arg). It will return the element based on the argument passed. Here are some examples:

Current element: \\Server\Database\FirstElement\Child
Argument: "\" or "\\"  ->  Result: \\ (a state where no element is selected)
Argument: "" or null   ->  Result: \\Server\Database\FirstElement\Child
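The resolution rules in the examples above can be sketched as pure string logic. This is a hypothetical, simplified illustration only; the real GetElementRelativeToCurrent works against AF SDK objects, not raw path strings.

```csharp
// Simplified sketch of the relative-path rules shown above (illustration only).
static string ResolveRelative(string current, string arg)
{
    if (string.IsNullOrEmpty(arg))
        return current;                    // "" or null -> stay on the current element
    if (arg == "\\" || arg == "\\\\")
        return "\\\\";                     // "\" or "\\" -> root, no element selected
    if (arg.StartsWith("\\\\"))
        return arg;                        // a full path is taken as-is
    return current + "\\" + arg;           // anything else is relative to the current element
}
```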


So far we have this for our argument processing (note that I'm using C# 7.0, where a function can return multiple values via tuples):


public virtual (bool, string) ProcessArgs(string args)
{
    BaseElement = AppContext.GetElementRelativeToCurrent(args);

    if (BaseElement == null)
        return (false, string.Format("Object '{0}' not found", args));

    return (true, null);
}


There is only one thing missing: when your current element is the topmost node, CurrentElement is null because no PISystem is selected. So a null BaseElement is not necessarily a bad thing; we just have to check whether the user did it intentionally or made a mistake. This processing is already implemented as a virtual function on BaseCommand, so if your command's argument is a path, you don't even need to override it: BaseElement will be populated with the target element.


Finally the Execute() function.


It must return a ConsoleOutput, a wrapper class that makes it easier for you to print structures to the console. In our example, let's set a header message like CMD's dir:


ConsoleOutput console = new ConsoleOutput();
console.AddHeaderLine(string.Format("Children of {0}", BaseElement is null ? "root" : BaseElement.GetPath()));


We have a table-like result, so we need to set the headers:


console.SetBodyHeader(new List<string> { "Type", "Name" });


Now, we get the children of BaseElement and loop through them, printing everything we want:


var children = AppContext.GetChildren(BaseElement);
children.ForEach(c => {
    console.AddBodyLine(new List<Tuple<string, Color>> {
        new Tuple<string, Color>(c.Identity.ToString(), AppContext.Colors[c.Identity]),
        new Tuple<string, Color>(c.ToString(), AppContext.Colors.Base)
    });
});
return console;


And that's it! Now you just need to compile and you are good to go. The actual implementation of my dir command also handles attribute data; I encourage you to go and see how I did it.


3) Available Commands


So far I have implemented 7 basic commands: CD / LS


This is a work in progress, so if you clone this project, keep your repo up-to-date because I will keep pushing bug fixes and new features.


Finally, if you find bugs or have questions, let me know!
