All People > ernstamort > Holger Amort's Blog > 2018 > February

Almost a year ago I wrote a blog post about an R library for connecting to OSIsoft PI and AF (ROSIsoft). The feedback was great, and I tried to respond quickly.

I am also seeing more and more projects with modeling and forecasting needs, which is a great development.

 

The problem is that the build process is rather lengthy; you have to work in both the .NET and R environments to put a package together. That is an inherent problem with scripting languages: they just don't do well with loops, so you need to build an intermediate layer between the AF SDK objects and the simple array types that R understands.

 

There were also other problems with the initial approach:

 

  1. The results were returned in lists, whereas most applications in R work with data.frames.
  2. Timestamps were returned as string or double (Excel) values instead of R date-time types such as POSIXct; this required an extra conversion step in R.
  3. Function and variable descriptions were missing.
  4. As mentioned, the build process was mostly manual.
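As a quick illustration of the second point, an Excel-style serial date (days since 1899-12-30) can be converted to POSIXct in base R; the conversion constant is standard, and the sample timestamp below is just a made-up example:

```r
# Convert an Excel serial date (days since 1899-12-30) to POSIXct.
# Excel day 25569 corresponds to 1970-01-01, the Unix epoch.
excel_to_posixct <- function(serial, tz = "UTC") {
  as.POSIXct((serial - 25569) * 86400, origin = "1970-01-01", tz = tz)
}

excel_to_posixct(43132.5)  # "2018-02-01 12:00:00 UTC"
```

This is exactly the extra step the old package forced on the user; the new package returns POSIXct directly.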

 

I automated the build process and created a scripting engine in Visual Studio to write the R functions. That really helped accelerate the build process.

 

The library is installed and loaded the same way as before:

 

  1. Installation is done manually. In RStudio, select Tools > Install Packages… and, when the dialog opens, change the option “Install from:” to “Package Archive File”.
  2. After the installation, the library is loaded with: library(ROSIsoft)
  3. Before connecting to AF and PI servers, first run AFSetup(); this will install dependencies such as rClr.
  4. To connect to the PI and AF servers, use the following:
  5. connector<-ConnectToPIWithPrompt("<PI Server>") and connector<-ConnectToAFWithPrompt("<AF Server>","AFDatabase")

That's it.

But the data model now looks different:

And the data calls return R timestamps:

 

 

Or values:

Some people have asked about summary types, which are really a good way to downsample data. This function now also returns a data.frame:

Or for multiple values:
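The ROSIsoft summary call itself isn't reproduced here, but the idea of downsampling raw values into a data.frame can be sketched in plain base R; the column names and the 10-minute interval are illustrative assumptions, not the package's API:

```r
# Hypothetical sketch: downsample raw values into 10-minute averages,
# returning a data.frame the way a summary call would.
downsample <- function(df, interval_sec = 600) {
  # Snap each timestamp down to the start of its interval bucket
  bucket <- as.POSIXct(
    floor(as.numeric(df$TimeStamp) / interval_sec) * interval_sec,
    origin = "1970-01-01", tz = "UTC"
  )
  aggregate(list(Value = df$Value), by = list(TimeStamp = bucket), FUN = mean)
}

# 30 minutes of 1-minute data -> 3 summary rows
raw <- data.frame(
  TimeStamp = as.POSIXct("2018-02-01 00:00:00", tz = "UTC") + seq(0, 1799, by = 60),
  Value     = sin(seq(0, 1799, by = 60) / 300)
)
summary_df <- downsample(raw)
```

The point is simply that a data.frame with a POSIXct column drops straight into plotting or modeling code with no further conversion.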


The package still needs a fair amount of testing, and I would be happy if you would send your feedback to: hamort@tqsintegration.com. As I mentioned, thanks to the automated build process, maintenance of this package should now be much easier.

The PI2PI interface is routinely used to copy data from a source server to a target server. This is especially useful when consolidating, for example, data from a site server to an enterprise server. A lot of companies differentiate between production PI Servers and application PI Servers, where business units have direct access for visualization, reporting, and analysis. The PI2PI interface is also the gateway between different networks, which is an important aspect of cyber security.

 

One drawback of copying data between servers is that the PI2PI interface adds latency to the data flow. This is of course to be expected: data have to be read from the source and then written to the target.

 

Latency is an important factor when designing applications, especially event-driven calculations. In PI, an event is triggered when a data value enters the snapshot or archive queue. This process can be monitored and the latency calculated:
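The post measures this with PowerShell; to keep the examples in one language, the same calculation can be sketched in R. Latency is simply the arrival time at the target minus the source timestamp (the timestamps below are made up for illustration):

```r
# Hypothetical latency calculation: time a value arrives at the target
# minus the timestamp assigned at the source.
source_ts  <- as.POSIXct("2018-02-01 10:00:00", tz = "UTC") + c(0, 3, 6, 9)
arrival_ts <- source_ts + c(4.1, 6.8, 5.2, 7.5)   # made-up pick-up times

latency <- as.numeric(difftime(arrival_ts, source_ts, units = "secs"))
mean(latency)  # average latency in seconds
sd(latency)    # the spread is what matters for event-driven apps
```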

 

Measuring latency using PowerShell

 

When I measured the latency of data values on a production system where two PI2PI interfaces were used in series, I was surprised by the measurements. The average latency was in the range I expected, but the standard deviation and distribution seemed odd.

 

To understand the effect better, I put together a small simulation in R. Here are the results for a system with two PI2PI interfaces in series:

 

This was not an exact match of the production system, but it showed some of the same patterns. The simulation was performed for a tag with a polling rate of 3 sec and PI2PI interfaces with a 5 sec scan rate each.
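The original simulation code isn't included in the post; a minimal sketch of such a Monte Carlo model, assuming each interface scans on a fixed grid with a random phase and forwards a value at the first tick at or after it arrives, could look like this:

```r
set.seed(1)

# Monte Carlo sketch: end-to-end latency of values passing through
# PI2PI interfaces in series. Each interface picks a value up at its
# next scan tick, so each hop adds up to one scan period of waiting.
simulate_latency <- function(n = 10000, poll = 3, scan_rates = c(5, 5)) {
  created <- runif(n, 0, poll)          # value timestamps within a poll cycle
  arrival <- created
  for (scan in scan_rates) {
    phase   <- runif(1, 0, scan)        # random offset of this scan grid
    arrival <- phase + ceiling((arrival - phase) / scan) * scan
  }
  arrival - created                     # latency in seconds
}

lat <- simulate_latency()               # 3 s poll, two 5 s interfaces
c(mean = mean(lat), sd = sd(lat))
```

Rerunning with scan_rates = c(2, 2) reproduces the second experiment below; the resulting distribution is multimodal rather than normal, matching the pattern seen in the measurements.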

 

Since this looks like a decent model, we can optimize the distribution by selecting different parameters. The PI2PI scan rates seem to have the largest impact, so we can rerun the same model with 2 sec scan rates:

 

This already looks better! We could try to fit the MC parameters to the real measurements, but this model is good enough to get the basic metrics.

 

So in summary:

  1. The PI2PI interface adds latency to the data flow and modifies the distribution (non-normal, with several modes).
  2. Event-based apps should be made robust to take this into account.
  3. It's in general a good idea to measure latency in time-critical applications.
  4. A simple MC model can help to understand data flow and optimize settings.