The PI2PI interface is routinely used to copy data from a source PI Server to a target PI Server. This is especially useful when consolidating data, for example from a site server to an enterprise server. Many companies distinguish between a production PI Server and an application PI Server, where business units have direct access for visualization, reporting, and analysis. The PI2PI interface can also act as the gateway between different networks, which is an important aspect of cyber security.


One drawback of copying data between servers is that the PI2PI interface adds latency to the data flow - this is of course to be expected, since data has to be read from the source and then written to the target.


Latency is an important factor when designing applications, especially event-driven calculations. In PI, an event is triggered when a data value enters the snapshot or archive queue. This process can be monitored and the latency calculated:


Measuring latency using PowerShell
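The original PowerShell snippet is not reproduced here, but the idea is simple: record each value's source timestamp and the time it arrives on the target server, then compute summary statistics on the differences. A minimal sketch of that calculation in Python (the `(source_timestamp, arrival_time)` pairs and the sample numbers below are illustrative assumptions, not real measurements):

```python
from statistics import mean, stdev

def latency_stats(events):
    """Compute latency statistics from (source_timestamp, arrival_time) pairs.

    Timestamps are in seconds; latency is the time a value spends in
    transit between the source and the target server.
    """
    latencies = [arrival - source for source, arrival in events]
    return {
        "mean": mean(latencies),
        "stdev": stdev(latencies),
        "min": min(latencies),
        "max": max(latencies),
    }

# Hypothetical sample: values stamped at t = 0, 3, 6, 9 s, arriving later
sample = [(0.0, 4.2), (3.0, 9.1), (6.0, 9.3), (9.0, 14.2)]
stats = latency_stats(sample)
```

The mean alone hides a lot: as the measurements below show, the standard deviation and the shape of the distribution are just as important.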


When I measured the latency of data values on a production system where two PI2PI interfaces were used in series, I was surprised by the measurements. The average latency was in the range I expected, but the standard deviation and the distribution seemed odd.


To understand the effect better, I put together a small simulation in R. Here are the results for a system with two PI2PI interfaces in series:


This was not an exact match of the production system, but it showed some of the same patterns. The simulation was performed for a tag with a polling rate of 3 sec and PI2PI interfaces with a 5 sec scan rate each.
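The original simulation was written in R; a comparable sketch in Python is shown below. The model assumes each PI2PI interface forwards a value at its first scan after the value becomes available, and that each interface's scan cycle has a random phase offset (both are modeling assumptions, not documented interface behavior):

```python
import math
import random

def next_scan(t, period, phase):
    """First scan time strictly after t, for a scan cycle with the given period and phase."""
    k = math.floor((t - phase) / period) + 1
    return phase + k * period

def simulate(n=10000, poll=3.0, scan_rates=(5.0, 5.0), seed=1):
    """Monte Carlo latency model: a tag polled every `poll` seconds passes
    through PI2PI interfaces in series, each picking the value up on its
    next scan. Returns the list of end-to-end latencies."""
    rng = random.Random(seed)
    # Each interface starts its scan cycle at a random phase offset
    phases = [rng.uniform(0.0, p) for p in scan_rates]
    latencies = []
    for i in range(n):
        t = i * poll                   # timestamp of the polled value
        arrival = t
        for period, phase in zip(scan_rates, phases):
            arrival = next_scan(arrival, period, phase)
        latencies.append(arrival - t)  # end-to-end latency
    return latencies

lat = simulate()
avg = sum(lat) / len(lat)
```

With a 3 sec poll and 5 sec scans, the value timestamps and scan times repeat on a common 15 sec cycle, so the latencies fall on a small set of discrete values - which is exactly the multimodal, non-normal shape seen in the measurements.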


Since this looks like a decent model, we can optimize the distribution by selecting different parameters. The PI2PI scan rates seem to have the largest impact, so we can rerun the same model with 2 sec scan rates:


This already looks better! We could try to fit the MC parameters to the real measurements, but this model is good enough to get the basic metrics.


So in summary:

  1. The PI2PI interface adds latency to the data flow and changes its distribution (non-normal, with several modes)
  2. Event-based apps should be robustified to take this into account
  3. It is in general a good idea to measure latency in time-critical applications
  4. A simple MC model can help to understand the data flow and optimize settings
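As an illustration of point 2, one simple way to robustify an event-driven calculation is to defer processing until a worst-case transport latency has elapsed, so that late-arriving values from the PI2PI chain are still included. A minimal sketch (the 12 sec budget is a made-up figure for illustration, not derived from the measurements above):

```python
def ready_to_process(event_timestamp, now, latency_budget=12.0):
    """Return True once enough wall-clock time has passed that all values
    with this timestamp should have traversed the PI2PI chain."""
    return now - event_timestamp >= latency_budget

held = ready_to_process(100.0, 105.0)   # still inside the budget: wait
ready = ready_to_process(100.0, 112.0)  # budget elapsed: safe to calculate
```

The latency budget itself should come from measurements like the ones above, e.g. the observed maximum plus some margin.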