
An Alternate PI-to-PI interface using multithreaded AFSDK and RDA for scalability

Discussion created by kmarsh on Oct 9, 2013

As many know, the seasoned OSIsoft PI-to-PI interface has scalability limitations beyond 10,000 points.  Now that the AFSDK is available with rich data access (RDA) calls, it is entirely possible to eliminate this limitation.  I'm certain the OSIsoft development team is hard at work on the next generation of interfaces.  Meanwhile, for dev/test purposes, or if you are in a real squeeze, the attached example code does PI-to-PI with history recovery in a high-performance, scalable way.  With VMs on my notebook computer it easily kept up with 100,000 points on 10-second scan rates, completing an hour of history recovery in under 3 minutes.

 

THIS IS NOT RECOMMENDED FOR PRODUCTION USE.

 

What it doesn't do is handle errors or even abnormalities; at the first sign of trouble it will simply crash.  It is also pretty simple in terms of features, although it does do history recovery and real-time updating, as would typically be needed in a PI-to-PI interface.  It's a fun example to work with if you are new to reading from and writing to PI with RDA, or to signing up for updates.  Although it is fairly safe to point at a production SOURCE PI server, if abused it would make for a good DoS attack, e.g. a million points with 100 threads and a history recovery start time of *-100d.

 

Here's how it works...

 

It is a console application, but it could easily be made a service if it were more robust.  All of the configuration is stored under a single registry key; if the key doesn't exist, it is created automatically (a sketch of the create-and-read logic follows the listing).  Example:

 

[HKEY_LOCAL_MACHINE\SOFTWARE\PISystem\AltRDAPItoPI]
"SourcePIServer"="PISVR01"
"TargetPIServer"="PISVR02"
"TagMasks"=hex(7):4b,00,44,00,54,00,2a,00,00,00,00,00
"PointSources"=hex(7):00,00
"LogFilePath"="C:\\temp-AltRDAPItoPI.log"
"UpdatePeriodInSeconds"="10"
"Threads"="20"
"HistoryRecoveryStartTime"="*-1h"

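For the curious, here is a minimal sketch of that auto-create-and-read logic in C#/.NET using Microsoft.Win32; the Config class and the default values it writes are my own illustrative assumptions, not necessarily what the attached code does:

using Microsoft.Win32;

// Opens the config key, creating it with illustrative defaults on first run.
// CreateSubKey opens the key if it already exists. TagMasks and PointSources
// are REG_MULTI_SZ, so they round-trip as string arrays.
static class Config
{
    const string KeyPath = @"SOFTWARE\PISystem\AltRDAPItoPI";

    public static RegistryKey Open()
    {
        RegistryKey key = Registry.LocalMachine.CreateSubKey(KeyPath);
        if (key.GetValue("SourcePIServer") == null)
        {
            key.SetValue("SourcePIServer", "PISVR01");
            key.SetValue("TargetPIServer", "PISVR02");
            key.SetValue("TagMasks", new[] { "KDT*" }, RegistryValueKind.MultiString);
            key.SetValue("PointSources", new string[0], RegistryValueKind.MultiString);
            key.SetValue("LogFilePath", @"C:\temp-AltRDAPItoPI.log");
            key.SetValue("UpdatePeriodInSeconds", "10");
            key.SetValue("Threads", "20");
            key.SetValue("HistoryRecoveryStartTime", "*-1h");
        }
        return key;
    }
}

Reading values back is then just key.GetValue("Threads") and friends; remember that writing under HKLM needs an elevated process.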
 

All are REG_SZ strings, except PointSources and TagMasks, which are REG_MULTI_SZ (the hex(7) value above decodes to the single mask "KDT*"; PointSources is empty).  Probably case-sensitive, I think.  All points matching any point source or tag mask on the TARGET PI server are loaded and then de-duplicated.  The program then finds the corresponding point for each on the source PI server by matching tag name; this is the only slow part of the program, at a few seconds per 10,000 points.  If there is no match, that point is dropped from the list.  The snapshot value is read from the target, and history recovery starts there if it is later than the HistoryRecoveryStartTime from the registry.
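Here is roughly what that point-resolution step looks like with the AFSDK, as a sketch only; the method name and tuple list are mine, not necessarily how the attached code is structured:

using System;
using System.Collections.Generic;
using OSIsoft.AF.PI;

// Sketch: collect target points by tag mask and point source, de-duplicate,
// then pair each with the same-named point on the source server.
static List<Tuple<PIPoint, PIPoint>> ResolvePoints(
    PIServer source, PIServer target, string[] tagMasks, string[] pointSources)
{
    var targetPoints = new Dictionary<string, PIPoint>(StringComparer.OrdinalIgnoreCase);
    foreach (string mask in tagMasks)
        foreach (PIPoint pt in PIPoint.FindPIPoints(target, mask, null, null))
            targetPoints[pt.Name] = pt;               // dictionary keeps the list distinct
    foreach (string ps in pointSources)
        foreach (PIPoint pt in PIPoint.FindPIPoints(target, "*", ps, null))
            targetPoints[pt.Name] = pt;

    var pairs = new List<Tuple<PIPoint, PIPoint>>();
    foreach (var kv in targetPoints)
    {
        // One name lookup per point on the source server: the slow part.
        PIPoint srcPt;
        if (PIPoint.TryFindPIPoint(source, kv.Key, out srcPt))
            pairs.Add(Tuple.Create(srcPt, kv.Value)); // no match => point is dropped
    }
    return pairs;
}

A bulk lookup with PIPoint.FindPIPoints(source, names) over the whole name list might cut the round trips here, though I haven't timed it.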

 

The points are divided among the threads.  Each thread signs up for updates and then recovers history from MAX(HistoryRecoveryStartTime, snapshot timestamp) up to the time it signed up for updates, writing to PI in NOREPLACE mode.  Then it collects updates every UpdatePeriodInSeconds seconds.  Any write error to the target PI server causes it to exit.  The log file is verbose, providing continuous feedback about what it is doing.
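In AFSDK terms, each worker thread is roughly the sketch below; again, these are assumed names and a simplification, not the attached code verbatim:

using System;
using System.Collections.Generic;
using System.Threading;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

// Sketch of one worker thread: sign up for snapshot updates first, recover
// history up to the signup time in NOREPLACE mode, then poll forever.
static void Worker(PIServer target, List<Tuple<PIPoint, PIPoint>> pairs,
                   AFTime recoveryStart, int updatePeriodSeconds)
{
    var srcToTarget = new Dictionary<PIPoint, PIPoint>();
    foreach (var p in pairs) srcToTarget[p.Item1] = p.Item2;

    using (var pipe = new PIDataPipe(AFDataPipeType.Snapshot))
    {
        pipe.AddSignups(new List<PIPoint>(srcToTarget.Keys));
        AFTime signupTime = AFTime.Now;

        // History recovery: MAX(HistoryRecoveryStartTime, target snapshot time).
        foreach (var p in pairs)
        {
            AFTime snapTime = p.Item2.CurrentValue().Timestamp;
            AFTime start = snapTime > recoveryStart ? snapTime : recoveryStart;
            AFValues history = p.Item1.RecordedValues(
                new AFTimeRange(start, signupTime),
                AFBoundaryType.Inside, null, false, 0);
            foreach (AFValue v in history) v.PIPoint = p.Item2;  // retarget to target point
            var errors = target.UpdateValues(history, AFUpdateOption.NoReplace);
            if (errors != null && errors.HasErrors) Environment.Exit(1);
        }

        // Real-time phase: drain the data pipe every UpdatePeriodInSeconds.
        while (true)
        {
            Thread.Sleep(updatePeriodSeconds * 1000);
            var writes = new List<AFValue>();
            foreach (AFDataPipeEvent ev in pipe.GetUpdateEvents(100000))
            {
                AFValue v = ev.Value;
                v.PIPoint = srcToTarget[v.PIPoint];
                writes.Add(v);
            }
            if (writes.Count == 0) continue;
            var errs = target.UpdateValues(writes, AFUpdateOption.NoReplace);
            if (errs != null && errs.HasErrors) Environment.Exit(1);
        }
    }
}

Note the ordering: sign up first, then recover history up to the signup time, so nothing falls in the gap; NOREPLACE means any overlap between the two phases is harmless.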

 

You can run it on the source, the target, or a third machine, but make sure trusts (or accounts) are in place granting read access to the source points and write access to the target points.  Hope you find it an interesting example!
