What's the most efficient way to write values using the AF SDK?
I need to write over 100,000 values a second, so I'm going to push it to the max.
Oh, and I can only use PI 2010 and AF 2010, not 2012, so I know I'm pushing it.
Can you explain more about your scenario? In the meantime, a few notes:
The AF SDK call AFListData.UpdateValues can support 100K unbuffered events/sec written to PI; however, this is only available in AF Client 2012 and later.
Regarding performance on writing data to PI Data Archive:
- AF SDK 2010 limitation:
AF 2010 uses the PI SDK for PI data access, which is limited to writing data one point at a time. So if each of your events corresponds to a different point, you'll have to make 100K RPC calls per second to the PI Data Archive; hence you almost certainly won't get anywhere near the rate you're expecting.
- PIBufSS 3.4.380 limitation:
If writing data through PI SDK buffering (with PIBufss), the throughput limit for the currently released PIBufss (3.4.380) is about 80K events/sec under optimal conditions, and it can only buffer to one PI Data Archive.
- PI Data Archive 2010 limitation:
Quoting Denis: "100K+ events/sec into the Snapshot Subsystem is well into the 'red' zone of PI Server 2010 on any hardware." See the following discussion:
In short, we suggest you upgrade to AF SDK 2012 (AFListData.UpdateValues or PIServer.UpdateValues) and to PI Data Archive 2012.
If you want data to be buffered, the upcoming PIBufSS (bundled with upcoming AF SDK 2013) will be able to handle the rate you're expecting.
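To illustrate the bulk call suggested above, here is a minimal, hypothetical AF SDK 2012+ sketch; the server name and tag names are placeholders, and it needs a reachable PI Data Archive to actually run:

```csharp
using System;
using System.Collections.Generic;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

// Connect to the PI Data Archive (server name is a placeholder)
PIServer server = new PIServers()["MyPIServer"];
server.Connect();

// Resolve the target points once, up front
IList<PIPoint> points = PIPoint.FindPIPoints(server, new[] { "Tag1", "Tag2" });

// Build one AFValue per point for this scan cycle
var batch = new List<AFValue>();
var now = new AFTime(DateTime.UtcNow);
foreach (PIPoint point in points)
{
    batch.Add(new AFValue(42.0, now) { PIPoint = point });
}

// A single RPC writes the whole batch to the snapshot,
// instead of one RPC per point as with the PI SDK
server.UpdateValues(batch, AFUpdateOption.Insert);
```

The key difference from the PI SDK path is that the entire batch travels in one round trip, which is what makes the 100K events/sec figure reachable.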
Thanks for your responses.
In answer to your questions:
The data pattern will just be a point and a value, which will be either a plant value or a system digital state.
The data has to go to the snapshot to allow compression.
It will be using PI login credentials, not Windows security.
Buffering is required.
We are aiming for 2 machines, but can use multiple processes on each machine.
What's your data source? Something with that high a data rate sounds like it should be a dedicated interface.
It's a 1970s control system; OSIsoft doesn't have a dedicated interface for it.
No OPC server? I assume not, hence your question.
Your data pattern still isn't clear. Are you expecting 100,000 Points with 1 event/second, 50,000 Points with 2 events/second, or some other combination? An interface approach would allow you to scale horizontally, but if your data pattern was leaning towards few PI Points (like 100) with high event counts (like 1,000 events/second each) then you would potentially take a different approach.
Are you constrained by a customer from upgrading to PI Server 2012 (which exposes some bulk data calls)? I always use the shoe analogy: you have size 10 feet but you've been given size 8 shoes. Sure, you can cram your feet in there, but they'll get a little sore and you'll walk a bit funny, so why not save that pain and buy some size 10 shoes? (Bad analogy, I know.)
Nah, can't do OPC.
The system exports its 200,000 data points every 2 seconds.
Obviously a lot fewer than 200,000 values will have changed, but we have to cope with the case where all exception reporting may have been turned off.
So we have 200,000 data points with 1 value each every 2 seconds.
We are being restricted by the client to use current systems; we may be able to force them to shift, or may not, but we have to assume not.
Okay, great, that is clear now.
Probably a couple of options here, all with caveats.
- Get OSIsoft to write an Interface. What's your time scale for implementation?
- Get the Control System vendor to write an OPC Server, then scale out with the standard OSIsoft PI Interface for OPC DA.
- Write your own interface to an OPC Server, then scale out with the standard OSIsoft PI Interface for OPC DA.
- Write a specific parser for the export, split it up for multiple instances of the buffer-capable PI UFL interface.
- Push for an upgrade to PI Server 2012, write directly to the PI Server using AF SDK (requires embedding the Exception Reporting logic). Scale out accordingly (PIBufss limitation).
Maybe some other combination, that's what springs to mind right now.
Is the data from a continuous process where you can assume the profile of the data is fairly consistent? You'd need to understand that when deciding how to split the data across more than one machine.
Option 1 is not viable, as it would involve a lot more than just sending data to PI, and on cost grounds.
Option 2 is definitely not an option, as the control computers would not cope with it.
Option 3 is an option, but seems very slow and not ideal.
Option 4 is a possibility, but we can't guarantee it.
What is the theoretical limit on the PI SDK transferring points? What is the best way regarding this route? Parallel WriteValues calls?
The data is consistent; we know how we can split it up 5 or 6 ways, although the split wouldn't be even.
Thanks for all your help so far.
What's the nature of the data or process that's producing 100,000 events/sec?
If you are limited to AF SDK 2010, and therefore the PI SDK (since all of its data access goes through the PI SDK), it is just not going to perform for you. Your best bet is to write your interface with the PI API. Is that a possibility?
Are there any figures out there as to what each solution can manage in data transfer per second?
I.e., what are the theoretical limits of
UFL is based upon the PI API, so it has the same theoretical limits. The PI API will outperform the PI SDK every time. It has already been stated by Denis in previous vCampus threads that the theoretical limit for PI Server 2010 is 100K events/sec. The PI API will be able to reach that; the PI SDK will not. Here is a link I found that states the PI SDK limit is closer to 2K/sec.
And in this link they reach pretty much the same conclusion: the data rates between the PI API and the PI SDK favor the PI API every time.
Hmm, there seems to be very little documentation on the limits.
I can understand that with the SDK and API it depends on how you program it; however, there should be some guidance.
Do the UniInt interfaces perform better? E.g., if we created our own OPC server and wrote to that, would OPC DA get closer to 100,000/sec?
I found previously that using the SDK I could do 10,000 per 5-second update without parallelism, although I never pushed it further.
UniInt interfaces are also based on the PI API for writing data to PI. However, PI API data access licensing is no longer available, so that may not be an option unless you purchased it many years ago. So your only real option at this point is to upgrade to the latest AF SDK. Note that we are NOT telling you to upgrade the AF Server; all we are suggesting is that you upgrade the AF SDK client on the machine where you are developing and running your custom application. Hopefully, this will be an available option for you.
As for performance metrics, unfortunately it is very difficult for us to give you better figures than the ones we have given, because performance depends on so many variables.
Hope this helps,
Jason, just updating the AF SDK will only provide parallelism improvements; as I mentioned earlier, you would need PI Server 2012 to make use of some of the bulk RPCs that are exposed. Plus, with PI Server 2010 you'll soon hit the software's limitations on sustained throughput.
The PI API never entered the fold because its licensing is deprecated; hence you would need OSIsoft themselves to build the interface.
I wonder if 100,000 events/second is too much of a nice-to-have, when in fact most of the measurements would benefit from having some form of exception reporting applied, which would reduce your throughput requirements anyway. Depending on the process, would it, for example, make sense for you to be receiving a static temperature indicator measurement every second?
Yes, ultimately you would never get 100,000 events per second, as you are never going to have exception reporting turned off on every item.
However, it's an expected requirement from the client, so unless we have written proof that it's not possible, it remains a requirement.
I maybe didn't make myself clear: the only thing I can't change is the server; I can run whatever SDK I want in my design.
If I can hit 100,000/s, or close to it, I would be happy.
If I run, say, 5 threads, each with its own connection to the PI server, would I be able to run, say, 10,000 through each? Or is the limitation per machine?
We are looking at 2 machines, with 2 failovers.
If you're going to buffer/fan using the PI Buffer Subsystem, then your ceiling is per machine regardless of threading, so you scale out across machines. Whether you use the PI API, PI SDK, or AF SDK, they all hand off their events to PIBufss, so that is your potential bottleneck.
The PIBufss ceiling mentioned earlier in the thread was 80K events/second, but I would expect maybe 40-50K events/second on a single machine; certainly that's what I've experienced over the years.
If you really want to prove near to 100K events/second to the client, you'll need both machines without failover.
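To make the per-machine scale-out concrete, here is a hypothetical sketch of splitting the 200,000-point tag list into one contiguous chunk per writer machine; the class name and counts are illustrative, not from any OSIsoft library:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Partitioner
{
    // Split the full tag list into contiguous chunks, one per writer machine,
    // so each PIBufss instance stays under its observed ~40-50K events/sec ceiling.
    public static List<List<string>> Split(IList<string> tags, int machines)
    {
        int chunkSize = (tags.Count + machines - 1) / machines; // ceiling division
        return Enumerable.Range(0, machines)
            .Select(i => tags.Skip(i * chunkSize).Take(chunkSize).ToList())
            .ToList();
    }
}
```

With 200,000 tags scanned every 2 seconds (100K events/sec total), splitting across 2 machines leaves each one pushing roughly 50K events/sec through its own buffer.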
Excellent, that's the kind of info I was after, thanks. So theoretically, no matter whether we use an OSIsoft interface or any development tool, 40-50K/s is going to be the limit.
It's not a problem for us to have 4 machines: 2 primary, 2 failover.
And if we upgraded to PI Server 2012,
then we could use the AF SDK and use a data pipe to write the 100,000 per machine (failover)?
Does the limit go up for OSIsoft interfaces as well? I.e., is an OPC interface able to transfer more, or is it still limited by PIBufss (I assume so, because it uses the API)?
Matt, one of our Partner Managers will be in touch with you over the next day or so to follow up on this thread.
Matt Inglis: "And if we upgraded to PI Server 2012, then we could use the AF SDK and use a datapipe to write the 100,000 per machine (failover)"
The increase in throughput will come from a combination of a version later than PI Server 2010, and the "next version" of the PI Buffer Subsystem. The next version of PI Buffer Subsystem is currently a CTP on vCampus so you could certainly test out the performance difference on a development system.
I deal with Gareth a lot, but I haven't managed to chat to him yet.
To close out this thread (yes, I know it was started a long time ago): we are now at the closing stages of the project and have exceeded the requirements, so I thought I would share our findings.
Whilst 100,000 events/sec into the PI 2010 Snapshot is in the "red" zone, it certainly is possible.
Using a multi-threaded application with the AF SDK UpdateValues call, we have been able to comfortably hit 110,000 events/sec, although it should be noted that PI will not sustain this forever, and the event queue will eventually start to build up. It should also be noted that to do this we had to avoid the buffer subsystems, as they were not quick enough; instead, we developed our own buffering.
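The shape of such a custom buffer might look something like the following sketch: each worker owns its queue and its PIServer connection, and drains the queue in large batches through bulk UpdateValues calls. This is a hypothetical illustration (class name, batch size, and server name are placeholders, not the actual project code), and it needs a live PI Data Archive to run:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;

class SnapshotWriter
{
    private readonly BlockingCollection<AFValue> _queue = new BlockingCollection<AFValue>();
    private readonly PIServer _server;

    public SnapshotWriter(string serverName)
    {
        // Each writer gets its own connection to the PI Data Archive
        _server = new PIServers()[serverName];
        _server.Connect();
        new Thread(WriteLoop) { IsBackground = true }.Start();
    }

    // Producers (the export parser) hand events to this writer's queue
    public void Enqueue(AFValue value) => _queue.Add(value);

    private void WriteLoop()
    {
        var batch = new List<AFValue>(10000);
        foreach (AFValue v in _queue.GetConsumingEnumerable())
        {
            batch.Add(v);
            // Flush in large batches (or when the queue drains)
            // to keep the per-RPC overhead low
            if (batch.Count >= 10000 || _queue.Count == 0)
            {
                _server.UpdateValues(batch, AFUpdateOption.Insert);
                batch.Clear();
            }
        }
    }
}
```

Running several of these writers in parallel, each over its own disjoint slice of the tag list, is one way the reported 110K events/sec could be reached without going through PIBufss.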
Thanks for closing this out. A lot has changed since you first posted, and I hope you can upgrade to the latest PI Server. We have improved our buffering, and it is even more common to use UpdateValues and pass in an attribute list or point list to do a "bulk" update, which is very efficient. Even the OSIsoft buffering has improved dramatically! Also, stay tuned for some new async methods!