
AF SDK app using ReplaceValues() is too performant for Data Archive Snapshot

Question asked by AlexCote Champion on Feb 1, 2017
Latest reply on Feb 1, 2017 by ekuwana

Hello geeks,


Before anyone asks: PI Data Archive 2016 R2, PI AF Server R2, AF SDK.


I have several C# Windows Services built on the AF SDK (latest version). These apps are built around AF (it acts as their data store), so they read/write AFAttributes and generate an S88 batch structure in Event Frames. The architecture includes a "recovery / backfilling" capability, so I can go back to the beginning of a specific batch and recalculate everything. Right now I'm building a new instance of these apps for a new process unit (i.e. continuous steel casting of billets), so I'm enjoying this feature to recalculate everything since Jan-01-2017: imagine the service parsing > 120M events and writing 60M in a few minutes... And I noticed something that scares me: the app writes values so fast that they mostly pile up in the Snapshot "Events in Queue", which grows and grows ever faster with ALL events from ALL sources (i.e. our 25k OPC integer tags keep pushing events to the Snapshot, which can no longer process them into the archive in a timely fashion).


I am using the following call to push data to the PI Data Archive:

attribute.Data.ReplaceValues(timeRange, newValues);
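For context, here is roughly how that call is assembled (a sketch with illustrative names; the real code builds timeRange and newValues from the batch recalculation):

```csharp
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.Time;

// Sketch: replace all events for one attribute over the recalc window.
// "attribute" is an AFAttribute resolved elsewhere in the service.
AFTimeRange timeRange = new AFTimeRange(new AFTime("2017-01-01"), AFTime.Now);

AFValues newValues = new AFValues();
// ... populate newValues with AFValue(value, timestamp) entries
//     produced by the recalculation ...

// One round trip: removes existing events in the range and writes the new ones.
attribute.Data.ReplaceValues(timeRange, newValues);
```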


In the previous version I was using UpdateValues, which had the same effect (though it was much slower):

// Gather current data
AFValues initialValues = attribute.Data.RecordedValues(timeRange, AFBoundaryType.Inside, null, string.Empty, false);

// Erase current data
if (initialValues.Count > 0)
    attribute.Data.UpdateValues(initialValues, OSIsoft.AF.Data.AFUpdateOption.Remove, bufferOption);

// Add new data
if (newValues.Count > 0)
    attribute.Data.UpdateValues(newValues, OSIsoft.AF.Data.AFUpdateOption.Replace, AFBufferOption.BufferIfPossible);

Note that to maximize processing speed, this function is called as infrequently as possible to limit network round trips, with writes issued from several threads / parallel processing / async tasks... For example, one recovery function updates 2 attributes with an average of 8k events each, about 650 times (10.4M events total), in approximately 45 seconds.
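The batching pattern described above looks roughly like this (a sketch with illustrative names; the chunk size and degree of parallelism are tuned per process unit):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Time;

// Sketch: each work item carries one attribute, its recalc window,
// and the ~8k recalculated events for that window.
public class RecoveryChunk
{
    public AFAttribute Attribute;
    public AFTimeRange TimeRange;
    public AFValues NewValues;
}

public static class RecoveryWriter
{
    public static void WriteChunks(IEnumerable<RecoveryChunk> chunks, int maxParallel)
    {
        // Cap concurrency so the recovery does not flood the server
        // with an unbounded number of simultaneous calls.
        Parallel.ForEach(chunks,
            new ParallelOptions { MaxDegreeOfParallelism = maxParallel },
            chunk => chunk.Attribute.Data.ReplaceValues(chunk.TimeRange, chunk.NewValues));
    }
}
```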


I'm just starting to investigate possible workarounds and solutions... what do you think?

- Is there a way to write directly to the archive without passing through the Snapshot (thereby skipping exception and compression, which is bad)?

- Is there a way to speed up the processing of the Snapshot "Events in Queue"?