Removing events from the PI Server is a horrendous process; I've been through it a lot over the last few weeks. Short of going to PI Config, the AF SDK is about as efficient as it gets. In fact, you can bulk-send deleted values, but it is the piarchss thread that suffers trying to remove them.
As far as I understood, removing values causes a couple of trips to the archive: the event is queued for deletion and has to be read from the archive before the deletion actually occurs. I managed to keep a sustained stream of deletions flowing into the PI Server (around 1,000/sec). That worked for a long while, but it would occasionally cause all incoming events to get queued while piarchss caught up with itself, and higher delete rates caused the blocking to happen more frequently. I noticed that in my scenario there was a cyclical pattern to when piarchss would start blocking incoming events: approximately every 10 minutes while streaming deletes. That pattern may be different for each PI Server depending on load, but it does suggest there is a background process running to "clean up", perhaps even reindex the archive files themselves. So the bigger your archives, the longer the reindex may take.
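For what it's worth, the kind of rate-limited delete loop I ran can be sketched like this (a minimal sketch only; `delete_batch` is a hypothetical stand-in for whatever actually issues the removals, e.g. a bulk AF SDK update with the remove option, and the rate and batch numbers are illustrative, not recommendations):

```python
import time

def throttled_delete(events, delete_batch, rate_per_sec=1000, batch_size=500):
    """Stream deletions at a bounded rate so piarchss has a chance to keep up.

    `delete_batch` is a caller-supplied function that removes a list of
    events from the archive (hypothetical stand-in for the real client call).
    """
    interval = batch_size / rate_per_sec  # seconds allotted per batch at the target rate
    deleted = 0
    for start in range(0, len(events), batch_size):
        batch = events[start:start + batch_size]
        t0 = time.monotonic()
        delete_batch(batch)
        deleted += len(batch)
        # Sleep off any time left in this batch's slot to hold the overall rate.
        elapsed = time.monotonic() - t0
        if elapsed < interval:
            time.sleep(interval - elapsed)
    return deleted
```

Even with a loop like this you'd still want to back off further (or pause entirely) whenever you see incoming events starting to queue.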
It ended up taking as long to remove the data as it did for the data to first come into the system.
Out of curiosity, your script is looking for "Set to Bad"; are you seeing that digital status being generated from Abacus calculations?
Thanks Rhys! Yeah, I kind of suspected that it was just a very expensive operation. We will have to throttle the deletes accordingly. Do you expect PI Config to be any better?
The "Set to Bad" is coming from some custom code which replaces values with "Set to Bad" as a marker for later deletion. The reason is that these "Set to Bad" states get replicated across PI to PI.
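The mark-then-delete pattern described above can be sketched roughly as follows (hypothetical helper names throughout: `write_value`, `read_values`, and `delete_value` stand in for whatever client calls are actually used, and `SET_TO_BAD` for the digital state):

```python
SET_TO_BAD = "Set to Bad"  # digital state used as a deletion marker

def mark_for_deletion(tag, timestamps, write_value):
    # Phase 1: overwrite each event with the marker state. Because the
    # marker is an ordinary value write, PI to PI replicates it downstream.
    for ts in timestamps:
        write_value(tag, ts, SET_TO_BAD)

def sweep_marked(tag, read_values, delete_value):
    # Phase 2, run later: find marker events and remove them from the archive.
    removed = 0
    for ts, value in read_values(tag):
        if value == SET_TO_BAD:
            delete_value(tag, ts)
            removed += 1
    return removed
```

The sweep phase is where the piarchss cost discussed above lands, so it would be the part to throttle.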
My view after testing the AF SDK was that although piconfig has less overhead than the AF SDK, both routes will still hit the same bottleneck in piarchss. I didn't test piconfig, though, so I don't have a direct comparison.
I also conceded that the mass deletes I had to perform (millions of events from a misbehaving interface) are a rare case that I shouldn't have to do often. If you're looking to delete periodically, which I think you possibly will be, then you'll want to get the PI Server devs on speed dial.
It would still be interesting to hear from the PI Server team on the internal mechanics for handling a delete, and if indexes are periodically rebuilt after a delete etc.
Stephen Kwan Denis Vacher Omar Shafie