What is the best way to backfill data at past timestamps for an existing PI Tag so that the compression settings for the respective tags are honored?
Open for any options.
If you are backfilling data backwards from the current snapshot time, I would just let the analytics service do it and have it delete the existing data first.
If you cannot do this, there are other options, such as switching the output to another tag and then moving it in using analytics. Other options involve custom code. Some people would argue that anything calculated should not use compression at all. If you are doing event-triggered calculations and you are triggering off data that is already compressed, then you might not need to be concerned with compression.
Also, a common misconception is that the analytics service performs exception testing.
I think this is just an issue with the way I am reading it, but I want to make clear that the analysis service does not perform exception testing at all. I think that's what you were saying, Dan.
Compression is applied for values being sent through the Snapshot subsystem. Backfilling prior to the CurrentValue (snapshot) will bypass the Snapshot subsystem, and therefore will not have compression applied. This means that recalculating Analyses prior to the snapshot will also bypass compression (though there are plans for future releases to change this).
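To make the "compression happens in the Snapshot subsystem" point concrete, here is a deliberately simplified Python sketch of a compression filter. PI's actual compression is a swinging-door algorithm driven by the tag's CompDev/CompMin/CompMax settings; this stand-in only illustrates the general idea that intermediate values inside the deviation band are dropped rather than archived.

```python
# Simplified illustration only: PI's real compression is a swinging-door
# algorithm keyed to the tag's CompDev setting, not a plain deadband.
def compress(values, comp_dev):
    """Keep an event only if its value differs from the last archived
    value by more than comp_dev. Input: list of (timestamp, value)."""
    archived = []
    for ts, v in values:
        if not archived or abs(v - archived[-1][1]) > comp_dev:
            archived.append((ts, v))
    return archived

# A slowly drifting signal: only the significant change survives.
stream = [(0, 10.0), (1, 10.1), (2, 10.05), (3, 12.0), (4, 12.02)]
print(compress(stream, comp_dev=0.5))  # → [(0, 10.0), (3, 12.0)]
```

Values written behind the snapshot never reach this filter, which is why backfilled history lands in the archive uncompressed.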
Programmatically this could be done, but it would require care in case there are any hiccups in processing. Basically:

1. Read the current archived values into memory (writing them to a CSV for safekeeping is not a bad idea).
2. Cautiously delete all the data.
3. With the values in memory (or in the CSV), merge the older historical data into the values collection and sort by ascending timestamp.
4. Write all the values back.

While this data may be from years gone by, each newly written value has a more recent timestamp than the one before it (old as it is), so it goes through the Snapshot subsystem and gets compressed.
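The steps above can be sketched as follows. This is a data-handling sketch only: the actual PI read/delete/write calls (e.g. via AF SDK) are left as comments, and the function names and tuple format are my own illustration, not a PI API.

```python
import csv

def backfill_merge(current_archive, older_history, csv_path=None):
    """Merge older historical events into the current archived values and
    return them sorted ascending, ready to be re-written one at a time.
    Events are (timestamp, value) tuples; names here are illustrative."""
    # Step 1: safekeeping - dump the current archive to CSV before
    # deleting anything, in case processing hiccups mid-way.
    if csv_path:
        with open(csv_path, "w", newline="") as f:
            csv.writer(f).writerows(current_archive)
    # Step 2: (here you would cautiously delete the tag's archive data
    # using your PI client library of choice.)
    # Step 3: merge and sort ascending so each subsequent write carries a
    # newer timestamp, keeping every event on the snapshot/compression path.
    merged = sorted(current_archive + older_history, key=lambda ev: ev[0])
    # Step 4: (write `merged` back to the tag, oldest value first.)
    return merged

# Older history slots in ahead of the retained archive values.
print(backfill_merge([(30, 3.0), (40, 4.0)], [(10, 1.0), (20, 2.0)]))
```

The sort is the critical detail: writing oldest-first is what makes every value "newer than the snapshot" at the moment it arrives.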
The other concern is how many Analyses or ACE calculations depend upon this tag, and what happens to their recalculations. As it stands, their recalcs would not be compressed.
Just a minor correction: all values, regardless of their order, are sent via the snapshot subsystem, where it is determined whether the events are out of order. Once we know an event has not arrived in order, compression is bypassed and the event is written to the archive as-is.
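The routing decision described above can be summarized in a few lines. This is a conceptual sketch of the behavior, not PI code; the function name and return labels are my own.

```python
def route_event(snapshot_ts, event_ts):
    """Illustrative routing rule: every event enters the snapshot
    subsystem, but only in-order events are eligible for compression."""
    if snapshot_ts is None or event_ts > snapshot_ts:
        # In-order: newer than the current snapshot, so the normal
        # exception/compression path applies.
        return "compression"
    # Out-of-order (backfill): compression is bypassed and the
    # event is written to the archive as-is.
    return "archive as-is"

print(route_event(100, 150))  # → compression
print(route_event(100, 50))   # → archive as-is
```

This is why a backfill that deletes data and rewrites oldest-first gets compressed, while one that writes behind an existing snapshot does not.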
Retrieving data ...