You are changing the compdev attribute of an existing PI Point, taking events from that PI Point's history, re-inserting those events, and expecting the new compression deviation to be applied? Is my understanding correct?
If so, I doubt this can work, because
a) The new compression setting will not remove any events from the history.
b) The history you are backfilling has older timestamps than the current snapshot. This means you are inserting events out of order, and no compression is applied to out-of-order data.
Thank you for the reply. Yes, you are close to what I do. The algorithm is as follows. For some tags (about 10,000) with too low a compression setting and too many events:
- determine a better compdev setting (that works already)
- use an existing "cache tag" with empty history (also works, no out of order data)
- set the compdev attribute of that cache tag (here is my problem)
- copy the dense history of the original tag to the cache tag and let PI apply compression (that works fine as soon as the server has accepted the new compdev)
- delete the part of the history of the original tag that was copied (works)
- copy data from the cache tag back to the original tag (works)
- clean up the history of the cache tag for the next run of the algorithm (works)
Most steps work fine; my only problem is the unknown delay until the PI Server has accepted the new compdev setting. If I wait a few seconds after setting compdev and start copying afterwards, it works fine. But how can I be sure that some fixed waiting time will always be enough?
You can remove the history from your cache tag, but you cannot delete the snapshot. Please confirm this, e.g. by using the Current Values add-in in PI SMT. With a snapshot value more recent than the events you are inserting, the events arrive out of order. Have you considered creating a new tag for each data stream you would like to apply the new compression settings to? You could create the new PI Point with the appropriate compression deviation from the start.
PI Base Subsystem is the instance you are talking to when changing a PI Point attribute. I can imagine that it takes some time until the change is picked up by PI Snapshot Subsystem, but I doubt you will be able to see when this happens.
When you query the actual value of compdev, PI Base Subsystem will be the instance servicing the request, but there is also a chance that the query is serviced from a cache. In both cases, you cannot be certain the change has been picked up.
In case you are sending events through PI Buffer Subsystem, this is the instance that will apply compression and forward the compressed events to PI Snapshot Subsystem. I assume that both PI Buffer Subsystem and PI Snapshot Subsystem use signups with PI Update Manager to learn about configuration changes. I assume further that the frequency of checking for updates with PI Update Manager is ~2 seconds.
How about the following workflow:
- Create a new PI Point with appropriate compression settings
- Read the history from the source PI Point (use chunks with 1k~10k events per call) and insert the events into the new PI Point
- Verify the "data quality" of the new PI Point, e.g. whether the compression was applied appropriately
- Delete the source PI Point but keep note of the name
- Rename the new PI Point to the previously noted name
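The chunked copy in the second step can be sketched in a few lines of Python. This is only a minimal sketch assuming the source events are already materialized as a list; the AF SDK calls in the comment are indicative placeholders, not a tested API usage.

```python
def chunks(events, size=5000):
    """Yield successive slices of `events` with at most `size` items each.

    Keeping each write between ~1k and ~10k events per call avoids
    oversized calls when backfilling the new PI Point.
    """
    for start in range(0, len(events), size):
        yield events[start:start + size]


# Hypothetical usage against the AF SDK (names indicative only):
#   for batch in chunks(source_history, 5000):
#       new_point.UpdateValues(batch, AFUpdateOption.Insert)
```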
Thank you for the information. I am a little confused, since the new compression settings always seem to work (after waiting a few seconds to be sure that the compdev setting is accepted). Using a cache tag avoids creating a new tag for every recompression and backfilling task, so just one archive track is used.
Here is the "current value" view of the cache tag after deleting all values:
This is an example of a source tag and the cache tag which contains a copy of the source tag with compdev=1:
The workflow you proposed would be ideal, if we did not have some specific conditions:
- the time span to apply a new compression to is about 6 months, while the tags have a history of more than 10 years
- some tools look at the point id instead of the tag name, so using a new tag instead of the original would cause problems
You talked about a 2-second update interval of PI Update Manager. Would it cause a problem if the Update Manager had some other events in the pipeline before the compdev change of the specific tag? In other words, there is no deterministic wait (for example, 5 seconds) that guarantees compdev has been applied? In that case, I do not see any option other than using a new cache tag for every recompression task.
Please note that I made some assumptions in my previous post about the mechanism used by PI Snapshot Subsystem and PI Buffer Subsystem to update themselves on the most recent point configuration. The same applies to the assumed update period.
The current version of PI SMT offers an Update Manager add-in that can be used to look at registered update producers, consumers and some statistics (SMT -> Operation -> Update Manager). Neither PI Snapshot Subsystem nor any PI Buffer Subsystem instance shows up in the list of update consumers, which indicates that a) signups of pisnapss and pibufss are not listed, or b) a different mechanism is used.
For your purpose, you need to know the longest period between changing a PI Point's compression setting and PI Snapshot Subsystem / PI Buffer Subsystem applying the new setting. Let me reach out to the PI Data Archive team to get that information.
Meanwhile, I doubt PI Update Manager is used, but I would like to take the opportunity to explain how PI Update Manager works. Update producers, mainly PI Base Subsystem, PI Snapshot Subsystem and PI Archive Subsystem, communicate changes to PI Update Manager. For PI Base Subsystem these are configuration changes, for PI Snapshot Subsystem these are snapshot updates, and for PI Archive Subsystem these are archive updates. Update consumers register for specific updates, e.g. PI Interfaces register for PI Point attribute changes because they need them to react to changed exception, span, zero, pointsource or Location1 settings. PI Update Manager creates and maintains a stack for each registered consumer and adds everything new that matches the signup to that stack. This means that if a consumer only signs up for snapshot updates for SINUSOID, PI Update Manager will only add new snapshot events for the point SINUSOID to the stack - not for others. It is up to the consumer to ask for the updates on its stack; when it does, PI Update Manager returns the complete stack and clears it. It is then up to the consumer to apply its own logic for dealing with the received updates.
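The producer/consumer mechanics described above can be modeled in a few lines of Python. This is a toy illustration of the signup/stack idea only, not the actual PI Update Manager implementation; all names are invented.

```python
class UpdateManagerModel:
    """Toy model of PI Update Manager: producers post updates, and each
    registered consumer gets its own pending stack, filtered by its signup."""

    def __init__(self):
        self.signups = {}   # consumer id -> predicate(update) -> bool
        self.stacks = {}    # consumer id -> list of pending updates

    def register(self, consumer, predicate):
        """Consumer signs up for the updates matching `predicate`."""
        self.signups[consumer] = predicate
        self.stacks[consumer] = []

    def post(self, update):
        """Producer side: fan the update out to every matching signup."""
        for consumer, predicate in self.signups.items():
            if predicate(update):
                self.stacks[consumer].append(update)

    def fetch(self, consumer):
        """Consumer side: take the whole pending stack and clear it."""
        pending = self.stacks[consumer]
        self.stacks[consumer] = []
        return pending
```

For example, a consumer that only signs up for SINUSOID snapshot updates will see those, and nothing posted for other points.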
You mention that you need to adjust the history of ~10,000 points over a period of 6 months. Please make sure you back up the archives (PI Backup) before performing the adjustment - just in case something goes wrong. Please also consider reprocessing all archives involved in the 6-month period after performing the adjustment. Cleaning up unused records will likely result in smaller archive files (or archives with more empty records), and you will likely gain better performance.
Thank you for the detailed information about PI Update Manager; it helps me a lot.
We have an automated PI Backup, but I also made a manual copy of all involved archive files. One of the most difficult parts is recognizing whether something went wrong, but the new replacevalues methods in the AF SDK help a lot, because they perform the delete and the insert in one call.
The compression setting is passed immediately to the PI Snapshot Subsystem by PI Base Subsystem via a direct Remote Procedure Call (RPC).
PI Buffer Subsystem updates the compression setting once a new value passes through the buffer and is sent to PI Snapshot Subsystem. This happens asynchronously, as the buffer may not be available when the point edit occurs.
This means that, in case you are not sending through PI Buffer Subsystem, you will likely only have to insert a short break between changing compdev and backfilling, to allow PI Snapshot Subsystem to adjust to the new compression setting.
If you are sending through PI Buffer Subsystem, the compression is applied by PI Buffer Subsystem, and you will have to send an initial event to trigger PI Buffer Subsystem to pick up the changed compression setting. This event can be a dummy event that you delete later. I suggest also inserting a small break between sending the initial event and starting to backfill.
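That buffered sequence - dummy event, short pause, backfill, clean-up - can be sketched as a small orchestration function. This is a Python sketch with the actual write and delete operations injected as callables; the function and parameter names are placeholders, not AF SDK API.

```python
import time


def prime_and_backfill(send_event, backfill, delete_event,
                       events, dummy_event, settle=2.0, sleep=time.sleep):
    """Backfill through PI Buffer Subsystem after a compdev change.

    Send one throwaway event so pibufss re-reads the point's compression
    settings, pause briefly, backfill the real history, then remove the
    dummy event again.  `send_event`, `backfill` and `delete_event` stand
    in for whatever buffered write/delete calls the application uses.
    """
    send_event(dummy_event)    # triggers pibufss to pick up the new compdev
    sleep(settle)              # small settle time before the real backfill
    backfill(events)
    delete_event(dummy_event)  # clean up the throwaway event
```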
If you want to be certain that the change was picked up by PI Snapshot Subsystem, you can verify this using the command piartool -sd <tagname>
The same can be done for PI Buffer Subsystem using pibufss -sd <tagname>
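If a fixed pause feels fragile, the verification can be wrapped in a poll-until-true loop. Below is a generic Python helper; in practice the `check` callable would run `piartool -sd <tagname>` (or `pibufss -sd <tagname>`) via `subprocess` and compare the reported compdev - that parsing is left out here and a stub stands in for it.

```python
import time


def wait_until(check, timeout=10.0, interval=0.5):
    """Poll `check()` until it returns True or `timeout` seconds elapse.

    Returns True as soon as `check()` succeeds, False if the timeout
    expires first.  Intended usage: `check` shells out to
    `piartool -sd <tagname>` and returns True once the new compdev shows up.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```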
Thank you for the information, which exactly matches my observations.
The first time span of events that I copied always seemed to be an exact copy of the source tag, while the next part had correct compression. While wondering why this was the case, I used the workaround of copying a small time range of events preceding the intended time span before copying the productive data.
I think you would be better off recalculating the exception and compression off-line and then sending the compressed time series to the archive.
What does "off-line" mean in terms of AF SDK calls?
In my understanding, if I use a function like PIPoint.UpdateValues in combination with PI Buffer Subsystem, the compression will be applied locally by PI Buffer Subsystem and the resulting events will be sent to the server. So at least a part of the task is already off-line.
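To illustrate what a fully off-line recalculation could look like: the sketch below implements a simplified swinging-door compression in Python, driven only by a compression deviation (no compmax/compmin, no exception step). It is an approximation of the idea behind compdev, not PI's exact algorithm.

```python
def swinging_door(points, compdev):
    """Thin a time series with a simplified swinging-door test.

    `points` is a list of (timestamp, value) tuples with strictly
    increasing timestamps.  An event is dropped when a straight line
    from the last archived event to the newest event stays within
    `compdev` of all dropped events in between.
    """
    if len(points) <= 2:
        return list(points)
    out = [points[0]]
    archived_t, archived_v = points[0]
    snap = points[1]
    # Slope corridor from the archived event to snapshot +/- compdev.
    max_slope = (snap[1] + compdev - archived_v) / (snap[0] - archived_t)
    min_slope = (snap[1] - compdev - archived_v) / (snap[0] - archived_t)
    for t, v in points[2:]:
        slope = (v - archived_v) / (t - archived_t)
        if min_slope <= slope <= max_slope:
            # Snapshot is compressed away; tighten the corridor.
            max_slope = min(max_slope, (v + compdev - archived_v) / (t - archived_t))
            min_slope = max(min_slope, (v - compdev - archived_v) / (t - archived_t))
        else:
            # Corridor violated: archive the held snapshot, restart corridor.
            out.append(snap)
            archived_t, archived_v = snap
            max_slope = (v + compdev - archived_v) / (t - archived_t)
            min_slope = (v - compdev - archived_v) / (t - archived_t)
        snap = (t, v)
    out.append(snap)  # flush the last held snapshot
    return out
```

Applied before the bulk insert, only the already-thinned series would need to be sent, which removes any dependency on when the server picks up the new compdev.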