Depending on the source of your data, you can generally just create a copy of your existing tag under a different name and apply your new compression / exception settings to it. Note that this creates additional load on the data source, so keep that in mind if you are testing with a large number of tags at the same time. Alternatively, you could create an analysis / PE that simply reads the data from tagA and writes it to tagB.
I also invite you to take a look at Chris' answer on How to set Exception & Compression Parameters efficiently? to get ideas on how to properly set up those parameters.
I finally had the chance to work on this topic. I created a PE that reads the data in tag A (raw data) and writes it to tag B (with compression and exception applied).
I found the results a bit unexpected. The figure below is an example of what I get: the blue dots are the raw data, and the orange dots are the result of the compression and exception.
I was expecting the compressed data to keep all the local min/max. I also don't understand why there are still so many points kept along the constant gradients.
Here are my compression parameters.
There is certainly something that I didn't understand...
I think at least part of the reason there are so many values kept along those constant gradients is because your Compression Maximum time is set to 10 minutes, so the Data Archive will store a value every 10 minutes regardless of whether or not it met the compression criteria. As far as the local min/max, I would need the raw values of the points and do the calculations myself to confirm, but compression and exception can remove local mins/maxes (if you've ever taken a look at the default sinusoid tag you'll notice it never hits 100 or 0 exactly).
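For reference, the Data Archive's compression test is a variant of the swinging-door algorithm. Below is a minimal Python sketch of the idea, assuming simple (time, value) pairs and ignoring the CompMin/CompMax timers; the function name and data layout are illustrative, not PI's actual implementation:

```python
def swinging_door(points, comp_dev):
    """Minimal swinging-door compression sketch.

    points: list of (time, value) pairs in time order.
    comp_dev: compression deviation (CompDev-like setting).
    Returns the subset of points that would be archived.
    """
    if len(points) < 2:
        return list(points)

    archived = [points[0]]
    held = points[0]          # last archived (anchor) point
    candidate = points[1]     # most recent point received
    # slopes of the two "doors" pivoting around the anchor
    max_slope = float("inf")
    min_slope = float("-inf")

    for pt in points[1:]:
        dt = pt[0] - held[0]
        # each new point narrows the doors by +/- the deviation
        max_slope = min(max_slope, (pt[1] + comp_dev - held[1]) / dt)
        min_slope = max(min_slope, (pt[1] - comp_dev - held[1]) / dt)
        if min_slope > max_slope:
            # the doors have closed: archive the previous point
            # and restart the doors from there
            archived.append(candidate)
            held = candidate
            dt = pt[0] - held[0]
            max_slope = (pt[1] + comp_dev - held[1]) / dt
            min_slope = (pt[1] - comp_dev - held[1]) / dt
        candidate = pt

    archived.append(candidate)  # the most recent value is always kept
    return archived

# A perfectly linear ramp collapses to its two endpoints,
# while a step change forces extra points to be archived:
print(swinging_door([(t, 2.0 * t) for t in range(10)], 0.1))
print(swinging_door([(0, 0.0), (1, 0.0), (2, 0.0), (3, 10.0), (4, 10.0)], 0.5))
```

This also illustrates why a CompMax timer stores extra points along a constant slope: the algorithm itself would keep almost none there, so any periodic forced archiving shows up clearly.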
I believe Thyagarajan Ramachandran was working on an exception/compression tester to allow users to see what their data would look like with different settings; perhaps he can chime in on the status of that.
Thank you for your quick response.
I don't think the compression maximum is the issue here: I am receiving a new value every minute, and with compression, the interval between two points along the constant slopes is sometimes as close as 2-3 minutes.
Do you know if I could have access to the compression algorithm? The YouTube video is great for understanding the concepts but is a bit light on the details of the calculations.
Looking at your data, you have a relatively large exception deviation applied given your smooth data. With such a large exception deviation, you might be filtering out your local max/mins. If you turn exception off, you probably will see results that you are expecting.
That seems to be exactly what is going on. Your exception deviation is very large at 0.5, and your local maxima/minima all appear to fall within that deadband. Note that the exception deviation is half the total deadband, so a setting of 0.5 actually gives the value a deadband of 1.0. If you care about those min/max, you will want to reduce the exception value.
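To make that concrete, here is a minimal Python sketch of the exception test, ignoring the ExcMin/ExcMax timers (the function name is illustrative, and the real interface also forwards the previous value when a new one passes):

```python
def exception_filter(values, exc_dev):
    """Minimal sketch of exception reporting.

    A value is kept only if it differs from the last kept value by more
    than exc_dev, i.e. it escapes a deadband of total width 2 * exc_dev
    centred on the last kept value.
    """
    if not values:
        return []
    kept = [values[0]]
    for v in values[1:]:
        if abs(v - kept[-1]) > exc_dev:
            kept.append(v)
    return kept

# A local maximum of 0.4 never escapes an exception deviation of 0.5,
# so the whole bump is dropped:
print(exception_filter([0.0, 0.2, 0.4, 0.2, 0.0, 0.6], 0.5))  # [0.0, 0.6]
```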
OK, I created an analysis that reads the data from tagA and writes it to tagB, as Gabriel suggested. The results now make sense. Maybe I made a mistake when I was using the PE.