For most data sources the answer will be no. The scan frequency is usually driven by referencing a fixed scan class through a tag attribute.
Is your question general in nature, or specific to a particular kind of data source / interface?
I am very curious about the specific case: why do you need to lower the frequency of data acquisition? I can only think of technical reasons such as severe bandwidth restrictions. In most cases, PI is very efficient in data collection and transport to the PI Server.
The idea is to increase the frequency when well pressure drops below a certain value, so we capture data more frequently during the shut-in period, and to restore the normal frequency when the pressure normalizes again. And yes, we do have bandwidth limitations, so we use an optimized frequency for normal scans. But for these special scenarios we do want to capture more data.
My question is general and not specific to any interface. When a certain tag value goes out of range, I would like to change the scan class for those tags to one that scans more frequently, for as long as it takes the value to come back within range; then I will change the scan class back to normal. So I'm interested to know whether a tool exists that can do this for me, or whether I have to build one for this purpose.
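To make the idea concrete, here is a minimal sketch of the switching rule described above. Everything in it is an assumption for illustration: the threshold, the scan class numbers, and the `set_scan_class` callback, which is a hypothetical stand-in for however you actually edit the point (for example via piconfig or the PI SDK).

```python
# Illustrative sketch only -- thresholds, class numbers, and the
# set_scan_class hook are assumptions, not real PI defaults or APIs.

NORMAL_CLASS = 2   # assumed slow scan class number
FAST_CLASS = 1     # assumed fast scan class number
LOW_LIMIT = 500.0  # assumed shut-in pressure threshold (eng. units)

def next_scan_class(pressure):
    """Decide which scan class a pressure tag should use."""
    return FAST_CLASS if pressure < LOW_LIMIT else NORMAL_CLASS

def reconcile(tag, pressure, current_class, set_scan_class):
    """Apply the rule, touching the point only when the class changes.

    set_scan_class(tag, class_number) is a hypothetical callback that
    performs the actual point edit in your PI environment.
    """
    wanted = next_scan_class(pressure)
    if wanted != current_class:
        set_scan_class(tag, wanted)
    return wanted
```

A small scheduled job (or a PI ACE / notification-style calculation, where available) could call `reconcile` on each poll; the guard against redundant edits matters because rewriting point attributes on every cycle would itself generate unnecessary traffic.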
What you are looking for is handled by Exception and Compression in PI. Please refer to the PI Server 2012 System Management Guide. There's a chapter "Base Class Point Attributes" with two sub-chapters:
- ExcDev, ExcMin, ExcMax and ExcDevPercent
- CompDev, CompMin, CompMax and CompDevPercent
With OSIsoft interfaces, exception reporting is applied by the interface itself. Compression is performed by the PI Snapshot Subsystem when forwarding events to the PI Archive Subsystem. When the PI Buffer Subsystem is used, events are compressed on the interface node and sent with a "compressed" marker so that compression is not applied again.
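The exception mechanism above can be pictured as a deadband filter: a value is forwarded only when it moves far enough from the last reported value (ExcDev), no sooner than ExcMin seconds after it, or unconditionally once ExcMax seconds have elapsed. Below is a simplified sketch of that rule; the parameter values are illustrative, not PI defaults, and production exception reporting also forwards the value immediately preceding an exception so trends keep their shape, a detail omitted here for brevity.

```python
# Simplified deadband filter in the spirit of PI exception reporting.
# Parameters are illustrative; this is not the exact server algorithm.

def exception_filter(samples, exc_dev, exc_min, exc_max):
    """samples: list of (timestamp_seconds, value) in time order.
    Returns the subset of samples that would be reported."""
    reported = []
    last_t, last_v = None, None
    for t, v in samples:
        if last_t is None:
            send = True                                   # first value always goes
        elif t - last_t >= exc_max:
            send = True                                   # heartbeat: ExcMax elapsed
        elif t - last_t >= exc_min and abs(v - last_v) > exc_dev:
            send = True                                   # moved outside the deadband
        else:
            send = False                                  # suppressed by exception
        if send:
            reported.append((t, v))
            last_t, last_v = t, v
    return reported
```

Note that this filters noise around a steady value; it does not change how often the interface polls the source, which is why exception and compression alone may not fully answer the original question about scan frequency.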
If you have questions or doubts, please let us know.