Bit of a left-field question here....
If I have an AF Analysis adding two input tags from separate scan classes, an inaccuracy creeps in due to stale snapshots. Whenever the calculation runs, one of the snapshot values will be from some time in the past. The actual value at the evaluation time isn't known until that tag receives its next snapshot.
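To make the inaccuracy concrete, here's a minimal sketch in plain Python with made-up numbers (the tag names, scan intervals, and values are all hypothetical, not from any real system): one tag updates every 10 s, the other every 60 s, and at evaluation time the slow tag's snapshot is 30 s stale.

```python
def interpolate(samples, t):
    """Linearly interpolate between the bracketing (time, value) samples."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside sample range")

# Tag A snapshots every 10 s, tag B every 60 s (different scan classes).
tag_a = [(0, 100.0), (10, 110.0), (20, 120.0), (30, 130.0)]
tag_b = [(0, 50.0), (60, 80.0)]

t_eval = 30
snapshot_sum = 130.0 + 50.0   # B's snapshot is 30 s stale at t=30
interp_sum = interpolate(tag_a, t_eval) + interpolate(tag_b, t_eval)

print(snapshot_sum)   # 180.0
print(interp_sum)     # 195.0 -- B interpolates to 65.0 at t=30
```

The snapshot-based result is off by the full distance B has moved since its last scan, which is exactly the error the customer's interpolation-over-history spreadsheet avoids.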
No one has complained about this until now, but our current customer is trying to replace a cumbersome spreadsheet-based system that is actually more accurate. They have compression turned off (I know!), so every snapshot gets stored, and when they run a calculation no stale snapshots are used because the offending input values are all interpolated. Their spreadsheet only ever looks at historical data.
I could see a mechanism where this could be fixed, but I may be being naive. If any input tag's snapshot is stale at the time of calculation, the engine could revisit the calculation once all the tags have received new snapshots. It could then get more accurate interpolated values, re-run the calculation, and adjust the output.
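The mechanism I have in mind could be sketched roughly like this (plain Python, hypothetical class and tag names, a simple sum as the stand-in calculation; I'm not suggesting this is how the real analysis engine works): evaluate immediately with whatever snapshots exist, but if any input is stale, park that evaluation time and revisit it once every tag has a snapshot at or after it, then re-emit an interpolated, corrected output.

```python
def interpolate(samples, t):
    """Linearly interpolate between the bracketing (time, value) samples."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("no bracketing samples yet")

class DeferredSum:
    """Toy 'analysis' that sums its inputs and corrects stale results."""

    def __init__(self, tag_names):
        self.history = {name: [] for name in tag_names}
        self.pending = []   # evaluation times awaiting fresh snapshots
        self.outputs = {}   # t_eval -> latest (possibly corrected) value

    def on_snapshot(self, tag, t, value):
        self.history[tag].append((t, value))
        self._retry_pending()

    def evaluate(self, t_eval):
        # First pass: use latest snapshots; queue for revisit if any
        # tag's newest snapshot predates the evaluation time.
        if any(h[-1][0] < t_eval for h in self.history.values()):
            self.pending.append(t_eval)
        self.outputs[t_eval] = sum(h[-1][1] for h in self.history.values())

    def _retry_pending(self):
        still_pending = []
        for t_eval in self.pending:
            if all(h[-1][0] >= t_eval for h in self.history.values()):
                # Every tag now brackets t_eval: interpolate and correct.
                self.outputs[t_eval] = sum(
                    interpolate(h, t_eval) for h in self.history.values())
            else:
                still_pending.append(t_eval)
        self.pending = still_pending
```

Using the earlier numbers: evaluating at t=30 while tag B's last snapshot is from t=0 stores a provisional 180.0 and queues t=30; when B's t=60 snapshot arrives, the retry interpolates both inputs and overwrites the output with 195.0. The obvious catch is that the output tag's value at t=30 changes after the fact, which downstream consumers would need to tolerate.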
Does this make sense? Could it work? And is such a mechanism being considered?