I think that you might misunderstand floating point numbers. 1.0 and 1.00000 are exactly the same value.
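You can verify this at the bit level: both literals produce the identical IEEE 754 single-precision pattern. A minimal sketch using Python's standard `struct` module:

```python
import struct

# Pack both literals as 32-bit IEEE 754 floats and compare the raw bytes.
a = struct.pack('>f', 1.0)
b = struct.pack('>f', 1.00000)

print(a.hex())   # 3f800000 -- sign 0, exponent 127 (biased), mantissa 0
print(a == b)    # True: the two literals are the same bit pattern
```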
What PI does is offer different ways to display the value depending on the tag and the client. You may have read about significant digits or decimal places in the PI Admin manual; that is exactly what they are meant for.
If, for your purposes, 1.0 and 1.000 are different (i.e. they carry the implied precision of your measurement) and this precision is dynamic (will change over time), the best approach I can think of is a second tag that stores the precision. You will then have to interpret that tag in your client application (e.g. in DataLink you could fetch the number of significant digits and use it in a format formula).
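The client side of that two-tag idea could look something like the following sketch. This is illustrative Python, not a PI API call; `value` and `decimals` stand in for the reads from the value tag and the precision tag:

```python
def format_value(value: float, decimals: int) -> str:
    """Render a float using the precision stored in the companion tag."""
    return f"{value:.{decimals}f}"

# Same stored value, different implied precision:
print(format_value(1.0, 1))   # 1.0
print(format_value(1.0, 3))   # 1.000
```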
Please let me know if I've misunderstood your problem.
Thanks for your response.
I understand that, mathematically, 1.0000 and 1 are identical. However, my work requires me to ensure, and provide evidence, that this behavior does not compromise data integrity.
Using multiple tags to store one value seems like overkill to me. I wonder if it would make sense to store the float value as a string and translate it back in the application layer... do you think this is possible/logical?
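For what it's worth, the string round-trip being described would look roughly like this (a sketch only, with the number of decimals hard-coded; it says nothing about whether a string tag is a good idea):

```python
raw = 1.0

# Store the value as text with an explicit number of decimals...
stored = f"{raw:.4f}"     # "1.0000" -- trailing zeros survive as characters
# ...then translate it back to a float in the application layer.
restored = float(stored)

print(stored)             # 1.0000
print(restored == raw)    # True -- the numeric value is unchanged
```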
What is stored in the PI Data Archive's archive files is actually independent of a PI Point's DisplayDigits attribute. As long as your PI Point's type is a float (Float32 or Float64) and not an integer type, the actual value stored within the archives retains its full precision, trailing zeros included.
The trailing zeros would only get truncated if you try to store float values into an integer-type tag.
If you wish to see what is stored within the PI archives, the Archive Editor plug-in (PI SMT > Data > Archive Editor) is the most accurate place to look, because it ignores the DisplayDigits attribute.
I would like to correct any possible misunderstanding of "what PI stores" or "what PI does". OSIsoft adheres to the IEEE 754 specification for binary floating point number formats. Here is a link to the Single format, which is the same as Float32. Note that this adherence is not just standard but virtually universal: the floating point routines are built into frameworks such as .NET and Java, but are also pre-wired into the CPU architecture, which contributes to their speed. Consider Decimal, a 128-bit base-10 floating point structure. It is about 9 times slower to use than the 64-bit Double, not because it has twice the bits, nor because of base-10 versus binary, but because all its operations are done in software; such routines are not baked into the CPU firmware.
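To make the IEEE 754 Single layout concrete, here is a small sketch (Python, standard `struct` module) that splits a value's 32-bit representation into its sign, exponent, and mantissa fields:

```python
import struct

def float32_fields(x: float):
    """Decompose a value's 32-bit IEEE 754 representation into its fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                  # 1 sign bit
    exponent = (bits >> 23) & 0xFF     # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF         # 23 fraction bits
    return sign, exponent, mantissa

print(float32_fields(1.0))   # (0, 127, 0) -- i.e. +1.0 * 2^(127-127)
```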
But I digress. The biggest question I would have for you is what precision do you need? A Single or Float32 has 7 digits of precision; a Double has 15. If the requirement is to show 3 significant digits after the decimal place, you have to ask how many digits there will be before it. If you expect the whole number portion to exceed 9999, you will need at least 5 digits for the whole number portion. For a Single, that leaves you with only 2 decimal places, maxing you out at the 7 available digits of precision; in that case, the 3rd decimal is insignificant. The alternative would therefore be to use a Double if you expect numbers greater than 9999. If you expect numbers smaller than that, a Single is quite appropriate and uses half the space.
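You can demonstrate this limit directly by round-tripping values through 32-bit storage, which is effectively what a Float32 tag does. A sketch in Python:

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a value through 32-bit storage, as a Float32 PI Point would."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

# 8 significant digits: the 3rd decimal place cannot survive in a Single.
print(as_float32(99999.123))           # 99999.125
# 7 significant digits: still within a Single's precision.
print(round(as_float32(1234.567), 3))  # 1234.567
```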
A valuable link on a related topic that demonstrates how 32-bit floating point binary numbers are represented via IEEE 754: