Original created May 31, 2016


What is the precision of an AFTime? I know I have answered this question several times over the past 6 years in the older vCampus forums.  I want to modernize my answer for PI Square and go a little more in-depth.


I’m guessing that you’ve landed on this page because you are dealing with subseconds and you’ve noticed a tiny difference between the time value you created in code versus the value that was saved to the PI Server.  Perhaps you had a code snippet such as:


AFTime time = AFTime.Now;


If you check out OSIsoft Live Library online, the Remarks section for AFTime says it quite succinctly:


Represents the date and time data ranging in value from January 1, 1970 to December 31, 9999. Internally, the time is represented as a System.DateTime in Coordinated Universal Time (UTC). Actual storage of an AFTime value may cause a loss of accuracy depending on the target storage. The PI AF Server will store timestamps to an accuracy of 3.33 milliseconds. A PI 3.x Server stores timestamps to an accuracy of 15 microseconds.


If you want your time instance in code to match what will be saved on the PI Server, you can do so with the ToPIPrecision() method.  I suggest you alter your code accordingly:


AFTime time = AFTime.Now.ToPIPrecision();


There you go!  If that’s all the answer you need, you may stop reading now. But if you are an inquisitive person, and you want a more detailed, exhaustive answer, then read on.


.NET DateTime Precision


The System.DateTime structure has a precision of 100 nanoseconds (one "tick"), and there are helpful constants like TimeSpan.TicksPerSecond or TimeSpan.TicksPerMillisecond for converting between ticks and familiar units.


From the Remarks section:


Time values are measured in 100-nanosecond units called ticks, and a particular date is the number of ticks since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the GregorianCalendar calendar (excluding ticks that would be added by leap seconds). For example, a ticks value of 31241376000000000L represents the date, Friday, January 01, 0100 12:00:00 midnight. A DateTime value is always expressed in the context of an explicit or default calendar.

And then there is a helpful Note right below that:


If you are working with a ticks value that you want to convert to some other time interval, such as minutes or seconds, you should use the TimeSpan.TicksPerDay, TimeSpan.TicksPerHour, TimeSpan.TicksPerMinute, TimeSpan.TicksPerSecond, or TimeSpan.TicksPerMillisecond constant to perform the conversion. For example, to add the number of seconds represented by a specified number of ticks to the Second component of a DateTime value, you can use the expression dateValue.Second + nTicks/Timespan.TicksPerSecond.
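The same tick arithmetic is easy to mirror outside of .NET. Here is a quick Python illustration (the constant values below are the documented .NET values; one tick = 100 nanoseconds):

```python
# The .NET tick constants, mirrored in Python (one tick = 100 nanoseconds).
TICKS_PER_MILLISECOND = 10_000           # TimeSpan.TicksPerMillisecond
TICKS_PER_SECOND = 10_000_000            # TimeSpan.TicksPerSecond
TICKS_PER_MINUTE = TICKS_PER_SECOND * 60
TICKS_PER_HOUR = TICKS_PER_MINUTE * 60
TICKS_PER_DAY = TICKS_PER_HOUR * 24

def ticks_to_seconds(ticks: int) -> float:
    """Convert a 100-nanosecond tick count to seconds."""
    return ticks / TICKS_PER_SECOND

# The example ticks value from the Remarks quoted above works out to a
# whole number of days, as you would expect for a midnight timestamp:
days = 31_241_376_000_000_000 // TICKS_PER_DAY   # 36159 whole days
```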


PI Server or Data Archive Precision


As stated earlier, this is 15 microseconds, but that's only a close-enough, short answer.  The fuller answer starts with the original PI time structure, which was most probably modeled on the standard Unix time_t data type: a 32-bit signed integer counting whole seconds.  That legacy leads to the impending 2038 problem, which is a topic for another day.


As years passed and the need for subsecond data became greater, OSIsoft needed to modify the PI Time structure. What would you do if this was your decision?  Would you turn the structure on its end and convert it to a floating point number? Or would you retain the whole seconds as a signed integer and add the subseconds as an additional field? Should that new field be floating point or an integer?  How big should it be?


If you went with floating point, then a float or single is 32 bits.  A double would be 64 bits.  That means you are adding 4 to 8 bytes in memory and storage to every timestamp.


OSIsoft chose to go with a 16-bit unsigned integer, that is, a UInt16 or ushort.  This has 65,536 possible values ranging from 0 to 65535, with 0 meaning the subsecond portion aligns evenly with a whole second.  This requires adding only 2 bytes per timestamp.


Hence the PI Server precision is 1/65536 of a second.  Let me introduce my own unofficial term: a PITick.


PITick = 1 / 65536 second
       = 0.00000152587890625 second
       = 15.2587890625 microseconds
       ≈ 15.26 microseconds
       ≈ 15 microseconds


If you use AFTime.Now and later save that to the PI Server, your time will be modified somewhere around the 6th or 7th decimal place of UtcSeconds.  That is because the time must align evenly with a whole number of PITicks.  And in doing so, if your time is even just 100 nanoseconds past a whole PITick, the time is adjusted upward to the next one (the equivalent of performing a Ceiling call).
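To make that adjustment concrete, here is a rough Python sketch of the alignment.  This only illustrates the Ceiling behavior described above; the real ToPIPrecision() lives in the AF SDK, and this is not its actual implementation:

```python
import math

SUBSECONDS_PER_SECOND = 65536  # one PITick = 1/65536 second

def to_pi_precision(utc_seconds: float) -> float:
    """Align a UTC-seconds value upward to the next whole PITick.

    Illustrative sketch only: mirrors the Ceiling behavior described
    in the text, not necessarily AFTime.ToPIPrecision() exactly.
    """
    return math.ceil(utc_seconds * SUBSECONDS_PER_SECOND) / SUBSECONDS_PER_SECOND

# A time already on a whole PITick is unchanged, but a time just
# 100 nanoseconds past one gets pushed up to the next PITick:
aligned = to_pi_precision(1.0000001)   # 1 second + 1 PITick
```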


If your code experiences any issues along these lines, I strongly suggest you investigate using ToPIPrecision().


AF Server or PISystem Precision


An AFTime loses even more precision when saved to AF.  That is to say, it has less precision when you are modifying an AFDatabase in PI System Explorer.  The limiting factor here is SQL Server's datetime data type, which has a precision of 3.33 milliseconds.


See MSDN for datetime (Transact-SQL) for the section called “Rounding of datetime Fractional Second Precision”.
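Under the hood, the 3.33 millisecond figure comes from datetime storing the time-of-day portion in 1/300-second units, which is why displayed values snap to milliseconds ending in .000, .003, or .007.  A rough Python sketch of that precision loss (illustrative only, not SQL Server's exact rounding rule):

```python
# SQL Server's datetime stores time-of-day in 1/300-second units,
# hence the oft-quoted 3.33 ms precision. This sketch rounds a seconds
# value to the nearest 1/300 s; it illustrates the magnitude of the
# precision loss rather than SQL Server's exact rounding algorithm.
THIRDS_PER_SECOND = 300  # datetime resolution: 1/300 s ≈ 3.33 ms

def to_datetime_precision(seconds: float) -> float:
    """Round a seconds value to the nearest 1/300 of a second."""
    return round(seconds * THIRDS_PER_SECOND) / THIRDS_PER_SECOND

# A PI-precision timestamp (15 microsecond grain) loses detail here:
rounded = to_datetime_precision(1.0000152587890625)  # collapses to 1.0
```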


This would affect static times you enter into your attributes or attribute templates, as well as AFTables. I'm a firm believer that one should not overload a table to behave like a historian; to me, a few timestamps are okay, but subsecond precision should not be a headline feature of a table.  If you require greater precision or have lots of data rows, then OSIsoft has a better solution: it's called a PI Server.


That said, there is one other possible area of concern: Event Frames.  In particular, the StartTime and EndTime are subject to the 3.33 millisecond precision.  If you're using Asset Analytics to generate event frames - which is ridiculously easy to do - then a value from the PI Server will trigger that creation.  That value coming from PI will have a trigger timestamp with a precision of 15 microseconds. When saved as the StartTime or EndTime, that trigger timestamp will be rounded to 3.33 millisecond precision. Something to be aware of, perhaps.


As always, use this knowledge for good, not evil.