# How can something so simple BE SO WRONG?! A deep dive into converting Ticks To Seconds.

Posted by Rick Davin in Rick Davin's Blog on May 26, 2017 6:09:10 PM

**We take a deep dive into preserving subsecond precision when converting from Ticks to Seconds and back again.**

Ah, the joys of being a code developer! There is a flip side, of course: those pesky users of your application demand that your calculations be right. Let's take a journey into something that should be quite simple to do: converting back and forth between Ticks and Seconds.

One thing all PI Geeks are interested in is time series data. And PI Geeks who are .NET developers are quite familiar with OSIsoft's AFTime and .NET's DateTime objects. Internally, AFTime keeps a UTC-based DateTime backing field. Even though AFTime doesn't have a Ticks property, its UtcTime and LocalTime properties do. If you focus on UtcTime.Ticks, then for all intents and purposes you can say that AFTime does have Ticks. Meanwhile, DateTime lacks a Seconds or UtcSeconds property. Maybe there is a simple conversion one can create? Let's try:

Our **first ToSeconds**() extension method:

```csharp
public static double ToSeconds(this long ticks) => ticks / TimeSpan.TicksPerSecond;
```

On the surface that looks simple, right? So simple it must be right! Except it is so, so wrong. Both ticks and TimeSpan.TicksPerSecond are long (Int64), so in C# this performs integer division when we really want floating-point division. I will give VB.NET a thumbs-up here: VB's "/" operator always means floating-point division, whereas its "\" operator means integer division.
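The pitfall is easy to reproduce in Python, whose `//` operator performs the same truncating division that C# applies to two longs (for the positive values used here). This is just a sketch; the tick constant is the standard .NET value, nothing AF-specific:

```python
TICKS_PER_SECOND = 10_000_000  # same value as TimeSpan.TicksPerSecond

ticks = 636314147236593322  # a timestamp that has subseconds

bad_seconds = ticks // TICKS_PER_SECOND   # integer division, like the C# above
good_seconds = ticks / TICKS_PER_SECOND   # floating-point division

print(bad_seconds)   # 63631414723 -- the subseconds vanished entirely
print(good_seconds)  # about 63631414723.659332
```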

Our **second ToSeconds**() extension method:

```csharp
public static double ToSeconds(this long ticks) => ticks / (double)TimeSpan.TicksPerSecond;
```

Okay, that's better with floating-point division, and it's still fairly simple. Pity it's still wrong for an AF developer. Hard to see, isn't it? Got you curious? I'll explain more after another simple example. Let's reverse direction and try to convert seconds to ticks.

Our **first ToTicks**() extension method:

```csharp
public static long ToTicks(this double seconds) => (long)(seconds * TimeSpan.TicksPerSecond);
```

Nice and simple. Life is good. The product is a double, and we cast (convert, actually) to long. Would you believe something so simple could produce a wrong answer? Again, it's quite hard to see, but the above code is also quite wrong. Seriously.

The problem can be seen with subseconds. A PI subsecond is 1/65536 of a second, which is between 152 and 153 ticks. And our first ToTicks() method can produce an error of up to +/- 76 ticks. This may be surprising, given that the seconds, along with its subseconds, is stored in a double. Surely a double is big enough for storing seconds. Well, it is. But it's not big enough to accurately convert to a long.
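The arithmetic behind those numbers is easy to check. The sketch below uses plain Python floats, which are the same IEEE 754 binary64 doubles that C# uses:

```python
TICKS_PER_SECOND = 10_000_000  # TimeSpan.TicksPerSecond

# One PI subsecond (1/65536 s) expressed in 100 ns ticks:
pi_subsecond_ticks = TICKS_PER_SECOND / 65536
print(pi_subsecond_ticks)  # 152.587890625 -- between 152 and 153 ticks

# Round-tripping a tick count through a double and back with a naive
# multiply shows the kind of error the first ToTicks() makes:
ticks = 636314147236593322
round_trip = int((ticks / TICKS_PER_SECOND) * TICKS_PER_SECOND)
print(ticks - round_trip)  # nonzero -- dozens of ticks lost
```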

And that's the real problem. We are trying to convert a 64-bit double into a 64-bit long. A double is a 64-bit floating-point representation of a real number, but not all of its bits go toward composing the number: only 52 bits (53 significant bits, counting the implied leading bit) hold the digits, while the rest encode the sign and the exponent, with special patterns reserved for NaN and Infinity. Contrast that with a long - it's also a 64-bit number, but *all* of its bits are used to produce the number. A long *is* the number, whereas a double is merely a close approximation. Any integer above 2^53 cannot always be represented exactly in a double, and modern tick counts are far above that, so somewhere along the way some precision will be lost.
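The precision cliff is easy to demonstrate (again in Python, whose float is the same binary64 as a C# double):

```python
# 2**53 is the last point where every whole number is exactly representable.
exact_limit = 2**53
print(float(exact_limit) == float(exact_limit + 1))  # True -- the +1 is lost

# Modern tick counts (~6.4e17) are far above 2**53 (~9.0e15), so merely
# converting the long to a double already rounds it:
ticks = 636314147236593322
as_double = int(float(ticks))
print(as_double)          # 636314147236593280
print(ticks - as_double)  # 42 ticks lost in the conversion alone
```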

We can get around that. You heard me right. Despite the previous paragraph, there is a way around it. We can break the seconds into two other doubles, wholeSeconds and subseconds, and convert each individually. By breaking them apart, the precision needed for each piece is much smaller, and therefore won't be lost in conversion. After the individual conversions, we sum the pieces back together, and we will have preserved the precision down to the last tick!

Our **second and final ToTicks**() extension method:

```csharp
public static long ToTicks(this double seconds)
{
    double wholeSeconds = Math.Truncate(seconds);
    double subseconds = seconds - wholeSeconds;
    return (long)(wholeSeconds * TimeSpan.TicksPerSecond)
         + (long)(subseconds * TimeSpan.TicksPerSecond);
}
```
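As a sanity check, here is the same split sketched in Python (my function names, not AF SDK; Python's `math.trunc` plays the role of Math.Truncate, and its floats are the same binary64 doubles):

```python
import math

TICKS_PER_SECOND = 10_000_000

def to_ticks(seconds):
    # Convert the whole seconds and the subseconds separately, then add.
    whole_seconds = math.trunc(seconds)
    subseconds = seconds - whole_seconds
    return int(whole_seconds * TICKS_PER_SECOND) + int(subseconds * TICKS_PER_SECOND)

def to_ticks_naive(seconds):
    # The "first ToTicks()": one big multiply, then truncate.
    return int(seconds * TICKS_PER_SECOND)

seconds = 63631414723.6593323  # a timestamp with subseconds
print(to_ticks(seconds))        # 636314147236593322 -- exact to the tick
print(to_ticks_naive(seconds))  # 636314147236593280 -- 42 ticks short
```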

I'm hoping you understand and agree with me on ToTicks(). But you may have lingering doubts about ToSeconds(). Let's go back to our **second ToSeconds**() method. Remember, it looked simple, but I said it was wrong for an AF developer.

```csharp
public static double ToSeconds(this long ticks) => ticks / (double)TimeSpan.TicksPerSecond;
```

The key phrase is "*for an AF developer*". The above ToSeconds() works perfectly fine with DateTime.Ticks and DateTime in general. But an AFTime's seconds can differ way out past the 6th decimal place. To understand this a bit better, let's find another way of determining a DateTime's "seconds" besides converting its Ticks. We can do that by subtracting DateTime.MinValue from a given DateTime.

```csharp
public static double SecondsSinceDotNetEpoch(this DateTime time) => (time - DateTime.MinValue).TotalSeconds;
```

Let's look at some code. I'll show the DisplaySubseconds() method later. For now, notice that our second ToSeconds() method looks just fine compared against the DateTime. Also note that I am hardcoding the ticks when constructing the DateTime, and I specifically chose a value with subseconds:

```csharp
var time = new DateTime(636314147236593322L, DateTimeKind.Utc);
var seconds1 = time.SecondsSinceDotNetEpoch();
var seconds2 = time.Ticks.ToSeconds(); // our second ToSeconds()

Console.WriteLine($"Ticks: {time.Ticks}");
Console.WriteLine($"  SecondsSinceDotNetEpoch(): {seconds1.DisplaySubseconds()}");
Console.WriteLine($"  time.Ticks.ToSeconds()   : {seconds2.DisplaySubseconds()}");
Console.WriteLine($"  Difference               : {(seconds1 - seconds2)}");

var ticks2 = seconds2.ToTicks();
Console.WriteLine($"  round-trip back ToTicks(): {ticks2}");
Console.WriteLine($"  Difference               : {(time.Ticks - ticks2)}");
```

Produces this console output:

```
Ticks: 636314147236593322
  SecondsSinceDotNetEpoch(): 63631414723.6593246
  time.Ticks.ToSeconds()   : 63631414723.6593246
  Difference               : 0
  round-trip back ToTicks(): 636314147236593246
  Difference               : 76
```

Why then do I claim it's wrong for an AF developer? Because all we have been dealing with is a DateTime. We really should compare it to an AFTime. Now, if we were to grab the AFTime.UtcTime and compute its seconds, we'd get the same thing as above, because UtcTime is a DateTime. What's critical in our comparison is AFTime.UtcSeconds. Any PI developer knows that's pretty much gospel to the PI system, be it PITime.UTCSeconds or AFTime.UtcSeconds. Rather than compute the seconds from UtcTime (a DateTime), we are going to honor UtcSeconds. To do that, we must adjust it from the UTC epoch to the .NET epoch.

```csharp
public static readonly DateTime EpochUtc = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

public static double ToSecondsSinceDotNetEpoch(this double utcSeconds)
    => utcSeconds + EpochUtc.SecondsSinceDotNetEpoch();
```
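That epoch offset is a fixed, well-known constant, and nothing AF-specific is needed to check it. A quick Python sketch (my names; Python's datetime uses the same proleptic Gregorian calendar as .NET):

```python
from datetime import datetime

# Seconds from the .NET epoch (0001-01-01) to the UTC/Unix epoch (1970-01-01).
EPOCH_OFFSET = (datetime(1970, 1, 1) - datetime(1, 1, 1)).total_seconds()
print(EPOCH_OFFSET)  # 62135596800.0

def to_seconds_since_dotnet_epoch(utc_seconds):
    # Mirrors the C# extension above: shift UtcSeconds onto the .NET epoch.
    return utc_seconds + EPOCH_OFFSET
```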

We can write a tiny test, using the exact same time as before, and try to round-trip back using our good ToTicks() method:

```csharp
var time = new AFTime(636314147236593322L);
var seconds1 = time.UtcSeconds.ToSecondsSinceDotNetEpoch();
var seconds2 = time.UtcTime.Ticks.ToSeconds(); // our second ToSeconds()

Console.WriteLine($"Ticks: {time.UtcTime.Ticks}");
Console.WriteLine($"  SecondsSinceDotNetEpoch(): {seconds1.DisplaySubseconds()}");
Console.WriteLine($"  time.Ticks.ToSeconds()   : {seconds2.DisplaySubseconds()}");
Console.WriteLine($"  Difference               : {(seconds1 - seconds2)}");

var ticks2 = seconds2.ToTicks();
Console.WriteLine($"  round-trip back ToTicks(): {ticks2}");
Console.WriteLine($"  Difference               : {(time.UtcTime.Ticks - ticks2)}");
```

Produces this console output:

```
Ticks: 636314147236593322
  SecondsSinceDotNetEpoch(): 63631414723.6593323
  time.Ticks.ToSeconds()   : 63631414723.6593246
  Difference               : 7.62939453125E-06
  round-trip back ToTicks(): 636314147236593246
  Difference               : 76
```

It really boils down to which you consider gospel: DateTime.Ticks or AFTime.UtcSeconds? I choose to honor UtcSeconds over Ticks. Therefore, our **second ToSeconds**() method - **as simple as it is** - is wrong! We can fix it with a technique similar to the one we used for ToTicks():

Our **third and final ToSeconds**() extension method:

```csharp
public static double ToSeconds(this long ticks)
{
    long wholeSecondPortion = (ticks / TimeSpan.TicksPerSecond) * TimeSpan.TicksPerSecond;
    long subsecondPortion = ticks - wholeSecondPortion;
    double wholeSeconds = wholeSecondPortion / (double)TimeSpan.TicksPerSecond;
    double subseconds = subsecondPortion / (double)TimeSpan.TicksPerSecond;
    return wholeSeconds + subseconds;
}
```
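Both final methods can be exercised together. Here is a Python sketch (my function names, not AF SDK; same binary64 doubles as C#) that round-trips the example timestamp:

```python
import math

TICKS_PER_SECOND = 10_000_000

def to_seconds(ticks):
    # Final ToSeconds(): convert whole-second ticks and subsecond ticks separately.
    whole_portion = (ticks // TICKS_PER_SECOND) * TICKS_PER_SECOND
    subsecond_portion = ticks - whole_portion
    return whole_portion / TICKS_PER_SECOND + subsecond_portion / TICKS_PER_SECOND

def to_ticks(seconds):
    # Final ToTicks(): convert whole seconds and subseconds separately.
    whole_seconds = math.trunc(seconds)
    subseconds = seconds - whole_seconds
    return int(whole_seconds * TICKS_PER_SECOND) + int(subseconds * TICKS_PER_SECOND)

ticks = 636314147236593322
print(to_ticks(to_seconds(ticks)) - ticks)  # 0 -- exact round trip

# The one-shot version loses ticks on the way back:
print(int((ticks / TICKS_PER_SECOND) * TICKS_PER_SECOND) - ticks)  # nonzero
```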

With our **final ToSeconds**() method in hand, we run this code once again, but let's include a round trip back with our **final ToTicks**() method:

```csharp
var time = new AFTime(636314147236593322L);
var seconds1 = time.UtcSeconds.ToSecondsSinceDotNetEpoch();
var seconds2 = time.UtcTime.Ticks.ToSeconds();

Console.WriteLine($"Ticks: {time.UtcTime.Ticks}");
Console.WriteLine($"  SecondsSinceDotNetEpoch(): {seconds1.DisplaySubseconds()}");
Console.WriteLine($"  time.Ticks.ToSeconds()   : {seconds2.DisplaySubseconds()}");
Console.WriteLine($"  Difference               : {(seconds1 - seconds2)}");

var ticks2 = seconds2.ToTicks();
Console.WriteLine($"  round-trip back ToTicks(): {ticks2}");
Console.WriteLine($"  Difference               : {(time.UtcTime.Ticks - ticks2)}");
```

Produces this **correct** console output:

```
Ticks: 636314147236593322
  SecondsSinceDotNetEpoch(): 63631414723.6593323
  time.Ticks.ToSeconds()   : 63631414723.6593323
  Difference               : 0
  round-trip back ToTicks(): 636314147236593322
  Difference               : 0
```

Now our round-tripping works fine, and we exactly match AFTime. Which seems like a heck of a lot of effort for something that really should have been so simple. But it is far less effort than listening to a disgruntled user complain that your ToTicks() method is wrong. Demanding correctness and accuracy puts a burden on the developer if you want to be accurate past the 6th decimal place. Life was so much simpler when we dealt only with whole seconds.

To bring it all together, below is the output of similar code using the good versus bad versions. To repeat: the Gospel Seconds has AFTime.UtcSeconds as its foundation, and the Gospel Ticks is AFTime.UtcTime.Ticks.

**Gospel Ticks Converted ToSeconds()**

```
Gospel Ticks  : 636314183416593322
Gospel Seconds: 63631418341.6593323
Bad  ToSeconds: 63631418341.6593246, Delta=7.62939453125E-06
Good ToSeconds: 63631418341.6593323, Delta=0
```

**Gospel Seconds Converted ToTicks()**

```
Gospel Seconds: 63631418341.6593323
Gospel Ticks  : 636314183416593322
Bad  ToTicks  : 636314183416593280, Delta=42
Good ToTicks  : 636314183416593322, Delta=0
```

**Microsoft Uses a Less Accurate Version in TimeSpan.TotalSeconds**

Here is a link to Microsoft's source code for TimeSpan.TotalSeconds.

Knowing what we now know, that would produce a less accurate value than we desire. Consider how we might use it to compute ToUtcSeconds.

**ToUtcSeconds with loss of precision:**

```csharp
public static double ToUtcSeconds(this DateTime time)
{
    if (time.Kind == DateTimeKind.Unspecified)
        throw new Exception($"{nameof(time)}.Kind cannot be Unspecified.");
    return (time.ToUniversalTime() - EpochUtc).TotalSeconds;
}
```

Instead, we should skip the TimeSpan object and trust our own high-precision ToSeconds() method.

**ToUtcSeconds preserving subsecond precision to 7th decimal place:**

```csharp
public static double ToUtcSeconds(this DateTime time)
{
    if (time.Kind == DateTimeKind.Unspecified)
        throw new Exception($"{nameof(time)}.Kind cannot be Unspecified.");
    return (time.ToUniversalTime().Ticks - EpochUtc.Ticks).ToSeconds();
}
```

**UPDATE: How Precise Are These Methods Really?**

It's an interesting discussion, but after publishing I discovered that my methods are indeed precise ... for DateTimes that are at PI precision, that is, with subseconds falling evenly on 1/65536ths of a second. In such cases, my methods can convert Ticks to Seconds and then round-trip the Seconds back to the original Ticks, which is something the simpler methods can't always do.

The problem is that, despite Ticks having 7 digits for subseconds, my methods are not precise down to the last tick. This can be checked by taking DateTime.Today and adding 0.1234567 seconds to it. Now my methods can't accurately make the round trip. Blame this on the nature of a double (or a Float64 or binary64, if you like): it has only about 16 significant digits, and 11 of those digits are used for the whole number of seconds, leaving 5 for subseconds.
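That limit can be demonstrated with the split helpers sketched earlier (my names; Python floats are the same binary64 doubles, and the tick values below are made up but realistic):

```python
import math

TICKS_PER_SECOND = 10_000_000

def to_seconds(ticks):
    whole = (ticks // TICKS_PER_SECOND) * TICKS_PER_SECOND
    return whole / TICKS_PER_SECOND + (ticks - whole) / TICKS_PER_SECOND

def to_ticks(seconds):
    whole = math.trunc(seconds)
    return int(whole * TICKS_PER_SECOND) + int((seconds - whole) * TICKS_PER_SECOND)

midnight = 636313824000000000  # a "DateTime.Today"-sized tick count (whole seconds)
plus = midnight + 1234567      # today plus 0.1234567 seconds

print(to_ticks(to_seconds(midnight)) - midnight)  # 0 -- whole seconds survive
print(to_ticks(to_seconds(plus)) - plus)          # nonzero -- the last ticks are lost
```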

Anyone reading this far should know that 65536 is 2 to the 16th power. I was curious how many more powers I could test and still have a successful round trip. It turns out it was just one more power of 2. That means my methods are precise to 1/131072nd of a second, or about 7.63 microseconds.

I also dabbled with using the 128-bit Decimal instead of Double, but Decimal is much slower. And .NET does not yet have an officially released binary128 data type, and the 3rd party ones haven't thrilled me.

**BONUS - DisplaySubseconds() method**

For total seconds or UtcSeconds, the printed value *appears* to be accurate to the millisecond. Appearances can be deceiving. The problem is that a double will only display a certain number of digits, and the whole number of seconds is a fairly long number, which leaves little room for decimals to appear. We get around this by - *you guessed it* - breaking the seconds up and displaying it in two pieces.

```csharp
private const int DefaultDecimalPlaces = 7;

public static string DisplaySubseconds(this double seconds, bool hideWholeNumbers = false, int decimalPlaces = DefaultDecimalPlaces)
{
    long wholeSeconds = (long)Math.Truncate(seconds);
    double subseconds = Math.Abs(seconds - wholeSeconds);
    if (decimalPlaces < 3)
        decimalPlaces = 3;
    else if (decimalPlaces > 16)
        decimalPlaces = 16;
    string format = "0." + new string('0', decimalPlaces);
    if (hideWholeNumbers)
        return $"{subseconds.ToString(format)}";
    return $"{wholeSeconds}{subseconds.ToString(format).Substring(1)}";
}
```

To contrast that with the double.ToString(), we use this code:

```csharp
var time = new AFTime(636314147236593322L);

// We use the correct ToSeconds() method
var seconds = time.UtcTime.Ticks.ToSecondsGood();

Console.WriteLine($"Ticks: {time.UtcTime.Ticks}");
Console.WriteLine($"  seconds.ToString(): {seconds.ToString()}");
Console.WriteLine($"  DisplaySubseconds : {seconds.DisplaySubseconds()}");
```

Which produces this console output:

```
Ticks: 636314147236593322
  seconds.ToString(): 63631414723.6593
  DisplaySubseconds : 63631414723.6593323
```
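The same split-for-display idea can be checked in Python (my names; Python's default float formatting also shows only as many digits as needed to round-trip, so it too hides the full subseconds):

```python
import math

def display_subseconds(seconds, decimal_places=7):
    # Clamp to the same 3..16 range as the C# version.
    decimal_places = min(max(decimal_places, 3), 16)
    whole = math.trunc(seconds)
    sub = abs(seconds - whole)
    # Format the fraction on its own, then drop its leading '0'.
    return f"{whole}" + f"{sub:.{decimal_places}f}"[1:]

s = 63631414723.6593323
print(str(s))                 # the default string hides the later digits
print(display_subseconds(s))  # 63631414723.6593323
```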

For DisplaySubseconds() the minimum number of decimal places to display is 3, and the maximum is 16.
