# Precision and accuracy of DateTime

The DateTime struct represents dates as a 64-bit number that measures the number of “ticks” since a particular start date. Ten million ticks equals one second, so one tick is 100 nanoseconds.

That’s quite a high degree of precision. You can represent dates and times to sub-microsecond accuracy with a DateTime, which is typically more precision than you need. Not always, of course; on modern hardware you can probably execute a couple hundred instructions in one tick, and therefore if you want timings that are at the level of precision needed to talk about individual instructions, the tick is too coarse a measure.

The problem that arises with having that much precision is of course that it is very easy to assume that a given value is as accurate as it is precise. But that’s not warranted at all! I can represent my height in a double-precision floating point number as 1.799992352094 metres; though precise to a trillionth of a metre, it’s only accurate to about a hundredth of a metre, because I do not have a device which can actually measure my height to a trillionth of a metre, or even a thousandth of a metre. There is way more precision than accuracy here.

The same goes for dates and times. Your DateTime might have precision down to the sub-microsecond level, but does it have accuracy? I synchronize my computers with time.gov fairly regularly. But if I don’t do so, their clocks typically wander by a couple of seconds a year. Suppose my clock loses one second a year. There are 31.5 million seconds in a year and 10 million ticks in a second, so therefore it is losing one tick every 3.15 seconds. Even if my clock were miraculously accurate down to the level of a tick at some point, within ten seconds it’s already off. Within a day much of the precision will be garbage.
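That arithmetic, spelled out (the one-second-per-year figure is of course just my hypothetical):

```csharp
using System;

class DriftArithmetic
{
    static void Main()
    {
        // A clock losing one second per year loses one second's worth of
        // ticks -- 10 million of them -- spread over ~31.5 million seconds.
        const double secondsPerYear = 31_500_000;
        double secondsPerLostTick = secondsPerYear / TimeSpan.TicksPerSecond;
        Console.WriteLine(secondsPerLostTick); // 3.15 seconds per lost tick
    }
}
```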

If you do a little experiment you’ll see that the operating system actually gives you thousands of times less accuracy than precision when asked “what time is it?”

```csharp
long ticks = DateTime.Now.Ticks;
while (true)
{
    if (ticks != DateTime.Now.Ticks)
    {
        ticks = DateTime.Now.Ticks;
        Console.WriteLine(ticks);
    }
    else
    {
        Console.WriteLine("same");
    }
}
```

On my machine this says “same” eight or nine times, and then suddenly the Ticks property jumps by about 160000, which is 16 milliseconds, a 64th of a second. (Different flavours of Windows might give you different results, depending on details of their thread timing algorithms and other implementation details.)

As you can see, the clock appears to be precise to the sub-microsecond level but it is in practice only precise to 16 milliseconds. (And of course whether it is accurate to that level depends on how accurately the clock is synchronized to the official time signal.)

Is this a flaw in DateTime.Now? Not really. The purpose of the “wall clock” timer is to produce dates and times for typical real-world uses, like “what time does Doctor Who start?” or “when do we change to daylight savings time?” or “show me the documents I edited last Thursday after lunch.” These are not operations that require sub-microsecond accuracy.

(And incidentally, in VBScript the “wall clock” timer methods built in to the language actually round off times we get from the operating system to the nearest second, not the nearest 64th of a second.)

In short, the question “what time is it?” really should only be answered to a level of precision that reflects the level of accuracy inherent in the system. Most computer clocks are not accurately synchronized to even within a millisecond of official time, and therefore precision beyond that level of accuracy is a lie. It is rather unfortunate, in my opinion, that the DateTime structure does surface as much precision as it does, because it makes it seem like operations on that structure ought to be accurate to that level too. But they almost certainly are not that accurate.

Now, the question “how much time has elapsed from start to finish?” is a completely different question than “what time is it right now?” If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the StopWatch class. It really does have nanosecond precision and accuracy that is close to its precision.
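As a sketch of how that looks in practice (SomeOperation here is a placeholder for whatever work you are timing):

```csharp
using System;
using System.Diagnostics;

class ElapsedTimeDemo
{
    static void Main()
    {
        // Stopwatch uses the high-resolution performance counter when one is
        // available; its ticks are NOT the same units as DateTime ticks.
        Stopwatch stopwatch = Stopwatch.StartNew();
        SomeOperation();
        stopwatch.Stop();

        Console.WriteLine("Elapsed: {0} ms", stopwatch.Elapsed.TotalMilliseconds);
        Console.WriteLine("Resolution: {0} ticks per second", Stopwatch.Frequency);
        Console.WriteLine("High resolution? {0}", Stopwatch.IsHighResolution);
    }

    static void SomeOperation()
    {
        // Placeholder for the work being timed.
        for (int i = 0; i < 1000000; i += 1) { }
    }
}
```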

Remember, you don’t need to know what time it is to know how much time has elapsed. Those can be two different things entirely.

• Time in Windows has always been a little "bonkers". It might depend on each system, but if one stops to look at the taskbar clock, every few seconds the system seems to stall and it's obvious that the seconds are not evenly spaced at all.

My question is: is the taskbar clock synchronized with DateTime.Now, or is the obvious stalling in the clock some UI update hiccup that DateTime is unaware of? Because if they are synchronized, then errors are not in the range of 10 ms at all, unless it evens out when you measure a long enough time period.

• The problem with the StopWatch class is that, while it is extremely precise, it is not guaranteed to be accurate. The source it uses for its tick count may be different on different CPUs, causing incorrect results when you stop the clock on a different CPU than you start it on. Furthermore, it may count at a different frequency in power-saving modes, which could be perfect for microbenchmarking code, but useless as an indicator of when an Ethernet packet arrived.

I would also add that I rather like that DateTime has so much precision built in (even if it implies that DateTime.Now has more precision than it does), so I can use the same data structures and functions on data that represents birthdays, system times, and when Ethernet packets arrived. This is much preferable to other systems that require different representations and thus different libraries for each of those situations.

• I don't know that I would go so far as to say DateTime supporting more precision than is provided by the hosting hardware/platform is a bad thing.

At no extra cost to developers, DateTime can support more precise hosts in the future; in my book that's a good thing.

--Ifeanyi Echeruo

• Stopwatch is definitely one of those great little utility classes that many people are under-aware of. Being able to roll your own code performance timer using Stopwatch is invaluable in cases where you want to profile a very narrow area of code and you don't have time to break out an actual code profiler.

Something that often goes hand in hand with Stopwatch is MethodBase.GetCurrentMethod(), which reports the reflection info of the currently executing method. Unfortunately, you can't centralize this into a utility helper method, since GetCurrentMethod() reports the method actually running. What would be nice would be a GetCallingMethod() method that looks at the call stack frame just above the current one. You can of course write your own stack-frame walking code ... but who wants to do that :)

• @Leo Bushkin:

new StackFrame(1).GetMethod()

• The irony is that we've taken this on board for Noda Time as well - as the common "smallest unit of time" in .NET is a tick, we felt we needed to support that in Noda too. Joda Time - which we're porting from - only supports down to millisecond precision.

On the other hand, I suppose it means we can use the same types for stopwatch measurements and other measurements. My biggest gripe about Stopwatch is that it uses ticks as well - but to mean something entirely different from ticks in DateTime/TimeSpan. Grrr.

Fun fact: for time zones, we actually represent offsets in milliseconds. It's possible that that's overkill - seconds would probably have been okay. Minutes wouldn't have been, however - there have been time zones with offsets from UTC of "9 minutes and 21 seconds" and similar.

I don't have too much problem with DateTime having too much accuracy, so long as everyone *knows* it.

• Doh - amendment to final comment... I don't have a problem with DateTime having too much *precision* so long as everyone knows it. Precision, not accuracy.

• Pass GetCurrentMethod() as a parameter to the logging function.

• Thanks Eric, that's a pretty useful summary, straight from the horse's mouth as it were.

I do think this needs to be made far more explicit in the framework documentation. A colleague of mine (a behavioural psychologist interested in human reaction times) claims to have been trying to get a straight answer from all sorts of experts regarding the accuracy of Windows timing for around a decade now. I've been exploring the matter recently and I'm still picking up all sorts of contradictory statements. All I really want is a reliable piece of documentation that states unequivocally the factors influencing the margin of error in a Stopwatch elapsed time figure in a quantitative way, so that I can extrapolate a scientifically rigorous lower bound. Apparently nothing on the web is able to give me this.

• An excellent (as usual) post. One additional item to mention is to NOT use DateTime.Now for calculations. Two reasons. (in no particular order)

1) It has MUCH higher overhead than DateTime.UtcNow

2) It WILL give you errors in most US locations twice a year.

I actually was involved with one company (Eastern US) who used "Bogota" time to avoid the time jumping as Daylight Savings kicked on/off. The side effect was that ALL computer clocks were off by 1 hour during the summer.....DELIBERATELY!!!

• About DateTime: a very old thing :) Richter mentioned it in his book; as I remember, it's because of the standard Win32 timer, not because of .NET or DateTime.

• Something that is really missing is a way to get a precision counter (HPET) reading at the precise moment when the system DateTime counter was last incremented (other than looping to check whether the DateTime has changed). This would make it far easier to implement a good time synchronization scheme.

• "The problem with the StopWatch class is that, while it is extremely precise, it is not guaranteed to be accurate."

Thankfully, this isn't quite true. There are computer systems with faulty BIOS for which StopWatch can suffer the problems described. But that's not a normal affair. On a correctly working system, StopWatch is fine within the documented limits of the class.

• How about this? Let's say I have to do something every 1 minute. Here are two possible ways that I can solve the problem:

Solution A

```
DO it
WAIT 1 minute
DO it
WAIT 1 minute
...
```

Solution B (assume I start at 12:00)

```
DO it
WAIT until 12:01
DO it
WAIT until 12:02
...
```

But, if what has to be done takes a noticeable amount of time, let's say 30 seconds, then the result from Solution A will be very different from Solution B.

| Time       | Solution A | Solution B       |
|------------|------------|------------------|
| 12:00:00   | DO it      | DO it            |
| 12:00:30   | WAIT 1 min | WAIT until 12:01 |
| 12:01:00   |            | DO it            |
| 12:01:30   | DO it      | WAIT until 12:02 |
| 12:02:00   | WAIT 1 min | DO it            |
| 12:02:30   |            | WAIT until 12:03 |
| 12:03:00   | DO it      | DO it            |
| 12:03:30   | WAIT 1 min | WAIT until 12:04 |
| 12:04:00   |            | DO it            |
| ...        | ...        | ...              |
| much later | gets worse | still on schedule |

I've met someone who had a similar problem, used Solution A (or something like it), but was expecting to have results similar to Solution B. When he told me his story, I thought to myself, hey, that's kind of like the problem with dead reckoning.

But how else can this problem of drift be minimized? I thought a clock would be a good point of reference to use to get back on course.
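One way to get Solution B's behaviour is to compute each wait against a fixed schedule, rather than sleeping for a fixed delay; a sketch, where DoIt is a placeholder for the periodic work:

```csharp
using System;
using System.Threading;

class FixedScheduleTimer
{
    static void Main()
    {
        TimeSpan period = TimeSpan.FromMinutes(1);
        // Anchor the schedule once; each wait then targets an absolute time,
        // so however long DoIt takes does not accumulate as drift.
        DateTime next = DateTime.UtcNow;
        while (true)
        {
            DoIt();
            next += period;
            TimeSpan wait = next - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);
        }
    }

    static void DoIt()
    {
        // Placeholder for the work to repeat every minute.
        Console.WriteLine(DateTime.Now);
    }
}
```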

• “what time does Doctor Who start?”

That's surprising. I thought Dr. Who was a curiously UK phenomenon, and over in the US you hadn't even heard of it, let alone appreciate it.

(And did you see the start of the new series? Fantastic!)

I grew up watching Dr. Who on WNED (PBS Buffalo, New York). As a child the opening credit sequence alone terrified me, though I would occasionally watch it "from a position of safety behind the couch" as they say. I became a big fan as a teenager; I still have a complete set of the Marvel reprint of Dr. Who Comics somewhere in the house, which as a teenager represented a significant fraction of my monthly income. It is reasonably well known in the US, though when I wear my Tom Baker scarf, hardly anyone comments on it anymore; that series seems to no longer have much pop culture currency in the United States.

I've seen the first two seasons of the reboot and I am mostly favourably impressed; they seem to have done a good job of staying true to the wit, humour, scariness and cheerful low-budget making-do of the original run. I've not had a chance to sit down and watch the later seasons; eventually I'll pick them up on DVD or watch them on Netflix On Demand.  -- Eric
