
Unix Time and Windows Time

A little note in Portfolio magazine caught my attention: on Friday, February 13, at 23:31:30, the Unix time value will be 1234567890.  This got me thinking about when the Windows time value will reach that serendipitous number, and led to some research on MSDN.

Time is a complicated topic - part of it is that there are so many different MSDN-documented interfaces that return time, including SQL, JavaScript, WBEM, MFC, .NET, etc.  I found this topic, which gives a good overview of the various types of times in Windows.  Unix, by contrast, has pretty much a single time_t value containing a 32-bit signed number.  It might seem odd that Unix halves the span of time it can represent by making the value signed rather than unsigned, but this is an artifact of the baseline used - January 1, 1970.  Allowing negative values lets Unix represent times prior to 1970 (an interesting article about Unix time notes that Unix co-inventor Dennis Ritchie's birth time is the Unix time value -893,400,000).

The basic idea of representing time on computers is pretty simple - pick a baseline date/time and then count some increment of nanoseconds, milliseconds, or seconds since then as a numerical value.  The trick is in picking the baseline and accounting for all the oddities of leap years, leap seconds, changes in calendar, etc.  But basically it comes down to some number of time units from the baseline, so when Portfolio says the Unix time will be 1234567890 on February 13 of this year, it means 1,234,567,890 seconds will have elapsed since the Unix baseline of January 1, 1970.  Boy, how those billion-plus seconds just flew by!
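To make that concrete, here is a quick sketch in plain C (standard library only, nothing Windows-specific) that turns a raw seconds-since-1970 count back into a calendar date.  The second value is the Ritchie birth time mentioned above; note that negative values only print on platforms whose gmtime accepts pre-1970 dates.

```c
#include <stdio.h>
#include <time.h>

/* Print a raw Unix time value as a human-readable UTC date. */
static void show(long long seconds)
{
    time_t t = (time_t)seconds;
    struct tm *utc = gmtime(&t);          /* break the count into calendar fields */
    if (utc == NULL)
    {
        printf("%12lld -> out of range on this platform\n", seconds);
        return;
    }
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
    printf("%12lld -> %s\n", seconds, buf);
}

int main(void)
{
    show(1234567890LL);    /* the "magic" value: February 13, 2009, 23:31:30 UTC */
    show(-893400000LL);    /* a negative value: a date in September 1941 */
    return 0;
}
```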

But Windows is more complicated (natch, some would cynically say).  MSDN notes: "There are five time formats. Time-related functions return time in one of these formats. You can also use the time functions to convert between time formats for ease of comparison and display. The following table summarizes the time formats."

Format    Type                       Description
System    SYSTEMTIME                 Year, month, day, hour, second, and millisecond, taken from the internal hardware clock.
File      FILETIME                   100-nanosecond intervals since January 1, 1601.
Local     SYSTEMTIME or FILETIME     A system time or file time converted to the system's local time zone.
MS-DOS    WORD                       A packed word for the date, another for the time.
Windows   DWORD                      The number of milliseconds since the system booted; a quantity that cycles every 49.7 days.

So what is called Windows time here (and so sounds like the parallel to Unix time) is actually dependent on when you booted your computer - so if you leave your machine running for a bit over two weeks, your Windows time will reach the magic 1234567890 value (at 14 days, 6 hours, 56 minutes of uptime, to be precise).
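For a rough sketch of that arithmetic (using the Win32 GetTickCount call, which returns the number of milliseconds since boot as a DWORD):

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    const DWORD magic = 1234567890UL;   /* the "magic" millisecond count */

    /* How long the machine must stay up before GetTickCount() reads 1234567890. */
    DWORD days    = magic / (24UL * 60 * 60 * 1000);
    DWORD hours   = (magic / (60UL * 60 * 1000)) % 24;
    DWORD minutes = (magic / (60UL * 1000)) % 60;

    printf("GetTickCount() hits %lu after %lu days, %lu hours, %lu minutes of uptime\n",
           (unsigned long)magic, (unsigned long)days,
           (unsigned long)hours, (unsigned long)minutes);
    printf("This machine has been up for %lu ms so far\n", (unsigned long)GetTickCount());
    return 0;
}
```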

The System time (which in usage is actually the parallel to how Unix uses time_t, and to what the article is calling Unix time) is a structure on Windows, broken into WORD (16-bit on Win32) values for each component - year, month, day, hour, minute, and so on.  The system queries the hardware real-time clock and generates this structure from it; exactly how that clock maintains its value is platform-dependent (part of the BIOS's job is making that translation).
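For illustration, a minimal sketch using the Win32 GetSystemTime call, which fills in one of those SYSTEMTIME structures component by component:

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEMTIME st;
    GetSystemTime(&st);   /* current UTC time, broken into WORD-sized fields */

    printf("%04u-%02u-%02u %02u:%02u:%02u.%03u UTC (day of week: %u)\n",
           st.wYear, st.wMonth, st.wDay,
           st.wHour, st.wMinute, st.wSecond, st.wMilliseconds,
           st.wDayOfWeek);
    return 0;
}
```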

So the closest thing to Unix time - a single number representing an actual date and time, one that could reach a "magic value" like 1234567890 - is FILETIME.  This is what is stored in the filesystem.  I suspect it was created because it is less efficient to store a full SYSTEMTIME structure on disk: SYSTEMTIME is composed of eight 16-bit values, or 128 bits of data, while FILETIME is a 64-bit value.  That doesn't sound like a lot of difference, but when you have millions of files, those bits add up... :)

FILETIME "contains a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)."  So here very visibly is the difference between a 32-bit value and a 64-bit value.  Remember Unix is counting seconds since January 1, 1970; Windows is counting nanoseconds (a billionth of a second - nine orders of magnitude more precise than the Unix time) since a date over three centuries earlier - and still can store it in a single integer value because it's using a 64-bit value rather than a 32-bit value.  The power of exponential growth right there, ladies and gentlemen.

So when did this 64-bit value reach 1,234,567,890?  Well, recall that it is measured in 100-nanosecond intervals - each one hundred billionths of a second - and the magic value is roughly 1.2 billion.  So in fact, Windows FILETIME reached the magic value Unix will reach on Friday, February 13 a little more than one hundred seconds, or about two minutes, after midnight on January 1, 1601.
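You can check that arithmetic with the Win32 FileTimeToSystemTime call - a minimal sketch:

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Treat the magic number as a raw FILETIME tick count (100-ns units). */
    ULARGE_INTEGER ticks;
    ticks.QuadPart = 1234567890ULL;

    FILETIME ft;
    ft.dwLowDateTime  = ticks.LowPart;
    ft.dwHighDateTime = ticks.HighPart;

    SYSTEMTIME st;
    if (FileTimeToSystemTime(&ft, &st))
    {
        /* Prints 1601-01-01 00:02:03.456 UTC - about two minutes past the baseline. */
        printf("%04u-%02u-%02u %02u:%02u:%02u.%03u UTC\n",
               st.wYear, st.wMonth, st.wDay,
               st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    }
    return 0;
}
```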

Comments

  • Anonymous
    January 17, 2009
    a tick is 100 ns. Don't you mean 123.4 s after midnite? But even so I ask, in whose reckoning were nanoseconds counted before the invention of the telescope, or of decent pendulum escapements? Even POSIX is a fantasy, see the link I've given.

  • Anonymous
    January 17, 2009
    Good point. It is 100 nanosecond intervals, not nanosecond intervals. I've updated the post.

  • Anonymous
    January 17, 2009
    In 1601 Elizabeth still sat on the throne, and the Greenwich Observatory was founded by Charles II, three quarters of a century later, after the civil war, so there was no such thing as GMT, let alone UTC, by which to measure this midnite to an accuracy of two minutes.  It's fantasy.

  • Anonymous
    May 12, 2009
    So, are there any built-in functions to convert between Unix time and "Windows time" (or FILETIME)?

  • Anonymous
    May 12, 2009
    OK, found it myself. There are the CTime functions from MFC/ATL ( http://msdn.microsoft.com/en-us/library/b6989cds.aspx ) or the <ctime> header from the SCL (Standard C++ Library, http://msdn.microsoft.com/en-us/library/w4ddyt9h.aspx ).

  • Anonymous
    August 11, 2009
    gettimeofday() is a POSIX function that provides microsecond precision.  (What resolution the OS actually supports is another matter, of course.)
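Following up on the May 12 question about converting between Unix time and FILETIME: here is a minimal sketch of the underlying arithmetic (the function names are just illustrative; the constant 116444736000000000 is the number of 100-nanosecond intervals between January 1, 1601 and January 1, 1970):

```c
#include <windows.h>
#include <time.h>

/* 100-nanosecond intervals between the two baselines: 1601-01-01 and 1970-01-01. */
#define EPOCH_DIFFERENCE 116444736000000000ULL

/* Convert a Unix time_t (seconds since 1970) to a Windows FILETIME. */
FILETIME UnixTimeToFileTime(time_t t)
{
    ULARGE_INTEGER ticks;
    ticks.QuadPart = (unsigned long long)t * 10000000ULL + EPOCH_DIFFERENCE;

    FILETIME ft;
    ft.dwLowDateTime  = ticks.LowPart;
    ft.dwHighDateTime = ticks.HighPart;
    return ft;
}

/* Convert a FILETIME back to a Unix time_t, truncating sub-second precision. */
time_t FileTimeToUnixTime(FILETIME ft)
{
    ULARGE_INTEGER ticks;
    ticks.LowPart  = ft.dwLowDateTime;
    ticks.HighPart = ft.dwHighDateTime;
    return (time_t)((ticks.QuadPart - EPOCH_DIFFERENCE) / 10000000ULL);
}
```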