
A brief history of time_t

In the context of a discussion about the Y2.038K problem, Craig Berry surmised that the 32-bit Unix time_t type originated on 16-bit machines. That’s entirely true; for those with too much time on their hands, here’s a short history of Unix time handling.

First Edition Unix (November 1971) measured time in sixtieths of a second since 1 January 1971, as a 32-bit quantity. It’s not clear what epoch was used in earlier versions of proto-Unix. The epoch was adjusted (perhaps more than once), because that definition provided a range of somewhat less than 2.5 years; Third Edition used 1 January 1972. The modern Unix whole-second granularity and epoch (midnight 1 January 1970) date from Fourth Edition.
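As a rough sanity check on that range (my arithmetic, not part of the original history): an unsigned 32-bit counter of sixtieths of a second wraps after roughly 828 days, a little under two and a third years.

    #include <stdio.h>

    int main(void)
    {
        /* 2^32 ticks at 60 ticks per second */
        double seconds = 4294967296.0 / 60.0;   /* about 71.6 million seconds */
        double days    = seconds / 86400.0;     /* about 828 days */
        printf("wraps after %.0f days (%.2f years)\n", days, days / 365.25);
        return 0;
    }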

First Edition predated C; even Fourth Edition, in which the kernel had been rewritten in C, predated the introduction of the long type in C. So the time() call originally took a pointer to an array of two 16-bit ints, like every int of the time. (The underlying system call is documented as setting a pair of registers when called from assembly language.)
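A minimal sketch of what calling that pre-long interface looked like from C; the high/low split shown in the comments is an assumption for illustration, not a quotation from the manuals.

    /* Hypothetical pre-Seventh Edition usage: with no long type available,
       the 32-bit time is delivered through an array of two 16-bit ints. */
    int tvec[2];    /* assumed layout: tvec[0] = high half, tvec[1] = low half */
    time(tvec);     /* the kernel fills in both halves */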

C acquired long during the period between Sixth Edition (May 1975) and the publication of the first edition of K&R (1978). Seventh Edition (January 1979) changed the time() API in a backwards-compatible way: it reanalysed the argument as a pointer to a single long and additionally returned the time as a long. Furthermore, it made the pointer argument optional: if you pass a NULL pointer, the time is returned without additionally being written elsewhere.
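In today's terms, the Seventh Edition interface behaves like the following sketch (written with long, as Seventh Edition declared it; this is illustrative rather than a copy of the original manual page).

    long now;

    time(&now);             /* pointer form: the time is stored through the pointer
                               and also returned */
    now = time((long *)0);  /* null-pointer form: the time is only returned */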

That definition of time() was the source of the time() function standardised in ANSI C89 (also known as ISO C90), though with relaxed guarantees. In C89, time_t is merely required to be an arithmetic type; its granularity, epoch, and interpretation are decided by the implementation, with the exception that (time_t)-1 must mean “unknown”.
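A small, portable example of the C89 contract (this is just standard C, not drawn from the post itself): the only failure indication the standard guarantees is the (time_t)-1 value.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        if (now == (time_t)-1) {
            /* C89: the calendar time is not available */
            fputs("time unavailable\n", stderr);
            return 1;
        }
        /* granularity, epoch and encoding are implementation-defined;
           ctime() gives a human-readable rendering regardless */
        printf("%s", ctime(&now));
        return 0;
    }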

[Also published here.]