On January 19, 2038, the Unix timestamp (a signed
long type) overflows on 32-bit systems. Is Linux ready for it?
The only fix that really works is to make
time_t a 64-bit type, i.e. signed
long long. Why is it still relevant to fix this for 32-bit systems? More than half of new
kernel development still targets 32-bit systems, and some of the products being developed now have a lifetime of 20 years or more. Also, in product development,
you typically start with a BSP kernel that is already a bit old, your own development takes a couple of years, and then the user still wants to use the product for a couple more years. Finally,
even on 64-bit hardware, you might be running a 32-bit userspace - e.g. because you're running legacy code that may not run correctly in 64-bit mode.
In addition to
time_t, there are also network protocols, filesystems and file formats that are affected. There are also hardware interfaces (e.g. RTC chips, PTP network adapters) that are impossible to fix, so they have
to be worked around. Sometimes you can interpret the timestamp as unsigned in those cases, which buys you time until 2106.
John Stultz started with the timekeeping code in the kernel and worked his way upward from there. Many hundreds of drivers have been patched since 2014. There is no flag day on which
the time type is changed. Instead, a new
ktime_t is used, which is in fact a 64-bit nanosecond count, so it also has much higher accuracy. Using jiffies is another way to deal with it.
CLOCK_MONOTONIC is an easy solution, which also has the advantage that it’s generally a more appropriate view of the time because of leap
seconds, time adjustments, etc. In rare cases,
time64_t is used directly. timespec/timeval (really only used for userspace interfaces) is converted to timespec64.
The harder problem is userspace interfaces. Ideally, it should be possible to do this without requiring changing the userspace code, just recompile. For example, for ioctls, the
ioctl number is changed (it’s normally accessed through a
#define). This works nicely because the size of the argument is normally encoded in the ioctl number with the
_IOR(), _IOW(), etc. macros. Where this is not possible, the ioctl number is defined based on the size of the types used - only in the new case.
The read() system call is particularly tricky, because there you can't see whether an old or new libc is doing the read. The canonical example is input event structures which are read from
/dev/inputXX. It is similar for
mmap() interfaces, e.g. in ALSA. The choice there is to either keep the ABI but change userspace by changing the uapi headers, or else to detect which kind of ABI userspace is
expecting and emulate that.
The VFS layer is another problem, because there are a lot of syscalls with time arguments there. Here, only some system calls will be supported in the future. E.g., there are a dozen different stat implementations. These will all be replaced with a single statx(), and glibc can emulate the older APIs on top of it.
Filesystems are mostly converted, but a few still have to be converted: XFS, ext3, coda. ext3 will not be fixed, instead you should use ext4.
There are about 50 system calls that pass time information. They are replaced with new entry points; however, these are not used yet. For them, there will be a flag day when the
syscalls for all architectures are converted. 4 system calls are still under discussion, e.g.
wait4 - there is no clear winning solution for each of them.
To do the syscall conversion, the 64-bit implementation of the system call already has a
compat_ version to support 32-bit binaries. This can just be reused for the time32 variant. The
compat_ names will also be replaced with explicit time32 names.
In the userspace interface (and only there), a new type
__kernel_timespec is used. This makes it easy to implement the flag day: for now, it is simply defined as
timespec; in the
future it will be changed to the 64-bit layout.
Of the 50 system calls, about half don’t need to be replaced because there is already a 64-bit version. The other half is mostly done, just 6 still to go.
The next step is the userspace side of things. First of all, the libc implementations: glibc and musl. glibc wants to continue supporting building userspace with the old 32-bit
time_t interfaces, just as it does for largefile support. Arnd did something for musl. musl will just change the ABI, also fixing a few other things in the ABI. So there it will
no longer be possible to build or run 32-bit-time executables.
For distros, they have to rebuild everything. Embedded distros have it easy because they rebuild everything anyway. Desktop systems will need a migration plan where only part of the packages are rebuilt with 64-bit time - but a lot of them just plan to drop 32-bit support. Distros that will still have 32-bit support are Fedora armhf, Debian armhf and i386, and openSUSE armv7hl. Android has the biggest problem, because there will be a new ABI, so old app binaries will break.
Overall, the kernel side of the work is largely done.
What about testing? There is not much testing, except the usual regression testing of the kernel.
There is also no solution for userspace applications: if they copy a
time_t into a
long, that has to be fixed by hand. There is no general strategy for this.