
Is a "Y2K38 disaster" looming? Issues with a 'time_t' type

SergeyKostrov
Valued Contributor II

Note: In February 2012 I made a blog post on Intel Software Network. Unfortunately, it is still in 'Pending' state (!!!).
Since that subject is very interesting, and some users of the Fortran Forum have already expressed concerns ( see the thread
'PACKTIMEQQ & GMTIME will not work past 2038' ), I decided to create an independent thread.

I hope that my blog post will be approved before 2038...

---------------------------------------------------------------------------------------------------------------------------------------------------------------

Is a "Y2K38 disaster" looming? Issues with a 'time_t' type

Have you ever seen the classic ANSI declaration of the 'time_t' type in the 'time.h' header? Let's
take a look. It is declared as:

typedef long time_t;

so if you check the size of the 'time_t' type with 'sizeof( time_t )', the result should be 4. That is,
this is a 32-bit type ( 4 bytes ), and this holds for many 16-bit and 32-bit platforms.

I recently verified the size of the 'time_t' type on 32-bit Windows XP with a 32-bit application
built with Visual Studio 2005, and to my surprise, the value was 8! The 'long' type is
4 bytes ( 32 bits ), but 'time_t' was 8 bytes ( 64 bits ). How is that possible?

If you develop software on a 32-bit Windows platform with Visual Studio, try a simple test:

#include <stdio.h>
#include <time.h>

int main( void )
{
    int n1 = ( int )sizeof( long );
    int n2 = ( int )sizeof( time_t );
    printf( "sizeof( long ) = %d\nsizeof( time_t ) = %d\n", n1, n2 );
    return 0;
}

The output could be as follows:

sizeof( long ) = 4
sizeof( time_t ) = 8

What is wrong? After a quick search on MSDN I found two very interesting notes about
the 'time_t' type:

Note 1:
...
In Visual C++ 2005, time is a wrapper for _time64 and time_t is, by default, equivalent
to __time64_t. If you need to force the compiler to interpret time_t as the old 32-bit time_t,
you can define _USE_32BIT_TIME_T. This is not recommended because your application may fail
after January 18, 2038; the use of this macro is not allowed on 64-bit platforms.
...

Note 2:
In the 'Breaking Changes (CRT)' topic of the 'Run-Time Library Reference':

...
time_t is now a 64-bit value ( unless _USE_32BIT_TIME_T is defined ).
...

Microsoft has changed (!) the declaration of the 'time_t' type, and now it is declared as:

...
#ifdef _USE_32BIT_TIME_T
typedef __time32_t time_t; /* time value */
#else
typedef __time64_t time_t; /* time value */
#endif
...

and the types '__time32_t' and '__time64_t' are declared as follows:

...
typedef _W64 long __time32_t; /* 32-bit time value */
...
typedef __int64 __time64_t; /* 64-bit time value */
...

You can also search for the '_USE_32BIT_TIME_T' macro in the 'types.h', 'time.h' or 'crtdefs.h' headers
for more technical details.
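
If an unexpected change in the size of 'time_t' would break your code, a compile-time check can catch it at build time. Below is a minimal sketch; the 'COMPILE_TIME_ASSERT' macro is a common hand-rolled helper ( a hypothetical name, not part of the CRT ):

#include <time.h>

/* Hand-rolled compile-time assertion: the array size becomes -1
   ( a compile error ) when the condition is false */
#define COMPILE_TIME_ASSERT( cond, name ) \
    typedef char name[ ( cond ) ? 1 : -1 ]

/* Compilation fails if 'time_t' is not the expected 8 bytes */
COMPILE_TIME_ASSERT( sizeof( time_t ) == 8, TimeTypeIs64Bits );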

It is clear what happened, but some questions still remain. Here are a couple of
questions related to the topic:

Q: When did Microsoft introduce the '__time64_t' type?
A: I think in 2005, or possibly some time before that. The Visual Studio 2005 headers contain
a declaration of the '__time64_t' type; it would be nice to check whether Visual Studio 2003
has it as well.

Q: Does it affect my application?
A: It doesn't affect your application if you don't use the 'time_t' type or the CRT time functions.

Q: How is the 'time_t' type used by the 'time(...)' CRT function?
A: The 'time(...)' CRT function returns the system time as the number of seconds elapsed since midnight of January 1st, 1970.

Classically, 'time_t' is a signed 32-bit type based on 'long'.
The maximum value it can hold is 2,147,483,647, that is, seconds in our case, and it will
reach that value some time after January 18, 2038.
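
As a quick sanity check, the exact rollover moment can be computed from the 32-bit limit itself. Here is a minimal sketch; note that the printed date is in UTC, and in time zones west of UTC the local date is still January 18th, which matches the MSDN wording:

#include <stdio.h>
#include <time.h>

int main( void )
{
    /* The maximum value of a signed 32-bit 'time_t', in seconds since the epoch */
    time_t tMax = ( time_t )2147483647;
    /* 'gmtime(...)' converts it to a calendar date in UTC */
    struct tm *ptm = gmtime( &tMax );
    /* Prints: 2038-01-19 03:14:07 UTC */
    printf( "%04d-%02d-%02d %02d:%02d:%02d UTC\n",
            ptm->tm_year + 1900, ptm->tm_mon + 1, ptm->tm_mday,
            ptm->tm_hour, ptm->tm_min, ptm->tm_sec );
    return 0;
}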

To be honest, I really don't understand why 'time_t' wasn't declared as 'unsigned long' or
'double' in the first place. Have you ever seen a negative system time?

If 'time_t' were declared as 'unsigned long', it would reach its maximum
value of 4,294,967,295 seconds some time in 2106 or so: 4,294,967,295 / ( 365 * 24 * 3600 ) is
about 136 years, and 1970 + 136 = 2106. This is a very simple calculation based on 365 days in every year.

It makes sense to note that the time difference calculated by the CRT function 'difftime(...)'
is returned as a 'double'.
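
For example, a minimal sketch:

#include <stdio.h>
#include <time.h>

int main( void )
{
    time_t t1 = time( NULL );
    time_t t2 = t1 + 90;    /* pretend that 90 seconds have elapsed */
    /* 'difftime(...)' returns the difference as a 'double', so the result
       does not depend on whether 'time_t' is 32-bit or 64-bit */
    printf( "Elapsed: %.1f seconds\n", difftime( t2, t1 ) );
    return 0;
}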

Q: How could I enforce the 32-bit 'time_t' on a 32-bit platform?
A: It depends on the platform and the C/C++ compiler, and you should verify the 'types.h' and 'time.h'
headers. In the case of a 32-bit Windows platform and Visual Studio, the following declarations
should work:

#define _USE_32BIT_TIME_T
#include <time.h>

or

#define _USE_32BIT_TIME_T
#include <sys/types.h>

There is also a possibility that a 64-bit 'time_t' type is simply not declared in the headers of
your C/C++ compiler.
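
A minimal end-to-end check, assuming a 32-bit Windows build with Visual Studio:

/* Must be defined before any CRT header is included */
#define _USE_32BIT_TIME_T
#include <stdio.h>
#include <time.h>

int main( void )
{
    /* With the macro defined, this should print 4 on a 32-bit Windows build */
    printf( "sizeof( time_t ) = %d\n", ( int )sizeof( time_t ) );
    return 0;
}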

Q: Is it important to have a 64-bit based 'time_t' instead of a 32-bit based one?
A: That depends on the application. For example, if the matter is NOT taken into account,
some problems could happen after January 18, 2038.

It is possible that the binary compatibility of some data structures used by interacting
applications working on 32-bit or 64-bit Desktop and 16-bit or 32-bit Embedded platforms
will be broken.

Binary compatibility of data structures assumes the following. If a structure:

typedef struct tagS
{
    float fV;
    time_t tV;
} S;

is used for transferring some data between applications working on different platforms, then:

(A) on a 16-bit Embedded platform, sizeof( S ) equals 8 ( 4 + 4 );

(B) on a 32-bit Embedded platform, sizeof( S ) equals 8 ( 4 + 4 ), for example on Windows CE ( ARMV4i );

(C) on a 32-bit Windows platform, if '_USE_32BIT_TIME_T' is defined, sizeof( S ) equals 8 ( 4 + 4 );

(D) on a 32-bit Windows platform, if '_USE_32BIT_TIME_T' is NOT defined, sizeof( S ) equals 12 ( 4 + 8 );

(E) on a 64-bit platform, sizeof( S ) equals 12 ( 4 + 8 ).

Now, consider cases (A) and (D):

- on the 16-bit Embedded platform, an application A allocated a memory buffer ( size 8 bytes )
to receive some data in the format of structure 'S';

- on the 32-bit Windows platform, an application D allocated a memory buffer ( size 12 bytes )
to send some data in the format of structure 'S';

- the data is received on the 16-bit Embedded platform and saved in the memory buffer;

- three scenarios are possible:

- application A on the 16-bit Embedded platform received and saved 8 bytes but has an
invalid value in the member 'tV' of the structure 'S', because 4 bytes are "lost".
The memory and the system stack are not corrupted, but the result of processing is unpredictable;

- application A on the 16-bit Embedded platform received and saved 12 bytes instead
of 8 bytes and crashes (!) because of memory or system stack corruption;

- both applications A and D take into account the differences in the sizes of data in the format
of the structure 'S' and do correct processing ( one way to achieve this is sketched after the notes below ).

Note 1: An application crash on an Embedded platform is a complete "disaster", because updating
firmware is not an easy task.

Note 2: Structure packing, member padding, alignment, and little/big-endian issues are not
considered, to keep the cases simple.

Note 3: RPC or DCE APIs could resolve all of these problems, but they are not considered here, again for simplicity.
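
For the third scenario, one common technique is to define the transferred data with explicit, fixed-width types instead of 'time_t', so that both sides agree on the layout. Below is a minimal sketch, assuming a C99-style 'stdint.h' is available ( older compilers may need equivalent typedefs ); the name 'tagWireS' is hypothetical:

#include <stdint.h>

/* A packed wire format with explicit, fixed-width fields:
   both sides see 4 + 8 = 12 bytes regardless of the platform */
#pragma pack( push, 1 )
typedef struct tagWireS
{
    float   fV;    /* 4 bytes */
    int64_t tV;    /* 8 bytes, an explicitly 64-bit time value */
} WireS;
#pragma pack( pop )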

Q: Was it the right decision by Microsoft to introduce the '__time64_t' type?
A: Yes. But this is also a great example of a kind of "technological negligence", because
the software developer didn't add a #pragma message ( ... ) directive to inform other
software developers that the '__time64_t' type is used on a 32-bit platform, like:

...
#ifndef _TIME_T_DEFINED
#ifdef _USE_32BIT_TIME_T
typedef __time32_t time_t; /* time value */
#else
#pragma message ( "Attention: 'time_t' type is based on a 64-bit '__time64_t' type" )
typedef __time64_t time_t; /* time value */
#endif
#define _TIME_T_DEFINED /* avoid multiple def's of time_t */
#endif
...

In that case I would have detected the issue in 2009!

The change from a 32-bit 'time_t' to a 64-bit 'time_t' is NOT a small change. I still
remember the Y2K events, when many companies were very busy with software changes and with
verifying that all time-related issues were taken into account.

Now, is a "Y2K38 disaster" looming? I hope not, but the issues with the 'time_t' type deserve some attention.

PS: Unfortunately, it created some problems on a project I'm currently working on. Microsoft
"broke" the default compatibility of the 'time_t' type and some CRT time functions, which have
been defined by the ANSI standard for a very long time and were supposed to be "frozen" for changes.

---------------------------------------------------------------------------------------------------------------------------------------------------------------

1 Reply
Steven_L_Intel1
Employee
Microsoft changed the definition of time_t in VS2005. I remember this as it broke one of the Fortran run-time library routines once it was built with VS2005.