Many moons ago, when computers were a hobby rather than a profession for me, I was in the Merchant Navy. Part of my job involved spending 4 hours every night, between midnight and 04:00, on the bridge ensuring that the ship was where it should be and not about to hit anything. The thing is that in the middle of the ocean there is not, usually, much around that you could hit, so it tended to be a bit boring. Thus I found myself talking to whoever was on watch with me about all sorts of strange things. For example, with one person I spent many happy hours discussing the very Zen-like statement “Reality is an Illusion”. On another occasion we talked about dates. No, not about going out with someone, but dates like 7th March 1953. Eventually we got around to wondering what the date itself was, not the day but the date, and we decided that it was simply a label used to define a specific period in time. [See, I told you we got bored :-)] At the time it was simply an academic discussion with no real use. However, it has now become rather important, because soon the date will be 1st January 2000.

The Year 2000 Problem or How I Stopped Worrying About the Bomb and Started Worrying About the Millennium

By now you must have heard of the Year 2000 problem, often abbreviated to Y2K, and you are probably aware that it is caused by computers not being able to cope with the millennium.

But why is there going to be such a problem?

Well, it is all to do with the way that a computer stores the date. Actually, it's down to the way computers store everything. What you see on your screen is nothing like what the computer stores inside itself. A computer is basically a lot of switches which are either on or off, and which can be represented mathematically by 1 or 0. In other words, everything to a computer is a collection of 1s and 0s.

Now, for reasons I won't go into at the moment, computers use 8 of these 1s and 0s (which, by the way, are known as bits) at a time. This collection of 8 bits is known as a byte, and there are 256 possible different combinations of 1s and 0s in a byte.

A little diversion. Early computers were not as reliable as today's, so it was possible for a byte to get corrupted. In order to enable the computer to tell if a particular byte had been corrupted, parity was used. Basically, the eighth bit was set to either 1 or 0 depending upon the number of 1s in the other 7 bits. There were two ways of doing this: even and odd parity. With even parity, if there was an odd number of 1s then the eighth bit was set to 1, so that there was always an even number of 1s. With odd parity the eighth bit was set so that there was always an odd number of 1s. This made it easy for the computer to tell if a particular byte had been corrupted.

So although there are eight bits in a byte, effectively only 7 were used, giving a total of 128 possible combinations.
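If you fancy seeing that in code, here is a little C sketch of even parity. It is only my own illustration of the idea, not how any particular machine actually did it:

    #include <stdio.h>

    /* Count the 1 bits in the low 7 bits and return the parity bit that
       makes the total number of 1s even (even parity). */
    unsigned char even_parity_bit(unsigned char value)
    {
        int ones = 0;
        for (int bit = 0; bit < 7; bit++)
            if (value & (1 << bit))
                ones++;
        return (unsigned char)(ones % 2);  /* 1 if the count was odd, making the total even */
    }

    int main(void)
    {
        unsigned char q = 0x51;                 /* 101 0001 - the letter Q, as it happens */
        unsigned char parity = even_parity_bit(q);
        unsigned char stored = (unsigned char)(q | (parity << 7));  /* parity goes in the eighth bit */
        printf("7-bit value %02X, parity bit %d, stored byte %02X\n", q, parity, stored);
        return 0;
    }

Flip any single bit of the stored byte and the count of 1s goes odd, which is how the computer spots the corruption.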

So What?

Well, in the early days of computers it was easier to deal with a single byte than with two or more. Not only that, but storage, both RAM and disk space, was expensive, so as much as possible had to be squeezed into a single byte. Since there were only 128 possible combinations, it was decided that the contents of the byte would mean different things in different circumstances. So, for example, the value 101 0001 could be the letter Q, or an instruction to the CPU to push the contents of the CX register onto the stack, or it could be a number, say 81. It just depended upon the circumstance. To put it another way, the CPU knew whether it was expecting data or an instruction and interpreted the value accordingly.
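Just to illustrate the point, here is a tiny C sketch of my own showing the same byte, 101 0001 (51 hex), being read as a number and as a letter. The instruction reading is the CPU's business, so C can only mention it in a comment:

    #include <stdio.h>

    int main(void)
    {
        unsigned char b = 0x51;   /* binary 101 0001 */

        printf("As a number:    %d\n", b);   /* prints 81 */
        printf("As a character: %c\n", b);   /* prints Q  */
        /* Fed to an x86 CPU as an instruction, the very same byte
           0x51 means PUSH CX - same bits, third meaning. */
        return 0;
    }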

The real problem with this approach, or for that matter with any practical approach, is numbers. The trouble with numbers is that, unlike letters or instructions to the CPU, there is an infinite number of them. To make matters worse, there is an infinite number of numbers between any two numbers.

Computers can handle this problem in a number of different ways. The most common is to split the numbers up into integers and floating point numbers. An integer is simply a whole number such as 1, 2, 3, 453, and so on. A floating point number is anything else and, as far as this article is concerned, we can ignore them.

It should be obvious that a computer can store, in one byte, integers from 0 up to 127 if one bit is given over to parity, or up to 255 if all eight bits are used. However, usually one bit is used to indicate whether the number is negative or positive, so the actual range is -128 to +127. If you want a larger number then you use two or more bytes.
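Purely as an illustration, a few lines of C show the ranges a single byte gives you on a typical PC, and what happens when you run out of room:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* On a typical PC a char is one 8-bit byte. */
        printf("unsigned byte: %d to %d\n", 0, UCHAR_MAX);          /* 0 to 255    */
        printf("signed byte:   %d to %d\n", SCHAR_MIN, SCHAR_MAX);  /* -128 to 127 */

        unsigned char count = 255;
        count = count + 1;   /* wraps around to 0 - the byte simply runs out of room */
        printf("255 + 1 stored in one byte = %d\n", count);
        return 0;
    }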

Remember right back at the start I was asking what a date actually is? Well, one way of looking at it is to say it is a collection of numbers. Take, for example, 27th May 1997. In the UK this would be written, in numeric form, as 27/05/97. In other words, we can think of the date 27th May 1997 as the three numbers 27, 5 and 97, three nice small integers. In fact the largest possible values are: for the day, 31; for the month, 12; and for the year - ahh, well, it could go up to anything. But if we only use the last two digits then the maximum value is 99. Thus we only need 3 bytes to store any date. Actually we can improve on that......
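As a sketch of the three-byte idea (the structure and field names below are just mine, not any particular program's):

    #include <stdio.h>

    /* A date squeezed into three bytes: one each for the day, the month
       and the last two digits of the year. */
    struct packed_date {
        unsigned char day;     /* 1 - 31 */
        unsigned char month;   /* 1 - 12 */
        unsigned char year;    /* 0 - 99, i.e. only the last two digits */
    };

    int main(void)
    {
        struct packed_date d = { 27, 5, 97 };   /* 27th May 1997 */
        printf("%02d/%02d/%02d takes %d bytes\n",
               d.day, d.month, d.year, (int)sizeof(d));
        return 0;
    }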

Hang on a minute! Surely that's the Millennium problem.

Err, yes. The decision was made to store only the last two digits of the year, and that is what is causing the problem. Come 1st January 2000 the date will be ambiguous: does 1/1/00 mean 1st January 1900 or 1st January 2000?

Hold it. The year is held in a single byte, which can hold values between 0 and 255. Since we only need to go up to 99, why can't we just add 100 to the year, that is, go from 31/12/99 to 1/1/100? Sure, it looks a little strange, but it will work, so where is the problem?

Well, remember I said RAM and disk storage were expensive in the early days, and that as much as possible had to be squeezed into each byte?

Yes.

Well, look at the maximum values for the day, 31, the month, 12, and the year, 99. In binary 31 is 1 1111, 12 is 1100 and 99 is 110 0011. Now count the number of bits used by each: 5, 4 and 7, which is 16 bits in total. So with a bit of careful programming you can just fit the date into 2 bytes. As an example, look at the DOS directory entries. As you know, when you do a dir you are given the date the file was last modified. This date is stored in the directory entry in bytes 23 and 24, as shown:-

 Byte         23 and 24 (taken together as a single 16-bit value)
 Bit          F  E  D  C  B  A  9  |  8  7  6  5  |  4  3  2  1  0
 Use                 Year          |     Month     |      Day
 Max value          0 - 127        |    1 - 15     |     1 - 31

As you can see, using this method the maximum possible value for the year is 127, so letting the year value carry on past 99 to 100 would work, but only until 2027.

Except that DOS does not actually hold the value of the last two digits of the year. That is, a value of 0 does not mean year xx00. DOS actually counts year 0 as being 1980, so in 2000 the value stored in bits 9-F will be 1 0100, which is 20 in decimal.
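If you want to see how those bits come apart, here is a small C sketch of my own that unpacks a 16-bit date in the format described above (7 bits of year counted from 1980, 4 bits of month, 5 bits of day):

    #include <stdio.h>

    /* Unpack a 16-bit DOS directory date:
       bits F-9 = years since 1980, bits 8-5 = month, bits 4-0 = day. */
    void print_dos_date(unsigned short packed)
    {
        int day   =  packed       & 0x1F;   /* low 5 bits  */
        int month = (packed >> 5) & 0x0F;   /* next 4 bits */
        int year  = (packed >> 9) & 0x7F;   /* top 7 bits  */
        printf("%04X -> %02d/%02d/%d\n", packed, day, month, year + 1980);
    }

    int main(void)
    {
        /* 1st January 2000: year field = 20 (1 0100), month = 1, day = 1 */
        unsigned short y2k = (unsigned short)((20 << 9) | (1 << 5) | 1);
        print_dos_date(y2k);                        /* 01/01/2000 */
        print_dos_date((17 << 9) | (5 << 5) | 27);  /* 27/05/1997 */
        return 0;
    }

Incidentally, with 7 bits for the year and 1980 as year 0, this particular scheme does not run out until 2107.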

Confusing, isn't it?

But don't worry, it gets worse :-) You see, there are other ways of storing dates. For example, these days PCs automatically know the date and time when you switch them on. In order to do this the date and time are stored in the CMOS. However, they are not stored like the date in the directory; instead a format called Binary Coded Decimal is used. And, of course, there are other ways still, ways which you can be sure some programmer somewhere has used.
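Binary Coded Decimal simply gives each decimal digit its own four bits, so the two-digit year 99 ends up stored as the byte 99 hex. A quick C sketch of my own, converting both ways:

    #include <stdio.h>

    /* BCD: each decimal digit gets its own 4 bits. */
    unsigned char to_bcd(int n)            /* n assumed to be 0 - 99 */
    {
        return (unsigned char)(((n / 10) << 4) | (n % 10));
    }

    int from_bcd(unsigned char b)
    {
        return (b >> 4) * 10 + (b & 0x0F);
    }

    int main(void)
    {
        printf("99 in BCD is %02X\n", to_bcd(99));     /* 99  -> 0x99 */
        printf("BCD 25 means %d\n", from_bcd(0x25));   /* 0x25 -> 25  */
        return 0;
    }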

It's the number of different ways that a date can be stored that is causing the problem. If the same method were used in all cases, conversion would be simple. As it is, you first have to find out which programs are storing dates, and then how they are storing them, before you can even begin to convert the dates.
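To give a concrete example of just one more of those ways: the C library itself holds the year as the number of years since 1900, so in 2000 it holds 100 - exactly the "let the year go past 99" idea from earlier. The storage copes fine; it is careless code displaying it that trips up, as this little sketch of my own shows:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* struct tm holds the year as years since 1900,
           so 2000 is stored as 100 and 2027 as 127. */
        struct tm newyear = { 0 };
        newyear.tm_mday = 1;            /* 1st January 2000 */
        newyear.tm_mon  = 0;            /* months count from 0 */
        newyear.tm_year = 2000 - 1900;  /* = 100 */

        printf("Correct: %02d/%02d/%d\n",
               newyear.tm_mday, newyear.tm_mon + 1, newyear.tm_year + 1900);

        /* The classic mistake: glueing "19" onto the stored value. */
        printf("Sloppy:  %02d/%02d/19%d\n",
               newyear.tm_mday, newyear.tm_mon + 1, newyear.tm_year);  /* 01/01/19100 */
        return 0;
    }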

But all I do is play games on my PC, so I don't have to do anything. Do I?

Well, that depends. From the point of view of the PC user there are two problems: will the PC handle the change from 99 to 00, and will the software you run handle it?

You don't need to worry about the games themselves, since they do not care about the date. The operating system might be a problem if you are running DOS and sort directories by date (Dir /O:D), but otherwise you should be OK.

As for the PC itself, if it is a modern one then it will handle the date change. Most older PCs will have a slight problem. If you leave the PC on as the date changes then it will change the date correctly, until you turn it off. But the next time you turn the PC on it will read the date from the CMOS and, since the CMOS only stores two digits of the year, the date will then be 100 years out. As far as I know this should not cause a problem with games, but if there are any that use the date, for example for copy protection or to limit play to a fixed number of days, then they might not work after 1st January 2000.

The problems for PC users will be with applications that use the date, for example databases, personal organisers and project planners. If all you use are 'off-the-shelf' applications, such as Access for example, then all you can do is check with the publisher that the version you have is millennium compliant. If it isn't, then update to a later version. If you have any applications written specifically for you then check with the programmer.

Of course, all I have talked about so far is the first millennium problem, the year change from 99 to 00. There is a second problem. As you know, every 4th year is a leap year, unless it's a century year. Thus programmers could check for a leap year very easily: if the year is divisible by 4 it's a leap year, unless the last two digits are 00. The trouble is that a century year that is divisible by 400 is a leap year after all. So 1900 was not a leap year, but 2000 is a leap year.
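The full rule is easy enough to write down; the trouble was that it wasn't always written. A short C sketch:

    #include <stdio.h>

    /* The full rule: divisible by 4, except century years,
       except century years divisible by 400. */
    int is_leap(int year)
    {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main(void)
    {
        printf("1900: %s\n", is_leap(1900) ? "leap" : "not leap");   /* not leap */
        printf("1996: %s\n", is_leap(1996) ? "leap" : "not leap");   /* leap     */
        printf("2000: %s\n", is_leap(2000) ? "leap" : "not leap");   /* leap     */
        return 0;
    }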

So as a games player I don't need to worry about the so-called millennium bug?

Well, not as far as your PC is concerned. But have you noticed how widespread computers are these days? There are computer chips in just about everything from your washing machine to aircraft. How many of them use a date? Well, offhand I can think of the video recorder, the satellite receiver, smart cards, digital watches and so on.

How many of them will work after the 1st January 2000?