WI: The Worst that Y2K Could Be?

The year 2000 problem was a bust. Some would say this was the result of proper planning beforehand, which alleviated or headed off potential problems. Others would argue it was a waste of resources for a panic that was mostly imagined, and you can point to organizations and nations that took few precautions in support of the latter. In fictional portrayals, Y2K was widely assumed to be the MacGuffin for doomsday: technology stops, social order breaks down, and perhaps even nuclear weapons get launched...somehow.

My question is, how bad could a year 2000 problem actually be in reality?
 
I'd say that if the date rollover problem wasn't identified, some systems could stop working until they were fixed. No social breakdown likely.
 
By the time it actually became an issue, modern computers already used four-digit years.

Anyway, I know people who tested this on their computers, moving the calendar ahead to the year 2000. It did absolutely nothing.
 
Hi,

Computer programmer here: in most cases, absolutely nothing. The worry was that embedded systems and infrastructure would fail because they only handled two-digit years.

http://www.fluffycat.com/COBOL/Y2K/

So, in sum, what would fail is the business logic. For example, send out cheques "if" the date was greater than 01, and suchlike. The program in general would not "crash" or fail to run (although the device might fail to do its job). I used COBOL because it is the language of the banks, underlying infrastructure, and backends that everyone was worried about. For a more modern language like C, the problem is even more specific: the time value would simply wrap around due to integer overflow in 2038.

https://commons.wikimedia.org/wiki/File:Year_2038_problem.gif#/media/File:Year_2038_problem.gif

In other words, big deal.

Most commercial products are replaced every few years or decades, and those that are not are fault tolerant. I suppose the worst that could happen is insulin pumps somewhere failing to deliver a shot at the right time, a VCR getting bricked, or cheques getting sent out at the wrong time.
 
One of my cousins was the IT chief at a local factory. They missed testing a few older legacy devices, which refused to work and had to be reprogrammed after the fact. 99.99% of the factory's programmable equipment was checked and sorted out in time. His main bitch was that management dithered on taking action for several years, then were nonplussed to find everyone else was busy finishing up checks and fixes elsewhere. They had to pay a premium for programmers from India to make the maximum last-minute effort to check all the essential equipment & install patches. He took early retirement a couple of years later, leaving management still unable to make a decision on replacing their 1960s, '70s, & 'modern' '80s-era programmable machine control systems.
 
I was a programmer working on a couple of Y2K projects at the end of the last century and there were a number of issues that needed to be fixed. In one of the projects this was harder than expected because we couldn't actually find all of the source code of the programs that were running in production. That was a very old system, originally written in the 1960s but still in use. A saying at the time was "Old hardware goes into a museum, old software goes into production".

The main problem with the fictional (and indeed journalistic) portrayal of Y2K was the assumption that everything would go wrong at once at midnight on 31-Dec-1999. In reality, the Y2K bug started showing up a few years in advance, which was enough of a hint that IT managers finally started taking the issue seriously. A good example is the expiry date on credit cards, which is quoted as a two digit year. Credit cards are normally issued with an expiry date three years in the future, so cards issued in March 1997 would have an expiry date of 03/00. If the expiry date check just worked with two digit years then it would find that 00 was before 97 and treat the card as expired. So that's a Y2K bug that affected things in 1997 - it was telling that most credit cards issued in 1997 had expiry dates of 1999 to avoid the issue and buy a bit more time to get it fixed.


Cheers,
Nigel.
 
To expand on NCW8's comment, I seem to recall reading that the first hits of the Y2K issue came in 1970, when newly-issued 30-year mortgages started having their last payments fall due in 2000.

As a computer programmer myself, I know of several common strategies used for managing times and dates, each with corresponding failure modes:

  • Seconds (or similar small time units) since an "epoch" time (midnight UTC on Jan 1, 1970 on UNIX and derived systems), stored as a single integer value and converted to month, day, year, hour, minute, etc. when needed. Early UNIX systems used a signed 32-bit integer, which will roll over in 2038. Modern UNIX-derived systems use 64-bit integers, which will roll over about 300 billion years from now. Windows uses an unsigned 64-bit integer counting 100-nanosecond intervals since Jan 1, 1601, just to be difficult.
  • Months/days/years as separate integer values. The year can be the full value as a 16, 32, or 64-bit integer (a signed 16-bit integer will roll over about 30,000 years from now, and larger integers later still), or years since an epoch date (usually 1900). Years-since-1900 would typically be stored as a signed 8-bit integer in older or embedded systems where space is at a premium. This would run into two problems:
    • A cosmetic problem in the year 2000 if output is formatted by prefixing "19" to the stored year rather than printing 1900 + year, so Jan 1, 2000 would be printed as "Jan 1, 19100".
    • A rollover problem in 2028 (1900 + 128): the value 128 in a signed 8-bit integer wraps to -128, so the system would think the year was 1900 - 128 = 1772.
  • A two-digit or four-digit decimal string, representing years-since-1900 and the full year respectively. This was done for space-saving reasons, mostly in older COBOL programs. A two-digit decimal string is the classic Y2K bug, which would make the software treat Jan 1, 2000 as Jan 1, 1900.
 
That was a hoot. I hadn't listened to Art Bell's show in many years. I remember it well from working the late shift - the engineers & electricians on the late shift had a game of debunking his callers.
 

One format used on the IBM mainframe stored the date as the number of years since 1900 and the number of days since the beginning of the year, squeezed into two bytes - 7 bits for the year and 9 for the day. It could cope with dates up until the end of 2027, if the formatting was done correctly (i.e. not just prefixing the year with "19").

One of the strategies of dealing with dates saved with two digit years was to apply windowing. Basically the years "00" to "49" (for example) would be treated as being in the 21st century, so the date fields could cope with dates in the range 1950 - 2049. Hopefully the systems will be replaced or a more permanent fix implemented in the next thirty years, or we could be faced with a 2050 bug.

Edit: Going back to the OP, just increasing the number of Y2K bugs would result in more issues occurring earlier. This would lead to Y2K projects being started earlier and more time being available to fix the issues. Paradoxically you might get a worse Y2K problem if there were fewer bugs. There would be more of a last minute panic to fix things and a greater chance of bugs being missed in testing.


Cheers,
Nigel
 