What Y2K Can Teach Us About the CrowdStrike Outage

We were warned that our embrace of computer technology would lead to disaster.

By integrating computers into more and more areas of our lives, we were told, we had created a scenario in which an everyday malfunction could bring everything crashing down. Air travel would grind to a halt, bank accounts would become inaccessible, essential services would be severely disrupted, and people would stare in horror as the computers they depended on simply stopped working.

These warnings were not in reference to CrowdStrike’s IT outage last week, but to the Year 2000 computer problem (Y2K), when experts cautioned that disaster would strike as 1999 became 2000 unless precautions were taken.

The events that unfolded on July 19, 2024, when a faulty software update from cybersecurity firm CrowdStrike caused widespread outages for Microsoft Windows users, seemed like a replay of the disruptions predicted for the year 2000. Even as CrowdStrike rushed to fix the problem, air travel was grounded, many people had difficulty accessing their bank accounts, essential services were seriously disrupted, and many could do little but stare in frustration at the “blue screen of death” on their computers.

There are important differences between the CrowdStrike outage and the scenario experts warned about ahead of Y2K. But there are also important parallels. Perhaps most important is what the CrowdStrike outage reveals about what we didn’t learn from Y2K itself: The computer systems we depend on are fragile and prone to failure. And these systems are so intertwined with our daily lives that when disaster strikes, it can hit us everywhere at once.

Read more: CrowdStrike’s Role in Microsoft’s IT Outage Explained

The roots of the Y2K problem stretch back decades. In the 1950s and 1960s, computer memory was expensive, and computer professionals were under pressure to save money. One solution they came up with was to truncate date data, cutting off the century digits so that 1939 would be encoded as 39. And it worked: it saved memory, it saved money, and it did not affect the calculations computers made with the data. After all, 1999 minus 1939 equals 60, and 99 minus 39 also equals 60. But 2000 minus 1939 equals 61, while 00 minus 39 equals -39. When computers encountered these incorrect results, some systems and programs began to produce garbage data, while others failed entirely.
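
To make the arithmetic concrete, here is a minimal, purely illustrative sketch, written in Python rather than the COBOL of the era, of what happens when a program keeps only the last two digits of the year. The years_elapsed_two_digit helper below is hypothetical and not drawn from any actual Y2K-era system.

```python
# Illustrative sketch only: a date calculation that stores just the last
# two digits of each year, as many mid-century systems did to save memory.

def years_elapsed_two_digit(start_year: int, current_year: int) -> int:
    """Compute elapsed years using only the two-digit forms of each year."""
    return (current_year % 100) - (start_year % 100)

# Through 1999 the shortcut is harmless:
print(years_elapsed_two_digit(1939, 1999))  # 60 -- same as 1999 - 1939

# At the century rollover, the same arithmetic breaks:
print(years_elapsed_two_digit(1939, 2000))  # -39, instead of the correct 61
```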

It turned out that simple programming decisions made in the moment could have long-lasting and potentially disastrous consequences, especially when so many people and systems came to depend on those underlying programs.

In the 1990s, computers were still in the early stages of becoming an everyday feature of people’s homes, but they had already assumed important functions for businesses and governments and had become intimately intertwined with other vital infrastructure. Congressional hearings with titles like “Y2K: Will the Lights Go Out?” “Y2K and Nuclear Power: Will the Reactors React Responsibly?” “Year 2000 and Oil Imports: Can Y2K Bring Back the Gas Lines?” and “McDonald’s Corporation: Is the World’s Largest ‘Small Business’ Y2K Ready?” all attested to how intimately daily life had become tied to vulnerable computer systems. There wasn’t yet a computer in every home (or pocket), but computers were already integral to keeping the lights on in those homes.

Congress held its first Y2K hearing in 1996, presciently titled “Is January 1, 2000, the Date for Computer Disasters?” There, Kevin Schick, then a research director at the technology research and advisory firm Gartner Group, declared, “We use computers for everything — we have written programs to do everything.” By “everything,” Schick emphasized to the committee that he wasn’t just talking about the dangers to industries and individual companies if their systems failed. He was drawing attention to the fact that so much of the nation’s (and the world’s) critical infrastructure was now tied to computer systems. Senator Robert F. Bennett (R-Utah) underscored this point during a Y2K hearing on “Utilities and the National Electric Grid,” where he referenced a study he had conducted on the Y2K preparedness of the ten largest electric, oil, and gas companies. The results of that study led him to an ominous conclusion: “I cannot be optimistic… I am genuinely concerned about the prospects for power shortages as a result of the change in the millennium date.”

Read more: 20 years later, the Y2K bug seems like a joke because the people behind the scenes took it seriously

In its first report, the Senate Special Committee on the Year 2000 Problem echoed Schick, describing Y2K as “the first widespread challenge of the information age,” offering “a crash course in the fragile mechanisms of information technology.”

The special committee emphasized that advances in computer technology had been enormously beneficial, but that with those benefits came new dangers. And while the committee did not encourage anyone to stock up on canned food and head for the hinterlands, or argue that computer networks should be dismantled, it did emphasize that Y2K was “an opportunity to educate ourselves firsthand about the nature of the threats of the 21st century” and an opportunity “to reflect carefully on our dependence on information technology and the implications of interconnectedness, and to work to protect what we have long taken for granted.”

Y2K led to a greater awareness of our collective reliance on computers for everything from banking to keeping the electricity on.

But although the scale of the problem seemed enormous, the predicted disaster did not materialize.

The reason, however, was not that we were lucky or that the problems were exaggerated. Instead, those in and around the IT community, with the coordination and support of the federal government, took the problem seriously and marshaled the attention and resources needed to solve Y2K before their fears became reality.

Given that Y2K ultimately didn’t cripple air travel, block access to bank accounts, or disrupt emergency services, it’s easy to laugh about it in hindsight. And yet the CrowdStrike outage is a reminder that if we don’t take “our dependence on information technology” seriously, the joke will eventually be on us.

As we look back on Y2K today, amid our myriad computer problems, we should remember the work that went into fixing it. But we should also remember that part of what Y2K revealed was that, as Kevin Schick put it at that first Congressional Y2K hearing, “we use computers for everything — we have written programs to do everything.” And those computers, which we use for everything, are fragile and error-prone. Moreover, when we consider the costly headaches of the CrowdStrike outage, it’s worth remembering something else Schick said at that hearing: “It’s very expensive to fix something once it’s broken, compared to making sure you’ve taken care of the problem before it breaks.”

Speaking in the early weeks of the year 2000, Rep. Constance Morella (R-Md.) asked, “Will Y2K inspire a conscious effort toward greater long-term planning and more reliable and secure technology, or will it merely prolong the short-sighted thinking that made Y2K so costly?” Unfortunately, the ridicule with which Y2K is often treated suggests that it did not really succeed in inspiring such an effort. We too often fail to apply lessons learned when things are going well, or move on too quickly once other events begin to dominate the headlines. The CrowdStrike outage makes clear that we still have much work to do to achieve the “more reliable and secure technology” Morella hoped for, and that the “short-sighted thinking” she warned about still poses major problems for us nearly 25 years later. So when we look back at Morella’s question today, we are left not with an answer but with another question: if Y2K did not inspire that effort, will the CrowdStrike outage?

It’s too early to answer that question. But almost 25 years after Y2K, we can’t say we weren’t warned.

Zachary Loeb is an assistant professor in the history department at Purdue University. He works on the history of technology, the history of disasters, and the history of technological disasters. He is currently working on a book about the Year 2000 Computer Problem (Y2K).

Made by History takes readers beyond the headlines with articles written and edited by professional historians. Read more about Made by History at TIME here. The opinions expressed do not necessarily reflect the views of TIME’s editors.

Write to Made by History at madebyhistory@time.com.