AI is an existential threat – but not in the way you think

The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp rise in fear of AI. In recent months, executives and AI safety researchers have been offering estimates, dubbed “P(doom),” of the likelihood that AI will cause a large-scale catastrophe.

Concerns reached a fever pitch in May 2023, when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google, and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.

You might wonder how such existential fears would play out. A familiar scenario is Oxford philosopher Nick Bostrom’s thought experiment about the “paperclip maximizer.” The idea is that an AI system tasked with producing as many paperclips as possible might go to extreme lengths to find resources, such as destroying factories and causing car crashes.

A less resource-intensive variant imagines an AI tasked with securing a reservation at a popular restaurant that shuts down mobile networks and traffic lights to prevent other diners from getting a table.

Whether it’s office supplies or dinner reservations, the basic idea is the same: AI is fast becoming an alien intelligence, good at achieving goals but dangerous because it will not necessarily align with the moral values of its creators. And in its most extreme version, this argument morphs into explicit fears about AIs enslaving or destroying the human race.

Actual damage

Over the past few years, my colleagues and I at the Applied Ethics Center at UMass Boston have been studying how engaging with AI affects people’s understanding of themselves. I argue that these catastrophic fears are overblown and misplaced.

Yes, AI’s ability to create convincing deepfake video and audio is frightening, and it can be abused by people with bad intentions. In fact, it is already happening: Russian agents likely tried to embarrass Kremlin critic Bill Browder by tricking him into a conversation with an avatar of former Ukrainian President Petro Poroshenko. Cybercriminals are using AI voice cloning for a variety of crimes, from high-tech heists to ordinary scams.

AI decision-making systems that provide credit approvals and hiring recommendations carry the risk of algorithmic bias because the training data and decision models they run on reflect long-standing societal biases.

These are big problems, and they require policymakers’ attention. But they have been around for a while, and they are hardly catastrophic.

Not in the same class

The Center for AI Safety’s statement lumped AI in with pandemics and nuclear weapons as a major risk to civilization. There are problems with that equation. COVID-19 has killed nearly 7 million people worldwide, caused a massive and ongoing mental health crisis, and created economic challenges including chronic supply chain shortages and uncontrollable inflation.

Nuclear weapons killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War, and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders about how to respond to international aggression, as is currently playing out with Russia’s invasion of Ukraine.

AI is nowhere near capable of doing this kind of damage. The paperclip scenario and others like it are science fiction. Existing AI applications perform specific tasks rather than making broad judgments. The technology is still far from being able to decide on and then plan the goals and subgoals necessary to stop traffic to get you a seat at a restaurant, or blow up a car factory to satisfy your itch for paperclips.

Not only does the technology lack the complex multi-layered assessment capabilities required for these types of scenarios, it also does not have autonomous access to enough of our critical infrastructure to cause this type of damage.

What it means to be human

There is, in fact, an existential risk inherent in the use of AI, but that risk is existential in a philosophical sense, not an apocalyptic one. AI in its current form can change the way people see themselves. It can degrade skills and experiences that people consider essential to being human.

For example, humans are judging creatures. People weigh details and render judgments every day, at work and in their free time, about whom to hire, who should get a loan, what to watch, and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer judgments people make, the worse they are likely to become at making them.

Or consider the role of chance in people’s lives. People value serendipity: stumbling across a place, person, or activity by accident, getting sucked into it, and appreciating the role chance played in these meaningful discoveries. But the role of algorithmic recommendation systems is to reduce that kind of serendipity and replace it with planning and prediction.

Finally, consider the writing capabilities of ChatGPT. The technology is well on its way to eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

Not dead but weakened

So no, AI won’t blow up the world. But its increasingly uncritical embrace, in all sorts of narrow contexts, is eroding some of the most important human skills. Algorithms are already undermining people’s ability to make judgments, enjoy chance encounters, and refine critical thinking.

The human species will survive such losses. But our way of being will be impoverished in the process. The fantastical fears surrounding the coming AI catastrophe, singularity, Skynet, or whatever you want to call it, obscure these more subtle costs. Consider T. S. Eliot’s famous closing lines from “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”

This article is republished from The Conversation, a nonprofit, independent news organization that brings you facts and reliable analysis to help you understand our complex world. It was written by: Nir Eisikovits, UMass Boston

The Applied Ethics Center at UMass Boston receives funding from the Institute for Ethics and Emerging Technologies. Nir Eisikovits serves as data ethics advisor for Hour25AI, a startup focused on reducing digital distractions.
