AI may be the cause of our inability to contact alien civilizations

This article was originally published at The Conversation. The publication contributed the article to Space.com's Expert Voices: Op-Ed & Insights.

Michael Garrett is the Sir Bernard Lovell Chair of Astrophysics and Director of the Jodrell Bank Centre for Astrophysics at the University of Manchester.

Artificial intelligence (AI) has made astonishing progress in recent years. Some scientists are now looking to develop artificial superintelligence (ASI) – a form of AI that would not only surpass human intelligence but would also not be bound by the learning speeds of humans.

But what if this milestone isn’t just a remarkable achievement? What if it also poses a formidable bottleneck in the development of all civilizations, a bottleneck so challenging that it hinders their long-term survival?

Related: Could AI find alien life faster than humans, and would it tell us?

This idea is at the heart of a research article I recently published in Acta Astronautica. Could AI be the universe’s “great filter” – a barrier so difficult to overcome that it prevents most life from evolving into space-faring civilizations?

This is a concept that could explain why the Search for Extraterrestrial Intelligence (SETI) has yet to discover the signatures of advanced technical civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi paradox. The paradox asks why, in a universe vast and old enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations. The hypothesis suggests that there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring entities.

I believe the rise of ASI could be such a filter. The rapid advancement of AI, which could potentially lead to ASI, could intersect with a crucial stage in the development of a civilization: the transition from a single-planet species to a multi-planet species.

A silver cylinder flies toward a red-orange planet.

This is where many civilizations could falter, with AI advancing much faster than our ability to control it or sustainably explore and populate our solar system.

The challenge with AI, and ASI in particular, lies in its autonomous, self-reinforcing and self-improving nature. It has the potential to enhance its own capabilities at a speed that far outpaces the evolutionary timelines we would follow without AI.

The chances of something going seriously wrong are enormous, leading to the demise of both biological and AI civilizations before they ever have a chance to become multiplanetary. For example, as countries increasingly rely on and cede power to autonomous AI systems that compete with each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate that the typical lifespan of a technological civilization could be less than 100 years. That is roughly the time between when a civilization becomes able to receive and transmit signals between the stars (1960, in our case) and the estimated emergence of ASI on Earth (2040). This is alarmingly short when set against the cosmic timescale of billions of years.

When this estimate is coupled with optimistic versions of the Drake equation – which attempts to estimate the number of active, communicating alien civilizations in the Milky Way – it suggests that only a handful of intelligent civilizations exist at any given time. Moreover, like us, they can be quite difficult to detect due to their relatively modest technological activities.
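To make that concrete, the Drake equation's standard form is shown below. The short lifespan L follows the estimate above, but the values for the other factors are optimistic and purely illustrative; they are not taken from the paper.

N = R* × fp × ne × fl × fi × fc × L

Here R* is the rate of star formation in the galaxy, and the f and n terms are the fraction of stars with planets, the number of habitable planets per system, and the odds of life, intelligence and detectable communication arising. If those first six factors multiply to roughly 0.04 new detectable civilizations per year, then a lifespan of L = 100 years gives N ≈ 0.04 × 100 ≈ 4, only a handful of civilizations broadcasting at any one time.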

Radio telescopes point to the sky at sunset.

Wake-up call

This study is not just a cautionary tale of possible doom. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.

This isn’t just about preventing the malicious use of AI on Earth; it’s also about ensuring that the evolution of AI aligns with the long-term survival of our species. It suggests that we need to devote more resources to becoming a multiplanetary society as quickly as possible – a goal that has been dormant since the heady days of the Apollo project, but has recently been rekindled by the advances of private companies.

As the historian Yuval Noah Harari has noted, nothing in history has prepared us for the impact of introducing non-conscious, super-intelligent entities to our planet. Recently, the implications of autonomous AI decision-making have led to calls from prominent leaders in the field for a moratorium on AI development until a responsible form of control and regulation can be introduced.

But even if every country agreed to adhere to strict rules and regulations, rogue organizations would be difficult to rein in.

The integration of autonomous AI into military defense systems should be an area of particular concern. There is already evidence that people will voluntarily relinquish significant power to increasingly capable systems, because they can perform useful tasks much faster and more effectively without human intervention. Governments are therefore reluctant to regulate in this area, given the strategic advantages that AI offers, as has recently been devastatingly demonstrated in Gaza.

RELATED STORIES:

– Should we look for artificial intelligence in the search for extraterrestrial life?

– Machine learning can help detect alien technology. Here’s how

– Fermi Paradox: Where are the aliens?

This means that we are already coming dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and circumvent international law. In such a world, surrendering power to AI systems to gain a tactical advantage could inadvertently trigger a series of rapidly escalating, highly destructive events. In an instant, the collective intelligence of our planet could be wiped out.

Humanity is at a pivotal point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization, or succumb to the challenges posed by our own creations.

Using SETI as a lens through which we can examine our future development adds a new dimension to the discussion about the future of AI. It is up to all of us to ensure that when we reach for the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope – a species that has learned to thrive alongside AI.

Originally published at The Conversation.
