‘Father of the Internet’ Reveals Reasons He Fears for Its Future

According to the people who created the Internet, it faces a range of threats that could endanger both the technology and the people who use it.

We are becoming increasingly dependent on a technology that is more vulnerable than we realise and we could be entering a “digital dark age” that will leave us without access to our own history, warned Vint Cerf, one of the “fathers of the internet”.

The web is also becoming more central to our lives, which means “there are consequences if it doesn’t work as intended” or is used by malicious people, he said. And as the internet has become almost ubiquitous, it has opened the door to abuse such as ransomware, he warned.

“The consequences of [the increasing availability of the internet] are that it is accessible to the general public, which it was not in its early evolution,” he said. “And the consequences of that are that some parts of the general public do not necessarily mean well, and so they have access to technologies that can be used in many very constructive ways, but also in some very disruptive ways.”

He also warned that there are “a lot of concerns about the reliability and resilience of this technology that we are increasingly dependent on,” and that we are increasingly giving software autonomy to take actions on our behalf that we may not understand.

For example, we rely heavily on our mobile phones for their “convenience and utility,” but there may be “no alternative” to use if they break, leaving us with increasingly vulnerable systems.


(Getty Images for Webby Awards)

He also suggested that fragility will continue into the future. None of the digital media we have today has lasted as long as the paper we used before – and so we may no longer be able to access the files that shape our understanding of our history.

“I’m starting to wonder what kind of ecosystem we would have to create that would assure everyone that digital content has a serious lifespan,” he said, pointing to the fact that he recently found a number of floppy disks with files created just a few decades ago that could no longer be read. “It’s embarrassing to think that baked clay tablets from 5,000 or 6,000 years ago are still readable,” he said.

Solving those problems means “rethinking our ecosystem as a whole,” he said, with new legal structures and international agreements, as well as technology, to ensure we can trust our digital environment. That could mean writing a new “digital social contract” that calls on people to take more responsibility for how they interact with the world online, for example.

We also need to work on “improving people’s intuition” about how to use technologies safely, and giving them more power to protect themselves, he said. “We need more critical thinking and a willingness to think critically, especially about the information that we’re given,” a problem he said was exacerbated by the widespread availability of large language models.

But he said he wasn’t as worried about the power of artificial intelligence as some other technologists. While computing had brought “astonishing” new capabilities, much of the panic about AI was the result of “making it out to be more than it really is,” partly because it’s trained on human text and often looks like it’s talking the same way we do.

Mr Cerf was speaking at a meeting organised by the Royal Society and a range of other organisations to mark 50 years of the internet. He said the early days of the internet were characterised by optimism, with little thought given to whether the system might be abused.

“When we started this work, we were just a bunch of engineers and we just wanted to make it work well enough to make something of this size and scale work,” he said. “And I don’t think we really thought about how the system could be abused by people who don’t have your best interests at heart.”

He said there was a lot of concern about securing traffic in transit, such as encrypting web traffic so it couldn’t be intercepted as it travelled across the internet. But there was less concern about what that traffic contained, he said, so less thought was given to the fact that it could carry malware that would attack the computer it was sent to.

“So I think we need to rethink the ecosystem that we create. It’s not this ethereal thing. It has real-world implications,” he said.

“It’s becoming a contentious area in a geopolitical sense, where we’re really concerned about national security, but also personal security in the online environment.

“And I can assure you that wasn’t at the top of the list in the beginning. It was just trying to understand what would happen if every computer could talk to every other computer.”

Wendy Hall, a British computer scientist at the University of Southampton who helped build some of the fundamental systems of the web, told The Independent at the same event that she was optimistic about the future of the internet.


(l-r) Wendy Hall and David Payne from the University of Southampton, with Vint Cerf (Web Science Institute, University of Southampton)

“The internet — the infrastructure of how computers communicate with each other — has been around for 50 years,” she said. “And it has survived everything, including Covid.”

“We all jumped on it in 2020 and it kept going. And imagine what Covid would have been like without the internet or the web.”

But she agreed that AI could eventually become a problem, and pointed to the importance of learning the lessons of the history of the early internet and web. In addition to her other work, Dame Wendy serves on a UN advisory body on artificial intelligence, which aims to encourage international governance to avoid these dangers.

“AI is potentially more dangerous” than the web itself, she said. “I’m not one to say we have an existential threat next year or in the short term, but if we let AI get out of control, it will be used by bad actors.”

“If we lose control of AI, so that it does things that we can’t control – whatever that is – then we have a problem.”

She pointed to the importance of ensuring that the technology is useful for everyone and available to people in the global south as well as in wealthier countries, and of protecting people from malicious AI actors. And she pointed to more distant dangers that the defence and security sector needs to work on, to “protect us from potential AI wars in the future.”

But Dame Wendy said she was optimistic about the “huge things” AI could achieve, including “how it will help us with health education, energy supply, food security” and more. She also said she hoped the world would come together to provide good governance – but that there was still important work to be done.
