2023 was the year AI became mainstream. It was also the year we started panicking about it.

Artificial Intelligence (AI) became mainstream in 2023. It’s been a long time coming, but there’s still a long way to go before the technology catches up to people’s science fiction fantasies about human-like machines.

The catalyst for a year of AI fanfare was ChatGPT. The chatbot gave the world a glimpse of recent advances in computer science, even though not everyone understood quite how it worked or what to do with it.

“I would call this an inflection moment,” said pioneering AI scientist Fei-Fei Li.

“In history, 2023 will hopefully be remembered for the profound changes in technology and public awakening. It also shows how messy this technology is.”

It was a year of people figuring out “what this is, how to use it, what the impact is – all the good, the bad and the ugly,” she said.

Panic about AI

The first AI panic of 2023 began shortly after New Year’s Day, when classrooms reopened and schools from Seattle to Paris began blocking ChatGPT.

Teenagers were already asking the chatbot – which was released at the end of 2022 – to write essays and answer take-home tests.

The large language models behind technology like ChatGPT work by repeatedly guessing the next word in a sentence after “learning” the patterns of a huge body of human-written works.

They often get the facts wrong. But their output sounded so natural that it sparked curiosity both about the next advances in AI and about its potential for deception and disinformation.
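The next-word-guessing loop described above can be sketched with a toy bigram model. This is a deliberately minimal illustration, not how ChatGPT itself works: real large language models predict subword tokens with neural networks trained on vastly more text, but the basic shape – learn patterns from a corpus, then repeatedly guess the most likely next word – is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "huge body of human-written works".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Learn" the patterns: count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=4):
    """Repeatedly guess the most likely next word, starting from `word`."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break  # no continuation was ever observed
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the"
```

Because the model only ever echoes statistical patterns in its training text, it can produce fluent-sounding continuations while having no notion of whether they are true – which is why such systems “often get the facts wrong.”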

Concerns escalated as this new cohort of generative AI tools – which spit out not just words but new images, music and synthetic voices – threatened the livelihoods of anyone who writes, draws, strums or codes for a living.

It led to strikes by Hollywood writers and actors and legal challenges from visual artists and best-selling authors.

Some of the most highly regarded scientists in the field of AI warned that the technology’s unchecked advancement could produce machines that outsmart humans and potentially threaten their existence, while other scientists called those concerns overblown or pointed to more immediate risks.

In the spring, AI-generated deepfakes – some more convincing than others – entered US election campaigns, falsely showing Donald Trump embracing the country’s former top infectious disease expert.

The technology made it increasingly difficult to distinguish between real and fictional images of war in Ukraine and Gaza.

By the end of the year, the AI crises had shifted to ChatGPT’s own maker, the San Francisco-based startup OpenAI, which was nearly destroyed by corporate turmoil over its charismatic CEO, and to a government meeting room in Belgium, where exhausted political leaders from across the European Union emerged after days of intense talks with a deal for the world’s first major legal safeguards on AI.

The new EU AI law will not take full effect for a few years, and other legislative bodies – including the US Congress – remain far from passing AI legislation of their own.

Too much hype?

There is no doubt that the commercial AI products unveiled in 2023 featured technological achievements that were not possible in earlier stages of AI research, which dates back to the mid-20th century.

But the latest generative AI trend is at the peak of its hype, according to market research firm Gartner, which has tracked the “hype cycle” of emerging technologies since the 1990s. Imagine a wooden roller coaster climbing its highest hill, about to plunge into what Gartner calls the “trough of disillusionment” before leveling back out into reality.

“Generative AI is at the height of inflated expectations,” says Gartner analyst Dave Micko. “There are huge claims from suppliers and manufacturers of generative AI about its capabilities, its ability to deliver those capabilities.”

Google was criticized this month for editing a video demonstration of its most capable AI model, called Gemini, to make it look more impressive and human-like.

Micko said leading AI developers are pushing certain ways to adopt the latest technology, most of which align with their current product line – whether it’s search engines or workplace productivity software. That doesn’t mean the world will use it that way.

“As much as Google, Microsoft, Amazon and Apple would like us to adopt the way they think about their technology and deliver that technology, I think adoption really comes from the bottom up,” he said.

Is it different this time?

It’s easy to forget that this isn’t the first wave of AI commercialization. Computer vision techniques developed by Li and other scientists helped scan vast databases of photos to recognize objects and individual faces, and helped guide self-driving cars. Advances in speech recognition made voice assistants like Siri and Alexa a staple in many people’s lives.

“When we launched Siri in 2011, it was the fastest-growing consumer app at the time and the only major mainstream application of AI that people had ever experienced,” said Tom Gruber, co-founder of Siri Inc., which Apple acquired and made an integral iPhone feature.

But Gruber believes what’s happening now is the “biggest wave ever” in AI, bringing both new possibilities and dangers.

“We’re surprised that we could accidentally stumble upon this amazing capability with language, by training a machine to play solitaire across the entire internet,” Gruber said. “It’s pretty amazing.”

The dangers could come quickly in 2024, as major national elections in the US, India and elsewhere could be overrun with AI-generated deepfakes.

In the longer term, AI technology’s rapidly improving language, visual perception and step-by-step planning capabilities could supercharge a digital assistant’s vision — but only if access is granted to the “inner loop of our digital lifestream,” said Gruber.

“They can manage your attention as in, ‘You should watch this video. You should read this book. You should respond to this person’s communication,'” Gruber said.

“That’s what a real executive assistant does. And we could have that, but at a very high risk of personal information and privacy.”
