Technology companies want to build artificial general intelligence. But who decides when AGI is achieved?

There is a race to develop artificial general intelligence, a futuristic vision of machines that are as smart as humans or at least can do many things as well as humans.

Making such a concept a reality – commonly called AGI – is the driving mission of ChatGPT maker OpenAI and a priority for the elite research departments of tech giants Amazon, Google, Meta and Microsoft.

It is also a cause for concern for world governments. Leading AI scientists published a study in the journal Science on Thursday warning that uncontrolled AI agents with “long-term planning” skills could pose an existential risk to humanity.

But what exactly is AGI and how do we know when it has been achieved? Once on the fringes of computer science, it is now a buzzword that is constantly being redefined by those trying to make it a reality.

What is AGI?

Not to be confused with the similar-sounding generative AI – which describes the AI systems behind the suite of tools that ‘generate’ new documents, images and sounds – artificial general intelligence is a vaguer idea.

It’s not a technical term, but “a serious but ill-defined concept,” says Geoffrey Hinton, a pioneering AI scientist who has been called a “Godfather of AI.”

“I don’t think there is agreement on what the term means,” Hinton said by email this week. “I use it to mean AI that is at least as good as humans at almost all the cognitive things humans do.”

Hinton prefers another term – superintelligence – “for AGIs that are better than humans.”

A small group of early proponents of the term AGI wanted to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched out into subfields that produced specialized and commercially viable versions of the technology – from facial recognition to speech-recognizing voice assistants like Siri and Alexa.

Mainstream AI research “turned away from the original vision of artificial intelligence, which was quite ambitious in the beginning,” says Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.

Putting the ‘G’ in AGI was a signal to those who “still want to do the big thing. We don’t want to build tools. We want to build a thinking machine,” Wang said.

Are we at AGI yet?

Without a clear definition, it’s difficult to know when a company or group of researchers has achieved artificial general intelligence – or whether they already have.

“Twenty years ago, I think people would have happily agreed that systems with the capabilities of GPT-4 or (Google’s) Gemini would have achieved general intelligence comparable to that of humans,” Hinton said. “It would have passed the test if it could have answered virtually every question in a sensible way. But now that AI can do that, people want to change the test.”

Improvements in ‘autoregressive’ AI techniques that predict the most plausible next word in a sequence, combined with enormous computing power to train those systems on large amounts of data, have led to impressive chatbots, but they’re still not quite the AGI that many people had in mind. Achieving AGI requires technology that can perform as well as humans in a wide range of tasks, including reasoning, planning and learning from experience.
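
For readers curious how ‘autoregressive’ prediction works in practice, here is a minimal, purely illustrative sketch: at each step it appends the word judged most plausible given the words so far. The tiny probability table and the example phrase are invented for illustration; real chatbots learn such probabilities over huge vocabularies from vast amounts of text.

```python
# A toy illustration of autoregressive next-word prediction: pick the most
# plausible next word given the words generated so far, then repeat.
# The probability table below is made up purely for illustration.

next_word_probs = {
    ("machines", "that"): {"think": 0.5, "plan": 0.3, "reason": 0.2},
    ("that", "think"): {"like": 0.7, "fast": 0.3},
    ("think", "like"): {"humans": 0.9, "us": 0.1},
}

def continue_sequence(words, steps=3):
    """Greedily append the highest-probability next word, one step at a time."""
    words = list(words)
    for _ in range(steps):
        context = tuple(words[-2:])               # condition on the last two words
        candidates = next_word_probs.get(context)
        if not candidates:                        # no known continuation for this context
            break
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(continue_sequence(["machines", "that"]))    # -> "machines that think like humans"
```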

Some researchers would like to reach consensus on how this should be measured. It’s one of the topics of an upcoming AGI workshop next month in Vienna, Austria – the first at a major AI research conference.

“This really needs a community effort and attention so that we can mutually agree on some kind of classification of AGI,” said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into tiers, the same way automakers are trying to benchmark the trajectory between cruise control and fully self-driving vehicles.

Others plan to find out for themselves. San Francisco company OpenAI has given the board of directors of a nonprofit organization – which includes a former US Treasury Secretary – the responsibility of deciding when its AI systems have reached the point where they ‘perform the most economically valuable work better than people’.

“The board determines when we have achieved AGI,” says OpenAI’s own explanation of the board structure. Such a feat would cut off the company’s largest partner, Microsoft, from the rights to commercialize such a system, as the terms of their agreements “apply only to pre-AGI technology.”

Is AGI dangerous?

Hinton made global headlines last year when he left Google and warned of the existential dangers of AI. A new Science study published Thursday could reinforce these concerns.

Its lead author is Michael Cohen, a researcher from the University of California, Berkeley, who studies the “expected behavior of generally intelligent artificial agents,” especially those competent enough to “pose a real threat to us by trying to plan.”

Cohen made clear in an interview Thursday that such long-term AI planning agents do not yet exist. But “they have the potential to be,” as tech companies try to combine today’s chatbot technology with more informed planning skills using a technique known as reinforcement learning.
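
For readers unfamiliar with the term, reinforcement learning means an agent improves through trial, error and numeric reward. The sketch below is a minimal, illustrative example of that idea (a simple two-action bandit); the action names and reward values are invented, and it is not meant to represent how any company actually trains its systems.

```python
# A toy illustration of reinforcement learning: an agent tries actions, receives
# a numeric reward, and gradually prefers whichever action has paid off most.
import random

rewards = {"action_a": 0.2, "action_b": 0.8}      # hidden average payoff of each action
estimates = {"action_a": 0.0, "action_b": 0.0}    # the agent's learned value estimates
learning_rate, exploration = 0.1, 0.1

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore at random.
    if random.random() < exploration:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)
    reward = rewards[action] + random.gauss(0, 0.05)    # noisy feedback from the environment
    # Nudge the estimate toward the observed reward.
    estimates[action] += learning_rate * (reward - estimates[action])

print(estimates)   # the agent ends up valuing action_b more highly
```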

“By giving an advanced AI system the goal of maximizing its reward and withholding reward at some point, the AI system has a strong incentive to take people out of the loop if given the chance,” said the paper, whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI consultant Gillian Hadfield.

“I hope we’ve made it clear that people in government need to think seriously about exactly what regulations we need to address this problem,” Cohen said. For now, “governments only know what these companies decide to tell them.”

Too legit to quit AGI?

With so much money riding on the promise of advances in AI, it’s no surprise that AGI is also becoming a corporate buzzword that sometimes fuels quasi-religious enthusiasm.

It has divided part of the tech world between those who argue it should be developed slowly and carefully, and others – including venture capitalists and rapper MC Hammer – who have declared themselves part of an “accelerationist” camp.

The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies with the explicit ambition to develop AGI. OpenAI followed in 2015 with a safety-focused pledge.

But now it may seem like everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently spotted hanging out at a California venue called the AGI House. And less than three years after rebranding from Facebook to focus on virtual worlds, Meta Platforms revealed in January that AGI was also top of the agenda.

Meta CEO Mark Zuckerberg said his company’s long-term goal was “to build complete general intelligence,” which would require advances in reasoning, planning, coding and other cognitive skills. Although Zuckerberg’s company has long had researchers focused on these topics, his attention marked a change in tone.

At Amazon, a sign of the new messaging came when the chief scientist of the voice assistant Alexa changed roles to become chief scientist for AGI.

While AGI is not as tangible to Wall Street as generative AI, broadcasting AGI ambitions can help recruit AI talent who have a choice of where they want to work.

Given the choice between an “old-school AI institute” or one that “has the goal of building AGI” and has sufficient resources to do so, many would choose the latter, said You, a researcher at the University of Illinois.
