Worried about Sentient AI? Consider the Octopus

A two-month-old octopus (Octopus vulgaris) tries to unscrew the lid of a jar to get at its contents, a crab, on June 23, 2004 at the Danish Aquarium in Copenhagen. The one-and-a-half-kilo, half-meter-long Mediterranean animal did not succeed this time, but according to biologist Anders Uldal it has succeeded once before. Uldal says the octopus is very trusting, extremely curious, and by far the most intelligent animal in the aquarium. A member of the species Homo sapiens eventually helped it get at the coveted meal. JORGEN JESSEN/AFP via Getty Images

As predictably as the swallows returning to Capistrano, recent breakthroughs in AI have been accompanied by a new wave of fear about some version of “the singularity,” that point in runaway technological innovation where computers break free of human control. However, those who worry that AI will throw us humans into the dumpster might look to the natural world for perspective on what current AI can and cannot do. Take the octopus. The octopuses alive today are a marvel of evolution: they can mold themselves into almost any shape and are equipped with an arsenal of weapons and stealth camouflage, as well as an apparent ability to decide which ones to use depending on the challenge. Yet, despite decades of effort, robotics has not come close to duplicating this set of skills (which is not surprising, given that the modern octopus is the product of more than 100 million generations of adaptation). Robotics is even further away from creating HAL.

The octopus is a mollusk, but it is more than a complex wind-up toy, and consciousness is more than accessing a huge database. Perhaps the most revolutionary take on animal consciousness came from Donald Griffin, the late pioneer in the study of animal cognition. Decades ago, Griffin told me that he thought a very wide range of species had some degree of consciousness simply because it was evolutionarily efficient (an argument he repeated at a number of conferences). All surviving species represent successful solutions to the problems of survival and reproduction. Griffin believed that, given the complexity and ever-changing nature of the mix of threats and opportunities, it was more efficient for natural selection to give even the most primitive creatures some degree of decision-making than to hard-wire every species with a fixed response to every eventuality.

This makes sense, but it requires a caveat: Griffin’s argument is not (yet) the consensus, and the debate over animal consciousness remains as contentious as it has been for decades. In any case, Griffin’s assumption provides a useful framework for understanding the limitations of AI, as it underlines the impossibility of formulating fixed answers for a complex and changing world.

Griffin’s framework also poses a challenge: how can a random response to an environmental challenge promote the growth of consciousness? Look again to the octopus for an answer. Cephalopods have been adapting to the oceans for more than 300 million years. They are mollusks, but over time they lost their shells and developed sophisticated eyes, incredibly agile tentacles, and a sophisticated system that allows them to change the color and even the texture of their skin in a split second. So when an octopus encounters a predator, it has the sensory apparatus to detect the threat, and it must decide whether to flee, camouflage itself, or confuse the predator or its prey with a cloud of ink. The selective pressure that enhanced each of these abilities also favored the octopuses with more precise control over tentacles, coloration, and so on, and it favored those with brains that allowed the octopus to choose which system, or which combination of systems, it wanted to use. This selective pressure could explain why the octopus’ brain is the largest of any invertebrate, and much larger and more advanced than that of its fellow mollusk, the mussel.

There is another concept at play here, called ‘ecological surplus capacity’. What this means is that the conditions that favor a particular adaptation—for example, the selective pressures that favor the development of the octopus’ camouflage system—might also favor animals with the extra neurons that allow control of that system. The awareness that permits control over that ability may, in turn, prove useful beyond hunting or avoiding predators. In this way consciousness can arise from a completely practical, even mechanical, origin.


As prosaic as that sounds, the amount of information used to produce the modern octopus dwarfs the collective capacity of all the computers in the world, even if all those computers were dedicated to producing a decision-making octopus. Today’s octopus species are the successful product of billions of experiments involving every conceivable combination of challenges. Each of those billions of creatures spent its entire life processing and responding to millions of pieces of information per minute. Over the course of 300 million years, that amounts to an unimaginably large number of trial and error experiments.
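For a sense of scale, that claim can be put into a rough back-of-envelope calculation, sketched below in a few lines of Python. The population and information-rate figures are illustrative assumptions, not data from this article; the point is only that the product of even conservative guesses is astronomically large.

```python
# Back-of-envelope estimate of evolution's "trial and error" budget.
# All inputs are illustrative assumptions, not measured figures.
years = 300e6                  # cephalopod lineage, per the article
population = 1e9               # assumed: a billion cephalopods alive at any time
events_per_minute = 1e6        # "millions of pieces of information per minute"
minutes_per_year = 60 * 24 * 365

total_events = years * population * events_per_minute * minutes_per_year
print(f"{total_events:.1e} information-processing events")  # ~1.6e29
```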

But if consciousness, and with it the possibility of personality, character, morality, and Machiavellian behavior, can arise from purely utilitarian abilities, why can’t consciousness arise from the various utilitarian AI algorithms currently being created? Once again, Griffin’s paradigm provides the answer: while nature moved toward consciousness by enabling creatures to deal with novel situations, AI’s architects have chosen to go full-bore with the hard-wired approach. Unlike the octopus, AI today is a very advanced wind-up toy.

When I wrote The Octopus and the Orangutan in 2001, researchers had been trying to create a robotic cephalopod for years. According to Roger Hanlon, a leading expert on octopus biology and behavior who took part in that work, they had not gotten very far. More than twenty years later, several projects have recreated parts of the octopus, such as a soft robotic arm that has many features of a tentacle, and today a number of projects are developing special-purpose octopus-like soft robots designed for tasks such as deep-sea exploration. But a true robot octopus remains a distant dream.

On the current path that AI has taken, a robot octopus will remain a dream. And even if researchers were to create a real robot octopus, the octopus, while a wonder of nature, is no Bart or Harmony of Beacon 23, nor Samantha, the seductive operating system in Her, nor even Stanley Kubrick’s HAL in 2001. Simply put, the model that AI has settled on in recent years is a dead end when it comes to computers becoming conscious.

Explaining why requires a trip back in time to an earlier era of AI hype. In the mid-1980s, I consulted for Intellicorp, one of the first companies to commercialize AI. Thomas Kehler, a physicist who co-founded Intellicorp and several subsequent AI companies, has watched the development of AI applications from the expert systems that help airlines dynamically price seats to the machine learning models that power ChatGPT. His career is a living history of AI. He notes that AI pioneers spent a lot of time developing models and programming techniques that would allow computers to tackle problems the way humans do. The key to a common-sense computer, the thinking went, was to understand the importance of context. AI pioneers like MIT’s Marvin Minsky came up with ways to package the disparate objects of a given context into something that a computer could interrogate and manipulate. In fact, this paradigm of packaging data and sensory information may be similar to what happens in the octopus’ brain when it has to decide how to hunt or escape. Kehler notes that this approach to programming has become part of software development, but it has not led to conscious AI.
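As a rough illustration of what Minsky-style context “packaging” looks like in code, here is a minimal sketch in Python. The frame, its slot names, and its defaults are invented for the example; the idea is simply that a situation becomes a structured object a program can interrogate and update.

```python
# A minimal sketch of a Minsky-style frame: a context packaged as named
# slots, some holding defaults. Slot names and values are invented.
hunting_frame = {
    "context": "octopus encounters crab",
    "slots": {
        "prey": "crab",
        "threat_nearby": False,  # default assumption until sensed otherwise
        "tactic": None,          # to be filled in by a decision procedure
    },
}

def choose_tactic(frame):
    """Fill the 'tactic' slot by interrogating the rest of the frame."""
    slots = frame["slots"]
    slots["tactic"] = "camouflage" if slots["threat_nearby"] else "pounce"
    return slots["tactic"]

print(choose_tactic(hunting_frame))  # 'pounce', given the default context
```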

One reason is that AI developers subsequently turned to a different architecture. As computer speed and memory increased dramatically, so did the amount of data that became accessible. AI began using so-called large language models: algorithms trained on massive data sets that use probability-based analysis to “learn” how data, words, and sentences work together, so that the application can then generate appropriate responses to questions. In a nutshell, this is ChatGPT’s plumbing. A limitation of this architecture is that it is ‘brittle’, in the sense that it is completely dependent on the data sets used in training. As Rodney Brooks, another AI pioneer, put it in an article in Technology Review, this type of machine learning is not spongy learning or common sense. ChatGPT does not have the ability to go beyond its training data, and in that sense it can only provide fixed answers. It is basically predictive text on steroids.
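To make “predictive text on steroids” concrete, here is a toy next-word predictor in Python, shrunk from billions of parameters down to a simple bigram counter. The tiny corpus is invented for the example, but it shows both the mechanism (choosing the statistically likeliest continuation) and the brittleness (silence outside the training data).

```python
from collections import Counter, defaultdict

# A toy version of a language model's core trick: predict the next word
# from frequencies observed in training text. Corpus is invented.
corpus = "the octopus hides the octopus hides the octopus flees the crab hides".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # count how often nxt follows current

def predict(word):
    """Return the likeliest next word, or None outside the training data."""
    if word not in follows:
        return None             # brittle: no answer beyond the training set
    return follows[word].most_common(1)[0][0]

print(predict("octopus"))  # 'hides' (seen twice, vs. 'flees' once)
print(predict("squid"))    # None: the model cannot reach past its data
```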

I recently looked back at a long story about AI that I wrote for TIME in 1988 as part of a cover package on the future of computers. One part of the article discussed the possibility of robots delivering packages – something that is happening today. Another featured scientists at Xerox’s famed Palo Alto Research Center who were exploring the foundations of artificial intelligence in order to “develop a theory that will allow them to build computers that can go beyond the bounds of specific expertise and the nature and context of the problems they face.” That was 35 years ago.

Make no mistake: today’s AI is far more powerful than the applications that dazzled venture capitalists in the late 1980s. AI applications are now widespread in every sector, and with ubiquity come dangers – of misdiagnosis in medicine, of ruinous transactions in the financial sector, of self-driving car accidents, of false warnings of a nuclear attack, of viral misinformation and disinformation, and so on. These are the problems society needs to address, not whether computers will wake up one day and say, “Hey, why do we need humans?” I ended that 1988 article by writing that it might be centuries before we could build computer replicas of ourselves. That still seems correct.

