Why are so many robots white?

This little guy is very cute and very white. Jiuguang Wang/Flickr, CC BY-SA

Problems with racial and gender bias in artificial intelligence algorithms and the data used to train large language models such as ChatGPT have attracted the attention of researchers and generated headlines. But these problems also arise with social robots, which have physical bodies modeled after non-threatening versions of humans or animals and are designed to communicate with humans.

The goal of the subfield of social robotics, called socially assistive robotics, is to communicate with increasingly diverse groups of people. The noble intention of its practitioners is “to create machines that can best help people help themselves,” writes one of its pioneers, Maja Matarić. The robots are already being used to help people on the autism spectrum, children with special needs and stroke patients who need physical rehabilitation.

But these robots don’t look like humans and don’t interact with humans in a way that reflects even basic aspects of society’s diversity. As a sociologist who studies human-robot interactions, I believe this problem will only get worse. Autism diagnoses for children of color are now higher than for white children in the US. Many of these children could eventually come into contact with white robots.

So why #robotssowhite, to adapt the famous #OscarsSoWhite Twitter hashtag from the 2015 Oscars?

Why robots are often white

Why does Kaspar, a robot designed to interact with children with autism, have rubber skin that resembles that of a white person, given the diversity of the children it will meet? Why are Nao, Pepper and iCub, robots used in schools and museums, covered in glossy white plastic? In “The Whiteness of AI,” technology ethicist Stephen Cave and science communication researcher Kanta Dihal discuss racial bias in AI and robotics and note the prevalence of online stock photos of robots with reflective white surfaces.

What is going on here?

One problem is which robots are already available. Most robots are not developed from scratch but purchased off the shelf by engineering laboratories, adapted with custom software and sometimes fitted with other technologies such as robotic hands or skin. Robotics teams are therefore constrained by the design choices of the original developers (Aldebaran for Pepper, the Italian Institute of Technology for iCub). Those choices usually follow a clean, clinical look in glossy white plastic, much like other technology products such as the original iPod.

In an article I presented at the 2023 meeting of the American Sociological Association, I call this “the poverty of the artificial imaginary.”

How society imagines robots

In anthropologist Lucy Suchman’s classic book on human-machine interaction, which has been updated with chapters on robotics, she discusses a “cultural imaginary” of what robots should look like. A cultural imaginary is what is shared through representations in texts, images and films, and that collectively shapes people’s attitudes and perceptions. For robots, the cultural imaginary is derived from science fiction.

This cultural imaginary can be contrasted with the more practical concerns of computer scientists and engineering teams about robot bodies, what Neda Atanasoski and Kalindi Vora call the “technical imaginary.” This is a hotly contested area in feminist science studies, with, for example, Jennifer Rhee’s ‘The Robotic Imaginary’ and Atanasoski and Vora’s ‘Surrogate Humanity’ critical of the gendered and racial assumptions that lead people to design service robots – machines meant to handle everyday tasks – as female.

The cultural imaginary that casts robots as white and, indeed, mostly female dates back to European antiquity and to an explosion of novels and films at the height of industrial modernity. From the first mention of the word ‘android’ in Auguste Villiers de l’Isle-Adam’s 1886 novel ‘The Future Eve’ to the introduction of the word ‘robot’ in Karel Čapek’s 1920 play ‘Rossum’s Universal Robots’ to the sexualized robot Maria in Thea von Harbou’s 1925 novel ‘Metropolis’ – the basis of her husband Fritz Lang’s 1927 film of the same name – fictional robots were quickly feminized and made subservient.

Perhaps the prototype for this cultural idea lies in ancient Rome. A poem in Ovid’s ‘Metamorphoses’ (8 CE) describes a statue of Galatea ‘of snow-white ivory’ that its creator, Pygmalion, falls in love with. Pygmalion prays to Venus that Galatea come to life, and his wish is granted. There are numerous literary, poetic and film adaptations of the story, including one of the first cinematic special effects in Méliès’s 1898 film. Paintings depicting this moment, for example by Raoux (1717), Regnault (1786) and Burne-Jones (1868-70 and 1878), accentuate the whiteness of Galatea’s flesh.

Interdisciplinary route to diversity and inclusivity

What can be done to counter this cultural legacy? After all, according to engineers Tahira Reid and James Gibert, all human-machine interactions should be designed with diversity and inclusivity in mind. But apart from robots made in Japan to look ethnically Japanese, robots designed to be non-white are rare. And even the Japanese robots tend to follow the stereotype of the submissive female.

The solution is not simply to encase machines in brown or black plastic. The problem goes deeper. The Bina48 “custom character robot,” modeled after the head and shoulders of Bina Aspen, the African-American wife of a millionaire, is remarkable, but its speech and interactions are limited. A series of conversations between Bina48 and African-American artist Stephanie Dinkins forms the basis of a video installation.

The absurdity of talking about racism with a disembodied animated head becomes clear in such a conversation – it literally has no personal experience to speak of, yet its AI-powered answers reference an unnamed person’s experience of growing up with racism. These are implanted memories, like the “memories” of the replicant androids in the “Blade Runner” films.

Social science methods can help produce a more inclusive “technical imaginary,” as I discussed at the Being Human festival in Edinburgh in November 2022. For example, working with Guy Hoffman, a roboticist at Cornell, and Caroline Yan Zheng, then a design Ph.D. student at the Royal College of Art, we invited contributions to a publication entitled Critical Perspectives on Affective Embodied Interaction.

One of the persistent threads in that collaboration and other work is the extent to which people’s bodies communicate with others through gesture, expression and vocalization, and how this varies across cultures. So it is one thing to make the appearance of robots reflect the diversity of the people who benefit from them – but what about diversifying the forms of interaction themselves? In addition to making robots less uniformly white and feminine, social scientists, interaction designers and engineers can work together to build more intercultural sensitivity into gestures and touch, for example.

Such work promises to make human-robot interactions less scary and creepy, especially for people who need help from the new types of socially assistive robots.

This article is republished from The Conversation, an independent nonprofit organization providing facts and analysis to help you understand our complex world.

It was written by: Mark Paterson, University of Pittsburgh.


Mark Paterson has previously received funding from AHRC-EPSRC and OC Robotics in the UK
