A chatbot becomes hostile. A test version of a Roomba vacuum cleaner collects images of users in private situations. A Black woman is wrongly identified as a suspect based on facial recognition software, which is often less accurate at identifying women and people of color.
These incidents are not isolated glitches but symptoms of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into everyday life, ethical concerns are mounting, from threats to privacy and racial and gender bias encoded in algorithms to the spread of misinformation.
The general public depends on software engineers and computer scientists to ensure that these technologies are built safely and ethically. As a sociologist and a doctoral candidate interested in science, technology, engineering and mathematics education, we are currently investigating how engineers across many different fields learn and understand their responsibilities to the public.
Yet our recent research, as well as that of other scientists, points to a troubling reality: the next generation of engineers often seems unprepared to grapple with the social implications of their work. Moreover, some seem apathetic about the moral dilemmas their careers may pose – just as advances in AI are exacerbating such dilemmas.
Aware, but unprepared
As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master’s students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in technology, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.
First, the good news: Most students recognized the potential dangers of AI and expressed concerns about personal privacy and its potential to cause harm, such as the way biases about race and gender can be written into algorithms, intentionally or unintentionally.
For example, one student expressed dismay at AI’s environmental impact, saying that AI companies are “using more and more greenhouse gases, [for] minimal benefits.” Others discussed concerns about where and how AI is being applied, including for military technology and to generate falsified information and images.
However, when asked, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.
“Outright no. … It’s kind of scary,” one student responded. “Do YOU know who I should go to?”
Another was concerned about the lack of training: “I [would be] dealing with this without experience. … Who knows how I will react.”