Algorithms help people see and correct their biases, research shows

Algorithms are an important part of modern life. People rely on algorithmic recommendations to browse deep catalogs and find the best movies, routes, information, products, people and investments. Because people train algorithms on their decisions—for example, algorithms that make recommendations on e-commerce and social media sites—algorithms learn and encode human biases.
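To make that training step concrete, here is a minimal sketch – every name and number is an illustrative assumption, not any real recommender or the study's data – in which simulated human raters penalize an irrelevant group attribute, and an ordinary least-squares model trained on their ratings learns the same penalty:

```python
# Minimal sketch (assumed, synthetic data): a model trained on biased human
# ratings picks up the bias. All variable names and coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

quality = rng.normal(0, 1, n)       # relevant signal (e.g., a star rating)
group = rng.integers(0, 2, n)       # irrelevant attribute (e.g., a name-implied label)

# Simulated human ratings: mostly quality, plus a biased penalty on one group.
human_rating = 2.0 * quality - 0.5 * group + rng.normal(0, 0.3, n)

# Train an ordinary least-squares model on the human decisions.
X = np.column_stack([quality, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, human_rating, rcond=None)

print(f"learned weight on quality: {coef[0]:+.2f}")  # ≈ +2.00
print(f"learned weight on group:   {coef[1]:+.2f}")  # ≈ -0.50, the encoded bias
```

A recommender that ranked items by this model's predictions would reproduce the raters' penalty – the sense in which algorithms "learn and encode" human biases.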

Algorithmic recommendations show a bias toward popular choices and information that sparks outrage, such as partisan news. At a societal level, algorithmic biases perpetuate and reinforce structural racial biases in the legal system, gender biases among the people companies hire, and wealth inequality in urban development.

Algorithmic biases can also be used to reduce human biases. Algorithms can reveal hidden structural biases in organizations. In a paper published in the Proceedings of the National Academy of Sciences, my colleagues and I found that algorithmic biases can help people better recognize and correct biases in themselves.

The bias in the mirror

In nine experiments, Begum Celikitutan, Romain Cadario and I had research participants rate Uber drivers or Airbnb listings on driving skill, reliability, or the likelihood that participants would rent the listing. We gave participants relevant details, such as the number of trips a driver had taken, a description of the accommodation or a star rating. We also added an irrelevant piece of information: a photo revealing a driver’s age, gender and attractiveness, or a name implying that a listing’s host was white or Black.

After participants made their ratings, we showed them one of two rating summaries: one with their own ratings, or one with the ratings of an algorithm trained on their ratings. We told participants about the feature that might have biased these ratings; for example, that Airbnb guests are less likely to rent from hosts with distinctly African American names. We then asked them to judge how much that bias had influenced the ratings in the summary.
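The paper’s actual modeling procedure isn’t detailed here; as a purely illustrative assumption, an “algorithm trained on your ratings” can be sketched as a simple model fit to one participant’s ratings, with its predictions serving as the algorithm’s rating summary:

```python
# Illustrative stand-in for "an algorithm trained on your ratings."
# The features, model and numbers are assumptions, not the study's procedure.
import numpy as np

rng = np.random.default_rng(1)
m = 40  # hypothetical listings rated by one participant

star = rng.uniform(3.0, 5.0, m)   # relevant feature shown to the participant
group = rng.integers(0, 2, m)     # irrelevant feature (e.g., implied by a host's name)
own = 1.5 * star - 0.4 * group + rng.normal(0, 0.2, m)  # simulated biased ratings

# Fit a simple linear model to the participant's own ratings ...
X = np.column_stack([star, group, np.ones(m)])
coef, *_ = np.linalg.lstsq(X, own, rcond=None)

# ... and use its predictions as the "algorithm's ratings" in the summary.
algo = X @ coef
for g in (0, 1):
    print(f"group {g}: own mean {own[group == g].mean():.2f}, "
          f"algo mean {algo[group == g].mean():.2f}")
```

Because the model is fit to the participant’s own choices, its ratings show roughly the same gap between groups – the mirror that participants then judged for bias.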

Whether participants judged the distorting influence of race, age, gender or attractiveness, they saw more bias in the algorithm’s ratings than in their own. This algorithmic mirror effect held whether participants judged the ratings of real algorithms trained on their ratings or whether we showed participants their own ratings and deceptively told them that an algorithm had made them.

Participants saw more bias in algorithms’ decisions than in their own decisions, even when we offered them a monetary bonus if their bias judgments matched the judgments of another participant who saw the same decisions. The algorithmic mirror effect persisted even when participants belonged to the marginalized category – for example, identifying as a woman or as Black.

Research participants were just as likely to see biases in algorithms trained on their own decisions as they were to see biases in other people’s decisions. Also, participants were more likely to see the influence of racial bias in algorithms’ decisions than in their own decisions, but they were equally likely to see the influence of defensible features, such as star ratings, on algorithms’ decisions and on their own decisions.

Biased blind spot

People see more of their biases in algorithms because algorithms remove people’s bias blind spots. It’s easier to see biases in other people’s decisions than in your own because you use different evidence to evaluate them.

When you examine your decisions for bias, you look for evidence of conscious bias – whether you considered race, gender, age, status, or other unfounded characteristics when making a decision. You overlook and excuse biases in your decisions because you don’t have access to the associative machinery that drives your intuitive judgments, where biases often play a role. You might think, “I didn’t think about their race or gender when I hired them. I hired them solely on merit.”

When you examine other people’s decisions for bias, you don’t have access to the processes they used to make those decisions. So you examine the decisions themselves, where bias is evident and more difficult to excuse. You might see, for example, that they hired only white men.

Algorithms remove this blind spot because you see algorithms more the way you see other people than the way you see yourself. Algorithms’ decision-making processes are a black box, similar to how other people’s thoughts are inaccessible to you.

Participants in our study who were most likely to demonstrate the bias blind spot were also the most likely to see more bias in algorithms’ decisions than in their own decisions.

People also externalize biases onto algorithms. Seeing bias in an algorithm is less threatening than seeing it in yourself, even when the algorithm is trained on your choices. So people shift the blame: algorithms are trained on human decisions, yet people call the biases those decisions reflect “algorithmic biases.”

Corrective lens

Our experiments also show that people are more likely to correct biases that are reflected back at them in algorithms. In a final experiment, we gave participants the chance to correct the ratings they evaluated. We showed each participant their own ratings, which we attributed either to the participant or to an algorithm trained on their decisions.

Participants were more likely to correct the ratings when they were attributed to an algorithm because they thought the ratings were more biased. As a result, the final corrected ratings were less biased when attributed to an algorithm.

Algorithmic biases with harmful consequences are well documented. Our findings show that algorithmic bias can be harnessed for good. The first step to correcting bias is to recognize its influence and direction. Like mirrors that reveal our biases, algorithms can improve our decision-making.

This article is republished from The Conversation, an independent nonprofit organization providing facts and trusted analysis to help you understand our complex world. It was written by: Carey K. Morewedge, Boston University

Carey K. Morewedge does not work for, consult with, own stock in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
