Building fairness into AI is crucial – and difficult to do well

Do the AIs that make decisions about your life make them fairly? sorbetto/DigitalVision Vectors via Getty Images

Artificial intelligence’s ability to process and analyze large amounts of data has revolutionized decision-making processes, making operations in healthcare, finance, criminal justice and other sectors of society more efficient and, in many cases, more effective.

However, this transformative power comes with a significant responsibility: the need to ensure that these technologies are developed and deployed in a way that is equitable and just. In short, AI must be fair.

Pursuing fairness in AI is not merely an ethical obligation but a requirement for promoting trust, inclusivity and the responsible advancement of technology. Ensuring that AI is fair, however, is a major challenge. What’s more, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.

Why fairness in AI matters

Fairness in AI has become a critical area of focus for researchers, developers and policymakers. It transcends technical performance and touches on the ethical, social and legal dimensions of technology.

Ethically, fairness is a cornerstone of building trust in and acceptance of AI systems. People must be able to trust that AI decisions that affect their lives – hiring algorithms, for example – are made fairly. On a social level, AI systems that embody fairness can help address and mitigate historical biases – such as those against women and minorities – thereby promoting inclusivity. Legally, embedding fairness into AI systems helps bring these systems into line with anti-discrimination laws and regulations around the world.

Unfairness can arise from two primary sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms that process data reflecting societal biases or lacking diversity can perpetuate “like me” biases, which favor candidates who are similar to the decision-makers or to those already working in an organization. When such biased data is then used to train a machine learning algorithm to assist a decision-maker, the algorithm can propagate and even amplify those biases.
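To make this mechanism concrete, here is a minimal sketch in Python using entirely made-up numbers: synthetic “historical” hiring decisions in which qualified candidates from one group were hired less often, used to train a simple logistic regression. The data, rates and model are hypothetical illustrations, not drawn from any real study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical hiring data: qualification is the only legitimate
# signal, but past decision makers hired qualified group-B candidates at a
# much lower rate than qualified group-A candidates.
n = 2000
group = rng.integers(0, 2, n).astype(float)       # 0.0 = group A, 1.0 = group B
qualified = (rng.random(n) < 0.5).astype(float)
hire_rate = np.where(qualified == 1,
                     np.where(group == 0, 0.9, 0.5),  # biased against group B
                     0.1)
hired = (rng.random(n) < hire_rate).astype(float)

# Features: qualification, group, their interaction, and an intercept.
X = np.column_stack([qualified, group, qualified * group, np.ones(n)])

# Fit a tiny logistic regression on the biased historical decisions.
w = np.zeros(4)
for _ in range(10_000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - hired) / n

# Score two equally qualified applicants who differ only in group membership.
for g, name in [(0.0, "group A"), (1.0, "group B")]:
    x = np.array([1.0, g, g, 1.0])
    score = 1 / (1 + np.exp(-(x @ w)))
    print(f"qualified applicant, {name}: predicted hire probability {score:.2f}")
# Prints roughly 0.9 for group A and 0.5 for group B: the model has learned
# the historical bias from the labels and reproduces it.
```

Because the bias is baked into the training labels themselves, the model faithfully reproduces it for identically qualified applicants.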

Why fairness in AI is difficult

Fairness is inherently subjective and shaped by cultural, social and personal perspectives. In the context of AI, researchers, developers and policymakers often translate fairness into the idea that algorithms should not perpetuate or exacerbate existing biases or inequities.

However, measuring fairness and building it into AI systems involves subjective decisions and technical difficulties. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equality of opportunity and individual fairness.

These definitions correspond to different mathematical formulations and underlying philosophies. They also often conflict with one another, highlighting the difficulty of satisfying all fairness criteria simultaneously in practice.
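To see how two of these definitions can pull apart, consider a toy hiring example with made-up numbers. The sketch below, in Python, compares demographic parity (equal selection rates across groups) with equality of opportunity (equal selection rates among qualified candidates); the applicant counts are hypothetical.

```python
import numpy as np

# Hypothetical applicant pools: group A has 5 qualified members out of 10,
# group B has 8 qualified members out of 10. The classifier selects exactly
# 5 people from each group, and all of its selections are qualified.
group = np.array([0] * 10 + [1] * 10)                      # 0 = group A, 1 = group B
y_true = np.array([1] * 5 + [0] * 5 + [1] * 8 + [0] * 2)   # 1 = qualified
y_pred = np.array([1] * 5 + [0] * 5 + [1] * 5 + [0] * 5)   # 1 = selected

for g, name in [(0, "A"), (1, "B")]:
    mask = group == g
    selection_rate = y_pred[mask].mean()        # demographic parity view
    qualified = mask & (y_true == 1)
    tpr = y_pred[qualified].mean()              # equality of opportunity view
    print(f"Group {name}: selection rate {selection_rate:.2f}, "
          f"qualified selection rate {tpr:.2f}")

# Both groups are selected at rate 0.50, so demographic parity is satisfied.
# But a qualified member of group A is always selected (1.00), while a
# qualified member of group B is selected only 62% of the time (0.62), so
# equality of opportunity is violated. In this example, fixing one metric
# would break the other.
```

A perfectly accurate classifier, by contrast, would satisfy equality of opportunity but violate demographic parity whenever the groups’ base rates of qualification differ.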

Furthermore, fairness cannot be distilled into a single metric or guideline. It covers a spectrum of considerations including, but not limited to, equal opportunity, treatment and impact.

Unintended effects on fairness

The multifaceted nature of fairness means that AI systems must be scrutinized at every stage of their development cycle, from initial design and data collection to final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity: AI systems are rarely deployed in isolation. They are used as part of often complex and consequential decision-making processes, such as making recommendations about hiring or allocating funds and resources, and they are subject to many constraints, including security and privacy.

Research my colleagues and I have conducted shows that constraints such as limited computational resources, hardware types and privacy requirements can significantly affect the fairness of AI systems. For example, the need for computational efficiency can lead to simplifications that inadvertently overlook or misrepresent marginalized groups.

In our research into network pruning – a method to make complex machine learning models smaller and faster – we found that this process can unfairly affect certain groups. This happens because pruning may not take into account how different groups are represented in the data and by the model, leading to biased results.
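The following minimal sketch illustrates the mechanism on synthetic data. It is a hypothetical illustration, not a reproduction of our experiments: a majority group whose labels depend on one feature and a minority group whose labels depend on another, after which magnitude pruning zeroes out the smallest weight, which happens to be the one the minority group relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the 90% majority group's label depends on feature 0,
# the 10% minority group's label depends on feature 1.
n_major, n_minor = 900, 100
X = rng.normal(size=(n_major + n_minor, 2))
group = np.array([0] * n_major + [1] * n_minor)   # 0 = majority, 1 = minority
y = np.where(group == 0, X[:, 0] > 0, X[:, 1] > 0).astype(float)

# Train a tiny logistic regression; the majority dominates the loss, so the
# weight on feature 0 ends up much larger than the weight on feature 1.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Magnitude pruning: zero out the smallest-magnitude weight.
w_pruned = w.copy()
w_pruned[np.argmin(np.abs(w))] = 0.0

def accuracy(weights, mask):
    preds = (X[mask] @ weights) > 0
    return (preds == (y[mask] == 1)).mean()

for name, mask in [("majority", group == 0), ("minority", group == 1)]:
    print(f"{name}: accuracy {accuracy(w, mask):.2f} -> "
          f"{accuracy(w_pruned, mask):.2f} after pruning")
# The majority's accuracy is essentially unchanged, while the minority's
# (already lower, because its signal carries little weight) drops to about
# chance: the pruned weight was the only one encoding its pattern.
```

Averaged over the whole dataset, the accuracy loss from pruning looks negligible, which is exactly why such harms are easy to miss.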

Likewise, privacy-preserving techniques, while critical, can obscure the data needed to identify and mitigate biases or disproportionately affect outcomes for minorities. For example, if statistical agencies add noise to data to protect privacy, this can lead to unfair allocation of resources because the added noise affects some groups more than others. This disproportionality can also skew decision-making processes that rely on this data, such as the allocation of resources to public services.
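Here is a minimal sketch of that arithmetic, using the Laplace mechanism, a standard way of adding privacy-preserving noise to counts. The group sizes and privacy parameter below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical counts for a large and a small population group. For a
# counting query, the Laplace mechanism adds noise with scale 1/epsilon.
counts = {"large group": 10_000, "small group": 100}
epsilon = 0.1
scale = 1 / epsilon                  # expected absolute noise = 10 people

for name, true_count in counts.items():
    # Average the distortion over many simulated noisy releases.
    noisy = true_count + rng.laplace(0.0, scale, size=10_000)
    rel_error = np.mean(np.abs(noisy - true_count)) / true_count
    print(f"{name}: average relative error {rel_error:.1%}")

# Both groups receive the same absolute noise (about 10 people on average),
# but that is roughly 0.1% of the large group's count and roughly 10% of
# the small group's. Any funding formula keyed to these counts inherits the
# distortion, and the small group bears far more of it.
```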

These constraints do not operate in isolation; they intersect in ways that compound their impact on equity. For example, when privacy measures exacerbate biases in data, they can further widen existing inequalities. This makes it important to understand and address privacy and fairness together throughout AI development.

The path forward

Making AI fair is not easy, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation and collaboration. Given how widespread biases are in society, I believe that people working in AI should recognize that perfect fairness is not achievable and instead strive for continuous improvement.

This challenge requires a commitment to rigorous research, thoughtful policymaking and ethical practice. To make it work, AI researchers, developers and users will need to ensure that fairness considerations are woven into all aspects of the AI pipeline, from concept, through data collection and algorithm design, to implementation and beyond.

This article is republished from The Conversation, an independent nonprofit organization providing facts and trusted analysis to help you understand our complex world. It was written by: Ferdinando Fioretto, University of Virginia

Ferdinando Fioretto receives funding from the National Science Foundation, Google and Amazon.
