AI permeates everyday life with virtually no oversight. States scramble to catch up

DENVER (AP) — While artificial intelligence made headlines with ChatGPT, the behind-the-scenes technology has quietly seeped into everyday life — screening job resumes and rental apartment applications, and in some cases even helping determine medical care.

A number of AI systems have been found to be discriminatory, tipping the scales in favor of certain races, genders or incomes, and yet there is little government oversight.

Lawmakers in at least seven states are taking major legislative swings at regulating bias in artificial intelligence, filling a void left by congressional inaction. The proposals are among the first steps in a decades-long debate over balancing the benefits of this nebulous new technology against its widely documented risks.

“AI is actually affecting every part of your life, whether you know it or not,” said Suresh Venkatasubramanian, a professor at Brown University and co-author of the White House Blueprint for an AI Bill of Rights.

“Now you wouldn’t care if they all worked fine. But they don’t.”

Success or failure will depend on lawmakers working through complex problems while negotiating with an industry that is worth hundreds of billions of dollars and growing at a speed best measured in light years.

Last year, only about a dozen of the nearly 200 AI-related bills introduced in statehouses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.

These bills, along with the more than 400 AI-related bills being debated this year, are largely aimed at regulating smaller slices of AI. Nearly 200 of them target deepfakes, including proposals to bar pornographic deepfakes, like those of Taylor Swift that flooded social media. Others are trying to rein in chatbots, such as ChatGPT, to ensure they don’t offer up instructions to make a bomb, for example.

These are separate from seven state bills that would apply across industries to regulate AI discrimination – one of the technology’s most perverse and complex problems – being debated from California to Connecticut.

Those who study AI’s propensity to discriminate say states are already lagging in setting up guardrails. The use of AI to make consequential decisions – what the bills call “automated decision tools” – is ubiquitous but largely hidden.

It is estimated that as many as 83% of employers use algorithms to assist with hiring; that’s 99% for Fortune 500 companies, according to the Equal Employment Opportunity Commission.

Yet the majority of Americans are unaware that these tools are being used, Pew Research polls show, let alone whether the systems are biased.

An AI can learn biases from the data it is trained on, usually historical data that may contain a Trojan horse of past discrimination.

Amazon scrapped its hiring algorithm project nearly a decade ago after it was found to favor male candidates. The AI had been trained to evaluate new resumes by learning from past resumes, which came largely from male applicants. Even though the algorithm didn’t know the applicants’ genders, it still downgraded resumes that contained the word “women” or listed women’s colleges, in part because they were underrepresented in the historical data it learned from.
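To make that mechanism concrete, here is a minimal, hypothetical sketch in Python: the toy resume data, tokens and off-the-shelf classifier below are invented for illustration and are not a reconstruction of Amazon’s system. Trained on historical outcomes that skew male, the model assigns a negative weight to the word “womens” even though gender is never given as a feature.

```python
# Hypothetical illustration only: toy data and a simple model,
# not any company's real hiring system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" outcomes skew male: phrases correlated with women
# appear mostly on resumes that were rejected in the past.
history = [
    ("chess club captain, python developer", 1),          # hired
    ("python developer, systems engineering", 1),         # hired
    ("womens chess club captain, python developer", 0),   # rejected
    ("womens college graduate, systems engineering", 0),  # rejected
] * 25  # repeated so the toy model has enough samples

texts, labels = zip(*history)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# Gender is never a feature, yet the model learns "womens" as a
# proxy: it ends up with a strongly negative coefficient.
model = LogisticRegression().fit(X, labels)
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

Running the sketch prints the most negatively weighted tokens, with “womens” among them; the model has reconstructed the historical bias without ever seeing gender directly.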

“If you let the AI learn from decisions that existing managers have made in the past, and if those decisions have historically benefited some people and disadvantaged others, then that’s what the technology will learn,” said Christine Webber, an attorney in a class-action lawsuit alleging that an AI system that scored rental applicants discriminated against those who were Black or Hispanic.

Court documents describe one of the lawsuit’s plaintiffs, Mary Louis, a Black woman, who applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service we use to screen all prospective tenants has denied your tenancy.”

When Louis submitted two landlord references to show she had paid rent early or on time for 16 years, court records show, she received another response: “Unfortunately, we do not accept appeals and cannot override the outcome of the tenant screening.”

That lack of transparency and accountability is, in part, what the bills aim to address. They largely follow California’s failed proposal from last year, the first comprehensive attempt to regulate AI bias in the private sector.

Under the bills, companies using these automated decision tools would be required to conduct “impact assessments,” including descriptions of how AI factors into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards. Depending on the bill, these assessments would be submitted to the state, or regulators could request them.

Some bills would also require companies to tell customers that an AI will be used in making a decision and allow them to opt out, with certain caveats.

Craig Albright, senior vice president of U.S. government relations at BSA, the industry lobbying group, said its members are generally in favor of some of the steps being proposed, such as impact assessments.

“Technology is moving faster than the law, but there are actually benefits to the law catching up, because then companies understand their responsibilities and consumers can have more confidence in the technology,” Albright said.

It has been a lackluster start for the legislation, however. A bill in Washington state has already failed in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.

California Assemblymember Rebecca Bauer-Kahan has revived her legislation, which failed last year, with backing from some tech companies, such as Workday and Microsoft, after dropping a requirement that companies routinely submit their impact assessments. Other states where bills have been introduced, or are expected to be, include Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.

While these bills are a step in the right direction, the impact assessments and their ability to identify bias remain vague, according to Brown University’s Venkatasubramanian. Without better access to the reports – which many bills restrict – it is also difficult to know whether someone has been discriminated against by an AI.

A more intensive but more accurate way to identify discrimination would be to require bias audits – tests to determine whether an AI is discriminating or not – and to make the results public. That is where the industry pushes back, arguing it would expose trade secrets.
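One common statistical test in such audits is the “four-fifths rule” from EEOC guidance on adverse impact: a group whose selection rate falls below 80% of the top group’s rate is flagged. The sketch below uses hypothetical group names and counts; a real audit would run many more tests than this one.

```python
# Sketch of one common bias-audit test: the EEOC's "four-fifths
# rule." Group labels and counts are hypothetical.

def four_fifths_check(rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the
    highest group's selection rate."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical outcomes from an automated screening tool:
# 60 of 100 applicants selected in group_a, 40 of 100 in group_b.
rates = {"group_a": 60 / 100, "group_b": 40 / 100}

print(four_fifths_check(rates))
# {'group_a': True, 'group_b': False}: 0.40 / 0.60 is about 0.67,
# under the four-fifths threshold, so the tool shows adverse
# impact against group_b.
```

Publishing results like these is precisely the disclosure the industry says would expose trade secrets.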

Requirements to routinely test an AI system aren’t in most of the legislative proposals, nearly all of which still have a long road ahead. Still, this is the start of lawmakers and voters wrestling with what is, and will remain, an ever-present technology.

“It encompasses everything in your life. For that reason alone, you should care,” Venkatasubramanian said.

——-

Associated Press reporter Trân Nguyễn in Sacramento, California, contributed.
