AI companies are on edge as California tries to both develop and regulate the technology

  • California is leading the world in the AI arms race.

  • However, a new bill in California could change the course of the AI revolution forever.

  • Tech giants like Meta and OpenAI argue that the bill would hinder AI innovation.

Silicon Valley is divided over a groundbreaking bill in California that could radically change the pace of AI development worldwide.

California state Senator Scott Wiener introduced SB 1047 in February, and it has since received support from lawmakers on both sides of the political spectrum.

It was approved by the Assembly Privacy and Consumer Protection Committee in June and by the Assembly Appropriations Committee last week. The full Assembly is expected to vote on the bill later this month.

According to a report from the Brookings Institution, California has emerged as a leader in the global AI arms race, with 35 of the top 50 AI companies based in the state.

California Governor Gavin Newsom has been working to elevate his state’s status as a global AI pioneer. Earlier this year, California unveiled a training program for state employees, held a generative AI summit and launched pilot projects to understand how the technology can address challenges like traffic congestion and language accessibility.

This month, the state announced a partnership with chipmaker Nvidia that will train state residents to use AI to create jobs, foster innovation, and solve everyday problems.

But amid all the excitement about AI’s potential, there’s also no small amount of concern about its danger to humanity. That means California must walk a fine line between regulating the AI industry and stifling the growth it hopes to see.

At an AI event in San Francisco in May, Newsom said, “If we over-regulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”

Newsom signed an executive order in September that included several provisions to ensure the responsible use of the technology and directed government agencies to study its best applications.

Newsom has not publicly commented on SB 1047. The governor’s office did not respond to a request for comment from Business Insider. If signed into law, however, the bill would be the most comprehensive attempt to regulate the AI industry to date.

What SB 1047 Would Change

According to the bill’s authors, its purpose is to “ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, and sound safety standards for developers of the largest and most powerful AI systems.”

It would apply to companies developing models that cost more than $100 million to train or that require very high levels of computing power. These companies would be required to safety-test new technologies before releasing them to the public. They would also have to build “full shutdown” capabilities into their models and could be held liable for the ways their technology is applied.

The bill also sets legal standards for AI developers. It outlines “torts” for which the California attorney general can sue companies, establishes protections for whistleblowers, and creates a board that will set computing limits for AI models and issue regulatory guidance.

What Big Tech Thinks

The developers likely to be affected by this bill are not exactly happy about it.

Companies like Meta, OpenAI and Anthropic — which are pouring millions into building and training large language models — have lobbied state legislators for changes. Meta said the bill would stifle innovation and make it less likely that companies would open-source their models.

“The law also actively discourages the release of open-source AI because providers would have no way to open-source models without untenable legal liability,” Meta wrote in a letter to Wiener in June. The company argued this would likely hurt the small-business ecosystem by reducing the likelihood that small businesses will “use free, readily available, and sophisticated models to create new jobs, businesses, tools, and services that are often used by other companies, governments, and civil society groups.”

Anthropic, which positions itself as a safety-conscious AI company, was unhappy with earlier versions of the bill and lobbied lawmakers for changes. It called for a greater focus on deterring companies from building unsafe models rather than on enforcing strict rules before catastrophic incidents occur. It also suggested that companies that meet the $100 million threshold be allowed to set their own safety-testing standards.

The bill has also drawn criticism from venture capitalists, executives and other members of the tech industry. Anjney Midha, a general partner at Andreessen Horowitz, called it “one of the most anti-competitive proposals I’ve seen in a long time.” He believes lawmakers should focus on “regulating specific high-risk applications and malicious end users.”

California lawmakers adopted a handful of the proposed changes, all of which made it into the latest version of the bill. The updated version now prevents the California attorney general’s office from suing AI developers before a catastrophic event occurs. And while the bill initially called for a new state agency to oversee enforcement of the new rules, that has since been scaled back to a board within the state’s Government Operations Agency.

An Anthropic spokesperson told BI the company would “review the new text once it becomes available.” Meta and OpenAI did not respond to a request for comment.

Smaller founders are concerned, but also somewhat more optimistic.

Arun Subramaniyam, founder and CEO of enterprise-focused generative AI company Articul8, told BI that it remains unclear what “sweeping powers” the new board would have or how its directors would be appointed. He also believes the law will have implications for companies beyond the Big Tech players, as even well-funded startups can hit the $100 million training threshold.

At the same time, he said he supports the bill’s creation of a public cloud-computing cluster, CalCompute, dedicated to researching the safe deployment of large-scale AI models. The cluster could level the playing field for researchers and groups that lack the resources to evaluate AI models themselves. He also believes the bill’s reporting requirements for developers will increase transparency, which will benefit Articul8’s work in the long run.

He said the future of the industry depends on how foundation models are regulated. “It’s good that the state wants to regulate this early, but the vocabulary needs to be a little more balanced,” he said.

