Advocacy groups 'concerned' about federal proposal to override state AI regulations


A provision included in a House committee budget bill this week would prohibit states from enforcing their own laws and regulations governing the use of artificial intelligence systems over the next decade.

The provision, included in a broader budget reconciliation bill pushed ahead by House Republicans on the Energy and Commerce Committee and advanced by a vote on Wednesday, would prohibit states and local governments from passing new laws or enforcing existing laws that regulate AI models or systems until 2035.

If enacted, the bill would put a moratorium on laws such as Colorado’s landmark comprehensive AI legislation, and California’s laws addressing harms caused by AI deepfakes. Many state AI laws are aimed at promoting transparency, protecting creative rights and mitigating harms that could be caused by invasions of privacy.

The National Association of State Chief Information Officers said on Tuesday it was “concerned” about the language in the proposed bill and its potential impact on the work states have done to regulate AI use in the absence of federal laws.

While California, Colorado and Utah are the only states so far to pass laws specifically regarding the use of AI, another 15 states have proposed similar laws. Many governors have created AI task forces or working groups to propose best practices for AI use.

“States have not had the luxury of waiting for federal action on AI policy. As a result, they have forged ahead in creating their own AI standards that meet their unique needs, in consultation with the stakeholders and citizens who will be most directly impacted,” NASCIO’s statement reads. “Language in the reconciliation measure preventing them from enforcing these provisions undermines their efforts to deliver services to their citizens and ensure responsible data protections. We urge Congress to reject these provisions and work proactively with states to develop meaningful and effective AI policy.”

Daniel Rodriguez, a professor and former dean of Northwestern University’s Pritzker School of Law, said the proposed broad preemption of state AI legislation appears to be political, especially considering that only three states have laws on the books regulating AI use. He pointed out that the legislation is being led by House Republicans and was lauded by Sen. Ted Cruz, R-Texas, for its potential to bolster innovation.

“I see this House bill as kind of a shot across the bow that basically says, ‘You states that are moving into this area, considering legislation, doing kinds of things that the pundits write about, that you should be doing — don’t do it because we’re not going to let you.’ But that’s not how preemption generally works as a matter of constitutional doctrine or statutory interpretation,” Rodriguez said. “What’s bizarre about it is that it sort of paints with this incredibly broad brush a declaration that states should just stay the heck out of this area because, you know, because they shouldn’t mess with it.”

Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals, agreed that the legislative action was rare, but noted there is precedent in the Internet Tax Freedom Act, a 1998 law that prohibited states from implementing new taxes on internet access or e-commerce transactions for a three-year period.

Legal fights about the federal versus state power balance might come from state bodies and authorities wanting to enforce policies they’ve already codified.

“Battles would probably come about from other areas, like the California Privacy Protection Agency wanting to enforce its automated decision making rules once those are finalized, or even the Colorado [attorney general] wanting to enforce its automated decision making rules which are finalized, or the Colorado AI Act if it doesn’t get taken away,” he said. “Those are the kinds of places where we’ll see future conflict on this publicly.”

But Zweifel-Keegan said state and local governments likely wouldn’t have to worry about disruptions to their ongoing AI initiatives. These include Pennsylvania’s ChatGPT pilot, which the state recently expanded, and Indiana’s new generative AI tool, Captain Record, which allows users to search for data across millions of pages of archived documents. It also wouldn’t necessarily disrupt broader laws that affect AI.

“Part of what remains after this is still all of the technology-neutral laws that apply to all sorts of systems regardless, without singling out AI for special treatment, and that includes things like consumer protection and product liability, even other civil rights laws — all of those sorts of things continue to apply,” he said.

But the provision has the potential to render some parts of states’ comprehensive data privacy laws moot. Several states have incorporated AI regulations into their larger consumer privacy laws, some of which are already being enforced or will go into effect in the coming years.

Zweifel-Keegan said that some House Republicans are motivated by a belief that AI regulations stifle innovation. Rodriguez said sensible regulations were necessary, though, to balance innovation with public safety. He noted the way Colorado’s AI Act encourages structures for debate and conversation through its transparency requirements, and said it would be “misleading to describe it as anti-AI, as anti-innovation.”

“If the argument that comes from the federal government, to come back to the bill,” Rodriguez said, “is that we really don’t want these state laws because they’ll stifle innovation, then I ask this as a rhetorical question: What is the federal government doing at the legislative level to promote innovation in the responsible use of AI? What exactly is it doing?”
