Major US Tech Companies Call for Pause on State-Level AI Laws
June 27, 2025 | By IronHearted

In a noteworthy policy shift, top American tech firms—including Amazon, Google, Meta, and Microsoft—are lobbying for a decade-long suspension of state-led artificial intelligence (AI) regulation. The proposal, part of a larger Republican-supported budget bill, has stirred a nationwide debate over innovation versus oversight. Having passed the House in May 2025, it is now pending Senate review. If approved, it would block individual states from creating or enforcing AI-specific laws until 2035. Supporters argue the pause is necessary to maintain global competitiveness—especially with China—while critics warn it could expose consumers to unregulated AI risks. This article delves into the proposal’s background, the key arguments on both sides, and the potential consequences.
Setting the Stage
AI is reshaping sectors from medicine to transportation, with tools ranging from chatbots to predictive policing systems. As the technology grows, so does concern about issues such as bias in algorithms, privacy threats, and misuse of deepfakes. Historically, Congress has lagged in developing overarching tech rules, leaving states to take initiative. Nearly 1,000 AI-related bills have been introduced at the state level, with 113 becoming law in 2024 alone. States like California have been especially active, enacting over 20 laws since 2016 on topics such as medical AI transparency and deepfake protections. Others, including New York and Utah, have focused on algorithmic accountability and data privacy.
This decentralized approach has resulted in what tech companies describe as a confusing and inconsistent regulatory landscape. For instance, California mandates disclaimers for AI-generated health communications, while Colorado emphasizes openness in automated decision-making. Tech firms argue that such variation raises costs, hinders innovation, and puts U.S. companies at a disadvantage globally, especially compared to China’s unified AI strategy.
In response, Rep. Brett Guthrie (R-KY) introduced a proposal—part of the “One Big Beautiful Bill” budget package (H.R. 1)—calling for a 10-year moratorium on state AI regulation. Passed narrowly in the House, the bill also allocates $500 million to upgrade federal IT systems and promote AI in government operations. The moratorium, outlined in Section 43201(c), bars states from enforcing laws specifically aimed at AI systems, with narrow exceptions for rules that support AI adoption. The Senate, led on this issue by Sen. Ted Cruz (R-TX), is reviewing a version that ties the moratorium to broadband funding and has advanced it despite pushback.
Why Supporters Back the Moratorium
Tech companies and several Republican lawmakers argue that a national pause on AI regulations at the state level is necessary to foster growth and preserve the country’s AI leadership. Their case centers on several points:
- Avoiding a Regulatory Maze: The sheer volume of state-level AI bills makes compliance burdensome, especially for startups. Different rules across states create legal complexity and inflate operational costs. The Center for Data Innovation called this scenario “fragmentation at its worst,” and advocates suggest a uniform national policy would streamline compliance.
- Staying Globally Competitive: Proponents say that China’s centralized approach gives it an edge in AI development. Removing state-by-state obstacles would allow American firms to invest freely and compete more effectively on the global stage. Some argue that, like the early internet, AI could flourish under minimal regulation.
- Fueling Innovation: By reducing regulatory hurdles, companies can focus more on research and development. Executives from OpenAI and Microsoft have publicly favored a light regulatory touch to ensure fair opportunities for developers, particularly new entrants. Investors also support the idea, seeing it as a way to reduce barriers for startups.
- Keeping Broader Protections Intact: Advocates stress that the proposed pause wouldn’t eliminate all forms of oversight. Existing consumer protection and civil rights laws would still apply. States could still address harms through general laws, even if they couldn’t enforce AI-specific rules.
- A Temporary Solution: The moratorium is intended as a bridge until federal lawmakers develop a comprehensive AI framework. Rep. Jay Obernolte (R-CA), who co-chairs the Bipartisan AI Task Force, described the measure as a temporary fix, with states regaining some authority once national guidelines are in place.
The Case Against the Proposal
Opponents—including many state officials, advocacy groups, and a few Republicans—contend that the proposal puts business interests ahead of public welfare and undermines local authority. Their concerns include:
- Leaving a Policy Gap: Critics say the moratorium could nullify hundreds of state laws designed to protect consumers from AI misuse, including scams, discrimination, and invasions of privacy. Without a federal substitute, they argue, the public would be exposed to unregulated risks. A bipartisan coalition of 40 state attorneys general called the plan “dangerous” and “destructive.”
- Weakening States’ Rights: States have long served as testing grounds for new laws, often informing national policy. Opponents argue the moratorium would halt this experimentation. Unique local issues—like Tennessee’s legislation to protect musicians from AI-generated deepfakes—could go unaddressed if the federal government takes full control. Over 260 state lawmakers from across the country have voiced opposition, calling the plan a federal overreach.
- Undermining Consumer Protections: Many existing state laws address concrete harms caused by AI, such as biased hiring practices or automated health insurance denials. Critics warn that halting these efforts could erode trust in AI and allow predatory or discriminatory technologies to flourish unchecked.
- No Guarantee of Federal Action: Given Congress’s slow track record on technology regulation—evident in its failure to pass privacy or social media laws—skeptics worry that a 10-year moratorium would amount to inaction, not progress. California Assemblymember Rebecca Bauer-Kahan pointed out that Congress’s history suggests no comprehensive AI laws will materialize any time soon.
- Internal Republican Dissent: Not all Republicans support the measure. Lawmakers such as Sen. Marsha Blackburn (R-TN), Sen. Josh Hawley (R-MO), and Rep. Marjorie Taylor Greene (R-GA) have criticized it for limiting states’ rights and failing to address real AI threats. Greene specifically called it an “infringement on state sovereignty.”
Conclusion
The push for a decade-long pause on state-level AI regulation has ignited a national debate over how best to balance innovation with consumer protection. While the tech industry and its allies argue that a moratorium is essential to maintaining America’s global edge in AI, critics warn it could expose the public to harm and undercut state power. As the Senate weighs the bill’s future, the outcome may set a precedent for how AI governance is shaped in the U.S. for years to come.