Debunking Myths about the 10-Year AI Moratorium
Yesterday, the House of Representatives advanced the Reconciliation Bill. This “Big Beautiful Bill” is a 1,000+ page document central to the federal budget legislation. The bill contains many contentious issues, such as gun control, healthcare programs, and clean energy tax credits, but what I was most intrigued by was the AI provision: the moratorium. That provision would bar states from enforcing any laws regulating artificial intelligence models or systems for a decade.
I tuned in to the May 13 Congressional hearing (full recording here) and, having audited Professor Persily, Professor Kelly, and Professor Florence G’sell’s class on Governing Artificial Intelligence: Law, Policy, and Institutions at Stanford Law School last Thursday, I feel compelled to write this post. My mind was buzzing with conversations about the Byrd Bath (which I will unpack below) and what the passage of this bill means for the future of AI regulation.
The bill squeaked through 215-214, a one-vote margin, with every Democrat opposed. I felt a mix of things: dubious (of the incentives of those pushing it forward, given how partisan the issue is), curious whether it will make it past the Senate, and deeply worried. If enacted, the moratorium could block or unwind the 45 state-level AI bills introduced just last year, according to the National Conference of State Legislatures.
Supporters say the provision fixes a messy “patchwork” by giving Congress time to craft a federal framework. I question whether 1) having no patchwork (and no rules) is really better than having a patchwork and 2) silencing states that want to protect their residents is the solution. As we look ahead to the Senate vote, likely in June or July, I think it’s crucial to stay aware of the updates and understand what a watershed moment this might be.
The 10-Year Moratorium: What Is It?
The draft from the House Energy and Commerce Committee forbids states from enacting or enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for a full decade. Everyone agrees AI needs some guardrails; the debate is over how to build them.
The consequence: in my opinion, the impact of this moratorium would be bleak. It would mean 10 years of unchecked innovation, no guardrails, and no user-safety guarantees. Bad news. If the Senate passes it and the White House signs it, don’t expect robust federal rules, especially under the current administration, which has been committed to deregulation.
I thought long and hard about how to structure the thoughts swimming through my head. In the end, I realized that myth-busting is the best way to do it, because some of the loudest arguments supporting the moratorium rely on assumptions that sound convincing on the surface but deserve scrutiny. I hope that breaking these myths down will invite more honest and informed conversations about what’s really at stake.
Myth 1: “China will eat our lunch if we regulate.”
Can’t we have regulation and still compete with China?
One of the loudest arguments from Republican lawmakers is that regulating AI domestically will slow us down, and let Beijing race ahead to artificial general intelligence (AGI). Rep. Jay Obernolte and others frame this as a binary: we can either have guardrails or global dominance. But that’s a false dichotomy.
It is possible for the U.S. to maintain its edge while still having smart, necessary safeguards. And the reality is that China isn’t operating in a completely laissez-faire environment either. In fact, in a session with the Political Bureau of the CPC Central Committee, President Xi emphasized that he wants to accelerate the “formulation and improvement of relevant laws, regulations, policy systems, application standards and ethical guidelines” for AI. Our biggest AI competitor also recognizes that innovation paired with critical regulation is the formula for winning the AI race. Am I saying we should copy our geopolitical rivals? No. But refusing to write the rulebook at all is a mistake. What we should strive for is continuity in AI policy, something I think both Republicans and Democrats can get behind. The moratorium doesn’t give us continuity. It gives us a decade of stalled time in which the federal government does nothing.
If we pause for a decade, the U.S. risks becoming the AI Wild West while Europe, China, and others draft the global standards that will shape adoption worldwide. And if there’s one thing the UN’s Open-Ended Working Group (OEWG) on cybersecurity and ICTs has taught me, it’s this: the standards-setter wins in the global arena. There’s a reason the U.S. is the global leader in international human rights: we set the standards on human rights norms and laws. Can’t we aim for the same in AI?
As guest speaker Dr. Gemma Galdón-Clavell said in the Stanford class I attended, trust in AI hasn’t caught up with adoption: broad adoption has happened even without consumers buying in. While I agree this is the case, for serious sectors (banks, hospitals, the Department of Defense), trust will be the deciding factor, and building trustworthy AI will come down to regulation. If we have trustworthy AI, we’ll own those high-value markets. China won’t even have a shot.
I worry that taking a step back from regulation means taking a backseat in these conversations, and with it, ceding the U.S.’s global dominance.
Myth 2: “Politicians supporting the bill are doing so purely in the national interest.”
I say let’s follow the money.
While the bill is framed as a forward-looking effort to safeguard innovation, the financial and lobbying networks behind it raise serious questions about legislative intent. Campaign contributions and industry influence appear to play a substantial, if underexamined, role.
1. Rep. Jay Obernolte (R-CA), co-chair of the Bipartisan House Task Force on Artificial Intelligence and a leading advocate for the moratorium.
According to public filings:
His top campaign contributor in 2023-2024 was Google, with Apple also ranking prominently.
The industry that funded him most heavily? Lobbyists, who contributed $112,900 to him in just one year.
Obernolte received substantial funds from lobbyists Elise Finley Pickering and Dean Rosen, both of whom have been retained by major AI players such as IBM and the venture firm Andreessen Horowitz (a16z).
The broader lobbying effort in support of this moratorium includes a coalition of powerful actors (like IBM, Meta, Nvidia, a16z, and billionaire donor Charles Koch) who collectively shape the AI landscape and have a clear interest in limiting regulatory oversight.
None of this proves direct causation. But the alignment between campaign funding sources and policy positions is striking and merits attention. While it’s politically expedient to frame the moratorium as a measure for the “future of the country,” the financial underpinnings complicate that narrative. At the very least, they challenge the claim that this legislation is driven solely by the public good.
It’s a fair question to ask: whose interests are truly being served through this moratorium?
2. President Trump
President Trump’s embrace of the moratorium appears consistent with his broader deregulatory stance, and it aligns closely with the interests of his donors in the tech sector.
For example, Larry Ellison, co-founder of Oracle, has emerged as a key political ally, reportedly dining with Trump and contributing tens of millions of dollars to GOP causes. Similarly, Jeff Yass, a tech investor with a financial stake in TikTok’s parent company, has reportedly discussed tech policy with Trump. Notably, Trump reversed his prior position on banning TikTok shortly after meeting with Yass, who was expected to make a substantial campaign donation.
By championing the moratorium, Trump reinforces support among powerful, pro-business backers, many of whom favor minimal federal oversight of emerging technologies. Whether explicitly stated or not, this provision reflects priorities shared by key financial supporters who benefit from deregulatory inertia.
3. Rep. Rich McCormick (R-GA): a vocal proponent of the moratorium and a member of the Bipartisan House Task Force on AI
While Rep. Rich McCormick has not received significant direct contributions from Big Tech firms, his campaign finance record suggests alignment with pro-business interests. His top donor in 2024 was Cox Enterprises, a major Atlanta-based media and communications company with stakes in technology. He’s also received support from large Georgia-based businesses, including Home Depot, and from several conservative political action committees (PACs). While I couldn’t find direct ties to Big Tech, McCormick’s incentives align with the broader Republican funding ecosystem that opposes heavy regulation.
4. Meta
Lobbying data reinforces the scale of industry influence: Meta alone spent over $30 million on lobbying in 2024, and it has spent nearly $8 million so far in 2025.
These numbers may not offer definitive answers, but they raise important questions. When legislative proposals disproportionately benefit industries that spend heavily to influence lawmakers, it’s hard not to question the incentives of those pushing the bill.
Myth 3: “Halting (state-level) AI rules for ten years will help us out-innovate China.”
If the moratorium passes through the Senate as it is, I don’t think we’ll see meaningful AI regulation. We’ve already seen how this administration has prioritized deregulation: recall the executive order requiring that ten federal rules be eliminated for every new one adopted. At best, we’ll get a minimal framework. At worst, nothing at all.
And that comes with consequences: weakened online safety standards, increased cybersecurity threats, and diminished consumer protection. We need a framework: something responsive, enforceable, and forward-looking.
Proponents argue that the pause is necessary to unlock $500 million allocated to the Department of Commerce for IT and cybersecurity upgrades. But the connection between pausing state-level AI regulation and improving federal cybersecurity infrastructure remains tenuous. It’s unclear how tying state lawmakers’ hands will help coordinate national AI development.
Take 4: Two underreported hazards
This one isn’t a myth, but rather my reflection on two significant, under-discussed risks posed by the AI moratorium: (1) its impact on existing state laws, and (2) the precedent it could set for federal preemption in technology regulation.
1. The erosion of important state protections
If enacted, the moratorium could block the enforcement of a range of current laws, potentially even retroactively. For example, it could undermine California’s deepfake consent law or Colorado’s healthcare AI transparency requirements. These laws were designed to enhance user protections in rapidly evolving areas of technology, such as AI-generated media and decision-making in medical systems.
The implications are broad: protections related to child safety, domestic abuse, and other high-risk applications of AI could be rendered unenforceable. Just yesterday, Dr. Gemma Galdón-Clavell shared that she’s working with California lawmakers on a bill that would require AI companies to implement audit mechanisms. Laws like these serve as critical guardrails, ensuring companies develop AI responsibly, with awareness of both risk and impact. A decade-long moratorium could freeze this progress.
2. The scope of federal preemption
The moratorium’s language also raises concerns about federal overreach. According to legal analyses, formal legislation isn’t even required to establish federal preemption under this framework, meaning the federal government could prevent states from enacting their own regulations by default. In effect, it gives Washington a blank check to override local policy, regardless of whether comprehensive federal rules are ever developed.
This is particularly troubling in a deregulatory climate. If the current or a future administration favors minimal intervention, this precedent could create a regulatory vacuum. It raises important questions: can agencies like the SEC or CFTC intervene to mitigate the harms of preemption? And if not, how do we ensure meaningful oversight? Does the fall of Chevron deference play any role here? These are questions I’d like to explore further in a follow-up piece.
What to watch next?
The bill now advances to the Senate, where it will face procedural scrutiny. Democrats are expected to invoke the Byrd Rule, which bars provisions deemed “extraneous” to budget legislation from being included in reconciliation packages. I think this provision will face a serious challenge (what some call a “Byrd bath”) in June. Budget riders that don’t materially affect federal spending are routinely stripped, and both Democrats and federalism-oriented Republicans are preparing to contest it.
Lawmakers across the aisle, attorneys general, and public-interest groups have expressed concern about the moratorium’s implications. Preventing it from reaching the president’s desk would require several GOP senators to break with their party: a tall task, but not an impossible one.
So, in short:
If the moratorium survives the Senate? Expect a summer of furious lobbying. State AGs (40 so far) have already fired a warning shot opposing the freeze.
If it doesn’t pass? Don’t relax just yet. The same coalition is working on a standalone preemption bill. Keep watch.
If you care about balancing AI innovation and accountability in the U.S., now is the time to call your senator, submit that op-ed, or at least forward this post to your group chat.
(Updated, 5/28/25): Check out this fascinating article by Tech Policy that covers similar points.