Scenario 2: The AI Ceiling

The AGI Series explores three radically different futures for artificial intelligence:

  1. AGI is imminent,

  2. AI will plateau permanently—never reaching AGI, and

  3. AGI is possible but far off.

Our current AI laws are written for a single imagined future. Through no one’s fault, the landscape is riddled with gaps born of our collective uncertainty about where this technology will leave us, and that uncertainty leaves policymakers guessing at the “correct” trajectory. But legal frameworks grounded in a single prediction are fragile: they can overreach, lag behind, or misalign with reality.

Scenario-based forecasting offers an alternative. By mapping the distinct legal and regulatory challenges that emerge under each possible AI future, we can identify which early signals to look for and which governance levers to activate at the right moment. This series helps us map that out.


Scenario 2: AI Hits a Ceiling

What if the AI we have right now is, more or less, all we ever get? That is the premise of this scenario.

In this “AI Ceiling” world, the long-promised breakthroughs—superintelligence, artificial general intelligence, emergent capabilities—never materialize. The curve flattens. Models plateau. Tools stabilize.

With the arms race cooling off and existential risks fading, regulatory attention finally turns to the slower, messier questions we’ve long deferred: How is AI actually being used? Who is it harming? And what systems of governance are overdue?

In the ceiling world, the threat we have to mitigate isn’t AGI. It’s scale without oversight.

What if AI stops growing? 

For years, we have built AI on the backbone of Moore’s Law: doubling compute meant smarter, better AI. Now we find that even after doubling or tripling the chips, we hit limits. The models are bigger, but the insights aren’t.

Leading labs begin hitting walls on core benchmarks. GPT-6, Gemini Ultra, Claude X: each one is faster, more efficient, slightly safer, but none is smarter in any fundamental way. Academic papers describe “performance flattening,” a structural limit on returns despite scale.

The Realization

Where does this leave us?

AI still saturates society in its current form: imperfect, narrow, and probabilistic. These systems remain extremely useful, even transformative for sectors like finance and tech, but they are not all-powerful. The arms race of AI development cools, and the focus pivots toward deployment and governance, not superintelligence.

There is no grand leap to general intelligence, let alone superintelligence, and no sentient machine writing its own code. Developers focus instead on deployment: scaling foundation models for niche industries, fine-tuning them for safety, and adapting them to human workflows. Rather than nationalizing chips, as the government does in the AGI-is-imminent scenario, policymakers shift toward regulating access and enforcing transparency at key chokepoints in the AI stack: compute providers, model registries, and deployment infrastructure.

However, military AI remains an outlier. It continues to attract deep investment and urgency, insulated from the broader market’s deceleration. Dual-use applications, autonomous targeting systems, and cyber operations ensure it remains a high-priority frontier, both technologically and diplomatically.

The realization that AGI isn’t imminent triggers a quiet but profound shift in Big Tech’s posture. No headline announces it, yet across labs and leadership teams a recalibration takes hold: OpenSentience sunsets its “cognitive emergence” division. Anthropy reframes its mission toward “safe and sovereign deployment.” Microsoft, Amazon, and Google redirect R&D away from generalist models and toward infrastructure contracts, sector-specific APIs, and compliance tooling. Funding shrinks, startups struggle, and researchers pivot to adjacent fields like cybersecurity and quantum computing. Large tech firms, resilient thanks to deeper pockets, recalibrate by refining existing AI capabilities.

The entire business model shifts from “frontier breakthroughs” to “foundational dominance,” because in a ceiling world, whoever owns the “AI pipes” (cloud infrastructure, inference chips, deployment platforms) controls the flow. And without the threat of a geopolitically destabilizing superintelligence, the rationale for total state control weakens. Regulation, not nationalization, becomes the preferred tool to manage accountability and mitigate harms such as algorithmic discrimination and deceptive UX in workplaces, healthcare, and legal systems. In a ceiling world, the policy challenge isn’t preventing AI from surpassing human control; it’s governing how AI is actually being used: who gets to deploy it, at what scale, and for what purposes.


The New Normal: Immediate Reactions

Eventually, a major AI lab publicly admits it has hit practical computational limits. Financial analysts immediately downgrade the stock ratings of AI-centric companies (Google, Microsoft, NVIDIA, CoreWeave), causing sudden market corrections and sell-offs. And where the money goes (or leaves, in this case), the business ventures follow. AI researchers see their funding dry up; they are let go, or they pivot into adjacent fields like cybersecurity and quantum computing.

Policy Shifts: 

When frontier innovation slows, AI policymaking sees a dramatic change. The knowledge that AI is indefinitely contained has legislators and regulators scrambling to see who can set the rules first. Up to this point, much of AI regulation has been paralyzed by uncertainty, by the fear of prematurely restricting a technology that might still have revolutionary breakthroughs ahead.

Policymakers have trodden lightly, worried that overregulation could stifle innovation. Others, like Senator Ron Wyden, worried that rigid rules, once enacted, would quickly become outdated. But the new realization that AI is indefinitely containable, that models are no longer leaping forward in intelligence, removes this ambient fear of falling behind. Policymakers are no longer chasing a moving target, so the perceived cost of inaction rises, and with it the political incentive to complete the regulatory architecture of AI.

Here are the biggest implications for AI policy:

In a ceiling world, regulators may start asking not just who built the model, but who is enabling it to scale. Accountability remains complex and challenging: who bears liability, the developer of the foundation model or the deployer applying it in harmful ways? Policymakers must distinguish clearly between these roles. This is a critical question to answer, because the greatest risks come not from capability jumps but from unchecked deployment. Regulatory attention therefore zeroes in on three critical nodes:

  1. Foundation Model Developers (OpenSentience, DeepThink): Even if they aren't responsible for how models are used, their choices shape everything downstream.

    1. Mandatory transparency reports, licensing systems, and alignment tool documentation requirements become standard.

  2. Infrastructure Providers (CoreWeave, GPU cloud providers): They shape who gets to deploy what, and how fast, because they sit at the bottleneck of compute access, especially for startups, governments, and enterprises that don’t own their own chips. This makes them a key chokepoint.

    1. Facing heightened scrutiny, these providers must comply with capacity reporting, prioritize compute allocation (e.g., defense versus commercial), and adhere to stringent environmental reporting guidelines.

  3. Downstream Deployers (startups, enterprises): This is where actual harms to consumers or workers occur.

    1. The FTC and DOJ actively prosecute biased outputs and manipulative user interfaces, enforcing sector-specific oversight—FDA for medical AI, CFPB for lending algorithms.


Concluding Thoughts:

In this AI-ceiling world, technology mirrors our societal priorities and shortcomings. AI governance no longer wrestles with hypothetical futures; it manages real, present-day issues amplified by technology. The central challenge isn’t whether we can build a superintelligent technology, but whether we can create institutions resilient enough to govern the technology we already possess.

In this scenario, U.S. policymakers will have to shift from reactive posturing to proactive governance. As the EU moves ahead with enforceable AI regulations—setting global norms on risk, transparency, and rights—the U.S. risks falling behind not in innovation, but in influence. Without a coherent federal framework, American companies face legal uncertainty abroad and domestic trust gaps at home. Policymakers must step up. They must define liability and build durable regulatory institutions. The AI arms race may have cooled, but the race for global emerging technology governance has only begun.

Next

AGI is Imminent. Part 1 of the AGI Series