Scenario 2: The AI Ceiling

The AGI Series explores three radically different futures for artificial intelligence:

  1. AGI is imminent;

  2. AI will plateau permanently—never reaching AGI; and

  3. AGI is possible but far off.

Our current AI laws are written for a single imagined future. Through no one's fault, the landscape is riddled with gaps born of our collective uncertainty about where this technology will leave us, and that uncertainty leaves policymakers guessing at the "correct" trajectory. But legal frameworks grounded in a single prediction are fragile: they can overreach, lag behind, or misalign with reality.

Scenario-based forecasting offers an alternative. By mapping the distinct legal and regulatory challenges that emerge under each possible AI future, we can identify the early signals to watch for and the governance levers to pull at the right moment. This series maps out that terrain.

Note: All company names, laboratory names, and model names such as “OpenSentience” and “DeepThink” are fictional and used for illustrative purposes only. References to real individuals refer to actual public figures and reflect their publicly stated positions or roles.


Scenario 2:
AI Hits a Ceiling

What if what we have right now is as far as AI ever goes? That is the premise of this scenario.

In this “AI Ceiling” world, the long-promised breakthroughs—superintelligence, artificial general intelligence, emergent capabilities—never materialize. The curve flattens. Models plateau.

With the arms race cooling off and existential risks fading, regulatory attention finally turns to the slower, messier questions we’ve long deferred: How is AI actually being used? Who is it harming? And what systems of governance are overdue?

Continue reading for a chronological narrative of this scenario and the major policy implications it carries for key sectors and U.S. national security.

What if AI stops growing? 

For years, we have built AI on an assumption as dependable as Moore's Law: doubling compute meant smarter, better AI. Now we find that even after doubling and tripling the chips, we hit limits. The models are bigger, but the gains aren't.

Leading labs have begun hitting walls on core benchmarks. GPT X, Ganymede Ultra, Cloude: each one was faster, more efficient, slightly safer. But none was smarter in a fundamental way. Academic papers describe "performance flattening," a structural limit on returns despite ever-greater scale.

Eventually, a major AI lab (OpenSentience, in this scenario) publicly admits it has hit practical computational limits. Financial analysts immediately downgrade the stock ratings of AI-centric companies, triggering sharp market corrections and sell-offs. And where the money leaves, the business ventures follow: AI researchers see their funding dry up, get let go, or pivot into niches and adjacent fields like cybersecurity and quantum computing.

The Infrastructure Reality

By July 2025, AI had already altered society so profoundly that there is no returning to a pre-AI world. Even without AGI on the horizon, AI will remain embedded in daily life, just in a different capacity than the race-to-AGI tools that once promised to push U.S. national interests ahead. Instead, AI settles into the role of infrastructure: an essential "pipe" that powers and optimizes critical sectors.

In finance, AI infrastructure might continuously run fraud detection, high-frequency trading models, and credit risk assessments—quietly steering trillions of dollars in transactions.

In tech, it will drive cloud-based productivity platforms and generate code for software development. But because there is no grand leap to general intelligence or superintelligence, developers turn their focus from breakthroughs to refinement: scaling foundation models for niche industries, fine-tuning them for safety, and adapting them to human workflows.

So yes, AI still saturates society in its current form—imperfect, narrow, and probabilistic. It is still extremely useful, even transformative for these two sectors, but it is not all-powerful.

Actors Impacted

  1. Tech companies + researchers

The realization that AGI isn't imminent, and may not be achievable at all, triggers a quiet but profound shift in Big Tech's posture. Before the headlines hit, a recalibration takes hold: OpenSentience sunsets its "cognitive emergence" division. Anthropy reframes its mission toward "safe and sovereign deployment." Other Big Tech titans redirect R&D away from generalist models and toward infrastructure contracts, sector-specific APIs, and compliance tooling.

In the startup world, funding shrinks. In academia, researchers pivot to adjacent fields like cybersecurity and quantum computing. Large tech firms, more resilient thanks to deeper pockets, recalibrate by refining existing AI capabilities.

  2. Government

Instead of the government nationalizing chips like in the AGI-is-imminent scenario, policymakers shift toward regulating access and enforcing transparency at key chokepoints of deployment.

However, military AI remains an outlier. It continues to attract deep investment and urgency, insulated from the broader market’s deceleration. Dual-use applications, autonomous targeting systems, and cyber operations ensure it remains a high-priority frontier, both technologically and diplomatically.

Policy Shifts

When frontier innovation slows, AI policymaking changes dramatically. The knowledge that AI's capabilities are indefinitely contained sends legislators and regulators scrambling to see who can set the rules first. Up to this point, much of AI regulation has been paralyzed by uncertainty: by the fear of prematurely restricting a technology that might still have revolutionary breakthroughs ahead.

Until now, policymakers have trodden lightly, worried that overregulation could stifle innovation. Others, like Senator Ron Wyden, have worried that rigid rules, once enacted, would quickly become outdated. But the new realization that AI is indefinitely containable, that models are no longer leaping forward in intelligence, removes the ambient fear of falling behind. Policymakers are no longer chasing a moving target. The perceived cost of inaction rises, and with it the political incentive to complete the regulatory architecture of AI.

Here are the biggest implications for AI policy:

In a ceiling world, regulators may start asking not just who built the model, but who is enabling it to scale. Accountability remains complex: who bears liability, the developer of the foundation model or the deployer applying it in harmful ways? Policymakers must distinguish clearly between these roles, because in this world the greatest risks come not from capability jumps but from unchecked deployment. Regulatory attention therefore zeroes in on three critical nodes:

  1. Foundation Model Developers (OpenSentience, DeepThink): Even if they aren't responsible for how models are used, their choices shape everything downstream.

    • Mandatory transparency reports, licensing systems, and alignment tool documentation requirements become standard.

  2. Infrastructure Providers (CoreWove, GPU cloud providers): They shape who gets to deploy what, and how fast, because they sit at the bottleneck of compute access, especially for startups, governments, and enterprises that don't own their own chips. This makes them a key chokepoint.

    • Facing heightened scrutiny, these providers must comply with capacity reporting, prioritize compute allocation (e.g., defense versus commercial), and adhere to stringent environmental reporting guidelines.

  3. Downstream Deployers (startups, enterprises): This is where actual harms to consumers or workers occur.

    • The FTC and DOJ actively prosecute biased outputs and manipulative user interfaces, enforcing sector-specific oversight—FDA for medical AI, CFPB for lending algorithms.


Concluding Thoughts:
 

In contrast to Scenario 1, where AGI is imminent, Scenario 2 forces policymakers to shift from racing toward breakthroughs to governing the limits of existing systems.

In this AI-ceiling world, technology mirrors our societal priorities and shortcomings. AI governance no longer wrestles with hypothetical futures; it manages real, present-day issues amplified by technology. The central challenge isn't whether we can build a superintelligent technology, but whether we can create institutions resilient enough to govern the technology we already possess.

U.S. policymakers will have to pay close attention: they must shift from reactive posturing to proactive governance. As the EU moves ahead with enforceable regulation that treats AI as infrastructure, setting global norms on risk, transparency, and rights, the U.S. risks falling behind. Without a coherent federal framework, American companies face legal uncertainty abroad and trust gaps at home. Policymakers must step up: define liability and build durable regulatory institutions.

The race to build AI may be over, but the race to properly govern it is just starting.

Next: AGI is Imminent. Part 1 of the AGI Series