Scenario 2: The AI Ceiling
The AGI Series explores three radically different futures for artificial intelligence:
1. AGI is imminent,
2. AI will plateau permanently, never reaching AGI, and
3. AGI is possible but far off.
Our current AI laws are written for a single imagined future. Through no one's fault, the landscape is riddled with gaps born of our collective uncertainty about where this technology will take us, leaving policymakers to guess at the "correct" trajectory. But legal frameworks grounded in a single prediction are fragile: they can overreach, lag behind, or misalign with reality.
Scenario-based forecasting offers an alternative. By mapping the distinct legal and regulatory challenges that emerge under each possible AI future, it becomes easier to identify the early signals to watch for and to activate the right governance levers at the right moment. This series helps us map that out.
Scenario 2: AI Hits a Ceiling
What if the AI we have right now is as good as it gets? That is the premise of this scenario.
In this “AI Ceiling” world, the long-promised breakthroughs—superintelligence, artificial general intelligence, emergent capabilities—never materialize. The curve flattens. Models plateau. Tools stabilize.
With the arms race cooling off and existential risks fading, regulatory attention finally turns to the slower, messier questions we’ve long deferred: How is AI actually being used? Who is it harming? And what systems of governance are overdue?
In the ceiling world, the threat we have to mitigate isn’t AGI. It’s scale without oversight.
What if AI stops growing?
For years, we built AI on the backbone of Moore's Law: doubling compute meant smarter, better AI. Now we find that even after doubling or tripling the chips, we hit limits. The models are bigger, but the insights aren't.
Leading labs begin hitting walls on core benchmarks. GPT-6, Gemini Ultra, Claude X: each one is faster, more efficient, slightly safer. But none is smarter in any fundamental way. Academic papers describe "performance flattening," a structural limit on returns despite scale.
The Realization
Where does this leave us?
AI still saturates society in its current form: imperfect, narrow, and probabilistic. It is still extremely useful, even transformative for sectors like finance and tech, but it is not all-powerful. The arms race of AI development cools, and the focus pivots toward deployment and governance, not superintelligence.
There is no grand leap to general intelligence, let alone superintelligence—no sentient machine writing its own code. Instead, developers will focus on deployment: scaling foundation models for niche industries, fine-tuning them for safety, and adapting them to human workflows. Instead of the government nationalizing chips like in the AGI-is-imminent scenario, policymakers shift toward regulating access and enforcing transparency at key chokepoints in the AI stack: compute providers, model registries, and deployment infrastructure.
However, military AI remains an outlier. It continues to attract deep investment and urgency, insulated from the broader market’s deceleration. Dual-use applications, autonomous targeting systems, and cyber operations ensure it remains a high-priority frontier, both technologically and diplomatically.
The realization that AGI isn't imminent triggers a profound shift in Big Tech's posture. No headline announces it, yet across labs and leadership teams a quiet recalibration takes hold: OpenSentience sunsets its "cognitive emergence" division. Anthropy reframes its mission toward "safe and sovereign deployment." Microsoft, Amazon, and Google redirect R&D away from generalist models and toward infrastructure contracts, sector-specific APIs, and compliance tooling. Funding shrinks, startups struggle, and researchers pivot to adjacent fields like cybersecurity and quantum computing. Large tech firms, resilient with deeper pockets, adapt by refining existing AI capabilities.
The entire business model shifts from "frontier breakthroughs" to "foundational dominance," because in a ceiling world, whoever owns the "AI pipes" (cloud infrastructure, inference chips, deployment platforms) controls the flow. And without the threat of a geopolitically destabilizing superintelligence, the rationale for total state control weakens. Regulation, not nationalization, becomes the preferred tool to manage accountability and mitigate harms (such as algorithmic discrimination and deceptive UX) in workplaces, healthcare, and legal systems. In a ceiling world, the policy challenge isn't about preventing AI from surpassing human control; it's about governing how AI is actually being used: who gets to deploy it, at what scale, and for what purposes.
The New Normal: Immediate Reactions
Eventually, a major AI lab publicly admits it has hit practical computational limits. Financial analysts immediately downgrade stock ratings of AI-centric companies (Google, Microsoft, NVIDIA, CoreWeave), triggering sudden market corrections and sell-offs. And where the money goes (or leaves, in this case), the business ventures follow. AI researchers see their funding dry up. They get let go, or pivot into niches or alternative tech fields like cybersecurity and quantum computing.
Policy Shifts:
When frontier innovation slows, AI policymaking sees a dramatic change. The knowledge that AI's capabilities are indefinitely contained has legislators and regulators scrambling to see who can set the rules first. Up to this point, much of AI regulation has been paralyzed by uncertainty: the fear of prematurely restricting a technology that might still have revolutionary breakthroughs ahead.
Policymakers have trodden lightly, worried that overregulation could stifle innovation. Others, like Senator Ron Wyden, worried that rigid rules, once enacted, would quickly become outdated. But the new realization that AI is indefinitely containable, that models are no longer leaping forward in intelligence, removes this ambient fear of falling behind. Policymakers are no longer chasing a moving target. The perceived cost of inaction rises, and with it, the political incentive to complete the regulatory architecture of AI.
Here are the eight biggest implications for AI policy:
The first major shift is regulatory reorientation towards ethical deployment. With limited marginal gains from bigger models, the true value of AI lies in how responsibly existing data is handled. Medical diagnostics, facial recognition, hiring algorithms, and automated scoring systems all come under intensified scrutiny because as innovation slows, the spotlight shifts—giving the public more time to see how AI is actually used, not just how it's imagined. Privacy and data governance, once overshadowed by promises of AI-led transformation, return to the forefront.
Policymakers begin regulating AI less like a volatile innovation and more like a core piece of infrastructure—akin to utilities or financial systems. That means more compliance, more audits, and more integration into existing governance regimes. The federal appetite for aggressive industrial policy in AI wanes. Instead, public spending pivots: toward retraining programs, unemployment relief, AI literacy, and diversification into adjacent sectors.
When AI is no longer a zero-sum arms race, international alignment becomes more attainable. With fewer incentives to “win” the AI race, countries develop shared interests in stabilizing, standardizing, and regulating existing capabilities. This opens space for long-stalled agreements on AI misuse, cross-border data flows, and foundational model transparency. In multilateral forums, the tone shifts. Dialogue becomes less about “leadership” and more about “stewardship.” The window opens for cohesive global governance frameworks, including minimum safety standards, AI misuse prohibitions, and export protocols. Think less “race to the top,” more “secure the middle.”
At home, the debate between state and national rules rages on. With AGI off the table, the battle over AI governance goes local. In some states, proactive lawmakers treat AI policy as the new environmentalism, tailoring guardrails to local values and vulnerabilities. California and New York push aggressive transparency mandates, while Texas and Florida resist stringent controls, seeing them as impediments to economic freedom. In such regions, AI regulation becomes politically polarizing. The debate over federal preemption intensifies, but without the existential urgency of the AGI-Imminent scenario, federal momentum slows, and the tug-of-war between state-level regulation and a federal framework drags on. The 2026-27 election cycle becomes a turning point: will voters reward states for bold AI policy experimentation, or demand harmonized national standards? The outcome hinges on two forces: (1) the political will of Congress to legislate through gridlock, and (2) the economic pressure from industry coalitions that increasingly struggle with compliance across state lines. Until then, the patchwork persists. But ultimately, federal consistency wins out, driven by industry demands for uniform rules that avoid patchwork complexity.
The U.S.-China tech rivalry shifts. With AI no longer an existential threat or game-changing advantage, policymakers refocus attention on emerging threats like quantum computing, redirecting diplomatic energies elsewhere. At home, national security agencies reduce AI-specific task forces, fold them into broader tech governance portfolios, and redirect attention toward securing critical infrastructure against misuse, whether AI-powered or not.
As AI plateaus, the U.S.–China rivalry evolves. Rather than framing AI as a race for cognitive supremacy, the conversation shifts to economic supremacy over the existing hardware. As we’ve already seen with current AI, software innovation becomes commoditized, blunting its role as a differentiator. But hardware capacity and deployment scale remain strategic bottlenecks. Whoever controls the chips, the compute, and the infrastructure decides how fast and broadly AI can be applied.
Domestically, AI regulators begin sharing oversight, data, and standards. Some call for a White House policy czar, but the lack of a singular existential threat makes bureaucracy, rather than urgency, the dominant force.
Across the Atlantic, the EU has taken the early lead with a sweeping horizontal framework: the AI Act. The U.S., traditionally hands-off and sector-specific, shifts cautiously toward more comprehensive regulation, wary of EU-style rigidity but mindful of falling behind global standards. Rather than mimicking the EU, a pragmatic American-style framework emerges, blending market incentives with targeted, principle-driven statutes. Both systems converge on shared values: fairness, explainability, safety, and transparency. Regulatory paralysis ends, replaced by informed caution and proactive governance.
Perhaps the most overlooked but lasting shift is meta-political: AI governance becomes a blueprint. Much like financial regulation or environmental law, AI policy becomes a case study in how to govern complex, fast-evolving systems, offering a template for other technologies at similarly ambiguous stages, like:
- Neurotechnology
- Quantum computing
- Synthetic biology
- Autonomous robotics
- Geoengineering
In a ceiling world, regulators may start asking not just who built the model, but who's enabling it to scale. Accountability remains complex: who bears liability, the developer of the foundation model or the deployer applying it in harmful ways? Policymakers must distinguish clearly between these roles. This is a critical question to answer, because the greatest risks come not from capability jumps but from unchecked deployment. So regulatory attention zeroes in on three critical nodes:
Foundation Model Developers (OpenSentience, DeepThink): Even if they aren't responsible for how models are used, their choices shape everything downstream.
Mandatory transparency reports, licensing systems, and alignment tool documentation requirements become standard.
Infrastructure Providers (CoreWeave, GPU cloud providers): They shape who gets to deploy what, and how fast, sitting at the bottleneck of compute access, especially for startups, governments, and enterprises that don't own their own chips. This makes them a key chokepoint.
Facing heightened scrutiny, these providers must comply with capacity reporting, prioritize compute allocation (e.g., defense versus commercial), and adhere to stringent environmental reporting guidelines.
Downstream Deployers (startups, enterprises): This is where actual harms to consumers or workers occur.
The FTC and DOJ actively prosecute biased outputs and manipulative user interfaces, while sector-specific regulators provide oversight: the FDA for medical AI, the CFPB for lending algorithms.
Concluding Thoughts:
In this AI-ceiling world, technology mirrors our societal priorities and shortcomings. AI governance no longer wrestles with hypothetical futures; it manages real, present-day issues amplified by technology. The central challenge isn't whether we can build superintelligent technology, but whether we can create institutions resilient enough to govern the technology we already possess.
In this scenario, U.S. policymakers will have to shift from reactive posturing to proactive governance. As the EU moves ahead with enforceable AI regulations—setting global norms on risk, transparency, and rights—the U.S. risks falling behind not in innovation, but in influence. Without a coherent federal framework, American companies face legal uncertainty abroad and domestic trust gaps at home. Policymakers must step up. They must define liability and build durable regulatory institutions. The AI arms race may have cooled, but the race for global emerging technology governance has only begun.