Before the Breakthrough: AGI Scenarios
Abstract:
When and whether we’ll reach Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) is a topic dominating tech circles—and the conversation shows no signs of slowing down. For the purposes of this article, AGI is defined as a system capable of performing a wide range of cognitive tasks at a human level—such as reasoning, learning, and creativity—while also applying its knowledge flexibly across different domains without needing to be reprogrammed for each task. ASI, by contrast, is an advanced form of AI that surpasses human intelligence in virtually every area. As described by Nick Bostrom, it would be “much smarter than the best human brains in practically every field,” enabling it to outperform leading experts in science, strategy, and innovation. From the recent AI-2027, which predicts an AI-induced apocalypse, to its more optimistic counterpart, AI as Normal Technology, both circulating in the past few months, the current discourse is rich with speculation about timelines and existential stakes. Dozens of op-eds attempt to map out the evolving taxonomy of AI futures: are we facing an imminent AGI boom? Is it far off but plausible? Or is AI development destined to plateau?
But amidst all this theorizing, one thing strikes me: voices in law and policy are notably missing from technical discourse. While technologists debate timelines and thresholds, the legal scaffolding around these developments remains alarmingly underdeveloped and dangerously underestimated. RAND researchers warn that progress on AI rules is still “sporadic, reactive, and subject to political influence.” Few official frameworks “take each scenario seriously” around the AI race. And that matters. Because as history shows, the people writing the rules—drafting contracts, shaping policy, deciding who gets to build what—are just as central to these turning points as the engineers. Without this thinking, a “move fast, break things” mentality privileges innovation over accountability.
This series, “Before the Breakthrough: AGI Scenarios,” seeks to bridge that gap. Modeled loosely after AI-2027, this project breaks down three futures of AI development in the U.S.: “AGI-Imminent,” “Slow-Burn AGI,” and “AI Ceiling.” But unlike many scenario exercises, this one aims to inject legal and policy foresight into the mix. No one can predict the future—but we can prepare better if we ask sharper questions. Who writes the rules if AGI emerges tomorrow? What happens if a state tries to nationalize a model? What acts or treaties might be needed, and what values would they preserve—or erode?
These scenarios are a way to make sense of the legal vacuum that surrounds one of the most consequential technologies of our time. My hope is to widen the aperture of this debate: to surface what’s missing and offer a more grounded path forward.
My message for practitioners is straightforward: track the signals that tell you which scenario we’ve entered, then staff accordingly. In the “AGI-Imminent” world, for example, export-control lawyers may shape grand strategy. In the “Slow-Burn AGI” world, privacy litigators and local regulators may carry the day. In any case, the law will not sit on the sidelines. It will set the guardrails or, if ignored, become a costly bottleneck to U.S. national strategy.
Intro:
People underestimate the role of law in the age of AI—especially when the conversation turns to militarization, global power, or existential risk. The discourse tends to orbit around compute, chips, algorithms, or CEOs. But law, as fragmented and imperfect as it is, is one of the only levers that can define what power looks like in this new AI era.
As someone coming to this space with a background in policy and law, I’ve come to see lawyers not as lagging behind, but as the ones quietly shaping the scaffolding of what’s to come. Yes, law often trails innovation. And yes, lawyers themselves are still figuring out their place in this evolving terrain. But if we’re moving toward more advanced AI, and possibly AGI, we need more than engineers and businessmen—we need legal clarity. We need people at the table representing companies and governments as they negotiate over control of frontier models and debate how these technologies should be deployed. We need people doing the work to safeguard privacy—people drawing the lines between public interest and private power. Lawyers are the ones handling these questions, yet the public tends to think of lawyers as irrelevant in this domain, or worse, as people who just show up too late. “Law continues to struggle against the pace of technology,” I still remember a professor saying in class.
Historical examples:
Acts, policies, laws—they may seem procedural, but they are much more than that. They are how a nation tells itself what matters. They shape who leads, who follows, and who is impacted. They are every bit as consequential in deciding the global order as the geopolitical strategies they support. Just look at history: the Manhattan Project is famous for helping the U.S. develop the atomic bomb and shift the balance of power in World War II and the Cold War. But, little known to many, it was as much a legal triumph as it was a scientific one. The First War Powers Act of 1941 granted the U.S. government sweeping wartime authority, including the power of eminent domain, which it used to quietly seize land for top-secret research and testing sites like Los Alamos. Few people realize that non-disclosure agreements were also critical to keeping the Manhattan Project secret—and, arguably, to winning the war. Scientists and workers signed strict NDAs and were legally bound under the Espionage Act, which made leaking any details a serious federal offense.
Or take the Official Secrets Act (OSA). Alan Turing’s Bombe machine, which helped crack the Nazi Enigma code and shortened WWII by an estimated two to four years, was legally classified under the OSA. That meant hundreds of policymakers and lawyers worked in tandem to keep the machine a national secret—guarding against leaks, obscuring critical contributions, and shielding Turing from wartime enemies. The Official Secrets Act was essential legal infrastructure for turning cryptography into a wartime tool.
The gears of the legal and regulatory machine are always grinding; without the right machinery in place, invention alone rarely rewrites history. Had the lawyers not worked as swiftly as they did, it’s difficult to say whether the science alone would have been enough to turn the tide of the war. And yet, that legal scaffolding is rarely acknowledged.
History shows us it’s possible to build legal tools that are as dynamic and adaptive as the technologies they govern. But that imperative feels blurred when it comes to today’s frontier models. The policy landscape around advanced AI is fractured, delayed, and often reactive. The challenges of this moment—cybersecurity, autonomous systems, cross-border compute—require legal minds that understand both doctrine and the technical and geopolitical terrain on which AGI is unfolding. At Fidelity Investments, where I worked in compliance around emerging technologies like cryptocurrencies and AI, it was clear how little coordination existed between the policy and tech folks. The prevailing guidance from regulators consisted mainly of punitive measures like sanctions and enforcement actions, while lawmakers clamored over whether states or the federal government get to write the rules on emerging technology. Part of the issue, I saw, is that today’s legal conversation is stuck on buzzwords like “AI” or “emerging technologies,” as though all forms of machine cognition were the same.
But the law can draw those lines—if we let it. If we invest in it. If we train lawyers who not only understand doctrine, but also the technical and geopolitical terrain they now inhabit. If we ask who can be held accountable for decisions over a technology that can shift the world in unimaginable ways. That’s the kind of legal work I intend to do: to close the gap between speed and structure, between innovation and accountability, before we lose the chance to get it right.
I begin with the scenario that feels closest to home and is arguably the most loaded: “AGI-Imminent.”