AGI is Imminent. Part 1 of the AGI Series

Read Before the Breakthrough: AGI Scenarios to understand the inspiration behind this series.


The AGI Series argues that there are three core scenarios for the future of AI development: AGI is imminent, AGI is possible but far off, and AI will reach a ceiling (meaning we will never reach AGI).

Scholars, policymakers, politicians, and lawyers keep sketching the “perfect” rulebook for regulating AI, yet they overlook the elephant in the room: we have no idea which future we’re writing it for. The laws and policies meant to govern AI are static; their reach extends only as far as our current understanding of AI development, and that understanding is limited. We are constrained not only by the gap in our knowledge of how the technology actually works, but also by the unpredictability of how it may change. These are the missing links in our regulatory imagination.

But what if these limits didn’t exist? What if, instead of anchoring policy to our current blind spots, we tried to forecast across multiple futures? Suppose we could fast-forward through the timeline and spot the tipping points: would that help us match each phase of AI with the right policy and legal levers?

I believe it absolutely can. We will never have a window into the future, but the AGI Series introduces a forecast that may help us prepare across radically different scenarios. Whether AGI is around the corner, decades away, or never arrives at all, the goal of this series is to map which government actors and legal tools will matter at each stage of AI’s evolution. Who will intervene? What statutes will they invoke—or create? What types of lawyers will be called upon?

My goal with this series is to address the missing link by spotlighting which lawyers and government arms will matter in different AI scenarios, what statutes they’ll reach for (or create), and how they’ll influence this new technology.

Reaching AGI, in any of these futures, is not just a scientific milestone. It’s a matter of national security. It’s also a matter of legal authority and legitimacy. And we shouldn’t assume the public will know when the threshold is crossed; the signs may come subtly, like a spike in government datacenter procurement, a sudden classification of safety research, or abrupt, unexplained jumps in model capability.

So, our job in the policy and legal world is to stay alert. If the policy and legal worlds keep looping through abstract frameworks and releasing rulebooks that expire before the ink dries, they will trail the technology by a crucial step. The AGI Series seeks to create a clear roadmap of the ways lawyers, legislators, and regulators can coordinate at every stage of AI’s evolution.

I hope that articles like this one will help give more direction to our conversations.


The “AGI-Imminent” scenario (Scenario 1) starts from one core provocation:

If a machine that can out-think, out-strategize, and out-invent humans arrives, the race to wield it will look a lot less like Silicon Valley’s sprint for market share and a lot more like the Cold War scramble for nukes. It’s an unprecedented amount of power that no one country has at the moment. A former CIA Deputy Director noted that the nation that achieves AGI first “will gain a strategic advantage that may prove impossible to overcome,” and that “America has no time to waste.” Even President Putin famously acknowledged that “whoever becomes the leader in this sphere will become the ruler of the world.”

Even though Stanford’s 2025 AI Index Report found that the U.S. still produces the most frontier models, China is closing the performance gap at alarming speed: in 2024, U.S. labs released 40 frontier-grade models while China released 15, but Chinese accuracy on MMLU and HumanEval closed from double-digit deficits in 2023 to near parity in 2024. Both countries frame AI leadership as strategic primacy. For this reason, I focus mainly on the geopolitical interplay of the U.S. and China throughout this series.

A few key terms: 

  • AGI Command Group (ACG): The classified interagency task force that the National Security Council (NSC) quietly forms, in partnership with DOD, DARPA, CIA, DOE, and NIST. It is authorized to bypass sectoral regulators and formalize the government’s exclusive partnership with OpenSentience, the U.S.’ leading frontier AI lab. 

  • Menlo Directive: The codename for the National Security Council’s AGI Command Group (ACG), reflecting the heightened secrecy surrounding the development of the first AGI, much like the Manhattan Project.

  • OpenSentience: The U.S.’ leading frontier AI lab and the first to cross the AGI threshold.

  • DeepCompute: China’s leading frontier AI lab racing against OpenSentience to secure strategic dominance through AGI. 

  • Five Eyes Intelligence: This is a real intelligence group that spans five countries: the United States, the United Kingdom, Canada, Australia, and New Zealand. In the AGI-Imminent scenario, the U.S. shifts away from deep coordination with allies—even those in the Five Eyes. Echoing the AI 2027 forecast, strategic secrecy takes priority over shared progress.


Scenario 1:

OpenSentience—the leading frontier lab in the U.S.—makes a breakthrough. The discovery signals that artificial general intelligence (AGI) is no longer speculative. It’s real, and no longer confined to theory or science fiction. What follows is not just a race to scale the technology, but a full-scale transformation of how the U.S. government regulates, controls, and operationalizes advanced AI.

Until this moment, AI regulation in the U.S. was fragmented. Some rules were federal (such as Federal Trade Commission guidance and Executive Orders), but states like California, Colorado, and Utah had passed their own AI-related privacy and deployment laws. This patchwork created inconsistent compliance burdens and delayed frontier-model rollouts.

With AGI now within reach, and the geopolitical stakes crystal clear, the federal government concludes that delay would only lead to defeat. Its fear of losing the AGI race to China or another global rival accelerates policy alignment. The government recognizes that the state-by-state patchwork will only slow the pace of innovation, and it resolves to do whatever it takes to remove that barrier.

The following are the events that I believe will take place between October 2025 and January 2027:

AGI Timeline

  • Oct 2025: Strategic Chips (National Asset Control)
  • Dec 2025: Privacy Sidestepped (Regulatory Bypass)
  • Jan 2026: Public-Private Pact (Strategic Alliance)
  • May 2026: Liability Cap (Risk Management)
  • Jan 2027: Cyber Hack (Security Breach)

The critical question that OpenSentience and ACG must answer now is: If AGI is within reach, how do we protect it while we finish it? Because no system this powerful should be built without guardrails.


Enter the legal teams. As OpenSentience and the AGI Command Group (ACG) push the boundaries of AGI development, their legal teams are quietly structuring the most consequential decisions of the decade:

  • Who owns the model?

  • Who can access or deploy it?

  • What can cross U.S. borders? (e.g., chips, model weights, training data)

  • What guardrails can and should be put up, and who has the authority to write them?

Below is a breakdown of the six legal specialties that I believe will be front and center in the AGI race:

1. IP & Tech-Transactions Lawyers (government-use, data rights, licensing)

What will they do? They negotiate who controls and can freely use the AGI model—especially when the government steps in using emergency powers or funding. That includes:

• Writing the legal terms that decide whether the U.S. government gets a license, deployment right, or full ownership of the model.
• Protecting or surrendering intellectual property rights depending on national-security priorities.
• Structuring contracts that let the model be used by government agencies without violating private IP protections.

TLDR: Negotiate the fine line between national security access and private IP rights.

Why are they important? This may be one of the most critical specialties of the AGI era. Government might soft-nationalize labs—keeping them technically private but demanding emergency access. These lawyers draw the line between government control and private ownership. While the public sees only a U.S.–China race, they arbitrate the custody battle over AGI source code between Washington and Silicon Valley. Without them, labs either lose the model to the state or break the law trying to keep it.

The primary question(s) they must answer:
• “Can the company still sell this model, or is it locked into government use?”
• “Does the U.S. get a royalty-free license, secret deployment rights, or full control in a crisis?”
• “If we share this model with ACG, is it open, limited-use, or classified access?”

(Bonus) Day-to-day work examples: Draft the clause granting ACG an irrevocable royalty-free license to deploy AGI; argue whether the government can seize a model it partly funded.

2. Government Contracts Attorneys

What will they do?
• Trigger and defend Defense Production Act (DPA) powers to compel AGI firms to prioritize federal contracts.
• Write indemnity clauses and liability caps for potential harm caused by AGI use.
• Write rules for how AGI is used inside the government, ranging from when and how it can produce output to when it can be overridden.

TLDR: Turn emergency powers into binding contracts.

Why are they important? Every line of a DPA order is emergency law.

The primary question(s) they must answer:
• “What liability protections must AGI contracts include?”
• “Who pays if something goes wrong—developer, government, or another party in the supply chain?”

(Bonus) Day-to-day work examples: Create Special Government Agreements (SGAs) that waive procurement rules; police subcontracting for foreign influence.

3. Export-Control & Sanctions Lawyers

What will they do?
• Decide which model weights and chips may legally cross borders.
• Block high-risk exports to non-allies (e.g., China, UAE, or loosely regulated intermediaries).
• Advise labs on how to legally collaborate with foreign researchers without triggering felony violations.
• Possibly police cloud access for foreign nationals.

TLDR: Enforce the line between collaboration and criminal liability in the AGI arms race.

Why are they important? Since AGI will be treated like nuclear material or satellites, these attorneys draft the rules of engagement. They ultimately control who gains access to the knowledge, tools, and compute that make AGI possible.

The primary question(s) they must answer:
• “Is this model ‘weaponizable’ under WMD export precedent?”
• “Do these weights or chips fall under EAR or ITAR?”

(Bonus) Day-to-day work examples: File BIS applications for cross-border research; lead internal investigations if a model leaks to a sanctioned country.

4. Legislative & Reg-Drafting Counsel (Capitol-Hill staff & agency lawyers)

What will they do? They write the laws and rules that authorize, constrain, or justify government action in the AGI race. This includes drafting emergency statutes, revising standing acts (like the DPA, CHIPS Act, and IEEPA), and building the legal frameworks that shape AI deployment, national security powers, and public-private collaboration.

TLDR: Translate technical guardrails into legalese fast enough for Congress to vote.

Why are they important? Congress will move at wartime speed; tech-savvy drafters will write the rulebook—defining limits for ACG, FASA, and others.

The primary question(s) they must answer:
• “How do we draft a statute in two weeks that both parties can defend at home?”
• “Can we frame a liability cap as public protection, not corporate bailout?”
• “How do we define ‘Track 1 AGI’ without revealing classified info?”
• “What wording grants emergency powers without looking like overreach?”

(Bonus) Day-to-day work examples: Draft pre-emption text overriding 50 state AI laws; turn CHIPS into the SCRA.

5. Corporate-Governance & Securities Counsel

What will they do?
• Interpret and reconcile conflicting obligations—e.g., fiduciary duty to shareholders vs. national security directives.
• Help structure indemnification and liability shielding under laws like the AGI Accountability & Compensation Act (AACA).
• Mitigate securities liability for labs like OpenSentience as they engage in quasi-governmental functions, including the transfer of assets or exclusive contracts with the AGI Command Group (ACG).
• Prepare AGI labs’ leadership for Congressional or SEC investigations—ensuring records are compliant and risk-managed, especially as labs’ valuations skyrocket or plummet due to sudden classification or declassification of AGI progress.

TLDR: Structure legal and financial safeguards as AGI labs take on quasi-governmental roles under national security pressure.

Why are they important? Their decisions set disclosure, control, and accountability standards as AGI becomes a geopolitical asset.

The primary question(s) they must answer:
• “Which disclosures are mandatory—and which can remain secret under national security exemptions?”
• “How do we shield OpenSentience from securities violations while it operates in classified mode?”

(Bonus) Day-to-day work examples: Negotiate board resolutions authorizing cooperation with ACG; advise on indemnity clauses under the AACA.

6. International Lawyers (working for the U.S. Departments of State, Defense, and Justice)

What will they do?
• Advise the U.S. government and OpenSentience on treaty obligations, export control law, and cross-border data governance.
• Draft exceptions and legal workarounds to WTO obligations (particularly under GATT Article XXI—national security exceptions).
• Advise the U.S. delegation on how to interpret or reshape the UN OEWG on Cybersecurity & ICTs’ voluntary norms in light of AGI-linked chip militarization.
• Help manage fallout or retaliation from allies, partners, or adversaries affected by the Strategic Compute Readiness Act (SCRA)’s sweeping industrial and export policy shifts.

TLDR: Reframe international norms (soft law) and treaties (hard law) to fit the U.S.’ AGI strategy.

Why are they important? When the U.S. sidelines allies in early AGI coordination, international lawyers keep diplomatic fallout in check.

The primary question(s) they must answer:
• “How far can the U.S. push AGI export controls before breaching WTO rules?”
• “Which international laws apply to AGI?”
• “How do we avoid harmful norms while still looking responsible?”
• “Is AGI development covered by arms-control treaties?”

(Bonus) Day-to-day work examples: Draft Article XXI exceptions for WTO filings; prepare cables that reinterpret OEWG norms to justify new chip controls.