AGI is Imminent. Part 1 of the AGI Series

Read "Before the Breakthrough: AGI Scenarios" to understand the inspiration behind this series.


The AGI Series argues that there are three core scenarios for the future of AI development: AGI is imminent, AGI is possible but far off, and AI will hit a ceiling (meaning we never reach AGI).

Scholars, policymakers, politicians, and lawyers keep sketching the “perfect” rulebook for regulating AI, yet they overlook the elephant in the room: we have no idea which future we’re writing it for. The laws and policies meant to govern AI are static; their reach goes only as far as our current understanding of AI development, and that understanding is limited. We are constrained not only by the gap in our knowledge of how the technology actually works, but also by the unpredictability of how it may change. These are the missing links in our regulatory imagination.

But what if these limits didn’t exist? What if, instead of anchoring policy to our current blind spots, we tried to forecast across multiple futures? Suppose we could fast-forward through the timeline and spot the tipping points: would this help us match each phase of AI with the right policy and legal levers?

I believe it absolutely can. We will never have a crystal ball, but the AGI Series introduces a forecast that may help us prepare across radically different scenarios. Whether AGI is around the corner, decades away, or never arrives at all, the goal of this series is to map which government actors and legal tools will matter at each stage of AI’s evolution. Who will intervene? What statutes will they invoke, or create? What types of lawyers will be called upon?

My goal with this series is to address that missing link by spotlighting which lawyers and government arms will matter in different AI scenarios, what statutes they’ll reach for (or create), and how they’ll shape this new technology.

Reaching AGI, in any of these futures, is not just a scientific milestone. It’s a matter of national security. It’s also a matter of legal authority and legitimacy. And we shouldn’t assume the public will know when the threshold is crossed; the signs may come subtly, like a spike in government datacenter procurement, a sudden classification of safety research, or abrupt, unexplained jumps in model capability.

So, our job in the policy and legal world is to stay alert. If the policy and legal worlds keep looping through abstract frameworks and releasing rulebooks that expire before the ink dries, they will trail the technology by a crucial step. The AGI Series seeks to create a clear roadmap of the various ways lawyers, legislators, and regulators can coordinate at every stage of AI’s evolution.

I hope that articles like this one will help give more direction to our conversations.


The “AGI-Imminent” scenario (Scenario 1) starts from one core provocation:

If a machine that can out-think, out-strategize, and out-invent humans arrives, the race to wield it will look a lot less like Silicon Valley’s sprint for market share and a lot more like the Cold War scramble for nukes. It’s an unprecedented amount of power that no one country has at the moment. A former CIA Deputy Director noted that the nation that achieves AGI first “will gain a strategic advantage that may prove impossible to overcome,” and that “America has no time to waste.” Even President Putin famously acknowledged that “whoever becomes the leader in this sphere will become the ruler of the world.”

Even though Stanford’s 2025 AI Index Report found that the U.S. still produces the most frontier models, China is closing the performance gap at alarming speed: in 2024, U.S. labs released 40 frontier-grade models while China released 15, but Chinese accuracy on benchmarks like MMLU and HumanEval closed from double-digit deficits in 2023 to near parity in 2024. Both countries frame AI leadership as strategic primacy. For this reason, I focus mainly on the geopolitical interplay between the U.S. and China throughout this series.

A few key terms: 

  • AGI Command Group (ACG): The classified interagency task force that the National Security Council (NSC) quietly forms, in partnership with DOD, DARPA, CIA, DOE, and NIST. It is authorized to bypass sectoral regulators and formalize the government’s exclusive partnership with OpenSentience, the U.S.’ leading frontier AI lab. 

  • Menlo Directive: The codename for the National Security Council’s AGI Command Group (ACG), reflecting the heightened secrecy surrounding the development of the first AGI, much like the Manhattan Project.

  • OpenSentience: The U.S.’ leading frontier AI lab and the first to cross the AGI threshold.

  • DeepCompute: China’s leading frontier AI lab racing against OpenSentience to secure strategic dominance through AGI. 

  • Five Eyes Intelligence: This is a real intelligence group that spans five countries: the United States, the United Kingdom, Canada, Australia, and New Zealand. In the AGI-Imminent scenario, the U.S. shifts away from deep coordination with allies—even those in the Five Eyes. Echoing the AI 2027 forecast, strategic secrecy takes priority over shared progress.


Scenario 1:

OpenSentience—the leading frontier lab in the U.S.—makes a breakthrough. The discovery signals that artificial general intelligence (AGI) is no longer speculative; it is real, no longer confined to theory or science fiction. What follows is not just a race to scale the technology, but a full-scale transformation of how the U.S. government regulates, controls, and operationalizes advanced AI.

Until this moment, AI regulation in the U.S. was fragmented. Some rules were federal (such as Federal Trade Commission guidance and Executive Orders), but states like California, Colorado, and Utah had passed their own AI-related privacy and deployment laws. This patchwork created inconsistent compliance burdens and delayed frontier-model rollouts.

With AGI now within reach, and the geopolitical stakes crystal clear, the federal government concludes that delay would only lead to defeat. Its fear of losing the AGI race to China or another global rival accelerates policy alignment. The government recognizes that the state-by-state patchwork will only slow the pace of innovation, and it resolves to do whatever it takes to remove that barrier.

The following are the events that I believe will take place between October 2025 and January 2027:

AGI Timeline

  • Oct 2025: Strategic Chips (National Asset Control)
  • Dec 2025: Privacy Sidestepped (Regulatory Bypass)
  • Jan 2026: Public-Private Pact (Strategic Alliance)
  • May 2026: Liability Cap (Risk Management)
  • Jan 2027: Cyber Hack (Security Breach)

And here is a breakdown of that timeline:

AGI Scenario Matrix

For each of the five predictions below, I break down the tech world implications, the policy and regulatory implications, the regulatory instruments involved, and a historical or statutory analogy.
1. Chips & compute become a strategic national asset

Tech world implications: The ACG, or Menlo Directive, knows that building AGI will take an unimaginable amount of computing power. Controlling the chips won’t win the race outright, but without them, they know they won’t have a chance. The ACG applies strict export controls: no cooperation with blacklisted nations, no deployment of AGI-related chips abroad. The ACG begins to monitor and enforce compute trade violations with the same priority as Weapons of Mass Destruction (WMDs). All U.S.-made chips include government-visible monitoring and remote-shutdown firmware. Track 1 and Track 2 AI are established: the ACG coins the term “Track 1 AI” to describe national-security-grade AGI, systems in development that exhibit high-risk behaviors like deception, manipulation, and autonomous control. These are immediately classified as military secrets and restricted to top-cleared government officials and select OpenSentience researchers. Track 1 is to be used exclusively by the government until it can determine how to safely deploy AGI to the public without compromising national or geopolitical security. Everyone else, including allies, civilians, and most researchers, only has access to “Track 2 AI”: slower, safer, public-facing models. Fewer than 15% of OpenSentience staff and under 2% of U.S. officials are cleared for Track 1, for now.

Policy & regulatory implications: President Trump announces a 145% tariff on Chinese tech goods; he must at all costs choke off Beijing’s access to frontier-class compute. He also approves a $10 billion emergency deal to accelerate domestic chip-plant construction in Phoenix, AZ. The CHIPS Act is expanded and reauthorized under a new framework that explicitly links chip manufacturing to national security and AGI containment: the Strategic Compute Readiness Act (SCRA). Under the SCRA, new fabs must meet defense-readiness standards (e.g., hardened infrastructure, proximity to military-industrial zones). But the ACG faces a hard truth: even at full speed, U.S. chip self-sufficiency isn’t possible before 2060. For now, the U.S. remains dependent on the biggest chip producer in the world, Taiwan. Thus, a secret rotational U.S. National Guard cyber-defense and infrastructure protection unit is posted at Taiwan Semiconductor Manufacturing Company (TSMC). The ACG assures itself that the Arizona fabs funded through the CHIPS Act will serve as a continuity-of-compute site in case Taiwan is attacked or blockaded, but everyone, including the ACG, knows it’s a thin assurance.

Regulatory instruments involved: The Strategic Compute Readiness Act (SCRA). Formerly the CHIPS and Science Act, its focus shifts from economic competition with China to securing U.S. control over the global compute stack as a matter of national and democratic survival.

Historical / statutory analogy: Cold War blocs. During the Cold War, the U.S. and its allies banned sales of uranium and cutting-edge tech to the Soviet bloc (through the uranium embargo, otherwise known as the McMahon Act).
2. Privacy & consumer regulations are sidelined

Tech world implications: The entire discourse pivots toward strategic competition with China, so privacy and fairness issues are seen as secondary, or even as obstacles, to that end. In the tech world, bias audits and justice-centered design are deemed nonessential delays, raising the risk of building AGI on biased foundations. In such a high-stakes national security situation, data is mandatorily pooled to train the AGI. As a result, some people lose the ability to consent to, or opt out of, such uses of their data, creating further risks that the federal government won’t slow down to address. OpenSentience’s consumer privacy and ethics boards serve as figureheads with realistically no power.

Policy & regulatory implications: Once models are cleared at the national level, no state law can delay or block their release; federal preemption over AI is now a reality. Thus, although state regulators (especially California, Colorado, and Connecticut) and civilian agencies continue to flag safety, ethical, and diffusion risks, the ACG blocks or subverts these efforts. As the ACG consolidates control under a unified national AGI agenda, the long-debated House Moratorium passes, meaning states are barred from regulating frontier AI. This solidifies an important reality: the ACG treats “safety” as important only insofar as it prevents catastrophic scenarios. In practice, data protection, algorithmic transparency, and civil oversight are deprioritized, sacrificed for what is framed as the greater imperative: national security.

Regulatory instruments involved: The President’s Executive Order, issued under the International Emergency Economic Powers Act (IEEPA) and the Defense Production Act (DPA), compels OpenSentience to align with the federal government and authorizes the forced pooling of private-sector data. The 2025 Federal Reconciliation Bill preempts state AI laws to allow maximum flexibility for AGI development, deployment, and training.

Historical / statutory analogies: COVID-19 pandemic emergency powers, under which governments invoked emergency laws to track individual data, and the PATRIOT Act (2001), which expanded government surveillance powers, including warrantless wiretapping and data collection from telecommunications companies.
3. Public-private partnerships become a national priority

Tech world implications: OpenSentience enters a Special Government Agreement (SGA), a national security contract that bypasses normal procurement rules and grants data exclusivity. It’s a risk: more people inside the company now know AGI exists, despite it being classified. But the government accepts the trade-off. OpenSentience is where AGI first emerged, and keeping it close is critical for national security. The U.S. also can’t afford delays or competing claims over deployment authority, and it needs to keep the project relatively secret, which requires the buy-in of the lab at the helm of AGI: OpenSentience. An SGA lets the government consolidate strategic control over the most advanced capabilities without triggering bureaucratic procurement delays or legislative or overseas scrutiny. It also ensures OpenSentience is legally and contractually bound to prioritize U.S. interests. OpenSentience emerges as the leading frontier lab with the biggest, most secure datacenter in the U.S.; it’s the country’s best chance of winning the AGI race. Other AI labs trail behind, though not too far behind, so the ACG nationalizes several of their data centers and hands them to OpenSentience, creating a mega datacenter.

Policy & regulatory implications: A federal safety rubric is drafted by NIST and enforced by the ACG; it prevents state AGs from suing over potential data and privacy harms until after the ACG’s 10-year waiver window lapses, hopefully enough time to win the AGI war. The ACG places a DoD program executive full-time inside OpenSentience to enforce national directives and serve as a direct conduit between the lab and the federal command structure, ensuring that no regulatory delays interfere with AGI development and deployment during the 10-year waiver period.

Regulatory instruments involved: The ACG invokes the Defense Production Act (DPA) to take trailing companies’ datacenters and give them to OpenSentience.

Historical / statutory analogy: The Defense Production Act (DPA) of 1950, originally passed during the Korean War, gives the federal government sweeping authority to requisition private resources, including materials, infrastructure, and industrial capacity, in the name of national defense. It has since been used in emergencies ranging from natural disasters to semiconductor shortages, and now, in the AGI race, it’s invoked to consolidate compute power under federal command.
4. A liability cap and federal compensation fund are issued (“Price-Anderson for AI”)

Tech world implications: In the months following the AGI discovery by OpenSentience and the ACG, Track 1 AI is quietly activated in strategic sectors while full capabilities remain classified, and it begins to upend entire industries. The first to be affected are transportation and healthcare diagnostics, leading to mass unemployment. The hundreds of thousands of workers displaced by AGI automation are furious and seek collective compensation through class settlements or trust payouts. The ACG’s solution is to set up the AI Federal Compensation Fund (AFCF), which offers payouts to those affected by the layoffs. This helps quell public pushback and keeps the ACG’s R&D work alive. OpenSentience pays into the AFCF, as do the remaining frontier labs the ACG hasn’t already nationalized. Each lab’s financial exposure is capped (e.g., at $50 million), after which the federal government acts as a backstop. OpenSentience uses its AGI models to classify its Track 1 AI systems by risk for future contingencies, such as a market crash. Its red team assigns internal risk scores and creates containment protocols accordingly. These reports are classified and shared with other frontier labs. The DoD and the CEO of OpenSentience issue a joint public statement: “The liability cap is about creating a survivable innovation model. The tech is in service of resilient social transition.” With this public statement, however, the Chinese Communist Party (CCP) grows even more suspicious that the U.S. is close to unlocking the full capacity of AGI. DeepCompute accelerates its timelines, doubles its compute investments, and shifts to a closed military coordination model.

Policy & regulatory implications: Policy advisors for the ACG grow increasingly concerned that shielding frontier labs from catastrophic liability may result in abuses of power. They know that some risks are inherently unknowable at the AGI scale and fear that, in the wrong hands, no amount of money would come close to covering the damages. But they agree that now, more than ever, it is crucial to put on a united front. The tech people are moving fast, too fast. They know that if they don’t draw hard guideposts now on deployment conditions and public accountability, they’ll be up to their necks in litigation long after the race ends. So they get to work, quickly drafting and publishing the AGI Accountability & Compensation Act (AACA), which: 1) creates risk classification tiers for AGI systems; 2) codifies the AFCF through an indemnity clause under which the government absorbs the tail risk for certified national-security deployments; 3) adds a sunset clause requiring reauthorization of indemnity every 3 years, based on national risk posture and model performance; 4) creates channels for employees to confidentially report concerns through internal compliance units, to preempt public whistleblowing; and 5) creates the Federal AI Safety Authority (FASA) to oversee all of the above. FASA operates with formal independence but will increasingly find itself in a power struggle with the ACG over decision-making authority.

Regulatory instruments involved: The AI Federal Compensation Fund (AFCF) and the AGI Accountability & Compensation Act (AACA).

Historical / statutory analogy: The nuclear-era Price-Anderson Act (1957) created liability protection for the private sector in the event of catastrophic accidents. The same bargain was struck with vaccine makers during pandemics through the PREP Act (2005).
5. Chinese cyberattacks on OpenSentience

Tech world implications: According to OpenSentience’s classified risk assessments, China is estimated to be just two months behind the U.S. in the AGI race, and Beijing knows it. The CCP and DeepCompute already have partial insight into the U.S. government’s AGI timeline, thanks to a mole inside the ACG who has been leaking details confirming what China has long suspected: the U.S. is on the brink. This intel heightens the incentive to act fast. Chinese cyber units are now believed to be targeting Track 1 AI directly, the classified, military-grade model the U.S. is keeping under wraps. Expect major efforts to infiltrate U.S. labs and exfiltrate model weights, training data, or personnel, not just by China but also by Russia, North Korea, Iran, and Israel. With a mole inside the ACG, labs trigger urgent personnel vetting. The mole is eventually discovered: a mid-level systems engineer embedded at OpenSentience through a long-term subcontractor arrangement. The revelation triggers a full facility lockdown at OpenSentience and a joint press statement from OpenSentience and the Department of Justice, which frames the breach as a “critical national security incident with foreign involvement.” Internally, trust fractures. Teams are reshuffled, clearances revoked, and every engineer is now subject to continuous behavioral monitoring and threat modeling, even those who built the system from day one. For the first time since the AGI sprint began, labs are told to prioritize resilience over raw capability. The culture flips from “move fast” to “move safe.”

Policy & regulatory implications: In early winter of 2027, reports emerge of increased government spending on Track 1 AI: the Department of Defense raises its financial-year cybersecurity defense budget from $14.5 billion to $30 billion. The ACG imposes asset freezes and secondary sanctions on Chinese firms and individuals credibly linked to the hack, making it clear that stealing AGI IP counts as a “non-nuclear WMD” under the same regime that sanctioned nuclear proliferation networks. It also draws up plans for kinetic attacks on Chinese data centers. Lastly, in a rare bipartisan moment, Congress enacts the AGI Security Act, which codifies narrow “hack-back” permissions for Federal AI Safety Authority (FASA)-certified labs under DoJ supervision and creates an AI Bounty Fund offering six-figure rewards for early warnings of insider threats or novel exploit techniques. In addition, any lab that fails to report a breach attempt within 12 hours now faces steep penalties, including loss of indemnity cover under the AI Federal Compensation Fund and public disclosure requirements.

Regulatory instruments involved: The AGI Security Act.

Historical / statutory analogy: Manhattan Project espionage. Just as Klaus Fuchs and other Soviet spies infiltrated Los Alamos to smuggle atomic secrets to Moscow, a mole inside the ACG feeds the CCP intelligence on U.S. AGI progress.

What fields of law will the government and OpenSentience call on in the AGI race?

As OpenSentience and the AGI Command Group (ACG) push the boundaries of AGI development, their legal teams are quietly structuring the most consequential decisions of the decade:

  • Who owns the model?

  • Who can access or deploy it?

  • What can cross U.S. borders (e.g., chips, model weights, training data)?

  • What guardrails can and should be put up, and who has the authority to write them?

Below is a breakdown of the six legal specialties that I believe will be front and center in the AGI race:

Legal fields:

1. IP & Tech-Transactions Lawyers (government-use, data rights, licensing)
2. Government Contracts Attorneys
3. Export-Control & Sanctions Lawyers
4. Legislative & Reg-Drafting Counsel (Capitol-Hill staff & agency lawyers)
5. Corporate-Governance & Securities Counsel
6. International Lawyers (working for the U.S. Departments of State, Defense, and Justice)
What will they do?

1. IP & Tech-Transactions Lawyers: They negotiate who controls and can freely use the AGI model, especially when the government steps in using emergency powers or funding.

That includes:
• Writing the legal terms that decide whether the U.S. government gets a license, deployment right, or full ownership of the model.
• Protecting or surrendering intellectual property rights depending on national-security priorities.
• Structuring contracts that let the model be used by government agencies without violating private IP protections.
TLDR: Negotiate the fine line between national security access and private IP rights.

2. Government Contracts Attorneys:
• Trigger and defend Defense Production Act (DPA) powers to compel AGI firms to prioritize federal contracts.
• Write indemnity clauses and liability caps for potential harm caused by AGI use.
• Write rules for how AGI is used inside the government, ranging from when and how it can produce output to when it can be overridden.
TLDR: Turn emergency powers into binding contracts.

3. Export-Control & Sanctions Lawyers:
• Decide which model weights and chips may legally cross borders.
• Block high-risk exports to non-allies (e.g., China, UAE, or loosely regulated intermediaries).
• Advise labs on how to legally collaborate with foreign researchers without triggering felony violations.
• Possibly police cloud access for foreign nationals.
TLDR: Enforce the line between collaboration and criminal liability in the AGI arms race.

4. Legislative & Reg-Drafting Counsel: They write the laws and rules that authorize, constrain, or justify government action in the AGI race. This includes drafting emergency statutes, revising standing acts (like the DPA, CHIPS Act, and IEEPA), and building the legal frameworks that shape AI deployment, national security powers, and public-private collaboration.
TLDR: Translate technical guardrails into legalese fast enough for Congress to vote.

5. Corporate-Governance & Securities Counsel:
• Interpret and reconcile conflicting obligations (e.g., fiduciary duty to shareholders vs. national security directives).
• Help structure indemnification and liability shielding under laws like the AGI Accountability & Compensation Act (AACA).
• Mitigate securities liability for labs like OpenSentience as they engage in quasi-governmental functions, including the transfer of assets or exclusive contracts with the AGI Command Group (ACG).
• Prepare AGI labs’ leadership for Congressional or SEC investigations, ensuring records are compliant and risk-managed, especially as labs’ valuations skyrocket or plummet due to sudden classification or declassification of AGI progress.
TLDR: Structure legal and financial safeguards as AGI labs take on quasi-governmental roles under national security pressure.

6. International Lawyers:
• Advise the U.S. government and OpenSentience on treaty obligations, export control law, and cross-border data governance.
• Draft exceptions and legal workarounds to WTO obligations (particularly under GATT Article XXI, the national security exception).
• Advise the U.S. delegation on how to interpret or reshape the UN OEWG on Cybersecurity & ICTs’ voluntary norms in light of AGI-linked chip militarization.
• Help manage fallout or retaliation from allies, partners, or adversaries affected by the Strategic Compute Readiness Act (SCRA)’s sweeping industrial and export policy shifts.
TLDR: Reframe international norms (soft law) and treaties (hard law) to fit the U.S.’ AGI strategy.
Why are they important?

1. IP & Tech-Transactions Lawyers: This may be one of the most critical specialties of the AGI era. Government might soft-nationalize labs, keeping them technically private but demanding emergency access. These lawyers draw the line between government control and private ownership.

2. Government Contracts Attorneys: While the public sees only a U.S.–China race, they arbitrate the custody battle over AGI source code between Washington and Silicon Valley. Without them, labs either lose the model to the state or break the law trying to keep it. Every line of a DPA order is emergency law.

3. Export-Control & Sanctions Lawyers: Since AGI will be treated like nuclear material or satellites, these attorneys draft the rules of engagement. They ultimately control who gains access to the knowledge, tools, and compute that make AGI possible.

4. Legislative & Reg-Drafting Counsel: Congress will move at wartime speed; tech-savvy drafters will write the rulebook, defining limits for ACG, FASA, and others.

5. Corporate-Governance & Securities Counsel: Their decisions set disclosure, control, and accountability standards as AGI becomes a geopolitical asset.

6. International Lawyers: When the U.S. sidelines allies in early AGI coordination, they keep diplomatic fallout in check.
The primary question(s) they must answer

1. IP & Tech-Transactions Lawyers:
• “Can the company still sell this model, or is it locked into government use?”
• “Does the U.S. get a royalty-free license, secret deployment rights, or full control in a crisis?”
• “If we share this model with ACG, is it open, limited-use, or classified access?”

2. Government Contracts Attorneys:
• “What liability protections must AGI contracts include?”
• “Who pays if something goes wrong—developer, government, or another party in the supply chain?”

3. Export-Control & Sanctions Lawyers:
• “Is this model ‘weaponizable’ under WMD export precedent?”
• “Do these weights or chips fall under EAR or ITAR?”

4. Legislative & Reg-Drafting Counsel:
• “How do we draft a statute in two weeks that both parties can defend at home?”
• “Can we frame a liability cap as public protection, not corporate bailout?”
• “How do we define ‘Track 1 AGI’ without revealing classified info?”
• “What wording grants emergency powers without looking like overreach?”

5. Corporate-Governance & Securities Counsel:
• “Which disclosures are mandatory—and which can remain secret under national security exemptions?”
• “How do we shield OpenSentience from securities violations while it operates in classified mode?”

6. International Lawyers:
• “How far can the U.S. push AGI export controls before breaching WTO rules?”
• “Which international laws apply to AGI?”
• “How do we avoid harmful norms while still looking responsible?”
• “Is AGI development covered by arms-control treaties?”
(Bonus) Day-to-day work examples

1. IP & Tech-Transactions Lawyers: Draft the clause granting ACG an irrevocable royalty-free licence to deploy AGI. Argue whether the government can seize a model it partly funded.

2. Government Contracts Attorneys: Create Special Government Agreements (SGAs) that waive procurement rules; police subcontracting for foreign influence.

3. Export-Control & Sanctions Lawyers: File BIS applications for cross-border research; lead internal investigations if a model leaks to a sanctioned country.

4. Legislative & Reg-Drafting Counsel: Draft pre-emption text overriding 50 state AI laws; turn CHIPS into the SCRA.

5. Corporate-Governance & Securities Counsel: Negotiate board resolutions authorizing cooperation with ACG; advise on indemnity clauses under the AACA.

6. International Lawyers: Draft Article XXI exceptions for WTO filings; prepare cables that reinterpret OEWG norms to justify new chip controls.