Mapping the AI Regulation Landscape
A Comparative Analysis of Major U.S. Proposals (2024–2026)
The United States is in the middle of a defining policy moment for artificial intelligence. Between early 2024 and early 2026, a remarkable number of AI governance proposals have emerged from the White House, the U.S. Senate, the U.S. House, and state legislatures. Each reflects a different theory about what the core problem is, who should solve it, and how urgently it needs solving.
This analysis maps the current landscape of major AI regulation proposals across three analytical dimensions: governance structure (who regulates and how), scope and application (what gets regulated and when), and political positioning (the underlying theory of the problem). It examines eight proposals in detail, identifies the axes of agreement and disagreement, and places them on comparative frameworks to reveal the emerging contours of the debate.
The proposals range from the White House's light-touch, innovation-first framework to Senator Sanders' calls for a national data center moratorium and a robot tax. In between sit comprehensive federal legislation, industry-funded redistribution mechanisms, worker-centered governance principles, and state-level transparency and compliance regimes. Together they define the boundaries of what is politically imaginable in U.S. AI policy today.
The Proposals
Eight major proposals spanning federal, state, and industry perspectives
White House · 2026
White House AI Legislative Framework
The White House released legislative recommendations outlining a National Policy Framework for Artificial Intelligence, structured around seven pillars addressed to Congress. The framework covers child safety, community protection, intellectual property, anti-censorship, innovation, workforce, and federal preemption. It positions state regulation as the primary threat to U.S. competitiveness and frames preemption as the central legislative priority.
Primary frame: Innovation and competitiveness
Sen. Marsha Blackburn · 2026
Blackburn TRUMP AMERICA AI Act
The most comprehensive federal AI bill to date, spanning 17 titles and hundreds of pages. Although framed as implementing the White House's deregulatory vision, the bill carries significantly more regulatory density than that framing suggests. It creates multiple enforcement pathways, mandatory reporting obligations, and a risk-based evaluation program.
Primary frame: Comprehensive federal regulation
OpenAI (Chris Lehane, Sasha Baker) · 2026
OpenAI / Chris Lehane Policy Position
OpenAI advocates a specific sequencing of governance: federal framework first, state alignment second, federal incentives third. The position endorses mandatory federal testing of frontier systems using classified government capabilities before deployment. CAISI (the Center for AI Standards and Innovation) would serve as the primary evaluative institution.
Primary frame: Prevention-first safety
Sen. Mark Kelly · 2025
Sen. Mark Kelly — "AI for America" Roadmap
The most developed Democratic proposal for AI governance, focusing on worker protection and economic redistribution alongside safety and competitiveness. Kelly treats AI primarily as an economic disruption problem requiring institutional investment, proposing an industry-funded AI Horizon Fund for worker retraining and infrastructure.
Primary frame: Economic redistribution
Sen. Bernie Sanders · 2025
Sen. Bernie Sanders — AI Policy Proposals
The most interventionist and structurally critical position in the current debate, treating AI governance as inseparable from questions of corporate power and wealth inequality. Sanders' proposals include a national data center moratorium, a robot tax, and calls to break up major AI companies.
Primary frame: Democratic control
Rep. Ro Khanna · 2026
Rep. Ro Khanna — Seven Principles for Democratic AI
Khanna articulates a middle path between Silicon Valley optimism and progressive structural critique through seven principles for democratic AI. Because he represents a Silicon Valley district, his position carries particular weight. He explicitly rejects Luddism while insisting on structural mechanisms that embed worker and community interests in AI governance.
Primary frame: Worker empowerment
California Legislature · 2025
California SB 53 — Transparency in Frontier AI Act
California's evolution from the vetoed SB 1047 to SB 53 illustrates the real-time negotiation between ambition and political feasibility in AI governance. SB 53 targets large frontier developers with over $500M in annual revenue and requires transparency reports on safety testing, with critical safety incidents reported within 15 days as a standard, or within 24 hours where there is imminent harm.
Primary frame: Transparency
New York Legislature / Gov. Hochul · 2025
New York RAISE Act
New York's Responsible AI Safety and Education Act establishes reporting and safety-governance requirements for frontier AI developers. It covers developers of frontier models with over $500M in revenue and requires publicly disclosed safety and security protocols. The act includes civil penalties of $1M for an initial violation, escalating to $3M for repeat offenses.
Primary frame: Compliance
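The two state regimes share a common structure: a revenue threshold that determines coverage, a reporting clock for safety incidents, and, in New York's case, a penalty schedule. A minimal sketch of those parameters is below; the figures come from the summaries above, while the type and function names are hypothetical illustrations, not anything defined in either statute.

```typescript
// Sketch of the coverage, reporting, and penalty parameters summarized above.
// Figures ($500M threshold, 15-day/24-hour reporting, $1M/$3M penalties) are
// taken from this analysis; names and structure are hypothetical.
interface FrontierDeveloper {
  name: string;
  annualRevenueUSD: number;
}

// $500M annual revenue threshold used by both SB 53 and the RAISE Act.
const COVERAGE_REVENUE_THRESHOLD_USD = 500_000_000;

function isCoveredDeveloper(dev: FrontierDeveloper): boolean {
  return dev.annualRevenueUSD > COVERAGE_REVENUE_THRESHOLD_USD;
}

// SB 53: critical safety incidents reported within 15 days as a standard,
// or within 24 hours where there is imminent harm.
function sb53ReportingDeadlineHours(imminentHarm: boolean): number {
  return imminentHarm ? 24 : 15 * 24;
}

// RAISE Act: $1M civil penalty for an initial violation, $3M for repeat offenses.
function raisePenaltyUSD(priorViolations: number): number {
  return priorViolations === 0 ? 1_000_000 : 3_000_000;
}

// Example: a hypothetical developer just over the threshold.
const example: FrontierDeveloper = { name: "ExampleLab", annualRevenueUSD: 600_000_000 };
console.log(isCoveredDeveloper(example));      // true
console.log(sb53ReportingDeadlineHours(true)); // 24
console.log(raisePenaltyUSD(1));               // 3000000
```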
Axes of Agreement and Disagreement
Three dimensions for comparing what the proposals prioritize and where they diverge
Governance Structure
Who regulates AI, through what institutions, and with what enforcement tools
Should the U.S. create a new federal AI regulatory body, or distribute authority across existing agencies?
Scope and Application
What gets regulated, how thresholds are defined, and when intervention occurs
Is a revenue-based threshold (like $500M) the right way to identify which AI systems require governance, or does it miss important risks?
Political Positioning
Partisan dynamics, coalition possibilities, and competing theories of the problem
Can a bipartisan coalition form around child safety and transparency while deferring harder questions about preemption and liability?
Analytical Frameworks
Four 2x2 charts plotting each proposal along key policy axes
Enforcement Mechanism vs. Regulatory Scope
X-axis: Narrow Scope (AI-specific) → Broad Scope (Economy-wide)
Y-axis: Voluntary / Industry-led → Mandatory / Government-enforced
Prevention vs. Liability & Regulatory Authority
X-axis: State-led Authority → Federal-led Authority
Y-axis: Liability / Post-hoc → Prevention / Pre-deployment
Innovation Priority vs. Worker Protection
X-axis: Low Worker Priority → High Worker Priority
Y-axis: Low Innovation Priority → High Innovation Priority
Pre-deployment Obligations vs. Federal Preemption
X-axis: Weak Obligations → Strong Obligations
Y-axis: Weak Preemption → Strong Preemption
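To make the chart structure concrete, here is a small sketch of how the first framework (Enforcement Mechanism vs. Regulatory Scope) might be encoded as data. The coordinate values are illustrative placements inferred from the proposal summaries above, not measurements from the underlying analysis, and the type names are assumptions.

```typescript
// Hypothetical data model for the 2x2 charts. Coordinates run from 0 (left /
// bottom of the axis) to 1 (right / top); the specific values are illustrative
// placements, not figures from the original analysis.
interface ProposalPoint {
  name: string;
  scope: number;       // X: 0 = narrow (AI-specific), 1 = broad (economy-wide)
  enforcement: number; // Y: 0 = voluntary / industry-led, 1 = mandatory / government-enforced
}

const enforcementVsScope: ProposalPoint[] = [
  { name: "White House Framework",      scope: 0.2, enforcement: 0.2 },
  { name: "Blackburn TRUMP AMERICA AI", scope: 0.5, enforcement: 0.7 },
  { name: "OpenAI / Lehane position",   scope: 0.3, enforcement: 0.6 },
  { name: "Kelly AI for America",       scope: 0.7, enforcement: 0.6 },
  { name: "Sanders proposals",          scope: 0.9, enforcement: 0.9 },
  { name: "Khanna Seven Principles",    scope: 0.7, enforcement: 0.5 },
  { name: "California SB 53",           scope: 0.3, enforcement: 0.7 },
  { name: "New York RAISE Act",         scope: 0.3, enforcement: 0.8 },
];

// Group proposals into rough quadrants for a textual summary of the chart.
for (const p of enforcementVsScope) {
  const quadrant =
    `${p.enforcement >= 0.5 ? "mandatory" : "voluntary"} / ` +
    `${p.scope >= 0.5 ? "broad" : "narrow"} scope`;
  console.log(`${p.name}: ${quadrant}`);
}
```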
Where Proposals Converge and Diverge
Key themes emerging across the political spectrum
Areas of Broad Agreement
- Every proposal treats the most powerful AI systems as requiring some form of distinct governance. The $500M revenue threshold used by California and New York is emerging as a de facto standard.
- Child safety is the area of greatest bipartisan alignment and the most likely candidate for near-term legislation. The White House, Blackburn, California, and OpenAI all prioritize it.
- Even the lightest-touch proposals endorse some form of transparency. The debate is over whether transparency alone is sufficient or whether it must be paired with pre-deployment testing or liability.
- All proposals acknowledge AI-driven workforce disruption. The disagreement is over mechanism: reporting versus redistribution versus structural intervention.
- Copyright and digital replicas need addressing. The White House and the Blackburn bill converge on the need for a federal framework covering AI-generated replicas and content provenance; the treatment of copyrighted training data remains unresolved.
Areas of Sharp Disagreement
- Federal preemption of state AI laws is the deepest divide. The White House and Blackburn want broad preemption; OpenAI wants preemption contingent on a meaningful federal framework; states are asserting authority while signaling conditional willingness to defer.
- Pre-deployment versus post-deployment intervention. OpenAI's prevention-first model (federal testing before deployment) is endorsed by no other major proposal in its strong form; most rely on post-deployment accountability.
- The economic structure of AI. Sanders' moratorium and breakup proposals have no analogs elsewhere, Kelly's industry-funded Horizon Fund is a moderate redistribution mechanism, and the White House and Blackburn proposals contain no redistribution provisions.
- The role of liability. The Blackburn bill creates extensive liability frameworks, including developer, deployer, and federal cause-of-action pathways, while the White House prefers to rely on existing law. The appropriate role of litigation in AI governance remains unresolved.