Mapping the AI Regulation Landscape
A Comparative Analysis of Major U.S. Proposals (2024–2026)
The United States is in the middle of a defining policy moment for artificial intelligence. Between early 2024 and early 2026, a remarkable number of AI governance proposals have emerged from the White House, the U.S. Senate, the U.S. House, and state legislatures. Each reflects a different theory about what the core problem is, who should solve it, and how urgently it needs solving.
This analysis maps the current landscape of major AI regulation proposals across three analytical dimensions: governance structure (who regulates and how), scope and application (what gets regulated and when), and political positioning (the underlying theory of the problem). It examines twelve proposals in detail, identifies the axes of agreement and disagreement, and places them on comparative frameworks to reveal the emerging contours of the debate.
The proposals range from the White House's light-touch, innovation-first framework to Senator Sanders' calls for a national data center moratorium and a robot tax. In between sit comprehensive federal legislation, industry-funded redistribution mechanisms, worker-centered governance principles, and state-level transparency and compliance regimes. Together they define the boundaries of what is politically imaginable in U.S. AI policy today.
The Proposals
Twelve major proposals spanning federal, state, industry, and civil society perspectives
White House · 2026
White House AI Legislative Framework
The White House released legislative recommendations outlining a National Policy Framework for Artificial Intelligence, structured around seven pillars addressed to Congress. The framework covers child safety, community protection, intellectual property, anti-censorship, innovation, workforce, and federal preemption. It positions state regulation as the primary threat to U.S. competitiveness and frames preemption as the central legislative priority.
Primary frame: Innovation and competitiveness
Sen. Marsha Blackburn · 2026
Blackburn TRUMP AMERICA AI Act
The most comprehensive federal AI bill to date, spanning 17 titles and hundreds of pages. Despite being framed as implementing the White House's deregulatory vision, the bill contains significantly more regulatory density than the White House framework suggests. It creates multiple enforcement pathways, mandatory reporting obligations, and a risk-based evaluation program.
Primary frame: Comprehensive federal regulation
OpenAI (Chris Lehane, Sasha Baker) · 2026
OpenAI / Chris Lehane Policy Position
OpenAI advocates for a specific sequencing of governance: federal framework first, state alignment second, federal incentive third. The position endorses mandatory federal testing of frontier systems using classified government capabilities before deployment. CAISI would serve as the primary evaluative institution.
Primary frame: Prevention-first safety
OpenAI · 2026
OpenAI — Industrial Policy for the Intelligence Age
OpenAI's most expansive policy document to date, moving well beyond its earlier safety-focused position to propose a comprehensive industrial policy agenda for the transition to superintelligence. The document is organized around two pillars: building an open economy with broad participation and shared prosperity, and building a resilient society through safety systems, alignment, and governance. It proposes a Public Wealth Fund giving every citizen a stake in AI-driven growth, portable benefits decoupled from employers, adaptive safety nets with automatic triggers, a 32-hour workweek pilot, modernized taxation of capital over labor, and a global network of AI Safety Institutes. The framing explicitly invokes the Progressive Era and the New Deal as precedents for the scale of institutional response required.
Primary frame: Industrial policy and shared prosperity
Center for Humane Technology · 2026
Center for Humane Technology — The AI Roadmap
The Center for Humane Technology's most comprehensive AI policy document, structured around seven principles for how AI should be built, deployed, and governed. The Roadmap operates across three intervention domains (norms, laws, and product design), arguing that no single reform is sufficient and that change requires layered, simultaneous pressure on the AI development paradigm. CHT explicitly draws parallels to the Big Tobacco and nuclear weapons movements, framing its theory of change as identifying high-leverage intervention points and applying coordinated civil society pressure across them. Unlike industry or legislative proposals, this is a civil society document setting expectations for what governance should look like rather than introducing bills.
Primary frame: Humane technology and public interest
Sen. Mark Warner (with Sens. Hawley, Young, Rounds, and others) · 2026
Sen. Mark Warner — AI Workforce Data & Commission Package
Senator Warner's AI agenda is not a single bill but a coordinated three-part strategy to build the institutional infrastructure for evidence-based AI workforce policy. First, a bipartisan letter (co-signed by Warner, Hawley, Banks, Hassan, Kelly, Kaine, Hickenlooper, Young, and Rounds) urging the Bureau of Labor Statistics and Census Bureau to rapidly expand AI labor market data collection across existing surveys — CPS, JOLTS, and the National Longitudinal Survey. Second, the AI-Related Job Impacts Clarity Act (with Sen. Hawley), requiring quarterly disclosures from publicly traded companies and federal agencies on AI-driven layoffs, hires, unfilled positions, and retraining — reported to DOL with NAICS codes and published on the BLS website. Third, the Economy of the Future Commission Act (S.4046, with Sen. Rounds), establishing a 10-member bipartisan legislative commission to develop consensus recommendations on workforce development, education, social safety nets, taxation, open-source AI, transportation safety, energy, and robotics. The commission must deliver employment projections by NAICS code at 5- and 10-year horizons within 7 months, and full legislative recommendations within 13 months. Together, the three measures form a pipeline: collect the data, mandate its disclosure, then channel it into bipartisan legislative recommendations.
Primary frame: Data-driven workforce governance
NY Assemblymember Alex Bores · 2026
Alex Bores — The AI Dividend
A federal policy proposal from the New York Assemblymember who authored the RAISE Act, now pitching a contingency-based direct payment program designed to activate automatically if AI meaningfully displaces American workers. The AI Dividend is explicitly framed as 'fire insurance' — not a prediction that mass unemployment will occur, but preparation in case it does. The proposal is notable for three novel funding mechanisms: a token tax on AI computation, federal equity warrants in frontier AI companies (out-of-the-money, exercisable only if companies multiply dramatically in value), and tax reform eliminating the accelerated depreciation subsidy for AI capital that currently makes automation cheaper than hiring. Revenue flows to three buckets: direct payments to Americans, workforce transition and education investment, and public AI safety/oversight infrastructure. Bores frames the timing urgency around a closing political window — demanding equity stakes in AI companies after they have already captured the value is far harder than structuring it now.
Primary frame: Contingency-based economic insurance
Sen. Mark Kelly · 2025
Sen. Mark Kelly — "AI for America" Roadmap
The most developed Democratic proposal for AI governance, focusing on worker protection and economic redistribution alongside safety and competitiveness. Kelly treats AI primarily as an economic disruption problem requiring institutional investment, proposing an industry-funded AI Horizon Fund for worker retraining and infrastructure.
Primary frame: Economic redistribution
Sen. Bernie Sanders · 2025
Sen. Bernie Sanders — AI Policy Proposals
The most interventionist and structurally critical position in the current debate, treating AI governance as inseparable from questions of corporate power and wealth inequality. Sanders' proposals include a national data center moratorium, a robot tax, and calls to break up major AI companies.
Primary frame: Democratic control
Rep. Ro Khanna · 2026
Rep. Ro Khanna — "AI for the People" Manifesto
Khanna's April 2026 manifesto in The Nation, building on his earlier Seven Principles but substantially more developed and politically explicit. Self-identifying as an 'AI democratist' (neither accelerationist nor doomer), Khanna frames AI policy as inseparable from the broader fight against billionaire wealth concentration in a 'new Gilded Age.' Notably published as a Silicon Valley representative who has co-hosted town halls with Sen. Bernie Sanders on AI oligarchy, the piece invokes FDR's New Deal as the template for the scale of response required and proposes a Future Workforce Administration funded by a wealth tax. Khanna explicitly attacks Trump's December 2025 executive order authorizing the DOJ to sue states over AI safety regulations.
Primary frame: AI democratism and economic redistribution
California Legislature · 2025
California SB 53 — Transparency in Frontier AI Act
California's evolution from the vetoed SB 1047 to SB 53 illustrates the real-time negotiation between ambition and political feasibility in AI governance. SB 53 targets large frontier developers with over $500M in annual revenue and requires transparency reports on safety testing, with critical safety incidents reported within 15 days as a standard, or within 24 hours where there is imminent harm.
Primary frame: Transparency
New York Legislature / Gov. Hochul · 2025
New York RAISE Act
New York's Responsible AI Safety and Education Act establishes reporting and safety governance for frontier AI developers. It covers companies with over $500M in revenue developing frontier models and requires publicly disclosed safety and security protocols. The act includes civil penalties of $1M for initial violations, escalating to $3M for repeat offenses.
Primary frame: Compliance
Axes of Agreement and Disagreement
Three dimensions for comparing what the proposals prioritize and where they diverge
Governance Structure
Who regulates AI, through what institutions, and with what enforcement tools
Should the U.S. create a new federal AI regulatory body, or distribute authority across existing agencies?
Scope and Application
What gets regulated, how thresholds are defined, and when intervention occurs
Is a revenue-based threshold (like $500M) the right way to identify which AI systems require governance, or does it miss important risks?
Political Positioning
Partisan dynamics, coalition possibilities, and competing theories of the problem
Can a bipartisan coalition form around child safety and transparency while deferring harder questions about preemption and liability?
Analytical Frameworks
Interactive 2x2 charts plotting each proposal along key policy axes
Enforcement Mechanism vs. Regulatory Scope
X-axis: Narrow Scope (AI-specific) → Broad Scope (Economy-wide)
Y-axis: Voluntary / Industry-led → Mandatory / Government-enforced
Prevention vs. Liability & Regulatory Authority
X-axis: State-led Authority → Federal-led Authority
Y-axis: Liability / Post-hoc → Prevention / Pre-deployment
Innovation Priority vs. Worker Protection
X-axis: Low Worker Priority → High Worker Priority
Y-axis: Low Innovation Priority → High Innovation Priority
Pre-deployment Obligations vs. Federal Preemption
X-axis: Weak Obligations → Strong Obligations
Y-axis: Weak Preemption → Strong Preemption
Where Proposals Converge and Diverge
Key themes emerging across the political spectrum
Areas of Broad Agreement
- Every proposal treats the most powerful AI systems as requiring some form of distinct governance. The $500M revenue threshold used by California and New York is emerging as a de facto standard.
- Child safety is the area of greatest bipartisan alignment and the most likely candidate for near-term legislation. The White House, Blackburn, California, and OpenAI all prioritize it.
- Even the lightest-touch proposals endorse some form of transparency. The debate is over whether transparency alone is sufficient or must be paired with pre-deployment testing or liability.
- All proposals acknowledge AI-driven workforce disruption. The disagreement is over mechanism: reporting versus redistribution versus structural intervention.
- Copyright and digital replicas need addressing. The White House and the Blackburn bill converge on the need for a federal framework on AI-generated replicas and content provenance. The copyright status of training data remains unresolved.
Areas of Sharp Disagreement
- Federal preemption of state AI laws is the deepest divide. The White House and Blackburn want broad preemption. OpenAI wants preemption contingent on a meaningful federal framework. States are asserting authority while signaling willingness to defer to a sufficiently robust federal regime.
- Pre-deployment versus post-deployment intervention. OpenAI's prevention-first model (federal testing before deployment) is endorsed by no other major proposal in its strong form. Most rely on post-deployment accountability.
- The economic structure of AI. Sanders' moratorium and breakup proposals have no analogs elsewhere. Kelly's industry-funded Horizon Fund is a moderate redistribution mechanism. The White House framework and the Blackburn bill contain no redistribution provisions.
- The role of liability. The Blackburn bill creates extensive liability frameworks, including developer, deployer, and federal cause-of-action pathways. The White House prefers reliance on existing law. The appropriate role of litigation in AI governance remains unresolved.