Eight major approaches to governing AI in the United States
White House · 2026
The White House released legislative recommendations outlining a National Policy Framework for Artificial Intelligence, structured around seven pillars addressed to Congress. The framework covers child safety, community protection, intellectual property, anti-censorship, innovation, workforce, and federal preemption. It positions state regulation as the primary threat to U.S. competitiveness and frames preemption as the central legislative priority.
Primary frame: Innovation and competitiveness
Sen. Marsha Blackburn · 2026
The most comprehensive federal AI bill to date, spanning 17 titles and hundreds of pages. Although framed as implementing the White House's deregulatory vision, the bill carries significantly more regulatory density than that framing suggests: it creates multiple enforcement pathways, mandatory reporting obligations, and a risk-based evaluation program.
Primary frame: Comprehensive federal regulation
OpenAI (Chris Lehane, Sasha Baker) · 2026
OpenAI advocates a specific sequencing of governance: a federal framework first, state alignment second, and federal incentives third. The position endorses mandatory federal testing of frontier systems, using classified government capabilities, before deployment, with the Center for AI Standards and Innovation (CAISI) serving as the primary evaluative institution.
Primary frame: Prevention-first safety
Sen. Mark Kelly · 2025
The most developed Democratic proposal for AI governance, focusing on worker protection and economic redistribution alongside safety and competitiveness. Kelly treats AI primarily as an economic disruption problem requiring institutional investment, proposing an industry-funded AI Horizon Fund for worker retraining and infrastructure.
Primary frame: Economic redistribution
Sen. Bernie Sanders · 2025
The most interventionist and structurally critical position in the current debate, treating AI governance as inseparable from questions of corporate power and wealth inequality. Sanders' proposals include a national data center moratorium, a robot tax, and calls to break up major AI companies.
Primary frame: Democratic control
Rep. Ro Khanna · 2026
Khanna articulates a middle path between Silicon Valley optimism and progressive structural critique through seven principles for democratic AI. As a representative of a Silicon Valley district, his position carries particular weight. He explicitly rejects Luddism while insisting on structural mechanisms that embed worker and community interests into AI governance.
Primary frame: Worker empowerment
California Legislature · 2025
California's evolution from the vetoed SB 1047 to SB 53 illustrates the real-time negotiation between ambition and political feasibility in AI governance. SB 53 targets large frontier developers with over $500M in annual revenue and requires transparency reports on safety testing, with critical safety incidents to be reported within 15 days as standard, or within 24 hours when there is imminent risk of harm.
Primary frame: Transparency
New York Legislature / Gov. Hochul · 2025
New York's Responsible AI Safety and Education Act establishes reporting and safety governance for frontier AI developers. It covers companies with over $500M in revenue that develop frontier models and requires publicly disclosed safety and security protocols. The act includes civil penalties of $1M for an initial violation, escalating to $3M for repeat offenses.
Primary frame: Compliance