Governance Structure

Who regulates AI, through what institutions, and with what enforcement tools

The proposals diverge sharply on who should regulate AI and through what mechanisms. The White House framework and Blackburn bill rely primarily on existing agencies (FTC, NIST, DOL). Kelly proposes a new funding mechanism (AI Horizon Fund) but not a new regulatory body. Sanders implicitly calls for new authority through moratorium power. OpenAI advocates for CAISI as a quasi-new federal institution with pre-deployment testing authority.

The White House framework leans toward voluntary measures with targeted mandates. Blackburn, despite its deregulatory framing, creates substantial mandatory obligations. OpenAI endorses mandatory federal testing. State laws (CA, NY) create mandatory disclosure and reporting requirements. Sanders and Khanna both call for mandatory mechanisms.

The Blackburn bill is the most enforcement-dense, creating overlapping FTC, state AG, and private right of action pathways. New York's RAISE Act has specific monetary penalties. California's SB 53 relies more on transparency pressure than punitive enforcement.

Key Questions

  1. Should the U.S. create a new federal AI regulatory body, or distribute authority across existing agencies?
  2. What role should state attorneys general play in enforcement when federal preemption is on the table?
  3. Can voluntary commitments and transparency requirements substitute for mandatory compliance regimes?

Proposals

All proposals in this analysis

WH Framework, Blackburn, OpenAI/Lehane, Kelly, Sanders, Khanna, CA SB 53, NY RAISE