Who regulates AI, through what institutions, and with what enforcement tools
The proposals diverge sharply on who should regulate AI and through what mechanisms. The White House framework and the Blackburn bill rely primarily on existing agencies (FTC, NIST, DOL). Kelly proposes a new funding mechanism (the AI Horizon Fund) but no new regulatory body. Sanders implicitly calls for new federal authority in the form of a moratorium power. OpenAI advocates for CAISI as a quasi-new federal institution with pre-deployment testing authority.
The White House leans toward voluntary measures with targeted mandates. Blackburn, despite its deregulatory framing, creates substantial mandatory obligations. OpenAI endorses mandatory federal testing. State laws (California, New York) create mandatory disclosure and reporting requirements. Sanders and Khanna both call for mandatory mechanisms.
The Blackburn bill is the most enforcement-dense, creating overlapping pathways for FTC action, state attorney general enforcement, and a private right of action. New York's RAISE Act sets specific monetary penalties. California's SB 53 relies more on transparency pressure than on punitive enforcement.
All proposals in this analysis