Center for Humane Technology · 2026

Center for Humane Technology — The AI Roadmap


The Center for Humane Technology's most comprehensive AI policy document, structured around seven principles for how AI should be built, deployed, and governed. The Roadmap operates across three intervention domains -- norms, laws, and product design -- arguing that no single reform is sufficient and that change requires layered, simultaneous pressure on the AI development paradigm. CHT explicitly draws parallels to the movements against Big Tobacco and nuclear weapons, framing its theory of change as identifying high-leverage intervention points and applying coordinated civil society pressure across them. Unlike industry or legislative proposals, this is a civil society document setting expectations for what governance should look like rather than introducing bills.

Key Provisions

Regulatory Philosophy

A civil-society systems-change approach combining norms, laws, and product design. CHT explicitly rejects the framing that any single intervention can fix AI, arguing instead for layered pressure modeled on the campaigns against Big Tobacco and nuclear weapons. The philosophy treats the AI race itself -- the 'if I don't build it, someone else will' incentive structure -- as the root problem, and seeks to change the underlying paradigm rather than negotiate within it. The approach is notably ecumenical: it endorses product liability (a market mechanism), antitrust reform (a structural mechanism), and international red lines (a treaty mechanism) as complementary rather than competing approaches.

Strengths

Derived from the proposal’s own policy documents

  • The only proposal that explicitly addresses anthropomorphic chatbot design and psychosocial harms with concrete product-design standards, an area every legislative proposal sidesteps
  • Treating AI as a product subject to product liability is a legally elegant solution that leverages centuries of common-law accountability without requiring a new regulatory regime
  • The norms-laws-design framework recognizes that legislation alone cannot move a multi-trillion-dollar industry, building in cultural and technical change as parallel levers
  • Advances cognitive liberty and a right to think free from surveillance as a new constitutional category — the most ambitious rights framework in any AI proposal
  • Whistleblower protections extending to all AI employees (not just those working on catastrophic risk) acknowledges that the people closest to harm have the greatest knowledge to surface it

Weaknesses

From the perspective of political opposition

  • A 36-page roadmap of principles is not legislation — CHT names dozens of bills it supports without committing to a single legislative vehicle, hedging on every hard tradeoff
  • The 'AI is a product' framing collapses when applied to general-purpose foundation models — strict product liability would functionally ban open-source release and end academic research
  • Calls for international red lines on recursive self-improvement while offering no enforcement mechanism beyond moral suasion — the same mechanism that has failed to constrain frontier labs domestically
  • The norms-laws-design theory of change is vague enough to be unfalsifiable — every advocacy outcome can be claimed as progress while the AI race accelerates regardless
  • Sidesteps the question of who decides what 'humane' means — by substituting CHT's editorial judgment for democratic process, it risks the same paternalism it accuses tech companies of

Position on Analytical Frameworks

  • Enforcement Mechanism vs. Regulatory Scope
  • Prevention vs. Liability & Regulatory Authority
  • Innovation Priority vs. Worker Protection
  • Pre-deployment Obligations vs. Federal Preemption
