The Center for Humane Technology's most comprehensive AI policy document, structured around seven principles for how AI should be built, deployed, and governed. The Roadmap operates across three intervention domains -- norms, laws, and product design -- arguing that no single reform is sufficient and that change requires layered, simultaneous pressure on the AI development paradigm. CHT explicitly draws parallels to the campaigns against Big Tobacco and nuclear weapons, framing its theory of change as identifying high-leverage intervention points and applying coordinated civil society pressure across them. Unlike industry or legislative proposals, this is a civil society document that sets expectations for what governance should look like rather than introducing bills.
Key Provisions
Mandatory pre-deployment testing and risk management for AI systems with standardized reporting
Federal whistleblower protections for all AI employees and contractors (not just catastrophic risk)
Statutory clarification that AI is a product subject to product liability, not a service or legal person
Federal chatbot design standards focused on psychosocial harms, with enhanced protections for minors
Tax incentives flipping the current capital-over-labor bias to reward worker retention and upskilling
Right to cognitive liberty, expanded publicity rights, and firm limits on biometric surveillance
International red lines on recursive self-improvement and autonomous weapons, with technical verification
Upgraded antitrust enforcement targeting AI consolidation, lobbying transparency, and democratic ownership models
Regulatory Philosophy
Civil-society systems-change approach combining norms, laws, and product design. CHT explicitly rejects the framing that any single intervention can fix AI, arguing instead for layered pressure modeled on the campaigns against Big Tobacco and nuclear weapons. The philosophy treats the AI race itself -- the 'if I don't build it, someone else will' incentive structure -- as the root problem, and seeks to change the underlying paradigm rather than negotiate within it. Notably ecumenical: it endorses product liability (a market mechanism), antitrust reform (a structural mechanism), and international red lines (a treaty mechanism) as complementary rather than competing approaches.
Strengths
Derived from the proposal's own policy documents
+The only proposal that explicitly addresses anthropomorphic chatbot design and psychosocial harms with concrete product-design standards, an area every legislative proposal sidesteps
+Treating AI as a product subject to product liability is a legally elegant solution that leverages centuries of common-law accountability without requiring a new regulatory regime
+The norms-laws-design framework recognizes that legislation alone cannot move a multi-trillion-dollar industry, building in cultural and technical change as parallel levers
+Advances cognitive liberty and a right to think free from surveillance as a new constitutional category -- the most ambitious rights framework in any AI proposal
+Whistleblower protections extending to all AI employees (not just those working on catastrophic risk) acknowledges that the people closest to harm have the greatest knowledge to surface it
Weaknesses
From the perspective of political opposition
−A 36-page roadmap of principles is not legislation -- CHT names dozens of bills it supports without committing to a single legislative vehicle, hedging on every hard tradeoff
−The 'AI is a product' framing collapses when applied to general-purpose foundation models -- strict product liability would functionally ban open-source release and end academic research
−Calls for international red lines on recursive self-improvement while offering no enforcement mechanism beyond moral suasion -- the same mechanism that has failed to constrain frontier labs domestically
−The norms-laws-design theory of change is vague enough to be unfalsifiable -- every advocacy outcome can be claimed as progress while the AI race accelerates regardless
−Sidesteps the question of who decides what 'humane' means -- by substituting CHT's editorial judgment for democratic process, it risks the same paternalism it accuses tech companies of