All Proposals

Twelve major approaches to governing AI in the United States

White House · 2026

White House AI Legislative Framework

The White House released legislative recommendations outlining a National Policy Framework for Artificial Intelligence, structured around seven pillars addressed to Congress. The framework covers child safety, community protection, intellectual property, anti-censorship, innovation, workforce, and federal preemption. It positions state regulation as the primary threat to U.S. competitiveness and frames preemption as the central legislative priority.

Primary frame: Innovation and competitiveness

Sen. Marsha Blackburn · 2026

Blackburn TRUMP AMERICA AI Act

The most comprehensive federal AI bill to date, spanning 17 titles and hundreds of pages. Although framed as implementing the White House's deregulatory vision, the bill is considerably more regulatory than that framing suggests: it creates multiple enforcement pathways, mandatory reporting obligations, and a risk-based evaluation program.

Primary frame: Comprehensive federal regulation

OpenAI (Chris Lehane, Sasha Baker) · 2026

OpenAI / Chris Lehane Policy Position

OpenAI advocates a specific sequencing of governance: federal framework first, state alignment second, federal incentives third. The position endorses mandatory federal testing of frontier systems, using classified government capabilities, before deployment, with the Center for AI Standards and Innovation (CAISI) serving as the primary evaluative institution.

Primary frame: Prevention-first safety

OpenAI · 2026

OpenAI — Industrial Policy for the Intelligence Age

OpenAI's most expansive policy document to date, moving well beyond its earlier safety-focused position to propose a comprehensive industrial policy agenda for the transition to superintelligence. The document is organized around two pillars: building an open economy with broad participation and shared prosperity, and building a resilient society through safety systems, alignment, and governance. It proposes a Public Wealth Fund giving every citizen a stake in AI-driven growth, portable benefits decoupled from employers, adaptive safety nets with automatic triggers, a 32-hour workweek pilot, modernized taxation of capital over labor, and a global network of AI Safety Institutes. The framing explicitly invokes the Progressive Era and the New Deal as precedents for the scale of institutional response required.

Primary frame: Industrial policy and shared prosperity

Center for Humane Technology · 2026

Center for Humane Technology — The AI Roadmap

The Center for Humane Technology's most comprehensive AI policy document, structured around seven principles for how AI should be built, deployed, and governed. The Roadmap operates across three intervention domains (norms, laws, and product design), arguing that no single reform is sufficient and that change requires layered, simultaneous pressure on the AI development paradigm. CHT explicitly draws parallels to the movements against Big Tobacco and nuclear weapons, framing its theory of change as identifying high-leverage intervention points and applying coordinated civil society pressure across them. Unlike the industry and legislative proposals, this is a civil society document that sets expectations for what governance should look like rather than introducing bills.

Primary frame: Humane technology and public interest

Sen. Mark Warner (with Sens. Hawley, Young, Rounds, and others) · 2026

Sen. Mark Warner — AI Workforce Data & Commission Package

Senator Warner's AI agenda is not a single bill but a coordinated three-part strategy to build the institutional infrastructure for evidence-based AI workforce policy. First, a bipartisan letter (co-signed by Warner, Hawley, Banks, Hassan, Kelly, Kaine, Hickenlooper, Young, and Rounds) urges the Bureau of Labor Statistics and the Census Bureau to rapidly expand AI labor-market data collection across existing surveys, including the CPS, JOLTS, and the National Longitudinal Surveys. Second, the AI-Related Job Impacts Clarity Act (with Sen. Hawley) would require publicly traded companies and federal agencies to disclose quarterly their AI-driven layoffs, hires, unfilled positions, and retraining efforts, reported to the Department of Labor with NAICS codes and published on the BLS website. Third, the Economy of the Future Commission Act (S. 4046, with Sen. Rounds) would establish a 10-member bipartisan legislative commission to develop consensus recommendations on workforce development, education, social safety nets, taxation, open-source AI, transportation safety, energy, and robotics; the commission must deliver employment projections by NAICS code at 5- and 10-year horizons within 7 months, and full legislative recommendations within 13 months. Together, the three measures form a pipeline: collect the data, mandate its disclosure, then channel it into bipartisan legislative recommendations.

Primary frame: Data-driven workforce governance

NY Assemblymember Alex Bores · 2026

Alex Bores — The AI Dividend

A federal policy proposal from the New York Assemblymember who authored the RAISE Act, now pitching a contingency-based direct-payment program designed to activate automatically if AI meaningfully displaces American workers. The AI Dividend is explicitly framed as 'fire insurance': not a prediction that mass unemployment will occur, but preparation in case it does. The proposal is notable for three novel funding mechanisms: a token tax on AI computation; federal equity warrants in frontier AI companies (out-of-the-money, exercisable only if those companies multiply dramatically in value); and tax reform eliminating the accelerated-depreciation subsidy for AI capital that currently makes automation cheaper than hiring. Revenue flows to three buckets: direct payments to Americans, workforce transition and education investment, and public AI safety and oversight infrastructure. Bores frames the timing as urgent because the political window is closing: demanding equity stakes in AI companies after they have already captured the value will be far harder than structuring those stakes now.

Primary frame: Contingency-based economic insurance

Sen. Mark Kelly · 2025

Sen. Mark Kelly — "AI for America" Roadmap

The most developed Democratic proposal for AI governance, focusing on worker protection and economic redistribution alongside safety and competitiveness. Kelly treats AI primarily as an economic disruption problem requiring institutional investment, proposing an industry-funded AI Horizon Fund for worker retraining and infrastructure.

Primary frame: Economic redistribution

Sen. Bernie Sanders · 2025

Sen. Bernie Sanders — AI Policy Proposals

The most interventionist and structurally critical position in the current debate, treating AI governance as inseparable from questions of corporate power and wealth inequality. Sanders' proposals include a national data center moratorium, a robot tax, and calls to break up major AI companies.

Primary frame: Democratic control

Rep. Ro Khanna · 2026

Rep. Ro Khanna — "AI for the People" Manifesto

Khanna's April 2026 manifesto in The Nation builds on his earlier Seven Principles but is substantially more developed and politically explicit. Self-identifying as an 'AI democratist' (neither accelerationist nor doomer), Khanna frames AI policy as inseparable from the broader fight against billionaire wealth concentration in a 'new Gilded Age.' Writing as a Silicon Valley representative who has co-hosted town halls with Sen. Bernie Sanders on AI oligarchy, he invokes FDR's New Deal as the template for the scale of response required and proposes a Future Workforce Administration funded by a wealth tax. Khanna explicitly attacks Trump's December 2025 executive order authorizing the DOJ to sue states over AI safety regulations.

Primary frame: AI democratism and economic redistribution

California Legislature · 2025

California SB 53 — Transparency in Frontier AI Act

California's evolution from the vetoed SB 1047 to SB 53 illustrates the real-time negotiation between ambition and political feasibility in AI governance. SB 53 targets large frontier developers with over $500M in annual revenue and requires transparency reports on safety testing, with critical safety incidents reported within 15 days as standard, or within 24 hours when there is imminent risk of harm.

Primary frame: Transparency

New York Legislature / Gov. Hochul · 2025

New York RAISE Act

New York's Responsible AI Safety and Education Act establishes reporting and safety-governance requirements for frontier AI developers. It covers companies with over $500M in revenue developing frontier models and requires publicly disclosed safety and security protocols. The act includes civil penalties of $1M for a first violation, escalating to $3M for repeat violations.

Primary frame: Compliance