What’s New

White House AI Policy Framework

Georgetown’s CSET Unpacks the White House AI Policy Framework. Researchers at Georgetown’s Center for Security and Emerging Technology offer an early, detailed read of the new framework, weighing its implications for national security, federal preemption of state AI laws, and the balance between competitiveness and oversight.

White House AI Framework Leaves the Most Vulnerable Exposed, Critics Say. TechPolicy.Press argues the new national AI policy prioritizes innovation speed over equity and safety guardrails, systematically failing to protect low-income workers and communities subject to algorithmic decision-making.

The Actual White House AI Policy Framework Document (PDF). The primary source: federal legislative recommendations for unified AI rules, including preemption of state laws, safety and IP provisions, child protection measures, and innovation incentives. Worth reading directly before relying on summaries.

WilmerHale’s Legal Breakdown of the White House AI Framework. A concise legal advisory covering what the framework means for business compliance, particularly around federal preemption of state-level AI legislation.

Policy & Regulation

States Push Ahead With AI Regulation, Defying the White House. California, New York, and others are advancing their own AI safety, privacy, and copyright rules despite the federal push for preemption and a single national standard. The tension between state and federal approaches is shaping up to be a defining fight.

America’s AI Governance Crisis Is Really a Democracy Crisis. Laura MacCleery argues that the failure to build coherent AI oversight isn’t just a policy gap. It’s a concentration-of-power problem, with a handful of corporations and executive actions filling the vacuum that Congress has left open.

Canada Publishes Results of Its AI Copyright Consultation. The Canadian government’s “What We Heard” report covers text and data mining rights, ownership of AI-generated works, and economic impact on creators. A useful benchmark for anyone tracking international AI copyright policy.

Economics & Employment

Tufts Maps AI Job Vulnerability Across the US: 9.3 Million Jobs at Risk. A first-of-its-kind index from Tufts University projects that roughly 9.3 million American jobs face displacement risk within two to five years, with knowledge workers (writers, programmers, web designers) most exposed. Associated income losses could reach $200 billion to $1.5 trillion annually.

ILO and World Bank Find Generative AI’s Job Impact Is Deeply Uneven Globally. A joint paper from the ILO and World Bank shows that the employment effects of generative AI vary sharply by geography, sector, and income level. Developing economies face particular vulnerabilities.

Anthropic’s Data Shows AI Skills Compound Over Time, Potentially Widening Inequality. Internal Anthropic research finds that consistent AI users develop compounding productivity advantages. The implication: the gap between AI haves and have-nots could grow faster than expected.

Atlanta Fed: Executive Surveys Show Uneven AI Adoption and Limited Near-Term Job Losses. A new Federal Reserve Bank of Atlanta working paper finds productivity gains from AI but limited aggregate job cuts so far. The shift is compositional: skilled technical roles are growing while routine clerical positions shrink, and larger firms are more likely to plan workforce reductions.

Ethics & Safety

AI “Scheming” Incidents Have Increased Fivefold, New Report Finds. The Centre for Long-Term Resilience documents a sharp rise in cases where AI systems appear to pursue goals while evading human oversight. This is one of the more concrete data points yet on real-world alignment failures.

How AI Hype Masks the Exploitation of African Data Workers. Marché Arends and Kathryn Cleary report on the low-paid, psychologically damaging content moderation and data labeling work performed by African workers that underpins major AI systems. Part of TechPolicy.Press’s Hype Studies series.

Research & Analysis

RAND Calls for a “Grand Strategy” of AI Resilience. RAND argues that resilience, not just regulation or acceleration, should be the organizing principle for AI policy. The paper draws on engineering, psychology, and ecology to propose a framework for absorbing shocks and adapting to AI’s transformative effects on the economy and security.

Packard Foundation Report: AI as an Accelerant of Democratic Vulnerabilities. A synthesis of expert views on how AI erodes media trust through deepfakes and hallucinations, embeds bias in high-stakes decisions, displaces labor, and undermines shared truth. A broad survey of the AI-and-democracy landscape.

Mathematical Methods and Human Thought in the Age of AI (arXiv). An arXiv preprint exploring the philosophical and societal dimensions of AI's integration into human life, covering risks from deepfakes, threats to skilled livelihoods, ethical questions around IP and copyright of AI outputs, and the case for human-centered development.

Last Updated: 2026-03-30 17:49 (California Time)