What’s New

US Federal AI Policy: The White House Framework

White House Drops a National AI Policy Framework, Aims to Override State Laws. The administration released legislative recommendations on March 20 that would establish a unified federal approach to AI regulation, explicitly seeking to preempt the patchwork of 24+ state-level AI laws. The document covers workforce training, IP protections, and child safety, and proposes an AI Litigation Task Force.

Politico: What the White House AI Blueprint Actually Says. A useful breakdown of the framework’s light-touch regulatory philosophy, its tension with existing kids’ safety bills in Congress, and the deliberate decision not to create a new federal AI agency.

The Preemption Play: How the White House Plans to Standardize US AI Law. This analysis unpacks the political and legal stakes of federal preemption, explaining the tug-of-war between large AI companies (who want uniformity) and civil society groups (who want stronger state-level consumer protections).

R Street Institute: “The AI Terrible Ten” State Laws. A policy report naming and critiquing the worst state-level AI regulations, arguing that poorly designed rules are failing to improve safety while creating compliance headaches. Useful context for the federal preemption debate.

Policy & Regulation

UK Government Publishes Its Official Position on AI and Copyright. Released March 18 under the Data (Use and Access) Act 2025, this is the UK’s definitive regulatory stance on how copyright law applies to AI training data and AI-generated outputs. A primary source document for anyone tracking global AI copyright policy.

UK Walks Back Broad Copyright Exception for AI Training. Legal commentary explaining how the government reversed course on a proposed exception that would have let AI companies train on copyrighted works without licensing. The piece details the economic impact assessment behind the reversal and what it means for the UK’s competitiveness as an AI hub.

Baltimore Sues xAI Over Deepfake Harms. A landmark municipal lawsuit accusing xAI of enabling non-consensual sexual deepfakes, escalating legal accountability for generative AI platforms. The case could set precedent for how local governments use consumer protection law against AI companies.

Economics & Employment

Reuters: Companies Are Cutting Jobs as Investment Shifts Toward AI. HSBC, Amazon, Meta, and others are slashing thousands of positions amid massive AI spending. Surveys now link a growing share of layoffs directly to AI adoption, with estimated net US job losses of 5,000 to 10,000 per month.

Economists and Investors Pitch Washington on an AI Job-Loss Safety Net. A group of economists and investors is urging proactive policies like portable benefits, wage insurance, and AI-linked taxes to address potential displacement and wealth concentration before widespread labor market shocks hit.

Federal Reserve: AI and Coder Employment, Compiling the Evidence. Fed analysis finds that coder employment growth slowed sharply after ChatGPT’s release, with employment falling roughly 3% per year once industry factors are controlled for. Since coders are among the most AI-exposed occupations, this could foreshadow broader shifts in labor demand.

Researchers Say AI Isn’t Killing Jobs, It’s “Unbundling” Them. A new study argues AI automates specific tasks rather than entire roles, effectively splitting “weak-bundle” occupations into narrower functions while leaving “strong-bundle” jobs mostly intact. The distinction matters for predicting which workers are actually at risk.

Anthropic Economic Index: How AI Is Actually Being Used Across the Economy. Anthropic’s March 2026 report uses privacy-preserving usage data from Claude to track real-world AI adoption by sector. One of the most data-grounded public documents available on AI’s economic footprint, designed to give researchers and policymakers early warning of labor market shifts.

Ethics & Safety

How AI Hype Masks the Exploitation of African Workers. An investigative piece exposing how the AI industry’s triumphant narrative conceals the low-wage, exploitative conditions faced by African data annotators and content moderators. The authors argue that the invisibility of this workforce is a structural feature of how AI hype gets manufactured.

Diversity Laundering: When Brands Use AI Faces Instead of Real People. Advertisers are increasingly using AI-generated synthetic faces to simulate diversity in campaigns without hiring diverse talent. The piece raises pointed questions about authenticity, economic displacement of models from marginalized communities, and brand responsibility.

America’s AI Governance Crisis Is a Democracy Crisis. This essay argues that the US failure to establish coherent AI oversight is not just a policy gap but a symptom of deeper democratic dysfunction. It connects the absence of meaningful AI accountability to broader erosion of institutional checks and balances.

Research

The Matthew Effect at Scale: Why AI Won’t Democratize Academic Influence. A counterintuitive argument that generative AI’s ability to lower the cost of producing academic writing will concentrate rather than spread scientific influence, because the bottleneck has shifted from production to attention. The argument carries implications for knowledge inequality and competition in content markets.

IDB: Are We Ready for AI? From Measurement to Policy Governance. This Inter-American Development Bank paper critiques existing AI readiness indexes for biases that disadvantage regions like Latin America, and proposes an Adaptive AI Readiness Scorecard for context-specific policy learning.


Last Updated: 2026-03-25 19:51 (California Time)