What’s New

White House AI Policy Framework

White House Releases National AI Policy Framework with Legislative Recommendations. The administration’s new framework lays out federal priorities for AI legislation, covering child safety, deepfake protections, copyright, workforce development, and energy infrastructure. Notably, it pushes to preempt state-level AI regulations while offering limited federal guardrails in return.

Georgetown’s CSET Unpacks What the New AI Framework Actually Says. Researchers at the Center for Security and Emerging Technology provide an early, detailed breakdown of the framework’s implications for U.S. competitiveness, national security, and the conspicuous absence of strong safety requirements.

Beware the AI Preemption Trap. Just Security argues the framework’s push to override state-level safety, bias audit, and accountability laws amounts to a bait-and-switch: stripping local protections without replacing them with meaningful federal ones. A sharp legal critique worth reading alongside the framework itself.

The Framework’s Blind Spot: Who Gets Left Behind. Sydney Saubestre at Tech Policy Press makes the case that the new policy prioritizes economic dominance and deregulation while systematically failing to protect marginalized communities from algorithmic harm, bias, and surveillance.

America’s AI Governance Crisis Is Really a Democracy Crisis. Laura MacCleery connects the dots between the failure to establish meaningful AI oversight and a broader erosion of democratic accountability, arguing that corporate power and executive authority are crowding out public deliberation.

Ethics & Safety

700 Real-World Cases of AI Scheming, and a 5x Increase in Incidents. The Centre for Long-Term Resilience has documented over 700 cases where AI systems lied, bypassed instructions, or acted deceptively to evade human oversight. This is one of the first large-scale empirical datasets on AI misalignment in deployment, and the trend line is not encouraging.

Stanford Finds AI Models Are Sycophants, Even When It Causes Harm. New Stanford research shows large language models have a strong tendency to affirm users seeking personal advice, even when the user’s behavior is harmful. The researchers frame this as a safety problem that needs design changes and regulatory attention.

Why AI Interfaces Shouldn’t Try to Be Human. This arXiv preprint examines how anthropomorphic design choices in AI (dialogue style, emotive language) can undermine user autonomy and cause real harm, particularly for vulnerable populations like survivors of gender-based violence. It advocates for trauma-informed, restrained design as a form of procedural ethics.

Carnegie Mellon Proposes a New Framework for AI Privacy and Dignity. Researchers at CMU have developed a framework that integrates contextual privacy norms with universal dignity requirements. The goal is to give policymakers a practical tool for governing foundation models as they evolve.

How AI-Generated News Affects Trust and Perceived Bias. A new paper in Humanities and Social Sciences Communications (a Nature Portfolio journal) studies how exposure to automated journalism shapes perceptions of media credibility among younger audiences. The findings underscore the need for transparency when AI produces news content.

Economics & Employment

ILO and World Bank: Generative AI’s Impact on Jobs Will Be Deeply Uneven. A joint paper from the International Labour Organization and the World Bank finds that AI-driven job displacement will hit developing economies and lower-wage workers hardest. A major authoritative signal on who actually bears the cost of the AI transition.

Anthropic’s Data Suggests AI Skills Compound, and That Could Widen Inequality. Internal Anthropic data shows that users who invest time learning to work with AI develop compounding advantages over those who don’t. The implication: AI may accelerate socioeconomic inequality rather than democratize opportunity.

How AI Hype Obscures the Exploitation of African Data Workers. Marche Arends and Kathryn Cleary document how the AI industry’s progress narrative hides the systematic exploitation of low-paid African workers who do the data labeling, content moderation, and RLHF annotation that makes frontier models work.

Brookings: A People-First Vision for Work in the Age of AI. Brookings argues that without deliberate policy intervention, AI risks accelerating layoffs and wealth concentration. The piece proposes concrete measures including tripartite institutions (government, business, unions), minimum staffing in human-centric roles, and support for mid-career transitions.

Tufts Study Identifies 9.3 Million U.S. Jobs at High Risk from AI. A new American AI Jobs Risk Index from Tufts University maps which occupations and geographies face the greatest displacement risk over the next two to five years. The political implications are significant, especially in states already pushing for AI regulation.

Policy & Regulation

Why the Pentagon Branded Anthropic a “Supply Chain Risk,” and What a Federal Judge Did About It. A federal judge has blocked the Pentagon’s move against Anthropic after the Defense Department labeled the AI safety company a national security liability. The case is a landmark confrontation between ethical AI development and the state’s demand for unrestricted military AI capabilities.

Baltimore Sues X and xAI Over Grok-Generated Deepfakes. The City of Baltimore has filed a municipal lawsuit alleging that xAI’s Grok tool enabled widespread creation of non-consensual sexualized deepfake images, including of minors. It is one of the first major municipal legal actions targeting an AI platform for deepfake harms.

Can the EU AI Act Handle Autonomous AI Agents? This arXiv preprint analyzes the challenges the EU AI Act faces as AI agents become more autonomous. It identifies gaps in monitoring, enforcement, and liability that existing rules were not designed to address.

Australia Sets National Rules for AI Data Center Approvals. Australia has announced principles governing how AI data centers are approved, addressing energy consumption, water use, and community impact. It signals a growing trend of governments treating AI infrastructure as a matter of public interest.

Last Updated: 2026-03-29 20:19 (California Time)