What’s New

The White House vs. AI Safety Companies

Pentagon vs. Anthropic: An AI Safety Dispute Becomes a Government Blacklist. Anthropic refused to waive its ethical red lines for Pentagon deployment and was designated a national-security supply-chain risk in response. The move is unprecedented and could discourage other AI companies from maintaining independent safety commitments.

White House Drafts “Any Lawful Use” Mandate for AI Firms. A draft directive would require AI companies to support any lawful government use of their systems, effectively stripping firms of the ability to impose their own ethical constraints. The Anthropic blacklisting fits into a broader executive-branch strategy to subordinate private AI safety policies to federal authority.

Commerce Department Moves to Preempt State AI Laws. On March 11, the Department of Commerce initiated a process to override state-level AI legislation, potentially nullifying dozens of consumer-protection and algorithmic-accountability laws. Businesses and civil-society groups were given just days to respond.

The Legal Limits of the Federal Push to Override State AI Rules. Ropes & Gray examines how the Trump administration’s executive order to preempt state AI laws faces real hurdles without comprehensive federal legislation. The analysis maps out likely legal battles and the risk of weakening existing consumer safeguards against AI bias.

Policy & Regulation: EU and International

Leaked EU Draft Shows How Brussels Will Investigate and Fine AI Model Providers. A leaked regulatory document outlines the specific investigative procedures, evidentiary standards, and fine structures the EU plans to use against general-purpose AI providers. Fines could reach hundreds of millions of euros, and regulators intend to demand access to training data and internal safety evaluations.

The EU AI Act Is Fracturing Transatlantic Tech Regulation. Full enforcement of the EU AI Act is creating a widening gap between European risk-based regulation and the U.S. deregulatory posture. Multinational companies now face dual compliance regimes, producing a “Brussels Effect” in AI governance that echoes what happened with GDPR.

Red Lines Under the EU AI Act: The Ban on Untargeted Facial Image Scraping. The Future of Privacy Forum breaks down how the AI Act bans untargeted scraping for facial recognition databases, distinguishing it from targeted methods and connecting it to past GDPR enforcement against companies like Clearview AI.

Kenya Moves to Criminalize Unapproved “High-Risk” AI Deployment. Kenya’s proposed AI bill would impose criminal penalties, including jail time, on organizations deploying high-risk AI systems like credit-scoring and hiring tools without state approval. This makes Kenya one of the first African nations to pursue criminal enforcement of AI regulation.

AI Regulation in 2026: A Global Policy Map for Product Teams. A practitioner-oriented survey of the global AI regulatory landscape covering the EU AI Act, U.S. federal and state developments, China’s generative-AI rules, and emerging frameworks in the Global South. Unusually, it translates each regulatory requirement into concrete product-team obligations.

Ethics & Safety

Autonomous AI Agents and the Future of Cyber Competition. A policy analysis examining how autonomous AI agents are reshaping offensive and defensive cyber operations, tied to the Trump administration’s March 6 Cyber Strategy. It raises hard questions about accountability and escalation risk when AI systems conduct cyber operations with minimal human oversight.

The Governance Gap in Autonomous AI Agents. The most popular open-source agent framework has 247,000 GitHub stars and over 135,000 publicly exposed instances across 52 countries, all running with full system access. Existing governance frameworks are structurally unable to address agents that act, persist, and self-modify across organizational boundaries.

Norwegian Consumer Council Calls Generative AI the Next Wave of “Enshittification”. The same body whose 2018 “Deceived by Design” report reshaped EU dark-patterns law now frames generative AI products as a systematic mechanism for extracting value from users while degrading service quality. The report is likely to influence upcoming EU consumer-protection enforcement.

The Ethics of Using AI to Immortalize the Dead. Marketplace examines AI “griefbots” that simulate deceased loved ones, raising questions about consent, data manipulation for profit, and deepfake recreations. The growing posthumous AI industry sits in a regulatory gray zone with real implications for privacy and consumer protection.

Economics & Employment

Anthropic Launches the Anthropic Institute. Anthropic created a new research body to study the economic, social, security, and governance impacts of powerful AI. The initiative warns that rapid AI advances will reshape society within the next two years and calls for preparation on job shifts and systemic risks.

Fed Governor Warns AI Could Bring Job Displacement Before Job Creation. Federal Reserve Governor Lisa Cook says AI may displace workers in tasks like coding before new jobs emerge, causing real hardship for families as unemployment rises among recent graduates. The comments underscore the gap between long-run economic optimism and short-run labor market pain.

Job Transformation, Specialization, and the Labor Market Effects of AI. A Minneapolis Fed working paper models how AI changes the task content of jobs, finding moderate benefits for average workers but harm for those with high AI exposure. The research highlights growing wage inequality within occupations and a shift in demand from analytical to social skills.

Research

Via Negativa for AI Alignment: Why Negative Constraints Beat Positive Preferences. This arXiv preprint argues that prohibitions are structurally superior to positive preference optimization for aligning large language models, because they are discrete and verifiable. The authors advocate shifting safety research toward rejection learning for more stable and robust outcomes.

Sanders’s Data Center Moratorium Is a Risky Strategy for AI Safety. A LessWrong analysis argues that Bernie Sanders’s proposed moratorium on new data centers would slow AI development by months at most while risking political backlash that links AI safety to environmental populism. The piece warns the approach could undermine more effective compute governance efforts.

Last Updated: 2026-03-18 07:29 (California Time)