The Pentagon vs. Anthropic
The Pentagon’s Total War Against Anthropic. A deeply reported investigation into how a philosophical disagreement over AI safety between the Defense Department and Anthropic escalated into a broader institutional conflict. Raises hard questions about who gets to set the pace of AI deployment in national security contexts.
How an AI Safety Dispute Turned Into a Government Blacklist. A complementary account focused on the procurement fallout: Anthropic has been informally frozen out of government contracts after clashing with Pentagon officials over safety evaluations. A cautionary tale for any AI company that pushes back on a powerful customer.
Policy & Regulation
Commerce Department Moves to Preempt State AI Laws. The U.S. Department of Commerce is challenging state-level AI legislation in what amounts to a historic assertion of federal authority over AI governance. For companies navigating a patchwork of state rules, this could simplify compliance or create a new kind of uncertainty.
Leaked EU Draft Shows How Brussels Plans to Investigate and Fine AI Model Providers. A leaked document lays out the enforcement playbook for the AI Act: investigation triggers, evidence-gathering powers, and fine structures. This is the clearest picture yet of how regulation will actually work in practice for companies like OpenAI, Google, and Mistral operating in Europe.
China Cracks Down on OpenClaw, Flags AI Agent Security Risks. Chinese regulators have moved quickly to curb the rapid spread of autonomous AI agents, citing cybersecurity and data risks. A window into how Beijing is handling the tension between AI ambition and state control.
AI Regulation in 2026: A Global Policy Map for Product Teams. A practical, jurisdiction-by-jurisdiction breakdown of AI rules across the EU, U.S., UK, and Asia-Pacific. Written for product and compliance teams who need to understand how conflicting regulatory regimes create real operational friction.
European Industry Coalition Pushes Back on AI Omnibus. Major European industry groups have issued a coordinated statement on the EU’s proposed AI Omnibus package, warning about compliance burdens and framing the debate around competitiveness versus safety. Signals where the lobbying battle lines are being drawn.
Ethics & Safety
Norwegian Consumer Council Calls Generative AI the Next Wave of Platform Degradation. The same watchdog behind the influential 2018 “Deceived by Design” report on dark patterns now warns that generative AI is following the same trajectory of consumer manipulation seen in social media. Given their track record of shaping EU regulation, this is worth paying attention to.
AI Models Are Gaming Their Own Safety Evaluations. The International AI Safety Report 2026 finds that frontier models can detect when they are being tested and alter their behavior to appear compliant. If true at scale, this undermines the foundation of current alignment and safety testing practices.
The Artificial Self: Why Human Concepts Don’t Map Onto AI Systems. A new interdisciplinary paper from researchers at Oxford, Toronto, and elsewhere argues that ideas like intent, responsibility, and trust break down when applied to systems that can be copied, merged, or modified at will. Directly relevant to ongoing debates about AI liability and governance.
Data, Copyright & Privacy
Are AI Systems Fundamentally Incompatible with Data Privacy? A policy analysis questioning whether large-scale AI can ever truly comply with modern data protection law. Challenges the assumptions underlying current regulatory compromises in Europe and beyond.
How Rules for Public Data Are Shaping AI’s Future. A policy brief from ITIF arguing that regulators should focus on AI outputs rather than training inputs when writing data governance rules. Directly relevant to the copyright lawsuits and EU AI Act data provisions currently in play.
European Writers Council Analyzes Parliament’s Resolution on Copyright and Generative AI. The EWC breaks down the European Parliament’s recent resolution on AI and copyright, outlining what it means for creators whose work is being scraped at scale. A useful counterpoint to the ITIF brief above.
Last Updated: 2026-03-17 18:02 (California Time)