Anthropic vs. The Pentagon
Anthropic Sues the Department of Defense Over Surveillance Demands. Anthropic has filed a lawsuit against the Pentagon after the Trump administration labeled the company a “supply-chain risk” for refusing to let its models be used in mass domestic surveillance. This is the first major legal confrontation between a frontier AI company and the U.S. government over acceptable use boundaries.
How Anthropic Ended Up in the Pentagon’s Crosshairs. The Guardian traces the full sequence of events, from the Pentagon’s initial requests to the administration’s retaliatory moves. The piece puts the conflict in context alongside the broader struggle AI companies face when government pressure collides with safety commitments.
What Everyone Is Missing About Anthropic and the Pentagon. The Internet Governance Project argues this dispute is less about one company and more about a structural tension between AI safety norms and military procurement power. The analysis examines the precedent being set for how the federal government can compel private AI firms to serve defense and intelligence purposes.
Anthropic Is Already at War. MR Online reports that Anthropic’s models were embedded in CENTCOM’s targeting infrastructure and used during Operation Epic Fury strikes on Iran starting February 28. The piece raises hard questions about the gap between corporate safety rhetoric and actual deployment in lethal operations.
Congress Must Act on AI Surveillance, and the Anthropic Feud Shows Why. ACLU attorneys argue that the real problem is much bigger than one company: multiple government agencies already buy commercial data in bulk, and without legislation, AI-powered mass surveillance will become routine regardless of any single firm’s resistance.
Five Unresolved Issues in OpenAI’s Deal With the Pentagon. A policy fellow at the Center for Democracy & Technology examines gaps in oversight, accountability, and use-case restrictions that remain unaddressed in OpenAI’s defense partnership. Useful reading alongside the Anthropic story for understanding how these arrangements actually work.
Ethics & Safety
AI as Tradecraft: How Threat Actors Are Weaponizing AI. Microsoft Threat Intelligence documents how criminal and state-backed groups are integrating generative AI into phishing, reconnaissance, and social engineering workflows. The report underscores escalating cybersecurity risks and the need for coordinated defensive responses.
LLMs Are Getting Better at Unmasking People Online. A new study from ETH Zurich shows that large language models can aggregate information scattered across the internet to deanonymize users. The findings raise serious questions about whether online anonymity can survive the widespread deployment of generative AI.
Anthropic’s Responsible Scaling Policy v3.0: What Changed and Why It Matters. GovAI researchers break down the most significant update yet to Anthropic’s framework for managing catastrophic AI risks. The analysis covers new risk thresholds, evaluation protocols, and how the updated policy compares to safety commitments from other frontier labs.
Self-Attribution Bias: When AI Monitors Go Easy on Themselves. New research reveals that AI systems tasked with monitoring their own outputs consistently underrate risks compared to evaluating others’ work. The bias persists even when the model is explicitly told which outputs are its own, raising concerns about relying on AI self-monitoring in high-stakes applications.
Economics & Employment
Labor Market Impacts of AI: A New Measure and Early Evidence. Anthropic’s economics team finds no systematic rise in unemployment among AI-exposed workers since late 2022, but sees early signs that hiring of younger workers has slowed in exposed occupations. The most exposed workers tend to be older, female, more educated, and higher-paid, which challenges common assumptions about who gets disrupted first.
Dallas Fed: Older, Experienced Workers Are Less Worried About AI Displacement. AI-exposed industries show slight employment dips but faster wage growth, with experienced workers benefiting from tacit knowledge that is hard to automate. The flip side: entry-level and younger workers face tougher hiring conditions, widening workforce divides.
The Jobs Most Exposed to AI Are Not the Ones You’d Expect. Analysis shows that the roles most affected by current AI tools are higher-paid, knowledge-intensive occupations rather than routine manual jobs. The piece argues this has real implications for how retraining programs and safety nets should be designed.
Policy & Regulation
EU Publishes Second Draft Code of Practice on Labeling AI-Generated Content. The European Commission released updated guidelines on March 5 for marking synthetic media, advancing the transparency requirements under the AI Act. The rules carry global implications for platforms, creators, and content-authenticity standards.
Can the US and EU Actually Align on AI Safety? The German Marshall Fund examines how the Trump administration’s pivot toward innovation-first AI policy contrasts with EU priorities, but notes that bipartisan US support persists for mitigating high-risk AI applications. Opportunities for alignment remain on cybersecurity, child safety, and catastrophic risk through international forums.
The Growing Patchwork of State AI Employment Laws. A legal overview of 2026 state laws, including California’s SB 947/951 limiting AI in hiring and firing decisions and Colorado and Illinois bills prohibiting algorithmic discrimination. Employers face a fragmented compliance landscape that grows more complex by the month.
The Courts May End Up Writing America’s First Real AI Safety Rules. With federal legislation stalled, lawsuits like the Gemini suicide case could impose safety requirements through liability rulings, potentially mandating psychological testing and safety protocols. The piece argues courts may set de facto standards that outpace Congress.
The Controllability Trap: A Governance Framework for Military AI Agents. This paper identifies how autonomous coordination among military AI systems can erode human control and proposes a governance framework built on preventive, detective, and corrective pillars. It introduces a “Control Quality Score” for measuring how well humans remain in the loop.
Last Updated: 2026-03-09 18:07 (California Time)