What’s New

Anthropic vs. the Pentagon

Anthropic Sues the Pentagon After Being Labeled a “Supply Chain Risk”. Anthropic filed suit against the Department of Defense after becoming the first U.S. company designated a supply chain risk, reportedly for refusing to give the military unrestricted access to its Claude AI for surveillance and autonomous weapons. The case sets up a direct collision between AI safety commitments and national security demands.

Microsoft and Competing AI Researchers Rally Behind Anthropic in Pentagon Fight. In a rare display of cross-industry solidarity, Microsoft and researchers from rival AI labs have filed briefs in support of Anthropic’s lawsuit. The coalition argues the Pentagon’s designation sets a dangerous precedent that could punish any company for maintaining ethical guardrails.

When Moral Reasoning Becomes a “Supply Chain Risk”. Policy analyst Laura MacCleery argues that penalizing an AI company for refusing to strip safety features conflates corporate ethics with national security threats. The piece warns this approach will erode both democratic norms and long-term U.S. competitiveness in AI.

What Everyone Is Missing About Anthropic and the Pentagon. This analysis reframes the dispute as a governance failure rather than a procurement disagreement, focusing on how AI policy is being shaped through opaque security mechanisms with little public accountability.

Anthropic v. U.S. Government: What AI Professionals Need to Know. A detailed breakdown of the legal, technical, and professional implications of the lawsuit, covering the constitutional questions at stake and what the outcome could mean for how AI companies structure government contracts and safety policies.

AI in Warfare

Halt AI in Warfare Until International Rules Exist, Say Researchers (Nature). A Nature editorial and open letter signed by employees at OpenAI, Anthropic, and Google DeepMind calls on AI companies to stop deploying their technologies for mass surveillance and fully autonomous weapons. The letter was prompted by recent AI-enabled military strikes and warns that the technology is outpacing international law.

EU Policy & Regulation

EU Parliament Adopts Stricter Copyright Rules for AI Training Data. The European Parliament formally endorsed a framework requiring generative AI developers to obtain licenses and compensate creators when copyrighted works are used for model training. The creative sector accounts for 6.9% of EU GDP, and lawmakers framed the vote as essential to protecting it.

Tech Industry Pushes Back Hard on Europe’s AI Copyright Proposals. Major technology groups warned that the Parliament’s proposed rules could cripple European AI development and fragment the global market. Industry representatives called the mandatory compensation scheme technically unworkable at scale.

Europe Recalibrates: AI Copyright and GDPR Under Simultaneous Pressure. A legal analysis examining how the EU is trying to balance its AI ambitions with strict copyright and data privacy protections. Recent court decisions and legislative proposals are forcing regulators to rethink how these frameworks interact.

European Commission Releases Updated Code of Practice for Labeling AI-Generated Content. The Commission published a second draft of its rules on mandatory marking and labeling of AI-generated media. The effort targets deepfakes and synthetic content, aiming to give consumers clearer signals about what they are seeing online.

Big Tech’s Full AI Operations Now Under EU Antitrust Scrutiny. EU antitrust chief Teresa Ribera is investigating Big Tech’s dominance across the entire AI supply chain, from chips to models, with firms such as Nvidia and Meta under examination for potential distortions of competition.

Economics & Employment

“What Will Our Kids Do?” The Question Haunting Investors at Morgan Stanley’s AI Conference. At Morgan Stanley’s TMT Conference, executives including Sam Altman warned of significant workforce reductions ahead. Surveys presented at the event showed companies planning 4% headcount cuts, with middle-income workers most exposed.

Anthropic Introduces New Measure of AI’s Real-World Labor Effects. Anthropic researchers developed an “observed exposure” metric to track how AI is actually affecting jobs, finding no broad unemployment spike so far but notably slower hiring for younger workers in exposed roles. Workers in the most-exposed occupations tend to be older, female, educated, and higher-paid.

Brookings: Research on AI and Jobs Is Still in the First Inning. A Brookings review of early studies finds mixed and inconclusive signals on AI’s employment effects, stressing the need for better data on productivity, labor supply, and workforce transitions before drawing firm conclusions.

America Cannot Withstand the Economic Shock That’s Coming (NYT Opinion). Former Commerce Secretary Gina Raimondo argues that AI is outpacing workforce adaptation and risks triggering mass joblessness comparable to past trade shocks. She calls for public-private investment in retraining, prediction tools, and wage insurance.

Anthropic Launches Think Tank Focused on AI’s Economic and Social Effects. Anthropic established a dedicated research body to study AI’s impact on labor, inequality, and economic disruption. The move signals that leading AI labs are beginning to institutionalize their engagement with the downstream consequences of their technology.

Research & Safety

Johns Hopkins Researchers Build Reusable Framework for AI Safety Testing. A team at Johns Hopkins developed “Jailbreak Distillation,” a method to systematically generate attack prompts for testing large language model vulnerabilities. The framework is designed to make safety evaluations more reproducible and sustainable as models scale.

Shutdown Safety Valves for Advanced AI (arXiv). This paper proposes giving advanced AI systems a primary goal of permitting their own shutdown, as a way to counter self-preservation incentives. It evaluates the conditions under which such an approach is advisable for mitigating existential risk.

Thaler Is Dead. Now for the AI Copyright Questions That Actually Matter. Following the Supreme Court’s refusal to hear Thaler v. Perlmutter, this analysis clarifies that U.S. copyright still requires human authorship but notes the large gray areas remaining around AI-assisted works. It lays out the unresolved questions on infringement, licensing, and proving human contribution.


Last Updated: 2026-03-12 18:28 (California Time)