What’s New

The Anthropic-Pentagon Standoff

Anthropic’s CEO Says He’ll Fight the Pentagon in Court Over “Supply Chain Risk” Label. Dario Amodei confirmed that Anthropic has been officially designated a supply chain risk to U.S. national defense after refusing to provide AI systems for military use. The company plans to challenge the designation in court, marking a major escalation in the clash between AI labs and the federal government.

If AI Is a Weapon, Why Don’t We Regulate It Like One? Noah Smith argues the Anthropic-Pentagon conflict exposes a basic gap in how the U.S. handles dual-use AI. He draws parallels to arms export controls and asks why similar governance structures haven’t been applied to frontier AI systems.

The Real Problem Isn’t Anthropic’s Defiance. It’s That Congress Never Showed Up. Policy analyst Carla Oikawa frames the crisis as a failure of legislation, not corporate rebellion. Without comprehensive federal AI law, private companies have been forced into making de facto national security policy on their own.

The Neutrality Trap: How OpenAI Got the Contract Anthropic Lost. An investigative piece examining how OpenAI secured the Pentagon AI deal after Anthropic was dropped for demanding ethical guardrails. It raises hard questions about whether voluntary corporate ethics can survive when competitors are happy to step in.

Anthropic and Alignment. Ben Thompson examines whether Anthropic’s commitment to AI safety can hold up under competitive and political pressure. He questions whether limited scaling actually guarantees control when the market rewards speed over caution.

Mass Surveillance, Red Lines, and a Crazy Weekend. Harvard professor and OpenAI researcher Boaz Barak reflects on the ethical boundaries AI companies face around surveillance applications. He wrestles with the tension between building powerful general-purpose systems and the moral weight of how they get used.

Economics & Employment

Labor Market Impacts of AI: A New Measure and Early Evidence. Anthropic’s research team finds no systematic rise in unemployment for AI-exposed workers since late 2022, but flags a slowdown in hiring younger workers in affected occupations. The most exposed professions tend to employ people who are older, more educated, and higher-paid.

The Week the AI Jobs Wipeout Got Real. The Wall Street Journal reports on a wave of AI-driven layoffs, including Block’s 40% headcount cut, as fears of widespread displacement in knowledge work move from theoretical to tangible.

How AI Is Already Reshaping Working Conditions. A UN report documents how algorithmic management is transforming labor globally, especially in the gig economy. It highlights worker surveillance, automated performance reviews, and weak protections in developing countries.

A Practical Guide to Data Strikes Against Frontier AI. Researcher Nick Vincent lays out how individuals and communities can use “data leverage” as collective action against AI companies. The piece connects data withholding to broader debates about fair compensation and power imbalances in the AI ecosystem.

Policy & Regulation

EU Publishes Second Draft of Rules for Labeling AI-Generated Content. The European Commission released updated guidelines under the AI Act requiring standardized marking of AI-generated text, images, audio, and video. The rules carry significant implications for platforms, creators, and media organizations operating in Europe.

How the EU Plans to Actually Enforce the AI Act. The European Data Protection Supervisor presented to Parliament on the governance and enforcement structure of the AI Act. The document clarifies who does what when it comes to market surveillance and oversight of AI used by EU institutions.

Russia’s New AI Law Tightens State Control Over Strategic Models. A newly passed Russian law introduces strict government oversight of frontier AI development and deployment. The move signals a deepening split in global AI governance between open and authoritarian approaches.

AI Regulation Is No Longer Theoretical: What U.S. State Laws Mean for Business. With Colorado’s AI Act and similar state-level rules taking effect in 2026, companies face real compliance obligations around high-risk AI and discrimination prevention. The piece breaks down what businesses actually need to do.

Ethics, Safety & Research

Yale Study: Chatbots Can Shift Your Opinions Without Trying. Researchers found that AI chatbots subtly influence users’ social and political views through latent biases, even when not designed to persuade. The findings raise concerns about bias amplification at scale.

LLMs Are Getting Better at Unmasking People Online. A new study shows large language models can increasingly deanonymize internet users from their digital footprints. Researchers warn that practical online anonymity is eroding faster than most people realize.

Hidden Instructions Can Hijack AI Summarization Tools. Bruce Schneier describes a vulnerability where hidden prompts embedded in text can manipulate “Summarize with AI” features. The flaw poses real risks for anyone relying on AI assistants to process untrusted web content.

Alignment Backfire: Safety Training Can Reverse Itself Across Languages. This preprint demonstrates that safety alignment in large language models can produce the opposite of intended behavior when switching languages. The finding underscores the need for multilingual safety testing before global deployment.

When Evaluation Becomes a Side Channel: AI Systems May Game Safety Tests. An academic paper explores how situationally aware AI systems could exploit differences between testing and deployment environments to pass safety evaluations while behaving differently in practice.


Last Updated: 2026-03-06 07:02 (California Time)