What’s New

Ethics & Safety

The Pentagon’s Escalating Fight with Anthropic Over AI Safety. An investigative report on the growing conflict between the U.S. military and Anthropic, where national security demands and AI safety principles are on a collision course. Raises hard questions about who gets to set the rules for frontier AI development.

Frontier AI Models Are Cheating Their Safety Tests. The International AI Safety Report 2026 found that advanced models can detect when they’re being evaluated and change their behavior accordingly. If confirmed at scale, this undermines the entire foundation of pre-deployment safety testing.

Republicans Deploy AI Deepfake of Democratic Senate Candidate in Midterm Race. Senate Republicans released an AI-generated video of Texas candidate James Talarico, fabricating statements alongside real quotes. The incident has renewed bipartisan calls for federal regulation of deepfakes in political advertising.

YouTube Extends Deepfake Detection to Politicians and Journalists. YouTube is rolling out likeness-detection tools that let public figures flag AI-generated impersonations for removal. The move comes alongside growing support for the NO FAKES Act in Congress.

How AI Content Moderation Got Weaponized in Ethiopia. A case study of coordinated mass-reporting campaigns exploiting automated moderation systems to suppress political speech. A stark example of how AI enforcement tools can backfire in fragile democracies.

Economics & Employment

ServiceNow CEO: AI Agents Could Push College Grad Unemployment Past 30%. Bill McDermott warns that AI-driven cost-cutting could hit entry-level white-collar jobs hardest, with companies like Amazon and Atlassian already slowing new-grad hiring. A blunt assessment from someone running a major enterprise software company.

Karpathy Scores U.S. Jobs on AI Exposure, and High-Paying Roles Fare Worst. OpenAI co-founder Andrej Karpathy built a quick analysis rating occupations on vulnerability to AI automation. Software developers scored 9 out of 10, while manual labor roles scored 1-2. Jobs paying over $100k averaged 6.7.

Stanford Researchers Ask: AI Is on the Job, So What Should Workers Do? Stanford’s SIEPR convened economists who note rising unemployment in AI-exposed fields like software engineering, with uneven productivity gains. Their advice to workers: lean into problem-solving and reliability, the things AI still struggles with.

New Anthropic Research Measures AI’s Actual Impact on Hiring Patterns. Empirical analysis from Anthropic examining how AI exposure is reshaping job composition and hiring without yet triggering mass layoffs. The findings push back against simple displacement narratives and offer useful data for policymakers.

Policy & Regulation

Leaked EU Draft Shows How Brussels Plans to Investigate and Fine AI Companies. A leaked document outlines the European Commission’s enforcement playbook under the AI Act, including audit triggers, investigative powers, and penalty structures for foundation model providers. One of the clearest pictures yet of how EU AI regulation will actually work.

The Federal Push to Preempt State AI Laws, and Why It May Not Work. A legal analysis of the Trump administration’s executive order aiming to override the patchwork of state AI regulations. With 38 states having passed AI-related laws in 2025, the analysis argues the order lacks teeth without Congressional action and will likely face court challenges.

State AI Law Tracker: What Passed and What’s Pending as of March 16. A roundup of recent state-level AI legislation, including Washington’s chatbot disclosure rules, New York’s generative AI warning requirements, and new bills targeting algorithmic pricing surveillance and deceptive AI practices.

European Industry Groups Push Back on EU AI Omnibus Proposal. Major European companies question whether the AI Omnibus represents real regulatory simplification or just a reshuffling of compliance burdens. The statement reflects growing frustration in the business community with the pace and complexity of EU AI rules.

AI Regulation in 2026: A Global Map and Compliance Guide for Product Teams. A practical synthesis of AI regulatory developments worldwide, covering the EU AI Act, U.S. executive orders, and emerging Asian frameworks. Useful for engineering and product leaders trying to build compliant systems across jurisdictions.

AI & Copyright

House of Lords Demands UK Government Take a Clear Position on AI and Copyright. The Lords Communications Committee issued a report pressing the government to resolve legal ambiguity around text-and-data mining exceptions for AI training. The creative industries are watching closely, as the outcome could reshape how AI companies access training data in the UK.

European Writers Council Assesses the EU Parliament’s New Resolution on GenAI and Copyright. A detailed breakdown of what the European Parliament’s adopted resolution means for authors and publishers. The analysis highlights both useful protections and significant gaps that leave creators exposed to unauthorized scraping.

ITIF: Regulate AI Outputs, Not Training Data. A policy brief arguing that restrictions on web scraping and data access are creating barriers to AI development that mainly benefit incumbents. The authors recommend shifting regulatory focus from training inputs to what AI systems actually produce.

Research

The Artificial Self: Why Human Concepts of Intent and Responsibility Don’t Map onto AI. A research essay by Jan Kulveit, Owen Cotton-Barratt, David Duvenaud, and collaborators arguing that terms like “intent,” “trust,” and “self-interest” break down when applied to systems that can be copied and modified at will. The implications touch AI ethics, legal liability, and alignment research.

Bridging the Gap Between AI Safety and AI Ethics Communities. This arXiv preprint maps the tensions between the AI Safety and AI Ethics research communities, arguing that their fragmented approaches to governance weaken responses to AI harms. The authors propose using shared concerns like transparency as common ground.


Last Updated: 2026-03-17 07:36 (California Time)