The EU AI Act Rollback
EU Clinches Deal to Roll Back AI Restrictions. The EU agreed to delay high-risk AI enforcement by 16 months and largely exempt industrial AI from the AI Act’s scope. This is the first major rollback of EU digital rules, driven by lobbying from ASML, Airbus, Siemens, and Mistral AI alongside fears of falling behind the US.
Consumer Groups Warn AI Omnibus Creates Dangerous Regulatory Loopholes. The European Consumer Organisation criticizes the deal for weakening protections around biometrics, employment, and education AI systems. They argue the changes disproportionately benefit large companies while leaving consumers exposed to high-risk AI deployments.
EU AI Act Undergoes Significant Changes: A Legal Breakdown. Wilson Sonsini details the postponement of high-risk AI obligations, the industrial machinery carve-out, and the narrowed “safety component” definition. Essential reading for compliance teams reassessing their EU AI Act roadmaps.
Policy & Regulation
US and Tech Firms Strike Deal to Review AI Models for National Security Before Release. All five major frontier AI labs have now agreed to give the US Commerce Department pre-release access to evaluate models for national security risks. The arrangement is voluntary and gives the government no power to block a release, but it marks a notable shift toward preemptive oversight.
Congress Narrowed the GUARD Act, But Serious Problems Remain. The EFF analyzes the revised bill targeting AI companions for minors, noting it still mandates identity-linked age verification and raises penalties to $250,000 per violation. Core speech, privacy, and security concerns remain unresolved despite the narrower scope.
Colorado Passes Bill Limiting Use of AI to Set Prices and Wages. HB 26-1210 bans the use of genetic data, biometrics, and online behavior history in algorithmic wage-setting and surveillance pricing. The bill is part of a wave of 70+ state-level bills on personalized pricing heading to Governor Polis.
Canada’s Privacy Commissioner Finds OpenAI Violated Privacy Laws. A joint federal-provincial investigation concluded that OpenAI’s initial ChatGPT training involved overbroad data collection, lacked consent, and followed a “launch first, fix later” pattern. OpenAI has agreed to implement further protective measures.
China Bans AI Partners for Minors, Maps AI Agent Security Threats. China finalized regulations on “anthropomorphic AI interaction services” effective July 15, banning virtual partner services for minors and requiring parental consent for under-14s. A separate 90-page report from TC260 maps 11 AI agent security threats and proposes 8 new standards.
UN Launches IPCC-Style Scientific Panel on AI. The panel, co-chaired by Yoshua Bengio and Maria Ressa, will synthesize peer-reviewed research independently of corporate interests. It aims to create a tiered risk framework to inform the Global Digital Compact, though it currently lacks enforcement mechanisms.
Economics & Employment
The AI Layoff Trap: Why Rational Firms Over-Automate. An exposition of a recent working paper showing that competition between firms guarantees automation beyond the collectively optimal level, a classic Prisoner’s Dilemma. Each firm captures 100% of cost savings but absorbs only a fraction of the demand destruction it creates. The paper finds that UBI, capital taxes, and voluntary agreements all fail to correct the distortion.
‘Your Craft Is Obsolete’: WiseTech Staff in Limbo as AI Replaces Coders. Employees at logistics software firm WiseTech face severe job insecurity after management declared AI tools have rendered their traditional coding skills obsolete. The situation illustrates the immediate psychological and economic toll of aggressive AI integration on the tech workforce.
AI Disruption Plan: Preparing Americans for the Next Great Labor Shock. Third Way argues that AI will cause massive economic disruption requiring a radical modernization of the US labor safety net. The proposal calls for upgraded unemployment insurance, federal wage insurance, and modern workforce hubs for displaced workers.
Global Finance Watchdog Warns Over Private Credit Fueling AI Boom. International financial regulators issued a warning about systemic risks from the private credit sector’s massive investments in AI infrastructure. The rapid influx of unregulated capital into AI data centers and startups could trigger broader instability if returns disappoint.
Ethics & Safety
Selective Virtue: Anthropic, the Pentagon, and the Contradictions of AI Governance. A sharp critique arguing that Anthropic occupies a position of fundamental ethical contradiction. Its CEO has publicly predicted AI will drive unemployment to 20%, yet the company continues to build at maximum commercial speed while simultaneously suing the Pentagon over autonomous weapons use.
Automated Alignment Is Harder Than You Think. This pre-print argues that using AI agents to automate alignment research could produce catastrophically misleading safety assessments. AI-generated alignment solutions may contain systematic errors that human reviewers cannot detect, risking deployment of misaligned systems.
When Prompts Become Shells: RCE Vulnerabilities in AI Agent Frameworks. Microsoft researchers describe how prompt injection attacks in the Semantic Kernel framework can trigger remote code execution. The findings show that connecting AI models to tools transforms prompt injection from a content problem into an infrastructure security threat.
Warren Buffett on AI: ‘We Don’t Know What’s Going to Happen’. At Berkshire Hathaway’s annual meeting, Buffett expressed concern over AI and deepfakes, comparing the technology’s disruptive potential to nuclear weapons. His comments reflect growing apprehension among financial leaders about unpredictable consequences of AI advancement.
Last Updated: 2026-05-09 08:02 (California Time)