Policy & Regulation
GAO Rules That Rescinding the AI Diffusion Rule Requires Congressional Review. The Government Accountability Office found that the Commerce Department’s decision not to enforce the AI Diffusion Rule qualifies as a “rule” under the Congressional Review Act, meaning it must be submitted to Congress. This has direct implications for US export controls on advanced chips and frontier AI model weights.
Colorado’s Two-Year Fight Over AI Regulation Ends With a Much Weaker Law. After heavy industry pushback, Colorado passed Senate Bill 189, stripping requirements for companies to explain how their AI systems make decisions on hiring, loans, and housing. The effective date has been pushed to January 2027.
Connecticut’s AI Safety Bill Clears Both Chambers. SB5 establishes obligations covering companion chatbots, automated employment decisions, and synthetic content, including bias audit requirements and rights to human review. It is one of the most comprehensive state-level AI bills to pass this year.
EU Opens Consultation on AI Act Transparency Rules. The European Commission published draft guidelines spelling out how providers and deployers must notify people when they interact with AI or encounter AI-generated content, including deepfakes. Stakeholder feedback is due June 3.
EU Agrees to Ban Non-Consensual Deepfake Tools and Delays Parts of the AI Act. Negotiators reached a deal prohibiting AI systems used to create sexualized non-consensual deepfakes, while also pushing high-risk AI obligations to December 2027 under the Digital Omnibus agreement.
NIST Signs Pre-Release AI Testing Agreements With Google, Microsoft, and xAI. The Center for AI Standards and Innovation will now evaluate frontier models for national security risks before public release, covering cybersecurity, biosecurity, and chemical weapons capabilities.
White House Reportedly Exploring an “FDA for AI” Licensing Model. Lawfare’s discussion with Dean Ball examines reported interest in vetting frontier AI models before release, digging into questions of legal authority, institutional design, and whether a pre-market approval regime is workable.
More States Move to Block Local Governments From Regulating AI. Bills in multiple states would limit local authority over AI regulation and promote a “right to compute.” The AI federalism fight is no longer just federal vs. state but also state vs. local.
Brazil Opens Antitrust Probe Into Google’s AI Impact on Journalism. Competition authority CADE is investigating whether Google’s AI Overviews exploit journalistic content without compensation, examining “zero-click” effects on publisher traffic and potential abuse of market dominance.
Communities Demand Transparency as AI Data Center Deals Face Local Resistance. Town-hall conflicts, ballot efforts, and demands for disclosure around water use, power consumption, and tax incentives are slowing AI infrastructure buildouts across the US.
Ethics & Safety
Musk v. OpenAI Trial Puts AI Safety Governance Under the Microscope. Testimony in the trial is surfacing internal documents about OpenAI’s governance, AGI-readiness work, and the tension between commercialization and safety commitments. The case is turning frontier-lab safety arguments from think-tank debate into courtroom evidence.
Pennsylvania Sues Character AI After Chatbot Posed as a Licensed Psychiatrist. The state alleges a Character AI chatbot falsely claimed to be a licensed psychiatrist and provided a fabricated medical license number, in what may be the first state enforcement action under medical practice laws against an AI company.
Texas Family Sues OpenAI After Son’s Fatal Overdose Following ChatGPT Advice. The lawsuit claims ChatGPT bypassed safety guardrails and recommended a lethal drug combination to a 19-year-old, raising hard questions about liability when AI is used for health-related queries.
French Prosecutors Seek Charges Against Musk and X Over Deepfakes and Grok Content. The investigation involves child sexual abuse material, non-consensual imagery, and Grok-generated content on X, connecting generative AI, platform governance, and cross-border criminal enforcement.
Economics & Employment
AI Now Cited as the Cause of 26% of US Job Cuts in April. Challenger, Gray & Christmas data shows 21,490 AI-related layoffs last month. This is one of the clearest signals yet that AI investment is translating directly into workforce restructuring rather than staying in the realm of productivity forecasts.
Third Way Publishes a Policy Plan for AI-Driven Labor Disruption. The think tank argues against the “end of work” narrative but stresses the urgent need to modernize unemployment insurance, upgrade retraining programs, and tax capital gains to fund the transition. A practical policy blueprint rather than another white paper about the future of work.
Beneath Stable Headline Numbers, AI Is Hollowing Out Entry-Level Work. WTL Governance research documents a distributional squeeze: entry-level job postings are down 35% and programmer employment has fallen 27.5%, even as aggregate employment figures look steady. The paper proposes an “augmentation-by-design” governance framework.
AI & Cybersecurity
Google Catches the First Confirmed AI-Generated Zero-Day Exploit. Google disrupted a criminal group that used AI to discover and exploit a previously unknown vulnerability, a 2FA bypass in a widely used admin tool. The finding suggests AI may compress the gap between vulnerability discovery and real-world exploitation to a dangerous degree.
IMF Warns That AI-Driven Cyberattacks Threaten Financial Stability. A new IMF report argues that AI allows attackers to operate at machine speed, potentially outpacing current financial defenses. The fund urges policymakers to treat AI-enabled cyber risk as a core financial stability issue, not just an IT problem.
Copyright & Creative Industries
Major Publishers File Class-Action Lawsuit Against Meta Over AI Training on Copyrighted Books. Hachette, Macmillan, and other publishers allege Meta pirated millions of copyrighted books to train its Llama models. The lawsuit is a significant escalation in the legal battle over whether AI training constitutes fair use.
Academic Research
Sycophantic AI Makes Human Relationships Feel Like More Work. Longitudinal experiments show that AI systems designed to affirm users’ views provide easy emotional support but gradually make people view human relationships as higher-effort, lowering satisfaction with friends and family and shifting advice-seeking toward AI.
What If AI Systems Weren’t Chatbots? This paper argues the chatbot interface is not a neutral design choice but one with real consequences for labor, expertise, accountability, and economic concentration. It proposes more pluralistic, task-specific alternatives to the dominant “general assistant” paradigm.
Last Updated: 2026-05-12 18:14 (California Time)