South Africa’s AI Policy Debacle
South Africa Pulls Its National AI Policy After Discovering Fake, AI-Generated Citations. Communications Minister Solly Malatsi withdrew the country’s first draft AI policy after journalists found that at least six of its 67 academic references were AI hallucinations citing articles that never existed. Malatsi called the episode an “unacceptable lapse” and promised accountability for those responsible.
The AI Policy That Was Undone by AI: A Deeper Look at South Africa’s Hallucination Problem. TNW places the scandal in a broader pattern, noting a Nature study found 2.6% of academic papers published in 2025 contained at least one hallucinated citation, up from 0.3% the year before. The failure exposes a systemic gap in institutional capacity to verify AI outputs before acting on them.
Policy & Regulation
DOJ Challenges Colorado’s AI Anti-Discrimination Law on Constitutional Grounds. The U.S. Department of Justice filed a complaint arguing that Colorado’s SB 24-205, which requires AI developers to prevent “algorithmic discrimination,” violates the Equal Protection Clause. The DOJ contends the law effectively mandates demographic balancing in AI outputs while exempting AI used to “increase diversity.”
Bipartisan Bill Targets Deepfake Distribution and Adds Protections for AI Safety Whistleblowers. Reps. Lieu and Obernolte introduced legislation imposing stricter penalties on non-consensual deepfakes, protecting AI safety whistleblowers, and mandating U.S. participation in international AI standards bodies. The bill consolidates over 20 proposals from a congressional AI task force.
Getty’s AI Copyright Suit Against Stability AI Survives Dismissal Bid. A federal judge ruled that Getty Images’ copyright and trademark infringement claims against Stability AI can proceed, finding sufficient grounds that millions of photographs were copied without permission for model training.
Japan’s Draft AI IP Code Could Undermine Its Own Pro-Innovation Stance. Analysts argue that Japan’s forthcoming rules requiring AI developers to disclose training data sources and enable output traceability are technically infeasible for frontier models and risk exposing trade secrets. The piece warns Japan is drifting from the pro-innovation posture it championed through the Hiroshima AI Process.
Taylor Swift Files Trademarks to Shield Her Image and Voice From AI Deepfakes. Swift filed trademark applications covering her likeness and voice as a legal strategy against unauthorized AI clones. The move signals a new front in how public figures may use intellectual property law to combat synthetic media.
Economics & Employment
What 81,000 People Told Anthropic About the Economics of AI. A large-scale survey of Claude users finds that workers in high AI-exposure roles report significantly greater fear of job displacement, with the anxiety most pronounced among early-career individuals who report large productivity gains. Users in the top exposure quartile mentioned job worries three times more often than others.
Carnegie Lays Out Three Competing Views on AI and the Future of Work. This report maps the AI labor debate into three camps: the “alarmed” who foresee rapid white-collar displacement, the “patient” who expect gradual complementarity over decades, and the “excited” who anticipate productivity-driven surplus creating new roles. It highlights how inequality risks differ sharply across each scenario.
One in Five London Jobs at Risk From AI, City Hall Report Warns. A 71-page Greater London Authority report finds over one million London jobs are “highly or significantly exposed” to AI-driven automation, with women, young workers, and higher-educated employees disproportionately affected. Seven percent of large UK businesses have already used AI to cut staff.
SAG-AFTRA and AFL-CIO Escalate Push for AI Labor Protections. Labor organizations are coordinating efforts to regulate deepfakes and protect workers’ likeness rights at the inaugural AFL-CIO AI Summit, emphasizing consent and compensation in AI-generated media. The event signals growing union mobilization around AI-driven labor disruption.
Policy Proposals for Sharing the Economic Gains From AI (Economic Security Project). A new report outlines concrete proposals including wage insurance, public AI infrastructure, and worker ownership models to address inequality from AI-driven economic shifts. It reflects growing momentum for redistributive policy frameworks as automation accelerates.
Ethics & Safety
OpenAI and Anthropic Give Classified Briefings to Congress on Cyber-Capable AI Models. Both companies briefed House Homeland Security Committee staff on advanced AI models with offensive cyber capabilities, including one Anthropic withheld from public release due to its ability to rapidly exploit critical security flaws. One committee member described what he saw as “very scary.”
Open Markets Institute: A Flawed AI Content Market Is Accelerating “Content Cannibalization”. This first-of-its-kind report finds that instances of AI bots bypassing voluntary access restrictions have quadrupled in six months, that Big Tech firms occupy both sides of the content value chain, and that most publishers receive no compensation. It warns of a slow degradation in AI output quality as the supply of high-quality human-created content dries up.
Creative Commons Acknowledges Its Preference-Based Signals Can’t Shift Power in the AI Data Ecosystem. Creative Commons published a strategic update admitting that voluntary signals without enforcement cannot meaningfully protect creators. The organization is now exploring conditional access frameworks and governance mechanisms beyond copyright as its primary tool.
Research
The Metrics Trap: How Technical Accuracy Masks Social Harm in Urban AI Systems (Nature). A study of 28 urban AI deployments reveals how high accuracy metrics hide discriminatory feedback loops in predictive policing and housing allocation. It documents successful community pushback and argues for democratic governance over purely technical evaluation approaches.
A New Benchmark for Measuring Whether AI Systems Align With Societal Values (Nature). Published in Humanities and Social Sciences Communications, a Springer Nature journal, this paper introduces the SAIA benchmark to assess how well large language models converge with human ethics and norms across languages. It finds LLMs exhibit more positive sentiment toward AGI than humans do, raising questions about built-in bias.
Reckoning With the Political Economy of AI: How “Decoys” Distract From Real Accountability. Researchers argue that fairness frameworks, safety discourse, and accountability initiatives often function as “decoys” that create the illusion of critique while reinforcing AI’s existing power structures. The paper calls on scholars and policymakers to confront the material political economy of AI directly.
AI Regulation and Human Rights: A Global Trilemma (Harvard Kennedy School). This paper argues that major regulatory regimes struggle to simultaneously achieve governance reach, technological power, and a credible commitment to human rights. It frames the tension as a structural trilemma rather than a solvable optimization problem.
Last Updated: 2026-04-28 18:11 (California Time)