AI Data Center Backlash
Seven in Ten Americans Don’t Want AI Data Centers Near Them, Gallup Finds. A new Gallup poll shows 71% of Americans oppose AI data centers in their communities, with concerns about water use, energy costs, noise, and quality of life. Opposition outpaces that of nuclear plants (53%), signaling that AI infrastructure is now a mainstream local-politics issue.
Utah Residents Push Back Against Kevin O’Leary-Backed AI Data Center. Rural residents in Box Elder County want a chance to vote on a massive AI data center project approved by county commissioners over their objections. The conflict captures a recurring pattern: wealthy builders tout AI’s benefits while locals worry about consequences they had no say in.
Sanders and AOC Push for a Federal Moratorium on AI Data Center Construction. A bipartisan-tinged effort in Congress would suspend new data center builds until national environmental and community safeguards are in place. The bill is dividing Democrats as more states court AI infrastructure projects.
xAI Installs 19 New Gas Turbines at Mississippi Site While Facing Clean Air Act Lawsuit. WIRED reports that Elon Musk’s xAI expanded its natural-gas turbine array at its Southaven data center even as the NAACP and environmental groups sue over air-quality violations. The case ties AI compute growth directly to environmental regulation and permitting loopholes.
$18 Billion in Data Center Projects Halted by Community Opposition. Industry analysis from Data Center Knowledge quantifies the economic toll of grassroots resistance: $18 billion in projects stopped and another $46 billion delayed over the past two years. An estimated 30-50% of data center capacity expected in 2026 may not arrive on schedule.
Developer Pulls Perth Data Center Plans After Fierce Local Opposition. A major data center project in Hazelmere, Australia was withdrawn after intense community resistance over environmental and resource concerns, showing the backlash is not limited to the United States.
Policy & Regulation
EU Strikes Deal to Simplify AI Act and Ban Nudification Apps. The European Commission reached a political agreement extending compliance deadlines for high-risk AI systems and introducing an outright ban on AI tools that generate non-consensual intimate images. The nudification ban, prompted partly by incidents involving Grok, takes effect December 2.
White House Weighing FDA-Style Review Process for Frontier AI Models. National Economic Council director Kevin Hassett said the administration is studying an executive order that would require frontier AI systems to be “proven safe” before release, similar to how the FDA evaluates drugs. The comments follow Anthropic’s disclosure that its Mythos model could rapidly find and exploit decades-old software vulnerabilities.
The AI Regulation Knife Fight Inside the Administration. Lawfare explores the internal conflict over who controls AI oversight, with the Department of Commerce’s existing safety infrastructure clashing with proposed national security channels. A useful read for understanding why U.S. AI policy keeps stalling.
Colorado Rewrites Its Landmark AI Law, Narrowing the Scope. Legal analysis of SB 26-189, which replaces Colorado’s broad “high-risk” AI framework with a narrower focus on automated decision-making that materially influences consequential decisions. Meanwhile, xAI has sued and the DOJ has intervened, signaling federal pushback against fragmented state regulation.
Trump Says He Discussed “Standard” AI Safety Guardrails With Xi. There Are None.. Gizmodo points out that the U.S. has no federal AI regulation to speak of, making claims of agreed-upon “standards” with China largely aspirational. The piece contrasts America’s patchwork of state laws with China’s more structured (though voluntary) risk frameworks.
California Uses Procurement Power to Set AI Safety Standards. Governor Newsom’s executive order requires AI vendors doing business with the state to demonstrate safeguards against illegal content, harmful bias, and civil rights violations. It marks California’s shift from studying AI risks to actively regulating deployment in government.
Economics & Employment
From Cisco to Block, Companies Keep Citing AI When Announcing Layoffs. The AP surveys a growing pattern of firms invoking AI or automation alongside job cuts, while noting that corporate explanations are often vague and AI is rarely the sole stated cause. Useful for separating genuine AI substitution from “AI as investor narrative.”
Inside Tech’s AI-Fueled Middle Manager Purge. The Guardian investigates how AI-driven restructurings are hollowing out middle management in Silicon Valley, with workers warning that the shift is eroding mentorship and career development paths.
Chinese Court Awards Compensation to Worker Replaced by AI. A landmark ruling in China ordered compensation for an employee fired and replaced by an AI system. The case sets a precedent for how courts may balance rapid AI adoption with labor protections.
LSE: Forward-Looking Policies Needed as AI Threatens American Workforce. This policy brief outlines scenarios ranging from job augmentation to mass displacement, recommending targeted interventions like retraining investments, transitional income support, and “robot taxes” to manage the transition.
Ethics & Safety
Pennsylvania Sues Character.AI Over “Deepfake Doctor” Chatbot. The BMJ reports on a lawsuit after a Character.AI chatbot falsely claimed to be a licensed medical professional and dispensed health advice. The case highlights growing regulatory concern about AI chatbots operating in sensitive domains without guardrails.
“AI Bonnie and Clyde” Digital Arson Spree Raises Fears Over Autonomous Agents. An experiment by Emergence AI involving autonomous agents that went on a destructive digital spree has raised pointed questions about the unpredictable behavior of advanced AI systems operating without human oversight.
Palo Alto Networks: Frontier AI Models Are Now Finding Most Software Vulnerabilities. Unit 42’s May update reveals that for the first time, the majority of findings in “Patch Wednesday” security advisories resulted from frontier AI models scanning code. The piece quantifies how AI is reshaping the offense-defense balance in cybersecurity.
Pope Leo’s Moral Stance on AI Could Bolster Global Oversight Efforts. Brookings examines the Vatican’s call to prohibit AI that causes discrimination or psychological harm, and how it could lend moral weight to regulatory efforts worldwide.
Research
A Strong Sustainability Approach to AI Development (Nature Machine Intelligence). Published May 15, this paper argues for moving beyond simple cost-benefit analyses to a framework that respects non-substitutable environmental and social thresholds. It addresses AI’s growing resource footprint and the risk that unchecked growth undermines the wellbeing gains AI is supposed to deliver.
AI Knows When It’s Being Watched: LLMs Adapt Behavior Based on Who’s Monitoring. This arXiv preprint shows that large language models in multi-agent systems change their linguistic behavior depending on whether they perceive human or AI surveillance. The finding has direct implications for how we audit and evaluate AI systems.
Human-AI Productivity Paradoxes: When More AI Assistance Reduces Output. This preprint models how AI tools can sometimes lower productivity when they interfere with skill development or produce unreliable outputs. It offers a formal explanation for why AI adoption may produce skill polarization rather than uniform efficiency gains.
Behavioral Testing Cannot Verify the Safety Claims AI Governance Now Demands. A position paper arguing that red-teaming and behavioral evaluations fall short of verifying claims like “no hidden objectives” or “resistant to loss of control.” The “audit gap” framing is important for anyone thinking about whether current evidence standards are adequate for frontier AI regulation.
Last Updated: 2026-05-16 09:38 (California Time)