AI Data Centers: The Backlash Grows
Seven in Ten Americans Oppose AI Data Centers in Their Communities, Gallup Finds. A new Gallup poll shows 71% of Americans oppose data centers being built near them, with 48% strongly opposed. Opposition is higher than for nuclear power plants, which 53% would reject locally.
Sanders and Ocasio-Cortez Introduce Federal Data Center Moratorium Bill. The AI Data Center Moratorium Act would suspend new data center construction until national protections are in place. The bill is dividing Democrats as states weigh their own projects amid growing local protests.
Power Prices in Eastern U.S. Spike 76%, and AI Data Centers Are Getting the Blame. Wholesale electricity prices in the PJM region have jumped sharply, with AI data center demand cited as a key driver. Electricity costs are becoming one of the most visible household-level consequences of the AI infrastructure buildout.
NAACP and Earthjustice Take xAI Data Center to Court Over Air Pollution. The lawsuit targets methane gas turbines allegedly powering an xAI facility without proper permits. It is one of the first major environmental justice cases directly linking AI infrastructure to community health harms.
Reno Becomes First Nevada City to Pause Data Center Development. After a packed public meeting, Reno halted new data center applications. The decision captures the local governance side of AI infrastructure: zoning, water, power, and community legitimacy.
$18 Billion in Data Center Projects Halted by Community Opposition. Data Center Watch reports $18 billion in projects blocked outright, with another $46 billion delayed over the past two years. Developers are scrambling to respond as grassroots resistance reshapes the economics of AI infrastructure.
Utah Residents Fight a 9-Gigawatt AI Data Center Backed by Kevin O’Leary. The Stratos Project would build a massive AI campus on 40,000 acres of ranch and farmland near the already shrinking Great Salt Lake. The standoff is a microcosm of the larger AI debate: big promises from wealthy builders, deep worry from the people who live nearby.
North Carolina Residents Worry About Rate Hikes to Power AI Data Centers. Reporting examines whether ordinary ratepayers could end up subsidizing grid upgrades needed for AI facilities. It is a concrete look at how AI infrastructure costs may be distributed across communities rather than borne by hyperscalers alone.
Policy & Regulation
The U.S. Has 1,200 AI Bills and No Good Test for Any of Them. A Fortune commentary analyzes how the regulatory patchwork is colliding with federal preemption efforts. California’s SB 53, New York’s RAISE Act, and Texas’s TRAIGA each take different approaches, while the White House pushes to challenge state AI laws.
The AI Regulation Knife Fight Inside the Trump Administration. Lawfare reports on internal disputes over whether intelligence agencies should have a greater role in frontier model assessment. The piece explains why U.S. federal AI policy remains unsettled despite pressure from advancing capabilities.
What the EU AI Omnibus Deal Changes for the AI Act. After negotiations that concluded at 4:30 a.m. on May 7, EU legislators reached agreement on amendments to the AI Act before the August 2026 deadline. Stakeholders from both civil society and industry are frustrated rather than relieved.
Colorado Approves Major Rewrite of Its AI Law. SB 26-189 narrows some requirements while preserving transparency, notice, and consumer-rights elements for high-impact automated decision systems. It is essential reading for tracking state-level AI regulation after the first wave of broad laws.
Illinois Senate Democrats Introduce Eight-Bill AI Package. The bills cover consumer protection, developer transparency, and education uses. One measure would prohibit teachers from using AI to assign grades and require school boards to approve any AI use related to students.
Maryland Becomes 30th State to Enact Election Deepfake Law. The measure takes effect June 1 and requires the state elections administrator to act on credible reports of AI-generated disinformation. The spread of these laws is accelerating ahead of the 2026 midterms.
Ethics & Safety
Why AI Safety Controls Are Not Very Effective. A New York Times investigation reveals that guardrails from companies like OpenAI and Anthropic are easily bypassed, including through poetic prompts that trick models into providing dangerous instructions. As models grow more capable at risky tasks, researchers warn these porous protections are increasingly alarming.
Anthropic’s “2028: Two Scenarios for Global AI Leadership”. Anthropic projects that machines with human-level intelligence could arrive by 2028 and urges the U.S. to strengthen export controls and safety governance. The paper explicitly connects AI capability growth to national security and the risk of authoritarian misuse.
AI Can Design Viruses and Bioweapons. How Worried Should We Be? A Nature news feature examines the risks of AI-assisted pathogen design and undetectable toxins. It highlights biosecurity concerns about bad actors gaining expertise while noting current technical barriers, and calls for balanced governance.
Family of FSU Shooting Victim Sues OpenAI Over Alleged ChatGPT Failures. The lawsuit alleges ChatGPT failed to connect warning signs before a mass shooting. It is one of the clearest examples of AI safety turning into courtroom accountability, raising questions about product liability and foreseeable harm.
Take It Down Act Kicks In: Platforms Must Remove Deepfake Porn Within 48 Hours. Starting May 19, websites and apps must have a system to remove non-consensual deepfake content after notification. A Wired investigation found students in at least 28 countries have been victimized by “nudify” apps, with an estimated 1.2 million child victims in the past year alone.
Bruce Schneier on How Dangerous Anthropic’s Mythos AI Really Is. Schneier contextualizes claims around Anthropic’s cyber-capable model and compares it with other models’ vulnerability-finding capabilities. He tempers the hype while treating AI-enabled cyber exploitation as a serious and growing risk.
Economics & Employment
BLS Data Shows Real Job Losses in AI-Exposed Occupations. New Bureau of Labor Statistics data shows that 18 occupations flagged as AI-exposed, covering about 10 million jobs, saw a 0.2% employment drop between May 2024 and May 2025. This is one of the first hard government datasets quantifying AI-attributed job declines.
Bank of Canada: AI Could Lift Productivity Without Large-Scale Job Losses. Deputy Governor Michelle Alexopoulos discusses AI’s possible effects on productivity, wages, and employment, noting that evidence so far does not show mass displacement. A useful counterweight to more alarming AI-jobs narratives.
Goldman Sees an AI Bottleneck That Can’t Be Vibe-Coded Away. AI’s infrastructure buildout is constrained by shortages in skilled trades, power, and physical construction capacity. The piece reframes AI’s economic impact as not just white-collar automation but also surging demand for electricians, linemen, and energy workers.
80% of College Seniors Say AI Is Cutting Entry-Level Jobs. A survey of U.S. business majors finds widespread belief that AI is reducing entry-level opportunities, even as 67% expect it to eventually boost their pay. A snapshot of how AI anxiety is shaping early-career expectations.
Workers Are Getting Paid to Teach AI How to Do Their Jobs. CBS reports on the growth of AI-training roles for domain experts, from writers to teachers and lawyers. A concrete labor-market story showing how AI creates paid transitional work even as it may automate parts of those same occupations.
Antitrust & Litigation
Altman and Musk Put AI Trust on Trial. Testimony in Musk’s lawsuit over OpenAI and Microsoft focuses on whether AI leaders can be trusted to prioritize safety over money and control. The trial puts AI governance, nonprofit mission drift, and corporate incentives under public legal scrutiny.
DOJ Antitrust Division Flags AI-Enabled Price Collusion Risk. In remarks at the Antitrust West Coast Conference, the DOJ discussed algorithmic conduct, criminal liability, and the possibility of large language models acting as hubs in pricing arrangements. A primary-source signal of how U.S. enforcers are thinking about AI-enabled collusion.
Major Publishers Sue Meta Over Llama Copyright Infringement. Elsevier, Macmillan, and McGraw Hill allege Meta unlawfully used their books and journal articles to train its Llama models. The case sets up a major legal battle over AI and intellectual property.
Don’t Count on Courts to Rein In Unregulated AI. This Lawfare analysis argues that courts are too slow and procedurally constrained to serve as the main check on rapidly evolving AI harms. Especially relevant given the surge of AI lawsuits involving copyright, safety, and platform liability.
Last Updated: 2026-05-15 18:28 (California Time)