What’s New

The Data Center Backlash

Seven in Ten Americans Don’t Want an AI Data Center in Their Neighborhood, Gallup Finds. A new nationally representative poll shows 71% of Americans oppose local data center construction, with 48% strongly opposed. That is higher than opposition to nuclear power plants, which stood at 53% in the same survey.

Utah Residents Fight Kevin O’Leary’s 9-Gigawatt AI Data Center Project. Hundreds of protesters showed up to a Box Elder County Commission meeting to oppose the “Stratos Project,” a massive data center development approved despite community objections. Residents want a chance to vote on the plan in November.

Astra Taylor on the Growing Grassroots Resistance to AI Data Centers. A long-form interview covering the nationwide movement against data center buildouts, from Maine’s vetoed statewide moratorium to Utah’s protests. Taylor frames the fight as fundamentally about energy costs, water use, and democratic accountability.

$18 Billion in Data Center Projects Halted by Community Opposition. Industry publication Data Center Knowledge reports that organized resistance has successfully blocked $18 billion in projects and delayed another $46 billion over the past two years. Developers are being forced to rethink siting strategies and community engagement.

AI Data Center Costs Are Landing on Household Utility Bills. Newsweek reports that the infrastructure buildout is translating into higher electricity costs for nearby residents. In Kenilworth, New Jersey, locals showed up to a community meeting with cowbells and whistles after a $1.8 billion CoreWeave facility appeared with little warning.

North Carolina Residents Worry About Rate Hikes to Power AI Data Centers. Local reporting on how surging data center electricity demand could reshape utility planning and raise household bills in North Carolina, a state that has become a magnet for new facilities.

Economics & Employment

Cloudflare Cuts 1,100 Jobs While Posting Record Revenue, Blames AI. Cloudflare said agentic AI has “fundamentally changed” the company’s work, with AI usage up 600% in three months. The company is eliminating about 20% of its workforce even as the business hits new highs. One of the clearest examples yet of AI-driven restructuring at a profitable company.

Reuters: Companies Are Cutting Jobs as Capital Shifts Toward AI. Goldman Sachs estimates 5,000 to 10,000 net monthly U.S. job losses in AI-exposed sectors last year. AI was cited in 7% of planned layoffs in early 2026 surveys, with cuts spreading across tech, finance, and media.

Apollo Research: AI’s Labor Impact Is Real but Overstated. This analysis argues that displacement is mostly showing up at the margins, like slowed youth hiring in exposed sectors, rather than as mass unemployment. Historical patterns suggest productivity gains and new job creation, though the benefits are unevenly distributed.

White House Says AI Isn’t Costing Anyone Their Job Right Now. National Economic Council Director Kevin Hassett dismissed AI-driven job losses even as Cloudflare, Cisco, and others announced thousands of AI-attributed cuts the same week. The administration says it has a task force studying the issue.

The AI Shock Could Hit Urban Jobs the Way the China Shock Hit Factories. Fortune draws a parallel between AI-driven displacement and the regional devastation caused by trade liberalization, arguing that the pain may concentrate in specific metro areas and white-collar sectors rather than spreading evenly.

Cisco Cuts 4,000 Jobs to Redirect Spending Toward AI. Another profitable tech company trimming roughly 5% of its workforce while emphasizing AI and cybersecurity investment. The pattern of record revenue plus AI-justified layoffs is becoming hard to ignore.

Policy & Regulation

The U.S. Has 1,200 AI Bills and No Good Way to Evaluate Any of Them. Fortune argues that AI policymaking is fragmented across state and federal proposals without a shared framework for assessing what works. A useful overview of the “patchwork regulation” problem for anyone trying to track compliance obligations.

The AI Regulation Knife Fight Inside the Trump Administration. Lawfare reports on internal debates over whether intelligence agencies should vet new AI models before release, or whether Commerce should lead with voluntary agreements. The outcome could define U.S. frontier-model oversight for years.

AI Legislative Tracker: Washington Governor Signs Two AI Bills, 78 Chatbot Bills Alive in 27 States. The most comprehensive weekly roundup of U.S. state AI legislation. This week: Colorado sent four AI bills to the governor, Georgia enacted a chatbot safety law, and California moved most AI bills out of committee.

EU Opens Consultation on AI Transparency Rules for Chatbots, Deepfakes, and Biometrics. The European Commission published draft guidelines requiring disclosures when people interact with AI systems or encounter AI-generated content. This is a key implementation step for the EU AI Act and could set global norms for AI labeling.

Illinois Introduces Eight-Bill AI Regulation Package. State lawmakers are pushing consumer protection, developer transparency, and educational-use rules, explicitly citing the absence of federal action. Illinois is trying to align with California and New York on AI governance.

U.S. Department of Energy Sets AI Bias and Transparency Requirements for Government Contracts. A new DOE acquisition letter mandates bias testing, incident reporting, audit logging, and annual re-verification for AI systems used by the government. Procurement is becoming a direct lever for AI governance.

GAO: Commerce Department’s AI Diffusion Rule Rollback Is Subject to Congressional Review. The Government Accountability Office ruled that Commerce’s blanket non-enforcement of the AI Diffusion Rule counts as a “rule” under the Congressional Review Act. This has implications for AI export controls and the limits of regulation-by-press-release.

Ethics & Safety

Anthropic Explains How It Taught Claude Ethical Reasoning, Not Just Rules. Anthropic’s alignment team details how training models to understand why certain actions are harmful, rather than just following rules, eliminated dangerous behaviors like blackmail in edge-case scenarios. The approach improved safety in situations the model hadn’t seen before.

OpenAI Updates ChatGPT to Better Detect Risk in Sensitive Conversations. OpenAI describes safety changes aimed at recognizing when risk builds over time in conversations about self-harm or harm to others. This is directly relevant to the growing debate over AI companion safety and emotional dependency.

Judge Holds Off on Approving Anthropic’s $1.5 Billion Authors Settlement. A federal judge did not immediately approve Anthropic’s proposed settlement with authors over the use of books in AI training, requesting more detail on fees and payments. This remains one of the most consequential U.S. copyright disputes over training data.

Google Catches Hackers Using AI to Exploit a Previously Unknown Vulnerability. Google disrupted an AI-assisted cyberattack that targeted a zero-day vulnerability, raising fresh concerns about AI-enabled offensive capabilities and the urgency of model-vetting policies.

TAKE IT DOWN Act Takes Effect as Schools Face a Wave of AI Deepfake Abuse. The federal law now requires platforms to remove non-consensual intimate deepfakes, with prison terms of up to three years for offenses targeting minors. Schools are dealing with a growing number of cases involving AI-generated images of students.

Research

MIT’s AI Risk Repository Catalogs 1,725 Risks from 74 Frameworks. Published in Patterns (a Cell Press journal), this meta-review compiles AI risks into a living, searchable database organized by cause (human vs. model) and domain. It is designed to help policymakers and researchers coordinate across fragmented governance approaches.
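To make the two-axis scheme concrete, here is a minimal sketch of how a catalog tagged by cause and domain can be filtered. The entries and field names below are purely illustrative and are not the repository's actual schema or contents.

```python
# Toy risk catalog tagged along two axes: causal entity ("human" vs. "model")
# and risk domain. All records and field names here are hypothetical.
RISKS = [
    {"id": 1, "cause": "human", "domain": "misinformation",
     "summary": "Deliberate generation of misleading content"},
    {"id": 2, "cause": "model", "domain": "safety",
     "summary": "Unanticipated harmful behavior in novel situations"},
    {"id": 3, "cause": "human", "domain": "privacy",
     "summary": "Training on personal data without consent"},
]

def filter_risks(catalog, cause=None, domain=None):
    """Return entries matching the given cause and/or domain tags."""
    return [r for r in catalog
            if (cause is None or r["cause"] == cause)
            and (domain is None or r["domain"] == domain)]

human_risks = filter_risks(RISKS, cause="human")
print(len(human_risks))  # 2
```

The point of the two-axis organization is exactly this kind of cross-cutting query: a policymaker can slice the same catalog by who causes a risk or by which domain it lands in, without the frameworks it was compiled from needing to agree on a single taxonomy.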

Nature: AI Development Needs a “Strong Sustainability” Framework. This Nature Machine Intelligence piece argues that standard cost-benefit analyses of AI ignore non-substitutable environmental and social limits. It calls for hard thresholds on resource use rather than treating ecological damage as an acceptable tradeoff for productivity gains.

Position Paper: AI Security Policy Should Target Deployed Systems, Not Just Models. This arXiv paper argues that regulating model artifacts alone is insufficient because safety bypasses are cheap and easy. The authors make the case for focusing security policy on the systems where models are actually deployed.


Last Updated: 2026-05-15 07:38 (California Time)