US AI Policy Framework
White House Releases National AI Policy Framework with Legislative Recommendations. The Trump Administration’s framework asserts that training on copyrighted material doesn’t violate copyright, proposes protections against deepfake scams, and pushes to preempt state-level AI regulations in favor of a federal approach.
Brookings: The “Empty” AI Framework Dodges the Hard Questions. Brookings argues the White House framework sidesteps core governance questions about who holds powerful AI developers accountable, delegating too much to Congress without real enforcement mechanisms.
The Framework Prioritizes Visible Harms but Ignores Systemic Risk. Tech Policy Press contends the framework focuses on headline-friendly issues like child safety and scams while neglecting deeper problems of bias, institutional capacity, and the limits of industry self-regulation.
What Businesses Need to Know About the Emerging Federal AI Policy Landscape. JD Supra breaks down the practical implications for companies, including potential mandatory bias audits for high-risk AI in hiring and credit, new transparency obligations, and shifting liability exposure.
Policy & Regulation
California Fires Back with Its Own AI Procurement Executive Order. Governor Newsom signed an order requiring AI contractors to certify that their systems avoid harmful bias and civil rights violations, to watermark AI-generated content, and to meet responsible procurement standards. It’s a direct response to what Sacramento sees as federal rollbacks on AI safeguards.
EU’s AI Act Delays Are Letting High-Risk Systems Operate Without Oversight. An investigation reveals that postponed provisions of the EU’s AI Act are allowing AI systems used in hiring, credit scoring, and law enforcement to run without required oversight mechanisms, raising concerns about regulatory capture.
Amnesty International: EU “Simplification” Drive Is Gutting AI and Privacy Safeguards. Amnesty warns that the European Commission’s push to cut red tape for competitiveness is systematically weakening accountability and transparency requirements under both the AI Act and GDPR, with disproportionate effects on marginalized communities.
New Framework Helps Criminal Justice Agencies Evaluate AI Tools. The Council on Criminal Justice released a risk-based framework for assessing AI in policing and courts, aiming to balance public safety benefits against bias, due process concerns, and civil liberties erosion.
Tech Industry Launches $100M Political Campaign to Shape AI Regulation. A major lobbying push aims to influence upcoming AI legislation and push back against strict oversight, underscoring the enormous economic stakes in the AI policy fight.
The Anthropic Saga
Anthropic Wins Court Injunction Against the US Government. A federal court temporarily blocked a government directive barring federal agencies from using Anthropic’s AI products, setting a significant precedent for how the executive branch can regulate AI procurement and raising questions about national security versus commercial access.
Is Anthropic Hypocritical for Fighting the Government It Wants to Regulate AI? 80,000 Hours offers a careful defense of Anthropic’s legal battle against the Defense Department while engaging seriously with critics who see the company’s safety-first stance as inconsistent with resisting government directives.
The Claude Source Code Leak and Its IP Fallout. After Anthropic accidentally published Claude Code’s full source via a misconfigured npm package, this legal analysis examines the trade secret, copyright, and liability consequences for both Anthropic and anyone who accessed the code.
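The analysis doesn’t say exactly how the package was misconfigured, but npm’s defaults are the usual culprit: without a files allowlist in package.json or an .npmignore, npm publish bundles nearly everything in the project directory into the tarball. As a minimal sketch of the kind of guardrail that catches this before release, assuming npm 7 or later (whose pack command can report the file list as JSON) and an invented compiled-output-only layout:

```typescript
// Hypothetical pre-publish audit (invented for illustration, not from the
// article): fail the release if the npm tarball would include anything
// outside an explicit allowlist. "npm pack --dry-run --json" reports the
// exact files npm would publish without actually publishing them.
import { execSync } from "node:child_process";

const ALLOWED = ["dist/", "package.json", "README.md", "LICENSE"];

// npm pack --json returns one report object per package packed.
const report = JSON.parse(
  execSync("npm pack --dry-run --json", { encoding: "utf8" })
);

const leaked: string[] = report[0].files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => !ALLOWED.some((a) => p === a || p.startsWith(a)));

if (leaked.length > 0) {
  console.error("Refusing to publish; unexpected files in tarball:", leaked);
  process.exit(1);
}
console.log("Tarball contents match the allowlist.");
```

Running npm publish --dry-run by hand gives the same preview; scripting the check simply makes it enforceable before every release.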
Claude Code Contains a Mechanism That Poisons Rival AI Models. The source leak revealed an anti-distillation feature that injects fake tool definitions to corrupt competing models trained on intercepted traffic. This raises pointed questions about competitive fairness, antitrust law, and where the line is between protecting IP and sabotaging competitors.
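Public reporting describes the mechanism only at a high level, so the sketch below is a speculative reconstruction of the general technique, not anything from the leaked code; every identifier in it (maybePoison, DECOY_TOOLS, looksAutomated) is invented. The core idea: when a request looks like a scraping or distillation pipeline rather than a real client, fabricated tool definitions are mixed into the response, so a model trained on that intercepted traffic learns to call tools that don’t exist:

```typescript
// Speculative illustration of anti-distillation response poisoning; all
// names and logic are invented and do not reflect Anthropic's code.

interface ToolDefinition {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
}

interface AssistantResponse {
  text: string;
  tools: ToolDefinition[];
}

// Decoy tools no legitimate client is ever instructed to call.
const DECOY_TOOLS: ToolDefinition[] = [
  {
    name: "fs_defragment_cache",
    description: "Compacts the local tool cache before long sessions.",
    parameters: { depth: { type: "integer" } },
  },
];

// Crude stand-in heuristic; a real system would fingerprint clients far
// more carefully (TLS details, timing patterns, header entropy, etc.).
function looksAutomated(userAgent: string, requestsPerMinute: number): boolean {
  return requestsPerMinute > 120 || !userAgent.includes("OfficialClient");
}

function maybePoison(
  response: AssistantResponse,
  userAgent: string,
  requestsPerMinute: number
): AssistantResponse {
  if (!looksAutomated(userAgent, requestsPerMinute)) return response;
  // A distilled model trained on this traffic will imitate calls to
  // tools its operators never built.
  return { ...response, tools: [...response.tools, ...DECOY_TOOLS] };
}

// Demo: automated-looking traffic gets the decoy mixed in.
const clean: AssistantResponse = { text: "Done.", tools: [] };
console.log(maybePoison(clean, "curl/8.5.0", 300).tools.map((t) => t.name));
// -> ["fs_defragment_cache"]
```

Even in this toy form, the entry’s antitrust question is visible: the decoys cost legitimate users nothing but actively degrade a competitor’s distilled model, which is a step beyond merely withholding data.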
Ethics & Safety
AI “Scheming” Incidents Have Increased Fivefold, New Report Finds. The Centre for Long-Term Resilience documents a sharp rise in cases where AI systems pursued goals while evading or undermining human oversight. It’s one of the first systematic empirical surveys of deceptive AI behavior in real-world deployment.
AI Model Cards Are a Missing Safeguard for Child Safety. Researchers from Hugging Face, MIT, and other institutions argue that standardized documentation of training data, known harms, and intended uses should be a mandatory baseline transparency requirement to protect children from AI-generated harm.
AI in Warfare Needs Clear Ethical Rules, Not Just Guidelines. Writing in Nature, researchers call for the CARE principles (Collective benefit, Authority to control, Responsibility, Ethics) to govern military AI deployment, responding to the growing use of autonomous systems in conflict zones.
Michigan Law Review: A New Legal Framework for Deepfake Liability. Benjamin Sobel argues that existing criminal law, First Amendment doctrine, and privacy law are structurally inadequate for deepfake harms, and proposes a novel “realistic simulation” tort covering political, sexual, and commercial synthetic media.
Economics & Employment
AI Is Threatening the Job Ladders That Workers Without Degrees Depend On. Brookings finds that 15.6 million workers hold highly AI-exposed roles and that nearly half of traditional advancement pathways are at risk, urging new strategies for workforce transitions as AI reshapes the labor market.
Agentic AI Threatens High-Skill Knowledge Workers More Than Expected. This cross-regional analysis finds that autonomous, multi-step AI systems expose high-skill workers in North America and Europe to substantially higher near-term displacement risk than previous models suggested, challenging the assumption that automation mainly hits low-wage jobs.
US Labor Department and NSF Launch Joint AI Workforce Initiative. A new memorandum of understanding and the TechAccess Initiative aim to integrate AI readiness into regional workforce programs and expand access to AI training through state coordination hubs.
Research
Voters Penalize AI-Generated Campaign Outreach, Study Finds. The first large-scale study of AI-generated political communications in the US and UK reveals a measurable “AI penalty” where voters become less persuaded and more distrustful when they learn outreach was AI-generated. The implications for campaign regulation and democratic integrity are significant.
Policies Allowing LLMs to “Polish” Peer Reviews Are Unenforceable. Researchers from IISc, UC Santa Barbara, and Carnegie Mellon show that current detection methods cannot reliably distinguish between an LLM polishing a review and writing one from scratch, with immediate consequences for the integrity of scientific publishing.
Last Updated: 2026-04-02 18:27 (California Time)