Policy & Regulation
Trump Administration Unveils National AI Legislative Framework. The White House released a sweeping federal framework aimed at preempting a patchwork of state AI laws, covering IP rights, child safety, scam prevention, and workforce training. It signals a clear push toward centralized AI governance in the US, with direct implications for every company building or deploying AI products.
UK Government Reverses Course on AI Copyright After Artist Backlash. Following significant pushback from creators, the UK dropped its proposed opt-out system for AI training on copyrighted works. The reversal underscores how organized opposition from creative industries can reshape AI policy in real time.
UK Publishes Official Impact Assessment on Copyright and AI. The Department for Science, Innovation and Technology released a detailed policy paper examining how copyright law applies to AI training data. This document will likely shape compliance requirements for AI companies operating in the UK and could influence similar efforts elsewhere.
Over 20 New California AI Laws Take Effect: What Companies Need to Know. California’s latest batch of AI regulations covers transparency, risk assessment, pricing algorithms, and deepfakes, with penalties reaching $1M. These laws are likely to set de facto national standards, much as California’s privacy rules did before.
Arizona Deepfake Lawsuit Tests Whether Existing Laws Can Tackle AI-Generated Abuse. A group of women filed a landmark case challenging the AI pornography industry, testing whether current revenge-porn statutes extend to AI-generated content. The case exposes major gaps in US legal protections around non-consensual synthetic media.
AI Governance Rules Are Being Written Without Public Input. An AI governance specialist argues that the most consequential policy decisions are happening behind closed doors, with minimal participation from civil society or workers. The piece calls for opening up these processes before the rules get locked in by a narrow set of actors.
AI Regulation in 2026: A Global Map for Product Teams. A practical survey of how the EU, US, China, and other jurisdictions are diverging on AI governance. Especially useful for compliance and product teams trying to ship across borders without running afoul of conflicting rules.
Economics & Employment
Cognizant Updates Its AI Disruption Estimate to $4.5 Trillion in Labor Value at Risk. An updated study warns that 93% of jobs face some degree of AI disruption, with 30% at serious risk, as companies accelerate layoffs to fund AI investments. The pace of adoption is outrunning earlier forecasts.
Goldman Sachs: AI Could Automate 25% of US Work Hours Within a Decade. The bank’s analysis projects that 6-7% of workers will be displaced but that new roles will emerge in AI infrastructure, with knowledge-sector jobs hit first. It flags a temporary rise in unemployment alongside faster GDP growth, reinforcing the case for large-scale reskilling.
New York Launches FutureWorks Commission to Address AI’s Workforce Impact. Governor Hochul created a new body to guide policy on AI-driven job displacement, citing data showing a 37% drop in entry-level NYC jobs. The commission will focus on training programs for the most vulnerable workers.
Washington Post: Which Jobs Are Most at Risk from AI? An interactive analysis identifies clerical roles (86% held by women) as among the most exposed, with 6.1 million workers facing high vulnerability due to low adaptability. The tool is a useful resource for understanding where labor-market pressure is building.
AI Layoffs Are Here. Is It Time to Revisit the Four-Day Work Week? With companies like Atlassian citing AI as they cut 10% of staff, this piece argues for shorter working hours as a way to distribute productivity gains more broadly. It draws on historical precedents to make the case that reduced hours have worked before.
Ethics & Safety
Pentagon Blacklists Anthropic After AI Safety Standoff. Anthropic reportedly refused Pentagon terms for deploying its AI in military contexts, leading the government to designate the company a national security concern. The episode marks a dramatic escalation in the tension between AI safety commitments and defense priorities.
OpenAI’s Post-Tumbler Ridge Safety Pledges Are Not Regulation. A critical analysis argues that OpenAI’s voluntary safety commitments function more as corporate self-surveillance than genuine oversight. The piece warns that such pledges can crowd out the enforceable legislation that is actually needed.
Autonomous AI Agents Are Reshaping Cyber Operations. This analysis, tied to a White House cybersecurity directive, examines what happens when AI systems, rather than humans, make real-time decisions in offensive and defensive cyber operations. It raises hard questions about accountability and escalation risk.
135,000 Autonomous AI Agents Are Running with Full System Access and Minimal Oversight. A report documenting the scale of ungoverned AI agent deployment across 52 countries, arguing that governance frameworks are dangerously behind the curve. The piece proposes structural approaches to closing the gap before a major incident forces reactive regulation.
Research & Academia
ICML Addresses Growing Crisis of LLM Misuse in Peer Review. The program chairs of one of the top machine learning conferences publicly confront researchers using AI tools in ways that violate review integrity policies. The statement has broad implications for how scientific institutions govern AI-assisted work.
Bridging the Divide Between AI Safety and AI Ethics Research. A new paper analyzing 3,550 publications maps the tensions between the AI safety community (focused on long-term risk) and the AI ethics community (focused on present harms). It proposes “critical bridging” strategies to unify fragmented governance approaches.
Legal Scholarship on AI-Driven Harms and the Limits of Current Doctrine. A newly published law review article examines how US legal systems are struggling to adapt liability rules and speech protections to AI-generated harms, including deepfakes and misinformation.
Last Updated: 2026-03-20 07:48 (California Time)