OpenAI’s “New Deal” for the AI Era
Industrial Policy for the Intelligence Age. OpenAI published a 13-page policy blueprint proposing robot taxes, a public wealth fund that pays dividends directly to citizens, a four-day workweek, and government “containment playbooks” for rogue AI. It is the most detailed policy document yet from a frontier AI lab, and it reads as much like a political manifesto as a corporate white paper.
OpenAI’s Vision for Reorganizing Society Around Superintelligence Draws Skepticism. Gizmodo’s Bruce Gil picks apart the tensions in Altman’s proposal, questioning whether a company racing to build superintelligence is a credible author of the rules meant to govern it. The piece highlights the lack of specifics around enforcement and accountability.
OpenAI Calls for Robot Taxes, a Public Wealth Fund, and a Four-Day Week. The Next Web offers a detailed breakdown of the economic proposals, placing them in the context of existing UBI debates and labor economics research. Altman himself calls the document “a starting point, not a prescription.”
Policy & Regulation
EU’s AI Act Delays Are Letting High-Risk Systems Dodge Oversight. An investigative report showing that postponed enforcement of the EU’s landmark AI Act has created a gap where high-risk systems in employment, education, and critical infrastructure operate without mandatory compliance checks. Features commentary from the European Parliament’s Co-Rapporteur on the Digital Omnibus.
California’s AI Executive Order Sets New Trust and Safety Procurement Standards. Governor Newsom’s March 30 order requires AI vendors seeking state contracts to certify safeguards against bias, illegal content, and unlawful surveillance. It is the most concrete state-level pushback yet against lighter-touch federal approaches.
UK Government Publishes Official Report on Copyright and AI. The Intellectual Property Office released a detailed impact assessment examining how AI training on copyrighted works fits (or doesn't) within existing law and laying out potential legislative paths forward. One of the most thorough government-level treatments of the copyright question from any major economy.
India’s Gujarat High Court Bans AI in Judicial Decisions. The court issued a formal policy prohibiting AI tools from being used to draft judicial reasoning or orders, while allowing limited administrative use. It is one of the first court-level AI governance frameworks anywhere and sets a precedent for judiciaries worldwide.
New Report Maps the Growing Patchwork of U.S. State AI Laws. Analysis reveals a surprising degree of consensus among states on AI governance priorities, even as enactment rates remain low and federal preemption efforts loom. Useful for tracking how regulation is actually developing on the ground.
Economics & Employment
Losing Your Job to AI Leaves Lasting Scars, Goldman Sachs Research Finds. Workers displaced by AI earn roughly 10% less a decade later and delay major life milestones like homeownership. Retraining helps but does not fully close the gap, suggesting the costs of AI-driven displacement run much deeper than short-term unemployment figures indicate.
Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis. This new preprint extends standard task-exposure frameworks to agentic AI systems that can handle entire workflows autonomously. It finds significantly higher displacement risk than prior estimates, particularly in white-collar and knowledge-work roles across multiple regions.
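For readers unfamiliar with task-exposure methodology, a minimal sketch of how such scores are typically built, here extended with a workflow-coverage term for agentic systems. The task weights, exposure values, and the specific adjustment formula are illustrative assumptions, not taken from the preprint:

```python
# Illustrative task-exposure score with an "agentic" workflow adjustment.
# All numbers and the adjustment formula are hypothetical, not from the paper.

def occupation_exposure(tasks, workflow_coverage):
    """Weighted average of per-task AI exposure, scaled up when an
    agentic system can chain the tasks into an end-to-end workflow.

    tasks: list of (weight, exposure) pairs, weights summing to 1,
           exposure in [0, 1] as in standard task-exposure frameworks.
    workflow_coverage: fraction of the full workflow an agentic system
           can execute autonomously, in [0, 1].
    """
    base = sum(w * e for w, e in tasks)
    # Agentic capability pushes exposure above the task-level average:
    # interpolate part of the remaining gap toward full exposure.
    return base + workflow_coverage * (1.0 - base) * base

# Hypothetical knowledge-work occupation: three tasks, mostly automatable,
# with an agent able to run 80% of the workflow end to end.
tasks = [(0.4, 0.9), (0.3, 0.7), (0.3, 0.5)]
score = occupation_exposure(tasks, workflow_coverage=0.8)  # > task-level base of 0.72
```

The point of the sketch is the direction of the result, not the numbers: any positive workflow-coverage term raises the score above the classic task-weighted average, which is why agentic framings produce higher displacement estimates than prior task-only ones.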
Bounded by Risk, Not Capability: A New Model for AI Job Substitution. An early April arXiv paper introduces a “compliance premium” concept, showing that jobs with high physical or ethical stakes remain resilient even when technically automatable. Data scientists face high exposure while care workers do not, challenging simpler capability-based forecasts.
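The "compliance premium" idea can be sketched in a few lines: substitution risk is bounded by stakes, not just technical capability. The function form and all input values below are hypothetical illustrations of the concept, not the paper's actual model:

```python
# Illustrative "compliance premium" model: jobs with high physical or
# ethical stakes resist substitution even when technically automatable.
# The functional form and numbers are hypothetical, not from the paper.

def substitution_risk(capability, compliance_premium):
    """capability: technical automatability in [0, 1].
    compliance_premium: discount for physical/ethical stakes in [0, 1];
    high-stakes occupations carry a premium near 1."""
    return capability * (1.0 - compliance_premium)

# A data-science role: highly automatable, low stakes -> high risk.
data_scientist = substitution_risk(capability=0.9, compliance_premium=0.1)
# A care-work role: partly automatable, high stakes -> low risk.
care_worker = substitution_risk(capability=0.6, compliance_premium=0.9)
```

Under this toy model the care-work role ends up far less exposed than the data-science role despite being partly automatable, which is the inversion of capability-based forecasts the paper highlights.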
Ethics & Safety
The Hidden Costs of “Helpful” AI. A Nature commentary argues that AI decision aids are quietly de-skilling professions in healthcare, law, and education by collapsing value-laden judgments into probabilistic outputs. The result: ethical reasoning and interpretive expertise become invisible, with long-term consequences for professional accountability.
Frontier AI Models Are Outpacing Safety Guardrails. A deeply reported piece synthesizing incident data, researcher testimony, and policy gaps to argue that the pace of frontier-model development has structurally outrun the capacity of safety research and regulation. Provides important context for why policy proposals are accelerating.
AI in Warfare Needs a Strong Ethical Framework. A Nature correspondence proposes CARE principles (collective benefit, authority, responsibility, ethics) for governing AI in military contexts. It responds to growing calls for rules on lethal autonomous systems and addresses accountability gaps in current military AI deployments.
Research
How AI Aggregation Affects Knowledge (Acemoglu et al.). A new paper from Daron Acemoglu and co-authors models how AI systems trained on population beliefs can distort collective knowledge when their outputs feed back into social learning. The findings raise concerns about polarization and epistemic quality in democratic discourse.
AI-Generated Political Outreach Triggers Measurable "AI Penalty" Among Voters. An empirical study across the U.S. and UK finds that voters who know they are receiving AI-generated campaign messages report lower trust and are less persuaded by them. One of the first cross-national studies of its kind, with direct implications for regulating AI in elections.
Beyond Symbolic Control: The Case for Genuine Human Oversight of AI. This preprint argues that most current human oversight frameworks are performative rather than substantive, and that without structural redesign, AI-driven workforce displacement will proceed without meaningful checks. Bridges AI safety and political economy in a way few papers attempt.
Last Updated: 2026-04-07 18:15 (California Time)