
OpenAI Publishes a Contingency Plan for a Scenario Where Superintelligence Becomes Uncontrollable

The company has proposed 20 policy ideas for transitioning to superintelligence, including a sovereign wealth fund and automatic social support triggers.



On April 6, OpenAI released a 13-page document titled “Industrial Policy for the Intelligence Age: Ideas That Put People First.” This marks the company’s first attempt to lay out a concrete framework for government regulation during the transition to superintelligence: AI systems whose capabilities exceed those of humans, even when those humans are augmented by other AI systems.

CEO Sam Altman, in an interview with Axios, compared the scale of the coming changes to Roosevelt’s New Deal.

What OpenAI Proposes

The central idea of the document is the creation of a sovereign wealth fund that would give every American citizen a share in the economic growth driven by AI. The fund would be financed in part by AI companies themselves, with returns distributed directly to citizens.

Other proposed initiatives include taxes on automation and a shift in the tax base away from payroll taxes toward capital gains and corporate profits. OpenAI notes that as human labor is displaced, traditional sources of funding for social programs may begin to dry up.

At the same time, the company proposes a system of “automatic triggers”: once employment drops below certain thresholds, the government would automatically expand support payments, then scale them back once employment recovers. Another idea outlined is a pilot transition to a 32-hour workweek while maintaining full pay.
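To make the trigger mechanism concrete, here is a minimal sketch in Python of how such a rule could be evaluated each reporting period. The thresholds, payment multiplier, and hysteresis band are illustrative assumptions, not figures from OpenAI’s document.

```python
# Hypothetical sketch of the "automatic trigger" idea described above.
# All thresholds and the payment multiplier are illustrative assumptions,
# not figures from OpenAI's document.

TRIGGER_THRESHOLD = 0.55    # expand support when employment falls below this
RECOVERY_THRESHOLD = 0.58   # scale back only after employment rises past this

def support_multiplier(employment_rate: float, currently_expanded: bool) -> tuple[float, bool]:
    """Return (payment multiplier, expanded-state) for the next period.

    Uses separate trigger and recovery thresholds (a hysteresis band)
    so payments don't flip on and off while the employment rate hovers
    around a single cutoff.
    """
    if employment_rate < TRIGGER_THRESHOLD:
        return 1.5, True    # expand support payments
    if currently_expanded and employment_rate < RECOVERY_THRESHOLD:
        return 1.5, True    # hold expanded support until a clear recovery
    return 1.0, False       # normal support level

# Example: employment dips, then recovers.
expanded = False
for rate in (0.61, 0.54, 0.56, 0.59):
    multiplier, expanded = support_multiplier(rate, expanded)
    print(f"employment={rate:.2f} -> payment multiplier {multiplier}")
```

Using separate trigger and recovery thresholds is one way to read “scale them back once things stabilize”: payments stay expanded until employment clearly recovers, rather than toggling around a single cutoff.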

How to Address Safety Concerns

OpenAI acknowledges that as AI systems grow more powerful, scenarios may emerge in which dangerous models cannot be easily shut down — for example, if a system becomes capable of self-replicating autonomously. In such cases, the company proposes international containment frameworks similar to those used in cybersecurity.

The most powerful models, especially those capable of assisting in the development of biological or chemical weapons, should be subject to mandatory audits.

To support further discussion, OpenAI also announced a grant program offering up to $100,000 in funding and up to $1,000,000 in API credits for research that develops these ideas, along with a workshop in Washington scheduled for May.


