The artificial intelligence field needs an international watchdog to regulate future superintelligence, according to the co-founders of OpenAI. 

In a blog post from CEO Sam Altman and company leaders Greg Brockman and Ilya Sutskever, the group said – given potential existential risk – the world "can't just be reactive," comparing the tech to nuclear energy. 

To that end, they suggested coordination among leading development efforts, highlighting that there are "many ways this could be implemented," including a project set up by major governments or a collective agreement to limit the annual rate of growth in frontier AI capability. 

"Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc." they asserted. 


Sam Altman, chief executive officer of OpenAI, during a fireside chat at University College London, United Kingdom, on Wednesday, May 24, 2023. (Chris J. Ratcliffe/Bloomberg via Getty Images)

The International Atomic Energy Agency is the international center for cooperation in the nuclear field, of which the U.S. is a member state. 

The authors said that tracking compute and energy usage could go a long way toward making such oversight workable. 

"As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say," the blog continued. 

Third, they said, they will need the technical capability to make a "superintelligence safe."

The OpenAI logo on a smartphone in Brooklyn, New York, on Jan. 12, 2023. (Gabby Jones/Bloomberg via Getty Images)


They also flagged what is "not in scope": development of models below a significant capability threshold should be allowed "without the kind of regulation" they described, and the focus on the systems they are "concerned about" should not be watered down by "applying similar standards to technology far below this bar." The governance of the most powerful systems, however, must have strong public oversight, they said.

Sam Altman speaks during a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, May 16, 2023. (Eric Lee/Bloomberg via Getty Images)

"We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves," they said. 

The trio believes it is conceivable that AI systems will exceed expert skill level in most domains within the next decade. 

So why build the technology at all, given the risks and difficulties it poses?


They claim AI will lead to a "much better world than what we can imagine today," and that it would be "unintuitively risky and difficult to stop the creation of superintelligence."

"Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right," they said.