The AI giant predicts that human-like artificial intelligence could arrive within 10 years, so they want to be ready for that in four years.

OpenAI is looking for researchers to work on containment of super-intelligent artificial intelligence with other AIs. The end goal is to mitigate a human-like artificial intelligence threat that may or may not be science fiction.
“We need scientific and technical breakthroughs to direct and control AI systems much smarter than us,” wrote Open AI Alignment Manager Jan Leike and Co-Founder and Chief Scientist Ilya Sutskever in a blog post.
Jump to:
The OpenAI Superalignment team is currently recruiting
Superalignment team will dedicate 20% of OpenAI resources total computing power to train what they call a human-level automated alignment researcher to keep future AI products online. To that end, OpenAI’s new Superalignment group is recruiting a Research Engineer, Research Scientist, and Research Manager.
OpenAI says the key to controlling an AI is alignment, or making sure the AI is doing the job a human intended.
The company also said one of its goals is the control of “superintelligence,” or AI with capabilities beyond humans. It’s important that these sci-fi-like hyperintelligent AIs “follow human intent,” wrote Leike and Sutskever. They anticipate the development of super-intelligent AI over the past decade and want to have a way to control it within the next four years.
SEE: How to build a ethical policy for the use of artificial intelligence in your organization (TechRepublic Premium)
AI trainer can keep other AI models online
Today, AI training requires a lot of human intervention. Leike and Sutskever propose that a future challenge for AI development may be contradictory – namely, “the inability of our models to successfully detect and undermine supervision during training”.
Therefore, they say, it will take a specialized AI to train an AI that can outperform the people who made it. The AI researcher training other AI models will help OpenAI perform stress testing and reassess the company’s entire alignment pipeline.
Changing the way OpenAI handles alignment involves three major goals:
- Create an AI that helps evaluate other AIs and understand how those models interpret the type of surveillance a human would typically perform.
- Automate the search for problematic behaviors or internal data within an AI.
- Stress test this alignment pipeline by intentionally creating a “misaligned” AI to ensure that the alignment AI can detect them.
Staff from the former OpenAI roster team and other teams will work on the superalignment with the new recruits. The creation of the new team reflects Sutskever’s interest in super-intelligent AI. He plans to make superalignment his primary area of research.
Superintelligent AI: fact or science fiction?
The question of whether “superintelligence” will ever exist is a subject of debate.
OpenAI offers superintelligence as a level above generalized intelligence, a class of human-like AI that some researchers believe will never exist. However, some Microsoft researchers believe GPT-4 scores high on standardized tests brings him near the threshold of generalized intelligence.
Others doubt that intelligence can truly be measured by standardized tests, or wonder if the very idea of generalized AI is a philosophical challenge rather than a technical one. Large language models cannot interpret language “in context” and therefore address nothing like human thought, a 2022 study by Cohere for AI underline. (None of these studies are peer-reviewed.)
SEE: Some high-risk uses of AI could be covered by laws being drafted in the European Parliament. (TechRepublic)
OpenAI aims to outpace the speed of AI development
OpenAI describes the superintelligence threat as possible but not imminent.
“We have a great deal of uncertainty about the speed of development of the technology over the next few years, so we choose to aim for the toughest target to field a much more capable system,” Leike and Sutskever wrote.
They also point out that improving the security of existing AI products such as ChatGPT is a priority, and that the discussion of AI security should also include “AI-related risks such as misuse, economic disruption, misinformation, prejudice and discriminationaddiction and excessive dependence, and others” and “related socio-technical problems”.
“Superintelligence alignment is fundamentally a machine learning problem, and we believe that leading machine learning experts – even if they are not already working on alignment – will be key to solving it,” Leike said. and Sutskever in the blog.