What OpenAI co-founder Ilya Sutskever is planning now

Top AI researcher and co-founder of OpenAI, Ilya Sutskever, wants to make artificial intelligence safer with his new start-up. What exactly does he mean by that?

This is original content from the Capital brand. The article will be available on stern.de for ten days; after that, you will find it exclusively on capital.de. Capital, like Stern, is part of RTL Deutschland.

Ilya Sutskever is an icon in the AI community. In 2012, the 37-year-old was one of the three authors of the deep learning architecture AlexNet, which sparked the current AI boom. In 2015, he co-founded OpenAI – and in 2023 he took part in the boardroom revolt that tried to oust CEO Sam Altman (though he later changed his mind). In mid-May, he announced his departure from OpenAI.

For months, the scene wondered what Sutskever would do next. Now the computer scientist has announced the first details of his new venture: he is founding the company Safe Superintelligence (SSI) together with his former OpenAI colleague Daniel Levy and with investor and ex-Apple manager Daniel Gross. The stated goal: to develop a safe superintelligence.

The question of AI safety has occupied Sutskever for a long time. He once joined OpenAI because he believed he could shape AI development there with a particular focus on safety. The Altman coup was apparently also related to this – to Sutskever's unease about the direction of OpenAI, which had become a billion-dollar start-up with strong commercial interests. He does not want to comment on this publicly – but his former OpenAI colleague Jan Leike, who has also left for a new start-up, recently summed it up like this: the focus on safety has "taken a back seat to shiny products".

Immense financing needs at companies like OpenAI

That won’t happen at SSI, Sutskever says. The company will be “completely isolated from any external pressure of having to deal with a large and complicated product and being in a competitive battle,” he said. “This company is special because its first product will be a safe superintelligence and it will do nothing else until then.”

However, the commercial pressure on AI companies like OpenAI also stems from their immense financing needs – the latest language models require huge amounts of data and computing power. Sutskever’s new venture cannot escape this fact either. It is still unclear which investors are backing SSI.

Also unclear is what exactly SSI aims to achieve and develop. What is certain is that it will not be about fixing the shortcomings and dangers of current AI products – which include problems with data protection and copyright, or the fact that tools like ChatGPT can say a lot, but factual accuracy is not among their core competencies.

Instilling liberal values

Sutskever is concerned with bigger questions: what will significantly more powerful AIs look like in the future, and how will they deal with us humans? A few months ago, the SSI founder explained to “Academy” that artificial intelligence will “solve all the problems we have today”, including unemployment, disease and poverty – but that it will also create new ones: “Artificial intelligence has the potential to create everlasting dictatorships.” Sutskever is referring to a stage of AI development that the scene calls superintelligence – one that, by this understanding, would be even more powerful than artificial general intelligence (AGI). It would not merely have human-like abilities, but abilities that go beyond them.

Such a superintelligence must have the property “that it will not harm humanity on a large scale,” Sutskever told Bloomberg. “We want it to work on the basis of some important values.” By this he means “perhaps the values that have been so successful over the last few hundred years and that underpin liberal democracies, such as freedom and democracy.”

There is no concrete information yet on how exactly the founders plan to instill these values in their AI models. Sutskever says only that one day there will be huge super data centers that independently develop new technologies. “That’s crazy, isn’t it? It is their safety that we want to contribute to.”

Source: Stern
