Ilya Sutskever, OpenAI co-founder and now CEO of Safe Superintelligence Inc. (SSI), revealed new details about the company’s strategy and eventual business model during a rare appearance on Dwarkesh Patel’s podcast.
Even though SSI has raised nearly $3 billion since 2024 and doesn’t yet have a product, Sutskever argued that monetization will only follow once the core research problems are solved.

“Right now we just have to focus on the research, and the answer to this question will become clear by itself,” he said. “I think there are many possible answers.”
SSI began with a “straight shot to superintelligence” philosophy: build a safe superintelligence first and think about commercialization later. But Sutskever now says even that approach may require some public exposure before reaching the finish line.
Patel asked why SSI would try to build superintelligence directly when its competitors release progressively more capable, not-yet-superintelligent models that gradually acclimate the public.
Sutskever agreed that a gradual release of powerful AI may be necessary because society cannot meaningfully grasp its impact through essays and predictions alone.
“I was happy to say we’re going to hold back from all this and focus on research, and only come out when we’re ready, not in advance,” he said.
But he added: “In this regard, even in a straight-shot scenario, you would release in stages… Gradualism would be an inherent part of any plan. It’s just a matter of what you get out the door first.”
On compute spending, Sutskever, a vocal critic of the idea that the industry can reach superintelligence simply by throwing more and more compute at the problem, also argued that SSI does not need hardware investments on the scale of today’s AI giants.
He argued that most of the enormous budgets at labs like OpenAI and Anthropic go to inference, multimodal systems, staffing, and product engineering rather than pure research.
“Then when you look at what’s actually left for research, the difference is much smaller,” he said. “The other thing is, if you’re doing something different, do you really need the absolute maximum scale to prove it? I don’t think that’s true.”
Patel pressed further: if SSI is weighing “50 different ideas,” how does it know which ones are breakthroughs on the order of the transformer without extensive compute?
Sutskever responded: “In our case, I think we have enough compute to convince ourselves and others that what we’re doing is right.”
Sutskever emphasized that SSI’s advantage lies in pursuing a different paradigm, not in trying to beat other labs in the product cycle. “We are in the era of research companies,” he said, whose goal is to first test new ideas about generalization and then decide how to develop them.
After leaving Google Brain, Sutskever co-founded OpenAI in 2015, serving as its chief scientist and contributing to breakthroughs in large-scale deep learning.
He left OpenAI in May 2024 amid concerns that commercial pressures were overtaking the company’s original safety-first mission.
On June 19, 2024, he launched SSI with former Apple AI lead Daniel Gross and former OpenAI researcher Daniel Levy.
SSI, which operates out of Palo Alto and Tel Aviv, reportedly secured $1 billion by September 2024 and another $2 billion in April 2025, reaching a valuation of $30 billion to $32 billion despite having no product.
In July 2025, Gross left to join Meta’s AI division. Sutskever then became CEO and continued SSI’s research-first push toward building a safe superintelligence.
