Three years ago, OpenAI co-founder and former chief scientist Ilya Sutskever raised eyebrows when he declared that the era’s most advanced neural networks might already be “slightly conscious.”
That talent for hype is on full display at Sutskever’s new venture, Safe Superintelligence, another AI outfit whose name is a nebulous promise of exactly that.
And if you thought OpenAI’s business model strained the logic of Alice in Wonderland, Sutskever’s new project will take you straight through the looking glass. As the Financial Times reports, the company has raised an additional $1 billion, adding to earlier backing from deep-pocketed investors like Andreessen Horowitz and Sequoia Capital and bringing its valuation to a whopping $30 billion.
What’s wild is that it did all this, including achieving a higher valuation than Warner Bros., Nokia, or the Dow Chemical Company, without offering any product at all. In fact, Sutskever has boasted that it never will offer one — that is, until it drops a fully formed superintelligent AI at some unspecified point in the future.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” the former OpenAI-er told Bloomberg when the venture first got underway. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
Of course, it’s not uncommon for venture capitalists to invest in companies that don’t yet have products, but throwing billions at something that may never exist in our lifetimes is a stretch even by VC standards.
Despite Sutskever’s well-funded claims to the contrary, there is little reason to believe that AI researchers are anywhere close to creating artificial general intelligence (AGI). The timeline for reaching AGI is hotly contested, and some experts argue that this so-called “singularity” will never be achieved at all.
As the FT points out, Safe Superintelligence’s valuation has swelled from $5 billion to $30 billion since its launch last June. Over that period, the concept of AGI has loomed ever larger in the popular imagination, as OpenAI CEO Sam Altman continues to tease that OpenAI is on the cusp of achieving it. (It’s worth remembering, by the way, that Sutskever left OpenAI last summer.)
The Safe Superintelligence website does little to explain what will set Sutskever’s company apart from others chasing similar goals. Instead, it touts the purity of its singular focus, proudly declaring that it will approach “safety and capabilities in tandem” as technical problems to be solved through revolutionary engineering and scientific breakthroughs, while advancing capabilities “as fast as possible.”
If Sutskever’s wildest predictions come to fruition, he will usher in an epoch-making, powerful, and completely risk-free superintelligence. But unless he can pull that off soon, the investors will surely come knocking.
More on artificial “intelligence”: OpenAI researchers have found that even the best AI is unable to solve the majority of coding problems.