Blog - Creative Collaboration

Ex-OpenAI star Sutskever shoots for superintelligent AI with new company

June 20, 2024

Ilya Sutskever gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.

On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building “superintelligence,” a hypothetical form of artificial intelligence that surpasses human intelligence, possibly to an extreme degree.

“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” wrote Sutskever on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company’s new website.

A screen capture of Safe Superintelligence’s initial formation announcement, captured on June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well on his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later in May.


A nebulous concept

OpenAI is currently seeking to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to jump beyond that in a straight moonshot attempt, with no distractions along the way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

During his tenure at OpenAI, Sutskever was part of the “Superalignment” team, which studied how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial superintelligence,” so that it would be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”

Much like AGI, superintelligence is a nebulous term. The mechanics of human intelligence are still poorly understood, and human intelligence itself is difficult to quantify or define because there is no single agreed-upon type of it, so identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.

“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.
