OpenAI announces new internal safety and security team

May 28, 2024

OpenAI is digging in its heels on industry self-governance, announcing a revamped safety and security team following several public resignations and the dissolution of its former oversight body.

The Safety and Security Committee, as it's been renamed, is led by board members and directors Bret Taylor (Sierra), Adam D'Angelo (Quora), Nicole Seligman, and — of course — OpenAI CEO Sam Altman. Other members include internal "OpenAI technical and policy experts," including the heads of "Preparedness," "Safety Systems," and "Alignment Science."

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” wrote OpenAI. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

The committee’s first task is to “evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the company wrote in its announcement, with feedback from outside experts who are already on OpenAI’s external oversight roster, like former NSA cybersecurity director Rob Joyce.

The announcement is a timely response to a swirling management controversy at OpenAI, although it may do little to reassure watchful eyes and advocates for external oversight. This week, former OpenAI board members called for more intense government regulation of the AI sector, specifically calling out the poor management decisions and toxic culture fostered by Altman in the role of OpenAI’s leader.

“Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives,” they argued.

OpenAI’s new committee is being thrown straight into the fire, with an immediate mandate to evaluate the company’s AI safeguards. But even those might not be enough.

Topics: Artificial Intelligence, OpenAI
