
OpenAI, Google DeepMind insiders have serious warnings about AI

June 5, 2024

OpenAI has made multiple headlines over the last few weeks, and not for the best reasons. The story hasn’t ended there: several current and former OpenAI employees, along with Google DeepMind employees, are now calling out the leading AI companies over oversights, a culture of stifled criticism, and a general lack of transparency.

In an open letter, the whistleblowers essentially called for the right to openly criticize AI technology and its associated risks. They wrote that, due to a lack of obligation to share information with government bodies and regulators, “current and former employees are among the few people who can hold [these corporations] accountable to the public”, and said that many of them “fear various forms of retaliation” for doing so.


The signatories asked advanced AI companies to commit to certain principles, including facilitating an anonymous process for employees to raise risk-related concerns and supporting “a culture of open criticism,” so long as trade secrets are protected in the process. They also asked that the companies not retaliate against those who “publicly share risk-related confidential information after other processes have failed.”


In the letter, the group also outlined the AI risks they recognize: the entrenchment of existing inequalities, the exacerbation of misinformation, and the possibility of human extinction.

Daniel Kokotajlo, a former OpenAI researcher and one of the group’s organizers, told the New York Times that OpenAI is “recklessly racing” to get to the top of the AI game. The company has faced scrutiny over its safety processes, with the recent launch of an internal safety team also raising eyebrows for the fact that CEO Sam Altman sits at its helm.

Topics
Artificial Intelligence
OpenAI
