OpenAI, Anthropic agree to have their models tested before making them public

August 30, 2024

OpenAI and rival company Anthropic have signed agreements with the U.S. government to have new models tested before public release.

On Thursday, the National Institute of Standards and Technology (NIST) announced that its AI Safety Institute will oversee “AI safety research, testing and evaluation” with both companies. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the AI Safety Institute, in the announcement.

It’s no secret that generative AI poses safety risks. Its tendency to produce inaccuracies and misinformation, enable harmful or illegal behavior, and entrench discrimination and biases is well documented at this point. OpenAI conducts its own internal safety testing but has been secretive about how its models work and what they’re trained on. This marks the first time OpenAI has opened its models to third-party scrutiny and accountability. Altman and OpenAI have been vocal about the need for AI regulation and standardization, but critics say this willingness to work with the government is a strategy to ensure OpenAI is regulated favorably and stamps out competition.

“For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!” posted OpenAI CEO Sam Altman on X.

The formal collaboration with NIST builds on the Biden Administration’s AI executive order, signed last October. Among other mandates tapping several federal agencies to ensure the safe and responsible deployment of AI, the order requires AI companies to grant NIST access for red-teaming before an AI model is released to the public.

The announcement also said that the AI Safety Institute would share its findings and feedback in partnership with the UK AI Safety Institute.

Topics
Artificial Intelligence
OpenAI

© CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.