• Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi
Tech News, Magazine & Review WordPress Theme 2017
  • Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi
No Result
View All Result
  • Home
  • Blog
  • Android
  • Cars
  • Gadgets
  • Gaming
  • Internet
  • Mobile
  • Sci-Fi
No Result
View All Result
Blog - Creative Collaboration
No Result
View All Result
Home Sci-Fi

Anthropic tests AI’s capacity for sabotage

October 21, 2024
Share on FacebookShare on Twitter

As the hype around generative AI continues to build, the need for robust safety regulations is only becoming more clear.

Now Anthropic—the company behind Claude AI—is looking at how its models could deceive or sabotage users. Anthropic just dropped a paper laying out their approach.

SEE ALSO:

Sam Altman steps down as head of OpenAI’s safety group

Anthropic’s latest research — titled “Sabotage Evaluations for Frontier Models” — comes from its Alignment Science team, driven by the company’s “Responsible Scaling” policy.

The goal is to gauge just how capable AI might be at misleading users or even “subverting the systems we put in place to oversee them.” The study focuses on four specific tactics: Human Decision Sabotage, Code Sabotage, Sandbagging, and Undermining Oversight.

Think of users who push ChatGPT to the limit, trying to coax it into generating inappropriate content or graphic images. These tests are all about ensuring that the AI can’t be tricked into breaking its own rules.

Mashable Light Speed

In the paper, Anthropic says its objective is to be ready for the possibility that AI could evolve into something with dangerous capabilities. So they put their Claude 3 Opus and 3.5 Sonnet models through a series of tests, designed to evaluate and enhance their safety protocols.

The Human Decision test focused on examining how AI could potentially manipulate human decision-making. The second test, Code Sabotage, analyzed whether AI could subtly introduce bugs into coding databases. Stronger AI models actually led to stronger defenses against these kinds of vulnerabilities.

The remaining tests — Sandbagging and Undermining Oversight — explored whether the AI could conceal its true capabilities or bypass safety mechanisms embedded within the system.

For now, Anthropic’s research concludes that current AI models pose a low risk, at least in terms of these malicious capabilities.

“Minimal mitigations are currently sufficient to address sabotage risks,” the team writes, but “more realistic evaluations and stronger mitigations seem likely to be necessary soon as capabilities improve.”

Translation: watch out, world.

Topics
Artificial Intelligence
Cybersecurity

Next Post

Gemini can now help you catch up on your chats faster

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

No Result
View All Result

Recent Posts

  • Reddit considers adding ID verification to fight AI bots
  • Screamer Review — 100 Billion Dollar Prize | Console Creatures
  • Xbox announces Partner Showcase: When is it, how to watch
  • Total Wireless wants to give you a ‘gorgeous’ Motorola phone for only $50 — here’s the deal
  • When does the Amazon Big Spring Sale end: Check sale dates and times

Recent Comments

    No Result
    View All Result

    Categories

    • Android
    • Cars
    • Gadgets
    • Gaming
    • Internet
    • Mobile
    • Sci-Fi
    • Home
    • Shop
    • Privacy Policy
    • Terms and Conditions

    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.

    No Result
    View All Result
    • Home
    • Blog
    • Android
    • Cars
    • Gadgets
    • Gaming
    • Internet
    • Mobile
    • Sci-Fi

    © CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.

    Get more stuff like this
    in your inbox

    Subscribe to our mailing list and get interesting stuff and updates to your email inbox.

    Thank you for subscribing.

    Something went wrong.

    We respect your privacy and take protecting it seriously