
Scientists figured out how to fool state-of-the-art Deepfake detectors

March 6, 2020

A team of researchers from UC San Diego recently came up with a relatively simple method for convincing fake-video detectors that AI-generated fakes are the real deal.

AI-generated videos called “Deepfakes” started flooding the internet a few years back, when bad actors realized they could be used to exploit women and, potentially, spread political misinformation. The first generation of these AI systems produced relatively easy-to-spot fakes, but further development has led to fakes that are harder than ever to detect.

Read: How to stop deepfakes from destroying trust in society

In response, numerous researchers have developed AI-powered methods for detecting Deepfakes and other similar AI-generated fake videos. Unfortunately, all of them are susceptible to the UC San Diego team’s adversarial attacks.

Per the team’s research paper:

In this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing Deepfake generation methods. We further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat.

The team’s demonstration videos show a Deepfake detector working normally at first, then failing once the researchers inject adversarial information designed to fool it into the footage.

Fake video detectors examine each frame of a video for signs of manipulation, since a doctored frame can’t be made to appear perfectly natural. The UC San Diego team developed a process that injects adversarial information into each frame, causing the detectors to conclude the video is “normal.”
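The paper itself isn’t quoted with code here; the following is a minimal sketch of the white-box, per-frame idea, assuming a PyTorch frame-level detector that outputs two logits (index 0 for “real”, index 1 for “fake”). It uses a standard targeted FGSM step, one common way to craft such perturbations, and not necessarily the team’s exact optimization. All names are illustrative:

```python
import torch
import torch.nn.functional as F

def perturb_frame(detector, frame, epsilon=2/255):
    """One targeted FGSM step nudging a single frame toward the 'real' class."""
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame.unsqueeze(0))        # shape: (1, 2)
    # Cross-entropy against the "real" label; its gradient tells us how to
    # move each pixel so the detector grows more confident the frame is real.
    loss = F.cross_entropy(logits, torch.tensor([0]))
    loss.backward()
    adv = frame - epsilon * frame.grad.sign()    # step toward "real"
    return adv.clamp(0.0, 1.0).detach()          # stay in the valid pixel range

def perturb_video(detector, frames, epsilon=2/255):
    """Apply the per-frame attack to a whole video, shape (T, C, H, W)."""
    return torch.stack([perturb_frame(detector, f, epsilon) for f in frames])
```

Because the perturbation is applied to every frame, a detector that flags individual frames never sees an unmodified one.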

[Image credit: Neekhara et al.]

The attack works in both white-box and black-box settings. In the white-box case the attacker has full access to the detector’s model and parameters; in the black-box case they can only query the detector and observe its output. And, according to the researchers, the perturbations remain robust to compression and other manipulations.
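In the black-box case, gradients can’t be backpropagated through the detector, so attacks of this family typically estimate them from queries alone. Here is a hedged sketch of that idea, again with illustrative names: `score_fn` is assumed to return the detector’s “fake” probability for a frame.

```python
import torch

def estimate_gradient(score_fn, frame, n_samples=50, sigma=1e-3):
    """Estimate d(score)/d(frame) with antithetic Gaussian sampling
    (NES-style), using only the detector's scores, never its internals."""
    grad = torch.zeros_like(frame)
    for _ in range(n_samples):
        noise = torch.randn_like(frame)
        grad += noise * (score_fn(frame + sigma * noise)
                         - score_fn(frame - sigma * noise))
    return grad / (2 * sigma * n_samples)

def black_box_step(score_fn, frame, epsilon=2/255):
    """One signed step that lowers the detector's 'fake' score for a frame."""
    grad = estimate_gradient(score_fn, frame)
    return (frame - epsilon * grad.sign()).clamp(0.0, 1.0)
```

The trade-off is query cost: each gradient estimate above costs 2 × n_samples detector queries per frame, which is why black-box attacks are slower but need no knowledge of the model.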

This research clearly indicates that state-of-the-art fake video detectors are severely limited. In essence, the team concludes that a bad actor with even partial knowledge of how a detector works could fool all of the current Deepfake detectors.

Published March 6, 2020 — 23:21 UTC
