
Meta will label AI-generated content from OpenAI and Google on Facebook, Instagram

February 6, 2024

Meta / Getty Images

On Tuesday, Meta announced its plan to start labeling AI-generated images from other companies like OpenAI and Google, as reported by Reuters. The move aims to enhance transparency on platforms such as Facebook, Instagram, and Threads by informing users when the content they see is digitally synthesized media rather than an authentic photo or video.

Coming during a US election year that is expected to be contentious, Meta’s decision is part of a larger effort within the tech industry to establish standards for labeling content created using generative AI models, which are capable of producing fake but realistic audio, images, and video from written prompts. (Even non-AI-generated fake content can potentially confuse social media users, as we covered yesterday.)

Meta President of Global Affairs Nick Clegg made the announcement in a blog post on Meta’s website. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” wrote Clegg. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

Clegg said that Meta’s initiative to label AI-generated content will expand the company’s existing practice of labeling content generated by its own AI tools to include images created by services from other companies.

“We’re building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards—so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”


Meta says the technology for labeling AI-generated content will rely on invisible watermarks and metadata embedded in files. Meta already adds a small “Imagined with AI” watermark to images created with its public AI image generator.
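As a rough illustration of the metadata half of that approach: the IPTC standard mentioned above defines a digital source type term, `trainedAlgorithmicMedia`, that generator tools can embed in an image's XMP metadata. The sketch below simply scans a file's raw bytes for that URI. This is a naive check for demonstration only, not Meta's implementation; production systems parse C2PA manifests and XMP structures properly and verify cryptographic signatures rather than string-matching.

```python
# Naive sketch: flag a file whose embedded XMP metadata contains the IPTC
# "trainedAlgorithmicMedia" digital source type URI. Real detection parses
# the metadata structure; this only demonstrates what the marker looks like.

AI_GENERATED_MARKER = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC
    'trainedAlgorithmicMedia' digital source type URI."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_GENERATED_MARKER in data
```

Note that this kind of marker is trivially strippable, which is exactly the limitation Meta acknowledges below for images produced without watermarks or metadata.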

In the post, Clegg expressed confidence in the companies’ ability to reliably label AI-generated images, though he noted that tools for marking audio and video content are still under development. In the meantime, Meta will require users to label their altered audio and video content, with unspecified penalties for non-compliance.

“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” he wrote.

However, Clegg mentioned that there’s currently no effective way to label AI-generated text, suggesting that it’s too late for such measures to be implemented for written content. This is in line with our reporting that AI detectors for text don’t work.

The announcement comes a day after Meta’s independent oversight board criticized the company’s policy on misleadingly altered videos as overly narrow, recommending that such content be labeled rather than removed. Clegg agreed with the critique, acknowledging that Meta’s existing policies are inadequate for managing the increasing volume of synthetic and hybrid content online. He views the new labeling initiative as a step toward addressing the oversight board’s recommendations and fostering industry-wide momentum for similar measures.

Meta admits that it will not be able to detect AI-generated content that was created without watermarks or metadata, such as images created with some open source AI image synthesis tools. Meta is researching image watermarking technology called Stable Signature that it hopes can be embedded in open source image generators. But as long as pixels are pixels, they can be created using methods outside of tech industry control, and that remains a challenge for AI content detection as open source AI tools become increasingly sophisticated and realistic.

© CC Startup, Powered by Creative Collaboration. © 2020 Creative Collaboration, LLC. All Rights Reserved.