
Meta will add AI labels to Facebook, Instagram, and Threads

Feb. 7, 2024 | Hi-network.com

Image: Meta AI labels (Meta)

Generative AI has made it possible to create realistic images that look like they were taken by a human, making it harder to differentiate between what is real and what is AI-generated. As a result, Meta has announced several efforts around AI-generated images to help combat misinformation.

On Tuesday, Meta announced via a blog post that in the coming months, it will add new labels across Facebook, Instagram, and Threads indicating when an image is AI-generated.

Also: I just tried Google's ImageFX AI image generator, and I'm shocked at how good it is

Meta is currently working with industry partners to establish common technical standards that signal when content was created using generative AI. Using those signals, Meta is building the ability to apply labels in all languages to posts across its platforms, indicating that an image is AI-generated, as seen in the image at the top of this article.

"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," said Nick Clegg, Meta president of global affairs. "So it's important that we help people know when photorealistic content they're seeing has been created using AI."

This labeling would work similarly to TikTok's AI-generated content labels, released in September, which appear on TikTok videos containing realistic AI-generated images, audio, or video.

Also: The best AI image generators

Meta embeds visible markers, invisible watermarks, and IPTC metadata in each image created with Meta AI's image-generation features. The company then applies an "Imagined with AI" label to those images to designate that they were artificially created.
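Meta has not published the details of this pipeline, but the IPTC standard it references defines a "digital source type" vocabulary for exactly this purpose. The sketch below only illustrates the general idea of stamping that metadata into an output file; it assumes the exiftool command-line tool is installed, and the file name and helper function are hypothetical, not Meta's actual implementation.

import subprocess

# IPTC NewsCodes value indicating content produced by a generative model
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def tag_as_ai_generated(image_path: str) -> None:
    """Embed the IPTC Extension DigitalSourceType field in the image's XMP metadata."""
    subprocess.run(
        [
            "exiftool",
            "-n",  # write the raw URI value, skipping exiftool's print conversion
            f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )

tag_as_ai_generated("imagined_with_ai.jpg")  # hypothetical generated file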

Meta says it is building industry-leading tools that can identify invisible markers, such as IPTC metadata, in images produced by other companies' AI generators, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, so it can apply AI labels to those images as well.
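The detection side is the mirror image: read any embedded metadata and decide whether an AI label should be attached. Below is a minimal sketch, again assuming exiftool is available and that the generator wrote the standard IPTC value; detecting truly invisible, pixel-level watermarks would require the vendors' own tooling, which is not public. The file name is hypothetical.

import json
import subprocess

AI_SOURCE_TYPE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the image's XMP metadata marks it as AI-generated."""
    result = subprocess.run(
        ["exiftool", "-json", "-n", "-XMP-iptcExt:DigitalSourceType", image_path],
        capture_output=True,
        text=True,
        check=True,
    )
    metadata = json.loads(result.stdout)[0]
    return metadata.get("DigitalSourceType") == AI_SOURCE_TYPE

if looks_ai_generated("upload.jpg"):
    print("Apply an AI-generated label to this post")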

Of course, this leaves a loophole for malicious actors: if a company doesn't add metadata to its AI image generator's output, Meta will have no way of tagging the image with a label. Still, it seems to be a step in the right direction.

While companies have begun including these signals in AI-generated images, the same effort has yet to be made for AI-generated video and audio. In the meantime, Meta is adding a feature that lets people disclose when the content they share was generated with AI so that Meta can add a label.

Also: The ethics of generative AI: How we can harness this powerful technology

The company plans to enforce this disclosure requirement, with penalties for users who fail to comply. It also retains the ability to add a more prominent label to images, audio, or video that creates a particularly high risk of deceiving the public.

"We'll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so," added Clegg. 

The development of these tools comes at an especially critical time, with elections on the horizon. Creating believable misinformation is easier than ever and can negatively impact public opinion of candidates and hinder the democratic voting process. As a result, other companies, including OpenAI, have also taken action to implement guardrails ahead of elections.

