Meta Platforms Pledges on AI Content Transparency

Meta Platforms plans to label images generated by competitors’ AI services, part of a push by industry players to align on common technical standards that signal when content has been created using the technology.

The Facebook and Instagram owner said in a blog post that it would apply labels in the coming months to inform users when an image has been created using AI services run by OpenAI, Microsoft, Google, and others. The company already labels images generated with its in-house AI tools and posted on its platforms, and plans to expand that labeling.

Nick Clegg, president of global affairs at Meta Platforms, said the distinction between human-created and synthetic content is becoming increasingly blurred, and people want to know where the boundary lies. He said users are often encountering AI-generated content for the first time and are keen for transparency around the technology.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” said Clegg. “People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it.”

Clegg added, however, that it was more difficult to mark and identify AI-generated audio and video content, with detection solutions still being developed. Meta is also unable to label written text generated by tools such as ChatGPT, with Clegg telling Reuters, “That ship has sailed.”