
Explained: Big Tech and digital content credibility amid AI-based misinformation surge


On Friday, Google became the latest technology company to join the Coalition for Content Provenance and Authenticity (C2PA) in its bid to make digital content transparent and trustworthy. The maker of the Gemini large language model joins Microsoft, Adobe, BBC, Intel, Publicis Groupe, Sony, and Truepic as a steering committee member.

Google's entry into the C2PA comes at a time of increased dissemination of artificial intelligence-based misinformation, especially as the world stands on the cusp of several major elections.

To be sure, the C2PA offers open technical standards that let creators and consumers trace the origin of different types of media – text, image, video, or audio – and access transparency tools such as Content Credentials. A Content Credential works like a nutrition label for digital content: it records the creator's name, the date an asset was created, the tools used to create it, and any edits made along the way.
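
To picture what such a "nutrition label" contains, consider the sketch below. It is a hypothetical, simplified illustration in Python of the kinds of fields a Content Credential records; the function name and field names are made up for illustration, and the real C2PA manifest is a signed binary structure, not a plain dictionary.

```python
# Hypothetical, simplified sketch of the provenance fields a Content
# Credential records. The real C2PA manifest format is a signed binary
# structure; this dict only mirrors the "nutrition label" idea.
import json
from datetime import datetime, timezone

def make_content_credential(creator, tool, edits):
    """Build a toy provenance record for a piece of media."""
    return {
        "creator": creator,                                 # who made it
        "created": datetime.now(timezone.utc).isoformat(),  # when it was created
        "tool": tool,                                       # what produced it
        "edits": edits,                                     # changes along the way
    }

credential = make_content_credential(
    creator="Jane Doe",
    tool="ExampleImageEditor 2.1",
    edits=["cropped", "color-corrected"],
)
print(json.dumps(credential, indent=2))
```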

“We are thrilled to welcome Google as the newest steering committee member of the C2PA, marking a pivotal milestone in our collective effort to combat misinformation at scale. Google’s membership will help accelerate adoption of Content Credentials everywhere, from content creation to consumption,” said Dana Rao, General Counsel and Chief Trust Officer at Adobe and co-founder of the C2PA.

Notably, OpenAI announced last month that it will implement the C2PA's cryptography-based content provenance technique to identify images generated by its tools. The AI firm is also working internally on a provenance classifier which, the company claims, has shown ‘promising early results’.
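
The core idea behind cryptography-based provenance is to bind metadata to a hash of the content and sign the result, so any tampering with either breaks verification. The toy sketch below illustrates that hash-and-sign pattern only; the actual C2PA specification uses certificate-based signatures, not the shared-key HMAC and made-up helper functions shown here.

```python
# Toy illustration of binding provenance metadata to content by signing a
# hash of it. Assumption: this shared-key HMAC stands in for the
# certificate-based signatures the real C2PA specification uses.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key; real systems use certificates

def sign_provenance(content: bytes, metadata: dict) -> dict:
    """Attach a signature over the content hash plus its metadata."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"hash": content_hash, **metadata}, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the signature, then check the content hash still matches."""
    expected = hmac.new(SECRET_KEY, record["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    return json.loads(record["payload"])["hash"] == hashlib.sha256(content).hexdigest()

record = sign_provenance(b"image bytes here", {"generator": "example-model"})
print(verify_provenance(b"image bytes here", record))  # True
print(verify_provenance(b"tampered bytes", record))    # False
```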

Meta, which has increased its investment in AI according to its recent earnings report, announced this week that it will offer markers to identify AI-generated images. The Mark Zuckerberg-led company said it will roll out visible markers on such images, along with invisible watermarks and metadata embedded within image files.
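
Embedding metadata in an image file can be as simple as writing key-value pairs into the file's metadata chunks; invisible watermarks, by contrast, alter the pixels themselves and can survive metadata stripping. The sketch below shows the general metadata idea for PNG text chunks using the Pillow library, with made-up label keys; it is not Meta's actual labeling or watermarking implementation.

```python
# Minimal sketch of embedding provenance metadata in a PNG's text chunks
# using Pillow. Illustrative only: the label keys are hypothetical and this
# is not Meta's actual watermarking or labeling pipeline.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a placeholder image (stand-in for an AI-generated one).
img = Image.new("RGB", (64, 64), color="gray")

meta = PngInfo()
meta.add_text("ai_generated", "true")           # hypothetical label key
meta.add_text("generator", "example-model-v1")  # hypothetical tool name
img.save("labeled.png", pnginfo=meta)

# Read the metadata back from the saved file.
print(Image.open("labeled.png").text)  # {'ai_generated': 'true', 'generator': ...}
```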

As AI and machine learning tools grow more sophisticated, their output is becoming indistinguishable from real content. Technologically, this is a big feat; however, it poses a major risk to society at large. Reports abound of targeted harassment and attacks on individuals and institutions. This has led stakeholders across governments, civil society, and the tech community to seek solutions that at least limit the impact.

For instance, after a deepfake video of Indian actress Rashmika Mandanna (followed by videos of other celebrities) circulated widely on social media, the IT ministry issued an advisory to all social media platforms. The ministry has now made it mandatory for intermediaries to inform users about prohibited content, especially content specified under Rule 3(1)(b) of the IT Rules. Rule 3(1)(b) requires intermediaries to communicate their rules, regulations, privacy policy, and user agreement in the user’s preferred language.

The proliferation of seemingly genuine but AI-generated content has created a sentiment of distrust among people. Fact-checking group Full Fact recently found that the rise of AI-generated images is eroding public trust in online information. The group also advocated increased media literacy and adequate government action to safeguard against fake digital content.
