
Google will begin flagging AI-generated images in Search later this year

Google has announced plans to introduce changes to its Search feature, aiming to provide clearer identification of images that have been created or modified using AI tools. Over the coming months, Google will start flagging AI-generated and AI-edited images in the “About this image” section on platforms like Google Search, Google Lens, and the Android-exclusive Circle to Search feature. These disclosures could extend to other Google platforms, such as YouTube, though more details on this will be revealed later this year.

The key point is that only images containing “C2PA metadata” will be marked as AI-altered in Search. C2PA, or the Coalition for Content Provenance and Authenticity, is working on standards to track an image's origin, including the devices and software used to capture or create it. This initiative is supported by major companies like Google, Amazon, Microsoft, OpenAI, and Adobe. However, as highlighted by The Verge, C2PA’s standards face adoption and compatibility issues, with only a few generative AI tools and select cameras from Leica and Sony currently supporting them.
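For context on what "containing C2PA metadata" means in practice: C2PA manifests are embedded in image files as JUMBF boxes, which in JPEGs live inside APP11 segments. The sketch below is only a rough heuristic that walks a JPEG's segments and looks for those markers; it does not parse or cryptographically verify a manifest, which is what real provenance tooling (such as the official C2PA SDKs) would do, and the file names are purely illustrative.

```python
# Crude heuristic sketch: detect whether a JPEG appears to carry C2PA/JUMBF
# provenance metadata. This only scans segment markers; it does NOT validate
# or verify a manifest.

import struct
import sys


def has_c2pa_marker(path: str) -> bool:
    """Return True if the JPEG seems to contain C2PA/JUMBF metadata."""
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):          # not a JPEG (missing SOI marker)
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:                   # lost sync with the segment stream
            break
        marker = data[offset + 1]
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            offset += 2                            # markers that carry no length field
            continue
        if marker == 0xDA:                         # start of scan: stop before image data
            break
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        segment = data[offset + 4:offset + 2 + length]
        if marker == 0xEB and (b"jumb" in segment or b"c2pa" in segment):
            return True                            # APP11 segment holding a JUMBF box
        offset += 2 + length

    return False


if __name__ == "__main__":
    # Usage: python c2pa_check.py photo1.jpg photo2.jpg
    for image in sys.argv[1:]:
        print(image, "->", "C2PA metadata found" if has_c2pa_marker(image) else "none detected")
```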

Furthermore, C2PA metadata can be removed, corrupted, or rendered unreadable, making it a less than perfect solution. Some widely-used AI tools, such as Flux (utilized by xAI's Grok chatbot for image generation), do not incorporate C2PA metadata, partly because their developers have not endorsed the standard.

Despite these challenges, these measures are a step in the right direction amid the growing prevalence of deepfakes. One report estimates a 245% increase in scams using AI-generated content between 2023 and 2024. According to Deloitte, losses related to deepfakes are expected to skyrocket from $12.3 billion in 2023 to $40 billion by 2027. Public surveys also indicate that most people are worried about being misled by deepfakes and the potential of AI to spread propaganda.

Source: https://techcrunch.com/2024/09/17/g...i-generated-images-in-search-later-this-year/
 
This is actually really interesting! I can understand why they would want to flag AI-generated or modified images, especially when people use those images to scam others by passing them off as their own work.
 
I support the fight against misinformation and deepfakes, and I think Google has made the right choice. That said, its effectiveness relies heavily on industry-wide C2PA adoption, so I'm keeping my fingers crossed on that front.
 
This is great for numerous reasons. Scams come to the top of my mind, but this move will also allow actual photographers and designers to thrive again. It may take time, but things will start looking up for them soon.
 