Google is adding two new features to its image search to reduce the spread of misinformation, especially now that artificial intelligence tools have made the creation of photorealistic fakes trivial.
The Alphabet Inc. unit’s first new feature, called ‘About this image,’ serves up additional context such as when an image, or similar ones, was first indexed by Google, where it first appeared, and where else it has shown up online. The intent is to help users pinpoint an image’s source and to surface any debunking that news organizations may have provided.
Google will also mark every AI-generated image created by its own tools as such, and it’s working with other platforms and services to ensure they add the same markup to the files they put out.
Midjourney and Shutterstock are among the partners Google has signed up, and the goal is for all AI-generated content that appears in search results to be flagged.
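One plausible form for that markup is the IPTC photo-metadata standard’s ‘Digital Source Type’ field, whose trainedAlgorithmicMedia value designates generative-AI imagery. The sketch below assumes the marker is embedded as plain text in the image file (for example inside an XMP packet) and does a rough byte-level check for it; the helper looks_ai_generated is hypothetical, not part of any announced API.

```python
# Rough check for the IPTC 'trainedAlgorithmicMedia' marker in an image file.
# Assumes the provenance markup is stored as plain-text metadata (e.g. XMP)
# inside the file -- a production tool would parse the metadata properly.
import sys

AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"  # IPTC Digital Source Type value for generative AI

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the generative-AI source-type value."""
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE in f.read()

if __name__ == "__main__":
    # Usage: python check_markup.py image1.jpg image2.png ...
    for image_path in sys.argv[1:]:
        verdict = "AI marker found" if looks_ai_generated(image_path) else "no AI marker"
        print(f"{image_path}: {verdict}")
```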
The provenance of images online is a growing issue in the AI age, and several startups are working to produce verification and authentication tools. Microsoft-backed Truepic Inc., for example, offers systems that ensure an image hasn’t been manipulated from capture to delivery.
Google’s new features, which are rolling out over the course of this year, are comparatively low-tech, though with sufficient industry support they may have the bigger positive impact.