OpenAI is launching a tool that can detect images created by its text-to-image generator DALL-E 3, the Microsoft-backed startup said on Tuesday amid rising worries about the influence of AI-generated content in this year’s global elections.
The company said the tool correctly identified images created by DALL-E 3 about 98% of the time in internal testing and can handle common modifications such as compression, cropping and saturation changes with minimal impact.
The ChatGPT creator also plans to add tamper-resistant watermarking to mark digital content such as photos or audio with a signal that should be hard to remove.
As part of these efforts, OpenAI has also joined an industry group that includes Google, Microsoft and Adobe and that is working to provide a standard to help trace the origin of different types of media.
In April, during India’s ongoing general election, fake videos of two Bollywood actors criticizing Prime Minister Narendra Modi went viral online.
AI-generated content and deepfakes are increasingly being used in elections in India and elsewhere in the world, including in the U.S., Pakistan and Indonesia.
OpenAI said it is joining Microsoft in launching a $2 million “societal resilience” fund to support AI education.
—Priyanka G, Reuters