According to a new research paper by Microsoft’s AI for Good Lab, humans are surprisingly (or maybe not-so-surprisingly) bad at detecting and recognizing AI-generated images.
The study collected data from an online “Real or Not Quiz” game, involving over 12,500 global participants who analyzed approximately 287,000 total images (a randomized mixture of real and AI-generated) to determine which ones were real and which ones were fake.
The results showed that participants had an overall success rate of around 62 percent—only slightly better than flipping a coin. The study also found that fake images of faces are easier to identify than fake landscapes, but even then the difference is only a few percentage points.
In light of this study, Microsoft is advocating for clearer labeling of AI-generated images, but critics point out that such labels are easy to circumvent by simply cropping the images in question.
Further reading: How to spot AI trickery and the biggest red flags