According to a new research paper by Microsoft’s AI for Good Lab, humans are surprisingly (or maybe not-so-surprisingly) bad at recognizing AI-generated images.
The study collected data from an online “Real or Not Quiz” game in which more than 12,500 participants worldwide examined roughly 287,000 images, a randomized mix of real photos and AI-generated ones, and judged which were which.
The results showed that participants had an overall success rate of about 62 percent, only slightly better than flipping a coin. The study also found that fake images of faces are easier to identify than fake landscapes, though even there the gap is only a few percentage points.
In light of this study, Microsoft is advocating for clearer labeling of AI-generated images, but critics point out that such labels are easy to defeat simply by cropping the images in question.
Further reading: How to spot AI trickery and the biggest red flags