According to a new research paper by Microsoft’s AI for Good Lab, humans are surprisingly (or maybe not-so-surprisingly) bad at detecting and recognizing AI-generated images.
The study collected data from an online “Real or Not Quiz” game, involving over 12,500 global participants who analyzed approximately 287,000 total images (a randomized mixture of real and AI-generated) to determine which ones were real and which ones were fake.
The results showed that participants had an overall success rate of around 62 percent, only slightly better than a coin flip. The study also found that fake images of faces are easier to identify than fake landscapes, but even then the difference is only a few percentage points.
In light of this study, Microsoft is advocating for clearer labeling of AI-generated images, but critics point out that it’s easy to get around this by cropping the images in question.
Further reading: How to spot AI trickery and the biggest red flags