According to a new research paper by Microsoft’s AI for Good Lab, humans are surprisingly (or maybe not-so-surprisingly) bad at detecting and recognizing AI-generated images.
The study collected data from an online “Real or Not Quiz” game, in which more than 12,500 participants worldwide reviewed roughly 287,000 images in total (a randomized mixture of real and AI-generated) and judged whether each one was real or fake.
The results showed an overall success rate of around 62 percent, only slightly better than flipping a coin. The study also found that fake images of faces are easier to spot than fake landscapes, but even then the difference is only a few percentage points.
In light of this study, Microsoft is advocating for clearer labeling of AI-generated images, but critics point out that it’s easy to get around this by cropping the images in question.
Further reading: How to spot AI trickery and the biggest red flags