As the web fills up with junk text written by robots, it’s no surprise that AI-generated slop is also making its way into ebooks, YouTube, and even Wikipedia. That last one is a particular problem, since Wikipedia’s open access is its greatest strength and its primary method for dealing with misinformation. In response, the site’s administrators are using new policies to combat the onslaught of AI text.
According to a new policy write-up, Wikipedia administrators now have the authority and the tools to rapidly delete articles and edits that are obviously generated by large language models. The policy expands an existing “speedy deletion” option that bypasses the usual week-long discussion among Wikipedia’s volunteer editors and administrators before a full deletion.
Articles that are new or have been substantially rewritten can now be flagged for obvious signs of LLM text, including phrases like “Here is your Wikipedia article on…” or citations and references to things that don’t exist. (That’s a common problem for auto-generated text, as some lawyers and would-be diners of glue-enhanced pizza have discovered.) The presence of both is a strong indicator that whoever submitted the article hasn’t even read through it themselves.
If the article shows such telltale signs of being automatically generated, it can be tossed out under the speedy deletion option, something previously reserved for additions that were obvious nonsense or thinly-disguised advertisements.
In an interview with 404 Media, Wikimedia editor Ilyas Lebleu says that most removed articles still go through the week-long discussion option, but the slew of quickly-generated content necessitated a much faster way to deal with obvious junk. Lebleu calls this a “band-aid” for the most egregious examples of AI-generated submissions, though the larger problem will continue.
Not for the first time, I can’t help but think of John Henry racing the steam drill, a timeless image of humans versus machines. Wikimedia’s new policy is notable in contrast to a would-be change from earlier this year, when editors overwhelmingly rejected AI-generated article summaries. “Wikipedia’s brand is reliability, traceability of changes, and ‘anyone can fix it.’ AI is the opposite of these things,” said Wikipedia editor Bawolff.