Apple should shut off its AI summaries of news alerts until it can prevent them from spreading fake news. Anecdotes of iPhone users seeing weirdly phrased summaries of texts and other content are common. But getting news headlines wrong is another matter. Washington Post tech columnist Geoffrey Fowler is the latest to call out the problem.
“[A]pple Intelligence is so bad that today it got every fact wrong in its AI summary of [Washington Post] news alerts,” Fowler posted on Bluesky Wednesday. An Apple Intelligence summary Fowler received on his iPhone told him that Pete Hegseth, the defense secretary nominee, had been “fired,” that Trump’s tariff policies were affecting inflation, and that Pam Bondi and Marco Rubio, the incoming administration’s nominees for attorney general and secretary of state, respectively, had been confirmed for their Cabinet posts. None of those AI summaries correctly reflected the original notifications.
“It’s wildly irresponsible that Apple doesn’t turn off summaries for news apps until it gets a bit better at this AI thing,” Fowler wrote. He’s right. Apple is contributing to the misinformation problem now burning out of control in the digital space—all for barely needed summarizations of already pithy news headlines.
This isn’t the first time Apple Intelligence’s summaries have been called out for gross inaccuracies. In November, one misrepresented a headline about Israeli Prime Minister Benjamin Netanyahu. The following month, the BBC complained that an Apple notification made it seem as if the BBC had reported that Luigi Mangione, charged with killing UnitedHealthcare CEO Brian Thompson, had shot himself. Days after that error, the journalist organization Reporters Without Borders called on Apple to suspend the news alert summaries, saying it was “very concerned by the risks posed to media outlets.”
Apple turned on the notification summaries for iPhone users in a software update in late October 2024. It was part of a larger package of Apple Intelligence AI features launched within iOS 18.1.
Artificial intelligence companies have developed a number of methods for keeping AI models from making up facts. Some use a technique called retrieval-augmented generation (RAG), in which the AI consults indexed data or web sources to confirm facts or add context before it answers. Others have built separate AI models that scrutinize and fact-check another model’s output.
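To make the RAG idea concrete, here is a minimal, hypothetical sketch in Python. The function names, prompt wording, and stand-in model calls are illustrative assumptions, not Apple’s system or any vendor’s actual API; the point is simply that the summarizer is handed the retrieved source text so its output can be grounded against it.

from typing import Callable

def rag_summarize(
    notification: str,
    retrieve_source: Callable[[str], str],  # fetches the full article the alert points to (assumed helper)
    generate: Callable[[str], str],         # any text-generation model call (assumed helper)
) -> str:
    # Retrieve the original source text, then instruct the model to use only facts found in it.
    source_text = retrieve_source(notification)
    prompt = (
        "Summarize the notification below in one sentence. "
        "Use only facts stated in the source text; if a claim is not "
        "supported by the source, leave it out.\n\n"
        f"Notification: {notification}\n\n"
        f"Source text: {source_text}\n"
    )
    return generate(prompt)

if __name__ == "__main__":
    # Toy usage with stand-in functions; a real system would call a retriever and a model here.
    alert = "(example news alert text)"
    print(rag_summarize(
        alert,
        retrieve_source=lambda n: "(full article text fetched from the publisher would go here)",
        generate=lambda p: "(stand-in for a model call that returns a grounded one-sentence summary)",
    ))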
We’ve developed some tolerance for tech companies pushing out new AI features before they’re fully baked. But when such features do real harm, they must be taken back into the lab for more work. That’s what Apple should do with its news summaries.
Apple did not immediately respond to a request for comment.