Apple is stuck in neutral when it comes to personal AI

“At long last, Apple has finally entered the AI race.”

That was the first line of my story about Apple’s announcements at WWDC 2024, almost exactly a year ago today. After the company announced a slate of highly personalized AI features last June, Apple seemed poised to finally reap the rewards of its long-running effort to build trust around user data privacy.

At last year’s WWDC, Apple software chief Craig Federighi said the company intended to offer “a personal intelligence model” on users’ phones that “draws on personal context.” Apple, it seemed, was finally going to leverage its considerable strengths and deliver proactive AI features built on the user’s own personal data.

But 12 months later, that still hasn’t come to fruition. Federighi said early in today’s WWDC 2025 keynote that the AI personalization features, which tap into private user data to offer proactive AI-generated insights, had failed to reach Apple’s “high bar” for quality. More announcements on that front, he added, would arrive “in the coming year.” Meanwhile, the world outside Cupertino moves on, and the generative AI boom continues to accelerate.

After last year’s keynote, this year’s presentation felt like a throwback to years past, with Federighi and friends running through a litany of modest UX and feature upgrades to iOS and Apple’s other operating systems. In iOS, they showed us a breezy new look (featuring translucent app and widget panels), refreshed icon designs, a live translation feature, and new Genmoji tricks (combine two existing emoji into one!). There’s also a new AI‑powered 3D effect that shifts a lock screen image’s perspective as the user tilts their phone. (Didn’t the ill-fated Amazon Fire Phone do that?)

There were also some updates to the Visual Intelligence features announced last year. iPhone users can already point their camera at real-world objects to get more information. Now they can do the same within any app, including social media, to identify and learn about objects—and in some cases, find out where to buy them. The screenshot interface now lets users search for items within the image or ask ChatGPT deeper questions about them. (Google announced a similar on-screen object search earlier this year.)

But while Google, OpenAI, and Anthropic are integrating their AI models with users’ personal and professional data, Apple seems stuck in neutral.

Apple had every opportunity to own personal AI. It had a big head start with the acquisition of Siri way back in 2010. It controls billions of devices, it designs the chips, and it has the trust of billions of consumers. And Apple users already store a wealth of personal data on their phones, which the company could leverage to offer a personal assistant with an intimate knowledge of the user. 

Instead, while Apple talks about new icons and Genmoji, OpenAI continues embedding itself deeper into consumer workflows and digital habits. ChatGPT is building a memory of users’ tastes and preferences. And now, Apple’s former design guru is helping OpenAI build a hardware device that could harness all that power.

I was optimistic when Apple hired John Giannandrea from Google in April 2018 to lead its machine learning and AI strategy. Reporting directly to Tim Cook, he seemed like the right person to inject new life into Apple’s AI ambitions. After all, transformer models—the foundation of the generative AI revolution—were invented at Google while Giannandrea was still there in 2017. They sparked immense excitement and innovation at Google, and I hoped he’d bring that same energy to Apple, baking it into core products like the still-lagging Siri. He didn’t. Giannandrea still leads Apple’s “core AI” division, but Siri and robotics have since been moved out from under his leadership.

Still, that’s not to say Apple is anywhere close to cooked. The company makes the biggest and best smartphone in the world, and will for a long time. And it sells digital services through its devices better than anyone else. But it’s worrying that Apple doesn’t seem to be acting with the urgency the moment demands. The next big thing is here, and Apple isn’t at the forefront.

Separately, Apple researchers released a widely discussed paper over the weekend that calls into question whether new “large reasoning models” are capable of the kind of cognitive function that could lead to artificial general intelligence, where the AI performs as well or better than humans at most tasks.

https://www.fastcompany.com/91349176/apple-is-stuck-in-neutral-when-it-comes-to-personal-ai?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1d | June 9, 2025, 20:30:04

