How philosopher Shannon Vallor delivered the year’s best critique of AI

A few years ago, Shannon Vallor found herself in front of Cloud Gate, Anish Kapoor’s hulking mercury drop of a sculpture, better known as the Bean, in Chicago’s Millennium Park. Staring into its shiny mirrored surface, she noticed something. 

“I was seeing how it reflected not only the shapes of individual people, but big crowds, and even larger human structures like the Chicago skyline,” she recalls, “but also that these were distorted—some magnified, others shrunk or twisted.” 

To Vallor, a professor of philosophy at the University of Edinburgh, this was reminiscent of machine learning, “mirroring the patterns found in our data, but in ways that are never neutral or ‘objective,’” she says. The metaphor became a popular part of her lectures, and, with the advent of large language models (and the many AI tools they power), has gained potency. AI’s “mirrors” look and sound a lot like us because they are reflecting their inputs and training data, with all of the biases and peculiarities that entails. And whereas other analogies for AI might convey a sense of living intelligence (think of the “stochastic parrot” of the widely cited 2021 paper), the “mirror” is more apt, says Vallor: AI isn’t sentient, just a flat, inert surface, captivating us with its fun-house illusions of depth.

The metaphor becomes Vallor’s lens in her recent book The AI Mirror, a sharp, witty critique that shatters many of the prevailing illusions we have about “intelligent” machines and turns some precious attention back on us. In anecdotes about our early encounters with chatbots, she hears echoes of Narcissus, the hunter in Greek mythology who fell in love with the beautiful face he saw when he looked in a pool of water, thinking it was another person. Like him, says Vallor, “our own humanity risks being sacrificed to that reflection.”

She’s not anti-AI, to be clear. Both individually and as codirector of BRAID, a U.K.-wide nonprofit devoted to integrating technology and the humanities, Vallor has advised Silicon Valley companies on responsible AI. And she sees some value in “narrowly targeted, safe, well-tested, and morally and environmentally justifiable AI models” for tackling hard health and environmental problems. But as she’s watched the rise of algorithms, from social media to AI companions, she admits her own connection to technology has lately felt “more like being in a relationship that slowly turned sour. Only you don’t have the option of breaking up.”

For Vallor, one way to navigate—and hopefully guide—our increasingly uncertain relationships with digital technology is to tap into our virtues and values, like justice and practical wisdom. Being virtuous, she notes, isn’t about who we are but what we do, part of a “struggle” of self-making as we experience the world, in relation with other people. AI systems, on the other hand, might reflect an image of human behavior or values, but, as she writes in The AI Mirror, they “know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains.” At the same time, the algorithms, trained on historical data, quietly limit our futures, with the same thinking that left the world “rife with racism, poverty, inequality, discrimination, [and] climate catastrophe.” How, she wonders, will we deal with emergent problems that have no precedent? “Our new digital mirrors point backward.”

As we rely more heavily on machines, optimizing for certain metrics like efficiency and profit, Vallor worries we risk weakening our moral muscles, too, losing track of the values that make living worthwhile.

As we discover what AI can do, we’ll need to focus on leveraging uniquely human traits, too, like context-driven reasoning and moral judgment, and on cultivating our distinctly human capacities. You know, like contemplating a giant bean sculpture and coming up with a powerful metaphor for AI. “We don’t need to ‘defeat’ AI,” she says. “We need to not defeat ourselves.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai

Published Dec 11, 2024

