How philosopher Shannon Vallor delivered the year’s best critique of AI

A few years ago, Shannon Vallor found herself in front of Cloud Gate, Anish Kapoor’s hulking mercury drop of a sculpture, better known as the Bean, in Chicago’s Millennium Park. Staring into its shiny mirrored surface, she noticed something. 

“I was seeing how it reflected not only the shapes of individual people, but big crowds, and even larger human structures like the Chicago skyline,” she recalls, “but also that these were distorted—some magnified, others shrunk or twisted.” 

To Vallor, a professor of philosophy at the University of Edinburgh, this was reminiscent of machine learning, “mirroring the patterns found in our data, but in ways that are never neutral or ‘objective,’” she says. The metaphor became a popular part of her lectures, and with the advent of large language models (and the many AI tools they power), it has gained new potency. AI’s “mirrors” look and sound a lot like us because they are reflecting their inputs and training data, with all of the biases and peculiarities that entails. And whereas other analogies for AI might convey a sense of living intelligence (think of the “stochastic parrot” of the widely cited 2021 paper), the “mirror” is more apt, says Vallor: AI isn’t sentient, just a flat, inert surface, captivating us with its fun-house illusions of depth.

The metaphor becomes Vallor’s lens in her recent book The AI Mirror, a sharp, witty critique that shatters many of the prevailing illusions we have about “intelligent” machines and turns some precious attention back on us. In anecdotes about our early encounters with chatbots, she hears echoes of Narcissus, the hunter in Greek mythology who fell in love with the beautiful face he saw when he looked in a pool of water, thinking it was another person. Like him, says Vallor, “our own humanity risks being sacrificed to that reflection.”

She’s not anti-AI, to be clear. Both individually and as codirector of BRAID, a U.K.-wide nonprofit devoted to integrating technology and the humanities, Vallor has advised Silicon Valley companies on responsible AI. And she sees some value in “narrowly targeted, safe, well-tested, and morally and environmentally justifiable AI models” for tackling hard health and environmental problems. But as she’s watched the rise of algorithms, from social media to AI companions, she admits her own connection to technology has lately felt “more like being in a relationship that slowly turned sour. Only you don’t have the option of breaking up.”

For Vallor, one way to navigate—and hopefully guide—our increasingly uncertain relationships with digital technology is to tap into our virtues and values, like justice and practical wisdom. Being virtuous, she notes, isn’t about who we are but what we do, part of a “struggle” of self-making as we experience the world, in relation with other people. AI systems, on the other hand, might reflect an image of human behavior or values, but, as she writes in The AI Mirror, they “know no more of the lived experience of thinking and feeling than our bedroom mirrors know our inner aches and pains.” At the same time, the algorithms, trained on historical data, quietly limit our futures, with the same thinking that left the world “rife with racism, poverty, inequality, discrimination, [and] climate catastrophe.” How, she wonders, will we deal with emergent problems that have no precedent? “Our new digital mirrors point backward.”

As we rely more heavily on machines, optimizing for certain metrics like efficiency and profit, Vallor worries we risk weakening our moral muscles, too, losing track of the values that make living worthwhile.

As we discover what AI can do, we’ll need to focus on cultivating our distinctly human capacities, like context-driven reasoning and moral judgment. You know, like contemplating a giant bean sculpture and coming up with a powerful metaphor for AI. “We don’t need to ‘defeat’ AI,” she says. “We need to not defeat ourselves.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published 11 Dec 2024, 11:50:04
