A case for making our AI chatbots more confrontational

Spend any time interacting with AI chatbots and their tone can start to grate. No question is too taxing or intrusive for these incorporeal assistants, and if you probe too far under the hood, the bot responds with platitudes designed to dull the interaction.

Nearly a year and a half into the generative-AI revolution, researchers are starting to wonder whether that deathly dull format is the best approach.

“There was something off with the tone and values that were being embedded in large language models,” says Alice Cai, a researcher at Harvard University. “It felt very paternalistic.” Beyond that, Cai says, it felt overly Americanized, imposing norms of agreeable, often saccharine consensus—which aren’t shared by the entire world.

In Cai’s household growing up, criticism was commonplace—and healthy, she says. “It was used as a way to incite growth, and honesty was a really important currency of my family unit.” That prompted her and colleagues at Harvard and the University of Montreal to explore whether a more antagonistic AI design would better serve users.

In their study, published in the open-access repository arXiv, the academics conducted a workshop asking participants to imagine how a human personification of the current crop of generative-AI chatbots would look if brought to life. The answer: a white, middle-class customer service representative with a rictus smile and an unflappable attitude—and clearly that’s not always the best approach. “We humans don’t just value politeness,” says Ian Arawjo, assistant professor in human-computer interaction at the University of Montreal and one of the study’s coauthors.

Indeed, says Arawjo, “in many different domains, antagonism, broadly construed, is good.” The researchers suggest that an AI coded to be antagonistic, rather than supplicating and sickeningly consensual, could help users confront their assumptions, build resilience, and develop healthier relational boundaries.

One of the potential deployments for a confrontational AI that the researchers came up with was in intervention, to shake a user out of a bad habit. “We had a team come up with an interventional system that could recognize when you were doing something that you might consider a bad habit,” says Cai. “And it does use a confrontational coaching approach that you often see used in sports, or sometimes in self-help.”

However, Arawjo points out that the use of confrontational AIs would require careful oversight and regulation, especially if it were deployed in those areas.

But the research team has been surprised by the positive response to their suggestion of retooling AIs to be a little less polite. “I think the time has come for this kind of idea and exploring these systems,” says Arawjo. “And I would really like to see more empirical investigations so we can start to tease out how you actually do this in practice, and where it could be beneficial and where it might not be—or what the trade-offs are.”

https://www.fastcompany.com/91035372/a-case-for-making-our-ai-chatbots-more-confrontational

Created 1y | Feb 23, 2024, 07:50:03

