The internet loves ChatGPT, but there’s a dark side to the tech

Amid the usual doom and gloom that surrounds the internet these days, the world experienced an all-too-rare moment of joy over the past week: the arrival of a new artificial intelligence chatbot, ChatGPT.

The AI-powered chat tool, which takes pretty much any prompt a user throws at it and produces what they ask for, whether code or prose, was launched by AI development company OpenAI on November 30; by December 5, more than one million users had tested it out. The model comes hot on the heels of other generative AI tools that turn text prompts into polished work and that have swept social media in recent months, but its jack-of-all-trades versatility makes it stand out from the crowd.

The chatbot is free to use, though OpenAI CEO Sam Altman expects that will change in the future, and users have embraced the tech wholeheartedly. People have been using ChatGPT to run a virtual Linux machine, answer coding queries, develop business plans, write song lyrics, and even pen Shakespearean verses.

Yet for all the brouhaha, there are some important caveats to note. The system may seem too good to be true, in part because at times it is. While some have professed that there's no need to learn to code because ChatGPT can do it for you, the programming Q&A site Stack Overflow has temporarily banned answers generated by the chatbot because of their poor quality. "The posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers," the site's moderators say.

It's also plagued by the same issues many chatbots have: It reflects society and all the biases society holds. Computational scientist Steven T. Piantadosi, who heads the computation and language lab at UC Berkeley, has highlighted in a Twitter thread a number of issues with ChatGPT, where the AI turns up results suggesting that "good scientists" are white or Asian men, and that African American men's lives should not be saved. Another query prompted ChatGPT to indulge the idea that people's brains differ in size and that their worth as people varies accordingly.

[Embedded tweet from Steven T. Piantadosi (@spiantado), December 4, 2022]

OpenAI did not respond to a request for comment for this story. Altman, responding to Piantadosi's Twitter thread highlighting serious incidents of the chatbot promoting racist beliefs, asked the computational scientist to "please hit the thumbs down on these and help us improve!"

"With these kind of chatbot models, if you search for certain toxic offensive queries, you're likely to get toxic responses," says Yang Zhang, a faculty member at the CISPA Helmholtz Center for Information Security and a coauthor of a September 2022 paper examining how chatbots (not including ChatGPT) turn toxic. "More importantly, if you search some innocent questions that aren't that toxic, there's still a chance that it will give a toxic response."

The reason is the same that nobbles every chatbot: The data it uses to generate its responses are sourced from the internet, and folks online are plenty hostile. Zhang says that any chatbot developers ought to produce the worst-case scenario they can think of for their models as part of the development process, and then use that scenario to propose defense mechanisms to make the model safer. (A ChatGPT FAQ says: “We’ve made efforts to make the model refuse inappropriate requests.”) “We should also make the public aware that such models have a potential risk factor,” says Zhang.

The issue is that people often get caught up in incredulity at the prowess of the models' output. ChatGPT appears to be streets ahead of its competitors, with some already saying it spells the death not just of Google's chat models but of the search engine itself, so accurate are its answers to some questions.

How the model has been trained is another conundrum, says Catalina Goanta, associate professor in private law and technology at Utrecht University. "Because of the very big computational power of these models, and the fact that they rely on all of this data that we cannot map, of course, a lot of ethical questions arise," she says. The challenge is acknowledging the benefits that come from such powerful AI-powered chatbots while also ensuring there are sensible guardrails on their development.

That's difficult to think about in the first flourish of social media hype, but it's important to do so. "I think we need to do more research to understand what are the case studies where it should be fair game to use such very large language models, as is the case with ChatGPT," says Goanta, "and then where we have certain types of industries or situations where it should be forbidden to have that."

https://www.fastcompany.com/90820090/the-internet-loves-chatgpt-but-theres-a-dark-side-to-the-tech

Created Dec 8, 2022, 2:21:49 AM

