Amid the usual doom and gloom that surrounds the internet these days, the world experienced an all-too-rare moment of joy over the past week: the arrival of a new artificial intelligence chatbot, ChatGPT.
The AI-powered chat tool, which takes pretty much any prompt a user throws at it and produces what they ask for, whether code or text, was launched by AI development company OpenAI on November 30; by December 5, more than one million users had tried it out. ChatGPT comes hot on the heels of other generative AI systems that take text prompts and spit out polished work, tools that have swept social media in recent months, but its jack-of-all-trades ability makes it stand out from the crowd.
The chatbot is currently free to use, though OpenAI CEO Sam Altman expects that to change, and users have embraced the tech wholeheartedly. People have been using ChatGPT to run a virtual Linux machine, answer coding queries, develop business plans, write song lyrics, and even pen Shakespearean verse.
Yet for all the brouhaha, there are some important caveats to note. The system may seem too good to be true, in part because at times it is. While some have professed that there’s no need to learn to code because ChatGPT can do it for you, programming Q&A site Stack Overflow has temporarily banned answers generated by the chatbot because of their poor quality. “The posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers,” the site’s moderators say.
It’s also plagued by the same issues as many chatbots: It reflects society, including society’s biases. Computational scientist Steven T. Piantadosi, who heads the computation and language lab at UC Berkeley, has highlighted in a Twitter thread a number of issues with ChatGPT, in which the AI produced results suggesting that “good scientists” are white or Asian men, and that African American men’s lives should not be saved. Another query prompted ChatGPT to indulge the idea that people with different brain sizes are more or less valuable.
— steven t. piantadosi (@spiantado) December 4, 2022
OpenAI did not respond to a request for comment for this story. Altman, in response to Piantadosi’s Twitter thread highlighting serious incidents of his chatbot promoting racist beliefs, asked the computational scientist to “please hit the thumbs down on these and help us improve!”
“With these kinds of chatbot models, if you search for certain toxic, offensive queries, you’re likely to get toxic responses,” says Yang Zhang, a faculty member at the CISPA Helmholtz Center for Information Security and coauthor of a September 2022 paper examining how chatbots (not including ChatGPT) turn nasty. “More importantly, if you search some innocent questions that aren’t that toxic, there’s still a chance that it will give a toxic response.”
The reason is the same one that nobbles every chatbot: The data it uses to generate its responses is sourced from the internet, and folks online can be plenty hostile. Zhang says chatbot developers ought to produce the worst-case scenario they can think of for their models as part of the development process, then use that scenario to devise defense mechanisms that make the model safer. (A ChatGPT FAQ says: “We’ve made efforts to make the model refuse inappropriate requests.”) “We should also make the public aware that such models have a potential risk factor,” says Zhang.
The issue is that people often get caught up in incredulity at the models’ prowess. ChatGPT appears to be streets ahead of its competitors, with some already saying it spells the death not just of Google’s chat models but of the search engine itself, so accurate are its answers to some questions.
How the model has been trained is another conundrum, says Catalina Goanta, associate professor in private law and technology at Utrecht University. “Because of the very big computational power of these models, and the fact that they rely on all of this data that we cannot map, of course, a lot of ethical questions arise,” she says. The challenge is acknowledging the benefits that come from such powerful AI-powered chatbots while also ensuring there are sensible guardrails on their development.
That’s difficult to think about amid the first flush of social media hype, but it’s important to do so. “I think we need to do more research to understand what are the case studies where it should be fair game to use such very large language models, as is the case with ChatGPT,” says Goanta, “and then where we have certain types of industries or situations where it should be forbidden to have that.”