What we can learn from ChatGPT’s first year

ChatGPT was launched on November 30, 2022, ushering in what many have called artificial intelligence’s breakout year. Within days of its release, ChatGPT went viral. Screenshots of conversations snowballed across social media, and the use of ChatGPT skyrocketed to an extent that seems to have surprised even its maker, OpenAI. By January, ChatGPT was seeing 13 million unique visitors each day, setting a record for the fastest-growing user base of a consumer application.

Throughout this breakout year, ChatGPT has revealed the power of a good interface and the perils of hype, and it has sown the seeds of a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT’s influence in society comes as much from how people view and use it as from the technology itself.

Generative AI systems like ChatGPT are becoming pervasive. Since ChatGPT’s release, some mention of AI has seemed obligatory in presentations, conversations and articles. Today, OpenAI claims 100 million people use ChatGPT every week.

Besides people interacting with ChatGPT at home, employees at all levels up to the C-suite in businesses are using the AI chatbot. In tech, generative AI is being called the biggest platform since the iPhone, which debuted in 2007. All the major players are making AI bets, and venture funding in AI startups is booming.

Along the way, ChatGPT has raised numerous concerns, such as its implications for disinformation, fraud, intellectual property issues, and discrimination. In my world of higher education, much of the discussion has surrounded cheating, which has become a focus of my own research this year.

Lessons from ChatGPT’s first year

The success of ChatGPT speaks foremost to the power of a good interface. AI has already been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. The first version of GPT, the AI model that powers ChatGPT, dates back to 2018. And even OpenAI’s other products, such as DALL-E, did not make the waves that ChatGPT did immediately upon its release. It was the chat-based interface that set off AI’s breakout year.

There is something uniquely beguiling about chat. Humans are endowed with language, and conversation is a primary way people interact with each other and infer intelligence. A chat-based interface is a natural mode for interaction and a way for people to experience the “intelligence” of an AI system. The phenomenal success of ChatGPT shows again that user interfaces drive widespread adoption of technology, from the Macintosh to web browsers and the iPhone. Design makes the difference.

At the same time, one of the technology’s principal strengths—generating convincing language—makes it well suited for producing false or misleading information. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The potential of the technology to boost fraud and misinformation is one of the key rationales for regulating AI.

Amid the real promises and perils of generative AI, the technology has also provided another case study in the power of hype. This year has brought no shortage of articles on how AI is going to transform every aspect of society and how the proliferation of the technology is inevitable.

ChatGPT is not the first technology to be hyped as “the next big thing,” but it is perhaps unique in simultaneously being hyped as an existential risk. Numerous tech titans and even some AI researchers have warned about the risk of superintelligent AI systems emerging and wiping out humanity, though I believe that these fears are far-fetched.

The media environment favors hype, and the current venture funding climate further fuels AI hype in particular. Playing to people’s hopes and fears is a recipe for anxiety with none of the ingredients for wise decision making.

What the future may hold

The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way.

This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful.

ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools.

But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content.

Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own.

As a result, another lesson that everyone—users of ChatGPT or not—will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds.


Tim Gorichanaz is an assistant teaching professor of information science at Drexel University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://www.fastcompany.com/90990376/what-we-can-learn-from-chatgpts-first-year

Published November 30, 2023, 18:30:08


