Most students are using AI to enhance learning, not outsource it, research shows

Over 80% of Middlebury College students use generative AI for coursework, according to a recent survey I conducted with my colleague and fellow economist Zara Contractor. This is one of the fastest technology adoption rates on record, far outpacing the 40% adoption rate among U.S. adults, and it happened less than two years after ChatGPT’s public launch.

Although we surveyed only one college, our results align with similar studies, providing an emerging picture of the technology’s use in higher education.

Between December 2024 and February 2025, we surveyed over 20% of Middlebury College’s student body, or 634 students, to better understand how students use artificial intelligence. We published our results in a working paper that has not yet gone through peer review.

What we found challenges the panic-driven narrative around AI in higher education and instead suggests that institutional policy should focus on how AI is used, not whether it should be banned.

Not just a homework machine

Contrary to alarming headlines suggesting that “ChatGPT Has Unraveled the Entire Academic Project” and “AI Cheating Is Getting Worse,” we discovered that students primarily use AI to enhance their learning rather than to avoid work.

When we asked students about 10 different academic uses of AI—from explaining concepts and summarizing readings to proofreading, creating programming code, and, yes, even writing essays—explaining concepts topped the list. Students frequently described AI as an “on-demand tutor,” a resource that was particularly valuable when office hours weren’t available or when they needed immediate help late at night.

We grouped AI uses into two types: “augmentation” for uses that enhance learning, and “automation” for uses that produce work with minimal effort. Among students who use AI, 61% employ these tools for augmentation purposes, while 42% use them for automation tasks like writing essays or generating code. (The categories overlap, since many students do both.)

Even when students used AI to automate tasks, they showed judgment. In open-ended responses, students told us that when they did automate work, it was often during crunch periods like exam week, or for low-stakes tasks like formatting bibliographies and drafting routine emails, not as their default approach to completing meaningful coursework.

Of course, Middlebury is a small liberal arts college in Vermont with a relatively large share of wealthy students. What about everywhere else? To find out, we analyzed data from other researchers covering over 130 universities across more than 50 countries. The results mirror our Middlebury findings: Globally, students who use AI are more likely to use it to augment their coursework than to automate it.

But should we trust what students tell us about how they use AI? An obvious concern with survey data is that students might underreport uses they see as inappropriate, like essay writing, while overreporting legitimate uses like getting explanations. To verify our findings, we compared them with data from AI company Anthropic, which analyzed actual usage patterns of its chatbot, Claude, in conversations tied to university email addresses.

Anthropic’s data shows that “technical explanations” represent a major use, matching our finding that students most often use AI to explain concepts. Similarly, Anthropic found that designing practice questions, editing essays, and summarizing materials account for a substantial share of student usage, which aligns with our results.

In other words, our self-reported survey data matches actual AI conversation logs.

Why it matters

As writer and academic Hua Hsu recently noted, “There are no reliable figures for how many American students use AI, just stories about how everyone is doing it.” These stories tend to emphasize extreme examples, like a Columbia student who used AI “to cheat on nearly every assignment.”

But these anecdotes can conflate widespread adoption with universal cheating. Our data confirms that AI use is indeed widespread, but students primarily use it to enhance learning, not replace it. This distinction matters: By painting all AI use as cheating, alarmist coverage may normalize academic dishonesty, making responsible students feel naive for following rules when they believe “everyone else is doing it.”

Moreover, this distorted picture provides biased information to university administrators, who need accurate data about actual student AI usage patterns to craft effective, evidence-based policies.

What’s next

Our findings suggest that extreme policies like blanket bans or unrestricted use carry risks. Prohibitions may disproportionately harm students who benefit most from AI’s tutoring functions while creating unfair advantages for rule breakers. But unrestricted use could enable harmful automation practices that may undermine learning.

Instead of one-size-fits-all policies, our findings lead me to believe that institutions should focus on helping students distinguish beneficial AI uses from potentially harmful ones. Unfortunately, research on AI’s actual learning impacts remains in its infancy—no studies I’m aware of have systematically tested how different types of AI use affect student learning outcomes, or whether AI impacts might be positive for some students but negative for others.

Until that evidence is available, everyone interested in how this technology is changing education must use their best judgment to determine how AI can foster learning.

Germán Reyes is an assistant professor of economics at Middlebury.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
