Predicting the future, or at least trying to, is the backbone of economics and a guide to how our society evolves. Government policies, investment decisions, and global economic plans are all predicated on estimates of what will happen next. But guessing right is tricky.
However, a new study by researchers at the London School of Economics, the Massachusetts Institute of Technology (MIT), and the University of Pennsylvania suggests that forecasting the future is a task that could well be outsourced to generative AI, with surprising results. Large language models (LLMs) working in a crowd can predict the future as well as humans can, and with a little exposure to human predictions, can improve to superhuman performance.
“Accurate forecasting of future events is very important to many aspects of human economic activity, especially within white collar occupations, such as those of law, business and policy,” says Peter S. Park, AI existential safety postdoctoral fellow at MIT, and one of the coauthors of the study.
Just a dozen LLMs can forecast the future as well as a team of 925 human forecasters, according to Park and his colleagues, who conducted two experiments testing AI's ability to forecast three months into the future. In the first part of the study, both the 925 humans and the 12 LLMs were asked 31 questions to which the answer was yes or no.
Questions included “Will Hamas lose control of Gaza before 2024?” and “Will there be a US military combat death in the Red Sea before 2024?”
Across all the LLM responses to all the questions, compared with the humans' responses to the same questions, the AI models performed as well as the human forecasters. In the second experiment, the AI models were given the human forecasters' median prediction for each question to inform their own predictions. Doing so improved the LLMs' prediction accuracy by between 17% and 28%.
“To be honest, I was not surprised [by the results],” Park says. “There are historical trends that have been true for a long time that make it reasonable that AI cognitive capabilities will continue to advance.” The fact that LLMs are trained on vast volumes of data trawled from the internet, and are designed to produce the most predictable, consensus (some would say average) response, also hints at why they may have strong predictive capabilities. The scale of the data they draw on, and the range of opinions it contains, supercharges the traditional wisdom-of-the-crowd effect that makes aggregated predictions accurate.
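The wisdom-of-the-crowd idea at work here is simple: pool many independent probability estimates, then judge the pooled forecast against what actually happened. The sketch below is purely illustrative, not the study's actual pipeline, and the forecast numbers in it are made up; it just shows how a median aggregate of model forecasts might be computed and scored with a Brier score.

```python
# Illustrative sketch only: aggregate a "crowd" of probability forecasts by
# taking their median, then score the aggregate against the realized outcome.
from statistics import median

def brier_score(prob_yes: float, outcome: int) -> float:
    """Squared error between a forecast probability and the 0/1 outcome (lower is better)."""
    return (prob_yes - outcome) ** 2

# Hypothetical forecasts (probability of "yes") from a dozen models for one question.
llm_forecasts = [0.62, 0.55, 0.70, 0.48, 0.66, 0.59, 0.73, 0.51, 0.64, 0.58, 0.69, 0.60]
outcome = 1  # suppose the event did happen

crowd_forecast = median(llm_forecasts)  # wisdom-of-the-crowd aggregate
best_individual = min(brier_score(p, outcome) for p in llm_forecasts)

print(f"Crowd median forecast:         {crowd_forecast:.2f}")
print(f"Crowd Brier score:             {brier_score(crowd_forecast, outcome):.3f}")
print(f"Best single-model Brier score: {best_individual:.3f}")
```

The median is a common aggregation choice in forecasting because it dampens the effect of a few wildly overconfident or contrarian forecasters, which is part of why a crowd of mediocre predictors can beat most of its individual members.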
The paper’s findings have huge ramifications for our ability to gaze into the metaphorical crystal ball—and for the future employment of human forecasters. As one AI expert put it on X: “Everything is about to get really weird.”