Google’s Gemini 2.5 Pro could be the most important AI model so far this year

Google released its new Gemini 2.5 Pro Experimental AI model late last month, and it has quickly racked up top marks on a number of coding, math, and reasoning benchmarks, making it a contender for the world's best model right now.

Gemini 2.5 Pro is a “reasoning” model, meaning its answers derive from a mix of training data and real-time reasoning performed in response to the user prompt or question. Like other newer models, Gemini 2.5 Pro can consult the web, but it also contains a fairly recent snapshot of the world’s knowledge: Its training data cuts off at the end of January 2025.

Last year, in order to boost model performance, AI researchers began shifting toward teaching models to "reason" while they're live and responding to user prompts. This approach requires models to process and retain increasingly large amounts of data to arrive at accurate answers. (Gemini 2.5 Pro, for example, can handle up to a million tokens of context.) However, models often struggle with information overload, making it difficult to extract meaningful insights from all that context.

Google appears to have made progress on this front. The YouTube channel AI Explained points out that Gemini 2.5 Pro fared very well on a new benchmark called Fiction.liveBench, which is designed to test a model's ability to retain and comprehend information across a long context. For instance, Fiction.liveBench might ask the model to read a novelette and answer questions that require a deep understanding of the story and characters. Some of the top models, including those from OpenAI and Anthropic, score well when the amount of text they're given (the context window) is relatively small. But as the context window grows to 32,000, then 60,000, then 120,000 tokens (roughly the length of a novelette), Gemini 2.5 Pro stands out for its superior comprehension.

That's important because some of the most productive use cases to date for generative AI involve comprehending and summarizing large amounts of data. A service representative might depend on an AI tool to comb through voluminous manuals to help someone in the field who's struggling with a technical problem, or a corporate compliance officer might need a long context window to sift through years of regulations and policies.
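
To make that workflow concrete, here's a minimal sketch of how a developer might feed a long document to Gemini 2.5 Pro through Google's google-genai Python SDK. The file name, prompt, API key placeholder, and exact model identifier are illustrative assumptions rather than details from the article, so check Google's current documentation before relying on them.

# Minimal sketch (illustrative, not from the article): asking Gemini to
# answer questions grounded in a long document via the google-genai SDK.
# Assumes `pip install google-genai`, a valid API key, and a model string
# along the lines of "gemini-2.5-pro" (verify against current docs).
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Hypothetical long input: a product manual, policy archive, or novelette.
with open("service_manual.txt", "r", encoding="utf-8") as f:
    long_document = f.read()

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed identifier; swap in the current one
    contents=[
        "Using only the manual below, explain how to troubleshoot the reported issue:",
        long_document,
    ],
)
print(response.text)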

Gemini 2.5 Pro also scored much higher than competing reasoning models on a new benchmark called MathArena, which tests models using hard questions from recent math Olympiads and contests. The test also requires the model to clearly show its reasoning as it steps toward an answer. Top models from OpenAI, Anthropic, and DeepSeek failed to break 5%, but Gemini 2.5 Pro scored an impressive 24.4%.

The new Google model also scored high on another extremely difficult benchmark called Humanity's Last Exam, which is meant to show when AI models exceed the knowledge and reasoning of top experts in a given field. Gemini 2.5 Pro scored 18.8%, topped only by OpenAI's Deep Research model. The model also now sits atop the crowdsourced benchmarking leaderboard LMArena.

Gemini 2.5 Pro is also among the top models for computer coding. It scored 70.4% on the LiveCodeBench benchmark, coming in just behind OpenAI's o3-mini model, which scored 74.1%. On SWE-bench, which measures agentic coding, Gemini 2.5 Pro scored 63.8%, while Anthropic's latest Claude 3.7 Sonnet scored 70.3%. Finally, Google's model outscored Anthropic, OpenAI, and xAI models on the MMMU visual understanding benchmark by roughly 6 points.

Google initially released the new model to paying subscribers but has since made it accessible to all users for free.


https://www.fastcompany.com/91311063/google-gemini-2-5-pro-testing?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
