Google’s Gemini 2.5 Pro could be the most important AI model so far this year

Google released its new Gemini 2.5 Pro Experimental AI model late last month, and it has quickly racked up top marks on a number of coding, math, and reasoning benchmarks, making it a contender for the world’s best model right now.

Gemini 2.5 Pro is a “reasoning” model, meaning its answers derive from a mix of training data and real-time reasoning performed in response to the user prompt or question. Like other newer models, Gemini 2.5 Pro can consult the web, but it also contains a fairly recent snapshot of the world’s knowledge: Its training data cuts off at the end of January 2025.

Last year, in order to boost model performance, AI researchers began shifting toward teaching models to “reason” while they are live and responding to user prompts. This approach requires models to process and retain increasingly large amounts of data to arrive at accurate answers. (Gemini 2.5 Pro, for example, can handle a context of up to a million tokens.) However, models often struggle with information overload, making it difficult to extract meaningful insights from all that context.
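
For a concrete sense of what a million-token window means in practice, here is a minimal sketch, using Google’s google-genai Python SDK, that counts the tokens in a long document before sending it to the model. The model identifier, file name, and environment variable are illustrative assumptions rather than a recipe from Google; check the current documentation for the exact released names.

    import os
    from google import genai  # pip install google-genai

    # Assumed identifiers for illustration only; the released model name may differ.
    MODEL = "gemini-2.5-pro-exp-03-25"
    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

    # Load a large document, e.g. a technical manual or a novelette.
    with open("field_manual.txt", "r", encoding="utf-8") as f:
        document = f.read()

    # Ask the API how many tokens the document consumes of the ~1,000,000-token window.
    count = client.models.count_tokens(model=MODEL, contents=document)
    print(f"Document uses {count.total_tokens:,} of roughly 1,000,000 tokens")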

Google appears to have made progress on this front. The YouTube channel AI Explained points out that Gemini 2.5 Pro fared very well on a new benchmark called Fiction.liveBench, which is designed to test a model’s ability to retain and comprehend long-context information. For instance, Fiction.liveBench might ask the model to read a novelette and answer questions that require a deep understanding of the story and characters. Some of the top models, including those from OpenAI and Anthropic, score well when the amount of context provided (the context window) is relatively small. But as the context grows to 32K, then 60K, then 120K tokens (about the length of a novelette), Gemini 2.5 Pro stands out for its superior comprehension.

That’s important because some of the most productive use cases to date for generative AI involve comprehending and summarizing large amounts of data. A service representative might depend on an AI tool to comb through voluminous manuals in order to help someone in the field struggling with a technical problem, or a corporate compliance officer might need a long context window to sift through years of regulations and policies.
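
As a rough illustration of that kind of workflow, the sketch below passes an entire manual plus a troubleshooting question to the model in a single request, again via the google-genai Python SDK. The model name, file, and question are assumptions made up for this example, not a confirmed recipe from Google.

    import os
    from google import genai  # pip install google-genai

    MODEL = "gemini-2.5-pro-exp-03-25"  # assumed identifier; use whatever Google currently publishes
    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

    # The whole manual and the question fit in one prompt thanks to the large context window.
    with open("field_manual.txt", "r", encoding="utf-8") as f:
        manual = f.read()

    question = (
        "A technician reports the device shuts down after a firmware update. "
        "What recovery steps does the manual document?"
    )

    response = client.models.generate_content(model=MODEL, contents=[manual, question])
    print(response.text)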

Gemini 2.5 Pro also scored much higher than competing reasoning models on a new benchmark called MathArena, which tests models using hard questions from recent math Olympiads and contests. The test also requires that the model clearly show its reasoning as it works toward an answer. Top models from OpenAI, Anthropic, and DeepSeek failed to break 5%, but Gemini 2.5 Pro scored an impressive 24.4%.

The new Google model also scored high on another super-hard benchmark called Humanity’s Last Exam, which is meant to show when AI models exceed the knowledge and reasoning of top experts in a given field. Gemini 2.5 Pro scored 18.8%, a score topped only by OpenAI’s Deep Research model. The model also now sits atop the crowdsourced benchmarking leaderboard LMArena.

Finally, Gemini 2.5 Pro is among the top models for computer coding. It scored 70.4% on the LiveCodeBench benchmark, coming in just behind OpenAI’s o3-mini model, which scored 74.1%. Gemini 2.5 Pro scored 63.8% on SWE-bench (which measures agentic coding), while Anthropic’s latest Claude 3.7 Sonnet scored 70.3%. And Google’s model outscored Anthropic, OpenAI, and xAI models on the MMMU multimodal understanding benchmark by roughly 6 points.

Google initially released its new model to paying subscribers but has now made it accessible to all users for free.


https://www.fastcompany.com/91311063/google-gemini-2-5-pro-testing?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published April 3, 2025

