Google’s Gemini 2.5 Pro could be the most important AI model so far this year

Google released its new Gemini 2.5 Pro Experimental AI model late last month, and it has quickly racked up top marks on a number of coding, math, and reasoning benchmarks—making it a contender for the world’s best model right now.

Gemini 2.5 Pro is a “reasoning” model, meaning its answers derive from a mix of training data and real-time reasoning performed in response to the user prompt or question. Like other newer models, Gemini 2.5 Pro can consult the web, but it also contains a fairly recent snapshot of the world’s knowledge: Its training data cuts off at the end of January 2025.

Last year, in order to boost model performance, AI researchers began shifting toward teaching models to “reason” at inference time, while they’re responding to user prompts. This approach requires models to process and retain ever larger amounts of data to arrive at accurate answers. (Gemini 2.5 Pro, for example, can handle a context window of up to a million tokens.) However, models often struggle with information overload, making it difficult to extract meaningful insights from all that context.
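
For readers curious what a million-token context window looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK. The article itself includes no code; the model identifier, file name, API-key placeholder, and prompt below are illustrative assumptions, not details from the source.

```python
# Minimal sketch: send a long document to a Gemini model and check the
# prompt's token count first. Assumes the google-generativeai package is
# installed and an API key is available.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, supply your own key
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")  # assumed model ID

# Load a book-length text (hypothetical file) to use as context.
with open("novelette.txt", encoding="utf-8") as f:
    long_text = f.read()

# A million-token window means even a novelette usually fits, but it is
# worth confirming the prompt size before sending the request.
token_info = model.count_tokens(long_text)
print(f"Prompt size: {token_info.total_tokens} tokens")

# Ask a question that requires comprehension of the full document.
response = model.generate_content(
    ["Summarize the main plot threads and how each character changes:", long_text]
)
print(response.text)
```

The count_tokens call is a simple way to verify that a long document fits within the model’s context window before issuing the full request.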

Google appears to have made progress on this front. The YouTube channel AI Explained points out that Gemini 2.5 Pro fared very well on a new benchmark called Fiction.liveBench, which is designed to measure a model’s ability to remember and comprehend information supplied in its context. For instance, Fiction.liveBench might ask the model to read a novelette and answer questions that require a deep understanding of the story and characters. Some of the top models, including those from OpenAI and Anthropic, score well when the amount of provided context (the context window) is relatively small. But as the context grows to 32K, then 60K, then 120K tokens—about the size of a novelette—Gemini 2.5 Pro stands out for its superior comprehension.

That’s important because some of the most productive use cases to date for generative AI involve comprehending and summarizing large amounts of data. A service representative might depend on an AI tool to wade through voluminous manuals in order to help someone struggling with a technical problem out in the field, or a corporate compliance officer might need a long context window to sift through years of regulations and policies.

Gemini 2.5 Pro also scored much higher than competing reasoning models on a new benchmark called MathArena, which tests models on hard questions from recent math Olympiads and contests. The test also requires the model to clearly show its reasoning as it steps toward an answer. Top models from OpenAI, Anthropic, and DeepSeek failed to break 5%, but Gemini 2.5 Pro scored an impressive 24.4%.

The new Google model also scored high on another superhard benchmark, Humanity’s Last Exam, which is meant to show when AI models exceed the knowledge and reasoning of top experts in a given field. Gemini 2.5 Pro scored 18.8%, a result topped only by OpenAI’s Deep Research model. The model also now sits atop LMArena, the crowdsourced benchmarking leaderboard.

Finally, Gemini 2.5 Pro is among the top models for computer coding. It scored 70.4% on the LiveCodeBench benchmark, coming in just behind OpenAI’s o3-mini model, which scored 74.1%. Gemini 2.5 Pro scored 63.8% on SWE-bench (which measures agentic coding), while Anthropic’s latest Claude 3.7 Sonnet scored 70.3%. And Google’s model outscored models from Anthropic, OpenAI, and xAI on the MMMU visual understanding benchmark by roughly 6 points.

Google initially released its new model to paying subscribers but has since made it accessible to all users for free.


https://www.fastcompany.com/91311063/google-gemini-2-5-pro-testing?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published: April 3, 2025

