Google’s Gemini 2.5 Pro is Better at Coding, Math & Science Than Your Favourite AI Model

by Brandon Duncan


Google has unveiled Gemini 2.5 Pro, the first in its Gemini 2.5 family. This multimodal reasoning model outperforms competitors from OpenAI, Anthropic, and DeepSeek in key benchmarks related to coding, mathematics, and science.

What are reasoning AI models?

Reasoning AIs are designed to “think before they speak.” They evaluate context, work through problems step by step, and check their own responses for logical consistency before answering, though these capabilities demand more computing power and carry higher operational costs.

OpenAI launched the first reasoning model last September with o1, a notable departure from the GPT series, which was largely focused on language generation. Since then, the major players in the AI race have responded: DeepSeek with R1, Anthropic with Claude 3.7 Sonnet, and xAI with Grok 3.

Evolving beyond ‘flash thinking’

Google previously launched its first reasoning AI model, Gemini 2.0 Flash Thinking, in December. Marketed for its agentic capabilities, Flash Thinking was recently updated to allow file uploads and larger prompts; however, with the introduction of Gemini 2.5 Pro, Google appears to be retiring the “Thinking” label altogether.

According to Google’s announcement about Gemini 2.5, this is because reasoning capabilities will now be integrated natively across all future models. This shift marks a move toward a more unified AI architecture, rather than separating “thinking” features as standalone branding.

The new experimental model combines “a significantly enhanced base model” with “improved post-training.” Google touts its performance at the top of the LMArena leaderboard, which ranks major large language models across various tasks.


Benchmark leader in science, math, and code

Gemini 2.5 Pro excels in academic reasoning benchmarks, scoring 86.7% on AIME 2025 (mathematics) and 84.0% on the GPQA Diamond benchmark (science). On Humanity’s Last Exam, a broad test featuring thousands of questions across mathematics, science, and the humanities, the model leads with a score of 18.8%.

Notably, these results were achieved without expensive test-time techniques, which let models like o1 and R1 spend extra compute during evaluation to boost their scores.

In software development benchmarks, Gemini 2.5 Pro’s performance is mixed. It scored 68.6% on the Aider Polyglot benchmark for code editing, outperforming most top-tier models. However, it scored 63.8% on SWE-bench Verified, placing second to Claude 3.7 Sonnet in broader programming tasks.

Despite this, Google says Gemini 2.5 Pro “excels at creating visually compelling web apps and agentic code applications,” as evidenced by its ability to create a video game from a single prompt.

The model supports a context window of one million tokens, meaning it can process the equivalent of a 750,000-word prompt, or the first six Harry Potter books. Google plans to increase this threshold to two million tokens in due course.
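That word count follows from a common rule of thumb of roughly 0.75 English words per token; the actual ratio depends on the tokenizer and the text. A minimal back-of-envelope sketch under that assumption:

# Back-of-envelope check on the one-million-token figure.
# Assumes ~0.75 words per token, a common heuristic; Gemini's
# actual tokenizer will produce somewhat different counts.
WORDS_PER_TOKEN = 0.75

def estimated_tokens(text: str) -> int:
    """Rough token estimate from a whitespace word count."""
    return int(len(text.split()) / WORDS_PER_TOKEN)

context_window = 1_000_000  # Gemini 2.5 Pro's current limit
print(int(context_window * WORDS_PER_TOKEN))  # 750000 words, as above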

Gemini 2.5 Pro is currently available through the Gemini Advanced app, which requires a $20-a-month subscription, and to developers and enterprises through Google AI Studio. In the coming weeks, Gemini 2.5 Pro will be made available on Vertex AI, Google’s machine-learning platform for developers, and pricing details for different rate limits will also be introduced.
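For developers, access through Google AI Studio works via an API key. The sketch below uses the google-generativeai Python SDK; the model identifier “gemini-2.5-pro-exp-03-25” is an assumption based on Google’s experimental naming and may differ from the ID shown in AI Studio.

# Minimal sketch: calling Gemini 2.5 Pro through Google AI Studio.
# pip install google-generativeai
import os
import google.generativeai as genai

# Assumes an AI Studio API key in the GOOGLE_API_KEY environment variable.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Assumed experimental model ID; check AI Studio for the current name.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

response = model.generate_content(
    "Build a simple endless-runner game as a single HTML file."
)
print(response.text)

The same call pattern should carry over once the model reaches Vertex AI, with authentication handled through Google Cloud credentials rather than an API key.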


