Facefam Articles
OpenAI Wins Gold at International Math Olympiad – or Did It?

By Ronald Kenyatta | Last updated: July 22, 2025 7:15 am

Image: X/@alexwei_

OpenAI’s latest model has achieved a gold-level score on the 2025 International Math Olympiad. It answered five out of the six questions under exam conditions, scoring 35 out of a possible 42 points.

The International Math Olympiad is widely regarded as the most prestigious and challenging mathematics competition in the world for high school students. Only about 10% of this year’s competitors received gold medals, and numerous Fields Medalists have won it in the past. Each competitor gets two 4.5-hour sessions to complete the six problems, without access to the internet or any tools.
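The arithmetic behind the headline score follows the IMO’s standard rubric: each of the six problems is graded out of 7 points, so a perfect paper is 42, and five fully solved problems yield 35. A minimal sketch (the per-problem split shown is illustrative, not the model’s actual breakdown):

```python
# IMO scoring: 6 problems, each graded on a 0-7 scale.
POINTS_PER_PROBLEM = 7
NUM_PROBLEMS = 6

def total_score(problem_scores):
    """Sum per-problem scores after validating the 0-7 range."""
    assert len(problem_scores) == NUM_PROBLEMS
    assert all(0 <= s <= POINTS_PER_PROBLEM for s in problem_scores)
    return sum(problem_scores)

max_score = NUM_PROBLEMS * POINTS_PER_PROBLEM          # 42
# Five complete solutions and one unsolved problem (hypothetical split):
reported_score = total_score([7, 7, 7, 7, 7, 0])       # 35
print(max_score, reported_score)
```

A score of 35 cleared the gold-medal threshold at the 2025 competition.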

AI models’ mixed success at solving math problems

Artificial intelligence models have not traditionally excelled at complex mathematical problems because they can struggle with multi-step logic. Yet Gemini 2.5 Pro and OpenAI’s o3 recently scored 86.7% and 88.9%, respectively, on the American Invitational Mathematics Examination (AIME), a key math benchmark for AI models. By comparison, in September 2024, o1 scored 83% on just a qualifying exam for the International Olympiad. Grok 4 reportedly achieved a perfect 100% on the AIME.

“IMO problems demand a new level of sustained creative thinking compared to past benchmarks,” OpenAI researcher Alexander Wei posted on X after announcing the unreleased model’s milestone. His colleague, Noam Brown, said that just last year, AI labs were using grade school math as a benchmark, referring to the GSM8K test.

OpenAI CEO Sam Altman said the experimental model was “an LLM doing math and not a specific formal math system” like AlphaGeometry, indicating that the company is well on its way to achieving general intelligence.

Manon Bischoff, an editor at the German-language version of Scientific American, predicted in January 2024 that it would be “a few years” before AI models could conceivably compete in the International Math Olympiad; however, AI models are improving quickly. At the time, Bischoff was announcing the release of the math-specific model AlphaGeometry, which could solve 54% of all the geometry questions included in the competition over the last 25 years. By February, a second-generation version could solve 84% of them.

Questions arise about OpenAI’s gold medal at IMO

Not everyone is convinced of OpenAI’s leaps and bounds in mathematical capabilities.

According to Google DeepMind researcher Thang Luong and Mikhail Samin, OpenAI’s model was not graded under the International Math Olympiad’s official guidelines, so its claim to a gold medal cannot be independently verified. Wei countered on X that “three former IMO medalists independently graded the model’s submitted proof” and reached “unanimous consensus” on their scores.

OpenAI doesn’t have the strongest reputation when it comes to benchmarking the mathematical ability of its models. In April, Epoch AI, the independent research institute behind the FrontierMath benchmark, found that the o3 model could correctly answer only about 10% of the advanced problems, a steep decline from the over 25% accuracy originally claimed by OpenAI in December 2024.

It will be difficult for anyone to conduct the same level of independent verification on the experimental model that took part in the Olympiad until it is released. Unfortunately, Wei confirmed that OpenAI does not “plan to release anything with this level of math capability for several months,” and as GPT-5 is coming “soon,” it’s unlikely that this experimental system will be part of that release.

Mathematical ability is clearly an important quality for OpenAI. Last month, it released the o3-pro model, which it dubbed its most intelligent yet.
