ARC Prize 2025
The Grand Prize remains unclaimed.
All scores & papers below are open source & reproducible.
2025 High Score Winners
| Rank | Team | Score | Prize |
| --- | --- | --- | --- |
| 1st | NVARC | 24.0% | $25k |
| 2nd | the ARChitects | 16.5% | $10k |
| 3rd | MindsAI | 12.6% | $5k |
| 4th | Lonnie | 6.7% | $5k |
| 5th | G. Barbadillo | 6.5% | $5k |
See results on Kaggle.
2025 Paper Award Winners
1st Place - $50k
"Less is More: Recursive Reasoning with Tiny Networks"
A. Jolicoeur-Martineau
2nd Place - $20k
"Self-Improving Language Models for Evolutionary Program Synthesis: A Case Study on ARC-AGI"
J. Pourcel, C. Colas & P. Oudeyer
3rd Place - $5k
"ARC-AGI Without Pretraining"
I. Liao & A. Gu
Runners Up - $2.5k
"Vector Symbolic Algebras for the Abstraction and Reasoning Corpus"
I. Joffe & C. Eliasmith
"From Parrots to Von Neumanns: How Evolutionary Test-Time Compute Achieved State-of-the-Art on ARC-AGI"
J. Berman
"Efficient Evolutionary Program Synthesis"
E. Pang
"ARC-NCA: Towards Developmental Solutions to the Abstraction and Reasoning Corpus"
E. Guichard, F. Reimers, M. Kvalsund, M. Lepperød & S. Nichele
"ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory"
M. Ho et al.
Honorable Mentions
"ARC-AGI is a Vision Problem!"
K. Hu et al.
"Product of Experts with LLMs: Boosting Performance on ARC Is a Matter of Perspective"
D. Franzen, J. Disselhoff & D. Hartmann
"Exploring the combination of search and learn for the ARC25 challenge"
G. Barbadillo
"Beyond Brute Force: A Neuro-Symbolic Architecture for Compositional Reasoning in ARC-AGI-2"
A. Das, O. Ghugarkar, V. Bhat & J. McAuley
"Test-time Adaptation of Tiny Recursive Models"
R. McGovern
"Rethinking Visual Intelligence: Insights from Video Pretraining"
P. Acuaviva et al.
"Don't throw the baby out with the bathwater: How and why deep learning for ARC"
J. Cole & M. Osman
"NVARC solution to ARC-AGI-2 2025"
I. Sorokin & J.-F. Puget
Top Scores
A synthetic-data-driven ensemble of an improved Architects-style test-time-trained model and TRM-based components that reaches ~24% on ARC-AGI-2 under contest constraints.
A 2D-aware masked-diffusion LLM with recursive self-refinement and perspective-based scoring achieves top-tier ARC-AGI-2 performance, improving substantially over the team's 2024 autoregressive system.
A heavily engineered test-time-training pipeline that combines test-time fine-tuning (TTFT), augmentation ensembles, tokenizer dropout, and new pretraining tricks to produce a competitive 15.42% ARC-AGI-2 score.
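A common thread across these top entries is adapting the model to each task at test time, typically after expanding the handful of demonstration pairs with grid symmetries. As a minimal sketch (illustrative of the general technique, not any team's actual code), the augmentation step applies each dihedral symmetry jointly to the input and output grids, so the task's underlying rule is preserved:

```python
import numpy as np

def dihedral_augmentations(pairs):
    """Expand ARC demonstration pairs with the 8 dihedral grid symmetries.

    Each symmetry is applied identically to the input and output grid,
    so the demonstrated transformation rule is preserved.
    """
    out = []
    for x, y in pairs:
        for k in range(4):  # 0/90/180/270-degree rotations
            rx, ry = np.rot90(x, k), np.rot90(y, k)
            out.append((rx, ry))
            out.append((np.fliplr(rx), np.fliplr(ry)))  # plus a mirror of each
    return out

demo = [(np.array([[1, 0], [0, 2]]), np.array([[2, 0], [0, 1]]))]
augmented = dihedral_augmentations(demo)
print(len(augmented))  # 8 augmented pairs from 1 demonstration
```

The model is then briefly fine-tuned on these augmented pairs for each task before predicting the held-out test output; predictions across augmentations can also be ensembled by mapping each back through the inverse symmetry.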
Paper Awards
Tiny Recursive Model (TRM) is a ~7M-parameter, single-network recursive model with separate answer and latent states that, via deep supervised refinement, attains ~45% on ARC-AGI-1 and ~8% on ARC-AGI-2.
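The recursive recipe described above can be pictured with a toy sketch. Everything below (the width, the tanh updates, the step counts) is an illustrative assumption, not the paper's architecture; what it shows is the core idea of one small network maintaining separate answer and latent states and refining them repeatedly:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # illustrative width; the real model is ~7M parameters

# One shared set of weights, reused at every refinement step.
W_latent = rng.normal(scale=0.1, size=(3 * DIM, DIM))
W_answer = rng.normal(scale=0.1, size=(2 * DIM, DIM))

def refine(x, y, z, n_latent=6):
    """One refinement step: several latent updates, then one answer update."""
    for _ in range(n_latent):
        z = z + np.tanh(np.concatenate([x, y, z]) @ W_latent)
    y = y + np.tanh(np.concatenate([y, z]) @ W_answer)
    return y, z

x = rng.normal(size=DIM)  # embedded puzzle input
y = np.zeros(DIM)         # current answer state
z = np.zeros(DIM)         # latent "scratchpad" state
for step in range(3):     # deep supervision: training applies a loss to y each step
    y, z = refine(x, y, z)
assert y.shape == (DIM,)
```

Supervising the answer state at every refinement step, rather than only at the end, is what lets such a small recurrent network be trained to improve its own draft answers.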
SOAR is a self-improving evolutionary program synthesis framework that fine-tunes an LLM on its own search traces, boosting open-source ARC-AGI-1 performance to 52% without human-engineered DSLs or solution datasets.
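The self-improvement loop can be sketched on a toy problem. Here a weighted sampler over list primitives stands in for the LLM's proposal policy, and upweighting primitives from successful programs stands in for fine-tuning on search traces; the task, DSL, and update rule are all invented for illustration:

```python
import random

# Toy stand-in for ARC: demonstration pairs defining the target transformation.
train_pairs = [([1, 2, 3], [3, 2, 1]), ([4, 5], [5, 4])]

# A tiny DSL of list primitives (real systems search far richer program spaces).
PRIMITIVES = {
    "reverse": lambda g: g[::-1],
    "sort": lambda g: sorted(g),
    "double": lambda g: [v * 2 for v in g],
}

# Sampling weights act as a crude stand-in for the LLM's proposal policy.
weights = {name: 1.0 for name in PRIMITIVES}

def sample_program(rng, max_len=2):
    names = list(PRIMITIVES)
    return rng.choices(names, weights=[weights[n] for n in names],
                       k=rng.randint(1, max_len))

def score(program):
    def run(g):
        for name in program:
            g = PRIMITIVES[name](g)
        return g
    return sum(run(x) == y for x, y in train_pairs) / len(train_pairs)

rng = random.Random(0)
best, best_score = None, -1.0
for generation in range(20):
    for program in (sample_program(rng) for _ in range(16)):
        s = score(program)
        if s > best_score:
            best, best_score = program, s
        if s == 1.0:
            # "Self-improvement": reinforce primitives from successful search
            # traces, analogous to fine-tuning the LLM on its own solutions.
            for name in program:
                weights[name] += 1.0

print(best_score)  # 1.0: the loop discovers a program matching all pairs
```

The key property, mirrored here, is that the search produces its own training signal: every solved task makes the proposal distribution better at solving the next one, with no human-written solutions required.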
CompressARC is an MDL-based neural code-golf system, trained from scratch on each individual puzzle, that achieves ~20–34% on ARC-AGI-1 and ~4% on ARC-AGI-2 without any pretraining or external data.
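The MDL objective behind this approach can be pictured with a toy two-part code (all numbers below are invented for illustration): the score of a hypothesis is the bits needed to state the model plus the bits needed to encode the puzzle's grids under that model, and the shortest total description wins.

```python
import math

def two_part_code_bits(model_bits, probs_under_model):
    """MDL score: bits to state the model plus bits to encode the data under it."""
    return model_bits + sum(-math.log2(p) for p in probs_under_model)

# Invented numbers: a richer model costs more to state but compresses better.
simple = two_part_code_bits(model_bits=8, probs_under_model=[0.5] * 20)  # 28.0 bits
rich = two_part_code_bits(model_bits=16, probs_under_model=[0.9] * 20)   # ~19.0 bits
assert rich < simple  # MDL selects the hypothesis with the shorter total code
```

Training a small network on a single puzzle under this kind of objective amounts to searching for the most compressible explanation of its grids, which is why no pretraining corpus is needed.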
ARC Prize in 2026
Congratulations to the winners! Now, where do we go from here?
Intelligence is interactive. ARC-AGI-3 is in development, bringing temporal and agentic elements to the benchmark. Expect to see competitions in 2026 focused on this exciting new paradigm.
ARC-AGI-2 remains undefeated and continues to teach us important things about the gap between artificial and human intelligence. Expect to see more competitions grounded in this benchmark, as well.
The exact shape of what's to come is yet to be determined. We welcome feedback from the community.
We'll announce more competition details early next year. Stay tuned!
