LLMs Encode Their Failures: Predicting Success from Pre-Generation Activations
Abstract
LLMs' internal representations can predict problem difficulty and enable efficient inference routing that reduces costs while maintaining performance.
Running LLMs with extended reasoning on every problem is expensive, but determining which inputs actually require additional compute remains challenging. We investigate whether a model's own likelihood of success is recoverable from its internal representations before generation, and whether this signal can guide more efficient inference. We train linear probes on pre-generation activations to predict policy-specific success on math and coding tasks, substantially outperforming surface features such as question length and TF-IDF. Using E2H-AMC, which provides both human and model performance on identical problems, we show that models encode a model-specific notion of difficulty that is distinct from human difficulty, and that this divergence grows with extended reasoning. Leveraging these probes, we demonstrate that routing queries across a pool of models can exceed the accuracy of the best single model whilst reducing inference cost by up to 70% on MATH, showing that internal representations enable practical efficiency gains even when they diverge from human intuitions about difficulty. Our code is available at: https://github.com/KabakaWilliam/llms_know_difficulty
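To make the probing setup concrete, here is a minimal sketch, not the authors' released code (see the repository above for that), of extracting a pre-generation activation and fitting a linear probe to predict success. The model name, probed layer, and the toy questions and labels are placeholder assumptions.

```python
# Sketch: probe pre-generation activations for policy-specific success.
# Extract the hidden state of the final prompt token *before* any generation,
# then fit a linear probe on (activation, solved-or-not) pairs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder policy model (assumption)
LAYER = -1                                 # which hidden layer to probe (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def pre_generation_activation(question: str) -> torch.Tensor:
    """Hidden state of the last prompt token, taken before any tokens are generated."""
    inputs = tokenizer(question, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

# questions: prompts given to the policy model; labels: 1 if it answered correctly.
questions = ["What is 17 * 24?", "Prove that sqrt(2) is irrational."]  # toy examples
labels = [1, 0]                                                        # toy labels

X = torch.stack([pre_generation_activation(q) for q in questions]).float().numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.predict_proba(X)[:, 1])  # predicted probability of success per question
```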
Community
We show that LLMs maintain a linearly accessible internal representation of difficulty that differs from human assessments and varies across decoding settings. We apply this signal to route queries between models with different reasoning capabilities.
Github: https://github.com/KabakaWilliam/llms_know_difficulty
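Below is a hedged sketch of how such probe predictions could drive cost-aware routing across a model pool: send each query to the cheapest model whose probe predicts success above a threshold, otherwise fall back to the most capable model. The `Route` class, the `success_prob` callables, and the cost numbers are illustrative assumptions, not the paper's implementation.

```python
# Sketch: probe-guided routing across a pool of models.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    cost_per_query: float                  # relative inference cost (assumed numbers)
    success_prob: Callable[[str], float]   # probe-predicted success on this model

def route_query(question: str, routes: list[Route], threshold: float = 0.5) -> Route:
    """Pick the cheapest model whose probe predicts success above `threshold`;
    fall back to the most expensive (most capable) model otherwise."""
    for r in sorted(routes, key=lambda r: r.cost_per_query):
        if r.success_prob(question) >= threshold:
            return r
    return max(routes, key=lambda r: r.cost_per_query)

# Toy usage with stubbed probes standing in for the trained linear probes.
cheap = Route("base-model", 1.0, lambda q: 0.8 if len(q) < 80 else 0.3)
reasoner = Route("extended-reasoning-model", 5.0, lambda q: 0.9)
print(route_query("What is 17 * 24?", [cheap, reasoner]).name)  # -> "base-model"
```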
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- CoRefine: Confidence-Guided Self-Refinement for Adaptive Test-Time Compute (2026)
- Pay for Hints, Not Answers: LLM Shepherding for Cost-Efficient Inference (2026)
- Calibrating LLM Judges: Linear Probes for Fast and Reliable Uncertainty Estimation (2025)
- Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models (2026)
- IntroLM: Introspective Language Models via Prefilling-Time Self-Evaluation (2026)
- ATLAS: Adaptive Test-Time Latent Steering with External Verifiers for Enhancing LLMs Reasoning (2026)
- Learning Generative Selection for Best-of-N (2026)
