arxiv:2512.20578

Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits

Published on Dec 23, 2025
· Submitted by Amirhosein Ghasemabadi on Jan 6
#2 Paper of the day

Abstract

Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typically rely on external judges, multi-sample consistency, or text-based self-critique, which incur additional compute or correlate weakly with true correctness. We ask: can LLMs predict their own failures by inspecting internal states during inference? We introduce Gnosis, a lightweight self-awareness mechanism that enables frozen LLMs to perform intrinsic self-verification by decoding signals from hidden states and attention patterns. Gnosis passively observes internal traces, compresses them into fixed-budget descriptors, and predicts correctness with negligible inference cost, adding only ~5M parameters and operating independently of sequence length. Across math reasoning, open-domain question answering, and academic knowledge benchmarks, and over frozen backbones ranging from 1.7B to 20B parameters, Gnosis consistently outperforms strong internal baselines and large external judges in both accuracy and calibration. Moreover, it generalizes zero-shot to partial generations, enabling early detection of failing trajectories and compute-aware control. These results show that reliable correctness cues are intrinsic to the generation process and can be extracted efficiently without external supervision.
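
For intuition, here is a minimal PyTorch sketch of the general recipe the abstract describes: pool per-token internal traces into a fixed-size descriptor using a fixed number of learned queries, then score correctness with a small head. The layer choice, pooling scheme, and head sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a lightweight correctness verifier over internal traces.
# The pooling scheme and head sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CorrectnessProbe(nn.Module):
    def __init__(self, hidden_dim: int, num_queries: int = 8, probe_dim: int = 256):
        super().__init__()
        # A fixed number of learned queries makes the descriptor size
        # independent of sequence length.
        self.queries = nn.Parameter(torch.randn(num_queries, probe_dim))
        self.proj = nn.Linear(hidden_dim, probe_dim)
        self.attn = nn.MultiheadAttention(probe_dim, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(num_queries * probe_dim, probe_dim),
            nn.GELU(),
            nn.Linear(probe_dim, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) taken from a frozen LLM layer.
        tokens = self.proj(hidden_states)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        descriptor, _ = self.attn(q, tokens, tokens)   # (batch, num_queries, probe_dim)
        logit = self.head(descriptor.flatten(1))        # (batch, 1)
        return torch.sigmoid(logit).squeeze(-1)         # predicted P(correct)
```

Because only the probe is trained while the backbone stays frozen, the added parameter budget stays in the low millions regardless of how long the generation is.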

Community

Paper author · Paper submitter

Can Large Language Models predict their own failures? 🧠⚡

We all know the critical bottleneck in GenAI: LLMs are incredible, but they can confidently hallucinate and make mistakes.

Until now, most fixes have been computationally massive — relying on expensive external judges, huge reward models, or costly training to make the LLM itself more robust.

This brings us to two fundamental questions:
❓ Do LLMs recognize when they're making mistakes?
❓ Can we make them self-aware about their own failures?

🚀 Introducing Gnosis: A lightweight self-awareness mechanism for frozen LLMs.
Named after the Greek word for knowledge/insight, Gnosis gives LLMs a form of introspection.

We add only ~5M parameters to enable a frozen LLM to verify its own outputs by decoding internal hidden states + attention patterns during inference — with negligible overhead and no external judge.
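
Below is a minimal sketch of the passive-observation side using Hugging Face transformers: the backbone stays frozen, and we simply ask `generate` to return per-step hidden states and attention maps for a small verifier to read. The checkpoint name and decoding settings are placeholders, not the paper's setup.

```python
# Minimal sketch of collecting internal traces from a frozen LLM during
# inference. The checkpoint name is an illustrative placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-1.7B"  # placeholder frozen backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()  # the backbone is never updated; only the small verifier is trained

prompt = "What is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=64,
        output_hidden_states=True,
        output_attentions=True,
        return_dict_in_generate=True,
    )

# out.hidden_states and out.attentions hold the per-step internal traces of
# the generation; a lightweight verifier reads these rather than the text alone.
```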

The results challenge the classic efficiency–accuracy trade-off:

🏆 Superior performance across domains
Despite being orders of magnitude smaller, Gnosis can outperform strong 8B reward models and proprietary judges like Gemini 2.5 Pro on both multi-step reasoning and factual/parametric knowledge QA (e.g., TriviaQA), across multiple backbones.

Real-time early failure detection
Gnosis doesn’t need to wait for the final token. By monitoring the generation trajectory in real time, it can predict an error before the model finishes — enabling early stopping, preventing bad outputs from reaching users, and saving significant compute.

This suggests something important: the model often already contains signals of impending failure during generation — we just needed the right mechanism to read them.
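
A rough sketch of how such a monitor could drive compute-aware early stopping: decode in chunks, score the partial trajectory with the verifier, and abort when the predicted probability of correctness drops below a threshold. The `probe` object, chunk size, and threshold here are hypothetical choices layered on the collection code above; a real implementation would also reuse the KV cache instead of re-encoding the prefix at every check.

```python
# Sketch of early stopping driven by a correctness monitor on partial generations.
# `probe`, check_every, and threshold are illustrative assumptions.
import torch

@torch.no_grad()
def generate_with_early_exit(model, tokenizer, probe, prompt,
                             max_new_tokens=256, check_every=32, threshold=0.2):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(0, max_new_tokens, check_every):
        ids = model.generate(ids, max_new_tokens=check_every, do_sample=False)
        # Re-encode the partial trace to read its last-layer hidden states.
        hidden = model(ids, output_hidden_states=True).hidden_states[-1]
        p_correct = probe(hidden.float()).item()
        if p_correct < threshold:
            return None, p_correct  # abort: trajectory predicted to fail
        if ids[0, -1].item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True), p_correct
```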

👇 code + models:

💻 Code: https://github.com/Amirhosein-gh98/Gnosis

arXiv lens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/can-llms-predict-their-own-failures-self-awareness-via-internal-circuits-4244-0e1311d2

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

Models citing this paper 4

Datasets citing this paper 3

Spaces citing this paper 0

Collections including this paper 9