Training Language Models To Explain Their Own Computations

This is a Llama-3.1-8B explainer model fine-tuned on the activation patching task, with Llama-3.1-8B as the target model, as described in the paper linked below. In the activation patching task, the explainer learns to predict the effects of activation patching interventions on the target Llama-3.1-8B using CounterFact data. By predicting how patching internal activations at specific layers and positions changes the target model's output, this line of work aims to develop models that can faithfully describe their own internal causal structure.
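For illustration, below is a minimal sketch of a single activation patching intervention on the target model using forward hooks in the HuggingFace transformers implementation. This is not the repository's data generation pipeline; the layer index, token position, and prompts are hypothetical placeholders.

# Illustrative sketch of one activation patching intervention (not the paper's code).
# Cache the hidden state at one layer/position from a "source" prompt, then overwrite
# that hidden state while re-running a "base" prompt, and compare the output logits.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

layer, pos = 15, -1  # hypothetical intervention site (layer index, token position)
base = tok("The Eiffel Tower is located in", return_tensors="pt")
source = tok("The Colosseum is located in", return_tensors="pt")

cached = {}

def cache_hook(module, inputs, output):
    # Decoder layers may return a tuple or a bare tensor depending on version.
    hidden = output[0] if isinstance(output, tuple) else output
    cached["act"] = hidden[:, pos, :].detach().clone()

def patch_hook(module, inputs, output):
    hidden = (output[0] if isinstance(output, tuple) else output).clone()
    hidden[:, pos, :] = cached["act"]  # overwrite the activation at the chosen position
    return ((hidden,) + output[1:]) if isinstance(output, tuple) else hidden

with torch.no_grad():
    handle = model.model.layers[layer].register_forward_hook(cache_hook)
    model(**source)
    handle.remove()

    handle = model.model.layers[layer].register_forward_hook(patch_hook)
    patched_logits = model(**base).logits[:, -1, :]
    handle.remove()

    clean_logits = model(**base).logits[:, -1, :]

# The explainer model is trained to describe how such patches shift the target
# model's next-token distribution (e.g. the probability of the original answer).
print((patched_logits.softmax(-1) - clean_logits.softmax(-1)).abs().max())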

Repository | Paper

Sample Usage

To evaluate the explainer model on the activation patching task, you can use the evaluation script provided in the GitHub repository:

uv run --env-file .env evaluate.py \
  --config config/act_patch/base_base_act_patch_cf.yaml \
  --target_model_path meta-llama/Llama-3.1-8B \
  --task act_patch \
  --model_path Transluce/act_patch_llama3.1_8b_llama3.1_8b \
  --output_dir /PATH/TO/RESULTS/ \
  --batch_size 64
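
If the explainer checkpoint is published as a PEFT adapter on top of the Llama-3.1-8B base model (an assumption; check the repository and the files in this repo for the exact interface), it can be loaded roughly as follows:

# Hypothetical loading sketch: assumes the explainer is a PEFT adapter on
# meta-llama/Llama-3.1-8B; adapt to the repository's actual loading code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
)
explainer = PeftModel.from_pretrained(
    base, "Transluce/act_patch_llama3.1_8b_llama3.1_8b"
)
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")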

Citation

@misc{li2025traininglanguagemodelsexplain,
      title={Training Language Models to Explain Their Own Computations}, 
      author={Belinda Z. Li and Zifan Carl Guo and Vincent Huang and Jacob Steinhardt and Jacob Andreas},
      year={2025},
      eprint={2511.08579},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.08579}, 
}