---
base_model:
- meta-llama/Llama-3.1-8B
datasets:
- allenai/winogrande
- allenai/ai2_arc
- google/boolq
- wentingzhao/obqa
license: llama3.1
tags:
- peft
- bayesian
- uncertainty-quantification
pipeline_tag: text-generation
library_name: transformers
---

This repository hosts the adapter weights for the **Training-Free Bayesianization for Low-Rank Adapters of Large Language Models** (TFB) model, as introduced in the paper [Training-Free Bayesianization for Low-Rank Adapters of Large Language Models](https://huggingface.co/papers/2412.05723).

Estimating the uncertainty of responses from Large Language Models (LLMs) remains a critical challenge. While recent Bayesian methods have demonstrated effectiveness in quantifying uncertainty through low-rank weight updates, they typically require complex fine-tuning or post-training procedures. In this paper, we propose Training-Free Bayesianization (TFB), a simple yet theoretically grounded framework that efficiently transforms trained low-rank adapters into Bayesian ones without additional training. TFB systematically searches for the maximally acceptable level of variance in the weight posterior, constrained within a family of low-rank isotropic Gaussian distributions. Our theoretical analysis shows that under mild conditions, this search process is equivalent to KL-regularized variational optimization, a generalized form of variational inference. Through comprehensive experiments, we show that TFB achieves superior uncertainty estimation and generalization compared to existing methods while eliminating the need for complex Bayesianization training procedures.

For more details, code, and further experiments, please refer to the official [GitHub repository](https://github.com/Wang-ML-Lab/bayesian-peft).
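
As a quick-start reference, below is a minimal loading sketch using the standard `transformers` and `peft` APIs. The adapter repository id is a placeholder (replace it with this repo's actual id), and the prompt is purely illustrative.

```python
# Minimal loading sketch (assumed usage): load the Llama-3.1-8B base model and
# attach the LoRA adapter weights hosted in this repository via PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B"
adapter_id = "<this-repo-id>"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Question: Water at room temperature is a (A) solid (B) liquid (C) gas\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```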
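
To make the variance-search idea described above more concrete, here is a schematic, assumption-laden sketch: treat the trained low-rank update ΔW = BA as the posterior mean, perturb one low-rank factor with isotropic Gaussian noise, and bisect for the largest noise scale whose perturbed predictions remain acceptable. The acceptance criterion (calibration NLL within a relative tolerance), the choice to perturb the `A` factor, and all helper names are illustrative assumptions of this sketch, not the authors' implementation; refer to the GitHub repository for the actual algorithm.

```python
# Schematic sketch of the variance search described above (illustrative only, not
# the authors' implementation). Assumed here: the acceptance criterion is average
# calibration NLL staying within a relative tolerance of the unperturbed adapter,
# and isotropic Gaussian noise is applied to the A factor of the trained low-rank
# update Delta W = B @ A.
import torch


def is_acceptable(nll_fn, A, B, sigma, baseline_nll, rel_tol=0.05, n_samples=4):
    """Average calibration NLL over a few noisy adapter samples stays within tolerance.

    `nll_fn(delta_w)` is a user-supplied callable returning the calibration NLL for
    an adapter update `delta_w` (an assumption of this sketch).
    """
    nlls = []
    for _ in range(n_samples):
        A_noisy = A + sigma * torch.randn_like(A)   # isotropic Gaussian perturbation
        nlls.append(nll_fn(B @ A_noisy))
    return sum(nlls) / len(nlls) <= baseline_nll * (1.0 + rel_tol)


def search_max_sigma(nll_fn, A, B, sigma_max=1.0, steps=20):
    """Bisection for the largest acceptable noise scale, i.e. the 'maximally
    acceptable level of variance' mentioned in the abstract."""
    baseline_nll = nll_fn(B @ A)
    lo, hi = 0.0, sigma_max
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if is_acceptable(nll_fn, A, B, mid, baseline_nll):
            lo = mid   # still acceptable: try more variance
        else:
            hi = mid   # degraded too much: shrink the interval
    return lo
```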