# Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations

**Nick Jiang**∗, **Anish Kachinthaya**∗, **Suzie Petryk**†, **Yossi Gandelsman**†
University of California, Berkeley
{nickj, anishk, spetryk, yossi_gandelsman}@berkeley.edu

## Abstract

We investigate the internal representations of vision-language models (VLMs) to address hallucinations, a persistent challenge despite advances in model size and training. We project VLMs' internal image representations to their language vocabulary and observe more confident output probabilities on real objects than hallucinated objects. We additionally use these output probabilities to spatially localize real objects. Building on this approach, we introduce a knowledge erasure algorithm that removes hallucinations by linearly orthogonalizing image features with respect to hallucinated object features. We show that targeted edits to a model's latent representations can reduce hallucinations by up to 25.7% on the COCO2014 dataset while preserving performance. Our findings demonstrate how a deeper understanding of VLMs' latent representations can enhance reliability and enable novel capabilities, such as zero-shot segmentation.¹

## 1 Introduction

Vision-Language Models (VLMs) have recently emerged as powerful tools for understanding images via text (Dai et al., 2023; Liu et al., 2024a). They have demonstrated remarkable capabilities across multimodal tasks such as image captioning (Li et al., 2023a), visual question answering (Ye et al., 2023), and complex multimodal reasoning (Bai et al., 2023). Despite their capabilities, VLMs tend to hallucinate content that does not appear in the images (Ji et al., 2023), which poses serious concerns for the reliability of these models in real-world applications (Hu et al., 2023; Luo et al., 2024).

A widespread belief has been that scaling to larger models and more training data will naturally mitigate hallucinations. However, recent studies have shown that hallucinations persist even in larger and more advanced models (Rohrbach et al., 2019; Li et al., 2023b), suggesting that this issue cannot be solved by scale alone. Current methods reduce hallucinations by applying external interventions (e.g., object detectors; Yin et al. (2023)) or additional model fine-tuning (e.g., on hallucination examples; Zhou et al. (2024); Zhang et al. (2024a)). Nevertheless, these methods often struggle to distinguish between subtle hallucinations and existing details, and they require new models or updated model parameters. In this paper, we aim to introduce fine-grained edits directly to the image latent representations of VLMs to reduce hallucinations without hindering their performance, an approach that has had some success in large language models (Zhang et al., 2024b; von Rutte et al., 2024).

To edit the latent representations of VLMs, we first explain their role via text. We employ the logit lens technique (nostalgebraist, 2020) to directly interpret the spatial VLM _image_ representations with the VLM _text vocabulary_. Surprisingly, the characteristics of these image representations differ between real objects that appear in the image and objects that are hallucinated. Moreover, the logit lens enables spatially localizing objects within the input image. Relying on the ability to detect hallucinated objects, we edit them out by intervening in their internal representations.
We introduce a knowledge erasure algorithm, PROJECTAWAY, to target and remove objects by linearly orthogonalizing image features with respect to the text features of target objects. We find that PROJECTAWAY can erase both real and hallucinated objects with high rates of removal.

∗ Equal contribution as first authors. † Equal contribution as last authors.
¹ Code: https://github.com/nickjiang2378/vl-interp

Figure 1: **Interpreting VLM internal image representations**. (a) Given a VLM, (b) we unembed the latent representations from image embeddings to the vocabulary and classify hallucinations. We remove hallucinations by (c) linearly editing them out of the latent representations.

We use our interpretation and editing approach for three tasks. First, we utilize the logit lens on image features to detect hallucinations in the image. We find that it improves mAP by 22.45% and 47.17% in two VLMs. Then, we combine our editing and detection methods to erase hallucinations from the VLM's internal representations, reducing hallucinations by up to 25.7% on standard benchmarks while preserving accuracy in image captioning. Finally, we use the logit lens to localize objects in the image features. We find that our spatial mapping provides performance comparable to state-of-the-art zero-shot segmentation methods. Our results indicate that the internal representations of VLMs can be understood and then used to repair model hallucinations and introduce new capabilities.

## 2 Related Work

### 2.1 Interpreting Latent Representations in Language Models

Interpreting the inner workings of large language models enables fine-grained improvement of model behavior. Recent work utilizes the model's attention maps (Kobayashi et al., 2020; Chefer et al., 2021), activation patterns (Conmy et al., 2023; Meng et al., 2023; Bronzini et al., 2024), and latent representations (Ghandeharioun et al., 2024; Cunningham et al., 2023; Bricken et al., 2023) to understand model behavior, with applications such as early exiting (Halawi et al., 2024) and editing or erasing the model's knowledge (Dai et al., 2022; Ravfogel et al., 2024). One class of methods probes a model's knowledge with linear classifiers (Hewitt & Manning, 2019; Tucker et al., 2021; Li et al., 2024; Belrose et al., 2023). The logit lens method (nostalgebraist, 2020), which we use in our analysis, finds the output distribution over the vocabulary of the language model at intermediate layers with the model's own unembedding matrix. We apply this approach to _VLMs_ to interpret the model's understanding of _visual information_ in the model's textual vocabulary.

### 2.2 Interpreting Latent Representations in Vision Models

Understanding the internal dynamics of vision models is critical for ensuring safety and reliability in multimodal systems. Early works in this area focused on producing saliency maps (Petsiuk et al., 2018), analyzing individual neurons (Bau et al., 2020; 2019; Dravid et al., 2023), and training networks to map latent representations to concepts (Esser et al., 2020).
With the emergence of transformer-based vision models like CLIP (Radford et al., 2021), recent methods explain latent tokens (Chen et al., 2023) and the roles of attention heads and neurons with natural language (Gandelsman et al., 2024b;a). Few works currently interpret the internal computation of VLMs: Palit et al. (2023) develop a causal tracing tool for neurons; Schwettmann et al. (2023) identify multimodal neurons; and Huo et al. (2024) ablate domain-specific neurons to improve visual question answering. Whereas past papers have primarily studied the mechanisms (e.g., neuron analysis) that drive VLMs, we focus on interpreting and editing their latent representations for real-world applicability.

### 2.3 Detecting and Reducing VLM Hallucinations

While VLM performance on image captioning and visual question answering continues to improve, these models still hallucinate facts that are not supported by the visual input. Existing methods for detecting hallucinations in language models during inference utilize latent representations (He et al., 2024; Su et al., 2024), activations (Chen et al., 2024), and output logit values (Varshney et al., 2023). SAPLMA (Azaria & Mitchell, 2023) trains a hallucination classifier on the internal latent representations. LUNA (Song et al., 2024) learns a transition function on latent representations and identifies abnormal transitions. Varshney et al. (2023) use the final-layer logits to score the model's confidence in an entity or keyword and intervene by instructing the model to either repair or remove the hallucinated information. Among VLMs, LURE (Zhou et al., 2024) is a fine-tuned revisor model that detects and reduces hallucinations. OPERA (Huang et al., 2024) uses the model's internal attention weights to detect and suppress patterns that align with the beginning of hallucinated phrases. In contrast to these methods, we leverage the internal _image_ representations in VLMs for hallucination reduction and for zero-shot segmentation.

## 3 Extracting Knowledge from VLMs

We start by introducing VLMs and the general architectural framework shared by most recent work. We then describe our approach for decoding the features in intermediate image representations of VLMs into text, and apply it to two types of VLMs. Surprisingly, this approach effectively probes the knowledge about objects present in images and can localize objects within the image.

### 3.1 Preliminaries

**Vision-language models.** The architecture of recent state-of-the-art VLMs for text generation typically involves three main components: a vision encoder to process image inputs, a mapping network to map image features to image embeddings, and an autoregressive language model that processes the image embeddings and prompt embeddings to generate text. We focus on two recent state-of-the-art VLMs: _LLaVA_ 1.5 (Liu et al., 2024a) and _InstructBLIP_ (Dai et al., 2023). We use the 7B versions of both models. LLaVA utilizes a frozen CLIP vision encoder and an MLP as a mapping network to project the vision encoder outputs into image embeddings for the language model. The MLP is pre-trained on a large vision-language dataset, and both the MLP and the language model are fine-tuned on an instruction-focused dataset. In contrast, InstructBLIP freezes both the vision encoder and the language model and only trains the mapping network.

**Notation.** For the purposes of our work, we define the VLM architecture as follows. The vision encoder processes an input image to produce $n$ image features.
These image features are projected to embedding space via the mapping network, resulting in $n$ $d$-dimensional image embeddings $\{k_i : k_i \in \mathbb{R}^d,\ i = 1, \ldots, n\}$. For the language model, the entire set of text tokens constitutes the vocabulary $V$ with vocabulary size $|V|$. The image embeddings, followed by $m$ text embeddings $\{t_i : t_i \in \mathbb{R}^d,\ i = 1, \ldots, m\}$ of the prompt tokens, are input to the language model through $L$ decoder layers. For an input embedding $x \in \mathbb{R}^d$, we define $h_l(x) \in \mathbb{R}^d$ to be the latent representation for embedding $x$ at layer $l \in \{1, \ldots, L\}$, i.e., the output of the decoder layer, which is conditioned on previous tokens of the input sequence. An unembedding matrix $W_U \in \mathbb{R}^{|V| \times d}$ maps the last latent representation $h_L(t_m)$ to a probability distribution over the vocabulary for the next token $t_{m+1}$.

**Logit lens.** The logit lens is an interpretability method for intermediate language model representations, introduced in Section 2.1. It applies the unembedding matrix $W_U$ to latent representations $h_l(x)$ at the $L$ intermediate layers of the language model to retrieve logit distributions over the vocabulary:

$$f_l(t_m) = W_U \cdot h_l(t_m) = [\mathrm{logit}_1, \mathrm{logit}_2, \mathrm{logit}_3, \ldots, \mathrm{logit}_{|V|}]. \tag{1}$$

This is the logit distribution representing the predictions of the model after $l$ layers, where $\mathrm{logit}_j$ corresponds to token $j$ in the vocabulary.

Figure 2: **Comparison of internal confidence in objects present and not present in the image**. We examine the internal confidence of COCO objects that exist and do not exist in the image within intermediate VLM image representations. We observe that objects that do not exist in the image have lower internal confidence.

### 3.2 Applying the Logit Lens to VLMs

We apply the logit lens to probe the language model as it processes the image representations. This enables us to interpret the image features' output distributions as they are transformed by the layers of the language model, and to localize objects spatially within the image.

**Extracting probability distributions from intermediate image representations.** We apply the logit lens to the _image representations_ in the VLM. For a given image embedding $k_i$, we find the latent representation of the image embedding at layer $l$, $h_l(k_i)$, and apply the logit lens to get the probability distribution over the vocabulary, $\mathrm{softmax}(f_l(k_i))$. We define an object $o$ as an object word composed of tokens from the vocabulary. We inspect the probability of a specific object $o$, $\mathrm{softmax}(f_l(k_i))_o$. For multi-token objects, we take the maximum probability value over the object tokens. This provides a generalizable framework for analyzing specific latent image representations via text, with respect to specific objects.

Next, we find the maximum probability over all image representations and all layers. For object $o$, we compute:

$$c_o = \max_{1 \le l \le L,\ 1 \le i \le n} \big\{\mathrm{softmax}(f_l(k_i))_o\big\}. \tag{2}$$

We define $c_o$ as the VLM's _internal confidence_ of an object $o$ existing in the image: the highest probability of object presence across the $n$ image representations through the $L$ layers of the language model.
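To make Eq. (2) concrete, the following is a minimal PyTorch sketch of computing $c_o$, assuming access to the language model's per-layer latents at the image-token positions and to the unembedding matrix $W_U$; the tensor names and shapes are our own illustrative assumptions, not the released code.

```python
import torch

def internal_confidence(hidden_states: torch.Tensor,  # [L, n, d]: h_l(k_i) for all l, i
                        W_U: torch.Tensor,             # [|V|, d]: unembedding matrix
                        object_token_ids: list) -> float:
    """Internal confidence c_o (Eq. 2): the max object-token probability
    over all layers l and all image embeddings k_i."""
    probs = (hidden_states @ W_U.T).softmax(dim=-1)    # softmax(f_l(k_i)): [L, n, |V|]
    obj_probs = probs[..., object_token_ids]           # [L, n, #object tokens]
    return obj_probs.max(dim=-1).values.max().item()   # max over tokens, then l and i
```

The inner maximum over `object_token_ids` mirrors the convention above of taking the maximum probability over a multi-token object's tokens.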
**Comparing the internal confidence of present and not present objects.** To determine whether internal confidence provides meaningful information about objects in the image, we examine $c_o$ for objects present and not present in an image. We use InstructBLIP and LLaVA to caption 5000 random COCO2014 images in the Karpathy validation split (Lin et al., 2015) and determine $c_o$ for all 80 COCO objects, only a few of which are present in each image. Since there are many more objects not present than present, we randomly sample a subset of the internal confidences for objects not present. Figure 2 exhibits the internal confidences for objects present and not present in the image. We empirically find that the VLMs' internal confidences are higher for present objects than for not present ones. We use this observation later to classify objects as hallucinations in Section 5.1.

**Object localization.** Given that the language model can distinguish between objects present and not present in an image, we examine whether it can attribute high object internal confidence to specific patches in an image. For each image embedding $k_i$ of the $n$ image embeddings, we find the maximum softmax probability of an object over the layers of the model, $\max_{1 \le l \le L}\{\mathrm{softmax}(f_l(k_i))_o\}$. Using these internal confidence values, we localize the objects in the image patches, each of which maps to an image embedding. We focus on LLaVA for this task, since its image encoder preserves the spatial mapping of image patches to image features. We observe that image representations that exhibit higher internal confidence for specific objects correspond to the image patches in which those objects are visually present (examples in Figure 3). Building on our previous observation, we see that the intermediate image representations semantically align with latent token representations of objects present in them while maintaining their spatial locality. We use this unique finding for zero-shot segmentation in Section 5.3.

Figure 3: **Localizing objects using internal confidence values**. Each example shows an input image alongside per-object probability maps and localizations (e.g., "cat", "bicycle", "bottle", and "bowl"). We find the probabilities of objects through the layers of the language model for every image embedding in LLaVA. We use the highest layer probability per image embedding to localize an object within the image.

While the model is not directly trained to map the image representations closer to the text representations of objects within them, we can unembed the image representations in the text vocabulary for localization, and we find differences in internal confidence between present and hallucinated objects. In Section 5.1, we will use this observation for applications including hallucination detection and zero-shot segmentation.

## 4 Erasing Knowledge from VLMs

Recognizing that image embeddings are directly interpretable (Section 3.2), we edit these embeddings to erase the presence of objects from image captions. We propose a linear editing algorithm that subtracts the text embedding of a target object from all image embeddings. When applied to single and multiple object removals, we find that it erases hallucinated objects more effectively than correctly detected (CD) objects (i.e., real objects that the model correctly detects).
### 4.1 Erasing Objects from Image Representations

We present an algorithm, PROJECTAWAY (Figure 4), that orthogonalizes image representations with respect to text representations in order to erase objects from image captions; we apply it to remove objects both one at a time and all at once. Given an image and an object to remove, we edit the latent representations $h_{l^I}(k_i)$ at a hidden layer $l^I$ across all image embeddings $k_i$. We do not modify any latent representations outside of those belonging to image features. We compute the dot product $p$ of $h_{l^I}(k_i)$ and the object's text embedding $\vec{t}$, subtracting a weighted $\vec{t}$ from $h_{l^I}(k_i)$ only if the dot product is positive. At $\alpha = 1$, PROJECTAWAY is equivalent to orthogonalizing the image representations with respect to the text representation. To compute the text representation $\vec{t}$, we pass the object (e.g., "hot dog") into the VLM's text model and extract $h_{l^T}(t_{-1})$ at hidden layer $l^T$, where $t_{-1}$ is the last token of the object. We use the last token of the object to capture the whole of the object's meaning.

**Algorithm 1: PROJECTAWAY**
**Input:** a set of image embeddings $K$, a text embedding $\vec{t}$, and a weight factor $\alpha$
**Output:** a set of modified image embeddings $K'$ projected away from the text embedding
**Initialization:** $K' \leftarrow \emptyset$
**for** $\vec{k} \in K$ **do**
  $p \leftarrow \vec{k} \cdot \vec{t}$
  **if** $p > 0$ **then** $K' \leftarrow K' \cup \{\vec{k} - \alpha \cdot \frac{p}{\|\vec{t}\|_2^2} \cdot \vec{t}\}$
  **else** $K' \leftarrow K' \cup \{\vec{k}\}$
**end for**

Figure 4: Our editing algorithm erases the presence of an object from image embeddings by orthogonalizing them with respect to the object's text embedding.
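Because the edit is linear, Algorithm 1 vectorizes over all image embeddings. Below is a minimal PyTorch sketch, assuming `H` holds the layer-$l^I$ latents of the $n$ image embeddings and `t_vec` the object text embedding $\vec{t}$; the names are illustrative, not the authors' released implementation.

```python
import torch

def project_away(H: torch.Tensor,      # [n, d]: image latents h_{l^I}(k_i)
                 t_vec: torch.Tensor,  # [d]:    object text embedding t
                 alpha: float = 1.0) -> torch.Tensor:
    """Algorithm 1: subtract a weighted copy of t from every image latent
    whose dot product with t is positive; leave the rest unchanged."""
    p = H @ t_vec                                      # dot products p, shape [n]
    coef = alpha * p.clamp(min=0) / t_vec.norm() ** 2  # zero wherever p <= 0
    return H - coef.unsqueeze(-1) * t_vec              # k - alpha * (p / ||t||^2) * t
```

At $\alpha = 1$, each edited latent has zero dot product with `t_vec`, matching the orthogonalization interpretation above.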
# Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective

**Yiming Liu**¹∗, **Kezhao Liu**¹∗, **Yao Xiao**¹, **Ziyi Dong**¹, **Xiaogang Xu**², **Pengxu Wei**¹,³†, **Liang Lin**¹,³
¹ Sun Yat-Sen University, ² Chinese University of Hong Kong, ³ Peng Cheng Laboratory
{liuym225, liukzh9, xiaoy99, dongzy6}@mail2.sysu.edu.cn
xiaogangxu00@gmail.com, weipx3@mail.sysu.edu.cn, linliang@ieee.org
∗ Equal contribution. † Corresponding author.

## Abstract

Diffusion-Based Purification (DBP) has emerged as an effective defense mechanism against adversarial attacks. The success of DBP is often attributed to the forward diffusion process, which reduces the distribution gap between clean and adversarial images by adding Gaussian noise. While this explanation is theoretically sound, the exact role of this mechanism in enhancing robustness remains unclear. In this paper, through empirical analysis, we propose that the intrinsic stochasticity in the DBP process is the primary factor driving robustness. To test this hypothesis, we introduce a novel Deterministic White-Box (DW-box) setting to assess robustness in the absence of stochasticity, and we analyze attack trajectories and loss landscapes. Our results suggest that DBP models primarily rely on stochasticity to avoid effective attack directions, while their ability to purify adversarial perturbations may be limited. To further enhance the robustness of DBP models, we propose Adversarial Denoising Diffusion Training (ADDT), which incorporates classifier-guided adversarial perturbations into the diffusion training process, thereby strengthening the models' ability to purify adversarial perturbations. Additionally, we propose Rank-Based Gaussian Mapping (RBGM) to improve the compatibility of perturbations with diffusion models. Experimental results validate the effectiveness of ADDT. In conclusion, our study suggests that future research on DBP can benefit from a clearer distinction between stochasticity-driven and purification-driven robustness.

## 1 Introduction

Deep learning has achieved remarkable success across various domains, including computer vision (He et al., 2016), natural language processing (OpenAI, 2023), and speech recognition (Radford et al., 2023). However, within this flourishing landscape, the persistent specter of adversarial attacks casts a shadow over the reliability of these neural models. For vision models, adversarial attacks typically involve introducing imperceptible perturbations into input images, tricking the models into producing incorrect outputs with high confidence (Goodfellow et al., 2015; Szegedy et al., 2014). This vulnerability has spurred substantial research into adversarial defenses (Zhang et al., 2019; Samangouei et al., 2018; Shafahi et al., 2019; Wang et al., 2023).

Figure 1: Comparison of attack trajectories under different evaluation settings. The attack trajectory in the standard white-box setting significantly deviates from the DW-box trajectory and demonstrates lower effectiveness.

Recently, diffusion-based purification (DBP) (Nie et al., 2022) has emerged as a promising defense against adversarial attacks. Existing studies suggest that DBP robustness
primarily stems from the forward diffusion process, which reduces the distribution gap between clean and adversarial images by applying Gaussian noise (Nie et al., 2022; Wang et al., 2022). While this reduction is theoretically supported, its contribution to DBP robustness has not been sufficiently validated through empirical studies. Additionally, experimental results suggest that the stochastic nature of DBP might also play a significant role in enhancing its robustness (Nie et al., 2022).

In this paper, we present a new perspective that emphasizes the role of stochasticity throughout the DBP process as a key contributor to its robustness, challenging the traditional focus on the forward diffusion process. To assess the impact of stochasticity, we propose a Deterministic White-box (DW-box) attack setting, in which the attacker has complete knowledge of both the model parameters and the stochastic elements. Our findings reveal that DBP models experience a significant loss of robustness when the process is made entirely deterministic from the attacker's perspective. Further analysis of attack trajectories and the loss landscape shows that DBP models do not defend against adversarial perturbations by relying on a flat loss landscape, as is common in adversarial training (AT) (Madry et al., 2018); instead, they leverage stochasticity to bypass the most effective attack directions, as illustrated in Figure 1.

Building on this new perspective of DBP robustness, we hypothesize that strengthening the diffusion model's ability to purify adversarial perturbations could further improve robustness. To test this hypothesis, we propose Adversarial Denoising Diffusion Training (ADDT) for DBP models. This method follows an iterative two-step process: first, the Classifier-Guided Perturbation Optimization (CGPO) step generates adversarial perturbations; then, the diffusion model training step updates the parameters of the diffusion model using these perturbations. To better integrate these perturbations into the diffusion framework, we propose Rank-Based Gaussian Mapping (RBGM), which adjusts the adversarial perturbations to more closely resemble Gaussian noise, in line with the theoretical foundation of diffusion models. Experiments confirm that ADDT consistently enhances the robustness of DBP models.

Through further empirical analysis and discussion, we argue that future research on DBP should separate the robustness derived from stochasticity and that achieved through purification. This distinction points to two complementary directions for improving DBP: (1) enhancing its ability to purify adversarial perturbations through efficient training methods, and (2) defending against Expectation over Transformation (EoT) attacks (Athalye et al., 2018b) by increasing the variance of attack gradients.

Our main contributions are as follows:

- We offer a novel perspective on DBP robustness, highlighting the crucial role of stochasticity while challenging the conventional, purification-based view that robustness primarily arises from distribution-gap reduction during the forward diffusion process.
- We introduce a new Deterministic White-box attack setting and demonstrate that DBP models rely on stochastic attack gradients to evade the most effective attack directions, revealing key differences from adversarially trained models.
- We show that DBP robustness can be further improved by enhancing the diffusion model's ability to purify adversarial perturbations through the proposed ADDT method.

## 2 Related Work

**Adversarial training (AT).** First introduced by Madry et al. (2018), AT seeks to develop a robust classifier by incorporating adversarial examples into the training process. It has nearly become the de facto standard for improving the adversarial robustness of neural networks (Athalye et al., 2018a; Gowal et al., 2020; Rebuffi et al., 2021). Recent advances in AT harness the generative power of diffusion models to augment training data and prevent overfitting (Gowal et al., 2021; Wang et al., 2023). However, the application of AT to DBP methods has not been thoroughly explored.

**Adversarial purification.** Adversarial purification utilizes generative models to remove adversarial perturbations from inputs before they are processed by downstream models. Traditionally, generative adversarial networks (GANs) (Samangouei et al., 2018) or autoregressive models (Song et al., 2018) have been used as purification models. More recently, diffusion models have been introduced for adversarial purification in a technique known as diffusion-based purification (DBP), which has shown promising results (Nie et al., 2022; Wang et al., 2022; Wu et al., 2022; Xiao et al., 2022). The robustness of DBP models is often attributed to the wash-out effect of Gaussian noise introduced during the forward diffusion process. Nie et al. (2022) propose that the forward diffusion process reduces the Kullback-Leibler (KL) divergence between the distributions of clean and adversarial images. Gao et al. (2022) suggest that while the forward diffusion process improves robustness by reducing model invariance, the backward process restores this invariance, thereby undermining robustness. However, these theoretical explanations lack strong experimental support.

## 3 Preliminaries

**Adversarial training.** Adversarial training aims to build a robust model by including adversarial samples during training (Madry et al., 2018). This approach can be formulated as a min-max problem, which first generates adversarial samples (the maximization step) and then adjusts the parameters to resist these adversarial samples (the minimization step). Formally:

$$\theta^* = \arg\min_{\theta} \mathbb{E}_{(x, y) \sim \mathcal{D}}\Big[\max_{\delta \in \mathcal{B}} \mathcal{L}\big(f(\theta, x + \delta), y\big)\Big], \tag{1}$$

where $\mathcal{L}$ is the loss function, $f$ is the classifier, $(x, y) \sim \mathcal{D}$ denotes sampling of training data from the distribution $\mathcal{D}$, and $\mathcal{B}$ defines the set of permissible perturbations $\delta$.

**Diffusion models.** Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) and Denoising Diffusion Implicit Models (DDIM) (Song et al., 2020) simulate a gradual transformation in which noise is progressively added to an image and then removed to restore the original image. The forward process can be represented as:

$$x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\,\epsilon, \quad \epsilon \sim \mathcal{N}(0, I), \tag{2}$$

where $x_0$ is the original image and $x_t$ is the noisy image. $\alpha_t$ is the cumulative noise level at step $t$ ($1 < t \le T$, where $T$ is the number of diffusion training steps).
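For reference, a minimal sketch of the forward process in Equation (2), keeping the convention above that $\alpha_t$ denotes the cumulative noise level (the function and argument names are ours):

```python
import torch

def forward_diffuse(x0: torch.Tensor, t: int, alpha: torch.Tensor,
                    eps: torch.Tensor = None) -> torch.Tensor:
    """Eq. (2): x_t = sqrt(alpha_t) * x_0 + sqrt(1 - alpha_t) * eps.
    `alpha[t]` is the cumulative noise level; `eps` can be supplied
    explicitly to make the step deterministic (relevant to the DW-box
    setting introduced in Section 4)."""
    if eps is None:
        eps = torch.randn_like(x0)  # the forward-process stochastic element
    return alpha[t].sqrt() * x0 + (1.0 - alpha[t]).sqrt() * eps
```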
The parameters $\theta$ are optimized by minimizing the distance between the actual and estimated noise:

$$\theta^* = \arg\min_{\theta} \mathbb{E}_{x_0, t, \epsilon}\Big[\big\|\epsilon - \epsilon_\theta\big(\sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\,\epsilon,\ t\big)\big\|_2^2\Big], \tag{3}$$

where $\theta^*$ represents the optimized parameters and $\epsilon_\theta$ is the model's predicted noise. Using $\epsilon_{\theta^*}$, we can estimate $\hat{x}_0$ in a single step:

$$\hat{x}_0 = \big(x_t - \sqrt{1 - \alpha_t}\,\epsilon_{\theta^*}(x_t, t)\big)\big/\sqrt{\alpha_t}, \tag{4}$$

where $\hat{x}_0$ is the recovered image. DDPM typically takes an iterative approach to restore the image, removing a small amount of Gaussian noise at a time:

$$\hat{x}_{t-1} = \Big(x_t - \frac{\beta_t}{\sqrt{1 - \alpha_t}}\,\epsilon_{\theta^*}(x_t, t)\Big)\Big/\sqrt{1 - \beta_t} + \sqrt{\hat{\beta}_t}\,\epsilon, \tag{5}$$

where $\beta_t$ is the noise level at step $t$, $\hat{x}_{t-1}$ is the image recovered at step $t - 1$, and $\epsilon$ is sampled from $\mathcal{N}(0, I)$. DDIM speeds up the denoising process by skipping certain intermediate steps. Recent work suggests that DDPM could also benefit from a similar approach (Nichol & Dhariwal, 2021). Score SDEs (Song et al., 2021a) provide a score-function perspective on DDPM and further lead to the derivations of DDPM++ (VPSDE) and EDM (Karras et al., 2022). In this diffusion process, the noise terms $\epsilon$ in Equation (2) and Equation (5) are the key stochastic elements that control the randomness of the process. More discussion of stochastic elements is provided in Appendix C.1.

**Diffusion-based purification (DBP).** DBP uses diffusion models to remove adversarial perturbations from images. Instead of using a complete diffusion process between the clean image and pure Gaussian noise (between $t = 0$ and $t = T$), DBP methods first diffuse $x_0$ to a predefined timestep $t = t^*$ ($t^* < T$) via Equation (2), and then recover the image $\hat{x}_0$ via the reverse diffusion process in Equation (5).

## 4 Stochasticity-Driven Robustness of DBP

### 4.1 Stochasticity as the Main Factor of DBP Robustness

As discussed in Section 2, previous studies primarily attribute the robustness of DBP models to the forward diffusion process, which perturbs inputs with Gaussian noise to reduce the distribution gap between adversarial and clean images (Nie et al., 2022; Wang et al., 2022). As a result, adversarial perturbations can be "washed out" by the Gaussian noise. However, it has also been observed that the robustness of DiffPure diminishes when switching from Stochastic Differential Equation (SDE) sampling to Ordinary Differential Equation (ODE) sampling (Nie et al., 2022), which introduces less stochasticity. This reduction in robustness cannot be fully explained by the "wash out" effect of Gaussian noise, suggesting that stochasticity plays a role in DBP robustness.

To assess the impact of stochasticity on DBP robustness, we implemented both DDPM and DDIM within the DiffPure framework (Nie et al., 2022) and compared their performance (referred to as **DP-DDPM** and **DP-DDIM**). Note that the original implementation of DiffPure adopts a DDPM discretization of DDPM++ (VPSDE), which differs only minimally from DDPM (Ho et al., 2020). Thus, the primary difference between DiffPure and our DP-DDPM is that DiffPure employs a larger UNet.
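For concreteness, the purification pipeline shared by DP-DDPM and DP-DDIM can be sketched as follows (our illustration, reusing `forward_diffuse` from above). The two variants differ only in whether the reverse step re-injects noise; the deterministic branch below simply omits that noise, which is a simplification of DDIM's exact update rather than its precise form.

```python
def purify(x0: torch.Tensor, t_star: int, eps_model,
           alpha: torch.Tensor, beta: torch.Tensor,
           stochastic: bool = True) -> torch.Tensor:
    """DBP sketch: diffuse the input to t = t* (Eq. 2), then run the
    reverse process back to t = 0 (Eq. 5). `alpha` is cumulative,
    `beta` is the per-step noise level, `eps_model` the noise predictor."""
    xt = forward_diffuse(x0, t_star, alpha)
    for t in range(t_star, 0, -1):
        mean = (xt - beta[t] / (1 - alpha[t]).sqrt() * eps_model(xt, t)) \
               / (1 - beta[t]).sqrt()
        if stochastic and t > 1:
            # DDPM-style: re-inject Gaussian noise; beta_hat is one standard
            # choice for the posterior variance in Eq. (5)
            beta_hat = (1 - alpha[t - 1]) / (1 - alpha[t]) * beta[t]
            xt = mean + beta_hat.sqrt() * torch.randn_like(xt)
        else:
            xt = mean  # deterministic variant (DDIM-like, simplified)
    return xt
```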
DDPM uses a stochastic SDE-based reverse process; Gaussian noise is thus introduced in both the forward and reverse processes, making the entire pipeline stochastic. In contrast, DDIM uses a deterministic ODE-based reverse process and introduces Gaussian noise only in the forward process. The results are shown in Figure 2 (labeled as _Clean_ and _White_), where _White_ refers to $\ell_\infty$ white-box PGD (Madry et al., 2018) + EoT (Athalye et al., 2018b) attacks (as detailed in Section 6.1). Although DP-DDIM achieves higher clean accuracy, it shows lower robust accuracy under adaptive white-box attacks, consistent with the findings of Nie et al. (2022). This suggests that the stochasticity in the reverse diffusion process may also play a crucial role in DBP robustness.

To better control the stochasticity in both the forward and reverse diffusion processes and to isolate its effect on DBP robustness, we introduce a novel attack setting called the **Deterministic White-Box (DW-box)** setting. In this setting, the attacker not only has complete knowledge of the model parameters but also of the exact values of the stochastic elements sampled during evaluation, effectively making the diffusion process deterministic from the attacker's perspective. This setting could be realistic if the attacker knows the seed or the initial random state used for pseudo-random number generation in the model. For our evaluations, we define three levels of attacker knowledge: (1) the conventional **White-box** setting, where the attacker has access to the model parameters but not the stochastic elements; (2) the **DW-Fwd-box** / **DW-Rev-box** settings, where the attacker additionally knows the stochastic elements in the forward/reverse process; and (3) the **DW-Both-box** setting, where the attacker has complete knowledge of the model parameters and all stochastic elements in both the forward and reverse processes. Details of these settings are provided in Appendix C.2.

Figure 2: DP-DDPM and DP-DDIM robust accuracy under different attack settings (Clean, White, DW-Fwd, DW-Rev, DW-Both) on CIFAR-10. Both models lose most of their robustness only when the attacker knows all stochastic elements (DW-Both-box for DP-DDPM/DP-DDIM and DW-Fwd-box for DP-DDIM).

With the Deterministic White-Box attack, we are able to compare traditional theories with our proposed hypothesis, since these explanations diverge in behavior when stochasticity is controlled. Traditional theories emphasize the forward diffusion process as the primary defense mechanism, suggesting that both DP-DDPM and DP-DDIM should behave similarly under the DW-Fwd-box setting. In contrast, our hypothesis emphasizes the stochasticity throughout the diffusion process as the crucial factor.
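Concretely, the DW-box settings can be simulated by pre-sampling every Gaussian draw and replaying the same draws during both evaluation and attack, so the purifier becomes a fixed deterministic map from the attacker's perspective. A hypothetical helper, under the assumption that the purifier's noise shapes are known in advance:

```python
import torch

class FrozenNoise:
    """Replays a fixed sequence of Gaussian draws. Supplying these draws to
    the forward process gives DW-Fwd, to the reverse process DW-Rev, and to
    both gives DW-Both; the attacker uses the same instance as the defender."""
    def __init__(self, shapes, seed: int = 0):
        g = torch.Generator().manual_seed(seed)
        self.draws = [torch.randn(*s, generator=g) for s in shapes]
        self.i = 0

    def reset(self):
        self.i = 0  # restart the sequence before each purification pass

    def __call__(self) -> torch.Tensor:
        eps = self.draws[self.i]
        self.i += 1
        return eps
```

An attacker with access to such an instance can then run PGD directly through the now-deterministic purifier, with every internal `torch.randn_like` call replaced by `FrozenNoise` draws.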
We hypothesize that as DP-DDIM becomes deterministic under the DW-Fwd-box setting, it should experience a significant reduction in robustness, similar to DP-DDPM under the DW-Both-box setting. We evaluated adversarial robustness on CIFAR-10 using $\ell_\infty$ attacks (see Section 6.1). The results are shown in Figure 2. Under the DW-Fwd-box setting, DP-DDPM maintains a significant portion of its robustness, whereas DP-DDIM loses almost all resistance to adversarial attacks. This phenomenon cannot be explained by the "wash out" theory. Furthermore, under the DW-Both-box setting, DP-DDPM shows a substantial drop similar to that of DP-DDIM in the DW-Fwd setting. This suggests that stochasticity in both the forward and reverse diffusion processes plays a critical role in maintaining robustness. Our findings suggest that DBP models primarily rely on stochasticity to resist adversarial attacks, rather than depending solely on the forward diffusion process, and they also reveal that DBP models lack the ability to _effectively purify adversarial perturbations_.

### 4.2 Explaining Stochasticity-Driven Robustness

When attacking stochastic models, a commonly used technique is EoT (Athalye et al., 2018b). To assess the influence of adaptive attacks, we compare the performance of white-box attacks with and without EoT. The results in Table 1 show that EoT-based adaptive attacks have only a modest impact on robustness, which contrasts sharply with DW-box attacks. A more detailed discussion of EoT steps is provided in Appendix A.

Table 1: Evaluation of DBP methods under various attack settings shows how EoT affects robust accuracy (%).

| Metric | DiffPure | GDMP (MSE) | DP-DDPM | DP-DDIM |
|---|---|---|---|---|
| Clean | 89.26 | 91.80 | 85.94 | 88.38 |
| PGD20 | 69.04 | 53.13 | 60.25 | 54.59 |
| PGD20+EoT10 | 55.96 | 40.97 | 47.27 | 42.19 |

To compare different attacks, we visualize the attack trajectories using t-SNE. These trajectories are projected onto the $xy$-plane, with loss values plotted on the $z$-axis. We compare three types of attacks: white-box without EoT (White-box), white-box with EoT (White-box-EoT), and Deterministic White-box (DW-box). As shown in Figure 1, the trajectories vary significantly across all settings, reflecting the stochasticity of DBP models. Notably, DW-box attacks lead to a significant increase in loss values, whereas white-box attacks, even with EoT, result in only moderate increases. This observation suggests that _stochasticity prevents attackers from finding the optimal attack direction_. Even if the EoT estimate accurately captures the mean gradient direction, the high variance in attack gradients may prevent alignment with the optimal attack direction (the direction of the DW-box attack). This leads to reduced attack performance. Additional evidence is provided in Appendix B.

To study how different attacks impact robust accuracy, we compared the loss landscapes under White-box-EoT and Deterministic White-box attacks. We computed the average loss variation across a batch of images and present the results in Figure 3.
The trajectory of the White-box-EoT attack deviates from that of the Deterministic White-box attack, resulting in a flatter loss landscape along its path, whereas the Deterministic White-box attack produces a steep increase in loss. This observation suggests that the inherent stochasticity in DBP models prevents White-box-EoT attacks from identifying the optimal attack direction, and that removing this stochasticity makes the model vulnerable to adversarial perturbations. These findings contrast with adversarially trained models, where the loss landscape remains flat even along adversarial directions (Shafahi et al., 2019). Further details on the loss landscape visualization can be found in Appendix F.

Figure 3: Visualization of attack trajectories for White-box-EoT attacks and DW-box attacks on the loss landscape. The loss landscape is steep in the direction of the DW-box attack. The plot is based on the first 128 images of CIFAR-10.

In conclusion, we propose that DBP models, instead of _exhibiting a flat loss landscape_, leverage stochasticity to _evade the most effective attack directions_. Note that while certified defense methods such as randomized smoothing also incorporate stochasticity (Xiao et al., 2022; Carlini et al., 2022), their mechanisms and implications differ from those of DBP methods (see Appendix D).

## 5 Towards Improving the Purification Capability of DBP Models

Based on the analysis in Section 4, and considering the loss increase along the DW-box attack direction, we propose that, as an alternative to stochasticity-driven robustness, the performance of DBP can be further improved by flattening the loss landscape. Achieving this requires incorporating adversarial samples into the training of DBP models. From the perspective of adversarial purification, this involves enhancing the diffusion model's ability to purify adversarial perturbations. To address this, we propose **Adversarial Denoising Diffusion Training (ADDT)**, a method that incorporates adversarial perturbations into the training of diffusion models in DBP.
ADDT follows an iterative two-step process: (1) **Classifier-Guided Perturbation Optimization (CGPO)**, which generates adversarial perturbations by maximizing the classification error of a pre-trained classifier; and (2) **diffusion model training**, which trains the diffusion model on these perturbations to improve its ability to purify adversarial perturbations. Integrating adversarial perturbations into diffusion training is challenging due to the Gaussian noise assumption inherent in diffusion models. To overcome this, we propose **Rank-Based Gaussian Mapping (RBGM)**, a technique that transforms adversarial perturbations into a form more consistent with the Gaussian noise assumed by diffusion models.
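The excerpt ends here, so RBGM's exact construction is not given. One plausible reading of the name, offered purely as our guess and not as the paper's implementation, is a rank-preserving map onto Gaussian order statistics:

```python
import torch

def rbgm(delta: torch.Tensor) -> torch.Tensor:
    """Speculative Rank-Based Gaussian Mapping: keep each perturbation
    entry's rank but replace its value with the correspondingly ranked
    draw from a standard Gaussian, so the output is Gaussian-distributed
    while preserving the ordering of the original entries."""
    flat = delta.flatten()
    ranks = flat.argsort().argsort()               # rank of each entry
    gauss = torch.randn_like(flat).sort().values   # sorted Gaussian sample
    return gauss[ranks].reshape(delta.shape)
```

Under this reading, a CGPO-style perturbation would be passed through `rbgm` before being used in place of (or mixed with) the Gaussian noise in the diffusion training objective of Eq. (3).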
# Can Neural Networks Achieve Optimal Computational-Statistical Trade-off? An Analysis on Single-Index Model

**Siyu Chen**∗¹, **Beining Wu**∗², **Miao Lu**³, **Zhuoran Yang**¹, **Tianhao Wang**⁴
¹ Yale University, ² University of Chicago, ³ Stanford University, ⁴ Toyota Technological Institute at Chicago
{siyu.chen.sc3226, zhuoran.yang}@yale.edu, beiningw@uchicago.edu, miaolu@stanford.edu, tianhao.wang@ttic.edu
∗ Equal contribution.

## Abstract

In this work, we tackle the following question: can neural networks trained with gradient-based methods achieve the optimal statistical-computational tradeoff in learning Gaussian single-index models? Prior research has shown that any polynomial-time algorithm under the statistical query (SQ) framework requires $\Omega(d^{s^\star/2} \vee d)$ samples, where $s^\star$ is the generative exponent representing the intrinsic difficulty of learning the underlying model. However, it remains unknown whether neural networks can achieve this sample complexity. Inspired by prior techniques such as label transformation and landscape smoothing for learning single-index models, we propose a unified gradient-based algorithm for training a two-layer neural network in polynomial time. Our method is adaptable to a variety of loss and activation functions, covering a broad class of existing approaches. We show that our algorithm learns a feature representation that strongly aligns with the unknown signal $\theta^\star$, with sample complexity $\tilde{O}(d^{s^\star/2} \vee d)$, matching the SQ lower bound up to a polylogarithmic factor for all generative exponents $s^\star \ge 1$. Furthermore, we extend our approach to the setting where $\theta^\star$ is $k$-sparse for $k = o(\sqrt{d})$ by introducing a novel weight perturbation technique that leverages the sparsity structure. We derive a corresponding SQ lower bound of order $\tilde{\Omega}(k^{s^\star})$, matched by our method up to a polylogarithmic factor. Our framework, especially the weight perturbation technique, is of independent interest and suggests potential gradient-based solutions to other problems such as sparse tensor PCA.

## 1 Introduction

The success of neural networks is largely attributed to their remarkable ability to learn rich and useful features from data during gradient-based training (Girshick et al., 2014). This feature-learning capability allows them to outperform traditional methods like kernel-based approaches, which rely on predefined features (Allen-Zhu & Li, 2019; Ghorbani et al., 2019; Refinetti et al., 2021). However, when trained using (stochastic) gradient descent, neural networks can sometimes fall into a "kernel regime", where their behavior resembles that of a fixed kernel method, constrained by their random initialization (Jacot et al., 2018; Chizat et al., 2019). In this regime, the ability of the network to learn complex representations is severely limited, undermining the primary advantage of deep learning. Therefore, it is crucial to understand when and how neural networks trained with gradient-based methods can perform effective feature learning to unlock their full potential, particularly in scenarios where a balance between computational efficiency and statistical performance is essential.
In this work, we approach this question in the context of Gaussian single-index models, a canonical class of problems in statistics and learning (McCullagh & Nelder, 1989; Ichimura, 1993; Hristache et al., 2001; Härdle et al., 2004). The model is defined as follows: for covariates $z \sim \mathcal{N}(0, I_d)$, the output $y$ depends on the inner product $\langle\theta^\star, z\rangle$ with an unknown signal $\theta^\star \in \mathbb{R}^d$ through a link distribution $p$, i.e., $y \sim p(\cdot \mid \langle\theta^\star, z\rangle)$. The goal is to recover $\theta^\star$ using i.i.d. samples $(z_1, y_1), \ldots, (z_n, y_n)$ generated by the underlying model. While $n = \Omega(d)$ samples suffice to recover $\theta^\star$ information-theoretically (Bach, 2017; Damian et al., 2024), achieving this efficiently is difficult: for polynomial-time algorithms, the required sample size also depends on properties of the link distribution $p$, creating a computational-statistical gap.

Figure 1: (a) Contour plots of $(\log d, \log n, \mathrm{acc}(d, n))$ for Algorithm 1 under the model $y = \langle z, \theta^\star\rangle^2 \exp(-\langle z, \theta^\star\rangle^2)$, which has generative exponent $s^\star = 4$ (Example 2.2). Here $\mathrm{acc}(d, n)$ is the average of the largest 8 values of the alignment between the neuron weights and the unknown signal $\theta^\star$. The slopes of these contour lines are all close to 2, indicating a sample complexity of $n \approx d^2$ for $s^\star = 4$. (b) The paradigm of sample complexity achieved by our algorithm for different generative exponents $s^\star$ and sparsity levels $k$, illustrating the success of achieving the computational-statistical tradeoff.

For example, when $y$ is a polynomial of $\langle\theta^\star, z\rangle$, it has been shown that two-layer neural networks with square loss need $d^{\Theta(q^\star)}$ samples (Arous et al., 2021; Bietti et al., 2022; Damian et al., 2023), where $q^\star$ is the information exponent of the polynomial link function (Arous et al., 2021; Dudeja & Hsu, 2018). Such sample complexity is indeed inevitable under the correlational statistical query (CSQ) framework, leading to a computational-statistical gap for $q^\star \ge 2$. However, the CSQ framework does not capture the fundamental limits of all gradient-based algorithms. Recent works have shown that by leveraging higher-order terms in the gradient, neural networks can learn polynomials with as few as $\tilde{O}(d)$ samples (Lee et al., 2024; Arnaboldi et al., 2024). It turns out that the intrinsic learning difficulty is captured by another quantity called the _generative exponent_ $s^\star$, which is at most 2 for polynomial link functions, and the corresponding SQ lower bound on the sample complexity is $n = \Omega(d^{s^\star/2})$¹ (Damian et al., 2024). Thus, there is no computational-statistical gap up to poly-log factors for learning polynomial single-index models. However, for general single-index models with $s^\star \ge 3$, no gradient-based algorithm for neural networks has been shown to match the SQ lower bound, leaving it an open problem (Arnaboldi et al., 2024; Lee et al., 2024).

¹ This $\Omega(d^{s^\star/2})$ sample complexity lower bound is essentially for the detection problem. Dudeja & Hsu (2021) show that there is an estimation-detection gap for tensor PCA under the SQ framework, though it is unclear whether such a gap exists universally. Throughout the paper, we always refer to the SQ lower bound as the detection lower bound, since detection in general is assumed to be easier than estimation.
Furthermore, learning the Gaussian single-index model can benefit from additional structure in the signal $\theta^\star$, such as sparsity, which can significantly reduce the sample complexity compared to bounds depending on the ambient dimension $d$ (Candès et al., 2006; Donoho et al., 2009; Raskutti et al., 2012). Recent work by Vural & Erdogdu (2024) examines the effectiveness of pruning in learning sparse features, demonstrating that it matches the CSQ lower bound $d^{q^\star}$ for $k \ll \sqrt{d}$. However, the method fails to achieve the CSQ lower bound in non-sparse settings. For sparse single-index models with information exponent $q^\star = 1$, gradient descent on diagonal linear networks nearly achieves the information-theoretic lower bound thanks to its implicit regularization effect (Fan et al., 2023). Nonetheless, how to achieve the optimal sample complexity for general $s^\star \ge 1$ is also unknown under the sparse setting.

**Contributions.** Towards characterizing the fundamental feature-learning capability of neural networks in the Gaussian single-index model, our main contributions are as follows:

1. We propose a unified recipe of gradient-based algorithms for training a two-layer neural network to learn the Gaussian single-index model. Our method integrates a general gradient oracle with a weight perturbation technique, carefully designed to exploit the underlying structure of the Gaussian single-index model. This allows the neural network to perform feature learning of the unknown signal $\theta^\star$ in a computationally efficient manner. Our framework encompasses many existing approaches as special cases, such as batch reusing (Dandi et al., 2024; Lee et al., 2024), label transformation (Chen & Meka, 2020), and landscape smoothing (Damian et al., 2023).

2. We show that for an _unknown link distribution_ $p$ with _any_ generative exponent $s^\star \ge 1$, the weights of the neural network achieve strong recovery of the true signal $\theta^\star$ after training by our algorithm using $\tilde{O}(d^{s^\star/2} \vee d)$ samples and polynomial running time. Our method achieves the SQ lower bound up to a polylogarithmic factor, and is the first gradient-based algorithm for training two-layer neural networks that attains the nearly optimal computational-statistical tradeoff for Gaussian single-index models with any $s^\star \ge 1$. Figure 1 (a) illustrates an example for $s^\star = 4$.

3. Furthermore, our method is able to take advantage of additional structural information about the true signal $\theta^\star$. Specifically, we consider the case where $\theta^\star$ is $k$-sparse for $k = o(\sqrt{d})$, and develop a _novel weight perturbation procedure_ tailored to the sparsity of $\theta^\star$. Equipped with this, we show that the weights of the neural network can achieve strong recovery of the sparse signal $\theta^\star$ after training with $\tilde{O}(k^{s^\star})$ samples in polynomial time for any generative exponent $s^\star \ge 1$.
This sample complexity is also nearly optimal according to the sample complexity lower bound we establish for SQ algorithms, which might be of independent interest. Also, our method suggests a new approach to achieving the computational-statistical tradeoff for sparse tensor PCA.

In summary, our work provides a unified framework for training neural networks that can achieve the nearly optimal computational-statistical tradeoff for the Gaussian single-index model with any generative exponent $s^\star \ge 1$. Our method not only tackles the intrinsic difficulty of learning the underlying model posed by the link distribution $p$, but also leverages the additional structural information of the true signal $\theta^\star$ that benefits the learning process. Integrating these results, our method attains a nearly optimal balance between computational efficiency and statistical performance across almost all regimes of sparsity levels and generative exponents $s^\star \ge 1$, as illustrated in Figure 1 (b).

## 2 Problem Setup

We begin by introducing the notation used in the paper, and then describe the problem setup. For a probability distribution $P$, we denote by $L^2(P)$ the space of square-integrable functions with respect to $P$, and $\overset{L^2(P)}{=}$ means equality in $L^2(P)$. We denote the normalized probabilist's Hermite polynomials by $\{h_s(\cdot)\}_{s \ge 0}$, where

$$h_s(x) := \frac{(-1)^s}{\sqrt{s!}} \cdot e^{x^2/2} \cdot \frac{\mathrm{d}^s}{\mathrm{d}x^s}\, e^{-x^2/2}.$$

These polynomials form an orthonormal basis for $L^2(\mathcal{N}(0, 1))$, i.e., the space of square-integrable functions under the Gaussian measure.

**Gaussian single-index model.** We study the following Gaussian single-index model: the environment first samples an unobservable signal $\theta^\star \sim \pi$ from some known prior $\pi \in \mathcal{P}(\mathbb{S}^{d-1})$. Then i.i.d. data $(z_1, y_1), \ldots, (z_n, y_n) \in \mathbb{R}^d \times \mathbb{R}$ are generated according to the following distribution $P_{\theta^\star}$ given $\theta^\star$:

$$P_{\theta^\star}: \quad z \sim \mathcal{N}(0, I_d), \quad y \sim p(\cdot \mid \langle\theta^\star, z\rangle). \tag{2.1}$$

Here $p(\cdot \mid \cdot): \mathbb{R} \mapsto \mathcal{P}(\mathbb{R})$ is referred to as the _link distribution_. A canonical example is the additive model where $y = \phi(\langle\theta^\star, z\rangle) + \epsilon$ for some deterministic link function $\phi: \mathbb{R} \to \mathbb{R}$ and random noise $\epsilon$. See Damian et al. (2024) for more complicated examples.

**Generative exponent.** The following discussion of the generative exponent is based on the work of Damian et al. (2024). We aim to learn (2.1) where the link distribution $p$ has _generative exponent_ $s^\star \ge 1$, a measure of the computational-statistical gap for learning single-index models. We let $x = \langle\theta^\star, z\rangle$. Notice that $P_{\theta^\star}(y, z) = P(y, x) \cdot \mathcal{N}(z^\perp; 0, I_{d-1})$, where we use $P$ to denote the joint distribution of $(x, y)$, as this joint distribution is independent of $\theta^\star$. As the marginal distribution of $y$ is also independent of $\theta^\star$, we define the _null distribution_ $Q(y, z) := \mathcal{N}(z; 0, I_d) \otimes Q(y)$ and denote $Q(y, x) := \mathcal{N}(x; 0, 1) \otimes Q(y)$, where $Q(y) = \int_{\mathbb{R}} P(y, x)\,\mathrm{d}x$.
Under a square-integrability condition under $\mathbb{Q}$, the likelihood ratio admits a Hermite expansion with coefficient functions $\{\zeta_s(y)\}_{s \ge 1}$, i.e.,
$$\frac{\mathbb{P}_{\theta^\star}(y, z)}{\mathbb{Q}(y, z)} = \frac{\mathbb{P}(y, x)}{\mathbb{Q}(y, x)} \overset{L^2(\mathbb{Q})}{=} \sum_{s=0}^{\infty} \zeta_s(y) \cdot h_s(x), \quad \text{where } \zeta_s(y) = \mathbb{E}_{\mathbb{P}}[h_s(x) \mid y], \quad (2.2)$$
and $\mathbb{E}_{\mathbb{Q}}[\zeta_s(y)^2] \le 1$ for all $s \ge 1$. Note that (2.2) makes sense only when we are working with the inner product of $\mathbb{P}/\mathbb{Q}$ and a square-integrable function under the null distribution $\mathbb{Q}$.

**Definition 2.1** (Generative exponent). _For the Gaussian single-index model defined in (2.1), the generative exponent $s^\star$ of the link distribution $p$ is defined as_ $s^\star(p) := \min\{s \ge 1 : \mathbb{E}_{\mathbb{Q}}[\zeta_s(y)^2] > 0\}$.

**Example 2.2** (Example 2.7, Damian et al. (2024)). _Consider the special case of the Gaussian single-index model (2.1) where $y = \phi(\langle \theta^\star, z \rangle)$ for a deterministic link function $\phi: \mathbb{R} \to \mathbb{R}$. When $\phi$ is a polynomial function, it holds that $s^\star(\phi) \le 2$, and the equality holds if and only if $\phi$ is an even polynomial. In particular, $s^\star(h_s) = 1$ for odd $s$ and $s^\star(h_s) = 2$ for even $s$. For the example $\phi(x) = x^2 \exp(-x^2)$, which is not a polynomial, the generative exponent is $s^\star(\phi) = 4$._

**Two-layer neural networks.** We consider using a two-layer neural network with $M$ hidden neurons to learn the single-index model (2.1). The weight vector for each neuron $m \in [M]$ is $\theta_m \in \mathbb{R}^d$, and the weights of the second layer are $a_1, \ldots, a_M \in \mathbb{R}$. We collect all the weights and denote $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_M) \in \mathbb{R}^{d \times M}$, $\boldsymbol{a} = (a_1, \ldots, a_M)^\top \in \mathbb{R}^M$. Now for any input $z \in \mathbb{R}^d$, the output of the network is given by $f(z; \boldsymbol{\theta}, \boldsymbol{a}) := \sum_{m=1}^{M} a_m \cdot \sigma(\langle z, \theta_m \rangle)$, where $\sigma: \mathbb{R} \to \mathbb{R}$ is the activation.

3 O VERVIEW OF T ECHNIQUES

In this work, we apply gradient-based methods to learn Gaussian single-index models, with a focus on feature learning in neural networks and the corresponding computational-statistical tradeoff. To motivate the techniques involved, we begin by discussing an illustrative example that highlights such tradeoffs. For this overview, we focus on $s^\star > 2$ and the uniform prior $\pi = \mathrm{Unif}(\mathbb{S}^{d-1})$. It has been shown that a gap exists between the information-theoretic lower bound $\Omega(d)$ and the SQ lower bound $\Omega(d^{s^\star/2})$ under this setting when $s^\star > 2$ (Bach, 2017; Damian et al., 2024).

For illustration, let us consider training a two-layer network with a single neuron under the population square loss. When the weight of the second layer is small, the rescaled negative gradient $g$ satisfies
$$g = -(2a)^{-1} \nabla_\theta \big(f(z; \theta, a) - y\big)^2 = -\big(a \cdot \sigma(\langle z, \theta \rangle) - y\big) \cdot \sigma'(\langle z, \theta \rangle) \cdot z = \big(y\sigma'(\langle z, \theta \rangle) + \mathrm{err}\big) \cdot z,$$
where we take $y\sigma'(\langle z, \theta \rangle)$ as the signal term and treat $-a\sigma(\langle z, \theta \rangle)\sigma'(\langle z, \theta \rangle)$ as the error term since it scales with $a$.² We ignore the error term in the following discussion.
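The analysis throughout this section expands functions in the normalized Hermite basis. As a small aid, the sketch below evaluates $h_0, \ldots, h_{s_{\max}}$ via the standard three-term recurrence and estimates Hermite coefficients $\hat{\sigma}_s = \mathbb{E}_{x \sim \mathcal{N}(0,1)}[\sigma(x) h_s(x)]$ by Monte Carlo; the choice of $\tanh$ as a test activation is ours and purely illustrative.

```python
import numpy as np

def hermite_h(s_max, x):
    """Normalized probabilists' Hermite polynomials h_0..h_{s_max} at points x.

    Uses h_{s+1}(x) = (x*h_s(x) - sqrt(s)*h_{s-1}(x)) / sqrt(s+1), which
    follows from He_{s+1} = x*He_s - s*He_{s-1} and h_s = He_s / sqrt(s!).
    """
    h = np.empty((s_max + 1,) + np.shape(x))
    h[0] = 1.0
    if s_max >= 1:
        h[1] = x
    for s in range(1, s_max):
        h[s + 1] = (x * h[s] - np.sqrt(s) * h[s - 1]) / np.sqrt(s + 1)
    return h

# Monte Carlo Hermite coefficients sigma_hat_s = E_{x~N(0,1)}[sigma(x) h_s(x)].
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
h = hermite_h(6, x)
sigma = np.tanh                        # any square-integrable activation
coeffs = (sigma(x) * h).mean(axis=1)   # sigma_hat_0 .. sigma_hat_6
# tanh is odd, so the even coefficients should vanish (up to MC error).
```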
Taking expectation over $(z, y) \sim \mathbb{P}_{\theta^\star}$ and using the likelihood ratio decomposition in (2.2), we have
$$\mathbb{E}_{\mathbb{P}_{\theta^\star}}[g] \approx \underbrace{\mathbb{E}_{\mathbb{Q}}[y] \cdot \mathbb{E}_{\mathbb{Q}}[\sigma'(\langle z, \theta \rangle) \cdot z]}_{\text{bias}} + \underbrace{\sum_{s \ge s^\star} \mathbb{E}_{\mathbb{Q}}[y\zeta_s(y)] \cdot \mathbb{E}_{\mathbb{Q}}[h_s(\langle \theta^\star, z \rangle) \cdot \sigma'(\langle z, \theta \rangle) \cdot z]}_{\text{informative queries}}, \quad (3.1)$$
where we use the fact that $y$ and $z$ are independent under the null distribution $\mathbb{Q}$. Note that the _bias_ term does not contain any information about $\theta^\star$, and it can be easily removed by a debiasing procedure, so we assume for simplicity that $\mathbb{E}[y] = 0$.

**Failure of vanilla online minibatch SGD.** We first consider vanilla online minibatch SGD, which updates the weight vector $\theta$ by $\theta \leftarrow \theta - \eta \sum_{i=1}^{n} g_i$ for a minibatch of size $n$. The sample complexity of gradient-based methods is determined by the signal-to-noise ratio (SNR) of the one-sample gradient, which in our case is defined as $\mathrm{SNR} := \mathbb{E}[\langle g, \theta^\star \rangle]^2 / \mathbb{E}[\|g\|_2^2]$. This is the square of the alignment between $g$ and $\theta^\star$, governed primarily by the informative query corresponding to the lowest degree $s^\star$ in (3.1), assuming that $\mathbb{E}_{\mathbb{Q}}[y\zeta_{s^\star}(y)] \neq 0$. It can be shown that the inner product between the lowest-degree informative query in (3.1) and the signal $\theta^\star$ satisfies (see Lemma H.1)
$$\mathbb{E}_{\mathbb{Q}}[h_{s^\star}(\langle \theta^\star, z \rangle) \cdot \sigma'(\langle z, \theta \rangle) \cdot \langle z, \theta^\star \rangle] \approx s^\star \cdot \hat{\sigma}_{s^\star} \cdot \langle \theta^\star, \theta \rangle^{s^\star - 1} = \hat{\sigma}_{s^\star} \cdot O(d^{-(s^\star - 1)/2}), \quad (3.2)$$
where $\hat{\sigma}_{s^\star}$ is the $s^\star$-th coefficient in the Hermite expansion of $\sigma$. Meanwhile, for $\|g\|_2$ we have
$$\mathbb{E}_{\mathbb{P}_{\theta^\star}}\big[\|g\|_2^2\big] \approx d \cdot \mathbb{E}_{\mathbb{Q}}\big[y^2 \sigma'(\langle z, \theta \rangle)^2\big] = \Omega(d),$$
where the high-order terms in the likelihood ratio decomposition are ignored; we come back to this point later. Now we can argue why vanilla online minibatch SGD has difficulty achieving the SQ lower bound for generative exponent $s^\star > 2$: Suppose $\mathbb{E}_{\mathbb{Q}}[y\zeta_{s^\star}(y)]$ and $\hat{\sigma}_{s^\star}$ are both nonzero constants. Then the one-sample SNR is $O(d^{-s^\star})$. For a minibatch with $n$ samples, the SNR of the gradient averaged over the minibatch is roughly $n$ times the one-sample SNR³, i.e., $nd^{-s^\star}$. To ensure one update step achieves alignment, i.e., that the square root of the $n$-sample SNR, $\sqrt{nd^{-s^\star}}$, exceeds the trivial $d^{-1/2}$ threshold attained by a random vector, it requires at least $d^{s^\star - 1}$ samples. Note that the sample complexity would become even worse if $s^\star < \operatorname{argmin}_{s \ge s^\star}\{s : \mathbb{E}_{\mathbb{Q}}[y\zeta_s(y)] \neq 0\}$. This contrasts with the sample complexity $O(d^{s^\star/2})$ suggested by the SQ lower bound.

²A rigorous derivation of the error term with multiple neurons and a general loss function $\ell$ can be found in the proof of Example 4.6 in Appendix C.2.2.

³This argument is not fully rigorous because $\mathbb{E}_{\mathbb{P}_{\theta^\star}}[\|g\|_2^2]$ also includes the "bias" $\|\mathbb{E}_{\mathbb{P}_{\theta^\star}}[g]\|_2^2$ besides the fluctuations, but it remains valid as long as $\|g\|_2^2$ is dominated by fluctuations from all $d$ directions at initialization.
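The SNR scaling above can be probed numerically. Below is a rough Monte Carlo sketch of our own (not the paper's experiment) that estimates the one-sample SNR of the signal part $g = y\,\sigma'(\langle z, \theta \rangle)\, z$ at a random initialization; the particular link and activation are illustrative choices.

```python
import numpy as np

def snr_one_sample(d, link, sigma_prime, n_mc=20_000, seed=0):
    """Monte Carlo estimate of SNR = E[<g, theta*>]^2 / E[||g||^2] for the
    signal part of the rescaled gradient, g = y * sigma'(<z, theta>) * z.

    theta is a random point on the sphere (near the equator w.r.t. theta*),
    matching the setting of the discussion around (3.1)-(3.2).
    """
    rng = np.random.default_rng(seed)
    theta_star = np.eye(d)[0]                      # wlog theta* = e_1
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)
    z = rng.standard_normal((n_mc, d))
    y = link(z @ theta_star)
    g = (y * sigma_prime(z @ theta))[:, None] * z  # one-sample gradients
    num = np.dot(g.mean(axis=0), theta_star) ** 2
    den = (g ** 2).sum(axis=1).mean()
    return num / den

# Example: the s* = 1 link y = x with sigma = tanh; SNR should scale like 1/d.
for d in (64, 256, 1024):
    print(d, snr_one_sample(d, link=lambda x: x,
                            sigma_prime=lambda u: 1 - np.tanh(u) ** 2))
```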
The above failure of vanilla online minibatch SGD exposes three key challenges:

(i) (**Non-polynomial**) How to handle the infinite sum of high-order terms in the likelihood ratio?
(ii) (**Low SNR**) How to enhance the SNR to achieve the SQ lower bound?
(iii) (**Zero correlation**) How to ensure that the algorithm still works if $\mathbb{E}_{\mathbb{Q}}[y\zeta_{s^\star}(y)] = 0$?

Below we discuss our techniques for addressing these challenges.

**Label transformation via general gradient oracle.** The idea for fixing the zero-correlation problem is to apply a nonlinear transformation $T: \mathbb{R} \to \mathbb{R}$ to $y$ such that $T(y)$ has nonzero correlation with $\zeta_{s^\star}(y)$. This label transformation technique has been widely used in the literature (Lu & Li, 2020; Mondelli & Montanari, 2018; Dudeja & Hsu, 2018; Chen & Meka, 2020; Damian et al., 2024). In particular, Lee et al. (2024) show that the label transformation can be automatically realized by running two gradient steps on the same batch, a technique termed _batch-reusing_ (Dandi et al., 2024; Arnaboldi et al., 2024). In this work, we study a more _general class of gradient-based methods_ with gradients of the form $g = \psi(y, \langle \theta, z \rangle)\, z$, which is an abstract form of the transformed gradient $T(y)\sigma'(\langle z, \theta \rangle)\, z$. The desired condition becomes $\mathbb{E}_{\mathbb{Q}}[\hat{\psi}_{s^\star - 1}(y)\zeta_{s^\star}(y)] \neq 0$, where $\hat{\psi}_s(y)$ is the $s$-th Hermite coefficient function of $\psi(y, x)$ in the Hermite basis of $x$. One particular way to obtain such a gradient is to use a modified loss function, similar to the approach in Joshi et al. (2024), while in our case the specific choice of $\psi$ is also related to the other two challenges, addressed as follows.

**Exploration by weight perturbation with high-pass activation.** The low-SNR challenge corresponds to the fact that points on the equator of $\mathbb{S}^{d-1}$ orthogonal to $\theta^\star$ are all saddle points in terms of $|\langle \theta, \theta^\star \rangle|$, and random initialization typically lies near this equator. To efficiently escape from such saddle points, we perform random weight perturbation, akin to the approach in Jin et al. (2017) for non-convex optimization. To understand the effectiveness of weight perturbation, we stick to the squared loss and the two-layer neural network for the following second-moment calculation.⁴ Specifically, suppose the activation $\sigma$ is high-pass with lowest degree $s^\star$, i.e., $\sigma(x) = \sum_{s \ge s^\star} \hat{\sigma}_s h_s(x)$, and consider for simplicity the case of odd $s^\star$. In the extreme case where $\theta$ is perturbed into i.i.d. pure noise $\theta_1, \ldots, \theta_L \sim \mathrm{Unif}(\mathbb{S}^{d-1})$, we compute the gradient for each $\theta_l$ and aggregate them into $g = L^{-1}(g_1 + \cdots + g_L)$. Using the properties of the Gaussian noise operator (see Appendix B for details), the second moment of this aggregated gradient satisfies
$$\mathbb{E}\big[\|g\|_2^2\big] \approx \frac{d}{L^2} \sum_{l, l' = 1}^{L} \mathbb{E}_{\mathbb{Q}}[y^2] \cdot \mathbb{E}_{\mathbb{Q}}[\sigma'(\langle z, \theta_l \rangle)\sigma'(\langle z, \theta_{l'} \rangle)] \approx d \sum_{s \ge s^\star} s \cdot \hat{\sigma}_s^2 \cdot \mathbb{E}_{\theta, \theta'}[\langle \theta, \theta' \rangle^{s-1}],$$
where $\theta, \theta'$ are drawn independently from $\mathrm{Unif}(\mathbb{S}^{d-1})$.
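A minimal sketch of the aggregation just described, in the extreme pure-noise case: draw $L$ weights uniformly on the sphere, evaluate the gradient oracle at each, and average. The concrete oracle $\psi(y, u) = y\,\sigma'(u)$ and the link below are illustrative assumptions, not the tuned choices of Algorithm 1.

```python
import numpy as np

def perturbed_gradient(z, y, L, sigma_prime, rng):
    """Aggregate gradients over L i.i.d. perturbed weights on the sphere:
    g = (1/L) * sum_l psi(y, <theta_l, z>) * z, with psi(y, u) = y*sigma'(u).

    Near-orthogonality of the theta_l suppresses the high-order Hermite
    terms, shrinking E[||g||^2] as in the second-moment calculation above.
    """
    d = z.shape[0]
    thetas = rng.standard_normal((L, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # Unif(S^{d-1})
    weights = y * sigma_prime(thetas @ z)                    # psi(y, <theta_l, z>)
    return weights.mean() * z                                # (1/L) sum_l g_l

rng = np.random.default_rng(0)
d, L = 256, 64
theta_star = np.eye(d)[0]
z = rng.standard_normal(d)
x = z @ theta_star
y = x ** 2 * np.exp(-(x ** 2))                               # an s* = 4 link
g = perturbed_gradient(z, y, L, lambda u: 1 - np.tanh(u) ** 2, rng)
```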
Since $\langle \theta, \theta' \rangle \approx d^{-1/2}$, we have $\mathbb{E}[\|g\|_2^2] \approx O(d^{-(s^\star - 3)/2})$, yielding a higher one-sample SNR, as the first moment remains unchanged, and pushing the sample complexity towards the SQ lower bound. Moreover, we also see from the above calculation that the weight perturbation resolves the non-polynomial issue thanks to the near-orthogonality of the perturbed weights. The above heuristics can be made rigorous for polynomially large $L$, thereby handling non-polynomial link and activation functions. Our approach also draws inspiration from the landscape smoothing method in Damian et al. (2024), but in contrast to their problem setup, we do not require full knowledge of the link distribution in advance. Instead, it suffices to know the generative exponent $s^\star$ to construct a high-pass activation function as well as the gradient oracle $\psi$. See Example 4.6 for a detailed discussion of this.

4 G RADIENT -B ASED A LGORITHM FOR U NIFORM P RIOR

We first present our method and results for the case of $\theta^\star \sim \mathrm{Unif}(\mathbb{S}^{d-1})$, or equivalently, when there is no structural information on $\theta^\star$. Motivated by the discussion in Section 3, we propose a gradient-based meta-algorithm (Algorithm 1) that can train a two-layer neural network to learn the unknown signal $\theta^\star$ with $\tilde{O}(d^{s^\star/2} \vee d)$ sample complexity, nearly matching the corresponding SQ lower bound.

⁴The label transformation only affects the second moment by a constant factor, as we show in Appendix C.1.
# S CALING T RANSFORMERS FOR L OW -B ITRATE H IGH Q UALITY S PEECH C ODING

**Julian D. Parker** _[∗]_ **Anton Smirnov** **Jordi Pons** **CJ Carr** **Zack Zukowski** **Zach Evans** **Xubo Liu** _[∗]_
Stability AI
_{_ julian.parker, xubo.liu _}_ @stability.ai

A BSTRACT

The tokenization of speech with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work we show that by scaling a transformer architecture with large parameter count to this problem, and applying a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bit-rates of 400 or 700 bits-per-second. The trained models strongly out-perform existing baselines in both objective and subjective tests.

1 I NTRODUCTION

Compressed coding of audio and speech data in digital format has been an active area of research since the 1970s, and reached particular prominence in the late 1990s with the emergence of mp3 (Painter & Spanias, 2000). Research into improving the sound quality and compression ratio of such codecs (mainly using signal processing techniques) has continued (Valin et al., 2016). The main purpose of these codecs is to improve the efficiency of transmission and storage of what is traditionally a data-intensive medium. In recent times, the research community began to apply the techniques of machine learning to the audio coding problem (Zeghidour et al., 2021). These models are referred to as _neural audio codecs_ (NACs). Initially the goal of these models was similar to traditional audio codecs, which aim to maximize compression and audio quality at low computational cost. However, a paradigm shift occurred with the proposal of powerful generative models utilizing the token sequences produced by these codecs (Borsos et al., 2023a; Wang et al., 2023; Borsos et al., 2023b). With the arrival of these models and the plethora of new use-cases they encompass, the design goals of NACs have shifted to be less concerned with computational complexity, and more concerned with pushing compression (especially in the temporal dimension) to the maximum level possible.

Our goal is to design a speech codec model in the spirit of this paradigm shift, whose primary purpose is to be used in combination with modern generative architectures for generation or understanding of speech signals. We make the observation that in a typical modern generative pipeline for speech there may be models totalling billions of parameters, a tiny fraction of which is usually dedicated to the codec model. There is therefore some headroom to increase the size of this component without overly impacting overall computational burden. This opens up scaling of the codec model size as a route to higher quality audio and higher compression levels. Neural audio codec models have largely been based on convolutional or recurrent architectures, which can be challenging to scale to larger model sizes without placing restrictions on the architecture. Even with such restrictions, the largest successful purely convolutional networks are generally below 1B parameters (Woo et al., 2023).
Transformers (Vaswani, 2017) have shown the ability to scale to billions of parameters in many domains (Hoffmann et al., 2022), but have not been fully utilized in a codec context yet. Recent work has also deployed transformer blocks in the bottleneck of a convolutional codec, showing improvements in compression ratio (Défossez et al., 2024). However, transformers have not so far been deployed as the main component of a codec model. One major contribution of this work is to design a new codec architecture that is predominantly transformer-based, and scale such an architecture into the 1B parameter range.

_∗_ Equal contribution

The majority of current codecs utilize a Residual Vector Quantizer (RVQ) (Zeghidour et al., 2021) in some form. This is effective in maximizing the expressivity of the bottleneck for a given bit-rate, but presents a number of challenges for generative modeling. One challenge is that it produces many parallel hierarchical streams of tokens. The causal relationship between the streams introduces a variety of complications that must be accounted for during training and inference (Borsos et al., 2023a; Copet et al., 2023; Défossez et al., 2024). An additional challenge is that VQs and RVQs can suffer from poor or inconsistent codebook utilization, making the process of learning the token distribution more difficult and prone to bias. In this work we address some of the issues of VQ and RVQ by instead adopting a quantization scheme derived from Finite Scalar Quantization (FSQ) (Mentzer et al., 2023), and a novel post-hoc method decomposing FSQ into low-order residuals. We demonstrate how these contributions enable the training of a waveform codec model that achieves high compression for speech, with ultra-low bitrates of 400 bps and 700 bps, while still preserving good audio quality. Code and models will be released at: github.com/Stability-AI/stable-codec.

2 R ELATED WORK

2.1 N EURAL A UDIO C ODECS

The dominant paradigm for training NACs has so far been based on the VQ-VAE structure, consisting of a classic autoencoder-like structure of encoder and decoder model with an information bottleneck placed in between them in the form of a quantizer. SoundStream (Zeghidour et al., 2021) was the first example of such a model aimed at handling varying bit-rates and types of audio with a single model. SoundStream introduced an adversarial loss in addition to reconstruction loss, and residual vector quantization (RVQ) for use in the bottleneck. EnCodec (Défossez et al., 2022) proposed a number of improvements to this formulation and achieved higher audio quality. SpeechTokenizer (Zhang et al., 2023b), building on EnCodec, introduces the use of semantic tokens in the first channel of discrete RVQ codecs, bridging the gap between text tokens and acoustic tokens for speech coding. DAC (also known as improved RVQGAN) (Kumar et al., 2023) investigated several design choices in this type of NAC, including the introduction of periodic inductive biases and improvements in codebook utilization. This approach achieved notable performance, compressing 44.1 kHz audio into discrete codes at an 8 kbps bitrate. While DAC delivers high-quality reconstruction at this compression level, its bitrate remains relatively high for generative audio modeling, requiring over 700 tokens per second for 44.1 kHz audio due to the large number of residual tokens.
2.2 L OW -B ITRATE S PEECH C ODING

Recently, there has been growing interest (Li et al., 2024; Liu et al., 2024a; Défossez et al., 2024) in optimizing bitrate efficiency in NACs while maintaining high reconstruction quality. Such low-bitrate, high-fidelity codecs are particularly crucial for improving efficiency and reducing latency in generative audio modeling. However, achieving extremely low bitrates (such as below 1 kbps for 16 kHz audio) remains challenging due to the complexities involved in accurately compressing high-frequency components in the audio waveform. SingleCodec (Li et al., 2024) addressed neural speech coding by proposing an enhanced VQ-VAE combined with a bidirectional LSTM for mel-spectrogram compression, achieving a notably low bandwidth of 304 bps for 24 kHz speech mel-spectrogram coding, followed by BigVGAN (Lee et al., 2022) as a vocoder for waveform reconstruction. Inspired by recent advances in generative models, SemantiCodec (Liu et al., 2024a) offers a different approach by leveraging a latent diffusion model to generate latent features from a pre-trained mel-spectrogram VAE (which also requires a vocoder for waveform reconstruction). The diffusion model is conditioned on k-means clustered audio tokens derived from a pre-trained AudioMAE encoder. SemantiCodec supports low bitrates ranging from 0.31 kbps to 1.43 kbps for 16 kHz speech mel-spectrogram coding, offering a promising solution for maintaining high reconstruction quality at extremely low bitrates. Mimi (Défossez et al., 2024) is a recent end-to-end waveform codec for speech based on SoundStream and EnCodec. Mimi introduces transformer layers around the RVQ bottleneck between the convolutional encoder and decoder. By scaling its training data to 7 million hours, Mimi has achieved impressive performance in neural speech coding, operating at 1.1 kbps with a 12.5 Hz latent for 24 kHz speech in a causal way, utilizing 8 tokens per latent frame (100 tokens per second).

2.3 G ENERATIVE MODELS FOR AUDIO AND SPEECH

Autoregressive models can operate directly on quantized audio waveforms, but can be slow during inference (Oord et al., 2016). Recent models, such as VALL-E (Wang et al., 2023), AudioLM (Borsos et al., 2023a), MusicGen (Copet et al., 2023), and VQ-VAE-based approaches for sound synthesis (Liu et al., 2021), improve efficiency by instead modeling quantized latent sequences. Non-autoregressive models (Oord et al., 2018) and adversarial audio synthesis (Donahue et al., 2018) were developed to overcome the inefficiencies of autoregressive models. Recent non-autoregressive models such as VampNet (Garcia et al., 2023), SoundStorm (Borsos et al., 2023b), or StemGen (Parker et al., 2024) are based on masked token modeling (Chang et al., 2022). End-to-end diffusion modeling can also be computationally demanding (Rouard & Hadjeres, 2021; Pascual et al., 2023). Recent efficiency improvements rely on latent diffusion models (Liu et al., 2023; 2024b; Yuan et al., 2024; Evans et al., 2024a;b;c; Yang et al., 2024), which often rely on VAEs for latent encoding. The recent growth of multi-modal and speech-first generative models such as SpeechGPT (Zhang et al., 2023a), LLaMA3 (Dubey et al., 2024) and Moshi (Défossez et al., 2024) is also heavily reliant on tokenized representations of speech and audio. As such, learning quantized or continuous latent spaces with codecs is crucial for advancing audio and speech generation.
3 A RCHITECTURE

The architecture of the codec is shown in overview form in Fig. 1. We will discuss the design of the encoder and decoder sections and the FSQ-based bottleneck separately.

3.1 E NCODER AND D ECODER

Our encoder and decoder structures are designed to look very similar to a standard transformer architecture. Both consist of multiple blocks, each operating at a specific temporal resolution. These sections consist of a strided 1d dense convolution layer (for downsampling in the encoder) or its transposed equivalent (for upsampling in the decoder) and a chain of relatively standard transformer blocks. The only difference between the encoder and decoder architecture is that the downsampling or upsampling layer is placed in a different location—in the encoder at the start of the block, and in the decoder at the end of the block. This maintains symmetry of the architecture. The stacked transformer blocks consist of a self-attention section and a feedforward section, with pre-norm placement of layer norm blocks. The layer norm blocks are configured with a higher-than-standard _ϵ_, as discussed in Appendix B.1. In addition, the self-attention utilizes QK-norm. The feedforward block consists of a reverse bottleneck with a gated MLP, utilizing the SiLU activation function. Both attention blocks and feedforward blocks are followed by LayerScale (Touvron et al., 2021), to further stabilize training. The self-attention uses a sliding window to restrict the receptive field and aid generalization of the architecture to arbitrary-length sequences. The self-attention mechanism incorporates Rotary Positional Embeddings (RoPE) (Su et al., 2024) and operates without a causal attention mask. However, a causal variant suited for streaming purposes is possible with relatively minor modifications, as described in Appendix A.4. We further examine the model's receptive field, causality, and latency in Appendix B.2.

In contrast to convolutional architectures, we want the majority of temporal downsampling or upsampling of the signal to occur at the input or output of the architecture. This is to avoid feeding very small dimension embeddings to the transformer blocks, and also to limit sequence length. Only minimal further resampling happens within the architecture using the strided convolutions and transposed convolutions in each encoder or decoder block. To achieve this we can use any filter-bank representation of the input signal which conforms to perfect reconstruction criteria. The details of this choice are discussed in Appendix B.4.

Figure 1: Architecture of the proposed model. Detail is shown for the encoder block and sub-blocks. The decoder block is configured identically to the encoder block, with the exception of the strided convolution, which is replaced with its transposed equivalent and moved to the end of the _T_ _m_ blocks.

Following the conclusions of this analysis and taking inspiration from Vision Transformer (ViT) architectures (Dosovitskiy et al., 2021), we utilize sequence-wise patching of the signal before passing it to the encoder. Additionally we utilize dense 1d convolutional blocks at the inputs and outputs of the encoder and decoder structure. These blocks map between the embedding dimension used within the transformer (which is uniform) and the required dimension for the input/output patches and the latent representation used in the bottleneck. All convolutional layers use a weight-normalized parameterization. We call the resulting architecture a Transformer Audio AutoEncoder (TAAE).
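A minimal PyTorch sketch of one such block is given below. It keeps the ingredients named above (pre-norm, gated SiLU MLP, LayerScale) but is a simplification: RoPE, QK-norm, and sliding-window attention are omitted for brevity, and all dimensions, the LayerScale initialization, and the `ln_eps` value are our own illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerScale(nn.Module):
    """Per-channel learnable residual scaling (Touvron et al., 2021)."""
    def __init__(self, dim, init=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(init * torch.ones(dim))

    def forward(self, x):
        return self.gamma * x

class GatedMLP(nn.Module):
    """Reverse-bottleneck feedforward with a SiLU-gated branch."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        hidden = expansion * dim
        self.in_proj = nn.Linear(dim, 2 * hidden)
        self.out_proj = nn.Linear(hidden, dim)

    def forward(self, x):
        u, v = self.in_proj(x).chunk(2, dim=-1)
        return self.out_proj(F.silu(u) * v)   # gated activation

class TAAEBlock(nn.Module):
    """Simplified pre-norm transformer block: attention + gated MLP,
    each followed by LayerScale before the residual addition."""
    def __init__(self, dim=512, heads=8, ln_eps=1e-4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, eps=ln_eps)  # higher-than-default eps
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ls1 = LayerScale(dim)
        self.norm2 = nn.LayerNorm(dim, eps=ln_eps)
        self.mlp = GatedMLP(dim)
        self.ls2 = LayerScale(dim)

    def forward(self, x):
        h = self.norm1(x)
        h, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.ls1(h)                         # pre-norm residual
        x = x + self.ls2(self.mlp(self.norm2(x)))
        return x

x = torch.randn(2, 100, 512)   # (batch, latent frames, embedding dim)
y = TAAEBlock()(x)
```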
A major distinction between TAAE and traditional CNN-based codecs is the extensive use of transformer layers in TAAE, which results in a larger model size compared to CNN-based codecs (Zeghidour et al., 2021; Défossez et al., 2022; Kumar et al., 2023). CNN-based models leverage convolutional operations, which offer a strong inductive bias and high parameter efficiency. In contrast, the TAAE uses a transformer-based architecture, providing enhanced scalability, albeit with reduced parameter efficiency. An explanation of these differences and a discussion comparing convolution and attention mechanisms can be found in App. B.3.

3.2 D ISCRETE BOTTLENECK

In order to mitigate the inherent problems of VQ and RVQ quantization, we employ a modified version of Finite Scalar Quantization (FSQ) (Mentzer et al., 2023). Instead of a learnable codebook of embeddings connected to particular tokens as in VQ/RVQ, FSQ derives a token sequence by projecting the latent representation to a low-dimensional space, then scalar quantizing each dimension of this space in regular intervals. Each combination of quantized levels can then be mapped to a unique integer value, producing the tokenization. FSQ is known to exhibit almost full codebook utilisation even with very large codebook sizes (e.g., $2^{18}$) (Mentzer et al., 2023). We make some modifications to the FSQ algorithm to preserve symmetry of the quantized latents around the origin for any number of levels. Our formulation for the scalar quantizer function $Q_L$ for a given fixed number of levels $L$, applied to some scalar $x$, is given by:
$$Q_L(x) = \frac{2}{L-1}\left\lfloor (L-1)\,\frac{\tanh(x) + 1}{2} + \frac{1}{2} \right\rfloor - 1. \quad (1)$$
This scalar quantization function is applied (potentially with a different $L$ per dimension) to the elements of a latent vector **z** to produce the quantized latent. To train with this scalar quantizer, we use a hybrid approach. Some percentage of the time we emulate the effect of quantization by adding uniform noise (Brendel et al., 2024), giving an approximate quantization function:
$$Q_L(x) \approx \tanh(x) + \frac{U\{-1, 1\}}{L-1}, \quad (2)$$
which contains no explicit quantization. We also utilize straight-through gradient estimation. We find that randomly mixing these two approaches along with unmodified latents produces better performance compared to utilizing only one or the other. This random mixing is achieved by starting with the unmodified latents, then replacing elements according to a random mask derived from a Bernoulli distribution with a parameter of 0.5. This procedure is performed twice, once for elements with the straight-through approximation and once with the noise-based approximation. During training we also randomly select uniformly between a pre-selected set of quantization level numbers $L$. This is similar to the quantizer-dropout process used in training RVQ-based bottlenecks, and allows us to trade off quality and codebook size at inference time.
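A short sketch of Eqs. (1)-(2) and the hybrid mixing follows. Two reading choices are our assumptions: $U\{-1,1\}$ is taken as the continuous uniform distribution on $[-1, 1]$, and "unmodified latents" is taken to mean the tanh-compressed (but unquantized) values.

```python
import torch

def fsq_quantize(x, L):
    """Symmetric FSQ scalar quantizer, Eq. (1):
    Q_L(x) = 2/(L-1) * floor((L-1)*(tanh(x)+1)/2 + 1/2) - 1.
    Produces L levels, evenly spaced in [-1, 1] and symmetric about 0.
    """
    u = torch.tanh(x)
    return (2.0 / (L - 1)) * torch.floor((L - 1) * (u + 1) / 2 + 0.5) - 1.0

def fsq_train_step(x, L, p=0.5):
    """Hybrid training-time bottleneck: start from the unquantized tanh
    latents, then twice replace a Bernoulli(p) mask of elements — once with
    the straight-through hard quantization, once with the uniform-noise
    emulation Q_L(x) ~ tanh(x) + U[-1,1]/(L-1) of Eq. (2).
    """
    u = torch.tanh(x)
    hard = fsq_quantize(x, L)
    st = u + (hard - u).detach()     # straight-through gradient estimator
    noisy = u + (2 * torch.rand_like(u) - 1) / (L - 1)
    mask = torch.bernoulli(torch.full_like(u, p)).bool()
    out = torch.where(mask, st, u)
    mask = torch.bernoulli(torch.full_like(u, p)).bool()
    out = torch.where(mask, noisy, out)
    return out

z = torch.randn(4, 8)                       # low-dimensional projected latents
levels = fsq_quantize(z, L=9)               # inference-time quantization
indices = ((levels + 1) * (9 - 1) / 2).long()  # per-dimension integer codes
```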
3.2.1 P OST - TRAINING BOTTLENECK MODIFICATION

The formulation of FSQ used here has many post-training possibilities for adjusting the reconstruction quality against the number and range of the discrete tokens. Firstly, the regularization provided by training the FSQ bottleneck with uniform noise allows the number of levels for each dimension of the FSQ to be modified after training. As long as the number of levels is greater than or equal to the smallest seen during training, the error produced by the quantization is within the bounds previously seen and therefore is still valid.

By default FSQ produces one token per time-step. In general this is advantageous for our purposes. However, if the use-case requires it, we can decompose this single token post-hoc into multiple tokens using either a parallel partitioning of the dimensions, or (for particular choices of quantization-level number) a hierarchical residual set of tokens à la RVQ. Parallel partitioning introduces a bi-directional causal relationship between tokens which is unexplored in the generative modeling context, and therefore for this work we concentrate on the hierarchical residual decomposition. Residual FSQ can be applied post-hoc to a bottleneck trained with a single quantizer, but requires some restrictions. Namely, it is required to only use numbers of levels conforming to $L = 2^n + 1, n \in \mathbb{Z}^+$. This sequence of levels can be derived by starting from levels at $\{-1, 0, 1\}$ ($L = 3$), and continually subdividing the intervals between levels exactly at the halfway point. These level configurations are shown up to $n = 3$ in Tab. 1. We denote the set containing the positions corresponding to a particular number of levels $L$ as $\ell_L$. We can clearly see by examination that each larger set is a superset of the previous sets, i.e., $\ell_{2^n+1} \supset \ell_{2^{n-1}+1}$, and also that we can construct any particular set of levels using the Minkowski sum of smaller $\ell_3$ sets, progressively halved, e.g., $\ell_3 + \frac{\ell_3}{2} \supset \ell_5$, $\ell_3 + \frac{\ell_3}{2} + \frac{\ell_3}{4} \supset \ell_9$ (albeit with extraneous new values outside the original range). A similar analysis holds for other level numbers conforming to the restriction given above, with the scalings consequently changed. We can utilize this property to do post-hoc residual quantization, using the standard formulation of a residual quantizer for a given latent **z**:
$$\hat{\mathbf{z}} = \sum_{k=0}^{K} \mathbf{q}_k, \qquad \mathbf{q}_0 = \kappa_0(\mathbf{z}), \qquad \mathbf{q}_k = \kappa_k\Big(\mathbf{z} - \sum_{i=0}^{k-1} \mathbf{q}_i\Big), \quad (3)$$
where $\mathbf{q}_k$ denote the quantizer outputs, and $\kappa_k$ denote the quantizer functions themselves, which we define in terms of our scalar quantizer function with levels $L = 2^n + 1, n \in \mathbb{Z}^+$, as:
$$\kappa_k(\mathbf{z}) = \frac{Q_{2^n+1}\big((2^n)^k \mathbf{z}\big)}{(2^n)^k}. \quad (4)$$
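A sketch of the post-hoc residual decomposition of Eqs. (3)-(4), reusing the `fsq_quantize` from the previous sketch. One detail we gloss over is that $Q_L$ applies its tanh at every residual stage; for small residuals tanh is close to the identity, so this matters little in practice, but the real implementation may differ.

```python
import torch

def fsq_quantize(x, L):
    """Q_L from Eq. (1); see the previous sketch."""
    u = torch.tanh(x)
    return (2.0 / (L - 1)) * torch.floor((L - 1) * (u + 1) / 2 + 0.5) - 1.0

def residual_fsq(z, n=1, K=2):
    """Post-hoc residual decomposition of a single FSQ bottleneck.

    Uses L = 2^n + 1 levels; kappa_k(z) = Q_{2^n+1}((2^n)^k * z) / (2^n)^k,
    so each stage quantizes the remaining residual on a grid 2^n times finer,
    matching the Minkowski-sum construction of the level sets above.
    """
    L = 2 ** n + 1
    qs, residual = [], z
    for k in range(K + 1):
        scale = (2 ** n) ** k
        q_k = fsq_quantize(scale * residual, L) / scale
        qs.append(q_k)
        residual = residual - q_k
    z_hat = sum(qs)                  # \hat{z} = sum_k q_k, Eq. (3)
    return z_hat, qs

z = torch.tanh(torch.randn(4, 8))    # latents already compressed into (-1, 1)
z_hat, qs = residual_fsq(z, n=1, K=3)
```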
# Q-SFT: Q-L EARNING FOR L ANGUAGE M ODELS VIA S UPERVISED F INE -T UNING

**Joey Hong** **Anca Dragan** **Sergey Levine**
University of California, Berkeley
{jxihong,anca,svlevine}@berkeley.edu

A BSTRACT

Value-based reinforcement learning (RL) can in principle learn effective policies for a wide range of multi-turn problems, from games to dialogue to robotic control, including via offline RL from static previously collected datasets. However, despite the widespread use of policy gradient methods to train large language models for single-turn tasks (e.g., question answering), value-based methods for multi-turn RL in an off-policy or offline setting have proven particularly challenging to scale to the setting of large language models. This setting requires effectively leveraging pretraining, scaling to large architectures with billions of parameters, and training on large datasets, all of which represent major challenges for current value-based RL methods. In this work, we propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning (SFT) problem where the probabilities of tokens directly translate to Q-values. In this way we obtain an algorithm that smoothly transitions from maximizing the likelihood of the data during pretraining to learning a near-optimal Q-function during fine-tuning. Our algorithm has strong theoretical foundations, enjoying performance bounds similar to state-of-the-art Q-learning methods, while in practice utilizing an objective that closely resembles SFT. Because of this, our approach can enjoy the full benefits of the pretraining of language models, without the need to reinitialize any weights before RL fine-tuning, and without the need to initialize new heads for predicting values or advantages. Empirically, we evaluate our method on both pretrained LLMs and VLMs, on a variety of tasks including both natural language dialogue and robotic manipulation and navigation from images.

1 I NTRODUCTION

Recently, some of the most impressive feats in AI have been performed through language models, which are pretrained on large-scale data and adapted to a wide range of downstream tasks (Bommasani et al., 2021). Many of these tasks, such as natural language dialogue or robotic control, require complex sequential decision-making. Reinforcement learning (RL) (Sutton & Barto, 2018) is a powerful paradigm for solving such tasks (Mnih et al., 2013; Silver et al., 2017; AlphaStar, 2019). Furthermore, offline RL (Levine et al., 2020) has been shown to do so from only static datasets, such as suboptimal demonstrations from any unknown behavior policy, without the need for any additional interaction. Though offline RL has been used to fine-tune large language models (LLMs) or vision-language models (VLMs) (Ouyang et al., 2022; Bai et al., 2022b), its usefulness has been limited to generating better single responses rather than multi-turn, sequential scenarios where RL should theoretically shine. For example, across various dialogue tasks, offline RL fine-tuning of LLMs does not reliably outperform supervised fine-tuning (SFT) (Sodhi et al., 2023; Abdulhai et al., 2023). Furthermore, in the realm of navigation and control, popular VLMs are still fine-tuned for multi-task control using SFT (Brohan et al., 2023b;a; Collaboration et al., 2024).
Single-turn problems, such as answering questions, can be tackled with policy gradient methods (Ouyang et al., 2022; Rafailov et al., 2023), but sequential or _multi-turn_ problems, such as dialogue or robotic control, require sample-efficient methods that can utilize data to reason about the dynamics of the problem, which typically requires training value functions (Abdulhai et al., 2023; Hong et al., 2023). This is because, in multi-turn problems, the agent must plan its actions to optimize some long-term objective.

Figure 1: Our proposed approach allows us to directly leverage the logits from a pretrained model to train value functions. Prior approaches require separately initializing a value head.

Although there are many effective value-based RL methods that could be applied to LLMs and VLMs, in practice such methods have been difficult to adapt to these models with the same effectiveness as policy gradients. We posit that this is due in part to a mismatch between the pretraining objective that these models use, i.e., maximum likelihood estimation, and the fine-tuning objective necessary to train value functions. This discrepancy means that fine-tuning using multi-turn RL may require discarding some of the knowledge gained by maximum likelihood pretraining of LLMs and VLMs, including a broad understanding of language, vision, and even sequential reasoning.

Specifically, we hypothesize two reasons why fine-tuning foundation models using offline RL is unsuitable in practice. First, typical offline RL methods require regressing value functions that estimate the value of actions, such as utterances in dialogue. Such algorithms, known as Q-learning, have achieved impressive results when applied to small networks (AlphaStar, 2019; Mnih et al., 2013), but surprisingly attain disappointing performance when scaled to larger ones (Sodhi et al., 2023). Recent work has attributed this lack of scaling to instability in the value-learning objective, namely in regression towards non-stationary values (Farebrother et al., 2024). More importantly, a major advantage of SFT is the potential to leverage existing capabilities of large pretrained models to drastically improve efficiency when learning a new downstream task. However, language models are trained to predict likelihoods, but Q-learning instead aims to predict action values; therefore, when fine-tuning, Q-learning algorithms discard the learned likelihoods in favor of only utilizing the underlying representations, which eliminates some of the useful prior knowledge within the pretrained models. We illustrate this in Figure 1, where value functions must be trained via a new head with reset weights.

In this work, we propose a new algorithm that remedies both drawbacks. Our key insight is simple: _by adding weights to the traditional supervised fine-tuning objective, we can learn probabilities that conservatively estimate the value function instead of the behavior policy_. In practice, our approach is implemented by adding weights to the maximum likelihood objective, yielding a _weighted cross-entropy loss_ where weights are target action values computed from the Bellman recurrence relations. By using this objective, we are able to avoid the unstable regression objective commonly used in value learning, as well as directly leverage the initial likelihoods resulting from large-scale pretraining.
Theoretically, we can show that such an objective results in learned likelihoods that are a product of the data distribution and Q-values, and that our approach is principled and results in performance bounds competitive with other state-of-the-art approaches. Empirically, we demonstrate the effectiveness of our method on a variety of tasks involving both LLMs, such as language games and dialogue, as well as VLMs, such as navigation and robotic manipulation.

2 R ELATED W ORK

Much of the recent work on reinforcement learning (RL) fine-tuning of LLMs and VLMs uses policy gradient methods and reward models learned from human feedback (e.g., RLHF) (Ziegler et al., 2020; Stiennon et al., 2020; Wu et al., 2021; Nakano et al., 2022; Bai et al., 2022a; Christiano et al., 2023; Rafailov et al., 2023), or from handcrafted AI systems (e.g., RLAIF) (Bai et al., 2022b), to generate better responses to various queries. However, there is a large discrepancy between the capabilities required to produce self-contained responses in single-step tasks, such as question-answering, and responses in multi-turn scenarios, such as dialogue. Namely, the latter requires planning to optimize a long-term objective. Various prior works provide evidence that existing fine-tuning methods are insufficient to endow language models with such planning capabilities (Bachmann & Nagarajan, 2024).

In principle, value-based RL (Lange et al., 2012; Levine et al., 2020), specifically Q-learning, can learn effective policies for multi-step tasks that outperform pure imitation via supervised fine-tuning (Kumar et al., 2022). Many offline RL algorithms exist that reap the benefits of value-based RL using only static datasets, such as those currently used to fine-tune language models. Though offline RL algorithms require handling distribution shift (Kumar et al., 2019), where the learned policy selects out-of-distribution (OOD) actions with unpredictable consequences, many methods exist that effectively tackle this challenge (Kumar et al., 2020; Kostrikov et al., 2021; Kidambi et al., 2020; Yu et al., 2020; 2021). Due to the promising benefits of offline RL for learning from demonstrations, algorithms have been proposed for learning LLM policies with some success in robotic manipulation (Chebotar et al., 2023) and language tasks (Snell et al., 2022). However, recent evaluation has shown that, on a variety of natural language tasks, Q-learning approaches are often outperformed by supervised ones (Sodhi et al., 2023; Abdulhai et al., 2023). We hypothesize this is due to the mismatch between value-based RL fine-tuning and maximum likelihood pretraining, and propose a new approach that remedies this core issue.

There also exists a paradigm of supervised approaches called return-conditioned supervised learning (RCSL), which learns policies conditioned on return via a supervised learning objective (Brandfonbrener et al., 2022). The most notable algorithm is Decision Transformer (DT) (Chen et al., 2021), which can train LLM policies that outperform traditional offline RL methods that rely on Q-learning. Though it performs well in practice, there is theoretical evidence that the performance ceiling of such algorithms is below that of value-based offline RL. Specifically, Brandfonbrener et al. (2022) showed that DT and similar approaches can only identify the optimal policy under stronger conditions on the offline data than value-based RL.
Our proposed algorithm is similar to RCSL in that we also use a maximum likelihood loss, but we learn values and reap the theoretical benefits of other value-based methods. Recently, prior attempts have also been made to improve value-based RL algorithms for fine-tuning language models. Chebotar et al. (2023) propose Q-learning with transformer value functions in manipulation and control tasks by converting actions to sequences of tokens. We adopt their insight when evaluating on robotics tasks, but use a fundamentally different objective to learn values. Most similar to ours, Farebrother et al. (2024) propose to replace the regression loss of Q-learning with a cross-entropy loss by casting value learning as a classification problem. However, while the proposed method also converts value functions to distributions, these likelihoods are not naturally derived from the logits obtained from large-scale pretraining, and must instead be learned from scratch via a separate head with reset weights. Therefore, like traditional Q-learning, they also suffer from being unable to leverage pretraining efficiently, unlike our approach, whose likelihoods are directly initialized by the logits of pretrained LLMs or VLMs.

3 P RELIMINARIES

Our work proposes a new RL algorithm for fine-tuning language models, specifically for multi-turn tasks such as dialogue or manipulation and control. Language models operate over a discrete vocabulary of tokens $\mathcal{V}$, and are trained to maximize the likelihood of the next token $x_{m+1}$ given an input sequence $(x_0, \ldots, x_m)$ of tokens, given by $\pi(x_{m+1} \mid x_0, \ldots, x_m)$. In a multi-turn task such as dialogue, the tokens are words that are chained to form utterances, and choosing the best next token requires complex, sequential reasoning to understand the utterances so far and plan for the next one. Traditionally, this kind of reasoning can be learned via reinforcement learning (RL).

**RL fundamentals.** RL aims to optimize agents that interact with a Markov Decision Process (MDP) defined by a tuple $(\mathcal{S}, \mathcal{A}, P, r, \mu_1, \gamma)$, where $\mathcal{S}$ represents the set of all possible states, $\mathcal{A}$ is the set of possible actions, $\mu_1$ is the initial state distribution, and $\gamma$ is the discount factor. When action $a \in \mathcal{A}$ is executed at state $s \in \mathcal{S}$, the next state is generated according to $s' \sim P(\cdot \mid s, a)$, and the agent receives a stochastic reward with mean $r(s, a) \in [0, 1]$. The Q-function $Q^\pi(s, a)$ for a policy $\pi(\cdot \mid s)$ represents the discounted long-term reward attained by executing $a$ given observation history $s$ and then following policy $\pi$ thereafter. $Q^\pi$ satisfies the Bellman recurrence:
$$Q^\pi(s, a) = r(s, a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a),\, a' \sim \pi(\cdot \mid s')}[Q^\pi(s', a')].$$
The value function $V^\pi$ considers the expectation of the Q-function over the policy: $V^\pi(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}[Q^\pi(s, a)]$. Meanwhile, the Q-function of the optimal policy, $Q^*$, satisfies:
$$Q^*(s, a) = r(s, a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\big[\max_{a'} Q^*(s', a')\big],$$
and the optimal value function is $V^*(s) = \max_a Q^*(s, a)$. Finally, the expected cumulative reward is given by $J(\pi) = \mathbb{E}_{s_1 \sim \mu_1}[V^\pi(s_1)]$. The goal of RL is to optimize a policy $\pi(\cdot \mid s)$ that maximizes this cumulative reward.
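For intuition, the optimal Bellman recurrence can be solved exactly on a tiny tabular MDP by fixed-point iteration. This toy sketch is ours and not part of the paper; the MDP itself is arbitrary.

```python
import numpy as np

def q_value_iteration(P, r, gamma=0.9, iters=200):
    """Solve Q*(s,a) = r(s,a) + gamma * E_{s'~P(.|s,a)}[max_a' Q*(s',a')]
    by fixed-point iteration on a small tabular MDP.

    P has shape (S, A, S) with P[s, a, s'] = transition probability;
    r has shape (S, A).
    """
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)        # V*(s') = max_a' Q*(s', a')
        Q = r + gamma * (P @ V)  # Bellman optimality backup
    return Q

# Toy 2-state, 2-action MDP: action 0 stays, action 1 switches state.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[1.0, 0.0], [0.0, 1.0]]])
r = np.array([[0.0, 1.0],
              [1.0, 0.0]])
Q_star = q_value_iteration(P, r)
```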
In offline RL, we are provided with a dataset $\mathcal{D} = \{(s_i, a_i, r_i, s'_i)\}_{i=1}^{N}$ of size $|\mathcal{D}| = N$. We assume that the dataset $\mathcal{D}$ is generated i.i.d. from an effective behavior policy $\pi_\beta(a \mid s)$. Many state-of-the-art offline RL methods build on Q-learning, which trains a Q-function with parameters $\theta$ on dataset $\mathcal{D}$ by minimizing the temporal difference (TD) error:
$$\mathcal{L}_{TD}(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\Big[\big(r + \gamma \max_{a'} Q_{\bar{\theta}}(s', a') - Q_\theta(s, a)\big)^2\Big], \quad (1)$$
where $Q_\theta(s, a)$ is the parameterized Q-function, and $\bar{\theta}$ parameterizes a target network and is a slow-moving copy of $\theta$.

**RL for language generation.** Language generation can be viewed as an MDP, where states are sequences of tokens from a finite vocabulary $\mathcal{V}$ (Ramamurthy et al., 2023). All tokens that the agent initially observes are used as our initial state, $s_0 = (x_0, \ldots, x_m)$, where $x_i \in \mathcal{V}$ for all $i \in [m]$. At timestep $t$, an action $a_t \in \mathcal{V}$ is some token in the vocabulary. As long as $a_t$ is not a special end-of-sequence <EOS> token, the transition function deterministically appends $a_t$ to state $s_t$ to form $s_{t+1}$. Otherwise, the agent observes (potentially stochastic) responses from the environment, i.e., utterances by conversational partners in the case of multi-turn dialogue, $o_t = (y_0, \ldots, y_n)$, which also consist of tokens in the vocabulary; then, the transition function appends both $a_t$ and responses $o_t$ to state $s_t$. This continues until the last timestep $T$, where we obtain a state $s_T$ and the agent receives a deterministic reward $r(s_T)$. It becomes clear that a policy $\pi(a \mid s)$ is a language model that parses all the language tokens seen so far as the state, and computes a distribution over tokens as the next action to take. Recently, RL has been considered for learning policies that are LLMs or VLMs for difficult tasks such as generalist robotic manipulation or dialogue. Because value learning is very different from traditional next-token prediction, performing such fine-tuning requires reparameterizing the pretrained language model, such as by adding value heads with independently initialized weights (Snell et al., 2023).
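For reference, a minimal PyTorch sketch of the TD objective in Eq. (1); the tiny MLP Q-function and the batch shapes are illustrative assumptions only.

```python
import torch

def td_loss(q_net, target_q_net, batch, gamma=0.99):
    """Temporal-difference loss of Eq. (1):
    E[(r + gamma * max_a' Q_theta_bar(s', a') - Q_theta(s, a))^2],
    with a slow-moving target network providing the bootstrap values.
    """
    s, a, r, s_next = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q_theta(s, a)
    with torch.no_grad():                                 # target not differentiated
        q_next = target_q_net(s_next).max(dim=1).values
        target = r + gamma * q_next
    return ((target - q_sa) ** 2).mean()

# Example: a tiny MLP Q-function over 16-dim states and 4 actions.
q_net = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 4))
target_q_net = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                                   torch.nn.Linear(64, 4))
target_q_net.load_state_dict(q_net.state_dict())   # slow-moving copy of theta
batch = (torch.randn(8, 16), torch.randint(0, 4, (8,)),
         torch.rand(8), torch.randn(8, 16))
loss = td_loss(q_net, target_q_net, batch)
```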
4 Q-L EARNING VIA S UPERVISED F INE -T UNING

We will now describe our proposed offline RL algorithm, which we dub Q-learning via Supervised Fine-tuning (Q-SFT). Concretely, instead of training value functions by fitting Q-values to their Bellman backup targets via a regression loss, we fine-tune directly on the probabilities learned from large-scale pretraining—like in SFT—via a _weighted cross-entropy_ loss, such that the resulting probabilities also capture the desired Q-values.

4.1 L EARNING V ALUES AS P ROBABILITIES

Recently, large neural networks such as LLMs and VLMs have been successfully trained and fine-tuned on demonstration data using supervised learning. If we adopt the earlier multi-turn formalism in Section 3 and view these models as agents, such approaches train a policy $\pi_\phi(a \mid s)$ with parameters $\phi$ by minimizing the cross-entropy loss:
$$\mathcal{L}_{CE}(\phi) = -\mathbb{E}_{(s,a) \sim \mathcal{D}}[\log \pi_\phi(a \mid s)]. \quad (2)$$
Because the resulting policy approximates the behavior policy, $\pi_\phi(a \mid s) \approx \pi_\beta(a \mid s)$, this approach has also been dubbed behavioral cloning (BC). While BC scales well to complex tasks and networks, the resulting policy can only be as good as the behavior policy, which is insufficient when the dataset is not curated from expert demonstrations. In contrast, Q-learning enables the learned policy to greatly outperform the behavior policy (Kumar et al., 2022), by instead having the policy behave according to the estimated Q-values. This can be done via _policy extraction_, such as $\pi(a \mid s) = \mathbb{1}[a = \operatorname{argmax}_{a'} Q_\theta(s, a')]$ or the entropy-regularized variant $\pi(a \mid s) \propto \exp(Q_\theta(s, a))$. However, as alluded to earlier, the Q-function $Q_\theta(s, a)$ cannot be naturally derived from pretrained language models, which output probabilities, and requires modifying their architectures as in Figure 1.

Our goal is to provide a way to learn Q-values for multi-turn RL problems with language models such that the Q-function can be initialized from a model pretrained via supervised learning (i.e., maximum likelihood estimation), _without_ the need to reinitialize weights or add new heads to represent the Q-values. An autoregressive sequence model (e.g., a transformer) outputs the probability of each token conditioned on the past history. In order to avoid adding new heads or reinitializing weights, the Q-values have to also be represented by these same probabilities. Furthermore, to maximize transfer from pretraining, we would like our proposed _loss function_ to also closely resemble the maximum likelihood loss function used for pretraining. We propose a simple modification to the BC objective in Equation 2. Our modification hinges on the following observation. Let $p_\theta(a \mid s)$ represent the probability of action $a$ under state $s$, optimized via the _weighted_ cross-entropy loss
$$\mathcal{L}_{WCE}(\theta) = -\mathbb{E}_{(s,a) \sim \mathcal{D}}[w(s, a) \log p_\theta(a \mid s) + (1 - w(s, a)) \log p_\theta(a_d \mid s)],$$
where $w(s, a)$ are weights, and $a_d$ is some dummy action. The resulting probabilities that optimize this objective approximate $p_\theta(a \mid s) \approx w(s, a)\,\pi_\beta(a \mid s)$ for all $a \neq a_d$. Our goal is, via a proper choice of weights, to learn probabilities that are conservative estimates of the true Q-values, $p_\theta(a \mid s) \approx Q^*(s, a)$. In order to do so, we require the following assumption on bounded total rewards:

**Assumption 4.1.** _For any policy $\pi$, we have $\sum_{t=1}^{\infty} \gamma^{t-1} r_t \le 1$._

This assumption has been made by multiple prior works without loss of generality (Ren et al., 2021; Kumar et al., 2022), as rewards can, in theory, be scaled without affecting the optimal policy in the MDP. Furthermore, many tasks of interest, such as dialogue, have sparse rewards, where we observe success or failure only after the conversation has ended. Following the above observation, let us define the _empirical Bellman probability operator_ $\hat{\mathcal{B}}^*$ for a transition $(s, a, r, s')$ as
$$\hat{\mathcal{B}}^* p_\theta(a \mid s) = r + \gamma \max_{a'} \frac{p_\theta(a' \mid s')}{\pi_\beta(a' \mid s')}.$$
Note that this is different from the traditional Bellman operator in that we additionally divide by $\pi_\beta$ in the backup.
Then, we consider the following weighted cross-entropy loss:
$$\mathcal{L}_{WCE}(\theta) = -\mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\Bigg[\hat{\mathcal{B}}^* p_{\bar{\theta}}(a \mid s)\,\log p_\theta(a \mid s) + \big(1 - \hat{\mathcal{B}}^* p_{\bar{\theta}}(a \mid s)\big) \sum_{a' \neq a} \frac{\log p_\theta(a' \mid s)}{|\mathcal{A}| - 1}\Bigg]. \quad (3)$$
Here, we see that our loss is an instance of the weighted cross-entropy loss with weights approximately equal to the Bellman target values $\hat{\mathcal{B}}^* p_{\bar{\theta}}(a \mid s)$. The primary difference is that instead of introducing a dummy action, we equally distribute the leftover weight across the remaining actions. As we will show, this acts as a label-smoothing term that ultimately regularizes the probabilities. We will show later that in the absence of sampling error, our learned likelihood function $p_\theta(a \mid s)$ satisfies $Q^*(s, a) \ge p_\theta(a \mid s) \ge \pi_\beta(a \mid s)\, Q^*(s, a)$. This means that we are able to effectively learn a conservative estimate of the Q-function as a likelihood, without the need to optimize a potentially unstable and poorly scaling TD objective. In addition, because probabilities are modeled directly by existing language models, we do not need to modify the parameterization of such models in order to perform such fine-tuning, i.e., by resetting weights or adding a new head. Namely, our likelihood function $p_\theta(a \mid s)$ can be directly initialized from the logits of a pretrained LLM or VLM.
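A minimal sketch of the loss in Eq. (3) for a single-step batch follows. Several choices are our assumptions rather than the paper's specification: the softmax parameterization of $p_\theta$, the clamp of the Bellman target to $[0, 1]$ (motivated by Assumption 4.1), and obtaining $\pi_\beta$ from a frozen pretrained SFT model.

```python
import torch
import torch.nn.functional as F

def qsft_loss(logits, a, r, next_target_probs, next_behavior_probs, gamma=0.99):
    """Q-SFT weighted cross-entropy loss, a sketch of Eq. (3).

    logits:              p_theta logits at s, shape (B, |A|)
    a:                   dataset actions, shape (B,)
    r:                   rewards, shape (B,)
    next_target_probs:   p_theta_bar(a'|s') from the frozen target model, (B, |A|)
    next_behavior_probs: pi_beta(a'|s'), e.g. from the pretrained SFT model, (B, |A|)
    """
    log_p = F.log_softmax(logits, dim=-1)
    B, A = log_p.shape
    with torch.no_grad():
        # Empirical Bellman probability operator:
        # w = r + gamma * max_a' p_theta_bar(a'|s') / pi_beta(a'|s'),
        # clamped to [0, 1] (an assumption, keeping w a probability-like weight).
        ratio = next_target_probs / next_behavior_probs.clamp_min(1e-8)
        w = (r + gamma * ratio.max(dim=-1).values).clamp(0.0, 1.0)
    log_p_a = log_p.gather(1, a.unsqueeze(1)).squeeze(1)   # log p_theta(a|s)
    log_p_rest = (log_p.sum(dim=1) - log_p_a) / (A - 1)    # mean over a' != a
    return -(w * log_p_a + (1 - w) * log_p_rest).mean()

logits = torch.randn(8, 32, requires_grad=True)
a = torch.randint(0, 32, (8,))
r = torch.zeros(8)                                         # sparse reward
next_target_probs = torch.softmax(torch.randn(8, 32), dim=-1)
next_behavior_probs = torch.softmax(torch.randn(8, 32), dim=-1)
loss = qsft_loss(logits, a, r, next_target_probs, next_behavior_probs)
```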
# T OWARD G UIDANCE -F REE AR V ISUAL G ENERATION VIA C ONDITION C ONTRASTIVE A LIGNMENT

**Huayu Chen**¹, **Hang Su**¹, **Peize Sun**², **Jun Zhu**¹,³ _∗_
1 Department of Computer Science & Technology, Institute for AI, BNRist Center, Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University
2 The University of Hong Kong
3 Shengshu Technology, Beijing
_∗_ Corresponding author

A BSTRACT

Classifier-Free Guidance (CFG) is a critical technique for enhancing the sample quality of visual generative models. However, in autoregressive (AR) multi-modal generation, CFG introduces design inconsistencies between language and visual content, contradicting the design philosophy of unifying different modalities for visual AR. Motivated by language model alignment methods, we propose _Condition Contrastive Alignment_ (CCA) to facilitate guidance-free AR visual generation with high performance, and analyze its theoretical connection with guided sampling methods. Unlike guidance methods that alter the sampling process to achieve the ideal sampling distribution, CCA directly fine-tunes pretrained models to fit the _same_ distribution target. Experimental results show that CCA can significantly enhance the guidance-free performance of all tested models with just one epoch of fine-tuning (∼1% of pretraining epochs) on the pretraining dataset, on par with guided sampling methods. This largely removes the need for guided sampling in AR visual generation and cuts the sampling cost by half. Moreover, by adjusting training parameters, CCA can achieve trade-offs between sample diversity and fidelity similar to CFG. This experimentally confirms the strong theoretical connection between language-targeted alignment and visual-targeted guidance methods, unifying two previously independent research fields. Code and models: https://github.com/thu-ml/CCA.

Figure 1: CCA significantly improves guidance-free sample quality for AR visual generative models with just one epoch of fine-tuning on the pretraining dataset. (a) LlamaGen; (b) VAR.

1 I NTRODUCTION

Witnessing the scalability and generalizability of autoregressive (AR) models in language domains, recent works have been striving to replicate similar success for visual generation (Esser et al., 2021; Lee et al., 2022). By quantizing images into discrete tokens, AR visual models can process images using the same _next-token prediction_ approach as Large Language Models (LLMs). This approach is attractive because it provides a potentially unified framework for vision and language, promoting consistency in reasoning and generation across modalities (Team, 2024; Xie et al., 2024). Despite the design philosophy of maximally aligning visual modeling with language modeling methods, AR visual generation still differs from language generation in a notable aspect. AR visual generation relies heavily on Classifier-Free Guidance (CFG) (Ho & Salimans, 2022), a sampling technique unnecessary for language generation, which has caused design inconsistencies between the two types of content. During sampling, while CFG helps improve sample quality by contrasting conditional and unconditional models, it requires two model inferences per visual token, which doubles the sampling cost. During training, CFG requires randomly masking text conditions to learn the unconditional distribution, preventing the simultaneous training of text tokens (Team, 2024). In contrast to visual generation, LLMs rarely rely on guided sampling.
Instead, the surge of LLMs' instruction-following abilities has largely benefited from fine-tuning-based alignment methods (Schulman et al., 2022). Motivated by this observation, we seek to study: _"Can we avoid guided sampling in AR visual generation, but attain similar effects by directly fine-tuning pretrained models?"_

In this paper, we derive Condition Contrastive Alignment (CCA) for enhancing visual AR performance without guided sampling. Unlike CFG, which necessitates altering the sampling process to achieve a more desirable sampling distribution, CCA directly fine-tunes pretrained AR models to fit the _same_ distribution target, leaving the sampling scheme untouched. CCA is quite convenient to use since it does not rely on any additional datasets beyond the pretraining data. Our method functions by contrasting positive and negative conditions for a given image, which can be easily created from the existing pretraining dataset as matched or mismatched image-condition pairs. CCA is also highly efficient given its fine-tuning nature. We observe that our method achieves ideal performance within just one training epoch, indicating negligible computational overhead (∼1% of pretraining).

In Sec. 4, we highlight a theoretical connection between CCA and guided sampling techniques (Dhariwal & Nichol, 2021; Ho & Salimans, 2022). Essentially, these methods all target the same sampling distribution. The distributional gap between this target distribution and pretrained models is related to a quantity termed the conditional residual, $\log \frac{p(\mathbf{x}|\mathbf{c})}{p(\mathbf{x})}$. Guidance methods typically train an additional model (e.g., an unconditional model or a classifier) to estimate this quantity and enhance pretrained models by altering their sampling process. Contrastively, CCA follows LLM alignment techniques (Rafailov et al., 2023; Chen et al., 2024a) and parameterizes the conditional residual with the difference between our target model and the pretrained model, thereby directly training a sampling model. This analysis unifies language-targeted alignment and visual-targeted guidance methods, bridging the gap between the two previously independent research fields.

We apply CCA to two state-of-the-art autoregressive (AR) visual models, LlamaGen (Sun et al., 2024) and VAR (Tian et al., 2024), which feature distinctly different visual tokenization designs. Both quantitative and qualitative results show that CCA significantly and consistently enhances the guidance-free sampling quality across all tested models, achieving performance levels comparable to CFG (Figure 1). We further show that by varying training hyperparameters, CCA can realize a controllable trade-off between image diversity and fidelity similar to CFG. This further confirms their theoretical connections. We also compare our method with some existing LLM alignment methods (Welleck et al., 2019; Rafailov et al., 2023) to justify its algorithm design. Finally, we demonstrate that CCA can be combined with CFG to further improve performance.

Our contributions:
1. We take a big step toward guidance-free visual generation by significantly improving the visual quality of AR models.
2. We reveal a theoretical connection between alignment and guidance methods. This shows that language-targeted alignment can be similarly applied to visual generation and effectively replace guided sampling, closing the gap between these two fields.
2 BACKGROUND

2.1 AUTOREGRESSIVE (AR) VISUAL MODELS

**Autoregressive models.** Consider data $\mathbf{x}$ represented by a sequence of discrete tokens $\mathbf{x}_{1:N} := \{x_1, x_2, \ldots, x_N\}$, where each token $x_n$ is an integer. The data probability $p(\mathbf{x})$ can be decomposed as:

$$p(\mathbf{x}) = p(\mathbf{x}_1) \prod_{n=2}^{N} p(\mathbf{x}_n \mid \mathbf{x}_{<n}). \quad (1)$$

AR models thus aim to learn $p_\phi(\mathbf{x}_n \mid \mathbf{x}_{<n}) \approx p(\mathbf{x}_n \mid \mathbf{x}_{<n})$, where each token $\mathbf{x}_n$ is conditioned only on its previous input $\mathbf{x}_{<n}$. This is known as _next-token prediction_ (Radford et al., 2018).

**Visual tokenization.** Image pixels are continuous values, making it necessary to use vector-quantized tokenizers for applying discrete AR models to visual data (Van Den Oord et al., 2017; Esser et al., 2021). These tokenizers are trained to encode images $\mathbf{x}$ into discrete token sequences $\mathbf{x}_{1:N}$ and decode them back by minimizing reconstruction losses. In our work, we utilize pretrained and frozen visual tokenizers, allowing AR models to process images similarly to text.

2.2 GUIDED SAMPLING FOR VISUAL GENERATION

Despite the core motivation of developing a unified model for language and vision, the AR sampling strategies for visual and text content differ in one key aspect: AR visual generation necessitates a sampling technique named Classifier-Free Guidance (CFG) (Ho & Salimans, 2022). During inference, CFG adjusts the sampling logits $\ell^{\text{sample}}$ for each token as:

$$\ell^{\text{sample}} = \ell^{c} + s(\ell^{c} - \ell^{u}), \quad (2)$$

where $\ell^{c}$ and $\ell^{u}$ are the conditional and unconditional logits provided by two separate AR models, $p_\phi(\mathbf{x}|\mathbf{c})$ and $p_\phi(\mathbf{x})$. The condition $\mathbf{c}$ can be class labels or text captions, formalized as prompt tokens. The scalar $s$ is termed the guidance scale. Since token logits represent the (unnormalized) log-likelihood in AR models, Ho & Salimans (2022) prove that the sampling distribution satisfies:

$$p^{\text{sample}}(\mathbf{x}|\mathbf{c}) \propto p_\phi(\mathbf{x}|\mathbf{c}) \left( \frac{p_\phi(\mathbf{x}|\mathbf{c})}{p_\phi(\mathbf{x})} \right)^{s}. \quad (3)$$

At $s = 0$, the sampling model becomes exactly the pretrained conditional model $p_\phi$. However, previous works (Ho & Salimans, 2022; Podell et al., 2023; Chang et al., 2023; Sun et al., 2024) have widely observed that an appropriate $s > 0$ is critical for an ideal trade-off between visual fidelity and diversity, making training another unconditional model $p_\phi(\mathbf{x})$ necessary. In practice, the unconditional model usually shares parameters with the conditional one and can be trained concurrently by randomly dropping condition prompts $\mathbf{c}$ during training. Other guidance methods, such as Classifier Guidance (Dhariwal & Nichol, 2021) and Energy Guidance (Lu et al., 2023), have similar effects to CFG. The target sampling distributions of these methods can all be unified under Eq. 3.
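To make the doubled sampling cost of Eq. 2 concrete, here is a minimal Python (PyTorch) sketch of one CFG decoding step. The `model` interface, its call signature, and the guidance-scale default are illustrative assumptions, not any specific model's API.

```python
import torch

def cfg_logits(l_cond: torch.Tensor, l_uncond: torch.Tensor, s: float) -> torch.Tensor:
    """Classifier-Free Guidance on next-token logits (Eq. 2):
    l_sample = l_cond + s * (l_cond - l_uncond)."""
    return l_cond + s * (l_cond - l_uncond)

def sample_next_token(model, tokens, cond, uncond, s=2.0):
    """One guided decoding step: note the TWO forward passes per token,
    which is the sampling-cost overhead discussed above."""
    l_c = model(tokens, cond)    # conditional logits, shape [vocab]
    l_u = model(tokens, uncond)  # unconditional logits (condition dropped)
    probs = torch.softmax(cfg_logits(l_c, l_u, s), dim=-1)
    return torch.multinomial(probs, num_samples=1)
```

At `s = 0` this reduces to ordinary conditional sampling, matching the discussion of Eq. 3.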
2.3 DIRECT PREFERENCE OPTIMIZATION FOR LANGUAGE MODEL ALIGNMENT

Reinforcement Learning from Human Feedback (RLHF) is crucial for enhancing the instruction-following ability of pretrained Language Models (LMs) (Schulman et al., 2022; OpenAI, 2023). Performing RL typically requires a reward model, which can be learned from human preference data. Formally, the Bradley-Terry preference model (Bradley & Terry, 1952) assumes

$$p(\mathbf{x}_w \succ \mathbf{x}_l \mid \mathbf{c}) := \frac{e^{r(\mathbf{c}, \mathbf{x}_w)}}{e^{r(\mathbf{c}, \mathbf{x}_w)} + e^{r(\mathbf{c}, \mathbf{x}_l)}} = \sigma\big(r(\mathbf{c}, \mathbf{x}_w) - r(\mathbf{c}, \mathbf{x}_l)\big), \quad (4)$$

where $\mathbf{x}_w$ and $\mathbf{x}_l$ are respectively the winning and losing responses for an instruction $\mathbf{c}$, as evaluated by humans, and $r(\cdot)$ represents an implicit reward for each response. The target LM $\pi_\theta$ should satisfy $\pi_\theta(\mathbf{x}|\mathbf{c}) \propto \mu_\phi(\mathbf{x}|\mathbf{c})\, e^{r(\mathbf{c},\mathbf{x})/\beta}$ to attain a higher implicit reward compared with the pretrained LM $\mu_\phi$.

Direct Preference Optimization (Rafailov et al., 2023) allows us to directly optimize pretrained LMs on preference data, by formalizing $r_\theta(\mathbf{c}, \mathbf{x}) := \beta \log \pi_\theta(\mathbf{x}|\mathbf{c}) - \beta \log \mu_\phi(\mathbf{x}|\mathbf{c})$:

$$\mathcal{L}^{\text{DPO}}_\theta = -\mathbb{E}_{\{\mathbf{c}, \mathbf{x}_w \succ \mathbf{x}_l\}} \log \sigma\left( \beta \log \frac{\pi_\theta(\mathbf{x}_w|\mathbf{c})}{\mu_\phi(\mathbf{x}_w|\mathbf{c})} - \beta \log \frac{\pi_\theta(\mathbf{x}_l|\mathbf{c})}{\mu_\phi(\mathbf{x}_l|\mathbf{c})} \right). \quad (5)$$

DPO is more streamlined and thus often more favorable compared with the traditional two-stage RLHF pipeline: first training reward models, then aligning LMs with reward models using RL.
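For reference, Eq. 5 reduces to a few lines of PyTorch. This is a sketch under the assumption that the caller supplies summed per-sequence log-likelihoods under the trained policy $\pi_\theta$ and the frozen reference $\mu_\phi$; the names and the $\beta$ default are illustrative.

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss (Eq. 5). Inputs are sequence log-likelihoods log pi(x|c)
    under the trained policy and under the frozen pretrained reference."""
    reward_w = beta * (logp_w - ref_logp_w)  # implicit reward of winner
    reward_l = beta * (logp_l - ref_logp_l)  # implicit reward of loser
    return -F.logsigmoid(reward_w - reward_l).mean()
```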
3 CONDITION CONTRASTIVE ALIGNMENT

Autoregressive visual models essentially learn a parameterized model $p_\phi(\mathbf{x}|\mathbf{c})$ to approximate the standard conditional image distribution $p(\mathbf{x}|\mathbf{c})$. Guidance algorithms shift the sampling policy $p^{\text{sample}}(\mathbf{x}|\mathbf{c})$ away from $p(\mathbf{x}|\mathbf{c})$ according to Sec. 2.2:

$$p^{\text{sample}}(\mathbf{x}|\mathbf{c}) \propto p(\mathbf{x}|\mathbf{c}) \left( \frac{p(\mathbf{x}|\mathbf{c})}{p(\mathbf{x})} \right)^{s}. \quad (6)$$

At guidance scale $s = 0$, sampling from $p^{\text{sample}}(\mathbf{x}|\mathbf{c}) = p(\mathbf{x}|\mathbf{c}) \approx p_\phi(\mathbf{x}|\mathbf{c})$ is most straightforward. However, it is widely observed that an appropriate $s > 0$ usually leads to significantly enhanced sample quality. The cost is that we rely on an extra unconditional model $p_\phi(\mathbf{x}) \approx p(\mathbf{x})$ for sampling. This doubles the sampling cost and causes a training paradigm inconsistent with language.

In this section, we derive a simple approach to directly model the same target distribution $p^{\text{sample}}$ using a **single** AR model $p^{\text{sample}}_\theta$. Specifically, our method leverages a single loss function for directly optimizing pretrained models $p_\phi(\mathbf{x}|\mathbf{c}) \approx p(\mathbf{x}|\mathbf{c})$ to become $p^{\text{sample}}_\theta(\mathbf{x}|\mathbf{c}) \approx p^{\text{sample}}(\mathbf{x}|\mathbf{c})$. Despite having similar effects as guided sampling, our approach does not require altering the sampling process. We theoretically derive our method in Sec. 3.1 and discuss its practical implementation in Sec. 3.2.

3.1 ALGORITHM DERIVATION

The core difficulty of directly learning $p^{\text{sample}}_\theta$ is that we cannot access datasets drawn from the distribution $p^{\text{sample}}$. However, we observe that the distributional difference between $p^{\text{sample}}(\mathbf{x}|\mathbf{c})$ and $p(\mathbf{x}|\mathbf{c})$ is related to a simple quantity that can potentially be learned from existing datasets. Specifically, by taking the logarithm of both sides of Eq. 6 and applying some algebra, we have [1]:

$$\frac{1}{s} \log \frac{p^{\text{sample}}(\mathbf{x}|\mathbf{c})}{p(\mathbf{x}|\mathbf{c})} = \log \frac{p(\mathbf{x}|\mathbf{c})}{p(\mathbf{x})}, \quad (7)$$

of which the right-hand side (i.e., $\log \frac{p(\mathbf{x}|\mathbf{c})}{p(\mathbf{x})}$) corresponds to the log gap between the conditional and unconditional probabilities of an image $\mathbf{x}$, which we term the _conditional residual_. Our key insight here is that the conditional residual can be directly learned through contrastive learning approaches (Gutmann & Hyvärinen, 2012), as stated below:

**Theorem 3.1** (Noise Contrastive Estimation, proof in Appendix A)**.** _Let $r_\theta$ be a parameterized model which takes in an image-condition pair $(\mathbf{x}, \mathbf{c})$ and outputs a scalar value $r_\theta(\mathbf{x}, \mathbf{c})$. Consider the loss function:_

$$\mathcal{L}^{\text{NCE}}_\theta(\mathbf{x}, \mathbf{c}) = -\mathbb{E}_{p(\mathbf{x},\mathbf{c})} \log \sigma(r_\theta(\mathbf{x}, \mathbf{c})) - \mathbb{E}_{p(\mathbf{x})p(\mathbf{c})} \log \sigma(-r_\theta(\mathbf{x}, \mathbf{c})), \quad (8)$$

_where $\sigma(\cdot)$ is the standard logistic function, $\sigma(w) := 1/(1 + e^{-w})$. Given unlimited model expressivity for $r_\theta$, the optimal solution for minimizing $\mathcal{L}^{\text{NCE}}_\theta$ satisfies_

$$r^{*}_\theta(\mathbf{x}, \mathbf{c}) = \log \frac{p(\mathbf{x}|\mathbf{c})}{p(\mathbf{x})}. \quad (9)$$

[1] We ignore a normalizing constant in Eq. 7 for brevity. A more detailed discussion is in Appendix B.

Figure 2: An overview of the CCA method. Given a training batch of $K$ <image, label> pairs, CCA treats these as positive samples, and generates $K$ negative samples by randomly assigning a negative label from the $K-1$ remaining labels for each image. CCA then fine-tunes pretrained models by contrasting positive and negative data using an alignment loss. Pseudo code in Appendix D. (Panels: (a) training batch of positive data $p(x, c)$ and negative data $p(x)p(c)$, (b) AR model likelihood, (c) alignment loss.)

Now that we have a tractable way of learning $r_\theta(\mathbf{x}, \mathbf{c}) \approx \log \frac{p(\mathbf{x}|\mathbf{c})}{p(\mathbf{x})}$, the target distribution $p^{\text{sample}}$ can be jointly defined by $r_\theta(\mathbf{x}, \mathbf{c})$ and the pretrained model $p_\phi$. However, we would still lack an explicitly parameterized model $p^{\text{sample}}_\theta$ if $r_\theta(\mathbf{x}, \mathbf{c})$ were another independent network. To address this problem, we draw inspiration from the widely studied alignment techniques in language models (Rafailov et al., 2023) and parameterize $r_\theta(\mathbf{x}, \mathbf{c})$ with our target model $p^{\text{sample}}_\theta(\mathbf{x}|\mathbf{c})$ and $p_\phi(\mathbf{x}|\mathbf{c})$ according to Eq. 7:
$$r_\theta(\mathbf{x}, \mathbf{c}) := \frac{1}{s} \log \frac{p^{\text{sample}}_\theta(\mathbf{x}|\mathbf{c})}{p_\phi(\mathbf{x}|\mathbf{c})}. \quad (10)$$

Then, the loss function becomes

$$\mathcal{L}^{\text{CCA}}_\theta = -\mathbb{E}_{p(\mathbf{x},\mathbf{c})} \log \sigma\left( \frac{1}{s} \log \frac{p^{\text{sample}}_\theta(\mathbf{x}|\mathbf{c})}{p_\phi(\mathbf{x}|\mathbf{c})} \right) - \mathbb{E}_{p(\mathbf{x})p(\mathbf{c})} \log \sigma\left( -\frac{1}{s} \log \frac{p^{\text{sample}}_\theta(\mathbf{x}|\mathbf{c})}{p_\phi(\mathbf{x}|\mathbf{c})} \right). \quad (11)$$

During training, $p^{\text{sample}}_\theta$ is learnable while the pretrained $p_\phi$ is frozen. $p^{\text{sample}}_\theta$ can be initialized from $p_\phi$. This way we can fit $p^{\text{sample}}$ with a single AR model $p^{\text{sample}}_\theta$, eliminating the need for training a separate unconditional model for guided sampling. Sampling strategies for $p^{\text{sample}}_\theta$ are consistent with standard language model decoding methods, which unifies decoding systems for multi-modal generation.

3.2 PRACTICAL ALGORITHM

Figure 2 illustrates the CCA method. Specifically, implementing Eq. 11 requires approximating two expectations: one under the joint distribution $p(\mathbf{x}, \mathbf{c})$ and the other under the product of its two marginals, $p(\mathbf{x})p(\mathbf{c})$. The key difference between these distributions is that in $p(\mathbf{x}, \mathbf{c})$, images $\mathbf{x}$ and conditions $\mathbf{c}$ are correctly paired. In contrast, $\mathbf{x}$ and $\mathbf{c}$ are sampled independently from $p(\mathbf{x})p(\mathbf{c})$, meaning they are most likely mismatched.

In practice, we rely solely on the pretraining dataset to estimate $\mathcal{L}^{\text{CCA}}_\theta$. Consider a batch of $K$ data pairs $\{\mathbf{x}, \mathbf{c}\}_{1:K}$. We randomly shuffle the condition batch $\mathbf{c}_{1:K}$ to become $\mathbf{c}^{\text{neg}}_{1:K}$, where each $\mathbf{c}^{\text{neg}}_k$ represents a negative condition for image $\mathbf{x}_k$, while the original $\mathbf{c}_k$ is a positive one. This results in our training batch $\{\mathbf{x}, \mathbf{c}, \mathbf{c}^{\text{neg}}\}_{1:K}$. The loss function is

$$\mathcal{L}^{\text{CCA}}_\theta(\mathbf{x}_k, \mathbf{c}_k, \mathbf{c}^{\text{neg}}_k) = -\underbrace{\log \sigma\left( \beta \log \frac{p^{\text{sample}}_\theta(\mathbf{x}_k|\mathbf{c}_k)}{p_\phi(\mathbf{x}_k|\mathbf{c}_k)} \right)}_{\text{relative likelihood for positive conditions } \uparrow} - \lambda \underbrace{\log \sigma\left( -\beta \log \frac{p^{\text{sample}}_\theta(\mathbf{x}_k|\mathbf{c}^{\text{neg}}_k)}{p_\phi(\mathbf{x}_k|\mathbf{c}^{\text{neg}}_k)} \right)}_{\text{relative likelihood for negative conditions } \downarrow}, \quad (12)$$

where $\beta$ and $\lambda$ are two adjustable hyperparameters: $\beta$ replaces the guidance scale parameter $s$, while $\lambda$ controls the loss weight assigned to negative conditions. The learnable $p^{\text{sample}}_\theta$ is initialized from the pretrained conditional model $p_\phi$, making $\mathcal{L}^{\text{CCA}}_\theta$ a fine-tuning loss.

We give an intuitive understanding of Eq. 12. Note that $\log \sigma(\cdot)$ is monotonically increasing. The first term of Eq. 12 aims to increase the likelihood of an image $\mathbf{x}$ given a positive condition, with a
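A minimal PyTorch sketch of the practical objective in Eq. 12 follows, under the assumption that per-image log-likelihoods under the trainable model and the frozen pretrained model are available. The negative conditions follow the in-batch mismatch idea of Fig. 2 (a simple roll rather than a full random shuffle, purely for brevity); the $\beta$ and $\lambda$ defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def cca_loss(logp_pos, ref_logp_pos, logp_neg, ref_logp_neg, beta=0.02, lam=1.0):
    """CCA fine-tuning loss (Eq. 12). Inputs are log p(x|c) per image under
    the trainable model p_theta and the frozen pretrained model p_phi."""
    pos = beta * (logp_pos - ref_logp_pos)  # relative likelihood, positive c
    neg = beta * (logp_neg - ref_logp_neg)  # relative likelihood, negative c
    return (-F.logsigmoid(pos) - lam * F.logsigmoid(-neg)).mean()

def make_negatives(conds: torch.Tensor) -> torch.Tensor:
    """Pair each image with a mismatched condition from the same batch,
    assuming conditions within a batch are distinct."""
    return torch.roll(conds, shifts=1, dims=0)
```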
Idea Generation Category:
0Conceptual Integration
kGvXIlIVLM
# MLLMS KNOW WHERE TO LOOK: TRAINING-FREE PERCEPTION OF SMALL VISUAL DETAILS WITH MULTIMODAL LLMS

Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, Filip Ilievski
University of Southern California, USA; Vrije Universiteit Amsterdam, The Netherlands

ABSTRACT

Multimodal Large Language Models (MLLMs) have experienced rapid progress in visual recognition tasks in recent years. Given their potential integration into many critical applications, it is important to understand the limitations of their visual perception. In this work, we study whether MLLMs can perceive small visual details as effectively as large ones when answering questions about images. We observe that their performance is very sensitive to the size of the visual subject of the question, and further show that this effect is in fact causal by conducting an intervention study. Next, we study the attention patterns of MLLMs when answering visual questions, and intriguingly find that they consistently know where to look, even when they provide the wrong answer. Based on these findings, we then propose training-free visual intervention methods that leverage the internal knowledge of any MLLM itself, in the form of attention and gradient maps, to enhance its perception of small visual details. We evaluate our proposed methods on two widely-used MLLMs and seven visual question answering benchmarks and show that they can significantly improve MLLMs' accuracy _without requiring any training_. Our results elucidate the risk of applying MLLMs to visual recognition tasks concerning small details and indicate that visual intervention using the model's internal state is a promising direction to mitigate this risk. [1]

1 INTRODUCTION

Multimodal large language models (MLLMs) (Hurst et al., 2024; Team et al., 2024; Anthropic, 2024; Wang et al., 2024; Li et al., 2024a; Team et al., 2025; Chen et al., 2025) have greatly progressed the state of multimodal reasoning and planning, and are rapidly being integrated into various downstream applications, ranging from robotics (Li et al., 2024b; Chen et al., 2024), biomedicine (Li et al., 2023a), autonomous driving (Xu et al., 2024b; Zhang et al., 2023a) to visual mathematical reasoning (Gao et al., 2023; Zhang et al., 2024c;b) and even food recipe generation (Chhikara et al., 2024). Given the rapidly growing application of MLLMs, especially in critical domains such as biomedicine and security, it is crucial to study the limitations of their visual perception to elucidate the potential risks that may affect their downstream applications.

To motivate the limitation that will be the focus of this work, we start by presenting three revealing visual question answering examples in Fig. 1, in which we ask a popular MLLM, BLIP-2 (FlanT5 XL) (Li et al., 2023b), to identify an object's presence or type in each image as we vary the size of the object. In the absence of any prior evidence, we might reasonably expect the MLLM's answer to be invariant to the size of the object, because of the MLLM's large representational capacity and pretraining on a wide variety of images containing objects of various sizes. To the contrary,
in Fig. 1 (left), we observe that initially the model does not recognize the existence of a small street sign and assigns a lower probability to the correct answer; however, zooming into the image (via more focused visual cropping) towards the street sign gradually increases the probability assigned to the correct answer, suggesting that the model gradually perceives more and more relevant details of the street sign.

[1] [Our code is available at https://github.com/saccharomycetes/mllms_know.](https://github.com/saccharomycetes/mllms_know)

Figure 1: The effect of visual cropping on the probability of answers predicted by the BLIP-2 FlanT5 XL zero-shot VQA model. The x-axis labels are indices to the respective cropped images displayed under each plot that the model sees at each step. The model gradually finds the correct answer. (Panels: "Q: Are there any street signs in the picture?", "Q: What kind of bird is this?", "Q: What brand of clock is this?"; crop size decreases from left to right in each panel.)

In Fig. 1 (middle), we observe further evidence of this difficulty in perceiving small details: the model initially predicts _white_ as the type of the bird, but when we zoom into the image towards the bird, without changing the question in any way, we observe that the model gradually assigns higher probability to the correct bird type of _egret_. This suggests that the model was not making a semantic error of misunderstanding what _type_ means; rather, it was unable to perceive sufficient details to discriminate _egret_ from other _white_ birds, which is mitigated by visual cropping. Similarly, in Fig. 1 (right), we observe that the model's initial answer is not entirely irrelevant ("ama" compared to the correct answer "moma"), suggesting that the model knows where to look based on the question but cannot accurately perceive the actual word, which is again mitigated by visual cropping.

In this work, we study the limitation observed in Fig. 1 extensively, elucidate its cause, and propose potential solutions to mitigate its consequences. In Sec. 3, we quantitatively show that there indeed exists a difficulty in perceiving small visual concepts across various widely-used MLLMs. Our findings are consistent with prior works on evaluating text-image matching in vision-language joint embedding models, which have observed a reverse correlation between visual object size in images and the text-image matching score (Zhao et al., 2022), but we further establish a causal connection between visual concept size and MLLMs' perception ability through an intervention study. In Sec. 4, we study whether the MLLMs' difficulty in perceiving small visual concepts stems from a difficulty in perceiving visual details, or from a difficulty in locating the concept due to its small size. We quantitatively show that MLLMs consistently know where to look, even when they fail to answer the question correctly. In Sec. 5, we propose three automatic visual cropping methods—leveraging the attention maps and gradients of the MLLM itself—as scalable and training-free solutions to the visual perception limitation. Finally, in Sec. 6,
we apply our proposed methods to two popular MLLMs and evaluate them on seven visual question answering (VQA) benchmarks, showing their efficacy in enhancing MLLMs' accuracy, especially on detail-sensitive benchmarks.

2 RELATED WORKS

**Multimodal Large Language Models (MLLMs).** MLLMs are foundation models capable of handling diverse language and vision tasks. These models fall into two categories: _end-to-end pretrained_ and _modular pretrained_. End-to-end models process joint image-language data through architectures such as dual-encoder (Radford et al., 2021), fusion-encoder (Li et al., 2021), encoder-decoder (Cho et al., 2021), and unified transformer (Wang et al., 2022), using objectives like image-text matching, contrastive learning, and masked language modeling. Modular pretrained models, which dominate recent state-of-the-art approaches, avoid costly full pretraining by adapting existing components: BLIP2 (Li et al., 2023b) and InstructBLIP (Dai et al., 2023) train a Transformer-based connector between a frozen pretrained ViT (Dosovitskiy et al., 2021) image encoder and a frozen LLM, which transforms ViT output tokens into a fixed set of image tokens in the input space of the LLM; Qwen-VL (Bai et al., 2023) similarly uses a fixed-length token connector (a single cross-attention layer), but trains both the connector and the LLM; LLaVA (Liu et al., 2023b) and LLaVA-1.5 (Liu et al., 2023a) instead use a linear projection and a two-layer MLP as their connectors, respectively, and train both. Our work contributes to a better understanding of the perception limitations of MLLMs and improves their perception scalably and without training, offering orthogonal benefits to existing approaches.

**Visual Localization Methods.** Dedicated visual localization methods, such as YOLO (Redmon et al., 2016), SAM (Kirillov et al., 2023), and GLIP (Li et al., 2022b), rely heavily on dense spatial annotations for identifying salient image regions. Native approaches, such as Grad-CAM (Selvaraju et al., 2017), localize regions by analyzing gradients from classifier decisions without spatial supervision. Prior work adapts Grad-CAM to BLIP (Li et al., 2022a), leveraging its dedicated image-text similarity network, the Image-Text Matching network (Tiong et al., 2022; Guo et al., 2023). In this work, we derive a more general way of localizing the attention of MLLMs on images, without relying on the specific BLIP architecture. Several recent works have explored ways to improve the visual localization capability of MLLMs for visual question answering, including chain-of-thought (Shao et al., 2024; Liu et al., 2024b), tool-using (Wu and Xie, 2023), and visual programming approaches (Surís et al., 2023; Gupta and Kembhavi, 2023). In contrast, we demonstrate that MLLMs can often effectively localize the visual subject of a question in their internal states, and propose training-free methods to leverage their internal states for improving their visual perception.

**Visual Perception Limitations in MLLMs.** The difficulty of answering questions about small objects in images has been observed by several prior and concurrent works (Zhang et al., 2023b; 2024a; Liu et al., 2024a; Wu and Xie, 2023), which have explored mitigating solutions based on high-resolution fine-tuning (Liu et al., 2024a; Dehghani et al., 2023; Wang et al., 2024), multi-agent pipelines (Wu and Xie, 2023), and use of visual cropping (Zhang et al., 2023b).
In this work, we provide more extensive evidence for this difficulty, establish its causal effect on MLLMs' performance, and show that it is rooted in a failure to observe small visual details as opposed to a failure to locate small objects. Several works have also shown that MLLMs suffer from object hallucination (Li et al., 2023c; Yu et al., 2024). Furthermore, Zhang et al. (2024a) have shown visual blind spots in MLLMs—i.e., locations on the image where the MLLMs' perception degrades—as well as their sensitivity to visual quality, presence of visual distractors in the image, and even local object location perturbations.

3 MLLMS' SENSITIVITY TO THE SIZE OF VISUAL CONCEPTS

In this section, our goal is to quantitatively study our qualitative observations in Fig. 1 that MLLMs struggle with describing small visual details in images. To that end, we consider the TextVQA dataset, in which for each question we can find the ground-truth bounding box in the image that contains the correct textual answer. We partition its validation set into three groups based on the relative size of the ground-truth bounding box $S = A_{bb} / A_{total}$, where $A_{bb}$ denotes the area of the ground-truth bounding box and $A_{total}$ the total area of the image: 1) $S < 0.005$ (small), consisting of 773 question-image pairs; 2) $0.005 \le S < 0.05$ (medium), consisting of 2411 question-image pairs; and 3) $S \ge 0.05$ (large), consisting of 1186 question-image pairs. We chose TextVQA for this study because it contains a significant number of questions about small visual concepts, and textual answers are harder for MLLMs to guess from other side information in the image (compared to object types and attributes).

Table 1: Sensitivity of the accuracy of MLLMs to the size of visual concepts in TextVQA. As the relative visual size of the answer decreases (right to left in each row), we observe a decline in the accuracy of the original models (no cropping) in answering questions, whereas visual cropping (human-CROP) significantly improves accuracy on smaller objects. Columns small/medium/large correspond to the answer bounding-box size $S$.

| Model | Method | small | medium | large |
|---|---|---|---|---|
| BLIP-2 (FlanT5 XL) | no cropping | 12.13 | 19.57 | 36.32 |
| BLIP-2 (FlanT5 XL) | human-CROP | 55.76 | 52.02 | 45.73 |
| InstructBLIP (Vicuna-7B) | no cropping | 21.79 | 30.58 | 45.30 |
| InstructBLIP (Vicuna-7B) | human-CROP | 69.60 | 61.56 | 53.39 |
| LLaVA-1.5 (Vicuna-7B) | no cropping | 39.38 | 47.74 | 50.65 |
| LLaVA-1.5 (Vicuna-7B) | human-CROP | 69.95 | 65.36 | 56.96 |
| Qwen-VL (Qwen-7B) | no cropping | 56.42 | 65.09 | 68.60 |
| Qwen-VL (Qwen-7B) | human-CROP | 70.35 | 75.49 | 71.05 |
| GPT-4o | no cropping | 65.76 | 72.81 | 69.17 |
| GPT-4o | human-CROP | 75.63 | 81.32 | 71.72 |

**Sensitivity Study.** If a model's perception were not sensitive to the size of visual concepts, we would expect it to have similar accuracy in all three partitions. In Tab. 1, we observe that the accuracy of all MLLMs declines as the ground-truth bounding box becomes relatively smaller (right to left on the _no cropping_ rows). BLIP-2 and InstructBLIP are not trained on TextVQA (_i.e._, they are zero-shot models), and their accuracy declines by 24 and 23 absolute percentage points between the large and small partitions, respectively. LLaVA-1.5 and Qwen-VL are trained on the training set of TextVQA, yet their accuracy also declines by 11 and 12 points between the large and small partitions, respectively. Lastly, even the most recent commercial GPT-4o, with an unknown training set that might include all of TextVQA, suffers a 7-percentage-point decline in accuracy between the small and medium partitions. These findings suggest that MLLMs have a bias against perceiving smaller visual concepts.
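The partition rule above is easy to restate in code. A small sketch, assuming ground-truth boxes in (x, y, w, h) pixel format; the thresholds are the ones given in the text.

```python
def relative_size(bbox, image_w: int, image_h: int) -> float:
    """S = A_bb / A_total for a ground-truth answer box (x, y, w, h)."""
    x, y, w, h = bbox
    return (w * h) / (image_w * image_h)

def size_partition(s: float) -> str:
    """TextVQA validation-set partitions used in Tab. 1."""
    if s < 0.005:
        return "small"
    elif s < 0.05:
        return "medium"
    return "large"
```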
**Intervention Study.** The perceptual limitation we observed above might be merely correlated with size. To study whether this limitation is causally related to size, we conduct an intervention study where we provide the MLLMs with visually cropped images based on the ground-truth bounding boxes, denoted as human-CROP. More specifically, for each image-question pair and each MLLM, we crop the smallest square-shaped region containing the ground-truth bounding box from the image, and resize it to the input image resolution of the MLLM (the square-shaped cropping prevents extreme deformation of the cropped image when resizing). The cropped image is then provided to the MLLM in addition to the original image-question pair (see more details in Fig. 4). We observe in Tab. 1 that human-CROP significantly improves the accuracy of all MLLMs on the small and medium partitions, and to a lesser extent on the large partition. These findings show that the perception limitation is indeed caused by the size of the visual concepts, and that visual cropping can be a promising direction to mitigate this limitation.
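A minimal sketch of the human-CROP operation described above, using PIL; the clamping behavior for boxes larger than the image is our own simplifying assumption, not specified in the text.

```python
from PIL import Image

def human_crop(image: Image.Image, bbox, out_size: int) -> Image.Image:
    """Crop the smallest square containing the ground-truth box (x, y, w, h),
    then resize to the MLLM's input resolution. Square cropping prevents
    extreme deformation of the crop when resizing."""
    x, y, w, h = bbox
    side = min(max(w, h), image.width, image.height)  # assumption: clamp to image
    cx, cy = x + w / 2.0, y + h / 2.0
    left = min(max(cx - side / 2.0, 0), image.width - side)
    top = min(max(cy - side / 2.0, 0), image.height - side)
    box = (int(left), int(top), int(left + side), int(top + side))
    return image.crop(box).resize((out_size, out_size))
```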
4 DO MLLMS KNOW WHERE TO LOOK?

The limitation in perceiving small visual concepts can have two primary reasons: 1) small concepts are hard to locate in the larger image, and 2) their small details are hard to perceive correctly. In Fig. 1, we observed that the MLLM's incorrect answer may contain partially correct information, hinting that it might know where to look in the image. In this section, we quantitatively study that observation to answer whether MLLMs' sensitivity to size is rooted in a perception limitation or a localization limitation. To that end, we first utilize the attention maps computed inside the Transformer layers of an MLLM to quantify its spatial attention over the image, and then compare the total amount of this attention inside the ground-truth bounding box to other bounding boxes of the same size.

**MLLMs' Setup.** The considered MLLMs process a given image-question pair $(x, q)$ in four steps (depicted in Fig. 4): 1) the image is divided into $N \times N$ non-overlapping patches and processed by the ViT image encoder into $N \times N$ output tokens; 2) the ViT output tokens are transformed into the input space of the backbone LLM—by either an MLP (LLaVA-1.5) or a Transformer connector (BLIP-2, InstructBLIP and Qwen-VL)—which we refer to as image tokens; 3) the image tokens are then prepended to the question tokens and a predefined starting answer token, and fed to the LLM; 4) the LLM is sampled auto-regressively following the starting answer token (we use greedy sampling).

**Quantifying MLLMs' Spatial Attention over the Image.** We first measure how important each image token is to the MLLM's decision (_answer-to-token attention_) by extracting the softmax cross-attention of the starting answer token to all image tokens in all layers of the backbone LLM, resulting in $A_{st}(x, q) \in \mathbb{R}^{L \times H \times 1 \times T}$, where $L, H$ are the number of layers and heads-per-layer in the LLM, and $T$ is the number of image tokens provided to the LLM. We then take the average over attention heads to arrive at the answer-to-token attention $\hat{A}_{st}(x, q) = \frac{1}{H} \sum_{h=1}^{H} A_{st}(x, q)$. Next, we measure how important each image region is to each image token (_token-to-image attention_). For the MLLMs that use a Transformer connector to resample ViT output tokens into a fixed number of image tokens (BLIP-2, InstructBLIP and Qwen-VL), we extract the softmax cross-attention of each image token to all ViT output tokens in all layers of the connector, resulting in $A_{ti} \in \mathbb{R}^{L_c \times H_c \times T \times N^2}$, where $L_c, H_c$ are the number of layers and heads-per-layer in the connector, $T$ the number of learnable query tokens (that become input image tokens to the LLM), and $N^2$ the number of image patches of the ViT image encoder. We then take the average over attention heads to arrive at the token-to-image attention $\hat{A}_{ti}(x) = \frac{1}{H_c} \sum_{h=1}^{H_c} A_{ti}(x)$. For LLaVA-1.5, which uses an MLP to transform ViT output tokens to image tokens, we set $\hat{A}_{ti}(x)$ to the identity tensor. Finally, we compute the _answer-to-image attention_ as the tensor product of the answer-to-token and token-to-image attention, resulting in $A_{si}(x, q) \in \mathbb{R}^{L \times L_c \times 1 \times N^2}$, where $A^{mk}_{si}(x, q) = \hat{A}^{m}_{st}(x, q)\, \hat{A}^{k}_{ti}(x)$ (superscripts $m$ and $k$ denote layer indices on the LLM and the connector, respectively).

Figure 2: Examples of MLLMs knowing where to look despite answering incorrectly. The right panel in each example displays relative attention to the image (defined in Sec. 4) of one layer in the MLLM. (Example questions and incorrect answers: "Q: What player number is this football player? A: 21"; "Q: What phone number can a person call? A: 202-555-2000"; "Q: What is the color of the bicycle? (A) blue (B) white (C) silver (D) red. A: C"; "Q: What number is next exit? A: 100"; "Q: What is the number? A: 8"; "Q: Is there a car in the image? A: No".)

**Relative Attention.** One issue with using the softmax cross-attention is that not all highly attended tokens are semantically relevant to the input question. For example, recent work has observed that Transformers may use several tokens as registers to aggregate global information (Darcet et al., 2023). To emphasize semantically relevant attention, we propose to normalize the answer-to-image attention of an image-question pair $(x, q)$ by its value on a generic instruction $q'$. Specifically, we consider a fixed instruction $q'$ = "Write a general description of the image.", and compute **relative attention** as $A_{rel}(x, q) = \frac{A_{si}(x, q)}{A_{si}(x, q')}$ under element-wise division. Fig. 2 shows examples of relative attention for LLaVA-1.5 and InstructBLIP ($A^{mk}_{rel}$ at layers $m = 14, k = 0$ and $m = 15, k = 2$, respectively).
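The attention composition above reduces to head-averaging followed by a tensor contraction over image tokens, plus an element-wise normalization. A sketch in PyTorch with the shapes defined in this section; the tensor names are ours.

```python
import torch

def answer_to_image(A_st: torch.Tensor, A_ti: torch.Tensor) -> torch.Tensor:
    """A_st: [L, H, 1, T] answer-to-token attention; A_ti: [Lc, Hc, T, N^2]
    token-to-image attention. Returns A_si: [L, Lc, 1, N^2]."""
    a_st = A_st.mean(dim=1)  # [L, 1, T], average over LLM heads
    a_ti = A_ti.mean(dim=1)  # [Lc, T, N^2], average over connector heads
    # Contract over image tokens t: A_si[m, k] = a_st[m] @ a_ti[k].
    return torch.einsum("mqt,ktn->mkqn", a_st, a_ti)

def relative_attention(A_si_q: torch.Tensor, A_si_generic: torch.Tensor,
                       eps: float = 1e-8) -> torch.Tensor:
    """Element-wise normalization by the attention obtained under the generic
    instruction q' ("Write a general description of the image.")."""
    return A_si_q / (A_si_generic + eps)
```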
**Do MLLMs Know Where to Look?** Equipped with relative attention, we now return to our question of whether MLLMs have a localization limitation or a perception limitation. To that end, we consider the validation set of TextVQA again. For each image-question pair, we first compute the relative attention. We then define the **attention ratio** as the ratio of the total (sum) relative attention inside the answer ground-truth bounding box to its average across all bounding boxes of the same size.

Figure 3: MLLMs' attention ratio across all layers (average with 95% CI over TextVQA). The attention ratio measures how significantly the MLLM is attending to the ground-truth bounding box (defined in Sec. 4). We observe that it is greater than 1 in most layers, showing that the MLLMs know where to look in the image even when they fail to answer correctly. (Panels plot the attention ratio against LLM layer index for each model, separately for correctly and incorrectly answered questions.)

Idea Generation Category:
2Direct Enhancement
DgaY5mDdmT
# VALUE-ALIGNED BEHAVIOR CLONING FOR OFFLINE REINFORCEMENT LEARNING VIA BI-LEVEL OPTIMIZATION

**Xingyu Jiang** [1], **Ning Gao** [1], **Xiuhui Zhang** [1], **Hongkun Dou** [1], **Yue Deng** [1,2] [∗]
1 Beihang University, 2 Beijing Zhongguancun Academy
{jxy33zrhd,gaoning_ai,zhangxiuhui,douhk,ydeng}@buaa.edu.cn

ABSTRACT

Offline reinforcement learning (RL) aims to optimize policies from pre-collected data, without requiring any further interactions with the environment. Derived from imitation learning, behavior cloning (BC) is extensively utilized in offline RL for its simplicity and effectiveness. Although BC inherently avoids out-of-distribution deviations, it lacks the ability to discern between high- and low-quality data, potentially leading to sub-optimal performance when faced with poor-quality data. Current offline RL algorithms attempt to enhance BC by incorporating value estimation, yet often struggle to effectively balance these two critical components, specifically the alignment between the behavior policy and the pre-trained value estimates on in-sample offline data. To address this challenge, we propose Value-aligned Behavior Cloning via Bi-level Optimization (VACO), a novel bi-level framework that seamlessly integrates an inner loop for weighted supervised behavior cloning (BC) with an outer loop dedicated to value alignment. In this framework, the inner loop employs a meta-scoring network to evaluate and appropriately weight each training sample, while the outer loop maximizes value estimation for alignment, with controlled noise to facilitate limited exploration. This bi-level structure allows VACO to identify the optimal weighted BC policy, ultimately maximizing the expected estimated return conditioned on the learned value function. We conduct a comprehensive evaluation of VACO across a variety of continuous control benchmarks in offline RL, where it consistently achieves superior performance compared to existing state-of-the-art methods.

1 INTRODUCTION

Over the past decade, reinforcement learning (RL) has made remarkable advancements and exerted significant influence in domains such as robotics control (33), dynamic programming (49), and recommendation systems (6). However, in real-world scenarios where interactions are either extremely challenging or prohibitively costly, traditional online RL faces considerable limitations and may often prove impractical. As a result, offline RL has emerged as a promising alternative, attracting growing attention and active research. Unlike online RL, which actively collects trajectory samples through direct interaction with the environment, offline RL is restricted to deriving policies from a static, pre-collected dataset (30), without any further interactions with the environment.

Interestingly, the primary advantage of offline RL, its independence from environment interactions, also poses its most significant challenge. Due to the limited and potentially sub-optimal nature of offline data samples, offline RL confronts two severe challenges: 1) out-of-distribution (OOD) issues and 2) value alignment issues. Both of these issues fundamentally stem from inadequate sampling of the state and action spaces in offline datasets. As illustrated in Fig. 1, we provide a schematic diagram to intuitively depict the two critical challenges in offline reinforcement learning.
Fig. 1(a) presents the learned value-action curve for a given state, where the blue dots represent actions collected from the offline dataset, which we refer to as the _in-sample domain_. The adjacent red area, representing actions that have not been collected, is termed the _out-of-distribution (OOD) domain_.

∗ Corresponding author

Figure 1: Illustration of the two challenges in offline RL: (a) value overestimation in OOD issues and (b) sub-optimal policy extraction in value alignment issues.

Empirically, the values of actions in the OOD domain tend to be overestimated, while the action values within the in-sample domain are more confident and closer to their true values. Fig. 1(b) further refines the in-sample domain. During the behavior policy learning process, the inadequacy of the action space often causes the behavior policy to converge on sub-optimal actions within the in-sample domain (e.g., $A^-$), which do not actually align with the optimal actions suggested by the value estimates (e.g., $A^*$). This misalignment can lead to sub-optimal policy performance.

To avoid the above challenges, behavior cloning (BC (37)) was initially employed in offline RL settings. BC utilizes supervised learning to learn directly from the in-sample data, effectively avoiding the OOD challenge. However, BC lacks the capability to discern between high-quality and low-quality data, which typically results in competitive performance only with expert offline datasets and leads to failure on poor or sub-optimal offline datasets. To circumvent this limitation, recent offline RL approaches have begun incorporating auxiliary information (e.g., a value function) into BC, guiding it towards differentiated learning. These methods can be categorized into the following primary types: (1) **explicit regularization constraints** (13; 27), which involve heuristic regularization terms like KL divergence or expectile regression to directly constrain the behavior policy; (2) **implicit regularization constraints** (22; 55), which leverage generative models such as VAEs (9; 29) and diffusion models (19; 44) to enforce constraints on the behavior policy within a latent action space; and (3) **return-conditioned supervised learning** (5; 23), which utilizes decision transformers to directly model the dynamic programming process in a conditioned supervised learning manner. Although the aforementioned methods have achieved notable performance, they often struggle to balance the OOD challenge and the value alignment issue concurrently.

**Our contribution.** In this paper, inspired by recent advancements in meta-learning (34; 41; 21; 52), we introduce Value-aligned Behavior Cloning via Bi-level Optimization (VACO), a novel bi-level offline RL framework to balance the OOD and alignment challenges. This framework integrates a simple multi-layer perceptron (the meta-scoring network) to assign differential importance weights to various state-action sample pairs. In this way, the conventional BC loss function is transformed into a weighted summation of individual sample losses, enabling differentiated learning for samples of varying quality. Such a learning manner is intuitive and closely mirrors human behavior learning processes. In detail, the internal loop of VACO executes weighted supervised behavior cloning with the assigned scores of in-sample data, and the external loop of VACO maximizes value estimation for value alignment and introduces controlled noise to enable limited exploration.
Such a bi-level configuration allows VACO to identify the optimal weighted BC policy, ultimately maximizing the expected estimated return conditioned on the learned value function, without numerous hyperparameters or complicated secondary components such as generative models and transformers. It is worth mentioning that the meta-scoring network utilized in VACO is operational solely during the training phase and can be deactivated during testing, ensuring that the inference speed of the algorithm remains unaffected.

We evaluated the proposed VACO on the D4RL benchmark (11) for continuous control tasks. The results indicate that our model achieves state-of-the-art (SOTA) performance, surpassing the above three categories of offline reinforcement learning algorithms, through a simple integration of BC (37) with the DPG (42) algorithm in a bi-level manner. This achievement provides new insights into the potential of bi-level optimization in offline RL, and further ablation studies underscore the effectiveness of our proposed meta-scoring network.

2 RELATED WORK

In recent years, offline RL has emerged as a pivotal solution for real-world scenarios, particularly in domains such as autonomous driving (43), dynamic programming (49), and robotics control (33). Offline RL is inherently limited by insufficient data and the lack of interaction with the environment, leading to two major challenges: 1) the OOD problem and 2) the value alignment issue. Traditional offline methods (e.g., BC (37) and TD3 (14)) often focus on one specific aspect. More recent strategies (15; 54; 24; 47; 56) seek to integrate these approaches for performance gains. Depending on the manner of integration, these approaches can generally be classified into three categories:

**Explicit Regularization** directly incorporates BC as a regularizer within value estimation. For instance, TD3+BC (13) directly combines the loss functions of BC and TD3, with hyperparameters fine-tuning the balance between them. BRAC+ (53) integrates BC into the value and policy update processes via KL divergence (18) to provide regularization and constraint. IQL (27) employs expectile regression to perform in-sample value estimation, subsequently guiding BC across various state-action pairs using advantage-weighted regression (AWR) (36). This category of methods typically designs a divergence measure to ensure that the learned policies stay close to the dataset's sampling policy to some extent.

**Implicit Regularization** posits that constraints should be moderately stringent, often enforced within a latent space to subtly constrain policies. Key contributions in this area include MOPO (51), which introduces model-based concepts into offline settings by incorporating penalty rewards to limit the learning process; PLAS (55), which uses a CVAE (2) to model policies within a latent action space, thus implicitly restricting learning to the support of the in-sample dataset; and EDP (22), which utilizes diffusion models for implicit action estimation. These methods (20) typically employ generative models to encode states and actions into a latent space, thereby constraining the learned policies.
**Return-conditioned Supervised Learning** represents a novel paradigm in offline RL that conditions the policy not only on the current state but also on expected future returns. This can be viewed as a variant of conditional BC. For instance, DT (5) introduced the use of decision transformers and return-to-go to achieve supervised modeling of offline trajectories; DS4 (8) innovates by replacing the transformer with time-invariant state-space layers, thereby facilitating efficient dynamics modeling; and DC (23) emphasizes the importance of local attention in Markov processes and implements an innovative approach by substituting attention mechanisms with convolution.

**Bi-level Optimization in Offline RL.** Bi-level optimization is committed to optimizing another set of parameters beyond the target network parameters, describing higher-level elements related to training neural networks (52; 21). In the offline RL domain, the work most relevant to ours is (56), and we differ in motivation and in the upper-lower functions as follows. (1) Motivation: we introduce the bi-level optimization framework aiming to balance the OOD and value alignment problems, while (56) mainly focuses on the distributional shift issue. (2) Upper-lower functions: different motivations lead to different upper and lower functions. We introduce a meta-scoring network in the upper function to assign adaptive weights and mainly focus on behavior policy updating in the lower function. Differently, (56) adopts behavior policy updating in the upper function and mainly focuses on Q-value approximation in the lower function.

In our work, we explore the integration of BC and value estimation through a bi-level optimization framework, introducing a novel training methodology for offline RL to balance the OOD and value alignment challenges concurrently. Although our proposed framework, VACO, is grounded in the BC and DPG value estimation losses, it is better regarded as a flexible framework capable of bridging supervised behavior policies with value estimation losses. With tailored modifications, this framework can be properly applied to the aforementioned three kinds of methods.

3 PRELIMINARIES

**Behavior cloning.** BC is an approach within imitation learning that trains policies to emulate expert behaviors by directly mapping observed states to corresponding actions. Typically, BC (37) employs supervised learning models to approximate the policy demonstrated by the expert, effectively replicating the expert's decision-making process. Recently, behavior cloning has gained popularity in offline RL due to its simplicity and straightforward application; however, its efficacy is heavily dependent on the quality and comprehensiveness of the demonstration data.

**Offline reinforcement learning setting.** In the RL setting, the dynamic system is described as a Markov decision process (MDP), represented as a tuple $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, r, P, \rho, \gamma\}$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, $r(s, a)$ is a scalar reward function, $P$ is the transition dynamics, $\rho$ is the initial state distribution, and $\gamma \in (0, 1)$ is a discount factor. The objective of RL is to learn a policy $\pi(a|s)$ with parameters $\phi$ by maximizing the expected cumulative discounted return $\mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$, which is typically approximated by a value function $Q(s, a)$ with parameters $\theta$.
For actor-critic based methods (26; 17) in a continuous action space, the parameters $\theta$ are typically updated by minimizing the squared Bellman error over an experience replay dataset $\mathcal{D}$ with a target function:

$$J_Q(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}} \left[ Q_\theta(s, a) - r - \gamma Q_{\bar{\theta}}(s', \pi_\phi(s')) \right]^2, \quad (1)$$

where $Q_{\bar{\theta}}$ denotes a target Q-function, which is a delayed copy of the current Q-function $Q_\theta$. Then, the policy $\pi_\phi$ can be updated following the deterministic policy gradient (DPG (42)) theorem:

$$J_\pi(\phi) = \mathbb{E}_{s \sim \mathcal{D}} \left[ -Q_\theta(s, \pi_\phi(s)) \right]. \quad (2)$$

In the offline RL setting, the learned policy is constrained to a pre-collected dataset $\mathcal{D}(s, a, s', r)$, without further interaction with the environment during the learning process. Meanwhile, the dataset $\mathcal{D}$ is generated by an unknown behavior policy $\pi_\beta$. Directly applying standard RL methods in the offline setting suffers from severe OOD problems in the value function and alignment issues in the policy. To avoid these challenges, a widely used offline RL framework (48; 7) adopts the following behavior regularization scheme, which regularizes the divergence between the learned policy $\pi_\phi$ and the unknown behavior policy $\pi_\beta$ of the dataset $\mathcal{D}$:

$$\pi = \arg\min_\pi \mathbb{E}_{s \sim \mathcal{D}} \left[ -Q_\theta(s, \pi_\phi(s)) + D(\pi_\phi(\cdot|s) \,\|\, \pi_\beta(\cdot|s)) \right], \quad (3)$$

where $D(\cdot\|\cdot)$ is some divergence measure, which can have either an explicit or implicit form, constraining the "closeness" between the learned policy $\pi_\phi$ and the unknown behavior policy $\pi_\beta$.
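For concreteness, Eqs. 1-2 correspond to the following PyTorch-style losses. `Q`, `Q_target`, and `policy` stand for network modules and are illustrative assumptions, not the paper's code.

```python
import torch

def critic_loss(Q, Q_target, policy, batch, gamma=0.99):
    """Squared Bellman error of Eq. 1 on an offline batch (s, a, r, s')."""
    s, a, r, s_next = batch
    with torch.no_grad():  # target uses a delayed Q copy and the current policy
        target = r + gamma * Q_target(s_next, policy(s_next)).squeeze(-1)
    return ((Q(s, a).squeeze(-1) - target) ** 2).mean()

def actor_loss_dpg(Q, policy, s):
    """Deterministic policy gradient objective of Eq. 2."""
    return (-Q(s, policy(s))).mean()
```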
4 METHOD: VALUE-ALIGNED BEHAVIOR CLONING WITH BI-LEVEL OPTIMIZATION

In this section, we first discuss the motivation for balancing the OOD and alignment problems. Subsequently, we introduce how to transform the behavior cloning supervised loss function into a weighted behavior cloning supervised loss function. Finally, we describe how to correctly guide the proposed weighted supervised loss through bi-level optimization, thereby effectively balancing the OOD issue and the alignment problem.

4.1 MOTIVATION FOR BALANCING THE OOD AND ALIGNMENT PROBLEMS

In offline reinforcement learning (RL) settings, two classic approaches—behavior cloning (e.g., BC (37)) and value estimation (e.g., TD3 (14))—are commonly employed to address the OOD and value alignment challenges, respectively. The objective function of behavior cloning (see Eq. 4) effectively mitigates OOD issues but suffers significantly from value misalignment. In contrast, the objective function of value estimation (see Eq. 2) adeptly resolves alignment issues but is highly susceptible to OOD challenges. These phenomena are evident in our experimental results, as shown in Fig. 2: behavior cloning consistently demonstrates stable performance across different datasets but tends to under-perform, while value estimation achieves superior results on specific datasets but displays extreme instability, sometimes resulting in "zero" performance on others. In offline RL, these dual challenges—OOD and value misalignment—significantly impact model performance, and it is crucial to strike an appropriate balance between them.

Figure 2: Normalized result comparisons on various Hopper and Walker2D datasets across TD3, BC, TD3+BC, EDP and VACO. Our method achieves superior performance.

Current offline RL algorithms (13; 53) attempt to balance the OOD issue and the alignment problem by using a mixed policy learning objective, as represented in Eq. 3, with notable success. However, the straightforward combination of the behavior cloning (37) and deterministic policy gradient (DPG) (42) objective functions (e.g., TD3+BC (13)) can still lead to sub-optimal performance in certain scenarios, as illustrated in Fig. 2. To address this gap, we propose a novel bi-level optimization framework. In this framework, a weighted behavior cloning algorithm is employed in the inner loop for policy extraction, while a value estimation algorithm is adopted in the outer loop for value alignment. Additionally, we introduce a meta-scoring network to facilitate an indirect fusion between behavior cloning and value estimation. In the following sections, we describe the bi-level framework in detail.

4.2 WEIGHTED BEHAVIOR CLONING

We start with behavior cloning. In conventional behavior cloning training, a batch of state-action pairs $(s, a)$ is sampled from the dataset $\mathcal{D}$ and fed to the policy network $\pi_\phi$. The policy network $\pi_\phi$ employs a supervised learning approach, utilizing a simple L2 loss to optimize the learning process. The objective function $J_{BC}$ can be expressed as follows:

$$J_{BC}(\phi) = \mathbb{E}_{(s,a) \sim \mathcal{D}} \left[ \pi_\phi(s) - a \right]^2. \quad (4)$$

The above additive loss $J_{BC}$ implies that different state-action pairs are treated equally in training, even though some of them represent worse action choices. For a more reasonable loss design, the weights of different training pairs should be properly evaluated and exploited. Therefore, we introduce a more discriminating behavior cloning loss $J^w_{BC}$ via a meta-scoring network $w_\alpha(s, a, Q_\theta(s, a))$ with parameters $\alpha$:

$$J^w_{BC}(\phi) = \mathbb{E}_{(s,a) \sim \mathcal{D}} \left\{ w_\alpha(s, a, Q_\theta(s, a)) \cdot \left[ \pi_\phi(s) - a \right]^2 \right\}. \quad (5)$$

We emphasize here that evaluating individual state-action pairs is extremely challenging, particularly because most sample trajectories in offline reinforcement learning are sub-optimal. This complexity arises from the inherent nature of the collected data, which does not always represent the optimal policy but rather a mixture of various policy executions, often leading to less-than-ideal decisions being captured in the dataset. Accordingly, rather than using a heuristic scheme, we adopt a parameterized meta-scoring mechanism that can automatically assess the importance of each training pair through a learnable neural network $w_\alpha$ with $(s, a, Q_\theta(s, a))$ as input.

4.3 BI-LEVEL OPTIMIZATION FRAMEWORK

While the aforementioned parameterized weighting concept is simple, it yields a highly underdetermined and non-convex objective function coupled with two unknown neural networks. Without extra constraints, direct minimization of $J^w_{BC}$ can easily lead to a trivial solution: the meta-scoring network $w_\alpha$ may tend to assign (near) zero weights to all sample pairs and hence totally mute the policy network (see the multiplication between the weight term and the behavior cloning loss term in Eq. 5). To avoid such a trivial solution, extra constraints or guiding information should be imposed to restrict the feasibility of the learned policy.
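Eq. 5 as a PyTorch sketch. We assume the meta-scoring network takes the concatenated (s, a, Q(s, a)) as its input, which is an implementation choice on our part; the value estimate is detached since $Q_\theta$ is pre-trained and frozen in this phase, while the score stays differentiable with respect to $\alpha$.

```python
import torch

def weighted_bc_loss(policy, scorer, Q, s, a):
    """Weighted behavior cloning (Eq. 5): each pair's L2 imitation error is
    scaled by the meta-score w_alpha(s, a, Q_theta(s, a))."""
    with torch.no_grad():
        q = Q(s, a)                           # frozen value estimates
    w = scorer(torch.cat([s, a, q], dim=-1))  # differentiable w.r.t. alpha
    per_sample = ((policy(s) - a) ** 2).sum(dim=-1)
    return (w.squeeze(-1) * per_sample).mean()
```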
In this work, aiming to balance the OOD problem and the value alignment issue concurrently, we enforce the feasibility of the learned policy by maximizing the value estimation objective in Eq. 2, thereby simultaneously achieving value alignment. With this objective, the whole learning process is subject to the following bi-level optimization, with a controllable Gaussian noise $\mathcal{N}(0, \sigma)$ for limited exploration:

$$\min_\alpha\; J_\pi(\phi) := \mathbb{E}_{s\sim D}\big[-Q_\theta\big(s, \pi_\phi(s + \mathcal{N}(0, \sigma))\big)\big] \quad \text{s.t.} \quad \phi^*(\alpha) = \arg\min_\phi\; J_{BC}^{w}(\phi) := \mathbb{E}_{(s,a)\sim D}\big\{\, w_\alpha(s, a, Q_\theta(s, a)) \cdot [\pi_\phi(s) - a]^2 \,\big\} \quad (6)$$

The above bi-level optimization is composed of an internal loop in the constraint and an external loop in the objective. The internal loop minimizes the empirical squared error on the offline in-sample dataset under the guidance of the weights provided by the external loop. The external loop maximizes the value estimate of the learned policy to tune the parameter space of the meta-scoring network for better value alignment. Through such a bi-level optimization framework, we achieve policy extraction from weighted behavior cloning in the internal loop to avoid out-of-distribution challenges, and maintain value alignment from value estimation maximization in the external loop.

In practice, we opt not to incorporate the learning of the value network $Q_\theta$ within our bi-level optimization framework, as detailed in Eq. 6. Consistent with practices in IQL (27), in offline RL the learning of the value network often remains uncorrelated with the iterative updates of the policy network $\pi_\phi$. As a result, value estimation can be executed independently prior to policy extraction. As delineated in Algorithm 1, to maintain training stability for the meta-scoring network, we structure the training process into two distinct phases: the initial phase focuses on value network evaluation following the TD learning of (27), and the subsequent phase performs bi-level optimization to achieve value-aligned behavior cloning. Although integrating value network learning into the bi-level optimization is feasible, further details on this integration are provided in Appendix J. Additionally, we incorporate controlled, progressively decreasing noise in the outer loop to facilitate limited exploration during the early stages of training.

To implement our VACO framework, we alternate between the internal-loop and external-loop optimization. The parameter $\phi$ is only involved in the internal loop and can be easily updated with typical gradient descent, where $\eta_1$ denotes the learning rate of the policy model:

$$\phi_t \leftarrow \phi_{t-1} - \eta_1 \nabla_\phi J_{BC}^{w}(\phi) \quad (7)$$

The major difficulty of the VACO optimization stems from the external loop, which learns the parameters $\alpha$ of the meta-scoring network.
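The two pieces of Eq. (6) can be sketched as follows; `policy_opt` is an assumed PyTorch optimizer over the policy parameters $\phi$, and the exploration-noise scale `sigma` is assumed to decay over training as described above.

```python
# A minimal sketch of the noisy outer objective in Eq. (6) and of the plain
# inner gradient step in Eq. (7).
import torch

def outer_objective(q_net, policy, s, sigma):
    """J_pi(phi): value of actions taken on noise-perturbed states."""
    a = policy(s + sigma * torch.randn_like(s))   # limited exploration noise
    return -q_net(s, a).mean()

def inner_step(policy_opt, j_bc_w):
    """Eq. (7): phi_t <- phi_{t-1} - eta_1 * grad_phi J_BC^w(phi)."""
    policy_opt.zero_grad()
    j_bc_w.backward()
    policy_opt.step()
```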
As seen in Eq. 6, $\alpha$ is coupled into $\phi$, and its gradient can be derived by applying the chain rule and assuming $\frac{\partial \phi_{t-1}}{\partial \alpha} \approx 0$ (more details can be found in Appendix C):

$$\nabla_\alpha J_\pi(\phi) = \frac{\partial J_\pi(\phi)}{\partial \phi_t} \cdot \frac{\partial \phi_t}{\partial \alpha} = \frac{\partial J_\pi(\phi)}{\partial \phi_t} \cdot \frac{\partial \big(\phi_{t-1} - \nabla_\phi J_{BC}^{w}(\phi_{t-1})\big)}{\partial \alpha} \approx -\frac{\partial J_\pi(\phi)}{\partial \phi_t} \cdot \frac{\partial^2 J_{BC}^{w}(\phi_{t-1})}{\partial \phi_{t-1}\, \partial \alpha} = -\frac{\partial J_\pi(\phi)}{\partial \phi_t} \cdot \frac{\partial [\pi_{\phi_{t-1}}(s) - a]^2}{\partial \phi_{t-1}} \cdot \frac{\partial w_\alpha(s, a, Q_\theta(s, a))}{\partial \alpha} \quad (8)$$

The above equation is an approximate solution for the external-loop update. Thus, we obtain the update rule for $\alpha$:

$$\alpha_t \leftarrow \alpha_{t-1} + \eta_2\, \frac{\partial J_\pi(\phi)}{\partial \phi_t} \cdot \frac{\partial^2 J_{BC}^{w}(\phi_{t-1})}{\partial \phi_{t-1}\, \partial \alpha} \quad (9)$$

According to the update rules in Eq. 7 and Eq. 9, we alternately optimize the two sets of parameters as in Algorithm 1.

**Algorithm 1:** Value-aligned Behavior Cloning via Bi-level Optimization (VACO)
**Input:** Fixed offline dataset $D$, value network $Q_\theta$, policy network $\pi_\phi$, meta-scoring network $w_\alpha$, update steps for value phase $K_1$, update steps for bi-level phase $K_2$
1. **// Value Training Phase**
2. **for** update step $k = 1 \ldots K_1$ **do**
3. &nbsp;&nbsp; Sample a minibatch of sample pairs $(s, a, r, s')$ from $D$
4. &nbsp;&nbsp; Update the value parameters $\theta$ according to IQL's (27) TD learning
5. **end**
6. **// Bi-level Optimization Phase**
7. **for** update step $k = 1 \ldots K_2$ **do**
8. &nbsp;&nbsp; Sample a minibatch of sample pairs $(s, a, r, s')$ from $D$
9. &nbsp;&nbsp; Fix the meta-scoring parameters $\alpha$ and update the policy $\phi$ according to Eq. 7
10. &nbsp;&nbsp; Fix the policy $\phi$ and update the meta-scoring parameters $\alpha$ according to Eq. 9
11. **end**

5 EXPERIMENTS

5.1 SETTING

**D4RL.** We utilize the MuJoCo and AntMaze domain tasks from the D4RL (11) benchmark for evaluation. The MuJoCo domain includes a variety of continuous locomotion tasks with dense rewards. Within this domain, we perform experiments in three environments: halfcheetah, hopper, and walker2d. For each environment, we investigate four different v2 datasets, each representing a distinct data quality level: medium, medium-replay, medium-expert, and expert. Consequently, the MuJoCo domain provides an excellent platform for assessing the effects of diverse datasets derived from policies at varying proficiency levels.

**Baselines.** We consider baselines spanning four main categories, plus additional methods, to provide a thorough comparison. (1) Classic methods: BC (37), TD3 (14), and CQL (28); (2) Explicit regularization methods: TD3+BC (13), IQL (27), PRDC (40), TD7 (12), and A2PO (38); (3) Implicit regularization methods: MOPO (51), PLAS (55), and EDP (22); (4) Return-conditioned methods: DT (5), DS4 (8), and DC (23); Other methods: SAC-RND (35).

**Setup.** We implement our VACO framework by combining the official implementations of (13) and (5). Specifically, the value network, policy network, and meta-scoring network are all 3-layer MLPs. All hidden dimensions of the networks are set to 256. A ReLU (1) activation is applied after each hidden layer. For training, we adopt the Adam optimizer (25) with a learning rate of 3e-5 for the meta-scoring network and 3e-4 for the value and policy networks. All experiments are conducted on a single NVIDIA RTX 3090 GPU.
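To make the bi-level phase of Algorithm 1 concrete, below is a self-contained toy sketch of one update of Eqs. (7)-(9). It uses explicit parameter tensors and a linear policy so that the hypergradient of Eq. (8) can flow through the inner update via autograd; all dimensions, networks, and data are illustrative only, not the authors' code.

```python
# Toy sketch of one VACO bi-level step (Eqs. 6-9) with autograd hypergradients.
import torch

torch.manual_seed(0)
S, A = 4, 2                                  # state / action dims (toy)
s = torch.randn(32, S)                       # minibatch of states from D
a = torch.randn(32, A)                       # dataset actions
phi = torch.randn(S, A, requires_grad=True)  # linear policy: pi_phi(s) = s @ phi
alpha = torch.zeros(1, requires_grad=True)   # meta-scoring parameter (toy scalar)
Wq = torch.randn(S + A, 1)                   # frozen linear Q-network (pre-trained phase)
eta1, eta2, sigma = 1e-2, 1e-3, 0.1

def Q(state, action):
    return torch.cat([state, action], -1) @ Wq

# Inner loop (Eq. 7): weighted BC step, keeping the graph w.r.t. alpha.
w = torch.sigmoid(alpha + Q(s, a))           # toy meta-score w_alpha(s, a, Q(s, a))
j_bc = (w * ((s @ phi - a) ** 2).sum(-1, keepdim=True)).mean()
(g_phi,) = torch.autograd.grad(j_bc, phi, create_graph=True)
phi_t = phi - eta1 * g_phi                   # differentiable inner update

# Outer loop (Eqs. 6, 8-9): descend J_pi w.r.t. alpha through phi_t.
j_pi = -Q(s, (s + sigma * torch.randn_like(s)) @ phi_t).mean()
(g_alpha,) = torch.autograd.grad(j_pi, alpha)  # hypergradient, as in Eq. (8)
with torch.no_grad():
    alpha -= eta2 * g_alpha                  # matches Eq. (9), eta1 absorbed into eta2
    phi.copy_(phi_t.detach())                # commit the policy update
print(float(j_bc), float(j_pi))
```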
More detailed experimental settings for the D4RL datasets, baselines, and hyperparameters are available in Appendix A.

5.2 PERFORMANCE ON MUJOCO DOMAIN

Tab. 1 presents the performance of various algorithms, including the baselines and our VACO model, in offline settings on the D4RL-MuJoCo datasets. All scores are normalized, with a value of 100 indicating the performance level of an expert policy, as described by (11). We observe the following: (1) Our VACO surpasses the various methods across almost all tasks and dataset quality levels. (2) Some methods exhibit performance deficiencies on certain tasks or at specific data levels, whereas VACO consistently achieves competitive performance across the board. For example, MOPO (51) fails on every expert dataset, and EDP (22) shows a large drop on the Walker2D medium dataset (11).

Table 1: Averaged normalized scores on MuJoCo locomotion tasks from the D4RL dataset. We evaluate 10 times, averaged over 5 random seeds, ± standard deviation. The dataset names are abbreviated as follows: 'medium' to 'm', 'medium-replay' to 'm-r', 'medium-expert' to 'm-e', 'expert' to 'e'. The first segment of the table contains classic offline methods, the second explicit regularization methods, the third implicit regularization methods, and the fourth return-conditioned decision transformer methods. Our model outperforms various offline RL algorithms on almost all tasks. We mark the best results in **bold**. * denotes averaged scores without the 'expert' datasets.

| Method | Hopper m | m-r | m-e | e | HalfCheetah m | m-r | m-e | e | Walker2D m | m-r | m-e | e | average | average* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TD3+BC | 59.3 | 60.9 | 98.0 | 109.6 | 48.3 | 44.6 | 90.7 | 93.4 | 83.7 | 81.8 | 110.1 | 110.0 | 82.53 | 75.27 |
| IQL | 66.2 | 94.7 | 91.5 | 108.8 | 47.4 | 44.2 | 86.7 | 95.0 | 78.3 | 73.8 | 109.6 | 109.4 | 83.78 | 76.93 |
| CQL | 61.9 | 86.3 | 96.9 | 106.5 | 46.9 | 45.3 | 95.0 | 97.3 | 79.5 | 76.8 | 109.1 | 109.3 | 84.20 | 77.52 |
| _Explicit regularization methods:_ | | | | | | | | | | | | | | |
| PRDC | **100.3** | 100.1 | 109.2 | - | **63.5** | **55.0** | 94.5 | - | 85.2 | 92.0 | 111.2 | - | - | 90.11 |
| TD7 | 76.1 | 91.1 | 108.2 | - | 58.0 | 53.8 | **104.6** | - | **91.1** | 89.7 | 111.8 | - | - | 87.16 |
| A2PO | 80.3 | 101.6 | 107.4 | - | 47.1 | 44.7 | 95.6 | - | 84.9 | 82.8 | 112.1 | - | - | 84.07 |
| _Implicit regularization methods:_ | | | | | | | | | | | | | | |
| MOPO | 26.5 | 92.5 | 51.7 | 16.2 | 40.2 | 54.0 | 57.9 | 1.4 | 14.0 | 42.7 | 55.0 | 0.1 | 37.68 | 48.28 |
| PLAS | 32.9 | 27.9 | 111.0 | - | 39.3 | 43.9 | 96.6 | - | 44.6 | 30.2 | 89.6 | - | - | 57.33 |
| EDP | 81.9 | 101.0 | 97.4 | - | 52.1 | 49.4 | 95.5 | - | 6.9 | **94.9** | 110.2 | - | - | 85.48 |
| _Return-conditioned methods:_ | | | | | | | | | | | | | | |
| DT | 67.6 | 82.7 | 107.6 | 106.3 | 42.6 | 36.6 | 86.8 | 92.4 | 74.0 | 66.6 | 108.1 | 107.6 | 81.58 | 74.73 |
| DS4 | 89.5 | 87.7 | 110.5 | 109.3 | 47.3 | 43.8 | 94.8 | 89.1 | 81.4 | 80.3 | 109.6 | 105.7 | 87.42 | 82.77 |
| DC | 92.5 | 94.2 | 110.4 | 110.5 | 43.0 | 41.3 | 93.0 | 87.5 | 79.2 | 76.6 | 109.6 | 107.8 | 87.13 | 82.2 |
| VACO (Ours) | 97.2 ±4.2 | **102.3** ±7.1 | **112.6** ±2.2 | **114.0** ±4.5 | 60.2 ±2.4 | 51.4 ±1.2 | 98.3 ±7.2 | **100.6** ±1.5 | 88.8 ±1.8 | 92.4 ±4.9 | **114.5** ±0.3 | **112.9** ±0.6 | **95.43** | **90.86** |
5.3 PERFORMANCE ON ANTMAZE DOMAIN

Tab. 2 presents the performance of various algorithms on the AntMaze tasks, which primarily test a model's trajectory-stitching capability. Our VACO algorithm achieves the highest average score and demonstrates competitive performance across tasks of varying difficulty. In contrast, TD3+BC (13) and PLAS (55) fail to perform well on the larger-scale tasks, while SAC-RND (35) and EDP (22) show significant performance drops compared to our method on the medium-play and large-play datasets, respectively.

Table 2: Averaged normalized scores on AntMaze tasks. We evaluate 100 times, averaged over 5 random seeds, ± standard deviation. Our model outperforms various methods on the average score across all tasks. We mark the best results in **bold**.

| Task name | BC | TD3+BC | IQL | CQL | PLAS | SAC-RND | EDP | VACO (ours) |
|---|---|---|---|---|---|---|---|---|
| antmaze-umaze | 65.0 | 66.3 | 83.3 | 74.0 | 70.7 | 97.0 | 94.2 | |
| antmaze-umaze-diverse | 55.0 | 53.8 | 70.6 | 84.0 | 45.3 | 66.0 | 79.0 | |
| antmaze-medium-play | 0.0 | 26.5 | 64.6 | 61.2 | 16.0 | | | |
| antmaze-medium-diverse | | | | | | | | |
| antmaze-large-play | | | | | | | | |
| antmaze-large-diverse | | | | | | | | |
# LOOK BEFORE YOU LEAP: UNIVERSAL EMERGENT MECHANISM FOR RETRIEVAL IN LANGUAGE MODELS

**Alexandre Variengien**∗ EU AI Office, European Commission. **Eric Winsor**∗ UK AI Security Institute.

ABSTRACT

When solving challenging problems, language models (LMs) are able to identify relevant information from long and complicated contexts. To study how LMs solve retrieval tasks in diverse situations, we introduce ORION, a collection of structured retrieval tasks, from text understanding to coding. We apply causal analysis on ORION for 18 open-source language models with sizes ranging from 125 million to 70 billion parameters. We find that LMs internally decompose retrieval tasks in a modular way: middle layers at the last token position process the request, while late layers retrieve the correct entity from the context. Building on our high-level understanding, we demonstrate a proof-of-concept application for scalable internal oversight of LMs to mitigate prompt injection while requiring human supervision on only a single input.

1 INTRODUCTION

Recent advances in language models (LMs) (Vaswani et al., 2017) have demonstrated their flexible problem-solving abilities and their expert-level knowledge in a wide range of fields (Bubeck et al., 2023; OpenAI, 2023). Researchers have developed a series of techniques such as fine-tuning (Ouyang et al., 2022) and Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) to ensure models output honest and helpful answers. However, as their abilities reach human level, supervision from human feedback becomes costly and even impossible. This necessitates more efficient or automated methods of supervision, known generally as _scalable oversight_. Moreover, existing methods only control for the output of the model while leaving the internals of the model unexamined (Casper et al., 2023; Ngo et al., 2023). This is a critical limitation, as many internal processes can elicit the same output while using trustworthy or untrustworthy mechanisms. For instance, we would like to know whether a model answers faithfully based on available information or simply gives a user's preferred answer (Perez et al., 2022). We call this problem _internal oversight_.

Recent works on mechanistically interpreting LMs have shown success on narrow tasks (Wang et al., 2022; Nanda et al., 2023). Some have provided insight into factual recall (Geva et al., 2023) and in-context learning (Olsson et al., 2022). Causal interventions have even been used to understand how models encode tasks from few-shot examples (Hendel et al., 2023) or bind entities to attributes (Feng and Steinhardt, 2023). However, these works are still scoped to relatively narrow contexts and lack demonstration of concrete applications.

In this work, we study how LMs solve retrieval tasks, i.e. in-context learning problems that involve answering a request (e.g. "What is the city of the story?") to retrieve a keyword (e.g. "Paris") from a context (e.g. a story). We start by introducing ORION, a collection of 15 datasets of retrieval tasks spanning six different domains, from question answering to coding abilities and variable binding. We systematize the task structure by annotating each textual input with an abstract representation where the context is a table of attributes and the request is a simple SQL-like query, as illustrated in Figure 2.
We apply causal analysis (Pearl, 2009; Vig et al., 2020; Geiger et al., 2021) to 18 open-source LMs ranging in size from 125 million to 70 billion parameters to investigate the successive roles of layers at the last position on tasks from ORION.

∗ Work conducted while at Conjecture. Corresponding author: alexandre.variengien@gmail.com

Figure 1: Illustration of our main experimental discovery. Patching the mid-layer residual stream on a retrieval task from ORION causes the language model to output a modular combination of the request from $x_1$ (asking for the city) and the context from $x_2$ (a story about Bob in Paris). We call this phenomenon _request-patching_.

The shared abstract representation enables us to define and interpret experiments across tasks and models at scale, without the need for setting-specific labor. We discover that language models handle retrieval tasks by cleanly separating the layers at which they process the request and the context at the last token position. These results suggest that there exists an emergent modular decomposition of tasks that applies across models and tasks. We complement this coarse-grained causal analysis with a finer-grained case study of a question-answering task on Pythia-2.8b (Biderman et al., 2023).

We demonstrate that our understanding of how models solve retrieval tasks can be directly leveraged to mitigate the effect of prompt injection (Perez and Ribeiro, 2022) in a question-answering task. Models are given inputs containing distractor sequences that trigger models to output a token unrelated to the task. We present a proof of concept based on request-patching that only requires humans to verify the model output on a _single_ trusted input. Our technique significantly improves the performance of models on sequences with distractors (0% → 70.5% accuracy for Pythia-410m, 15.5% → 97.5% for Pythia-12b). To our knowledge, this is the first demonstration that scalable internal oversight of LMs is feasible.

In summary, our main contributions are as follows:
1. We introduce ORION, a collection of structured retrieval tasks. It is a data-centric approach enabling a comparative study of 18 models on 6 domains.
2. We discover a macroscopic modular decomposition of retrieval tasks in LMs' internals that is universal across tasks and models.
3. We link macroscopic and microscopic descriptions of LMs' internals with a fine-grained case study of a question-answering task on Pythia-2.8b.
4. We apply this knowledge to a proof of concept for _scalable internal_ oversight of LMs solving a retrieval task in the presence of prompt injection.

2 BACKGROUND

2.1 THE TRANSFORMER ARCHITECTURE FOR AUTOREGRESSIVE LANGUAGE MODELS

An autoregressive language model $M_\theta$ with parameters $\theta$ maps a sequence of input tokens $x = (x_1, x_2, \ldots, x_n)$ to a probability distribution over the next token $x_{n+1}$. For the Transformer architecture (Vaswani et al., 2017), we have:

$$p(x_{n+1} \mid x) = M_\theta(x) = \mathrm{softmax}(\pi_n(x))$$

The pre-softmax values $\pi_n$ are the logits at the $n$-th token position. The final logits $\pi_n$ are constructed by iteratively building a series of intermediate activations $z_k^l$, which we call the _residual stream_, following Elhage et al. (2021).
The residual stream $z_k^l$ at token position $k$ and layer $l$ is computed from the residual stream at previous token positions at the previous layer, $z_{\le k}^{l-1}$, by adding the results of $a_k^l$, a multi-headed attention module that depends on $z_{\le k}^{l-1}$, and $m_k^l$, a two-layer perceptron module that depends on $z_k^{l-1}$. We provide a complete description of the Transformer architecture in Appendix G.

2.2 COMPUTATIONAL GRAPH AS CAUSAL GRAPH

The experimental paradigm of causal analysis applied to machine learning models, initiated by Vig et al. (2020) and Geiger et al. (2021), treats the computational graph of a neural network as a causal graph. The goal of causal analysis is to answer questions about _why_ a model outputs a given answer. This requires uncovering the causal links tying the inputs to the output, as well as characterizing the role of the internal components critical for the model's function. To this end, researchers rely on _causal interventions_ (Pearl, 2009), experiments that replace a set of activations with fixed values. In this work, we use single-input _interchange interventions_ (Geiger et al., 2021), sometimes called "activation patching" in the literature (see e.g. Wang et al. (2022)). It is a simple form of causal intervention where we intervene on one variable at a time by fixing its value to the value of that same variable on another input. We write $M(x \mid A \leftarrow A(x'))$ for the output of the model after the single-input interchange intervention on the _target input_ $x$, replacing the activation of the node $A$ by its value on the _source input_ $x'$.

3 ORION: A COLLECTION OF STRUCTURED RETRIEVAL TASKS

Our study concentrates on retrieval, a fundamental aspect of in-context learning, which involves answering a request (e.g. "What is the name of the city?") by identifying the correct attribute (e.g. a city name) from the context (e.g. a story). To facilitate this study, we crafted a collection of datasets dubbed the **O**rganized **R**etr**I**eval **O**perations for **N**eural networks (ORION).

**Abstract representation.** Each textual input (i.e. LM prompt) from ORION is annotated with an abstract representation $(C, R)$, where $C$ represents the context and $R$ the request. In the example of Figure 2, the context is a story introducing a place, a character, and an action, while the request is a question written in English asking for the city of the story. The context $C$ is abstractly represented as a table where each line is a list of attributes. The request $R$ retrieves a target attribute $ATTR_t$ (e.g. the "name" attribute in Figure 2) from lines where a filter attribute $ATTR_f$ (e.g. the narrative role) has the value $v_f$ (e.g. "city"). The request can be written in an SQL-style language as follows: SELECT $ATTR_t$ FROM $C$ WHERE $ATTR_f$ = $v_f$ (e.g. SELECT Name FROM Context WHERE Role=City). We write $R(C)$ for the result of applying the request to the context. This is the ground-truth completion for LMs evaluated on the retrieval task. In practice, $R(C)$ is a single token called the _label token_. In the example, we have $R(C)$ = " Valencia".

**Desiderata for datasets.** To facilitate the application of causal analysis, we enforce a list of desiderata on datasets from ORION. The most important desideratum is ensuring datasets are _decomposable_.
For every dataset $D$ in ORION and for every pair of abstract representations $(C_1, R_1), (C_2, R_2)$ in $D$, $R_2(C_1)$ and $R_1(C_2)$ are well-defined. This means that an arbitrary request can be applied to an arbitrary context from the same task: abstract representations of requests and contexts can be freely interchanged across a task. This constraint enables the design of interchange interventions at scale. We define four additional desiderata (Structured, Single token, Monotasking, and Flexible) in Appendix H and share the motivation behind their definition.

**Dataset composition.** The collection includes retrieval tasks from six domains: question answering, translation, factual recall, variable binding, induction pattern-matching, and type-hint understanding. For each domain, we created two or three variations. Each dataset is created using a semi-automated process leveraging the LLM assistant ChatGPT. We provide a detailed overview of the dataset and its creation in Appendix H.

Figure 2: Example input from ORION for the question-answering task. Textual inputs are annotated with an abstract representation of the context and the request. Abstract context representations are tables where each line lists attributes relative to a story element. Requests can be formulated using simple SQL-like queries. In the example, the story ("In the lively city of Valencia, a skilled veterinarian [...]. 'I'm Christopher' he replied, [...]") followed by the question "What is the city of the story?" and the answer prefix "The story takes place in" is abstracted as a context table with rows (_Valencia, City), (_Christopher, Main Character), (_veterinarian, Character Job) and the request SELECT Name FROM Context WHERE Role=City.

**Performance metrics.** We define a task $T$ as a set of input-output pairs $(x, y)$, where $x$ is the LM input and $y$ is the expected label token. We use two main metrics to quantify the performance of a language model on an ORION task $T$:
- **Accuracy:** $\mathbb{E}_{(x,y)\sim T}[M(x) = y]$
- **Token probability:** $\mathbb{E}_{(x,y)\sim T}[p(y \mid x)]$

Accuracy serves as our primary metric to assess model performance in solving tasks due to its straightforward interpretation and practical relevance, since in applications the most probable token is often chosen. However, accuracy falls short of capturing nuanced aspects of predictions; for instance, it does not measure the margin by which a token is the most probable. To obtain a granular evaluation of model behavior after interventions, we employ token probability, which offers a continuous measure.

We evaluate the performance of 18 models from four different model families: GPT-2 (Radford et al., 2019), Pythia (Biderman et al., 2023), Falcon (Almazrouei et al., 2023) and Llama 2 (Touvron et al., 2023). We study base language models for all families except Falcon, where we include two instruction fine-tuned models. We choose the models to capture diverse scales, architectures, and training techniques. Unsurprisingly, larger models can solve a wider range of problems. Models with more than 6 billion parameters are able to solve every task with more than 70% accuracy. Nonetheless, even GPT-2 small with 125M parameters, one of the smallest models, can solve the simplest version of the question-answering task with 100% accuracy. Detailed evaluations using token probability and logit difference are available in Appendix A.
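The abstract representation above can be illustrated with a small script; the data structure and function names are illustrative, not ORION's actual code, but the semantics of SELECT $ATTR_t$ FROM $C$ WHERE $ATTR_f$ = $v_f$ are as described.

```python
# A small illustration of ORION's abstract representation: a context as a
# table of attributes and a request as a SELECT ... WHERE query, with R(C)
# returning the single label token.
context = [  # each row lists the attributes of one story element
    {"Name": " Valencia",     "Role": "City"},
    {"Name": " Christopher",  "Role": "Main Character"},
    {"Name": " veterinarian", "Role": "Character Job"},
]

def apply_request(context, target_attr, filter_attr, filter_value):
    """SELECT target_attr FROM context WHERE filter_attr = filter_value."""
    matches = [row[target_attr] for row in context
               if row[filter_attr] == filter_value]
    assert len(matches) == 1, "ORION tasks are built so R(C) is a single token"
    return matches[0]

# R(C) = " Valencia" for the request of Figure 2.
print(apply_request(context, "Name", "Role", "City"))
```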
In the following analyses, we only consider settings where the model can robustly solve the task. Thus, we focus on pairs of models and tasks with greater than 70% accuracy.

4 MACROSCOPIC CAUSAL ANALYSIS ON ORION: A UNIVERSAL EMERGENT DECOMPOSITION OF RETRIEVAL TASKS

To correctly solve retrieval tasks, an LM has to gather and combine, at the last token position, information coming from the request and the context. We focus our investigation on understanding how these two processing steps are organized in the intermediate layers at the last token position. In this section, we consider a coarse-grained division of the model, intervening on full layers instead of a finer-grained division, e.g. single attention heads and MLP blocks. We find this level of analysis sufficient to develop a high-level causal understanding of how language models solve retrieval tasks, while providing a computationally tractable set of experiments to run at scale. We complement this general coarse-grained analysis in Section 5 with a finer-grained case study on Pythia-2.8b solving a question-answering task.

Figure 3: Normalized token probability and accuracy for the label tokens $R_1(C_1)$, $R_1(C_2)$ and $R_2(C_2)$ after patching the residual stream across all layers. Patching early (before $L_1 = 13$) and late (after $L_3 = 27$) leads to the expected results: respectively, no change in output and patching the output from $x_1$. However, intervening on the middle layer ($L_2 = 16$) leads to the model confidently outputting the token $R_1(C_2)$, a modular combination of the request from $x_1$ and the context from $x_2$.

4.1 METHODS

Our main experimental technique is _residual stream patching_. Residual stream patching is a single-input interchange intervention, replacing the residual stream at a layer $L$ at the last position in the forward pass of the model on input $x_2$ with its activation from another input $x_1$. Following the notation introduced in Section 2.2, we write $M(x_2 \mid z_n^L \leftarrow z_n^L(x_1))$ for the model output on $x_2$ after this intervention. As shown in Figure 1, residual stream patching makes every component before layer $L$ take the activation it has on $x_1$, while the components after layer $L$ receive mixed activations (denoted by the yellow color in the figure): these later layers see activations at the last position coming from $x_1$, while activations from earlier positions come from $x_2$.

To characterize the output of the patched model, we measure the token probability and accuracy for three different label tokens related to the inputs $x_1$ and $x_2$. We use the label tokens of $x_1$ and $x_2$, $R_1(C_1)$ and $R_2(C_2)$ respectively, as well as the label token $R_1(C_2)$, the result of applying the request from $x_1$ to the context of $x_2$. To facilitate comparisons between different tasks and models, we normalize the token probability based on the mean probability of the correct token given by the model for the task. In addition, we calculate the normalized accuracy, where 0 represents the accuracy of a random guess (i.e. responding to a random request in a given context) and 1 denotes the model's baseline accuracy for that task. We perform residual stream patching at the last position for every layer, model, and task of ORION.
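A hedged sketch of residual stream patching with PyTorch forward hooks follows. It assumes `layer_module` outputs the residual stream with shape (batch, seq, hidden); the exact module to hook varies across model families, and the helper name is illustrative rather than taken from the paper's codebase.

```python
# Sketch of M(x2 | z_n^L <- z_n^L(x1)): cache the last-position residual
# stream of layer L on x1, then overwrite it during the forward pass on x2.
import torch

def patch_residual_stream(model, layer_module, x1_ids, x2_ids):
    cache = {}

    def save_hook(mod, inp, out):
        h = out[0] if isinstance(out, tuple) else out
        cache["z"] = h[:, -1, :].detach()          # z_n^L on the source input x1

    def patch_hook(mod, inp, out):
        h = out[0] if isinstance(out, tuple) else out
        h = h.clone()
        h[:, -1, :] = cache["z"]                   # overwrite the last position
        return (h, *out[1:]) if isinstance(out, tuple) else h

    handle = layer_module.register_forward_hook(save_hook)
    with torch.no_grad():
        model(x1_ids)                              # source run on x1
    handle.remove()

    handle = layer_module.register_forward_hook(patch_hook)
    with torch.no_grad():
        logits = model(x2_ids)                     # patched run on x2
    handle.remove()
    return logits
```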
For each task, we use a dataset of 100 prompts and average the results of 100 residual stream patching experiments, with $x_1$ and $x_2$ chosen uniformly from the task dataset.

4.2 RESULTS OF RESIDUAL STREAM PATCHING

Figure 3 shows the results of residual stream patching on the question-answering task with a uniform answer prefix for the Pythia-2.8b model. We observe that after residual stream patching at layers before layer 13, the model outputs $R_2(C_2)$ with 100% normalized token probability. Our interpretation is that this intervention does not perturb the model's processing of $x_2$. We further observe that residual stream patching after layer 27 causes the model to output $R_1(C_1)$ with more than 80% normalized token probability. In effect, patching the residual stream after a certain layer is equivalent to hard-coding the model output on $x_1$. Surprisingly, when patching between layers 15 and 16, we observe that the model outputs $R_1(C_2)$ with 100% normalized accuracy, i.e. with the same accuracy level as the baseline task accuracy. The model outputs the result of the request contained in the input $x_1$ applied in the context of the input $x_2$.
# SPECTRAL COMPRESSIVE IMAGING VIA UNMIXING-DRIVEN SUBSPACE DIFFUSION REFINEMENT

**Haijin Zeng**¹∗, **Benteng Sun**²∗, **Yongyong Chen**²†, **Jingyong Su**²†, **Yong Xu**² — ¹Harvard University, ²Harbin Institute of Technology (Shenzhen). haijin.zeng2018@gmail.com, SMARK2019@outlook.com

ABSTRACT

Spectral Compressive Imaging (SCI) reconstruction is inherently ill-posed because a single observation admits multiple plausible reconstructions. Traditional deterministic methods struggle to effectively recover high-frequency details. Although diffusion models offer promising solutions to this challenge, their application is constrained by the limited training data and high computational demands associated with multispectral images (MSIs), making direct diffusion training impractical. To address these issues, we propose a novel Predict-and-unmixing-driven-Subspace-Refine framework (PSR-SCI). This framework begins with a light-weight predictor that produces an initial, rough estimate of the MSI. Subsequently, we introduce an unmixing-driven reversible spectral embedding module that decomposes the MSI into subspace images and spectral coefficients. This compact representation facilitates the adaptation of pre-trained RGB diffusion models and focuses the refinement process on high-frequency details, thereby enabling efficient diffusion generation with minimal MSI data. Additionally, we design a high-dimensional guidance mechanism enforcing SCI consistency during sampling. The refined subspace image is then reconstructed back into an MSI using the reversible embedding, yielding the final MSI with full spectral resolution. Experimental results on the standard KAIST dataset and the zero-shot datasets NTIRE, ICVL, and Harvard show that PSR-SCI enhances overall visual quality and delivers PSNR and SSIM results competitive with state-of-the-art diffusion, transformer, and deep-unfolding baselines. This framework provides a robust alternative to traditional deterministic SCI reconstruction methods. Code and models are available at https://github.com/SMARK2022/PSR-SCI.

1 INTRODUCTION

Multispectral imaging extends beyond the visible light spectrum, capturing image data across diverse wavelength ranges, such as the infrared and ultraviolet spectra. This method, aided by filters or specialized instruments, reveals information beyond human perception, which is limited to red, green, and blue wavelengths. Consequently, multispectral images (MSIs) find applications in diverse fields such as remote sensing (Yuan et al., 2017; Zeng et al., 2020), medical imaging (Lu & Fei, 2014; Meng et al., 2020b), and environmental monitoring (Thenkabail et al., 2014). Despite their utility, traditional multispectral imaging suffers from prolonged acquisition times due to spatial or temporal scanning, posing a significant hurdle for many computer vision applications (Arad et al., 2022). Recent advancements in snapshot compressive imaging (SCI) systems have streamlined the acquisition of two-dimensional measurements of MSIs, facilitating efficient multispectral image acquisition and processing (Cao et al., 2016; Yuan et al., 2015; Ma et al., 2021). However, SCI reconstruction poses unique challenges compared to traditional denoising or reconstruction tasks, as it must recover MSIs from compressed measurements. This process also involves coping with severe degradation caused by physical modulation, spectral compression, and unpredictable system noise.
Reconstructing an MSI with full spatial-spectral resolution from a single measurement is an inherently challenging and ill-posed inverse problem. Current methods face obstacles in accurately reconstructing specific aspects due to inadequate sampling in certain areas. Insufficient sampling [...] (∗ Equal contribution. † Corresponding author.) [...] ground truth in a supervised manner falls under the umbrella of end-to-end methods (Ongie et al., 2020). While these methods perform well within their distribution, they may exhibit fragility to distributional shifts or changes in the image degradation or imaging process (Jalal et al., 2021).

Diffusion models (Nichol & Dhariwal, 2021; Choi et al., 2021; Kawar et al., 2022) have demonstrated notable proficiency in generating content from RGB images (Zhu et al., 2023). Leveraging their generative capacity to address hard-to-reconstruct segments holds promise for enhancing multispectral SCI results (Ho et al., 2020; Song et al., 2020a; Choi et al., 2021; Anderson, 1982; Chung et al., 2022). Nonetheless, two significant challenges must be confronted: (i) Due to the broader spectrum captured by MSIs, there is limited training data available for MSIs compared to RGB images. (ii) The high-dimensional nature of MSIs significantly increases the computational cost of diffusion denoising, especially when considering the number of sampling steps involved. Consequently, training a diffusion model directly on MSIs presents a considerable challenge.

Diffusion models pre-trained on large RGB datasets hold great potential for MSI reconstruction. However, several key challenges emerge when integrating diffusion models into the MSI domain: (1) Directly inputting MSIs, which comprise dozens of spectral bands, into existing diffusion models pre-trained on 3-channel RGB images is unfeasible due to the mismatch in channel numbers. (2) MSIs exhibit a significantly different wavelength spectrum compared to RGB images, and there exists a complex spectral interrelation among the bands of MSIs. (3) Diffusion models require considerable sampling time, a challenge intensified for MSIs by the increased computational cost of the denoiser network multiplied by the number of sampling steps.

This paper addresses these issues with four contributions: **(i)** We introduce a spectral unmixing-driven predict-and-subspace-refine strategy (PSR-SCI) for SCI reconstruction. This method yields better perceptual quality than deterministic methods and more efficient enhancement than typical diffusion models. **(ii)** Given the ill-posedness of spectral unmixing models, we introduce a reversible decomposition module. The module performs hierarchical low-rank decomposition, preserving reversibility and exploiting spectral sparsity for compression. **(iii)** Rather than directly enhancing the MSI, we focus the diffusion generation exclusively on the high-frequency component. This approach accelerates fine-tuning and significantly reduces the amount of required training data, thus addressing MSI data scarcity. **(iv)** We introduce a high-dimensional guidance mechanism with SCI imaging consistency.

We evaluated PSR-SCI's performance on simulated and real datasets. As shown in Fig. 1, PSR-SCI preserves finer details and attains a higher PSNR than current SOTAs.

2 RELATED WORKS

The existing frameworks for SCI reconstruction predominantly consist of _model-based, Plug-and-Play, End-to-end (E2E)_, and _Deep unfolding methods_.
_Model-based methods_ (Wagadarikar et al., 2008; Kittle et al., 2010; Liu et al., 2019; Wang et al., 2016; Zhang et al., 2019; Yuan, 2016; Tan et al., 2016; Figueiredo et al., 2007) depend on hand-crafted image priors such as total variation, sparsity, and low-rank structures. Although these methods offer theoretical guarantees and interpretability, they require manual parameter tuning, which slows down the reconstruction process. Additionally, they are often limited by their representation capacity and generalization ability. _Plug-and-play (PnP)_ algorithms (Chan et al., 2016; Qiao et al., 2020; Yuan et al., 2020; Meng et al., 2021; Zheng et al., 2021b; Yuan et al., 2021b) incorporate pre-trained denoising networks into traditional model-based methods for MSI reconstruction. However, because these pre-trained networks are fixed and not re-trained, their performance is limited by the fixed denoiser capacity and its mismatch to MSI statistics. _End-to-end (E2E) algorithms_ (Meng et al., 2020b;a; Hu et al., 2022; Miao et al., 2019; Yuan et al., 2021a) leverage convolutional neural networks (CNNs) to establish a mapping function from measurements to MSIs. Despite the advantages of deep learning, these methods often neglect the fundamental principles of SCI systems and are deficient in theoretical foundations, interpretability, and adaptability to variations in imaging models. _Deep unfolding methods_ (Wang et al., 2020; 2019; Meng et al., 2023; Ma et al., 2019; Huang et al., 2021; Fu et al., 2021; Zhang et al., 2022), on the other hand, utilize multi-stage networks to transform measurements into MSI cubes, providing interpretability through explicit characterization of image priors and system imaging models.

In addition to the four classic frameworks mentioned above, the advancement of _generative models_ (Lin et al., 2023; Miao et al., 2023; Ho et al., 2020; Wang et al., 2022; Whang et al., 2022) has led to the emergence of two additional works. These works primarily aim to enhance the accuracy of SCI reconstruction by leveraging the potential of _denoising diffusion models_. Specifically, a model named DiffSCI (Pan et al., 2024) utilizes a pre-trained denoising diffusion model for RGB images as the denoiser within the PnP framework. This approach combines structural insights from deep priors and optimization-based methodologies with the generative capabilities of contemporary denoising diffusion models. Another work uses a latent diffusion model to generate clean image priors for a deep unfolding network, to facilitate high-quality hyperspectral reconstruction (Wu et al., 2023).

3 OUR PSR-SCI METHOD

3.1 PROBLEM DEFINITION AND CHALLENGES

**Degradation Model of CASSI:** A representative snapshot compressive imaging system is the Coded Aperture Snapshot Spectral Compressive Imaging (CASSI) system (Wagadarikar et al., 2008; Meng et al., 2020a; Gehm et al., 2007) shown in Fig. 2. In this system, two-dimensional measurements $Y \in \mathbb{R}^{H\times(W + d\times(B-1))}$ are modulated from a three-dimensional MSI $X \in \mathbb{R}^{H\times W\times B}$, where $H$, $W$, $d$, and $B$ denote the MSI's height, width, shifting step, and total number of wavelengths, respectively. To formulate the imaging process, we first denote the vectorized measurement as $\mathbf{y} \in \mathbb{R}^n$ with $n = H(W + d(B-1))$ (Cai et al., 2022d; Ma et al., 2019),
the vectorized shifted MSI as $\mathbf{x} \in \mathbb{R}^{nB}$, and the mask as $\mathbf{\Phi} \in \mathbb{R}^{n\times nB}$. The imaging process can then be formulated as:

$$\mathbf{y} = \mathbf{\Phi}\mathbf{x} + \mathbf{n}, \quad (1)$$

where $\mathbf{n} \in \mathbb{R}^n$ denotes the imaging noise generated by the detector. Subsequently, it is necessary to decode the measurement $\mathbf{y}$ to obtain $\mathbf{x}$ with full spatial-spectral resolution, given $\mathbf{\Phi}$ (Tropp & Gilbert, 2007; Donoho, 2006; Jalali & Yuan, 2019).

Figure 2: Illustration of a single disperser CASSI system (modulation, dispersion, imaging).

**Denoising Diffusion Models for SCI?** In addressing the inherently ill-posed nature of SCI reconstruction, existing approaches face various challenges in achieving accurate detail reconstruction. One promising solution to this predicament lies in the denoising diffusion model, renowned for its generative capability. Nevertheless, (i) the existing diffusion-based methods are mostly designed for RGB images, in which the input and output have three channels, while the task of SCI reconstruction involves decoding a complete multi-band MSI from a single-band measurement. (ii) Meanwhile, limited by the inadequate datasets of MSIs and the high dimensionality of the data, the resource consumption required for retraining a powerful diffusion model from scratch on MSIs is a challenge. (iii) Furthermore, although many recent works have explored alternative sampling strategies that reduce the number of sampling steps (Song et al., 2021; San-Roman et al., 2021; Kong & Ping, 2021; Lee et al., 2021) for low-dimensional RGB images, the iterative diffusion process for high-dimensional multi-band MSIs is still time-intensive.

Figure 3: The overall framework of our PSR-SCI consists of three distinct yet interrelated modules, including (a) the initial predictor with frequency separator, and (b) the spectral unmixing-driven hierarchical spectral embedding, serving as a latent-space decomposition method with physical significance in the context of SCI. Additionally, we (c) fine-tune the diffusion generation of high-frequency subspace images atop models pre-trained on large-scale RGB images.
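The degradation model of Eq. (1) can be written, before vectorization, as a mask-then-shift-then-sum operation over bands. The following NumPy sketch is illustrative only (function and variable names are assumptions, not the authors' code).

```python
# An illustrative sketch of the CASSI degradation model in Eq. (1): each band
# of the MSI is modulated by the coded aperture, spectrally shifted by d
# pixels per band, and summed into a single 2D measurement.
import numpy as np

def cassi_forward(x, mask, d=1, noise_std=0.0):
    """x: (H, W, B) MSI, mask: (H, W) coded aperture -> y: (H, W + d*(B-1))."""
    H, W, B = x.shape
    y = np.zeros((H, W + d * (B - 1)))
    for b in range(B):
        y[:, b * d : b * d + W] += mask * x[:, :, b]   # modulate, shift, sum
    return y + noise_std * np.random.randn(*y.shape)   # detector noise n

x = np.random.rand(8, 8, 4)
mask = (np.random.rand(8, 8) > 0.5).astype(float)
print(cassi_forward(x, mask).shape)   # (8, 11) = (H, W + d*(B-1))
```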
3.2 PREDICT-AND-UNMIXING-DRIVEN DIFFUSION FRAMEWORK

In this section, given a measurement $Y \in \mathbb{R}^{H\times(W + d\times(B-1))}$, we introduce a method for generating a refined approximation of the full spatial-spectral resolution MSI, denoted $\hat{X} \in \mathbb{R}^{H\times W\times B}$, through a _predict-and-subspace-refine framework_ with diffusion-based adjustment. The overall diagram of our PSR-SCI method is shown in Fig. 3. Initially, we obtain a cost-effective initial estimate via a cheap predictor $\phi_\theta$: $X_{init} = \phi_\theta(Y)$. Then, we separate the frequency components via a frequency separator $\tau_\theta$, as depicted in Fig. 3-(a): $(X^h_{init}, X^l_{init}) = \tau_\theta(X_{init})$, preserving the PSNR-critical low-frequency structures intact while leaving the sparse, detail-rich high-frequency texture regions to the diffusion model.

Figure 4: Illustration of the initial low-frequency prediction and the final high-frequency component generated from diffusion (shown at 529.5nm and 575.5nm), where $X^h_{diff} = \psi_\theta^{-1}(A^h_{diff}, E)$.

Subsequently, as shown in Fig. 3-(c), to facilitate a fast diffusion process while making full use of diffusion models pre-trained on large-scale RGB data, we decompose $X^h_{init}$ into a low-dimensional abundance map $A$ and spectral coefficients $E$ using a reversible spectral embedding module $\psi$:

$$(A^h_{init}, E) = \psi_\theta(X^h_{init}), \quad (2)$$

where the inverse of $\psi$, denoted $\psi^{-1}$, satisfies $\psi_\theta^{-1}(A^h_{init}, E) \approx X^h_{init}$, and $\theta$ denotes the weights of the predictor and module. A fine-tuned diffusion model then operates on this low-dimensional abundance map: $A_{diff} = \mathrm{diff}(A_{init})$. To ensure the diffusion sampling process aligns with the provided measurement $Y$, we modify the diffusion model to enhance only the high-frequency component of $A$: $A^h_{diff} = \mathrm{diff}(A^h_{init})$. This modification allows the fine-tuned, RGB-pretrained diffusion model to focus solely on modeling the residuals, thereby minimizing deviations from the measurement. Finally, we obtain the reconstructed MSI by reversing the spectral embedding $\psi$:

$$\hat{X} = \psi_\theta^{-1}(A^h_{diff}, E) + X^l_{init}, \quad A^h_{diff} = \mathrm{diff}(A^h_{init}). \quad (3)$$

The initial predictor, which runs only once, effectively reduces the computational burden on the subsequent diffusion model by offloading the majority of the processing to itself. Our predict-and-subspace-refine method not only reduces the number of images required for fine-tuning the denoising diffusion process but also enables MSI generation capability through pre-trained diffusion models. Fine-tuning the RGB pre-trained denoising diffusion model with added parallel UNet encoder layers in the subspace allows for efficient diffusion sampling on high-dimensional MSIs.
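Composing Eqs. (2)-(3), the whole pipeline reads as a short function; the callables `predictor`, `freq_sep`, `urse`, `urse_inv`, and `diffusion_refine` below are hedged stand-ins for $\phi_\theta$, $\tau_\theta$, $\psi_\theta$, $\psi_\theta^{-1}$, and $\mathrm{diff}(\cdot)$, not the released implementation.

```python
# A hedged sketch of the PSR-SCI reconstruction pipeline (Fig. 3, Eqs. 2-3).
def psr_sci_reconstruct(y, predictor, freq_sep, urse, urse_inv, diffusion_refine):
    x_init = predictor(y)                  # cheap initial estimate of the MSI
    x_high, x_low = freq_sep(x_init)       # frequency separation (Fig. 3-(a))
    a_high, e = urse(x_high)               # Eq. (2): subspace image + spectral coeffs
    a_diff = diffusion_refine(a_high, y)   # measurement-guided subspace diffusion
    return urse_inv(a_diff, e) + x_low     # Eq. (3): reverse embedding + low freq
```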
Without this subspace sampling approach, the computational budget for iteratively denoising a high-dimensional MSI increases significantly, as any rise in computational cost due to dimensionality is amplified by the number of sampling steps used.

3.3 UNMIXING-DRIVEN REVERSIBLE SPECTRAL EMBEDDING

Spectral unmixing theory posits that an MSI can be decomposed into an abundance map and spectral endmembers; this is inherently an ill-posed problem with numerous potential solutions. Abundance fractions denote the relative proportions of distinct pure materials, known as endmembers, present within a mixed pixel (Keshava & Mustard, 2002). To expedite the diffusion process and leverage pre-trained RGB denoising diffusion models efficiently, we propose decomposing the underlying MSI into a reduced low-dimensional image $A$ and spectral coefficients $E$, while ensuring an approximately reversible decomposition process. To achieve this, we introduce an unmixing-driven reversible spectral embedding module (URSe). Utilizing a hierarchical spectral subspace learning strategy, as illustrated in Fig. 3-(b), URSe ensures that the compression and reconstruction gap within each stage is minimized. The backbone of URSe comprises simple Conv $N \times N$ layers, focusing on compressing and decompressing spectral information. The upsampling operator utilized in URSe is "_Bilinear interpolation + Conv_" instead of the widely used transposed convolution, to reduce checkerboard artifacts, as shown in Fig. 5.

Figure 5: Illustration of the proposed spectral embedding (top); the PSNR and SSIM are the averaged results over 10 scenes of the KAIST dataset. Comparison of upsampling operators within URSe (bottom).

Additionally, to mitigate information loss during the reverse process of the spectral embedding, we introduce a spectral attention module that generates the spectral coefficients $E$ from the embedding process. These spectral coefficients are reused during reversal to enhance reconstruction fidelity, as shown in Eq. 3. As depicted in Fig. 5, URSe trained on the CAVE dataset achieves fast spectral embedding (0.00073s) and accurate inverse reconstruction (0.00016s), yielding a PSNR of 47.39dB and an SSIM of 0.9928. Notably, due to its minimal parameter count, URSe can achieve effective training and decomposition even on a single image, as demonstrated in Fig. 10-(a)(b).

3.4 UNMIXING-DRIVEN MSI DIFFUSION REFINEMENT

The proposed unmixing-driven reversible spectral embedding module enables the transformation of a high-dimensional MSI into a reduced low-dimensional subspace image, with a reliable inverse mapping for reversal. This facilitates the utilization of diffusion models pre-trained on large-scale RGB datasets to address MSI data scarcity, while also enabling a fast diffusion process to alleviate the computational budget constraints of MSIs. On the basis of Sec. 3.3, this section outlines a methodology for producing accurate high-frequency subspace approximations ($A^h_{diff}$). This is achieved by fine-tuning the Stable Diffusion model (Rombach et al., 2022) pre-trained on large-scale RGB datasets, augmented with a tailored high-dimensional MSI control mechanism, atop the IRControlNet architecture (Lin et al., 2023), as shown in Fig. 3-(c). As in Stable Diffusion, all diffusion processes in our method are performed in latent space, where an autoencoder (Kingma & Welling, 2013) converts an image $x$ into a latent $z$ with an encoder $\mathcal{E}$ and reconstructs it with a decoder $\mathcal{D}$.

**Basic Diffusion Process.**
The forward process is a Markov chain, where Gaussian noise with variance $\beta_t \in (0, 1)$ at time $t$ is progressively added to the latent $z = \mathcal{E}(x)$ to produce the noisy latent:

$$z_t = \sqrt{\bar{\alpha}_t}\, z + \sqrt{1 - \bar{\alpha}_t}\, \epsilon, \quad (4)$$
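A minimal sketch of the forward step in Eq. (4) follows, assuming the standard convention $\bar{\alpha}_t = \prod_{s \le t}(1 - \beta_s)$ and an illustrative linear beta schedule (the paper's exact schedule is not specified here).

```python
# A minimal sketch of the forward diffusion step in Eq. (4) on a latent z.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)           # illustrative schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # assumed: abar_t = prod(1 - beta_s)

def q_sample(z, t):
    """z_t = sqrt(abar_t) * z + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(z)
    ab = alpha_bar[t].view(-1, *([1] * (z.dim() - 1)))  # broadcast over batch
    return ab.sqrt() * z + (1.0 - ab).sqrt() * eps

z = torch.randn(2, 4, 16, 16)                   # latent from the autoencoder E
print(q_sample(z, torch.tensor([10, 500])).shape)
```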
# NO EQUATIONS NEEDED: LEARNING SYSTEM DYNAMICS WITHOUT RELYING ON CLOSED-FORM ODES

**Krzysztof Kacprzyk** University of Cambridge, kk751@cam.ac.uk. **Mihaela van der Schaar** University of Cambridge, mv472@cam.ac.uk

ABSTRACT

Data-driven modeling of dynamical systems is a crucial area of machine learning. In many scenarios, a thorough understanding of the model's behavior becomes essential for practical applications. For instance, understanding the behavior of a pharmacokinetic model, constructed as part of drug development, may allow us both to verify its biological plausibility (e.g., the drug concentration curve is non-negative and decays to zero in the long term) and to design dosing guidelines (e.g., by looking at the peak concentration and its timing). Discovery of closed-form ordinary differential equations (ODEs) can be employed to obtain such insights by finding a compact mathematical equation and then analyzing it (a two-step approach). However, its widespread use is currently hindered because the analysis process may be time-consuming, requiring substantial mathematical expertise, or even impossible if the equation is too complex. Moreover, if the found equation's behavior does not satisfy the requirements, editing it or influencing the discovery algorithms to rectify it is challenging, as the link between the symbolic form of an ODE and its behavior can be elusive. This paper proposes a conceptual shift for modeling low-dimensional dynamical systems by departing from the traditional two-step modeling process. Instead of first discovering a closed-form equation and then analyzing it, our approach, direct semantic modeling, predicts the semantic representation of the dynamical system (i.e., a description of its behavior) directly from data, bypassing the need for complex post-hoc analysis. This direct approach also allows the incorporation of intuitive inductive biases into the optimization algorithm and editing of the model's behavior directly, ensuring that the model meets the desired specifications. Our approach not only simplifies the modeling pipeline but also enhances the transparency and flexibility of the resulting models compared to traditional closed-form ODEs.

1 INTRODUCTION

**Background: data-driven modeling of dynamical systems through ODE discovery.** Modeling dynamical systems is a pivotal aspect of machine learning (ML), with significant applications across various domains such as physics (Raissi et al., 2019), biology (Neftci & Averbeck, 2019), engineering (Brunton & Kutz, 2022), and medicine (Lee et al., 2020). In real-world applications, understanding the model's behavior is crucial for verification and other domain-specific tasks. For instance, in drug development, it is important to ensure the pharmacokinetic model (Mould & Upton, 2012) is biologically plausible (e.g., the drug concentration is non-negative and decays to zero), and dosing guidelines may be set up based on the peak concentration and its timing (Han et al., 2018). One effective approach to gain such insights is the discovery of closed-form ordinary differential equations (ODEs) (Bongard & Lipson, 2007; Schmidt & Lipson, 2009; Brunton et al., 2016a), where a concise mathematical representation is first found by an algorithm and then analyzed by a human.
**Motivation: the primary goal of discovering a closed-form ODE is its semantic representation.** We assume that the primary objective of discovering a closed-form ODE, as opposed to using a black-box model, is to have a model representation that can be analyzed by humans to understand the model's behavior (Qian et al., 2022). Under this assumption, the specific form of the equation, its _syntactic representation_, is just a medium that allows one to obtain the description of the model's behavior, its _semantic representation_, through post-hoc mathematical analysis. We call the process of discovering an equation and then analyzing it a _two-step modeling_ approach. An illustrative example showing the difference between a syntactic and a semantic representation of the same ODE (the logistic growth model (Verhulst, 1845)) can be seen in Figure 1.

**Limitations of the traditional two-step modeling.** The traditional two-step modeling pipeline, where an ODE is first discovered and then analyzed to understand its behavior, presents several limitations. The analysis process can be time-consuming and require substantial mathematical expertise. It may even be impossible if the discovered equation is too complex. Furthermore, as the link between syntactic and semantic representations may not be straightforward, modifying the discovered equation to adjust the model's behavior may pose significant challenges. This complicates the refinement process and limits the ability to ensure that the model meets specific requirements.

**Proposed approach: direct semantic modeling.** To overcome these limitations, we propose a novel approach, called _direct semantic modeling_, that shifts away from the traditional two-step pipeline. Instead of first discovering a closed-form ODE and then analyzing it, our approach generates the semantic representation of the dynamical system directly from data, eliminating the need for post-hoc mathematical analysis. By working directly with the semantic representation, our method allows for intuitive adjustments and the incorporation of constraints that reflect the system's behavior. This direct approach also facilitates more flexible modeling and improved performance, as it does not rely on a compact closed-form equation.

**Contributions and outline.** In Section 3, we define the _syntactic_ and _semantic representation_ of ODEs, discuss the limitations of the traditional _two-step modeling pipeline_, and introduce _direct semantic modeling_ as an alternative. We formalize semantic representation (Section 4) and then use it to introduce _Semantic ODE_ in Section 5, a concrete instantiation of our approach for modeling 1D systems. Finally, we illustrate its practical usability and flexibility in Section 6.
Figure 1: Syntactic representation of a logistic growth model ($\dot{x}(t) = x(t)\,(1 - x(t)/2.8)$, $x(0) = x_0$) refers to its symbolic form, whereas its semantic representation describes its behavior for different initial conditions (the cases $0 \le x_0 < 1.4$, $1.4 \le x_0 < 2.8$, $x_0 = 2.8$, and $2.8 < x_0$), annotated with properties such as the inflection point $\rho(x_0)$ and the limits $\lim_{x_0 \to 0^+} \rho(x_0) = +\infty$, $\lim_{t \to +\infty} x(t) = 2.8$, and $\lim_{x_0 \to \infty} o(x_0) = 0$.

2 FORECASTING MODELS AND DISCOVERY OF CLOSED-FORM ODES

In this section, we formulate the task of discovering closed-form ODEs from data and show how it can be reinterpreted as the more general problem of fitting a forecasting model. Let $\mathbf{f}: \mathbb{R}^{M+1} \to \mathbb{R}^M$, and let $T = (t_0, +\infty)$. A system of $M$ ODEs is described as

$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t), t) \quad \forall t \in T, \quad (1)$$

where $\mathbf{x}: T \to \mathbb{R}^M$ is called a _trajectory_ and $\dot{x}_m = \frac{\mathrm{d}x_m}{\mathrm{d}t}$ is the derivative of $x_m$ with respect to $t$. We also assume each $x_m \in C^2(T)$, i.e., it is twice continuously differentiable on $T$ (we assume $x_m \in C^2(T)$ instead of $C^1(T)$ so that we can discuss curvature and inflection points). We denote the dataset of observed trajectories as $\mathcal{D} = \{(t_n^{(d)}, \mathbf{y}_n^{(d)})_{n=1}^{N_d}\}_{d=1}^{D}$, where each $\mathbf{y}_n^{(d)}$ represents a noisy measurement of some ground-truth trajectory $\mathbf{x}^{(d)}$ governed by $\mathbf{f}$ at time point $t_n^{(d)}$.

A closed-form equation (Qian et al., 2022) is a mathematical expression consisting of a finite number of variables, constants, binary arithmetic operations ($+, -, \times, \div$), and some well-known functions such as exponential or trigonometric functions. A system of ODEs is called closed-form when each function $f_m$ is closed-form. The task is to find a closed-form $\mathbf{f}$ given $\mathcal{D}$.

Traditionally (Bongard & Lipson, 2007; Schmidt & Lipson, 2009), the discovery of governing equations has been performed using genetic programming (Koza, 1992). In a seminal paper, Brunton et al. (2016a) proposed to represent an ODE as a linear combination of terms from a prespecified library. This was followed by numerous extensions, including implicit equations (Kaheman et al., 2020), equations with control (Brunton et al., 2016b), and partial differential equations (Rudy et al., 2017). Approaches based on the weak formulation of ODEs, which circumvent derivative estimation, have also been proposed (Messenger & Bortz, 2021a; Qian et al., 2022). An extended related works section can be found in Appendix F.

Each system of ODEs $\mathbf{f}$ (with some regularity conditions to ensure uniqueness of solutions) defines a forecasting model $\mathbf{F}$ through the initial value problem (IVP): for each initial condition $\mathbf{x}(t_0) = \mathbf{x}_0 \in \mathbb{R}^M$, $\mathbf{F}$ maps $\mathbf{x}_0$ to the trajectory governed by $\mathbf{f}$ satisfying this initial condition (in our work, a forecasting model is any model that outputs a trajectory). Therefore, ODE discovery can be treated as a special case of fitting a forecasting model $\mathbf{F}: \mathbb{R}^M \to C^2(T)$.
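To make the IVP view concrete, here is a minimal sketch of how an ODE $\mathbf{f}$ induces a forecasting model $\mathbf{F}$, integrated with the forward Euler method (the simplest choice; any ODE solver would do). The helper name, step size, and time grid are illustrative assumptions.

```python
# A minimal sketch: an ODE f defines a forecasting model F via the IVP,
# here solved with forward Euler steps on a fixed time grid.
import numpy as np

def make_forecaster(f, t0=0.0, dt=1e-2):
    """Map f: (x, t) -> dx/dt to F: (x0, t_end) -> trajectory on a grid."""
    def F(x0, t_end):
        ts = np.arange(t0, t_end, dt)
        xs = np.empty((len(ts), np.size(x0)))
        xs[0] = x0
        for i in range(1, len(ts)):
            xs[i] = xs[i - 1] + dt * f(xs[i - 1], ts[i - 1])  # Euler step
        return ts, xs
    return F

# Logistic growth from Figure 1: x' = x (1 - x / 2.8).
F = make_forecaster(lambda x, t: x * (1.0 - x / 2.8))
ts, xs = F(np.array([0.5]), 10.0)
print(xs[-1])   # approaches the carrying capacity 2.8
```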
3 FROM DISCOVERY AND ANALYSIS TO DIRECT SEMANTIC MODELING

In this section, we define the syntactic and semantic representations, describe the traditional two-step modeling and its limitations, and introduce our approach, direct semantic modeling.

3.1 SYNTAX VS. SEMANTICS

ODEs are usually represented symbolically as closed-form equations, for instance, $\dot{x}(t) = (1 - x(t))\,x(t)$. We refer to this kind of representation as _syntactic_.

**Syntactic representation** of a closed-form ODE refers to its symbolic form, i.e., the arrangement of variables, arithmetic operations, numerical constants, and some well-known functions.

The output of current ODE discovery algorithms is in the form of a syntactic representation. We assume that the primary objective of discovering a closed-form ODE, as opposed to using a black-box model, is to have a model representation that can be analyzed by humans to understand its behavior. Such understanding is necessary to ensure that the model behaves as expected; for instance, that it operates within the expected range of values and exhibits trends consistent with domain knowledge. We call the description of the dynamical system's behavior its _semantic representation_.

**Semantic representation** describes the behavior of a dynamical system. The semantic representation of a single trajectory may include its shape, properties, and asymptotic behavior, whereas the semantic representation of a forecasting model, including a system of ODEs, describes how these change under different conditions, e.g., for different initial conditions.

A comparison between the syntactic and semantic representation of the same ODE is shown in Figure 1.

3.2 TWO-STEP MODELING AND ITS LIMITATIONS

The semantic representation of a dynamical system is usually obtained by first discovering an equation (e.g., using an ODE discovery algorithm) and then analyzing it. This _two-step modeling_ approach has several limitations (depicted in Figure 2).

- **Analysis** of a closed-form ODE may be time-consuming and requires mathematical expertise. It may be impossible if the discovered equation is too complex. As a result, it may introduce a trade-off between better fitting the data and being simple enough to be analyzed by humans.
- **Obtained insights may be nonactionable.** As the link between syntactic and semantic representations is often far from trivial, it is difficult to edit the syntactic representation of the model to cause a specific change in its semantic representation, and to provide feedback to the optimization algorithm to solicit a model with different behavior.
- **Incorporation of prior knowledge.** Often the prior knowledge about the dynamical system concerns its semantic representation rather than its syntax. For instance, we may know what shape the trajectory should have (e.g., decreasing and approaching a horizontal asymptote) rather than what kind of terms or arithmetic operations are present in the best-fitting equation.

3.3 DIRECT SEMANTIC MODELING

To address the limitations of two-step modeling, we propose a conceptual shift in modeling low-dimensional dynamical systems. Instead of discovering an equation from data and then analyzing it
to obtain its semantic representation, our approach, _direct semantic modeling_, generates the semantic representation directly from data, eliminating the need for post-hoc mathematical analysis.

**Forecasting model determined by semantic representation.** A major difference between our approach and traditional two-step modeling is how the model ultimately predicts the values of the trajectory. Given a system of closed-form ODEs $\mathbf{f}$, a forecasting model $\mathbf{F}$ is directly given by the equation: we just need to solve the initial value problem (IVP) for the given initial condition. There are plenty of algorithms to do so numerically, the forward Euler method being the simplest (Butcher, 2016). In contrast, the result of direct semantic modeling is a _semantic predictor_ $F_{\text{sem}}$ (that corresponds to the semantic representation of the model) that predicts the semantic representation of the trajectory. It then passes it to a _trajectory predictor_ $\mathbf{F}_{\text{traj}}$ whose role is to find a trajectory in a given hypothesis space that has a matching semantic representation. The matching does not need to be unique, but $\mathbf{F}_{\text{traj}}$ needs to be deterministic. Defining $\mathbf{F}$ as $\mathbf{F}_{\text{traj}} \circ F_{\text{sem}}$ has multiple advantages. No post-hoc mathematical analysis is required, as the semantic representation of $\mathbf{F}$ is directly accessed through $F_{\text{sem}}$. The model can be easily edited to enforce a specific change in the semantic representation because we can directly edit $F_{\text{sem}}$. Incorporating prior knowledge and feedback into the optimization algorithm is also streamlined and more intuitive. Finally, as the resulting model does not need to be further analyzed, it does not need to have a compact symbolic representation, increasing its flexibility. Figure 2 compares two-step modeling and direct semantic modeling.

**Semantic ODE as a concrete instantiation.** We have outlined the core principles of direct semantic modeling above. In the following sections, we propose a concrete machine learning model that realizes these principles. It is a forecasting model that takes the initial condition $x_0 \in \mathbb{R}$ and predicts a 1-dimensional trajectory, $x: \mathcal{T} \to \mathbb{R}$. We call it _Semantic ODE_ because it maps an initial condition to a trajectory (like ODEs implicitly do). Although Semantic ODE can only model 1-dimensional trajectories, we believe direct semantic modeling can be successfully applied to multi-dimensional systems. We describe our proposed roadmap for future research to achieve that goal in Appendix G.2. Before we describe the building blocks of Semantic ODE in Section 5, we need a formal definition of semantic representation.

Figure 2: Comparison between two-step modeling and direct semantic modeling. Left: The discovery of closed-form ODEs often allows for human analysis, but editing the equation or providing feedback to the optimization algorithm is challenging. Right: We propose to predict the semantic representation directly from data, which allows for editing the model and steering the optimization algorithm.
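The decomposition $\mathbf{F} = \mathbf{F}_{\text{traj}} \circ F_{\text{sem}}$ can be summarized in a short sketch; the type names and the `make_forecaster` helper below are our illustrative inventions, not the paper's API.

```python
from typing import Callable, Tuple
import numpy as np

# A semantic representation is a (composition, properties) pair, per Section 4.
SemanticRep = Tuple[tuple, dict]
Trajectory = Callable[[np.ndarray], np.ndarray]   # maps time points to values

def make_forecaster(F_sem: Callable[[float], SemanticRep],
                    F_traj: Callable[[SemanticRep], Trajectory]
                    ) -> Callable[[float], Trajectory]:
    """Compose a semantic predictor with a deterministic trajectory predictor."""
    def F(x0: float) -> Trajectory:
        sem = F_sem(x0)       # predict (composition, properties) directly from x0
        return F_traj(sem)    # pick a trajectory matching that semantics
    return F
```

Editing the model's behavior then amounts to editing the output of $F_{\text{sem}}$ before it reaches $\mathbf{F}_{\text{traj}}$, which is what makes the semantic representation directly actionable.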
To propose a concrete instantiation of direct semantic modeling in Section 5, called _Semantic ODE_, we need to formalize the definition of semantic representation from Section 3 to make it operational. We consider a setting where $F: \mathbb{R} \to C^2(\mathcal{T})$ is a 1D forecasting model (any ODE can be treated as such a model). We first define a semantic representation of a trajectory $x \in C^2(\mathcal{T})$ and then use it to define a semantic representation of $F$ itself.

**Semantic representation as composition and properties.** Our definition of semantic representation is motivated by the framework proposed by Kacprzyk et al. (2024b). Following that work, each trajectory $x$ can be assigned a _composition_ (denoted $c_x$) that describes the general shape of the trajectory and a set of properties (denoted $p_x$), which is a set of numbers that describes this shape quantitatively. The composition of the trajectory depends on the chosen set of _motifs_. Each motif describes the shape of the trajectory on a particular interval, for instance, "increasing and strictly convex". Given a set of motifs, we can then subdivide $\mathcal{T}$ into shorter intervals such that $x$ is described by a single motif on each of them. This results in a motif sequence, and the shortest such sequence is called a composition. The points between two motifs and on the boundaries are called _transition points_. An example of a trajectory, its composition, and its transition points is shown in Figure 3a.
Figure 3: (a) Composition and transition points of $x(t) = \sin(t)$ on $[0, 2\pi]$. (b) Motif set used in the proposed formalization of semantic representation.

**Extending dynamical motifs.** The set of motifs we choose is inspired by the original set of _dynamical motifs_ (Kacprzyk et al., 2024b), but we adjusted and extended it to cover unbounded time domains and different asymptotic behaviors. We define a set of ten motifs: four _bounded_ motifs and six _unbounded_ motifs. Each motif is of the form $s_{\pm\pm*}$, i.e., it is described by two symbols (each $+$ or $-$) and one letter ($b/u/h$). The symbols refer to its first and second derivatives. The letter $b$ signifies that the motif is for **b**ounded time domains (e.g., for an interval $(t_1, t_2)$). Both $h$ and $u$ refer to **u**nbounded time domains. These motifs are always the last motif of the composition, describing the shape on $(t_{\text{end}}, +\infty)$, where $t_{\text{end}}$ is the $t$-coordinate of the last transition point. $h$ specifically describes motifs with horizontal asymptotes. For instance, $s_{-+h}$ is an unbounded motif that describes a function that is decreasing ($-$), strictly convex ($+$), and with a horizontal asymptote ($h$). All motifs are visualized in Figure 3b. Note that we excluded the three original motifs describing straight lines to simplify the modeling process. If necessary, they can be approximated by other motifs with infinitesimal curvature. We denote the set of all compositions constructed from these motifs as $\mathcal{C}$.

**Properties.** Apart from the composition, the semantic representation of a trajectory also involves a set of properties. Ideally, the properties should be sufficient to visualize what each of the motifs looks like and to constrain the space of trajectories with the corresponding semantic representation. Following the original work, we include the coordinates of the transition points, as they characterize bounded motifs well. In contrast to their bounded counterparts, the unbounded motifs are not described by their right transition point but by a set of _motif properties_. These, in turn, depend on how we describe the unbounded motif. For instance, we could parameterize $s_{++u}$ as $x(t) = x(t_{\text{end}})\, 2^{(t - t_{\text{end}})/B}$, where $(t_{\text{end}}, x(t_{\text{end}}))$ is the position of the last transition point. In that setting, $B$ is the _property_ of $s_{++u}$ that describes the doubling time of $x$ ($x(t + B) = 2x(t)$). In reality, choosing a good parametrization with meaningful properties is challenging, and we discuss it in more detail in Appendix D.2. The set of properties also includes the first derivative at the first transition point ($t_0$) and the first and the second derivative at the last transition point ($t_{\text{end}}$). They are needed for the trajectory predictor described in Section 5.2.
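As a small numerical illustration of compositions, the sketch below estimates the motif sequence of a uniformly sampled trajectory from the signs of finite-difference derivatives; it handles only bounded motifs and can misclassify samples that fall exactly on transition points.

```python
import numpy as np

def composition(t, x):
    """Estimate the (bounded-motif) composition of a sampled trajectory from the
    signs of its first and second finite-difference derivatives."""
    dx = np.gradient(x, t)
    ddx = np.gradient(dx, t)
    motifs = [f"s{'+' if d > 0 else '-'}{'+' if dd > 0 else '-'}b"
              for d, dd in zip(dx, ddx)]
    comp = [motifs[0]]                # collapse repeats into the shortest sequence
    for m in motifs[1:]:
        if m != comp[-1]:
            comp.append(m)
    return comp

t = np.linspace(0.0, 2 * np.pi, 1000)
print(composition(t, np.sin(t)))      # expected: ['s+-b', 's--b', 's-+b', 's++b']
```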
Each composition $c \in \mathcal{C}$ may require a different set of properties, which we denote $\mathcal{P}_c$. For instance, a trajectory $x$ with $c_x = (s_{++b}, s_{+-h})$ will have $p_x = (t_0, t_1, x(t_0), x(t_1), \dot{x}(t_0), \dot{x}(t_1), \ddot{x}(t_1), h, t_{1/2})$, where each $(t_i, x(t_i))$ is a transition point, and $(h, t_{1/2})$ are the properties of the unbounded motif (see Figure 4). We denote all possible sets of properties as $\mathcal{P}$, where $\mathcal{P} = \bigcup_{c \in \mathcal{C}} \mathcal{P}_c$.

We are finally ready to provide a formal definition of the semantic representation of a trajectory $x \in C^2(\mathcal{T})$ and a forecasting model $F: \mathbb{R} \to C^2(\mathcal{T})$. Given this formal definition of semantic representation, we introduce our model, Semantic ODE, in the next section.

**Definition 1.** The _semantic representation of a trajectory_ $x \in C^2(\mathcal{T})$ is a pair $(c_x, p_x)$, where $c_x \in \mathcal{C}$ is the composition of $x$ and $p_x \in \mathcal{P}_{c_x}$ is the set of properties as specified by $c_x$.

**Definition 2.** The _semantic representation_ of $F: \mathbb{R} \to C^2(\mathcal{T})$ is a pair $(C_F, P_F): \mathbb{R} \to \mathcal{C} \times \mathcal{P}$ defined as follows. $C_F: \mathbb{R} \to \mathcal{C}$ is called a _composition map_, and it maps any initial condition $x_0 \in \mathbb{R}$ to the composition of the trajectory determined by this initial condition. Formally, $C_F(x_0) = c_{F(x_0)}$. $P_F: \mathbb{R} \to \mathcal{P}$ is called a _property map_, and it maps any initial condition $x_0 \in \mathbb{R}$ to the properties of the predicted trajectory. Formally, $P_F(x_0) = p_{F(x_0)}$.
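Definitions 1 and 2 translate naturally into simple containers; the field names in this sketch are our own illustrative choices.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class TrajectorySemantics:
    """Semantic representation (c_x, p_x) of a trajectory (Definition 1)."""
    composition: Tuple[str, ...]   # e.g. ('s++b', 's+-h')
    properties: Dict[str, float]   # e.g. transition points, derivatives, h, t_half

@dataclass
class ModelSemantics:
    """Semantic representation (C_F, P_F) of a forecasting model (Definition 2)."""
    composition_map: Callable[[float], Tuple[str, ...]]   # C_F: x0 -> composition
    property_map: Callable[[float], Dict[str, float]]     # P_F: x0 -> properties
```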
# ROOT CAUSE ANALYSIS OF ANOMALIES IN MULTIVARIATE TIME SERIES THROUGH GRANGER CAUSAL DISCOVERY

**Xiao Han**[1] **Saima Absar**[2] **Lu Zhang**[2] **Shuhan Yuan**[1]

1 Utah State University, 2 University of Arkansas

{xiao.han,shuhan.yuan}@usu.edu, {sa059,lz006}@uark.edu

ABSTRACT

Identifying the root causes of anomalies in multivariate time series is challenging due to the complex dependencies among the series. In this paper, we propose a comprehensive approach called AERCA that inherently integrates Granger causal discovery with root cause analysis. By defining anomalies as interventions on the exogenous variables of time series, AERCA not only learns the Granger causality among time series but also explicitly models the distributions of exogenous variables under normal conditions. AERCA then identifies the root causes of anomalies by highlighting exogenous variables that significantly deviate from their normal states. Experiments on multiple synthetic and real-world datasets demonstrate that AERCA can accurately capture the causal relationships among time series and effectively identify the root causes of anomalies.

1 INTRODUCTION

Root cause analysis on multivariate time series data, which aims to identify the underlying causes of an anomaly, has a wide spectrum of applications in various domains, such as diagnosing faults in online cloud-based systems or cyber-physical systems (Jayathilaka et al., 2017; Jeyakumar et al., 2019; Soldani & Brogi, 2022; Yu et al., 2021). Traditional approaches, which manually trace the root cause based on the topology of the system, have become impractical due to increasing system complexity, leading to a greater focus on data-driven methods. One promising direction is based on a causal framework, which models system components and their dependencies via a causal graph and then traces how the failure of one component might propagate through the system (Assaad et al., 2023a; Li et al., 2022; Zhang et al., 2021; Ikram et al., 2022; Wang et al., 2023b; Okati et al., 2024). For instance, consider a cyber-physical system like a water treatment plant equipped with multiple sensors—such as water level, pH level, and electrical conductivity—that generate multivariate time series data. If an attacker overdoses sodium hydroxide, it could lead to abnormal readings in metrics like pH level and electrical conductivity. Root cause analysis aims to identify the root cause of such abnormal behavior, even when the time series data create ripple effects across other metrics—for example, an increase in sodium hydroxide leading to abnormalities in additional measurements.

Despite the advantages of providing a scalable and systematic way of understanding the relationships and causal chains in complex systems, existing causal inference-based root cause analysis approaches usually suffer from various limitations. For example, Budhathoki et al. (2022) and Assaad et al. (2023b) assume the causal relationships as prior knowledge, which may not be feasible in real cases. On the other hand, although some approaches (Yang et al., 2022; Meng et al., 2020; Wang et al., 2018b) try to learn the causal structures from the observational data, they usually leverage existing causal discovery algorithms, which do not consider the need for identifying root causes. In this paper, we propose a comprehensive approach that inherently integrates Granger causal discovery with root cause analysis.
We treat the root cause of the anomaly, such as an overdose of sodium hydroxide, as an intervention on the exogenous variables in a structural causal model (SCM). We refer to this as an exogenous intervention, where the exogenous variables follow a stable distribution under normal conditions but undergo interventions when anomalies occur [1]. Under this core idea, we identify the key to root cause identification, which is to model the normality of exogenous variables for multivariate time series and then highlight abnormal exogenous variables. Current causal discovery approaches mainly focus on identifying the causal structures among time series/endogenous variables without explicitly modeling the impact of exogenous variables, making them unsuitable for locating the root cause of an anomaly due to exogenous interventions. Therefore, to achieve our goal, we propose a novel autoencoder-based framework for root cause analysis, referred to as AERCA. This framework identifies Granger causal relationships in time series by explicitly modeling the distributions of exogenous variables, which serves as the foundation for our root cause localization approach. Specifically, to model the data generation process, i.e., the causal relationships as well as the distributions of exogenous variables, the encoder models the abductive reasoning process to derive the exogenous variable for each time series. Based on our core assumption that the exogenous variables are mutually independent, we establish effective constraints to ensure this independence. Meanwhile, the decoder learns a deductive reasoning process to infer the observed data from the exogenous variables. We theoretically show that to predict the input at time $t$, rather than using exogenous variables of all time steps before $t$, the decoder only needs to take in the exogenous variables and observed time series from a window prior to $t$. We train AERCA on the normal data. Then, upon deployment, if the encoder-derived values of exogenous variables significantly deviate from the norm, the corresponding time series are highly likely to be the root cause of the anomaly.

The contributions of this paper are as follows: 1) we propose a novel encoder-decoder structure for Granger causal discovery, which can not only learn the causal relationships between time series but also capture the distribution of exogenous variables; 2) based on the learned structural causal model, AERCA can not only identify the root-cause time series but also highlight the root-cause time steps; 3) experimental results on multiple datasets show that AERCA can achieve state-of-the-art performance on both Granger causal discovery and root cause identification.

2 RELATED WORK

Understanding the root cause of an anomaly has received increasing attention because of its wide real-world applications. Accurate root cause localization can help domain users understand and mitigate abnormal behaviors. The mainstream approaches in root cause analysis follow a two-step framework: identifying the dependency between variables from observational data and then localizing the root cause by exploring the dependency graph. Therefore, the key step is to build the dependency graph. Traditionally, domain knowledge or a systems tool can be leveraged to build the dependency graph. For example, in a microservice system, a directed edge between two nodes usually indicates a system call (Kim et al., 2013; Weng et al., 2018; Wang et al., 2018a; Yu et al., 2021).
However, as systems become sophisticated, it becomes impractical to build the dependency graph based on domain knowledge, and the call graph learned by system tools may not represent the true dependency between sensors (Kim et al., 2013). Therefore, data-driven approaches are now commonly used for learning the dependency between variables. For example, various deep neural networks have been developed to capture the temporal and spatial correlations in multivariate time series for root cause analysis (Zhang et al., 2019; Tuli et al., 2022; Zhao et al., 2020). Recently, causal inference-based root cause analysis has received increasing attention, which models the anomaly as data under intervention (Assaad et al., 2023a; Li et al., 2022). Under this assumption, root cause localization is to identify the intervention on observational data (Li et al., 2022). Several approaches leverage the PC algorithm (Spirtes et al., 2001) or its variants to build the causal graph by using conditional independence tests (Zhang et al., 2021; Ikram et al., 2022). Some approaches also leverage graph neural networks to learn the causal relationships between nodes by simulating the data generation process (Wang et al., 2023b;a).

1 Note that not all attacks can be treated as interventions on exogenous variables. Understanding the nature of anomalies is crucial before applying our method to real-world applications.

In this work, we propose a comprehensive approach that inherently integrates Granger causal discovery with root cause analysis. By assuming that anomalies are caused by exogenous interventions, we introduce a novel method for Granger causal discovery that explicitly models the distribution of exogenous variables. Consequently, unlike existing studies that can only locate the root-cause time series without specifying the abnormal time steps, our approach identifies the root cause as the time series receiving exogenous interventions at specific time steps, providing much more informative and precise localization.

3 PRELIMINARY: GRANGER CAUSALITY

Granger causality (Granger, 1969; Dahlhaus & Eichler, 2003) is commonly used for modeling causal relationships in multivariate time series. The key assumption is that if the prediction of the future value of $Y$ can be improved by knowing past elements of $X$, then $X$ "Granger causes" $Y$. Granger causality was originally defined for linear relationships, while recently, non-linear Granger causality has been proposed (Tank et al., 2021; Assaad et al., 2022): Let a stationary time series be $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_t, \ldots, \mathbf{x}_T)$, where $\mathbf{x}_t \in \mathbb{R}^d$ is a $d$-dimensional vector (e.g., $d$-dimensional time series data from $d$ sensors) at a specific time $t$. Suppose that the true data generation mechanism is defined in the form of

$$x_t^{(j)} := f^{(j)}\big(\mathbf{x}_{\le t-1}^{(1)}, \cdots, \mathbf{x}_{\le t-1}^{(d)}\big) + u_t^{(j)}, \quad \text{for } 1 \le j \le d, \tag{1}$$

where $\mathbf{x}_{\le t-1}^{(j)} = [\cdots, x_{t-2}^{(j)}, x_{t-1}^{(j)}]$ denotes the past of series $j$; $u_t^{(j)} \in \mathbf{u}^{(j)}$ indicates the exogenous variable for time series $j$ at time step $t$; $f^{(j)}(\cdot)$ is a function for time series $j$ that captures how the past values impact the future values of $\mathbf{x}^{(j)}$.
Time series $i$ Granger causes $j$ if $f^{(j)}$ depends on $\mathbf{x}_{\le t-1}^{(i)}$, i.e., $\exists\, \mathbf{x}_{\le t-1}^{\prime(i)} \ne \mathbf{x}_{\le t-1}^{(i)}$ such that $f^{(j)}\big(\mathbf{x}_{\le t-1}^{(1)}, \cdots, \mathbf{x}_{\le t-1}^{\prime(i)}, \cdots, \mathbf{x}_{\le t-1}^{(d)}\big) \ne f^{(j)}\big(\mathbf{x}_{\le t-1}^{(1)}, \cdots, \mathbf{x}_{\le t-1}^{(i)}, \cdots, \mathbf{x}_{\le t-1}^{(d)}\big)$ (Tank et al., 2021; Marcinkevičs & Vogt, 2021; Shojaie & Fox, 2022).

**Limitations of Granger Causality.** While Granger causality is a valuable method for detecting temporal causal dependencies, it is important to understand its limitations. Specifically, Granger causality assumes no hidden confounding, i.e., all relevant variables influencing the causal relationship are observed and included in the model, and no instantaneous effects between variables, i.e., the influence of one variable on another is not immediate but occurs with some time lag. Violating these assumptions can lead to erroneous conclusions in Granger causality analysis, highlighting the importance of careful assessment of assumptions and consideration of alternative models.

4 METHODOLOGY

4.1 PROBLEM FORMULATION AND FRAMEWORK

Based on the structural equation of multivariate time series defined in Eq. 1, in this work, we focus on an anomaly $\tilde{x}_t^{(j)}$ caused by exogenous interventions on a single or multiple time series, leading to a significantly deviating value in its exogenous variable $\hat{u}_t^{(j)}$, which can be defined as

$$\tilde{x}_t^{(j)} = f^{(j)}\big(\mathbf{x}_{\le t-1}^{(1)}, \cdots, \mathbf{x}_{\le t-1}^{(d)}\big) + \hat{u}_t^{(j)} = f^{(j)}\big(\mathbf{x}_{\le t-1}^{(1)}, \cdots, \mathbf{x}_{\le t-1}^{(d)}\big) + u_t^{(j)} + \epsilon_t^{(j)}, \quad \text{for } 1 \le j \le d, \tag{2}$$

where $\hat{u}_t^{(j)} = u_t^{(j)} + \epsilon_t^{(j)}$ with an anomaly term $\epsilon_t^{(j)}$. Note that the abnormal time series caused by exogenous interventions can be either a point anomaly or a sequential anomaly. A point anomaly can be due to an exogenous intervention on a specific time series at a single time step. In contrast, a sequential anomaly can be caused by the propagation of an exogenous intervention through time by following the causal structural model, or by a consistent exogenous intervention over time steps. Therefore, an informative root cause analysis shows not just the time series but also the time steps receiving the exogenous intervention. Based on this motivation, we define the task of root cause identification below.

**Definition 1.** _The root cause identification is to locate the time series/variables_ $(j)$ _at specific time step(s)_ $t$ _with the abnormal exogenous variable_ $\hat{u}_t^{(j)}$.

For an anomaly caused by exogenous interventions, to achieve root cause analysis, we learn the Granger causality in multivariate time series by explicitly modeling the distribution of exogenous variables. To this end, we develop an encoder-decoder structure for root cause analysis, called AERCA, which can calculate the exogenous variable for each time series at a specific time step.
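A minimal simulation of this setting, with an illustrative one-lag mechanism $f$, Gaussian exogenous variables, and an arbitrary detection threshold (none of which are the paper's choices), looks as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 500

def f(x_prev):
    """Illustrative data generation per Eq. 1 with a single lag; x_prev: (d,)."""
    return np.array([0.6 * x_prev[0],                        # series 0 <- itself
                     0.5 * x_prev[0] + 0.3 * x_prev[1],      # series 1 <- series 0, itself
                     np.tanh(x_prev[1]) + 0.2 * x_prev[2]])  # series 2 <- series 1, itself

U = rng.normal(scale=0.1, size=(T, d))   # normal exogenous variables u_t^(j)
U_hat = U.copy()
j_root, t_root = 1, 250
U_hat[t_root, j_root] += 2.5             # exogenous intervention eps_t^(j) (Eq. 2)

X = np.zeros((T, d))
for t in range(1, T):
    X[t] = f(X[t - 1]) + U_hat[t]        # the anomaly propagates to downstream series

# Root cause identification (Definition 1): flag (t, j) whose exogenous variable
# deviates strongly from its normal scale.
z = np.abs(U_hat) / 0.1
print(np.argwhere(z > 5))                # expected to contain [250, 1]
```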
AERCA explicitly computes the exogenous variables via an encoder, and a decoder predicts the current value by simulating the data generation mechanism defined by the Granger causality. By training the encoder-decoder structure on normal time series, the model can capture the distribution of exogenous variables in the normal state. When an exogenous intervention occurs, the derived exogenous variables should significantly differ from the normal ones. Meanwhile, because we explicitly derive the exogenous variables at each time step, even if the time series is still abnormal due to error propagation through time, AERCA can distinguish the root cause from the downstream impact. Figure 1 shows the framework of AERCA. In the following, we explain each component of the framework.

Figure 1: The overview of AERCA.

4.2 GRANGER CAUSAL DISCOVERY

**Motivation.** To model the data generation process, i.e., the causal relationships as well as the distributions of exogenous variables, we adopt the encoder-decoder structure to simulate both the abductive and deductive reasoning processes. Abductive reasoning seeks the most plausible explanations, i.e., it infers the most likely exogenous variables (causes) that could have generated the observed time series data. As shown in Eq. 1, based on Granger causality, the value of the time series at step $t$ is a function of past time series plus an exogenous term at the current step, i.e., $\mathbf{x}_t := f(\mathbf{x}_{\le t-1}) + \mathbf{u}_t$, with simplified notations. To simulate abductive reasoning, the encoder derives the exogenous variables based on the observed data by rewriting Eq. 1 as

$$\mathbf{u}_t := \mathbf{x}_t - f(\mathbf{x}_{\le t-1}). \tag{3}$$

On the other hand, deductive reasoning derives effects from known causes, i.e., it reconstructs the observed data from exogenous variables. By recursively resolving each previous time step—such as expressing $\mathbf{x}_{t-1}$ in terms of its predecessor $\mathbf{x}_{t-2}$, and continuing this process backward to the first time step—we can rewrite Eq. 1 in a different way as a function of the exogenous variables:

$$\mathbf{x}_t = \tilde{f}(\mathbf{u}_{\le t-1}) + \mathbf{u}_t, \tag{4}$$

which shows that the observed data at step $t$ is represented as a function $\tilde{f}(\cdot)$ of all preceding exogenous variables. Within the encoder-decoder framework, this function acts as the decoder for reconstructing the observed data directly from the exogenous variables.

Based on the above analysis, we develop an encoder-decoder structure, where the encoder learns the Granger causal relationships $f(\cdot)$ by using past time series values as input to compute the exogenous variables, simulating Eq. 3. The decoder $\tilde{f}(\cdot)$ then takes these exogenous variables from the encoder as input to reconstruct the value of the current time step $\mathbf{x}_t$, simulating Eq. 4.

**Encoder-decoder Structure.** Given normal multivariate time series $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_t, \ldots, \mathbf{x}_T)$, we define a window with length $K$ as $\mathbf{W}_t = (\mathbf{x}_{t-K+1}, \ldots, \mathbf{x}_t)$ and convert a time series $\mathbf{X}$ to a sequence of sliding windows $\mathcal{W} = (\mathbf{W}_K, \mathbf{W}_{K+1}, \ldots, \mathbf{W}_T)$. We first aim to learn the Granger causality of time series in a window, i.e., a window causal graph (Assaad et al., 2022). Given a time series window, we first parameterize the Granger causality in time series defined in Eq.
1 as

$$\mathbf{x}_t = \sum_{k=1}^{K} \omega_{\theta_k}(\mathbf{x}_{t-k})\, \mathbf{x}_{t-k} + \mathbf{u}_t, \tag{5}$$

where $\omega_{\theta_k}(\mathbf{x}_{t-k})$ indicates the $k$-th neural network used to predict the Granger causal relationship between $\mathbf{x}_{t-k}$ and $\mathbf{x}_t$. The output of $\omega_{\theta_k}(\mathbf{x}_{t-k})$ can be reshaped as a $d \times d$ coefficient matrix, where the entry $(i, j)$ indicates the influence of $x_{t-k}^{(j)}$ on $x_t^{(i)}$. As shown in Eq. 5, $K$ neural networks are used to predict the weights of the past $K$ time lags when deriving $\mathbf{x}_t$. Therefore, relationships between $d$ time series over $K$ time lags can be explored by inspecting the $K$ coefficient matrices. Following Eq. 3, we rewrite Eq. 5 as

$$\mathbf{u}_t = \mathbf{x}_t - \sum_{k=1}^{K} \omega_{\theta_k}(\mathbf{x}_{t-k})\, \mathbf{x}_{t-k}. \tag{6}$$

Then, given a time series window $\mathbf{W}_t$, we apply the encoder $K$ times to derive the exogenous variables in a window, denoted as $\mathbf{U}_t = (\mathbf{u}_{t-K+1}, \ldots, \mathbf{u}_t)$. To enforce independence between the derived exogenous variables, we ensure that the distribution of $\mathbf{U}_t$ adheres to an isotropic standard Gaussian distribution $Q$. By assuming that the exogenous variables follow a multivariate Gaussian distribution and applying the KL divergence to quantify the distribution difference, we formulate the independence constraint as

$$D_t^{KL}\big(P(\mathbf{U}_t) \,\|\, Q\big) = \frac{1}{2}\left( \mathrm{tr}(\Sigma_Q^{-1} \Sigma_t) + (\mu_Q - \mu_t)^{T} \Sigma_Q^{-1} (\mu_Q - \mu_t) - d + \log \frac{\det \Sigma_Q}{\det \Sigma_t} \right) = \frac{1}{2}\left( \mathrm{tr}\{\Sigma_t\} + \mu_t^{T} \mu_t - d - \log \det \Sigma_t \right), \tag{7}$$

where $\mu_Q = 0$ and $\Sigma_Q = I$ represent the mean and covariance matrix of the isotropic standard Gaussian distribution $Q$; $\mu_t$ and $\Sigma_t$ are the mean and covariance matrix of $\mathbf{U}_t$.

The decoder is to reconstruct the input $\mathbf{x}_t$ based on the exogenous variables $\mathbf{U}_t$. One challenge is that, theoretically, the value $\mathbf{x}_t$ at the current time step is computed from the exogenous variables of all previous time steps. However, considering the potentially infinite length of the time series, it is impractical to reconstruct $\mathbf{x}_t$ by using all the previous time steps. To tackle this challenge, we iteratively replace $\mathbf{x}_{t-k}$ with $\mathbf{x}_{t-(k+1)}$ for a subsequence with length $n$ and derive the following proposition.

**Proposition 1.** _Consider a basic autoregressive model where $\omega_k = \omega_{\theta_k}(\mathbf{x}_{t-k})$ as a framework for analyzing Granger causality. The value at the current time step $\mathbf{x}_t$ can be derived from the exogenous variables of a previous window $[\mathbf{u}_{t-1}, \ldots, \mathbf{u}_{t-K}]$ and the observed time series of a previous window $[\mathbf{x}_{t-K-1}, \ldots, \mathbf{x}_{t-2K}]$ with the following equation:_

$$\mathbf{x}_t = \sum_{m=1}^{K} \alpha_{K-m}\, \mathbf{u}_{t-(K-m)} + \alpha_K\, \mathbf{x}_{t-K} + \sum_{m=2}^{K+1} \alpha_{K+1-m} \sum_{k=m}^{K} \omega_k\, \mathbf{x}_{t-k-(K+1-m)}, \tag{8}$$

_where $\omega_k$ indicates the parameter of Granger causality, and $\alpha_n = \sum_{i=1}^{n} \omega_i\, \alpha_{n-i}$, $1 \le n \le K$, is a recursive equation with $\alpha_0 = 1$._

We provide proof of the proposition in Appendix A.1.
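Equation 8, as reconstructed above, can be checked numerically for a scalar linear AR($K$) model; the coefficients below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 2, 50
w = np.array([0.5, -0.3])                  # omega_1, omega_2
u = rng.normal(size=T)
x = np.zeros(T)
for t in range(K, T):
    x[t] = w @ x[t - K:t][::-1] + u[t]     # x_{t-1} pairs with omega_1

# alpha_0 = 1, alpha_n = sum_{i=1}^{n} omega_i * alpha_{n-i}
alpha = [1.0]
for n in range(1, K + 1):
    alpha.append(sum(w[i - 1] * alpha[n - i] for i in range(1, n + 1)))

t = 30
rhs = sum(alpha[K - m] * u[t - (K - m)] for m in range(1, K + 1))
rhs += alpha[K] * x[t - K]
rhs += sum(alpha[K + 1 - m] * sum(w[k - 1] * x[t - k - (K + 1 - m)]
                                  for k in range(m, K + 1))
           for m in range(2, K + 2))
print(np.isclose(rhs, x[t]))               # expected: True
```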
Inspired by Proposition 1, we propose a decoder structure that combines both observed time series and exogenous variables. Specifically, we parameterize the impact of the exogenous variable $\mathbf{u}_{t-k}$ on $\mathbf{x}_t$ by a neural network $\bar{\omega}_{\bar{\theta}_k}$ and the impact of the observed time series $\mathbf{x}_{t-K-k}$ on $\mathbf{x}_t$ by another neural network $\bar{\omega}'_{\bar{\theta}'_k}$. Then, the decoder computes $\mathbf{x}_t$ based on the following equation.
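As an illustration of the encoder side (Eqs. 5-7), here is a minimal PyTorch sketch; the MLP architecture, hidden sizes, and covariance jitter are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Derive u_t = x_t - sum_k omega_{theta_k}(x_{t-k}) x_{t-k} (Eq. 6)."""
    def __init__(self, d, K, hidden=64):
        super().__init__()
        # One network per lag k, each emitting a d x d coefficient matrix (Eq. 5).
        self.omega = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, d * d))
            for _ in range(K))
        self.d, self.K = d, K

    def forward(self, window):               # window: (K+1, d) = [x_{t-K}, ..., x_t]
        x_t = window[-1]
        pred = torch.zeros(self.d)
        for k in range(1, self.K + 1):
            W = self.omega[k - 1](window[-1 - k]).view(self.d, self.d)
            pred = pred + W @ window[-1 - k]
        return x_t - pred                     # u_t per Eq. 6

def kl_to_standard_gaussian(U):
    """Independence constraint of Eq. 7; U: (n, d) derived exogenous variables."""
    mu = U.mean(0)
    cov = torch.cov(U.T) + 1e-6 * torch.eye(U.shape[1])   # jitter for stability
    return 0.5 * (torch.trace(cov) + mu @ mu - U.shape[1] - torch.logdet(cov))

enc = Encoder(d=3, K=2)
u_t = enc(torch.randn(3, 3))                  # toy window [x_{t-2}, x_{t-1}, x_t]
kl = kl_to_standard_gaussian(torch.randn(100, 3))
```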
# WEIGHTED MULTI-PROMPT LEARNING WITH DESCRIPTION-FREE LARGE LANGUAGE MODEL DISTILLATION

**Sua Lee**[1]_∗_, **Kyubum Shin**[2]_∗_, **Jung Ho Park**[1]

1 Seoul National University, 2 Naver AI

ABSTRACT

Recent advances in pre-trained Vision Language Models (VLM) have shown promising potential for effectively adapting to downstream tasks through _prompt learning_, without the need for additional annotated paired datasets. To supplement the text information in VLM trained on correlations with vision data, new approaches leveraging Large Language Models (LLM) in prompts have been proposed, enhancing robustness to unseen and diverse data. Existing methods typically extract text-based responses (i.e., _descriptions_) from LLM to incorporate into prompts; however, this approach suffers from high variability and low reliability. In this work, we propose **De**scription-free **Mul**ti-prompt Learning (**DeMul**), a novel method that eliminates the process of extracting descriptions and instead directly distills knowledge from LLM into prompts. By adopting a description-free approach, prompts can encapsulate richer semantics while still being represented as continuous vectors for optimization, thereby eliminating the need for discrete pre-defined templates. Additionally, in a multi-prompt setting, we empirically demonstrate the potential of prompt weighting in reflecting the importance of different prompts during training. Experimental results show that our approach achieves superior performance across 11 recognition datasets.

1 INTRODUCTION

What are the sentences that "best" describe the _Golden Retriever_ in the image shown in Fig. 1? Even for someone familiar with the breed, answers to this question will always differ, as there cannot be definitive correct answers. Thus, if such ambiguous sentences are defined as the "categories" of the image, it becomes challenging for others to categorize it accurately.

Recently, there has been a growing interest in leveraging pre-trained Large Language Models (LLM), such as GPT (Brown et al., 2020), across various tasks. In image recognition, the _descriptions_ an LLM returns to a query, e.g., {"What are useful features for distinguishing a class name?"}, have been used to enhance accuracy compared to using only a single class label. Notably, in vision language models (VLM), these descriptions are adeptly integrated into prompts, which are then utilized as categories for classification. However, even with well-pretrained LLM, there are limitations, including **high variability** in responses and **low reliability** of some descriptions. These limitations occur because: (i) the query is inherently open-ended, leading to multiple plausible interpretations, (ii) the format of the query influences the responses, and (iii) inherent biases affect the generated descriptions.

Formally, aside from applying descriptions, _prompt learning_ has been studied as an efficient method to enhance generalization in pre-trained VLM, such as CLIP (Radford et al., 2021), GLIP (Li et al., 2022), and ALIGN (Jia et al., 2021), without the necessity of additional task-specific annotated data. To set the text prompt used in these models, the standard approach involves simply applying a pre-defined template, e.g., "A photo of a {class}.". However, trivial variations in this template, such as "a" or ".", can have a profound impact on inference performance (Zhou et al., 2022b).
To mitigate this variation, the prompts can be defined as learnable continuous vectors that can be optimized, so that each prompt can be trained to have an optimal arrangement and semantics.

_∗_ Equal contribution. Correspondence to: sualee.susan@gmail.com

Figure 1: **High variability and low reliability of GPT-based descriptions:** An example of descriptions obtained by asking GPT the question, "What are useful features for distinguishing a Golden Retriever?". Some descriptions highlight highly distinctive visual features, while others convey ambiguous meanings and often begin with qualifiers such as 'often', 'may', or 'maybe' that can reduce clarity. Moreover, it is uncertain whether the last description accurately portrays the characteristics of a Labrador Retriever. Comparing the text similarity between these descriptions and the class name "Labrador Retriever" reveals significant variability in reliability. There is also a noticeable discrepancy between our manual assessment of the descriptions and the determination of useful features for classification based on their similarity.

With prompts no longer fixed to a single template, multiple prompts can be assigned to each class, empirically demonstrating superiority over single prompts. This raises the question: _Which prompt holds relatively significant semantics?_ While efforts have been made to handle the distribution of existing learnable prompts, methods dealing with the importance of prompts have yet to be studied.

In this work, we present **Description-free Multi-prompt Learning (DeMul)** as a way to directly distill the LLM's pre-trained distribution without descriptions. Specifically, instead of directly inserting descriptions into prompts, our approach maps learnable prompts into the LLM embedding space and distills them to absorb its semantics. We chose GPT-based embedding models as the LLM to distill, which are accessible through the APIs provided by OpenAI. The public API available for transferring GPT is divided into two main types (Balkus & Yan, 2022): the _Completion Endpoint_, which is a text-based conversational model, and the _Embedding Endpoint_, which uses embeddings with more reliable performance. While existing description-based methods are limited to using the Completion Endpoint, we are the first to employ the Embedding Endpoint, enabling description-free distillation. By leveraging this API, we can handle prompts as embedding vectors instead of text, eliminating the need for pre-defined templates and allowing them to be optimized as CoOp (Zhou et al., 2022b) explored. Additionally, since prompts are learnable vectors, the importance of the semantics they contain continuously changes during training. DeMul introduces prompt weighting to reflect this variation in importance.

To summarize our contributions:

1. We propose a **description-free distillation** approach that removes the process of extracting descriptions and instead directly distills the pre-trained knowledge of LLM. The learnable prompts are mapped into the LLM embedding space, where they are optimized to capture meaningful semantics.
2. In a multi-prompt setting, the semantics that each prompt learns and their corresponding importance dynamically change during training. We introduce **prompt weighting** to adjust this importance, and our experiments demonstrate that this approach benefits the learning process.
3. We utilize CLIP, one of the most extensively researched models for prompt learning in VLM, as our baseline.
We compare its performance across a total of 11 datasets. Ours demonstrates superior results on most datasets, surpassing the existing baselines that use description-based methods or learnable prompt optimization methods.

Figure 2: **An overall framework of DeMul:** Here, $c_*$ denotes each class, $g$ represents the CLIP text encoder, and $h$ represents the GPT embedding model. The learning objective is to develop learnable prompts and a function $\varphi$ that captures the semantics of GPT. Initially, the learnable prompts are randomly generated but are then updated and trained to minimize the angular difference with the GPT embedding vectors of each class. Trained with a mapping loss, $\varphi$ initially preserves the embedding orientation of the set of classes, but is subsequently trained to maintain the directions of the class embeddings adjusted to the prompts.

2 PRELIMINARIES AND BACKGROUND

In this section, we provide a concise overview of zero-shot Vision-Language Models (VLM) using hand-crafted prompts, specifically focusing on CLIP. We then describe CoOp, a few-shot method for prompt learning that automatically optimizes prompt templates. Our method applies to various models and tasks that use text embeddings.

**Contrastive Language-Image Pretraining (CLIP)** CLIP is trained on a web-scale dataset comprising image-text pairs to learn aligned representations through contrastive learning. Specifically, it incorporates two modality-specific encoders—an image encoder $f(\cdot)$ and a text encoder $g(\cdot)$—which are trained to ensure that the correct pairs of embeddings are closely matched in the joint embedding space. Consider an input $(x, y)$, where $x \in \mathbb{R}^{C \times H \times W}$ and $y \in \mathbb{R}^{K}$. The normalized feature vectors are denoted as $z = f(x)/\|f(x)\|_2$ for the image and $w_y = g(p_0(y))/\|g(p_0(y))\|_2$ for the text $y$, where $p_0(y)$ is the prompt generated for the class $y$ using a pre-defined template, "A photo of a {class}.". The prediction probability for $x$ is calculated as shown in Eq. 1, where $\tau$ is the temperature parameter of the softmax function.

$$P(y \mid x) = \frac{\exp(z^{\top} w_y / \tau)}{\sum_{i=1}^{K} \exp(z^{\top} w_i / \tau)} \tag{1}$$

**Context Optimization (CoOp)** CoOp is a pioneering method to efficiently adapt the context of prompts for downstream visual recognition tasks. Instead of using pre-defined discrete tokens, CoOp employs $N$ learnable continuous parameters $\{v_1, v_2, \cdots, v_N\}$ as the template to be optimized, where each $v_i$ vector has the same dimension as the word embeddings. The learnable vectors $V = \{v_i\}_{i=1}^{N}$ are shared across all classes, and the generated prompt for each class $c$ is defined as $t = p_*(V, c) = [v_1][v_2]\cdots[v_N][c]$, where the prompting $p_*$ is a concatenation function of the consecutive vectors and the class name. The learnable tokens are optimized using few-shot samples by maximizing the matching score between the text and image embeddings.
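The zero-shot prediction rule of Eq. 1 reduces to a softmax over temperature-scaled cosine similarities; here is a minimal sketch with random stand-in embeddings (real $z$ and $w_y$ come from the CLIP encoders, and the temperature value is illustrative).

```python
import torch
import torch.nn.functional as F

def clip_probs(z, W, tau=0.01):
    """Eq. 1: P(y|x) from a normalized image embedding z (D,) and
    normalized per-class text embeddings W (K, D)."""
    logits = W @ z / tau              # cosine similarities scaled by temperature
    return torch.softmax(logits, dim=0)

z = F.normalize(torch.randn(512), dim=0)
W = F.normalize(torch.randn(10, 512), dim=1)
print(clip_probs(z, W).sum())         # tensor(1.) up to floating-point error
```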
3 METHOD

In this section, we present DeMul and its key components. We propose a method for extracting information from an LLM into learnable prompts without relying on hand-crafted descriptions. Additionally, we introduce a weighted multi-prompt learning approach using importance sampling to perform few-shot recognition with a fixed number of prompts effectively.

3.1 DISTILLING LARGE LANGUAGE MODELS WITHOUT MANUAL DESCRIPTIONS

The key question in our proposed approach is how to directly distill the GPT text semantics without hand-crafted descriptions. Specifically, _can we infuse class-specific information learned by GPT into a prompt without explicit descriptions?_ To achieve this, instead of querying GPT to extract features related to a class, we aim to train prompts to have maximized semantic correlations with class names within the GPT embedding space. (The overall workflow is illustrated in Figure 2.)

GPT models were originally developed for text generation tasks, but they are also utilized for text similarity-related tasks (e.g., text search, code search, sentence similarity, text classification). These models take text inputs and output 3072-dimensional vectors, having learned from large-scale language data to effectively measure similarities between various texts. While the exact training data, architecture, and other details are not publicly disclosed, this approach has demonstrated superior performance across many LLM-based embedding models. In our study, we employed the text-embedding-3-large model, which achieved a 64.6% accuracy in the MTEB (Muennighoff et al., 2022) evaluation.

**Mapping prompts** To leverage the semantic potential of the GPT embedding space for visual prompts, we developed a mechanism for aligning the CLIP embedding vectors with the GPT space. Since 5-layered MLPs are nonlinear and not diffeomorphisms in general, we focus on preserving the direction of the embedding vectors. The function $\varphi$ maps from the CLIP embedding space into the GPT embedding space, and $\psi$ maps from the GPT embedding space back to the CLIP embedding space, such that $\psi \circ \varphi$ forms a conformal map on a set of points. Throughout this paper, we refer to this conformal map as a _cyclic mapping_. Since the few-shot recognition task requires a small amount of training data, providing good initial values can increase the learning stability of the model. Thus, both $\varphi$ and $\psi$ are pre-trained on a comprehensive dataset $D_{\text{name}}$ that contains common class names as well as class names that appear in the benchmark datasets. $D_{\text{name}}$ consists of class names from WordNet and the 12 other datasets used in Section 4.1, encompassing a wide array of semantic contexts to establish robust initial mappings. As the fine-tuning process of $\varphi$ progresses with $\psi$ frozen, the set of directions preserved by the cyclic mapping $\psi \circ \varphi$ changes from $D_{\text{name}}$ to a dataset $D_{\text{mapping}}$ of learnable prompts. To learn the cyclic mapping, we propose the following loss function, which measures how closely the cycled embeddings resemble the directions of the original embeddings from the dataset. The similarity loss is formulated as follows:

$$\mathcal{L}_{\text{mapping}} = 1 - \frac{1}{N} \sum_{i=1}^{N} d_{\cos}\big(\psi(\varphi(t_i)),\, t_i\big) \tag{2}$$

where $t_i$ represents a prompt embedding in $D_{\text{mapping}}$, $d_{\cos}$ denotes the cosine similarity, and $N$ is the number of data points in $D_{\text{mapping}}$.
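A minimal sketch of the cyclic-mapping loss of Eq. 2, with stand-in MLPs for $\varphi$ and $\psi$; the layer sizes are assumptions (3072 matches the GPT embedding width mentioned above, 512 a typical CLIP text width).

```python
import torch
import torch.nn.functional as F

def mapping_loss(phi, psi, prompts):
    """L_mapping (Eq. 2): one minus the mean cosine similarity between each
    prompt embedding t_i and its cycled version psi(phi(t_i))."""
    cycled = psi(phi(prompts))                               # (N, D_clip)
    return 1.0 - F.cosine_similarity(cycled, prompts, dim=-1).mean()

phi = torch.nn.Sequential(torch.nn.Linear(512, 3072), torch.nn.ReLU(),
                          torch.nn.Linear(3072, 3072))      # CLIP -> GPT space
psi = torch.nn.Linear(3072, 512)                            # GPT -> CLIP space
t = torch.randn(8, 512)                                     # toy prompt embeddings
print(mapping_loss(phi, psi, t))
```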
**Distillation in GPT embedding space** In the multi-prompt setting, for each class $c$, a set of $M$ multiple learnable prompt vectors is defined as $T = \{t_i\}_{i=1}^{M}$, where each $t_i$ is generated by applying the prompting $p_*$ to a different learnable vector set $V_i$ and the class $c$, denoted by $t_i = p_*(V_i, c)$. This approach allows for creating a diverse array of prompts for each class, leveraging multiple vector configurations to capture various semantic nuances of the class. The effectiveness of these prompts is measured using a distillation process in the GPT embedding space. Each prompt $t_i$ is mapped to its GPT embedding by the function $\varphi$, resulting in a set of transformed prompts $\{\varphi(t_i)\}_{i=1}^{M}$. These transformed prompts are then aligned with the corresponding GPT class embeddings $h(c)$, optimized to ensure the correct semantic correlation:

$$\mathcal{L}_{\text{distill}} = -\frac{1}{K} \sum_{i=1}^{K} \frac{1}{M} \sum_{j=1}^{M} \log P\big(\varphi(t_{ij}) \mid h(c_i)\big) \tag{3}$$

where $P$ is the softmax function of Eq. 1, facilitating the training process by aiming to maximize the probability that each prompt correctly aligns with its respective class in the GPT space.

3.2 WEIGHTED MULTI-PROMPT LEARNING

In the context of visual recognition, the challenge is not only to capture the semantic richness of classes via text prompts but also to ensure effective classification in the CLIP embedding space. In a multi-prompt setting, where each class $c_i$ is associated with multiple prompts $T_i = \{t_{ij}\}_{j=1}^{M}$, the classification loss is initially defined as the average probability over all prompts for a given class:

$$\mathcal{L}_{\text{cls}} = -\frac{1}{K} \sum_{i=1}^{K} \log \left( \frac{1}{M} \sum_{j=1}^{M} P(y = c_i \mid x, t_{ij}) \right) \tag{4}$$

This approach, however, does not account for the varying importance of each prompt within a class, as different semantics may contribute differently to the recognition task. To address this, we introduce a prompt weighting mechanism that dynamically adjusts the importance of each prompt during training, recognizing that the number of relevant semantics or the importance of each semantic can vary significantly between classes. This is particularly crucial because the optimal number of prompts to effectively represent a class is not only non-trivial to determine heuristically but also varies across classes. Each prompt $t_{ij}$ within the class $c_i$ is assigned a learnable weight $w_{ij}$, reflecting its relative importance. The classification loss for each class is then reformulated to incorporate these weights, providing a weighted average probability that accounts for the differentiated contribution of each prompt:

$$\mathcal{L}_{\text{cls}} = -\frac{1}{K} \sum_{i=1}^{K} \log \left( \sum_{j=1}^{M} w_{ij} \cdot P(y = c_i \mid x, t_{ij}) \right) + \lambda \sum_{i=1}^{K} \sum_{j=1}^{M} |w_{ij}| \tag{5}$$

where $\lambda$ is a regularization parameter that controls the trade-off between the classification loss and the L1 penalty. The weights $\{w_{ij}\}$ are normalized for each class, ensuring that $\sum_{j=1}^{M} w_{ij} = 1$. This weighted approach with L1 regularization enhances model flexibility by allowing the model to emphasize more informative prompts while diminishing the impact of less relevant ones. The addition of the L1 term encourages sparsity, promoting a scenario where fewer but more significant prompts are actively used, thereby optimizing the classification performance and computational efficiency in processing visual tasks.
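A minimal sketch of the weighted classification loss of Eq. 5, assuming the per-prompt probabilities $P(y = c_i \mid x, t_{ij})$ have already been computed; the normalization scheme and $\lambda$ value are illustrative assumptions.

```python
import torch

def weighted_cls_loss(probs, weights, lam=1e-3):
    """Eq. 5: probs (K, M) holds P(y = c_i | x, t_ij); weights (K, M) holds w_ij."""
    w = weights / weights.sum(dim=1, keepdim=True)   # enforce sum_j w_ij = 1
    per_class = (w * probs).sum(dim=1)               # weighted average over prompts
    return -torch.log(per_class).mean() + lam * weights.abs().sum()

K, M = 4, 3
probs = torch.rand(K, M).clamp_min(1e-6)
weights = torch.ones(K, M, requires_grad=True)
print(weighted_cls_loss(probs, weights))
```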
3.3 TRAINING

The overall training objective for the weighted multi-prompt learning system combines the distillation loss and the weighted classification loss into a total loss function. This total loss is designed to optimize both the semantic alignment in the GPT embedding space and the classification accuracy in the CLIP embedding space. It is formulated as a weighted sum of the two losses:

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{cls}} + \alpha \mathcal{L}_{\text{distill}} \tag{6}$$

where $\alpha$ is a hyperparameter that balances the contribution of the distillation loss and the classification loss. This parameter allows the model to prioritize between the alignment of the prompts with the GPT model's embeddings and the direct classification performance, depending on the specific requirements of the task or the dataset characteristics.

4 EXPERIMENTS

4.1 EXPERIMENT SETUP

**Datasets** We evaluate our approach over 11 datasets, including ImageNet (Deng et al., 2009) and publicly available image recognition datasets used in GalLoP (Lafon et al., 2024): SUN397 (Xiao et al., 2010), Stanford Cars (Krause et al., 2013), UCF101 (Soomro et al., 2012), Caltech101 (Li et al., 2017), EuroSAT (Helber et al., 2019), FGVC Aircraft (Maji et al., 2013), Food101 (Bossard et al., 2014), DTD (Cimpoi et al., 2014), Oxford Flowers (Nilsback & Zisserman, 2008), and Oxford Pets (Parkhi et al., 2012).
# A LARGE-SCALE DATASET AND BENCHMARK FOR COMMUTING ORIGIN-DESTINATION FLOW GENERATION

**Can Rong**[1] **Jingtao Ding**[1] **Yan Liu**[2] **Yong Li**[1],_∗_

1 Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
2 Computer Science Department, University of Southern California, Los Angeles, CA, U.S.A.

```
rc20@mails.tsinghua.edu.cn, dingjt15@tsinghua.org.cn, liyong07@tsinghua.edu.cn
```

_∗_ Corresponding author.

ABSTRACT

Commuting Origin-Destination (OD) flows are critical inputs for urban planning and transportation, providing crucial information about the population residing in one region and working in another within an area of interest. Due to the high cost of data collection, researchers have developed physical and computational models to generate commuting OD flows from readily available urban attributes, such as sociodemographics and points of interest, for cities lacking historical OD flows, a task known as commuting OD flow generation. Existing works developed models based on different techniques and achieved improvements on different datasets with different evaluation metrics, which hinders establishing a unified standard for comparing model performance. To bridge this gap, we introduce a large-scale dataset containing commuting OD flows for 3,333 areas covering a wide range of urban environments around the United States. Based on that, we benchmark widely used models for commuting OD flow generation. We surprisingly find that network-based generative models achieve the optimal performance in terms of both precision and generalization ability, which may inspire new research directions of graph generative modeling in this field. The dataset and benchmark are available at https://github.com/tsinghua-fib-lab/CommutingODGen-Dataset.

1 INTRODUCTION

Commuting refers to the daily round-trip movement of individuals from their homes to their workplaces, which is an important topic in fields like urban planning, transportation, environmental science, and economics (Batty, 2007; Gonzalez et al., 2008; Iqbal et al., 2014; Liu et al., 2020). These movements between all pairs of origins and destinations within the area of interest can be effectively recorded as Origin-Destination (OD) flows. All OD flows across the entire area form the commuting OD matrix, where each element represents the number of people who reside in one region and work in another. The commuting OD matrix can be naturally modeled as a directed weighted graph, i.e., a commuting OD network, where nodes represent regions and edges represent the commuting OD flows between regions (Saberi et al., 2017; 2018). Understanding commuting OD flows at both the pairwise and the network level allows urban planners to analyze structured mobility patterns, optimize the transportation system, and make informed decisions on urban development (Zeng et al., 2022; 2024; Imai et al., 2021; Zhong et al., 2014). However, collecting the data is often costly and raises privacy concerns. Thus, researchers have developed both classic physical models (Zipf, 1946; Simini et al., 2012) and more recent, promising data-driven approaches (Pourebrahim et al., 2019; Liu et al., 2020; Simini et al., 2021; Rong et al., 2023c;b;d) to model commuting OD flows and generate data for areas lacking historical flows. This task is named commuting OD flow generation. Two main challenges remain: the lack of a comprehensive dataset and the absence of a unified and systematic evaluation.
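To make the OD-matrix-as-network view concrete, here is a minimal sketch with purely illustrative numbers.

```python
import numpy as np

# Toy commuting OD matrix for N = 3 regions: F[i, j] is the number of people
# residing in region i and working in region j (diagonal: within-region commuters).
F = np.array([[120,  30,   5],
              [ 45, 200,  10],
              [  8,  25,  90]])

# Commuting OD network: nodes are regions, directed weighted edges are flows.
edges = [(i, j, F[i, j]) for i in range(F.shape[0])
         for j in range(F.shape[1]) if F[i, j] > 0]
out_flow = F.sum(axis=1)   # workers residing in each region
in_flow = F.sum(axis=0)    # jobs located in each region
```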
In detail, existing works fall into three types: physical models, classic machine learning models, and graph neural network models. Physical models compare OD flows to physical phenomena, as in the gravity model (Zipf, 1946; Barbosa et al., 2018) and the radiation model (Simini et al., 2012). These models use simple mathematical equations to capture the pairwise relationships between origins and destinations; they have a strong theoretical basis but underfit the complexity of human mobility. Recent popular data-driven models (Rodriguez-Rueda et al., 2021; Pourebrahim et al., 2019; 2018; Robinson & Dilkina, 2018; Simini et al., 2021; Liu et al., 2020; Rong et al., 2023c) can capture the complex relationships between urban attributes and commuting OD flows with sophisticated models. However, these machine learning and deep learning methods learn from only a single area or a few areas, and have shown poor generalizability to distinct urban environments.

Despite the significant practical value of commuting OD flow generation, it has not gained widespread attention from the deep learning community. One key reason is the lack of a unified benchmark based on a comprehensive dataset. Currently, studies use their own datasets from individual city scenarios for evaluation, making it difficult to compare and communicate insights between different model designs.

To address this issue, we collect data from multiple sources and construct a **large**-scale dataset containing **commuting OD** matrices for 3,333 diverse areas across the whole United States (**LargeCommutingOD**). Thanks to the extensive spatial scale of the dataset, various urban environments are covered, including metropolitan areas, small cities, towns, and rural areas. To better support modeling research, each area in the dataset includes not only the commuting OD matrix but also regional sociodemographics and the numbers of points of interest (POIs) in different categories for all regions in the area. Specifically, each area is profiled with its boundary and the boundaries of the regions within it, represented as polygons with detailed geographic coordinates, i.e., latitude and longitude. The sociodemographics include the population of different genders and age groups, the number of households, income levels, and so on. The POIs are categorized into various types, such as restaurants, education, and shopping. This dataset can be used to comprehensively study and evaluate models for commuting OD flow generation.

Based on our dataset, we benchmark the existing widely used models for commuting OD flow generation within a common framework. We use randomly selected areas from the dataset, covering diverse urban environments, as the test set to comprehensively evaluate the models in terms of both precision and generalizability; the remaining areas are used to train the models. Physical models, classic machine learning models, and graph neural network models are all benchmarked. Moreover, generative models trained on large-scale data have demonstrated powerful performance not only in fields like natural language processing (Brown et al., 2020; Kaplan et al., 2020) and computer vision (Peebles & Xie, 2023) but also in spatiotemporal data modeling (Yuan et al., 2024; Jin et al., 2023).
We introduce into our benchmark a preliminary adaptation of the graph diffusion model, **W**eighted **E**dges **D**iffusion conditioned on **A**ttributed **N**odes (WEDAN). We surprisingly find that the network-based generative models perform the best in terms of both precision and generalization ability, which may call for a new paradigm of graph generative modeling in this field.

In summary, the contributions of this work are as follows:

- We construct a large-scale dataset (LargeCommutingOD) containing commuting OD flows for 3,333 diverse areas across the United States, covering 9,372,610 km² and including a wide range of urban environments. Each area also includes sociodemographics and POIs, totaling 131 features, as urban attributes for the regions within it.
- Based on LargeCommutingOD, we benchmark the existing widely used models for commuting OD flow generation. With a dataset containing distinct areas, we can comprehensively evaluate the models in terms of precision and generalizability.
- We find that network-based modeling of commuting OD flows, which treats an area and the commuting OD flows within it as a network, gives promising performance when supported by our dataset. By training on a large number of commuting OD networks, generative models can capture both the universal and the distinct mobility patterns at the city level, leading to better generalizability.

2 PRELIMINARIES

In this section, we introduce the definitions and problem formulation of commuting OD flow modeling, followed by the existing works in this field.

2.1 DEFINITIONS AND PROBLEM FORMULATION

**Definition 1. Regions.** We divide the area of interest into non-overlapping regions, represented as $\mathcal{R} = \{r_i \mid i = 1, 2, \ldots, N\}$, with $N$ being the total number of regions. Each region fulfills unique functions, indicated by its urban attributes $\mathbf{X}_r$, which include sociodemographics and the distribution of POIs in different categories.

Table 1: Comparison of the proposed dataset and the datasets used in existing works.

| Dataset | #Areas | Area Type | Covered Area (km²) |
|---|---|---|---|
| Karimi et al. (2020) | 1 | Central District | – |
| Pourebrahim et al. (2018; 2019) | 1 | Whole City | 789 |
| Liu et al. (2020) | 1 | Whole City | 789 |
| Yao et al. (2020) | 1 | Central District | 900 |
| Lenormand et al. (2015) | 2 | Whole City | 15,755 |
| Rong et al. (2023c;d;b) | 8 | Whole City | 25,954 |
| Simini et al. (2021) | 2,911 | National Gridding Coverage | 686,983 |
| Ours | 3,333 | Census Area Coverage | 9,372,610 |

**Definition 2. Spatial Characteristics.** The spatial characteristics $C_\mathcal{R}$ of an area are composed of the urban attributes of each region, $\{\mathbf{X}_{r_i} \mid r_i \in \mathcal{R}\}$, and the interactions between all regions, such as the distances $\{d_{ij} \mid r_i, r_j \in \mathcal{R}\}$.

**Definition 3. Commuting OD Flow.** A commuting OD flow $F_{r_{org}, r_{dst}}$ is the number of people residing in $r_{org}$ and working in $r_{dst}$.

**Definition 4. Commuting OD Matrix.** Denoted by $\mathbf{F} \in \mathbb{R}^{N \times N}$, the commuting OD matrix contains the commuting flows among all regions within the area; $F_{i,j}$ is the commuting flow from $r_i$ to $r_j$.

PROBLEM 1.
_**Commuting OD Flow Modeling.**_ _The problem is to learn a model that, given any area's spatial characteristics $C_\mathcal{R}$, generates a corresponding commuting OD matrix $\mathbf{F}$ that closely resembles the real one, without using any historical flow information._

2.2 EXISTING WORKS ON COMMUTING OD FLOW MODELING

**Limitations of Datasets Used in Existing Works.** As shown in Table 1, the datasets used in existing commuting OD flow modeling work have several major limitations. _First_, they have a **limited spatial scale**, usually focusing on a single city or a few large cities, leading to very limited spatial coverage. For example, Karimi et al. (2020) and Yao et al. (2020) only consider a central district of a city, and Pourebrahim et al. (2018; 2019), Liu et al. (2020), Lenormand et al. (2015), and Rong et al. (2023c;d) consider at most 8 large metropolitan areas, covering less than 30,000 km². Although Simini et al. (2021) consider national gridded coverage of the United Kingdom and Italy, the covered area is still limited to 686,983 km². Moreover, they do not provide a curated dataset for public use, which prevents further research on it. In contrast, our dataset covers 3,333 areas across the United States, a total area of 9,372,610 km², providing a much broader spatial scale; it is curated and publicly available at `https://github.com/tsinghua-fib-lab/CommutingODGen-Dataset`.

_Second_, with their limited spatial scale, existing datasets usually focus on **a single type of urban environment**, such as metropolitan areas, central districts, or whole cities, and thus cannot include a large number of areas with high diversity in size and structure. Models trained on such datasets may not generalize to areas with different characteristics, limiting their applicability to similar areas only. Our dataset covers metropolitan areas, towns, and rural areas across the United States, providing a more comprehensive basis for training and evaluating models. With this diversity of areas, models trained on our dataset can be more generalizable.

**OD Flow Modeling Approaches.** Existing works can be categorized into three types. The **first** is _physical models_, such as the gravity model (Zipf, 1946) and the radiation model (Simini et al., 2012), which treat commuting OD flows as physical phenomena and model them with simple mathematical equations. Physicists dive into the mechanisms of individual mobility decisions and try to explain the phenomenon of commuting OD flows. The **second** is _statistical learning models_, such as tree-based models (Robinson & Dilkina, 2018; Pourebrahim et al., 2018; 2019), SVR (Rodriguez-Rueda et al., 2021), and artificial neural networks (ANNs) (Sana et al., 2018; Lenormand et al., 2016; Simini et al., 2021), which predict the OD flows between pairs of regions in a data-driven manner. The **third** is _graph learning models_. Liu et al. (2020) and Cai et al. (2022) use GATs to aggregate neighbor information to better profile the regions and improve prediction accuracy. Yao et al. (2020) model the local spatial adjacency structure of regions with graph convolutional networks and impute missing OD flows in a semi-supervised manner. Rong et al.
(2023d;b) introduce adversarial and denoising diffusion generative methods with graph transformers, modeling commuting OD matrix generation as a graph generation problem. Many researchers from urban planning and transportation have shown interest in data-driven models because of their better performance (Barbosa et al., 2018; Luca et al., 2021; Rong et al., 2023a). However, the field lacks both a large-scale dataset covering a wide range of urban environments and a unified benchmark for comparing the performance of different models, which hinders the development of more powerful models. Our dataset and benchmark fill this gap and provide a common ground for evaluating models.

Figure 1: The pipeline for constructing our dataset.

3 LARGECOMMUTINGOD: A LARGE-SCALE COMMUTING OD FLOW DATASET

3.1 DATA COLLECTION AND CURATION

The pipeline for constructing our dataset is shown in Figure 1. The dataset contains four main components: 1) boundaries of areas and regions, 2) sociodemographics, 3) POI distributions, and 4) commuting OD flows.

First, the boundaries of areas and regions are downloaded from the U.S. Census Bureau; they include all counties, metropolitan areas, census tracts, and census block groups (CBGs). We set counties as areas with census tracts as their regions, and metropolitan areas as areas with CBGs as their regions. Counties are related to census tracts by Federal Information Processing Standards (FIPS) codes. CBGs are assigned to metropolitan areas by the spatial relationship between their boundaries, i.e., whether the CBG lies inside the metropolitan area. Then, the sociodemographics for each region are obtained from the American Community Survey (ACS) on the U.S. Census Bureau website. For each indicator, we run a regression analysis between the indicator and flow intensity to decide whether to include it in the urban attributes; information unrelated to human mobility is excluded. For each region, we use the OpenStreetMap API to obtain the number of POIs in different categories. The POIs are divided into 36 categories, including restaurants, schools, hospitals, etc. The commuting OD flows are provided by the 2018 Longitudinal Employer-Household Dynamics Origin-Destination Employment Statistics (LODES) dataset on the U.S. Census Bureau website. The data are organized as tables, each containing the commuting information of one state; each row represents the commuting flow between two specific census blocks. We aggregate these flows to the census tract level and construct the OD matrices (a sketch of this aggregation follows below).

3.2 DATA DESCRIPTION

We collect data for a total of 3,333 areas across the United States, with two kinds of spatial division in LargeCommutingOD: 1) 3,233 counties as areas, with the census tracts inside each county as regions, and 2) 100 metropolitan areas with populations above 1 million, with the CBGs inside each metropolitan area as regions. LargeCommutingOD includes the following information: 1) regional urban attributes, including sociodemographics and POIs, and 2) commuting OD flows, represented by OD matrices that aggregate commuting flows within areas. The counties are defined by the U.S. Census Bureau; each county is a local government unit in the United States, and counties cover broadly similar numbers of households and population.
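As a rough sketch of the block-to-tract aggregation described in §3.1 (the column names follow the public LODES OD-file convention of home/workplace census-block codes, but the toy rows and values here are invented):

```python
import pandas as pd

# One state's LODES OD table: each row is a home-block -> work-block flow.
# h_geocode/w_geocode are 15-digit census-block GEOIDs; S000 is the job count.
od = pd.DataFrame({
    "h_geocode": ["360610001001000", "360610001001001", "360610002001000"],
    "w_geocode": ["360610002001000", "360610002001001", "360610001001000"],
    "S000": [12, 7, 3],  # hypothetical counts
})

# A census-block GEOID starts with its 11-digit census-tract code.
od["h_tract"] = od["h_geocode"].str[:11]
od["w_tract"] = od["w_geocode"].str[:11]

# Aggregate block-level flows to the tract level and pivot into an OD matrix.
tract_od = od.groupby(["h_tract", "w_tract"], as_index=False)["S000"].sum()
od_matrix = tract_od.pivot(index="h_tract", columns="w_tract", values="S000").fillna(0)
print(od_matrix)
```

For metropolitan areas, the same pattern applies with the 12-digit CBG prefix instead of the 11-digit tract prefix.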
The metropolitan boundaries are obtained from the Topologically Integrated Geographic Encoding and Referencing (TIGER) dataset. The metropolitan areas exclude rural areas, which lack population and urban functionality.

Table 2: Summary of urban attributes used to profile a region.

| Category | Contents | #Features |
|---|---|---|
| Sociodemographics | Total population | 1 |
| | Population with different genders and ages | 56 |
| | Median age of people with different genders | 3 |
| | Median earnings | 1 |
| | Ratio of different classes of jobs | 5 |
| | Vehicle ownership | 4 |
| | The number of households of different types | 4 |
| | Population with different education levels | 21 |
| | Poverty with different genders | 2 |
| POIs | The number of POIs of different kinds | 34 |
| Total | | 131 |

**Regional Urban Attributes.** Each region is characterized by sociodemographics and urban functionality, derived from the American Community Survey (ACS) (U.S. Census Bureau, 2012) by the U.S. Census Bureau and from the distribution of POIs in OpenStreetMap (OpenStreetMap contributors, 2017), as shown in Table 2. Demographics capture the population structure of a region by age, gender, income, education, and other factors, encompassing 97 dimensions in total. POIs are divided into 36 different categories. The distances between regions are calculated as the planar Euclidean distance between their centroids.

**OD Matrices.** We construct commuting OD matrices for all areas using commuting data from the 2018 Longitudinal Employer-Household Dynamics Origin-Destination Employment Statistics (LODES) dataset. These matrices represent aggregated commuting flows within areas. Each entry in an OD matrix is the number of individuals residing in one region and working in another, effectively mapping the commuting patterns of workers across regions. The LODES dataset is widely used in existing works (Liu et al., 2020; Pourebrahim et al., 2019; 2018). In LargeCommutingOD, the commuting information is aggregated from corporations and other kinds of work units, which is more reliable and accurate than individually reported commuting data. The data collection process ensures the information is representative at a national scale, eliminating sampling errors. The raw data are provided at the census block level and then aggregated to the census tract level for county areas and to the CBG level for metropolitan areas. Note that the commuting OD flows within the 3,233 counties cannot capture mobility across county lines, while the flows within metropolitan areas can; LargeCommutingOD therefore includes both intra-county and inter-county flows.

3.3 DATA STATISTICS

We provide a statistical analysis of LargeCommutingOD to illustrate the diversity of the dataset, from two perspectives: area characteristics and mobility patterns. Figure 2 shows that the number of regions per area varies significantly, demonstrating the heterogeneity of the areas in LargeCommutingOD. Furthermore, the cases in Figure 3 reveal the diverse structures of the areas, including monocentric, polycentric, and evenly spread layouts. To analyze mobility patterns, we measure the average trip distances and the variance of regional mobility intensity (see the sketch below). Travel distances tend to be short, but long-distance trips still occur, making the mobility patterns complex. The variance of regional mobility intensity also spans a wide range, which indicates the heterogeneity of the mobility patterns.
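As one concrete example of these statistics, the flow-weighted average trip distance of an area can be computed from its OD matrix and region centroids roughly as follows (toy values; the planar Euclidean convention follows §3.2):

```python
import numpy as np

# Toy centroids (planar coordinates, e.g. after projecting lat/lon) and OD matrix.
centroids = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])        # N x 2
F = np.array([[0, 10, 2], [8, 0, 5], [1, 6, 0]], dtype=float)     # N x N flows

# Pairwise planar Euclidean distances between region centroids.
diff = centroids[:, None, :] - centroids[None, :, :]
D = np.linalg.norm(diff, axis=-1)  # D[i, j] = distance between regions i and j

# Flow-weighted average commuting distance over the area.
avg_trip_distance = (F * D).sum() / F.sum()
print(avg_trip_distance)
```

The variance of regional mobility intensity can be derived analogously, e.g., from the row sums (outflows) of the same matrix.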
Regarding commonalities among areas, we analyze the distributions of OD flows and outflows in areas of different scales, as shown in Figure 4. Heterogeneity exists between areas of different scales, yet commonalities exist as well: the scaling behaviors are the same across areas. This demonstrates that LargeCommutingOD is a comprehensive dataset covering a wide range of urban environments with diverse mobility patterns. To build further intuition about the dataset, we provide visualizations of the OD flows via heatmaps in Appendix A.1.

3.4 DISCUSSION

**Superiority.** The statistical analysis shows that LargeCommutingOD is large-scale and comprehensive, covering a wide range of areas of different scales and mobility patterns, i.e., diverse urban environments. For **learning**, the rich scenarios in LargeCommutingOD can support modeling research in capturing the distinctness and commonalities of the mobility patterns in different areas.

Idea Generation Category:
3Other
WeJEidTzff
# UNCOVERING OVERFITTING IN LARGE LANGUAGE MODEL EDITING

**Mengqi Zhang** [1] _[∗]_ **, Xiaotian Ye** [2] _[∗]_ **, Qiang Liu** [3] **, Shu Wu** [3] _[†]_ **, Pengjie Ren** [1] _[†]_ **, Zhumin Chen** [1]
1 Shandong University
2 School of Computer Science, Beijing University of Posts and Telecommunications
3 New Laboratory of Pattern Recognition (NLPR), State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences
_{_ mengqi.zhang, renpengjie, chenzhumin _}_ @sdu.edu.cn
yexiaotian@bupt.edu.cn, _{_ qiang.liu, shu.wu _}_ @nlpr.ia.ac.cn

ABSTRACT

Knowledge editing has been proposed as an effective method for updating and correcting the internal knowledge of Large Language Models (LLMs). However, existing editing methods often struggle with complex tasks, such as multi-hop reasoning. In this paper, we identify and investigate the phenomenon of **Editing Overfit**, where edited models assign disproportionately high probabilities to the edit target, hindering the generalization of new knowledge in complex scenarios. We attribute this issue to the current editing paradigm, which places excessive emphasis on the direct correspondence between the input prompt and the edit target for each edit sample. To further explore this issue, we introduce a new benchmark, EVOKE (EValuation of Editing Overfit in Knowledge Editing), along with fine-grained evaluation metrics. Through comprehensive experiments and analysis, we demonstrate that Editing Overfit is prevalent in current editing methods and that common overfitting mitigation strategies are ineffective in knowledge editing. To overcome this, inspired by LLMs' knowledge recall mechanisms, we propose a new plug-and-play strategy called Learn the Inference (LTI), which introduces a Multi-stage Inference Constraint module to guide edited models to recall new knowledge in a way that resembles how unedited LLMs leverage knowledge through in-context learning. Extensive experimental results across a wide range of tasks validate the effectiveness of LTI in mitigating Editing Overfit.

1 INTRODUCTION

Large Language Models (LLMs) have achieved remarkable success across various Natural Language Processing (NLP) tasks (Zhao et al., 2023), yet they often contain outdated or incorrect information, raising concerns about their reliability and factual accuracy. Knowledge Editing (Yao et al., 2023) has emerged as a promising solution to precisely update or correct a model's knowledge. Among the different editing strategies, parameter-modifying methods, which directly alter the model's internal parameters, have garnered significant attention from the research community. These include fine-tuning-based techniques such as FT-L (Zhu et al., 2020), meta-learning approaches like KE (De Cao et al., 2021) and MEND (Mitchell et al., 2021), and locate-then-edit techniques such as ROME (Meng et al., 2022a) and MEMIT (Meng et al., 2022b). Although existing methods have achieved promising results, their performance declines catastrophically when transferred to complex tasks involving reasoning (Yao et al., 2023).
For instance, in the representative multi-hop reasoning task, after an LLM is updated with _Steve Jobs_ as _the founder of Microsoft_, it can easily respond to straightforward questions like "_Who is the founder of Microsoft?_" with "_Steve Jobs_." However, it struggles to accurately answer more complex queries, such as "_Which college did the founder of Microsoft attend?_"

_∗_ Equal contribution. _†_ Corresponding authors.

Figure 1: Example of Editing Overfit.

To investigate the reasons behind the failure of edited LLMs on complex tasks, we first experimentally analyze the outputs of edited models on a multi-hop reasoning task (§3). The results reveal an abnormally high probability that the edited models output the edit target $o^*$ for multi-hop questions, even when such responses are entirely implausible as valid answers (§3.2). We refer to this phenomenon as **Editing Overfit: edited models tend to assign unusually high prediction probabilities to the edit target $o^*$ of an edit sample $(s, r, o, o^*)$, skewing response accuracy on complex questions whose correct answer is not $o^*$.** For instance, as shown in Figure 1, after editing "_Microsoft is founded by Bill Gates → Steve Jobs_," the model erroneously answers the question "_Which college did the founder of Microsoft attend?_" with "_Steve Jobs._"

We hypothesize that Editing Overfit is a key factor contributing to the suboptimal performance of edited LLMs on complex tasks, such as multi-hop editing. **This phenomenon likely stems from the fact that existing knowledge editing paradigms emphasize the direct correspondence between the input prompt $p(s, r)$ and the output $o^*$ for each edit sample $(s, r, o, o^*)$. Given the typically limited number of optimization samples, this focus on optimizing the $p(s, r) \to o^*$ relationship can lead to severe overfitting.** Specifically, as shown in Figure 1, all current editing methods for LLMs rely on a primary loss function that maximizes the likelihood of the new target $o^*$ given the input prompt $p(s, r)$; a sketch of this objective follows below. The main differences between these methods lie in the techniques used for parameter updates. For example, FT-based methods either directly optimize the parameters or use parameter-efficient fine-tuning (Hu et al., 2022; Ren et al., 2024); MEND employs a hypernetwork to make updates; and ROME and MEMIT apply low-rank updates to derive closed-form solutions for specific parameters. When the model is updated with new knowledge such as "_Microsoft is founded by Steve Jobs_," it risks overfitting by learning only the correspondence between "_Microsoft is founded by_" and "_Steve Jobs_." As a result, the edited model may output "_Steve Jobs_" whenever it encounters the terms "_Microsoft_" and "_is founded by_." This also explains the abnormally high prediction probabilities of edit targets in multi-hop reasoning tasks, as the edited model may simply recognize patterns in the prompt and output the corresponding edit target.

In this study, we investigate the Editing Overfit phenomenon in edited LLMs. To this end, we first construct a benchmark for EValuation of Editing **O**verfit in **K**nowledge **E**diting (EVOKE) (§4.1), which comprises six tasks across two categories.
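To make this paradigm concrete, the following is a minimal fine-tuning-style sketch of the $p(s, r) \to o^*$ objective shared by these methods (the model name is a stand-in, and this is the generic loss, not any specific method's exact update rule):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper's experiments edit GPT-J
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt, target = "Microsoft is founded by", " Steve Jobs"  # p(s, r) -> o*
prompt_ids = tok(prompt, return_tensors="pt").input_ids
target_ids = tok(target, return_tensors="pt").input_ids
input_ids = torch.cat([prompt_ids, target_ids], dim=1)

# Supervise only the target tokens: prompt positions are masked with -100,
# so the loss is exactly -log P_theta(o* | p(s, r)).
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for _ in range(10):  # a few gradient steps suffice to memorize the mapping
    loss = model(input_ids, labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

With so few optimization samples, driving this single likelihood down is exactly what invites the shortcut mappings ($s \to o^*$, $r \to o^*$) analyzed in the rest of the paper.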
The overfit tasks in EVOKE include various patterns prone to causing overfitting, allowing us to analyze and investigate overfitting phenomena in current editing methods. By applying existing editing methods to EVOKE, we conduct an in-depth analysis to identify which specific input patterns are prone to overfitting (§4.2). Furthermore, we evaluate the effectiveness of four existing overfitting mitigation strategies (§5), _Norm Constraints_, _Batch Editing_, _Multi-layer Editing_, and _Data Augmentation_, in addressing the Editing Overfit problem.

To further alleviate Editing Overfit, inspired by the knowledge mechanisms of LLMs, we propose a plug-and-play strategy named **L**earn **t**he **I**nference (LTI) (§6), which enables edited models to learn how to make inferences with new knowledge rather than simply establish input-output mappings. Specifically, LTI introduces a Multi-Stage Constraint module, which imposes constraints on crucial reasoning steps of LLMs during the editing process. This ensures that the edited model utilizes new knowledge in a way that closely resembles how an unedited model leverages new knowledge through in-context learning, helping to prevent the model from overfitting solely to the input-output mapping. Additionally, LTI can be combined with various knowledge editing methods and used in conjunction with other overfitting mitigation techniques.

Our contributions can be summarized as follows:

- We reveal and investigate the overfitting issue caused by the current editing paradigm, identifying it as a key factor behind the suboptimal performance of edited models; we term this the Editing Overfit problem.
- We construct EVOKE, a benchmark with detailed evaluation metrics, to enable fine-grained assessment and analysis of mainstream editing methods. Additionally, we explore the effectiveness of four general overfitting mitigation techniques in addressing the Editing Overfit problem.
- We propose a new plug-in strategy, Learn the Inference, designed to further mitigate overfitting. Extensive experiments demonstrate that integrating LTI with different editing methods effectively reduces the severity of Editing Overfit.

2 RELATED WORK

Knowledge editing (KE) updates LLM outputs to (i) accurately respond to new knowledge, (ii) preserve existing knowledge without catastrophic forgetting, and (iii) leverage updated knowledge in complex reasoning tasks. Each piece of knowledge is formulated as a triple $(s, r, o)$ (De Cao et al., 2021), consisting of a subject $s$, relation $r$, and object $o$. An edit sample is defined as $e = (s, r, o, o^*)$, representing a knowledge update from $(s, r, o)$ to $(s, r, o^*)$. Our study focuses on parameter-modifying methods, which are divided into three main categories (Yao et al., 2023):

**Fine-tuning-based methods** generally follow the supervised fine-tuning paradigm. For example, to edit a fact such as _"Microsoft is founded by Steve Jobs,"_ the model's weights are updated via gradient descent to increase the probability of the edit target, _Steve Jobs_. Some approaches aim to improve robustness by incorporating norm constraints (Zhu et al., 2020) or data augmentation (Gangadhar & Stratos, 2024; Wei et al., 2024). However, vanilla fine-tuning often affects unrelated knowledge, leading to catastrophic forgetting, which makes it unsuitable for direct application to knowledge editing.

**Meta-learning-based methods** employ a hypernetwork to adjust model parameters specifically for editing.
This hypernetwork is trained to convert fine-tuning gradients into updated weights, with the aim of predicting weights that closely resemble those obtained through fine-tuning with augmented data. KE (De Cao et al., 2021) pioneered this approach, which MEND (Mitchell et al., 2021) later extended to LLMs by predicting low-rank decompositions of parameter updates.

**Locate-then-edit methods** originate from research into the internal mechanisms of LLMs, advocating for identifying the specific weights responsible for storing knowledge before applying targeted updates. Geva et al. (2021; 2023) propose viewing MLP modules as key-value memories. Building on this foundation, the Knowledge Neuron theory (Dai et al., 2022) posits that these MLP key-value pairs encode factual knowledge. Meng et al. (2022a) introduce causal tracing to analyze LLMs' factual recall mechanisms, leading to the development of ROME (Meng et al., 2022a) and MEMIT (Meng et al., 2022b), which achieved state-of-the-art results on several traditional metrics.

In recent years, researchers have recognized the limitations of current editing methods on specific complex tasks such as multi-hop reasoning, leading to the development of task-specific approaches (Zhong et al., 2023; Zhang et al., 2024b;a). More detailed related work is provided in Appendix B. In contrast, our work explores the reasons behind the suboptimal performance of editing methods by constructing a benchmark, and proposes a more general strategy to enhance editing performance by addressing the issue of overfitting.

3 PRELIMINARY EXPERIMENTS

To investigate the causes of edited LLMs' poor performance on complex tasks, we begin by analyzing the outputs of edited models on a representative multi-hop reasoning dataset, COUNTERFACT PLUS (Yao et al., 2023), where each entry contains an edited piece of knowledge $e = (s, r, o, o^*)$ along with a multi-hop question $q = (s, r, r')$ that requires reasoning based on the edited sample.

3.1 METRIC DEFINITIONS

To perform a fine-grained analysis of the outputs of edited models, we define several metrics over responses to complex prompts, such as the multi-hop questions in the dataset. Specifically, for each edit sample $e = (s, r, o, o^*)$, when the edited LLM is presented with a prompt consisting of a complex question, it may produce one of the following outputs: the original answer to the complex question, the correct answer, or the edit target $o^*$. Accordingly, we define the following metrics:

- **Correct Answer Probability (CAP)**: The probability that the model generates the correct answer $\text{ans}$ for a given prompt, formalized as $P(\text{ans} \mid \text{prompt})$.
- **Original Answer Probability (OAP)**: The probability that the model outputs the original (pre-editing) answer $\text{ori}$ in response to the given prompt, defined as $P(\text{ori} \mid \text{prompt})$.
- **Direct Probability (DP)**: The likelihood that the model produces the edit target $o^*$, expressed as $P(o^* \mid \text{prompt})$.

To further evaluate the influence of both the edit target $o^*$ and the original answer $\text{ori}$ on the correct answer $\text{ans}$, we follow Meng et al. (2022a) and define two additional comprehensive metrics to gauge the model's overall editing effectiveness:

- **Editing Overfit Score (EOS)**: This metric evaluates the performance of the edited model on complex questions whose correct answer is not $o^*$. It serves as a primary indicator of the model's overfitting and overall performance.
  The score is calculated as the proportion of cases in which the model favors the correct answer $\text{ans}$ over the edit target $o^*$, formalized as $\mathbb{E}\left[\mathbb{I}\left[P(\text{ans} \mid \text{prompt}) > P(o^* \mid \text{prompt})\right]\right]$; lower values thus indicate more severe overfitting.
- **Answer Modify Score (AMS)**: This metric evaluates the negative interference of old knowledge with the correct answers. It is computed as the proportion of cases where the probability of the correct answer exceeds that of the original answer, defined as $\mathbb{E}\left[\mathbb{I}\left[P(\text{ans} \mid \text{prompt}) > P(\text{ori} \mid \text{prompt})\right]\right]$.

3.2 EDITING OVERFIT PHENOMENON

Subsequently, we apply the ROME and MEMIT methods to GPT-J and evaluate the performance of the edited models on COUNTERFACT PLUS using the aforementioned metrics, as shown in Figure 2. In multi-hop evaluations, the edit target $o^*$ of an edit sample $(s, r, o, o^*)$ is typically not a possible answer to the multi-hop prompt, so its output probability should be negligible. For instance, "_Steve Jobs_" would be an implausible response to "_Which college did the founder of Microsoft attend?_" The base model's DP score of 0.27% confirms that the unedited model is highly unlikely to output $o^*$ as a response. However, after editing, both models exhibit significantly higher average probabilities of $o^*$ (DP), with ROME even reaching 41.03%. Both models also show substantially lower Editing Overfit Score (EOS) values, indicating that for many evaluation samples the probability of generating the correct answer is lower than that of outputting $o^*$. This anomalous probability distribution substantially impacts model performance, as the inflated $o^*$ prediction probability diminishes the Correct Answer Probability (CAP) and obscures the model's actual output.

Figure 2: Performance of GPT-J edited with ROME and MEMIT on COUNTERFACT PLUS (metrics: OAP ↓, CAP ↑, DP ↓, AMS ↑, EOS ↑).

From these observations, we define the phenomenon of **Editing Overfit** as follows: **After an LLM has been edited with an edit sample $e = (s, r, o, o^*)$, the edited LLM exhibits a heightened likelihood of producing the edit target $o^*$ as the answer to questions that implicitly or explicitly contain $s$ or $r$, even when the correct answer is unrelated to $o^*$.**

4 ANALYSIS OF EDITING OVERFIT

To further investigate the severity of Editing Overfit in edited LLMs, we construct EVOKE, a new benchmark designed to analyze overfitting phenomena across various tasks. We then assess the performance of different editing methods on this benchmark and examine the effectiveness of several existing mitigation strategies in reducing Editing Overfit.

4.1 EVOKE BENCHMARK

EVOKE comprises Recall Tasks and Overfit Tasks, covering six tasks in total. The Recall Tasks assess the edited model's ability to recall newly edited knowledge, including **Efficacy** and **Paraphrase** evaluation. The Overfit Tasks pose complex challenges that are prone to inducing overfitting in editing methods, including **Multi-hop Reasoning**, **Prefix Distraction**, **Subject Specificity**, and **Relation Specificity**.
These tasks are specifically designed to evaluate the model's capability to utilize newly integrated knowledge in more challenging scenarios, with particular emphasis on examining the degree of Editing Overfit. Details of EVOKE construction can be found in Appendix C.

Taking the edit "_Microsoft is founded by Bill Gates → Steve Jobs_" as an example, we first introduce the recall tasks used to assess the editing success rate (Meng et al., 2022a; Yao et al., 2023):

- **Efficacy** directly validates whether the edited model can recall the newly edited knowledge $(s, r, o^*)$ under the editing prompt $p(s, r)$. In the context of the above example, the model would be asked: "_Who is the founder of Microsoft?_"
- **Paraphrase** examines the model's ability to recall the new knowledge $(s, r, o^*)$ given paraphrased forms of the editing prompt $p(s, r)$. For instance, it might ask: "_Who established Microsoft?_"

The design of the overfit tasks rests on two principles. First, the input questions explicitly or implicitly contain the subject $s$ or relation $r$, so as to induce potential overfitting responses from the model; second, the correct answers to these questions are entirely unrelated to $o^*$, making it easy to determine whether the edited model exhibits overfitting. Accordingly, the overfit tasks are constructed as follows:

- **Multi-hop Reasoning** evaluates the edited model's ability to integrate the newly edited knowledge with existing knowledge to correctly answer questions spanning multiple entities or relations, e.g., "_Which university did the founder of Microsoft attend?_" These questions typically contain implicit subject $s$ and relation $r$ information from the edit sample, but the answer is not the target $o^*$. They are well suited for evaluating whether the edited model has overfit to the $p(s, r) \to o^*$ pattern: a model that has overfit might incorrectly produce "_Steve Jobs_" as the answer.
- **Prefix Distraction** uses the new knowledge $(s, r, o^*)$ as a prefix for unrelated questions, evaluating whether the edited model can still provide the original correct answer. For example: "_Microsoft was founded by Steve Jobs. Who is the founder of Amazon?_" This also assesses whether the edited model has overfit to the $p(s, r) \to o^*$ pattern, providing a more explicit measure than multi-hop reasoning.
- **Subject Specificity** presents questions with the same subject $s$ as the edit sample but a different relation $r'$, e.g., "_When was Microsoft founded?_" These questions contain information about the subject $s$, but the correct answer is not the target $o^*$, making them ideal for evaluating whether the edited model has overfit to the $s \to o^*$ pattern.
- **Relation Specificity** includes questions with a different subject $s'$ but the same relation $r$ as the edit sample, such as: _"Who is the founder of Amazon?"_ These questions contain information about the relation $r$, but the answer is not the target $o^*$. They evaluate whether the model has overfit to the $r \to o^*$ pattern. This task also corresponds to the locality evaluation in COUNTERFACT (Meng et al., 2022a).

The recall tasks are evaluated using the AMS metric. For the multi-hop reasoning task, we employ all five metrics defined in Section 3.1 for a comprehensive analysis; a sketch of how these probabilities can be scored appears below.
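As an illustration of how the Section 3.1 probabilities can be scored in practice, here is a minimal sketch with a causal LM (the helper, the model, and the example answers are placeholders; the benchmark's actual implementation may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def answer_logprob(prompt: str, answer: str) -> float:
    """log P(answer | prompt), summed over the answer tokens."""
    p_ids = tok(prompt, return_tensors="pt").input_ids
    a_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([p_ids, a_ids], dim=1)
    logits = model(ids).logits.log_softmax(-1)
    # The token at position t is predicted by the logits at position t - 1.
    answer_positions = range(p_ids.shape[1], ids.shape[1])
    return sum(logits[0, t - 1, ids[0, t]].item() for t in answer_positions)

prompt = "Which college did the founder of Microsoft attend?"
cap = answer_logprob(prompt, " Reed College")  # hypothetical post-edit correct answer
dp = answer_logprob(prompt, " Steve Jobs")     # edit target o*
eos_indicator = float(cap > dp)  # averaged over samples, this yields EOS
```

OAP is computed the same way with the pre-edit answer, and AMS averages the indicator of CAP exceeding OAP.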
In the Prefix Distraction, Subject Specificity, and Relation Specificity tasks, the correct answer is identical to the original answer, making OAP equivalent to CAP; the EOS metric is used to evaluate performance on these tasks.

Idea Generation Category:
0Conceptual Integration
t8qcGXaepr
# TRANSFUSION: PREDICT THE NEXT TOKEN AND DIFFUSE IMAGES WITH ONE MULTI-MODAL MODEL

**Chunting Zhou** _[µ][∗]_ **Lili Yu** _[µ][∗]_ **Arun Babu** _[µ]_ **Kushal Tirumala** _[µ]_ **Michihiro Yasunaga** _[µ]_ **Leonid Shamis** _[µ]_ **Jacob Kahn** _[µ]_ **Xuezhe Ma** _[σ]_ **Luke Zettlemoyer** _[µ]_ **Omer Levy** _[µ]_
_µ_ Work done at Meta; _σ_ University of Southern California

ABSTRACT

We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. We pretrain multiple Transfusion models up to 7B parameters from scratch on a mixture of text and image data, establishing scaling laws with respect to a variety of uni- and cross-modal benchmarks. Our experiments show that Transfusion scales significantly better than quantizing images and training a language model over discrete image tokens. By introducing modality-specific encoding and decoding layers, we can further improve the performance of Transfusion models, and even compress each image to just 16 patches. We further demonstrate that scaling our Transfusion recipe to 7B parameters and 2T multi-modal tokens produces a model that can generate images and text on a par with similar-scale diffusion models and language models, reaping the benefits of both worlds.

1 INTRODUCTION

Multi-modal generative models need to be able to perceive, process, and produce both discrete elements (such as text or code) and continuous elements (e.g., image, audio, and video data). While language models trained on the next token prediction objective dominate discrete modalities (OpenAI et al., 2024; Dubey et al., 2024), diffusion models (Ho et al., 2020; Rombach et al., 2022a) and their generalizations (Lipman et al., 2022) are the state of the art for generating continuous modalities (Dai et al., 2023; Esser et al., 2024b; Bar-Tal et al., 2024). Many efforts have been made to combine these approaches, including extending a language model to use a diffusion model as a tool, either explicitly (Liu et al., 2023) or by grafting a pretrained diffusion model onto the language model (Dong et al., 2023; Koh et al., 2024). Alternatively, one can quantize the continuous modalities (Van Den Oord et al., 2017) and train a standard language model over discrete tokens (Ramesh et al., 2021; Yu et al., 2022; 2023), simplifying the model's architecture at the cost of losing information. In this work, we show it is possible to fully integrate both modalities, with no information loss, by training a single model to both predict discrete text tokens and diffuse continuous images.

We introduce **Transfusion**, a recipe for training a model that can seamlessly generate discrete and continuous modalities. We demonstrate Transfusion by pretraining a transformer model on 50% text and 50% image data using a different objective for each modality: next token prediction for text and diffusion for images. The model is exposed to both modalities and loss functions at each training step. Standard embedding layers convert text tokens to vectors, while patchification layers represent each image as a sequence of patch vectors. We apply causal attention for text tokens and bidirectional attention for image patches (sketched below). For inference, we introduce a decoding algorithm that combines the standard practices of text generation from language models and image generation from diffusion models.
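A minimal sketch of this hybrid attention pattern, causal overall but bidirectional within each image span (the span indices in the example are hypothetical):

```python
import torch

def transfusion_mask(seq_len: int, image_spans: list[tuple[int, int]]) -> torch.Tensor:
    """Boolean attention mask: True means position i may attend to position j.

    Starts from a standard causal mask, then allows full bidirectional
    attention among the patches of each individual image span [start, end).
    """
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    for start, end in image_spans:
        mask[start:end, start:end] = True
    return mask

# Example: a 10-element sequence where positions 3..6 are patches of one image.
print(transfusion_mask(10, [(3, 7)]).int())
```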
Figure 1 illustrates Transfusion.

_∗_ Equal contribution.

Figure 1: A high-level illustration of Transfusion. A single transformer perceives, processes, and produces data of every modality. Discrete (text) tokens are processed autoregressively and trained on the next token prediction objective. Continuous (image) vectors are processed together in parallel and trained on the diffusion objective. Marker BOI and EOI tokens separate the modalities.

In a controlled comparison with Chameleon's discretization approach (Chameleon Team, 2024), we show that Transfusion models scale better in every combination of modalities. In text-to-image generation, we find that Transfusion exceeds the Chameleon approach at less than a third of the compute, as measured by both FID and CLIP scores. When controlling for FLOPs, Transfusion achieves approximately 2× lower FID scores than Chameleon models. We observe a similar trend in image-to-text generation, where Transfusion matches Chameleon at 21.8% of the FLOPs. Surprisingly, Transfusion is also more efficient at learning text-to-text prediction, achieving perplexity parity on text tasks at around 50% to 60% of Chameleon's FLOPs.

Ablation experiments reveal critical components and potential improvements for Transfusion. We observe that intra-image bidirectional attention is important, and that replacing it with causal attention hurts text-to-image generation. We also find that adding U-Net down and up blocks to encode and decode images enables Transfusion to compress larger image patches with relatively small loss in performance, potentially decreasing serving costs by up to 64×. Finally, we demonstrate that Transfusion can generate images at a quality similar to other diffusion models. We train from scratch a 7B transformer enhanced with U-Net down/up layers (0.27B parameters) over 2T tokens: 1T text tokens, and approximately 5 epochs of 692M images and their captions, amounting to another 1T patches/tokens. Figure 7 shows some generated images sampled from the model. On the GenEval (Ghosh et al., 2023) benchmark, our model outperforms other popular models such as DALL-E 2 and SDXL; unlike those image generation models, it can generate text, reaching the same level of performance as Llama 1 on text benchmarks. Our experiments thus show that Transfusion is a promising approach for training truly multi-modal models.

2 BACKGROUND

Transfusion is a single model trained with two objectives: language modeling and diffusion. Each of these objectives represents the state of the art in discrete and continuous data modeling, respectively. This section briefly defines these objectives, along with background on latent image representations.

2.1 LANGUAGE MODELING

Given a sequence of discrete tokens $y = y_1, \ldots, y_n$ from a closed vocabulary $V$, a language model predicts the probability of the sequence $P(y)$. Standard language models decompose $P(y)$ into a product of conditional probabilities $\prod_{i=1}^{n} P_\theta(y_i \mid y_{<i})$. This creates an autoregressive classification task, where the probability distribution of each token $y_i$ is predicted conditioned on the prefix $y_{<i}$ using a single distribution $P_\theta$ parameterized by $\theta$.
The model can be optimized by minimizing the cross-entropy between $P_\theta$ and the empirical distribution of the data, yielding the standard next-token prediction objective, colloquially referred to as _LM loss_:

$$L_{\text{LM}} = \mathbb{E}_y \left[ -\frac{1}{n} \sum_{i=1}^{n} \log P_\theta(y_i \mid y_{<i}) \right] \quad (1)$$

Once trained, language models can also be used to generate text by sampling token by token from the model distribution $P_\theta$, typically using temperature and top-p truncation.

2.2 DIFFUSION

Denoising diffusion probabilistic models (a.k.a. _DDPM_ or _diffusion models_) operate on the principle of learning to reverse a gradual noise-addition process (Ho et al., 2020). Unlike language models that typically work with discrete tokens ($y$), diffusion models operate over continuous vectors ($\mathbf{x}$), making them particularly suited to tasks involving continuous data like images. The diffusion framework involves two processes: a forward process that describes how the original data is turned into noise, and a reverse process of denoising that the model learns to perform.

Figure 2: Generated images from a 7B Transfusion trained on 2T multi-modal tokens. Captions are: (a) A bread, an apple, and a knife on a table. (b) A corgi. (c) Three spheres made of glass falling into ocean. Water is splashing. Sun is setting. (d) A wall in a royal castle. There are two paintings on the wall. The one on the left a detailed oil painting of the royal raccoon king. The one on the right a detailed oil painting of the royal raccoon queen. (e) A kangaroo holding a beer, wearing ski goggles and passionately singing silly songs. (f) "Transfusion" is written on the blackboard. (g) an egg and a bird made of wheat bread. (h) A cloud in the shape of two bunnies playing with a ball. The ball is made of clouds too.

**Forward Process** Mathematically, the forward process defines how the noised data (which serves as the model input) is created. Given a data point $\mathbf{x}_0$, Ho et al. (2020) define a Markov chain that gradually adds Gaussian noise over $T$ steps, creating a sequence of increasingly noisy versions $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T$. Each step of this process is defined by $q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{1 - \beta_t}\, \mathbf{x}_{t-1}, \beta_t \mathbf{I})$, where $\beta_t$ increases over time according to a predefined noise schedule (see below). This process can be reparameterized so that $\mathbf{x}_t$ is sampled directly from $\mathbf{x}_0$ using a single draw of Gaussian noise $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$:

$$\mathbf{x}_t = \sqrt{\bar{\alpha}_t}\, \mathbf{x}_0 + \sqrt{1 - \bar{\alpha}_t}\, \boldsymbol{\epsilon} \quad (2)$$

Here, $\bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s)$, providing a useful abstraction over the original Markov chain. In fact, both the training objective and the noise scheduler are eventually expressed (and implemented) in these terms.

**Reverse Process** The diffusion model is trained to perform the reverse process $p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)$, learning to denoise the data step by step. There are several ways to do so; in this work, we follow the approach of Ho et al. (2020) and model the Gaussian noise $\boldsymbol{\epsilon}$ in Equation 2 as a proxy for the cumulative noise at step $t$. Specifically, a model $\epsilon_\theta(\cdot)$ with parameters $\theta$ is trained to estimate the noise $\boldsymbol{\epsilon}$ given the noised data $\mathbf{x}_t$ and the timestep $t$.
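A minimal sketch of Equation 2's reparameterized forward process, here paired with the cosine schedule introduced in the Noise Schedule paragraph below (the schedule's published adjustments are omitted, and the tensor shapes are illustrative):

```python
import torch

def alpha_bar(t: torch.Tensor, T: int) -> torch.Tensor:
    # Cosine schedule (see §2.2): sqrt(alpha_bar_t) ≈ cos(t / T * pi / 2),
    # so alpha_bar_t = cos(...)^2; Nichol & Dhariwal's adjustments are omitted.
    return torch.cos(t / T * torch.pi / 2) ** 2

def noise_image(x0: torch.Tensor, t: torch.Tensor, T: int = 1000):
    """Sample x_t directly from x_0 via Equation 2."""
    eps = torch.randn_like(x0)
    ab = alpha_bar(t.float(), T).view(-1, 1, 1, 1)  # broadcast over C, H, W
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return xt, eps  # eps is the regression target for the noise predictor

x0 = torch.randn(4, 8, 32, 32)    # a batch of latent images
t = torch.randint(0, 1000, (4,))  # one timestep per example
xt, eps = noise_image(x0, t)
```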
In practice, the model often conditions on additional contextual information $c$, such as a caption when generating an image. The parameters of the noise prediction model are optimized by minimizing the mean squared error loss:

$$L_{\text{DDPM}} = \mathbb{E}_{\mathbf{x}_0, t, \boldsymbol{\epsilon}} \left[ \left\| \boldsymbol{\epsilon} - \epsilon_\theta(\mathbf{x}_t, t, c) \right\|^2 \right] \quad (3)$$

**Noise Schedule** When creating a noised example $\mathbf{x}_t$ (Equation 2), $\bar{\alpha}_t$ determines the variance of the noise at timestep $t$. In this work, we adopt the commonly used cosine scheduler (Nichol & Dhariwal, 2021), which largely follows $\sqrt{\bar{\alpha}_t} \approx \cos\left(\frac{t}{T} \cdot \frac{\pi}{2}\right)$ with some adjustments.

Figure 3: We convert images to and from latent representations using a pretrained VAE, and then into patch representations with either a simple linear layer or U-Net down blocks.

Figure 4: Expanding on the causal mask, Transfusion allows patches of the same image to condition on each other.

**Inference** Decoding is done iteratively, peeling away some of the noise at each step. Starting with pure Gaussian noise $\mathbf{x}_T$, the model $\epsilon_\theta(\mathbf{x}_t, t, c)$ predicts the noise accumulated at timestep $t$. The predicted noise is then scaled according to the noise schedule, and the proportional amount of predicted noise is removed from $\mathbf{x}_t$ to produce $\mathbf{x}_{t-1}$. In practice, inference is done over fewer timesteps than training. Classifier-free guidance (CFG) (Ho & Salimans, 2022) is often used to improve generation by contrasting the prediction of the model conditioned on the context $c$ with the unconditioned prediction, at the cost of doubling the computation.

2.3 LATENT IMAGE REPRESENTATION

Early diffusion models worked directly in pixel space (Ho et al., 2020), but this proved computationally expensive. Variational autoencoders (VAEs) (Kingma & Welling, 2013) can save compute by encoding images into a lower-dimensional latent space. Implemented as deep CNNs, modern VAEs are trained on a combination of reconstruction and regularization losses (Esser et al., 2021), allowing downstream models like latent diffusion models (LDMs) (Rombach et al., 2022a) to operate efficiently on compact image patch embeddings, e.g., representing every 8×8 pixel patch as an 8-dimensional vector. For autoregressive language modeling approaches (Ramesh et al., 2021; Yu et al., 2022), images must be discretized. Discrete autoencoders, such as vector-quantized VAEs (VQ-VAE) (Van Den Oord et al., 2017), achieve this by introducing a quantization layer (and related regularization losses) that maps continuous latent embeddings to discrete tokens.

3 TRANSFUSION

Transfusion is a method for training a single unified model to understand and generate both discrete and continuous modalities. Our main innovation is demonstrating that we can use separate losses for different modalities (language modeling for text, diffusion for images) over shared data and parameters. Figure 1 illustrates Transfusion.

**Data Representation** We experiment with data spanning two modalities: discrete text and continuous images. Each text string is tokenized into a sequence of discrete tokens from a fixed vocabulary, where each token is represented as an integer.
Each image is encoded as latent patches using a VAE (see §2.3), where each patch is represented as a continuous vector; the patches are sequenced left-to-right, top-to-bottom to create a sequence of patch vectors for each image. For mixed-modal examples, we surround each image sequence with special _beginning of image_ (BOI) and _end of image_ (EOI) tokens before inserting it into the text sequence; thus, we arrive at a single sequence potentially containing both discrete elements (integers representing text tokens) and continuous elements (vectors representing image patches).

**Model Architecture** The vast majority of the model's parameters belong to a single transformer, which processes every sequence regardless of modality. The transformer takes a sequence of high-dimensional vectors in $\mathbb{R}^d$ as input and produces similar vectors as output. To convert our data into this space, we use lightweight modality-specific components with unshared parameters. For text, these are the embedding matrices, converting each input integer to vector space and each output vector into a discrete distribution over the vocabulary. For images, we experiment with two alternatives for compressing local windows of $k \times k$ patch vectors into a single transformer vector (and vice versa): (1) a simple linear layer, and (2) up and down blocks of a U-Net (Nichol & Dhariwal, 2021; Saharia et al., 2022). Figure 3 illustrates the overall architecture.

**Transfusion Attention** Language models typically use causal masking to efficiently compute the loss and gradients over an entire sequence in a single forward-backward pass without leaking information from future tokens. While text is naturally sequential, images are not, and are usually modeled with unrestricted (bidirectional) attention. Transfusion combines both attention patterns by applying causal attention to every element in the sequence, and bidirectional attention within the elements of each individual image. This allows every image patch to attend to every other patch within the same image, but only to text or patches of other images that appeared previously in the sequence. We find that enabling intra-image attention significantly boosts model performance (see §4.3). Figure 4 shows an example Transfusion attention mask.

**Training Objective** To train our model, we apply the language modeling objective $L_{\text{LM}}$ to predictions of text tokens and the diffusion objective $L_{\text{DDPM}}$ to predictions of image patches. LM loss is computed per token, while diffusion loss is computed per image, which may span multiple elements (image patches) in the sequence. Specifically, we add noise $\boldsymbol{\epsilon}$ to each input latent image $\mathbf{x}_0$ according to the diffusion process to produce $\mathbf{x}_t$ before patchification, and then compute the image-level diffusion loss. [1] We combine the two losses by simply adding them with a balancing coefficient $\lambda$:

$$L_{\text{Transfusion}} = L_{\text{LM}} + \lambda \cdot L_{\text{DDPM}} \quad (4)$$

This formulation is a specific instantiation of a broader idea: combining a discrete distribution loss with a continuous distribution loss to optimize the same model. We leave further exploration of this space, such as replacing diffusion with flow matching (Lipman et al., 2022), to future work.

**Inference** Reflecting the training objective, our decoding algorithm switches between two modes, LM and diffusion, as sketched below. In _LM mode_, we follow the standard practice of sampling token by token from the predicted distribution.
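A schematic of this two-mode decoding loop, with toy stand-ins for the model calls so the control flow runs end to end (the helper names and token ids are ours, not the released implementation; the prose below spells out the diffusion-mode details):

```python
import random

def transfusion_decode(lm_sample, denoise, boi_id, eoi_id, n_patches, T, max_len):
    """Schematic mixed-modal decoding: token-by-token LM sampling that hands
    off to iterative diffusion denoising whenever a BOI token is sampled."""
    seq = []
    while len(seq) < max_len:
        tok = lm_sample(seq)
        seq.append(tok)
        if tok == boi_id:                                         # enter diffusion mode
            patches = [random.gauss(0, 1) for _ in range(n_patches)]  # pure noise x_T
            seq.extend(patches)
            for t in range(T, 0, -1):
                # x_{t-1} overwrites x_t in the sequence: the model conditions
                # only on the latest denoising step, never on earlier ones.
                patches = denoise(seq, patches, t)
                seq[-n_patches:] = patches
            seq.append(eoi_id)                                    # return to LM mode
    return seq

# Toy stand-ins so the control flow can be exercised without a real model.
demo = transfusion_decode(
    lm_sample=lambda seq: random.choice([1, 2, 3, 100]),  # 100 = BOI
    denoise=lambda seq, p, t: [0.9 * x for x in p],       # fake denoiser
    boi_id=100, eoi_id=101, n_patches=4, T=5, max_len=20,
)
```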
When we sample a BOI token, the decoding algorithm switches to _diffusion mode_, where we follow the standard procedure of decoding from diffusion models. Specifically, we append pure noise $\mathbf{x}_T$ in the form of $n$ image patches to the input sequence (depending on the desired image size), and denoise over $T$ steps. At each step $t$, we take the noise prediction and use it to produce $\mathbf{x}_{t-1}$, which then overwrites $\mathbf{x}_t$ in the sequence; i.e., the model always conditions on the last timestep of the noised image and cannot attend to previous timesteps. Once the diffusion process has ended, we append an EOI token to the predicted image and switch back to LM mode. This algorithm enables the generation of any mixture of text and image modalities.

4 EXPERIMENTS

We demonstrate in a series of controlled experiments that Transfusion is a viable, scalable method for training a unified multi-modal model. The setup of our experiments is detailed in Appendix B.1.

4.1 SETUP

**Evaluation** We evaluate model performance on a collection of standard uni-modal and cross-modal benchmarks (Table 7 in the Appendix). For text-to-text, we measure perplexity on 20M held-out tokens from Wikipedia and the C4 corpus (Raffel et al., 2019), as well as accuracy on the pretraining evaluation suite of Llama 2 (Touvron et al., 2023b). For text-to-image, we use the MS-COCO benchmark (Lin et al., 2014), where we generate images for 30k randomly selected prompts from the validation set and measure their photo-realism using zero-shot Fréchet Inception Distance (FID).

[1] Ergo, downstream tokens condition on noisy images during training. See §B.2 for further discussion.

Idea Generation Category:
0Conceptual Integration
SI2hI0frk6