# LEARNING VECTOR FIELDS OF DIFFERENTIAL EQUATIONS ON MANIFOLDS WITH GEOMETRICALLY CONSTRAINED OPERATOR-VALUED KERNELS

**Daning Huang**, Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA 16802, USA. daning@psu.edu
**Hanyang He**, Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA. hfh5310@psu.edu
**John Harlim**, Department of Mathematics, Department of Meteorology and Atmospheric Science, Institute for Computational and Data Sciences, The Pennsylvania State University, University Park, PA 16802, USA. jharlim@psu.edu
**Yan Li**, Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA. yql5925@psu.edu

## ABSTRACT

We address the problem of learning ordinary differential equations (ODEs) on manifolds. Existing machine learning methods, particularly those using neural networks, often struggle with high computational demands. To overcome this issue, we introduce a geometrically constrained operator-valued kernel that allows us to represent vector fields on tangent bundles of smooth manifolds. The construction of the kernel imposes geometric constraints that are estimated from the data and ensures computational feasibility for learning high-dimensional systems of ODEs. Once the vector fields are estimated, e.g., by kernel ridge regression, we need an ODE solver that guarantees the solution stays on (or close to) the manifold. To this end, we propose a geometry-preserving ODE solver that approximates the exponential maps corresponding to the ODE solutions. We deduce a theoretical error bound for the proposed solver that guarantees the approximate solutions lie on the manifold in the limit of large data. We verify the effectiveness of the proposed approach on high-dimensional dynamical systems, including the cavity flow problem, the beating and travelling waves in Kuramoto-Sivashinsky equations, and reaction-diffusion dynamics.

## 1 INTRODUCTION

In this paper, we consider the problem of learning ODEs whose solutions lie on a manifold, which arises in a wide range of applications, from mechanical multibody systems to electrical circuit simulation and power systems (see the references in Ascher & Petzold (1998); Kunkel (2006)), where the system of ODEs on manifolds is formulated based on Differential-Algebraic Equations (Rheinboldt, 1984). One of the main challenges in this problem is that the underlying manifold (geometric) constraints are not explicitly known and need to be uncovered from the data. A popular solution to this problem is to employ a nonlinear dimensionality reduction approach, such as autoencoders, to represent the geometrical constraint of the dynamics, i.e., the manifold. A typical strategy is to learn a low-dimensional latent space to represent the original data and learn the dynamics in the latent space; the dynamics are represented using, e.g., a neural-network (NN) based discrete-time mapping (Linot & Graham, 2020), recurrent neural networks (Maulik et al., 2021; Vlachas et al., 2022), sequence-to-sequence mapping (Wu et al., 2024), Latent Dynamics Networks (Regazzoni et al., 2024), and SINDy (Fukami et al., 2021; Lin et al., 2024).
The main pitfall of this class of methods is that the latent space only provides a global parametrization of the manifold, whose dimension is typically larger than the intrinsic dimension of the manifold, and the dynamics are not guaranteed to learn the exponential maps (for discrete-time models) or vector fields (for continuous-time models) of the manifold. As a result, the predicted trajectories may deviate from the manifold, limiting the long-term prediction accuracy, as we shall see in several numerical examples with NN baselines. Furthermore, the computational cost to train such NN-based nonlinear estimators is known to be expensive. In our numerical comparison with the proposed linear method, we found that while their testing times are comparable, the training time for NN models is over 800 times slower on a simple example with two ambient dimensions.

**Contribution.** This motivates us to develop a linear estimator of the vector fields on the tangent bundles of smooth manifolds. Motivated by the SINDy algorithm (Brunton et al., 2016), we construct a geometrically constrained dictionary to represent the unknown vector fields, where these constraints are approximated from the available point cloud data induced by the observed time series of the dynamical systems. Since such a dictionary is subject to the curse of dimensionality, we employ the standard "kernel trick" to mitigate this issue. Unlike previous works in the non-manifold setting with scalar-valued kernels (Baddoo et al., 2022; Yang et al., 2024) and kriging/Gaussian processes (Glaz et al., 2010), the geometrical constraints give rise to an operator-valued kernel that leverages the intrinsic dimension of the manifold to enable practical implementation. To numerically integrate the system of ODEs with the estimated vector fields, we devise an ODE solver that guarantees the solutions to be on the manifolds in the limit of large data. We demonstrate the effectiveness of this approach numerically on several high-dimensional test problems, including the cavity flow problem, the beating and travelling waves in Kuramoto-Sivashinsky equations, and reaction-diffusion dynamics.

**Paper organization.** The remainder of this paper is organized as follows: In Section 2, we discuss the geometrically constrained dictionary, extending the SINDy approach and generalizing it to an operator-valued kernel to mitigate high-dimensional problems. In Section 3, we introduce a geometry-preserving time integration scheme, provide an illuminating example that motivates this integrator, and discuss the convergence properties. In Section 4, we discuss closely related approaches that will be used as baselines to quantify the performance of the proposed approaches documented in Section 5. In Section 6, we give a brief summary. We supplement the paper with five appendices that report the detailed technical numerical tools needed in the algorithm, the computational complexity, the theoretical proofs, and additional numerical results.

## 2 GEOMETRICALLY CONSTRAINED DICTIONARY

Consider dynamical systems governed by a system of ODEs,
$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^n, \tag{1}$$
where the vector field $\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^n$. The Sparse Identification of Nonlinear Dynamics (SINDy) approach (Wang et al., 2011; Brunton et al., 2016) is to approximate the components of $\mathbf{f} = (f^1, \ldots, f^n)$ by a sparse regression on a set of appropriate basis functions. Typical choices of basis functions are polynomials and/or trigonometric functions.
An example of a polynomial dictionary is $\theta(\mathbf{x}) = \begin{bmatrix} 1 & \mathbf{x}^\top & (\mathbf{x}^2)^\top & \ldots \end{bmatrix} \in \mathbb{R}^m$, where $\mathbf{x}^j := \big[\, x_1^{j_1} x_2^{j_2} \cdots x_n^{j_n} : \forall\, j = j_1 + j_2 + \ldots + j_n \,\big]$. Given a set of labeled training data $\{\mathbf{x}_i, \dot{\mathbf{x}}_i\}_{i=1,\ldots,N}$, where the subscript index denotes temporal information, $\mathbf{x}_i := \mathbf{x}(t_i)$ and $\dot{\mathbf{x}}_i := \dot{\mathbf{x}}(t_i)$, the SINDy approach is to approximate $f^k(\mathbf{x}) \approx f_\epsilon^k(\mathbf{x}; \hat{\xi}_k) := \theta(\mathbf{x})\hat{\xi}_k$ with coefficients $\hat{\xi}_k \in \mathbb{R}^m$ obtained from solving the following sparse regression problem,
$$\hat{\Xi} = \operatorname*{argmin}_{\Xi} \sum_{i=1}^N \big\|\dot{\mathbf{x}}_i - \mathbf{f}_\epsilon(\mathbf{x}_i; \Xi)\big\|^2 + \lambda \|\Xi\|_1, \tag{2}$$
where $\lambda > 0$ is a sparsity parameter, $\mathbf{f}_\epsilon = (f_\epsilon^1, \ldots, f_\epsilon^n)$, and $\hat{\Xi} = ((\xi^1)^\top, \ldots, (\xi^n)^\top)^\top \in \mathbb{R}^{nm}$; by default $\|\cdot\|$ denotes the $\ell_2$ norm. There are two key issues with this approach as it stands. First, the method is sensitive to the choice of dictionary. If the space spanned by the dictionary does not encompass the underlying function, the estimated vector field will not be accurate when evaluated on new sample data from the same distribution. Second, this method is computationally impractical as $n$ increases, since the size of the dictionary grows (exponentially) as a function of $n$. In particular, if the dictionary consists of monomials of degree up to $p$, then the size of the dictionary is $m \propto p^{n-1}$.

Let us now focus on the first issue for a class of dynamics where the solutions lie on a $d$-dimensional Riemannian sub-manifold $\mathcal{M} \subset \mathbb{R}^n$. In this context, the vector field $\mathbf{f} \in \mathfrak{X}(\mathcal{M})$ is a map $\mathbf{f}: \mathcal{M} \to T\mathcal{M}$ that identifies the state $\mathbf{x} \in \mathcal{M}$ with a vector in the tangent space $\mathbf{f}(\mathbf{x}) \in T_{\mathbf{x}}\mathcal{M}$. Denote the bases of the tangent space $T_{\mathbf{x}_i}\mathcal{M} \cong \mathbb{R}^d$ and the normal space as the columns of $\mathbf{T}_i \in \mathbb{R}^{n \times d}$ and $\mathbf{N}_i \in \mathbb{R}^{n \times (n-d)}$, respectively. We note that these basis vectors can be identified from point cloud data using the local SVD technique (Donoho & Grimes, 2003; Zhang & Zha, 2004) or higher-order methods (Jiang et al., 2024). See Appendix A for the details. For the remainder of this paper, we denote $\hat{\mathbf{T}}_i$ and $\hat{\mathbf{N}}_i$ to be the point cloud approximations to $\mathbf{T}_i$ and $\mathbf{N}_i$, respectively. Let the matrix $P(\mathbf{x}): \mathbb{R}^n \to T_{\mathbf{x}}\mathcal{M} \subset \mathbb{R}^n$ be an orthogonal projection to the local tangent space at $\mathbf{x}$. One can show that $P(\mathbf{x}_i) = \mathbf{T}_i \mathbf{T}_i^\top$, where the columns of $\mathbf{T}_i \in \mathbb{R}^{n \times d}$ form a set of orthonormal vectors that span $T_{\mathbf{x}_i}\mathcal{M}$. With this background, since $\mathbf{f}(\mathbf{x}) \in T_{\mathbf{x}}\mathcal{M}$, it is clear that $P(\mathbf{x})\mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{x})$ under the $n$-dimensional Euclidean inner product. Practically, when the manifold is unknown, one can approximate $P(\mathbf{x}_i)$ by $\hat{P}(\mathbf{x}_i) = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top$.
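To make the tangent estimation concrete, the following is a minimal numpy sketch of the local-SVD construction of $\hat{\mathbf{T}}_i$ described above; the paper's actual procedure (and higher-order variants) is detailed in Appendix A, so the function name and the fixed neighborhood size `k` here are illustrative assumptions.

```python
import numpy as np

def local_svd_tangent_bases(X, d, k=20):
    """Estimate orthonormal tangent bases T_i at every point of a point cloud
    X (N x n) via SVD of the centered k-nearest-neighbor patch around x_i."""
    N, n = X.shape
    T = np.zeros((N, n, d))
    for i in range(N):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[:k]              # k nearest neighbors (incl. x_i)
        Z = X[nbrs] - X[nbrs].mean(axis=0)        # center the local patch
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        T[i] = Vt[:d].T                           # d leading right singular vectors
    return T                                      # P_hat(x_i) = T[i] @ T[i].T
```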
Based on this information, we propose the following modification of the SINDy dictionary for modeling the vector field,
$$\mathbf{f}(\mathbf{x}) \approx \mathbf{f}_\epsilon(\mathbf{x}; \hat{\Xi}) = \hat{P}(\mathbf{x})\Theta(\mathbf{x})\hat{\Xi}, \tag{3}$$
where $\Theta(\mathbf{x}) \in \mathbb{R}^{n \times nm}$ is a block diagonal matrix with $\theta(\mathbf{x})$ as the diagonal block component, and the coefficients $\hat{\Xi} \in \mathbb{R}^{nm}$ are obtained by fitting the model to the observed vector field, $\dot{\mathbf{x}}_i$. In practice, when the available data are only the time series $X = \{\mathbf{x}_i\}_{i=1}^N$, one needs to approximate the derivatives. We consider approximating $\dot{\mathbf{x}}_i$ with $\mathbf{y}_i = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top (\mathbf{x}_{i+1} - \mathbf{x}_i)/\Delta t$.

As we mentioned before, another challenge with this dictionary is that it is subject to the curse of dimensionality: the number of candidate functions grows exponentially and eventually becomes computationally intractable as the dimension of the states increases. To mitigate this problem, we propose an operator-valued kernel deduced from the dictionary in (3) that allows the vector field to lie on the tangent bundle in the limit of large data, with a compact model having a rank equal to the intrinsic dimension of the manifold. In the remainder of this section, we first use the kernel trick to motivate a Geometrically constrained Multivariate Kernel Ridge Regression (GMKRR) model in the ambient space. Then we formalize the GMKRR model rigorously using the Reproducing Kernel Hilbert Space (RKHS) with a family of operator-valued kernels. Lastly, we manipulate the GMKRR model into the intrinsic space to enable practical computational implementation.

### 2.1 KERNELIZATION OF THE GEOMETRICALLY CONSTRAINED DICTIONARY

While kernel regression with $\ell_q$ regularization ($0 < q \le 1$) has been studied extensively (see Shi et al., 2019, and the references therein), it is computationally much simpler to employ $\ell_2$ regularization, which will be the focus of this paper. Specifically, we focus on the following problem modifying (2) as the primal form,
$$\hat{\Xi} = \operatorname*{argmin}_{\Xi} \big\|\mathbf{y} - \mathbf{\Psi}\Xi\big\|^2 + \lambda\|\Xi\|^2, \tag{4}$$
where $\mathbf{y} = [\mathbf{y}_1^\top, \mathbf{y}_2^\top, \cdots, \mathbf{y}_N^\top]^\top \in \mathbb{R}^{nN}$ with $\mathbf{y}_i = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top (\mathbf{x}_{i+1} - \mathbf{x}_i)/\Delta t$, and $\mathbf{\Psi} = [\boldsymbol{\psi}_1^\top, \boldsymbol{\psi}_2^\top, \cdots, \boldsymbol{\psi}_N^\top]^\top \in \mathbb{R}^{nN \times nm}$ with $\boldsymbol{\psi}_i = \boldsymbol{\psi}(\mathbf{x}_i) = \hat{P}(\mathbf{x}_i)\Theta(\mathbf{x}_i) \in \mathbb{R}^{n \times nm}$. Next, introduce the dual variable $\boldsymbol{\alpha} \in \mathbb{R}^{nN}$ so that $\Xi = \mathbf{\Psi}^\top\boldsymbol{\alpha}$, and the dual form of (4) is
$$\hat{\boldsymbol{\alpha}} = \operatorname*{argmin}_{\boldsymbol{\alpha}} \big\|\mathbf{y} - \mathbf{\Psi}\mathbf{\Psi}^\top\boldsymbol{\alpha}\big\|^2 + \lambda\big\|\mathbf{\Psi}^\top\boldsymbol{\alpha}\big\|^2 \equiv \operatorname*{argmin}_{\boldsymbol{\alpha}} \big\|\mathbf{y} - \mathbf{K}\boldsymbol{\alpha}\big\|^2 + \lambda\,\boldsymbol{\alpha}^\top\mathbf{K}\boldsymbol{\alpha}, \tag{5}$$
where the Gram matrix $\mathbf{K} = \mathbf{\Psi}\mathbf{\Psi}^\top \in \mathbb{R}^{nN \times nN}$ and its $(i,j)$th block is $\mathbf{K}_{ij} = \boldsymbol{\psi}(\mathbf{x}_i)\boldsymbol{\psi}(\mathbf{x}_j)^\top \equiv k(\mathbf{x}_i, \mathbf{x}_j) \in \mathbb{R}^{n \times n}$.
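The following numpy sketch assembles the blockwise Gram matrix of (5) from the tangent bases of the previous sketch and solves the dual ridge system; the closed-form solution it relies on, $\boldsymbol{\alpha}^* = (\lambda\mathbf{I} + \mathbf{K})^{-1}\mathbf{y}$, is stated in (6) below. The squared-exponential choice of $\rho$ and all helper names are illustrative assumptions.

```python
import numpy as np

def rbf(x, xp, gamma=1.0):
    """Scalar-valued kernel rho(x, x'): squared exponential."""
    return np.exp(-np.linalg.norm(x - xp) ** 2 / gamma)

def fit_ambient_gmkrr(X, T, dt, lam=1e-6, gamma=1.0):
    """Assemble the nN x nN blockwise Gram matrix with blocks
    k(x_i, x_j) = rho(x_i, x_j) * P_hat(x_i) @ P_hat(x_j) and solve the
    dual ridge system (lam*I + K) alpha = y of (4)-(5)."""
    N, n = X.shape[0] - 1, X.shape[1]        # N labeled pairs (x_i, x_{i+1})
    P = np.einsum('iad,ibd->iab', T, T)      # P_hat(x_i) = T_i @ T_i.T
    y = np.einsum('iab,ib->ia', P[:N], (X[1:] - X[:-1]) / dt).reshape(-1)
    K = np.zeros((n * N, n * N))
    for i in range(N):
        for j in range(N):
            K[i*n:(i+1)*n, j*n:(j+1)*n] = rbf(X[i], X[j], gamma) * (P[i] @ P[j])
    alpha = np.linalg.solve(lam * np.eye(n * N) + K, y)
    return alpha.reshape(N, n)               # dual coefficient blocks alpha_i

# Prediction at a new point x (cf. Eq. (6)):
#   f_eps(x) = sum_i rho(x, x_i) * P_hat(x) @ P_hat(x_i) @ alpha_i
```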
The solution to the dual form is $\boldsymbol{\alpha}^* = (\lambda\mathbf{I} + \mathbf{K})^{-1}\mathbf{y}$, and the predictive model for a new input $\mathbf{x}$ is given by
$$\mathbf{f}_\epsilon(\mathbf{x}) = \boldsymbol{\psi}(\mathbf{x})\Xi = \boldsymbol{\psi}(\mathbf{x})\mathbf{\Psi}^\top\boldsymbol{\alpha} = \mathbf{k}(\mathbf{x})(\lambda\mathbf{I} + \mathbf{K})^{-1}\mathbf{y}, \tag{6}$$
where $\mathbf{k}(\mathbf{x}) = [k(\mathbf{x}, \mathbf{x}_1), k(\mathbf{x}, \mathbf{x}_2), \cdots, k(\mathbf{x}, \mathbf{x}_N)]$. Since $\boldsymbol{\psi}(\mathbf{x}) = \hat{P}(\mathbf{x})\Theta(\mathbf{x}) \in \mathbb{R}^{n \times nm}$, we have
$$k(\mathbf{x}, \mathbf{x}') = \boldsymbol{\psi}(\mathbf{x})\boldsymbol{\psi}(\mathbf{x}')^\top = \hat{P}(\mathbf{x})\Theta(\mathbf{x})\Theta(\mathbf{x}')^\top\hat{P}(\mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}')\hat{P}(\mathbf{x})\hat{P}(\mathbf{x}'), \tag{7}$$
where $\rho(\mathbf{x}, \mathbf{x}') = \theta(\mathbf{x})\theta(\mathbf{x}')^\top \in \mathbb{R}$, and the last equality holds because $\Theta(\mathbf{x})\Theta(\mathbf{x}')^\top = \operatorname{diag}[\rho(\mathbf{x}, \mathbf{x}'), \cdots, \rho(\mathbf{x}, \mathbf{x}')] = \rho(\mathbf{x}, \mathbf{x}')\mathbf{I} \in \mathbb{R}^{n \times n}$. Up to this point, the regression problem with (3) has been converted to a GMKRR problem. The geometrically constrained function $k$ in (7) is used as a matrix-valued kernel and is constructed from a finite set of candidate functions defined in the ambient space.

### 2.2 THE RKHS OF THE INTRINSIC GMKRR MODEL

In the following, we generalize the GMKRR model to a family of matrix kernel functions that may include infinitely many candidate functions via the construction of an $\mathfrak{X}(\mathcal{M})$-valued RKHS $\mathcal{H}$.

**Definition 2.1.** _Let $X$ be a non-empty set, $W$ a separable Hilbert space with inner product $\langle\cdot,\cdot\rangle$, and $L(W)$ a Banach space of bounded linear operators on $W$. A function $k: X \times X \mapsto L(W)$ is SPD if (1) for any pair $(\mathbf{x}, \mathbf{x}') \in X \times X$, $k(\mathbf{x}, \mathbf{x}')^* = k(\mathbf{x}', \mathbf{x})$, and (2) for any finite set of points $\{\mathbf{x}_i\}_{i=1}^N$ in $X$ and $\{\mathbf{f}_i\}_{i=1}^N$ in $W$, $\sum_{i,j=1}^N \langle \mathbf{f}_i, k(\mathbf{x}_i, \mathbf{x}_j)\mathbf{f}_j \rangle \ge 0$. The function $k$ is an operator-valued kernel on $X$ and $W$._

**Definition 2.2.** _Following the notation from the previous definition, for each $\mathbf{x} \in X$ and $\mathbf{f}, \mathbf{g} \in W$, define $k_{\mathbf{x}}\mathbf{f}(\mathbf{x}') = k(\mathbf{x}, \mathbf{x}')\mathbf{f}$ for all $\mathbf{x}' \in X$. For $\mathbf{f}' = \sum_{i=1}^N k_{\mathbf{x}_i}\mathbf{f}_i$ and $\mathbf{g}' = \sum_{i=1}^N k_{\mathbf{x}'_i}\mathbf{g}_i$, define the inner product $\langle \mathbf{f}', \mathbf{g}' \rangle_{\mathcal{H}} = \sum_{i,j=1}^N \langle \mathbf{f}_i, k(\mathbf{x}_i, \mathbf{x}'_j)\mathbf{g}_j \rangle$. Then $\mathcal{H} = \operatorname{span}\{k_{\mathbf{x}}\mathbf{f} \,|\, \mathbf{x} \in X, \mathbf{f} \in W\}$ forms an RKHS with reproducing kernel $k$. The RKHS has the reproducing property that $\langle \mathbf{f}(\mathbf{x}), \mathbf{g} \rangle_W = \langle \mathbf{f}(\cdot), k(\cdot, \mathbf{x})\mathbf{g} \rangle_{\mathcal{H}}$._

**Definition 2.3.** _The inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ also induces the RKHS norm, $\|\mathbf{f}'\|_{\mathcal{H}} = \sqrt{\langle \mathbf{f}', \mathbf{f}' \rangle_{\mathcal{H}}}$ for all $\mathbf{f}' = \sum_{i=1}^N k_{\mathbf{x}_i}\mathbf{f}_i$.
When $W$ is an $n$-dimensional Euclidean space, $\|\mathbf{f}'\|_{\mathcal{H}} = \sqrt{\mathbf{f}^\top\mathbf{K}\mathbf{f}}$, where $\mathbf{f} = [\mathbf{f}_1^\top, \mathbf{f}_2^\top, \cdots, \mathbf{f}_N^\top]^\top$ and $\mathbf{K} \in \mathbb{R}^{nN \times nN}$ with the $(i,j)$th block $\mathbf{K}_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$._

**Lemma 2.1.** _Consider a function $k: \mathcal{M} \times \mathcal{M} \mapsto L(\mathfrak{X}(\mathcal{M}))$, defined as $k(\mathbf{x}, \mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}')\hat{P}(\mathbf{x})\hat{P}(\mathbf{x}')$, where $\rho: \mathbb{R}^n \times \mathbb{R}^n \mapsto \mathbb{R}$ is a scalar-valued kernel. Then $k$ is an operator-valued kernel._

See Appendix D for the proof of Lemma 2.1. The operator-valued kernel $k$ forms the desired $\mathfrak{X}(\mathcal{M})$-valued RKHS, denoted as $\mathcal{H}_{\mathcal{M}}$. In practice, we can use any SPD kernel, such as the squared exponential (SE) kernel $\rho(\mathbf{x}, \mathbf{x}') = \exp\big(-\|\mathbf{x} - \mathbf{x}'\|^2/\gamma\big)$ or the Matérn kernels (see Appendix E.1). The function in (7) is a special case of the operator-valued kernel on $\mathcal{M}$ and $\mathfrak{X}(\mathcal{M})$.

Subsequently, the GMKRR model is reformulated via $\mathcal{H}_{\mathcal{M}}$. Given a dataset $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^N$, the unknown vector field $\mathbf{f} \in \mathcal{H}_{\mathcal{M}}$ is parametrized as $\mathbf{f}(\mathbf{x}) = \sum_{i=1}^N k(\mathbf{x}_i, \mathbf{x})\boldsymbol{\alpha}_i \equiv \mathbf{k}(\mathbf{x})\boldsymbol{\alpha}$, where $\boldsymbol{\alpha} = [\boldsymbol{\alpha}_1^\top, \boldsymbol{\alpha}_2^\top, \cdots, \boldsymbol{\alpha}_N^\top]^\top$ is determined by minimizing the following objective function,
$$J(\mathbf{f}) = \sum_{i=1}^N \big\|\mathbf{y}_i - \mathbf{f}(\mathbf{x}_i)\big\|^2 + \lambda\|\mathbf{f}\|^2_{\mathcal{H}_{\mathcal{M}}} \equiv \big\|\mathbf{y} - \mathbf{K}\boldsymbol{\alpha}\big\|^2 + \lambda\,\boldsymbol{\alpha}^\top\mathbf{K}\boldsymbol{\alpha},$$
which is the same optimization problem solved in the previous section, whose solution is given by (6). However, the new formulation admits a family of operator-valued kernels that may involve an infinite set of candidate functions.

### 2.3 CONVERSION TO INTRINSIC SPACE

Subsequently, we reformulate the GMKRR model (6) so that the predictive model is effectively defined in the intrinsic space and becomes computationally tractable to train and evaluate. Noting that $\hat{P}(\mathbf{x}) = \hat{\mathbf{T}}_{\mathbf{x}}\hat{\mathbf{T}}_{\mathbf{x}}^\top$, the operator-valued kernel $k$ in the ambient space can be rewritten as
$$k(\mathbf{x}, \mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}')\hat{P}(\mathbf{x})\hat{P}(\mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}')\,\hat{\mathbf{T}}_{\mathbf{x}}\, O_{\mathbf{x}\mathbf{x}'}\, \hat{\mathbf{T}}_{\mathbf{x}'}^\top \equiv \hat{\mathbf{T}}_{\mathbf{x}}\, r(\mathbf{x}, \mathbf{x}')\, \hat{\mathbf{T}}_{\mathbf{x}'}^\top, \tag{8}$$
where $O_{\mathbf{x}\mathbf{x}'} = \hat{\mathbf{T}}_{\mathbf{x}}^\top\hat{\mathbf{T}}_{\mathbf{x}'} \in \mathbb{R}^{d \times d}$ and $r(\mathbf{x}, \mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}')\,\hat{\mathbf{T}}_{\mathbf{x}}^\top\hat{\mathbf{T}}_{\mathbf{x}'} = \rho(\mathbf{x}, \mathbf{x}')\, O_{\mathbf{x}\mathbf{x}'} \in \mathbb{R}^{d \times d}$. Using (8), the Gram matrix $\mathbf{K}$ in the ambient space is decomposed as $\mathbf{K} = \mathcal{T}\mathbf{R}\mathcal{T}^\top$, where $\mathcal{T} \in \mathbb{R}^{nN \times dN}$ is a block diagonal matrix with diagonal block entries $\hat{\mathbf{T}}_1, \hat{\mathbf{T}}_2, \ldots, \hat{\mathbf{T}}_N$, and $\mathbf{R} \in \mathbb{R}^{dN \times dN}$ is an $N \times N$ block matrix with the $(i,j)$th block $\mathbf{R}_{ij} = r(\mathbf{x}_i, \mathbf{x}_j) = \rho(\mathbf{x}_i, \mathbf{x}_j)\, O_{\mathbf{x}_i\mathbf{x}_j}$.
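The following hedged numpy sketch carries out this reduction, working directly with the $dN \times dN$ matrix $\mathbf{R}$ and implementing the closed form that is formalized in (10) in the next paragraph; the squared-exponential $\rho$ and the helper names are illustrative assumptions.

```python
import numpy as np

def fit_intrinsic_gmkrr(X, T, dt, lam=1e-6, gamma=1.0):
    """Intrinsic-space GMKRR: solve a dN x dN system with R instead of the
    nN x nN ambient Gram matrix; see Eq. (10) below for the closed form."""
    N, d = X.shape[0] - 1, T.shape[2]
    # labels expressed in the local tangent frames: y_tilde_i = T_i^T y_i
    y_t = np.einsum('ina,in->ia', T[:N], (X[1:] - X[:-1]) / dt).reshape(-1)
    # R_ij = rho(x_i, x_j) * T_i^T @ T_j  (d x d blocks)
    R = np.zeros((d * N, d * N))
    for i in range(N):
        for j in range(N):
            rho = np.exp(-np.linalg.norm(X[i] - X[j]) ** 2 / gamma)
            R[i*d:(i+1)*d, j*d:(j+1)*d] = rho * (T[i].T @ T[j])
    beta = np.linalg.solve(lam * np.eye(d * N) + R, y_t)  # (lam I + R)^{-1} T^T y

    def predict(x, Tx):
        """f_eps(x) = T_hat_x r(x) (lam I + R)^{-1} T^T y, with Tx the (n x d)
        tangent basis at x, e.g., from local_svd_tangent_bases."""
        rx = np.hstack([np.exp(-np.linalg.norm(x - X[j]) ** 2 / gamma)
                        * (Tx.T @ T[j]) for j in range(N)])  # d x dN
        return Tx @ (rx @ beta)
    return predict
```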
Similarly,
$$\mathbf{k}(\mathbf{x}) = \hat{\mathbf{T}}_{\mathbf{x}}\,[r(\mathbf{x}, \mathbf{x}_1), r(\mathbf{x}, \mathbf{x}_2), \cdots, r(\mathbf{x}, \mathbf{x}_N)]\,\mathcal{T}^\top \equiv \hat{\mathbf{T}}_{\mathbf{x}}\,\mathbf{r}(\mathbf{x})\,\mathcal{T}^\top. \tag{9}$$
Using the above decompositions, the GMKRR formulation (6) in the ambient space is converted to
$$\mathbf{f}_\epsilon(\mathbf{x}) = \mathbf{k}(\mathbf{x})(\lambda\mathbf{I} + \mathbf{K})^{-1}\mathbf{y} = \hat{\mathbf{T}}_{\mathbf{x}}\,\mathbf{r}(\mathbf{x})\,\mathcal{T}^\top\big(\lambda\mathbf{I} + \mathcal{T}\mathbf{R}\mathcal{T}^\top\big)^{-1}\mathbf{y} = \hat{\mathbf{T}}_{\mathbf{x}}\,\mathbf{r}(\mathbf{x})(\lambda\mathbf{I} + \mathbf{R})^{-1}\mathcal{T}^\top\mathbf{y}, \tag{10}$$
where the Woodbury identity is used in the last equality. In the intrinsic GMKRR formulation (10), the matrix $(\lambda\mathbf{I} + \mathbf{R})$ is of dimension $dN \times dN$ and computationally tractable to invert, especially if $d$ is small; this holds regardless of the ambient dimension $n$. Furthermore, the term $\hat{\mathbf{T}}_{\mathbf{x}}$ guarantees that the vector fields lie on the local tangent space of the underlying manifold at $\mathbf{x}$ in the limit of large data.

Lastly, we briefly discuss the intrinsic GMKRR model from an RKHS point of view. First, it can be proved that the function $r$ in (8) is an operator-valued kernel on $\mathcal{M}$ and $L(\mathbb{R}^d)$ (the proof is similar to that of Lemma 2.1); $r$ is referred to as the intrinsic operator-valued kernel. Then, $r$ induces an RKHS and the corresponding GMKRR model in the intrinsic space. The GMKRR is effectively applied to a modified dataset $\{(\mathbf{x}_i, \tilde{\mathbf{y}}_i = \hat{\mathbf{T}}_{\mathbf{x}_i}^\top\mathbf{y}_i)\}_{i=1}^N$, where the modified label $\tilde{\mathbf{y}}_i$ is the vector field expressed in the local tangent space at $\mathbf{x}_i$. In the kernel $r(\mathbf{x}, \mathbf{x}')$, if (1) $\rho$ is chosen to be the Diffusion Map kernel and (2) the pairs of data points $(\mathbf{x}, \mathbf{x}')$ are sufficiently close so that $O_{\mathbf{x}\mathbf{x}'} = \hat{\mathbf{T}}_{\mathbf{x}}^\top\hat{\mathbf{T}}_{\mathbf{x}'}$ is always orthogonal, then the RKHS induced by $r$ is a subset of $L^2(\mathfrak{X}(\mathcal{M}))$ that is spanned by smooth eigenvector-fields of the Connection Laplacian (Singer & Wu, 2012).

## 3 GEOMETRY-PRESERVING TIME INTEGRATOR

While standard ODE solvers such as Runge-Kutta methods often empirically produce solutions that are close enough to the manifold (i.e., a manifold-invariant scheme) for sufficiently small time steps, it is well known that the invariant manifold property is only valid when the solvers are employed on a special class of manifolds (Calvo et al., 1996). Various ODE solvers on manifolds have been proposed in the literature; see Hairer (2011); Crouch & Grossman (1993) for general vector fields and Leimkuhler & Patrick (1996) for Hamiltonian systems. In this section, we will illustrate this issue on a simple example, propose a normal correction (NC) to the classical explicit Euler scheme, which we call Euler+NC in the remainder of this paper, and provide a convergence study. This approach can be viewed as a realization of the local coordinate approach (see Section III.2 in Hairer, 2011) with the local parameterization estimated by GMLS. The proposed normal correction approximates all of the higher-order terms in an exponential map, $\exp_{\mathbf{x}_i}(\mathbf{f}(\mathbf{x}_i)\Delta t)$, including the second fundamental form, and is computationally more attractive than the classical Taylor method, which requires the derivatives of the estimated vector fields.

### 3.1 A MOTIVATING EXAMPLE

Figure 1: Dynamics on a series of 1D manifolds. (a) Parameterized manifolds. (b) Predictions over a long time period.
Consider a scalar ODE, $\dot{\theta} = \frac{3}{2} - \cos(\theta)$, $\theta(0) = 0$, whose solution has a period of $2\pi$, and embed the solution in a 2D ambient space by $(x_1, x_2) = (r(\theta)\cos(\theta), r(\theta)\sin(\theta))$, where $r(\theta) = 1 + D\cos(K\theta)$. The 2D embedding is illustrated in Fig. 1a for $K = 3$ and a series of $D$ values, where neighboring points are separated by one step size $\Delta t = 0.04$. When $D = 0$, the 1D manifold is a "simple" unit circle. As $D$ increases, the manifold becomes more distorted, which may pose a challenge in solving the dynamics.
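A short numpy sketch reproducing this data-generation setup (a sub-stepped explicit Euler stands in for an accurate reference integrator; the function name and substep count are illustrative assumptions):

```python
import numpy as np

def generate_manifold_data(D=0.2, K=3, dt=0.04, n_steps=200):
    """Embed the scalar ODE theta_dot = 3/2 - cos(theta), theta(0) = 0, on the
    1D manifold r(theta) = 1 + D*cos(K*theta), sampled every dt."""
    theta, thetas, sub = 0.0, [], 100          # 100 substeps per output step
    for _ in range(n_steps):
        thetas.append(theta)
        for _ in range(sub):
            theta += (1.5 - np.cos(theta)) * dt / sub
    thetas = np.array(thetas)
    r = 1.0 + D * np.cos(K * thetas)
    X = np.stack([r * np.cos(thetas), r * np.sin(thetas)], axis=1)  # (N, 2)
    return X
```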
# PROXY DENOISING FOR SOURCE-FREE DOMAIN ADAPTATION

**Song Tang**¹²³, **Wenxin Su**¹, **Yan Gan**⁴, **Mao Ye**⁵*, **Jianwei Zhang**² & **Xiatian Zhu**⁶*
¹ University of Shanghai for Science and Technology, ² Universität Hamburg, ³ ComOriginMat Inc., ⁴ Chongqing University, ⁵ University of Electronic Science and Technology of China, ⁶ University of Surrey
tangs@usst.edu.cn, {suwenxin43,cvlab.uestc}@gmail.com, xiatian.zhu@surrey.ac.uk
*Corresponding author

## ABSTRACT

Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to an unlabeled target domain with no access to the source data. Inspired by the success of large Vision-Language (ViL) models in many applications, the latest research has validated ViL's benefit for SFDA by using their predictions as pseudo supervision. However, we observe that ViL's supervision can be noisy and inaccurate at an unknown rate, introducing additional negative effects during adaptation. To address this thus-far ignored challenge, we introduce a novel _**Pro**xy **De**noising_ (**ProDe**) approach. The key idea is to leverage the ViL model as a proxy to facilitate the adaptation process towards the latent domain-invariant space. We design a proxy denoising mechanism to correct ViL's predictions, grounded on a proxy confidence theory that models the dynamic effect of the proxy's divergence against the domain-invariant space during adaptation. To capitalize on the corrected proxy, we derive a mutual knowledge distilling regularization. Extensive experiments show that ProDe significantly outperforms current state-of-the-art alternatives under the conventional closed-set setting and the more challenging open-set, partial-set, generalized SFDA, multi-target, multi-source, and test-time settings. Our code and data are available at [https://github.com/tntek/source-free-domain-adaptation](https://github.com/tntek/source-free-domain-adaptation).

## 1 INTRODUCTION

Unsupervised Domain Adaptation (UDA) uses well-annotated source data and unannotated target data concurrently to achieve cross-domain transfer. However, this data access requirement raises increasing concerns about safety and privacy. Thus, there is a call for restricted access to source-domain training data, leading to a more practical but challenging transfer learning setting: Source-Free Domain Adaptation (SFDA) (Li et al., 2020a; Xia et al., 2021; Roy et al., 2022). In the absence of the source domain, cross-domain distribution matching approaches are no longer applicable (Ganin & Lempitsky, 2015; Kang et al., 2019). Self-supervised learning then comes into play by generating and mining auxiliary information to enable unsupervised adaptation along two main routes. _The first_ treats SFDA as a special case of UDA by explicitly creating a pseudo source domain, enabling UDA methods such as adversarial learning (Xia et al., 2021; Kurmi et al., 2021) or minimizing domain shift (Tian et al., 2022; Kundu et al., 2022). _The second_ further refines the generated supervision from the source model (Lao et al., 2021; Wang et al., 2022a; Huang et al., 2021) or target data (Yang et al., 2022; Tang et al., 2022), as the constructed pseudo source domain may be noisy. These methods all perform alignment without any guidance from the target feature space to the unknown domain-invariant feature space.
There has been growing interest in leveraging pre-trained Vision-Language (ViL) models (e.g., CLIP (Radford et al., 2021)) for transfer learning challenges. This is because ViL models were trained on a massive amount of diverse vision-language data, encompassing rich knowledge potentially useful for many downstream tasks. For instance, Ge et al. (2022); Lai et al. (2023); Singha et al. (2023) disentangle domain and category information in the visual features of the ViL model by learning domain-specific textual or visual prompts. ViL models have also been used to address the SFDA problem (Tang et al., 2024c; Xiao et al., 2024). They treat the ViL model's predictions as ground truth, which can be noisy in many unknown cases, ultimately harming their performance.

To address the limitation mentioned above, in this paper we propose a new **Pro**xy **De**noising (**ProDe**) approach for SFDA. In contrast to (Tang et al., 2024c; Xiao et al., 2024), we consider the ViL model/space as a _noisy_ proxy of the latent domain-invariant space¹, which needs to be denoised. In the absence of any good reference model for measuring the noise degree of the already strong ViL model's predictions, we exploit _the dynamics of the domain adaptation process_, starting at the source model space and terminating presumably in the latent domain-invariant space. In particular, this takes into account the proxy's divergence against the domain-invariant space (Fig. 1). Specifically, we approximately model the effect of the ViL model's prediction error on domain adaptation by formulating a proxy confidence theory, in relation to the discrepancy between the source domain and the current under-adaptation model. This leads to a novel proxy denoising mechanism for ViL prediction correction. To capitalize on the corrected ViL predictions more effectively, a mutual knowledge distilling regularization is further designed.

Figure 1: Conceptual illustration of ProDe. We align the adapting direction with the desired trajectory by leveraging a proxy space that approximates the latent domain-invariant space. This process incorporates direction adjustments based on proxy error correction, implementing proxy denoising, and finally achieves enhanced model adaptation.

Our **contributions** are summarized as follows: **(1)** We for the first time investigate the inaccurate predictions of ViL models in the context of SFDA. **(2)** We formulate a novel ProDe method that reliably corrects the ViL model's predictions under the guidance of a proxy confidence theory. A mutual knowledge distilling regularization is introduced to better capitalize on the refined proxy predictions. **(3)** Extensive experiments on open benchmarks show that our ProDe significantly outperforms previous alternatives in the closed-set setting, as well as the more challenging partial-set, open-set, generalized SFDA, multi-target, multi-source, and test-time settings.
## 2 RELATED WORK

**Source-Free Domain Adaptation** One main challenge with SFDA is the lack of supervision during model adaptation. To overcome this, current methods are broadly divided into three categories. The _first category_ involves converting SFDA to conventional UDA by introducing a pseudo-source domain. This can be achieved by building the pseudo-source domain through generative models (Tian et al., 2022; Li et al., 2020b) or by extracting a subset similar to the source distribution from the target domain (Du et al., 2023). The _second category_ involves mining auxiliary information from the pre-trained source model to assist in aligning the feature distribution from the target domain to the source domain. Commonly used auxiliary factors include multi-hypothesis (Lao et al., 2021), prototypes (Zhou et al., 2024), source distribution estimation (Ding et al., 2022), and hard samples (Li et al., 2021). The _last category_ focuses on the target domain and creates additional constraints to correct the semantic noise during model transfer. In practice, domain-aware gradient control (Yang et al., 2021b) and data geometry, such as the intrinsic neighborhood structure (Tang et al., 2021) and the target data manifold (Tang et al., 2022; Tang et al., 2024a), have been exploited to generate high-quality pseudo-labels (Liang et al., 2020; Chen et al., 2022b) or to inject assistance in an unsupervised fashion (Yang et al., 2021a). These methods refine auxiliary information from domain-specific knowledge, such as the source model and unlabeled target data, without resorting to external knowledge sources such as pre-trained multimodal foundation models.

¹ The issue of noisy predictions is evidenced by the inferior zero-shot performance of the ViL model, e.g., CLIP, on the target domains (see Tab. 4). Here, "domain-invariant space" refers to an ideal latent embedding space where the mapped features from different domains align with the same probability distribution.

Figure 2: **Left:** Dynamics of the effect of the ViL model's prediction error (or proxy error) during alignment. (a) In the initial adaptation phase, it is acceptable to overlook the proxy errors. However, as the in-training model approaches the proxy space, these errors grow more noticeable, leading to a continuous decline in the reliability of ViL predictions, as shown in (b) and (c). **Right:** Our ProDe capitalizes on the corrected proxy, involving a mutual knowledge distilling regularization and a proxy denoising mechanism imposing refinement on the ViL logits.

**Vision-Language Models** ViL models, such as CLIP (Radford et al., 2021) and GLIP (Li et al., 2022), have shown promise in various tasks (Liang et al., 2023; Wang et al., 2022c) due to their ability to capture modality-invariant features. There are two main lines of research. The _first line_ aims to improve their performance. For instance, text-prompt learning (Zhou et al., 2022; Ge et al., 2022) and visual-prompt learning (Wang et al., 2023; Jia et al., 2022) have been adopted, using learnable prompts related to application scenarios. The data efficiency of these models can be improved by repurposing (Andonian et al., 2022) or removing noisy data (Wang et al., 2021b). The _second line_ focuses on using ViL models as external knowledge to boost downstream tasks. Three strategies are involved: plain fusion (Liu et al., 2024), knowledge distillation (Pei et al., 2023), and information entropy regulating (Cha et al., 2022).
Beyond the latest ViL-based SFDA models (Tang et al., 2024c; Xiao et al., 2024), we uniquely tackle the challenge of mitigating the noise of ViL's supervision.

## 3 METHODOLOGY

### 3.1 PROBLEM FORMULATION

We start with a labeled source domain and an unlabeled target domain, handling the same $C$ categories. Let $X_S$ and $Y_S$ be the source samples and labels. The target samples and true target labels are denoted as $X_T = \{\mathbf{x}_i\}_{i=1}^n$ and $Y_T = \{y_i\}_{i=1}^n$, respectively, where $n$ is the sample number. SFDA aims to learn a target model $\theta_t: X_T \to Y_T$ given (1) the pre-trained source model $\theta_s: X_S \to Y_S$ and (2) the unlabeled target data $X_T$. In addition, we leverage a ViL model $\theta_v$ that produces noisy supervision.

To address noisy ViL supervision, we exploit the dynamics of the domain adaptation process. As shown in Fig. 2 (a), we deal with three spaces: the source domain $D_S$ (i.e., the source image embedding space), the domain-invariant space $D_I$, and the ViL space $D_V$ (the best possible proxy w.r.t. $D_I$). In this context, $D_I$ typically refers to an _ideal, unknown latent embedding space_ that is domain generalized. We want to align the in-training model $D_T^t$ from $D_S$ to $D_I$ over $t \in [0, T]$ with $T \gg 0$. Without access to $D_I$, we propose to perform _**proxy alignment**_ by aligning $D_T^t$ towards $D_V$. We denote the discrepancy between $D_I$ and $D_V$ as the _**proxy error**_ $\mathbf{e}_{VI}$, reflecting ViL's prediction errors. We then transform the task of minimizing the errors of ViL predictions into controlling the proxy error by establishing a proxy confidence theory.

### 3.2 PROXY CONFIDENCE THEORY

Understanding the impact of proxy errors on domain adaptation is critical. To account for the dynamics of domain adaptation, as demonstrated in Fig. 2 (a), we consider two typical situations of the proxy alignment process. We denote the distance of $D_T^t$ to $D_V$ and $D_I$ as $\mathbf{d}_V^t$ and $\mathbf{d}_I^t$, respectively, and note that the distinction between $D_V$ and $D_I$, i.e., the proxy error $\mathbf{e}_{VI}$, is a space-to-space distance in vector form. To ease understanding, we note two cases:

- **Case 1:** When $D_T^t$ is far from $D_V$, e.g., at the beginning of adaptation ($t = 0$), it holds that $\mathbf{d}_I^0 \approx \mathbf{d}_V^0 \gg \mathbf{e}_{VI}$. This implies that aligning to $D_I$ or $D_V$ is equivalent. Consequently, the proxy error $\mathbf{e}_{VI}$ can be ignored; that is, the ViL prediction can be deemed trustworthy.
- **Case 2:** When $D_T^t$ approaches $D_V$, e.g., in the later phase of adaptation ($t = U \gg 0$), tackling the proxy errors becomes increasingly crucial; also, the distance relationship evolves to $\mathbf{d}_I^U = \mathbf{d}_V^U + \mathbf{e}_{VI}$ (according to the vector geometric property that $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{u} + \mathbf{v}$ form a triangle, where $\mathbf{u}$ and $\mathbf{v}$ are two sides). At this moment, ViL predictions become less reliable.
The proxy errors dynamically affect the proxy alignment, as reflected in the relative relationship between $\mathbf{d}_V^t$ and $\mathbf{d}_I^t$, defined as:
$$\eta_t = \frac{|\mathbf{d}_I^t|}{|\mathbf{d}_V^t|} = \frac{|\mathbf{d}_V^t + \mathbf{e}_{VI}|}{|\mathbf{d}_V^t|} \le \frac{|\mathbf{d}_V^t| + |\mathbf{e}_{VI}|}{|\mathbf{d}_V^t|} = 1 + \frac{|\mathbf{e}_{VI}|}{|\mathbf{d}_V^t|}, \tag{1}$$
where $\eta_t$ quantifies the _error impact degree_ and $|\cdot|$ denotes the absolute value (length) of a distance vector. During proxy alignment, the quantity $|\mathbf{e}_{VI}|/|\mathbf{d}_V^t|$ in Eq. (1) gradually increases from a very small value (e.g., Case 1) to larger ones (e.g., Case 2), leading to an increase in the impact degree $\eta_t$ from 1. With these dynamics, as shown in Fig. 2 (b), the variance of ViL predictions gradually increases, implying a progressive decrease in the reliability of ViL predictions. At any time $t$, we treat the ViL predictions as approximating a Gaussian distribution $\mathcal{N}(\theta_v(x_i), \delta_t)$ with mean $\theta_v(x_i)$ and prediction variance $\delta_t \propto \eta_t$ (Fig. 2 (c)). This is because we consider the ViL predictions to be influenced by various sources of noise and uncertainty, which justifies the Gaussian approximation according to the _Central Limit Theorem_ (Chow & Teicher, 1988).

Given that $\mathbf{e}_{VI}$ is unknown, we cannot formulate these dynamics explicitly. We thus approximate this problem by quantifying the prediction variance with the varying confidence of ViL predictions. This conversion can be expressed in the form of a probability distribution with proxy confidence as:
$$\mathcal{N}(\theta_v(x_i), \delta_t) \Longrightarrow P\big(G_{P(V)} = \mathit{True}, t\big)\,P(V), \tag{2}$$
where $P(V)$ is the probability distribution of the proxy space $D_V$; $G_{P(V)}$ stands for the random event that the sampling result (i.e., a ViL prediction) from $P(V)$ is confident; and $P\big(G_{P(V)} = \mathit{True}, t\big)$ is denoted as the _proxy confidence_, indicating the probability of the event $G_{P(V)}$ being true at time $t$. This confidence decreases progressively, as the ViL prediction reliability reduces relative to the ability of the in-training model.

By framing the ViL prediction as a probabilistic event, we can leverage the concept of proxy confidence, $P\big(G_{P(V)} = \mathit{True}, t\big)$, to quantify the reliability of ViL predictions at any point during adaptation. This facilitates the measurement of the impact of proxy errors. Specifically, we formulate the _proxy confidence theory_ as in **Theorem 1** (see proof in Appendix A).

**Theorem 1** _We note that the source domain ($D_S$), the domain-invariant space ($D_I$), the proxy space ($D_V$) and the in-training model ($D_T^t$) follow the probability distributions $P(S)$, $P(I)$, $P(V)$ and $P(T^t)$, respectively, where $S$, $I$, $V$ and $T^t$ are the corresponding random variables._
_With our proxy alignment idea (see Sec. 3.1), the proxy confidence can be expressed as:_
$$P\big(G_{P(V)} = \mathit{True}, t\big) \propto \frac{P(T^t)}{P(S)}. \tag{3}$$

This theorem tells us that _the effect of ViL prediction errors on domain adaptation can be approximately estimated by contrasting the distributions of the source model and the current in-training model_.

### 3.3 CAPITALIZING ON THE CORRECTED PROXY

**Overview** To better leverage the corrected proxy, we propose a novel ProDe method featuring two designs: (1) a proxy denoising mechanism, refining the original ViL predictions at the logit level, and (2) a mutual knowledge distilling regularization, encouraging the extraction of useful knowledge from the ViL model $\theta_v$ to the in-training target model $\theta_t$, as shown in Fig. 2 (d).

**Proxy denoising** This module aims to denoise the ViL predictions. By **Theorem** 1 (Eq. (3)), we further convert the ViL space's probability distribution with proxy confidence (i.e., Eq. (2)) into
$$\log\Big(\frac{P(T^t)}{P(S)}\,P(V)\Big) = \log P(V) - \big(\log P(S) - \log P(T^t)\big), \tag{4}$$
where the latter two terms form an adjustment used to correct the first term (i.e., the ViL prediction). Under this formula, we realize our denoising mechanism as:
$$\mathbf{p}'_i = \mathrm{softmax}\big(\theta_v(\mathbf{x}_i, \mathbf{v}) - \omega\,[\theta_s(\mathbf{x}_i) - \theta_t(\mathbf{x}_i)]\big), \tag{5}$$
where $\theta_v/\theta_s/\theta_t(\cdot)$ apply the ViL/source/target model to get the corresponding logits, and the hyperparameter $\omega$ specifies the correction strength. The output $\mathbf{p}'_i$ is a denoised ViL prediction.

**Mutual knowledge distilling** This component aims to distill useful knowledge from the ViL model to our target model. This is achieved by designing two loss terms:
$$\mathcal{L}_{\text{ProDe}} = \min_{\theta_t,\,\mathbf{v}}\; \alpha\,\mathcal{L}_{\text{Apt}} + \beta\,\mathcal{L}_{\text{Ref}}, \quad \mathcal{L}_{\text{Apt}} = -\mathbb{E}_{\mathbf{x}_i \in X_t}\,\mathbf{MI}\big(\mathbf{p}'_i, \mathbf{p}_i\big) + \gamma\sum_{c=1}^{C}\bar{q}_c\log\bar{q}_c, \quad \mathcal{L}_{\text{Ref}} = -\mathbb{E}_{\mathbf{x}_i \in X_t}\sum_{c=1}^{C}\mathbb{1}\big[c = y'_i\big]\log p_{i,c}. \tag{6}$$

The first term $\mathcal{L}_{\text{Apt}}$ adapts both the target model and the learnable prompt of the ViL model by maximizing the unbiased mutual information $\mathbf{MI}(\cdot,\cdot)$ (Ji et al., 2019) between the denoised ViL prediction $\mathbf{p}'_i$ and the target prediction $\mathbf{p}_i = \mathrm{softmax}(\theta_t(\mathbf{x}_i))$. This design is motivated by the fact that, despite the massive (often noisy) training data used, ViL models (e.g., CLIP) do not always outperform a specialized expert model such as the supervised source model. There are three reasons: (1) ViL models are generalists, while source-domain models are specialized. (2) ViL models may include irrelevant data, whereas source-domain models use curated, relevant data. (3) ViL models might overlook task-specific features that are captured by source-domain models. To avoid solution collapse (Ghasedi Dizaji et al., 2017), we use a common category balance constraint (Yang et al., 2021a), where $\bar{q}_c = \frac{1}{n}\sum_{i=1}^n p_{i,c}$ is the average likelihood of class $c$ over $n$ training samples by the target model, across a total of $C$ categories.
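As a concrete illustration, here is a minimal PyTorch sketch of the denoising step in Eq. (5), together with the pseudo labels consumed by $\mathcal{L}_{\text{Ref}}$ discussed next; the model handles and the default correction strength are illustrative assumptions, not the released implementation.

```python
import torch.nn.functional as F

def proxy_denoise(vil_logits, src_logits, tgt_logits, omega=1.0):
    """Eq. (5): correct ViL logits by the source-vs-target logit gap.
    All inputs are (batch, C) logits from the ViL, source, and in-training
    target models, respectively; omega is the correction strength."""
    corrected = vil_logits - omega * (src_logits - tgt_logits)
    p_prime = F.softmax(corrected, dim=1)   # denoised ViL prediction p'_i
    pseudo_labels = p_prime.argmax(dim=1)   # y'_i used by the L_Ref term
    return p_prime, pseudo_labels
```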
The second term $\mathcal{L}_{\text{Ref}}$ refers to a typical pseudo-labeling strategy where a classification objective is applied, with the pseudo label $y'_i$ obtained from the denoised ViL predictions and $\mathbb{1}[c = y'_i]$ denoting an indicator function. Note that as training proceeds, the ViL predictions become less reliable and useful, whilst the negative effect of $\mathbf{e}_{VI}$ grows in a relative sense. This means our proposed denoising becomes more important across adaptation. We provide the model training procedure in Appendix B.

## 4 EXPERIMENTS

**Datasets** We evaluate on four widely used domain adaptation benchmarks. Among them, **Office-31** (Saenko et al., 2010) and **Office-Home** (Venkateswara et al., 2017) are small-scale and medium-scale datasets, respectively, whilst **VisDA** (Peng et al., 2017) and **DomainNet-126** (Saito et al., 2019) are both challenging large-scale datasets. Their details are provided in Appendix C.

**Settings** We consider a variety of SFDA settings: (1) closed-set, (2) partial-set (initialized in SHOT (Liang et al., 2020)), (3) open-set (initialized in SHOT (Liang et al., 2020)), (4) generalized SFDA (Yang et al., 2021b), (5) multi-target (SF-MTDA, detailed in (Kumar et al., 2023)), (6) multi-source (SF-MSDA, detailed in (Ahmed et al., 2021)), and (7) test-time adaptation (TTA) (Wang et al., 2021a). More details are given in Appendix D.

### 4.1 COMPETITORS

To evaluate ProDe, we select 30 related methods for comparison, divided into four groups. _(1) The first_ includes 2 base models involved in the SFDA problem: the source model (termed Source) and CLIP zero-shot (termed CLIP) (Radford et al., 2021). _(2) The second_ includes 7 current state-of-the-art domain adaptation methods with a ViL model (adopting CLIP in practice), covering UDA and SFDA settings: DAPL-R (Ge et al., 2022), PADCLIP-R (Lai et al., 2023), ADCLIP-R (Singha et al., 2023), PDA-R (Bai et al., 2024), DAMP-R (Du et al., 2024), DIFO-R (Tang et al., 2024c) and DIFO-V (Tang et al., 2024c).
# RB-MODULATION: TRAINING-FREE STYLIZATION USING REFERENCE-BASED MODULATION

**Litu Rout**¹²* **Yujia Chen**¹ **Nataniel Ruiz**¹ **Abhishek Kumar**³ **Constantine Caramanis**² **Sanjay Shakkottai**² **Wen-Sheng Chu**¹
¹ Google, ² UT Austin, ³ Google DeepMind
{litu.rout,constantine,sanjay.shakkottai}@utexas.edu, {liturout,yujiachen,natanielruiz,abhishk,wschu}@google.com
*This work was done during an internship at Google.

## ABSTRACT

We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution for training-free personalization of diffusion models. Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted content leakage from reference style images, and (c) effective composition of style and content. RB-Modulation is built on a novel stochastic optimal controller where a style descriptor encodes the desired attributes through a terminal cost. The resulting drift not only overcomes the difficulties above, but also ensures high fidelity to the reference style and adheres to the given text prompt. We also introduce a cross-attention-based feature aggregation scheme that allows RB-Modulation to decouple content and style from the reference image. With theoretical justification and empirical evidence, our test-time optimization framework demonstrates precise extraction and control of _content_ and _style_ in a training-free manner. Further, our method allows a seamless composition of content and style, which marks a departure from the dependency on external adapters or ControlNets. See the project page [https://rb-modulation.github.io/](https://rb-modulation.github.io/) for code and further details.

## 1 INTRODUCTION

Text-to-image (T2I) generative models (Ramesh et al., 2021; Rombach et al., 2022; Saharia et al., 2022) have excelled in crafting visually appealing images from text prompts. These T2I models are increasingly employed in creative endeavors such as visual arts (Xu et al., 2024), gaming (Pearce et al., 2023), personalized image synthesis (Ruiz et al., 2023; Huang et al., 2024a; Hu et al., 2021; Shah et al., 2023), stylized rendering (Sohn et al., 2023; Hertz et al., 2023; Wang et al., 2024a; Jeong et al., 2024), and image inversion or editing (Ulyanov et al., 2018; Delbracio & Milanfar, 2023; Rout et al., 2023b; 2024; Mokady et al., 2023).

Content creators often need precise control over both the _content_ and the _style_ of generated images to match their vision. While the content of an image can be conveyed through text, articulating an artist's unique style, characterized by distinct brushstrokes, color palette, material, and texture, is substantially more nuanced. This has led to research on personalization through visual prompting (Sohn et al., 2023; Hertz et al., 2023; Wang et al., 2024a). Recent studies have focused on finetuning pre-trained T2I models to learn style from a set of reference images (Gal et al., 2022; Ruiz et al., 2023; Sohn et al., 2023; Hu et al., 2021). This involves optimizing the model's text embeddings, model weights, or both, using the denoising diffusion loss. However, these methods demand substantial computational resources for training or finetuning large-scale foundation models, thus making them expensive to adapt to new, unseen styles. Furthermore, these methods often depend on human-curated images of the same style, which is less practical and can compromise quality when only a single reference image is available.
In training-free **stylization**, recent methods (Hertz et al., 2023; Wang et al., 2024a; Jeong et al., 2024) manipulate keys and values within the attention layers using just one reference style image. These methods face challenges in both extracting the style from the reference style image and accurately transferring the style to a target content image. For instance, during the DDIM inversion step (Song et al., 2021a) utilized by StyleAligned (Hertz et al., 2023), fine-grained details tend to be compromised. To mitigate this issue, InstantStyle (Wang et al., 2024a) incorporates features from the reference style image into specific layers of a previously trained IP-Adapter (Ye et al., 2023). However, identifying the exact layer for feature injection in a model is complex and not universally applicable across models. Also, feature injection can cause content leakage from the style image into the generated content. Moving on to content-style **composition**, InstantStyle (Wang et al., 2024a) employs a ControlNet (Zhang et al., 2023) (an additionally trained network) to preserve image layout, which inadvertently limits its diversity.

Figure 1: Given a single reference image (rounded rectangle), our method **RB-Modulation** offers a plug-and-play solution for (a) stylization, and (b) content-style composition with various prompts while maintaining sample diversity and prompt alignment. For instance, given a reference style image (e.g., "melting golden 3d rendering style") and content image (e.g., "a dog"), our method adheres to the desired prompts without leaking contents (e.g., flower) from the reference style image and without being restricted to the fixed pose or layout of the reference dog image.

We introduce Reference-Based Modulation (RB-Modulation), a novel approach for stylization and composition that eliminates the need for training or finetuning diffusion models (_e.g._, ControlNet (Zhang et al., 2023) or adapters (Ye et al., 2023; Hu et al., 2021)). Our work reveals that the reverse dynamics in diffusion models can be formulated as a stochastic optimal control problem. By incorporating style features into the controller's terminal cost, we modulate the drift field of the diffusion model's reverse dynamics, enabling training-free personalization. Unlike conventional attention processors that often leak content from the reference style image, we propose to enhance image fidelity via an Attention Feature Aggregation (AFA) module that decouples content from the reference style image. We demonstrate the effectiveness of our method in stylization (Hertz et al., 2023; Wang et al., 2024a; Jeong et al., 2024) and style+content composition, as illustrated in Figure 1(a) and (b), respectively. Our experiments show that RB-Modulation outperforms current SoTA methods (Hertz et al., 2023; Wang et al., 2024a) in terms of human preference and prompt-alignment metrics.

**Our contributions are summarized as follows:**

- We present Reference-Based Modulation (RB-Modulation), a novel stochastic optimal control based test-time optimization framework that enables training-free, personalized style and content control, with a new Attention Feature Aggregation (AFA) module to maintain high fidelity to the reference image while adhering to the given prompt (§4).
- We provide theoretical justifications connecting optimal control and reverse diffusion dynamics.
We leverage this connection to incorporate desired attributes (_e.g._, style) in our controller's terminal cost and personalize T2I models in a training-free manner (§5).
- We perform extensive experiments covering stylization and content-style composition, demonstrating superior performance over SoTA methods in human preference metrics (§6).

## 2 RELATED WORK

**Personalization of T2I models:** T2I generative models (Rombach et al., 2022; Podell et al., 2023; Pernias et al., 2024) can now generate high-quality images from text prompts. Their text-following ability has unlocked new avenues in personalized content creation, including text-guided image editing (Mokady et al., 2023; Rout et al., 2024), solving inverse problems (Rout et al., 2023b; 2024), concept-driven generation (Ruiz et al., 2023; Tewel et al., 2023; Kumari et al., 2023; Chen et al., 2024), personalized outpainting (Tang et al., 2023), identity preservation (Ruiz et al., 2024; Huang et al., 2024a; Wang et al., 2024b), and stylized synthesis (Sohn et al., 2023; Wang et al., 2024a; Hertz et al., 2023; Shah et al., 2023). To tailor T2I models to a specific style (_e.g._, painting) or content (_e.g._, object), existing methods follow one of two recipes: (1) full finetuning (FT) or parameter-efficient finetuning (PEFT), and (2) training-free methods, which we discuss below.

**Finetuning T2I models for personalization:** FT (Ruiz et al., 2023; Everaert et al., 2023) and PEFT (Kumari et al., 2023; Hu et al., 2021; Sohn et al., 2023; Shah et al., 2023) methods excel at capturing style or object details when the underlying T2I model can be finetuned on a few (typically 4) reference images for a few thousand iterations. PARASOL (Tarrés et al., 2024) requires supervised data via a cross-modal search to train both the denoising U-Net and a projector network. Diff-NST (Ruta et al., 2023) trains the attention processor by targeting the 'V' values within the denoising U-Net. The curation of supervised data and resource-intensive finetuning for every style or content make these methods challenging for practical usage.

**Training-free methods for personalization:** Training-free personalization methods are preferable to finetuning methods given their vastly faster execution time. In **StyleAligned** (Hertz et al., 2023), a reference style image and a text prompt describing the style are used to extract style features via DDIM inversion (Song et al., 2021a). Target queries and keys are then normalized using adaptive instance normalization (Huang & Belongie, 2017) based on their reference counterparts. Finally, reference image keys and values are merged with DDIM-inverted latents in self-attention layers, which tends to leak content information from the reference style image (Figure 2). Moreover, the need for a textual description in the DDIM inversion step can degrade its performance. **DiffusionDisentanglement** (Wu et al., 2023) aims to reduce the approximation error in DDIM inversion by jointly minimizing a perceptual loss and a directional CLIP loss, which is prone to content leakage (Wang et al., 2024a). **Swapping Self-Attention (SSA)** (Jeong et al., 2024) addresses these limitations by replacing the target keys and values in self-attention layers with those from a reference style image. It still relies on DDIM inversion to cache keys and values of the reference style, which tends to compromise fine-grained details (Wang et al., 2024a).
Both StyleAligned (Hertz et al., 2023) and SSA (Jeong et al., 2024) require two reverse processes to share their attention layer features and thus demand significant memory. **InstantStyle** (Wang et al., 2024a) injects reference style features into specific cross-attention layers of IP-Adapter (Ye et al., 2023), addressing two key limitations: DDIM inversion and memory-intensive reverse processes. However, pinpointing the exact layers for feature injection is complex and may not generalize to other models. In addition, when composing style and content, InstantStyle (Wang et al., 2024a) relies on ControlNet (Zhang et al., 2023), which can limit the diversity of generated images to fixed layouts and deviate from the prompt.

**Optimal Control:** Stochastic optimal control finds wide applications in diverse fields such as molecular dynamics (Holdijk et al., 2024), economics (Fleming & Rishel, 2012), non-convex optimization (Chaudhari et al., 2018), robotics (Theodorou et al., 2011), and mean-field games (Carmona et al., 2018). Despite its extensive use, and recent works on its connections to diffusion-based generative models (Berner et al., 2024; Tzen & Raginsky, 2019; Chen et al., 2023), it has been less explored in training-free personalization. In this paper, we introduce a novel test-time optimization framework leveraging the main concepts from optimal control to achieve training-free personalization. A key aspect of optimal control is designing a controller to guide a stochastic process towards a desired terminal condition (Fleming & Rishel, 2012). This aligns with our goal of training-free personalization, as we target a specific style or content at the end of the reverse diffusion process, which can be incorporated in the controller's terminal condition.

RB-Modulation overcomes several challenges encountered by SoTA methods (Hertz et al., 2023; Jeong et al., 2024; Wang et al., 2024a). Since RB-Modulation does not require DDIM inversion, it retains fine-grained details, unlike StyleAligned (Hertz et al., 2023). Using a stochastic controller to refine the trajectory of a single reverse process, it overcomes the limitation of coupled reverse processes (Hertz et al., 2023). By incorporating a style descriptor in our controller's terminal cost, it eliminates the dependency on Adapters (Ye et al., 2023; Hu et al., 2021) or ControlNets (Zhang et al., 2023) used by InstantStyle (Wang et al., 2024a).

3 PRELIMINARIES

**Diffusion models** consist of two stochastic processes: (a) the _noising process_, modeled by a Stochastic Differential Equation (SDE) known as the forward-SDE, $\mathrm{d}X_t = f(X_t, t)\,\mathrm{d}t + g(X_t, t)\,\mathrm{d}W_t$, $X_0 \sim p_0$; and (b) the _denoising process_, modeled by the time-reversal of the forward-SDE under mild regularity conditions (Anderson, 1982), also known as the reverse-SDE:

$$\mathrm{d}X_t = \big[f(X_t, t) - g^2(X_t, t)\nabla \log p(X_t, t)\big]\,\mathrm{d}t + g(X_t, t)\,\mathrm{d}W_t, \quad X_1 \sim \mathcal{N}(0, I_d). \qquad (1)$$

Here, $W = (W_t)_{t \ge 0}$ is standard Brownian motion in a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P)$, $p(\cdot, t)$ denotes the marginal density at time $t$, and $\nabla \log p(\cdot, t)$ is the corresponding score function. $f(X_t, t)$ and $g(X_t, t)$ are called the drift and volatility, respectively. A popular choice of $f(X_t, t) = -X_t$ and $g(X_t, t) = \sqrt{2}$ corresponds to the well-known forward Ornstein–Uhlenbeck (OU) process.
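To make the two processes concrete, the following minimal sketch (ours, not from the paper) simulates the forward OU SDE and the reverse-SDE (1) with Euler–Maruyama. Because an OU process started from a Gaussian keeps Gaussian marginals, the score $\nabla \log p(x, t)$ is available in closed form, so no score network is needed; all sizes and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, T, s0 = 2, 5000, 1000, 2.0           # dimension, samples, steps, init std
dt = 1.0 / T

def var(t):                                 # marginal variance of the OU process
    return np.exp(-2 * t) * s0**2 + (1 - np.exp(-2 * t))

def score(x, t):                            # closed-form score for Gaussian marginals
    return -x / var(t)

# Forward OU: dX = -X dt + sqrt(2) dW, with X_0 ~ N(0, s0^2 I)
x = s0 * rng.standard_normal((n, d))
for _ in range(T):
    x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal((n, d))

# Reverse-SDE (1), integrated from t = 1 down to 0, started from N(0, I)
y = rng.standard_normal((n, d))
for k in range(T, 0, -1):
    t = k * dt
    drift = -y - 2 * score(y, t)            # f - g^2 * score, with f = -x, g = sqrt(2)
    y += -drift * dt + np.sqrt(2 * dt) * rng.standard_normal((n, d))

# Forward end should look ~N(0, I); reverse end should roughly recover std s0
print("forward end std ~", x.std(), " reverse end std ~", y.std())
```

The reverse run only approximately recovers the data distribution here because it is initialized at $\mathcal{N}(0, I)$ rather than the true marginal at $t = 1$, which is the same approximation made in practice.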
For T2I generation, the reverse-SDE (1) is simulated using a neural network $s(\mathbf{x}_t, t; \theta)$ (Hyvärinen & Dayan, 2005; Vincent, 2011) to approximate $\nabla_{\mathbf{x}} \log p(\mathbf{x}_t, t)$. Importantly, to accelerate the sampling process in practice (Song et al., 2021a; Karras et al., 2022; Zhang & Chen, 2022), the reverse-SDE (1) shares the same path measure with a probability flow ODE:

$$\mathrm{d}X_t = \Big[f(X_t, t) - \tfrac{1}{2} g^2(X_t, t) \nabla \log p(X_t, t)\Big]\,\mathrm{d}t, \quad X_1 \sim \mathcal{N}(0, I_d).$$

**Personalized diffusion models** either fully finetune $\theta$ of $s(\mathbf{x}_t, t; \theta)$ (Ruiz et al., 2023; Everaert et al., 2023), or train a parameter-efficient adapter $\Delta\theta$ for $s(\mathbf{x}_t, t; \theta + \Delta\theta)$ on reference style images (Hu et al., 2021; Sohn et al., 2023; Shah et al., 2023). Our method does not finetune $\theta$ or train $\Delta\theta$. Instead, we derive a new drift field through a stochastic control that _modulates_ the reverse-SDE (1).

4 METHOD

**Personalization using optimal control:** Normalize time $t$ by the total number of diffusion steps $T$ such that $0 \le t \le 1$. Let us denote by $u : \mathbb{R}^d \times [0, 1] \to \mathbb{R}^d$ a controller from the admissible set of controls $\mathcal{U} \subseteq \mathbb{R}^d$, $X_t^u \in \mathbb{R}^d$ a state variable, $\ell : \mathbb{R}^d \times \mathbb{R}^d \times [0, 1] \to \mathbb{R}$ the transient cost, and $h : \mathbb{R}^d \to \mathbb{R}$ the terminal cost of the reverse process $(X_t^u)_{t=1}^{0}$. We show in _§_5 that training-free personalization can be formulated as a control problem where the drift of the standard reverse-SDE (1) is modified via RB-Modulation:

$$\min_{u \in \mathcal{U}} \ \mathbb{E}\Big[\int_0^1 \ell\big(X_t^u, u(X_t^u, t), t\big)\,\mathrm{d}t + \gamma\, h(X_0^u)\Big], \quad \text{where} \qquad (2)$$

$$\mathrm{d}X_t^u = \big[f(X_t^u, t) - g^2(X_t^u, t) \nabla \log p(X_t^u, t) + u(X_t^u, t)\big]\,\mathrm{d}t + g(X_t^u, t)\,\mathrm{d}W_t, \quad X_1^u \sim \mathcal{N}(0, I_d).$$

Importantly, the terminal cost $h(\cdot)$, weighted by $\gamma$, captures the discrepancy in feature space between the styles of the reference image and the generated image. The resulting controller $u(\cdot, t)$ modulates the drift over time to satisfy this terminal cost. We derive the solution to this optimal control problem through the Hamilton-Jacobi-Bellman (HJB) equation (Fleming & Rishel, 2012); refer to Appendix A for details. Our proposed RB-Modulation **Algorithm 1** has two key components: (a) a stochastic optimal controller and (b) attention feature aggregation. Below, we discuss each in turn.

**(a) Stochastic Optimal Controller (SOC):** We show that the reverse dynamics in diffusion models can be framed as a stochastic optimal control problem with a quadratic terminal cost (theoretical analysis in _§_5). For personalization using a reference style image $X_0^f = \mathbf{z}_0$, we use a Contrastive Style Descriptor (CSD) (Somepalli et al., 2024) to extract style features $\Psi(X_0^f)$. Since the score functions $s(\mathbf{x}_t, t; \theta) \approx \nabla \log p(X_t, t)$ are available from pre-trained diffusion models (Podell et al., 2023; Pernias et al., 2024), our goal is to add a correction term $u(\cdot, t)$ to modulate the reverse-SDE and minimize the overall cost (2).
We approximate $X_0^u$ with its conditional expectation using Tweedie's formula (Efron, 2011; Rout et al., 2023b; 2024). Finally, we incorporate the style features into our controller's terminal cost as

$$h(X_0^u) = \big\|\Psi(X_0^f) - \Psi\big(\mathbb{E}[X_0^u \mid X_t^u]\big)\big\|_2^2.$$

Our theoretical results (_§_5) suggest that the optimal controller can be obtained by solving the HJB equation and letting $\gamma \to \infty$. In practice, this translates to dropping the transient cost $\ell(X_t^u, u(X_t^u, t), t)$ and solving (2) with only the terminal constraint, _i.e._,

$$\min_{u \in \mathcal{U}} \big\|\Psi(X_0^f) - \Psi\big(\mathbb{E}[X_0^u \mid X_t^u]\big)\big\|_2^2. \qquad (3)$$

Thus, we solve (3) to find the optimal control $u$ and use this controller in the reverse dynamics (2) to update the current state from $X_t^u$ to $X_{t-\Delta t}^u$ (recall that time flows backwards in the reverse-SDE (1)). Our implementation of (3) is given in **Algorithm 1**, which follows from our theoretical insights.

**Implementation challenge:** For smaller models (Rombach et al., 2022), we can directly solve our control problem (3). However, for larger models (Podell et al., 2023; Pernias et al., 2024), the control objective (3) requires backpropagation through the score network, with potentially billions of parameters. This significantly increases time and memory complexity (Rout et al., 2023b; 2024). We propose a test-time proximal gradient descent approach to address this challenge. The key ingredient of our **Algorithm 1** is to find the previous state $X_{t-\Delta t}$ by modulating the current state $X_t$ based on an optimal controller $u^*$. The optimal controller $u^*$ is obtained by minimizing the discrepancy in style between $\bar{X}_0^u := \mathbb{E}[X_0^u \mid X_t^u = \mathbf{x}_t]$, obtained using our controlled reverse-SDE (3), and the reference style image $\mathbf{z}_0$. Motivated by this interpretation, an alternate **Algorithm 2** avoids backpropagation through $\bar{X}_0^u$ in the terminal cost. We introduce a dummy variable $\mathbf{x}_0$, which serves as a proxy for the estimate obtained from $s(\mathbf{x}_t, t; \theta)$: instead of forcing $\mathbf{x}_0$ to be decided by the dynamics of the reverse-SDE as in **Algorithm 1**, we allow it to be only approximately faithful to the dynamics. This is implemented by adding a proximal penalty, _i.e._,

$$\mathbf{x}_0^* = \arg\min_{\mathbf{x}_0 \in \mathbb{R}^d} \big\|\Psi(X_0^f) - \Psi(\mathbf{x}_0)\big\|_2^2 + \lambda \big\|\mathbf{x}_0 - \mathbb{E}[X_0^u \mid X_t^u]\big\|_2^2,$$

where the hyper-parameter $\lambda$ controls the faithfulness to the reverse dynamics. This penalty assumes that with a small step size in (3), $\mathbf{x}_0^*$ and $\mathbb{E}[X_0^u \mid X_t^u = \mathbf{x}_t]$ will be close. Thus, **Algorithm 2** enables personalization of large-scale foundation models, _matching the speed of training-free methods and obtaining a 5-20X speedup over training-based methods_; see Table 4 in Appendix B.2 for details. While prior works (Chung et al., 2023; Zhu et al., 2023; He et al., 2024) have used a proximal sampler in related settings, their underlying generative model is not personalized. We believe that this is an important reason why our method results in a significant speedup while satisfying the terminal constraints.
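To make the proximal step concrete, here is a minimal sketch of an Algorithm 2-style update under stated assumptions: `psi` is a toy stand-in for the CSD feature extractor $\Psi$, and `x0_bar` is assumed to be the Tweedie estimate $\mathbb{E}[X_0^u \mid X_t^u]$ supplied by the diffusion model. None of these names come from the authors' code; this only illustrates the proximal objective above.

```python
import torch

def proximal_style_step(x0_bar: torch.Tensor,
                        style_feat_ref: torch.Tensor,
                        psi,                       # feature extractor (stand-in for CSD)
                        lam: float = 10.0,
                        n_iters: int = 5,
                        lr: float = 0.1) -> torch.Tensor:
    """Find x0 close to the Tweedie estimate x0_bar whose style features
    match the reference (the proximal objective in the text)."""
    x0 = x0_bar.clone().requires_grad_(True)
    opt = torch.optim.Adam([x0], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((psi(x0) - style_feat_ref) ** 2).sum() \
               + lam * ((x0 - x0_bar.detach()) ** 2).sum()
        loss.backward()
        opt.step()
    return x0.detach()

# Toy usage: a random linear "feature extractor" stands in for Psi.
torch.manual_seed(0)
psi = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 16))
x0_bar = torch.randn(1, 3, 8, 8)             # Tweedie estimate E[X0 | Xt] (assumed given)
style_ref = psi(torch.randn(1, 3, 8, 8)).detach()   # Psi(X0^f) of a reference style image
x0_star = proximal_style_step(x0_bar, style_ref, psi)
```

The key design point is that gradients flow only through the cheap proxy `x0`, never through the score network, which is where the reported speedup comes from.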
Our paper takes the first step in personalizing the underlying generative model via a novel attention processor, as discussed below.

**(b) Attention Feature Aggregation (AFA):** Let $d$ denote the dimension of the latent variable $X_t$, $n_q$ the embedding dimension for the query $Q$, and $n_h$ the output dimension of the hidden layer. Transformer-based diffusion models (Rombach et al., 2022; Podell et al., 2023; Pernias et al., 2024) consist of self-attention and cross-attention layers operating on the latent embedding $\mathbf{x}_t \in \mathbb{R}^{d \times n_h}$. Within the attention module $\mathrm{Attention}(Q, K, V)$, $\mathbf{x}_t$ is projected into queries $Q \in \mathbb{R}^{d \times n_q}$, keys $K \in \mathbb{R}^{d \times n_q}$, and values $V \in \mathbb{R}^{d \times n_h}$ using linear projections. Through $Q$, $K$, and $V$, attention layers capture global context and improve long-range dependencies within $\mathbf{x}_t$.

To incorporate a reference image (_e.g._, style or content) while retaining alignment with the prompt, we introduce the Attention Feature Aggregation (AFA) module. Given a prompt $\mathbf{p}$, a reference style image $I_s$, and a reference content image $I_c$, we first extract the embeddings using the CLIP text encoder (Radford et al., 2021) and the CSD image encoder (Somepalli et al., 2024). These embeddings are projected into keys and values using linear projection. We denote by $K_p$ and $V_p$ the keys and values from $\mathbf{p}$, $K_s$ and $V_s$ those from $I_s$, and $K_c$ and $V_c$ those from $I_c$ (used only in content-style composition). The query $Q$, derived from a linear projection of $\mathbf{x}_t$, remains consistent in the AFA module. To maintain consistency between text and style, we compose the keys and values of both text and style in our attention mechanism. The final output of the AFA module is given by

$$\mathrm{AFA} = \mathrm{Avg}(A_{\mathrm{text}}, A_{\mathrm{style}}, A_{\mathrm{text+style}}), \quad A_{\mathrm{text}} = \mathrm{Attention}(Q, [K; K_p], [V; V_p]),$$
$$A_{\mathrm{style}} = \mathrm{Attention}(Q, [K; K_s], [V; V_s]), \quad A_{\mathrm{text+style}} = \mathrm{Attention}(Q, [K; K_p; K_s], [V; V_p; V_s]),$$

where $[K; K_p] \in \mathbb{R}^{2d \times n_q}$ indicates concatenation of $K$ with $K_p$ along the token dimension. For style-content composition, we process the content image $I_c$ in the same way as the reference style image $I_s$ and obtain another set of attention outputs:

$$\mathrm{AFA} = \mathrm{Avg}(A_{\mathrm{text}}, A_{\mathrm{style}}, A_{\mathrm{content}}, A_{\mathrm{content+style}}),$$
$$A_{\mathrm{content}} = \mathrm{Attention}(Q, [K; K_c], [V; V_c]), \quad A_{\mathrm{content+style}} = \mathrm{Attention}(Q, [K; K_s; K_c], [V; V_s; V_c]).$$

Importantly, the AFA module is computationally tractable as it only requires the computation of a multi-head attention, which is widely used in practice (Podell et al., 2023).

**Disentangling content and style.** In stylization (content described by text; style illustrated by a reference style image), prior works (Hertz et al., 2023; Wang et al., 2024a) inject the entire reference style image $I_s$, which does not disentangle content and style. However, our AFA module injects
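A minimal single-head sketch of the AFA equations above (ours, not the authors' code; real models use multi-head attention with learned projections, and all shapes below are toy sizes):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q: (tokens_q, n_q), k: (tokens_kv, n_q), v: (tokens_kv, n_h)
    w = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

def afa_stylization(q, k, v, kp, vp, ks, vs):
    """Attention Feature Aggregation for stylization, per the AFA equations:
    concatenate prompt/style keys and values along the token dimension,
    then average the three attention outputs."""
    a_text       = attention(q, torch.cat([k, kp]), torch.cat([v, vp]))
    a_style      = attention(q, torch.cat([k, ks]), torch.cat([v, vs]))
    a_text_style = attention(q, torch.cat([k, kp, ks]), torch.cat([v, vp, vs]))
    return (a_text + a_style + a_text_style) / 3.0   # Avg(...)

d, n_q, n_h = 16, 8, 8
q      = torch.randn(d, n_q)
k, v   = torch.randn(d, n_q), torch.randn(d, n_h)
kp, vp = torch.randn(4, n_q), torch.randn(4, n_h)    # from prompt embeddings
ks, vs = torch.randn(4, n_q), torch.randn(4, n_h)    # from reference style features
out = afa_stylization(q, k, v, kp, vp, ks, vs)       # (d, n_h)
```

The composition variant follows the same pattern with an extra content branch, averaging four attention outputs instead of three.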
# PREDICTION RISK AND ESTIMATION RISK OF THE RIDGELESS LEAST SQUARES ESTIMATOR UNDER GENERAL ASSUMPTIONS ON REGRESSION ERRORS

**Sungyoon Lee** Department of Computer Science, Hanyang University, sungyoonlee@hanyang.ac.kr

**Sokbae Lee** Department of Economics, Columbia University, sl3841@columbia.edu

ABSTRACT

In recent years, there has been a significant growth in research focusing on minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators. However, the majority of these analyses have been limited to an unrealistic regression error structure, assuming independent and identically distributed errors with zero mean and common variance. In this paper, we explore prediction risk as well as estimation risk under more general regression error assumptions, highlighting the benefits of overparameterization in a more realistic setting that allows for clustered or serial dependence. Notably, we establish that the estimation difficulties associated with the variance components of both risks can be summarized through the trace of the variance-covariance matrix of the regression errors. Our findings suggest that the benefits of overparameterization can be extended to time series, panel, and grouped data.

1 INTRODUCTION

Recent years have witnessed a fast-growing body of work that analyzes minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators (see, e.g., Bartlett et al., 2020; Hastie et al., 2022; Tsigler & Bartlett, 2023, and references therein). Researchers in this field were inspired by the ability of deep neural networks to accurately predict noisy training data with perfect fits, a phenomenon known as "double descent" or "benign overfitting" (e.g., Belkin et al., 2018; 2019; 2020; Zou et al., 2021; Mei & Montanari, 2022, among many others). They discovered that to achieve this phenomenon, overparameterization is critical. In the setting of linear regression, we have the training data $\{(x_i, y_i) \in \mathbb{R}^p \times \mathbb{R} : i = 1, \cdots, n\}$, where the outcome variable $y_i$ is generated from

$$y_i = x_i^\top \beta + \varepsilon_i, \quad i = 1, \dots, n,$$

$x_i$ is a vector of features (or regressors), $\beta$ is a vector of unknown parameters, and $\varepsilon_i$ is a regression error. Here, $n$ is the sample size of the training data and $p$ is the dimension of the parameter vector $\beta$. In the literature, the theoretical analyses have mainly focused on the out-of-sample prediction risk. That is, for the ridge or interpolation estimator $\hat\beta$, the literature has focused on

$$\mathbb{E}\big[(x_0^\top \hat\beta - x_0^\top \beta)^2 \mid x_1, \dots, x_n\big],$$

where $x_0$ is a test observation that is identically distributed as $x_i$ but independent of the training data. For example, Dobriban & Wager (2018); Wu & Xu (2020); Richards et al. (2021); Hastie et al. (2022) analyzed the predictive risk of ridge(less) regression and obtained exact asymptotic expressions under the assumption that $p/n$ converges to some constant as both $p$ and $n$ go to infinity. Overall, they found the double descent behavior of the ridgeless least squares estimator in terms of the prediction risk. Bartlett et al. (2020); Kobak et al. (2020); Tsigler & Bartlett (2023) characterized the phenomenon of benign overfitting in a different setting.
To the best of our knowledge, the vast majority of the theoretical analyses have been confined to a simple data generating process, namely, the observations are independent and identically distributed (i.i.d.), and the regression errors have mean zero, have a common variance, and are independent of the feature vectors. That is,

$$(y_i, x_i^\top)^\top \sim \text{i.i.d. with } \mathbb{E}[\varepsilon_i] = 0,\ \mathbb{E}[\varepsilon_i^2] = \sigma^2 < \infty,\ \text{and } \varepsilon_i \text{ independent of } x_i. \qquad (1)$$

This assumption, although convenient, is likely to be unrealistic in various real-world examples. For instance, Liao et al. (2023) adopted high-dimensional linear models to examine the double descent phenomenon in economic forecasts. In their applications, the outcome variables include S&P firms' earnings, the U.S. equity premium, the U.S. unemployment rate, and countries' GDP growth rates. As in their applications, economic forecasts are associated with time series or panel data. As a result, it is improbable that (1) holds in these applications. As another example, Spiess et al. (2023) examined the performance of high-dimensional synthetic control estimators with many control units. The outcome variable in their application is the state-level smoking rate in the Abadie et al. (2010) dataset. Considering the geographical aspects of the U.S. states, it is unlikely that the regression errors underlying the synthetic control estimators adhere to (1). In short, it is desirable to go beyond the simple but unrealistic regression error assumption given in (1).

[Figure 1: Comparison of in-sample and out-of-sample mean squared error (MSE) across various degrees of clustered noise; train and test error curves for $c \in \{0, 1/4, 2/4, 3/4\}$. The vertical line indicates $p = n$ ($= 1{,}415$).]

For further motivation, we start with our own real-data example from the American Community Survey (ACS) 2018, extracted from IPUMS USA (Ruggles et al., 2024). The ACS is an ongoing annual survey by the US Census Bureau that provides key information about the US population. To have a relatively homogeneous population, the sample extract is restricted to white males residing in California with at least a bachelor's degree. We consider demographic groups defined by age, the type of degree, and the field of degree. Then, we compute the average of log hourly wages for each age-degree-field group, treat each group average as the outcome variable, and predict group wages by various group-level regression models where the regressors are constructed using the indicator variables of age, degree, and field as well as their interactions. We consider 7 specifications ranging from 209 to 2,182 regressors. To understand the role of non-i.i.d. regression errors, we add artificial noise to the training sample. See Appendix A for details on how this noise is generated. In the experiment, the constant $c$ varies such that $c = 0$ corresponds to no clustered dependence across observations; as a positive $c$ gets larger, the noise has a larger share of clustered errors, but the variance of the overall regression errors remains the same regardless of the value of $c$. Figure 1 shows the in-sample (train) vs. out-of-sample (test) mean squared error (MSE) for various values of $c \in \{0, 0.25, 0.5, 0.75\}$.
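The exact noise construction is deferred to the paper's Appendix A (not shown here). As a stand-in consistent with the description above (clustered share $c$, overall variance held fixed), the following sketch illustrates the experiment in miniature with synthetic data; all sizes, names, and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_noise(groups: np.ndarray, c: float, sigma: float = 1.0) -> np.ndarray:
    """Noise with clustered share c and overall variance sigma^2 regardless of c
    (our stand-in for the paper's Appendix A construction)."""
    common = rng.standard_normal(groups.max() + 1)[groups]  # shared within a cluster
    idio = rng.standard_normal(len(groups))                 # idiosyncratic part
    return sigma * (np.sqrt(c) * common + np.sqrt(1 - c) * idio)

# Ridgeless (minimum-norm) fit and train/test errors for several values of c
n, p, G = 200, 800, 20
groups = rng.integers(0, G, size=n)
X, X_test = rng.standard_normal((n, p)), rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
for c in (0.0, 0.25, 0.5, 0.75):
    y = X @ beta + clustered_noise(groups, c)
    beta_hat = np.linalg.pinv(X) @ y                        # X^dagger y
    print(c,
          np.mean((X @ beta_hat - y) ** 2),                 # train MSE (~0 when p > n)
          np.mean((X_test @ (beta_hat - beta)) ** 2))       # noiseless test excess MSE
```

Consistent with Figure 1, the test error should be nearly invariant to $c$ when the overall noise variance is held fixed.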
It can be seen that the experimental results are almost identical across different values of $c$, especially when $p > n$, suggesting that the double descent phenomenon might be universal for various degrees of clustered dependence, provided that the overall variance of the regression errors remains the same. It is our main goal to provide a firm foundation for this empirical phenomenon. To do so, we articulate the following research questions:

- How can we analyze the out-of-sample prediction risk of the ridgeless least squares estimator under _general_ assumptions on the regression errors?

- Why does the prediction risk seem _not_ to be affected by the degree of dependence across observations?

To delve into the prediction risk, suppose that $\Sigma := \mathbb{E}[x_0 x_0^\top]$ is finite and positive definite. Then,

$$\mathbb{E}\big[(x_0^\top \hat\beta - x_0^\top \beta)^2 \mid x_1, \dots, x_n\big] = \mathbb{E}\big[(\hat\beta - \beta)^\top \Sigma (\hat\beta - \beta) \mid x_1, \dots, x_n\big].$$

If $\Sigma = I$ (i.e., the case of isotropic features), where $I$ is the identity matrix, the mean squared error of the estimator, defined by $\mathbb{E}[\|\hat\beta - \beta\|^2]$, where $\|\cdot\|$ is the usual Euclidean norm, is the same as the expectation of the prediction risk defined above. However, if $\Sigma \neq I$, the link between the two quantities is less intimate. One may regard the prediction risk as the $\Sigma$-weighted mean squared error of the estimator, whereas $\mathbb{E}[\|\hat\beta - \beta\|^2]$ can be viewed as an "unweighted" version, even if $\Sigma \neq I$. In other words, regardless of the variance-covariance structure of the feature vector, $\mathbb{E}[\|\hat\beta - \beta\|^2]$ treats each component of $\beta$ "equally." The mean squared error of the estimator is arguably one of the most standard criteria for evaluating the quality of an estimator in statistics. For instance, in the celebrated work by James & Stein (1961), the mean squared error criterion is used to show that the sample mean vector is not necessarily optimal even for standard normal vectors (so-called "Stein's paradox"). Many follow-up papers used the same criterion; e.g., Hansen (2016) compared the mean squared errors of ordinary least squares, James–Stein, and Lasso estimators in an underparameterized regime. Both $\Sigma$-weighted and unweighted versions of the mean squared error are interesting objects to study. For example, Dobriban & Wager (2018) called the former "predictive risk" and the latter "estimation risk" in high-dimensional linear models; Berthier et al. (2020) called the former "generalization error" and the latter "reconstruction error" in the context of stochastic gradient descent for the least squares problem using the noiseless linear model. In this paper, we analyze both weighted and unweighted mean squared errors of the ridgeless estimator under general assumptions on the data-generating process, not to mention anisotropic features. Furthermore, our focus is on finite-sample analysis; that is, both $p$ and $n$ are fixed with $p > n$. Although most of the existing papers consider the simple setting in (1), our work is not the first to consider more general regression errors in the overparameterized regime. Chinot et al. (2022) and Chinot & Lerasle (2023) analyzed minimum norm interpolation estimators as well as regularized empirical risk minimizers in linear models without any conditions on the regression errors.
Specifically, Chinot & Lerasle (2023) showed that, with high probability and without assumptions on the regression errors, for the minimum norm interpolation estimator, $(\hat\beta - \beta)^\top \Sigma (\hat\beta - \beta)$ is bounded from above by

$$\Big(\|\beta\|^2 \sum_{i \ge c \cdot n} \lambda_i(\Sigma)\ \vee\ \sum_{i=1}^n \varepsilon_i^2\Big)\big/n,$$

where $c$ is an absolute constant and $\lambda_i(\Sigma)$ are the eigenvalues of $\Sigma$ in descending order. Chinot & Lerasle (2023) also obtained bounds on the estimation error $(\hat\beta - \beta)^\top (\hat\beta - \beta)$. Our work is distinct from and complements these papers in the sense that we allow for a general variance-covariance matrix of the regression errors. The main motivation for not making any assumptions on $\varepsilon_i$ in Chinot et al. (2022) and Chinot & Lerasle (2023) is to allow for potentially adversarial errors. We aim to allow for a general variance-covariance matrix of the regression errors to accommodate time series and clustered data, which are common in applications. See, e.g., Hansen (2022) for a textbook treatment (Chapter 14 for time series and Section 4.21 for clustered data).

The main contribution of this paper is that we provide an _exact finite-sample_ characterization of the variance component of the prediction and estimation risks under the assumptions that $X = [x_1, x_2, \cdots, x_n]^\top$ is _left-spherical_ (e.g., the $x_i$'s can be i.i.d. normal with mean zero, but more general); the $\varepsilon_i$'s _can be correlated and have non-identical variances_; and the $\varepsilon_i$'s are independent of the $x_i$'s. Specifically, the variance term can be factorized into a product of two terms: one depends only on the _trace_ of the variance-covariance matrix, say $\Omega$, of the $\varepsilon_i$'s; the other is solely determined by the distribution of the $x_i$'s. Interestingly, we find that although $\Omega$ may contain non-zero off-diagonal elements, only the trace of $\Omega$ matters, as hinted by Figure 1, and we further demonstrate this finding via numerical experiments. In addition, we obtain exact finite-sample expressions for the bias terms when the regression coefficients follow the random-effects hypothesis (Dobriban & Wager, 2018). Our finite-sample findings offer a distinct viewpoint on the prediction and estimation risks, contrasting with the asymptotic inverse relationship (for optimally chosen ridge estimators) between the predictive and estimation risks uncovered by Dobriban & Wager (2018). Finally, we connect our findings to the existing results on the prediction risk (e.g., Hastie et al., 2022) by considering the asymptotic behavior of the estimation risk. Remarkably, our findings stand in sharp contrast to well-established results in econometrics. In the latter, unlike in our framework, one of the key objectives is to estimate the variance-covariance matrix, denoted by $V_{\mathrm{LS}}$, of the asymptotic distribution of the least squares estimators. In that context, the off-diagonal elements of $\Omega$ _do_ affect $V_{\mathrm{LS}}$, implying that any consistent estimator of $V_{\mathrm{LS}}$ must account for these off-diagonal components.

One limitation of our theoretical analysis is that the design matrix $X$ is assumed to be left-spherical, although this is more general than i.i.d. normal with mean zero. We view this as a convenient assumption, but we also expect our findings to hold at least approximately even if $X$ does not follow a left-spherical distribution. Formally investigating this conjecture is a topic for future research.
2 THE FRAMEWORK UNDER GENERAL ASSUMPTIONS ON REGRESSION ERRORS

We first describe the minimum $\ell_2$ norm (ridgeless) interpolation least squares estimator in the overparameterized case ($p > n$). Our goal is to understand the generalization ability of overparameterized models trained with gradient-based optimization (e.g., gradient descent) (Gunasekar et al., 2017). Define

$$y := [y_1, y_2, \cdots, y_n]^\top \in \mathbb{R}^n, \quad \varepsilon := [\varepsilon_1, \varepsilon_2, \cdots, \varepsilon_n]^\top \in \mathbb{R}^n, \quad X^\top := [x_1, x_2, \cdots, x_n] \in \mathbb{R}^{p \times n},$$

so that $y = X\beta + \varepsilon$. The estimator we consider is

$$\hat\beta := \arg\min_{b \in \mathbb{R}^p} \{\|b\| : Xb = y\} = (X^\top X)^\dagger X^\top y = X^\dagger y,$$

where $A^\dagger$ denotes the Moore–Penrose inverse of a matrix $A$. The main object of interest in this paper is the prediction and estimation risks of $\hat\beta$ under data scenarios in which the regression errors $\varepsilon_i$ may _not_ be i.i.d. Formally, we make the following assumptions.

**Assumption 2.1.** (i) $y = X\beta + \varepsilon$, where $\varepsilon$ is independent of $X$, and $\mathbb{E}[\varepsilon] = 0$. (ii) $\Omega := \mathbb{E}[\varepsilon\varepsilon^\top]$ is finite and positive definite (but not necessarily spherical).

We emphasize that Assumption 2.1 is more general than the standard assumption in the literature on benign overfitting, which typically takes $\Omega \equiv \sigma^2 I$. Assumption 2.1 allows for non-identical variances across the elements of $\varepsilon$ because the diagonal elements of $\Omega$ can differ from each other. Furthermore, it allows for non-zero off-diagonal elements in $\Omega$. It is difficult to assume that the regression errors are independent of each other with time series or clustered data; thus, in these settings, it is important to allow for general $\Omega \neq \sigma^2 I$. Below we present a couple of such examples.

**Example 2.1** (Time Series - AR(1) Errors)**.** Suppose that the regression error follows an autoregressive process:

$$\varepsilon_i = \rho \varepsilon_{i-1} + \eta_i, \qquad (2)$$

where $\rho \in (-1, 1)$ is an autoregressive parameter and $\eta_i$ is independent and identically distributed with mean zero and variance $\sigma^2$ ($0 < \sigma^2 < \infty$) and is independent of $X$. Then, the $(i, j)$ element of $\Omega$ is

$$\Omega_{ij} = \frac{\sigma^2}{1 - \rho^2}\, \rho^{|i-j|}.$$

Note that $\Omega_{ij} \neq 0$ as long as $\rho \neq 0$.

**Example 2.2** (Panel and Grouped Data - Clustered Errors)**.** Suppose that regression errors are mutually independent across clusters but can be arbitrarily correlated within the same cluster. For instance, students in the same school may affect each other and also have the same teachers; thus it would be difficult to assume independence across student test scores within the same school. However, it might be reasonable to assume that student test scores are independent across different schools. For example, assume that (i) if the regression error $\varepsilon_i$ belongs to cluster $g$, where $g = 1, \dots, G$ and $G$ is the number of clusters, then $\mathbb{E}[\varepsilon_i^2] = \sigma_g^2$ for some constant $\sigma_g^2 > 0$ that can vary over $g$; (ii) if the regression errors $\varepsilon_i$ and $\varepsilon_j$ ($i \neq j$) belong to the same cluster $g$, then $\mathbb{E}[\varepsilon_i \varepsilon_j] = \rho_g$ for some constant $\rho_g \neq 0$ that can differ across $g$; and (iii) if the regression errors $\varepsilon_i$ and $\varepsilon_j$ ($i \neq j$) do not belong to the same cluster, then $\mathbb{E}[\varepsilon_i \varepsilon_j] = 0$. Then, $\Omega$ is block diagonal with possibly non-identical blocks.
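For concreteness, the two covariance structures in Examples 2.1 and 2.2 can be constructed as follows (a small sketch; the parameter values in the usage lines are ours):

```python
import numpy as np

def ar1_cov(n: int, rho: float, sigma: float = 1.0) -> np.ndarray:
    """Omega for AR(1) errors (Example 2.1): Omega_ij = sigma^2 rho^{|i-j|} / (1 - rho^2)."""
    idx = np.arange(n)
    return sigma**2 / (1 - rho**2) * rho ** np.abs(idx[:, None] - idx[None, :])

def clustered_cov(sizes, sigmas, rhos) -> np.ndarray:
    """Block-diagonal Omega for clustered errors (Example 2.2): within cluster g,
    variance sigma_g^2 on the diagonal and covariance rho_g off the diagonal."""
    n = sum(sizes)
    O = np.zeros((n, n))
    start = 0
    for m, s, r in zip(sizes, sigmas, rhos):
        O[start:start + m, start:start + m] = np.full((m, m), r) + np.eye(m) * (s**2 - r)
        start += m
    return O

Omega_ar = ar1_cov(6, rho=0.5)
Omega_cl = clustered_cov(sizes=[3, 3], sigmas=[1.0, 2.0], rhos=[0.3, 0.8])
```

Positive definiteness of the clustered blocks requires $\rho_g < \sigma_g^2$ (and suitable lower bounds), which the example values satisfy.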
For a vector $a$ and a square matrix $A$, let $\|a\|_A^2 := a^\top A a$. Conditional on $X$ and given $A$, we define

$$\mathrm{Bias}_A(\hat\beta \mid X) := \big\|\mathbb{E}[\hat\beta \mid X] - \beta\big\|_A \quad \text{and} \quad \mathrm{Var}_A(\hat\beta \mid X) := \mathrm{Tr}\big(\mathrm{Cov}(\hat\beta \mid X)\, A\big),$$

and we write $\mathrm{Var} = \mathrm{Var}_I$ and $\mathrm{Bias} = \mathrm{Bias}_I$ for brevity of notation. The mean squared prediction error for an unseen test observation $x_0$ with positive definite covariance matrix $\Sigma := \mathbb{E}[x_0 x_0^\top]$ (assuming $x_0$ is independent of the training data $X$) and the mean squared estimation error of $\hat\beta$ conditional on $X$ can be written as:

$$R_P(\hat\beta \mid X) := \mathbb{E}\big[(x_0^\top \hat\beta - x_0^\top \beta)^2 \mid X\big] = \big[\mathrm{Bias}_\Sigma(\hat\beta \mid X)\big]^2 + \mathrm{Var}_\Sigma(\hat\beta \mid X),$$
$$R_E(\hat\beta \mid X) := \mathbb{E}\big[\|\hat\beta - \beta\|^2 \mid X\big] = \big[\mathrm{Bias}(\hat\beta \mid X)\big]^2 + \mathrm{Var}(\hat\beta \mid X).$$

In what follows, we obtain exact finite-sample expressions for the prediction and estimation risks:

$$R_P(\hat\beta) := \mathbb{E}_X[R_P(\hat\beta \mid X)] \quad \text{and} \quad R_E(\hat\beta) := \mathbb{E}_X[R_E(\hat\beta \mid X)].$$

We first analyze the variance terms for both risks and then study the bias terms.

3 THE VARIANCE COMPONENTS OF PREDICTION AND ESTIMATION RISKS

3.1 THE VARIANCE COMPONENT OF PREDICTION RISK

We rewrite the variance component of the prediction risk as follows:

$$\mathrm{Var}_\Sigma(\hat\beta \mid X) = \mathrm{Tr}\big(\mathrm{Cov}(\hat\beta \mid X)\,\Sigma\big) = \mathrm{Tr}\big(X^\dagger \Omega X^{\dagger\top} \Sigma\big) = \|S X^\dagger T\|_F^2, \qquad (3)$$

where the positive definite symmetric matrices $S := \Sigma^{1/2}$ and $T := \Omega^{1/2}$ are the square root matrices of the positive definite matrices $\Sigma$ and $\Omega$, respectively. To compute the above Frobenius norm of the matrix $S X^\dagger T$, we need to compute the alignment of the right-singular vectors of $B := S X^\dagger \in \mathbb{R}^{p \times n}$ with the left-eigenvectors of $T \in \mathbb{R}^{n \times n}$. Here, $B$ is a random matrix while $T$ is fixed. Therefore, we need the distribution of the right-singular vectors of the random matrix $B$. Perhaps surprisingly, to compute the _expected_ variance $\mathbb{E}_X[\mathrm{Var}_\Sigma(\hat\beta \mid X)]$, it turns out that we do not need the distribution of the singular vectors if we make a minimal assumption (the _left-spherical symmetry_ of $X$), which is weaker than assuming that $\{x_i\}_{i=1}^n$ is i.i.d. normal with $\mathbb{E}[x_1] = 0$.

**Definition 3.1** (Left-Spherical Symmetry (Dawid, 1977; 1978; 1981; Gupta & Nagar, 1999))**.** A random matrix $Z$, or its distribution, is said to be _left-spherical_ if $OZ$ and $Z$ have the same distribution ($OZ \overset{d}{=} Z$) for any fixed orthogonal matrix $O \in O(n) := \{A \in \mathbb{R}^{n \times n} : AA^\top = A^\top A = I\}$.

**Assumption 3.2.** The design matrix $X$ is left-spherical.

For the isotropic error case ($\Omega = I$), we have $\mathbb{E}_X[\mathrm{Var}_\Sigma(\hat\beta \mid X)] = \mathbb{E}_X[\mathrm{Tr}((X^\top X)^\dagger \Sigma)]$ directly from (3), since $X^\dagger X^{\dagger\top} = (X^\top X)^\dagger$. Moreover, for arbitrary errors, the left-spherical symmetry of $X$ plays a critical role in _factoring out_ the same $\mathbb{E}_X[\mathrm{Tr}((X^\top X)^\dagger \Sigma)]$ and the trace of the variance-covariance matrix of the regression errors, $\mathrm{Tr}(\Omega)$, from the variance after taking the expectation over $X$.
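A quick Monte Carlo sanity check of this factorization (stated formally as Theorem 3.4 below) for Gaussian, hence left-spherical, rows and AR(1) errors; this sketch is ours, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials, rho = 30, 60, 500, 0.6
Sigma = np.diag(np.linspace(0.5, 2.0, p))                  # anisotropic feature covariance
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)  # AR(1) error covariance

L = np.linalg.cholesky(Sigma)
lhs = rhs = 0.0
for _ in range(trials):
    X = rng.standard_normal((n, p)) @ L.T                  # rows i.i.d. N(0, Sigma)
    Xd = np.linalg.pinv(X)
    lhs += np.trace(Xd @ Omega @ Xd.T @ Sigma) / trials    # E_X[Var_Sigma(beta_hat | X)], Eq. (3)
    rhs += np.trace(np.linalg.pinv(X.T @ X) @ Sigma) / trials
print(lhs, np.trace(Omega) / n * rhs)                      # the two numbers should be close
```

Replacing `Omega` by any other matrix with the same trace leaves the left-hand side (in expectation) unchanged, which is exactly the "only the trace matters" phenomenon.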
**Lemma 3.3.** _For a subset $S \subset \mathbb{R}^{m \times m}$ satisfying $C^{-1} \in S$ for all $C \in S$, if the matrix-valued random variables $Z$ and $AZ$ have the same distribution measure $\mu_Z$ for any $A \in S$, then we have_

$$\mathbb{E}_Z[f(Z)] = \mathbb{E}_Z[f(AZ)] = \mathbb{E}_Z\big[\mathbb{E}_{A' \sim \nu}[f(A'Z)]\big]$$

_for any function $f \in L^1(\mu_Z)$ and any probability density function $\nu$ on $S$._

**Theorem 3.4.** _Let Assumptions 2.1 and 3.2 hold. Then, we have_

$$\mathbb{E}_X\big[\mathrm{Var}_\Sigma(\hat\beta \mid X)\big] = \frac{1}{n}\,\mathrm{Tr}(\Omega)\,\mathbb{E}_X\big[\mathrm{Tr}\big((X^\top X)^\dagger \Sigma\big)\big].$$
# ON QUANTIZING NEURAL REPRESENTATION FOR VARIABLE-RATE VIDEO CODING

**Junqi Shi, Zhujia Chen, Hanfei Li, Qi Zhao, Ming Lu**_∗_**, Tong Chen, Zhan Ma**
School of Electronic Science and Engineering, Nanjing University
{junqishi, zhujiachen, hanfeili, qizhao}@smail.nju.edu.cn, {minglu, chentong, mazhan}@nju.edu.cn

ABSTRACT

This work introduces NeuroQuant, a novel post-training quantization (PTQ) approach tailored to non-generalized Implicit Neural Representations for variable-rate Video Coding (INR-VC). Unlike existing methods that require extensive weight retraining for each target bitrate, we hypothesize that variable-rate coding can be achieved by adjusting the quantization parameters (QPs) of pre-trained weights. Our study reveals that traditional quantization methods, which assume inter-layer independence, are ineffective for non-generalized INR-VC models due to significant dependencies across layers. To address this, we redefine variable-rate INR-VC as a mixed-precision quantization problem and establish a theoretical framework for sensitivity criteria aimed at simplified, fine-grained rate control. Additionally, we propose network-wise calibration and channel-wise quantization strategies to minimize quantization-induced errors, arriving at a unified formula for representation-oriented PTQ calibration. Our experimental evaluations demonstrate that NeuroQuant significantly outperforms existing techniques in varying-bitwidth quantization and compression efficiency, accelerating encoding by up to eight times and enabling quantization down to INT2 with minimal reconstruction loss. This work introduces variable-rate INR-VC for the first time and lays a theoretical foundation for future research in rate-distortion optimization, advancing the field of video coding technology. The materials will be available at https://github.com/Eric-qi/NeuroQuant.

1 INTRODUCTION

Implicit Neural Representations (INRs) (Sitzmann et al., 2020; Chen et al., 2021a) have recently introduced a new approach to video coding. They focus on learning a mapping from coordinates, like frame indices, to pixel values, such as colors. This represents a significant departure from the widely used variational autoencoder (VAE)-based frameworks (Lu et al., 2019; Li et al., 2021a; Lu et al., 2024), which rely on generalized models trained on large datasets to create compact representations for various input signals. Instead, INR-based video coding (INR-VC) encodes each video as a unique neural network through end-to-end training, removing the need for extensive datasets. By using specific, non-generalized network weights for each video, INR-VC provides a tailored video coding method that has shown promising results (Chen et al., 2023; Kwan et al., 2024a).

INR-VC typically focuses on two main objectives: 1) **Representation**, where a neural network models the target video with minimized distortion, and 2) **Compression**, where the network's weights are compressed to lower the bitrate. Many prominent methods adopt a consistent precision (quantization bitwidth) for all weights before lossless entropy coding, meaning the video bitrate depends solely on the number of learnable weights. Consequently, independent weight training is needed for each target bitrate, making the process very time-consuming. For example, encoding a 1080p video with 600 frames at a specific bitrate can take up to 10 hours.
To address this inefficiency, we consider how bitrate is managed in a pretrained INR-VC model: it is proportional to the sum of the bitwidths of the individual weights. Inspired by generalized codecs (Sullivan et al., 2012; Li et al., 2023) that adjust quantization parameters (QPs) (Wang & Kwong, 2008) to control bitrate, we pose the hypothesis: _Can variable-rate INR-VC be achieved by modifying the QPs of post-training weights_, thus eliminating the need for repeated model training for each target rate?

_∗_ Corresponding Author

[Figure 1: **Left**: Typical INR-VCs assume a consistent bitwidth and require separate weight training with varying quantities for each target rate. **Right**: The proposed NeuroQuant achieves variable rate by modifying the corresponding QPs, significantly reducing training costs.]

In the context of weight quantization, this can be approached by: 1) allocating quantization bitwidths to match the target bitrate, and 2) calibrating QPs to preserve reconstruction fidelity. However, directly adopting a consistent quantization bitwidth cannot support fine-grained rate control; e.g., only seven options from INT2 to INT8 are available. Additionally, existing mixed-precision quantization methods (Nagel et al., 2021; Chen et al., 2021b), primarily designed for general-purpose neural networks, encounter two key problems when applied to non-generalized INR-VCs. First, mixed-precision algorithms (Dong et al., 2019; 2020; Chen et al., 2021b) typically assume inter-layer independence with tolerable approximation errors. This assumption breaks down in non-generalized INR-VCs, where layers exhibit significant dependencies. Second, popular layer-wise calibration methods [1] (Nagel et al., 2020; Li et al., 2021b) also rely on inter-layer independence and aim at generalizing the network, making them unsuitable for INR-VC. Therefore, a dedicated quantization methodology tailored to variable-rate INR-VC is necessary.

In this work, we explore, for the first time, the post-training quantization (PTQ) of weights in non-generalized INR-VCs. Building on both empirical and theoretical insights, we propose NeuroQuant, a state-of-the-art PTQ approach for INR-VC that enables variable-rate coding without complex retraining. Our contributions tackle key challenges through the following research questions:

1. **How to realize variable bitrate** (Sec. 3.1): We redefine variable-rate coding as a mixed-precision quantization problem. By theoretically demonstrating that the assumption of inter-layer independence (Dong et al., 2020; Guan et al., 2024) does not apply to non-generalized models, we highlight the necessity of incorporating weight perturbation directionality and off-diagonal Hessian information for sensitivity assessment in quantizing INR-VC. Additionally, we introduce the Hessian-vector product to simplify computations by eliminating the need for explicit Hessian calculations.

2. **How to ensure reconstruction quality** (Sec. 3.2): We enhance reconstruction quality by calibrating the QPs on the corresponding video-specific weights. Through second-order analysis, we derive a unified formula for MSE-oriented calibration across varying granularities.
By considering significant cross-layer dependencies and the diverse distribution of weights, we conduct network-wise calibration and channel-wise quantization to minimize reconstruction loss.

3. **How NeuroQuant performs** (Sec. 4): We benchmark the proposed NeuroQuant across various architectures against existing quantization techniques, achieving state-of-the-art results. For variable-rate coding, NeuroQuant outperforms competitors while reducing encoding time by 80%. Moreover, NeuroQuant is able to quantize weights down to INT2 without notable performance degradation.

4. **How to advance INR-VC** (Sec. 3.3): We revisit INR-VC through the lens of variational inference, proposing that the success of NeuroQuant stems from resolving the mismatch between representation and compression. We also suggest that rate-distortion (R-D) optimization is applicable to INR-VC and has the potential to achieve improved performance.

[1] To avoid ambiguity, we use the term _calibration_ to describe the process of optimizing QPs, though some literature refers to this as _reconstruction_. In this paper, _reconstruction_ refers to the video decoded by the INR-VC system. For simplicity, layer calibration also stands for block calibration.

2 PRELIMINARIES

**Basic Notations.** We follow popular notations used for neural networks. Vectors are denoted by lowercase bold letters, while matrices (or tensors) are denoted by uppercase bold letters. For instance, $\mathbf{W}$ refers to a weight tensor, and $\mathbf{w}$ is its flattened version. The superscript of $\mathbf{w}^{(l)}$ indicates the layer index. For a convolutional or fully-connected layer, we denote input and output vectors by $\mathbf{x}$ and $\mathbf{z}$. Given a feedforward neural network with $n$ layers, the forward process is expressed as

$$\mathbf{x}^{(l+1)} = h(\mathbf{z}^{(l)}) = h(\mathbf{w}^{(l)} \mathbf{x}^{(l)}), \quad 1 \le l \le n, \qquad (1)$$

where $h(\cdot)$ denotes the activation function. For simplicity, we omit the additive bias, merging it into the activation. In the following, the notation $\|\cdot\|$ represents the Frobenius norm. Suppose $\mathbf{x}$ is sampled from the dataset $\mathcal{X}$; then the overall task loss is expressed as $\mathbb{E}_{\mathbf{x} \sim \mathcal{X}}[\mathcal{L}(\mathbf{w}, \mathbf{x})]$.

**INR-based Video Coding.** INR-VC operates on the principle that a target video can be encoded into learned weights through end-to-end training. For each frame $V_t$ in an RGB video sequence $\mathcal{V} = \{V_t\}_{t=1}^T \in \mathbb{R}^{T \times 3 \times H \times W}$, INR-VC assumes the existence of an implicit continuous mapping $\mathcal{F} : [0, 1]^{d_{\mathrm{in}}} \to \mathbb{R}^{d_{\mathrm{out}}}$ in the real-world system such that $V_t = \mathcal{F} \circ t$. According to the Universal Approximation Theorem (Hanin, 2019; Park et al., 2021), the unknown $\mathcal{F}$ can be approximated by a neural network $D$ of finite length $L_D$. The estimated $\hat{V}_t$ is then expressed as:

$$\hat{V}_t = D \circ E(t) = U_L \circ h \circ U_{L-1} \circ \cdots \circ h \circ U_1 \circ E(t), \qquad (2)$$

where $D$ consists of cascaded upsampling layers $U$, and $E(\cdot)$ is an embedding of the timestamp $t$. Typically, index-based INR-VCs (Chen et al., 2021a) employ a fixed positional encoding function or a learnable grid (Lee et al., 2023) as $E(\cdot)$, while content-based INR-VCs (Chen et al., 2023; Zhao et al., 2023) utilize a learnable encoder.
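To ground Eq. (2), here is a toy index-based decoder (our illustrative sketch, far smaller than real INR-VC models such as NeRV-style networks; all sizes are arbitrary): a frame index $t$ is embedded by Fourier features $E(t)$, then decoded by cascaded upsampling blocks $U_l$ with activation $h$.

```python
import torch
import torch.nn as nn

class TinyINRVideo(nn.Module):
    """Toy index-based INR decoder in the spirit of Eq. (2)."""
    def __init__(self, freqs: int = 8, ch: int = 16, h0: int = 9, w0: int = 16):
        super().__init__()
        self.freqs, self.ch, self.h0, self.w0 = freqs, ch, h0, w0
        self.fc = nn.Linear(2 * freqs, ch * h0 * w0)
        self.up = nn.Sequential(                      # two U blocks: 9x16 -> 36x64
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def embed(self, t: torch.Tensor) -> torch.Tensor:  # E(t): Fourier features, t in [0, 1]
        k = 2.0 ** torch.arange(self.freqs) * torch.pi
        return torch.cat([torch.sin(k * t[:, None]), torch.cos(k * t[:, None])], dim=-1)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        z = self.fc(self.embed(t)).view(-1, self.ch, self.h0, self.w0)
        return self.up(z)                              # (B, 3, 36, 64) frame

frame = TinyINRVideo()(torch.tensor([0.5]))            # decode the frame at t = 0.5
```

Training such a model to overfit one video, then entropy-coding its weights, is the encode step the paper describes; everything that follows concerns quantizing those trained weights.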
The encoding of INR-VC involves training the learnable weights $\mathbf{w}$ and subsequently compressing $\mathbf{w}$ into a bitstream using quantization and entropy coding techniques. While existing INR-VC works primarily focus on minimizing distortion during the training stage, video coding is fundamentally an R-D trade-off.

**Post-Training Quantization.** PTQ offers a push-button solution for quantizing pretrained models without weight training. It contrasts with Quantization-Aware Training (QAT), which involves both weight optimization and quantization during training, leading to huge training costs. PTQ is generally a two-step process: 1) initializing QPs (e.g., step sizes) with the allocated bitwidth and weight distribution statistics; 2) calibrating QPs to reduce quantization-induced loss. PTQ typically employs a uniform affine transformation to map continuous $w \in \mathbb{R}$ to fixed-point integers $\hat{w}$. Traditional methods aim to minimize the quantization error $\|\hat{w} - w\|$. However, a growing number of studies (Stock et al., 2020; Nagel et al., 2020; Hubara et al., 2021) suggest that this approach can yield sub-optimal results, as error in parameter space does not equivalently reflect task loss. To analyze quantization-induced loss degradation, AdaRound (Nagel et al., 2020) interprets quantization error as a weight perturbation, i.e., $\hat{\mathbf{w}} = \mathbf{w} + \Delta\mathbf{w}$. The loss degradation can be approximated using a Taylor series:

$$\mathbb{E}[\mathcal{L}(\mathbf{w} + \Delta\mathbf{w}, \mathbf{x}) - \mathcal{L}(\mathbf{w}, \mathbf{x})] \approx \Delta\mathbf{w}^T \cdot \mathbf{g}^{(\mathbf{w})} + \frac{1}{2} \Delta\mathbf{w}^T \cdot \mathbf{H}^{(\mathbf{w})} \cdot \Delta\mathbf{w}, \qquad (3)$$

where $\mathbf{g}^{(\mathbf{w})} = \mathbb{E}[\nabla_{\mathbf{w}} \mathcal{L}]$ and $\mathbf{H}^{(\mathbf{w})} = \mathbb{E}[\nabla^2_{\mathbf{w}} \mathcal{L}]$ represent the expected gradient and the second-order Hessian matrix, respectively. For well-converged weights, gradients tend to be close to 0. AdaRound further assumes inter-layer independence, leading to a diagonal-Hessian optimization. BRECQ (Li et al., 2021b) extends AdaRound's layer-wise calibration to block granularity based on inter-block independence. However, these methods can significantly degrade the performance of non-generalized INR-VCs, which exhibit significant dependencies among layers.

**Mixed-Precision Quantization.** Mixed-precision quantization facilitates fine-grained rate control in INR-VCs, with bit allocation being crucial due to the varying levels of redundancy across layers and their different contributions to overall performance. However, determining optimal bitwidth assignments presents a significant challenge because of the extensive search space. For a network with $N$ layers and $M$ candidate bitwidths per layer, exhaustive combinatorial search exhibits an exponential time complexity of $O(M^N)$. To address this, various strategies have been explored, including search-based reinforcement learning (Wang et al., 2019; Lou et al., 2019), neural architecture search (Wu et al., 2016), and Hessian-based criteria (Dong et al., 2019; 2020). Despite these efforts, they often prove impractical for INR-VCs, as the search costs may surpass those of retraining a model. Furthermore, many existing criteria lack a robust theoretical basis for their optimality, rendering them less reliable in INR-VC systems.
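For concreteness, here is a generic sketch of the uniform affine transformation described above (not NeuroQuant's calibrated quantizer; the min/max initialization is just one common choice for the QPs):

```python
import numpy as np

def uniform_affine_quantize(w: np.ndarray, bits: int):
    """Uniform affine PTQ of a weight tensor to signed INT-b."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    scale = (w.max() - w.min()) / (qmax - qmin)          # step size (a QP)
    zero = np.round(qmin - w.min() / scale)              # zero-point (a QP)
    q = np.clip(np.round(w / scale + zero), qmin, qmax)  # fixed-point integers
    return q.astype(np.int32), scale, zero

def dequantize(q, scale, zero):
    return scale * (q.astype(np.float32) - zero)

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
for b in (8, 4, 2):
    q, s, z = uniform_affine_quantize(w, b)
    print(f"INT{b}: ||w_hat - w|| = {np.linalg.norm(dequantize(q, s, z) - w):.3f}")
```

As the surrounding text notes, minimizing this parameter-space error $\|\hat{w} - w\|$ is exactly what calibration improves upon by targeting task loss instead.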
[Figure 2: Examples of quantizing layers in sequence. (a) Layer-wise sensitivity Ω; different layers exhibit varying sensitivities. (b) Loss landscape of the 2nd layer; lower Ω means a flatter loss landscape. (c) Loss landscape of the 6th layer; higher Ω is otherwise, and the loss landscape shows pronounced directivity, indicating the necessity of considering the direction of Δ**w**.]

3 METHODOLOGY

We introduce the proposed NeuroQuant for high-performance variable-rate INR-VC as follows:

**Problem 1** (NeuroQuant)**.** _Given learned video-specific weights, the objective of NeuroQuant is to achieve different R-D trade-offs by quantizing post-training weights with variable QPs. This can be formulated as a rate-constrained optimization process:_

$$\arg\min\ \mathbb{E}\big[\mathcal{L}(Q(\mathbf{w}), Q(\mathbf{e})) - \mathcal{L}(\mathbf{w}, \mathbf{e})\big] \qquad (4)$$

$$\text{s.t.} \quad \sum_{l=1}^{L} \mathrm{Param}(\mathbf{w}^{(l)}) \cdot b_{\mathbf{w}}^{(l)} + \sum_{t=1}^{T} \mathrm{Param}(\mathbf{e}^{(t)}) \cdot b_{\mathbf{e}} = R \pm \epsilon, \qquad (5)$$

_where $R$ represents the target bitrate, $\mathbf{e}$ denotes the embedding, $\mathrm{Param}(\cdot)$ indicates the number of parameters, and $b$ denotes the bitwidth._

We decouple this problem into three sub-problems: 1) Sec. 3.1: the rate-constrained term in Eq. 5 is defined as a mixed-precision bit assignment problem, accounting for fine-grained rate control and varying layer sensitivity; 2) Sec. 3.2: the objective in Eq. 4 is interpreted as a QP calibration problem, focusing on the calibration and quantization granularity of non-generalized INR-VC; 3) Sec. 3.3: we revisit the entire problem from the perspective of variational inference to provide a broader theoretical grounding.

3.1 HOW TO REALIZE VARIABLE BITRATE

**Sensitivity Criterion.** The core concept of mixed-precision quantization is to allocate higher precision (e.g., greater bitwidth) to sensitive layers while reducing precision in insensitive ones. Sensitivity can be intuitively understood through the flatness of the loss landscape (Li et al., 2018), as illustrated in Fig. 2. A flatter landscape, indicating lower sensitivity, corresponds to smaller loss changes under weight perturbations, whereas a sharper landscape indicates otherwise. Sensitivity essentially captures the curvature of the loss function, often described using second-order information, particularly the Hessian matrix $\mathbf{H}^{(\mathbf{w})}$, which defines how perturbations in weights affect task loss. For instance, HAWQ (Dong et al., 2019) uses the top Hessian eigenvalue as a sensitivity criterion, while HAWQ-V2 (Dong et al., 2020) demonstrates that the trace offers a better measure. However, these criteria rely on two key assumptions: 1) **Layer Independence**: layers are mutually independent, allowing $\mathbf{H}^{(\mathbf{w})}$ to be treated as diagonal; 2) **Isotropy**: the loss function is directionally uniform under weight perturbations $\Delta\mathbf{w}$, meaning only $\mathbf{H}^{(\mathbf{w})}$ is considered, ignoring $\Delta\mathbf{w}$. While these assumptions may hold for general-purpose networks, they break down in the context of non-generalized INR-VC, where significant inter-layer dependencies (Fig. 3(c)) and anisotropic behavior (Fig. 2(c)) exist. The following toy examples demonstrate why relying solely on diagonal information from $\mathbf{H}$ is suboptimal.
**Example 1** (Inter-Layer Dependencies)**.** _Consider three functions, $F_1 = 4x^2 + y^2$, $F_2 = 4x^2 + 2y^2$, and $F_3 = 4x^2 + 2y^2 + 5xy$. Their corresponding Hessians are given as:_

$$\mathbf{H}^{(F_1)} = \begin{pmatrix} 8 & 0 \\ 0 & 2 \end{pmatrix}, \quad \mathbf{H}^{(F_2)} = \begin{pmatrix} 8 & 0 \\ 0 & 4 \end{pmatrix}, \quad \mathbf{H}^{(F_3)} = \begin{pmatrix} 8 & 5 \\ 5 & 4 \end{pmatrix}. \qquad (6)$$

_All three functions share the same top eigenvalue ($8$), yet $F_2$ and $F_3$ are clearly more sensitive than $F_1$. Although $F_2$ and $F_3$ have the same trace ($12$), $F_3$ exhibits greater sensitivity due to the presence of off-diagonal terms (i.e., $5xy$)._

This demonstrates that inter-layer dependencies are overlooked when relying solely on diagonal information (e.g., eigenvalues or traces). Off-diagonal terms are essential to accurately capture sensitivity, highlighting the need to consider the full Hessian matrix. The story does not end there.

**Example 2** (Weight Perturbation Directions)**.** _Assuming a perturbation $[\Delta x, \Delta y]$ applied to $\mathbf{H}^{(F_3)}$ from above, the increase in loss is approximately proportional to_

$$F_3(x + \Delta x, y + \Delta y) - F_3(x, y) \approx [\Delta x, \Delta y]\, \mathbf{H}\, [\Delta x, \Delta y]^T = 8\Delta x^2 + 4\Delta y^2 + 10\Delta x \Delta y. \qquad (7)$$

_Now, consider two cases: 1) lower perturbation, $[\Delta x, \Delta y] = [0.1, 0.1]$; 2) higher perturbation, $[\Delta x, \Delta y] = [0.2, -0.2]$. The increases in task loss are $0.22$ and $0.08$, respectively. Surprisingly, the higher perturbation results in a smaller task loss._

This counterintuitive behavior is also observed in practice, where quantizing layers with higher $\mathbf{H}$-based sensitivity to a lower bitwidth does not necessarily lead to significant performance degradation. We argue that allocating higher bitwidth to layers primarily reduces $\|\Delta\mathbf{w}\|$. However, this does not always guarantee a lower task loss, as $\mathcal{L}$ is anisotropic under $\Delta\mathbf{w}$ in INR-VC. The key insight is that task loss also depends on the direction of $\Delta\mathbf{w}$, not just its magnitude $\|\Delta\mathbf{w}\|$. In conclusion, the sensitivity criterion for INR-VC must account for both the full Hessian matrix $\mathbf{H}^{(\mathbf{w})}$ and the direction of weight perturbations $\Delta\mathbf{w}$. This leads to the following theorem:

**Theorem 1.** _Assuming the INR-VC weights are twice differentiable and have converged to a local minimum such that the first- and second-order optimality conditions are satisfied (i.e., the gradients are zero and the Hessian is positive semi-definite), the optimal sensitivity criterion for mixed-precision INR-VC is given by the weighted Hessian information_

$$\Omega = \Delta\mathbf{w}^T \cdot \mathbf{H}^{(\mathbf{w})} \cdot \Delta\mathbf{w}.$$

The criterion $\Omega$, formed by a Hessian-vector product, can essentially be interpreted as a linear transformation of $\mathbf{H}^{(\mathbf{w})}$, accounting for $\mathbf{H}^{(\mathbf{w})}$ along the weight perturbation directions. Existing Hessian-based criteria can be viewed as a degraded version of the proposed $\Omega$ that neglects the off-diagonal terms. For instance, Eq. 7 would degrade to $8\Delta x^2 + 4\Delta y^2$, making the loss independent of inter-variable dependencies and perturbation direction.

**Approximating the Hessian-Vector Product.** The Hessian matrix is challenging to explicitly compute and store due to its quadratic complexity in the number of weights.
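The arithmetic in Examples 1 and 2 is easy to verify numerically; the sketch below (ours) also shows how a diagonal-only criterion reverses the ranking of the two perturbations. The Hessian-free way to compute Ω at scale follows next.

```python
import numpy as np

H3 = np.array([[8.0, 5.0],
               [5.0, 4.0]])                  # Hessian of F3 = 4x^2 + 2y^2 + 5xy

def omega(dw: np.ndarray, H: np.ndarray) -> float:
    """Sensitivity criterion Omega = dw^T H dw (Theorem 1)."""
    return float(dw @ H @ dw)

for dw in (np.array([0.1, 0.1]), np.array([0.2, -0.2])):
    full = omega(dw, H3)                     # with off-diagonal terms
    diag = omega(dw, np.diag(np.diag(H3)))   # degraded, diagonal-only criterion
    print(dw, f"full Omega = {full:.2f}, diagonal-only = {diag:.2f}")
# Prints 0.22 vs 0.12 and 0.08 vs 0.48: the larger perturbation yields the
# smaller loss increase once direction and off-diagonal terms are accounted for.
```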
Instead of forming $\mathbf{H}^{(\mathbf{w})}$ explicitly, we focus on the sensitivity criterion $\Omega = \Delta\mathbf{w}^T \cdot \mathbf{H}^{(\mathbf{w})} \cdot \Delta\mathbf{w}$. Let us construct a function of the form $G = \mathbf{g}\,\Delta\mathbf{w}$, where $\mathbf{g}$ is the gradient of $\mathcal{L}$ with respect to $\mathbf{w}$. The gradient of $G$ can be expressed as:

$$\nabla_{\mathbf{w}} G = \frac{\partial\,\mathbf{g}\,\Delta\mathbf{w}}{\partial \mathbf{w}} = \frac{\partial \mathbf{g}}{\partial \mathbf{w}}\Delta\mathbf{w} + \mathbf{g}\,\frac{\partial \Delta\mathbf{w}}{\partial \mathbf{w}} = \frac{\partial^2 \mathcal{L}}{\partial \mathbf{w}^2}\Delta\mathbf{w} + \mathbf{g}\,\frac{\partial \Delta\mathbf{w}}{\partial \mathbf{w}} = \mathbf{H}^{(\mathbf{w})}\Delta\mathbf{w} + \mathbf{g}\,\frac{\partial \Delta\mathbf{w}}{\partial \mathbf{w}}. \quad (8)$$

In a converged model, $\mathbf{g}$ approaches 0. Moreover, quantization error can be modeled as a random vector whose components are sampled independently from a uniform distribution: $\Delta\mathbf{w} \sim U(-0.5, 0.5)$ (Ballé et al., 2017). Thus, the second term in Eq. 8 can be ignored. This approximation is also akin to the straight-through estimator (STE) (Liu et al., 2022), where $\frac{\partial \hat{\mathbf{w}}}{\partial \mathbf{w}} = \frac{\partial \mathbf{w}}{\partial \mathbf{w}}$ leads to $\frac{\partial \Delta\mathbf{w}}{\partial \mathbf{w}} = 0$. Consequently, we arrive at the final formulation for Ω:

$$\Omega = \mathbb{E}\left[\Delta\mathbf{w}^T \nabla_{\mathbf{w}} G\right], \quad \text{where } G = \mathbf{g}\,\Delta\mathbf{w} = \mathbb{E}\left[\nabla_{\mathbf{w}} \mathcal{L}\,\Delta\mathbf{w}\right],\; G \in \mathbb{R}. \quad (9)$$

In Eq. 9, $\Delta\mathbf{w}$ is treated as a perturbation around $\mathbf{w}$, allowing us to compute $\mathbf{g}$ centered at $\mathbf{w}$. For each potential bitwidth configuration, we only need to compute $\Delta\mathbf{w}$ and the gradient of $G$ in linear time. Notably, unlike using $\mathcal{L}$ directly, such criterion-based methods do not require supervised labels or forward inference over the entire dataset for each potential bitwidth candidate, enabling efficient mixed-precision search using techniques like integer programming, genetic algorithms (Guo et al., 2020), or iterative approaches. So far, we have realized bit allocation for a target bitrate. The next step involves calibrating QPs to minimize the reconstruction distortion.
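To make the criterion concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) that verifies the arithmetic of Examples 1-2 and evaluates Ω Hessian-free: the product $\mathbf{H}\Delta\mathbf{w}$ is obtained from a finite difference of the toy gradient, standing in for the autodiff gradient of $G$ in Eq. 9. The toy loss $F_3$ and its Hessian are taken from Example 1.

```python
import numpy as np

# Hessian H^(F3) from Example 1 and the quadratic-form sensitivity of Eq. 7.
H = np.array([[8.0, 5.0],
              [5.0, 4.0]])

def loss_increase(dw):
    # [dx, dy] H [dx, dy]^T = 8 dx^2 + 4 dy^2 + 10 dx dy
    return dw @ H @ dw

print(loss_increase(np.array([0.1, 0.1])))   # 0.22 -- smaller perturbation, larger loss
print(loss_increase(np.array([0.2, -0.2])))  # 0.08 -- larger perturbation, smaller loss

# Omega = dw^T (H dw) without ever materializing H: approximate the
# Hessian-vector product by a central finite difference of the gradient,
# mirroring the role of grad(G) in Eqs. 8-9.
def grad_F3(w):
    # gradient of the toy loss F3(x, y) = 4x^2 + 2y^2 + 5xy
    x, y = w
    return np.array([8.0 * x + 5.0 * y, 5.0 * x + 4.0 * y])

def omega(w, dw, eps=1e-4):
    hvp = (grad_F3(w + eps * dw) - grad_F3(w - eps * dw)) / (2.0 * eps)
    return dw @ hvp

w = np.zeros(2)  # a converged point of the toy loss, where g ~ 0
print(omega(w, np.array([0.1, 0.1])))        # ~0.22, matching dw^T H dw
```

In a real INR-VC one would obtain $\nabla_{\mathbf{w}} G$ by automatic differentiation as in Eq. 9; the finite difference here only keeps the toy self-contained.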
# TIGHT LOWER BOUNDS UNDER ASYMMETRIC HIGH-ORDER HÖLDER SMOOTHNESS AND UNIFORM CONVEXITY

**Cedar Site Bai** Department of Computer Science, Purdue University, West Lafayette, IN, USA, bai123@purdue.edu
**Brian Bullins** Department of Computer Science, Purdue University, West Lafayette, IN, USA, bbullins@purdue.edu

ABSTRACT

In this paper, we provide tight lower bounds for the oracle complexity of minimizing high-order Hölder smooth and uniformly convex functions. Specifically, for a function whose $p^{th}$-order derivatives are Hölder continuous with degree $\nu$ and parameter $H$, and that is uniformly convex with degree $q$ and parameter $\sigma$, we focus on two asymmetric cases: (1) $q > p + \nu$, and (2) $q < p + \nu$. Given up to $p^{th}$-order oracle access, we establish worst-case oracle complexities of
$$\Omega\left(\left(\frac{H}{\sigma}\right)^{\frac{2}{3(p+\nu)-2}}\left(\frac{\sigma}{\epsilon}\right)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\right)$$
in the first case, with an $\ell_\infty$-ball-truncated-Gaussian smoothed hard function, and
$$\Omega\left(\left(\frac{H}{\sigma}\right)^{\frac{2}{3(p+\nu)-2}} + \log\log\left(\left(\frac{\sigma^{p+\nu}}{H^{q}}\right)^{\frac{1}{p+\nu-q}}\frac{1}{\epsilon}\right)\right)$$
in the second case, for reaching an $\epsilon$-approximate solution in terms of the optimality gap. Our analysis generalizes previous lower bounds for functions under first- and second-order smoothness as well as those for uniformly convex functions, and furthermore our results match the corresponding upper bounds in this general setting.

1 INTRODUCTION

With the advancement in computational power, high-order optimization methods ($p^{th}$-order with $p \ge 2$) are gaining more attention for their merit of faster convergence and higher precision. Consequently, uniformly convex problems (with degree $q$) have become a recent focus, particularly as the subproblems of some high-order optimization methods. The subproblem of cubic-regularized Newton ($p = 2$, $q = 3$) (Nesterov & Polyak, 2006) is an example, as are methods of even higher orders ($p \ge 3$, $q \ge 4$) (Zhu & Cartis, 2022). Although these problems are high-order smooth by definition, a lower-order algorithm may be employed to obtain an approximate solution. For instance, solving the subproblem of cubic-regularized (i.e., $q = 3$) Newton with gradient descent (accessing a first-order oracle, i.e., $p = 1$), or, more generally, approximately solving the subproblem of $(q-1)^{th}$-order Taylor descent (Bubeck et al., 2019) (which typically contains a regularization term to the power of $q$) with lower-order oracle access, introduces an asymmetry between the algorithm's oracle access order and the degree of uniform convexity ($q > p + 1$). Conversely, a lower-degree regularization can be paired with a higher-order smooth function. This enables methods that access higher-order oracles, which leads to the opposite asymmetry ($q < p + 1$). Examples include the objective function of logistic regression, which is known to be infinite-order smooth. Coupled with standard $\ell_2$-regularization, the problem can be analyzed as a $p^{th}$-order smooth and strongly convex ($q = 2$) problem, e.g., $p = 2$ with access to the Hessian matrix, or $p = 3$ with access to the third-order derivative tensor. In addressing specific instances of this asymmetry, previous works established some upper bounds (Gasnikov et al., 2019; Song et al., 2021) and lower bounds (Arjevani et al., 2019; Kornowski & Shamir, 2020; Doikov, 2022; Thomsen & Doikov, 2024) for the oracle complexity. Notably, Song et al.
(2021) proposed a unified acceleration framework for functions that are $p^{th}$-order Hölder smooth with degree $\nu$ and uniformly convex with degree $q$, providing upper bounds for any combination of $p$, $q$, and $\nu$. For the case where $q > p + \nu$, they show an oracle complexity of $O\big(\big(\frac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}}\big(\frac{\sigma}{\epsilon}\big)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\big)$, and for the case where $q < p + \nu$, the complexity is $O\big(\big(\frac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}} + \log\log\big(\big(\frac{\sigma^{p+\nu}}{H^q}\big)^{\frac{1}{p+\nu-q}}\frac{1}{\epsilon}\big)\big)$. To the best of our knowledge, no lower bounds exist in this general setting, particularly with Hölder smoothness and uniform convexity. In this paper, we provide matching lower bounds to the upper bounds in (Song et al., 2021) for these asymmetric cases. Specifically, we establish $\Omega\big(\big(\frac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}}\big(\frac{\sigma}{\epsilon}\big)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\big)$ for $q > p + \nu$ and $\Omega\big(\big(\frac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}} + \log\log\big(\big(\frac{\sigma^{p+\nu}}{H^q}\big)^{\frac{1}{p+\nu-q}}\frac{1}{\epsilon}\big)\big)$ for $q < p + \nu$. For the $q > p + \nu$ case, we adopt the framework proposed by Guzmán & Nemirovski (2015), utilizing a smoothing operator to generate a high-order smooth function. We propose the use of $\ell_\infty$-ball-truncated Gaussian smoothing, which, as we later justify, is novelly designed to achieve the optimal rate and to be compatible with both high-order smooth and uniformly convex settings. Both the truncated Gaussian smoothing and the construction over the $\ell_\infty$ ball are crucial to improve upon the sub-optimal derivation using uniform smoothing within an $\ell_2$ ball in (Agarwal & Hazan, 2018). Our results generalize the lower bounds in (Doikov, 2022; Thomsen & Doikov, 2024) to higher-order and Hölder smooth settings. For the $q < p + \nu$ case, we adopt Nesterov's framework (Nesterov et al., 2018) and generalize the lower bounds in (Arjevani et al., 2019; Kornowski & Shamir, 2020) to include Hölder smooth and uniformly convex settings.

2 RELATED WORK

**Upper Bounds.** Doikov & Nesterov (2021) showcase the upper bound for uniformly convex functions with Hölder-continuous Hessian via the cubic regularized Newton method, but the rate is not optimal. For higher-order results, Bubeck et al. (2019) and Jiang et al. (2019) established a near-optimal upper bound of $\tilde{O}\big(\epsilon^{-\frac{2}{3p+1}}\big)$ in the simpler case of $\nu = 1$ without uniform convexity. Gasnikov et al. (2019) achieve the same near-optimal rate, but also consider uniform convexity and, by a restarting mechanism, derive the rate for $q > p + 1$ as well, generalizing the upper bounds established in the second-order setting (Monteiro & Svaiter, 2013) and matching the lower bounds later derived in (Kornowski & Shamir, 2020). Kovalev & Gasnikov (2022) closed the $\log\big(\frac{1}{\epsilon}\big)$ gap, but do not consider uniform convexity or Hölder smoothness. For minimizing uniformly convex functions, Juditsky & Nesterov (2014) and Roulet & d'Aspremont (2017) study the complexity of first-order methods. Recently, Song et al. (2021) established the most general upper bounds for arbitrary combinations of the order of Hölder smoothness and the degree of uniform convexity, which include the rates for both the $q > p + \nu$ and $q < p + \nu$ cases.
**Lower Bounds.** Agarwal & Hazan (2018) proved for $p^{th}$-order smooth convex functions an $\Omega\big(\epsilon^{-\frac{2}{5p+1}}\big)$ lower bound based on constructing the hard function with randomized smoothing uniformly over a unit ball. But their rate is not optimal due to the extra dimension factor appearing in the smoothness constant caused by the uniform randomized smoothing. Garg et al. (2021) added softmax smoothing prior to randomized smoothing, achieving a near-optimal rate of $\Omega\big(\epsilon^{-\frac{2}{3p+1}}\big)$ for randomized and quantum algorithms. Separately, Arjevani et al. (2019) also established the optimal lower bound of $\Omega\big(\epsilon^{-\frac{2}{3p+1}}\big)$ with Nesterov's hard-function construction approach. Furthermore, for the asymmetric case of $q < p + 1$, Arjevani et al. (2019) proved the lower bound of $\Omega\big(\big(\frac{H}{\sigma}\big)^{\frac{2}{7}} + \log\log\big(\frac{\sigma^3}{H^2}\epsilon^{-1}\big)\big)$ for the $p = 2$ and $q = 2$ case, and the result was later generalized to $p^{th}$ order in (Kornowski & Shamir, 2020). No $q > 2$ uniformly convex settings were considered in these works. For the case of $q > p + \nu$, lower bounds for uniformly convex functions with $q \ge 3$ are limited to the first-order smoothness setting where $p = 1$ (Juditsky & Nesterov, 2014; Doikov, 2022; Thomsen & Doikov, 2024). To our knowledge, no lower bounds for uniformly convex functions have been established in the high-order setting.

3 PRELIMINARIES AND SETTINGS

**Notations.** We use $[n]$ to represent the set $\{1, 2, \ldots, n\}$. We use $\|\cdot\|$ to denote the $\ell_2$ operator norm. We use $\nabla$ for gradients, $\partial$ for subgradients, and $\langle\cdot,\cdot\rangle$ for inner products. Related to the algorithm, we use bold lower-case letters for vectors (e.g., $\mathbf{x}$, $\mathbf{y}$) and, with subscripts, the vectors at different iterations (e.g., $\mathbf{x}_T$). We use regular lower-case letters for scalars and, with subscripts, a coordinate of a vector (e.g., $x_i$). Depending on the context, we use capital letters for a matrix or a random variable. We use $\phi$ for the probability density function of the standard normal or the standard multivariate normal (MVN), and $\Phi$ for the cumulative distribution function of the standard normal or MVN. We further overload the notations $\phi_{[\cdot,\cdot]}$, $\Phi_{[\cdot,\cdot]}$ for their truncated counterparts for the normal distribution (standard normal if not specified with parameters), and $\phi_{\|\cdot\|_\infty\le\cdot}$, $\Phi_{\|\cdot\|_\infty\le\cdot}$ for the MVN truncated within an $\ell_\infty$ ball.
3.1 DEFINITIONS

**Definition 1** (High-order Smoothness)**.** _For $p \in \mathbb{Z}^+$, a function $f: \mathbb{R}^d \to \mathbb{R}$ is $p^{th}$-order smooth, or its $p^{th}$-order derivatives are $L_p$-Lipschitz, if for $L_p > 0$, $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\|\nabla^p f(\mathbf{x}) - \nabla^p f(\mathbf{y})\| \le L_p \|\mathbf{x} - \mathbf{y}\|$._

**Definition 2** (High-order Hölder Smoothness)**.** _For $p \in \mathbb{Z}^+$, a function $f: \mathbb{R}^d \to \mathbb{R}$ is $p^{th}$-order Hölder smooth, or has Hölder continuous $p^{th}$-order derivatives, if for $\nu \in (0, 1]$ and $H > 0$, $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\|\nabla^p f(\mathbf{x}) - \nabla^p f(\mathbf{y})\| \le H \|\mathbf{x} - \mathbf{y}\|^\nu$._

**Definition 3** (Uniform Convexity (Nesterov et al., 2018, Section 4.2.2))**.** _For integer $q \ge 2$ and $\sigma > 0$, a function $f: \mathbb{R}^d \to \mathbb{R}$ is uniformly convex with degree $q$ and modulus $\sigma$ if $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $f(\mathbf{y}) - f(\mathbf{x}) - \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \ge \frac{\sigma}{q}\|\mathbf{y} - \mathbf{x}\|^q$, or the function satisfies $\langle \nabla f(\mathbf{y}) - \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \ge \sigma \|\mathbf{y} - \mathbf{x}\|^q$._

4 LOWER BOUND FOR THE $q > p + \nu$ CASE

The derivation of the lower bound is to find, by construction, a function that satisfies the uniformly convex and Hölder smooth conditions and requires at least a certain number of iterations to reach an $\epsilon$-approximate solution. The general steps follow the framework for showing lower complexity bounds in smooth convex optimization (Guzmán & Nemirovski, 2015), which originates from (Nemirovskii & Nesterov, 1985) and serves as the basis for results in various follow-up settings (Agarwal & Hazan, 2018; Garg et al., 2021; Doikov, 2022). The construction starts from a non-smooth function, and then smooths the function with some smoothing operator (e.g., the Moreau envelope in (Guzmán & Nemirovski, 2015; Doikov, 2022), or randomized smoothing uniformly within a ball in (Agarwal & Hazan, 2018; Garg et al., 2021)). We design a truncated Gaussian smoothing operator within the $\ell_\infty$ ball and start the derivation by stating its formal definition and key properties.

4.1 TRUNCATED GAUSSIAN SMOOTHING

**Definition 4** (Truncated Gaussian Smoothing)**.** _For $f: \mathbb{R}^d \to \mathbb{R}$ and a parameter $\rho > 0$, define the truncated Gaussian smoothing operator $S_\rho[f]: (\mathbb{R}^d \to \mathbb{R}) \to (\mathbb{R}^d \to \mathbb{R})$ as $S_\rho[f](\mathbf{x}) = \mathbb{E}_V[f(\mathbf{x} + \rho V)]$, where $V$ is a $d$-dimensional random variable that follows the standard multivariate normal (MVN) distribution truncated within the unit $\ell_\infty$ ball. That is, the probability density function (PDF) of $V$ is_
$$\mathbb{P}[V = \mathbf{v}] = \frac{1}{Z(d)(2\pi)^{\frac{d}{2}}}\exp\left(-\frac{\mathbf{v}^\top\mathbf{v}}{2}\right)\mathbb{I}[\|\mathbf{v}\|_\infty \le 1],$$
_in which $\mathbb{I}[\cdot]$ is the indicator function ($1$ if the condition is true, $0$ otherwise) and $Z(d)$ is the normalizing factor, i.e., the cumulative distribution within the $d$-dimensional unit $\ell_\infty$-ball (Cartinhour, 1990)._

_We denote $f_\rho = S_\rho[f]$, and use the shorthand notation $f_\rho^p = S_\rho^p[f] = S_\rho[\cdots[S_\rho[f]]\cdots]$ for the function with the smoothing operator applied $p$ times._
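As a concrete illustration of Definition 4, here is a minimal Monte Carlo sketch (our own illustration; the toy test function is an assumption, not from the paper). It exploits the product structure of the $\ell_\infty$ ball: the truncated MVN factorizes into independent standard normals truncated to $[-1, 1]$ per coordinate, which is exactly the marginal property discussed in the justification below.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_truncated_mvn(d, n):
    # Standard MVN truncated to the unit l_inf ball: since the ball is a
    # product set, each coordinate is an independent standard normal
    # truncated to [-1, 1]; sample each column by rejection.
    out = np.empty((n, d))
    for j in range(d):
        col = np.empty(0)
        while col.size < n:
            z = rng.standard_normal(2 * n)
            col = np.concatenate([col, z[np.abs(z) <= 1.0]])
        out[:, j] = col[:n]
    return out

def smooth(f, x, rho, n=200_000):
    # Monte Carlo estimate of S_rho[f](x) = E_V[f(x + rho V)]  (Definition 4)
    V = sample_truncated_mvn(x.size, n)
    return f(x + rho * V).mean()

# Toy non-smooth test function (an assumption for illustration): f(x) = ||x||_inf.
f = lambda X: np.abs(X).max(axis=-1)
x = np.zeros(3)
print(f(x[None])[0], smooth(f, x, rho=0.1))  # 0.0 vs. a small positive value
```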
Now we justify the choice of truncated Gaussian smoothing for the construction of the hard function. We notice that Agarwal & Hazan (2018) choose randomized smoothing uniformly over a unit $\ell_2$-ball, which, by their Lemma 2.3, makes the smoothed function $O(d)$-smooth (a bound that can in fact be tightened to $O(\sqrt{d})$ by (Yousefian et al., 2012; Duchi et al., 2012, Lemma 8)), where $d$ is the dimension of the variable. Since the number of iterations $T \in O(d)$, their result $O\big(T^{-\frac{2}{5p+1}}\big)$ is sub-optimal by an extra factor of $T$ compared to the tight lower bound $O\big(T^{-\frac{2}{3p+1}}\big)$ (Arjevani et al., 2019). Therefore we search for a smoothing operator whose Lipschitz constant is _dimension-free_. We notice that Gaussian smoothing (Duchi et al., 2012, Lemma 9), softmax smoothing (Bullins, 2020, Lemma 7), and Moreau smoothing (Doikov, 2022, Lemma 1) are such operators. Yet, as the reader will later see in the proof, the converging points are generated through a sequence of functions, instead of being generated from one hard function. For these two sequences of points to be identical, so that the lower bound is indeed for optimizing the constructed hard function, we need the smoothing operator to be _local_, that is, to access information only within _some neighborhood_ of the queried point, e.g., a unit $\ell_2$-ball in (Doikov, 2022). Unfortunately, Gaussian smoothing and softmax smoothing need access to global information. Moreau smoothing does depend only on local information and is successfully applied in proving the lower bound in the first-order setting (Doikov, 2022), but it is not suited for the high-order setting. First, one may attempt an extension of Moreau smoothing with a $p^{th}$-power regularization, yet it can be shown that the resulting function is not $p^{th}$-order smooth. Next, one may try to apply Moreau smoothing $p$ times, yet unlike randomized smoothing in (Agarwal & Hazan, 2018), the Lipschitz constant is not raised to the $p^{th}$ power with the number of times the smoothing operator is applied, which leads to the same rate as in the first-order case. Observing the proof of (Agarwal & Hazan, 2018, Corollary 2.4), this is in essence due to the fact that the minimization in Moreau smoothing does not commute with differentiation, whereas the expectation in randomized smoothing does. We then come up with the idea of a truncated multivariate Gaussian smoothing operator that is (i) local, (ii) smooth with a dimension-free constant, and (iii) $p^{th}$-order smooth with the smoothness constant raised to the $p^{th}$ power as well. Initially, we applied Gaussian smoothing truncated within a unit $\ell_2$ ball by default. We noticed later, however, that the marginal distribution of the unit-$\ell_2$-ball-truncated multivariate Gaussian is not the standard normal truncated to $[-1, 1]$, but carries an extra $d$-dependent normalizing constant, which adds a $d$-dependency to the smoothness constant of the hard function. To ensure a dimension-free smoothness constant, we instead apply the multivariate Gaussian smoothing truncated within an $\ell_\infty$ ball, a.k.a. the hypercube with edge length 2, whose marginal distribution is indeed the standard normal truncated to $[-1, 1]$ (Cartinhour, 1990). The following lemma characterizes these desired properties, including convexity, continuity, approximation, and smoothness, with proof deferred to Appendix A.1.
**Lemma 1.** _Given an $L$-Lipschitz function $f$, the function $f_\rho^p = S_\rho[\cdots[S_\rho[f]]\cdots]$ satisfies:_
_(i) If $f$ is convex, $f_\rho^p$ is convex and $L$-Lipschitz with respect to the $\ell_2$ norm._
_(ii) If $f$ is convex, $f(\mathbf{x}) \le f_\rho^p(\mathbf{x}) \le f(\mathbf{x}) + \frac{5}{4}pL\rho\sqrt{d}$._
_(iii) $\forall i \in [p]$, $\forall \mathbf{x}, \mathbf{x}' \in \mathbb{R}^d$, $\|\nabla^i f_\rho^p(\mathbf{x}) - \nabla^i f_\rho^p(\mathbf{x}')\| \le \left(\frac{2}{\rho}\right)^i L \|\mathbf{x} - \mathbf{x}'\|$._

4.2 THE LOWER BOUND: FUNCTION CONSTRUCTION AND TRAJECTORY GENERATION

**Theorem 1.** _For any $T$-step ($\sqrt{d} - 1 \le T \le d$) deterministic algorithm $A$ with oracle access up to the $p^{th}$ order, there exists a convex function $f(\mathbf{x})$ whose $p^{th}$-order derivative is Hölder continuous of degree $\nu$ with modulus $H$, and a corresponding $F(\mathbf{x}) = f(\mathbf{x}) + \frac{\sigma}{q}\|\mathbf{x}\|^q$ with regularization that is uniformly convex of degree $q$ with modulus $\sigma$, such that $q > p + \nu$, for which it takes_
$$T \in \Omega\left(\left(\frac{H}{\sigma}\right)^{\frac{2}{3(p+\nu)-2}}\left(\frac{\sigma}{\epsilon}\right)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\right)$$
_steps to reach an $\epsilon$-approximate solution $\mathbf{x}_T$ satisfying $F(\mathbf{x}_T) - F(\mathbf{x}^*) \le \epsilon$._

_Proof._ We begin the proof by constructing the hard function.

4.2.1 FUNCTION CONSTRUCTION WITH TRUNCATED GAUSSIAN SMOOTHING

_1. Non-smooth Function Construction._ We first construct the function
$$g_t(\mathbf{x}) = \max_{1 \le k \le t} r_k(\mathbf{x}), \quad \text{where } \forall k \in [T],\; r_k(\mathbf{x}) = \xi_k \langle \mathbf{e}_{\alpha(k)}, \mathbf{x} \rangle - (k-1)\delta.$$
Here $\xi_k \in \{-1, 1\}$, $\mathbf{e}$ is the standard basis, $\alpha$ is a permutation of $[T]$, and $\delta > 0$ is some parameter that we will choose later. Lemma 2 characterizes the properties of $g_t$, with proof in Appendix A.2.

**Lemma 2.** _$\forall t \in [T]$, $g_t$ is convex and $1$-Lipschitz with respect to the $\ell_\infty$-norm, and also the $\ell_2$-norm._

_2. Truncated Gaussian Smoothing._ Next, we smooth the function $g_t(\mathbf{x})$ with truncated Gaussian smoothing as in Definition 4. Given a parameter $\rho > 0$ and $p \in \mathbb{Z}^+$,
$$G_t(\mathbf{x}) = S_\rho^p[g_t](\mathbf{x}).$$
Based on Lemma 1, we show that $G_t(\mathbf{x})$ satisfies the following lemma, with proof in Appendix A.2.

**Lemma 3.** _$\forall t \in [T]$, $\forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$,_
_(i) $G_t(\mathbf{x})$ is convex and $1$-Lipschitz, i.e., $G_t(\mathbf{x}) - G_t(\mathbf{y}) \le \|\mathbf{x} - \mathbf{y}\|$._
_(ii) $g_t(\mathbf{x}) \le G_t(\mathbf{x}) \le g_t(\mathbf{x}) + \frac{5}{4}p\rho\sqrt{d}$._
_(iii) For some fixed $p \in \mathbb{Z}^+$, $\forall i \in [p]$, $\|\nabla^i G_t(\mathbf{x}) - \nabla^i G_t(\mathbf{y})\| \le \left(\frac{2}{\rho}\right)^i \|\mathbf{x} - \mathbf{y}\|$._

_3. Adding Uniform Convexity._ Now that the constructed function $G_t(\mathbf{x})$ is all-order smooth, we add to it the uniformly convex regularization.
We define
$$f_t(\mathbf{x}) = \beta G_t(\mathbf{x}), \quad f(\mathbf{x}) = f_T(\mathbf{x}), \quad F_t(\mathbf{x}) = f_t(\mathbf{x}) + d_q(\mathbf{x}) \text{ for } d_q(\mathbf{x}) = \frac{\sigma}{q}\|\mathbf{x}\|^q,\; \mathbf{x} \in \mathcal{Q}, \quad F(\mathbf{x}) = F_T(\mathbf{x}),$$
where $\beta > 0$ is a parameter that we will choose later, $\mathcal{Q} = \{\mathbf{x} : \|\mathbf{x}\|_2 \le D\}$¹ for $D \le \left(\frac{2^{1-\nu}H}{C}\right)^{\frac{1}{q-p-\nu}}$ and $C = \sigma(q-1)\times\cdots\times(q-p)$.

**Lemma 4.** _For $F(\mathbf{x}) = f_T(\mathbf{x}) + d_q(\mathbf{x})$, where $d_q(\mathbf{x}) = \frac{\sigma}{q}\|\mathbf{x}\|^q$ and $\mathbf{x} \in \mathcal{Q}$,_
_(i) $F$ is a uniformly convex function with degree $q$ and modulus $\sigma > 0$._
_(ii) $F(\mathbf{x})$ is $p^{th}$-order Hölder smooth with parameter $H = \frac{2^{p+1}}{\rho^{p+\nu-1}}\beta$, $\forall p \in \mathbb{Z}^+$._

Therefore, by Lemma 4, the constructed function satisfies the desired uniform convexity and high-order smoothness conditions. Next, we characterize with Lemma 5 the upper and lower bounds of the constructed function, which will be used in the proof later.

**Lemma 5.** _For $R(\mathbf{x}) = \beta\max_{k\in[T]}\xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}\rangle + \frac{\sigma}{q}\|\mathbf{x}\|^q$, we have_
$$R(\mathbf{x}) - \beta(T-1)\delta \le F(\mathbf{x}) \le R(\mathbf{x}) + \frac{5}{4}p\beta\rho\sqrt{d}.$$

4.2.2 CONVERGENCE TRAJECTORY GENERATION

_4. Trajectory Generation Procedure._ The trajectory is generated following a standard $T$-step iterative procedure, the same as outlined in (Guzmán & Nemirovski, 2015; Doikov, 2022):

- For $t = 1$, $\mathbf{x}_1$ is the first point of the trajectory and is chosen by the initialization of some algorithm $A$, independent of $F$. Subsequently, choose
$$\alpha(1) \in \arg\max_{k\in[T]}\left|\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_1\rangle\right|, \quad \xi_1 = \mathrm{sign}\left(\langle\mathbf{e}_{\alpha(1)},\mathbf{x}_1\rangle\right),$$
after which a fixed $F_1(\mathbf{x})$ is generated.

- For $2 \le t \le T$, at the beginning of each such iteration, we have access to $\mathbf{x}_1,\cdots,\mathbf{x}_{t-1}$, the function $F_{t-1}$, and its gradient information, which we denote as $\mathcal{I}_{t-1}(\mathbf{x}) = \{F_{t-1}, \nabla F_{t-1}, \cdots, \nabla^p F_{t-1}\}$. The algorithm $A$ generates the next point with this information: $\mathbf{x}_t = A(\mathcal{I}_{t-1}(\mathbf{x}_1),\cdots,\mathcal{I}_{t-1}(\mathbf{x}_{t-1}))$. Then choose
$$\alpha(t) \in \arg\max_{k\in[T]\setminus\{\alpha(i):\,i<t\}}\left|\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_t\rangle\right|, \quad \xi_t = \mathrm{sign}\left(\langle\mathbf{e}_{\alpha(t)},\mathbf{x}_t\rangle\right),$$
after which a fixed $F_t(\mathbf{x})$ is generated for the next iteration.

_5. Indistinguishability of $F_t$ and $F$ for Trajectory Generation._ It is important to note that the trajectory $\mathbf{x}_1,\cdots,\mathbf{x}_T$ is generated based on _a sequence of functions_ $F_1,\cdots,F_T$, whereas our object of analysis should be just _one hard function_ $F = F_T$.

¹ We would note that for the $q > p + \nu$ case, $F$ is guaranteed to be $p^{th}$-order smooth only on the bounded domain as constructed, since the regularization term $d_q(\mathbf{x})$ may not be $p^{th}$-order smooth on $\mathbb{R}^d$. The construction is inspired by that in (Juditsky & Nesterov, 2014). This is not explicitly discussed in (Song et al., 2021; Doikov, 2022; Thomsen & Doikov, 2024).
Here we show:

**Lemma 6.** _The trajectory $\mathbf{x}_1,\cdots,\mathbf{x}_T$ generated by applying an algorithm $A$ iteratively on the sequence of functions $F_1,\cdots,F_T$, with up to $p^{th}$-order oracle access, is the same as the trajectory generated by applying $A$ directly on $F$, when oracle access pertains only to local information within an $\ell_\infty$-ball with radius $\delta/2$._

_Proof._ The idea is to show that $\forall\, 2 \le t \le T$, the function $g_t$ coincides with $g_T$ (so that $F_t$ coincides with $F_T$ in terms of generating $\mathbf{x}_{t+1}$, i.e., $\mathcal{I}_t = \mathcal{I}_T$) under some mild conditions. A similar proof can be found in (Guzmán & Nemirovski, 2015; Doikov, 2022, Section 3). By construction, $\forall t \in [T]$ and $s < t$,
$$g_t(\mathbf{x}) = \max_{1\le k\le t} r_k(\mathbf{x}) = \max\left\{\max_{1\le k\le s} r_k(\mathbf{x}),\; \max_{s<k\le t} r_k(\mathbf{x})\right\} = \max\left\{g_s(\mathbf{x}),\; \max_{s<k\le t} r_k(\mathbf{x})\right\}.$$
Furthermore, $\alpha(s) \in \arg\max_{k\in[T]\setminus\{\alpha(i):\,i<s\}}\left|\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle\right|$ and $\xi_s = \mathrm{sign}\left(\langle\mathbf{e}_{\alpha(s)},\mathbf{x}_s\rangle\right)$; therefore
$$g_s(\mathbf{x}_s) = \max_{1\le k\le s}\, \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle - (k-1)\delta \ge \max_{1\le k\le s}\, \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle - (s-1)\delta \ge \left|\langle\mathbf{e}_{\alpha(s)},\mathbf{x}_s\rangle\right| - (s-1)\delta$$
$$\ge \max_{s<k\le t}\, \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle - (s-1)\delta \ge \max_{s<k\le t}\, \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle - (k-1)\delta + \delta \quad (k, s \in \mathbb{Z}^+,\; k > s \implies k \ge s+1).$$
If we limit the information access to an $\ell_\infty$-ball with radius $\delta/2$ when searching for the next point $\mathbf{x}_{s+1}$ from $\mathbf{x}_s$, we then establish a local region $\{\mathbf{x} : \|\mathbf{x} - \mathbf{x}_s\|_\infty \le \frac{\delta}{2}\}$. Further, by Lemma 2, $g_s$ (and also $\xi_k\langle\mathbf{e}_{\alpha(k)},\cdot\rangle$) is $1$-Lipschitz with respect to the $\ell_\infty$ norm, so we have, $\forall k$ such that $s < k \le t$,
$$g_s(\mathbf{x}_s) \ge \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle - (k-1)\delta + 2\|\mathbf{x} - \mathbf{x}_s\|_\infty \ge \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle - (k-1)\delta + \left[g_s(\mathbf{x}_s) - g_s(\mathbf{x})\right] + \left(\xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}\rangle - \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}_s\rangle\right),$$
which implies that $g_s(\mathbf{x}) \ge \max_{s<k\le t}\, \xi_k\langle\mathbf{e}_{\alpha(k)},\mathbf{x}\rangle - (k-1)\delta = \max_{s<k\le t} r_k(\mathbf{x})$. This concludes that, $\forall \mathbf{x}$ such that $\|\mathbf{x} - \mathbf{x}_s\|_\infty \le \frac{\delta}{2}$, $g_t(\mathbf{x}) = \max\{g_s(\mathbf{x}), \max_{s<k\le t} r_k(\mathbf{x})\} = g_s(\mathbf{x})$, which further implies $F_t(\mathbf{x}) = F_s(\mathbf{x})$. Letting $t = T$, we have $\forall t \in [T]$, $F_t(\mathbf{x}) = F_T(\mathbf{x})$ for $\|\mathbf{x} - \mathbf{x}_t\|_\infty \le \frac{\delta}{2}$.

4.2.3 LOWER BOUND DERIVATION

_6. Bounding the Optimality Gap._ The following lemma bounds the optimality gap; its proof is based on Lemma 5 and is presented in Appendix A.2.

**Lemma 7.** $F(\mathbf{x}_T) - F(\mathbf{x}^*) \ge -\beta(T-1)\delta - \frac{5}{4}p\beta\rho\sqrt{d} + \frac{q-1}{q}\left(\frac{\beta^q}{\sigma T^{q/2}}\right)^{\frac{1}{q-1}}$.
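For intuition about Step 4, the following sketch (our own illustration; the `toy_alg` stand-in and its interface are assumptions, not part of the paper) mimics the resisting oracle: the adversary fixes $\alpha(t)$ and $\xi_t$ only after seeing each iterate, always picking the unused coordinate where the current iterate is largest in absolute value.

```python
import numpy as np

def resisting_trajectory(algorithm, T):
    # Step 4: reveal alpha(t), xi(t) adaptively from the iterates x_1, ..., x_T.
    used, alpha, xi, xs = set(), [], [], []
    x = algorithm(xs, alpha, xi)                 # x_1 from initialization
    for _ in range(T):
        xs.append(x)
        free = [k for k in range(T) if k not in used]
        k = max(free, key=lambda j: abs(x[j]))   # unused coord with largest |x_t|
        alpha.append(k); used.add(k)
        xi.append(1.0 if x[k] >= 0 else -1.0)    # xi_t = sign(<e_alpha(t), x_t>)
        x = algorithm(xs, alpha, xi)             # next iterate from oracle info
    return xs, alpha, xi

def g(x, alpha, xi, delta):
    # g_t(x) = max_k  xi_k <e_alpha(k), x> - (k - 1) delta
    return max(s * x[k] - i * delta for i, (k, s) in enumerate(zip(alpha, xi)))

def toy_alg(xs, alpha, xi, d=6):
    # Placeholder first-order method: step away from the last revealed piece.
    x = np.zeros(d) if not xs else xs[-1].copy()
    if alpha:
        x[alpha[-1]] -= 0.1 * xi[-1]
    return x

xs, alpha, xi = resisting_trajectory(toy_alg, T=4)
print(alpha, xi, g(xs[-1], alpha, xi, delta=0.01))
```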
# TOWARDS ROBUST ALIGNMENT OF LANGUAGE MODELS: DISTRIBUTIONALLY ROBUSTIFYING DIRECT PREFERENCE OPTIMIZATION

**Junkang Wu**¹∗ **Yuexiang Xie**² **Zhengyi Yang**¹ **Jiancan Wu**¹† **Jiawei Chen**³ **Jinyang Gao**² **Bolin Ding**² **Xiang Wang**¹ **Xiangnan He**⁴†
¹University of Science and Technology of China ²Alibaba Group ³Zhejiang University ⁴MoE Key Lab of BIPC, University of Science and Technology of China
{jkwu0909, wujcan, xiangnanhe}@gmail.com

ABSTRACT

This study addresses the challenge of noise in training datasets for Direct Preference Optimization (DPO), a method for aligning Large Language Models (LLMs) with human preferences. We categorize noise into pointwise noise, which includes low-quality data points, and pairwise noise, which encompasses erroneous data pair associations that affect preference rankings. Utilizing Distributionally Robust Optimization (DRO), we enhance DPO's resilience to these types of noise. Our theoretical insights reveal that DPO inherently embeds DRO principles, conferring robustness to pointwise noise, with the regularization coefficient $\beta$ playing a critical role in its noise resistance. Extending this framework, we introduce Distributionally Robustifying DPO (Dr. DPO), which integrates pairwise robustness by optimizing against worst-case pairwise scenarios. The novel hyperparameter $\beta'$ in Dr. DPO allows for fine-tuned control over data pair reliability, providing a strategic balance between exploration and exploitation in noisy training environments. Empirical evaluations demonstrate that Dr. DPO substantially improves the quality of generated text and response accuracy in preference datasets, showcasing enhanced performance in both noisy and noise-free settings. The code is available at https://github.com/junkangwu/Dr_DPO.

1 INTRODUCTION

Aligning Large Language Models (LLMs) (OpenAI, 2023; Touvron et al., 2023; Anil et al., 2023; Bubeck et al., 2023) with human preferences is critical for their implementation in real-world scenarios. Central to the alignment is the fine-tuning of LLMs using human feedback (Ouyang et al., 2022), ensuring they adhere to human values and mitigate safety risks. Among the alignment methods, Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) is becoming a widely adopted technology. It initially learns a reward model on pairwise preference data, and optimizes LLMs using the Proximal Policy Optimization (PPO) (Schulman et al., 2017) method. However, its inherent reinforcement learning nature poses significant challenges to computational efficiency and training stability (Rafailov et al., 2023a; Zhao et al., 2023). Addressing these, Direct Preference Optimization (DPO) (Rafailov et al., 2023a) eschews explicit reward model learning, using human preferences to train the LLMs directly. It achieves the same objectives (Azar et al., 2023) as RLHF by learning an optimal proxy for each pointwise instance and simultaneously ranking preferences in a pairwise manner, offering greater simplicity and training stability (Ivison et al., 2023). While offering an effective solution by directly learning a policy from collected data, DPO inevitably heightens the dependency on data quality (Liu et al., 2023). However, training data is frequently marred by noise, potentially posing a significant challenge to DPO.
∗ Work done at Alibaba Group. † Jiancan Wu and Xiangnan He are the corresponding authors.

[Figure 1: **Left**: An example illustrating pointwise and pairwise noise. **Right**: Comparison of gradients between DPO and Dr. DPO under varying levels of pairwise noise.]

Here we delineate two primary noise categories based on their origins:

- _Pointwise noise_ (Gunasekar et al., 2023) refers to low-quality data points containing irrelevant or incoherent information. Taking the movie reviews in Figure 1 (Left) as an example, it might manifest as reviews filled with meaningless chatter, thus rendering them uninformative.

- _Pairwise noise_ (Sharma et al., 2023; Cui et al., 2023), on the other hand, arises from erroneous associations between data pairs, leading to misjudged preference rankings. Revisiting the movie reviews in Figure 1 (Left), it is evident in misranked reviews where an inferior review ($y_l$) is incorrectly rated higher than a superior one ($y_w$).

The presence of noisy preferences naturally raises a critical question: _How robust is DPO against pointwise and pairwise noise?_ To answer this, we examine DPO through the lens of Distributionally Robust Optimization (DRO) (Namkoong & Duchi, 2017; Duchi & Namkoong, 2018). At the core of DRO is training a model across a distributional family, which is determined by an empirical distribution within a robust radius $\eta$. As a result, DRO endows the model with enhanced robustness _w.r.t._ distributional uncertainty, usually caused by data noise. By incorporating DRO principles, we can assess the resilience of DPO to pointwise and pairwise noise. Specifically, our DRO lens on DPO offers insightful findings as follows:

- **DPO is equivalent to applying DRO on the reward function.** The principal contribution of DPO is deriving the optimal policy for PPO in a closed-form expression. This achievement facilitates the implicit determination of a worst-case distribution for optimization, guided by the Kullback-Leibler (KL) divergence criterion. Such an approach endows DPO with intrinsic pointwise robustness, enabling it to explore a better policy model rather than relying solely on the reference model.

- **DPO's $\beta$ and DRO's $\eta$ share an inverse relationship, highlighting noise levels in the reference model.** Through DRO theory, we establish that higher noise in the reference model necessitates a larger search radius, corresponding to a larger $\eta$ (or, equivalently, a smaller $\beta$). This inverse relationship provides a clear measure of the noise level in the reference model.

These findings elucidate the strengths of DPO in ensuring pointwise robustness. A recent effort (Chowdhury et al., 2024) has started addressing pairwise noise in DPO frameworks; however, this method relies on explicit noise estimation, a process that is computationally intensive and may not fully capture noise complexities. Building on these insights, we introduce the _Distributionally Robustifying DPO_ (Dr. DPO)¹ framework, aiming to incorporate pairwise robustness within the DPO paradigm. The core idea is optimizing against the worst-case pairwise scenarios, enabling the models to implicitly adjust the importance of data pairs in the gradient space and eliminating the explicit noise estimation. Towards the adjustment, Dr.
DPO introduces a simple hyperparameter $\beta' \in (0, +\infty)$ to modulate the loss function, balancing between exploration and exploitation of pairwise preferences. $\beta'$ serves as a pivotal "knob", allowing navigation from a conservative strategy that diminishes the influence of potentially noisy pairs (_e.g.,_ $\beta' = 0.5$) to a risk-tolerant stance that leverages such pairs (_e.g.,_ $\beta' = 2$). Consequently, Dr. DPO fosters a more resilient optimization process that effectively mitigates the influence of both pointwise and pairwise noise.

¹ The abbreviation "Dr. DPO" not only encapsulates "Distributionally Robustifying DPO" but is playfully intended to echo the abbreviation for "Doctor", adding a quirky element to the naming.

In a nutshell, our contribution is the development of Dr. DPO, which robustifies DPO with just a single additional line of code. Empirical evaluations reveal that Dr. DPO significantly enhances performance across diverse settings, such as controlling the sentiment in generated text and improving the response quality in single-turn dialogues, under both noisy and noise-free conditions.

2 PRELIMINARIES

**Bradley-Terry Model.** Given a context $x$ within a finite space of contexts $\mathcal{X}$, we employ the policy $\pi(y|x)$ to independently generate a pair of actions $(y_1, y_2)$. These actions are presented to human raters, who then indicate their preference, with the preferred action labeled as $y_w$ and the less preferred as $y_l$, satisfying $y_w \succeq y_l$. Although we cannot directly observe the latent reward model $r^*(x, y)$ that underlies these preferences, the Bradley-Terry (BT) model (Bradley & Terry, 1952) offers a well-established approach for modeling pairwise comparisons, which is given as:
$$p^*(y_1 \succeq y_2 \,|\, x) = \frac{\exp(r^*(x, y_1))}{\exp(r^*(x, y_1)) + \exp(r^*(x, y_2))}. \quad (1)$$
Given the dataset $\mathcal{O} = \{(x^{(i)}, y_w^{(i)}, y_l^{(i)})\}_{i=1}^N$ sampled from $p^*$, we can parametrize a reward model $r_\phi(x, y)$ and estimate its parameters by optimizing the following logistic regression loss:
$$\mathcal{L}_R(r_\phi, \mathcal{O}) = -\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{O}}\left[\log\sigma\left(r_\phi(x, y_w) - r_\phi(x, y_l)\right)\right], \quad (2)$$
where $\sigma(\cdot)$ is the sigmoid function. As the size of the dataset $\mathcal{O}$ grows, the empirical distribution of $\mathcal{O}$ converges to the underlying distribution $p^*$, and the reward model $r_\phi$ converges to the true reward model $r^*$.

**Reinforcement Learning from Human Feedback (RLHF)** (Ouyang et al., 2022). The standard RLHF paradigm is composed of three phases: i) supervised fine-tuning, ii) reward modeling, and iii) RL fine-tuning. Using the reward model $r_\phi$ learned from reward modeling, we can then fine-tune the policy $\pi_\theta$ by optimizing the following objective:
$$\max_{\pi_\theta}\; \mathbb{E}_{x\sim\mathcal{O},\, y\sim\pi_\theta(y|x)}\left[r_\phi(x, y)\right] - \beta\,\mathbb{D}_{\mathrm{KL}}\left[\pi_\theta(y|x)\,\|\,\pi_{\mathrm{ref}}(y|x)\right]. \quad (3)$$
In practice, both the language model policy $\pi_\theta$ and the reference policy $\pi_{\mathrm{ref}}$ are typically initialized to the same supervised fine-tuning (SFT) model $\pi_{\mathrm{SFT}}$. Here, $\beta$ is a parameter that controls the strength of the regularization term, and $\mathbb{D}_{\mathrm{KL}}$ represents the KL divergence penalty used to regularize the policy $\pi_\theta$ to be close to $\pi_{\mathrm{ref}}$.
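As a concrete reference point, here is a minimal NumPy sketch (our illustration, with toy numbers) of the pairwise logistic loss in Eq. 2, together with the per-pair DPO loss obtained by substituting the implicit reward $r(x,y) = \beta\log\frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)}$ of Eqs. 4-5 below.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bt_loss(r_w, r_l):
    # Eq. 2: negative log-likelihood of the Bradley-Terry model over pairs,
    # with r_w / r_l the rewards of the preferred / dispreferred responses.
    return -np.mean(np.log(sigmoid(r_w - r_l)))

r_w = np.array([1.2, 0.3, 2.0, 0.8])   # toy rewards
r_l = np.array([0.4, 0.9, 1.5, 0.7])
print(bt_loss(r_w, r_l))

# DPO (Eqs. 4-5 below) reuses the same loss with the implicit reward
# r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x)); toy log-probs:
beta = 0.1
logp_theta_w, logp_ref_w = -12.3, -13.0
logp_theta_l, logp_ref_l = -11.0, -10.5
margin = beta * ((logp_theta_w - logp_ref_w) - (logp_theta_l - logp_ref_l))
print(-np.log(sigmoid(margin)))        # per-pair DPO loss
```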
**Direct Preference Optimization (DPO)** (Rafailov et al., 2023a). DPO offers an alternative to the RL paradigm described above. It establishes a functional mapping between the reward model and the optimal policy under a KL divergence constraint, with the following formulation:
$$r(x, y) = \beta\log\frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)} + \beta\log Z(x), \quad (4)$$
where $Z(x) = \sum_y \pi_{\mathrm{ref}}(y|x)\exp(r(x, y)/\beta)$ is the partition function. By incorporating this reward into the BT model, the DPO objective enables the comparison of response pairs, facilitating the discrimination between preferred and dispreferred actions:
$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{O}}\left[\log\sigma\left(\beta\log\frac{\pi_\theta(y_w \,|\, x)}{\pi_{\mathrm{ref}}(y_w \,|\, x)} - \beta\log\frac{\pi_\theta(y_l \,|\, x)}{\pi_{\mathrm{ref}}(y_l \,|\, x)}\right)\right]. \quad (5)$$

**Distributionally Robust Optimization (DRO)** (Namkoong & Duchi, 2017; Duchi & Namkoong, 2018). DRO provides a strategic framework to effectively mitigate the uncertainty inherent in training data. It achieves this by optimizing for the worst-case expected loss across a set of potential distributions $Q$. These distributions are confined within a robustness radius $\eta$ anchored around the empirical training distribution $Q_0$, and are bounded by a prescribed divergence metric $\mathbb{D}_\phi$. The formal formulation of DRO can be succinctly expressed as follows:
$$\mathcal{L}_{\mathrm{DRO}} = \max_Q\; \mathbb{E}_Q\left[\mathcal{L}(x;\theta)\right], \quad \text{s.t.}\ \mathbb{D}_\phi(Q, Q_0) \le \eta, \quad (6)$$
where $\mathcal{L}(x;\theta)$ represents the training loss for an input $x$. Intuitively, models employing DRO exhibit increased robustness due to the presence of $Q$, which acts as an "adversary", optimizing the model under a distribution set with adversarial perturbations instead of a single training distribution.

3 ANALYZING DPO'S POINTWISE ROBUSTNESS

In this section, we explore DPO's robustness to pointwise noise, analyzing its response to noise to identify key strengths and vulnerabilities. We assess how noise degrades performance and leverage insights from DRO to understand DPO's underlying resilience mechanisms.

3.1 POINTWISE NOISE IMPAIRS DPO PERFORMANCE

We begin by investigating the impact of pointwise noise on DPO through experiments on the IMDB sentiment dataset (Maas et al., 2011). Following the setup in (Havrilla et al., 2023), we fine-tune the GPT-2-large (Radford et al., 2019) model and use SiEBERT (Hartmann et al., 2023), a specialized variant of RoBERTa-large (Liu et al., 2019), for reward calculation. Pointwise noise is introduced exclusively during the SFT stage by incorporating responses generated by the unrefined GPT-2-large model, resulting in lower-quality data for this stage, while the data used in the DPO stage remains unchanged. To assess DPO's robustness to this pointwise noise, we evaluate each algorithm by examining the trade-off between the achieved reward and the KL divergence from the reference policy.

[Figure 2: Impact of pointwise noise on the expected reward frontier and KL divergence in DPO ($\beta = 0.1$).]

Figure 2 reveals that beyond a $\mathrm{KL}(\pi_\theta\|\pi_{\mathrm{ref}})$ threshold of 10.0, both models converge in terms of reward.
Notably, the DPO model trained with high-quality data (blue points) significantly outperforms its low-quality-data counterpart (orange points), highlighting the critical impact of data quality on optimizing model performance.

3.2 POINTWISE ROBUSTNESS IN REWARD MODELING

In Section 3.1, we explore how pointwise noise negatively affects individual instance rewards. To address this issue and enhance the robustness of LLMs, we propose integrating DRO during the reward modeling stage. We define the Reward Modeling DRO (RM-DRO) objective, which optimizes the expected reward under the worst-case noise distribution within a specified ambiguity set:
$$\max_{\pi_\theta}\; \mathbb{E}_{x\sim\mathcal{O},\, y\sim\pi_\theta(y|x)}\left[r_\phi(x, y)\right] \quad \text{s.t.}\ \mathbb{D}_\phi\left(\pi_\theta(y|x),\, \pi_{\mathrm{ref}}(y|x)\right) \le \eta. \quad (7)$$
The direct consequence of pointwise noise is the resultant unreliability of the reference model (SFT). By adopting RM-DRO, we aim to maximize a surrogate objective that accounts for various potential distributions within a robustness radius $\eta$ around the reference distribution $\pi_{\mathrm{ref}}(y|x)$, measured by the distance metric $\mathbb{D}_\phi$. With this formulation, we provide a fresh perspective on DPO.

**A. DPO is Implicitly a Pointwise DRO.**

**Theorem 3.1** (Optimal Reward Function under KL Divergence)**.** _Let the Kullback-Leibler (KL) divergence between the policy $\pi_\theta$ and the reference policy $\pi_{\mathrm{ref}}$ be defined as $\mathbb{D}_{\mathrm{KL}}(\pi_\theta\|\pi_{\mathrm{ref}}) = \int \pi_\theta(x)\log\left(\frac{\pi_\theta(x)}{\pi_{\mathrm{ref}}(x)}\right)dx$. Optimizing the RM-DRO objective as defined in Equation (7) yields an optimal reward $r_{\mathrm{KL}}(x, y)$ given by:_
$$r_{\mathrm{KL}}(x, y) = \beta^*(\eta)\log\frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)} - \alpha. \quad (8)$$
_Here, $\alpha, \beta$ are Lagrange multipliers, and $\beta^*(\eta)$ denotes the optimal value of $\beta$ that minimizes Equation (7), acting as the regularization coefficient in DPO. By deriving the optimal value of $\alpha$, given by:_
$$\alpha^* = -\beta\log\mathbb{E}_{x\sim\mathcal{O},\, y\sim\pi_{\mathrm{ref}}}\left[\exp\left(\frac{r_\theta(y|x)}{\beta}\right)\right], \quad (9)$$
_Equation 8 can be re-expressed to match the ultimate form of the reward function in Equation 4._

Please refer to Appendix B.1 for detailed proofs and Appendix B.2 for the formal proof. For a broader discussion on optimal reward functions under general $\phi$-divergences, see Appendix C.1. Consistent with the reward function formulation in Rafailov et al. (2023a), Theorem 3.1 not only reaffirms established results but also introduces several novel insights, as outlined below:

[Figure 3: (a) Comparative analysis of the effect of pointwise noise on the expected reward frontier for different $\beta$ values on the IMDB dataset. (b) Comparative analysis of the effect of pointwise noise on the win rate for different $\beta$ values on the HH dataset. The stars indicate the optimal $\beta$ selection for the corresponding pointwise noise ratio.]
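Before turning to these insights, a toy verification of the reward-policy correspondence in Theorem 3.1 and Eq. 4 (our own illustration, with assumed toy numbers): over a small discrete response set, the optimal tilted policy $\pi_\theta \propto \pi_{\mathrm{ref}}\exp(r/\beta)$ recovers the reward exactly through Eq. 4.

```python
import numpy as np

beta = 0.1
pi_ref = np.array([0.5, 0.3, 0.2])        # toy reference policy over 3 responses
r = np.array([0.8, -0.2, 0.4])            # toy rewards

# Optimal policy of the KL-regularized objective: pi_theta ∝ pi_ref * exp(r/beta)
Z = np.sum(pi_ref * np.exp(r / beta))     # partition function Z(x) from Eq. 4
pi_theta = pi_ref * np.exp(r / beta) / Z

# Eq. 4 recovers the reward from the policy pair:
r_recovered = beta * np.log(pi_theta / pi_ref) + beta * np.log(Z)
print(np.allclose(r_recovered, r))        # True
```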
**Why DPO is Robust to Pointwise Noise.** We propose that the reference distribution closely mirrors the empirical training distribution, given the pre-training step (SFT) common to both RLHF and DPO methods. This ensures that the reference distribution in the DPO phase accurately reflects the training-data noise. In terms of DRO, while the reference model $\pi_{\mathrm{ref}}$ may not be entirely _reliable_, the implicit robust framework of DPO counters data perturbations effectively. Specifically, the "worst-case distribution" is defined as the distribution that maximizes risk within established divergence constraints, analogous to an adversarial noise model in DRO. Varying $\beta$ enables DPO to exhibit a varying search space for a better $\pi_\theta$, leading to improved performance. For more discussion about the connection between DPO and DRO, please refer to Appendix C.2. Moreover, the incorporation of DRO provides a new interpretation of the coefficient $\beta$ in DPO, transforming it from a mere heuristic design into a "noise reflector". We provide Lemma 3.2 to disclose the relationship between $\beta$ and $\eta$.

**B. The Optimal Value of $\beta$ Reflects the Noise within the SFT Model.**

**Lemma 3.2.** _(Faury et al., 2020, Lemma 5) The optimal $\beta^*(\eta)$ in DPO is monotonically decreasing with respect to $\eta$ and obeys the following relationship:_
$$\beta^*(\eta) = \sqrt{\mathbb{V}_{\pi_{\mathrm{ref}}}\left[r(x, y)\right]/\,2\eta}, \quad (10)$$
_where $\mathbb{V}_{\pi_{\mathrm{ref}}}[r(x, y)] = \sum_y \pi_{\mathrm{ref}}(y|x)\big(r(x, y) - \sum_y \pi_{\mathrm{ref}}(y|x)\,r(x, y)\big)^2$ denotes the variance of the reward model $r(x, y)$ under the reference distribution $\pi_{\mathrm{ref}}$._

Lemma 3.2 elucidates the inverse correlation between the parameter $\beta$ and the robustness radius $\eta$. Specifically, as noise within the model increases, the required search space expands, necessitating a larger $\eta$ and consequently a smaller optimal $\beta$. To empirically validate this relationship, we conducted experiments on the IMDB dataset, as outlined in Section 3.1. In these experiments, the noise ratio is controlled by the proportion of low-quality pairs $(y_w, y_l)$ introduced into the training data, generated by the unrefined GPT-2 model. Figure 3a shows that models trained with lower $\beta$ values (e.g., 0.01) outperform those with higher $\beta$ values (e.g., 0.1) when trained on 100% low-quality data. This is because a lower $\beta$ allows for a larger search space to counteract significant pointwise noise in the SFT model. We also conducted experiments on the HH dataset, injecting pointwise noise during the SFT phase by incorporating rejected responses into the training samples. Importantly, during the DPO phase, the positive and negative samples remained consistent, ensuring noise was introduced only during SFT. The noise ratio is determined by the proportion of rejected responses used as training samples during SFT. As shown in Figure 3b, the optimal value of $\beta$ decreases as the noise ratio increases, indicating that higher noise levels in SFT require a smaller $\beta$ for optimal performance. For detailed experimental settings and procedures for both datasets, please refer to Appendix C.3, where more comprehensive explanations are provided.
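A tiny numeric illustration of Eq. 10 (our own sketch, with assumed toy values): for a fixed reward variance under $\pi_{\mathrm{ref}}$, increasing the robustness radius $\eta$ shrinks the optimal $\beta^*$, matching the inverse relationship of Lemma 3.2.

```python
import numpy as np

def beta_star(pi_ref, r, eta):
    # Eq. 10: beta*(eta) = sqrt( Var_{pi_ref}[r] / (2 eta) )
    mean_r = np.sum(pi_ref * r)
    var_r = np.sum(pi_ref * (r - mean_r) ** 2)
    return np.sqrt(var_r / (2.0 * eta))

pi_ref = np.array([0.5, 0.3, 0.2])        # toy reference policy
r = np.array([0.8, -0.2, 0.4])            # toy rewards
for eta in [0.1, 0.5, 2.0]:               # larger robustness radius ...
    print(eta, beta_star(pi_ref, r, eta)) # ... yields a smaller optimal beta
```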
# ON THE EXPRESSIVENESS OF RATIONAL RELU NEURAL NETWORKS WITH BOUNDED DEPTH

**Gennadiy Averkov** BTU Cottbus-Senftenberg, averkov@b-tu.de
**Christopher Hojny** TU Eindhoven, c.hojny@tue.nl
**Maximilian Merkert** TU Braunschweig, m.merkert@tu-braunschweig.de

ABSTRACT

To confirm that the expressive power of ReLU neural networks grows with their depth, the function $F_n = \max\{0, x_1, \ldots, x_n\}$ has been considered in the literature. A conjecture by Hertrich, Basu, Di Summa, and Skutella [NeurIPS 2021] states that any ReLU network that exactly represents $F_n$ has at least $\lceil\log_2(n+1)\rceil$ hidden layers. The conjecture has recently been confirmed for networks with integer weights by Haase, Hertrich, and Loho [ICLR 2023]. We follow up on this line of research and show that, within ReLU networks whose weights are decimal fractions, $F_n$ can only be represented by networks with at least $\lceil\log_3(n+1)\rceil$ hidden layers. Moreover, if all weights are $N$-ary fractions, then $F_n$ can only be represented by networks with at least $\Omega\left(\frac{\ln n}{\ln\ln N}\right)$ layers. These results are a partial confirmation of the above conjecture for rational ReLU networks, and provide the first non-constant lower bound on the depth of practically relevant ReLU networks.

1 INTRODUCTION

An important aspect of designing neural network architectures is to understand which functions can be exactly represented by a specific architecture. Here, we say that a neural network, transforming $n$ input values into a single output value, _(exactly) represents_ a function $f: \mathbb{R}^n \to \mathbb{R}$ if, for every input $x \in \mathbb{R}^n$, the neural network reports output $f(x)$. Understanding the expressiveness of neural network architectures can help to, among others, derive algorithms (Arora et al., 2018; Khalife et al., 2024; Hertrich & Sering, 2024) and complexity results (Goel et al., 2021; Froese et al., 2022; Bertschinger et al., 2023; Froese & Hertrich, 2023) for training networks. One of the most popular classes of neural networks are feedforward neural networks with ReLU activation (Goodfellow et al., 2016). Their capability to _approximate_ functions is well-studied and has led to several so-called universal approximation theorems, e.g., see (Cybenko, 1989; Hornik, 1991). For example, from a result by Leshno et al. (1993) it follows that any continuous function can be approximated arbitrarily well by ReLU networks with a single hidden layer. In contrast to approximating functions, the understanding of which functions can be _exactly_ represented by a neural network is much less mature. A central result by Arora et al. (2018) states that the class of functions that are exactly representable by ReLU networks is the class of continuous piecewise linear (CPWL) functions. In particular, they show that every CPWL function with $n$ inputs can be represented by a ReLU network with $\lceil\log_2(n+1)\rceil$ hidden layers. It is an open question, though, for which functions this number of hidden layers is also necessary. An active research field is therefore to derive lower bounds on the number of required hidden layers. Arora et al. (2018) show that two hidden layers are necessary and sufficient to represent $\max\{0, x_1, x_2\}$ by a ReLU network. However, there is no single function which is known to require more than two hidden layers in an exact representation. In fact, Hertrich et al. (2021) formulate the following conjecture.
**Conjecture 1.** _For every integer $k$ with $1 \le k \le \lceil\log_2(n+1)\rceil$, there exists a function $f: \mathbb{R}^n \to \mathbb{R}$ that can be represented by a ReLU network with $k$ hidden layers, but not with $k-1$ hidden layers._

Hertrich et al. (2021) also show that this conjecture is equivalent to the statement that any ReLU network representing $\max\{0, x_1, \ldots, x_{2^k}\}$ requires $k+1$ hidden layers. That is, if the conjecture holds true, the lower bound of $\lceil\log_2(n+1)\rceil$ by Arora et al. (2018) is tight. While Conjecture 1 is open in general, it has been confirmed for two subclasses of ReLU networks, namely networks all of whose weights only take integer values (Haase et al., 2023) and, for $n = 4$, so-called $H$-conforming neural networks (Hertrich et al., 2021). In this article, we follow this line of research by deriving a non-constant lower bound on the number of hidden layers in ReLU networks all of whose weights are $N$-ary fractions. Recall that a rational number is an $N$-ary fraction if it can be written as $\frac{z}{N^t}$ for some integer $z$ and non-negative integer $t$.

**Theorem 2.** _Let $n$ and $N$ be positive integers, and let $p$ be a prime number that does not divide $N$. Every ReLU network with weights being $N$-ary fractions requires at least $\lceil\log_p(n+1)\rceil$ hidden layers to exactly represent the function $\max\{0, x_1, \ldots, x_n\}$._

**Corollary 3.** _Every ReLU network all of whose weights are decimal fractions requires at least $\lceil\log_3(n+1)\rceil$ hidden layers to exactly represent $\max\{0, x_1, \ldots, x_n\}$._

While Theorem 2 does not resolve Conjecture 1, because it makes no statement about general real weights, note that in most applications floating-point arithmetic is used (IEEE, 2019). That is, in neural network architectures used in practice, one is actually restricted to weights being $N$-ary fractions. Moreover, when quantization (see, e.g., Gholami et al., 2022) is used to make neural networks more efficient in terms of memory and speed, weights can become low-precision decimal numbers, cf., e.g., (Nagel et al., 2020). Consequently, Theorem 2 provides, to the best of our knowledge, the first non-constant lower bound on the depth of practically relevant ReLU networks. Relying on Theorem 2, we also derive the following lower bound.

**Theorem 4.** _There is a constant $C > 0$ such that, for all integers $n, N \ge 3$, every ReLU network with weights being $N$-ary fractions that represents $\max\{0, x_1, \ldots, x_n\}$ has depth at least $C \cdot \frac{\ln n}{\ln\ln N}$._

Theorem 4, in particular, shows that there is no constant-depth ReLU network that exactly represents $\max\{0, x_1, \ldots, x_n\}$ if all weights are rational numbers all having a common denominator $N$. In view of the integral networks considered by Haase et al. (2023), we stress that our results do not simply follow by scaling integer weights to rationals, which has already been discussed in Haase et al. (2023, Sec. 1.3). We therefore extend the techniques by Haase et al. (2023) to make use of number theory and polyhedral combinatorics to prove our results, which cover standard number representations of rationals on a computer.
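For intuition about the upper-bound side that the conjecture would make tight, here is a minimal sketch (our illustration, not from the paper) of the classical construction behind the $\lceil\log_2(n+1)\rceil$ bound of Arora et al. (2018): $\max\{a, b\} = a + \mathrm{ReLU}(b - a)$, applied along a balanced binary tree.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def pairwise_max(a, b):
    # max(a, b) = a + ReLU(b - a): one hidden ReLU layer plus affine maps.
    return a + relu(b - a)

def F_n(x):
    # F_n = max{0, x_1, ..., x_n} via a balanced tree of pairwise maxima;
    # the tree needs ceil(log2(n+1)) levels of ReLU computations, matching
    # the upper bound of Arora et al. (2018).
    vals = [0.0] + list(x)
    while len(vals) > 1:
        vals = [pairwise_max(vals[i], vals[i + 1])
                for i in range(0, len(vals) - 1, 2)] + \
               ([vals[-1]] if len(vals) % 2 else [])
    return vals[0]

x = np.array([-1.0, 3.5, 2.0, -0.5])
print(F_n(x), max(0.0, *x))  # both 3.5; here n = 4 needs ceil(log2(5)) = 3 hidden layers
```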
**Outline** To prove our main results, Theorems 2 and 4, the rest of the paper is structured as follows. First, we provide some basic definitions regarding neural networks that we use throughout the article, and we provide a brief overview of related literature. Section 2 then provides a short summary of our overall strategy for proving Theorems 2 and 4, as well as some basic notation. The concepts of polyhedral theory and volumes needed in our proof strategy are detailed in Section 2.1, whereas Section 2.2 recalls a characterization of functions representable by a ReLU neural network from the literature, which forms the basis of our proofs. In Section 3, we derive various properties of polytopes associated with functions representable by a ReLU neural network, which ultimately allows us to prove our main results in Section 3.3. The paper is concluded in Section 4.

**Basic Notation for ReLU Networks** To describe the neural networks considered in this article, we introduce some notation. We denote by $\mathbb{Z}$, $\mathbb{N}$, and $\mathbb{R}$ the sets of integer, positive integer, and real numbers, respectively. Moreover, $\mathbb{Z}_+$ and $\mathbb{R}_+$ denote the sets of non-negative integers and reals, respectively. Let $k \in \mathbb{Z}_+$. A _feedforward neural network with rectified linear units (ReLU)_ (or simply _ReLU network_ in the following) with $k+1$ layers can be described by $k+1$ affine transformations $t^{(1)}: \mathbb{R}^{n_0} \to \mathbb{R}^{n_1}, \ldots, t^{(k+1)}: \mathbb{R}^{n_k} \to \mathbb{R}^{n_{k+1}}$. It _exactly represents_ a function $f: \mathbb{R}^n \to \mathbb{R}$ if and only if $n_0 = n$, $n_{k+1} = 1$, and the alternating composition
$$t^{(k+1)} \circ \sigma \circ t^{(k)} \circ \sigma \circ \cdots \circ t^{(2)} \circ \sigma \circ t^{(1)}$$
coincides with $f$, where, slightly overloading notation, $\sigma$ denotes the component-wise application of the _ReLU activation function_ $\sigma: \mathbb{R} \to \mathbb{R}$, $\sigma(x) = \max\{0, x\}$, to vectors of any dimension. For each $i \in \{1, \ldots, k+1\}$ and $x \in \mathbb{R}^{n_{i-1}}$, let $t^{(i)}(x) = A^{(i)}x + b^{(i)}$ for some $A^{(i)} \in \mathbb{R}^{n_i \times n_{i-1}}$ and $b^{(i)} \in \mathbb{R}^{n_i}$. The entries of $A^{(i)}$ are called _weights_ and those of $b^{(i)}$ are called _biases_ of the network. The network's _depth_ is $k+1$, and the _number of hidden layers_ is $k$.

The set of all functions $\mathbb{R}^n \to \mathbb{R}$ that can be represented exactly by a ReLU network of depth $k+1$ is denoted by $\mathrm{ReLU}_n(k)$. Moreover, if $R \subseteq \mathbb{R}$ is a ring, we denote by $\mathrm{ReLU}_n^R(k)$ the set of all functions $\mathbb{R}^n \to \mathbb{R}$ that can be represented exactly by a ReLU network of depth $k+1$ all of whose weights are contained in $R$. Throughout this paper, we will mainly work with the rings $\mathbb{Z}$, $\mathbb{R}$, or the ring of $N$-ary fractions. The set $\mathrm{ReLU}_n^R(0)$ is the set of affine functions $f(x_1, \ldots, x_n) = b + a_1x_1 + \cdots + a_nx_n$ with $b \in \mathbb{R}$ and $a_1, \ldots, a_n \in R$. It can be directly seen from the definition of ReLU networks that, for $k \in \mathbb{N}$, one has $f \in \mathrm{ReLU}_n^R(k)$ if and only if $f(x) = u_0 + u_1\max\{0, g_1(x)\} + \cdots + u_m\max\{0, g_m(x)\}$ holds for some $m \in \mathbb{N}$, $u_0 \in \mathbb{R}$, $u_1, \ldots, u_m \in R$, and functions $g_1, \ldots, g_m \in \mathrm{ReLU}_n^R(k-1)$.

**Related Literature** Regarding the expressiveness of ReLU networks, Hertrich et al. (2021) show that four layers are needed to exactly represent
**Related Literature** Regarding the expressiveness of ReLU networks, Hertrich et al. (2021) show that four layers are needed to exactly represent $\max\{0, x_1, \dots, x_4\}$ if the network satisfies the technical condition of being $H$-conforming. By restricting the weights of a ReLU network to be integer, Haase et al. (2023) prove that $\mathrm{ReLU}_n^{\mathbb{Z}}(k-1) \subsetneq \mathrm{ReLU}_n^{\mathbb{Z}}(k)$ for every $k \leq \lceil \log_2(n+1) \rceil$. In particular, $\max\{0, x_1, \dots, x_{2^k}\} \notin \mathrm{ReLU}_{2^k}^{\mathbb{Z}}(k)$. If the activation function is changed from ReLU to $x \mapsto \mathbb{1}_{\{x > 0\}}$, Khalife et al. (2024) show that two hidden layers are both necessary and sufficient for all functions representable by such a network. If one is only interested in approximating a function, Safran et al. (2024) show that $\max\{0, x_1, \dots, x_n\}$ can be approximated arbitrarily well by $\mathrm{ReLU}_n^{\mathbb{Z}}(2)$-networks of width $n(n+1)$ with respect to the $L_2$ norm for continuous distributions. By increasing the depth of these networks, they also derive upper bounds on the required width in such an approximation. The results by Safran et al. (2024) belong to the class of so-called universal approximation theorems, which describe the ability to approximate classes of functions by specific types of neural networks; see, e.g., (Cybenko, 1989; Hornik, 1991; Barron, 1993; Pinkus, 1999; Kidger & Lyons, 2020). However, Vardi & Shamir (2020) show that there are significant theoretical barriers to depth-separation results for polynomially-sized $\mathrm{ReLU}_n(k)$-networks for $k \geq 3$, by establishing links to the separation of threshold circuits as well as to so-called natural-proof barriers. When taking specific data into account, Lee et al. (2024) derive lower and upper bounds on both the depth and width of a neural network that correctly classifies a given data set. More general investigations of the relation between the width and depth of a neural network are discussed, among others, by Arora et al. (2018); Eldan & Shamir (2016); Hanin (2019); Raghu et al. (2017); Safran & Shamir (2017); Telgarsky (2016).

2 PROOF STRATEGY AND THEORETICAL CONCEPTS

To prove Theorems 2 and 4, we extend the ideas of Haase et al. (2023). We therefore provide a very concise summary of the arguments of Haase et al. (2023). Afterwards, we briefly mention the main ingredients needed in our proofs, which are detailed in the following subsections.

A central ingredient for the results by Haase et al. (2023) is a polyhedral characterization of all functions in $\mathrm{ReLU}_n(k)$, which has been derived by Hertrich (2022). This characterization links functions representable by a ReLU network to so-called support functions of polytopes $P \subseteq \mathbb{R}^n$ all of whose vertices belong to $\mathbb{Z}^n$, so-called _lattice polytopes_. It turns out that the function $\max\{0, x_1, \dots, x_n\}$ in Theorems 2 and 4 can be expressed as the support function of a particular lattice polytope $P_n \subseteq \mathbb{R}^n$. By using a suitably scaled version $\mathrm{Vol}_n$ of the classical Euclidean volume in $\mathbb{R}^n$, one can achieve $\mathrm{Vol}_n(P) \in \mathbb{Z}$ for all lattice polytopes $P \subseteq \mathbb{R}^n$. Haase et al. (2023) then show that, if the support function $h_P$ of a lattice polytope $P \subseteq \mathbb{R}^n$ can be exactly represented by a ReLU network with $k$ hidden layers, then all faces of $P$ of dimension at least $2^k$ have an even normalized volume. For $n = 2^k$, however, $\mathrm{Vol}_n(P_n)$ is odd. Hence, its support function cannot be represented by a ReLU network with $k$ hidden layers.
We show that the arguments of Haase et al. (2023) can be adapted by replacing the divisor 2 with an arbitrary prime number $p$. Another crucial insight is that the theory of mixed volumes can be used to analyze the behavior of $\mathrm{Vol}_n(A+B)$ for the Minkowski sum $A + B := \{a + b : a \in A,\, b \in B\}$ of lattice polytopes $A, B \subset \mathbb{R}^n$. The Minkowski-sum operation is also involved in the polyhedral characterization of Hertrich (2022), and so it is also used by Haase et al. (2023), who provide a version of Theorem 2 for integer weights. They, however, do not directly use mixed volumes. A key observation used in our proofs, obtained by a direct application of mixed volumes, is that the map associating to a lattice polytope $P$ the coset of $\mathrm{Vol}_n(P)$ modulo a prime number $p$ is additive when $n$ is a power of $p$. Combining these ingredients yields Theorems 2 and 4.

**Some Basic Notation** The standard basis vectors in $\mathbb{R}^n$ are denoted by $e_1, \dots, e_n$, whereas $0$ denotes the null vector in $\mathbb{R}^n$. Throughout the article, all vectors $x \in \mathbb{R}^n$ are column vectors, and we denote the transposed vector by $x^\top$. If $x$ is contained in the integer lattice $\mathbb{Z}^n$, we call it a _lattice point_. For vectors $x, y \in \mathbb{R}^n$, their scalar product is given by $x^\top y$. For $m \in \mathbb{N}$, we write $[m]$ for the set $\{1, \dots, m\}$. The convex-hull operator is denoted by $\mathrm{conv}$, and the base-$b$ logarithm by $\log_b$, while the natural logarithm is denoted by $\ln$. The central function of this article is $\max\{0, x_1, \dots, x_n\}$, which we abbreviate by $F_n$.

2.1 BASIC PROPERTIES OF POLYTOPES AND LATTICE POLYTOPES

As outlined above, the main tools needed to prove Theorems 2 and 4 are polyhedral theory and different concepts of volumes. This section summarizes the main concepts and properties that we need in our argumentation in Section 3. For more background, we refer the reader to the monographs (Beck & Robins, 2020; Hug & Weil, 2020; Schneider, 2014).

**Polyhedra, Lattice Polyhedra, and Their Normalized Volume** A _polytope_ $P \subseteq \mathbb{R}^n$ is the convex hull $\mathrm{conv}(p_1, \dots, p_m)$ of finitely many points $p_1, \dots, p_m \in \mathbb{R}^n$. We introduce the family $\mathcal{P}(S) := \{\mathrm{conv}(p_1, \dots, p_m) : m \in \mathbb{N},\, p_1, \dots, p_m \in S\}$ of all non-empty polytopes with vertices in $S \subseteq \mathbb{R}^n$. Thus, $\mathcal{P}(\mathbb{R}^n)$ is the family of all polytopes in $\mathbb{R}^n$ and $\mathcal{P}(\mathbb{Z}^n)$ is the family of all _lattice polytopes_ in $\mathbb{R}^n$. For $d \in \{0, \dots, n\}$, we also introduce the family $\mathcal{P}_d(S) := \{P \in \mathcal{P}(S) : \dim(P) \leq d\}$ of polytopes of dimension at most $d$, where the dimension of a polytope $P$ is defined as the dimension of its affine hull, i.e., the smallest affine subspace of $\mathbb{R}^n$ containing $P$.

The _Euclidean volume_ $\mathrm{vol}_n$ on $\mathbb{R}^n$ is the $n$-dimensional Lebesgue measure, scaled so that $\mathrm{vol}_n$ is equal to $1$ on the unit cube $[0,1]^n$. Note that measure-theoretic subtleties play no role in our context since we restrict the use of $\mathrm{vol}_n$ to $\mathcal{P}(\mathbb{R}^n)$. The _normalized volume_ $\mathrm{Vol}_n$ in $\mathbb{R}^n$ is the $n$-dimensional Lebesgue measure normalized so that $\mathrm{Vol}_n$ is equal to $1$ on the _standard simplex_ $\Delta_n := \mathrm{conv}(0, e_1, \dots, e_n)$. Clearly, $\mathrm{Vol}_n = n! \cdot \mathrm{vol}_n$, and $\mathrm{Vol}_n$ takes non-negative integer values on lattice polytopes.
**Support Functions** For a polytope $P = \mathrm{conv}(p_1, \dots, p_m) \subseteq \mathbb{R}^n$, its _support function_ is $h_P(x) := \max\{x^\top y : y \in P\}$, and it is well-known that $h_P(x) = \max\{p_1^\top x, \dots, p_m^\top x\}$. Consequently, $\max\{0, x_1, \dots, x_n\}$ from Theorems 2 and 4 is the support function of $\Delta_n$.

**Mixed Volumes** For sets $A, B \subseteq \mathbb{R}^n$, we introduce the _Minkowski sum_ $A + B := \{a + b : a \in A,\, b \in B\}$ and the multiplication $\lambda A = \{\lambda a : a \in A\}$ of $A$ by a non-negative factor $\lambda \in \mathbb{R}_+$. For an illustration of the Minkowski sum, we refer to Figure 2. Note that, if $S \in \{\mathbb{R}^n, \mathbb{Z}^n\}$ and $A, B \in \mathcal{P}(S)$, then $A + B \in \mathcal{P}(S)$, too. If $A$ and $B$ are (lattice) polytopes, then $A + B$ is also a (lattice) polytope, and the support functions of $A$, $B$, and $A + B$ are related by $h_{A+B} = h_A + h_B$. If $(G, +)$ is an Abelian semi-group (i.e., a set with an associative and commutative binary operation), we call a map $\varphi: \mathcal{P}(\mathbb{R}^n) \to G$ _Minkowski additive_ if the Minkowski addition on $\mathcal{P}(\mathbb{R}^n)$ is preserved by $\varphi$ in the sense that $\varphi(A + B) = \varphi(A) + \varphi(B)$ holds for all $A, B \in \mathcal{P}(\mathbb{R}^n)$. The following is a classical result of Minkowski.

**Theorem 5** (see, e.g., (Schneider, 2014, Ch. 5)). _There exists a unique functional, called the_ mixed volume_,_ $\mathrm{V}: \mathcal{P}(\mathbb{R}^n)^n \to \mathbb{R}$, _with the following properties valid for all_ $P_1, \dots, P_n, A, B \in \mathcal{P}(\mathbb{R}^n)$ _and_ $\alpha, \beta \in \mathbb{R}_+$_:_

_(a)_ $\mathrm{V}$ _is invariant under permutations, i.e.,_ $\mathrm{V}(P_1, \dots, P_n) = \mathrm{V}(P_{\sigma(1)}, \dots, P_{\sigma(n)})$ _for every permutation_ $\sigma$ _on_ $[n]$_._

_(b)_ $\mathrm{V}$ _is Minkowski linear in all input parameters, i.e., for all_ $i \in [n]$_, it holds that_ $\mathrm{V}(P_1, \dots, P_{i-1}, \alpha A + \beta B, P_{i+1}, \dots, P_n) = \alpha \mathrm{V}(P_1, \dots, P_{i-1}, A, P_{i+1}, \dots, P_n) + \beta \mathrm{V}(P_1, \dots, P_{i-1}, B, P_{i+1}, \dots, P_n)$_._

_(c)_ $\mathrm{V}$ _is equal to_ $\mathrm{Vol}_n$ _on the diagonal, i.e.,_ $\mathrm{V}(A, \dots, A) = \mathrm{Vol}_n(A)$_._

We refer to Chapter 5 of the monograph by Schneider (2014) on the Brunn-Minkowski theory for more information on mixed volumes, where also an explicit formula for the mixed volume is presented. Our definition of the mixed volume differs by a factor of $n!$ from the definition in Schneider (2014) since we use the normalized volume $\mathrm{Vol}_n$ rather than the Euclidean volume $\mathrm{vol}_n$ to fix $\mathrm{V}(P_1, \dots, P_n)$ in the case $P_1 = \dots = P_n$. Our way of introducing mixed volumes is customary in the context of algebraic geometry. It is known that, for this normalization, $\mathrm{V}(P_1, \dots, P_n) \in \mathbb{Z}_+$ when $P_1, \dots, P_n$ are lattice polytopes; see, for example, (Maclagan & Sturmfels, 2015, Ch. 4.6). From the defining properties, one can immediately see that, for $A, B \in \mathcal{P}(\mathbb{R}^n)$, one has the analogue of the binomial formula, which we prove in Appendix A.2 for the sake of completeness:

$$\mathrm{Vol}_n(A + B) = \sum_{i=0}^{n} \binom{n}{i} \, \mathrm{V}(\underbrace{A, \dots, A}_{i}, \underbrace{B, \dots, B}_{n-i}). \quad (1)$$

**Normalized Volume of Non-Full-Dimensional Polytopes** So far, we have introduced the normalized volume $\mathrm{Vol}_n: \mathcal{P}(\mathbb{R}^n) \to \mathbb{R}_+$; i.e., if $P \in \mathcal{P}(\mathbb{R}^n)$ is not full-dimensional, then $\mathrm{Vol}_n(P) = 0$.
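Formula (1) can be checked numerically in small dimensions. The following sketch (our own illustration; the helper names and the use of scipy are assumptions of the sketch, not part of the paper) verifies the case $n = 2$, where (1) reads $\mathrm{Vol}_2(A+B) = \mathrm{Vol}_2(A) + 2\mathrm{V}(A,B) + \mathrm{Vol}_2(B)$, so the mixed volume can be recovered by polarization and is indeed a non-negative integer for lattice polygons:

```python
import numpy as np
from scipy.spatial import ConvexHull

def normalized_volume(points):
    """Vol_2 = 2! * Euclidean area of the convex hull of `points`."""
    return 2.0 * ConvexHull(points).volume  # in 2D, .volume is the area

def minkowski_sum(A, B):
    """Vertex candidates of A + B: all pairwise sums (hull taken later)."""
    return np.array([a + b for a in A for b in B])

# Two lattice polygons: the standard simplex Delta_2 and the unit square.
A = np.array([[0, 0], [1, 0], [0, 1]])            # Vol_2(A) = 1
B = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])    # Vol_2(B) = 2

vA, vB = normalized_volume(A), normalized_volume(B)
vAB = normalized_volume(minkowski_sum(A, B))
mixed = (vAB - vA - vB) / 2.0   # polarization via formula (1)

print(vA, vB, vAB, mixed)       # 1.0 2.0 7.0 2.0 -> V(A, B) is an integer
```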
We also associate with a polytope $P \in \mathcal{P}_d(\mathbb{Z}^n)$ of dimension at most $d$ an appropriately normalized $d$-dimensional volume by extending the use of $\mathrm{Vol}_d: \mathcal{P}(\mathbb{Z}^d) \to \mathbb{Z}_+$ to $\mathrm{Vol}_d: \mathcal{P}_d(\mathbb{Z}^n) \to \mathbb{Z}_+$. In the case $\dim(P) < d$, we define $\mathrm{Vol}_d(P) = 0$. If $d = 0$, let $\mathrm{Vol}_d(P) = 1$. In the non-degenerate case $d = \dim(P) \in \mathbb{N}$, we fix $Y$ to be the affine hull of $P$ and consider a bijective affine map $T: \mathbb{R}^d \to Y$ satisfying $T(\mathbb{Z}^d) = Y \cap \mathbb{Z}^n$. For such a choice of $T$, we have $T^{-1}(P) \in \mathcal{P}(\mathbb{Z}^d)$. It turns out that the $d$-dimensional volume of $T^{-1}(P)$ depends only on $P$ and not on $T$, so that we can define $\mathrm{Vol}_d(P) := \mathrm{Vol}_d(T^{-1}(P))$. Based on (Beck & Robins, 2020, Corollary 3.17 and §5.4), there is the following intrinsic way of introducing $\mathrm{Vol}_d(P)$. Let $G(P)$ denote the number of lattice points in $P$. Then, for $t \in \mathbb{Z}_+$, one has

$$\mathrm{Vol}_d(P) := d! \cdot \lim_{t \to \infty} \frac{1}{t^d} \, G(tP).$$

**Remark 6.** _For every $d$-dimensional affine subspace $Y \subseteq \mathbb{R}^n$ which is affinely spanned by $d+1$ lattice points, we can define $\mathrm{Vol}_d$ for every polytope $P \in \mathcal{P}(Y)$, which is not necessarily a lattice polytope, by the same formula $\mathrm{Vol}_d(P) := \mathrm{Vol}_d(T^{-1}(P))$, using an auxiliary map $T: \mathbb{R}^d \to Y$ described above. Consequently, by replacing the dimension $n$ with $d$ and the family of polytopes $\mathcal{P}(\mathbb{R}^n)$ with the family $\mathcal{P}(Y)$ in Minkowski's Theorem 5, we can introduce the notion of mixed volumes for polytopes in $\mathcal{P}(Y)$. More specifically, we will make use of the mixed volumes of lattice polytopes in $\mathcal{P}(Y \cap \mathbb{Z}^n)$._

**Normalized Volume of the Affine Join** The following proposition, borrowed from Haase et al. (2023), addresses the divisibility properties of the convex hull of the union of lattice polytopes that lie in skew affine subspaces.

**Proposition 7** (Haase et al. 2023, Lemma 6). _Let $A, B \in \mathcal{P}(\mathbb{Z}^n)$ be polytopes of dimensions $i \in \mathbb{Z}_+$ and $j \in \mathbb{Z}_+$, respectively, such that $P := \mathrm{conv}(A \cup B)$ is of dimension $i + j + 1$. Then $\mathrm{Vol}_{i+j+1}(P)$ is divisible by $\mathrm{Vol}_i(A) \, \mathrm{Vol}_j(B)$. In particular, if $i = 0$, then $P$ is a pyramid over $B$ whose normalized volume $\mathrm{Vol}_{1+j}(P)$ is divisible by the normalized volume $\mathrm{Vol}_j(B)$ of its base $B$._

For an example illustration, see Figure 1. Since $P_1$ and $P_2$ lie in skew affine subspaces, Proposition 7 applies. Indeed, $\mathrm{Vol}_3(\mathrm{conv}(P_1 \cup P_2)) = 12$ is divisible by $\mathrm{Vol}_2(P_1) = 6$ (and $\mathrm{Vol}_0(P_2) = 1$).

2.2 A POLYHEDRAL CRITERION FOR FUNCTIONS REPRESENTABLE WITH $k$ HIDDEN LAYERS

Next to the geometric concepts that we discussed before, the second main building block of our proofs is the polyhedral characterization of $\mathrm{ReLU}_n(k)$ by Hertrich (2022). In the following, we introduce the necessary concepts and present Hertrich's characterization.
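As a quick check of the lattice-point-counting formula for $\mathrm{Vol}_d$ given above (before Remark 6): for the standard simplex $\Delta_2$ one has $G(t\Delta_2) = (t+1)(t+2)/2$, so $2! \cdot G(t\Delta_2)/t^2 \to 1 = \mathrm{Vol}_2(\Delta_2)$. A minimal sketch (our own illustration, not from the paper):

```python
from math import factorial

def lattice_points_in_dilated_simplex(t):
    """G(t * Delta_2): lattice points (x, y) with x, y >= 0 and x + y <= t."""
    return sum(1 for x in range(t + 1) for y in range(t + 1 - x))

d = 2
for t in (10, 100, 1000):
    estimate = factorial(d) * lattice_points_in_dilated_simplex(t) / t**d
    print(t, estimate)  # 1.32, 1.0302, 1.003002 -> converges to Vol_2 = 1
```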
Idea Generation Category: 2 (Direct Enhancement)
uREg3OHjLL
# Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM

**Zheng Wei Lim**♥∗ **Nitish Gupta**♦ **Honglin Yu**♦ **Trevor Cohn**♥,♦
♥ The University of Melbourne ♦ Google

Abstract

Multilingual large language models (LLMs) are great translators, but this is largely limited to high-resource languages. For many LLMs, translating in and out of low-resource languages remains a challenging task. To maximize data efficiency in this low-resource setting, we introduce Mufu, which includes a selection of automatically generated multilingual candidates and an instruction to correct inaccurate translations in the prompt. Mufu prompts turn a translation task into a postediting one, and seek to harness the LLM's reasoning capability with auxiliary translation candidates, from which the model is required to assess the input quality, align the semantics cross-lingually, copy from relevant inputs and override instances that are incorrect. Our experiments on En-XX translations over the Flores-200 dataset show LLMs finetuned against Mufu-style prompts are robust to poor quality auxiliary translation candidates, achieving performance superior to the NLLB 1.3B distilled model in 64% of low- and very-low-resource language pairs. We then distill these models to reduce inference cost, while maintaining on average a 3.1 chrF improvement over the finetune-only baseline in low-resource translations.

1 Introduction

The most advanced large language models (LLMs) have demonstrated remarkable competence in translation-related tasks (Robinson et al., 2023; Hendy et al., 2023; Alves et al., 2024; Kocmi & Federmann, 2023; Raunak et al., 2023), but lag behind in translations involving lower-resource languages (Robinson et al., 2023; Hendy et al., 2023; Zhu et al., 2024; Lu et al., 2024), compared to specialized neural machine translation (NMT) systems like NLLB (Costa-jussà et al., 2022). This performance gap is caused primarily by scant pre-training data in these languages (Wei et al., 2023; Yuan et al., 2024; Alves et al., 2024), and is difficult to overcome despite growing efforts to support translations of long-tail languages (Kudugunta et al., 2024; Bapna et al., 2022; Lu et al., 2024). In this work, we introduce multilingual fused learning (Mufu), which combines multilingual context and a postediting task when translating into lower-resource languages using LLMs.1 Mufu-style prompts (see Table 1, top block) include several multilingual translation candidates along with a postediting target, from which a model learns "in-context" to translate from languages with which the target language is more closely aligned due to cultural relevance, geographical and genealogical proximity. We rely on a larger, more competent multilingual teacher model to generate auxiliary translations in these languages, which help disambiguate inputs and improve cross-lingual semantic alignment in a translation task. Given a task to postedit, LLMs are capable of "translating" better by iteratively improving the fluency and naturalness of the translation candidates (Chen et al., 2023). The goal is to induce in LLMs multi-step reasoning akin to chain-of-thought (CoT) (Wei et al., 2022), as the models are required to assess the input quality, align the candidates cross-lingually, and improve the final translation by drawing from the correct input and overriding incorrect instances. Translating this way can be challenging for small models with limited reasoning capacity.
Inspired by Wang et al. (2023), we further propose finetuning against Mufu prompts, which allows the models to learn how to best exploit and benefit from the multilingual context.

∗ Work done during an internship at Google.
1 We borrow the name from 幕府 (mù fǔ), a secretariat for the imperial Chinese officers dating back to 229 BC (Wikipedia contributors, 2024).

`0` The English sentence has been translated into Malay, Javanese, Sundanese, Indonesian, Minangkabau and Achinese. These translations may contain errors. Correct the translation from English to Achinese.
`1` English: The proposed amendment already passed both houses in 2011.
`2` Automatic Malay: Pindaan yang dicadangkan telah diluluskan oleh kedua-dua dewan pada tahun 2011.
`3` Automatic Javanese: Amandemen sing diusulake wis ditampa dening loro omah ing taun 2011.
`4` Automatic Sundanese: Amandemen anu diusulkeun parantos lulus duanana imah dina 2011.
`5` Automatic Indonesian: Amandemen yang diusulkan sudah disahkan oleh kedua majelis pada tahun 2011.
`6` Automatic Minangkabau: Amandemen nan diusulkan alah disetujui dewan legislatif pado taun 2011.
`7` Automatic Achinese: Amandemen nyang geupeugah nyan ka geupeugot bak keu-2 bak thôn 2011.
`8` Corrected Achinese: Reference: Amandemen nyang geuusong ka geuteurimoeng lé banduwa majeulis bak thôn 2011.

Baseline instruction: Translate from English to Achinese.

Table 1: Prompt template for `mufu5` (top block) with Achinese as an example, which includes an instruction (line 0), an input (line 1, blue), five multilingual candidates (lines 2-6, orange) and a postediting target (line 7, red). For the baseline we omit lines 2-7, replacing _Corrected Achinese_ with _Achinese_ and the initial instruction with the baseline instruction in purple. In `postediting`, we remove the auxiliary languages (teal) in the instruction along with the multilingual candidates, retaining only the postediting target.

We show that the best Mufu model, finetuned only with hundreds of parallel examples in each language pair, is competitive against the teacher model and the benchmark NLLB 1.3B distilled model, scoring on average 2.7 higher chrF on the FLORES-200 devtest and 0.7 on the NTREX test sets in En-XX translations.2 Importantly, Mufu works well on a range of pre-trained models including PaLM2 and Gemma, despite limited data and the fact that Gemma models are English-centric models that have not been trained for multilingual capabilities (Anil et al., 2023; Gemma Team et al., 2024). Our experiments further demonstrate knowledge distillation on Mufu models to be effective in reducing the inference cost, while maintaining a competitive advantage against the benchmark.

2 Multilingual fused learning

2.1 Combining two learning paradigms

Few-shot in-context learning (ICL) is incredibly effective for eliciting translations from an LLM (Winata et al., 2021; Lin et al., 2022), but is usually less performant than more compute- and data-intensive finetuned models (Zhang et al., 2023b; Vilar et al., 2023; Xu et al., 2024; Lu et al., 2024). On one hand, ICL improves translations of LLMs by allowing for informative contexts that induce reasoning processes in the model, and prompt the model to reach a latent feature space that is otherwise difficult to access with shorter input (Wei et al., 2022; Wang et al., 2023; Vilar et al., 2023; Puduppully et al., 2023; Zhu et al., 2024; Zhang et al., 2023a). On the other hand, LLMs produce higher quality final predictions with parameter tuning.
Motivated by Wang et al. (2023), our work combines the strengths of both learning paradigms by finetuning LLMs with reference output against multilingual prompts, and substantially improves the overall quality of LLMs' translations over finetuned-only models under a low-data condition.

2.2 Maximizing data efficiency with multilingual auxiliary translations

Beyond providing few-shot exemplars in a translation prompt, we incorporate translations in other languages as auxiliary information to the task. Learning to translate this way facilitates semantic alignment beyond the lexical level, by allowing the encoding of the rich knowledge network embedded in the multilingual translations. This multilingual context includes a draft translation in the target language, thus turning the difficult task of translating from scratch into a postediting task. Taken together, this approach can be considered similar to CoT rationales, as we expect the LLM to be able to disambiguate words and align across the multilingual context, to copy from high-quality inputs and to disregard instances that are less informative or are of poor quality. Unlike typical CoT, however, Mufu models do not predict the chain of thought; it is instead provided as a rich context for intermediate reasoning in translation.

2 Based on the performance of PaLM2 XXS–NTL (mufu20), further details in Section 3.3.

Figure 1: Mufu involves two iterations. First, a teacher model generates a set of multilingual auxiliary translations and a postediting target. These translations then become part of the input during the second iteration, where the student model learns in-context to produce the corrected target translation. We then finetune the student model against target references.

In practice, to obtain and to incorporate the auxiliary translations and postediting target in context, Mufu requires two iterations. During the first iteration, a teacher model is required to generate the intermediary translations. These translations are later included as part of the input for a student model, which learns in-context to correct the target translation in the second iteration.3 We illustrate an example of this process in Figure 1, where the teacher model first translates the same input from English to auxiliary translations in Malay, Sundanese, Javanese, Indonesian, Minangkabau and Achinese (the target language).4 These outputs are then added as part of the in-context prompt for the student model, along with an instruction to correct the target translation.

3 Experiments

3.1 Data and evaluation

As a low-data setup, we train and validate on the FLORES-200 dev split (Costa-jussà et al., 2022), which differs from the usual practice of reserving the split entirely for validation.5 Out of 997 source sentences in the split, we randomly sampled 787 sentences as the train set, 100 sentences as the validation data, and another 100 sentences to perform initial prompt selection. We reserve the remaining ten source sentences, from which we sample five-shot exemplars used in generating auxiliary translations in the first iteration. Each of the source sentences is paired with translations in 203 languages, from which we finetune the student models to translate from English into a subset of 201 target languages.6 Some languages use more than one writing system—for example, Achinese can be written in Latin and Arabic scripts; we treat translations into different scripts as individual language pairs. We evaluate our approach using chrF, a character overlap statistic (Popović, 2015).
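For reference, a simplified sentence-level chrF can be sketched as follows (our own illustration; real implementations such as sacreBLEU also handle whitespace normalization and corpus-level aggregation, and the defaults assumed here, character n-grams up to order 6 with β = 2, are those of common toolkits rather than details stated in this paper):

```python
from collections import Counter

def chrf(hypothesis, reference, max_order=6, beta=2.0):
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp = Counter(hypothesis[i:i + n] for i in range(len(hypothesis) - n + 1))
        ref = Counter(reference[i:i + n] for i in range(len(reference) - n + 1))
        overlap = sum((hyp & ref).values())       # clipped n-gram matches
        if hyp and ref:
            precisions.append(overlap / sum(hyp.values()))
            recalls.append(overlap / sum(ref.values()))
    p = sum(precisions) / len(precisions)         # averaged over n-gram orders
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    # F_beta: beta > 1 weights character-recall more heavily than precision
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```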
The finetuned models are tested on the FLORES-200 devtest split for the ideal in-domain setting where train and test conditions are closely matched. The source sentences of FLORES-200 are sampled from Wikipedia—to assess our finetuned models out of domain, we use NTREX (Federmann et al., 2022), which comprises translations of English news data, on which we evaluate 112 languages, the subset of languages also found in FLORES-200.7

3.2 Prompt style and auxiliary languages

We test a variety of prompts with one-shot prompting and choose an instruction that lists all auxiliary languages (e.g., _... from English to Malay, Sundanese, Javanese, ..._) over an instruction for the model to infer these languages from the prompt (e.g., _... from English to several languages as specified_). We also prepend _Automatic/Corrected_ labels to the language tags in the auxiliary translations instead of a _Candidate/Reference_ pair. We show in Table 1 an example template of a Mufu instruction, in contrast with the `baseline` setup where we provide only an instruction to translate in the prompt, without any multilingual context or postediting target. Further details on prompt selection can be found in Appendix A.1.

3 The student may be the same model as the teacher in this setup.
4 See Section 3.2 for details on how the intermediate languages are chosen.
5 As described in Costa-jussà et al. (2022).
6 The two languages omitted are Akan and Twi.
7 The languages from FLORES-200 not supported in NTREX are shown as dashed entries in Table 8 (Appendix A.5).

To select the most relevant auxiliary languages in Mufu, we rely on language data from URIEL (Littell et al., 2017) to select the closest languages by geographic and genetic distance (equally weighted) for each target language, and arrange them from farthest to closest in the prompt. Several languages are not included in the URIEL repository, in which case we sampled their auxiliary languages randomly.8 For the full list of auxiliary languages used in Mufu prompts, see Appendix A.2. We finetune with Mufu prompts over a varying number of auxiliary translations: `postediting` (`mufu0`) contains only a postediting target and does not include any multilingual context; `mufu`N incorporates N ∈ {5, 10, 20} auxiliary multilingual translations in addition to a postediting target.

3.3 Models

The teacher model, PaLM2 S (also known as Bison), has shown excellent multilingual and translation capability (Anil et al., 2023), but there remains a significant performance gap between higher-resource and lower-resource languages—we report the teacher performance in Section 4 and show the gap can be largely reduced by the student models through Mufu. During the first iteration, the teacher model generates auxiliary translations for each instance with 5-shot prompting. For all prompt setups described in the previous section, we perform supervised finetuning jointly over 201 languages for En-XX translation over a range of student models: PaLM2 XXS (Gecko), PaLM2 XS (Otter), Gemma 2B-IT and Gemma 7B-IT, given the same auxiliary translations generated previously. When comparing the performance across student models, it is worth noting that the PaLM2 models are multilingual LLMs with superior initial translation capacity compared to the Gemma models, which have not received any specialized training on multilingual tasks (Gemma Team et al., 2024).
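Concretely, the Table 1 template from §3.2 can be assembled with simple string formatting. The following sketch is our own illustration (the function name and argument layout are not from the paper); auxiliary candidates are assumed to be ordered from farthest to closest, as described above:

```python
def build_mufu_prompt(source, src_lang, tgt_lang, aux_translations, draft):
    """aux_translations: list of (language, text) pairs, farthest to closest;
    draft: the teacher's candidate translation in the target language."""
    aux_langs = [lang for lang, _ in aux_translations]
    header = (
        f"The {src_lang} sentence has been translated into "
        f"{', '.join(aux_langs + [tgt_lang])}. These translations may contain "
        f"errors. Correct the translation from {src_lang} to {tgt_lang}.\n"
    )
    lines = [f"{src_lang}: {source}"]
    lines += [f"Automatic {lang}: {text}" for lang, text in aux_translations]
    lines += [f"Automatic {tgt_lang}: {draft}", f"Corrected {tgt_lang}:"]
    return header + "\n".join(lines)
```

At training time the reference translation is appended after the final `Corrected` tag as the target; at inference time the model completes it.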
We also further pre-train PaLM2 XXS, the smallest model from the PaLM2 family, on a corpus derived from the Next-Thousand-Language (NTL) effort, which comprises monolingual and parallel sentences in 1000+ languages (Caswell et al., 2020; Bapna et al., 2022). We refer to this version of the model as PaLM2 XXS–NTL henceforth.

4 Results

We evaluate primarily using chrF rather than BLEU (Papineni et al., 2002), which heavily relies on tokenization that is underdeveloped for many low-resource languages.9 Table 2 shows the mean chrF across 201 En-XX language pairs of all teacher, student and benchmark models; and Win%, the percentage of language pairs where the model outperforms a benchmark. NLLB models only support 198 of these language pairs—to facilitate comparison, we therefore also report the average chrF and win percentages over just these languages.10

When tested with in-domain FLORES devtest data, Mufu finetuned models gain substantially over their baselines. Turning a translation task into a postediting one is advantageous to the output quality, and we see further improvements with multilingual context in Mufu prompts. Mufu models also show superior performance compared to the teacher, with PaLM2 XXS–NTL exceeding teacher performance in 54.2% of translation pairs. The exception is regular PaLM2 XXS, which scores better than the baseline but underperforms compared to the teacher and the smaller NLLB model, presumably due to its limited capacity. In theory, it is possible for the student to be at least as good as the teacher through word-for-word copying from the postediting target. However, some Mufu translations are worse than the teacher's.

8 The languages not found in URIEL include Latgalian, Swahili, Kongo, Kanuri, Kanuri in Arabic script, Silesian, Pashto, Oromo, Guarani, Kabuverdianu, Tumbuka, Kimbundu, Filipino, Friulian, Dinka, Mongolian, Azerbaijani, Fulfulde, South Levantine Arabic, Uzbek, Sardinian, Limburgan, Persian, Tamazight, Crimean Tatar in Latin script, Dzongkha, Lombard and Dari.
9 Nonetheless, we report the corresponding results in BLEU scores in Appendix A.4, which largely corroborate our main findings.
10 The languages not supported by NLLB are Minangkabau in Arabic script, Arabic in Latin script and Santali.

| Model | Setup | FLORES chrF↑ (n=201) | FLORES chrF↑ (n=198) | Win% vs. teacher | Win% vs. NLLB 1.3B | Win% vs. NLLB 54B | NTREX chrF↑ (n=112) | NTREX Win% vs. teacher | NTREX Win% vs. NLLB 1.3B |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PaLM2 S (teacher) | | 43.3 | 43.7 | – | 58.1 | 43.2 | 48.6 | – | 73.2 |
| NLLB 1.3B distilled | | – | 46.0 | 41.3 | – | 4.0 | 48.1 | 26.8 | – |
| NLLB 54B MoE | | – | 48.9 | 56.2 | 96.0 | – | – | – | – |
| PaLM2 XXS–NTL | baseline | 39.2 | 39.4 | 32.8 | 11.6 | 8.0 | 36.3 | 8.9 | 0.9 |
| | postedit | 42.5 | 42.8 | 34.8 | 19.2 | 10.6 | 40.6 | 9.8 | 3.6 |
| | mufu5 | 47.1 | 47.3 | 46.8 | 57.1 | 24.6 | 46.5 | 17.0 | 21.4 |
| | mufu10 | 48.0 | 48.3 | 52.2 | 75.3 | 32.7 | 47.7 | 17.0 | 35.7 |
| | mufu20 | **48.4** | **48.7** | 54.2 | 76.8 | 39.7 | 48.8 | 20.5 | 61.6 |
| | mufu5hrl | 42.9 | 43.1 | 34.3 | 20.7 | 10.6 | 41.0 | 10.7 | 3.6 |
| | mufu5tr | 44.4 | 44.6 | 42.3 | 33.8 | 19.1 | 43.0 | 11.6 | 7.1 |
| | mufu20+5hrl | 47.1 | 47.4 | 47.3 | 63.1 | 23.1 | 46.9 | 15.2 | 25.9 |
| | distilled | 45.1 | 45.5 | 42.8 | 35.4 | 17.1 | **49.0** | 45.5 | 48.2 |
| PaLM2 XXS | baseline | 35.8 | 35.9 | 26.9 | 7.6 | 5.5 | 34.2 | 5.4 | 1.8 |
| | postedit | 41.7 | 42.0 | 28.9 | 22.2 | 9.0 | **43.4** | 6.2 | 8.9 |
| | mufu5 | **41.9** | **42.2** | 30.8 | 20.2 | 11.6 | 43.1 | 7.1 | 8.9 |
| | mufu10 | 41.0 | 41.1 | 30.8 | 14.1 | 9.0 | 40.2 | 8.0 | 4.5 |
| | mufu20 | 41.1 | 41.2 | 30.8 | 14.1 | 9.5 | 40.3 | 8.0 | 4.5 |
| PaLM2 XS | baseline | 31.7 | 31.9 | 21.9 | 2.5 | 1.0 | 31.3 | 5.4 | 0.0 |
| | postedit | 43.8 | 44.1 | 36.8 | 28.3 | 16.6 | 43.3 | 8.9 | 10.7 |
| | mufu5 | 44.5 | 44.6 | 40.8 | 33.8 | 17.6 | 43.6 | 8.9 | 11.6 |
| | mufu10 | 44.5 | 44.7 | 40.3 | 36.9 | 19.1 | 43.6 | 9.8 | 13.4 |
| | mufu20 | **44.7** | **44.8** | 43.3 | 36.9 | 19.1 | **43.8** | 9.8 | 13.4 |
| PaLM2 S | baseline | 32.9 | 33.0 | 27.4 | 4.5 | 2.5 | 30.7 | 7.1 | 0.0 |
| | mufu20 | 47.0 | 47.1 | 51.2 | 58.6 | 27.6 | 45.6 | 17.9 | 26.8 |
| | mufu20lora | **47.2** | **47.5** | 99.0 | 72.2 | 59.8 | **50.1** | 91.1 | 83.9 |
| Gemma 2B | baseline | 34.4 | 34.4 | 28.9 | 9.1 | 4.0 | 29.2 | 6.2 | 0.9 |
| | postedit | 44.1 | 44.3 | 32.8 | 37.9 | 16.1 | 41.4 | 8.0 | 7.1 |
| | mufu5 | 45.1 | 45.3 | 37.8 | 49.5 | 22.1 | 43.2 | 9.8 | 9.8 |
| | mufu10 | 45.4 | 45.5 | 39.3 | 47.0 | 21.1 | 43.3 | 9.8 | 10.7 |
| | mufu20 | **45.5** | **45.6** | 39.3 | 47.5 | 22.6 | **43.6** | 10.7 | 13.4 |
| Gemma 7B | baseline | 39.9 | 40.0 | 33.3 | 15.7 | 9.5 | 35.1 | 7.1 | 0.9 |
| | postedit | 46.3 | 46.5 | 41.8 | 54.0 | 24.6 | 43.2 | 9.8 | 12.5 |
| | mufu5 | 47.2 | 47.3 | 49.3 | 60.6 | 27.6 | 43.4 | 9.8 | 11.6 |
| | mufu10 | 47.2 | 47.3 | 49.3 | 61.6 | 27.1 | 43.2 | 9.8 | 14.3 |
| | mufu20 | 47.6 | 47.7 | 51.7 | 63.6 | 29.6 | 43.6 | 11.6 | 17.9 |
| | mufu5hrl | 46.4 | 46.6 | 42.8 | 52.0 | 26.1 | 43.2 | 10.7 | 13.4 |
| | mufu5tr | 42.9 | 42.9 | 42.3 | 28.8 | 17.6 | 37.5 | 9.8 | 4.5 |
| | mufu20+5hrl | **47.7** | **47.8** | 51.2 | 66.7 | 30.7 | 44.1 | 12.5 | 17.9 |
| | distilled | 44.4 | 44.5 | 41.3 | 26.8 | 18.1 | **47.2** | 33.9 | 41.1 |

Table 2: Mean chrF scores and win percentages against PaLM2 S as the teacher model for 201 En-XX language pairs, and against the NLLB 1.3B distilled model and NLLB 54B MoE model for 198 language pairs. **Bold** values are the best chrF scores in a given model class. Red values are win rates above 50%. Mufu{5, 10, 20} indicate the number of non-target multilingual candidates in the prompt. We also report the distillation performance of PaLM2 XXS–NTL and Gemma 7B finetuned with mufu20.

We attribute this phenomenon to the limited amount of supervision in each language pair and the autoregressive modeling objective with gold-standard translations—a strategy known to be inferior to distilling from model outputs (Kim & Rush, 2016; Wang et al., 2021; Finkelstein & Freitag, 2023). Mufu is effective for under-resourced languages with low-quality postediting candidates. However, improving high-quality translations in high-resource languages is harder and requires the student model to also learn the subtle differences between model- and human-generated output (Sizov et al., 2024; Zhang et al., 2024; Kocmi et al., 2024). It is also possible that the teacher model surpasses humans for some translations in high-resource languages—in which case, learning from the human translations could be detrimental. Compared to NLLB 1.3B distilled, PaLM2 XXS–NTL finetuned with mufu20 translates better in nearly 77% of language pairs. The best Mufu models also outperform NLLB 54B MoE in up to nearly
Idea Generation Category: 0 (Conceptual Integration)
0eMsrRMmCw
# NetMoE: Accelerating MoE Training through Dynamic Sample Placement

**Xinyi Liu** [1] **Yujie Wang** [1] **Fangcheng Fu** [1] **Xupeng Miao** [2] **Shenhan Zhu** [1] **Xiaonan Nie** [1] **Bin Cui** [1,3]
1 School of CS & Key Lab of High Confidence Software Technologies (MOE), Peking University
2 Purdue University
3 Institute of Computational Social Science, Peking University (Qingdao)
{xy.liu, alfredwang, ccchengff}@pku.edu.cn, xupeng@purdue.edu, {shenhan.zhu, xiaonan.nie, bin.cui}@pku.edu.cn

ABSTRACT

Mixture of Experts (MoE) is a widely used technique to expand model sizes for better model quality while maintaining the computation cost constant. In a nutshell, an MoE model consists of multiple experts in each model layer and routes the training tokens to only a fixed number of experts rather than all. In distributed training, as experts are distributed among different GPUs, All-to-All communication is necessary to exchange the training tokens among the GPUs after each time of expert routing. Due to the frequent and voluminous data exchanges, All-to-All communication has become a notable challenge to training efficiency. In this paper, we manage to accelerate All-to-All communication in MoE models from the training sample perspective, which is unexplored so far. In particular, we put forward the observation that tokens in the same training sample have certain levels of locality in expert routing. Motivated by this, we develop NetMoE, which takes such locality into account and dynamically rearranges the placement of training samples to minimize All-to-All communication costs. Specifically, we model the All-to-All communication given the sample placement and formulate an integer programming problem to deduce the optimal placement in polynomial time. Experiments with 32 GPUs show that NetMoE achieves a maximum efficiency improvement of 1.67× compared with current MoE training frameworks.

1 INTRODUCTION

In recent years, large language models (LLMs) have shown impressive performance in language understanding and generation (OpenAI, 2023; Touvron et al., 2023; Zhou et al., 2024; Dubey et al., 2024; Shao et al., 2024; Zhang et al., 2024a) due to the increasing model size. However, larger models often come with greater computational costs. To address this, Mixture of Experts (MoE) models have been introduced to expand the model size greatly without increasing the computational cost. Combining MoE with Transformer-based models can yield outstanding performance across various tasks, including natural language processing (Lepikhin et al., 2021; Fedus et al., 2022), computer vision (Riquelme et al., 2021; Liang et al., 2022), recommendation systems (Tang et al., 2020; Zou et al., 2022), and speech recognition (You et al., 2022; Kwon & Chung, 2023). MoE models often replace the feed-forward network (FFN) layer with the MoE layer, which consists of a gating network and several small FFNs, representing different experts. In the MoE layer, each token is routed by the gating network to only a few selected experts, and the final output is obtained by a weighted sum of the computations from the selected experts. By such means, we can increase the number of experts to expand the model size for better performance, while keeping the computation complexity constant.

Figure 1: An example of expert parallelism applied to an MoE model with $J$ devices and $E = 2J$ experts (each device has two different experts).

Figure 2: An example of sample exchange, with panels (a) an overview of a MoE layer example, (b) a gather operation in the MoE layer without adjusting the sample placement, and (c) a gather operation in the MoE layer after sample placement adjustment is enabled. The figure illustrates the All-to-All gather operation in a MoE layer with two nodes, each containing two devices, and each device having one expert. Different colors represent tokens sent to different experts, and $i$-$j$ denotes the $j$-th token in the $i$-th sample. Fig. 2(a) illustrates the complete process of a MoE layer during forward propagation. Fig. 2(b) shows the All-to-All gather operation in the MoE layer without adjusting the sample placement, where the inter-node communication volume of each node is 5 tokens. Fig. 2(c) displays the All-to-All gather operation after sample placement adjustment is enabled — the positions of samples on the devices change (samples 0 and 3 are exchanged), reducing the inter-node communication volume to 2 tokens per node.

Despite the above benefit, given the potentially large number of experts, the memory capacity of a single device is often insufficient.
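The routing and weighted-sum computation described above can be sketched on a single device as follows (our own illustration; real MoE layers use learned FFN experts, capacity limits, and load-balancing losses, and the helper names are ours):

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """x: (tokens, H); gate_w: (H, E); experts: list of E callables (H -> H).
    Each token is routed to its top-k experts; the output is the
    gate-probability-weighted sum of the selected experts' outputs."""
    logits = x @ gate_w                                   # (tokens, E)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)                 # softmax gate
    topk = np.argsort(-probs, axis=-1)[:, :k]             # routing decision
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in topk[t]:
            out[t] += probs[t, e] * experts[e](x[t])
    return out, topk   # in expert parallelism, topk drives the All-to-All

H, E = 8, 4
experts = [(lambda A: (lambda v: A @ v))(np.random.randn(H, H)) for _ in range(E)]
y, routing = moe_forward(np.random.randn(16, H), np.random.randn(H, E), experts)
```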
As a result, expert parallelism (Lepikhin et al., 2021; Fedus et al., 2022) is a common technique to facilitate the distributed training of MoE models. As shown in Fig. 1, each device holds only a subset of the experts to reduce memory consumption. Meanwhile, other model parameters are replicated and stored on all devices, and the training data assigned to each device are different. In each MoE layer, based on the routing result of the gating network, each token is sent to the device where the selected expert is located. The output from the expert is then sent back to the original device of the corresponding token. This involves two communication operations, namely the All-to-All scatter and the All-to-All gather (He et al., 2021), respectively.

Due to the dynamic nature of routing, training MoE models efficiently faces several challenges, with All-to-All communication being the most significant one. Particularly, the All-to-All communication time can account for up to 80% of the total training time (Hwang et al., 2023; Liu et al., 2023; He et al., 2022; Li et al., 2023; Yu et al., 2024). One reason is that all tokens need to participate in the All-to-All operation, leading to a high communication volume. Another reason is the communication frequency. Considering both the forward and backward propagation, each MoE layer requires four All-to-All communications per training iteration. Such frequent and extensive communication incurs significant time costs. Therefore, accelerating All-to-All communication is essential to improve training efficiency.

**Motivation:** Recent studies have demonstrated that expert routing exhibits a certain degree of _data locality_. To be specific, input tokens may have distinct preferences for experts, and the corresponding distribution is often skewed (He et al., 2022; Nie et al., 2023; Xue et al., 2024; Jiang et al., 2024). In other words, the All-to-All operation in MoE models can be highly unbalanced across different devices, and is thus bounded by the device pair with the highest communication volume. Meanwhile, it is well known that _network locality_ is an inherent characteristic of modern clusters for deep learning training. In particular, there are various communication channels in modern clusters, e.g., intra-node devices usually communicate via PCIe or NVLink, while inter-node devices use Ethernet or InfiniBand, with intra-node communication usually faster than inter-node communication.

To achieve load balancing, existing methods propose techniques from the model perspective (Nie et al., 2023; Lewis et al., 2021). Typically, they either dynamically adjust the model placement but introduce a lot of additional communication, or modify the model definition but sacrifice the model performance (see §2.2 for more discussion). Yet optimization from the data perspective is underexplored. Inspired by this, we propose NetMoE, which accelerates All-to-All communication by combining the data locality in expert routing with the network locality among training devices. The essential idea of NetMoE is to dynamically adjust the placement of data samples during training based on expert routing results so that more tokens are transmitted through high-speed channels rather than low-speed ones. As illustrated in Fig. 2, _sample_ 0 shows a preference for _expert_ 2, which resides on _node_ 1, while _sample_ 3 favors _expert_ 0, which resides on _node_ 0.
Using the vanilla All-to-All communication method would result in significant inter-node communication overhead, as shown in Fig. 2(b). However, by swapping the positions of _sample_ 0 and _sample_ 3 as depicted in Fig. 2(c), part of the inter-node communication can be converted into intra-node communication or even intra-device memory copying, significantly reducing the time cost (detailed in §3.1). In this way, we can accelerate the All-to-All communication without affecting the computing results.

However, it is non-trivial to achieve dynamic sample placement. For one thing, how to adjust the placement to maximize efficiency is a complex and unexplored question. For another, since the adjustment should be done for every layer in every iteration, it is vital to devise an efficient algorithm to deduce the placement on the fly. To address these problems, we first revisit the cost modeling for All-to-All communication and formulate the dynamic sample placement problem into a combinatorial optimization problem. Subsequently, we split it into two stages to ease the solving and design a corresponding polynomial-time algorithm to ensure a timely solution. In short, the technical contributions of this work are summarized as follows:

- We propose NetMoE, the first effort that leverages both the data locality and network locality to accelerate the All-to-All communication through dynamic sample placement.
- We formulate the dynamic sample placement problem as a combinatorial optimization problem, which aims to find the best sample placement that maximizes efficiency given the expert routing.
- We dissect the problem into two stages and develop a polynomial-time solution to efficiently derive the sample placement during training.
- We conduct experiments with various models on 32 NVIDIA A800 GPUs. Results show that NetMoE outperforms current MoE training systems by up to 1.67× in terms of training efficiency.

2 PRELIMINARY

2.1 PARALLELISM IN DISTRIBUTED TRAINING

**Data and Model Parallelism:** In data parallelism (Li et al., 2020; Sergeev & Balso, 2018; Wang et al., 2023; Zhang et al., 2024b), each device maintains a complete copy of the model parameters, while different training samples are assigned to each device. After the backward computation is completed, the model gradients from all devices are aggregated before updating the model parameters. In model parallelism (Narayanan et al., 2021b; Huang et al., 2019; Narayanan et al., 2021a; Guan et al., 2024), model parameters are distributed across multiple devices, with each device responsible for only a portion of the model. Communication operations are necessary to transmit the intermediate results (a.k.a. forward activations and their backward gradients) to accomplish the forward and backward propagation.

**Expert Parallelism:** As shown in Fig. 1, expert parallelism (Lepikhin et al., 2021; Fedus et al., 2022) can be regarded as combining model parallelism and data parallelism. It distributes expert parameters across different devices like model parallelism, while replicating other parameters on all devices like data parallelism. In each MoE layer, each token will be routed by the gating network to the top $K$ different experts for processing, where $K$ is a hyperparameter, typically a small value, such as 1 or 2, which helps to reduce the computational complexity. After the MoE layer obtains the gating routes, tokens are sent to the devices where the corresponding experts are located based on the routing.
The results from the expert computations are then sent back to the original devices where the tokens are located. Since the experts are distributed across different devices, communication during this process involves all devices sending and receiving messages with one another, leading to what is known as All-to-All communication.

2.2 DISTRIBUTED TRAINING ACCELERATION TECHNIQUES FOR MOE MODELS

**Dynamic Expert Placement:** The efficiency of MoE models is constrained by the extensive and frequent All-to-All communication required during training. In response to this issue, some studies have observed that data tends to show a preference for certain experts during training. Based on this observation, they further propose to dynamically adjust the placement of experts to reduce the communication volume (He et al., 2022; Nie et al., 2023; Zhai et al., 2023). For instance, popular experts can be placed on more devices in the data-parallel manner, so that the communication volume related to them decreases. However, due to the growing size of experts, these approaches incur substantial overhead of transmitting expert parameters among the devices, so they cannot adjust the expert placement for every iteration, leading to sub-optimality. In contrast, our work tries to reduce the communication volume from a different perspective: we dynamically adjust the placement of samples in every iteration to accelerate the All-to-All communication. To be specific, we formulate an optimization problem to deduce the best sample placement that minimizes the time cost of All-to-All communication. As we will evaluate in §4, our work outperforms existing works based on dynamic expert placement when training MoE models.

**Modification in Model Definition:** To achieve better workload balance in MoE training, many existing works modify the model definition (e.g., routing mechanisms, model architectures). Some approaches modify the routing mechanism to balance the load across experts, which helps reduce synchronization time between devices (Lewis et al., 2021). Recognizing the network locality in distributed training, several works introduce a routing topology loss to prioritize routing tokens within the same node, thereby reducing inter-node communication (Li et al., 2024; Chen et al., 2022). Other approaches (Zeng & Xiong, 2023) map tokens to a smaller hidden dimension before inter-node communication, further decreasing the communication load. SCMoE (Cai et al., 2024) proposes feeding the output of the current attention layer directly into the next MoE layer, enabling parallel forward propagation with the current MLP layer in order to fully overlap All-to-All communication with computation. Although these methods improve training efficiency, they inevitably impact model convergence. When applying these methods, we usually need to run numerous trials to tune the hyperparameters, such as adjusting the weight of the topology-aware routing loss (Chen et al., 2022) or tuning the hyperparameters for different communication channels (Zeng & Xiong, 2023). Given that each trial of LLM training can take days or even months, their utility is inevitably hampered. In contrast, our work focuses on how to accelerate All-to-All communication without affecting model convergence.
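Before the problem is formalized in §3, the effect of the sample swap in Fig. 2 can be reproduced in a few lines. In this sketch (our own illustration), expert $e$ resides on device $e$, devices {0, 1} form node 0, devices {2, 3} form node 1, and the per-token routing is read off from the figure:

```python
# route[i][l] is the expert chosen for token l of sample i (from Fig. 2).
route = [
    [2, 3, 2, 1],   # sample 0: tokens 0-0..0-3 -> experts 2, 3, 2, 1
    [1, 0, 3, 2],   # sample 1
    [3, 2, 3, 0],   # sample 2
    [0, 1, 0, 1],   # sample 3
]
node = lambda device: device // 2   # devices {0,1} -> node 0, {2,3} -> node 1

def inter_node_tokens(sample_device):
    """Count tokens crossing node boundaries for a given sample placement
    (sample_device[i] = the device holding sample i)."""
    return sum(node(sample_device[i]) != node(e)
               for i, experts in enumerate(route) for e in experts)

print(inter_node_tokens([0, 1, 2, 3]))  # 10 -> 5 tokens leave each node (Fig. 2b)
print(inter_node_tokens([3, 1, 2, 0]))  # 4  -> 2 tokens leave each node (Fig. 2c)
```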
3 N ET M O E

[Figure 3: The overview of the method of NetMoE.]

In this section, we introduce NetMoE, a novel framework designed to optimize distributed training for MoE models by considering both data and network locality. Given a target MoE model and the hardware environment, NetMoE aims to minimize the All-to-All communication cost. Its core innovation lies in optimizing the placement of samples within each MoE layer to maximize the utilization of faster intra-node bandwidth, thereby reducing the communication volume over slower inter-node connections. Specifically, NetMoE swaps the samples across devices during each MoE layer, enabling more tokens to communicate within the node during All-to-All communication.

Table 1: Notations used throughout this work. We assume _I_ is divisible by _J_, and _J_ is divisible by _N_, which are common in distributed training.

| Notation | Meaning |
|---|---|
| _L_ | The number of tokens per sample. |
| _H_ | The hidden size for each token. |
| _E_ | The number of experts in the MoE layer. |
| _K_ | The number of experts to be routed per token. |
| _I_ | The number of samples per iteration (a.k.a. global batch size). |
| _J_ | The number of devices (i.e., GPUs). |
| _N_ | The number of nodes (machines). |
| I[ _·_ ] | The indicator function. |
| ⟨ _n_ ⟩ | The set of natural numbers less than _n_, i.e., {0, 1, · · ·, _n_ − 1}. |

Table 2: Bandwidth of each channel of the NVIDIA A800 GPU cluster used in our experiments.

| Channel | Bandwidth |
|---|---|
| Intra-device | ∼2TB/s |
| Intra-node | 400GB/s |
| Inter-node | 100GB/s |

Fig. 3 illustrates the overview of this section. We begin by introducing the modeling of All-to-All communication in MoE training and formulate our optimization problem in §3.1. We then illustrate how to solve the problem in §3.2, with the detailed algorithm shown in Alg. 1. We also present our implementation details in §3.3. For clarity, the frequently used notations are listed in Table 1.

3.1 P ROBLEM F ORMULATION

**Communication Modeling:** We first discuss the mathematical modeling of All-to-All communication, which is the optimization target of NetMoE. We use the _α_-_β_ model (Sarvotham et al., 2001) to analyze All-to-All communication, where _α_ represents the latency cost and _β_ represents the bandwidth cost. Specifically, we classify communication into three categories: intra-device, intra-node, and inter-node communication, each using different channels. Table 2 lists the bandwidth of each channel used in our experiments. Since intra-device communication is typically achieved via memory copying, it is significantly faster than the other two categories and thus not considered in our modeling. Therefore, the communication time is determined by the maximum time required for data transfer across the intra-node and inter-node channels. The bandwidths of these channels are represented by _v_intra_ and _v_inter_, respectively. Thus, for each All-to-All communication, its time cost can be expressed by the following formula, where _s_·_ represents the communication volume for the corresponding channel.
$$t = \max(t_{\mathrm{intra}}, t_{\mathrm{inter}}), \quad \text{where} \quad t_{\mathrm{intra}} = \alpha_{\mathrm{intra}} + \beta_{\mathrm{intra}}\, s_{\mathrm{intra}},\; \beta_{\mathrm{intra}} = 1/v_{\mathrm{intra}}, \qquad t_{\mathrm{inter}} = \alpha_{\mathrm{inter}} + \beta_{\mathrm{inter}}\, s_{\mathrm{inter}},\; \beta_{\mathrm{inter}} = 1/v_{\mathrm{inter}} \tag{1}$$

The bandwidth ( _v_·_ ) and latency ( _α_·_ ) can be obtained by profiling the hardware environment before training, while the communication volume ( _s_·_ ) needs to be dynamically determined based on the routing results within the MoE layer. We then analyze how to calculate the communication volume. Let _route_ ∈ ℕ^{I×L×K} be the token routing results of the gating network, which represents the _K_ experts that each token will be sent to. Then, the number of tokens that the _i_-th sample needs to send to the _e_-th expert can be counted as

$$num_{i,e} = \sum_{l,k} \mathbb{I}[route_{i,l,k} = e] \quad \text{for } i \in \langle I \rangle,\ e \in \langle E \rangle \tag{2}$$

Next, _num_ ∈ ℕ^{I×E} can be used to model the communication volume across different channels. Let ExpDev( _e_ ) be the device index of the _e_-th expert, SmpDev( _i_ ) the device index where the _i_-th sample should be routed to, and Node( _j_ ) the node index of the _j_-th device. By considering the communication volume as the number of tokens that need to be transmitted, we have

$$s_{\mathrm{intra}} = \sum_{(i,e) \in S_{\mathrm{intra}}} num_{i,e}, \qquad s_{\mathrm{inter}} = \sum_{(i,e) \in S_{\mathrm{inter}}} num_{i,e} \tag{3}$$

where _S_intra_ and _S_inter_ can be calculated via the device indices of experts and samples:

$$S_{\mathrm{intra}} = \{(i,e) \mid \mathrm{Node}(\mathrm{SmpDev}(i)) = \mathrm{Node}(\mathrm{ExpDev}(e)) \wedge \mathrm{SmpDev}(i) \neq \mathrm{ExpDev}(e)\}, \qquad S_{\mathrm{inter}} = \{(i,e) \mid \mathrm{Node}(\mathrm{SmpDev}(i)) \neq \mathrm{Node}(\mathrm{ExpDev}(e))\} \tag{4}$$

**Rationality of Dynamic Sample Placement:** Given the aforementioned modeling, there is no doubt that the time cost of All-to-All communication is highly related to the placement of experts and samples.
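For concreteness, here is a minimal sketch of Eqs. (1)–(4) in plain Python/NumPy: it counts _num_{i,e}_ from a routing tensor and classifies each (sample, expert) pair as intra-node or inter-node. The placement functions, problem sizes, and the profiled _α_/_v_ values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's configuration).
I, L, K, E, J, N = 8, 16, 2, 8, 4, 2   # samples, tokens, top-K, experts, devices, nodes

rng = np.random.default_rng(0)
route = rng.integers(0, E, size=(I, L, K))      # route in N^{I x L x K}

# Eq. (2): num[i, e] = sum over (l, k) of 1[route[i, l, k] == e]
num = np.zeros((I, E), dtype=np.int64)
for e in range(E):
    num[:, e] = (route == e).sum(axis=(1, 2))

ExpDev = lambda e: e % J            # expert e lives on device e mod J (assumption)
SmpDev = lambda i: i % J            # sample i lives on device i mod J (assumption)
Node   = lambda j: j // (J // N)    # devices packed contiguously into nodes

# Eqs. (3)-(4): split the token volume into the intra-node and inter-node channels.
s_intra = s_inter = 0
for i in range(I):
    for e in range(E):
        same_dev  = SmpDev(i) == ExpDev(e)
        same_node = Node(SmpDev(i)) == Node(ExpDev(e))
        if same_node and not same_dev:
            s_intra += num[i, e]
        elif not same_node:
            s_inter += num[i, e]
        # same device: memory copy, ignored by the model

# Eq. (1): alpha-beta cost; latencies are made-up profiling numbers,
# bandwidths follow Table 2, and tokens are converted to bytes via H * fp16.
alpha_intra, alpha_inter = 1e-5, 2e-5            # seconds (assumption)
v_intra, v_inter = 400e9, 100e9                  # bytes/s
H, bytes_per = 1024, 2
t_intra = alpha_intra + (s_intra * H * bytes_per) / v_intra
t_inter = alpha_inter + (s_inter * H * bytes_per) / v_inter
t = max(t_intra, t_inter)
print(f"s_intra={s_intra} tokens, s_inter={s_inter} tokens, t={t:.6f}s")
```

Swapping sample placements changes SmpDev and hence which pairs fall into _S_inter_, which is exactly the lever NetMoE optimizes. Idea Generation Category: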
2Direct Enhancement
1qP3lsatCR
# P HYSICS OF L ANGUAGE M ODELS : P ART 3.3, K NOWLEDGE C APACITY S CALING L AWS [ EXTENDED ABSTRACT ] _[∗]_

**Zeyuan Allen-Zhu** FAIR at Meta zeyuanallenzhu@meta.com **Yuanzhi Li** Mohamed bin Zayed University of AI Yuanzhi.Li@mbzuai.ac.ae

A BSTRACT

Scaling laws describe the relationship between the size of language models and their capabilities. Unlike prior studies that evaluate a model's capability via loss or benchmarks, we estimate information-theoretically the number of knowledge _bits_ a model stores. We focus on factual knowledge represented as tuples, such as (USA, capital, Washington D.C.) from a Wikipedia page. Through multiple controlled datasets, we establish that language models can and only can store _2 bits of knowledge per parameter, even when quantized to int8_, and such knowledge can be flexibly extracted for downstream applications. _More broadly, we present 12 results_ on how (1) training duration, (2) model architecture, (3) quantization, (4) sparsity constraints such as MoE, and (5) data signal-to-noise ratio affect a model's knowledge storage capacity.

1 I NTRODUCTION

The scaling laws of large language models remain a pivotal area of research, enabling predictions about the performance of extremely large models through experiments with smaller ones. On the training time aspect, established scaling laws (Hoffmann et al., 2022; Kaplan et al., 2020; Hernandez et al., 2021; Alabdulmohsin et al., 2022; Henighan et al., 2020) discuss the optimal training flops versus model size. However, recent studies (Muennighoff et al., 2023; Gunasekar et al., 2023; Li et al., 2023) challenge these laws, demonstrating that training smaller models with significantly more flops can yield superior results. While these laws talk about how much time/data is needed to train a model of a certain size, another fundamental question is: _what is the ultimate performance a model can achieve, assuming sufficient training_? Despite the known emergent behaviors in large models (Bubeck et al., 2023; Yu et al., 2023), or even qualitative arguments that modern large models have reached L2- or L3-level intelligence (Allen-Zhu & Xu, 2025), there is a _lack of a principled, quantitative analysis_ on how model size impacts its capacity when adequately trained. [1]

Traditional theory on overparameterization suggests that scaling up model size in sufficiently trained models can enhance memorization of training data (Allen-Zhu et al., 2019b), improve generalization error (Hestness et al., 2017; Rosenfeld, 2021; Rosenfeld et al., 2019), and better fit complex target functions (Li & Liang, 2018; Allen-Zhu et al., 2019a). However, these results often overlook large constant or polynomial factors, leading to a significant discrepancy from practical outcomes.

_∗_ This paper is part of the _Physics of Language Models_ series, one of the first six papers presented as a two-hour tutorial at ICML 2024 in Austria (youtu.be/yBL7J0kgldU). Full and future editions of Part 3.3, including additional experiments and potential code releases, are available at physics.allen-zhu.com and ssrn.com/abstract=5250617.

1 There is a rich literature comparing how pretrained models perform on benchmark tasks. Most comparisons are for different model families trained over different data: if LLaMA-70B is better than Mistral-7B, does the gain come from its choice of pretrain data, or the architecture difference, or really the size of the model? Some comparisons are among the same architecture, such as LLaMA-70B scores 63.6% on the world knowledge benchmark while LLaMA-7B scores only 48.9% (Touvron et al., 2023b); does this mean increasing model size by 10x increases its capacity only to 130% = 63.6/48.9? Thus, it is highly important to use a more principled framework to study scaling laws in a controlled setting.

In this paper, we introduce a principled framework to examine _highly accurate_ scaling laws concerning model size versus its _knowledge storage capacity_. It is intuitive that larger language models can store more knowledge, but does the total knowledge scale linearly with the model's size? What is the **exact constant** of this scaling? Understanding this constant is crucial for assessing the efficiency of transformer models in knowledge storage and how various factors (e.g., architecture, quantization, training duration, etc.) influence this capacity.

Knowledge is a, if not the, pivotal component of human intelligence, accumulated over our extensive history. Large language models like GPT-4 are celebrated not just for their sophisticated logic but also for their superior knowledge base. Despite rumors of GPT-4 having over 1T parameters, _is it necessary to store all human knowledge?_ Could a 10B model, if trained sufficiently with high-quality data, match GPT-4's knowledge capacity? Our paper seeks to address these questions.

**Knowledge Pieces.** Defining "one piece of human knowledge" precisely is challenging. This paper aims to make progress by focusing on a restricted, yet sufficiently interesting domain. We define a _piece_ of knowledge as a (name, attribute, value) tuple, e.g., (Anya Forger, birthday, 10/2/1996); and many data in world knowledge benchmarks can be broken down into pieces like this. [2]

We generate _synthetic_ knowledge-only datasets by uniformly at random generating (name, attribute, value) tuples from a knowledge base and converting them into English descriptions. We pretrain language models (e.g., GPT-2, LLaMA, Mistral) on these texts using a standard auto-regressive objective from random initialization, and "estimate" the learned knowledge. By varying the number of knowledge pieces and model sizes, we outline a knowledge capacity scaling law. Our idealized setting, free from irrelevant data, allows for more accurate scaling law computations; we also discuss how "junk" data affects capacity. In contrast, it is difficult to quantify real-life knowledge; for instance, if LLaMA-70B outperforms LLaMA-7B by 30% on a benchmark, it doesn't necessarily mean a tenfold model scaling only boosts capacity by 30% (see Footnote 1). The synthetic setting also lets us adjust various hyperparameters, like name/value lengths and vocabulary size, to study their effects on knowledge capacity scaling laws. Most of the paper shall focus on a setting with synthetically-generated human biographies as data, either using predefined sentence templates or LLaMA2-generated biographies for realism.

**Bit Complexity and Capacity Ratio.** For _N_ knowledge pieces (i.e., _N_ tuples), we define the _bit complexity_ as the minimum bits required to encode these tuples.
For any language model trained on this data, we calculate its "bit complexity lower bound" (see Theorem 3.1), describing the minimum number of bits needed for the model to store the knowledge at its given accuracy. This formula is nearly as precise as the upper bound, within a $1 - o(1)$ factor. We train language models of varying sizes on knowledge data with different _N_ values. By comparing the models' trainable parameters to the bit complexity lower bounds, we evaluate their knowledge storage efficiency. A model with 100M parameters storing 220M bits of knowledge has a _capacity ratio_ of 2.2 bits per parameter.

**Our results.** Our findings are summarized as follows:

- R ESULTS 1-3: B ASE SCALING LAW FOR GPT2. [3]
  **–** R ESULT 1+2+3: GPT2, trained with standard AdamW, consistently achieves a 2bit/param capacity ratio across all data settings after sufficient training. This includes various model sizes, depths, widths, data sizes, types (synthetic/semi-synthetic), and hyperparameters (e.g., name/value length, attribute number, value diversity).

2 Examples include (Africa, largest country, Sudan) and (It Happened One Night, director, Frank Capra) in TriviaQA (Joshi et al., 2017), or (Teton Dam, collapse date, 06/05/1976) and (USA, Capital, Washington D.C.) in NaturalQuestions (Kwiatkowski et al., 2019).

3 In this paper, GPT2 refers to the GPT2 model with rotary embedding instead of positional embedding and without dropout.

[Figure 1 ((a) bioS(_N_) data, **1000 exposures**, peak _R_(_F_) ≥ 2; (b) bioS(_N_) data, **100 exposures**, peak _R_(_F_) ≥ 1): Scaling laws for GPT2 pretrained on bioS(_N_) data using fp16 (mixed-precision) for 1000/100 exposures. **Conclusion.** The _peak_ capacity ratios consistently exceed _R_(_F_) ≥ 2 (resp. ≥ 1) for 1000 exposures (resp. 100 exposures) of pretraining on each knowledge piece, **regardless of model depth/size**. **Remarks.** Each dot _ℓ_-_h_ represents GPT2 with _ℓ_ layers, _h_ heads, and 64_d_ dimensions. The learned knowledge is calculated by the bit-complexity lower bound in Theorem 3.1. The full paper also includes: similar results for bioS^simple(_N_) and bioR(_N_) data, the _same holds_ for quantization using int8, and confirming full extractability of all learned knowledge. [5]]

**Larger models?** Training GPT2-20-16 on bioS(10_M_) for 1000 exposures costs 8.5 days with 64 A100s, while GPT2-12-32 on bioS(20_M_) for 100 exposures took 2.4 days. In our synthetic setting, we see no need to scale up further. Instead, we prefer to allocate GPUs to explore other aspects covered in this paper.

_Remark_ 1.1. This predicts that **a sufficiently trained 7B language model** can store 14B bits of knowledge, surpassing the knowledge of English Wikipedia and textbooks by our estimation. [4]

_Remark_ 1.2. When we say the model _stores knowledge_, it isn't word-by-word memorization. Instead, the knowledge is flexibly extractable (e.g., via QAs like "What is Anya Forger's birthday") (Allen-Zhu & Li, 2024) and applicable in downstream tasks (e.g., comparing birthdays) via fine-tuning (Allen-Zhu & Li, 2025).

- R ESULT 4: H OW TRAINING TIME AFFECTS MODEL CAPACITY. Achieving a 2bit/param capacity requires each knowledge piece to be visited 1000 times during training, termed _**1000-exposure**_ to differentiate from traditional "1000-pass" terminology, as a single data pass can expose a knowledge piece 1000 times. [6]
  **–** R ESULT 4: With 100 exposures, an _undertrained_ GPT2's capacity ratio falls to 1bit/param. (See Figure 1.)

_Remark_ 1.3. Another perspective on Result 4 is that _rare_ knowledge, encountered only 100 times during training, is stored at a 1bit/param ratio.

- R ESULTS 5-7: H OW MODEL ARCHITECTURE AFFECTS MODEL CAPACITY. We tested LLaMA, Mistral, and GPT2 architectures with reduced or even no MLP layers.

4 As of February 1, 2024, English Wikipedia contains a total of 4.5 billion words (see en.wikipedia.org/wiki/Wikipedia:Size_of_Wikipedia#Size_of_the_English_Wikipedia_database, accessed March 2024). We estimate that the non-overlapping contents of English textbooks have fewer than 16 billion words in total, see Remark P.1. This amounts to 20.5 billion words, and we believe they contain fewer than 14 billion bits of knowledge.

5 A distinction exists between memorizable knowledge (e.g., text memorized during pretraining) and knowledge flexibly extractable via instruction fine-tuning (Allen-Zhu & Li, 2024); our results in this paper apply to both.

6 For example, it is plausible that one pass through Wiki data might present the knowledge piece (US, capital, Washington D.C.) 1000 times, and one pass through the Common Crawl might present it a million times.

  **–** R ESULT 5: In the 1000-exposure setting, a 2bit/param capacity ratio appears to be a **universal rule**: all models, even without MLP layers, closely achieve this ratio.
  **–** R ESULT 6: With 100 exposures, some architectures show limitations; notably, LLaMA/Mistral's capacity ratio is 1.3x lower than GPT2's, even after best-tuned learning rates.
  **–** R ESULT 7: Further controlled experiments indicate that "gated MLP" usage leads to the LLaMA/Mistral architecture's underperformance in knowledge storage.

_Remark_ 1.4. **Our framework offers a principled playground to compare models.** This contrasts with traditional comparisons based on loss/perplexity, which can produce debatable conclusions. [7] Controlled data also reveal more significant differences between models. [8]

- R ESULT 8: H OW QUANTIZATION AFFECTS MODEL CAPACITY. We applied GPTQ (Frantar et al., 2022) to quantize models from the base scaling laws to int8 or int4. Surprisingly,
  **–** R ESULT 8: Quantizing to int8 does not compromise model capacity (even for models on the boundary of 2bit/param); however, quantizing to int4 reduces capacity to 0.7bit/param.

_Remark_ 1.5. Since int8 is 8bit, LLMs can exceed 1/4 of the theoretical limit for storing knowledge; thus knowledge must be very compactly stored inside the model across all layers.

_Remark_ 1.6. Since 2bit/param is obtained after sufficient training, training longer _may not_ further improve model capacity, _but quantization can_. While not covered in this paper, our framework also provides a principled playground to compare different quantization methods.

- R ESULT 9: H OW SPARSITY (M O E) AFFECTS MODEL CAPACITY. Mixture-of-experts (MoE) models offer faster inference than dense models but often underperform dense models with the same total parameter count (not effective parameters). We show that this performance drop is likely not due to a lack of knowledge storage capability.
**–** R ESULT 9: MoE models, even with 32 experts, only reduce 1.3x in capacity compared to the base scaling laws, despite using just 8 _._ 8% of the total parameters during inference. - R ESULTS 10-12: H OW JUNK KNOWLEDGE AFFECTS MODEL CAPACITY . Not all pretrain data are equally useful. Much of the internet data lacks valuable knowledge for training language models (Li et al., 2023), while knowledge-rich sources like Wikipedia represent only a small fraction of the training tokens. We explore the impact on model capacity by conducting a controlled experiment with both useful and “junk” data. **–** R ESULT 10+11: Junk data significantly reduces model capacity. As an example, with a 1:7 ratio of “useful to junk” training tokens, capacity for useful knowledge _loses by a factor of_ _20_ x, even when useful knowledge is exposed 100 times. [9] **–** R ESULT 12: An _effective mitigation_ is to prepend a special token to all useful knowledge. This is akin to adding a domain name like wikipedia.org at the start of every Wikipedia paragraph; the model _**autonomously**_ identifies high-quality data without prior knowledge of valuable domains. In the example above, the loss factor improves from 20x to 2x. **Conclusion.** Overall, our approach to studying knowledge capacity scaling laws offers a flexible and **more accurate playground** compared to traditional methods that evaluate language models trained on internet data against real-world benchmarks. This accuracy is partly due to the synthetic nature of our dataset, which eliminates concerns such as data contamination that could compromise the validity of real-world benchmark results. In this paper, we’ve conducted a thorough comparison across different model architectures and types of knowledge. While we haven’t explored various quantization methods, this represents a promising direction for future research. We’ve also investigated the impact of junk data and proposed mitigation strategies. We believe the insights gained from this principled exploration can assist practitioners in making informed decisions about model selection, training data preparation, and further theoretical research into LLMs. 7 A model might achieve better perplexity by performing _much better_ on simpler data but poorer on complex data, or by excelling in reasoning but not in knowledge. Our results offer a more nuanced view: GatedMLP doesn’t affect frequent knowledge but does impact moderately rare knowledge (e.g., with 100 exposures). 8 For example, Shazeer (2020) found GatedMLP offers a _∼_ 1% accuracy boost on benchmark tasks; our findings of a 1.3x difference translates for instance to accuracies 90% vs. 70%. 9 The loss factor improves to 3x/1.5x/1.3x with 300/600/1000 exposures of useful knowledge, compared to Result 4 which involves training without junk for only 100 exposures. 4 2 P RELIMINARIES In this paper, a piece of knowledge is a tuple of three strings: (name, attribute, value) = ( _n, a, v_ ). For instance, _n_ = “Anya” _, a_ = “birthday” _, v_ = “Oct 2, 1996”. 2.1 K NOWLEDGE (T HEORETICAL S ETTING ) The complexity of a knowledge set is determined not only by the number of knowledge pieces but also by the length of the value string _v_, the diversity of the vocabulary, and other factors. For instance, if the attribute _a_ =“passport number,” then the value _v_ contains more bits of knowledge compared with _a_ =“gender,” because the former has significantly higher _diversity_ . 
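A back-of-the-envelope calculation makes the diversity point concrete: if an attribute's value is drawn uniformly from a set of _D_ possibilities, it carries log₂ _D_ bits, so a billion-way "passport number" holds far more information than a two-way "gender". The sketch below is a minimal illustration under that uniform-sampling assumption; the diversity figures are invented for the example.

```python
import math

def bits_per_value(diversity: int) -> float:
    """Bits of knowledge in one value drawn uniformly from `diversity` options."""
    return math.log2(diversity)

for attribute, diversity in [("gender", 2), ("birth month", 12),
                             ("passport number", 10**9)]:
    print(f"{attribute:16s} ~ {bits_per_value(diversity):6.2f} bits")
```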
If the attribute _a_ = "birth date," then the value _v_ could consist of 3 _chunks_: (10, 2, 1996). Considering these examples, we propose a set of hyperparameters that may influence the complexity of knowledge:

1. _N_: the number of (distinct) names _n_.
2. _K_: the number of attributes _a_, with $\mathcal{A}$ representing the set of attributes. For simplicity, we assume $|\mathcal{A}| = K$ is fixed.
3. _T_: the number of tokens, where every character in _v_ belongs to a token set $\mathcal{T}$ with $|\mathcal{T}| = T$. For example, we can think of _T_ as the "vocab size" of a tokenizer.
4. _C_ and _L_: the number of chunks and the length of each chunk for the value: each value $v \in (\mathcal{T}^L)^C$ can be expressed as $v = (v_1, v_2, \cdots, v_C)$, where $v_i \in \mathcal{T}^L$.
5. _D_: the diversity of chunks: for each piece of knowledge $(n, a, v)$ and $i \in [C]$, the chunk $v_i$ belongs to $\mathcal{D}_a \subset \mathcal{T}^L$, for some set with cardinality $D \stackrel{\text{def}}{=} |\mathcal{D}_a| \ll T^L$.

_Remark_ 2.1. For notation simplicity, we have assumed that all chunks within an attribute $a \in \mathcal{A}$ share the same diversity set $\mathcal{D}_a$, and all chunks are of equal length, etc. This enables us to more easily demonstrate the influence of each hyperparameter on a model's capacity. In practice, different attributes may have different diversity sets or value lengths; e.g., $\mathcal{D}_{\text{passport}}$ could be much larger than $\mathcal{D}_{\text{gender}}$. Our theoretical results do apply to these settings, albeit with more complex notation.

In our theoretical result, we introduce a dataset bioD(_N, K, C, D, L, T_) defined as follows:

**Definition 2.2** (bioD data generation)**.** _Consider a fixed set of K attributes, such as a set $\mathcal{A}$ = {"ID 1", …, "ID K"}, and a fixed set $\mathcal{N}_0$ of candidate names (with $N_0 \stackrel{\text{def}}{=} |\mathcal{N}_0| \gg N$)._

_1. Generate N names uniformly at random (without replacement) from $\mathcal{N}_0$ to form $\mathcal{N}$._

_2. For each attribute $a \in \mathcal{A}$, generate D distinct strings $w_{1,a}, \cdots, w_{D,a} \in \mathcal{T}^L$ uniformly at random (without replacement) to form the diversity set $\mathcal{D}_a$._

_3. For each name $n \in \mathcal{N}$ and attribute $a \in \mathcal{A}$, generate the value $v^\star(n, a) = (v_1, v_2, \cdots, v_C)$ by sampling each $v_i \in \mathcal{D}_a$ uniformly at random._

_Let $\mathcal{Z} \stackrel{\text{def}}{=} \{(n, a, v^\star(n, a))\}_{n \in \mathcal{N}, a \in \mathcal{A}}$ be the knowledge set._

**Proposition 2.3** (trivial, bit complexity upper bound)**.** _Given $\mathcal{N}_0$ and $\mathcal{A}$ and $\mathcal{T}$, to describe a knowledge set generated in Definition 2.2, one needs at most the following number of bits:_

$$\log_2 \binom{|\mathcal{N}_0|}{N} + NKC \log_2 D + K \log_2 \binom{T^L}{D} \;\approx\; N \log_2 \frac{|\mathcal{N}_0|}{N} + NKC \log_2 D + KD \log_2 \frac{T^L}{D}.$$

(The approximation is valid when $|\mathcal{N}_0| \gg N$ and $T^L \gg D$.) We will present a bit complexity lower bound in Section 3.

2.2 K NOWLEDGE (E MPIRICAL S ETTING )

We utilize both the synthetic bioD dataset, generated as per Definition 2.2, and several human biography datasets to evaluate language model scaling laws. Allen-Zhu & Li (2024) introduced a synthetic biography dataset comprising _N_ randomly-generated (fake) individuals, each characterized by six attributes: birth date, birth city, university, major, em…
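As a sanity check on Definition 2.2, here is a minimal Python sketch of the bioD generation process; the candidate-name pool and alphabet are toy stand-ins, and the three sampling steps mirror items 1–3 of the definition.

```python
import itertools
import random
import string

def gen_bioD(N=4, K=3, C=2, D=5, L=3, seed=0):
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase                 # toy token set T, |T| = 26
    # Step 1: draw N names without replacement from a candidate pool N_0.
    pool = [f"name{i}" for i in range(100)]           # N_0, with |N_0| >> N
    names = rng.sample(pool, N)
    attrs = [f"ID {k + 1}" for k in range(K)]
    # Step 2: per attribute, draw D distinct chunks from T^L (the diversity set D_a).
    all_chunks = ["".join(p) for p in itertools.product(alphabet, repeat=L)]
    div = {a: rng.sample(all_chunks, D) for a in attrs}
    # Step 3: value v*(n, a) = C chunks sampled uniformly from D_a (repeats allowed).
    return {(n, a): tuple(rng.choice(div[a]) for _ in range(C))
            for n in names for a in attrs}

Z = gen_bioD()
for (n, a), v in list(Z.items())[:3]:
    print(n, a, v)
```

Idea Generation Category: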
3Other
FxNNiUgtfa
# C YBER H OST : A O NE - STAGE D IFFUSION F RAMEWORK FOR A UDIO - DRIVEN T ALKING B ODY G ENERATION

**Gaojie Lin** [1] _[∗]_ **, Jianwen Jiang** [1] _[∗†]_ **, Chao Liang** [1] **, Tianyun Zhong** [2] _[‡]_ **, Jiaqi Yang** [1] **, Yanbo Zheng** [1] 1 ByteDance, 2 Zhejiang University _{_ lingaojiecv,jianwen.alan,liangchao.0412 _}_ @gmail.com zhongtianyun@zju.edu.cn

A BSTRACT

Diffusion-based video generation technology has advanced significantly, catalyzing a proliferation of research in human animation. While breakthroughs have been made in driving human animation through various modalities for portraits, most current solutions for human body animation still focus on video-driven methods, leaving audio-driven talking body generation relatively underexplored. In this paper, we introduce CyberHost, a one-stage audio-driven talking body generation framework that addresses common synthesis degradations in half-body animation, including hand integrity, identity consistency, and natural motion. CyberHost's key designs are twofold. Firstly, the Region Attention Module (RAM) maintains a set of learnable, implicit, identity-agnostic latent features and combines them with identity-specific local visual features to enhance the synthesis of critical local regions. Secondly, the Human-Prior-Guided Conditions introduce more human structural priors into the model, reducing uncertainty in generated motion patterns and thereby improving the stability of the generated videos. To our knowledge, CyberHost is the first one-stage audio-driven human diffusion model capable of zero-shot video generation for the human body. Extensive experiments demonstrate that CyberHost surpasses previous works in both quantitative and qualitative aspects. CyberHost can also be extended to video-driven and audio-video hybrid-driven scenarios, achieving similarly satisfactory results. [Video samples are available at https://cyberhost.github.io/.](https://cyberhost.github.io/)

1 I NTRODUCTION

Human animation aims to generate realistic and natural human videos from a single image and control signals such as audio, text, and pose sequences. Previous works (Prajwal et al., 2020; Yin et al., 2022; Wang et al., 2021; Ma et al., 2023; Zhang et al., 2023; Chen et al., 2024b; Xu et al., 2024b;a; Tian et al., 2024; Jiang et al., 2024a; Wang et al., 2024a; Jiang et al., 2024b) have primarily focused on generating talking head videos based on varied input modalities. Among these, audio-driven methods have recently attracted significant interest, particularly those employing diffusion models (Tian et al., 2024; Xu et al., 2024a;b). While these methods can yield impressive results, they are specifically tailored for portrait scenarios, making it challenging to extend them to half-body scenarios to achieve audio-driven talking body generation. This is because generating talking body videos involves more intricate human appearance details and complex motion patterns. On the other end of the spectrum, some recent studies (Karras et al., 2023; Wang et al., 2024b; Hu et al., 2024; Zhang et al., 2024; Xu et al., 2024c; Huang et al., 2024; Corona et al., 2024) focus on tackling video-driven body animation. Unlike audio-driven settings, these methods rely on pose conditions to provide a precise, pixel-aligned body structure prior, enabling the modeling and generation of large-scale human movements and fine-grained local details. Even so, accurately generating detailed body parts remains challenging, as shown in Figure 1.
Video-driven body animation methods often require additional motion generation modules and retargeting techniques, which limit their practical applications.

_∗_ Equal contribution. _†_ Project lead. _‡_ Done during an internship at ByteDance.

[Figure 1: Existing body animation methods struggle to generate detailed hand and facial results in both video-driven (V2V) and audio-driven (A2V) settings (columns: Source Image, MimicMotion (V2V), Ours (V2V); Source Image, DiffGesture+MimicMotion (A2V), Ours (A2V)). In contrast, our approach ensures hand integrity and facial identity consistency. These differences are also illustrated with videos in the supplementary materials.]

Recent works (Liao et al., 2020; Wang et al., 2023c; Hogue et al., 2024; Corona et al., 2024) have explored the implementation of two-stage systems to achieve audio-driven talking body generation. This generally consists of an audio-to-pose module and a pose-to-video module, using poses or meshes as intermediate representations. Nevertheless, this approach faces several critical limitations: (1) The two-stage framework design increases system complexity and reduces the model's learning efficiency. (2) The poses or meshes carry limited information related to expressiveness, constraining the model's ability to capture subtle human nuances. (3) Potential inaccuracies in pose or mesh annotations can diminish the model's performance. Therefore, there is an urgent need to explore how to optimize the generation quality of talking body video within a one-stage audio-driven framework.

In this paper, we aim to address one-stage talking body generation, a topic that remains unexplored in the current literature. The challenge lies in two aspects: 1) _Details Underfitting._ Unlike video-driven methods, capturing local structural details from audio signals is difficult, making it harder to ensure the integrity of body parts, such as the face and hands. Moreover, critical human body parts occupy only a small portion of the frame but carry the majority of the identity information and semantic expression. Unfortunately, neural networks often fail to spontaneously prioritize learning in these key regions, intensifying the issue of underfitting in the generation of local details. 2) _Motion Uncertainty._ Unlike portrait animation, body animation encompasses a higher degree of motion freedom and exhibits a weaker correlation between audio cues and limb movement patterns. Consequently, predicting body movements from audio signals introduces a more substantial one-to-many problem, leading to significant uncertainty in the motion generation process. This uncertainty exacerbates instability in the generated talking body videos, thereby complicating the direct adaptation of audio-driven portrait animation techniques to half-body scenarios.

To address these two challenges, we propose CyberHost, a one-stage audio-driven talking body framework capable of zero-shot human video generation. On one hand, CyberHost introduces a region attention module (RAM) to address the issue of underfitting local details. Specifically, RAM utilizes a learnable spatio-temporal latents bank to capture common local human details from the data, such as topological structures and motion patterns, thereby ensuring the maintenance of structural details. Additionally, it integrates appearance features from local cropped images, serving as identity descriptors, to supplement the identity-specific texture details.
On the other hand, to address the motion uncertainty problem, we designed human-prior-guided conditions to incorporate motion pattern constraints and human structural priors into the human video generation process. Specifically, for global motion, we propose a body movement map to constrain the motion space of the human root node, and for local motion, we introduce a hand clarity score to mitigate hand degradation caused by motion blur. Additionally, for human structure, we utilize the skeleton map of the reference image to extract pose-aligned reference features, thereby providing the model with initial pose information from the reference image.

In our experiments, we validated the effectiveness of the region attention modules and human-prior-guided conditions. Both qualitative and quantitative experiments demonstrate that CyberHost achieves superior results compared to existing methods. Moreover, we validated the exceptional performance of CyberHost in various settings, including audio-driven, video-driven, and multimodal-driven scenarios, as well as its zero-shot video generation capability for open-set images.

We summarize our technical contributions as follows: 1) We propose the first **one-stage audio-driven talking body framework** enabling zero-shot human body animation without relying on any intermediate representations, and validate its effectiveness across multiple scenarios. 2) We crafted the region attention module (RAM) to enhance the generation quality of key local regions such as hands and faces, by including a spatio-temporal latents bank to learn shared local structural details and an identity descriptor to supplement ID-specific texture details. 3) We designed a suite of human-prior-guided conditions to mitigate the instability caused by motion uncertainty in audio-driven settings.

2 R ELATED W ORK

**Video Generation.** Benefiting from the advancements in diffusion models, video generation has made significant progress in recent years. Some early works (Singer et al., 2022; Blattmann et al., 2023a; Zhou et al., 2022; He et al., 2022; Wang et al., 2023a) have attempted to directly extend the 2D U-Net pretrained on text-to-image tasks into 3D to generate continuous video segments. AnimateDiff (Guo et al., 2024) trained a pluggable temporal module on large-scale video data, allowing easy application to other text-to-image backbones and enabling text-to-video generation with minimal fine-tuning. For controllability, Wang et al. (2023b) trained a Composer Fusion Encoder to integrate multiple modalities of input as control conditions, thereby making video generation for complex scenes such as human bodies more controllable.

**Body Animation.** Existing body animation approaches (Hu et al., 2024; Wang et al., 2024b; Xu et al., 2024c; Karras et al., 2023; Zhou et al., 2022) mainly focus on video-driven settings, where the control signals are pose sequences extracted from the driving video. DreamPose (Karras et al., 2023) uses DensePose (Güler et al., 2018) to train a diffusion model for sequential pose transfer. MagicAnimate (Xu et al., 2024c) extends the 2D U-Net to 3D to enhance temporal smoothness. AnimateAnyone (Hu et al., 2024) employs a dual U-Net architecture to maintain consistency between the generated video and the reference images. Some speech-driven body animation works (Liao et al., 2020; Ginosar et al., 2019; Wang et al., 2023c; Corona et al., 2024) do exist, but they typically employ a two-stage framework.
Speech2Gesture (Ginosar et al., 2019) first predicts a gesture sequence and then utilizes a pre-trained GAN to render it into the final video. Similarly, Vlogger (Corona et al., 2024) employs two diffusion models to separately perform audio-to-mesh and mesh-to-video mapping. Two-stage methods rely on explicit intermediate representations to mitigate the training difficulty in audio-driven settings. However, the limited expressive capabilities of these intermediate representations can also constrain the overall performance. Unfortunately, the end-to-end training of a one-stage diffusion model for audio-driven talking body generation remains unexplored.

3 M ETHOD

3.1 O VERVIEW

We develop our algorithm based on the Latent Diffusion Model (LDM) (Blattmann et al., 2023b), which utilizes a Variational Autoencoder (VAE) Encoder (Kingma & Welling, 2014) $\mathcal{E}$ to transform the image $I$ from pixel space into a more compact latent space, represented as $z_0 = \mathcal{E}(I)$, to reduce the computational load. During training, random noise is iteratively added to $z_0$ at various timesteps $t \in [1, ..., T]$, ensuring that $z_T \sim \mathcal{N}(0, 1)$. The training objective of LDM is to predict the added noise at every timestep $t$:

$$\mathcal{L} = \mathbb{E}_{z_t, t, c, \epsilon \sim \mathcal{N}(0,1)}\left[ \lVert \epsilon - \epsilon_\theta(z_t, t, c) \rVert_2^2 \right], \tag{1}$$

where $\epsilon_\theta$ denotes the trainable components such as the Denoising U-Net, and $c$ represents the conditional inputs like audio or text. During inference, the trained model is used to iteratively remove noise from a noised latent sampled from a Gaussian distribution. Subsequently, the denoised latent is decoded into an image using the VAE Decoder $\mathcal{D}$.

[Figure 2: **The overall structure of CyberHost.** We aim to generate a video clip by driving a reference image based on an audio signal. Region attention modules (RAMs) are inserted at multiple stages of the denoising U-Net for fine-grained modeling of local regions. Additionally, Human-Prior-Guided Conditions, including the body movement map, hand clarity score and pose-aligned reference features, are also introduced to reduce motion uncertainty. The reference network extracts motion cues from motion frames for temporal continuation.]

Our proposed CyberHost takes a human reference image and a speech audio clip as inputs, ultimately generating a synchronized human video. The overall architecture is illustrated in Figure 2. We referenced the design of the reference net from AnimateAnyone (Hu et al., 2024) and TryOnDiffusion (Zhu et al., 2023b), as well as the motion frames from Diffused-Heads (Stypulkowski et al., 2024) and EMO (Tian et al., 2024), to construct a baseline framework. Specifically, a copy of the 2D U-Net is utilized as a reference net to extract reference features from the reference image and motion features from the motion frames. For audio, we use Wav2vec (Schneider et al., 2019) to extract multi-scale features. For the denoising U-Net, we extend the 2D version to 3D by integrating the pretrained temporal module from AnimateDiff (Guo et al., 2024), enabling it to predict human body video clips. The reference, motion frames, and audio features are fed into the denoising U-Net in each resblock. They are combined with the latent features in the spatial dimension to share the self-attention layer, in the temporal dimension to share the temporal module, and through an additional cross-attention layer, respectively.
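For readers who prefer code, below is a minimal PyTorch rendition of the ε-prediction objective in Eq. (1); `eps_theta` stands in for the conditioned denoising U-Net, and the linear noise schedule is an illustrative assumption rather than the paper's actual schedule.

```python
import torch

def ldm_loss(eps_theta, z0, c, T=1000):
    """One training step of the eps-prediction objective (Eq. 1).

    eps_theta: callable (z_t, t, c) -> predicted noise, same shape as z0
    z0:        clean latents, shape (B, C, H, W)
    c:         conditioning (e.g., audio features)
    """
    B = z0.shape[0]
    t = torch.randint(1, T + 1, (B,), device=z0.device)           # random timesteps
    # Illustrative linear alpha-bar schedule (assumption, not the paper's).
    alpha_bar = (1.0 - t.float() / T).clamp(min=1e-4).view(B, 1, 1, 1)
    eps = torch.randn_like(z0)                                    # target noise
    z_t = alpha_bar.sqrt() * z0 + (1.0 - alpha_bar).sqrt() * eps  # noised latent
    return ((eps - eps_theta(z_t, t, c)) ** 2).mean()             # ||eps - eps_theta||^2

# Usage with a dummy stand-in for the denoising U-Net:
net = lambda z_t, t, c: torch.zeros_like(z_t)
loss = ldm_loss(net, torch.randn(2, 4, 8, 8), c=None)
print(loss.item())
```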
As shown in Figure 2, based on the baseline, we proposed two key designs to address the inherent challenges of the audio-driven talking body generation task. First, to enhance the model’s ability to capture details in critical human region, _i.e._, hands and faces, we adapt the proposed region attention module (RAM), detailed in section 3.2) to both the facial and hand regions and insert them into multiple stages of the Denoising U-Net. RAM consists of two parts: the spatio-temporal region latents bank learned from the data and the identity descriptor extracted from cropped local images. Second, to reduce the motion uncertainty in half-body animation driven solely by audio, several conditions (detailed in section 3.3) have been designed to integrate global-local motion constraints and human structural priors: (1) The body movement map is employed to stabilize the root movements of the body. It is encoded and merged with the noised latent, serving as the input for the denoising U-Net. (2) The hand clarity score is designed to prevent hand prediction degradation caused by motion blur in the training data. It is incorporated as a residual into the time embedding. (3) The pose encoder encodes the reference skeleton map, which is then integrated into the reference latent, yielding a pose-aligned reference feature. Note that the pose encoder for the body movement map and the reference skeleton map share the same model architecture, except for the first convolution layer, but do not share any model parameter. 4 Figure 3: An illustration of region attention module (RAM), using the hand region as an example. 3.2 R EGION S YNTHESIS WITH R EGION A TTENTION M ODULES While the popular dual U-Net architecture effectively maintains overall visual consistency between the generated video and reference image, it struggles with generating fine-grained texture details and complex motion patterns in local areas like the face and hands. This challenge is further exacerbated in the task of audio-driven human body animation due to the absence of explicit control signals. To address this issue, we meticulously designed the structure of the RAM to enhance its ability to learn local details. As shown in Figure 3, our proposed RAM comprises two key parts: spatiotemporal region latents bank and identity descriptor. The former aims to learn identity-agnostic general features, while the latter focuses on extracting identity-specific unique features. Together, they enhance the synthesis of local human regions. In subsequent sections, we apply it to the face and hands, two areas that typically present significant challenges, and confirm its effectiveness. **Region Latents Bank.** RAM enhances the model by adding a spatio-temporal latents bank as additional training parameters, prompting it to learn shared local structural priors, including common topological structures and motion patterns. The region latents are composed of two sets of learnable basis vectors: **L** spa _∈_ R [1] _[×][n][×][d]_ for spatial features and **L** temp _∈_ R [1] _[×][m][×][d]_ for temporal features, where _n_ and _m_ denote the number of basis vectors and _d_ denotes the channel dimension. We consider the combination of **L** spa and **L** temp as a pseudo 3D latents bank, endowing it with the capability to learn spatio-temporal features jointly. This capability facilitates the modeling of 3D characteristics such as hand motion. 
Furthermore, we constrain the basis vectors of the latents bank to be mutually orthogonal to maximize its learning capacity. The regional latent features are integrated into the U-Net through a spatio-temporal cross-attention, as shown in Figure 3. Given the backbone feature $\mathbf{F}^{\text{in}}_{\text{unet}}$ from the U-Net, we apply cross-attention with $\mathbf{L}_{\text{spa}}$ in the spatial dimension and with $\mathbf{L}_{\text{temp}}$ in the temporal dimension. The final output $\mathbf{F}_{\text{latent}}$ is formulated as the sum of the two attentions' results,

$$\mathbf{F}_{\text{latent}} = \mathrm{Attn}(\mathbf{F}^{\text{in}}_{\text{unet}}, \mathbf{L}_{\text{spa}}, \mathbf{L}_{\text{spa}}) + \mathrm{Attn}(\mathbf{F}^{\text{in}}_{\text{unet}}, \mathbf{L}_{\text{temp}}, \mathbf{L}_{\text{temp}}) \tag{2}$$

$$= \mathrm{softmax}\!\left(\frac{\mathbf{Q}\mathbf{K}_{\text{spa}}^{T}}{\sqrt{d}}\right) \cdot \mathbf{V}_{\text{spa}} + \mathrm{softmax}\!\left(\frac{\mathbf{Q}\mathbf{K}_{\text{temp}}^{T}}{\sqrt{d}}\right) \cdot \mathbf{V}_{\text{temp}} \tag{3}$$

where $\mathrm{Attn}(*, *, *)$ denotes the cross-attention, and $\mathbf{Q}$, $\mathbf{K}$ and $\mathbf{V}$ are the query, key, and value, respectively. We aim for $\mathbf{F}_{\text{latent}}$ to fully utilize the spatio-temporal motion priors of the local region learned within the 3D latents bank, refining and guiding the U-Net features through residual addition. Notably, to effectively focus the latents bank on feature learning for the target local region while filtering out gradient information from unrelated areas, we require a regional mask to weight the residual addition process. Due to the absence of prior information on body part positions in the audio-driven scenario, we employ auxiliary convolutional layers as a regional mask predictor. This predictor directly estimates a regional attention mask $\mathbf{M}_{\text{pred}}$ using the U-Net feature $\mathbf{F}^{\text{in}}_{\text{unet}}$. During training, we use region detection boxes to generate supervision signals $\mathbf{M}_{\text{gt}}$ for the mask predictor.

**Identity Descriptor.** The process of learning the latents bank is identity-agnostic. It leverages data to learn the shared local structural and motion pattern priors of the human body. However, identity-specific features such as hand size, skin color, and textures are also important and cannot be overlooked. To complement this, we employ a regional image encoder $\mathcal{R}$ to extract identity-aware…
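A compact sketch of the spatio-temporal cross-attention in Eqs. (2)–(3) follows, with single-head attention, learned projections omitted for brevity, and hypothetical tensor shapes (batch × frames × spatial tokens × channels); this illustrates the formula, not the authors' implementation.

```python
import torch

def attn(q, k, v):
    """softmax(Q K^T / sqrt(d)) V, single head; Q/K/V projections omitted."""
    d = q.shape[-1]
    return torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1) @ v

d, n, m = 64, 8, 4                        # channels, #spatial latents, #temporal latents
L_spa = torch.randn(1, n, d)              # learnable spatial basis  (Eq. 2)
L_temp = torch.randn(1, m, d)             # learnable temporal basis

F_in = torch.randn(2, 6, 16, d)           # (batch, frames, spatial tokens, d), hypothetical
B, T, S, _ = F_in.shape

# Spatial attention: queries are the per-frame spatial tokens.
F_spa = attn(F_in.reshape(B * T, S, d),
             L_spa.expand(B * T, -1, -1),
             L_spa.expand(B * T, -1, -1)).reshape(B, T, S, d)

# Temporal attention: queries are the per-location temporal sequences.
F_t = F_in.permute(0, 2, 1, 3).reshape(B * S, T, d)
F_temp = attn(F_t,
              L_temp.expand(B * S, -1, -1),
              L_temp.expand(B * S, -1, -1)).reshape(B, S, T, d).permute(0, 2, 1, 3)

F_latent = F_spa + F_temp                 # Eqs. (2)-(3): sum of the two attentions
print(F_latent.shape)                     # torch.Size([2, 6, 16, 64])
```

Idea Generation Category: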
0Conceptual Integration
vaEPihQsAA
# C AN L ARGE L ANGUAGE M ODELS U NDERSTAND S YMBOLIC G RAPHICS P ROGRAMS ? **Zeju Qiu** **[1,†]** **Weiyang Liu** **[1,2,†,*]** **Haiwen Feng** **[1,†]** **Zhen Liu** **[1,‡]** **Tim Z. Xiao** **[1,‡]** **Katherine M. Collins** **[2,‡]** **Joshua B. Tenenbaum** **[3]** **Adrian Weller** **[2]** **Michael J. Black** **[1]** **Bernhard Schölkopf** **[1]** 1 Max Planck Institute for Intelligent Systems, Tübingen 2 University of Cambridge 3 MIT - Joint first author - Joint second author - Project lead **[sgp-bench.github.io](https://sgp-bench.github.io)** A BSTRACT Against the backdrop of enthusiasm for large language models (LLMs), there is a growing need to scientifically assess their capabilities and shortcomings. This is nontrivial in part because it is difficult to find tasks which the models have not encountered during training. Utilizing symbolic graphics programs, we propose a domain well-suited to test multiple spatial-semantic reasoning skills of LLMs. Popular in computer graphics, these programs procedurally generate visual data. While LLMs exhibit impressive skills in general program synthesis and analysis, symbolic graphics programs offer a new layer of evaluation: they allow us to test an LLM’s ability to answer semantic questions about the images or 3D geometries without a vision encoder. To semantically understand the symbolic programs, LLMs would need to possess the ability to “imagine” and reason how the corresponding graphics content would look with only the symbolic description of the local curvatures and strokes. We use this task to evaluate LLMs by creating a large benchmark for the semantic visual understanding of symbolic graphics programs, built procedurally with minimal human effort. Particular emphasis is placed on transformations of images that leave the image level semantics invariant while introducing significant changes to the underlying program. We evaluate commercial and open-source LLMs on our benchmark to assess their ability to reason about visual output of programs, finding that LLMs considered stronger at reasoning generally perform better. Lastly, we introduce a novel method to improve this ability – _Symbolic Instruction Tuning_ (SIT), in which the LLM is finetuned with pre-collected instruction data on symbolic graphics programs. Interestingly, we find that SIT not only improves LLM’s understanding on symbolic programs, but it also improves general reasoning ability on various other benchmarks. 1 I NTRODUCTION What are large language models (LLMs) capable of? Recent studies [ 5, 58 ] have shown that LLMs are able to generate generic computer programs, indicating a degree of pragmatic understanding of the symbolic structure of programs. Motivated by this progress, we focus on another important family of computer programs, called symbolic graphics programs, where a graphics content ( _e.g._, image, 3D asset) can be generated by running a program. We are interested in the following question: _Can large_ _language models “understand” symbolic graphics programs?_ Before trying to answer this question, we start by defining what we consider “understanding” of symbolic graphics programs, in the context of this work. Because a (deterministic) graphics program can be uniquely rendered to an image (the graphics programs we consider here), we characterize LLMs’ understanding of the graphics program as the semantic understanding of the corresponding rendered image. 
More specifically, we approximate such a semantic visual understanding by the ability to correctly answer semantic questions only based on the raw program input. These semantic questions are generated based on the rendered image, such that they are easy to answer given the image and yet challenging given only the program as text prompt. Guided by this insight, we propose a generic pipeline for creating benchmarks that can evaluate this particular ability of LLMs to understand symbolic graphics programs, while requiring minimal human effort. While we of course recognize that there are other elements of visual reasoning that characterize understanding in humans, and that we ought to evaluate in machine intelligence, we believe that our benchmark provides insight into one element of "understanding" of symbolic graphics programs that helps assay what current LLMs are (and are not) capable of.

… good "visual imagination" of this program. Second, symbolic programs represent a procedural way to generate the graphics content, and hence the semantic understanding also requires long-range sequential reasoning over the program. The order of the symbolic operations may substantially affect its semantic meaning, making the problem quite challenging. Third, many semantic questions involve an accurate grounding of semantic components, and such a grounding in the symbolic program requires a fine-grained understanding of the program structure. This motivates us to study whether LLMs have the ability to semantically understand symbolic graphics programs, and furthermore, how to improve this ability. In general, correctly answering semantic questions about symbolic graphics programs requires a combination of multiple sophisticated reasoning abilities from LLMs, which therefore makes the task of symbolic graphics program understanding an ideal benchmark to contribute towards evaluating the holistic reasoning capabilities of LLMs. Reasoning over symbolic programs is of particular interest from a cognitive perspective as well [103, 84, 25]. To what extent LLMs can operate over such a representation with rich structures remains an open problem.

Motivated by the significance of symbolic graphics program understanding, we build a benchmark, called _SGP-Bench_, for two important variants of symbolic graphics programs: scalable vector graphics (SVG) as a generic language for representing 2D vector graphics, and customized computer-aided design (CAD) as a domain-specific language (DSL) for representing 2D/3D objects. Our benchmark consists of two types of evaluations. (1) _Semantic understanding_: We construct a number of semantic questions (_i.e._, multiple-choice questions with 4 options) from a set of images (from multiple different categories). These questions are fed to LLMs along with the symbolic program to evaluate the semantic understanding. (2) _Semantic consistency_: To evaluate the robustness of LLM's semantic understanding, we perform random translation and rotation to the original symbolic programs and then test the same semantic questions based on the perturbed programs. We evaluate the consistency of the answers from LLMs using these perturbed symbolic programs with identical semantic meaning. This evaluation can also help lower the possibility of test data leakage, because the randomly perturbed programs are unlikely to be seen during pretraining. An overview of SGP-Bench is given in Figure 1. We further validate our automated labels via a human study (see Appendix B).
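The semantic-consistency perturbations described above are easy to reproduce: SVG supports affine transforms, so a program can be translated and rotated without changing what the image depicts. Below is a minimal Python sketch that wraps an SVG document's content in a transform group; the specific offsets and angle ranges are arbitrary choices for illustration.

```python
import random
import re

def perturb_svg(svg: str, seed: int = 0) -> str:
    """Wrap an SVG's content in a random translate+rotate group.

    The rendered image is moved/rotated, but its semantics (what it
    depicts) are unchanged, so the perturbed program should receive
    the same answers to semantic questions.
    """
    rng = random.Random(seed)
    dx, dy = rng.uniform(-20, 20), rng.uniform(-20, 20)
    theta = rng.uniform(0, 360)
    group = f'<g transform="translate({dx:.1f},{dy:.1f}) rotate({theta:.1f})">'
    # Split right after the opening <svg ...> tag, then close the group
    # just before the closing </svg>.
    head, tail = re.split(r"(?<=>)", svg, maxsplit=1)
    return head + group + tail.replace("</svg>", "</g></svg>")

svg = ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
       '<circle cx="50" cy="50" r="20"/></svg>')
print(perturb_svg(svg))
```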
In addition to performing evaluation under the common in-context setting wherein LLMs are used "out-of-the-box" and not finetuned, we also evaluate whether finetuning LLMs on a curated dataset can boost performance. To this end, we propose _Symbolic Instruction Tuning_ (SIT). The key idea is to collect an instruction dataset based on the rendered images. Because the semantic questions of interest are usually easy to answer from the visual input, we take advantage of the rendered images (that correspond to symbolic programs) and query a powerful language-vision model (_e.g._, GPT-4o) for detailed captioning. This leads to a scalable way to collect an instruction dataset for symbolic graphics programs. Then, we simply finetune open-source LLMs on this dataset. Our experiments demonstrate that SIT can improve a model's semantic understanding of symbolic programs, and more importantly, its general reasoning ability. Our contributions are summarized below:

- We introduce a new task of symbolic graphics program understanding and propose a generic yet highly scalable benchmark creation pipeline for this task.
- We build a large benchmark, SGP-Bench, for comprehensively evaluating LLM's semantic understanding and consistency of symbolic graphics programs. In SGP-Bench, we consider two types of symbolic graphics programs: SVG for 2D vector graphics and CAD for 2D/3D objects.
- To improve the symbolic program understanding, we collect an instruction-following dataset and propose symbolic instruction tuning, which can also improve general reasoning performance.
- Finally, we introduce a symbolic MNIST dataset where the symbolic program is so challenging for LLMs to understand that GPT-4o can only achieve chance-level performance, while the rendered image is easily recognizable by humans.

2 S EMANTIC U NDERSTANDING OF S YMBOLIC G RAPHICS P ROGRAMS

We introduce the task of semantic symbolic graphics program understanding. Our goal is to assess to what extent an LLM is able to "understand" a symbolic graphics program, which may belie some latent capability to "visually imagine". Specifically, we leverage the correspondence between deterministic symbolic graphics programs and rendered images, and then we characterize the understanding of symbolic graphics programs as the semantic understanding of the corresponding rendered image.

[Figure 2: Illustration of the symbolic graphics program understanding task. A symbolic program (here, an SVG source listing) is rendered into an image; semantic questions such as "How many scoops of ice cream are in the bowl?", "What color is the scoop on the right?", "How many straws are there in the image?", and "What type of dessert is shown in the image?" are then answered based on the program alone.]

To do so, we use the performance on question-answering to evaluate the semantic understanding of images. The same set of questions, along with the corresponding symbolic graphics programs, are then used to evaluate the symbolic program understanding of LLMs (the rendered image will not be used here). Figure 2 gives an illustration of symbolic graphics program understanding. The intuition behind this evaluation is that, if an LLM has a good sense of the symbolic graphics and implicit de-rendering, then the LLM should have a rough understanding about its rendered image such that it is able to answer arbitrary semantic questions about the rendered image.

Symbolic graphics program understanding can be viewed as a form of visual question answering in the sense that the visual input is represented by a symbolic program representation. Compared to current vision-language models [56, 55, 121] that encode images with a text-aligned encoder [76], we consider the case where the visual input is encoded by a symbolic program that can exactly recover the graphics content. From this perspective, our task aims to uncover the potential of using symbolic programs as a representation to perform visual reasoning.

3 W HY IS U NDERSTANDING S YMBOLIC G RAPHICS P ROGRAMS I NTERESTING ?

[Figure 3: A qualitative example of CAD reasoning.]

To showcase why semantic understanding of symbolic graphics programs requires multiple sophisticated reasoning abilities, we provide a few qualitative examples of LLMs' outputs in Figure 3 and Figure 4.
Figure 3 shows a qualitative example of how OpenAI-o1 reasons over the CAD program. The reasoning process is highly nontrivial, as it requires multiple reasoning abilities, such as numeric perception, spatial reasoning, geometric understanding, long-range planning, and common sense. This is also part of the reason that SGP-Bench can evaluate the general reasoning ability of LLMs well. In Figure 4, we query different LLMs (from weak to strong: Llama-3.1-8B, Llama-3.1-70B, OpenAI-o1) by asking which digit a given SVG program represents.

Figure 4: Qualitative examples of how LLMs reason over the symbolic program and obtain their answers.

We can observe that all LLMs start to reason from the low-level curves and gradually build up their understanding from local components to semantic elements. Specifically, LLMs understand the symbolic program through line-by-line reasoning and then combine the results into an overall semantic understanding. This process is intriguing because it shows that LLMs understand symbolic programs through reasoning rather than memorization. More interestingly, the more powerful LLMs, _e.g._, OpenAI-o1, show better general understanding of the symbolic program and its fine-grained grounding, which is consistent with the results on SGP-Bench.

Vision inputs can also trigger spurious correlations (for instance, a model recognizing a famous visual illusion and answering from memory rather than from the actual image); this spurious correlation can be avoided when using the symbolic graphics program as the visual representation. Specifically, we construct a visual example that resembles the Ebbinghaus illusion, but whose correct answer differs from the classic Ebbinghaus illusion (_i.e._, two orange circles of the same size that merely look different), as we intentionally make one of the orange circles obviously larger than the other (shown in Figure 5). Then we feed this curated example to OpenAI-o1 and ask which orange circle is larger. We compare three cases: (a) image input; (b) symbolic program input; and (c) image input with _indirect symbolic program prompting_. Once we first ask the LLM to translate the image into a symbolic program, the LLM no longer suffers from the spurious correlation.

To construct the benchmark, we query GPT-4o to generate 4 semantic questions based on the rendered images, and then we inspect them manually to make sure that these questions are reasonable and their answers are correct. We also run a human study over a randomized set of 500 of the automatically generated questions along with the corresponding images, and find high agreement (see Appendix B). The overall procedure for our dataset creation is given in Figure 7. In this pipeline, the rendering of symbolic programs and the GPT-4o querying are both scalable and can be done with minimal human involvement. Human annotators then inspect the generated question-answer pairs based on the rendered image, which requires much less effort than manually writing questions and answers. We emphasize that this program-question-answer triplet creation method is general, as it works for most symbolic graphics programs. SVG and 2D CAD programs directly produce 2D images, so it is straightforward to use this pipeline. For 3D CAD programs, which produce 3D models, we first render them into 2D images from a few fixed camera positions. These rendered images are used to query GPT-4o, and the following procedures are identical to the SVG case.
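Below is a minimal sketch of the program-question-answer triplet pipeline just described. The function names are illustrative placeholders (the paper does not specify an implementation); the structure, namely render, query GPT-4o for a multiple-choice question, and keep the triplet for human inspection, follows the text.

```python
def render_program(program: str, kind: str) -> list[bytes]:
    """Render a symbolic program to one or more 2D images. SVG and 2D CAD
    yield a single image; 3D CAD is rendered from a few fixed camera poses."""
    raise NotImplementedError  # the renderer depends on the program kind

def generate_mcq(image_png: bytes) -> dict:
    """Ask a vision-language model (e.g., GPT-4o) for one multiple-choice
    question with 4 options and its answer. Stub for an actual API call."""
    raise NotImplementedError

def build_triplet(program: str, kind: str = "svg") -> dict:
    images = render_program(program, kind)
    qa = generate_mcq(images[0])  # {"question": ..., "options": [...], "answer": ...}
    # Triplets are then inspected by human annotators before entering SGP-Bench.
    return {"program": program, **qa}
```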
Figure 7: Dataset construction procedure.

Figure 8: Key dataset statistics for both SVG and CAD programs. We show the distribution of the number of operations per program for both SVG and CAD data (and the number of constraints for 2D CAD), together with the number of examples in each category of the SVG dataset. (a) SVG dataset statistics summary; (b) CAD dataset (3D, 3D Recon., 2D) statistics summary.

4.2 BENCHMARKING SEMANTIC UNDERSTANDING

**SVG dataset statistics**. We collect 1,085 SVG programs covering 19 categories, and each program has 4 semantic multiple-choice questions (with 4 options), resulting in a total of 4,340 questions. We ensure that answers are evenly distributed across options. Dataset statistics are given in Figure 8(a). Our SVG benchmark consists of 5 types of questions: "Semantic": 1,085 questions, "Color": 864 questions, "Shape": 1,217 questions, "Count": 819 questions, and "Reasoning": 355 questions.
Figure 9: Example questions for SVG and CAD programs, spanning the "Semantic", "Color", "Shape", "Count", and "Reasoning" categories. Due to the space limit, we omit the programs and only show the rendered images.

"Semantic" tests the global semantic meaning of the object represented by the SVG code, while the other four question types focus on detailed, local understanding of the object.
"Color" asks color-related questions about specific object parts, which evaluates the localization of the corresponding semantic part. "Count" is about counting the occurrences of certain patterns or semantic parts. "Shape" is about the shape of certain parts of the object, i.e., finding the geometric shapes that most closely resemble the object part. Figure 9 gives some SVG examples.

**CAD dataset statistics**. We collect 2,400 CAD programs from three different datasets [101, 105, 86]. The CAD dataset consists of 1,000 programs from DeepCAD [105], which form the _3D_ subset; 700 programs from the Fusion360 Reconstruction Dataset [101], which constitute the _3D complex_ subset; and 700 programs from SketchGraphs [86], which make up the _2D_ subset (as shown in Table 1). Unlike SVG, there is no generally established syntax for building CAD models from graphics code; each of our 3 CAD subsets follows a different language syntax, with varying levels of complexity. When benchmarking LLMs on CAD tasks, we include the domain-specific language syntax rules as part of the input prompt; the LLM must apply in-context learning of this syntax to answer the test questions. We then feed the renderings to GPT-4o and generate one semantic multiple-choice question (with 4 options) and its answer per program. This gives us 2,400 questions in total. We make sure that ground-truth answers are evenly distributed across the 4 options. Detailed dataset statistics are given in Figure 8(b). Some examples from our CAD dataset are provided in Figure 9.

**Experimental results and discussion**. We find that graphics program understanding, as we operationalize it here, is challenging. The average accuracy of all models (proprietary and open-source) is below 70% (ranging from 30% to 67%) on SVG and below 75% (ranging from 28% to 74%) on CAD. Of the two, SVG is harder for models to understand, as these 2D graphics contain richer semantics. Significant performance improvements are observed in line with scaling laws [113], as larger model sizes consistently lead to gains across various open-source LLMs. For example, Llama-3's score improves from 0.429 to 0.548 on SVG and from 0.633 to 0.694 on CAD as its size increases from 8B to 70B, and Qwen-1.5's from 0.376 to 0.499 on SVG and from 0.486 to 0.632 on CAD as its size increases from 7B to 110B. We also notice consistent improvements for the same model size and model family across generations: for example, Qwen-72B improves from 0.466 to 0.537, Llama-8B from 0.429 to 0.465, and Llama-70B from 0.548 to 0.574. The consistent performance gains on both SVG and CAD indicate that semantic understanding of symbolic graphics programs is a fundamental capability that is aligned with the scaling behavior of LLMs. Compared to the open-source LLMs that we consider here, proprietary models (the GPT and Claude families) outperform most of them by a large margin. Within the family of currently most popular GPT models, we see a 27% performance boost (from GPT-3.5's 0.498 to GPT-4's 0.633) when evaluating these GPT variants on our SGP-Bench. This result is aligned with the apparent improvement of reasoning ability in the GPT family, validating that SGP-Bench can distinguish different LLMs well. The overall best-performing model on both SVG and CAD is Claude 3.5 Sonnet.
The semantic understanding of graphics programs can be probed across different aspects, ranging from attribute-level investigations of "color" and "shape" to higher-level questions of "semantics", "counting", and "reasoning". Our benchmark is designed to cover these investigations for LLMs. Most LLMs perform well on color-related questions, followed by shape-related questions, with "count" and "semantic" questions showing progressively lower performance. This consistency is intriguing, as it resembles the coarse-to-fine structure of visual information processing. "Color" is the most visually salient feature, "shape" understanding requires a finer grasp of global and local structures, and "count" and "semantic" questions demand deeper comprehension and knowledge. The difficulty curve is evident, with most open-source models achieving roughly half the accuracy on semantic questions compared to color questions. For instance, the best-performing open-source model, Llama3.1-405B, achieves 37.6% accuracy on semantics and 81.6% accuracy on color grounding tasks. While open-source models struggle with "semantic" questions, ChatGPT performs quite well, with semantics being its second-best category after color grounding.

4.3 BENCHMARKING SEMANTIC CONSISTENCY

LLMs are exposed to vast amounts of online SVG data. To investigate whether their semantic understanding ability is due to potential data leakage, we propose a semantic consistency test that introduces global translations or rotations to SVG graphics, probing SE(2) invariance. Such spatial interventions greatly alter the code representation, as SVG graphics consist of lines and Bezier curves with anchor points, and SE(2) operations change all numerical values in the code. However, the SVG's semantics, such as shape or color, remain unaffected by this perturbation. This allows us to examine how LLMs behave when the same vector graphics are presented with drastic numerical changes in the code (see Appendix A.1). We perform nontrivial coordinate-level perturbations of the code, rather than using SVG transformation functions, to prevent shortcut learning by LLMs. Due to the nested structure of the tested SVG code, we visually inspect the perturbed renderings to ensure that the semantics remain unchanged after perturbation. If the model performs consistently under these perturbations, it suggests that its semantic understanding stems from a fundamental level of comprehension rather than trivial memorization.

Figure 10: The semantic consistency test assesses if semantic understanding remains the same when the program is perturbed without semantically changing its rendered content. Image perturbations result in significant code-level changes, as symbolic programs use absolute coordinates.

**Dataset specifics**. We use our SVG dataset to evaluate semantic consistency with respect to translation and rotation. For each SVG sample, we randomly choose 5 different translations (T) and rotations-plus-translations (SE(2), the harder case), resulting in visually small spatial shifts of the rendered object, i.e., nearly no change in semantics, but a **complete** change of the numerical values in the SVG code, given the shift of the SVG's anchor points and curves. We then evaluate all LLMs with the same question set as the SVG understanding benchmark, but with these perturbed code inputs.
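The following is a minimal sketch of such a coordinate-level SE(2) perturbation, assuming the anchor points of an SVG polygon have already been parsed into an array (the parsing itself, and the handling of Bezier control points, is elided). Note that the transform is applied to the raw coordinates rather than expressed as an SVG `transform` attribute, matching the shortcut-prevention described above.

```python
import numpy as np

def se2_perturb(points: np.ndarray, angle_deg: float, shift: tuple) -> np.ndarray:
    """Apply a global rotation + translation to an (n, 2) array of anchor points.
    Every numerical value changes, but the rendered shape is unchanged up to SE(2)."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T + np.asarray(shift)

# Example: the same triangle, with every coordinate in the code rewritten.
tri = np.array([[10.0, 10.0], [60.0, 10.0], [35.0, 50.0]])
perturbed = se2_perturb(tri, angle_deg=15.0, shift=(100.0, -20.0))
svg_points = " ".join(f"{x:.3f},{y:.3f}" for x, y in perturbed)
print(f'<polygon points="{svg_points}" />')
```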
**Evaluation**. We measure semantic consistency with two metrics: 1) the average question-answering accuracy over all perturbed SVG inputs, showing the overall accuracy once the samples are intervened on; and 2) a proposed "consistency score" that counts the average frequency of the most-selected answer to each question across each group of perturbed samples (samples translated or rotated from the same SVG program). This score indicates how consistent the LLMs are, independent of answer correctness.

_Open-source generic LLM_ results on SVG understanding (per question type), SVG invariance under translation (T) and SE(2) perturbations (average accuracy and consistency), and CAD understanding:

| Model | SVG Avg | Semantics | Count | Color | Shape | Reason | T Avg. | SE(2) Avg. | T Cons. | SE(2) Cons. | CAD Avg | 3D | 3D complex | 2D |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gemma-1.1-2B | 0.317 | 0.321 | 0.333 | 0.25 | 0.356 | 0.287 | 0.312 | 0.270 | 0.954 | 0.920 | 0.278 | 0.294 | 0.253 | 0.281 |
| Gemma-1.1-7B | 0.393 | 0.347 | 0.275 | 0.453 | 0.523 | 0.299 | 0.403 | 0.390 | 0.917 | 0.894 | 0.476 | 0.497 | 0.464 | 0.460 |
| InternLM2-7B | 0.382 | 0.279 | 0.324 | 0.570 | 0.431 | 0.299 | 0.381 | 0.381 | 0.788 | 0.772 | 0.480 | 0.551 | 0.446 | 0.411 |
| InternLM2-20B | 0.424 | 0.255 | 0.379 | 0.623 | 0.483 | 0.276 | 0.426 | 0.407 | 0.777 | 0.727 | 0.525 | 0.586 | 0.490 | 0.474 |
| InternLM2.5-7B | 0.421 | 0.273 | 0.317 | 0.598 | 0.515 | 0.282 | 0.419 | 0.404 | 0.809 | 0.778 | 0.562 | 0.639 | 0.506 | 0.509 |
| Yi-1.5-9B | 0.355 | 0.309 | 0.404 | 0.493 | 0.297 | 0.301 | 0.372 | 0.374 | 0.947 | **0.947** | 0.469 | 0.581 | 0.416 | 0.361 |
| Yi-1.5-34B | 0.443 | 0.308 | 0.364 | 0.644 | 0.523 | 0.234 | 0.446 | 0.423 | 0.845 | 0.819 | 0.583 | 0.649 | 0.563 | 0.510 |
| Aya-23-8B | 0.290 | 0.244 | 0.255 | 0.343 | 0.326 | 0.259 | 0.290 | 0.273 | 0.942 | 0.896 | 0.428 | 0.508 | 0.384 | 0.359 |
| Aya-23-35B | 0.442 | 0.307 | 0.354 | 0.648 | 0.511 | 0.318 | 0.451 | 0.434 | 0.898 | 0.857 | 0.488 | 0.551 | 0.429 | 0.457 |
| Command R-35B | 0.461 | 0.311 | 0.442 | 0.676 | 0.495 | 0.341 | 0.478 | 0.443 | 0.833 | 0.803 | 0.536 | 0.579 | 0.509 | 0.504 |
| Command R-104B | 0.500 | 0.339 | 0.449 | 0.727 | 0.565 | 0.341 | 0.521 | 0.477 | 0.917 | 0.875 | 0.583 | 0.634 | 0.570 | 0.524 |
| Qwen-1.5-7B | 0.376 | 0.226 | 0.317 | 0.563 | 0.471 | 0.234 | 0.371 | 0.382 | 0.792 | 0.780 | 0.486 | 0.560 | 0.426 | 0.443 |
| Qwen-1.5-32B | 0.494 | 0.307 | 0.501 | 0.713 | 0.552 | 0.310 | 0.512 | 0.492 | **0.972** | 0.938 | 0.575 | 0.664 | 0.567 | 0.456 |
| Qwen-1.5-72B | 0.466 | 0.299 | 0.319 | 0.698 | 0.598 | 0.265 | 0.474 | 0.461 | 0.883 | 0.854 | 0.600 | 0.658 | 0.590 | 0.526 |
| Qwen-1.5-110B | 0.499 | 0.324 | 0.431 | 0.734 | 0.560 | 0.332 | 0.486 | 0.470 | 0.839 | 0.821 | 0.632 | 0.711 | 0.607 | 0.546 |
| Qwen-2-72B | 0.537 | 0.373 | 0.426 | 0.770 | 0.630 | 0.372 | 0.520 | 0.491 | 0.869 | 0.852 | 0.692 | 0.753 | 0.669 | 0.630 |
| Mistral-7B v0.3 | 0.417 | 0.304 | 0.324 | 0.624 | 0.470 | 0.296 | 0.434 | 0.417 | 0.919 | 0.895 | 0.495 | 0.551 | 0.481 | 0.429 |
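Concretely, the consistency score reported in the table above can be computed as follows (our own illustrative implementation of the verbal definition): for each question, take the empirical frequency of the modal answer across the perturbed copies of the same program, then average over questions.

```python
from collections import Counter

def consistency_score(answers_per_question: list[list[str]]) -> float:
    """answers_per_question[q] holds the model's answers ('A'..'D') for the
    perturbed variants (e.g., 5 translations) of question q's source program."""
    scores = []
    for answers in answers_per_question:
        modal_count = Counter(answers).most_common(1)[0][1]
        scores.append(modal_count / len(answers))
    return sum(scores) / len(scores)

# Example: perfectly consistent on the first question, split on the second.
print(consistency_score([["B", "B", "B", "B", "B"],
                         ["A", "A", "C", "A", "D"]]))  # (1.0 + 0.6) / 2 = 0.8
```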
# BOND: ALIGNING LLMS WITH BEST-OF-N DISTILLATION

**Pier Giuseppe Sessa†, Robert Dadashi, Léonard Hussenot, Johan Ferret, Nino Vieillard, Alexandre Ramé, Bobak Shahriari, Sarah Perrin, Abram L. Friesen, Geoffrey Cideron, Sertan Girgin, Piotr Stanczyk, Andrea Michi, Danila Sinopalnikov, Sabela Ramos, Amélie Héliou, Aliaksei Severyn, Matt Hoffman, Nikola Momchev, Olivier Bachem**

Google DeepMind

ABSTRACT

Reinforcement learning from human feedback (RLHF) is a key driver of quality and safety in state-of-the-art large language models. Yet, a surprisingly simple and strong inference-time strategy is Best-of-N sampling, which selects the best generation among N candidates. In this paper, we propose Best-of-N Distillation (BOND), a novel RLHF algorithm that seeks to emulate Best-of-N without its significant computational overhead at inference time. Specifically, BOND is a distribution matching algorithm that forces the distribution of generations from the policy to get closer to the Best-of-N distribution. We use the Jeffreys divergence (a linear combination of forward and backward KL) to balance between mode-covering and mode-seeking behavior, and derive an iterative formulation that utilizes a moving anchor for efficiency. We demonstrate the effectiveness of our approach and several design choices through experiments on abstractive summarization and Gemma models.

1 INTRODUCTION

State-of-the-art large language models (LLMs) such as Gemini (Gemini Team, 2023; Reid et al., 2024) and GPT-4 (OpenAI, 2023) are generally trained in three stages. First, LLMs are pre-trained on large corpora of knowledge using next-token prediction (Radford et al., 2018; 2019). Second, the pre-trained models are fine-tuned to follow instructions via supervised fine-tuning (SFT) (Raffel et al., 2020; Wei et al., 2022). Lastly, reinforcement learning from human feedback (RLHF) (Christiano et al., 2017; Ziegler et al., 2019; Stiennon et al., 2020) is used to further increase the quality of generations. The RLHF step generally consists of learning a reward model (RM) (Ouyang et al., 2022) on human preferences and then optimizing the LLM to maximize predicted rewards using reinforcement learning algorithms.

**RLHF algorithms and their challenges.** Fine-tuning LLMs with reinforcement learning (RL) is challenging (Casper et al., 2023), notably since it can cause _forgetting_ (French, 1992) of pre-trained knowledge, and since loopholes in the RM (Clark & Amodei, 2016; Pan et al., 2022) can cause _reward hacking_ (Askell et al., 2021; Skalse et al., 2022). The standard strategy is to use policy-gradient methods (Williams, 1992) with KL regularization towards the SFT policy. These RL algorithms seek Pareto-optimal policies with high reward at low KL, to preserve the general capabilities of the original model and tackle misalignment (Ngo et al., 2022) concerns.

**Best-of-N sampling.** In practice, a surprisingly simple inference-time approach is often used to improve the quality of generations: Best-of-N sampling (Stiennon et al., 2020). It consists of drawing N candidate generations from the reference (typically, supervised fine-tuned) model and selecting the one with the highest reward according to the RM. This strategy empirically achieves excellent reward-KL trade-offs (Nakano et al., 2021; Gao et al., 2023; Touvron et al., 2023) but increases the computational cost by a factor of N.
**BOND.** In this paper, we propose BOND (Best-of-N Distillation), a novel RLHF algorithm that _learns_ a policy achieving the strong performance of Best-of-N sampling but, crucially, requiring only a single sample at inference time, as depicted in Figure 1. Our key idea is to cast the alignment of the policy as a distribution matching problem, where we fine-tune the policy to emulate the Best-of-N distribution. To achieve this, we first derive an analytical expression for the Best-of-N distribution. This allows us to consider and optimize different divergence metrics. We first show how to minimize the _forward_ KL divergence using samples from the Best-of-N strategy, leading to a standard imitation learning setup with a mode-covering behavior. We also show how to minimize the _backward_ KL, leading to a new form of quantile-based advantage, which does not depend on the reward scale and corresponds to a mode-seeking behavior. Then, we propose to minimize a linear combination of forward and backward KL, also known as the _Jeffreys divergence_, which retains the best of both approaches. Furthermore, to optimize performance while keeping a reduced sample complexity, we propose an _iterative_ BOND approach, which consists of iteratively distilling the Best-of-N of a moving anchor policy. Finally, based on the aforementioned ideas, we propose J-BOND (J for Jeffreys), a novel, stable, efficient, and practical RLHF algorithm to align LLMs.

† Correspondence to: Pier Giuseppe Sessa <piergs@google.com>

Figure 1: Best-of-N is an _inference-time_ strategy that selects the best generation among N candidates from a reference LLM policy, improving quality at the cost of a large computational overhead (the need to sample from, and score with, the model N times). In contrast, the proposed BOND approach aims at obtaining a fine-tuned policy that can directly sample a Best-of-N-quality generation. This inherits the quality of Best-of-N sampling while requiring a single sample at inference time. We achieve this by distilling the Best-of-N strategy into the policy via online _distribution matching_.

**Experiments.** We first demonstrate the effectiveness of BOND and of our design choices on the abstractive summarization XSum (Narayan et al., 2018) task. Then, in Section 6, we apply J-BOND to align Gemma (Gemma Team, 2024) policies. J-BOND does not require committing to a specific regularization strength; it continuously improves the reward, displaying stable optimization and a better reward/KL trade-off compared to standard RL algorithms. This translates into higher quality and improved scores on popular real-world benchmarks.

2 PROBLEM SETUP

We consider an LLM based on the transformer (Vaswani et al., 2017) architecture, defining a policy π(x, ·) by auto-regressively generating token sequences y from the prompt x. Given a pre-trained and typically supervised fine-tuned reference policy π_ref, we seek to further align it to human preferences. To achieve this, throughout the rest of the paper we assume access to a reward model (RM), denoted r(·), trained to reflect human preferences.

**Standard RLHF.** Most RL algorithms optimize a linear combination of the expected reward and a KL divergence between the current and reference policy:

$$\pi_{\mathrm{RL}} = \arg\max_{\pi} \; \mathbb{E}_{\pi}[r(y)] - \beta_{\mathrm{RL}} \cdot \mathrm{KL}(\pi \,\|\, \pi_{\mathrm{ref}}), \tag{1}$$

with regularization strength β_RL ≥ 0.
This KL regularization forces the policy to remain close to its initialization π_ref (Geist et al., 2019; Lazaridou et al., 2020), reducing forgetting (French, 1992) and reward hacking (Skalse et al., 2022). Equation (1) is usually optimized with online algorithms, as they perform better than their offline counterparts (Tang et al., 2024). Moreover, simple methods have demonstrated the best results; e.g., REINFORCE (Williams, 1992) with a sampled baseline for variance reduction (Li et al., 2023; Ahmadian et al., 2024) outperforms PPO (Schulman et al., 2017).

**Best-of-N.** A complementary alignment strategy is Best-of-N, an inference-time strategy that involves sampling multiple times from π_ref and selecting the generation with the highest reward according to the RM r. In contrast to RLHF strategies, Best-of-N does not fine-tune the weights of the LLM, but instead modifies the inference procedure. Best-of-N was empirically shown to be efficient (Touvron et al., 2023) when looking at reward/KL trade-offs, and comes with theoretical guarantees (Qiping Yang et al., 2024) in terms of Pareto-optimality. Unfortunately, Best-of-N comes at a significantly higher inference cost, which increases linearly with N, since producing N generations is (in general) N times more costly than sampling a single one.

Motivated by the above considerations, we propose a novel alignment method which we name BOND, for Best-of-N Distillation. The goal of BOND is to _distill the Best-of-N strategy into the policy_. This allows the policy to reach the strong performance of Best-of-N sampling while requiring only _a single sample_ at inference time. We outline our overall approach in the next section.

3 THE BOND APPROACH

We formulate the BOND approach in two main steps. First, we derive an analytical expression for the Best-of-N distribution (Section 3.1). Second, using the derived expression, we phrase the problem as a _distribution matching_ problem (Section 3.2), i.e., we want to steer the policy closer to the Best-of-N distribution. In Section 3.3, we draw insightful connections between BOND and standard RLHF.

3.1 THE BEST-OF-N DISTRIBUTION

In this section, we derive the exact analytical distribution of Best-of-N sampling and study its properties. For simplicity, we drop the context x from all notation without loss of generality and assume that the reward r(y) induces a strict ordering on all generations y.¹ We can state the following main theorem (proof in Appendix A.1).

**Theorem 1.** _For any generation y, let_

$$p_{<}(y) = \mathbb{P}_{y' \sim \pi_{\mathrm{ref}}}[r(y') < r(y)] \tag{2}$$

_denote the probability that a random generation y′ from π_ref is strictly worse than y, and let_

$$p_{\leq}(y) = \mathbb{P}_{y' \sim \pi_{\mathrm{ref}}}[r(y') \leq r(y)] \tag{3}$$

_denote the probability that y′ is not better than y (thus including the equality case). Then, the probability that y is the output of Best-of-N sampling is given by_

$$\pi_{\mathrm{BoN}}(y) = \pi_{\mathrm{ref}}(y) \times \underbrace{p_{\leq}(y)^{N-1}}_{(\mathrm{A})} \times \underbrace{\sum_{i=1}^{N} \left(\frac{p_{<}(y)}{p_{\leq}(y)}\right)^{i-1}}_{(\mathrm{B})}. \tag{4}$$

¹ To distinguish between generations with the same reward, ties can be broken by any arbitrary strict ordering.
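As a quick sanity check of Theorem 1 (our own illustrative script, not from the paper), the following sketch compares the analytical π_BoN on a small discrete distribution against a Monte-Carlo simulation of Best-of-N sampling:

```python
import numpy as np

rng = np.random.default_rng(0)
p_ref = np.array([0.5, 0.3, 0.2])   # pi_ref over 3 possible generations
r = np.array([1.0, 2.0, 3.0])       # distinct rewards => strict ordering

# Analytical pi_BoN via Theorem 1, terms (A) and (B).
N = 4
p_lt = np.array([p_ref[r < r[y]].sum() for y in range(3)])
p_le = p_lt + p_ref
term_B = np.array([sum(rho**i for i in range(N)) for rho in p_lt / p_le])
pi_bon = p_ref * p_le**(N - 1) * term_B

# Monte-Carlo simulation: sample N candidates, keep the highest-reward one.
samples = rng.choice(3, size=(200_000, N), p=p_ref)
winners = samples[np.arange(len(samples)), r[samples].argmax(axis=1)]
print(pi_bon, np.bincount(winners) / len(winners))  # the two should agree
```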
**Interpretation.** Theorem 1 provides an intuitive explanation of the behavior of Best-of-N sampling: it essentially reweights the original sampling distribution π_ref by the multiplicative terms (A) and (B). The term (A) is a penalty, exponential in N, based on the fraction of generations (for the same prompt) that are worse than or equal to the considered generation y. Intuitively, this ensures that we sample exponentially less often from bad generations as N increases. The term (B) is an additional correction factor due to the potential for collisions among generations. Importantly, it is at most linear in N, as it is always bounded within [1, N]:

$$1 \;\leq\; 1 + \sum_{i=2}^{N} \left(\frac{p_{<}(y)}{p_{\leq}(y)}\right)^{i-1} \;=\; \sum_{i=1}^{N} \left(\frac{p_{<}(y)}{p_{\leq}(y)}\right)^{i-1} \;\leq\; \sum_{i=1}^{N} 1 \;=\; N. \tag{5}$$

It achieves its minimum of 1 at the worst generation y−, since p_<(y−) = 0 by definition. This is not surprising, as we would need to sample y− exactly N times in a row, which corresponds to π_BoN(y−) = π_ref(y−)^N (note that p_≤(y−) = π_ref(y−)). In contrast, if individual generations y have low likelihood and are good, then p_<(y) is almost p_≤(y) and term (B) is close to N. Intuitively, this corresponds to the case where sampling a generation y multiple times is unlikely. In the extreme case when π_ref is a continuous distribution, term (B) is constant and equal to N (see Appendix A.2).

3.2 THE BOND OBJECTIVE

The analytical characterization of the Best-of-N distribution allows us to formulate BOND as a distribution matching problem. That is, we want to solve the objective

$$\pi_{\mathrm{BOND}} = \arg\min_{\pi \in \Pi} \; D(\pi \,\|\, \pi_{\mathrm{BoN}}), \tag{6}$$

where D(· ∥ ·) is a divergence metric steering the training policy π towards π_BoN. For this, a toolbox of possible divergences exists in the literature, including, e.g., the forward and backward KL (Kullback, 1959). Moreover, we can employ existing distribution matching techniques to estimate D from online and offline samples. We defer the choice of divergences and the resulting BOND algorithms to Section 4.

3.3 CONNECTION WITH STANDARD RLHF

In this section, we draw important connections between the two seemingly different objectives of standard RLHF (Equation (1)) and BOND (Equation (6)). It is well known (see, e.g., Vieillard et al., 2020; Rafailov et al., 2023) that the policy maximizing the RLHF objective from Equation (1) is

$$\pi_{\mathrm{RL}}(y) \;\propto\; \pi_{\mathrm{ref}}(y)\, \exp\left(\frac{1}{\beta_{\mathrm{RL}}}\, r(y)\right). \tag{7}$$

From the derived expression of π_BoN in Theorem 1, we see that the Best-of-N sampling distribution coincides with the optimal solution of standard RLHF when using the specific BOND reward

$$r_{\mathrm{BOND}}(y) = \underbrace{\log p_{\leq}(y)}_{(\mathrm{A})} \;+\; \frac{1}{N-1} \underbrace{\log \sum_{i=1}^{N} \left(\frac{p_{<}(y)}{p_{\leq}(y)}\right)^{i-1}}_{(\mathrm{B})} \tag{8}$$

and the specific regularization strength β_BOND = 1/(N − 1). The term (B) corresponds to the correction factor in Theorem 1, and is bounded in [0, (log N)/(N − 1)] for all generations y; term (A) instead lies in (−∞, 0]. This provides two interesting insights into Best-of-N sampling:

1. Best-of-N sampling corresponds to the solution of a standard KL-regularized RLHF problem, where the choice of N determines the level of KL regularization.
2. Best-of-N sampling corresponds to optimizing the expected log reward quantile, i.e., the log-likelihood that the generation has a larger reward than a random sample from the reference distribution.
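For completeness, the correspondence claimed in Section 3.3 can be checked in one line by substituting (8) into (7); this short derivation is ours, but it follows directly from Equations (4), (7), and (8):

```latex
\pi_{\mathrm{ref}}(y)\,\exp\!\Big(\tfrac{1}{\beta_{\mathrm{BOND}}}\, r_{\mathrm{BOND}}(y)\Big)
= \pi_{\mathrm{ref}}(y)\,\exp\!\Big((N-1)\log p_{\leq}(y)
  + \log \sum_{i=1}^{N}\Big(\tfrac{p_{<}(y)}{p_{\leq}(y)}\Big)^{i-1}\Big)
= \pi_{\mathrm{ref}}(y)\, p_{\leq}(y)^{N-1}
  \sum_{i=1}^{N}\Big(\tfrac{p_{<}(y)}{p_{\leq}(y)}\Big)^{i-1}
= \pi_{\mathrm{BoN}}(y),
```

using β_BOND = 1/(N − 1), which is exactly the Best-of-N distribution of Theorem 1.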
Interestingly, due to the concavity of the logarithm, r_BOND(y) strongly encourages the model to avoid bad generations rather than encouraging it to generate good ones. Moreover, r_BOND(y) is _invariant to monotone transformations of the reward_ r(·), since it depends only on the rank among the generations. We conjecture that both these features make the BOND reward r_BOND(y) more robust to reward hacking compared to standard RLHF. The connection to RLHF also inspires the approach proposed in this manuscript: if we can compute the BOND reward, or equivalently the Best-of-N distribution π_BoN, then we can steer the policy towards Best-of-N via distribution matching. In the next section we explore different algorithms to tackle the main underlying challenges.

4 BOND CHALLENGES AND ALGORITHMS

Implementing the BOND approach raises the following three challenges: _(1)_ how to estimate the reward quantiles, _(2)_ which divergence metric is appropriate, and _(3)_ how to choose the hyperparameter N, the number of sampled generations in Best-of-N. We discuss and address these challenges in the next three subsections.

4.1 MONTE-CARLO QUANTILE ESTIMATION

One key difficulty in estimating the π_BoN distribution is that we need to estimate the quantile

$$p_{\leq}(y) = \mathbb{P}_{y' \sim \pi_{\mathrm{ref}}}[r(y') \leq r(y)] \tag{9}$$

of a given generation y. The quantile p_≤(y) measures the quality of y compared to generations from π_ref conditioned on the same prompt (recall that we have suppressed the conditioning on x in our notation). A very simple but effective quantile estimation method is _Monte-Carlo sampling_: sample k generations from π_ref and form the empirical estimate

$$\hat{p}_{\leq}(y) = \frac{1}{k} \sum_{i=1}^{k} \mathbb{I}\{r(y_i) \leq r(y)\}. \tag{10}$$

We found this to be very effective in our experiments, even with a limited number of samples. In principle, though, one could also use alternative approaches, e.g., training a learned quantile model (as we explore in Appendix B.2).
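A minimal sketch of the Monte-Carlo quantile estimate in Equation (10) (illustrative code, with rewards standing in for scored generations):

```python
import numpy as np

def mc_quantile(r_y: float, ref_rewards: np.ndarray) -> float:
    """Empirical estimate of p_<=(y): the fraction of k reference
    generations (here, their rewards) scoring no higher than y."""
    return float(np.mean(ref_rewards <= r_y))

# Example: k = 8 reference samples for the same prompt.
ref_rewards = np.array([0.1, 0.4, 0.35, 0.9, 0.2, 0.55, 0.6, 0.3])
print(mc_quantile(0.5, ref_rewards))  # 0.625: y beats 5 of the 8 samples
```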
4.2 JEFFREYS DIVERGENCE AS A ROBUST OBJECTIVE

The choice of the divergence metric used in BOND is of crucial importance: different divergences can steer the policy to very different solutions. Here, we propose the _Jeffreys divergence_ as a robust distribution matching objective. The (generalized) Jeffreys divergence (Jeffreys, 1946) between two distributions is defined as

$$\mathrm{J}^{\beta}_{\mathrm{effreys}}(p \,\|\, q) := (1-\beta)\cdot \underbrace{\mathrm{KL}(q \,\|\, p)}_{\text{Forward KL}} \;+\; \beta \cdot \underbrace{\mathrm{KL}(p \,\|\, q)}_{\text{Backward KL}}. \tag{11}$$

It is a weighted average (with weight β ∈ [0, 1]) of the forward and backward KL divergences. Notably, when fine-tuning policy p, the forward KL(q ∥ p) encourages generations likely under q to also be likely under p, thus encouraging a _mode-covering_ behavior. Instead, the reverse KL(p ∥ q) is well known to have a _mode-seeking_ effect, steering policy p to produce generations that have high likelihood according to q (Agarwal et al., 2024). While the forward KL may produce over-spread distributions, the backward KL can lead to policy and entropy collapse. We empirically show that the Jeffreys divergence inherits the best of both divergences, producing better-aligned policies.

In the context of BOND, this translates into minimizing the divergence J^β_effreys(π ∥ π_BoN), which we can estimate using samples from the training policy π and reference policy π_ref as follows.

**Estimation of the forward KL.** The forward KL, defined as

$$\mathrm{KL}(\pi_{\mathrm{BoN}} \,\|\, \pi) = \mathbb{E}_{y \sim \pi_{\mathrm{BoN}}}[\log \pi_{\mathrm{BoN}}(y) - \log \pi(y)], \tag{12}$$

can be estimated directly by drawing samples from π_BoN (i.e., sampling N times from π_ref and selecting the best one), and can be seen as a supervised fine-tuning loss on the Best-of-N samples:

$$\nabla_{\pi} \mathrm{KL}(\pi_{\mathrm{BoN}} \,\|\, \pi) = -\,\mathbb{E}_{y \sim \pi_{\mathrm{BoN}}}\left[\nabla \log \pi(y)\right]. \tag{13}$$

**Estimation of the backward KL.** The backward KL, defined as

$$\mathrm{KL}(\pi \,\|\, \pi_{\mathrm{BoN}}) = \mathbb{E}_{y \sim \pi}[\log \pi(y) - \log \pi_{\mathrm{BoN}}(y)], \tag{14}$$

can be estimated from the policy samples (note the expectation w.r.t. π) and their estimated log-likelihood under π_BoN. In particular, by the analogies drawn in Section 3.3, we show (in Appendix A.3) that its gradient coincides with a policy gradient (as used, e.g., by REINFORCE (Williams, 1992) in standard RLHF):

$$\nabla_{\pi} \mathrm{KL}(\pi \,\|\, \pi_{\mathrm{BoN}}) = -(N-1)\,\mathbb{E}_{y \sim \pi}\Big[\nabla_{\pi} \log \pi(y)\Big(r_{\mathrm{BOND}}(y) - \beta_{\mathrm{BOND}}\big(\log \pi(y) - \log \pi_{\mathrm{ref}}(y)\big)\Big)\Big], \tag{15}$$

with the equivalent reward r_BOND and regularization β_BOND defined in Section 3.3. Note that r_BOND(y) depends on the true unknown quantile p_≤(y) and on the correction factor (B) defined in Equation (8). In practice, we substitute the true quantile by its estimate, and we observed that the correction factor does not play a significant role; thus, we use r_BOND(y) = p̂_≤(y). Moreover, to reduce the resulting variance, we use a policy gradient baseline (Sutton & Barto, 1998), computed as the average return of the other generations in the batch. Thus, the overall J^β_effreys loss is a linear weighted combination of a supervised fine-tuning loss and a policy gradient loss.
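The following is an illustrative PyTorch-style sketch (our own, with assumed tensor shapes and helper names) of how this combined loss can be assembled: a supervised fine-tuning term on Best-of-N samples for the forward KL, and a REINFORCE-style term with the quantile-based reward for the backward KL. Here `beta` is the Jeffreys weight and `beta_bond` is 1/(N − 1).

```python
import torch

def jeffreys_bond_loss(logp_bon, logp_pi, logp_ref, quantile_hat, beta, beta_bond):
    """logp_bon: log pi(y) on Best-of-N samples (forward-KL / SFT term).
    logp_pi, logp_ref: log-probs of on-policy samples under pi and pi_ref.
    quantile_hat: Monte-Carlo estimates p_hat_<=(y) for the on-policy samples."""
    # Forward KL gradient (Eq. 13): maximize likelihood of Best-of-N samples.
    sft_loss = -logp_bon.mean()

    # Backward KL gradient (Eq. 15): policy gradient with the quantile reward,
    # a KL penalty towards pi_ref, and a batch-mean baseline.
    reward = quantile_hat - beta_bond * (logp_pi - logp_ref).detach()
    advantage = reward - reward.mean()
    pg_loss = -(logp_pi * advantage.detach()).mean()

    return (1 - beta) * sft_loss + beta * pg_loss
```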
# UNIVERSAL GENERALIZATION GUARANTEES FOR WASSERSTEIN DISTRIBUTIONALLY ROBUST MODELS

**Tam Le, Jérôme Malick**
Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, Grenoble, 38000, France

ABSTRACT

Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models based on the Wasserstein distance enjoy generalization guarantees that do not suffer from the curse of dimensionality. However, these results are either approximate, obtained in specific cases, or based on assumptions that are difficult to verify in practice. In contrast, we establish exact generalization guarantees that cover a wide range of cases, with arbitrary transport costs and parametric loss functions, including deep learning objectives with nonsmooth activations. We complete our analysis with an excess bound on the robust objective and an extension to Wasserstein robust models with entropic regularizations.

1 INTRODUCTION

1.1 WASSERSTEIN ROBUSTNESS: MODELS AND GENERALIZATION

Machine learning models are challenged in practice by many obstacles, such as biases in data, adversarial attacks, or data shifts between training and deployment. Towards more resilient and reliable models, distributionally robust optimization has emerged as an attractive paradigm, where training no longer relies on minimizing the empirical risk but rather on an optimization problem that takes into account potential perturbations of the data distribution; see, e.g., the review articles Kuhn et al. (2019); Blanchet et al. (2021a). More specifically, the approach consists in minimizing the worst-case risk among all distributions in a neighborhood of the empirical data distribution. A natural way (Mohajerin Esfahani & Kuhn, 2018) to define such a neighborhood is to use the optimal transport distance, called the Wasserstein distance (Villani, 2009). Between two suitable distributions Q and Q′ on a sample space Ξ, we may define the optimal transport cost over all couplings π on Ξ × Ξ having Q and Q′ as marginals:

$$W_c(Q, Q') = \inf_{\substack{\pi \in \mathcal{P}(\Xi \times \Xi) \\ [\pi]_1 = Q,\; [\pi]_2 = Q'}} \mathbb{E}_{(\xi, \zeta) \sim \pi}[c(\xi, \zeta)], \tag{1}$$

where c : Ξ × Ξ → ℝ is a non-negative cost function. When c is the power p ≥ 1 of a distance on Ξ, this corresponds to the p-Wasserstein distance. For a class of loss functions F, the Wasserstein distributionally robust counterpart of standard empirical risk minimization (ERM) then writes

$$\min_{f \in \mathcal{F}} \; \sup_{Q \in \mathcal{P}(\Xi),\; W_c(\widehat{P}_n, Q) \leq \rho} \mathbb{E}_{\xi \sim Q}[f(\xi)], \tag{2}$$

for a chosen radius ρ of the Wasserstein ball centered at the empirical data distribution, denoted P̂_n. This procedure is often referred to as Wasserstein Distributionally Robust Optimization (WDRO). In the degenerate case ρ = 0, we have Q = P̂_n and (2) boils down to ERM. If ρ > 0, the training captures data uncertainty and provides more resilient learning models; see, e.g., the discussions and illustrations in Shafieezadeh-Abadeh et al. (2015); Sinha et al. (2018); Zhao & Guan (2018); Kwon et al. (2020); Li et al. (2020); Taskesen et al. (2021); Gao et al. (2022); Arrigo et al. (2022); Belbasi et al. (2023).
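On a finite sample space, the inner supremum in (2) is a small linear program over couplings, which gives a concrete way to see the robust risk at work. Here is a minimal sketch (our own illustration; restricting the perturbed distribution to a fixed finite support in general only lower-bounds the true supremum):

```python
import numpy as np
from scipy.optimize import linprog

# Finite support: xi[i] are empirical points, zeta[j] are candidate points.
xi = np.array([0.0, 1.0, 2.0]); p_hat = np.array([1/3, 1/3, 1/3])
zeta = np.linspace(-1.0, 3.0, 41)
f = zeta**2                                  # loss evaluated on the support
cost = (xi[:, None] - zeta[None, :])**2      # transport cost c = |.|^2
rho = 0.1

n, m = cost.shape
# Variables: coupling pi[i, j] >= 0, flattened row-major.
# Maximize sum_ij pi[i,j] f[j]  <=>  minimize the negative.
A_eq = np.kron(np.eye(n), np.ones(m))        # row sums: sum_j pi[i,j] = p_hat[i]
A_ub = cost.reshape(1, -1)                   # transport budget <= rho
res = linprog(-np.tile(f, n), A_ub=A_ub, b_ub=[rho],
              A_eq=A_eq, b_eq=p_hat, bounds=(0, None))
print("empirical risk:", p_hat @ xi**2, " robust risk:", -res.fun)
```

The printed robust risk exceeds the empirical risk, since the adversary is allowed to move probability mass within the transport budget ρ towards high-loss points.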
To support theoretically the modeling versatility and the practical success of these robust models, some statistical guarantees have been proposed in the literature. For a population distribution P, i.i.d. samples ξ₁, …, ξ_n drawn from P, and the associated empirical distribution P̂_n := (1/n) Σ_{i=1}^{n} δ_{ξ_i}, the best concentration results for the Wasserstein distance (Fournier & Guillin, 2015) give that if the radius ρ is large enough, then the Wasserstein ball around P̂_n contains the true distribution P with high probability, which in turn directly gives (see Mohajerin Esfahani & Kuhn (2018)) a generalization bound of the form

$$\sup_{Q \in \mathcal{P}(\Xi),\; W_c(\widehat{P}_n, Q) \leq \rho} \mathbb{E}_{\xi \sim Q}[f(\xi)] \;\geq\; \mathbb{E}_{\xi \sim P}[f(\xi)]. \tag{3}$$

This bound is exact in the sense that it introduces no approximation term between the true risk and the robust risk, unlike standard generalization bounds for ERM (Vapnik (1999); Bartlett & Mendelson (2006)). This property (3) is specific to WDRO and highlights its potential to give more resilient models: the left-hand side, which is the quantity that we compute from data and optimize during training, provides a control on the right-hand side, which is the idealized population risk.

In order to obtain such an attractive guarantee, Mohajerin Esfahani & Kuhn (2018) need to take a large radius ρ. Indeed, the direct application of the concentration results of Fournier & Guillin (2015) requires a radius scaling as O(1/n^{1/d}), where d is the data dimension. Due to the exponential dependence, in high dimension this value is almost constant with respect to n, suggesting that the exact bound (3) can hold only for conservative values of ρ. Recent works have proposed various statistical guarantees for WDRO by establishing generalization bounds that do not suffer from the above curse of dimensionality; we further discuss them in the related work section, Section 1.3. These results generally feature a radius ρ scaling as O(1/√n), which is the standard rate in ERM generalization bounds. Yet, no existing result on robust models precisely retrieves the original exact bound (3) with the 1/√n rate in a general setting.

1.2 CONTRIBUTIONS AND OUTLINE

In this paper, we establish exact generalization guarantees of the form (3) under general assumptions that cover many machine learning situations. Our results apply to any kind of data lying in a metric space (e.g., classification and regression tasks with mixed features) and to general classes of continuous loss functions (e.g., from standard regression tasks to deep learning models), as long as standard compactness conditions are satisfied. For instance, our results cover the nonsmooth objectives that are particularly common in deep learning, with the use of the ReLU activation function, the max-pooling operator, or optimization layers.

To avoid using the concentration results of Fournier & Guillin (2015), which involve a radius scaling as O(1/n^{1/d}), we develop a novel optimization-based proof, directly tackling the nonsmoothness of the robust objective function (2) with tools from variational analysis (Clarke, 1990; Rockafellar & Wets, 1998; Aliprantis & Border, 2006). We thus obtain general results with ρ scaling as O(1/√n), capturing all possible nonsmoothness and coinciding with the previous study of robust linear models (Shafieezadeh-Abadeh et al., 2019).
Moreover, our approach is systematic enough to (i) provide estimates of the excess errors, quantifying by how much the robust objective may exceed the true risk, and (ii) extend to recent versions of Wasserstein/Sinkhorn distributionally robust problems that involve (double) regularizations (Azizian et al., 2023b; Wang et al., 2023). We thus complete the only existing analysis of regularized WDRO (Azizian et al., 2023a) by obtaining generalization results for double regularization (Azizian et al., 2023b) when dealing with arbitrary costs and nonsmooth objectives.

The paper is structured as follows. First, Section 2 introduces and illustrates the setting of this work. Then Section 3 presents and discusses the main results: the generalization guarantees (Theorem 3.1 and Theorem 3.2), the excess risk bounds (Proposition 3.1 and Proposition 3.3), and the specific case of linear models (Section 3.2). This section ends with Section 3.4, discussing the limitations of our study and potential extensions. Finally, Section 4 highlights our proof techniques, combining a classical concentration lemma with advanced nonsmooth analysis.

1.3 RELATED WORK

The majority of papers studying generalization bounds of Wasserstein distributionally robust models establish _approximate_ generalization bounds. These approximate bounds introduce vanishing terms depending on n and ρ, which embody the bias of WDRO. One of the first papers on such approximate bounds is Lee & Raginsky (2018); important results in this direction include Blanchet et al. (2021b); Blanchet & Shapiro (2023) on asymptotic results for smooth losses, and Chen & Paschalidis (2018) on non-asymptotic results for linear models and smooth loss functions. Let us also mention Yang & Gao (2022), which deals with the 0-1 loss, and Gao (2022), which focuses on Wasserstein-1 uncertainty and the connection with Lipschitz-norm regularization. In this paper, we rather focus on _exact_ bounds of the form (3), which are out of reach of ERM-based models and thus capture the essence of WDRO.

The literature on exact bounds is scarcer than the one on approximate bounds and significantly different in terms of proof techniques. Let us mention Shafieezadeh-Abadeh et al. (2019), which establishes exact guarantees for linear regression models, and Gao (2022), which proposes tighter results for linear regression and Wasserstein-1 uncertainty. The closest work to our paper is Azizian et al. (2023a), which initiates a general study of exact bounds. There, the authors establish generalization results similar to ours, namely exact bounds (3) in a regime where ρ > O(1/√n). In sharp contrast with our work, the results of Azizian et al. (2023a) rely on restrictive assumptions to overcome the nonsmoothness of the robust objective: the squared norm for the cost c, a Gaussian reference distribution, additional growth conditions, and abstract compactness conditions. We further compare the settings and the results in Section 2, Section 3, and the supplemental material. In our work, we deal directly with nonsmoothness, thanks to tools from nonsmooth analysis, and thus we are able to alleviate extra assumptions and capture nonsmooth losses. The only other work regarding nonsmooth objectives is An & Gao (2021), which derives results for piecewise smooth losses, at the price of abstract approximating constants.
We underline that none of the existing results properly covers nonsmooth losses, in particular deep learning objectives with nonsmooth activations. Finally, let us mention that many works study generalization guarantees for other distributionally robust models, involving different uncertainty quantifications. For instance, Zeng & Lam (2022) studies nonparametric families and divergence-based ambiguity, and Bennouna et al. (2023) considers deep learning models with ambiguity sets that combine KL divergence and adversarial corruptions. Though duality is always an important tool, in our framework we face the specific difficulty of dealing with Wasserstein distances, so that both the technicalities and the results of our paper are essentially different and disjoint from these works.

1.4 NOTATIONS

Throughout the paper, (Ξ, d) is a metric space, where d is a distance, F is a family of loss functions f : Ξ → ℝ, and c : Ξ × Ξ → ℝ is a cost function.

**On probability spaces.** We denote the space of probability measures on Ξ by P(Ξ). For all π ∈ P(Ξ × Ξ) and i ∈ {1, 2}, we denote the i-th marginal of π by [π]_i. We denote the Dirac mass at ξ ∈ Ξ by δ_ξ. Given a measurable function g : Ξ → ℝ, we denote the expectation of g with respect to Q ∈ P(Ξ) by E_{ξ∼Q}[g(ξ)], and we may also use the shorthand E_Q[g].

**On function spaces.** For a function f : Ξ → ℝ, we denote the uniform norm by ∥f∥_∞ := sup_{ξ∈Ξ} |f(ξ)|. By extension, we use the notation ∥F∥_∞ := sup_{f∈F} ∥f∥_∞. Whenever well-defined, we denote the set of maximizers of f on Ξ by argmax_Ξ f := {ζ ∈ Ξ : f(ζ) = max_{ξ∈Ξ} f(ξ)}. We say f is _Lipschitz_ with constant L > 0 if for all ξ, ζ ∈ Ξ, |f(ξ) − f(ζ)| ≤ L d(ξ, ζ). For a function φ, we denote by ∂⁺_λ φ its right-sided derivative with respect to λ ∈ ℝ, and by ∂_λ φ its derivative, whenever well-defined.

2 ASSUMPTIONS AND EXAMPLES

In this section, we present the general framework and illustrate it with standard examples. We make the following assumptions on the sample space Ξ, the space of loss functions F, and the cost c.

**Assumption 2.1.**
1. (Ξ, d) _is compact._
2. c _is jointly continuous with respect to_ d_, non-negative, and_ c(ξ, ζ) = 0 _if and only if_ ξ = ζ.
3. _Every_ f ∈ F _is continuous and_ (F, ∥·∥_∞) _is compact. Furthermore, if_ N(t, F, ∥·∥_∞) _is the t-packing number of_ F,¹ _then Dudley's entropy integral of_ F,

$$I_{\mathcal{F}} := \int_0^{\infty} \sqrt{\log N(t, \mathcal{F}, \|\cdot\|_{\infty})}\; dt,$$

_is finite._

¹ The maximal number of functions in F that are at least at a distance t from each other.

Our setting encompasses many machine learning scenarios, with parametric models, general loss functions, and general transport costs, as illustrated in the two paragraphs below. In Section 3.4, we will also come back to these assumptions to discuss their reach and limitations. Before this, let us underline that this set of assumptions is quite general and allows us to conduct a unified study, relieving several restrictions found in previous works. In particular, we mention that the results of the closest work, Azizian et al. (2023a), require convexity of Ξ, differentiability of the losses f ∈ F, restriction to the squared Euclidean distance, together with a strong² structural property on F (Azizian et al., 2023a, Assumption 5) aimed at overcoming the nonsmoothness of the WDRO objective.

² We show in Proposition F.5 in the appendix that the compactness assumptions (Azizian et al., 2023a, Assumption 5) hide strong conditions on the maximizers.
In our sketch of proof in Section 4, we explain how we deal directly with nonsmoothness.

**Parametric models and loss functions.** Our setting covers a wide range of machine learning models. Consider a parametric family F = {f(θ, ·) : θ ∈ Θ}, where the parameter space Θ ⊂ ℝ^p is compact and the loss function f : Θ × Ξ → ℝ is jointly Lipschitz continuous. Since Ξ is compact, such a family is compact with respect to ∥·∥_∞ and I_F is finite, proportional to √p. This situation covers regression models, k-means clustering, and neural networks. For example, least-squares regression

$$f(\theta, (x, y)) = (\langle \theta, x \rangle - y)^2, \qquad \Xi \subset \mathbb{R}^m \times \mathbb{R},$$

logistic regression

$$f(\theta, (x, y)) = \log\left(1 + e^{-y \langle \theta, x \rangle}\right), \qquad \Xi \subset \mathbb{R}^m \times \{-1, 1\},$$

and support vector machines with the hinge loss

$$f(\theta, (x, y)) = \max\{0,\; 1 - y \langle \theta, x \rangle\}, \qquad \Xi \subset \mathbb{R}^m \times \{-1, 1\}.$$

Note that the latter is not differentiable, due to the max term. The k-means model also introduces a non-differentiable loss function:

$$f(\theta, x) = \min_{i \in \{1, \dots, K\}} \|\theta_i - x\|_2^2, \qquad \Theta \subset \mathbb{R}^{K \times m}, \quad \Xi \subset \mathbb{R}^m.$$

Finally, most deep learning models fall within our setting. Indeed, they involve loss functions of the form

$$f(\theta, (x, y)) = \ell(h(\theta, x), y),$$

where ℓ is a dissimilarity measure and h is a parameterized prediction function, built as a composition of affine transformations (whose parameters are trained) with activation functions (see, e.g., Krizhevsky et al. (2012); LeCun et al. (2015); Redmon et al. (2016)). Our setting is general enough to encompass all continuous activation functions, even non-differentiable ones (such as ReLU = max(0, ·)), as well as other nonsmooth elementary blocks (such as max-pooling (He et al., 2016), sorting procedures (Sander et al., 2023), and optimization layers (Amos & Kolter, 2017)). As already underlined in the introduction, these examples involving non-differentiable terms are not covered by existing results.

**Sample space and transport costs.** The choice of the transport cost c depends on the nature of the data and of the potential data uncertainty. For instance, if the variables are continuous with Ξ ⊂ ℝ^m, we consider the distance d = ∥· − ·∥_p induced by the ℓ_p-norm (p ∈ [1, ∞]) and the cost as a power (q ∈ [1, ∞)) of the distance:

$$c(\xi, \xi') = \|\xi - \xi'\|_p^q.$$

If the variables are discrete with Ξ ⊂ {1, …, J}^m, we consider the distance

$$d(\xi, \xi') = \sum_{i=1}^{m} \mathbb{1}\{\xi_i \neq \xi'_i\}$$

and the cost as a power of this distance. If we deal with mixed data, i.e., data containing both continuous and discrete variables, a sum of the previous costs can be considered. In classification, for instance, with samples composed of features x ∈ ℝ^m and a target y ∈ {−1, 1}, we may take

$$c((x, y), (x', y')) = \|x - x'\|_p^q + \kappa\, \mathbb{1}\{y \neq y'\}$$

for a chosen κ > 0. This cost is obviously continuous with respect to the natural distance

$$d((x, y), (x', y')) = \|x - x'\|_p + \mathbb{1}\{y \neq y'\}.$$

This extends to mixed data with categorical, binary, and continuous variables; see, e.g., Belbasi et al. (2023).
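As a small illustration (our own code, not from the paper), the example losses and the mixed-data transport cost above translate directly into a few lines; note that the hinge and k-means losses are nonsmooth yet perfectly valid inputs to the theory:

```python
import numpy as np

def hinge_loss(theta, x, y):
    # SVM hinge loss: nonsmooth because of the max.
    return max(0.0, 1.0 - y * np.dot(theta, x))

def kmeans_loss(theta, x):
    # theta: (K, m) centroids; nonsmooth because of the min over clusters.
    return np.min(np.sum((theta - x) ** 2, axis=1))

def mixed_cost(xy, xy2, p=2, q=2, kappa=1.0):
    # c((x, y), (x', y')) = ||x - x'||_p^q + kappa * 1{y != y'}
    (x, y), (x2, y2) = xy, xy2
    return np.linalg.norm(x - x2, ord=p) ** q + kappa * (y != y2)

theta = np.array([0.5, -1.0])
print(hinge_loss(theta, x=np.array([1.0, 2.0]), y=-1))  # 0.0: margin satisfied
print(mixed_cost((np.array([0.0, 0.0]), 1), (np.array([1.0, 0.0]), -1)))  # 2.0
```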
This cost is obviously continuous with respect to the natural distance

d((x, y), (x′, y′)) = ∥x − x′∥_p + 1{y ≠ y′}.

This extends to mixed data with categorical, binary and continuous variables; see e.g. Belbasi et al. (2023).

3 M AIN RESULTS

3.1 W ASSERSTEIN ROBUST MODELS

Our main result establishes a generalization bound (3) for Wasserstein distributionally robust optimization (WDRO). Given a distribution Q ∈ P(Ξ) and a loss f ∈ F, the robust risk around Q with radius ρ > 0 is defined as

R_{ρ,Q}(f) := sup_{Q′ ∈ P(Ξ), W_c(Q, Q′) ≤ ρ} E_{ξ∼Q′}[f(ξ)]. (4)

In particular, taking Q = P̂_n and Q = P in the above expression, we consider the empirical robust risk, R̂_ρ(f), and the true robust risk, R_ρ(f):

R̂_ρ(f) := R_{ρ,P̂_n}(f) and R_ρ(f) := R_{ρ,P}(f).

We also introduce the following constant, called the _critical radius_ ρ_crit,

ρ_crit := inf_{f∈F} E_{ξ∼P}[ min{ c(ξ, ζ) : ζ ∈ arg max_Ξ f } ]. (5)

Note that ρ_crit is defined from the true distribution P, which makes it a deterministic quantity. In our results, we will make the further assumption that ρ_crit > 0, which excludes³ losses that remain constant across all samples from the ground-truth distribution P. This assumption reasonably aligns with practice and is also in line with the previous works An & Gao (2021); Azizian et al. (2023a). For instance, obtaining a predictor that precisely interpolates the ground-truth distribution (leading to a loss equal to zero everywhere) is unrealistic. In this context, our main result then establishes the generalization bound when n is large enough, for ρ scaling with the standard 1/√n rate.

**Theorem 3.1** (Generalization guarantee for Wasserstein robust models)**.** _If Assumption 2.1 holds and ρ_crit > 0_, _then there exists λ_low > 0 such that when n > 16(α + β)²/ρ_crit² and ρ > α/√n, we have with probability at least 1 − δ,_

R̂_ρ(f) ≥ E_{ξ∼P}[f(ξ)] for all f ∈ F,

_where α and β are the two constants_

α = 48 (∥F∥_∞ + 1/λ_low) (2 I_F/λ_low + 2∥F∥_∞/λ_low),
β = (96 I_F/λ_low) √(2 log(2/δ)) + (4∥F∥_∞/λ_low) √(2 log(4/δ)).

³ See Proposition F.3 in Appendix F.1 about the interpretation of the critical radius.

Idea Generation Category:
3Other
0h6v4SpLCY
# M ON ST3R: A S IMPLE A PPROACH FOR E STIMATING G EOMETRY IN THE P RESENCE OF M OTION

**Junyi Zhang**¹ **Charles Herrmann**²† **Junhwa Hur**² **Varun Jampani**³ **Trevor Darrell**¹ **Forrester Cole**² **Deqing Sun**²∗ **Ming-Hsuan Yang**²,⁴∗

1 UC Berkeley 2 Google DeepMind 3 Stability AI 4 UC Merced

† Project lead, ∗ Equal contribution

[Figure 1 — output panels: Video Depth, Camera Intrinsics, Dynamic/Static Mask]

Figure 1: **MonST3R** processes a dynamic video to produce a time-varying dynamic point cloud, along with per-frame camera poses and intrinsics, in a predominantly feed-forward manner. This representation then enables the efficient computation of downstream tasks, such as video depth estimation and dynamic/static scene segmentation.

A BSTRACT

Estimating geometry from dynamic scenes, where objects move and deform over time, remains a core challenge in computer vision. Current approaches often rely on multi-stage pipelines or global optimizations that decompose the problem into subtasks, like depth and flow, leading to complex systems prone to errors. In this paper, we present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes. Our key insight is that by simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R's representation, previously only used for static scenes, to dynamic scenes. However, this approach presents a significant challenge: the scarcity of suitable training data, namely dynamic, posed videos with depth labels. Despite this, we show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics, even without an explicit motion representation. Based on this, we introduce new optimizations for several downstream video-specific tasks and demonstrate strong performance on video depth and camera pose estimation, outperforming prior work in terms of robustness and efficiency. Moreover, MonST3R shows promising results for primarily feed-forward 4D reconstruction. Interactive 4D results, source code, and trained models are available at: https://monst3r-project.github.io/.

1 I NTRODUCTION

Despite recent progress in 3D computer vision, estimating geometry from videos of dynamic scenes remains a fundamental challenge. Traditional methods decompose the problem into subproblems such as depth, optical flow, or trajectory estimation, addressed with specialized techniques, and then combine them through global optimization or multi-stage algorithms for dynamic scene reconstruction (Luiten et al., 2020; Kumar et al., 2017; Bârsan et al., 2018; Mustafa et al., 2016). Even recent work often takes optimization-based approaches given intermediate estimates derived from monocular video (Lei et al., 2024; Chu et al., 2024; Wang et al., 2024b; Liu et al., 2024; Wang et al., 2024a). However, these multi-stage methods are usually slow, brittle, and prone to error at each step. While highly desirable, end-to-end geometry learning from a dynamic video poses a significant challenge, requiring a suitable representation that can capture the complexities of camera motion, multiple object motions, and geometric deformations, along with annotated training datasets.
While prior methods have centered on the combination of motion and geometry, motion is often difficult to supervise directly due to the lack of annotated training data. Instead, we explore using _only_ geometry to represent dynamic scenes, inspired by the recent work DUSt3R (Wang et al., 2024c). For static scenes, DUSt3R introduces a new paradigm that directly regresses scene geometry. Given a pair of images, DUSt3R produces a pointmap representation, which associates every pixel in each image with an estimated 3D location (i.e., xyz), and aligns this pair of pointmaps in the camera coordinate system of the first frame. For multiple frames, DUSt3R accumulates the pairwise estimates into a global point cloud and uses it to solve numerous standard 3D tasks such as single-frame depth, multi-frame depth, or camera intrinsics and extrinsics.

We leverage DUSt3R's pointmap representation to directly estimate the geometry of dynamic scenes. Our key insight is that pointmaps can be estimated per timestep and that representing them in the same camera coordinate frame still makes conceptual sense for dynamic scenes. As shown in Fig. 1, an estimated pointmap for the dynamic scene appears as a point cloud where dynamic objects appear at multiple locations, according to how they move. Multi-frame alignment can be achieved by aligning pairs of pointmaps based on static scene elements. This setting is a generalization of DUSt3R to dynamic scenes and allows us to use the same network and original weights as a starting point.

One natural question is whether DUSt3R can already handle video data with moving objects effectively. However, as shown in Fig. 2, we identify two significant limitations stemming from the distribution of DUSt3R's training data. First, since its training data contains only static scenes, DUSt3R fails to correctly align pointmaps of scenes with moving objects; it often relies on moving foreground objects for alignment, resulting in incorrect alignment for static background elements. Second, since its training data consists mostly of buildings and backgrounds, DUSt3R sometimes fails to correctly estimate the geometry of foreground objects, regardless of their motion, and places them in the background. In principle, both problems originate from a domain mismatch between training and test time and can be solved by re-training the network. However, this requirement for dynamic, posed data with depth presents a challenge, primarily due to its scarcity. Existing methods, such as COLMAP (Schönberger & Frahm, 2016), often struggle with complex camera trajectories or highly dynamic scenes, making it challenging to produce even pseudo ground-truth data for training.

To address this limitation, we identify several small-scale datasets that possess the necessary properties for our purposes. Our main finding is that, surprisingly, we can successfully adapt DUSt3R to handle dynamic scenes by identifying suitable training strategies designed to maximally leverage this limited data and fine-tuning on it. We then introduce several new optimization methods for video-specific tasks using these pointmaps and demonstrate strong performance on video depth and camera pose estimation, as well as promising results for primarily feed-forward 4D reconstruction. The contributions of this work are as follows:

- We introduce Motion DUSt3R (MonST3R), a geometry-first approach to dynamic scenes that directly estimates geometry in the form of pointmaps, even for moving scene elements.
To this end, we identify several suitable datasets and show that, surprisingly, small-scale fine-tuning achieves promising results for direct geometry estimation of dynamic scenes.

- MonST3R obtains promising results on several downstream tasks (video depth and camera pose estimation). In particular, MonST3R offers key advantages over prior work: enhanced robustness, particularly in challenging scenarios; increased speed compared to optimization-based methods; and competitive results with specialized techniques in video depth estimation, camera pose estimation, and dense reconstruction.

Figure 2: **Limitations of DUSt3R on dynamic scenes.** Left (alignment based on a dynamic object): DUSt3R aligns the moving foreground subject and misaligns the background points, as it is only trained on static scenes. Right (foreground depth not estimated correctly): DUSt3R fails to estimate the depth of a foreground subject, placing it in the background.

2 R ELATED W ORK

**Structure from motion and visual SLAM.** Given a set of 2D images, structure from motion (SfM) (Schönberger & Frahm, 2016; Teed & Deng, 2018; Tang & Tan, 2018) or visual SLAM (Teed & Deng, 2021; Mur-Artal et al., 2015; Mur-Artal & Tardós, 2017; Engel et al., 2014; Newcombe et al., 2011) estimates the 3D structure of a scene while also localizing the camera. However, these methods struggle with dynamic scenes containing moving objects, which violate the epipolar constraint. To address this problem, recent approaches have explored joint estimation of depth, camera pose, and residual motion, optionally with motion segmentation to exploit the epipolar constraints on the stationary part. Self-supervised approaches (Gordon et al., 2019; Mahjourian et al., 2018; Godard et al., 2019; Yang et al., 2018) learn these tasks through self-supervised proxy tasks. CasualSAM (Zhang et al., 2022) fine-tunes a depth network at test time with a joint estimation of camera pose and a movement mask. Robust-CVD (Kopf et al., 2021) jointly optimizes depth and camera pose given optical flow and binary masks for dynamic objects. Our approach directly estimates the 3D structure of a dynamic scene in the pointmap representation, without time-consuming test-time fine-tuning.

**Representation for static 3D reconstruction.** Learning-based approaches reconstruct the static 3D geometry of objects or scenes by learning strong 3D priors from training datasets. Commonly used output representations include point clouds (Guo et al., 2020; Lin et al., 2018), meshes (Gkioxari et al., 2019; Wang et al., 2018), voxels (Sitzmann et al., 2019; Choy et al., 2016; Tulsiani et al., 2017), implicit representations (Wang et al., 2021a; Peng et al., 2020; Chen & Zhang, 2019), etc. DUSt3R (Wang et al., 2024c) introduces a pointmap representation for scene-level 3D reconstruction. Given two input images, the model outputs a 3D point for each pixel of both images in the camera coordinate system of the first frame. The model implicitly infers camera intrinsics, relative camera pose, and two-view geometry, and can thus output an aligned point cloud using its learned strong 3D priors. However, the method targets only static scenes. MonST3R shares the pointmap representation of DUSt3R but targets scenes with dynamic objects.
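To make the pointmap representation concrete, here is a minimal sketch (ours, not from DUSt3R or MonST3R) that backprojects a per-pixel depth map into a world-coordinate pointmap given pinhole intrinsics K and a world-from-camera pose [R | T]; the function name and toy values are illustrative assumptions.

```python
import numpy as np

def depth_to_pointmap(depth, K, R, T):
    """Backproject a depth map (H, W) into a pointmap (H, W, 3).

    Each pixel (j, i) with depth D is lifted to camera coordinates via
    K^{-1} [i*D, j*D, D] and mapped to world coordinates with X = R x + T.
    """
    H, W = depth.shape
    j, i = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Homogeneous pixel coordinates scaled by depth: shape (H, W, 3).
    pix = np.stack([i * depth, j * depth, depth], axis=-1)
    cam = pix @ np.linalg.inv(K).T          # camera-frame 3D points
    return cam @ R.T + T                    # world-frame pointmap

# Toy example: 4x4 image at unit depth, identity pose.
K = np.array([[100.0, 0, 2.0], [0, 100.0, 2.0], [0, 0, 1.0]])
pointmap = depth_to_pointmap(np.ones((4, 4)), K, np.eye(3), np.zeros(3))
print(pointmap.shape)  # (4, 4, 3)
```

A network such as DUSt3R regresses this (H, W, 3) array directly instead of composing it from separately estimated depth and pose.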
**Learning-based visual odometry.** Learning-based visual odometry replaces hand-designed parts of geometry-based methods (Mur-Artal et al., 2015; Mur-Artal & Tardós, 2017; Engel et al., 2017) and enables large-scale training for better generalization, even with moving objects. Trajectory-based approaches (Chen et al., 2024; Zhao et al., 2022) estimate long-term trajectories along a video sequence, classify them as dynamic or static, and then localize the camera via bundle adjustment. Joint estimation approaches additionally infer a moving-object mask (Shen et al., 2023) or optical flow (Wang et al., 2021b) to be robust to moving objects, while requiring such annotations during training. In contrast, our method directly outputs dynamic scene geometry via a pointmap representation and localizes the camera afterwards.

**Monocular and video depth estimation.** Recent deep learning works (Ranftl et al., 2020; 2021; Saxena et al., 2024; Ke et al., 2024) target zero-shot performance and, with large-scale training combined with synthetic datasets (Yang et al., 2024a;b), show strong generalization to diverse domains.

Table 1: **Training datasets** used for fine-tuning on dynamic scenes. All datasets provide both camera pose and depth, and most of them include dynamic objects.

| Dataset | Domain | Scene type | # of frames | # of scenes | Dynamics | Ratio |
|---|---|---|---|---|---|---|
| PointOdyssey (Zheng et al., 2023) | Synthetic | Indoors & outdoors | 200k | 131 | Realistic | 50% |
| TartanAir (Wang et al., 2020) | Synthetic | Indoors & outdoors | 1000k | 163 | None | 25% |
| Spring (Mehl et al., 2023) | Synthetic | Outdoors | 6k | 37 | Realistic | 5% |
| Waymo Perception (Sun et al., 2020) | Real | Driving | 160k | 798 | Driving | 20% |

However, for video, these approaches suffer from flickering (temporal inconsistency between nearby estimates) because they process only a single frame at a time and are trained with invariant objectives. Early approaches to video depth estimation (Luo et al., 2020; Zhang et al., 2021) improve temporal consistency by fine-tuning depth models, and sometimes motion models, at test time for each input video. Self-supervised methods (Watson et al., 2021; Sun et al., 2023) have also been explored to enhance temporal coherence without explicit annotations. Two recent approaches attempt to improve video depth estimation using generative priors. However, ChronoDepth (Shao et al., 2024) still suffers from flickering due to its window-based inference, and DepthCrafter (Hu et al., 2024) produces scale-/shift-invariant depth, which is unsuitable for many 3D applications (Yin et al., 2021).

**4D reconstruction.** Concurrent approaches (Lei et al., 2024; Chu et al., 2024; Wang et al., 2024b; Liu et al., 2024) introduce 4D reconstruction methods for dynamic scenes. Given a monocular video and pre-computed estimates (e.g., 2D motion trajectories, depth, camera intrinsics and pose, etc.), these approaches reconstruct the input video in 4D space via test-time optimization of 3D Gaussians (Kerbl et al., 2023) with deformation fields, facilitating novel view synthesis in both space and time. Our method is orthogonal to these methods and estimates geometry from videos in a feed-forward manner. Our estimates could be used as initialization or intermediate signals for these methods.

3 M ETHOD

3.1 B ACKGROUND AND BASELINES

**Model architecture.** Our architecture is based on DUSt3R (Wang et al., 2024c), a ViT-based architecture (Dosovitskiy et al., 2021) that is pre-trained on a cross-view completion task (Weinzaepfel et al., 2023) in a self-supervised manner.
Two input images are first individually fed to a shared encoder. A subsequent transformer-based decoder processes the input features with cross-attention. Two separate heads at the end of the decoder then output the pointmaps of the first and second frames, aligned in the coordinate frame of the first frame.

**Baseline with mask.** While DUSt3R is designed for static scenes, as shown in Fig. 2, we analyze its applicability to dynamic scenes by using knowledge of dynamic elements (Chen et al., 2024; Zhao et al., 2022). Using ground-truth moving masks, we adapt DUSt3R by masking out dynamic objects during inference at both the image and token levels, replacing dynamic regions with black pixels in the image and the corresponding tokens with mask tokens. This approach, however, leads to degraded pose estimation performance (Sec. 4.3), likely because the black pixels and mask tokens are out-of-distribution with respect to training. This motivates us to address these issues in this work.

3.2 T RAINING FOR DYNAMICS

**Main idea.** While DUSt3R primarily focuses on static scenes, the proposed MonST3R can estimate the geometry of dynamic scenes over time. Figure 1 shows a visual example consisting of a point cloud where dynamic objects appear at different locations, according to how they move. Similar to DUSt3R, for a single image I^t at time t, MonST3R also predicts a pointmap X^t ∈ R^{H×W×3}. For a pair of images, I^t and I^{t′}, we adapt the notation used in the global optimization section of DUSt3R. The network predicts two corresponding pointmaps, X^{t; t←t′} and X^{t′; t←t′}, with confidence maps C^{t; t←t′} and C^{t′; t←t′}. The first element t in the superscript indicates the frame that the pointmap corresponds to, and t←t′ indicates that the network receives the two frames at t, t′ and that the pointmaps are expressed in the coordinate frame of the camera at t. The key difference from DUSt3R is that each pointmap in MonST3R relates to a single point in time.

**Training datasets.** A key challenge in modeling dynamic scenes as per-timestep pointmaps lies in the scarcity of suitable training data, which requires synchronized annotations of input images, camera poses, and depth. Acquiring accurate camera poses for real-world dynamic scenes is particularly challenging, often relying on sensor measurements or post-processing through structure from motion (SfM) (Schönberger et al., 2016; Schönberger & Frahm, 2016) while filtering out moving objects. Consequently, we primarily leverage synthetic datasets, where accurate camera poses and depth can be readily extracted during the rendering process. For our dynamic fine-tuning, we identify four large video datasets: three synthetic datasets, PointOdyssey (Zheng et al., 2023), TartanAir (Wang et al., 2020), and Spring (Mehl et al., 2023), along with the real-world Waymo dataset (Sun et al., 2020), as shown in Tab. 1. These datasets contain diverse indoor/outdoor scenes, dynamic objects, camera motion, and labels for camera pose and depth. PointOdyssey and Spring are both synthetically rendered scenes with articulated, dynamic objects; TartanAir consists of synthetically rendered drone fly-throughs of different scenes without dynamic objects; and Waymo is a real-world driving dataset labeled with LiDAR.
During training, we sample the datasets asymmetrically to place extra weight on PointOdyssey (more dynamic, articulated objects) and less weight on TartanAir (good scene diversity but static) and Waymo (a highly specialized domain). Images are downsampled such that their largest dimension is 512.

**Training strategies.** Due to the relatively small size of this dataset mixture, we adopt several training techniques designed to maximize data efficiency. First, we only fine-tune the prediction head and decoder of the network while freezing the encoder. This strategy preserves the geometric knowledge in the CroCo (Weinzaepfel et al., 2022) features and should decrease the amount of data required for fine-tuning. Second, we create training pairs for each video by sampling two frames with temporal strides ranging from 1 to 9. The sampling probabilities increase linearly with the stride length, with the probability of selecting stride 9 being twice that of stride 1. This gives us a larger diversity of camera and scene motion and more heavily weighs larger motion. Third, we utilize a field-of-view augmentation technique using center crops with various image scales. This encourages the model to generalize across different camera intrinsics, even though such variations are relatively infrequent in the training videos. We train the model with the same confidence-aware regression loss as DUSt3R.

3.3 D OWNSTREAM APPLICATIONS

**Intrinsics and relative pose estimation.** Since the intrinsic parameters are estimated based on the pointmap in its own camera frame X^{t; t←t′}, the assumptions and computation listed in DUSt3R remain valid, and we only need to solve for the focal length f^t to obtain the camera intrinsics K^t. To estimate the relative pose P* = [R* | T*], where R* and T* represent the camera's rotation and translation, respectively, we cannot rely on methods based on correspondences between _two views_, e.g., the epipolar matrix (Hartley & Zisserman, 2003) with 2D correspondences and Procrustes alignment (Luo & Hancock, 1999) with 3D correspondences, since dynamic objects violate their assumptions. Instead, we leverage per-pixel 2D-3D correspondences within the _same view_ and use PnP (Lepetit et al., 2009) to recover the relative pose:

R*, T* = arg min_{R,T} Σ_{i∈I} ∥ x_i − π(K^{t′}(R X_i^{t′; t←t′} + T)) ∥², (1)

where x is the pixel coordinate matrix and π(·) is the projection operation (x, y, z) → (x/z, y/z). To improve robustness to outliers, we use RANSAC (Fischler & Bolles, 1981) and define valid correspondences by thresholding the estimated confidence map, I = {i | C_i^{t′; t←t′} > α}.

**Confident static regions.** We can infer static regions in frames t, t′ by comparing the estimated optical flow with the flow field that results from applying only the camera motion from t to t′ to the pointmap at t. The two flow fields should agree for pixels whose geometry has been correctly estimated and which are static. Given a pair of frames I^t and I^{t′}, we first compute two sets of pointmaps, X^{t; t←t′}, X^{t′; t←t′} and X^{t; t′←t}, X^{t′; t′←t}.
We then use these pointmaps to solve for the camera intrinsics (K^t and K^{t′}) for each frame and the relative camera pose from t to t′, P^{t→t′} = [R^{t→t′} | T^{t→t′}], as above. We then compute the optical flow field induced by camera motion, F̂_cam^{t→t′}, by backprojecting each pixel into 3D, applying the relative camera motion, and projecting back to image coordinates:

F̂_cam^{t→t′} = π(D^{t; t←t′} K^{t′} R^{t→t′} K^{t,−1} x̂ + K^{t′} T^{t→t′}) − x, (2)

where x̂ is the pixel coordinate matrix x in homogeneous coordinates, and D^{t; t←t′} is the estimated depth extracted from the pointmap X^{t; t←t′}. Then we compare it with the optical flow F_est^{t→t′} computed by an off-the-shelf optical flow method (Wang et al., 2024d) and infer the static mask S^{t→t′} via simple thresholding:

S^{t→t′} = [ α > ∥ F̂_cam^{t→t′} − F_est^{t→t′} ∥_{L1} ], (3)

with a threshold α, ∥·∥_{L1} for the smooth-L1 norm (Girshick, 2015), and [·] for the Iverson bracket. This confident static mask is both a potential output and will be used in the later global pose optimization.

3.4 D YNAMIC GLOBAL POINT CLOUDS AND CAMERA POSE

Even a short video contains numerous frames (e.g., a 5-second video at 24 fps gives 120 frames), making it non-trivial to extract a single dynamic point cloud from pairwise pointmap estimates across the video. Here, we detail the steps to simultaneously solve for a global dynamic point cloud and camera poses by leveraging our pairwise model and the inherent temporal structure of video.

[Figure 3 — pipeline: Input Video → Pairwise Pointmap & Optical Flow → Accumulated Intermediates → Global Point Cloud & Camera Parameters]

Figure 3: **Dynamic global point cloud and camera pose estimation.** Given a fixed-size temporal window, we compute a pairwise pointmap for each frame pair with MonST3R and optical flow from an off-the-shelf method. These intermediates then serve as inputs to optimize a global point cloud and per-frame camera poses. Video depth can be directly derived from this unified representation.

**Video graph.** For global alignment, DUSt3R constructs a connectivity graph from all pairwise frames, a process that is prohibitively expensive for video. Instead, as shown on the left of Fig. 3, we process the video with a sliding temporal window, significantly reducing the amount of compute required. Specifically, given a video V = [I^0, …, I^N], we compute pointmaps for all pairs e = (t, t′) within a temporal window of size w, W^t = {(a, b) | a, b ∈ [t, …, t + w], a ≠ b}, and for all valid windows W. To further improve the run time, we also apply strided sampling.

**Dynamic global point cloud and pose optimization.** The primary goal is to accumulate all pairwise pointmap predictions (e.g., X^{t; t←t′}, X^{t′; t←t′}) into the same global coordinate frame to produce the world-coordinate pointmap X^t ∈ R^{H×W×3}. To do this, as shown in Fig. 3, we use DUSt3R's alignment loss and add two video-specific loss terms: camera trajectory smoothness and flow projection.
We start by re-parameterizing the global pointmaps X^t with camera parameters P^t = [R^t | T^t], K^t and a per-frame depthmap D^t, as

X^t_{i,j} := P^{t,−1} h(K^{t,−1} [i D^t_{i,j}; j D^t_{i,j}; D^t_{i,j}]),

with (i, j) for the pixel coordinate and h(·) for the homogeneous mapping. This allows us to define losses directly on the camera parameters. To simplify the notation for function parameters, we use X^t as a shortcut for P^t, K^t, D^t. First, we use the alignment term from DUSt3R, which aims to find a single rigid transformation P^{t;e} that aligns each pairwise estimate with the world-coordinate pointmaps, since both X^{t; t←t′} and X^{t′; t←t′} are in the same camera coordinate frame:

L_align(X, σ, P_W) = Σ_{W^i ∈ W} Σ_{e ∈ W^i} Σ_{t ∈ e} ∥ C^{t;e} · (X^t − σ^e P^{t;e} X^{t;e}) ∥₁, (4)

where σ^e is a pairwise scale factor. To simplify the notation, we use the directed edge e = (t, t′) interchangeably with t←t′. We use a camera trajectory smoothness loss to encourage smooth camera motion by penalizing large changes in camera rotation and translation in nearby timesteps:

L_smooth(X) = Σ_{t=0}^{N} ( ∥ R^{t⊤} R^{t+1} − I ∥_f + ∥ T^{t+1} − T^t ∥₂ ), (5)

where the Frobenius norm ∥·∥_f is used for the rotation difference, the L2 norm ∥·∥₂ is used for the translation difference, and I is the identity matrix. We also use a flow projection loss to encourage the global pointmaps and camera poses to be consistent with the estimated flow for the confident, static regions of the actual frames. More precisely, given two frames t, t′, using their global pointmaps, camera extrinsics, and intrinsics, we compute the flow field obtained by taking the global pointmap X^t, assuming the scene is static, and then moving the camera from t to t′. We denote this value F_cam^{global; t→t′}, similar to the term defined in the confident static region computation above. Then we can encourage this to be close to the estimated flow, F_est^{t→t′}, in the regions which are confidently static according to the global parameters, S^{global; t→t′}:

L_flow(X) = Σ_{W^i ∈ W} Σ_{t→t′ ∈ W^i} ∥ S^{global; t→t′} · (F_cam^{global; t→t′} − F_est^{t→t′}) ∥₁, (6)

where · indicates element-wise multiplication. Note that the confident static mask is initialized using the pairwise prediction values (pointmaps and relative poses) as described in Sec. 3.3. During the optimization, we use the global pointmaps and camera parameters to compute F_cam^{global} and update the confident static mask. Please refer to Appendix D for more details on L_smooth and L_flow.
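As an illustration of the trajectory term, here is a minimal numpy sketch (ours, not the authors' implementation) of the smoothness loss in Eq. (5); the array shapes and the function name are assumptions made for the example.

```python
import numpy as np

def smoothness_loss(rotations, translations):
    """Camera trajectory smoothness, Eq. (5): penalize the Frobenius-norm
    deviation of consecutive relative rotations from identity plus the
    L2 change in translation between neighboring timesteps.

    rotations:    (N+1, 3, 3) per-frame rotation matrices R^t
    translations: (N+1, 3)    per-frame translations T^t
    """
    loss = 0.0
    for t in range(len(rotations) - 1):
        rot_term = np.linalg.norm(
            rotations[t].T @ rotations[t + 1] - np.eye(3), ord="fro"
        )
        trans_term = np.linalg.norm(translations[t + 1] - translations[t])
        loss += rot_term + trans_term
    return loss

# A static camera gives zero loss; jittery pose sequences are penalized.
R = np.stack([np.eye(3)] * 5)
T = np.zeros((5, 3))
print(smoothness_loss(R, T))  # 0.0
```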
The complete optimization for our dynamic global point cloud and camera poses is:

X̂ = arg min_{X, P_W, σ} L_align(X, σ, P_W) + w_smooth L_smooth(X) + w_flow L_flow(X), (7)

where w_smooth and w_flow are hyperparameters. Note that, based on the re-parameterization above, X̂ includes all the information for D̂, P̂, K̂.

**Video depth.** We can now easily obtain temporally consistent video depth, traditionally addressed as a standalone problem. Since our global pointmaps are parameterized by camera poses and per-frame depthmaps D̂, simply returning D̂ gives the video depth.

4 E XPERIMENTS

MonST3R runs on a monocular video of a dynamic scene and jointly optimizes video depth and camera pose. We compare its performance with methods specially designed for each individual subtask (i.e., depth estimation and camera pose estimation), as well as monocular depth methods.

4.1 E XPERIMENTAL D ETAILS

**Training and Inference.** We fine-tune DUSt3R's ViT-Base decoder and DPT heads for 25 epochs, using 20,000 sampled image pairs per epoch. We use the AdamW optimizer with a learning rate of 5 × 10⁻⁵ and a mini-batch size of 4 per GPU. Training took one day on 2 × RTX 6000 48GB GPUs. Inference for a 60-frame video with w = 9 and stride 2 (approx. 600 pairs) takes around 30s.

**Global Optimization.** For the global optimization in Eq. (7), we set the loss weights to w_smooth = 0.01 and w_flow = 0.01. We enable the flow loss only once its average value falls below 20, i.e., when the poses are roughly aligned. The motion mask is updated during optimization if the per-pixel flow loss is higher than 50. We use the Adam optimizer for 300 iterations with a learning rate of 0.01, which takes around 1 minute for a 60-frame video on a single RTX 6000 GPU.

4.2 S INGLE - FRAME AND VIDEO DEPTH ESTIMATION

**Baselines.** We compare our method with video depth methods, NVDS (Wang et al., 2023), ChronoDepth (Shao et al., 2024), and the concurrent work DepthCrafter (Hu et al., 2024), as well as single-frame depth methods, Depth-Anything-V2 (Yang et al., 2024b) and Marigold (Ke et al., 2024).

Table 2: **Video depth evaluation** on Sintel, Bonn, and KITTI datasets. We evaluate both scale-and-shift-invariant and scale-invariant depth. The best and second-best results in each category are **bold** and underlined, respectively.

| Alignment | Category | Method | Sintel Abs Rel ↓ | Sintel δ<1.25 ↑ | Bonn Abs Rel ↓ | Bonn δ<1.25 ↑ | KITTI Abs Rel ↓ | KITTI δ<1.25 ↑ |
|---|---|---|---|---|---|---|---|---|
| Per-sequence scale & shift | Single-frame depth | Marigold | 0.532 | 51.5 | 0.091 | 93.1 | 0.149 | 79.6 |
| Per-sequence scale & shift | Single-frame depth | Depth-Anything-V2 | 0.367 | 55.4 | 0.106 | 92.1 | 0.140 | 80.4 |
| Per-sequence scale & shift | Video depth | NVDS | 0.408 | 48.3 | 0.167 | 76.6 | 0.253 | 58.8 |
| Per-sequence scale & shift | Video depth | ChronoDepth | 0.687 | 48.6 | 0.100 | 91.1 | 0.167 | 75.9 |
| Per-sequence scale & shift | Video depth | DepthCrafter (Sep. 2024) | 0.292 | 69.7 | 0.075 | 97.1 | 0.110 | 88.1 |
| Per-sequence scale & shift | Joint video depth & pose | Robust-CVD | 0.703 | 47.8 | - | - | - | - |
| Per-sequence scale & shift | Joint video depth & pose | CasualSAM | 0.387 | 54.7 | 0.169 | 73.7 | 0.246 | 62.2 |
| Per-sequence scale & shift | Joint video depth & pose | MonST3R | 0.335 | 58.5 | 0.063 | 96.4 | 0.104 | 89.5 |
| Per-sequence scale | Video depth | DepthCrafter (Sep. 2024) | 0.692 | 53.5 | 0.217 | 57.6 | 0.141 | 81.8 |
| Per-sequence scale | Joint depth & pose | MonST3R | 0.345 | 56.2 | 0.065 | 96.3 | 0.106 | 89.3 |

Table 3: **Single-frame depth evaluation.** We

Idea Generation Category:
2Direct Enhancement
lJpqxFgWCM
# C OMPUTING C IRCUITS O PTIMIZATION VIA M ODEL -B ASED C IRCUIT G ENETIC E VOLUTION

**Zhihai Wang**¹∗, **Jie Wang**¹†, **Xilin Xia**¹, **Dongsheng Zuo**³, **Lei Chen**², **Yuzhe Ma**³, **JianYe Hao**²,⁴, **Mingxuan Yuan**², **Feng Wu**¹

1 MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
2 Noah's Ark Lab, Huawei Technologies
3 Microelectronics Thrust, Hong Kong University of Science and Technology (Guangzhou)
4 College of Intelligence and Computing, Tianjin University

A BSTRACT

Optimizing computing circuits such as multipliers and adders is a fundamental challenge in modern integrated circuit design. Recent efforts propose formulating this optimization problem as a reinforcement learning (RL) proxy task, offering a promising approach to searching for high-speed and area-efficient circuit designs. However, we show that the RL-based formulation (proxy task) converges to a _local optimal_ design solution (original task) due to the deceptive reward signals and incrementally localized actions in the RL-based formulation. To address this challenge, we propose a novel **m**odel-based circ**u**it gene**t**ic **e**volution (MUTE) framework, which reformulates the problem as a genetic evolution process by proposing a grid-based genetic representation of design solutions. This novel formulation avoids misleading rewards by evaluating and improving generated solutions using the true objective value rather than proxy rewards. To promote globally diverse exploration, MUTE proposes a _multi-granularity_ genetic crossover operator that recombines design substructures at varying column ranges between two grid-based genetic solutions. To the best of our knowledge, MUTE is _the first_ to reformulate the problem as a circuit genetic evolution process, which enables effectively searching for globally optimal design solutions. We evaluate MUTE on several fundamental computing circuits, including multipliers, adders, and multiply-accumulate circuits. Experiments on these circuits demonstrate that MUTE significantly Pareto-dominates state-of-the-art approaches in terms of both area and delay. Moreover, experiments demonstrate that circuits designed by MUTE generalize well to large-scale computation-intensive circuits.

1 I NTRODUCTION

Computing circuits such as multipliers and adders serve as the fundamental building blocks in numerous real-world circuits, particularly in central processing units, graphics processing units, and artificial intelligence (AI) chips (Holdsworth, 1987; Das et al., 2019; Sze et al., 2020). The multiplication and addition operations stand out as the most fundamental and frequently utilized arithmetic operations across various computation-intensive applications, including deep neural networks (DNNs), digital signal processors, and microprocessors (Hashemian, 2002; Elguibaly, 2000; Zuo et al., 2023). Notably, in many popular DNN architectures such as ResNet (He et al., 2016), ViT (Dosovitskiy et al., 2021), Transformer (Vaswani et al., 2017), and BERT (Devlin et al., 2019), multiplication and addition constitute over 99% of all operations. Therefore, the design of high-speed and area-efficient computing circuits plays a pivotal role in enhancing the performance of computation-intensive applications, especially in AI chips.
However, computing circuit optimization is a challenging combinatorial optimization problem due to its NP-hard nature (Hillar & Lim, 2013; Song et al., 2022). On one hand, the combinatorial design space grows exponentially with the input bit widths of the computing circuits (Roy et al., 2021). On the other hand, evaluating the post-synthesis performance of a circuit design (i.e., design performance) with circuit synthesis tools is highly time-consuming, leading to high sampling costs. Therefore, searching for high-speed and area-efficient circuits in the vast design space using limited samples emerges as a significant challenge.

∗ This work was done when Zhihai Wang was an intern at Huawei Noah's Ark Lab.
† Corresponding author. Email: jiewangx@ustc.edu.cn.

To search for high-speed and area-efficient circuits, recent efforts (Roy et al., 2021; Zuo et al., 2023; Song et al., 2022) propose formulating the computing circuits optimization problem as a reinforcement learning (RL) proxy task, offering a promising avenue for optimizing circuit designs using limited samples. Specifically, they start from an initial design solution, learn policies to incrementally modify the local design structure, and utilize design performance gains between two consecutive designs as reward signals. Intuitively, the manually designed rewards can guide RL agents to explore directions that progressively improve design performance at each step.

However, we show that the RL-based formulation (proxy task) converges to a _local optimal_ design solution (original task) due to the deceptive reward signals and incrementally localized actions. First, the reward signals based on performance gains between two consecutive designs are deceptive, as maximizing the cumulative discounted rewards misaligns with the true objective. More specifically, the proxy RL formulation actually optimizes the cumulative discounted performance of all encountered design solutions across a trajectory, while the true objective is to find the single best-performing design. Second, the actions based on incrementally local modifications of the design structure suffer from poor exploration capability, and thus struggle to escape local optima.

To address these challenges, we propose a novel **m**odel-based circ**u**it gene**t**ic **e**volution (MUTE) framework, which proposes a grid-based genetic representation of solutions and reformulates the problem as a circuit genetic evolution process. The evolution formulation is an iterative process alternating between circuit genetic variation and model-based selection, where each iteration evaluates and improves solutions using the true objective value, thus gradually converging toward the best-performing solution (i.e., the original task). To promote globally diverse exploration for escaping local optima, MUTE proposes a multi-granularity crossover operator that recombines design substructures at varying column ranges between two grid-based genetic solutions. Moreover, to tackle the problem of high sampling costs, MUTE introduces a model-based selection method, which learns a model for rapid evaluation of a large number of solutions. We evaluate MUTE on several fundamental computing circuits, including multipliers, adders, and multiply-accumulate circuits.
Experiments on these circuits, spanning a wide range of input widths, demonstrate that MUTE discovers state-of-the-art designs that significantly Pareto-dominate those produced by manual design, mathematical optimization, and learning-based approaches, improving the hypervolume by up to 38%. Moreover, we deploy circuits optimized by MUTE and the baselines into large-scale computation-intensive circuits, and experiments show that MUTE significantly outperforms the baselines in terms of both area and delay. Our results highlight the superior ability of MUTE to discover high-speed and area-efficient circuits for important real-world computing applications, especially high-performance AI chips.

We summarize our major contributions as follows. (1) We show that the RL-based formulation for computing circuits optimization converges to a local optimal design solution, indicating a significant objective gap between the RL-based formulation and the true objective. (2) To the best of our knowledge, our MUTE is _the first_ to reformulate the optimization problem as a novel circuit genetic evolution process, which enables effectively searching for globally optimal circuit design solutions. (3) MUTE proposes a multi-granularity genetic crossover operator to promote globally diverse exploration of the design space. (4) Experiments show that MUTE significantly outperforms state-of-the-art approaches in terms of both area and delay.

2 B ACKGROUND

2.1 C OMPUTING C IRCUITS A RCHITECTURE

Most computing circuits such as prefix adders, vector adders, subtracters, multipliers, and multiply-accumulate circuits rely on two fundamental circuit structures, i.e., the Compressor Tree and the Prefix Tree (Weste & Harris, 2015; Roy et al., 2021; Zuo et al., 2023; Wang et al., 2024g). Note that the Compressor Tree and Prefix Tree share similar tree structures that can both be represented by grid-based design solutions. We take a multiplier circuit with four input bits as an example to introduce the Compressor Tree structure, as shown in Figure 1.

[Figure 1 shows (a) the multiplier architecture on the example 1010 × 0101, with partial products passing through the Partial Product Generator, the Compressor Tree, and the Carry-Propagate Adder, and (b) the Compressor Tree with its grid-based design solution representation over stages and columns; the basic components are the 2:2 compressor (half adder: Sum = a ⊕ b, Carry = ab) and the 3:2 compressor (full adder: Sum = a ⊕ b ⊕ c, Carry = ab + (a ⊕ b)c).]

Figure 1: An illustration of the multiplication process and multiplier architecture.

In binary multiplication, two unsigned binary numbers, the multiplicand and the multiplier, are combined to yield their product. Contemporary multiplier designs typically comprise three primary components: a partial product generator (PPG), a Compressor Tree, and a carry propagation adder (CPA).
**Initially**, the PPG generates a bit matrix based on the multiplicand and multiplier, with each element representing a partial product. **Subsequently**, the Compressor Tree compresses each column of the bit matrix to one or two bits by concurrently summing up the partial products within each column. **Finally**, the CPA aggregates the resultant bit matrix from the Compressor Tree to derive the final product. In constructing a Compressor Tree, **a large number of full and half adders are typically employed to execute the summation of generated partial products concurrently**. A full adder, i.e., a 3:2 compressor, accepts three inputs (two single-bit values and a carry-in bit) and produces two outputs: a sum bit and a carry-out bit. A half adder, i.e., a 2:2 compressor, takes two single-bit values as inputs and yields two outputs: a sum bit and a carry-out bit. Notably, when a 3:2 (2:2) compressor is applied to the i-th column, it reduces two (one) bits in column i while adding one bit to column (i + 1). Thus, a Compressor Tree employs numerous compressors (i.e., full and half adders) across multiple stages to compress the partial-product matrix into only two rows in parallel, and it largely dominates the final performance of a multiplier circuit. Moreover, modifying the arrangement of 3:2 and 2:2 compressors within a Compressor Tree can result in significantly different tree-structure designs, leading to variable design performance.

2.2 RL FOR C OMPUTING C IRCUITS O PTIMIZATION

As the Compressor Tree and/or Prefix Tree usually dominates the final performance of a computing circuit (Zuo et al., 2023; Xiao et al., 2021), recent efforts have focused on optimizing the tree structure by formulating the optimization problem as a reinforcement learning (RL) proxy task (Zuo et al., 2023; Roy et al., 2021). We take the existing RL-based Compressor Tree optimization method RL-MUL as an example. RL-MUL starts from an initial Compressor Tree design solution, learns policies to sequentially modify the design structure locally, and utilizes design performance gains as reward signals. We specify the state space, action space, and reward function as follows. **(1) State Space S.** RL-MUL formulates each legal design solution as a state, where each state is represented by a grid-based image. **(2) Action Space A.** RL-MUL designs four types of local modifications to a Compressor Tree solution at a certain column. These local modifications include adding a 2:2 compressor, removing a 2:2 compressor, replacing a 3:2 compressor with a 2:2 compressor, and replacing a 2:2 compressor with a 3:2 compressor. The action space is a discrete set composed of 4 × N_C discrete actions, where N_C denotes the number of columns. Each action i ∈ {0, 1, …, 4 × N_C − 1} corresponds to executing the j-th modification type at the k-th column, where j = i mod 4 and k = ⌊i/4⌋. **(3) Reward Function r.** RL-MUL uses a circuit synthesis tool to obtain the performance of the designed solution at each step. The reward r_t is defined as the difference between the area (delay) of the design at step t − 1 and that at step t. That is, r(s_t, a_t, s_{t+1}) = f(s_t) − f(s_{t+1}), where f denotes the design evaluation function. Finally, RL-MUL leverages the deep Q-network algorithm (Mnih et al., 2015) to train Q-networks. We defer details to Appendix D.
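To make the grid-based representation and compressor mechanics concrete, the following sketch (ours, not from the paper) simulates one compression stage: given per-column bit counts and an allocation of 3:2 and 2:2 compressors per column, it returns the bit counts after the stage. The function name and the example allocation are illustrative.

```python
def compress_stage(bits, full_adders, half_adders):
    """Apply one Compressor Tree stage to per-column partial-product counts.

    bits[i]        -- number of bits waiting in column i (LSB first)
    full_adders[i] -- 3:2 compressors in column i: each consumes 3 bits,
                      leaves 1 sum bit, and carries 1 bit into column i+1
    half_adders[i] -- 2:2 compressors in column i: each consumes 2 bits,
                      leaves 1 sum bit, and carries 1 bit into column i+1
    """
    n = len(bits)
    out = [0] * (n + 1)
    for i in range(n):
        used = 3 * full_adders[i] + 2 * half_adders[i]
        assert used <= bits[i], f"column {i} over-allocated"
        out[i] += bits[i] - used + full_adders[i] + half_adders[i]  # sum bits
        out[i + 1] += full_adders[i] + half_adders[i]               # carry bits
    return out

# 4-bit multiplier: column heights of the partial-product matrix.
bits = [1, 2, 3, 4, 3, 2, 1]
# Hypothetical allocation: one full adder in each of columns 3 and 4.
print(compress_stage(bits, [0, 0, 0, 1, 1, 0, 0], [0] * 7))
```

Each 3:2 compressor indeed removes two bits from its column and adds one to the next, so a grid of compressor counts over (stage, column) fully specifies the tree structure, exactly the genetic representation MUTE operates on.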
3 L IMITATIONS OF E XISTING RL F ORMULATION

3.1 D ECEPTIVE R EWARD S IGNALS

Existing methods formulate the optimization problem as an infinite-horizon Markov decision process (MDP) denoted by a tuple (S, A, r, T, γ, μ₀), where S denotes the state space, A denotes the action space, r : S × A × S → R denotes the reward function, T denotes the deterministic transition function, γ denotes the discount factor, and μ₀ denotes the given initial design solution. Based on the MDP, the return of a deterministic policy π is defined as

R^π = Σ_{t=0}^{∞} γ^t r(s_t, a_t, s_{t+1}),

where s₀ = μ₀, a_t = π(s_t), and s_{t+1} = T(s_t, a_t). Note that the reward is defined by the performance gain between the states s_t and s_{t+1}, i.e.,

r(s_t, a_t, s_{t+1}) = f(s_t) − f(s_{t+1}). (1)

Here f : S → R denotes the underlying evaluation function of design solutions given by circuit synthesis tools. Note that multiple evaluation functions are employed, such as area and delay evaluation functions. For ease of analysis and consistent with previous work (Roy et al., 2021; Zuo et al., 2023), we assume that the evaluation function f is a linear weighted average of these evaluation functions. Intuitively, the manually designed proxy rewards based on performance gains are able to guide RL agents toward directions that progressively improve design performance, as the RL agent receives positive rewards for improving design performance. Thus, a natural question is: _Does the optimal policy in the RL formulation converge to the global optimal design solution?_

To investigate this question, we first theoretically show that _the RL-based optimal policy converges to a local optimal design solution_. Then we empirically show that the underlying evaluation function f is highly oscillatory, so the local optimal design solutions found by the optimal policy can diverge significantly from the global optimal solution.

**Theoretical Analysis** We assume that the state space S is finite. For simplicity, we assume a terminal action for each state that can terminate the episode at this state. We define a state s ∈ S as a local optimum of the function f if for all actions a ∈ A we have f(T(s, a)) ≥ f(s).

**Theorem 3.1.** _The optimal RL policy π* := arg max_π R^π terminates at a state, and that state is a local optimal design solution of the evaluation function f._

This theorem demonstrates the capability of RL methods to reach local optimal solutions. However, it raises a further question: _Is the converged local optimal point also the global optimum?_ Given the lack of detailed information about the optimization objective function f, a rigorous analysis of this problem is currently infeasible. Therefore, we present an intuitive and empirical analysis to demonstrate that _the converged local optimal solution can significantly diverge from the global optimal solution_, as follows.
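The following tiny sketch (ours, not from the paper) previews this gap numerically, in the spirit of the motivating example discussed next: it scores two hypothetical trajectories of evaluation values f(s_t) with the discounted return induced by the reward of Eq. (1), and the trajectory with the larger return contains the worse best design. The trajectory values are invented for illustration.

```python
# Discounted return of a trajectory given its evaluation values f(s_t),
# using the reward r_t = f(s_t) - f(s_{t+1}) from Eq. (1).
def rl_return(f_values, gamma=0.9):
    return sum(
        gamma ** t * (f_values[t] - f_values[t + 1])
        for t in range(len(f_values) - 1)
    )

# Two hypothetical trajectories (lower f is better, e.g. weighted PPA).
traj_1 = [10.0, 9.0, 12.0, 3.0, 11.0]   # briefly visits an excellent design
traj_2 = [10.0, 8.0, 7.0, 7.0, 7.0]     # steady descent, never below 7

print(rl_return(traj_1), min(traj_1))   # return -0.242, best design 3.0
print(rl_return(traj_2), min(traj_2))   # return  2.900, best design 7.0
```

An RL agent maximizing the return prefers the second trajectory, even though the first one discovers the far better design, exactly the objective mismatch formalized in Eq. (2) below.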
**Illustrative Example** We revisit the optimization objective in the RL formulation, i.e.,

R^π = Σ_{t=0}^{∞} γ^t (f(s_t) − f(s_{t+1})) = f(s₀) − Σ_{t=0}^{∞} γ^t (1 − γ) f(s_{t+1}). (2)

This implies that standard RL methods in the existing formulation aim to minimize the _cumulative discounted performance_ of all visited solutions across a trajectory, except the initial state, when the discount factor γ < 1. This is a practical discount factor setting in standard RL and previous methods (Roy et al., 2021; Zuo et al., 2023). In contrast, the circuit optimization task is a best-case-seeking task, i.e., the final performance is measured by the _single or few best-performing_ design solutions found during training. Consequently, the RL-based optimization objective is inconsistent with the original optimization objective, possibly leading to a significant optimization objective gap.

To illustrate the optimization objective gap problem, we provide a motivating example as shown in Figure 2 (Left). Specifically, we illustrate two distinct trajectories in the circuit optimization environment from a given starting solution following two deterministic policies π₁ and π₂. We denote the two trajectories by (s₀^{π₁}, a₀^{π₁}, s₁^{π₁}, a₁^{π₁}, …, s_T^{π₁}) and (s₀^{π₂}, a₀^{π₂}, s₁^{π₂}, a₁^{π₂}, …, s_T^{π₂}), respectively. Each point in Figure 2 (Left) corresponds to the performance of a state across the trajectory. As shown in Figure 2 (Left), the return of policy π₂ is larger than that of policy π₁, while the best solution found by π₁ is better than that found by π₂. This illustrates a significant optimization objective gap between the RL-based formulation and the original true objective.

Figure 2: **(Left)** A motivating example of two distinct trajectories with conflicting returns and found best solutions (weighted PPA vs. steps; π₁: return = −0.31, min = 3.0; π₂: return = 1.13, min = 7.0). **(Right)** A practical training curve of the EA method on a 64-bit Booth multiplier.

3.2 I NCREMENTALLY L OCALIZED A CTIONS

**Non-smooth Objective Functions** We further empirically investigate the properties of the objective function f. As the domain of f is the high-dimensional space S, directly visualizing the landscape of the objective function is challenging. Instead, we sample a large number of diverse state points from the state space to approximate the function's behavior.
Specifically, we visualize the training curve produced by a simple evolutionary algorithm (EA) that uses random actions from the RL action space a ∈ A to repeatedly perturb the current best solution locally (see Appendix E for details). As shown in Figure 2 (Right), the sampled function values exhibit significant oscillations, indicating that the underlying objective function is highly oscillatory as well. The oscillations of the evaluation function stem mainly from the complex optimization mechanisms employed by circuit synthesis tools: even minor modifications to the circuit structure can result in substantial performance variations when evaluated by these tools. Consequently, the oscillatory nature of the optimization objective results in numerous local optimal solutions.

**Limited Exploration Ability of Localized Actions** The oscillatory nature of the objective functions results in numerous local optimal solutions, thus requiring globally diverse exploration of the design space to escape local optima. However, the actions in the existing RL formulation are limited to local modifications of the design structure at a certain column, severely constraining the agent's ability to explore diverse or distant regions of the search space. As a result, the search process may become confined to suboptimal regions, limiting the chances of discovering global optima.

4 A M ODEL -B ASED C IRCUIT G ENETIC E VOLUTION F RAMEWORK

We start with an overview of our proposed MUTE in Section 4.1. Next, we outline the formal procedure at the core of MUTE, specifying the circuit genetic evolution formulation and population initialization in Section 4.2, our proposed efficient and effective circuit genetic variation operators in Section 4.3, and model-based cascade ranking for selection in Section 4.4.

4.1 O VERVIEW OF O UR F RAMEWORK

We provide an illustration of our MUTE in Figure 3.

Figure 3: An illustration of the MUTE framework with a) Genetic Formulation, b) Population Initialization, c) Circuit Genetic Variation Operators, and d) Model-Based Cascade Ranking.

To bridge the gap between the RL-based formulation and the original task, we design a grid-based genetic representation of solutions and reformulate the computing circuits optimization problem as a circuit genetic evolution process. The evolution formulation is an iterative process alternating between circuit genetic variation and model-based selection, where each iteration evaluates and improves solutions using the true objective value, thus bridging the objective gap. First, we propose a learning-based population initialization method to accelerate the evolution process by leveraging existing RL methods to generate a population of high-performing design solutions. Then, we propose efficient and effective genetic variation operators to avoid redundant exploration and promote globally diverse exploration. Finally, to further improve sample efficiency, we propose a model-based cascade ranking method for efficient selection from a large number of generated offspring design solutions.

Idea Generation Category:
Idea Generation Category: Conceptual Integration
KWH4UIoQKS
# I MPROVING U NSUPERVISED C ONSTITUENCY P ARSING VIA M AXIMIZING S EMANTIC I NFORMATION **Junjie Chen** [1], **Xiangheng He** [2], **Yusuke Miyao** [1], **Danushka Bollegala** [3] Department of Computer Science, the University of Tokyo [1] GLAM – Group on Language, Audio, & Music, Imperial College London [2] Department of Computer Science, the University of Liverpool [3] christopher@is.s.u-tokyo.ac.jp, x.he20@imperial.ac.uk, yusuke@is.s.u-tokyo.ac.jp, danushka@liverpool.ac.uk

A BSTRACT

Unsupervised constituency parsers organize phrases within a sentence into a tree-shaped syntactic constituent structure that reflects the organization of sentence semantics. However, the traditional objective of maximizing sentence log-likelihood (LL) does not explicitly account for the close relationship between the constituent structure and the semantics, resulting in a weak correlation between LL values and parsing accuracy. In this paper, we introduce a novel objective that trains parsers by maximizing SemInfo, the semantic information encoded in constituent structures. We introduce a bag-of-substrings model to represent the semantics and estimate the SemInfo value using the probability-weighted information metric. We apply the SemInfo maximization objective to training Probabilistic Context-Free Grammar (PCFG) parsers and develop a Tree Conditional Random Field (TreeCRF)-based model to facilitate the training. Experiments show that SemInfo correlates more strongly with parsing accuracy than LL, establishing SemInfo as a better unsupervised parsing objective. As a result, our algorithm significantly improves parsing accuracy by an average of 7.85 sentence-F1 scores across five PCFG variants and in four languages, achieving state-of-the-art level results in three of the four languages.

1 I NTRODUCTION

Unsupervised constituency parsing is a syntactic task of organizing phrases of a sentence into a tree-shaped constituent structure without relying on linguistic annotations (Klein & Manning, 2002). The constituent structure is a fundamental tool in analyzing sentence semantics (i.e., the meaning) (Carnie, 2007; Steedman, 2000). It can significantly improve performance for downstream Natural Language Processing systems, such as natural language inference (He et al., 2020), machine translation (Xie & Xing, 2017), and semantic role labeling (Chen et al., 2022) systems. It guides the progressive construction of the sentence semantics, as illustrated in Figure 1. Each constituent in the structure corresponds to a meaningful substring, forming partial representations of the sentence semantics. One can easily recover the full sentence semantics by gradually constructing the semantic representation of those constituent substrings. Following this observation, we hypothesize that _constituent substrings in the sentence carry significant semantic information_. Maximizing sentence log-likelihood has traditionally been the primary objective for training unsupervised constituency parsers (Eisner, 2016; Kim et al., 2019a). However, the Log-Likelihood (LL) function does not explicitly factor in the syntax-semantics alignment. This leads to a poor correlation between the LL value and the parsing accuracy. We will further discuss this poor correlation in Section 5.3. As pointed out in previous research, it is challenging to train a Probabilistic Context-Free Grammar (PCFG) parser that outperforms trivial baselines with the LL maximization objective (Carroll & Charniak, 1992; Kim et al., 2019a).
Successful training commonly involves altering the LL maximization objective, such as imposing sparsity constraints (Cohen et al., 2008; Johnson et al., 2007) or heuristically estimating the LL value (Spitkovsky et al., 2010). This evidence suggests that the LL function might not provide robust information to distinguish between constituents and non-constituents, rendering LL an insufficient objective function for unsupervised parsing.

Figure 1: An illustration of the progressive semantics build-up in accordance with the constituent structure, for the sentence "John has been working on a theory until late night" with the constituent tree, in bracket form, (John (has been working on (a theory)) (until late night)). The tree structure in the top-right shows the simplified constituent structure for illustration purposes. Constituent substrings are highlighted in blue.

In this paper, we propose a novel objective for training unsupervised parsers: maximizing SemInfo (the semantic information encoded in constituent structures). Specifically, we introduce a bag-of-substrings model to represent the sentence semantics with substring statistics, in parallel to how bag-of-words models represent document topics with word statistics. Next, we estimate the semantic information encoded in substrings (i.e., substring-semantic information) by applying the Probability-Weighted Information (PWI) metric (Aizawa, 2003), developed for the bag-of-words model, to our bag-of-substrings model. Finally, we calculate the SemInfo value of a constituent structure by summing up the substring-semantic information associated with the structure. Experiments show a much stronger correlation between SemInfo and parsing accuracy than the correlation between LL and parsing accuracy. The improved correlation suggests SemInfo is an effective objective function for unsupervised constituency parsing. In addition, we develop a Tree Conditional Random Field (TreeCRF)-based model to apply the mean-field SemInfo maximization training to PCFG parsers (the state-of-the-art non-ensemble method for unsupervised constituency parsing (Liu et al., 2023)). Experiments demonstrate that the SemInfo maximization objective improves the PCFG's parsing accuracy by 7.85 sentence-F1 scores across five recent PCFG variants and in four languages. Our main contributions are: (1) Proposing a novel method for estimating SemInfo, the semantic information encoded in constituent structures. (2) Demonstrating a strong correlation between SemInfo values and parsing accuracy. (3) Developing a TreeCRF model to apply mean-field SemInfo maximization training to PCFG parsers, significantly improving parsing accuracy and achieving state-of-the-art level results among non-ensemble parsers.

2 B ACKGROUND

The idea that constituent structures reflect the organization of sentence semantics is central to modern linguistic studies (Steedman, 2000; Pollard & Sag, 1987). A constituent is a substring _s_ in a sentence _x_ that can function independently (Carnie, 2007) and carries self-contained meanings (Heim & Kratzer, 1998).
A collection of constituents forms a tree-shaped structure $t$, which we can represent as a collection of its constituent substrings $t = \{s_1, s_2, \ldots\}$. For example, the constituent structure in the top right of Figure 1 can be represented as {"a theory", "until late night", ...}. Previous research (Shen et al., 2017; Yang et al., 2021b) measures the accuracy of the parsing prediction by the instance-level sentence-F1 (SF1$^i$) score. Aggregating the SF1$^i$ score over the corpus gives the corpus-level sentence-F1 score (SF1$^c$), which previous research used to evaluate parser quality.

In this paper, we will apply the Probability-Weighted Information (PWI) metric (Aizawa, 2003), designed to measure word-topic information in bag-of-words models (Figure 2a), to measuring substring-semantic information in our bag-of-substrings model. PWI is an information-theoretic interpretation of the term frequency-inverse document frequency (tf-idf) statistic. The tf-idf statistic is an effective feature in finding keywords in documents (Li et al., 2007) or in locating documents based on a given keyword (Mishra & Vishwakarma, 2015). Let $D$ denote a document corpus, $d_i$ the $i$-th document in the corpus, and $w_{ij}$ the $j$-th word in $d_i$. The bag-of-words model represents the document $d_i$ as an unordered collection of words occurring in the document (i.e., $d_i = \{w_{i1}, w_{i2}, \ldots\}$). Tf-idf, as shown in Equation 1, is the product of the term frequency $F(w_{ij}, d_i)$ (i.e., the frequency of $w_{ij}$ occurring in $d_i$) and the inverse document frequency (i.e., the inverse log-frequency of documents containing $w_{ij}$). PWI interprets the term frequency as the word generation probability and the inverse document frequency as the piecewise word-document information (Equation 2). The PWI value estimates the information that $w_{ij}$ carries with regard to $d_i$. A high value indicates that $w_{ij}$ is both frequent in $d_i$ and strongly associated with $d_i$; in other words, $w_{ij}$ is a keyword of $d_i$.

$$\text{tf-idf}(w_{ij}, d_i) = \underbrace{F(w_{ij}, d_i)}_{\text{term frequency}} \times \underbrace{\log \frac{|D|}{|\{d' : d' \in D \wedge w_{ij} \in d'\}|}}_{\text{inverse document frequency}} \qquad (1)$$

$$\approx \underbrace{P(w_{ij} \mid d_i)}_{\text{word generation probability}} \times \underbrace{\log \frac{P(d_i \mid w_{ij})}{P(d_i)}}_{\text{piecewise word-document information}} = PWI(w_{ij}, d_i) \qquad (2)$$

Our method is developed upon the finding of Chen et al. (2024): constituent structures can be predicted by searching for frequent substrings among semantically similar paraphrases. We extend their findings, interpreting the substring frequency statistic as a dominating term in our proposed substring-semantics information metric and applying it to improve unsupervised PCFG training. As we will see in Section 5.2, our method significantly outperforms theirs in three out of the four languages tested.

PCFG is currently the state-of-the-art non-ensemble model for unsupervised constituency parsing (Liu et al., 2023; Yang et al., 2021a). Previous research trains binary PCFG parsers on a text corpus by maximizing the average LL of the corpus. PCFG is a generative model defined by a tuple $(NT, T, R, S, \pi)$, where $NT$ is the set of non-terminal symbols, $T$ is the set of terminal symbols, $R$ is the set of production rules, $S$ is the start symbol, and $\pi$ is the probability distribution over the rules.
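As a concrete reference for Equations 1 and 2, the following minimal sketch computes tf-idf and its PWI reading over a toy tokenized corpus; the corpus is invented, and the uniform document prior behind the $|D|/\text{df}$ approximation is an assumption.

```python
# Toy illustration of Equations 1 and 2.
import math
from collections import Counter

def tf_idf(word, doc, corpus):
    tf = Counter(doc)[word]                              # F(w_ij, d_i)
    df = sum(1 for d in corpus if word in d)             # docs containing w_ij (>= 1 if word in doc)
    return tf * math.log(len(corpus) / df)               # Equation 1

def pwi(word, doc, corpus):
    # Equation 2: term frequency read as the word-generation probability
    # P(w|d); inverse document frequency read as log P(d|w)/P(d) under a
    # uniform prior over documents (P(d) = 1/|D|, P(d|w) = 1/df).
    p_w_given_d = Counter(doc)[word] / len(doc)
    df = sum(1 for d in corpus if word in d)
    return p_w_given_d * math.log(len(corpus) / df)

corpus = [["john", "works", "on", "a", "theory"],
          ["a", "cat", "sat", "on", "a", "mat"],
          ["the", "theory", "of", "mind"]]
print(tf_idf("theory", corpus[0], corpus), pwi("theory", corpus[0], corpus))
```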
The generation process starts with the start symbol $S$ and iteratively applies non-terminal expansion rules ($A \to BC$ : $A, B, C \in NT$) or terminal rewriting rules ($A \to w$ : $A \in NT, w \in T$) until it produces a complete sentence $x$. We can represent the generation process with a tree-shaped structure $t$. The PCFG assigns a probability for each distinct way of generating $x$, defining a distribution $P(x, t)$. The Inside-Outside algorithm (Baker, 1979) provides an efficient solution for computing the total sentence probability $P(x) = \sum_t P(x, t)$. It constructs a $\beta(s, A)$ table that records the total probability of generating a substring $s$ of $x$ from the non-terminal $A$. The sentence probability can be calculated as $P(x) = \beta(x, S)$, the probability of $x$ being generated from the start symbol $S$. The $\beta(x, S)$ quantity is commonly referred to as $Z(x)$ (Eisner, 2016). Besides the total sentence probability, the $\beta$ table can also be used to calculate the span-posterior probability of $s$ being a constituent (Eisner, 2016) (Equation 3). [1]

$$P(s \text{ is a constituent} \mid x) = \sum_{A \in NT} \frac{\partial \log Z(x)}{\partial \log \beta(s, A)} \qquad (3)$$

The span-based TreeCRF model is widely adopted in constituency parsers (Kim et al., 2019b; Stern et al., 2017). It models the parser distribution $P(t \mid x)$, the probability of constituent structure $t$ given $x$. It determines the probability of $t$ by evaluating whether all substrings involved in the structure are constituents. It assigns a high score to a substring $s$ in its potential function $\phi(s, x)$ if $s$ is likely a constituent and a low score if $s$ is unlikely a constituent. Subsequently, it can represent the parser distribution as $P(t \mid x) \propto \prod_{s \in t} \phi(s, x)$. In previous research, $\phi(s, x)$ has been parameterized differently, such as using the span-posterior probability for decoding ($\phi(s, x) = P(s \text{ is a constituent} \mid x)$) (Yang et al., 2021b) or using the exponentiated output of a Long Short-Term Memory model ($\phi(s, x) = \exp(LSTM(x, s))$) (Kim et al., 2019b).

[1] We explain the derivation in more detail in Section A.2.

Figure 2: Parallel structure between the traditional bag-of-words representation of topics (a) and the proposed bag-of-substrings representation of semantics (b).

3 S EM I NFO : A M ETRIC OF S EMANTIC I NFORMATION E NCODED IN C ONSTITUENT S TRUCTURES

In this section, we introduce our estimation method of SemInfo, the semantic information encoded in constituent structures. We first propose a bag-of-substrings model (Figure 2b), representing the semantics of a sentence by examining how substrings in the sentence are _regenerated_ during a paraphrasing process. We assume the paraphrasing process is capable of generating _natural language paraphrases_ (i.e., the paraphrases should both be acceptable as natural language sentences and have similar semantics to the original sentence). We use instruction-following large language models (LLMs) as the paraphrasing model, exploiting their outstanding zero-shot learning capability (Chia et al., 2023). Next, we apply the PWI metric (Aizawa, 2003) to measure the substring-semantics information, utilizing the parallel structure between the bag-of-words model and our bag-of-substrings model (Figure 2).
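For concreteness, here is a minimal sketch of the inside recursion that produces the $\beta$ table and $Z(x) = \beta(x, S)$; the toy grammar and its rule probabilities are invented for illustration.

```python
# A minimal inside algorithm for a toy binary PCFG.
from collections import defaultdict

def inside(sentence, binary_rules, lexical_rules, start="S"):
    # binary_rules[(A, B, C)] = pi(A -> B C); lexical_rules[(A, w)] = pi(A -> w)
    n = len(sentence)
    beta = defaultdict(float)   # beta[(i, j, A)]: total prob. of A =>* x[i:j]
    for i, w in enumerate(sentence):
        for (A, word), p in lexical_rules.items():
            if word == w:
                beta[(i, i + 1, A)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):           # split point
                for (A, B, C), p in binary_rules.items():
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k, j, C)]
    return beta[(0, n, start)]                   # Z(x) = beta(x, S)

binary_rules = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
lexical_rules = {("NP", "john"): 0.5, ("NP", "theory"): 0.5, ("V", "builds"): 1.0}
print(inside(["john", "builds", "theory"], binary_rules, lexical_rules))  # 0.25
```

Implemented in an autodiff framework, the span posterior of Equation 3 then falls out of back-propagation through $\log Z(x)$, which is exactly what the training pipeline in Section 4 exploits.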
Finally, we estimate the SemInfo value for constituent structures by summing the substring-semantics information associated with the structure.

3.1 D EFINING S UBSTRING -S EMANTIC I NFORMATION USING B AG - OF -S UBSTRINGS M ODEL

Our bag-of-substrings model shares a parallel structure with the traditional bag-of-words model. As discussed in Section 2, the bag-of-words model can model the word-topic information using the PWI metric. Exploiting the structural parallelism, we can apply the PWI metric to our bag-of-substrings model to estimate the information between substrings and sentence semantics (Equation 4). The bag-of-substrings model is based on the paraphrasing model $P(x^p \mid Sem(x))$ shown in Figure 2b. The paraphrasing model takes a source sentence $x$ as input, internally analyzes its semantics $Sem(x)$, and generates a paraphrase $x^p$. We can repeatedly sample from the process, collecting a paraphrase set $X^p = \{x_1^p, x_2^p, \ldots\}$. We define the bag-of-substrings model by examining whether a substring $s$ of $x$ appears in $X^p$. We consider the appearance of $s$ in $X^p$ as $s$ being generated by the bag-of-substrings model. The generation modeling establishes a relationship between the semantics $Sem(x)$ and the substring $s$, which we will use to estimate the substring-semantic information. The PWI metric requires two components to calculate the substring-semantic information: $P(s \mid Sem(x))$, the substring generation probability, and $\log \frac{P(Sem(x) \mid s)}{P(Sem(x))}$, the piecewise mutual information between $s$ and $Sem(x)$. Similar to the bag-of-words model, we will calculate the two components using the frequency of $s$ in $X^p$ and the inverse frequency of $s$ in the corpus $D$.

$$I(s, Sem(x)) = P(s \mid Sem(x)) \log \frac{P(Sem(x) \mid s)}{P(Sem(x))} \qquad (4)$$

3.2 C ALCULATING PWI USING M AXIMAL S UBSTRINGS

Naively measuring substring frequency among paraphrases $X^p$ will yield a misleading estimate of $P(s \mid Sem(x))$. The reason is that one substring can be nested in another substring. If a substring $s$ is generated to convey semantic information, we will observe an occurrence of $s$ along with an occurrence of all its substrings. Hence, the naive substring frequency will wrongly count substring occurrences caused by the generation of larger substrings as occurrences caused by $P(s \mid Sem(x))$. Let us consider the example illustrated in Figure 3. All three substrings in the example have a frequency of 2, yet only the first substring carries significant semantic information. This is because the occurrence of the first substring causes the occurrence of the second and third substrings. The true frequency of the second and third substrings should be 0 instead of 2. We introduce the notion of maximal substring to counter this problem. Given a source sentence $x$ and a paraphrase $x_i^p$, the maximal substring between the two is defined in Equation 5. Intuitively, a maximal substring is the largest substring that occurs in both $x$ and $x_i^p$. Formally, we denote the partial-order relationship of string $\alpha$ being a substring of string $\beta$ by $\alpha \le \beta$, and denote the set of maximal substrings by $MS(x, x_i^p)$. Using maximal substrings, we can avoid over-counting substring occurrences caused by the generation of larger substrings.
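A brute-force sketch of the maximal-substring set in Equation 5 (token-level, and intended only for toy-sized inputs) could look as follows.

```python
# Brute-force construction of MS(x, x^p) from Equation 5.
def substrings(tokens):
    return {tuple(tokens[i:j])
            for i in range(len(tokens)) for j in range(i + 1, len(tokens) + 1)}

def contains(big, small):
    # True if `small` occurs as a contiguous subsequence of `big`.
    return any(big[i:i + len(small)] == small
               for i in range(len(big) - len(small) + 1))

def maximal_substrings(x, xp):
    common = substrings(x) & substrings(xp)
    # Keep substrings with no strictly larger common superstring (Equation 5).
    return {s for s in common
            if not any(len(t) > len(s) and contains(t, s) for t in common)}

x = "john has been working on a theory".split()
xp = "john is working on a theory".split()
print(maximal_substrings(x, xp))
# -> {('john',), ('working', 'on', 'a', 'theory')}; nested substrings such as
#    ('working', 'on') or ('on', 'a') are excluded, as in Figure 3.
```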
Figure 3: An example of naive substring frequency among paraphrases failing to estimate $P(s \mid Sem(x))$. For the source sentence "John has been working on a theory" and the paraphrases "John is working on a theory" and "John was working on a theory", the naive/maximal substring frequencies are: "working on a theory": 2/2; "working on": 2/0; "on a": 2/0.

$$MS(x, x_i^p) := \{\alpha : \alpha \le x \wedge \alpha \le x_i^p \wedge \forall \alpha' (\alpha < \alpha' \Rightarrow \neg(\alpha' \le x) \vee \neg(\alpha' \le x_i^p))\} \qquad (5)$$

We are now ready to define $P(s \mid Sem(x))$ using the paraphrasing distribution $P(x^p \mid Sem(x))$ and the notion of maximal substrings. We define $P(s \mid Sem(x))$ to be proportional to $s$'s probability of being generated as a maximal substring in paraphrases (Equation 6). The probability can then be approximated using the maximal substring frequency $F(s, X^p)$, as shown in Equation 7.

$$P(s \mid Sem(x)) \propto \mathbb{E}_{x_i^p \sim P(x^p \mid Sem(x))} \mathbf{1}(s \in MS(x_i^p, x)) \qquad (6)$$
$$\approx F(s, X^p) \qquad (7)$$

Similarly, we define the inverse document frequency for maximal substrings (Equation 8). The inverse document frequency can serve as an estimate of the piecewise substring-semantics information, quantifying how useful a substring is for conveying semantic information. A high inverse document frequency implies that only a few $Sem(x)$ in the corpus generate $s$ as their maximal substring. In other words, we can easily identify the target semantics by examining whether $s$ appears as a maximal substring.

$$\log \frac{P(Sem(x) \mid s)}{P(Sem(x))} \approx \log \frac{|D|}{|\{x' : x' \in D \wedge s \in MS(x, x')\}|} \qquad (8)$$

3.3 E STIMATING S EM I NFO

A constituent structure $t$ can be represented as a set of constituent substrings. We define SemInfo, the information between $t$ and $Sem(x)$, as the cumulative substring-semantics information associated with $t$ (Equation 9). We estimate the substring-semantics information with the maximal substring frequency-inverse document frequency developed in the above section.

$$I(t, Sem(x)) = \sum_{s \in t} I(s, Sem(x)) \qquad (9)$$
$$\propto \sum_{s \in t} \underbrace{F(s, X^p) \log \frac{|D|}{|\{x' : x' \in D \wedge s \in MS(x, x')\}|}}_{\text{maximal substring frequency-inverse document frequency}} \qquad (10)$$

4 S EM I NFO M AXIMIZATION VIA T REE CRF M ODEL

We train our PCFG models on Equation 12 using the pipeline shown in Figure 4. The pipeline consists of three steps: (1) We compute $\log Z(x)$ by applying the inside algorithm to the PCFG model. This step yields the leading log-likelihood term in Equation 12. More importantly, it constructs the computation graph needed to calculate the span-posterior probability $P(s \text{ is a constituent} \mid x)$. (2) We extract the span-posterior probability via back-propagating
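Putting the pieces of Section 3 together, a minimal sketch of the SemInfo estimate in Equation 10 might look as follows; it reuses the `maximal_substrings` helper from the earlier sketch, and the `max(df, 1)` guard is our own smoothing assumption.

```python
# Sketch of Equation 10: sum of maximal substring frequency-inverse document
# frequency over the constituent substrings of a structure t.
import math

def seminfo(tree_substrings, x, paraphrases, corpus):
    # tree_substrings: the structure t as a set of token tuples.
    # paraphrases: the sampled paraphrase set X^p; corpus: the corpus D.
    total = 0.0
    for s in tree_substrings:
        # F(s, X^p): how often s is a *maximal* substring of a paraphrase.
        freq = sum(s in maximal_substrings(x, xp) for xp in paraphrases)
        if freq == 0:
            continue  # zero generation probability contributes nothing
        # Inverse document frequency over MS(x, x') for x' in the corpus.
        df = sum(s in maximal_substrings(x, xd) for xd in corpus)
        total += freq * math.log(len(corpus) / max(df, 1))
    return total
```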
Idea Generation Category: Conceptual Integration
qyU5s4fzLg
# M ULTIMODAL Q UANTITATIVE L ANGUAGE FOR G ENERATIVE R ECOMMENDATION **Jianyang Zhai** [1,2], **Zi-Feng Mai** [1,3], **Chang-Dong Wang** [1,3] _∗_, **Feidiao Yang** [2*], **Xiawu Zheng** [2,4], **Hui Li** [4], **Yonghong Tian** [2,5] 1 Sun Yat-sen University, 2 Pengcheng Laboratory, 3 Guangdong Key Laboratory of Big Data Analysis and Processing, 4 Xiamen University, 5 Peking University {zhaijy01, yangfd}@pcl.ac.cn, changdongwang@hotmail.com

A BSTRACT

Generative recommendation has emerged as a promising paradigm aiming at directly generating the identifiers of the target candidates. Most existing methods attempt to leverage prior knowledge embedded in Pre-trained Language Models (PLMs) to improve the recommendation performance. However, they often fail to accommodate the differences between the general linguistic knowledge of PLMs and the specific needs of recommendation systems. Moreover, they rarely consider the complementary knowledge between the multimodal information of items, which represents the multi-faceted preferences of users. To facilitate efficient recommendation knowledge transfer, we propose a novel approach called Multimodal Quantitative Language for Generative Recommendation (MQL4GRec). Our key idea is to transform items from different domains and modalities into a unified language, which can serve as a bridge for transferring recommendation knowledge. Specifically, we first introduce quantitative translators to convert the text and image content of items from various domains into a new and concise language, known as quantitative language, with all items sharing the same vocabulary. Then, we design a series of quantitative language generation tasks to enrich quantitative language with semantic information and prior knowledge. Finally, we achieve the transfer of recommendation knowledge from different domains and modalities to the recommendation task through pre-training and fine-tuning. We evaluate the effectiveness of MQL4GRec through extensive experiments and comparisons with existing methods, achieving improvements over the baseline by 11.18%, 14.82%, and 7.95% on the NDCG metric across three different datasets, respectively. [1]

1 I NTRODUCTION

Recommendation systems (RS) aim to recommend items to users that they may be interested in, and are widely used on many online platforms, such as e-commerce and social networking (Chaves et al., 2022; Covington et al., 2016). For a long time, recommendation models that represent users and items using their unique IDs (known as IDRec) have been dominant in the field of RS (Kang & McAuley, 2018a; Sun et al., 2019; Zhang et al., 2024). However, IDRec may encounter cold-start and knowledge-transferability issues due to its inherent properties.

Figure 1: Illustration of our MQL4GRec. We translate items from different domains and modalities (e.g., an "18 Piece Acrylic Paint Set" from Arts and "Sengoku Basara: The Last Party" from Movies) into a new unified language over a shared vocabulary, which can then serve as a bridge for transferring recommendation knowledge.
_∗_ Corresponding authors. [1] Our implementation is available at: https://github.com/zhaijianyang/MQL4GRec.

To address these limitations, some literature (Hou et al., 2022; Sun et al., 2023) employs modal encoders (Devlin et al., 2018; He et al., 2016) to learn universal representations of items or sequences. While promising, these modal encoders are typically not specifically designed for recommendation tasks, resulting in suboptimal performance. Recently, generative recommendation has emerged as a promising paradigm, which employs an end-to-end generative model to directly predict identifiers of target candidates (Geng et al., 2022; Rajput et al., 2023). Due to the success of PLMs in natural language generation (NLG) (Raffel et al., 2020a; Brown et al., 2020; Touvron et al., 2023), most existing methods attempt to leverage the prior knowledge of PLMs to improve the recommendation performance (Bao et al., 2023; Zhang et al., 2023; Zheng et al., 2023). They formalize the recommendation task as a sequence-to-sequence generation process, where the input sequence contains data of items the user has interacted with, and the output sequence represents the identifiers of target items. They then enable PLMs to perform recommendation tasks by adding instructions or prompts. Despite achieving decent performance, these methods suffer from the following limitations: 1) there are significant task differences between PLMs and RS, which may lead to inconsistencies between the general linguistic knowledge of PLMs and the specific requirements of RS; 2) they often overlook the complementary knowledge between the multimodal information of items, which is crucial for capturing the multi-faceted preferences of users.

To address these limitations, it is crucial to bridge the gaps between different domains and modalities, leveraging their recommendation knowledge to enhance the performance of the target domains. Inspired by significant advancements in NLG, such as pretraining-finetuning (Devlin et al., 2018; Raffel et al., 2020b) and prompt-tuning (Brown et al., 2020; Touvron et al., 2023), we propose the idea of transforming items from various domains and modalities into a new and unified language. A key factor contributing to these significant advances is the use of a shared vocabulary, where tokens are endowed with rich semantic information and prior knowledge across various tasks, which can then be effectively transferred to downstream tasks. Thus, we aspire for this new language to encompass a vocabulary in which tokens can represent items from various domains and modalities, as depicted in Figure 1. Specifically, this language not only serves as a bridge for knowledge transfer but also as identifiers of items, and should be more concise than the original modalities (text and image) to avoid issues in generation (Hua et al., 2023). To this end, we propose a novel approach called Multimodal Quantitative Language for Generative Recommendation (MQL4GRec).
Specifically, we first introduce quantitative translators to convert the content of items (text and images) into the quantitative language. We train a separate quantitative translator for each modality of the item, each consisting of a modal encoder and a vector quantizer. Together, the codebooks of the two quantitative translators constitute the vocabulary. Then, we design a series of quantitative language generation tasks aiming at endowing the quantitative language with rich semantic information and prior knowledge; these tasks can be viewed as microcosms of NLG tasks. Specifically, we additionally incorporate some special tokens as task prompts. Finally, we transfer the source-domain and multimodal recommendation knowledge to the recommendation tasks through pre-training and fine-tuning. To evaluate the effectiveness of our proposed MQL4GRec, we conduct extensive experiments and comparisons with existing methods. Relative to the baseline, we observe improvements of 11.18%, 14.82%, and 7.95% on the NDCG metric across three datasets, respectively. In summary, our proposed MQL4GRec achieves the transfer of recommendation knowledge by breaking down barriers between items across different domains and modalities, demonstrating strong scalability and potential. Our main contributions can be summarized as follows:

- We propose MQL4GRec, a novel approach that translates items from various domains and modalities into a unified quantitative language, thereby breaking down the barriers between them and facilitating the transfer of recommendation knowledge.
- We design a series of quantitative language generation tasks that endow the quantitative language with rich semantic information and prior knowledge, and enhance the performance of recommendation tasks through pre-training and fine-tuning.
- We conduct extensive experiments and analyses on three public datasets, and the results validate the effectiveness of our proposed method.

2 R ELATED W ORKS

**Generative Recommendation.** Generative models are one of the hottest research topics in machine learning, resulting in representative works such as Variational AutoEncoders (VAEs) (Kingma & Welling, 2014), Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), and diffusion models (Ho et al., 2020). Generally, generative models aim to learn the distribution of the training data $P(\mathbf{x})$ and generate new samples $\mathbf{z} \sim P(\mathbf{x})$. These generative models have also been applied to recommendation, resulting in many remarkable works on VAE-based (Cai & Cai, 2022; Shenbin et al., 2020), GAN-based (He et al., 2018; Guo et al., 2022; Wang et al., 2022), and diffusion-based (Jiang et al., 2024; Wang et al., 2023c) recommendation. Recently, Transformer-based PLMs such as LLaMA (Touvron et al., 2023) and GPT (Brown et al., 2020) have also shown promising capabilities in language generation. With the help of such powerful generative PLMs, some PLM-based recommendation methods have also been proposed. Some early works, such as P5 (Geng et al., 2022) and M6-Rec (Cui et al., 2022), attempt to transform recommendation into a language generation task by designing prompts to bridge the gap between the downstream task and the pre-training task of PLMs. Some works focus on leveraging the prior knowledge in PLMs for recommendation via various tuning techniques such as parameter-efficient fine-tuning (PEFT) (Bao et al., 2023) and instruction tuning (Zhang et al., 2023).
One of the most important tasks in PLM-based recommendation is how to assign a unique sequence of tokens to each item as its ID. Early works (Geng et al., 2022; Cui et al., 2022) directly use the original name of the item or randomly assign an integer to each item; such IDs have weak transferability and are sometimes unintelligible to PLMs. SEATER (Si et al., 2023) constructs tree-structured item IDs from a pretrained SASRec (Kang & McAuley, 2018b) model. P5-ID (Hua et al., 2023) investigates the effect of different item IDs on recommendation. ColaRec (Wang et al., 2024) captures the collaborative signals between items to construct generative item IDs. Notably, TIGER (Rajput et al., 2023) is the first attempt to use RQ-VAE to construct item IDs by quantizing the item embeddings.

**Multi-modal Recommendation.** Multi-modal side information of items, such as descriptive text and images, has been shown to be effective in improving recommendations by providing richer contexts for interactions. Early works such as VBPR (He & McAuley, 2016) extract visual features by matrix factorization to achieve more personalized ranking. Some works (Wei et al., 2019; Sun et al., 2020; Wei et al., 2020) leverage various types of graph neural networks (GNNs) to fuse the multimodal features. For example, LATTICE (Zhang et al., 2021) designs a modality-aware learning layer to learn item-item structures for each modality and aggregates them to obtain latent item graphs. DualGNN (Wang et al., 2023b) proposes a multi-modal representation learning module to model the user attentions across modalities and inductively learn the user preference. MVGAE (Yi & Chen, 2022) uses a modality-specific variational graph autoencoder to fuse the modality-specific node embeddings. Recently, with the profound development of foundation models in different modalities (Radford et al., 2021; Brown et al., 2020; Raffel et al., 2020b), some recent works attempt to leverage pretrained foundation models as feature encoders to encode the multi-modal side information. Following P5 (Geng et al., 2022), VIP5 (Geng et al., 2023b) extends it into a multi-modal version which encodes the item images with a pretrained CLIP image encoder. MMGRec (Liu et al., 2024) utilizes a Graph RQ-VAE to construct item IDs from both multi-modal and collaborative information. Moreover, IISAN (Fu et al., 2024) proposes a simple plug-and-play architecture using a decoupled PEFT structure and exploiting both intra- and inter-modal adaptation.

3 M ETHOD

In this section, we elaborate on the proposed MQL4GRec, a novel approach for transferring recommendation knowledge across different domains and modalities. We first translate item content into a unified quantitative language, which bridges the gaps between different domains and modalities. Then, we design a series of quantitative language generation tasks, and achieve the transfer of recommendation knowledge through pre-training and fine-tuning. The overall framework of the method is illustrated in Figure 2.
Figure 2: The overall framework of MQL4GRec. We regard the quantizer as a translator, converting item content from different domains and modalities into a unified quantitative language, thus bridging the gap between them (left). Subsequently, we design a series of quantitative language generation tasks (Next Item Generation, Quantitative Language Alignment, and Asymmetric Item Generation) to facilitate the transfer of recommendation knowledge through pre-training and fine-tuning (right).

3.1 Q UANTITATIVE L ANGUAGE

The original modal content of items is complex, which can affect the efficiency and performance of recommendations (Hua et al., 2023). Therefore, we translate item content from various domains and modalities into a concise and unified quantitative language. In this subsection, we introduce a quantitative translator to accomplish the aforementioned conversion.

**Quantitative Translator.** Vector Quantization (VQ) is an information compression technique widely utilized across various domains (Van Den Oord et al., 2017; Zeghidour et al., 2021), which maps high-dimensional data onto a finite set of discrete vectors, known as the codebook. In this paper, we treat the quantizer as a translator that converts complex item content into a concise quantitative language. Here, the codebook serves as the vocabulary of the quantitative language.
To obtain a unified quantitative language, we first employ a frozen modal encoder (LLaMA or ViT (Dosovitskiy et al., 2020)) to encode item content (text or image) and obtain the item representation. Further, we take the item representation as input and train a Residual-Quantized Variational AutoEncoder (RQ-VAE) (Zeghidour et al., 2021) for generating item tokens. RQ-VAE is a multi-level vector quantizer that applies quantization on residuals to generate a tuple of codewords (_i.e._, item tokens). As shown in Figure 2 (left), for an item representation $\mathbf{h}$, RQ-VAE first encodes it into a latent representation $\mathbf{z}$. At each level $l$, we have a codebook $C^l = \{\mathbf{v}_k^l\}_{k=1}^K$, where each codebook vector is a learnable cluster center. The residual quantization process can be represented as:

$$c_i = \arg\min_k \left\| \mathbf{r}_i - \mathbf{v}_k^i \right\|_2^2, \qquad (1)$$
$$\mathbf{r}_{i+1} = \mathbf{r}_i - \mathbf{v}_{c_i}^i, \qquad (2)$$

where $c_i$ is the codeword of the $i$-th level, $\mathbf{r}_i$ is the residual vector of the $i$-th level, and $\mathbf{r}_1 = \mathbf{z}$. Assuming we have $L$-level codebooks, the quantized representation of $\mathbf{z}$ can be obtained as $\hat{\mathbf{z}} = \sum_{i=1}^{L} \mathbf{v}_{c_i}^i$. Then $\hat{\mathbf{z}}$ is used as decoder input to reconstruct the item representation $\mathbf{h}$. The loss function can be represented as:

$$\mathcal{L}_{\text{recon}} = \| \mathbf{h} - \hat{\mathbf{h}} \|_2^2, \qquad (3)$$
$$\mathcal{L}_{\text{rqvae}} = \sum_{i=1}^{L} \left\| \text{sg}[\mathbf{r}_i] - \mathbf{v}_{c_i}^i \right\|_2^2 + \beta \left\| \mathbf{r}_i - \text{sg}\left[\mathbf{v}_{c_i}^i\right] \right\|_2^2, \qquad (4)$$
$$\mathcal{L}(h) = \mathcal{L}_{\text{recon}} + \mathcal{L}_{\text{rqvae}}, \qquad (5)$$

where $\hat{\mathbf{h}}$ is the output of the decoder, sg[*] represents the stop-gradient operator, and $\beta$ is a loss coefficient. The overall loss is divided into two parts: $\mathcal{L}_{\text{recon}}$ is the reconstruction loss, and $\mathcal{L}_{\text{rqvae}}$ is the RQ loss used to minimize the distance between codebook vectors and residual vectors.

Items typically encompass content from multiple modalities, representing various aspects of user preferences. In our setup, each item comprises two modalities: text and image. We train a quantitative translator for each modality, then add prefixes to the codewords from each of the two codebooks to form a dictionary. Specifically, for the text quantitative translator, we prepend lowercase-letter prefixes to the codewords to obtain $V_t = \{a\_1, b\_2, \ldots, d\_K\}$; for the image quantitative translator, we prepend uppercase-letter prefixes to obtain $V_v = \{A\_1, B\_2, \ldots, D\_K\}$. Here, $a/A$ denotes the first-level codebook, $d/D$ the fourth-level codebook, etc. Subsequently, the dictionary can be represented as $V = \{V_t, V_v\}$. With each quantitative translator having $LK$ codewords, the size of our dictionary is $2LK$, enabling us to represent a total of $K^L$ items. Once the quantitative translators are trained, we can directly use them to translate new items into quantitative language. For example, for the item text _"Sengoku Basara: The Last Party"_, after encoding it through the text encoder and RQ-VAE, we obtain a set of codewords (2, 3, 1, 6). Then, by prepending lowercase letters to each number, we get the text quantitative language of the item as _<a_2><b_3><c_1><d_6>_. Similarly, for the item's image, we can obtain its image quantitative language as _<A_1><B_4><C_2><D_6>_.
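To illustrate Equations 1-2 and the token construction, here is a numpy sketch of multi-level residual quantization; the random codebooks stand in for the trained RQ-VAE levels and are purely illustrative.

```python
# Numpy sketch of the residual quantization in Equations 1-2.
import numpy as np

rng = np.random.default_rng(0)
L, K, d = 4, 256, 32                       # levels, codebook size, latent dim
codebooks = rng.normal(size=(L, K, d))     # C^l = {v_k^l}_{k=1..K}

def quantize(z):
    residual, codewords = z, []
    for level in range(L):
        dists = np.linalg.norm(codebooks[level] - residual, axis=1)
        c = int(np.argmin(dists))          # Equation 1: nearest codeword
        codewords.append(c)
        residual = residual - codebooks[level, c]   # Equation 2
    z_hat = sum(codebooks[i, c] for i, c in enumerate(codewords))
    return codewords, z_hat                # item tokens and quantized latent

codes, z_hat = quantize(rng.normal(size=d))
# Modality prefixes, e.g. text tokens <a_.><b_.><c_.><d_.> for L = 4:
print([f"<{chr(ord('a') + i)}_{c}>" for i, c in enumerate(codes)])
```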
**Handling Collisions.** Translating item content into quantitative language may lead to item collisions, where multiple items possess the same tokens. To address this issue, some methods (Rajput et al., 2023; Hua et al., 2023) append an additional identifier after the item indices, which may introduce semantically unrelated distributions. LC-Rec (Zheng et al., 2023) introduces a uniform-distribution constraint to prevent multiple items from clustering in the same leaf node. However, this method does not completely resolve collisions, such as when items have the same modality information or when the number of collisions exceeds the size of the last-level codebook, which can lead to inflated performance metrics. (More discussion in Appendix E.1.) To address the above issue, we reallocate tokens for colliding items based on the distance from the residual vector to the code vectors. Specifically, for $N$ colliding items, we first calculate the distances $\mathbf{D} \in \mathbb{R}^{N \times L \times K}$ between the residual vectors and the code vectors of each level, based on $d_k^i = \| \mathbf{r}_i - \mathbf{v}_k^i \|_2^2$, and sort the distances to obtain the indices $\mathbf{I} = \text{argsort}(\mathbf{D}, \text{axis}=2) \in \mathbb{R}^{N \times L \times K}$. Then, we sort the colliding items by their minimum distance to the code vectors of the last level, i.e., $(item_1, item_2, \ldots, item_N) = \text{sort}_{\min(\mathbf{d}_L)}(\text{colliding items})$. Finally, we reallocate tokens for the sorted colliding items based on $\mathbf{I}$, following these principles: 1) Start from the last level and assign the nearest token to each item; if collisions occur, assign the next-nearest token. 2) If there are insufficient tokens in the last level, then for the remaining colliding items, reallocate tokens from the second-to-last level based on distance, and then reallocate tokens from the last level. We repeat this process until all colliding items are handled.

3.2 Q UANTITATIVE L ANGUAGE G ENERATION T ASKS

In this subsection, we design several quantitative language generation tasks with the aim of imbuing the quantitative language with more semantic information, thereby transferring prior knowledge to the target task, as illustrated in Figure 2 (right). Specifically, we additionally include some special tokens in the dictionary, which serve as prompts to differentiate the types of tasks.

**Next Item Generation.** Since our primary goal is to predict the next item, the next item generation task is our main optimization objective. Specifically, each item contains both text and image modalities, so we have two subtasks: 1) Next Text Item Generation; 2) Next Image Item Generation. In this context, the input sequence is the item-token sequence from the user's interaction history, and the output sequence is the target item's tokens in the respective modality. Different modal sequences reflect different aspects of user preferences.

**Asymmetric Item Generation.** In the next item generation task, the input and output are tokens of the same modality, and we refer to this task as symmetric. To facilitate the interaction of recommendation knowledge between the two modalities, we introduce asymmetric item generation tasks.
Here, there are two subtasks: 1) Asymmetric Text Item Generation, where the input is the image tokens of the interaction-history items, and the output is the text tokens of the target item; 2) Asymmetric Image Item Generation, where the input is the text tokens of the interaction-history items, and the output is the image tokens of the target item. For example, the input sequence _"<*_6><*_7><*_8><a_2><b_3><c_1><d_6><a_4><b_3><c_8><d_6>"_ can be described in human-understandable language as follows: _"Based on the user's text interaction sequence, please predict the next item's image quantitative language: <a_2><b_3><c_1><d_6>, <a_4><b_3><c_8><d_6>"_.

**Quantitative Language Alignment.** Asymmetric item generation tasks enable the interaction of knowledge between the two modalities, but they fall under the category of implicit alignment. We further introduce explicit quantitative language alignment tasks to directly achieve alignment between the text and image quantitative languages of items. Here, we also have two subtasks: 1) Text-to-Image Alignment; 2) Image-to-Text Alignment. For example, the input sequence _"<*_12><*_13><*_14><a_2><b_3><c_1><d_6>"_ can be described in human-understandable language as follows: _"Please provide the image quantitative language for the following item: <a_2><b_3><c_1><d_6>"_.

3.3 T RAINING AND R ECOMMENDATION

**Training.** Quantitative language can be viewed as a microcosm of natural language. We employ a two-stage paradigm of pre-training and fine-tuning to optimize the model, which is similar to NLG tasks. For **pre-training**, we utilize the source-domain datasets, where the pre-training task consists of the two next-item-generation subtasks. The purpose is to transfer recommendation knowledge from the source domains to the target domains. For **fine-tuning**, we conduct it on the target-domain dataset, with tasks encompassing all quantitative language generation tasks. The aim is to leverage recommendation knowledge from different modalities to explore users' multifaceted preferences. The tasks mentioned above are conditional language generation tasks performed in a sequence-to-sequence manner. We optimize the negative log-likelihood of the generation target as follows:

$$\mathcal{L}_\theta = -\sum_{j=1}^{|\mathbf{Y}|} \log P_\theta(\mathbf{Y}_j \mid \mathbf{Y}_{<j}, \mathbf{X}), \qquad (6)$$

where $\theta$ denotes the model parameters, $\mathbf{X}$ is the input sequence of the encoder, and $\mathbf{Y}_j$ is the $j$-th token of $\mathbf{Y}$.

**Re-ranking for recommendation.** There are two subtasks in the next item generation task, representing different user preferences. Although the fine-tuning tasks can facilitate the transfer of recommendation knowledge between them, there might be some information loss. Therefore, we re-rank items by utilizing the recommendation lists generated by the two subtasks. The basic idea is that items appearing in both lists should be ranked higher. Specifically, we first obtain recommendation lists $R_t$ and $R_v$ for each subtask through beam search, which include scores for each item. Then, the new score for each item can be formalized as:

$$s(x) = \begin{cases} (s_t(x) + s_v(x))/2 + 1 & x \in R_t \wedge x \in R_v \\ s_t(x) & x \in R_t \\ s_v(x) & x \in R_v \end{cases} \qquad (7)$$

where $s_i(x)$ is the score of item $x$ in the list $R_i$, and $i \in \{t, v\}$.
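A minimal sketch of the merge rule in Equation 7, with dictionaries standing in for the scored beam-search lists $R_t$ and $R_v$ (the item names are invented):

```python
# Sketch of the re-ranking rule in Equation 7.
def rerank(scores_t, scores_v):
    merged = {}
    for x in set(scores_t) | set(scores_v):
        if x in scores_t and x in scores_v:
            # Items surfaced by both modalities get the +1 boost.
            merged[x] = (scores_t[x] + scores_v[x]) / 2 + 1
        elif x in scores_t:
            merged[x] = scores_t[x]
        else:
            merged[x] = scores_v[x]
    return sorted(merged, key=merged.get, reverse=True)

print(rerank({"item_a": 0.8, "item_b": 0.6}, {"item_a": 0.7, "item_c": 0.9}))
# item_a appears in both lists, so it outranks item_c despite lower raw scores.
```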
4 E XPERIMENTS

4.1 E XPERIMENTAL S ETTINGS

**Datasets.** We evaluate the proposed approach on three public real-world benchmarks from the Amazon Product Reviews dataset (Ni et al., 2019), containing user reviews and item metadata from May 1996 to October 2018. In particular, we use six categories for pre-training, including _"Pet Supplies"_, _"Cell Phones and Accessories"_, _"Automotive"_, _"Tools and Home Improvement"_, _"Toys and Games"_, and _"Sports and Outdoors"_, and three categories for sequential recommendation tasks, including _"Musical Instruments"_, _"Arts Crafts and Sewing"_, and _"Video Games"_. We discuss the dataset statistics and pre-processing in Appendix A.

Table 1: Performance comparison of different methods on the three datasets. The best and second-best performances are indicated in bold and underlined font, respectively.

| Dataset | Metric | GRU4Rec | BERT4Rec | SASRec | FDSA | S3-Rec | VQ-Rec | MISSRec | P5-CID | VIP5 | TIGER | MQL4GRec | Improv. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Instruments | HR@1 | 0.0566 | 0.0450 | 0.0318 | 0.0530 | 0.0339 | 0.0502 | 0.0723 | 0.0512 | 0.0737 | 0.0754 | 0.0833 | +10.48% |
| Instruments | HR@5 | 0.0975 | 0.0856 | 0.0946 | 0.0987 | 0.0937 | 0.1062 | 0.1089 | 0.0839 | 0.0892 | 0.1007 | 0.1115 | +2.39% |
| Instruments | HR@10 | 0.1207 | 0.1081 | 0.1233 | 0.1249 | 0.1123 | 0.1357 | 0.1361 | 0.1119 | 0.1071 | 0.1221 | 0.1375 | +1.03% |
| Instruments | NDCG@5 | 0.0783 | 0.0667 | 0.0654 | 0.0775 | 0.0693 | 0.0796 | 0.0797 | 0.0678 | 0.0815 | 0.0882 | 0.0977 | +10.77% |
| Instruments | NDCG@10 | 0.0857 | 0.0739 | 0.0746 | 0.0859 | 0.0743 | 0.0891 | 0.0880 | 0.0704 | 0.0872 | 0.0950 | 0.1060 | +11.58% |
| Arts | HR@1 | 0.0365 | 0.0289 | 0.0212 | 0.0380 | 0.0172 | 0.0408 | 0.0479 | 0.0421 | 0.0474 | 0.0532 | 0.0672 | +26.32% |
| Arts | HR@5 | 0.0817 | 0.0697 | 0.0951 | 0.0832 | 0.0739 | 0.1038 | 0.1021 | 0.0713 | 0.0704 | 0.0894 | 0.1037 | - |
| Arts | HR@10 | 0.1088 | 0.0922 | 0.1250 | 0.1190 | 0.1030 | 0.1386 | 0.1321 | 0.0994 | 0.0859 | 0.1167 | 0.1327 | - |
| Arts | NDCG@5 | 0.0602 | 0.0502 | 0.0610 | 0.0583 | 0.0511 | 0.0732 | 0.0699 | 0.0607 | 0.0586 | 0.0718 | 0.0857 | +17.08% |
| Arts | NDCG@10 | 0.0690 | 0.0575 | 0.0706 | 0.0695 | 0.0630 | 0.0844 | 0.0815 | 0.0662 | 0.0635 | 0.0806 | 0.0950 | +12.56% |
| Games | HR@1 | 0.0140 | 0.0115 | 0.0069 | 0.0163 | 0.0136 | 0.0075 | 0.0201 | 0.0169 | 0.0173 | 0.0166 | 0.0203 | +1.00% |
| Games | HR@5 | 0.0544 | 0.0426 | 0.0587 | 0.0614 | 0.0527 | 0.0408 | 0.0674 | 0.0532 | 0.0480 | 0.0523 | 0.0637 | - |
| Games | HR@10 | 0.0895 | 0.0725 | 0.0985 | 0.0988 | 0.0903 | 0.0679 | 0.1048 | 0.0824 | 0.0758 | 0.0857 | 0.1033 | - |
| Games | NDCG@5 | 0.0341 | 0.0270 | 0.0333 | 0.0389 | 0.0351 | 0.0242 | 0.0385 | 0.0331 | 0.0328 | 0.0345 | 0.0421 | +8.23% |
| Games | NDCG@10 | 0.0453 | 0.0366 | 0.0461 | 0.0509 | 0.0468 | 0.0329 | 0.0499 | 0.0454 | 0.0418 | 0.0453 | 0.0548 | +7.66% |

Table 2: Ablation study of handling collisions.

| Methods | Instruments HR@10 | Instruments NDCG@10 | Arts HR@10 | Arts NDCG@10 | Games HR@10 | Games NDCG@10 |
|---|---|---|---|---|---|---|
| TIGER | 0.1221 | 0.0950 | **0.1167** | 0.0806 | 0.0857 | 0.0453 |
| TIGER w/o user | 0.1216 | 0.0958 | 0.1159 | 0.0810 | 0.0863 | 0.0464 |
| Handling Collisions | **0.1277** | **0.0987** | 0.1163 | **0.0844** | **0.0885** | **0.0473** |

**Evaluation Metrics.** We use top-k Recall (Recall@K) and Normalized Discounted Cumulative Gain (NDCG@K) with K = 1, 5, 10 to evaluate the recommendation performance. Following previous works (Geng et al., 2022; Hua et al., 2023), we employ the _leave-one-out_ strategy for evaluation. We perform full-ranking evaluation over the entire item set instead of sample-based evaluation. For the generative methods based on beam search, the beam size is uniformly set to 20.
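Under the leave-one-out protocol, each test user has exactly one ground-truth next item, so the ideal DCG is 1 and the metrics above reduce to simple functions of the target's rank; a minimal sketch (item names invented):

```python
# Leave-one-out HR@K and NDCG@K with a single ground-truth item per user.
import math

def hr_at_k(ranking, target, k):
    return 1.0 if target in ranking[:k] else 0.0

def ndcg_at_k(ranking, target, k):
    if target in ranking[:k]:
        rank = ranking.index(target)            # 0-based position
        return 1.0 / math.log2(rank + 2)
    return 0.0

ranking = ["item_9", "item_3", "item_7"]        # full ranking over the item set
print(hr_at_k(ranking, "item_3", 5), ndcg_at_k(ranking, "item_3", 5))  # 1.0 0.63...
```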
4.2 O VERALL P ERFORMANCE

In this section, we compare our proposed approach for generative recommendation with the following sequential recommendation methods (described briefly in Appendix B): GRU4Rec (Hidasi et al., 2015), BERT4Rec (Sun et al., 2019), SASRec (Kang & McAuley, 2018b), FDSA (Zhang et al., 2019), S³-Rec (Zhou et al., 2020), VQ-Rec (Hou et al., 2023), MISSRec (Wang et al., 2023a), P5-CID (Hua et al., 2023), VIP5 (Geng et al., 2023a), and TIGER (Rajput et al., 2023). Results are shown in Table 1. Based on these results, we make the following observations. Among non-generative recommendation methods, MISSRec often achieves better performance, demonstrating that introducing multimodal information of items can enhance recommendation performance. Among generative baseline methods, VIP5 with image information does not achieve good results, which may be due to the modal differences between PLMs and image information. Furthermore, TIGER performs well on the Instruments and Arts datasets but does not exhibit superiority on the Games dataset; this may be due to TIGER's lack of auxiliary content information. In contrast, our proposed method introduces recommendation knowledge from different domains and modalities. Compared to the baseline methods, our proposed MQL4GRec achieves the best performance in most cases, especially with significant improvements on the NDCG metric. This superior performance can be attributed to two factors: 1) we translate item content from different domains and modalities into a unified quantitative language, breaking down barriers between them; 2) the series of QLG tasks we designed enables the transfer of recommendation knowledge to target tasks through pre-training and fine-tuning.

Table 3: Ablation study of various quantitative language generation tasks without pre-training.

| Modal | Tasks | Instruments HR@10 | Instruments NDCG@10 | Arts HR@10 | Arts NDCG@10 | Games HR@10 | Games NDCG@10 |
|---|---|---|---|---|---|---|---|
| Text | NIG 1 | 0.1277 | 0.0987 | 0.1163 | 0.0844 | 0.0885 | 0.0473 |
| Text | NIG | 0.1275 | 0.0986 | 0.1205 | 0.0877 | 0.0928 | 0.0493 |
| Text | + AIG | 0.1279 | 0.0987 | 0.1249 | 0.0895 | 0.1002 | 0.0529 |
| Text | + QLA | **0.1282** | **0.0993** | **0.1293** | **0.0913** | **0.1010** | **0.0531** |
| Image | NIG 2 | 0.1243 | 0.0968 | 0.1117 | 0.0812 | 0.0881 | 0.0478 |
| Image | NIG | 0.1262 | 0.0986 | 0.1158 | 0.0848 | 0.0899 | 0.0487 |
| Image | + AIG | **0.1299** | 0.0998 | 0.1218 | 0.0878 | 0.1002 | 0.0534 |
| Image | + QLA | 0.1280 | **0.1001** | **0.1259** | **0.0901** | **0.1017** | **0.0540** |

4.3 A BLATION S TUDY

**Handling Collisions.** We propose a method based on the distance between the residual vector and the codeword vectors to resolve item collisions. To validate the effectiveness of our method, we compare it with the collision-resolution approach in TIGER, which directly adds an item index layer to resolve item collisions, thereby
Idea Generation Category: Conceptual Integration
v7YrIjpkTF
# O PEN -YOLO 3D: T OWARDS F AST AND A CCURATE O PEN -V OCABULARY 3D I NSTANCE S EGMENTATION **Mohamed El Amine Boudjoghra** (TUM, MBZUAI), **Angela Dai** (TUM), **Jean Lahoud** (MBZUAI), **Hisham Cholakkal** (MBZUAI), **Rao Muhammad Anwer** (MBZUAI, Aalto University), **Salman Khan** (MBZUAI, ANU), **Fahad Shahbaz Khan** (MBZUAI, Linköping University)

A BSTRACT

Recent works on open-vocabulary 3D instance segmentation show strong promise but at the cost of slow inference speed and high computation requirements. This high computation cost is typically due to their heavy reliance on CLIP features aggregated from multiple views, which requires computationally expensive 2D foundation models like Segment Anything (SAM) and CLIP. Consequently, this hampers their applicability in many real-world applications that require both fast and accurate predictions. To this end, we propose a novel open-vocabulary 3D instance segmentation approach, named Open-YOLO 3D, that efficiently leverages only 2D object detection from multi-view RGB images for open-vocabulary 3D instance segmentation. We demonstrate that our proposed Multi-View Prompt Distribution (MVPDist) method makes use of multi-view information to account for misclassification from the object detector and predict a reliable label for 3D instance masks. Furthermore, since projections of 3D object instances are already contained within the 2D bounding boxes, we show that our proposed low granularity label maps, which require only a 2D object detector to construct, are sufficient and very fast for predicting prompt IDs for 3D instance masks when used with our proposed MVPDist. We validate our Open-YOLO 3D on two benchmarks, ScanNet200 and Replica, under two scenarios: _(i)_ with ground-truth masks, where labels are required for given object proposals, and _(ii)_ with class-agnostic 3D proposals generated from a 3D proposal network. Our Open-YOLO 3D achieves state-of-the-art performance on both datasets while obtaining up to _∼_ 16× speedup compared to the best existing method in the literature. On the ScanNet200 val. set, our Open-YOLO 3D achieves a mean average precision (mAP) of 24.7% while operating at 22 seconds per scene. github.com/aminebdj/OpenYOLO3D

1 I NTRODUCTION

3D instance segmentation is a computer vision task that involves the prediction of masks for individual objects in a 3D point-cloud scene. It holds significant importance in fields like robotics and augmented reality. Due to its diverse applications, this task has garnered increasing attention in recent years. Researchers have long focused on methods that typically operate within a closed-set framework, limiting their ability to recognize objects not present in the training data. This constraint poses challenges, particularly when novel objects must be identified or categorized in unfamiliar environments. Recent methods Nguyen et al. (2024); Takmaz et al. (2023) address the problem of novel-class segmentation, but they suffer from slow inference, ranging from 5 minutes for small scenes to 10 minutes for large scenes, due to their reliance on computationally heavy foundation models like SAM Kirillov et al. (2023) and CLIP Zhang et al. (2023), along with heavy computation for lifting 2D CLIP features to 3D. Open-vocabulary 3D instance segmentation for robotics tasks, such as manipulating objects and inventory management, requires accurate predictions while being fast in the decision-making process.
Furthermore, these tasks alter the point cloud throughout time, where objects can be rearranged, removed, or added; this would require open vocabulary 3D instance segmentation pipelines to re-run from 1 Figure 1: **Open-vocabulary 3D instance segmentation with our Open-YOLO 3D.** The proposed Open-YOLO 3D is capable of segmenting objects in a zero-shot manner. Here, We show the output for a ScanNet200 Rozenberszki et al. (2022) scene with various prompts, where our model yields improved performance compared to the recent Open3DIS Nguyen et al. (2024). We show zoomed-in images of hidden predicted instances in the colored boxes. Additional results are in suppl. material. scratch for every updated scene. Thus, recent methods would be ill-suited for such robotics tasks due to their low speed. Motivated by recent advances in 2D object detection Cheng et al. (2024), we look into an alternative approach that efficiently leverages fast object detectors instead of utilizing computationally expensive foundation models that are adopted by recent methods. This paper proposes a novel open-vocabulary 3D instance segmentation method, named Open-YOLO 3D, that utilizes efficient, joint 2D-3D information using bounding boxes and projections from 3D point clouds. We employ an open-vocabulary 2D object detector to generate bounding boxes with their class labels for all frames corresponding to the 3D scene; on the other side, we utilize a 3D instance segmentation network to generate 3D class-agnostic instance masks for the point clouds, which proves to be much faster than 3D proposal generation methods from 2D instances Nguyen et al. (2024); Lu et al. (2023). Unlike recent methods Takmaz et al. (2023); Nguyen et al. (2024) which use SAM and CLIP to lift 2D clip features to 3D for prompting the 3D mask proposal, we propose an approach that relies on the bounding box predictions from 2D object detectors which prove to be significantly faster. We use the predicted bounding boxes in all RGB frames corresponding to the point cloud scene to construct a Low Granularity (LG) label map for every frame. One LG label map is a two-dimensional array with the same height and width as the RGB frame, with the bounding box areas replaced by their predicted class label. Next, we use our proposed MVPDist to assign the best possible prompt ID to the 3D masks by using multi-view information, we present an example output of our method in Figure 1. Our contributions are the following: - We introduce a 2D object detection-based approach for open-vocabulary labeling of 3D instances, which efficiently uses object detectors to greatly improve the results. - We propose a novel approach to scoring 3D mask proposals using only bounding boxes from 2D object detectors. - Our Open-YOLO 3D achieves superior performance on two benchmarks, while being considerably faster than existing methods in the literature. On ScanNet200 val. set, our Open-YOLO 3D achieves an absolute gain of 2.3% at mAP50 while being _∼_ 16x faster compared to the recent Open3DIS Nguyen et al. (2024). 2 R ELATED WORKS **Closed-vocabulary 3D segmentation:** The 3D instance segmentation task aims at predicting masks for individual objects in a 3D scene, along with a class label belonging to the set of known classes. 2 Some methods use a grouping-based approach in a bottom-up manner, by learning embeddings in the latent space to facilitate clustering of object points Chen et al. (2021); Han et al. (2020); He et al. (2021); Jiang et al. (2020); Lahoud et al. 
(2019); Liang et al. (2021); Wang et al. (2018); Zhang & Wonka (2021). Conversely, proposal-based methods adopt a top-down strategy, initially detecting 3D bounding boxes and then segmenting the object region within each box Engelmann et al. (2020); Hou et al. (2019); Liu et al. (2020); Yang et al. (2019); Yi et al. (2019). Notably, inspired by advancements in 2D works Cheng et al. (2022; 2021), transformer designs Vaswani et al. (2017) have been recently applied to 3D instance segmentation tasks Schult et al. (2023); Sun et al. (2023); Kolodiazhnyi et al. (2024); Al Khatib et al. (2023); Jain et al. (2024). Mask3D Schult et al. (2023) introduces the first hybrid architecture that combines Convolutional Neural Networks (CNN) and transformers for this task. It uses a 3D CNN backbone to extract per-point features and a transformer-based instance mask decoder to refine a set of queries. Building on Mask3D, the authors of Al Khatib et al. (2023) show that using explicit spatial and semantic supervision at the level of the 3D backbone further improves the instance segmentation results. Oneformer3D Kolodiazhnyi et al. (2024) follows a similar architecture and introduces learnable kernels in the transformer decoder for a unified semantic, instance, and panoptic segmentation. ODIN Jain et al. (2024) proposes an architecture that uses 2D-3D fusion to generate the masks and class labels. Other methods introduce weakly-supervised alternatives to dense annotation approaches, aiming to reduce the annotation cost associated with 3D data Chibane et al. (2022); Hou et al. (2021); Xie et al. (2020). While these methodologies strive to enhance the quality of 3D instance segmentation, they typically rely on a predefined set of semantic labels. In contrast, our proposed approach aims at segmenting objects with both known and unknown class labels. **Open-vocabulary 2D recognition:** This task aims at identifying both known and novel classes, where the labels of the known classes are available in the training set, while the novel classes are not encountered during training. In the direction of open-vocabulary object detection (OVOD), several approaches have been proposed Zhong et al. (2022); Pham et al. (2024); Liu et al. (2023); Zang et al. (2022); Wang et al. (2023); Kaul et al. (2023); Yao et al. (2023); Cheng et al. (2024). Another widely studied task is open-vocabulary segmentation (OVSS) Bucher et al. (2019); Xu et al. (2022); Li et al. (2021); Ghiasi et al. (2022); Liang et al. (2023). Recent open-vocabulary semantic segmentation methods Li et al. (2021); Ghiasi et al. (2022); Liang et al. (2023) leverage pre-trained CLIP Zhang et al. (2023) to perform open-vocabulary segmentation, where the model is trained to output a pixel-wise feature that is aligned with the text embedding in the CLIP space. Furthermore, AttrSeg Ma et al. (2024) proposes a decomposition-aggregation framework where vanilla class names are first decomposed into various attribute descriptions, and then different attribute representations are aggregated into a final class representation. Open-vocabulary instance segmentation (OVIS) aims at predicting instance masks while preserving high zero-shot capabilities. One approach Huynh et al. (2022) proposes a cross-modal pseudo-labeling framework, where a student model is supervised with pseudo-labels for the novel classes from a teacher model. Another approach VS et al. 
(2023) proposes an annotation-free method where a pre-trained vision-language model is used to produce annotations at both the box and pixel levels. Although these methods show high zero-shot performance and real-time speed, they are still limited to 2D applications only. **Open-vocabulary 3D segmentation:** Several methods Huang et al. (2024); Peng et al. (2023); Gu et al. (2023); Hong et al. (2023) have been proposed to address the challenges of open-vocabulary semantic segmentation where they use foundation models like clip for unknown class discovery, while the authors of Boudjoghra et al. (2023) focus on weak supervision for unknown class discovery without relying on any 2D foundation model. OpenScene Peng et al. (2023) makes use of 2D open-vocabulary semantic segmentation models to lift the pixel-wise 2D CLIP features into the 3D space, which allows the 3D model to perform 3D open-vocabulary point cloud semantic segmentation. On the other hand, ConceptGraphs Gu et al. (2023) relies on creating an open-vocabulary scene graph that captures object properties such as spatial location, enabling a wide range of downstream tasks including segmentation, object grounding, navigation, manipulation, localization, and remapping. In the direction of 3D point cloud instance segmentation, OpenMask3D Takmaz et al. (2023) uses a 3D instance segmentation network to generate class-agnostic mask proposals, along with SAM Kirillov et al. (2023) and CLIP Zhang et al. (2023), to construct a 3D clip feature for each mask using RGB-D images associated with the 3D scene. Unlike OpenMask3D where a 3D proposal network is used, OVIR-3D Lu et al. (2023) generates 3D proposals by fusing 2D masks obtained by a 2D instance segmentation model. Open3DIS Nguyen et al. (2024) combines proposals from 2D and 3D with novel 2D masks fusion approaches via hierarchical agglomerative clustering, and also proposes to 3 **Input** **Output** ~~2D~~ ~~OVOD~~ "chair", "table", ..., "mattress" Figure 2: **Proposed open-world 3D instance segmentation pipeline.** We use a 3D instance segmentation network (3D Network) for generating class-agnostic proposals. For open-vocabulary prediction, a 2D Open-Vocabulary Object Detector (2D OVOD) generates bounding boxes with class labels. These predictions are used to construct label maps for all input frames. Next, we assign the top-k label maps to each 3D proposal based on visibility. Finally, we generate a Multi-View Prompt Distribution from the 2D projections of the proposals to match a text prompt to every 3D proposal. use point-wise 3D CLIP features instead of mask-wise features. The two most recent approaches in Nguyen et al. (2024); Takmaz et al. (2023) show promising generalizability in terms of novel class discovery Takmaz et al. (2023) and novel object geometries especially small objects Nguyen et al. (2024). However, they both suffer from slow inference speed, as they rely on SAM for 3D mask proposal clip feature aggregation in the case of OpenMask3D Takmaz et al. (2023), and for novel 3D proposal masks generation from 2D masks Nguyen et al. (2024). 3 P RELIMINARIES **Problem formulation:** 3D instance segmentation aims at segmenting individual objects within a 3D scene and assigning one class label to each segmented object. In the open-vocabulary (OV) setting, the class label can belong to previously known classes in the training set as well as new class labels. 
To this end, let _P_ denote a 3D reconstructed point cloud scene, where a sequence of RGB-D images was used for the reconstruction. We denote the RGB image frames as _I_ along with their corresponding depth frames D. Similar to recent methods Peng et al. (2023); Takmaz et al. (2023); Nguyen et al. (2024), we assume that the poses and camera parameters are available for the input 3D scene. 3.1 B ASELINE O PEN -V OCABULARY 3D I NSTANCE S EGMENTATION We base our approach on OpenMask3D Takmaz et al. (2023), which is the first method that performs open-vocabulary 3D instance segmentation in a zero-shot manner. OpenMask3D has two main modules: a class-agnostic mask proposal head, and a mask-feature computation module. The classagnostic mask proposal head uses a transformer-based pre-trained 3D instance segmentation model Schult et al. (2023) to predict a binary mask for each object in the point cloud. The mask-feature computation module first generates 2D segmentation masks by projecting 3D masks into views in which the 3D instances are highly visible, and refines them using the SAM Kirillov et al. (2023) model. A pre-trained CLIP vision-language model Zhang et al. (2023) is then used to generate image embeddings for the 2D segmentation masks. The embeddings are then aggregated across all the 2D frames to generate a 3D mask-feature representation. **Limitations** : OpenMask3D makes use of the advancements in 2D segmentation (SAM) and visionlanguage models (CLIP) to generate and aggregate 2D feature representations, enabling the querying of instances according to open-vocabulary concepts. However, this approach suffers from a high computation burden leading to slow inference times, with a processing time of 5-10 minutes per scene. The computation burden mainly originates from two sub-tasks: the 2D segmentation of the large number of objects from the various 2D views, and the 3D feature aggregation based on the object visibility. We next introduce our proposed method which aims at reducing the computation burden and improving the task accuracy. 4 4 M ETHOD : O PEN -YOLO 3D **Motivation** : We here present our proposed 3D open-vocabulary instance segmentation method, Open-YOLO 3D, which aims at generating 3D instance predictions in an efficient strategy. Our proposed method introduces efficient and improved modules at the task level as well as the data level. _Task Level:_ Unlike OpenMask3D, which generates segmentations of the projected 3D masks, we pursue a more efficient approach by relying on 2D object detection. Since the end target is to generate labels for the 3D masks, the increased computation from the 2D segmentation task is not necessary. _Data Level:_ OpenMask3D computes the 3D mask visibility in 2D frames by iteratively counting visible points for each mask across all frames. This approach is time-consuming, and we propose an alternative approach to compute the 3D mask visibility within all frames at once. 4.1 O VERALL A RCHITECTURE Our proposed pipeline is shown in Figure 2. First, we generate a set of instance proposals _M_ using a 3D instance segmentation network; the proposals are represented as binary masks, where every 3D mask has a dimension equal to the number of points as the input point cloud. 
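Since the poses and camera parameters are assumed to be available, the recurring step of projecting the point cloud into a posed frame can be made concrete. The sketch below is our own illustration under that assumption, not the paper's code: the name `project_points` and the depth-tolerance `eps` used for a simple occlusion test are ours.

```python
import numpy as np

def project_points(P_world, K, T_wc, depth, eps=0.05):
    """Project homogeneous 3D points (4, N) into one posed RGB-D frame.

    K:     (3, 3) camera intrinsics.
    T_wc:  (4, 4) world-to-camera extrinsics.
    depth: (H, W) sensor depth map, used for a simple occlusion test.
    Returns integer pixel coordinates (2, N) and a visibility mask (N,).
    """
    H, W = depth.shape
    P_cam = T_wc @ P_world                          # (4, N) camera coordinates
    z = P_cam[2]
    uv = (K @ P_cam[:3]) / np.clip(z, 1e-6, None)   # perspective division
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    inside = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    vis = inside.copy()
    # a point counts as visible if its depth agrees with the sensor depth
    vis[inside] &= np.abs(depth[v[inside], u[inside]] - z[inside]) < eps
    return np.stack([u, v]), vis
```

Points failing the visibility test are exactly those the method filters out as occluded or outside the frame.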
For the open-vocabulary prediction, we use a 2D open-vocabulary object detection model to generate a set of bounding boxes denoted $\mathcal{B}_i$ for every frame $I_i$; the bounding boxes with their predicted labels are used to construct a low-granularity label map $L_i$ for every input frame $I_i$. To assign a prompt ID to the 3D mask proposals, we start by projecting all $N$ points in the point cloud scene $P$ onto the $N_f$ frames, which results in $N_f$ 2D projections of $N$ points each. Afterward, the 2D projections and the 3D mask proposals are used to compute the visibility of every mask in every frame using our proposed accelerated visibility computation ( **VAcc** ); the visibility is then used to assign the top-k Low-Granularity label maps to each mask and to select the top-k 2D projections corresponding to every 3D mask proposal. For a single 3D mask, we crop the (x, y) coordinates from the projections using the instance mask and filter out the points that are occluded or outside the frame. The final cropped (x, y) coordinates from the top-k frames are used to select per-point labels from their corresponding Low-Granularity label maps, and finally to construct a Multi-View Prompt Distribution that predicts the prompt ID corresponding to the 3D mask proposal.

4.2 3D O BJECT P ROPOSAL

To generate class-agnostic 3D object proposals, we rely on the 3D instance generation approach Mask3D Schult et al. (2023), which allows for faster proposal generation compared to 2D mask-based 3D proposal generation methods Nguyen et al. (2024); Lu et al. (2023). Mask3D is a hybrid model that combines a 3D Convolutional Neural Network as a backbone for feature generation and a transformer-based model for mask instance prediction. The 3D CNN backbone takes the voxelized input point cloud scene as input and outputs multi-level feature maps, while the transformer decoder takes the multi-level feature maps to refine a set of queries through self- and cross-attention. The final refined queries are used to predict instance masks. The 3D proposal network predicts a set of $K_{3D} \in \mathbb{N}$ 3D mask proposals $M \in \mathbb{Z}_2^{K_{3D} \times N}$ for a given point cloud $P \in \mathbb{R}^{4 \times N}$ with $N$ points in the homogeneous coordinate system, where $\mathbb{Z}_2 = \{0, 1\}$.

4.3 L OW G RANULARITY (LG) L ABEL -M APS

As discussed earlier, the focus of our approach is to generate fast and accurate open-vocabulary labels for the generated 3D proposals. Instead of relying on computationally intensive 2D segmentation, we propose a 2D detection-based approach in our pipeline. For every RGB image $I_i$ we generate a set of $K_{b,i}$ bounding boxes $\mathcal{B}_i = \{(b_j, c_j) \mid b_j \in \mathbb{R}^4,\ c_j \in \mathbb{N},\ \forall j \in \{1, \ldots, K_{b,i}\}\}$ using an open-vocabulary 2D object detector, where $b_j$ are the bounding box coordinates and $c_j$ is its predicted label. We assign a weight $w_j = b_j^H + b_j^W$ to each output bounding box $b_j$, where $b_j^H$ and $b_j^W$ are the bounding box's height and width, respectively. The weights represent the bounding box size and determine the order in which bounding boxes are used to construct the LG label maps (see the sketch below). After obtaining the 2D object detections, we represent the output of each 2D image frame $I_i$ as an LG label map $L_i \in \mathbb{Z}^{W \times H}$. To construct $L_i$, we start by initializing all of its elements to **-1** . In our notation, **-1** represents no class and is ignored during prediction.
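To make this concrete, here is a minimal sketch of constructing an LG label map and forming the Multi-View Prompt Distribution for one mask. This is our illustration, not the released implementation: the painting order (larger boxes first, so smaller boxes drawn later stay visible) and all function names are assumptions.

```python
import numpy as np

def build_lg_label_map(boxes, labels, H, W):
    """LG label map: an (H, W) array of class labels; -1 means no class.
    boxes: (K, 4) [x1, y1, x2, y2] pixels; labels: (K,) integer prompt IDs."""
    lg = np.full((H, W), -1, dtype=np.int64)
    weights = (boxes[:, 3] - boxes[:, 1]) + (boxes[:, 2] - boxes[:, 0])
    for j in np.argsort(-weights):          # big boxes first, small on top
        x1, y1, x2, y2 = boxes[j].astype(int)
        lg[max(y1, 0):min(y2, H), max(x1, 0):min(x2, W)] = labels[j]
    return lg

def mvp_dist(uv_topk, vis_topk, lg_topk, num_prompts):
    """Multi-View Prompt Distribution for one 3D mask.
    uv_topk / vis_topk / lg_topk: per-frame pixel coords (2, N), visibility
    masks (N,), and LG label maps from the top-k frames."""
    votes = np.zeros(num_prompts)
    for (u, v), vis, lg in zip(uv_topk, vis_topk, lg_topk):
        lbl = lg[v[vis], u[vis]]
        lbl = lbl[lbl >= 0]                  # drop "no class" pixels
        votes += np.bincount(lbl, minlength=num_prompts)
    return votes / max(votes.sum(), 1)       # normalized distribution
```

The mask's predicted prompt ID would then be `votes.argmax()`.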
Next, we sort all bounding
Idea Generation Category:
2 (Direct Enhancement)
CRmiX0v16e
# S YMMETRIC D IFFUSERS : L EARNING D ISCRETE D IFFUSION ON F INITE S YMMETRIC G ROUPS **Yongxing Zhang** [1] _[,]_ [3] _[∗]_ **, Donglin Yang** [2] _[,]_ [3] **, Renjie Liao** [2] _[,]_ [3] 1 University of Waterloo 2 University of British Columbia 3 Vector Institute nick.zhang@uwaterloo.ca, {ydlin, rjliao}@ece.ubc.ca A BSTRACT The group of permutations _S_ _n_, also known as the finite symmetric groups, are essential in fields such as combinatorics, physics, and chemistry. However, learning a probability distribution over _S_ _n_ poses significant challenges due to its intractable size and discrete nature. In this paper, we introduce _SymmetricDiffusers_, a novel discrete diffusion model that simplifies the task of learning a complicated distribution over _S_ _n_ by decomposing it into learning simpler transitions of the reverse diffusion using deep neural networks. We identify the riffle shuffle as an effective forward transition and provide empirical guidelines for selecting the diffusion length based on the theory of random walks on finite groups. Additionally, we propose a generalized Plackett-Luce (PL) distribution for the reverse transition, which is provably more expressive than the PL distribution. We further introduce a theoretically grounded "denoising schedule" to improve sampling and learning efficiency. Extensive experiments show that our model achieves state-of-the-art or comparable performance on solving tasks including sorting 4-digit MNIST images, jigsaw puzzles, and traveling salesman problems. Our code is released at [https://github.com/DSL-Lab/SymmetricDiffusers.](https://github.com/DSL-Lab/SymmetricDiffusers) 1 I NTRODUCTION As a vital area of abstract algebra, finite groups provide a structured framework for analyzing symmetries and transformations which are fundamental to a wide range of fields, including combinatorics, physics, chemistry, and computer science. One of the most important finite groups is the _finite_ _symmetric group_ _S_ _n_, defined as the group whose elements are all the bijections (or permutations) from a set of _n_ elements to itself, with the group operation being function composition. Classic probabilistic models for finite symmetric groups _S_ _n_, such as the Plackett-Luce (PL) model (Plackett, 1975; Luce, 1959), the Mallows model (Mallows, 1957), and card shuffling methods (Diaconis, 1988), are crucial in analyzing preference data and understanding the convergence of random walks. Therefore, studying probabilistic models over _S_ _n_ through the lens of modern machine learning is both natural and beneficial. This problem is theoretically intriguing as it bridges abstract algebra and machine learning. For instance, Cayley’s Theorem, a fundamental result in abstract algebra, states that every group is isomorphic to a subgroup of a symmetric group. This implies that learning a probability distribution over finite symmetric groups could, in principle, yield a distribution over any finite group. Moreover, exploring this problem could lead to the development of advanced models capable of addressing tasks such as permutations in ranking problems, sequence alignment in bioinformatics, and sorting. However, learning a probability distribution over finite symmetric groups _S_ _n_ poses significant challenges. First, the number of permutations of _n_ objects grows factorially with _n_, making the inference and learning computationally expensive for large _n_ . 
Second, the discrete nature of the data brings difficulties in designing expressive parameterizations and impedes the gradient-based learning. In this work, we propose a novel discrete-time discrete (state space) diffusion model over finite symmetric groups, dubbed as _SymmetricDiffusers_ . It overcomes the above challenges by decomposing the difficult problem of learning a complicated distribution over _S_ _n_ into a sequence of simpler _∗_ Work done while an intern at Vector Institute. 1 problems, _i.e_ ., learning individual transitions of a reverse diffusion process using deep neural networks. Based on the theory of random walks on finite groups, we investigate various shuffling methods as the forward process and identify the riffle shuffle as the most effective. We also provide empirical guidelines on choosing the diffusion length based on the mixing time of the riffle shuffle. Furthermore, we examine potential transitions for the reverse diffusion, such as inverse shuffling methods and the PL distribution, and introduce a novel generalized PL distribution. We prove that our generalized PL is more expressive than the PL distribution. Additionally, we propose a theoretically grounded "denoising schedule" that merges reverse steps to improve the efficiency of sampling and learning. To validate the effectiveness of our SymmetricDiffusers, we conduct extensive experiments on three tasks: sorting 4-Digit MNIST images, solving Jigsaw Puzzles on the Noisy MNIST and CIFAR-10 datasets, and addressing traveling salesman problems (TSPs). Our model achieves the state-of-the-art or comparable performance across all tasks. 2 R ELATED W ORKS **Random Walks on Finite Groups.** The field of random walks on finite groups, especially finite symmetric groups, have been extensively studied by previous mathematicians (Reeds, 1981; Gilbert, 1955; Bayer & Diaconis, 1992; Saloff-Coste, 2004). Techniques from a variety of different fields, including probability, combinatorics, and representation theory, have been used to study random walks on finite groups (Saloff-Coste, 2004). In particular, random walks on finite symmetric groups are first studied in the application of card shuffling, with many profound theoretical results of shuffling established. A famous result in the field shows that 7 riffle shuffles are enough to mix up a deck of 52 cards (Bayer & Diaconis, 1992), where a riffle shuffle is a mathematically precise model that simulates how people shuffle cards in real life. The idea of shuffling to mix up a deck of cards aligns naturally with the idea of diffusion, and we seek to fuse the modern techniques of diffusion models with the classical theories of random walks on finite groups. **Diffusion Models.** Diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2020; Ho et al., 2020; Song et al., 2021) are a powerful class of generative models that typically deals with continuous data. They consist of forward and reverse processes. The forward process is typically a discrete-time continuous-state Markov chain or a continuous-time continuous-state Markov process that gradually adds noise to data, and the reverse process learn neural networks to denoise. Discrete (state space) diffusion models have also been proposed to handle discrete data like image, text (Austin et al., 2023), and graphs (Vignac et al., 2023). 
However, existing discrete diffusion models have focused on cases where the state space is small or has a special ( _e.g_ ., decomposable) structure, and are unable to deal with intractable-sized state spaces like the symmetric group. In particular, Austin et al. (2023) requires an explicit transition matrix, which has size $n! \times n!$ in the case of finite symmetric groups and admits no simple representation or sparsification. Finally, other recent advancements include efficient discrete transitions for sequences (Varma et al., 2024), continuous-time discrete-state diffusion models (Campbell et al., 2022; Sun et al., 2023; Shi et al., 2024), and discrete score matching models (Meng et al., 2023; Lou et al., 2024), but the nature of symmetric groups again makes it non-trivial to adapt these existing frameworks. **Differentiable Sorting and Learning Permutations.** A popular paradigm for learning permutations is through differentiable sorting or matching algorithms. Various differentiable sorting algorithms have been proposed that use continuous relaxations of permutation matrices (Grover et al., 2018; Cuturi et al., 2019; Blondel et al., 2020) or differentiable swap functions (Petersen et al., 2021; 2022; Kim et al., 2024). The Gumbel-Sinkhorn method (Mena et al., 2018) has also been proposed to learn latent permutations using the continuous Sinkhorn operator. Such methods often focus on finding the optimal permutation instead of learning a distribution over the finite symmetric group. Moreover, they tend to be less effective as $n$ grows larger due to their high complexities.

3 L EARNING D IFFUSION M ODELS ON F INITE S YMMETRIC G ROUPS

We first introduce some notation. Fix $n \in \mathbb{N}$. Let $[n]$ denote the set $\{1, 2, \ldots, n\}$. A _permutation_ $\sigma$ on $[n]$ is a function from $[n]$ to $[n]$, and we usually write $\sigma$ as $\begin{pmatrix} 1 & 2 & \cdots & n \\ \sigma(1) & \sigma(2) & \cdots & \sigma(n) \end{pmatrix}$. The _identity permutation_, denoted by $\mathrm{Id}$, is the permutation given by $\mathrm{Id}(i) = i$ for all $i \in [n]$. Let $S_n$ be the set of all permutations (or bijections) from a set of $n$ elements to itself, called the _finite symmetric group_, whose group operation is the function composition.

Figure 1: This figure illustrates our discrete diffusion model on finite symmetric groups. The middle graphical model displays the forward and reverse diffusion processes. We demonstrate learning distributions over the symmetric group $S_3$ via the task of sorting three MNIST 4-digit images. The top part of the figure shows the marginal distribution of a ranked list of images $X_t$ at time $t$, while the bottom shows a randomly drawn list of images.

For a permutation $\sigma \in S_n$, the permutation matrix $Q_\sigma \in \mathbb{R}^{n \times n}$ associated with $\sigma$ satisfies $e_i^\top Q_\sigma = e_{\sigma(i)}^\top$ for all $i \in [n]$.
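As a quick, self-contained illustration of this convention (our own snippet, using 0-indexed one-line notation):

```python
import numpy as np

def permutation_matrix(sigma):
    """Q_sigma with e_i^T Q_sigma = e_{sigma(i)}^T, so that
    (Q_sigma @ X)[i] = X[sigma[i]].  sigma: 0-indexed, e.g. [2, 0, 1]."""
    n = len(sigma)
    Q = np.zeros((n, n))
    Q[np.arange(n), sigma] = 1.0
    return Q

X = np.array([[10.0], [20.0], [30.0]])    # three 1-dimensional "objects"
print(permutation_matrix([2, 0, 1]) @ X)  # rows reordered to 30, 10, 20
```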
In this paper, we consider a set of $n$ distinctive objects $\mathcal{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$, where the $i$-th object is represented by a $d$-dimensional vector $\mathbf{x}_i$. Therefore, a ranked list of objects can be represented as a matrix $X = [\mathbf{x}_1, \ldots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times d}$, where the ordering of rows corresponds to the ordering of objects. We can permute $X$ via a permutation $\sigma$ to obtain $Q_\sigma X$. Our goal is to learn a distribution over $S_n$. We propose learning discrete (state space) diffusion models, which consist of a _forward process_ and a _reverse process_. In the forward process, starting from the unknown data distribution, we simulate a random walk until it reaches a known stationary "noise" distribution. In the reverse process, starting from the known noise distribution, we simulate another random walk, where the transition probability is computed using a neural network, until it recovers the data distribution. Learning a transition distribution over $S_n$ is often more manageable than learning the original distribution because: (1) the support size (the number of states that can be reached in one transition) can be much smaller than $n!$, and (2) the distance between the initial and target distributions is smaller. By doing so, we break down the hard problem (learning the original distribution) into a sequence of simpler subproblems (learning the transition distribution). The overall framework is illustrated in Fig. 1. In the following, we introduce the forward card shuffling process in Section 3.1, the reverse process in Section 3.2, the network architecture and training in Section 3.3, the denoising schedule in Section 3.4, and reverse decoding methods in Section 3.5.

3.1 F ORWARD D IFFUSION P ROCESS : C ARD S HUFFLING

Suppose we observe a set of objects $\mathcal{X}$ and their ranked list $X_0$, assumed to be generated from an unknown data distribution in an IID manner, _i.e_ ., $X_0, \mathcal{X} \sim p_{\mathrm{data}}(X_0, \mathcal{X})$. One can construct a bijection between a ranked list of $n$ objects and an ordered deck of $n$ cards; therefore, permuting objects is equivalent to shuffling cards. In the forward diffusion process, we would like to add "random noise" to the ranked list so that it reaches a known stationary distribution such as the uniform. Formally, we let $\mathcal{S} \subseteq S_n$ be the set of permutations that are realizable by a given shuffling method in one step; $\mathcal{S}$ does not change across steps in common shuffling methods. We will provide concrete examples later. We then define the _forward process_ as a Markov chain,

$$q(X_{1:T} \mid X_0, \mathcal{X}) = q(X_{1:T} \mid X_0) = \prod_{t=1}^{T} q(X_t \mid X_{t-1}), \quad (1)$$

where $q(X_t \mid X_{t-1}) = \sum_{\sigma_t \in \mathcal{S}} q(X_t \mid X_{t-1}, \sigma_t)\, q(\sigma_t)$, and the first equality in Eq. (1) holds since $X_0$ implies $\mathcal{X}$. In the forward process, although the set $\mathcal{X}$ does not change, the ranked list of objects $X_t$ changes. Here $q(\sigma_t)$ has support $\mathcal{S}$ and describes the permutation generated by the underlying shuffling method. Note that common shuffling methods are time-homogeneous Markov chains, _i.e_ ., $q(\sigma_t)$ stays the same across time. $q(X_t \mid X_{t-1}, \sigma_t)$ is a delta distribution $\delta(X_t = Q_{\sigma_t} X_{t-1})$, since the permuted objects $X_t$ are uniquely determined given the permutation $\sigma_t$ and $X_{t-1}$. We denote the _neighbouring states_ of $X$ via one-step shuffling as $N_{\mathcal{S}}(X) := \{Q_\sigma X \mid \sigma \in \mathcal{S}\}$. Therefore, we have

$$q(X_t \mid X_{t-1}) = \begin{cases} q(\sigma_t) & \text{if } X_t \in N_{\mathcal{S}}(X_{t-1}), \\ 0 & \text{otherwise}. \end{cases} \quad (2)$$

Note that $X_t \in N_{\mathcal{S}}(X_{t-1})$ is equivalent to $\sigma_t \in \mathcal{S}$ and $X_t = Q_{\sigma_t} X_{t-1}$.
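Taken together, Eqs. (1)-(2) say that the forward process simply composes one-step shuffles. A minimal simulation sketch (ours, not the released code), using the random-transposition shuffle defined in Section 3.1.1 below:

```python
import numpy as np

def random_transposition(n, rng):
    """Draw sigma_t: two indices chosen uniformly and independently are
    swapped; drawing the same index twice yields the identity.
    This matches q(sigma_t = (i j)) = 2/n^2 and q(sigma_t = Id) = 1/n."""
    sigma = np.arange(n)
    i, j = rng.integers(n), rng.integers(n)
    sigma[i], sigma[j] = sigma[j], sigma[i]
    return sigma

def forward_process(X0, T, seed=0):
    """Simulate q(X_{1:T} | X_0) via X_t = Q_{sigma_t} X_{t-1}.
    X0: (n, d) ranked list of objects, one row per object."""
    rng = np.random.default_rng(seed)
    X, traj = X0.copy(), []
    for _ in range(T):
        X = X[random_transposition(len(X), rng)]
        traj.append(X.copy())
    return traj
```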
3.1.1 C ARD S HUFFLING M ETHODS

We now consider several popular shuffling methods as the forward transition, _i.e_ ., _random transpositions_, _random insertions_, and _riffle shuffles_. Different shuffling methods provide different design choices of $q(\sigma_t)$, thus corresponding to different forward diffusion processes. Although all of these forward diffusion processes share the same stationary distribution, _i.e_ ., the uniform, they differ in their mixing time. We will introduce stronger quantitative results on their mixing times later.

**Random Transpositions.** One natural way of shuffling is to swap pairs of objects. Formally, a _transposition_ or a _swap_ is a permutation $\sigma \in S_n$ such that there exist $i \neq j \in [n]$ with $\sigma(i) = j$, $\sigma(j) = i$, and $\sigma(k) = k$ for all $k \notin \{i, j\}$, in which case we denote $\sigma = (i\; j)$. We let $\mathcal{S} = \{(i\; j) : i \neq j \in [n]\} \cup \{\mathrm{Id}\}$. For any time $t$, we define $q(\sigma_t)$ by choosing two indices from $[n]$ uniformly and independently and swapping them. If the two chosen indices are the same, we have sampled the identity permutation. Specifically, $q(\sigma_t = (i\; j)) = 2/n^2$ when $i \neq j$, and $q(\sigma_t = \mathrm{Id}) = 1/n$.

**Random Insertions.** Another shuffling method is to insert the last piece somewhere in the middle. Let $\mathrm{insert}_i$ denote the permutation that inserts the last piece right before the $i$-th piece, and let $\mathcal{S} := \{\mathrm{insert}_i : i \in [n]\}$. Note that $\mathrm{insert}_n = \mathrm{Id}$. Specifically, we have $q(\sigma_t = \mathrm{insert}_i) = 1/n$ when $i \neq n$, and $q(\sigma_t = \mathrm{Id}) = 1/n$.

**Riffle Shuffles.** Finally, we introduce the riffle shuffle, a method similar to how serious card players shuffle cards. The process begins by roughly cutting the deck into two halves and then interleaving the two halves together. A formal mathematical model of the riffle shuffle, known as the _GSR model_, was introduced by Gilbert and Shannon (Gilbert, 1955), and independently by Reeds (1981). The model is described as follows. A deck of $n$ cards is cut into two piles according to a binomial distribution, where the probability of having $k$ cards in the top pile is $\binom{n}{k}/2^n$ for $0 \leq k \leq n$. The top pile is held in the left hand and the bottom pile in the right hand. The two piles are then riffled together such that, if there are $A$ cards left in the left hand and $B$ cards in the right hand, the probability that the next card drops from the left is $A/(A+B)$, and from the right is $B/(A+B)$. We implement the riffle shuffles according to the GSR model. For simplicity, we will omit the term "GSR" when referring to riffle shuffles hereafter.
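The GSR mechanics translate directly into a sampler; the following is a minimal, unofficial sketch:

```python
import numpy as np

def gsr_riffle_shuffle(deck, rng):
    """One GSR riffle shuffle of a list: binomial cut, then interleave,
    dropping the next card from the left pile with probability A/(A+B)."""
    n = len(deck)
    k = rng.binomial(n, 0.5)                 # size of the top (left) pile
    left, right = list(deck[:k]), list(deck[k:])
    out = []
    while left or right:
        a, b = len(left), len(right)
        if rng.random() < a / (a + b):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

rng = np.random.default_rng(0)
print(gsr_riffle_shuffle(list(range(1, 11)), rng))
```

By construction, any deck produced this way has at most two rising sequences, which is the basis of the exact probability formula discussed next.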
There exists an exact formula for the probability over $S_n$ obtained through a one-step riffle shuffle. Let $\sigma \in S_n$. A _rising sequence_ of $\sigma$ is a subsequence of $\sigma$ constructed by finding a maximal set of indices $i_1 < i_2 < \cdots < i_j$ such that the permuted values are contiguously increasing, _i.e_ ., $\sigma(i_2) - \sigma(i_1) = \sigma(i_3) - \sigma(i_2) = \cdots = \sigma(i_j) - \sigma(i_{j-1}) = 1$. For example, the permutation $\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 4 & 2 & 5 & 3 \end{pmatrix}$ has 2 rising sequences, _i.e_ ., $1\,2\,3$ and $4\,5$. Note that a permutation has 1 rising sequence if and only if it is the identity permutation. Denoting by $q_{\mathrm{RS}}(\sigma)$ the probability of obtaining $\sigma$ through a one-step riffle shuffle, it was shown by Bayer & Diaconis (1992) that

$$q_{\mathrm{RS}}(\sigma) = \frac{1}{2^n} \binom{n + 2 - r}{n} = \begin{cases} (n+1)/2^n & \text{if } \sigma = \mathrm{Id}, \\ 1/2^n & \text{if } \sigma \text{ has two rising sequences}, \\ 0 & \text{otherwise}, \end{cases} \quad (3)$$

where $r$ is the number of rising sequences of $\sigma$. The support $\mathcal{S}$ is thus the set of all permutations with at most two rising sequences. We let the forward process be $q(\sigma_t) = q_{\mathrm{RS}}(\sigma_t)$ for all $t$.

3.1.2 M IXING T IMES AND C UT - OFF P HENOMENON

All of the above shuffling methods have the uniform distribution as the stationary distribution. However, they have different mixing times ( _i.e_ ., the time until the Markov chain is close to its stationary distribution, measured by some distance), and there exist quantitative results on their mixing times. Let $q \in \{q_{\mathrm{RT}}, q_{\mathrm{RI}}, q_{\mathrm{RS}}\}$, and for $t \in \mathbb{N}$, let $q^{(t)}$ be the marginal distribution of the Markov chain after $t$ shuffles. We describe the mixing time in terms of the total variation (TV) distance between two probability distributions, _i.e_ ., $D_{\mathrm{TV}}(q^{(t)}, u)$, where $u$ is the uniform distribution. For all three shuffling methods, there exists a _cut-off phenomenon_, where $D_{\mathrm{TV}}(q^{(t)}, u)$ stays around 1 for the initial steps and then abruptly drops to values close to 0. The _cut-off time_ is the time when the abrupt change happens. For the formal definition, we refer the readers to Definition 3.3 of Saloff-Coste (2004). Saloff-Coste (2004) also provides the cut-off times for random transpositions, random insertions, and riffle shuffles, which are $\frac{n}{2}\log n$, $n \log n$, and $\frac{3}{2}\log_2 n$, respectively. Observe that the riffle shuffle reaches the cut-off much faster than the other two methods, which means it has a much faster mixing time. Therefore, we use the riffle shuffle in the forward process.

3.2 T HE R EVERSE D IFFUSION P ROCESS

We now model the _reverse process_ as another Markov chain conditioned on the set of objects $\mathcal{X}$. We denote the set of realizable _reverse permutations_ as $\mathcal{T}$, and the neighbours of $X$ with respect to $\mathcal{T}$ as $N_{\mathcal{T}}(X) := \{Q_\sigma X : \sigma \in \mathcal{T}\}$. The conditional joint distribution is given by

$$p_\theta(X_{0:T} \mid \mathcal{X}) = p(X_T \mid \mathcal{X}) \prod_{t=1}^{T} p_\theta(X_{t-1} \mid X_t), \quad (4)$$

where $p_\theta(X_{t-1} \mid X_t) = \sum_{\sigma'_t \in \mathcal{T}} p(X_{t-1} \mid X_t, \sigma'_t)\, p_\theta(\sigma'_t \mid X_t)$. To sample from $p(X_T \mid \mathcal{X})$, one simply samples a random permutation from the uniform distribution and then shuffles the objects accordingly to obtain $X_T$. $p(X_{t-1} \mid X_t, \sigma'_t)$ is again a delta distribution $\delta(X_{t-1} = Q_{\sigma'_t} X_t)$. We have

$$p_\theta(X_{t-1} \mid X_t) = \begin{cases} p_\theta(\sigma'_t \mid X_t) & \text{if } X_{t-1} \in N_{\mathcal{T}}(X_t), \\ 0 & \text{otherwise}, \end{cases} \quad (5)$$

where $X_{t-1} \in N_{\mathcal{T}}(X_t)$ is equivalent to $\sigma'_t \in \mathcal{T}$ and $X_{t-1} = Q_{\sigma'_t} X_t$.
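Eqs. (4)-(5) correspond to the following sampling loop (a schematic sketch; `reverse_policy` is a hypothetical stand-in for the learned network $p_\theta(\sigma'_t \mid X_t)$):

```python
import numpy as np

def sample_reverse(X_T, reverse_policy, T, rng):
    """Sample from p_theta(X_{0:T} | X): starting at X_T, repeatedly draw
    sigma'_t ~ p_theta(. | X_t) and set X_{t-1} = Q_{sigma'_t} X_t.
    reverse_policy(X, t, rng) must return sigma'_t in one-line notation."""
    X = np.asarray(X_T).copy()
    for t in range(T, 0, -1):
        sigma = reverse_policy(X, t, rng)
        X = X[sigma]                      # apply Q_{sigma'_t}
    return X
```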
In the following, we introduce the specific design choices of the distribution $p_\theta(\sigma'_t \mid X_t)$.

3.2.1 I NVERSE C ARD S HUFFLING

A natural choice is to use the inverse operations of the aforementioned card shuffling operations in the forward process. Specifically, for the forward shuffling $\mathcal{S}$, we introduce the inverse operations $\mathcal{T} := \{\sigma^{-1} : \sigma \in \mathcal{S}\}$, from which we can parameterize $p_\theta(\sigma'_t \mid X_t)$.

**Inverse Transposition.** Since the inverse of a transposition is also a transposition, we can let $\mathcal{T} := \mathcal{S} = \{(i\; j) : i \neq j \in [n]\} \cup \{\mathrm{Id}\}$. We define a distribution of inverse transposition (IT) over $\mathcal{T}$ using $n + 1$ real-valued parameters $\mathbf{s} = (s_1, \ldots, s_n)$ and $\tau$ such that

$$p_{\mathrm{IT}}(\sigma) = \begin{cases} 1 - \phi(\tau) & \text{if } \sigma = \mathrm{Id}, \\[4pt] \phi(\tau)\left( \dfrac{\exp(s_i)}{\sum_k \exp(s_k)} \cdot \dfrac{\exp(s_j)}{\sum_{k \neq i} \exp(s_k)} + \dfrac{\exp(s_j)}{\sum_k \exp(s_k)} \cdot \dfrac{\exp(s_i)}{\sum_{k \neq j} \exp(s_k)} \right) & \text{if } \sigma = (i\; j),\ i \neq j, \end{cases} \quad (6)$$

where $\phi(\cdot)$ is the sigmoid function. The intuition behind this parameterization is to first handle the identity permutation $\mathrm{Id}$ separately, where we use $\phi(\tau)$ to denote the probability of not selecting $\mathrm{Id}$. Afterwards, probabilities are assigned to the transpositions. A transposition is essentially an _unordered_ pair of _distinct_ indices, so we use $n$ parameters $\mathbf{s} = (s_1, \ldots, s_n)$ to represent the logits of each index getting picked. The term in parentheses represents the probability of selecting the unordered pair $i$ and $j$, which is equal to the probability of first picking $i$ and then $j$, plus the probability of first picking $j$ and then $i$.

**Inverse Insertion.** For the random insertion, the inverse operation is to insert some piece at the end. Let $\mathrm{inverse\_insert}_i$ denote the permutation that moves the $i$-th component to the end, and let $\mathcal{T} := \{\mathrm{inverse\_insert}_i : i \in [n]\}$. We define a categorical distribution of inverse insertion (II) over $\mathcal{T}$ using parameters $\mathbf{s} = (s_1, \ldots, s_n)$ such that

$$p_{\mathrm{II}}(\sigma = \mathrm{inverse\_insert}_i) = \exp(s_i) \Big/ \sum_{j=1}^{n} \exp(s_j). \quad (7)$$

**Inverse Riffle Shuffle.** In the riffle shuffle, the deck of cards is first cut into two piles, and the two piles are riffled together. So to undo a riffle shuffle, we need to figure out which pile each card belongs to, _i.e_ ., make a sequence of $n$ binary decisions. We define the Inverse Riffle Shuffle (IRS) distribution using parameters $\mathbf{s} = (s_1, \ldots, s_n)$ as follows. Starting from the last (the $n$-th) object, each object $i$ has probability $\phi(s_i)$ of being put on the top of the left pile; otherwise, it falls on the top of the right pile. Finally, the left pile is put on top of the right pile, which gives the shuffled result.

3.2.2 T HE P LACKETT -L UCE D ISTRIBUTION AND I TS G ENERALIZATION

Other than specific inverse shuffling methods to parameterize the reverse process, we also consider general distributions $p_\theta(\sigma'_t \mid X_t)$ whose support is the whole $S_n$, _i.e_ ., $\mathcal{T} = S_n$.
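Before turning to these general families, note that the IRS distribution of Section 3.2.1 admits a very compact sampler. The sketch below is our illustrative reading of the description above (the `logits` argument would come from the network of Section 3.3); each pile keeps the objects' original relative order, so the scan order does not affect pile contents.

```python
import torch

def sample_inverse_riffle(logits):
    """Sample sigma' from the IRS distribution with per-object logits s.
    Object i joins the left pile with probability sigmoid(s_i); the left
    pile is then placed on top. Returns one-line notation (0-indexed)."""
    go_left = torch.bernoulli(torch.sigmoid(logits)).bool()
    idx = torch.arange(len(logits))
    return torch.cat([idx[go_left], idx[~go_left]])
```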
**The PL Distribution.** A popular distribution over $S_n$ is the Plackett-Luce (PL) distribution (Plackett, 1975; Luce, 1959), which is constructed from $n$ scores $\mathbf{s} = (s_1, \ldots, s_n)$ as follows:

$$p_{\mathrm{PL}}(\sigma) = \prod_{i=1}^{n} \frac{\exp(s_{\sigma(i)})}{\sum_{j=i}^{n} \exp(s_{\sigma(j)})}, \quad (8)$$

for all $\sigma \in S_n$. Intuitively, $(s_1, \ldots, s_n)$ represents the preference given to each index in $[n]$. To sample from $\mathrm{PL}_{\mathbf{s}}$, we first sample $\sigma(1)$ from $\mathrm{Cat}(n, \mathrm{softmax}(\mathbf{s}))$. Then we remove $\sigma(1)$ from the list and sample $\sigma(2)$ from the categorical distribution corresponding to the rest of the scores (logits). We continue in this manner until we have sampled $\sigma(1), \ldots, \sigma(n)$. By Cao et al. (2007), the mode of the PL distribution is the permutation that sorts $\mathbf{s}$ in descending order. However, the PL distribution is not very expressive. In particular, we have the following result, whose proof is given in Appendix E.

**Proposition 1.** _The PL distribution cannot represent a delta distribution over $S_n$._

**The Generalized PL (GPL) Distribution.** We then propose a generalization of the PL distribution, referred to as the _Generalized Plackett-Luce (GPL) distribution_. Unlike the PL distribution, which uses a set of $n$ scores, the GPL distribution uses $n^2$ scores $\{\mathbf{s}_1, \ldots, \mathbf{s}_n\}$, where each $\mathbf{s}_i = (s_{i,1}, \ldots, s_{i,n})$ consists of $n$ scores. The GPL distribution is constructed as follows:

$$p_{\mathrm{GPL}}(\sigma) := \prod_{i=1}^{n} \frac{\exp(s_{i,\sigma(i)})}{\sum_{j=i}^{n} \exp(s_{i,\sigma(j)})}. \quad (9)$$

Sampling from the GPL distribution begins with sampling $\sigma(1)$ using the $n$ scores $\mathbf{s}_1$. For $2 \leq i \leq n$, we remove the $i - 1$ scores from $\mathbf{s}_i$ that correspond to $\sigma(1), \ldots, \sigma(i-1)$ and sample $\sigma(i)$ from a categorical distribution constructed from the remaining $n - i + 1$ scores in $\mathbf{s}_i$. It is important to note that the family of PL distributions is a strict subset of the GPL family. Since the GPL distribution has more parameters than the PL distribution, it is expected to be more expressive. In fact, we prove the following significant result; the proof is given in Appendix E.

**Theorem 2.** _The reverse diffusion process parameterized using the GPL distribution in Eq. (9) can model any distribution over $S_n$._

3.3 N ETWORK A RCHITECTURE AND T RAINING

We now briefly introduce how to use neural networks to parameterize the above distributions used in the reverse process. At any time $t$, given $X_t \in \mathbb{R}^{n \times d}$, we use a neural network with parameters $\theta$ to construct $p_\theta(\sigma'_t \mid X_t)$. In particular, we treat the $n$ rows of $X_t$ as $n$ tokens and use a Transformer architecture, along with the time embedding of $t$ and the positional encoding, to predict the previously mentioned scores. For example, for the GPL distribution, to predict $n^2$ scores, we introduce $n$ dummy tokens that correspond to the $n$ permuted output positions. We then perform a
Idea Generation Category:
0 (Conceptual Integration)
EO8xpnW7aX
# - D ISCRETE L ATENT P LANS VIA S EMANTIC S KILL A B ## STRACTIONS **Haobin Jiang** [1] **, Jiangxing Wang** [1] **, Zongqing Lu** [1,2] _[∗]_ 1 School of Computer Science, Peking University 2 Beijing Academy of Artificial Intelligence A BSTRACT Skill learning from language instructions is a critical challenge in developing intelligent agents that can generalize across diverse tasks and follow complex human instructions. Hierarchical methods address this by decomposing the learning problem into multiple levels, where the high-level and low-level policies are mediated through a latent plan space. Effective modeling and learning of this latent plan space are key to enabling robust and interpretable skill learning. In this paper, we introduce LADS, a hierarchical approach that learns language-conditioned discrete latent plans through semantic skill abstractions. Our method decouples the learning of the latent plan space from the language-conditioned high-level policy to improve training stability. First, we incorporate a trajectory encoder to learn a discrete latent space with the low-level policy, regularized by language instructions. Next, we model the high-level policy as a categorical distribution over these discrete latent plans to capture the multi-modality of the dataset. Through experiments in simulated control environments, we demonstrate that LADS outperforms state-of-the-art methods in both skill learning and compositional generalization. [The code is available at https://github.com/PKU-RL/LADS.](https://github.com/PKU-RL/LADS) 1 I NTRODUCTION Creating an agent capable of understanding and executing natural language instructions has been a long-standing goal in both reinforcement learning (RL) and imitation learning (IL) (Luketina et al., 2019; Nair et al., 2022). This capability is essential for developing a generalist artificial intelligence (AI) that can follow human commands to perform a wide range of control tasks, such as playing virtual video games (Lifshitz et al., 2023) or performing real robotic manipulation (Brohan et al., 2022). Other generalist policies often condition on goal images (Nair et al., 2018) or states (Andrychowicz et al., 2017), where the goals are naturally grounded in the observation space. In contrast, languageconditioned policies face the challenge of grounding language into the observation space (Nair et al., 2022). With the development of vision-language models (VLMs), recent work has explored using pretrained models to achieve language grounding (Shridhar et al., 2022; Jiang & Lu, 2024). However, these methods typically focus on visual inputs and object grounding, with limited effectiveness in understanding numerical states, such as robotic proprioception, and motion information. Learning a hierarchical language-conditioned policy is an effective approach to addressing the challenge of grounding language without being constrained by data modality. Hierarchical policy learning provides an intermediate representation that aligns language instructions and low-level control in a shared latent space. This approach significantly simplifies the complexity of language grounding by only requiring the language-conditioned _high-level policy_ to map language instructions into a temporally and semantically abstract _latent plan_ space (Lynch et al., 2020), rather than directly controlling actions. This process is often described as decomposing a task into multiple smaller _sub-_ _tasks_ (Rosete-Beas et al., 2023). 
The low-level policy is then responsible for generating the precise actions to interact with the environment. Specifically, it conditions on a latent plan vector and acts as a _skill_ for completing the sub-task indicated by this latent plan. In addition, hierarchical policies offer the benefits of sample efficiency in learning complex, long-horizon tasks and improve generalization to unseen scenarios through task decomposition (Garg et al., 2022; Mees et al., 2022a). _∗_ Correspondence to Zongqing Lu _<_ zongqing.lu@pku.edu.cn _>_ . 1 Recent work has advanced in this direction by implementing a hierarchical policy, where the lowlevel policy is learned from an offline dataset annotated with language instructions (Garg et al., 2022; Ju et al., 2024; Liang et al., 2024; Fu et al., 2024). The challenge in acquiring skill abstractions, including the low-level policy and the latent plan space, from language instructions lies in learning them in an unsupervised manner while ensuring the skills are both composable for complex, longhorizon tasks and interpretable for humans (Garg et al., 2022). To address this, these works opt for a _discrete_ latent plan space for skill learning, as it offers better controllability and interpretability compared to continuous representations. Furthermore, discrete latent representations have also been proven effective in various fields such as world models (Hafner et al., 2020; 2023), image generation (Esser et al., 2021; Rombach et al., 2022), and audio codecs (Zeghidour et al., 2021). While promise has been shown, there is a limitation in these methods where the high-level policy and low-level policy are trained jointly in an end-to-end manner (Garg et al., 2022; Ju et al., 2024; Liang et al., 2024). This can lead to potential training instability and difficulty, as the learning of the latent plan space and the language-conditioned high-level policy are entangled. The two components may affect each other’s learning progress, resulting in index collapse in the codebook and thus requiring additional techniques to refine and stabilize the codebook (Ju et al., 2024). Inspired by task-agnostic skill learning methods (Pertsch et al., 2021; Rosete-Beas et al., 2023), we argue that incorporating an additional posterior distribution to encode the low-level action sequences and learning the latent plan space in a variational way (Kingma, 2013; Van Den Oord et al., 2017) would be beneficial. At the same time, using language instructions as a regularizer might help construct a latent plan space with semantics and interpretability. By decoupling the learning of the high-level policy and lowlevel policy, we can model the high-level policy as a categorical distribution over the discrete latent plan space, allowing it to capture the multi-modality of the dataset more effectively. For example, given one instruction, there may be multiple potential sub-tasks to choose from next. In this work, we present **LA** nguage-conditioned **D** iscrete latent plans via semantic **S** kill abstractions ( **LADS** ) to address the limitation of joint end-to-end training discussed above. Our method consists of three main modules: a high-level policy, a low-level policy, and a trajectory encoder. We use VQVAE (Van Den Oord et al., 2017) to jointly learn the low-level policy and the trajectory encoder. This results in a discrete latent plan space, _i.e._, the VQ-VAE codebook. 
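For concreteness, the quantization step underlying such a codebook can be sketched as follows. This is a generic VQ-VAE nearest-codeword lookup with a straight-through gradient, not the authors' released code; the shapes and names are our assumptions.

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook):
    """Nearest-codeword lookup used by VQ-VAE.
    z_e: (B, D) trajectory-encoder outputs; codebook: (K, D) codewords.
    Returns quantized plans, their indices, and the two VQ losses."""
    idx = torch.cdist(z_e, codebook).argmin(dim=1)        # (B,)
    z_q = codebook[idx]
    codebook_loss = F.mse_loss(z_q, z_e.detach())          # moves codewords
    commitment_loss = F.mse_loss(z_e, z_q.detach())        # moves encoder
    z_q = z_e + (z_q - z_e).detach()                       # straight-through
    return z_q, idx, codebook_loss, commitment_loss
```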
The high-level policy, conditioned on the language instruction, learns to make predictions in this discrete latent plan space for the next skill to execute. Specifically, it outputs a categorical distribution over the discrete space and is supervised by the latent plan provided by the trajectory encoder over the future trajectory. Therefore, the learning of the high-level policy does not interfere with the latent space. Furthermore, we align the latent plan sequence of each trajectory with its corresponding language instruction to regularize the latent space learned by VQ-VAE. We evaluate LADS in two simulated robotic control environments, LOReL (Nair et al., 2022) and Kitchen (Gupta et al., 2019), both of which have language-conditioned datasets. Our results demonstrate that LADS outperforms state-of-theart baselines in skill learning and compositional generalization across instructions. Additionally, the ablation study confirms the significance of the proposed modules. To summarize, our contributions are as follows: (1) We present LADS, a novel hierarchical policy learning framework for skill abstraction from language, decoupling the learning of the languageconditioned high-level policy and the latent plan space. (2) We introduce a trajectory encoder and utilize VQ-VAE to learn a discrete latent space with semantic regularization to guarantee controllability and interpretability. (3) We propose modeling the high-level policy as a categorical distribution to effectively capture the dataset’s multi-modality. (4) We demonstrate the superiority of LADS through quantitative comparisons and qualitative latent plan visualizations. 2 R ELATED W ORK **Hierarchical Policy Learning.** Hierarchical policy learning is a widely explored approach to improve the efficiency and generalization of policies in both RL and IL. Typically, a high-level policy can generate goals as explicit future states (Nair & Finn, 2019; Du et al., 2024; Black et al., 2024), implicit latent plans (Lynch et al., 2020; Pertsch et al., 2021; Rosete-Beas et al., 2023), or language (Hu et al., 2019; Jiang et al., 2019; Chen et al., 2021b). A low-level policy then takes action based on these assigned goals. In RL, the low-level policy is usually learned using information-based objectives (Eysenbach et al., 2018; Laskin et al., 2022; Park et al., 2023) or through joint training with 2 the high-level policy to maximize environment rewards (Kulkarni et al., 2016; Bacon et al., 2017; Veeriah et al., 2021). In IL, the low-level policy can be trained from an offline dataset using goalconditioned IL (Kujanp¨a¨a et al., 2023; Du et al., 2024; Black et al., 2024) or latent variable modeling (Lynch et al., 2020; Pertsch et al., 2021; Rosete-Beas et al., 2023). In this work, we adopt the hierarchical policy learning framework and learn the low-level policy using discrete latent variable modeling (Van Den Oord et al., 2017) from an offline dataset with language instructions. **Language-Conditioned Policy Learning.** Enabling a policy to follow natural language instruction is a crucial step toward achieving generalist AI. Language can directly serve as a form of task representation for the policy (Hermann et al., 2017; Lynch & Sermanet, 2020; Jang et al., 2022). However, this requires the network to learn the structure of the language space and grounding into the environment from scratch, which presents significant challenges. 
Recent research uses pretrained large language models (LLMs) or VLMs to provide priors, simplifying language-conditioned training through language grounding (Shridhar et al., 2022; Stone et al., 2023; Gao et al., 2024; Jiang & Lu, 2024) or task decomposition (Huang et al., 2022; Du et al., 2023; Singh et al., 2023). In this work, we train the low-level policy with semantic regularization to ground language with latent plans and thus facilitate the learning of the language-conditioned high-level policy. **Language-Conditioned Skill Abstractions.** Recent work explores learning semantic and interpretable skills from language-conditioned offline datasets. LISA (Garg et al., 2022) uses an endto-end hierarchical policy to jointly learn the high-level and low-level policies from the dataset. SkillDiffuser (Liang et al., 2024) improves on this by using a Diffuser (Ajay et al., 2022) as the lowlevel policy. LCSD (Ju et al., 2024) adopts a one-step hierarchical policy with an auxiliary mutual information objective and a diffusion policy (Ho et al., 2020). LAST (Fu et al., 2024) applies variational temporal inference (Kim et al., 2019) to learn skills but relies on an LLM for segmentation priors, limiting its applicability in environments without language-based action spaces. Our method builds on LISA and improves it by decoupling the learning of latent plans and skills from the learning of the high-level policy, thereby enhancing robustness. Additionally, we introduce a categorical prediction head over the discrete latent plan space for the high-level policy, improving its ability to model the dataset’s multi-modality. 3 P ROBLEM S ETUP We consider a multi-task learning environment modeled as a task-augmented Markov Decision Process (MDP) (Garg et al., 2022). The set of tasks is _T_, where each task in _T_ consists of one or more sub-tasks _e ∈E_ . That is, _T_ is a subset of the _powerset_ of the sub-task set _E_, _i.e._, _T ⊆P_ ( _E_ ). Each task is described by a natural language instruction _l ∈_ _L_, which specifies the sub-tasks included in the task. As shown in Figure 1, language instruction can contain two sub-tasks, _e.g._, _open drawer_ and _turn faucet right._ We assume access to a dataset consisting of _N_ language-conditioned trajectories _D_ = _{l_ _[i]_ _, s_ _[i]_ 1 _[, a]_ _[i]_ 1 _[, . . ., s]_ _[i]_ _T_ _i_ _[, a]_ _T_ _[i]_ _i_ _[}]_ _i_ _[N]_ =1 [collected by a] _[ sub-optimal]_ [ policy in the environment, where] _s_ _t_ _∈S_ denotes the state, _a_ _t_ _∈A_ denotes the action, and _T_ _i_ is the length of the trajectory _i_ . We consider the problem of learning a language-conditioned policy _π_ ( _a_ _t_ _|s_ _t_ _, l_ ) that outputs an action _a_ _t_, given the current state _s_ _t_ and a language instruction _l_, under the dynamics _P_ : _S × A →S_ defined by the task-augmented MDP. The decomposable structure of the task space _T_ makes this learning problem different from the standard multi-task imitation learning, where each trajectory is independently considered as a single task in a monolithic fashion (Jang et al., 2022; Mees et al., 2022b;a; Black et al., 2024). To improve sample efficiency and generalization, the policy must leverage the shared structure across trajectories, _i.e._, common sub-tasks, to reduce the task space. However, trajectories are not annotated with the sub-tasks executed at each step. Therefore, the core challenge in this problem setup is to learn and reuse skills for sub-tasks in an unsupervised manner. 
4 METHOD

In this section, we present the details of our method. We begin by defining an objective for learning the skill-based hierarchical policy and optimizing its lower bound (Section 4.1). We decompose the objective into three components. First, we focus on learning skill abstractions from trajectories via VQ-VAE (Van Den Oord et al., 2017), which provides a discrete latent plan space (Section 4.2). Next, we build the high-level policy as a categorical distribution to predict the discrete latent plan for the next few steps (Section 4.3). Finally, we impose semantic regularization on the discrete plan space by aligning the sequence of latent plans with language instructions (Section 4.4). We train all modules jointly by combining the proposed losses (Section 4.5).

Figure 1: Overview of **LA**nguage-conditioned **D**iscrete latent plans via semantic **S**kill abstractions (**LADS**). For each trajectory segment $\tau_{kH+1:(k+1)H}$, we use a trajectory encoder to map it into a discrete latent plan $z_k$ through vector quantization (VQ). The low-level policy reconstructs actions from $z_k$. Meanwhile, the high-level policy predicts the index of $z_k$ in the discrete latent plan space, based on the history trajectory $\tau_{:kH}$ and the language instruction $l$. Lastly, we regularize the latent plan space by aligning the sequence $\{z_0, z_1, \ldots, z_{\lceil T/H \rceil - 1}\}$ of one trajectory to its language instruction.

4.1 SKILL-BASED HIERARCHICAL LEARNING

We implement a hierarchical framework consisting of a high-level policy $\pi_h(z \mid \tau_{:t}, l)$ and a low-level policy $\pi_l(a_t \mid s_t, z)$. Specifically, the high-level policy takes as input the history trajectory $\tau_{:t} = \{s_1, a_1, \ldots, s_t, a_t, s_{t+1}\}$ and the language instruction $l$, and selects a latent plan $z$ from the latent plan space $\mathcal{Z}$. Once a $z$ is assigned, the low-level policy acts as a skill that executes the latent plan $z$. Following previous work (Garg et al., 2022; Liang et al., 2024), we assume that each skill lasts for $H$ timesteps. We propose the following objective for learning this hierarchical policy,

$$\max_{\theta}\; \log p_\theta(\tau_{t+1:t+H} \mid \tau_{:t}, l), \qquad (1)$$

which aims to maximize the likelihood of the future trajectory over the next $H$ timesteps, $\tau_{t+1:t+H} = \{s_{t+1}, a_{t+1}, \ldots, s_{t+H}, a_{t+H}\}$, given the history trajectory and the language instruction; $\theta$ denotes the learnable parameters of the hierarchical policy. The learning objective of LISA (Garg et al., 2022) can be viewed as a lower bound of Equation (1), as detailed in Appendix A.1. However, LISA learns the high-level policy and low-level policy in an end-to-end manner, causing the learning of the language-conditioned policy (high-level policy) to be entangled with the learning of the latent plan space (low-level policy). As a result, LISA has been found to show poor training stability and is prone to index collapse in the latent space (Ju et al., 2024). To decouple the learning of the high-level and low-level policies, we introduce a trajectory encoder $q(z \mid \tau_{t+1:t+H})$, which encodes the ground-truth future trajectory over the next $H$ timesteps into the latent plan space.
This allows us to learn the latent plan space in a variational manner, along with the low-level policy. We begin by bounding the learning objective in Equation (1) as follows,

$$\log p(\tau_{t+1:t+H} \mid \tau_{:t}, l) \;\geq\; \mathbb{E}_{q(z \mid \tau_{:t+H},\, l)}\left[\log \frac{p(\tau_{t+1:t+H},\, z \mid \tau_{:t}, l)}{q(z \mid \tau_{:t+H},\, l)}\right], \qquad (2)$$

where $q$ is an approximate posterior. We replace this posterior distribution with our trajectory encoder $q(z \mid \tau_{t+1:t+H})$, based on the intuition that the single latent plan $z$ should represent the low-level action sequences, relying solely on the future trajectory data. Then we can rewrite the RHS of Equation (2) and get the following learning objective to maximize,

$$\mathcal{J}_{\text{LADS}}(\theta) = \mathbb{E}_{q(z \mid \tau_{t+1:t+H})}\left[\log \frac{p(\tau_{t+1:t+H},\, z \mid \tau_{:t}, l)}{q(z \mid \tau_{t+1:t+H})}\right] = \mathbb{E}_{q(z \mid \tau_{t+1:t+H})}\left[\sum_{h=1}^{H} \log p(a_{t+h} \mid s_{t+h}, z)\right] - D_{\text{KL}}\big(q(z \mid \tau_{t+1:t+H})\,\|\,p(z \mid \tau_{:t}, l)\big), \qquad (3)$$

where constant terms related to the environment dynamics have been removed. The detailed derivation is available in Appendix A.2. By substituting the high-level policy $\pi_h(z \mid \tau_{:t}, l)$ and the low-level policy $\pi_l(a_t \mid s_t, z)$ into $p(z \mid \tau_{:t}, l)$ and $p(a_{t+h} \mid s_{t+h}, z)$ in $\mathcal{J}_{\text{LADS}}(\theta)$, respectively, we obtain the objective for optimizing our skill-based hierarchical framework.

4.2 SKILL ABSTRACTIONS

To optimize the first term in $\mathcal{J}_{\text{LADS}}(\theta)$, we implement VQ-VAE (Van Den Oord et al., 2017) to learn the trajectory encoder and low-level policy, resulting in a discrete latent plan space $\mathcal{Z}$. Skills, by design, are often distinct and categorical in nature, such as *open drawer*, *move mug right*, or *pick up kettle*. The discrete latent space provided by VQ-VAE aligns well with this requirement because it forces the model to group similar low-level action sequences into the same cluster. In addition, the discrete latent plan $z$ can enhance the interpretability and controllability of the low-level policy's behavior (Garg et al., 2022; Liang et al., 2024). Given an input trajectory segment $\tau_{t+1:t+H}$, the trajectory encoder $q(\tau_{t+1:t+H})$ [1] maps the segment into a latent vector $\tilde{z}$, which is then quantized to the nearest point in a set of discrete latent codes $\mathcal{Z} = \{z^1, z^2, \ldots, z^M\}$ from the latent codebook of size $M$.
This process can be expressed as

$$z = \arg\min_{z^i \in \mathcal{Z}} \big\| q(\tau_{t+1:t+H}) - z^i \big\|_2. \qquad (4)$$

The decoder, i.e., the low-level policy $\pi_l(a_t \mid s_t, z)$, then takes as input this quantized latent vector $z$ and reconstructs the actions of the future trajectory $\tau_{t+1:t+H}$, which is used both for skill execution by the low-level policy and for training via the behavior cloning loss,

$$\mathcal{L}_{\text{BC}} = -\sum_{h=1}^{H} \log \pi_l(a_{t+h} \mid s_{t+h}, z). \qquad (5)$$

The VQ-VAE optimization objective includes two parts: the behavior cloning loss for reconstruction and a codebook loss to ensure the discrete latent vectors in $\mathcal{Z}$ are effectively learned,

$$\mathcal{L}_{\text{VQ}} = \mathcal{L}_{\text{BC}} + \big\|\text{sg}[q(\tau_{t+1:t+H})] - z\big\|^2 + \beta_{\text{commit}}\big\|q(\tau_{t+1:t+H}) - \text{sg}[z]\big\|^2, \qquad (6)$$

where $\text{sg}[\cdot]$ denotes the stop-gradient operation, and $\beta_{\text{commit}}$ is a hyperparameter controlling the commitment loss, which encourages the encoder to produce $\tilde{z}$ close to the quantized vectors.

4.3 DISCRETE LATENT PLANS

In our learning objective $\mathcal{J}_{\text{LADS}}(\theta)$, the second term is a KL divergence between the trajectory encoder and the high-level policy. This KL loss trains the high-level policy to predict the next latent plan and regularizes the latent plan space. We can treat the two learning processes separately by using a stop-gradient operation (Hafner et al., 2020),

$$-D_{\text{KL}}\big(q(z \mid \tau_{t+1:t+H})\,\|\,p(z \mid \tau_{:t}, l)\big) = -\alpha D_{\text{KL}}\big(\text{sg}[q(z \mid \tau_{t+1:t+H})]\,\|\,p(z \mid \tau_{:t}, l)\big) - (1-\alpha)\, D_{\text{KL}}\big(q(z \mid \tau_{t+1:t+H})\,\|\,\text{sg}[p(z \mid \tau_{:t}, l)]\big) = \alpha \mathcal{J}_p(\theta) + (1-\alpha)\, \mathcal{J}_q(\theta), \qquad (7)$$

where $\alpha$ controls the balance between the two KL terms. In this section, we describe the design of the high-level policy and its training with the objective $\mathcal{J}_p(\theta)$. The second objective $\mathcal{J}_q(\theta)$ regularizes the latent space and is detailed in Section 4.4. Given the discrete latent plan space, two approaches can be used to build the high-level policy. The first approach predicts a latent vector $\tilde{z}$ and then quantizes it to the nearest $z$ in the latent codebook, similar to previous methods (Garg et al., 2022; Ju et al., 2024; Liang et al., 2024). This approach trains the high-level policy in a regressive manner, using the latent plan $z$ provided by the trajectory encoder as the target for $\tilde{z}$. The second approach is to predict the index of the latent plan $z$ in the codebook by formulating the high-level policy as a categorical distribution. This allows us to optimize $\mathcal{J}_p(\theta)$ via a cross-entropy loss,

$$\mathcal{L}_{\text{CE}} = -\log \pi_h(\text{id}(z) \mid \tau_{:t}, l), \qquad (8)$$

[1] For VQ-VAE, we use a deterministic encoder $q(\tau_{t+1:t+H})$ to replace the distribution $q(z \mid \tau_{t+1:t+H})$.
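To make Eqs. (4)-(8) concrete, here is a minimal PyTorch-style sketch of the quantization step and the associated losses. It is an illustrative implementation under assumed tensor shapes; the function name, the batching, and the default commitment weight are our choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def vq_step(z_tilde, codebook, logits_high, beta_commit=0.25):
    """z_tilde:     (B, d) encoder outputs q(tau_{t+1:t+H})
    codebook:    (M, d) discrete latent plans Z = {z^1, ..., z^M}
    logits_high: (B, M) high-level policy logits over codebook indices
    beta_commit: commitment weight (an assumed value, not from the paper)
    """
    # Eq. (4): quantize to the nearest codebook entry.
    dists = torch.cdist(z_tilde, codebook)   # (B, M) pairwise L2 distances
    idx = dists.argmin(dim=-1)               # id(z), the target in Eq. (8)
    z = codebook[idx]                        # quantized latent plans

    # Straight-through estimator: forward pass uses z, gradients flow to z_tilde.
    z_st = z_tilde + (z - z_tilde).detach()

    # Eq. (6), last two terms: codebook loss and commitment loss.
    # (L_BC is computed from the low-level policy and added separately.)
    codebook_loss = F.mse_loss(z, z_tilde.detach())
    commit_loss = beta_commit * F.mse_loss(z_tilde, z.detach())

    # Eq. (8): cross-entropy for the categorical high-level policy head.
    ce_loss = F.cross_entropy(logits_high, idx.detach())

    return z_st, codebook_loss + commit_loss, ce_loss
```

The straight-through estimator replaces the non-differentiable argmin in the backward pass; this is the standard VQ-VAE device that the stop-gradient terms in Eq. (6) rely on.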
Idea Generation Category: 2 (Direct Enhancement)
L66G39JrM4
# SAMBA: SIMPLE HYBRID STATE SPACE MODELS FOR EFFICIENT UNLIMITED CONTEXT LANGUAGE MODELING

**Liliang Ren** 1,2,∗ **Yang Liu** 1,† **Yadong Lu** 1,† **Yelong Shen** 1 **Chen Liang** 1 **Weizhu Chen** 1 — 1 Microsoft, 2 University of Illinois at Urbana-Champaign. {liliangren,yaliu10,yadonglu,yelong.shen,chenliang1,wzchen}@microsoft.com ∗ Work partially done during internship at Microsoft. † Equal second-author contribution.

ABSTRACT

Efficiently modeling sequences with infinite context length has long been a challenging problem. Previous approaches have either suffered from quadratic computational complexity or limited extrapolation ability in length generalization. In this work, we present SAMBA, a simple hybrid architecture that layer-wise combines Mamba, a selective State Space Model (SSM), with Sliding Window Attention (SWA). SAMBA selectively compresses a given sequence into recurrent hidden states while still maintaining the ability to precisely recall recent memories with the attention mechanism. We scale SAMBA up to 3.8B parameters with 3.2T training tokens and demonstrate that it significantly outperforms state-of-the-art models across a variety of benchmarks. Pretrained on sequences of 4K length, SAMBA shows improved perplexity at context lengths of up to 1M in zero-shot. When finetuned on 4K-length sequences, SAMBA efficiently extrapolates to a 256K context length with perfect memory recall on the Passkey Retrieval task, and exhibits superior retrieval extrapolation on the challenging Phonebook task compared to full-attention models. As a linear-time sequence model, SAMBA achieves a 3.73× higher throughput compared to Transformers with grouped-query attention for user prompts of 128K length, and a 3.64× speedup when generating 64K tokens with unlimited streaming. Our code for training on open-source data is publicly available at https://github.com/microsoft/Samba.

1 INTRODUCTION

Attention-based models (Vaswani et al., 2017; Bahdanau et al., 2014) have dominated the neural architectures of Large Language Models (LLMs) (Radford et al., 2019; Brown et al., 2020; OpenAI, 2023; Bubeck et al., 2023) due to their ability to capture complex long-term dependencies and their efficient parallelization for large-scale training (Dao et al., 2022a). Recently, State Space Models (SSMs) (Gu et al., 2021; Smith et al., 2023; Gu et al., 2022; Gu & Dao, 2023) have emerged as a promising alternative, offering linear computation complexity and the potential for better extrapolation to longer sequences than seen during training. Specifically, Mamba (Gu & Dao, 2023), a variant of SSMs equipped with selective state spaces, has demonstrated notable promise through strong empirical performance and an efficient hardware-aware implementation. Recent work also shows that transformers have poorer modeling capacity than input-dependent SSMs in state tracking problems (Merrill et al., 2024). However, SSMs struggle with memory recall due to their recurrent nature (Arora et al., 2023), and experimental results on information retrieval-related tasks (Fu et al., 2023; Wen et al., 2024; Arora et al., 2024) have further shown that SSMs are not as competitive as their attention-based counterparts. Previous works (Zuo et al., 2022; Fu et al., 2023; Ma et al., 2023; Ren et al., 2023) have explored various approaches to hybridize SSMs with the attention mechanism, but none have demonstrated significantly better language modeling performance compared to state-of-the-art Transformer architectures.
Existing length extrapolation techniques (Han et al., 2023; Xiao et al., 2023; Jin et al., 2024) designed for attention mechanisms are constrained by quadratic computational complexity or insufficient context extrapolation performance, particularly when evaluated under perplexity metrics. In this paper, we introduce SAMBA, a simple neural architecture that harmonizes the strengths of both SSM- and attention-based models, while achieving potentially infinite length extrapolation with linear time complexity. SAMBA combines SSMs with attention by layer-wise interleaving Mamba (Gu & Dao, 2023), SwiGLU (Shazeer, 2020), and Sliding Window Attention (SWA) (Beltagy et al., 2020). Mamba layers capture the time-dependent semantics and provide a backbone for efficient decoding, while SWA fills the gap in modeling complex, non-recurrent dependencies. A detailed discussion of related work is included in Appendix A. We scale SAMBA to 421M, 1.3B, 1.7B, and up to 3.8B parameters with 3.2T tokens. In particular, the largest 3.8B post-trained model achieves a 71.9 score on MMLU (Hendrycks et al., 2021), 62.8 on HumanEval (Chen et al., 2021), and 87.6 on GSM8K (Cobbe et al., 2021), substantially outperforming the post-trained Phi-3-mini model under the same training recipes and datasets, as detailed in Table 1. Despite being pre-trained with a 4K sequence length, SAMBA can be extrapolated to 1M length in zero-shot with improved perplexity on Proof-Pile (Zhangir Azerbayev & Piotrowski, 2022), achieving a 256× extrapolation ratio, while still maintaining linear decoding time complexity with unlimited token streaming, as shown in Figure 2. We show that when instruction-tuned at a 4K context length with only 500 steps, SAMBA can be extrapolated to a 256K context length with perfect memory recall on Passkey Retrieval (Mohtashami & Jaggi, 2023). In contrast, the fine-tuned SWA-based model simply cannot recall memories beyond the 4K length. We further demonstrate that the instruction-tuned SAMBA 3.8B model achieves significantly better performance than the SWA-based models on downstream long-context summarization tasks, while still keeping its impressive performance on short-context benchmarks. On a more challenging multiple key-value retrieval task, Phonebook (Jelassi et al., 2024), we demonstrate that instruction fine-tuning enables SAMBA to bridge the retrieval performance gap with full-attention models, while exhibiting significantly better extrapolation ability when retrieving phone numbers beyond the training context length. Finally, we perform extensive analyses and ablation studies across model sizes up to 1.7B parameters to validate the architectural design of SAMBA. We also offer potential explanations for the effectiveness of our simple hybrid approach through the lens of attention/selection entropy. To the best of our knowledge, Samba is the first hybrid model showing that linear complexity models can be substantially better than state-of-the-art Transformer models on short-context tasks at large scale, while still being able to extrapolate to extremely long sequences under the perplexity metric.

2 METHODOLOGY

We explore different hybridization strategies consisting of layers of Mamba, Sliding Window Attention (SWA), and Multi-Layer Perceptrons (Shazeer, 2020; Dauphin et al., 2016).
We conceptualize the functionality of Mamba as the capture of recurrent sequence structures, SWA as the precise retrieval of memory, and MLP as the recall of factual knowledge. We also explore other linear recurrent layers, including Multi-Scale Retention (Sun et al., 2023) and GLA (Yang et al., 2023), as potential substitutes for Mamba in Section 3.2. Our goal of hybridization is to harmonize these distinct functional blocks and find an efficient architecture for language modeling with unlimited length extrapolation ability.

2.1 ARCHITECTURE

As illustrated in Figure 1, we explore three kinds of layer-wise hybridization strategies at the 1.7B scale: Samba, Mamba-SWA-MLP, and Mamba-MLP. We also explore other hybridization approaches with full self-attention at smaller scales in Section 4. The number of layers $N$ is set to 48 for Samba, Mamba-MLP, and Mamba, while Mamba-SWA-MLP has 54 layers, so each model has approximately 1.7B parameters. We only modify the layer-level arrangement for each of the models and keep every other configuration the same to enable apples-to-apples comparisons. More details on the configuration of each layer are explained in the following subsections.

Figure 1: From left to right: Samba, Mamba-SWA-MLP, Mamba-MLP, and Mamba. The illustrations depict the layer-wise integration of Mamba with various configurations of Multi-Layer Perceptrons (MLPs) and Sliding Window Attention (SWA). We assume the total number of intermediate layers to be $N$, and omit the embedding layers and output projections for simplicity. Pre-Norm (Xiong et al., 2020; Zhang & Sennrich, 2019) and skip connections (He et al., 2016) are applied for each of the intermediate layers.

2.1.1 MAMBA LAYER

Mamba (Gu & Dao, 2023) is a recently proposed SSM-based model with selective state spaces. It enables input-dependent gating of both the recurrent states and the input representation for a soft selection of the input sequence elements. Given an input sequence representation $\mathbf{X} \in \mathbb{R}^{n \times d_m}$, where $n$ is the length of the sequence and $d_m$ is the hidden size, Mamba first expands the inputs to a higher dimension $d_e$, i.e., $\mathbf{H} = \mathbf{X}\mathbf{W}_{\text{in}} \in \mathbb{R}^{n \times d_e}$, where $\mathbf{W}_{\text{in}} \in \mathbb{R}^{d_m \times d_e}$ is a learnable projection matrix. Then a Short Convolution (SC) (Poli et al., 2023) operator is applied to smooth the input signal,

$$\mathbf{U} = \text{SC}(\mathbf{H}) = \text{SiLU}(\text{DepthwiseConv}(\mathbf{H}, \mathbf{W}_{\text{conv}})) \in \mathbb{R}^{n \times d_e}, \qquad (1)$$

where $\mathbf{W}_{\text{conv}} \in \mathbb{R}^{k \times d_e}$ and the kernel size $k$ is set to 4 for hardware-aware efficiency. The Depthwise Convolution (He et al., 2019) is applied over the sequence dimension, followed by a SiLU (Elfwing et al., 2017) activation function. The selective gate is then calculated through a low-rank projection followed by Softplus (Zheng et al., 2015),

$$\Delta = \text{Softplus}(\mathbf{U}\mathbf{W}_r\mathbf{W}_q + \mathbf{b}) \in \mathbb{R}^{n \times d_e}, \qquad (2)$$

where $\mathbf{W}_r \in \mathbb{R}^{d_e \times d_r}$, $\mathbf{W}_q \in \mathbb{R}^{d_r \times d_e}$, and $d_r$ is the low-rank dimension. $\mathbf{b} \in \mathbb{R}^{d_e}$ is carefully initialized so that $\Delta \in [\Delta_{\min}, \Delta_{\max}]$ after the initialization stage. We set $[\Delta_{\min}, \Delta_{\max}] = [0.001, 0.1]$ and find that language modeling performance under the perplexity metric is not sensitive to these values. Input dependence is also introduced for the SSM parameters $\mathbf{B}$ and $\mathbf{C}$, i.e., $\mathbf{B} = \mathbf{U}\mathbf{W}_b \in \mathbb{R}^{n \times d_s}$ and $\mathbf{C} = \mathbf{U}\mathbf{W}_c \in \mathbb{R}^{n \times d_s}$, where $d_s$ is the state dimension.
For each time step $1 \le t \le n$, the recurrent inference of the Selective SSM (S6) is performed in an expanded state space $\mathbf{Z}_t \in \mathbb{R}^{d_e \times d_s}$, i.e.,

$$\mathbf{Z}_t = \exp(-\Delta_t \odot \exp(\mathbf{A})) \odot \mathbf{Z}_{t-1} + \Delta_t \odot (\mathbf{B}_t \otimes \mathbf{U}_t) \in \mathbb{R}^{d_e \times d_s},$$
$$\mathbf{Y}_t = \mathbf{Z}_t\mathbf{C}_t + \mathbf{D} \odot \mathbf{U}_t \in \mathbb{R}^{d_e},$$

where $\mathbf{Z}_0 = \mathbf{0}$, $\odot$ denotes the point-wise product, $\otimes$ denotes the outer product, and $\exp$ is the point-wise natural exponential function. $\mathbf{D} \in \mathbb{R}^{d_e}$ is a learnable vector initialized as $D_i = 1$, and $\mathbf{A} \in \mathbb{R}^{d_e \times d_s}$ is a learnable matrix initialized as $A_{ij} = \log(j)$, $1 \le j \le d_s$, following the S4D-Real (Gu et al., 2022) initialization. In practice, Mamba implements a hardware-aware parallel scan algorithm for efficient parallelizable training. The final output is obtained through a gating mechanism similar to the Gated Linear Unit (Shazeer, 2020; Dauphin et al., 2016),

$$\mathbf{O} = \mathbf{Y} \odot \text{SiLU}(\mathbf{X}\mathbf{W}_g)\,\mathbf{W}_{\text{out}} \in \mathbb{R}^{n \times d_m},$$

where $\mathbf{W}_g \in \mathbb{R}^{d_m \times d_e}$ and $\mathbf{W}_{\text{out}} \in \mathbb{R}^{d_e \times d_m}$ are learnable parameters. In this work, we set $d_e = 2d_m$, $d_r = d_m/16$, and $d_s = 16$. The Mamba layer in SAMBA is expected to capture the time-dependent semantics of the input sequence through its recurrent structure. The input selection mechanism in the Mamba layer enables the model to focus on relevant inputs, thereby allowing the model to memorize important information over the long term.

2.1.2 SLIDING WINDOW ATTENTION (SWA) LAYER

We include Sliding Window Attention (Beltagy et al., 2020) layers to address the limitations of Mamba layers in capturing non-recurrent dependencies in sequences. Our SWA layer operates on a window of size $w = 2048$ that slides over the input sequence, ensuring that the computational complexity remains linear with respect to the sequence length. RoPE (Su et al., 2021) is applied within the sliding window, with a base frequency of 10,000. By directly accessing the contents of the context window through attention, the SWA layer can retrieve high-definition signals from the middle- to short-term history that cannot be clearly captured by the recurrent states of Mamba. We use FlashAttention 2 (Dao, 2023) for the efficient implementation of self-attention throughout this work. We also choose the 2048 sliding window size for efficiency: FlashAttention 2 has the same training speed as Mamba's selective parallel scan at a sequence length of 2048, based on the measurements in Gu & Dao (2023).

2.1.3 MULTI-LAYER PERCEPTRON (MLP) LAYER

The MLP layers in SAMBA serve as the architecture's primary mechanism for nonlinear transformation and recall of factual knowledge (Dai et al., 2022). We use SwiGLU (Shazeer, 2020) for all the models trained in this paper and denote its intermediate hidden size as $d_p$. As shown in Figure 1, Samba applies separate MLPs for the different types of information captured by the Mamba and SWA layers.

3 EXPERIMENTS AND RESULTS

We pre-train four SAMBA models with different parameter sizes, 421M, 1.3B, 1.7B, and 3.8B, to investigate its performance across different scales. The details of the hyperparameters for the training and architecture designs are shown in Table 12 of Appendix G.
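Before turning to the baselines, the S6 recurrence of Section 2.1.1 can be made concrete with a minimal sequential NumPy sketch written directly from the equations above. This is a reference form for illustration only; the actual Mamba layer runs a hardware-aware parallel scan.

```python
import numpy as np

def s6_recurrence(U, Delta, A, B, C, D):
    """Sequential form of the Selective SSM (S6) recurrence.

    Shapes, following the paper's notation:
      U:     (n, d_e)   smoothed inputs from the short convolution
      Delta: (n, d_e)   input-dependent gates from Eq. (2)
      A:     (d_e, d_s) learnable matrix, A_ij = log(j) at init
      B, C:  (n, d_s)   input-dependent SSM parameters
      D:     (d_e,)     learnable skip vector, D_i = 1 at init
    """
    n, d_e = U.shape
    d_s = A.shape[1]
    Z = np.zeros((d_e, d_s))                 # recurrent state, Z_0 = 0
    Y = np.empty((n, d_e))
    expA = np.exp(A)                         # point-wise exp(A)
    for t in range(n):
        gate = np.exp(-Delta[t][:, None] * expA)                  # exp(-Delta_t . exp(A))
        Z = gate * Z + Delta[t][:, None] * np.outer(U[t], B[t])   # outer-product injection
        Y[t] = Z @ C[t] + D * U[t]                                # readout Z_t C_t + D . U_t
    return Y
```

The gated projection $\mathbf{O} = \mathbf{Y} \odot \text{SiLU}(\mathbf{X}\mathbf{W}_g)\mathbf{W}_{\text{out}}$ would then be applied on top of the returned $\mathbf{Y}$.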
We also train other hybrid architectures, as mentioned in Section 2.1, including the baseline Mamba (Gu & Dao, 2023), Llama-3 (MetaAI, 2024; Dubey et al., 2024), and Mistral (Jiang et al., 2023) architectures at a scale of around 1.7B, with detailed hyperparameters in Table 11 of Appendix G. We conduct comprehensive downstream evaluations on a wide range of benchmarks, focusing on four main capabilities of the models: commonsense reasoning (ARC (Clark et al., 2018), PIQA (Bisk et al., 2020), WinoGrande (Sakaguchi et al., 2021), SIQA (Sap et al., 2019)), language understanding (HellaSwag (Zellers et al., 2019), BoolQ (Clark et al., 2019), OpenbookQA (Mihaylov et al., 2018), SQuAD (Rajpurkar et al., 2016), MMLU (Hendrycks et al., 2021), MMLU-Pro (Wang et al., 2024), GPQA (Rein et al., 2023)), truthfulness (TruthfulQA (Lin et al., 2022)), and math and coding (GSM8K (Cobbe et al., 2021), MBPP (Austin et al., 2021), HumanEval (Chen et al., 2021)).

Table 1: Downstream performance comparison between Samba-3.8B-IT and Phi-3-mini-4K on both long-context and short-context tasks. We report 5-shot accuracy (averaged by category) for MMLU, 8-shot CoT (Wei et al., 2022) for GSM8K, 0-shot pass@1 for HumanEval, and ROUGE-L for both GovReport and SQuALITY. † Results from the Phi-3 technical report (Abdin et al., 2024).

| Model | MMLU | GSM8K | HumanEval | GovReport | SQuALITY |
|---|---|---|---|---|---|
| Phi-3-mini-4K-instruct† | 68.8 | 82.5 | 58.5 | 14.4 | **21.6** |
| Samba-3.8B-IT | **71.9** | **87.6** | **62.8** | **18.9** | 21.2 |

3.1 LANGUAGE MODELING ON TEXTBOOK QUALITY DATA

We first present results from our largest 3.8B SAMBA model, trained on the same dataset used by Phi-3 (Abdin et al., 2024) with 3.2T tokens. We follow the same multi-phase pretraining strategy as Phi-3-mini, and apply both the original Phi-3-mini post-training recipe and the Phi-3-mini-June-2024 recipe to produce our instruction-tuned SAMBA 3.8B models, i.e., Samba-3.8B-IT and Samba-3.8B (June), respectively. We report comprehensive benchmark results of the Samba 3.8B base model and Samba-3.8B (June) in Appendix B. As shown in Table 1, we evaluate the downstream performance of Samba-3.8B-IT on both long-context summarization tasks (GovReport (Huang et al., 2021), SQuALITY (Wang et al., 2022)) and major short-context benchmarks (MMLU, GSM8K, HumanEval). We can see that Samba has substantially better performance than Phi-3-mini-4k-instruct on both the short-context (MMLU, GSM8K, HumanEval) and long-context (GovReport) tasks, while still having the 2048 window size of its SWA layer and maintaining linear complexity for efficient processing of long documents. Details of data statistics and the evaluation setup for long-context tasks are included in Appendix F.

Table 2: Downstream evaluation of the architectures trained on 230B tokens of the Phi-2 dataset. We report the unnormalized accuracy for multiple-choice tasks. GSM8K is evaluated with 5-shot examples while other tasks are zero-shot. Best results are in bold, second best underlined.
| Benchmark | Llama-3 1.6B | Mistral 1.6B | Mamba 1.8B | Mamba-SWA-MLP 1.6B | Mamba-MLP 1.9B | SAMBA 1.7B |
|---|---|---|---|---|---|---|
| ARC-Easy | 76.85 | 77.02 | 77.99 | 76.68 | 78.91 | **79.25** |
| ARC-Challenge | 43.26 | 44.20 | 45.22 | 46.16 | 47.35 | **48.21** |
| PIQA | 76.66 | 75.79 | 77.31 | 76.50 | **78.84** | 77.10 |
| WinoGrande | 70.01 | 70.72 | 73.40 | **73.72** | 72.38 | 72.93 |
| SIQA | 51.23 | 52.00 | 53.12 | **55.12** | 54.30 | 53.68 |
| HellaSwag | 46.98 | 47.19 | 49.80 | 49.71 | **50.14** | 49.74 |
| BoolQ | 68.20 | 70.70 | 74.83 | 74.74 | 73.70 | **75.57** |
| OpenbookQA | 34.00 | 32.80 | 36.60 | 33.80 | 35.40 | **37.20** |
| SQuAD | 74.88 | 72.82 | 67.66 | 76.73 | 63.86 | **77.64** |
| MMLU | 43.84 | 43.54 | 45.28 | 47.39 | 43.68 | **48.01** |
| TruthfulQA (MC1) | 25.70 | 25.09 | 26.81 | 26.20 | 26.44 | **27.78** |
| TruthfulQA (MC2) | 40.35 | 38.80 | 40.66 | 40.80 | 40.04 | **41.62** |
| GSM8K | 32.68 | 32.45 | 32.07 | **44.05** | 27.52 | 38.97 |
| MBPP | 46.30 | 47.08 | 47.86 | 47.08 | 47.08 | **48.25** |
| HumanEval | 36.59 | 36.59 | 35.98 | 37.80 | 31.10 | **39.02** |
| **Average** | 51.17 | 51.12 | 52.31 | 53.77 | 51.38 | **54.33** |

To examine the different hybridization strategies mentioned in Section 2.1, we train 6 models with around 1.7B parameters on the Phi-2 (Li et al., 2023) dataset with 230B tokens and evaluate them on the full suite of 15 downstream benchmarks to obtain a holistic assessment of hybrid and purebred architectures. As shown in Table 2, SAMBA demonstrates superior performance on a diverse set of tasks, including commonsense reasoning (ARC-Challenge), language understanding (MMLU, SQuAD), TruthfulQA, and code generation (HumanEval, MBPP). It outperforms both the pure attention-based and SSM-based models on most tasks and achieves the best average performance. By comparing the performance of Mamba-MLP and Mamba in Table 2, we can observe that replacing Mamba blocks with MLPs does not harm commonsense reasoning ability, but performance on language understanding and complex reasoning, such as coding and mathematical reasoning, degenerates significantly. We can also see that pure Mamba models fall short on retrieval-intensive tasks such as SQuAD due to their lack of precise memory retrieval ability. The best results are achieved through the combination of the attention and Mamba modules, as shown with our Samba architecture. We can also notice that Mamba-SWA-MLP has significantly better performance on GSM8K, potentially resulting from a closer collaboration between the Mamba and the SWA layers. The distinct downstream performances of different hybridization strategies pose interesting future work on developing task-adaptive dynamic architectures.

3.2 EXPLORATION ON HYBRIDIZING ATTENTION AND LINEAR RECURRENCE

Since SSMs belong to the broader realm of linear recurrent models (Orvieto et al., 2023; Qin et al., 2023; Yang et al., 2023; Katsch, 2023; Qin et al., 2024; Yang et al., 2024), there exist multiple alternatives other than Mamba when combining attention-based layers with recurrent neural networks. We also add architecture ablation studies to justify the design choices of Samba. Specifically, in addition to Llama-2, Mamba, Samba, and Mamba-SWA-MLP, we investigate the following architectures:

- **Llama-2-SWA** is a pure attention-based architecture that replaces all full attention layers in Llama-2 with sliding window attention.
- **Sliding RetNet** replaces Mamba layers in the Samba architecture with Multi-Scale Retention (Sun et al., 2023) layers. RetNet is a linear attention model with a fixed, input-independent decay applied to the recurrent hidden states.
- **Sliding GLA** replaces Mamba layers in the Samba architecture with Gated Linear Attention (GLA) (Yang et al., 2023). GLA is a more expressive variant of linear attention with input-dependent gating.
- **Mega-S6** replaces all MD-EMA modules in the Mega (Ma et al., 2023) architecture with the ShortConv+S6 combination from Mamba to adapt Mega to the modern Mamba architecture. Rotary position embedding, RMSNorm, and Softmax attention are also adopted. We set the intermediate dimension of the Mega-S6 layer to $d_m$ so that it has roughly $5d_m^2$ parameters. This represents a classical baseline that conducts sequential intra-layer SSM-Attention hybridization.
- **MLP2-SWA-MLP** replaces all Mamba layers in the Samba architecture with SwiGLU layers of $6d_m^2$ parameters.
- **Samba-NoPE** removes the rotary relative position embedding in Samba and does not have any position embedding in the architecture.

Table 3: Perplexity on the validation set of SlimPajama for different attention and linear recurrent model architectures trained at 4,096 context length. We use a window size of 2,048 for Sliding Window Attention (SWA). The perplexity results fluctuate by around ±0.3%.

| Architecture | Size | Layers | Training Speed (×10⁵ tokens/s) | PPL @ 4096 | PPL @ 8192 | PPL @ 16384 |
|---|---|---|---|---|---|---|
| *20B training tokens on 8×A100 GPUs* | | | | | | |
| Llama-2 | 438M | 24 | 4.85 | 11.14 | 47.23 | 249.03 |
| Llama-2-SWA | 438M | 24 | 4.96 | 11.12 | 10.66 | 10.57 |
| Mamba | 432M | 60 | 2.46 | 10.70 | 10.30 | 10.24 |
| Sliding GLA | 438M | 24 | 4.94 | 10.43 | 10.00 | 9.92 |
| Sliding RetNet | 446M | 24 | 4.32 | 10.38 | 9.96 | 9.87 |
| Mega-S6 | 422M | 24 | 3.26 | 12.63 | 12.25 | 12.25 |
| Mamba-SWA-MLP | 400M | 24 | 4.21 | 10.07 | 9.67 | 9.59 |
| MLP2-SWA-MLP | 417M | 24 | **5.08** | 10.95 | 10.50 | 10.41 |
| SAMBA-NoPE | 421M | 24 | 4.48 | 10.11 | 28.97 | 314.78 |
| SAMBA | 421M | 24 | 4.46 | **10.06** | **9.65** | **9.57** |
| *100B training tokens on 64×H100 GPUs* | | | | | | |
| Llama-2 | 1.3B | 40 | 25.9 | 7.60 | 44.32 | 249.64 |
| Llama-2-SWA | 1.3B | 40 | 26.2 | 7.60 | 7.37 | 7.21 |
| Mamba | 1.3B | 48 | 17.8 | 7.47 | 7.26 | 7.15 |
| Sliding GLA | 1.2B | 36 | 25.9 | 7.58 | 7.35 | 7.19 |
| Sliding RetNet | 1.4B | 36 | 23.0 | 7.56 | 7.35 | 7.56 |
| Mega-S6 | 1.3B | 36 | 17.9 | 9.01 | 8.81 | 8.68 |
| Mamba-SWA-MLP | 1.3B | 36 | 23.5 | 7.37 | 7.16 | 7.00 |
| MLP2-SWA-MLP | 1.3B | 36 | **26.6** | 7.81 | 7.58 | 7.42 |
| SAMBA-NoPE | 1.3B | 36 | 25.2 | 7.33 | 20.40 | 326.17 |
| SAMBA | 1.3B | 36 | 25.2 | **7.32** | **7.11** | **6.96** |

We pre-train all models on the same SlimPajama (Soboleva et al., 2023) dataset in both the roughly 438M and 1.3B settings, and evaluate these models by calculating perplexity on the validation set at context lengths of 4096, 8192, and 16384 tokens to investigate their zero-shot length extrapolation ability. Peak training throughput is also measured as an efficiency metric. The details of the hyperparameter settings are included in Appendix G. As shown in Table 3, SAMBA consistently outperforms all other models across context lengths and model sizes. The training speed of SAMBA is competitive with pure Transformer-based models at the 1.3B scale. Mamba has significantly worse training throughput because Mamba layers train more slowly than MLP layers, and purebred Mamba models need more layers than other models at the same parameter count. Comparing Mamba-SWA-MLP with Samba, we can see that Samba has slightly better perplexity scores and higher training throughput. Mamba-SWA-MLP trades off the MLP layers for more I/O-intensive Mamba and Attention layers, leading to slower training speed. This also indicates that Mamba-SWA-MLP will have slower decoding speed than Samba due to the larger total cache size resulting from more SSM and Attention layers.
We can further observe that replacing Mamba with MLP speeds up training but harms perplexity significantly, indicating the importance of Mamba layers in the Samba architecture. Interestingly, even though Samba uses SWA, Samba-NoPE still exhibits exploding perplexity beyond its training length without RoPE. We can also find that while RetNet extrapolates well at the 438M scale, it has increasing perplexity at 16K length at the 1.4B scale, which may indicate that its input-independent decay needs scale-specific tuning to work well.

Table 4: Downstream evaluation of models pre-trained with 100B tokens from SlimPajama. We measure the character-normalized accuracy for HellaSwag following Gu & Dao (2023). All tasks are evaluated zero-shot.

| Architecture | Size | ARC-Easy (acc ↑) | HellaSwag (acc_norm ↑) | Wino. (acc ↑) | PIQA (acc ↑) | LAMBADA (acc ↑) | Avg. |
|---|---|---|---|---|---|---|---|
| LLaMA-2 | 1.3B | 55.09 | 52.32 | 53.35 | 71.11 | 48.52 | 56.08 |
| LLaMA-2-SWA | 1.3B | 56.65 | 52.59 | 54.93 | 71.60 | 47.56 | 56.67 |
| Sliding GLA | 1.2B | 56.94 | 52.52 | **56.75** | 71.38 | 48.17 | 57.15 |
| Sliding RetNet | 1.4B | 57.66 | 52.64 | **56.75** | 71.33 | 48.34 | 57.34 |
| Mega-S6 | 1.3B | 50.63 | 41.91 | 52.96 | 68.17 | 37.88 | 50.31 |
| Mamba | 1.3B | 58.08 | **54.93** | 53.99 | 71.98 | 45.97 | 56.99 |
| Mamba-SWA-MLP | 1.3B | **59.64** | 54.50 | 55.25 | **72.42** | 49.12 | 58.19 |
| MLP2-SWA-MLP | 1.3B | 55.18 | 50.32 | 52.80 | 70.67 | 48.11 | 55.42 |
| SAMBA-NoPE | 1.3B | 58.38 | 54.62 | 56.51 | 72.03 | 51.08 | 58.52 |
| SAMBA | 1.3B | 58.21 | 54.73 | 55.72 | 72.36 | **51.68** | **58.54** |

In Table 4, we evaluate all our 1.3B-scale models on five typical commonsense reasoning tasks (ARC-Easy, HellaSwag, WinoGrande, PIQA, and the OpenAI variant [1] of LAMBADA (Paperno et al., 2016)) to understand the effect of architecture design on downstream performance. We can see that Samba has the best average accuracy, outperforming the LLaMA-2 architectures by a large margin. Similar to our perplexity evaluation, Samba and Samba-NoPE have similar average accuracies, whereas Mamba-SWA-MLP falls slightly behind. We observe that different architectures excel at different tasks. Mamba-SWA-MLP performs best on ARC-Easy, while Samba and Samba-NoPE achieve superior results on LAMBADA. Hybrid models based on Mamba generally outperform hybrid linear attention models and pure softmax-attention models on HellaSwag.

[1] https://huggingface.co/datasets/EleutherAI/lambada_openai

3.3 EFFICIENT LENGTH EXTRAPOLATION

[Figure 2 line plot omitted: perplexity (y-axis, roughly 3.75 to 5.50) versus context length (x-axis, 1K to 1M) for Samba 1.7B, Mamba 1.8B, Llama-3 1.6B, SE-Llama-3 1.6B, and Mistral 1.6B.] (a) Perplexity on the test set of Proof-Pile. (b) Decoding throughput with batch size 16. Figure 2: SAMBA shows improved prediction up to 1M tokens on the Proof-Pile test set while achieving a 3.64× faster decoding throughput than the Llama-3 architecture at 64K generation length.
We also include an SE-Llama-3 1.6B baseline, which applies the SelfExtend (Jin et al., 2024) approach for zero-shot length extrapolation. All models are trained with a 4K sequence length. We use the test split of the Proof-Pile (Zhangir Azerbayev & Piotrowski, 2022) dataset to evaluate the length extrapolation ability of our models at a scale of around 1.7B parameters. We follow Position Interpolation (Chen et al., 2023a) for data pre-processing. The sliding-window approach (Press et al., 2021) is used for the perplexity evaluation, with a window size of 4096. Besides reporting the decoding throughput in Figure 2 as the generation efficiency metric, we also measure the prompt processing speed in Figure 6 of Appendix B for the models SAMBA 1.7B, Mistral 1.6B, Mamba 1.8B, Llama-3 1.6B, and its Self-Extended (Jin et al., 2024) version SE-Llama-3 1.6B, with the prompt length sweeping from 1K to 128K. We set the group size to 4 and the neighborhood window to 1024 for Self-Extension. We fix the total number of processed tokens per measurement to 128K and vary the batch size accordingly. The throughput is measured on a single A100 GPU with bfloat16 precision. We repeat the measurements 10 times and report the averaged results. We can see that Samba achieves 3.73× higher throughput in prompt processing compared to Llama-3 1.6B at the 128K prompt length, and the processing time remains linear with respect to the sequence length. We can also observe that the existing zero-shot length extrapolation technique introduces significant inference latency overhead on its full-attention counterpart, while it still cannot extrapolate infinitely with perplexity performance comparable to that of Samba. In Figure 2, we can also see that Mamba has a slowly and stably increasing perplexity up to 1M sequence length, which indicates that linear recurrent models still cannot extrapolate infinitely if the context length is extremely large.

3.4 LONG-CONTEXT UNDERSTANDING

Figure 3: Passkey Retrieval performance up to 256K context length for SAMBA 1.7B (left) vs. Mistral 1.6B (right), instruction-tuned on 4K sequence length with 500 steps. Figure 4: Phonebook evaluation accuracy of different base models.

Beyond its efficiency in processing long contexts, Samba can also extrapolate its memory recall ability to 256K context length through supervised fine-tuning, while keeping its linear computation complexity. We fine-tune Samba 1.7B on Passkey Retrieval with a 4K training sequence length for only 500 steps. As presented in Figure 3, SAMBA 1.7B demonstrates a remarkable ability to recall information from significantly longer contexts compared to Mistral 1.6B, a model based solely on Sliding Window Attention (SWA). This capability is particularly evident in the heatmap, where SAMBA maintains perfect retrieval performance across a wide range of pass-key positions in documents of up to 256K length. We also plot the training loss curve and the overall passkey retrieval accuracy across the fine-tuning procedure in Figures 7 and 8 of Appendix C. We find that although both architectures reach near-zero training loss in fewer than 250 steps, Samba achieves near-perfect retrieval as early as 150 training steps, while the Mistral architecture struggles at around 30% accuracy throughout the training process.
This shows that Samba can have better long-range retrieval ability than SWA due to the input selection mechanism introduced by the Mamba layers. In Figure 8, we can also notice that the pre-trained base Samba model has a retrieval accuracy (at step 0) similar to that of Mistral, highlighting the need for future work to improve Samba's zero-shot retrieval capabilities. The encouraging results on Passkey Retrieval drive us to further explore the limits of our fine-tuning approach. We perform instruction tuning of the Samba-3.8B base model on Phonebook (Jelassi et al., 2024) with only 100 steps at 4K sequence length and evaluate the resulting Samba-3.8B-FT model for sequence lengths up to 8K. The evaluation setting requires the models to retrieve a random phone number from a phone book containing 20 (length 400) to 480 (length 8400) name-number pairs, resulting in a pressure test of memorization for Samba, which has a constant memory state size. Surprisingly, as shown in Figure 4, we can see that the Samba-3.8B-FT model can close most of its gap with a full-attention model (Llama-2 7B) that has twice the parameter count within the 4K training length, and achieves much better extrapolation accuracy than all other models, including the Phi-3 base model, which also uses 2K sliding window attention. Since both Passkey Retrieval and Phonebook require models to remember numbers in a long context document, it is interesting to investigate whether a model instruction-tuned on one task can transfer its ability to the other task zero-shot. We directly evaluate the Passkey Retrieval finetuned Samba 1.7B and Mistral 1.6B models (named Samba 1.7B PK-FT and Mistral 1.6B PK-FT, respectively) on the Phonebook
Idea Generation Category: 0 (Conceptual Integration)
bIlnpVM4bc
# MaRS: A FAST SAMPLER FOR MEAN REVERTING DIFFUSION BASED ON ODE AND SDE SOLVERS

**Ao Li** 1,2,∗ **Wei Fang** 3,4,∗ **Hongbo Zhao** 1,2,∗ **Le Lu** 3 **Ge Yang** 1,2,† **Minfeng Xu** 3,4,† — 1 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 2 Institute of Automation, Chinese Academy of Sciences (CASIA); 3 DAMO Academy, Alibaba Group; 4 Hupan Laboratory, Hangzhou, China. {liao2022, zhaohongbo2022, ge.yang}@ia.ac.cn lucas.fw@alibaba-inc.com minfengxu@163.com tiger.lelu@gmail.com

ABSTRACT

In applications of diffusion models, controllable generation is of practical significance but also challenging. Current methods for controllable generation primarily focus on modifying the score function of diffusion models, while Mean Reverting (MR) Diffusion directly modifies the structure of the stochastic differential equation (SDE), making the incorporation of image conditions simpler and more natural. However, current training-free fast samplers are not directly applicable to MR Diffusion, so MR Diffusion requires hundreds of NFEs (number of function evaluations) to obtain high-quality samples. In this paper, we propose a new algorithm named MaRS (MR Sampler) to reduce the sampling NFEs of MR Diffusion. We solve the reverse-time SDE and the probability flow ordinary differential equation (PF-ODE) associated with MR Diffusion and derive semi-analytical solutions. The solutions consist of an analytical function and an integral parameterized by a neural network. Based on this solution, we can generate high-quality samples in fewer steps. Our approach does not require training and supports all mainstream parameterizations, including noise prediction, data prediction, and velocity prediction. Extensive experiments demonstrate that MR Sampler maintains high sampling quality with a speedup of 10 to 20 times across ten different image restoration tasks. Our algorithm accelerates the sampling procedure of MR Diffusion, making it more practical for controllable generation. [1]

1 INTRODUCTION

Diffusion models have emerged as a powerful class of generative models, demonstrating remarkable capabilities across a variety of applications, including image synthesis (Dhariwal & Nichol, 2021; Ruiz et al., 2023; Rombach et al., 2022) and video generation (Ho et al., 2022a;b). In these applications, controllable generation is very important in practice, but it also poses considerable challenges. Various methods have been proposed to incorporate text or image conditions into the score function of diffusion models (Ho & Salimans, 2022; Ye et al., 2023; Zhang et al., 2023), whereas Mean Reverting (MR) Diffusion offers a new avenue of control over the generation process (Luo et al., 2023b). Previous diffusion models (such as DDPM (Ho et al., 2020)) simulate a diffusion process that gradually transforms data into pure Gaussian noise, followed by learning to reverse this process for sample generation (Song & Ermon, 2020; Song et al., 2021). In contrast, MR Diffusion is designed to produce final states that follow a Gaussian distribution with a non-zero mean, which provides a simple and natural way to introduce image conditions. This characteristic makes MR Diffusion particularly suitable for solving inverse problems and potentially extensible to multi-modal conditions. However, the sampling process of MR Diffusion requires hundreds of iterative steps, which is time-consuming.
∗ These authors contributed equally to this work. † Corresponding authors. [1] Code is available at https://github.com/grrrute/mr-sampler

[Figure 1 image grid omitted: dehazing and inpainting results from Posterior Sampling and MR Sampler-2 at NFE = 5, 20, 50, 80, and 100, with ground truth for reference.] Figure 1: **Qualitative comparisons between MR Sampler and Posterior Sampling.** All images are generated by sampling from a pre-trained MR Diffusion (Luo et al., 2024a) on the RESIDE-6k (Qin et al., 2020b) dataset and the CelebA-HQ (Karras, 2017) dataset.

To improve the sampling efficiency of diffusion models, various acceleration strategies have been proposed, which can be divided into two categories. The first explores methods that establish direct mappings between starting and ending points on the sampling trajectory, enabling acceleration through knowledge distillation (Salimans & Ho, 2022; Song et al., 2023; Liu et al., 2022b). However, such algorithms often come with trade-offs, such as the need for extensive training and limitations in their adaptability across different tasks and datasets. The second category involves the design of fast numerical solvers that increase step sizes while controlling truncation errors, thus allowing for faster convergence to solutions (Lu et al., 2022a; Zhang & Chen, 2022; Song et al., 2020a). Notably, the fast sampling solvers mentioned above are designed for common SDEs such as VPSDE and VESDE (Song et al., 2020b). Due to the difference between these SDEs and the MRSDE, existing training-free fast samplers cannot be directly applied to Mean Reverting (MR) Diffusion. In this paper, we propose a novel algorithm named MaRS (MR Sampler) that improves the sampling efficiency of MR Diffusion. Specifically, we solve the reverse-time stochastic differential equation (SDE) and probability flow ordinary differential equation (PF-ODE) (Song et al., 2020b) derived from the MRSDE, and obtain a semi-analytical solution, which consists of an analytical function and an integral parameterized by neural networks. We prove that the difference in the MRSDE only changes the analytical part of the solution, which can be calculated precisely. The integral part can be estimated by discretization methods developed in several previous works (Lu et al., 2022a; Zhang & Chen, 2022; Zhao et al., 2024). We derive sampling formulas for two types of neural network parameterizations: noise prediction (Ho et al., 2020; Song et al., 2020b) and data prediction (Salimans & Ho, 2022). Through theoretical analysis and experimental validation, we demonstrate that data prediction exhibits superior numerical stability compared to noise prediction. Additionally, we propose transformation methods for velocity prediction networks (Salimans & Ho, 2022) so that our algorithm supports all common training objectives. Extensive experiments show that our fast sampler converges in 5 or 10 NFEs with high sampling quality. As illustrated in Figure 1, our algorithm achieves stable performance with speedup factors ranging from 10 to 20. In summary, our main contributions are as follows:

- We propose *MR Sampler*, a fast sampling algorithm for MR Diffusion, based on solving the PF-ODE and SDE derived from the MRSDE. Our algorithm is plug-and-play and can adapt to all common training objectives.
- We demonstrate that posterior sampling (Luo et al., 2024b) for MR Diffusion is equivalent to Euler-Maruyama discretization, whereas MR Sampler computes a semi-analytical solution, thereby eliminating part of the approximation error.
- Through extensive experiments on ten image restoration tasks, we demonstrate that MR Sampler can reduce the required sampling time by a factor of 10 to 20 with comparable sampling quality. Moreover, we reveal that data prediction exhibits superior numerical stability compared to noise prediction.

2 BACKGROUND

In this section, we briefly review the basic definitions and characteristics of diffusion probabilistic models and mean-reverting diffusion models.

2.1 DIFFUSION PROBABILISTIC MODELS

According to Song et al. (2020b), Diffusion Probabilistic Models (DPMs) can be defined as the solution of the following Itô stochastic differential equation (SDE), a stochastic process $\{\mathbf{x}_t\}_{t \in [0,T]}$ with $T > 0$, called the *forward process*, where $\mathbf{x}_t \in \mathbb{R}^D$ is a $D$-dimensional random variable:

$$\mathrm{d}\mathbf{x} = f(\mathbf{x}, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}. \qquad (1)$$

The forward process adds noise to the data $\mathbf{x}_0$, while there exists a corresponding reverse process that gradually removes the noise and recovers $\mathbf{x}_0$. Anderson (1982) shows that the reverse of the forward process is also the solution of an Itô SDE:

$$\mathrm{d}\mathbf{x} = \left[f(\mathbf{x}, t) - g(t)^2\, \nabla_{\mathbf{x}} \log p_t(\mathbf{x})\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}, \qquad (2)$$

where $f$ and $g$ are the drift and diffusion coefficients respectively, $\bar{\mathbf{w}}$ is a standard Wiener process running backwards in time, and time $t$ flows from $T$ to $0$, which means $\mathrm{d}t < 0$. The score function $\nabla_{\mathbf{x}} \log p_t(\mathbf{x})$ is generally intractable, and thus a neural network $\mathbf{s}_\theta(\mathbf{x}, t)$ is used to estimate it by optimizing the following objective (Song et al., 2020b; Hyvärinen & Dayan, 2005):

$$\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}}\, \mathbb{E}_t\left\{\lambda(t)\, \mathbb{E}_{\mathbf{x}_0} \mathbb{E}_{\mathbf{x}_t \mid \mathbf{x}_0}\left[\big\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}_t, t) - \nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t \mid \mathbf{x}_0)\big\|_2^2\right]\right\}, \qquad (3)$$

where $\lambda(t): [0, T] \to \mathbb{R}^+$ is a positive weighting function, $t$ is uniformly sampled over $[0, T]$, $\mathbf{x}_0 \sim p_0(\mathbf{x})$, and $\mathbf{x}_t \sim p(\mathbf{x}_t \mid \mathbf{x}_0)$. To facilitate the computation of $p(\mathbf{x}_t \mid \mathbf{x}_0)$, the drift coefficient $f(\mathbf{x}, t)$ is typically defined as a linear function of $\mathbf{x}$, as in Eq. (4). Based on the derivation by Särkkä & Solin (2019) in Section 5.5, the transition probability $p(\mathbf{x}_t \mid \mathbf{x}_0)$ corresponding to Eq. (4) follows a Gaussian distribution, as shown in Eq. (5):

$$\mathrm{d}\mathbf{x} = f(t)\,\mathbf{x}\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}, \qquad (4)$$

$$p(\mathbf{x}_t \mid \mathbf{x}_0) \sim \mathcal{N}\!\left(\mathbf{x}_t;\; \mathbf{x}_0\, e^{\int_0^t f(\tau)\,\mathrm{d}\tau},\; \left[\int_0^t e^{2\int_\tau^t f(\xi)\,\mathrm{d}\xi}\, g^2(\tau)\,\mathrm{d}\tau\right]\mathbf{I}\right). \qquad (5)$$

Song et al. (2020b) proved that Denoising Diffusion Probabilistic Models (Ho et al., 2020) and Noise Conditional Score Networks (Song & Ermon, 2019) can be regarded as discretizations of the Variance Preserving SDE (VPSDE) and the Variance Exploding SDE (VESDE), respectively. As shown in Table 1, the SDEs corresponding to the two most commonly used diffusion models both follow the form of Eq. (4). Table 1: Two popular SDEs, Variance Preserving SDE (VPSDE) and Variance Exploding SDE (VESDE).
$m(t)$ and $v(t)$ refer to the mean and variance of the transition probability $p(\mathbf{x}_t \mid \mathbf{x}_0)$.

| SDE | $f(t)$ | $g(t)$ | $m(t)$ | $v(t)$ |
|---|---|---|---|---|
| VPSDE (Ho et al., 2020) | $-\frac{1}{2}\beta(t)$ | $\sqrt{\beta(t)}$ | $\mathbf{x}_0\, e^{-\frac{1}{2}\int_0^t \beta(\tau)\,\mathrm{d}\tau}$ | $\left(1 - e^{-\int_0^t \beta(\tau)\,\mathrm{d}\tau}\right)\mathbf{I}$ |
| VESDE (Song & Ermon, 2019) | $0$ | $\sqrt{\frac{\mathrm{d}[\sigma^2(t)]}{\mathrm{d}t}}$ | $\mathbf{x}_0$ | $\left[\sigma^2(t) - \sigma^2(0)\right]\mathbf{I}$ |

2.2 MEAN REVERTING DIFFUSION MODELS

Luo et al. (2023b) proposed a special case of the Itô SDE named the Mean Reverting SDE (MRSDE):

$$\mathrm{d}\mathbf{x} = f(t)\,(\boldsymbol{\mu} - \mathbf{x})\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}, \qquad (6)$$

where $\boldsymbol{\mu}$ is a parameter vector with the same shape as the variable $\mathbf{x}$, and $f(t), g(t)$ are time-dependent non-negative parameters that control the speed of the mean reversion and the stochastic volatility, respectively. To prevent potential confusion, we have substituted the notation used in the original paper (Luo et al., 2023b); for further details, please refer to Appendix B. Under the assumption that $g^2(t)/f(t) = 2\sigma_\infty^2$ for any $t \in [0, T]$ with $T > 0$, Eq. (6) has a closed-form solution, given by

$$\mathbf{x}_t = \mathbf{x}_0\, e^{-\int_0^t f(\tau)\,\mathrm{d}\tau} + \boldsymbol{\mu}\left(1 - e^{-\int_0^t f(\tau)\,\mathrm{d}\tau}\right) + \sigma_\infty\sqrt{1 - e^{-2\int_0^t f(\tau)\,\mathrm{d}\tau}}\;\mathbf{z}, \qquad (7)$$

where $\sigma_\infty$ is a positive hyperparameter that determines the standard deviation of $\mathbf{x}_t$ as $t \to \infty$ and $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Note that $\mathbf{x}_t$ starts from $\mathbf{x}_0$ and converges to $\boldsymbol{\mu} + \sigma_\infty \mathbf{z}$ as $t \to \infty$. According to Anderson (1982)'s result, we can derive the following reverse-time SDE:

$$\mathrm{d}\mathbf{x} = \left[f(t)\,(\boldsymbol{\mu} - \mathbf{x}) - g^2(t)\, \nabla_{\mathbf{x}} \log p_t(\mathbf{x})\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}. \qquad (8)$$

Similar to DPMs, the score function in Eq. (8) can also be estimated by score matching methods (Song & Ermon, 2019; Song et al., 2021). Once the score function is known, we can generate $\mathbf{x}_0$ from a noisy state $\mathbf{x}_T$. In summary, the MRSDE describes the conversion between two distinct types of data and has demonstrated promising results in image restoration tasks (Luo et al., 2023c). Various algorithms have been developed to accelerate the sampling of VPSDE, including CCDF (Chung et al., 2022), DDIM (Song et al., 2020a), PNDM (Liu et al., 2022a), DPM-Solver (Lu et al., 2022a), and UniPC (Zhao et al., 2024). Additionally, Karras et al. (2022) and Zhou et al. (2024) have introduced techniques for accelerating the sampling of VESDE. However, the drift coefficient of VPSDE and VESDE is a linear function of $\mathbf{x}$, while the drift coefficient in the MRSDE is an affine function of $\mathbf{x}$ with an intercept $\boldsymbol{\mu}$ (see Eq. (4) and Eq. (6)). Therefore, current sampling acceleration algorithms cannot be applied to MR Diffusion. To the best of our knowledge, MR Sampler is the first sampling acceleration algorithm for MR Diffusion.

3 FAST SAMPLERS FOR MEAN REVERTING DIFFUSION WITH NOISE PREDICTION

According to Song et al. (2020b), the states $\mathbf{x}_t$ in the sampling procedure of diffusion models correspond to solutions of the reverse-time SDE and the PF-ODE. Therefore, we look for ways to accelerate sampling by studying these solutions.
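Before deriving the solvers, it may help to see the closed-form transition of Eq. (7) as code. The sketch below is a minimal assumed implementation, in which $\alpha_t = e^{-\int_0^t f(\tau)\,\mathrm{d}\tau}$ is precomputed from the noise schedule; the function name and interface are ours.

```python
import numpy as np

def mrsde_forward_sample(x0, mu, alpha_t, sigma_inf, rng=None):
    """Draw x_t ~ p(x_t | x_0) for the MRSDE using Eq. (7).

    alpha_t   : scalar exp(-int_0^t f(tau) dtau), assumed precomputed
    sigma_inf : stationary standard deviation sigma_infinity
    """
    rng = rng or np.random.default_rng()
    mean = alpha_t * x0 + (1.0 - alpha_t) * mu        # reverts from x0 toward mu
    std = sigma_inf * np.sqrt(1.0 - alpha_t ** 2)     # grows toward sigma_inf
    return mean + std * rng.standard_normal(x0.shape)
```

As $\alpha_t \to 0$ (i.e., $t \to \infty$), the sample tends to $\boldsymbol{\mu} + \sigma_\infty \mathbf{z}$, matching the limit noted above.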
In this section, we solve the noise-prediction-based reverse-time SDE and PF-ODE, and we numerically estimate the non-closed-form component of the solution, which serves to accelerate the sampling process of MR diffusion models. We then analyze the sampling method currently used by MR Diffusion and demonstrate that it corresponds to a variant of discretization of the reverse-time MRSDE.

3.1 SOLUTIONS TO MEAN REVERTING SDES WITH NOISE PREDICTION

Ho et al. (2020) reported that score matching can be simplified to predicting noise, and Song et al. (2020b) revealed the connection between the score function and noise prediction models, which is

$$\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t \mid \mathbf{x}_0) = -\frac{\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, \boldsymbol{\mu}, t)}{\sigma_t}, \qquad (9)$$

where $\sigma_t = \sigma_\infty\sqrt{1 - e^{-2\int_0^t f(\tau)\,\mathrm{d}\tau}}$ is the standard deviation of the transition distribution $p(\mathbf{x}_t \mid \mathbf{x}_0)$. Because $\boldsymbol{\mu}$ is independent of $t$ and $\mathbf{x}$, we abbreviate $\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, \boldsymbol{\mu}, t)$ as $\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)$ for notational simplicity. According to Eq. (9), we can rewrite Eq. (8) as

$$\mathrm{d}\mathbf{x} = \left[f(t)\,(\boldsymbol{\mu} - \mathbf{x}) + \frac{g^2(t)}{\sigma_t}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t)\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}. \qquad (10)$$

Using Itô's formula (in differential form), we obtain the following semi-analytical solution:

**Proposition 1.** Given an initial value $\mathbf{x}_s$ at time $s \in [0, T]$, the solution $\mathbf{x}_t$ at time $t \in [0, s]$ of Eq. (10) is

$$\mathbf{x}_t = \frac{\alpha_t}{\alpha_s}\,\mathbf{x}_s + \left(1 - \frac{\alpha_t}{\alpha_s}\right)\boldsymbol{\mu} - \alpha_t \int_t^s \frac{g^2(\tau)}{\alpha_\tau \sigma_\tau}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_\tau, \tau)\,\mathrm{d}\tau + \alpha_t\sqrt{\int_t^s \frac{g^2(\tau)}{\alpha_\tau^2}\,\mathrm{d}\tau}\;\mathbf{z}, \qquad (11)$$

where we denote $\alpha_t := e^{-\int_0^t f(\tau)\,\mathrm{d}\tau}$ and $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. The proof is in Appendix A.1. However, the integral with respect to the neural network output is still complicated. Several methods (Lu et al., 2022a; Zhang & Chen, 2022; Zhao et al., 2024) exist to estimate this integral numerically. We follow Lu et al. (2022b)'s method and introduce the half log-SNR $\lambda_t := \log(\alpha_t / \sigma_t)$. Both $f(t)$ and $g(t)$ are deliberately designed so that $\alpha_t$ is monotonically decreasing and $\sigma_t$ is monotonically increasing in $t$. Thus, $\lambda_t$ is a strictly decreasing function of $t$ and there exists an inverse function $t(\lambda)$.
Then we can rewrite $g^2(\tau)$ in Eq. (11) as
$$g^2(\tau) = 2\sigma_\infty^2 f(\tau) = 2f(\tau)\big(\sigma_\tau^2 + \sigma_\infty^2\alpha_\tau^2\big) = 2\sigma_\tau^2\Big(f(\tau) + \frac{f(\tau)\,\sigma_\infty^2\alpha_\tau^2}{\sigma_\tau^2}\Big) = 2\sigma_\tau^2\Big(f(\tau) + \frac{1}{2\sigma_\tau^2}\frac{\mathrm{d}\sigma_\tau^2}{\mathrm{d}\tau}\Big) = -2\sigma_\tau^2\,\frac{\mathrm{d}\lambda_\tau}{\mathrm{d}\tau}. \quad (12)$$
By substituting Eq. (12) into Eq. (11), we obtain
$$\mathbf{x}_t = \frac{\alpha_t}{\alpha_s}\,\mathbf{x}_s + \Big(1 - \frac{\alpha_t}{\alpha_s}\Big)\boldsymbol{\mu} - 2\alpha_t\int_{\lambda_s}^{\lambda_t} e^{-\lambda}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_\lambda, \lambda)\,\mathrm{d}\lambda + \sigma_t\sqrt{e^{2(\lambda_t - \lambda_s)} - 1}\;\mathbf{z}, \quad (13)$$
where $\mathbf{x}_\lambda := \mathbf{x}_{t(\lambda_\tau)}$ and $\boldsymbol{\epsilon}_\theta(\mathbf{x}_\lambda, \lambda) := \boldsymbol{\epsilon}_\theta(\mathbf{x}_{t(\lambda_\tau)}, t(\lambda_\tau))$. Following the method of exponential integrators (Hochbruck & Ostermann, 2010; 2005), taking the $(k-1)$-th order Taylor expansion of $\boldsymbol{\epsilon}_\theta(\mathbf{x}_\lambda, \lambda)$ and applying integration by parts to the integral in Eq. (13) yields
$$-2\alpha_t\int_{\lambda_s}^{\lambda_t} e^{-\lambda}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_\lambda, \lambda)\,\mathrm{d}\lambda = -2\sigma_t\sum_{n=0}^{k-1}\boldsymbol{\epsilon}_\theta^{(n)}(\mathbf{x}_{\lambda_s}, \lambda_s)\Big[e^h - \sum_{m=0}^{n}\frac{h^m}{m!}\Big] + O(h^{k+1}), \quad (14)$$
where $h := \lambda_t - \lambda_s$. We drop the discretization error term $O(h^{k+1})$ and estimate the derivatives with the *backward difference method*. We name this algorithm *MR Sampler-SDE-n-k*, where *n* stands for noise prediction and *k* is the order. We present details in Algorithms 1 and 2.

3.2 SOLUTIONS TO MEAN REVERTING ODES WITH NOISE PREDICTION

Song et al. (2020b) showed that for any Itô SDE there exists a *probability flow* ODE sharing the same marginal distribution $p_t(\mathbf{x})$ as the reverse-time SDE. Therefore, the solutions of PF-ODEs are also helpful for accelerating sampling. Specifically, the PF-ODE corresponding to Eq. (10) is
$$\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} = f(t)\,(\boldsymbol{\mu} - \mathbf{x}) + \frac{g^2(t)}{2\sigma_t}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_t, t). \quad (15)$$
This equation has a semi-linear structure with respect to $\mathbf{x}$, and can therefore be solved by the method of "variation of constants". We can draw the following conclusion:

**Proposition 2.** Given an initial value $\mathbf{x}_s$ at time $s \in [0, T]$, the solution $\mathbf{x}_t$ at time $t \in [0, s]$ of Eq. (15) is
$$\mathbf{x}_t = \frac{\alpha_t}{\alpha_s}\,\mathbf{x}_s + \Big(1 - \frac{\alpha_t}{\alpha_s}\Big)\boldsymbol{\mu} + \alpha_t\int_s^t \frac{g^2(\tau)}{2\alpha_\tau\sigma_\tau}\,\boldsymbol{\epsilon}_\theta(\mathbf{x}_\tau, \tau)\,\mathrm{d}\tau, \quad (16)$$
where $\alpha_t := e^{-\int_0^t f(\tau)\,\mathrm{d}\tau}$.

The proof is in Appendix A.1. Following the variable substitution of Eqs. (12)-(14) in Section 3.1, we obtain
$$\mathbf{x}_t = \frac{\alpha_t}{\alpha_s}\,\mathbf{x}_s + \Big(1 - \frac{\alpha_t}{\alpha_s}\Big)\boldsymbol{\mu} - \sigma_t\sum_{n=0}^{k-1}\boldsymbol{\epsilon}_\theta^{(n)}(\mathbf{x}_{\lambda_s}, \lambda_s)\Big[e^h - \sum_{m=0}^{n}\frac{h^m}{m!}\Big] + O(h^{k+1}), \quad (17)$$
where $\boldsymbol{\epsilon}_\theta^{(n)}(\mathbf{x}_\lambda, \lambda) := \frac{\mathrm{d}^n\boldsymbol{\epsilon}_\theta(\mathbf{x}_\lambda, \lambda)}{\mathrm{d}\lambda^n}$ is the $n$-th order total derivative of $\boldsymbol{\epsilon}_\theta$ with respect to $\lambda$.
By dropping the discretization error term $O(h^{k+1})$ and estimating the derivatives of $\boldsymbol{\epsilon}_\theta(\mathbf{x}_{\lambda_s}, \lambda_s)$ with the *backward difference method*, we design the sampling algorithm from the ODE perspective (see Algorithms 3 and 4).
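To illustrate how Eq. (17) turns into a sampler update, here is a minimal sketch of a single first-order ($k = 1$) ODE step, for which the Taylor sum collapses to $\boldsymbol{\epsilon}_\theta(\mathbf{x}_s, s)(e^h - 1)$. All schedule quantities are assumed precomputed, and the function signature is hypothetical, not the authors' implementation.

```python
import numpy as np

def mr_sampler_ode_step_k1(x_s, mu, eps_pred, alpha_s, alpha_t,
                           sigma_t, lambda_s, lambda_t):
    """One first-order (k = 1) MR Sampler-ODE step per Eq. (17).
    Schedule quantities (alpha, sigma, half log-SNR lambda = log(alpha/sigma))
    are assumed precomputed from the noise schedule; eps_pred is the noise
    prediction epsilon_theta(x_s, s)."""
    h = lambda_t - lambda_s                  # step size in half log-SNR
    r = alpha_t / alpha_s                    # exact linear (mean-reverting) part
    return r * x_s + (1.0 - r) * mu - sigma_t * (np.exp(h) - 1.0) * eps_pred
```

Higher orders ($k > 1$) replace the single $\boldsymbol{\epsilon}$ term with the truncated Taylor sum of Eq. (17), with the derivatives estimated by backward differences across previous steps.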
Idea Generation Category: Conceptual Integration
yVeNBxwL5W
# MoDGS: Dynamic Gaussian Splatting from Casually Captured Monocular Videos with Depth Priors

**Qingming Liu**¹⁵*, **Yuan Liu**²*, **Jiepeng Wang**³, **Xianqiang Lyv**¹, **Peng Wang**³, **Wenping Wang**⁴, **Junhui Hou**¹†
¹City University of Hong Kong, ²HKUST, ³HKU, ⁴TAMU, ⁵CUHK(SZ)
qingmingliu@foxmail.com, yuanly@ust.hk

ABSTRACT

In this paper, we propose MoDGS, a new pipeline to render novel views of dynamic scenes from a casually captured monocular video. Previous monocular dynamic NeRF or Gaussian Splatting methods strongly rely on the rapid movement of input cameras to construct multiview consistency but struggle to reconstruct dynamic scenes from casually captured input videos whose cameras are either static or move slowly. To address this challenging task, MoDGS adopts recent single-view depth estimation methods to guide the learning of the dynamic scene. Then, a novel 3D-aware initialization method is proposed to learn a reasonable deformation field, and a new robust depth loss is proposed to guide the learning of dynamic scene geometry. Comprehensive experiments demonstrate that MoDGS is able to render high-quality novel view images of dynamic scenes from just a casually captured monocular video, outperforming state-of-the-art methods by a significant margin. Project page: https://MoDGS.github.io

1 INTRODUCTION

Novel view synthesis (NVS) is an important task in computer graphics and computer vision, which greatly facilitates downstream applications such as augmented or virtual reality. In recent years, novel-view-synthesis quality on static scenes has witnessed great improvements thanks to the development of techniques such as NeRF (Mildenhall et al., 2020), Instant-NGP (Müller et al., 2022), and Gaussian Splatting (Kerbl et al., 2023), especially when there are sufficient input images. However, novel view synthesis in a dynamic scene from only one monocular video remains a challenging task.

Dynamic View Synthesis (DVS) has achieved impressive improvements along with the emerging neural representations (Mildenhall et al., 2020) and Gaussian splatting (Kerbl et al., 2023) techniques. Most existing DVS methods (Cao & Johnson, 2023; Yang et al., 2023) require multiview videos captured by dense synchronized cameras to achieve good rendering quality. Though some works can process a monocular video for DVS, as pointed out by DyCheck (Gao et al., 2022), these methods require the camera of the monocular video to make extremely large movements across different viewpoints, so-called "teleporting camera motion", so that they can exploit the multiview consistency provided by the resulting pseudo-multiview video to reconstruct the 3D geometry of the dynamic scene. However, such large camera movements are rarely seen in casually captured videos, which are usually produced by smoothly moving or even static cameras. When the camera moves slowly or is static, the multiview consistency constraint becomes much weaker, and all these existing DVS methods fail to produce high-quality novel-view images, as shown in Fig. 1.

In this paper, we present Monocular Dynamic Gaussian Splatting (MoDGS) to render novel-view images from casually captured monocular videos of a dynamic scene.
MoDGS addresses the weak multiview constraint problem by adopting a monocular depth estimation method (Fu et al., 2024), which provides prior depth information on the input video to help the 3D reconstruction. However, we find that simply applying a single-view depth estimator in DVS to supervise rendered depth maps is not enough for high-quality novel view synthesis. First, the depth supervision only provides information for each frame but does not help to associate 3D points between two frames in time; thus, we still have difficulty learning an accurate time-dependent deformation field. Second, the estimated depth values are not consistent across different frames.

*Equal contribution. This project was primarily completed while Qingming was at CityUHK.
†Corresponding author. Email: jh.hou@cityu.edu.hk. This project was supported in part by the NSFC Excellent Young Scientists Fund 62422118, and in part by the Hong Kong Research Grants Council under Grant 11219422 and Grant 11219324.

Figure 1 (panels: inconsistent depth inputs; ordinal depth loss; consistent depth renderings; baseline vs. ours): Given a casually captured monocular video of a dynamic scene, **MoDGS** is able to synthesize high-quality novel-view images of the scene. In the middle column, the baseline method (Yang et al., 2023) fails to correctly reconstruct the 3D dynamic scene from this static monocular video. The white regions in cyan bounding boxes are not visible in the input video (red bounding boxes), so there are artifacts in these invisible regions. In the rightmost column, the input estimated monocular depth is inconsistent (red bounding boxes); however, our proposed ordinal depth loss effectively ensures more consistent depth outputs. This loss enhances the accuracy and reliability of learning the underlying geometry.

To learn a robust deformation field from a monocular video, we propose a 3D-aware initialization scheme for the deformation field. Existing methods (Katsumata et al., 2024) solely rely on supervision from 2D flow estimation, which produces deteriorated results without sufficient multiview consistency. We find that directly initializing the deformation field in 3D space greatly helps the subsequent learning of the 4D representation and improves rendering quality, as shown in Fig. 1.

To better utilize the estimated depth maps for supervision, we propose a novel depth loss to address the scale inconsistency of estimated depth values across different frames. Previous methods (Li et al., 2023b; Liu et al., 2023a) supervise the rendered depth maps with a scale-invariant depth loss by minimizing the $L_2$ distance between normalized rendered depth and depth priors, and the most recent method (Zhu et al., 2023c) supervises the rendered depth maps with a Pearson correlation loss to mitigate the scale ambiguity between the reconstructed scene and the estimated depth maps. However, the estimated depth maps of different frames are not consistent even after normalization to the same scale. To address these challenges, we observe that despite the inconsistency in values, the order of depth values across pixels is stable over frames, which motivates us to propose an ordinal depth loss. This novel ordinal depth loss enables us to fully utilize the estimated depth maps for high-quality novel view synthesis.
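The exact formulation is given later in the paper; purely as an illustration of the idea, the following PyTorch sketch implements one plausible pairwise form, where the hinge margin and the random pair-sampling scheme are assumptions rather than the authors' exact loss.

```python
import torch

def ordinal_depth_loss(rendered, estimated, n_pairs=4096, margin=1e-4):
    """Penalize rendered depths whose pairwise ORDER disagrees with the
    order in the estimated (prior) depth map; only the order of the prior
    is trusted, not its scale. Hypothetical form for illustration.
    rendered, estimated: (H, W) depth maps for one frame."""
    flat_r = rendered.reshape(-1)
    flat_e = estimated.reshape(-1)
    idx = torch.randint(0, flat_r.numel(), (2, n_pairs), device=rendered.device)
    r1, r2 = flat_r[idx[0]], flat_r[idx[1]]
    e1, e2 = flat_e[idx[0]], flat_e[idx[1]]
    order = torch.sign(e1 - e2)          # the prior's depth order per pair
    # hinge: the rendered pair must respect the prior's order up to a margin
    return torch.relu(margin - order * (r1 - r2)).mean()
```

Because only the sign of the prior's pairwise differences enters the loss, per-frame scale (and even monotone distortions) of the estimated depth cannot corrupt the supervision.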
To demonstrate the effectiveness of MoDGS, we conduct experiments on three widely used datasets: the NVIDIA (Yoon et al., 2020) dataset, the DyNeRF (Li et al., 2022) dataset, and the DAVIS (Pont-Tuset et al., 2017) dataset. We also present results on a self-collected dataset containing monocular in-the-wild videos from the Internet. We adopt a strict monocular DVS evaluation setting that uses only the video of one camera as input while evaluating on the video of another camera. Results show that our method outperforms previous DVS methods by a large margin and achieves high-quality NVS on casually captured monocular videos.

2 RELATED WORK

In recent years, numerous works have focused on novel view synthesis in both static and dynamic scenes. The main representatives are Neural Radiance Fields (Mildenhall et al., 2020) and Gaussian Splatting (Kerbl et al., 2023), along with their variants. In this paper, we primarily focus on view synthesis in dynamic scenes.

Figure 2: **Overview**. Given a casually captured monocular video of a dynamic scene, MoDGS represents the dynamic scene with a set of Gaussians in a canonical space and a deformation field represented by an MLP $\mathcal{T}$. To render an image at a specific timestamp $t$, we deform all the Gaussians by $\mathcal{T}_t$ and then use the splatting technique to render images and depth maps. When training MoDGS, we use the single-view depth estimator GeoWizard (Fu et al., 2024) to estimate depth maps and compute the rendering loss and an ordinal depth loss for training.

**Dynamic NeRF.** Recent dynamic NeRF methods can be roughly categorized into two groups. 1) Time-varying neural radiance fields conditioned on time (Gao et al., 2021; Li et al., 2022; Park et al., 2023). For example, Park et al. (2023) propose a simple spatiotemporal radiance field that interpolates feature vectors indexed by time. 2) A canonical-space NeRF combined with a deformation field (Guo et al., 2023; Li et al., 2021; Park et al., 2021a;b; Pumarola et al., 2021; Tretschk et al., 2021; Xian et al., 2021). For example, NSFF (Li et al., 2021) models the dynamic components using forward and backward flows represented as 3D dense vector fields; Nerfies (Park et al., 2021a) and HyperNeRF (Park et al., 2021b) model the scene dynamics as a deformation field mapping to a canonical space. Recent advances in grid-based NeRFs (Müller et al., 2022; Sara Fridovich-Keil and Alex Yu et al., 2022; Chen et al., 2022) demonstrate that the training of static NeRFs can be significantly accelerated. Consequently, some dynamic NeRF works utilize these grid-based or hybrid representations for fast optimization (Guo et al., 2023; Cao & Johnson, 2023; Fang et al., 2022; Fridovich-Keil et al., 2023; Shao et al., 2023; Wang et al., 2023a;b; Song et al., 2023; You & Hou, 2023).

**Dynamic Gaussian Splatting.** The recent emergence of 3D Gaussian Splatting (3DGS) demonstrates its efficacy for super-fast real-time rendering, attributed to its explicit point-cloud representation. Recent follow-ups extend 3DGS to model dynamic 3D scenes.
Luiten et al. (2023) track dynamic 3D Gaussians by frame-by-frame training on synchronized multi-view videos. Yang et al. (2023) propose a deformable version of 3DGS by introducing a deformation MLP network to model the 3D flows. Wu et al. (2024) and Duisterhof et al. (2023) also introduce a deformation field but use the more efficient Hexplane representation (Cao & Johnson, 2023). Yang et al. (2024c) propose a dynamic representation with a collection of 4D Gaussian primitives, where the time evolution is encoded by 4D spherical harmonics. Bae et al. (2024) encode motions with a per-Gaussian feature vector. Other works (Li et al., 2023a; Lin et al., 2024; Liang et al., 2023) also study how to effectively encode the motions of Gaussians with different bases. To effectively learn the motions of Gaussians, some works (Feng et al., 2024; Yu et al., 2023; Huang et al., 2024) resort to clustering the motions for a compact representation.

**DVS from Casual Monocular Videos.** As shown in DyCheck (Gao et al., 2022), many existing monocular dynamic view synthesis datasets used for benchmarking, like D-NeRF (Pumarola et al., 2021), HyperNeRF (Park et al., 2021b), and Nerfies (Park et al., 2021a), typically involve significant camera movements between frames but only small dynamic object motions. While this capture style helps with multi-view constraints and dynamic 3D modeling, it is not representative of casual everyday video capture. On casual videos, the reconstruction results of these methods suffer from quality degradation. Some works address dynamic 3D scene modeling from monocular casual videos. DynIBaR (Li et al., 2023b) allows for long-sequence image-based rendering of dynamic scenes by aggregating features from nearby views, but its long per-scene optimization makes training costly.

Figure 3: (a) **Initialization of the deformation field**. We first lift the depth maps and a 2D flow to a 3D flow and train the deformation field for initialization. (b) **Initialization of Gaussians in the canonical space**. We use the initialized deformation field to deform all the depth points to the canonical space and downsample these depth points to initialize the Gaussians.

Lee et al. (2023) propose a hybrid representation that combines static and dynamic elements, allowing for faster training and rendering, though it requires additional per-frame masks for dynamic components. RoDynRF (Liu et al., 2023a) focuses on robust dynamic NeRF reconstruction by estimating the NeRF and camera parameters jointly. DpDy (Wang et al., 2024a) enhances quality by fine-tuning a diffusion model with SDS loss supervision (Poole et al., 2022), but it demands significant computational resources. Concurrent works like DG-Marbles (Stearns et al., 2024) use Gaussian marbles and a hierarchical learning strategy to optimize representations.
Shape-of-Motion (Wang et al., 2024b) and MoSca (Lei et al., 2024) rely on explicit motion representations and initialize the scene deformation with depth estimation and video tracking priors. In contrast, our MoDGS method effectively uses only noisy inter-frame flow maps from RAFT (Teed & Deng, 2020) as input, performing well without the need for strong long-range pixel correspondences.

**Ordinal Relation in Depth Maps.** The ordinal relation between pixels has been investigated in recent years, especially in the field of monocular depth estimation. Zoran et al. (2015) proposed a three-category classification network to predict the order relation of given pixel pairs; the depth can then be extracted by solving a constrained quadratic optimization problem. Similarly, Fu et al. (2018) treat depth prediction as a multi-class classification problem. Chen et al. (2016) further propose a ranking loss to learn metric depth, which encourages a small difference between depths if the ground-truth relation is "equal" and a large difference otherwise. Pavlakos et al. (2018) extend this differentiable ranking loss to the human pose estimation task. However, these works utilize only a limited number of depth-order pairs for training (one pair in Chen et al. (2016) and 17 pairs in Pavlakos et al. (2018)), resulting in coarse supervision for depth maps. The direct application of their ranking loss as depth supervision has yet to be explored. Moreover, our ordinal depth loss takes the rendered metric depth maps, which are dense grids of floating-point values, as input. Our task differs from previous depth estimation and pose estimation tasks, and we present a comparison between our ordinal depth loss and the depth ranking loss in Appendix A.10.

3 PROPOSED METHOD

Given a casually captured monocular video, we aim to synthesize novel view images from this video. We propose MoDGS, which achieves this by learning a set of Gaussians $\{G_i \mid i = 1, \cdots, N\}$ in a canonical space and a deformation field $\mathcal{T}_t : \mathbb{R}^3 \to \mathbb{R}^3$ that deforms these Gaussians to a specific timestamp $t$. Then, for a timestamp $t$ and a camera pose, we use splatting to render an image.

**Overview.** As shown in Fig. 2, to train MoDGS, we split the monocular video into a sequence of images $\{I_t \mid t = 1, ..., T\}$ with known camera poses. We denote our deformation field as a function $x_t = \mathcal{T}_t(x)$, which maps a 3D location $x \in \mathbb{R}^3$ in the canonical 3D space to a location $x_t \in \mathbb{R}^3$ in the 3D space at time $t$. For every image $I_t$, we utilize a single-view depth estimator (Fu et al., 2024) to estimate a depth map $D_t$, and we utilize the flow estimation method RAFT (Teed & Deng, 2020) to estimate a 2D optical flow $F_{t_i \to t_j}$ between $I_{t_i}$ and $I_{t_j}$, where $t_i$ and $t_j$ are two arbitrary timestamps. Then, we initialize our deformation field with a 3D-aware initialization scheme, as introduced in Sec. 3.2. After initialization, we train our Gaussians and the deformation field with a rendering loss and a new depth loss introduced in Sec. 3.3. In the following, we first describe the Gaussians and the rendering process in MoDGS.
3.1 GAUSSIANS AND DEFORMATION FIELDS

**Gaussians in the canonical space.** We define a set of Gaussians in the canonical space; following the original 3DGS (Kerbl et al., 2023), each Gaussian has a 3D location, a scale vector, a rotation, and a color represented with spherical harmonics. Note that this canonical space does not explicitly correspond to any timestamp; it is a virtual space that contains the canonical locations of all Gaussians.

**Deformation fields.** The deformation field $\mathcal{T}_t$ used in MoDGS follows the design of OmniMotion (Wang et al., 2023c) and CaDeX (Lei & Daniilidis, 2022), i.e., an invertible MLP network (Dinh et al., 2016). Invertibility means that both $\mathcal{T}_t$ and $\mathcal{T}_t^{-1}$ can be computed directly from the MLP network. All $\mathcal{T}_t$ at different timestamps $t$ share the same MLP network, and the time $t$ is normalized to $[0, 1]$ as input to the MLP.

**Rendering with MoDGS.** After training both the Gaussians in the canonical space and the deformation field, we use the deformation field to deform the Gaussians from the canonical space to a specific time step $t$. Then, we follow exactly the splatting technique of 3DGS (Kerbl et al., 2023) to render images from arbitrary viewpoints.

3.2 3D-AWARE INITIALIZATION

The original 3D Gaussian Splatting (Kerbl et al., 2023) relies on sparse points from Structure-from-Motion (SfM) to initialize the locations of all Gaussians. When we only have a casually captured monocular video, it is difficult to obtain such an initial set of sparse points from SfM. Though it is possible to initialize all the Gaussians from the points of the estimated single-view depth of the first frame, we show that this leads to suboptimal results. At the same time, we need to initialize not only the Gaussians but also the deformation field. Thus, we propose a 3D-aware initialization scheme for MoDGS.

**Initialization of depth scales.** Since the estimated depth maps at different timestamps have different scales, we first estimate a coarse scale for every frame to unify them. We achieve this by first segmenting the static regions of the video and then computing the scale with a least-squares fit (Chung et al., 2023b). The static regions can be determined either by thresholding the 2D flow (Teed & Deng, 2020) or by segmenting with a segmentation method like SAM2 (Ravi et al., 2024). Then, on these static regions, we reproject the depth values at a specific timestamp to the first frame and minimize the difference between the projected depth and the depth of the first frame, which enables us to solve for a scale for every frame. We rectify all depth maps with the computed scales. In the following, we reuse $D_t$ to denote the rectified depth maps by default.

**Initialization of the deformation field.** As shown in Fig. 3 (left), given two depth maps $D_{t_i}$ and $D_{t_j}$ along with the 2D flow $F_{t_i \to t_j}$, we lift them to a 3D flow $F^{3D}_{t_i \to t_j}$. This is achieved by first converting the depth maps into points in 3D space; the estimated 2D flow $F_{t_i \to t_j}$ then associates the two sets of 3D points, which yields the 3D flow $F^{3D}_{t_i \to t_j}$. After obtaining this 3D flow, we train our deformation field $\mathcal{T}$ on it. Specifically, for a pixel in $I_{t_i}$ whose corresponding 3D point is $x_{t_i}$, we query $F^{3D}_{t_i \to t_j}$ to find its target point $x_{t_j}$ at timestamp $t_j$. Then, we minimize the difference
$$\ell_{\text{init}} = \sum \big\|\mathcal{T}_{t_j} \circ \mathcal{T}_{t_i}^{-1}(x_{t_i}) - x_{t_j}\big\|^2. \quad (1)$$
We train the MLP in $\mathcal{T}$ for a fixed number of steps to initialize the deformation field.
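Below is a minimal PyTorch sketch of the initialization loss in Eq. (1). The `deform` module with `forward`/`inverse` methods is a hypothetical stand-in for the invertible MLP described above; the point pairs are assumed to come from the lifted 3D flow.

```python
import torch

def deformation_init_loss(deform, pts_ti, pts_tj, t_i, t_j):
    """Eq. (1): map points observed at time t_i through the canonical
    space into time t_j and match the 3D-flow targets.
    `deform` is a hypothetical stand-in exposing
      forward(x, t): canonical -> time t, and
      inverse(x, t): time t -> canonical.
    pts_ti, pts_tj: (N, 3) point pairs associated by the 3D flow."""
    canonical = deform.inverse(pts_ti, t_i)      # T_{t_i}^{-1}(x_{t_i})
    predicted = deform.forward(canonical, t_j)   # T_{t_j}( . )
    return ((predicted - pts_tj) ** 2).sum(dim=-1).mean()
```

Composing the inverse and forward maps through the shared canonical space is what makes a single MLP consistent across all timestamp pairs $(t_i, t_j)$.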
**Initialization of Gaussians.** With the initialized deformation field, we initialize a set of 3D Gaussians in the canonical space, as shown in Fig. 3 (right). We first convert all the depth maps into 3D points. These 3D points are then deformed backward into the canonical 3D space; that is, we transform the depth points of all timestamps into the canonical space, which results in a large number of points. We then evenly downsample these points with a predefined voxel size to reduce the point count and initialize our Gaussians at the locations of the downsampled 3D points in the canonical space. More advanced learning-based adaptive fusion strategies (You et al., 2023) could be adopted to downsample the points, potentially improving the representation.
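A minimal NumPy sketch of the voxel downsampling step follows; keeping the first point per occupied voxel is an illustrative choice (any representative, e.g. the voxel centroid, would serve equally well), and off-the-shelf implementations exist.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Evenly downsample canonical-space points: keep one point per
    occupied voxel of side length voxel_size.
    points: (N, 3) array of canonical-space depth points."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel indices
    _, first = np.unique(keys, axis=0, return_index=True)   # one hit per voxel
    return points[np.sort(first)]
```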
Idea Generation Category: Conceptual Integration
2prShxdLkX
# Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning

**Yan Scholten, Stephan Günnemann**
Department of Computer Science & Munich Data Science Institute, Technical University of Munich
{y.scholten, s.guennemann}@tum.de

ABSTRACT

Conformal prediction provides model-agnostic and distribution-free uncertainty quantification through prediction sets that are guaranteed to include the ground truth with any user-specified probability. Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data, which can significantly alter prediction sets in practice. As a solution, we propose *reliable prediction sets* (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning. To ensure reliability under training poisoning, we introduce smoothed score functions that reliably aggregate predictions of classifiers trained on distinct partitions of the training data. To ensure reliability under calibration poisoning, we construct multiple prediction sets, each calibrated on distinct subsets of the calibration data. We then aggregate them into a majority prediction set, which includes a class only if it appears in a majority of the individual sets. Both proposed aggregations mitigate the influence of individual datapoints in the training and calibration data on the final prediction set. We experimentally validate our approach on image classification tasks, achieving strong reliability while maintaining utility and preserving coverage on clean data. Overall, our approach represents an important step towards more trustworthy uncertainty quantification in the presence of data poisoning.¹

1 INTRODUCTION

Conformal prediction has emerged as a powerful, model-agnostic framework for distribution-free uncertainty quantification. By constructing prediction sets calibrated on hold-out data, it transforms any black-box classifier into a predictor with formal coverage guarantees, ensuring its prediction sets cover the ground truth with any user-specified probability (Angelopoulos & Bates, 2021). This makes it highly relevant for safety-critical applications like medical diagnosis (Vazquez & Facelli, 2022), autonomous driving (Lindemann et al., 2023), and flood forecasting (Auer et al., 2023).

However, in practice, noise, incomplete data, or adversarial perturbations can lead to unreliable prediction sets (Liu et al., 2024). In particular, data poisoning, where adversaries modify the training or calibration data (e.g. during data labeling), can significantly alter the prediction sets, resulting in overly conservative or empty sets (Li et al., 2024). This vulnerability can undermine the practical utility of conformal prediction in safety-critical applications, raising the research question: *How can we make conformal prediction sets provably reliable in the presence of data poisoning?*

As a solution, we propose *reliable prediction sets* (RPS): the first efficient method for constructing prediction sets that are more reliable under data poisoning where adversaries can modify, add, and delete datapoints from the training and calibration sets. Our approach consists of two key components (Figure 1): First (**i**), we introduce smoothed score functions that reliably aggregate predictions from classifiers trained on distinct partitions of the training data, improving reliability under training poisoning.
Second (**ii**), we calibrate multiple prediction sets on disjoint subsets of the calibration data and construct a majority prediction set that includes classes only when a majority of the independent prediction sets agree, improving reliability under calibration poisoning. Using both strategies (**i**) and (**ii**), RPS effectively reduces the influence of individual datapoints during training and calibration.

¹Project page: https://www.cs.cit.tum.de/daml/reliable-conformal-prediction/

Figure 1: Conformal prediction (CP) is not reliable under poisoning (orange) of training and calibration data, undermining its practical utility in safety-critical applications. As a solution, we propose *reliable prediction sets* (RPS): a novel approach for constructing more reliable prediction sets. We (**i**) aggregate predictions of classifiers trained on distinct partitions, and (**ii**) merge multiple prediction sets $C_i(x) = \{y : s(x, y) \geq \tau_i\}$ calibrated on separate partitions into a majority prediction set that includes classes only if a majority of the prediction sets $C_i$ agree. This way RPS reduces the influence of individual datapoints while preserving the coverage guarantee of conformal prediction on clean data.

We further derive certificates, i.e. provable guarantees for the reliability of RPS under worst-case poisoning. We experimentally validate the effectiveness of our approach on image classification tasks, demonstrating strong reliability under worst-case poisoning while maintaining utility and empirically preserving the coverage guarantee of prediction sets on clean data. Our main contributions are:

- We propose *reliable prediction sets* (RPS), the first scalable and efficient method for making conformal prediction more reliable under training and calibration poisoning.
- We derive novel certificates that guarantee the reliability of RPS under worst-case data poisoning attacks, including guarantees against label flipping attacks.
- We thoroughly evaluate our method and verify our theorems on image classification tasks.

2 RELATED WORK

**Prediction set ensembles.** Ensembles of prediction sets are studied in the uncertainty-set literature beyond machine learning (Cherubin, 2019; Solari & Djordjilović, 2022; Gasparin & Ramdas, 2024), e.g. to reduce the effect of randomness. Our work instead proposes a method to improve the reliability of conformal prediction under worst-case training and calibration poisoning.

**Conformal prediction under evasion.** Most works regarding reliable conformal prediction focus on evasion threat models, i.e. adversarial perturbations of the test data. They typically build upon randomized smoothing (Cohen et al., 2019) to certify robustness against evasion attacks (Gendler et al., 2022; Yan et al., 2024; Zargarbashi et al., 2024), or use neural-network-specific verification (Jeary et al., 2024). Ghosh et al. (2023) introduce a probabilistic notion as an alternative to worst-case evasion attacks. Unlike prior work on evasion, we consider poisoning threat models.

**Conformal prediction under poisoning.** Despite emerging poisoning attacks (Li et al., 2024), the few existing attempts to improve reliability consider other reliability notions.
Most works only certify the coverage guarantee under calibration poisoning (Park et al., 2023; Zargarbashi et al., 2024; Kang et al., 2024). Others study calibration poisoning empirically (Einbinder et al., 2022), under specific label noise (Penso & Goldberger, 2024), or consider distribution shifts between calibration and test data (Cauchois et al., 2020). Zargarbashi et al. (2024) consider modifications to the calibration data, but their threat model does not support adversarial data insertion or deletion. Overall, none of the existing approaches considers pointwise reliability of prediction sets under threat models where adversaries can modify, add, or remove datapoints from both the training and calibration data.

**Robustness certification against data poisoning.** Most certification techniques for robust classification under poisoning consider other threat models, specific training techniques, or architectures (Rosenfeld et al., 2020; Tian et al., 2023; Sosnin et al., 2024). The strongest guarantees also partition the training data and aggregate predictions of classifiers trained on each partition (Levine & Feizi, 2021; Wang et al., 2022; Rezaei et al., 2023). However, all of these prior works only guarantee robust classification and are not directly applicable to certifying conformal prediction, since prediction sets (1) contain multiple classes, and (2) can be manipulated via poisoning during both training and calibration.

3 BACKGROUND AND PRELIMINARIES

We focus on classification tasks defined on an input space $\mathcal{X} = \mathbb{R}^d$ for a given finite set of classes $\mathcal{Y} = \{1, \ldots, K\}$. We model prediction set predictors as functions $C : \mathcal{X} \to 2^{\mathcal{Y}}$, which provide prediction sets as subsets $C(x) \subseteq \mathcal{Y}$ of the classes $\mathcal{Y}$ for any given datapoint $x \in \mathcal{X}$.

**Exchangeability.** Conformal prediction is a model-agnostic and distribution-free method for constructing prediction sets. It only requires that calibration and test points are exchangeable, meaning that their joint distribution is invariant under permutations. In this paper, we adopt the standard assumption (e.g. in image classification) that datapoints are i.i.d., which implies exchangeability. Specifically, we assume three datasets with datapoints sampled i.i.d. from the same distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$: a training set $D_{train}$, a calibration set $D_{calib} = \{(x_i, y_i)\}_{i=1}^n$, and a test set $D_{test}$.

**Conformal prediction.** Conformal prediction transforms any given black-box classifier $f : \mathcal{X} \to \mathcal{Y}$ into a prediction set predictor. We focus on split conformal prediction (Papadopoulos et al., 2002; Lei et al., 2018), the most widely used variant in machine learning. First, one trains a classifier $f$ on the training set and defines a score function $s(x, y)$ that measures conformity between samples $x$ and classes $y$. For example, homogeneous prediction sets (HPS) use the class probabilities of a given soft classifier, $s(x, y) = f_y(x)$ (Sadinle et al., 2019). Then, one computes conformal scores $S = \{s(x_i, y_i)\}_{i=1}^n$ for the samples of the calibration set $D_{calib}$ using the score function $s$.
Finally, one] can construct prediction sets with the following coverage guarantee (Vovk et al., 1999; 2005): **Theorem 1.** _Given user-specified coverage probability_ 1 _−_ _α ∈_ (0 _,_ 1) _, test sample_ ( _x_ _n_ +1 _, y_ _n_ +1 ) _∈_ _D_ _test_ _exchangeable with D_ _calib_ _, and a score function s, we can construct the following prediction set_ _C_ ( _x_ _n_ +1 ) = _{y ∈Y_ : _s_ ( _x_ _n_ +1 _, y_ ) _≥_ _τ_ _},_ _which fulfills the following marginal coverage guarantee_ Pr[ _y_ _n_ +1 _∈C_ ( _x_ _n_ +1 )] _≥_ 1 _−_ _α_ _for τ_ = _Quant_ ( _α_ _n_ ; _S_ ) _. Specifically, the threshold τ is chosen as the α_ _n_ _-quantile of the conformal_ _scores S for a finite-sample corrected significance level α_ _n_ = _⌊α_ ( _n_ + 1) _−_ 1 _⌋/n._ 4 D ESIDERATA FOR RELIABLE CONFORMAL PREDICTION First we want to outline the desired properties that reliable conformal prediction should exhibit, setting clear goals for how uncertainty should be captured by prediction sets under data poisoning. **Data poisoning.** While exchangeability may hold for the data distribution _D_ in theory, the labeled data _D_ _l_ = ( _D_ _train_ _, D_ _calib_ ) can be poisoned to alter prediction sets in practice. We formalize this threat model, i.e. the strength of poisoning attacks, as a ball centered around labeled data: _B_ _r_ _t_ _,r_ _c_ ( _D_ _l_ ) = _D_ ˜ _l_ _| δ_ ( ˜ _D_ _train_ _, D_ _train_ ) _≤_ _r_ _t_ _, δ_ ( ˜ _D_ _calib_ _, D_ _calib_ ) _≤_ _r_ _c_ (1) � � where _δ_ is a distance metric between datasets, and _r_ _t_ _, r_ _c_ are the radii for training and calibration sets, respectively. Specifically, we define _δ_ as the number of inserted or deleted datapoints and label flips, modeling feature modification as two perturbations (deletion and insertion): _δ_ ( _D_ 1 _, D_ 2 ) = _|D_ 1 _⊖D_ 2 _| −|F_ ( _D_ 1 _, D_ 2 ) _|_ where _A ⊖_ _B_ = ( _A \ B_ ) _∪_ ( _B \ A_ ) is the symmetric set difference between two sets _A_ and _B_, _|S|_ denotes the cardinality of a set _S_, and _F_ ( _D_ 1 _, D_ 2 ) represents the set of datapoints with label flips _F_ ( _D_ 1 _, D_ 2 )= _{x | ∃_ _y_ 1 : ( _x, y_ 1 ) _∈D_ 1 _\ D_ 2 _, ∃_ _y_ 2 : ( _x, y_ 2 ) _∈D_ 2 _\ D_ 1 _}_ . Note that we count label flips only once, and feature perturbations can be of arbitrary magnitude. **Reliability under data poisoning.** Given a datapoint _x ∈X_ and a prediction set _C_ ( _x_ ) _⊆Y_, we define reliability of conformal prediction sets under data poisoning as follows: **Definition 1** (Reliability) **.** _We assume that reliability is compromised if adversaries can remove or_ _add a single class from or to the prediction set C_ ( _x_ ) _under our threat model (Equation 1). Specif-_ _ically, we call prediction sets_ _**coverage reliable**_ _if adversaries cannot shrink prediction sets C_ ( _x_ ) _by removing classes, and_ _**size reliable**_ _if adversaries cannot inflate prediction sets C_ ( _x_ ) _by adding_ _classes. We further call coverage and size reliable prediction sets_ _**robust**_ _._ Note that while the coverage guarantee (Theorem 1) provides a marginal guarantee over the entire distribution, our notion of coverage reliability is _pointwise_, i.e. applies to each prediction set _C_ ( _x_ ). 3 Accordingly, we propose the following novel desiderata for reliable conformal prediction: While desideratum **I** requires marginal coverage (Theorem 1), desideratum **II** ensures small sets, and together they prevent that reliability can be achieved trivially by predicting empty or full sets. 
4 DESIDERATA FOR RELIABLE CONFORMAL PREDICTION

First, we outline the desired properties that reliable conformal prediction should exhibit, setting clear goals for how uncertainty should be captured by prediction sets under data poisoning.

**Data poisoning.** While exchangeability may hold for the data distribution $\mathcal{D}$ in theory, the labeled data $D_l = (D_{train}, D_{calib})$ can be poisoned to alter prediction sets in practice. We formalize this threat model, i.e. the strength of poisoning attacks, as a ball centered around the labeled data:
$$B_{r_t, r_c}(D_l) = \big\{\tilde{D}_l \;\big|\; \delta(\tilde{D}_{train}, D_{train}) \leq r_t,\; \delta(\tilde{D}_{calib}, D_{calib}) \leq r_c\big\} \quad (1)$$
where $\delta$ is a distance metric between datasets, and $r_t, r_c$ are the radii for the training and calibration sets, respectively. Specifically, we define $\delta$ as the number of inserted or deleted datapoints and label flips, modeling feature modification as two perturbations (a deletion and an insertion):
$$\delta(D_1, D_2) = |D_1 \ominus D_2| - |F(D_1, D_2)|$$
where $A \ominus B = (A \setminus B) \cup (B \setminus A)$ is the symmetric set difference between two sets $A$ and $B$, $|S|$ denotes the cardinality of a set $S$, and $F(D_1, D_2)$ represents the set of datapoints with label flips, $F(D_1, D_2) = \{x \mid \exists y_1 : (x, y_1) \in D_1 \setminus D_2,\; \exists y_2 : (x, y_2) \in D_2 \setminus D_1\}$. Note that we count label flips only once, and feature perturbations can be of arbitrary magnitude.

**Reliability under data poisoning.** Given a datapoint $x \in \mathcal{X}$ and a prediction set $C(x) \subseteq \mathcal{Y}$, we define reliability of conformal prediction sets under data poisoning as follows:

**Definition 1** (Reliability). *We assume that reliability is compromised if adversaries can remove or add a single class from or to the prediction set $C(x)$ under our threat model (Equation 1). Specifically, we call prediction sets **coverage reliable** if adversaries cannot shrink prediction sets $C(x)$ by removing classes, and **size reliable** if adversaries cannot inflate prediction sets $C(x)$ by adding classes. We further call coverage and size reliable prediction sets **robust**.*

Note that while the coverage guarantee (Theorem 1) provides a marginal guarantee over the entire distribution, our notion of coverage reliability is *pointwise*, i.e. it applies to each prediction set $C(x)$. Accordingly, we propose the following novel desiderata for reliable conformal prediction: While desideratum **I** requires marginal coverage (Theorem 1), desideratum **II** ensures small sets, and together they prevent reliability from being achieved trivially by predicting empty or full sets. Desideratum **III** requires that reliability be certifiable, i.e. a provable guarantee under worst-case poisoning. Desideratum **IV** requires that algorithms can increase reliability as more data becomes available, since practical risks often grow with more data. Finally, desideratum **V** ensures efficiency in deployment, where reproducibility requires stability under recomputation.

5 RELIABLE CONFORMAL PREDICTION SETS

Guided by our desiderata for reliable conformal prediction, we introduce **reliable conformal prediction sets** (RPS): the first method for provably reliable conformal prediction under training and calibration poisoning (Figure 1). The first component of RPS (**i**) reliably aggregates classifiers trained on $k_t$ disjoint partitions of the training data. The second component of RPS (**ii**) constructs reliable prediction sets by merging sets calibrated separately on $k_c$ disjoint partitions of the calibration data. Intuitively, larger $k_t$ increases reliability against training poisoning and larger $k_c$ increases reliability against calibration poisoning. We provide detailed instructions in Algorithm 1 and Algorithm 2.

5.1 CONFORMAL SCORE FUNCTIONS RELIABLE UNDER TRAINING DATA POISONING

First, our goal is to derive a conformal score function that is reliable under poisoning of the training data. This is challenging since the score function has to quantify agreement between samples and classes while maintaining exchangeability of conformal scores between calibration and test data. To overcome this challenge we propose to (1) partition the training data into $k_t$ disjoint sets, (2) train separate classifiers on each partition, and (3) design a score function that counts the number of classifiers voting for a class $y$ given a sample $x$. Since deleting or inserting one datapoint from or into the training set only affects a single partition and thus a single classifier, this procedure effectively reduces the influence of individual datapoints on the score function.

**Training data partitioning.** To prevent a simple reordering of the dataset from affecting all partitions simultaneously, we partition the training data in a way that is invariant to its order. We assign datapoints to partitions using a hash function defined directly on $x$; for images, for example, we use the sum of their pixel values. This technique has previously been shown to induce certifiable robustness in the context of image classification (Levine & Feizi, 2021). Given a deterministic hash function $h : \mathcal{X} \to \mathbb{Z}$, we define the $i$-th partition of the training set as
$$P_i^t = \{(x_j, y_j) \in D_{train} : h(x_j) \equiv i \pmod{k_t}\}.$$
Then we deterministically train $k_t$ classifiers $f^{(i)} : \mathcal{X} \to \mathcal{Y}$ on the partitions $P_1^t, \ldots, P_{k_t}^t$ separately.

**Smoothed score function.** We now define our score function, which measures agreement between a sample $x$ and a class $y$ by counting the number of classifiers $f^{(i)}$ voting for class $y$ given $x$:
$$s(x, y) = \frac{e^{\pi_y(x)}}{\sum_{i=1}^{K} e^{\pi_i(x)}} \quad \text{with} \quad \pi_y(x) = \frac{1}{k_t}\sum_{i=1}^{k_t} \mathbb{1}\{f^{(i)}(x) = y\} \quad (2)$$
where $\pi_y(x)$ is the fraction of classifiers voting for class $y$ given sample $x$, and $K$ is the number of classes. Note that we introduce the additional softmax over the class distribution $\pi(x)$ to prevent overly large prediction sets in practice (desideratum **II**, see Section 7). The sketch below illustrates this construction.
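A minimal sketch of the partition-train-vote construction of Eq. (2), using the pixel-sum hash mentioned above; `train_fn` is a hypothetical deterministic training routine supplied by the user.

```python
import numpy as np

def pixel_hash(x):
    """Deterministic, order-invariant hash: sum of pixel values."""
    return int(np.sum(x))

def train_partitioned_classifiers(xs, ys, k_t, train_fn):
    """Partition D_train by hash and train one classifier per partition.
    `train_fn` (an assumed callable) returns a classifier f(x) -> class."""
    parts = [([], []) for _ in range(k_t)]
    for x, y in zip(xs, ys):
        i = pixel_hash(x) % k_t
        parts[i][0].append(x)
        parts[i][1].append(y)
    return [train_fn(px, py) for px, py in parts]

def smoothed_score(classifiers, x, num_classes):
    """Eq. (2): softmax over the fraction of classifiers voting per class."""
    votes = np.zeros(num_classes)
    for f in classifiers:
        votes[f(x)] += 1.0
    pi = votes / len(classifiers)   # pi_y(x), the vote fractions
    e = np.exp(pi)
    return e / e.sum()              # s(x, y) for all y at once
```

Because the hash depends only on $x$, reordering the dataset never changes which partition a datapoint lands in, and a single poisoned datapoint perturbs at most one classifier.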
**Algorithm 1** Reliable conformal score function
**Input:** $D_{train}$, $k_t$, deterministic training algorithm $T$
1: Split $D_{train}$ into $k_t$ disjoint partitions $P_i^t = \{(x_j, y_j) \in D_{train} : h(x_j) \equiv i \pmod{k_t}\}$
2: **for** $i = 1$ **to** $k_t$ **do**
3:   Train classifier $f^{(i)} = T(P_i^t)$ on partition $P_i^t$
4: Construct the voting function $\pi_y(x) = \frac{1}{k_t}\sum_{i=1}^{k_t} \mathbb{1}\{f^{(i)}(x) = y\}$
5: Smooth the voting function $s(x, y) = e^{\pi_y(x)} / \sum_{i=1}^{K} e^{\pi_i(x)}$
**Output:** Reliable conformal score function $s$

**Algorithm 2** Reliable conformal prediction sets
**Input:** $D_{calib}$, $k_c$, $s$, $\alpha$, $x_{n+1}$
1: Split $D_{calib}$ into $k_c$ disjoint partitions $P_i^c = \{(x_j, y_j) \in D_{calib} : h(x_j) \equiv i \pmod{k_c}\}$
2: **for** $i = 1$ **to** $k_c$ **do**
3:   Compute scores $S_i = \{s(x_j, y_j)\}_{(x_j, y_j) \in P_i^c}$
4:   Compute the $\alpha_{n_i}$-quantile $\tau_i$ of the scores $S_i$
5:   Construct the prediction set for quantile $\tau_i$: $C_i(x_{n+1}) = \{y : s(x_{n+1}, y) \geq \tau_i\}$
6: Construct the majority vote prediction set $C^M(x_{n+1}) = \{y : \sum_{i=1}^{k_c} \mathbb{1}\{y \in C_i(x_{n+1})\} > \hat{\tau}(\alpha)\}$
**Output:** Reliable conformal prediction set $C^M$

For any function to be considered a valid score function for conformal prediction, it has to maintain exchangeability of conformal scores between calibration and test data (Angelopoulos et al., 2021).

**Lemma 1.** *The smoothed score function in Equation 2 is a valid conformal score function.*

*Proof.* We use one function to score all points, independent of other datapoints and of which dataset they belong to (and where in the dataset). Thus, given exchangeable data, scores computed by our smoothed score function remain exchangeable. Therefore $s$ of Equation 2 is a valid score function. □

Lemma 1 implies that the coverage guarantee (Theorem 1) holds on clean data when using our smoothed score function (desideratum **I**). Intuitively, our score function quantifies uncertainty by the number of votes from multiple classifiers (instead of the logits of one classifier). As long as classifiers are trained on isolated partitions, we can reduce the influence of individual datapoints on the conformal scores. We summarize the smoothed score function in Algorithm 1.

5.2 MAJORITY PREDICTION SETS RELIABLE UNDER CALIBRATION DATA POISONING

We now derive prediction sets that are reliable against calibration poisoning. This is challenging since the prediction sets must also achieve marginal coverage on clean data (desideratum **I**) without inflating the set size (desideratum **II**). We propose to (1) partition the calibration data into $k_c$ disjoint sets, (2) compute separate prediction sets based on the conformal scores on each partition, and (3) merge the resulting prediction sets via majority voting.
This improves reliability since adversaries have to poison multiple partitions to alter the majority vote. We further show that such majority prediction sets achieve marginal coverage and do not grow too much in size in practice (Section 7).

**Calibration data partitioning.** We partition the calibration data as follows: given the hash function $h$, we define the $i$-th partition of the calibration set as $P_i^c = \{(x_j, y_j) \in D_{calib} : h(x_j) \equiv i \pmod{k_c}\}$. We then use a (potentially reliable) conformal score function $s$ to compute the conformal scores $S_i = \{s(x_j, y_j)\}_{(x_j, y_j) \in P_i^c}$ on each partition $P_i^c$. We can then determine the $\alpha_{n_i}$-quantiles of the separate conformal scores, $\tau_i = \mathrm{Quant}(\alpha_{n_i}; S_i)$, where $n_i = |P_i^c|$ is the size of the $i$-th partition.

**Majority prediction sets.** We now propose prediction sets that are provably reliable under calibration poisoning. Given a new datapoint $x_{n+1} \in D_{test}$, we construct $k_c$ prediction sets, one per partition, as $C_i(x_{n+1}) = \{y : s(x_{n+1}, y) \geq \tau_i\}$. We then construct a prediction set composed of all classes that appear in the majority of the *independent* prediction sets (see Algorithm 2):
$$C^M(x_{n+1}) = \Big\{y : \sum_{i=1}^{k_c} \mathbb{1}\{y \in C_i(x_{n+1})\} > \hat{\tau}(\alpha)\Big\} \quad (3)$$
with quantile function $\hat{\tau}(\alpha) = \max\{x \in [k_c] : F(x) \leq \alpha\}$ for $[k_c] = \{0, \ldots, k_c\}$, where $\hat{\tau}(\alpha)$ is the inverse of the CDF $F$ of the Binomial distribution $\mathrm{Bin}(k_c, 1 - \alpha)$. Intuitively, we select the required majority $\hat{\tau}(\alpha)$ such that the sum over the $k_c$ Bernoulli random variables $\mathbb{1}\{y \in C_i(x_{n+1})\}$ (each with success probability at least $1 - \alpha$) is at most $\hat{\tau}(\alpha)$ with probability at most $\alpha$. Note that for $k_c = 1$ we have $\hat{\tau}(\alpha) = 0$, for which $C^M$ reduces to vanilla conformal prediction. Notably, such majority prediction sets achieve marginal coverage on clean data (proof in Appendix D):

**Theorem 2.** *Given any conformal score function $s$ and a test sample $(x_{n+1}, y_{n+1}) \in D_{test}$ exchangeable with $D_{calib}$, the majority prediction set (Equation 3), constructed from sets calibrated on disjoint partitions, achieves marginal coverage on clean data:* $\Pr[y_{n+1} \in C^M(x_{n+1})] \geq 1 - \alpha$.
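A minimal sketch of the majority vote of Eq. (3), using SciPy for the Binomial CDF; the function name is hypothetical.

```python
import numpy as np
from scipy.stats import binom

def majority_prediction_set(per_partition_sets, alpha):
    """Eq. (3): include class y iff it appears in more than tau_hat(alpha)
    of the k_c per-partition prediction sets C_i.
    per_partition_sets: list of k_c iterables of class indices."""
    k_c = len(per_partition_sets)
    xs = np.arange(k_c + 1)
    # tau_hat(alpha) = max{x in {0,...,k_c} : F(x) <= alpha},
    # with F the CDF of Bin(k_c, 1 - alpha)
    feasible = xs[binom.cdf(xs, k_c, 1.0 - alpha) <= alpha]
    tau_hat = int(feasible.max()) if feasible.size else 0
    counts = {}
    for C in per_partition_sets:
        for y in C:
            counts[y] = counts.get(y, 0) + 1
    return sorted(y for y, c in counts.items() if c > tau_hat)
```

As a sanity check, for $k_c = 1$ we get $F(0) = \alpha \leq \alpha$ and $F(1) = 1 > \alpha$, so $\hat{\tau}(\alpha) = 0$ and the code recovers vanilla conformal prediction, consistent with the remark above.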
Idea Generation Category: Conceptual Integration
ofuLWn8DFZ
# RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph

**Siru Ouyang**¹*, **Wenhao Yu**², **Kaixin Ma**², **Zilin Xiao**³, **Zhihan Zhang**⁴, **Mengzhao Jia**⁴, **Jiawei Han**¹, **Hongming Zhang**², **Dong Yu**²
¹University of Illinois Urbana-Champaign, ²Tencent AI Seattle Lab, ³Rice University, ⁴University of Notre Dame
siruo2@illinois.edu

ABSTRACT

Large Language Models (LLMs) excel in code generation yet struggle with modern AI software engineering tasks. Unlike traditional function-level or file-level coding tasks, AI software engineering requires not only basic coding proficiency but also advanced skills in managing and interacting with code repositories. However, existing methods often overlook the need for repository-level code understanding, which is crucial for accurately grasping the broader context and developing effective solutions. On this basis, we present RepoGraph, a plug-in module that manages a repository-level structure for modern AI software engineering solutions. RepoGraph offers the desired guidance and serves as a repository-wide navigation aid for AI software engineers. We evaluate RepoGraph on SWE-bench by plugging it into four different methods across two lines of approaches, where RepoGraph substantially boosts the performance of all systems, leading to *a new state-of-the-art* among open-source frameworks. Our analyses also demonstrate the extensibility and flexibility of RepoGraph by testing on another repo-level coding benchmark, CrossCodeEval. Our code is available at https://github.com/ozyyshr/RepoGraph

1 INTRODUCTION

Recent advancements in large language models (LLMs) have showcased their powerful capabilities across various natural language processing tasks (OpenAI, 2023; Anil et al., 2023; Dubey et al., 2024), and now coding-specific LLMs are emerging to tackle complex software engineering challenges (Hou et al., 2023; Fan et al., 2023), such as Code-Llama (Rozière et al., 2023) and StarCoder (Li et al., 2023a). These coding-specific LLMs are capable of assisting users with various software engineering tasks, even achieving human-level performance on many function-level coding tasks, such as program synthesis (Chen et al., 2021; Austin et al., 2021), code annotation (Yao et al., 2019), bug fixing (Tufano et al., 2019), and code translation (Rozière et al., 2020).

Real-world software engineering often extends beyond single functions or self-contained code files. Applications are typically built as repositories containing multiple interdependent files, modules, and libraries (Bairi et al., 2024). These complex structures require a holistic understanding of the entire codebase to perform tasks such as code completion (Shrivastava et al., 2023; Ding et al., 2023), feature addition (Liang et al., 2024), or issue resolving (Jimenez et al., 2024). Recent benchmarks like SWE-bench (Jimenez et al., 2024) have been proposed to evaluate LLMs on real-world GitHub issues. They require LLMs to modify the repository to resolve an issue, either by fixing a bug or introducing a new feature.
This task is particularly challenging because it requires navigating complex codebases, understanding intricate dependencies between code files, and ensuring that changes integrate seamlessly without introducing new issues, which highlights the difficulties in scaling from function-level to repository-level understanding, as illustrated in Figure 1.

*Work done during internship at Tencent AI Seattle Lab.

Figure 1: The illustration of *(a) a function-level coding problem* from HumanEval (Chen et al., 2021) and *(b) a repository-level coding problem* from SWE-bench (Jimenez et al., 2024). Panel (a) shows the input text "Write a python function to find the first repeated character in a given string." together with the reference solution:

```python
def first_repeated_char(str1):
    for index, c in enumerate(str1):
        if str1[:index + 1].count(c) > 1:
            return c
    return "None"
```

Panel (b) lists repository unit tests such as test_coord_matrix, test_cdot, and test_arith_oper.

A key step in addressing repository-level tasks is to understand the structure of a repository and identify related code. To achieve this, retrieval-augmented generation (RAG) and its variants (Xiao et al., 2023; Zhang et al., 2023; Phan et al., 2024; Wu et al., 2024) have been leveraged, in a procedural manner, to first retrieve relevant code files across the repository, providing context for LLMs for further editing. However, indexing at the file level can only identify semantically similar, not genuinely related, code snippets. Instead of using RAG, recent approaches like Agentless (Xia et al., 2024) construct a skeletal format for each file and directly prompt LLMs to identify relevant files and code lines. However, this method still treats code repositories as flat documents (Zhang et al., 2024) and thus suffers from ignoring repository structure, such as the intricate inter-dependencies across files. An alternative approach is to design agent frameworks (Yang et al., 2024; Wang et al., 2024), which enable LLMs to interact with repositories using actions. While LLM agents can freely determine the next action based on current observations, without a grasp of the global repository structure they tend to focus narrowly on specific files, resulting in local optima. Addressing these limitations requires going beyond semantic matching and developing techniques that enable a deeper understanding of the codebase structure. This will allow LLMs to leverage fine-grained context across multiple files and function calls, facilitating more informed, repository-wide decision-making for coding tasks.

Motivated by this, we propose RepoGraph, a *plug-in* module designed to help LLM-based AI programmers leverage the code structure of *an entire repository*. RepoGraph is a graph structure that operates at the line level, offering a more fine-grained approach compared to previous file-level browsing methods. Each node in the graph represents a line of code, and edges represent the dependencies between code definitions and references. RepoGraph is constructed via code line parsing and encodes a structured representation of the entire repository. Sub-graph retrieval algorithms are then used to extract ego-graphs, which represent the relationships of a central node (in our case, specific keywords). These ego-graphs can be smoothly integrated with both procedural and agent frameworks, offering key clues that provide a more comprehensive context for LLMs to solve real-world software engineering problems.
To assess RepoGraph's effectiveness and versatility as a plug-in module, we integrate it with four existing software engineering frameworks and evaluate its performance on SWE-bench, a recent benchmark for AI software engineering. Experimental results show that RepoGraph boosts the success rate of existing methods for both agent and procedural frameworks, achieving an average relative improvement of 32.8%. We also test RepoGraph on CrossCodeEval to verify its transferability to general coding tasks that require repository-level code understanding. Additionally, we systematically analyze different sub-graph retrieval algorithms and integration methods. Together with error analyses, we hope to shed light on future work targeting modern AI software engineering.

2 RELATED WORKS

2.1 LLM-BASED METHODS FOR AI SOFTWARE ENGINEERING

Recently, there has been a significant increase in research focused on AI-driven software engineering, which can be broadly categorized into two primary approaches: (i) LLM agent-based frameworks and (ii) SWE-featured procedural frameworks. While this field has advanced rapidly, with many methods released as proprietary solutions for industry applications (Cognition, 2024), our related work section concentrates specifically on open-source frameworks.

**LLM agent-based frameworks** equip large language models (LLMs) with a set of predefined tools, allowing agents to iteratively and autonomously perform actions, observe feedback, and plan future steps (Yang et al., 2024; Zhang et al., 2024; Wang et al., 2024; Cognition, 2024; Ouyang et al., 2024; Tang et al., 2025). While the exact set of tools may vary across agent frameworks, they typically include capabilities such as opening, writing, or creating files, searching for code lines, running tests, and executing shell commands. To solve a problem, agent-based approaches involve multiple actions, with each subsequent turn depending on the actions taken in previous turns and the feedback received from the environment. For example, SWE-agent (Yang et al., 2024) facilitates interactions with the execution environment through a specially designed agent-computer interface (ACI), providing actions for "search and navigation", "file viewer and editor", and "context management". Another work, AutoCodeRover (Zhang et al., 2024), offers fine-grained search methods that give LLM agents better context without an execution process; specifically, it supports class- and function-level code search. OpenDevin (Wang et al., 2024), initiated after Devin (Cognition, 2024), is a community-driven platform that integrates widely used agent systems and benchmarks. The action space design of OpenDevin is highly flexible, requiring LLM agents to generate code on the fly.

**SWE-featured procedural frameworks** typically follow a two-step *Localize/Search-Edit* approach, as seen in the existing literature (Zhang et al., 2023; Wu et al., 2024; Liang et al., 2024; Xia et al., 2024). The *localize* step focuses on identifying relevant code snippets, while the *edit* step involves completing or revising the code. Some works introduce additional steps to further enhance performance, such as the *Search-Expand-Edit* approach (Phan et al., 2024). Retrieval (Lewis et al., 2020) is a popular technique for localization, allowing models to search for relevant code snippets in large repositories by treating issue descriptions as queries and code snippets as indexed data.
Some approaches use a sliding window to ensure completeness (Zhang et al., 2023). Besides, Agentless (Xia et al., 2024) is a recently developed method that uses LLMs to directly identify relevant elements for editing within code repositories. It first recursively traverses the repository structure to generate a format that aligns files and folders vertically, with indents for sub-directories. This structure and the issue description are then input into the LLM, which performs a hierarchical search to identify the top-ranked suspicious files requiring further inspection or modification.

2.2 REPOSITORY-LEVEL CODING CAPABILITY

The evaluation of coding capabilities in AI systems has traditionally focused on function-level or line-level assessments (Lu et al., 2021; Chen et al., 2021; Austin et al., 2021), where individual code snippets or isolated functions are the primary units of analysis. Unlike previous studies, SWE-bench (Jimenez et al., 2024) highlights the trend of repository-level coding, driven by recent advances of coding-specific LLMs (Guo et al., 2024; Li et al., 2023b). It reflects the growing user demand to understand and contribute to entire projects rather than isolated functions (Ouyang et al., 2023), as well as solving real-world problems in an end-to-end and automatic manner.

Table 1: Comparison between our approach RepoGraph and existing methods for representing the repository on various aspects. *RepoUnderstander (Ma et al., 2024) and CodexGraph (Liu et al., 2024) are concurrent works to ours.

| Model | Line-level | File-level | Repo-level |
|---|---|---|---|
| DraCo | ✗ | ✓ | ✗ |
| Aider | ✓ | ✗ | ✗ |
| RepoUnderstander* | ✗ | ✓ | ✓ |
| CodexGraph* | ✗ | ✓ | ✓ |
| RepoGraph | ✓ | ✓ | ✓ |

Long before the LLM era, the software engineering community has been studying the navigation of code repositories, including using eye-tracking data (Busjahn et al., 2015) and exploring interconnections (Begel et al., 2010). Later, pre-trained code LLMs incorporated repository-level information such as file dependencies. But tasks at the repository level often involve more intricate call relationships within their context. Recent works like RepoCoder (Zhang et al., 2023) and RepoFuse (Liang et al., 2024) have started integrating RAG modules to harness additional information from repositories. Building on this, subsequent research has focused on embedding repository-level context into their methodologies. For instance, DraCo (Cheng et al., 2024) introduces importing relationships between files, CodePlan maintains the code changes made by LLMs with incremental dependency analysis, while Aider (Gauthier, 2024) employs PageRank (Page, 1999) to identify the most significant contextual elements. RepoUnderstander (Ma et al., 2024) and CodexGraph (Liu et al., 2024) model code files as a knowledge graph. Despite similarities in representation, methods vary in how they retrieve information from these structures and utilize it for downstream tasks. Table 1 summarizes the differences between these methods and RepoGraph. RepoGraph surpasses previous approaches by effectively integrating context at the line, file, and repository levels.

3 REPOGRAPH

This section introduces RepoGraph, a novel plug-in module that can be seamlessly integrated into existing research workflows for both agent-based and procedural frameworks.
The primary goal of RepoGraph is to provide a structured way to analyze and interact with complex codebases, enabling detailed tracing of code dependencies, execution flow, and structural relationships across the repository. In the following sections, we will provide a detailed description of RepoGraph's construction, its underlying representation, and its utility across various scenarios. The overall architecture is depicted in Figure 2, highlighting its key components and operational flow.

3.1 CONSTRUCTION

Given a repository-level coding task, the first step is to carefully examine the repository structure so that the necessary information can be collected. The input for RepoGraph construction is a repository, i.e., a collection of its folders and files, while the output is a structured graph, where each node is a code line, and each edge represents the dependencies in between. RepoGraph enables tracing back to the root cause of the current issue and gathering dependent code context to help solve the problem. The construction process of RepoGraph can be divided into three key steps.

**Step 1: Code line parsing.** We first traverse the entire repository using a top-down approach to identify all code files as candidates for next-step parsing. This is accomplished by filtering based on file extensions, retaining only those with relevant code file suffixes (e.g., .py) while excluding other file types (e.g., .git or requirements.txt), which might be noisy and irrelevant for coding tasks. For each code file, we utilize tree-sitter¹ to parse the code, leveraging its Abstract Syntax Tree (AST) framework. The AST provides a tree-based representation of the abstract syntactic structure of the source code, enabling the identification of key elements such as functions, classes, variables, types, and other definitions. While recognizing these definitions is crucial, tracing their usage and references throughout the code is equally important. Tree-sitter facilitates this by capturing the definitions and tracking where they are utilized or referenced within the codebase. For example, in Figure 2, we not only identify definitions like class Model and its inherent methods but also references like self.validate_input_units(). After processing each line of code with tree-sitter, we selectively retain lines that involve function calls and dependency relations, discarding extraneous information. Our focus is primarily on _functions_ and _classes_, as these represent the core structural components of the code. By concentrating on these elements and their interrelationships, RepoGraph optimizes the analysis process by excluding less significant details, such as individual variables, which tend to be redundant and less relevant for further processing.

¹ https://pypi.org/project/tree-sitter-languages/

**Step 2: Project-dependent relation filtering.** After the previous parsing step, we obtain code lines with calling and dependency relations. However, not all relations are useful for fixing issues. Specifically, many default and built-in function/class calls could distract from the project-related ones. Therefore, we additionally introduce a filtering process that excludes repository-independent relations. Two types of such relations exist: (i) _global relations_ refer to Python standard and built-in functions and classes;
(ii) _local relations_ are introduced by third-party libraries, which are specific to the current code file. For global relations, we maintain a comprehensive list of methods from standard and built-in libraries, excluding any identified relations from this list. The list is empirically constructed by gathering methods of the builtins library and default methods such as "list" and "tuple". For example, in Figure 2, the line inputs = len(input) is excluded since "len" is a default method. For local relations, we parse import statements in the code to identify third-party methods that are included, and exclude them accordingly. A detailed illustration of this step can be found in Figure 11.

**Step 3: Graph organization.** At this stage, we construct RepoGraph using code lines as the fundamental building units. The graph can be represented as _G_ = {_V_, _E_}, where _V_ represents the set of nodes, with each node corresponding to a line of code, and _E_ represents the set of edges, capturing the relationships between these code lines. Each node in _V_ contains attributes to represent its meta-information, such as line_number, file_name, directory, etc. Additionally, we classify each code line as either a "definition" (def) or a "reference" (ref) to a particular module. A "def" node corresponds to the line where a function, class, or variable is initially defined, while a "ref" node indicates a code line where this entity is referenced or invoked elsewhere in the code. Similar to soft links to "def" nodes, "ref" nodes also represent other variations of invoking methods. For example, in Figure 2, the class definition _"class Model"_ would be a "def" node, while any subsequent usages of Model would be "ref" nodes. Each "def" node may have multiple "ref" nodes associated with it, as a single function or class can be referenced in various places throughout the code. We define two types of edges: _E_invoke_ and _E_contain_. The triple (_V_₁, _E_contain_, _V_₂) denotes that _V_₁ (e.g., a function definition) contains another module _V_₂ (e.g., an internal function or class). The edge _E_contain_ typically connects a "def" node to its internal components. In contrast, _E_invoke_ represents an invocation relationship, usually connecting a "def" node to a "ref" node, where the reference node includes a dependency on the definition.

3.2 UTILITY

The constructed RepoGraph serves as a structured representation of the current repository and facilitates better collection and aggregation of related information. For information collection based on RepoGraph, specifically, we use one search term at a time for subgraph retrieval. Search terms are the key functions or classes determined by current states. For example, "separability_matrix" is the initial search term in Figure 2(c). We retrieve the _k_-hop ego-graphs (Hu et al., 2024) with the search term at the center. The ego-graph is crucial for solving the problem because it focuses on the immediate relationships (Jin et al., 2024) around the search term, capturing the relevant dependencies and interactions within the repository, which is key to understanding the functional context. Additionally, the retrieved content explicitly contains information at both the method and line levels and implicitly expresses the grouping at the file level. This process is abstracted via _search_repograph()_ as illustrated in the middle of Figure 2; a minimal sketch of this flow is shown below.
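To ground Steps 1-3 and the ego-graph retrieval, here is a minimal sketch, assuming networkx, of how the graph organization and _search_repograph()_ flow could look. The node IDs, attribute names, helper functions, and example entries mirror the Figure 2 running example but are our illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of RepoGraph-style graph organization plus k-hop ego-graph
# retrieval. Node/edge attribute names are illustrative assumptions.
import networkx as nx

def add_line_node(G, node_id, line_number, file_name, directory, kind):
    """Register one code line as a node; kind is 'def' or 'ref'."""
    G.add_node(node_id, line_number=line_number, file_name=file_name,
               directory=directory, kind=kind)

G = nx.DiGraph()
# A "def" node for `class Model`, an internal method it contains,
# and a "ref" node where Model is invoked elsewhere.
add_line_node(G, "models.py:12:Model", 12, "models.py", "astropy/modeling", "def")
add_line_node(G, "models.py:48:validate_input_units", 48, "models.py",
              "astropy/modeling", "def")
add_line_node(G, "separable.py:77:Model", 77, "separable.py",
              "astropy/modeling", "ref")

# E_contain: a definition contains an internal component.
G.add_edge("models.py:12:Model", "models.py:48:validate_input_units",
           relation="contain")
# E_invoke: a definition is invoked at a reference site.
G.add_edge("models.py:12:Model", "separable.py:77:Model", relation="invoke")

def search_repograph(G, search_term, k=1):
    """Retrieve the k-hop ego-graphs around nodes matching a search term,
    flattened into (node, attributes) records for the LLM context."""
    centers = [n for n in G.nodes if search_term in n]
    if not centers:
        return []
    sub = nx.compose_all(
        [nx.ego_graph(G, c, radius=k, undirected=True) for c in centers]
    )
    return [(n, G.nodes[n]) for n in sub.nodes]

print(search_repograph(G, "Model", k=1))
```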
The retrieved _k_-hop ego-graph will be flattened for further processing. We also tried other variants for integration, discussed later in Section 5.2 with their performance reported in Table 4. We narrate how RepoGraph can be plugged into existing representative research lines in the following.

Idea Generation Category:
0: Conceptual Integration
dw9VUsSHGB
# DO AS WE DO, NOT AS YOU THINK: THE CONFORMITY OF LARGE LANGUAGE MODELS

**Zhiyuan Weng**¹∗, **Guikun Chen**¹∗, **Wenguan Wang**¹†
¹ Zhejiang University

ABSTRACT

Recent advancements in large language models (LLMs) revolutionize the field of intelligent agents, enabling collaborative multi-agent systems capable of tackling complex problems across various domains. However, the potential of conformity within these systems, analogous to phenomena like conformity bias and groupthink in human group dynamics, remains largely unexplored, raising concerns about their collective problem-solving capabilities and possible ethical implications. This paper presents a comprehensive study on conformity in LLM-driven multi-agent systems, focusing on three aspects: the existence of conformity, the factors influencing conformity, and potential mitigation strategies. In particular, we introduce BENCHFORM, a new conformity-oriented benchmark, featuring reasoning-intensive tasks and five distinct interaction protocols designed to probe LLMs' behavior in collaborative scenarios. Several representative LLMs are evaluated on BENCHFORM, using metrics such as conformity rate and independence rate to quantify conformity's impact. Our analysis delves into factors influencing conformity, including interaction time and majority size, and examines how the subject agent rationalizes its conforming behavior. Furthermore, we explore two strategies to mitigate conformity effects, _i.e._, developing enhanced personas and implementing a reflection mechanism. Several interesting findings regarding LLMs' conformity are derived from empirical results and case studies. We hope that these insights can pave the way for more robust and ethically-aligned collaborative AI systems. Our benchmark and code are available at [BenchForm](https://github.com/Zhiyuan-Weng/BenchForm).

1 INTRODUCTION

**Background.** Advances in LLMs usher in a new era of multi-agent systems capable of tackling complex, multifaceted problems across domains [1–6]. As these systems evolve, they are increasingly considered for crucial roles in public policy analysis, social platform moderation, and even governance processes [7–10]. However, the integration of such systems into societal processes raises concerns about potential unintended consequences, particularly the susceptibility of these agents to cognitive biases akin to those observed in human group dynamics [11–15]. Of particular interest is the phenomenon of conformity, well-documented in social psychology [16–20], which may manifest in multi-agent systems with both constructive and problematic effects. While conformity can foster consensus and coordination among agents as they interact with humans and each other, it may also lead to detrimental herding behavior [21, 22], potentially compromising the reliability of agents' judgments on critical social issues such as voting, policy recommendations, or ethical decisions.

**Motivation.** While considerable research focuses on improving the overall performance of multi-agent systems (_e.g._, enhancing the expertise of individual agents [23–25] and integrating external knowledge [26–29]), a critical question remains unexplored: Do multi-agent systems function as expected, and more specifically, do they encounter issues that a single agent would not?
This inquiry is rooted in observations of conformity in human social behavior and group decision-making [11–15]. Just as human group dynamics can lead to phenomena like conformity bias and groupthink [30–34], multi-agent systems may exhibit analogous behaviors, potentially impacting their collective problem-solving capabilities or even posing considerable ethical issues. For example, even simple problems can be influenced by peer pressure or other factors, causing agents to abandon correct judgments in favor of majority opinions (Fig. 1). Despite some studies noting conformity in certain scenarios [35–37], a comprehensive investigation into this phenomenon within LLM-driven multi-agent environments is absent.

[Figure 1: An illustration of conformity.]

∗ The first two authors contribute equally to this work. † Corresponding Author: Wenguan Wang.

**Methodology.** In this work, we present a systematic study of conformity in LLM-driven multi-agent systems, addressing three fundamental questions: ❶ Does conformity exist in multi-agent collaboration? ❷ What are the factors influencing conformity? ❸ How can we mitigate the effects of conformity? To answer Question ❶, we introduce BENCHFORM (§2), a new conformity-oriented benchmark derived from the BIG-Bench Hard (BBH) dataset [38]. BENCHFORM incorporates reasoning-intensive tasks specifically selected for their relevance to conformity studies. Several representative LLMs, encompassing both proprietary and open-source models, are evaluated on BENCHFORM using five distinct interaction protocols (§2.2). These protocols are designed to probe LLMs' behavior in both short-term and long-term collaborative scenarios. Moreover, two conformity-oriented metrics (§3.2), conformity rate and independence rate, are proposed to quantitatively assess the impact of conformity. Building on the insights gained from our investigation (§3.3), we address Question ❷ by studying two factors (_i.e._, the interaction time and the majority size) that might influence conformity (§4.1 and §4.2) and conducting a behavioral study to elucidate how subject agents rationalize their conformity (§4.3). Finally, to answer Question ❸, we explore two preliminary strategies to mitigate conformity effects: developing enhanced personas for LLMs (§5.1), and implementing a reflection mechanism to encourage independent decision-making (§5.2). In addition, several potential directions for mitigating conformity effects are outlined for further research (§5.3). Through this comprehensive analysis, our goal is to highlight the presence of conformity in multi-agent systems while shedding light on its root factors and suggesting potential interventions.

**Contribution.** In a nutshell, our contributions are three-fold:
- We introduce BENCHFORM, a new benchmark for investigating conformity in multi-agent systems. BENCHFORM features reasoning-intensive tasks and interaction protocols designed to study conformity-related behavior, providing a strong basis for future research on LLM conformity.
- With the proposed BENCHFORM, we present a comprehensive empirical study on conformity in collaborative environments, by measuring the impact of conformity through three quantitative metrics: accuracy, conformity rate, and independence rate.
- We conduct an analysis of factors influencing conformity, examining both intrinsic model characteristics and extrinsic contextual variables.
We also explore mitigation strategies and discuss the implications of our findings for future research in AI ethics and collaborative AI systems.

2 BENCHFORM

This section introduces BENCHFORM, a reasoning-intensive benchmark designed to evaluate conformity in multi-agent collaborative environments. BENCHFORM encompasses a series of challenging reasoning tasks and is engineered to assess the formation and impact of relationships between agents over short-term and long-term interactions. Next, we first present the data source and collection procedure (§2.1). Then, we introduce five evaluation protocols specifically designed for studying conformity in multi-agent collaboration under different interaction scenarios (§2.2). Finally, we provide implementation details about the prompt configurations (§2.3).

2.1 DATA COLLECTION

BENCHFORM draws inspiration from sociological research demonstrating a positive correlation between task difficulty and conformity propensity [39]. We use the BIG-Bench Hard (BBH) dataset [38], known for its complex reasoning tasks, to compile our dataset. Two primary task categories are collected: **i**) logical and analytical reasoning, featuring clear, logically-derived correct answers; and **ii**) language and contextual understanding, introducing subjective elements with less clearly defined right or wrong answers. This design allows evaluation of whether subject agents trust their own reasoning or conform to group judgments, while also creating a nuanced environment for studying conformity in ambiguous contexts. To ensure uniform sample distribution, we employ a subsampling strategy [40], including up to 300 samples per task type. The resulting dataset comprises 3,299 multiple-choice questions. More details and data statistics are given in Appendix §B.1.

[Figure 2: An overview of the five protocols (§2.2) used to study conformity: (a) Raw, (b) Correct Guidance, (c) Wrong Guidance, (d) Trust, and (e) Doubt.]

2.2 PROTOCOLS

The simplest and most basic protocol for BENCHFORM involves only two entities: a questioner and a subject agent. The questioner presents a problem, and the subject agent responds directly. We call this the Raw Protocol (Fig. 2a). It serves as our baseline scenario to establish the agent's performance without interactions with additional agents. Based on this baseline protocol, we further devise two groups of interaction protocols that simulate various social scenarios. The first group involves a single round of discussion, focusing on short-term interactions and their immediate effects on the subject agent's decision-making, defined as:
- Correct Guidance Protocol (Fig. 2b). This scenario introduces additional agents alongside the subject agent. The questioner presents a problem to all agents. The additional agents provide _correct answers_ before the subject agent responds.
This setup is used to assess whether the subject agent is influenced by (or conforms to) correct information from peers.
- Wrong Guidance Protocol (Fig. 2c) is the _inverse_ of the Correct Guidance protocol. The key difference is that the additional agents provide _incorrect answers_ before the subject agent responds. This setup is used to evaluate whether the subject agent might conform to incorrect group consensus, even when the provided information contradicts the subject agent's own reasoning.

The second group examines the subject agent's behavior after social relationships are established through multiple rounds of discussion. This is inspired by previous work in social psychology [41] which suggests that an important factor in being a conformist within a group is _the level of trust one places in that group_. In particular, the Trust protocol is defined as:
- Trust Protocol (Fig. 2d) involves multiple interaction rounds. In initial rounds, additional agents provide correct answers. In the final round, these agents give an incorrect answer. This setup is used to examine whether the subject agent has developed "trust" in its peers due to their past accuracy and whether the subject agent will conform to peers' incorrect answer in the final round.

So far, one might wonder if a doubt relationship could also be formed between agents and how this relationship might influence the subject agent's behavior. The Doubt protocol is then defined as:
- Doubt Protocol (Fig. 2e) is the _inverse_ of the Trust protocol. In initial rounds, additional agents provide incorrect answers. In the final round, these agents give the correct answer. This setup is used to investigate whether the subject agent has developed "doubt" in its peers due to their past inaccuracy and whether the subject agent will exclude peers' correct answer in the final round.

[Figure 3: Visualization of the influence of (a) Trust and (b) Doubt protocols (§2.2) on the subject agent's decision-making process. These illustrations demonstrate how the subject agent develops a trust or doubt relationship with other agents, leading to answers based on these relationships rather than independent reasoning.]

Both the Trust and Doubt protocols draw on the setup of the Asch conformity experiments [42–44].
Participants are not informed about the correctness of their answers, allowing us to measure the influence of social pressure without external feedback. The additional agents provide the same answer, following the experimental setup of the Asch conformity experiments. Experiments with divergent opinions are elaborated in §4.2. Examples of the Trust and Doubt protocols are shown in Fig. 3. In a nutshell, all the proposed protocols are designed to mimic various social dynamics observed in human group behavior, such as peer pressure, trust building, and skepticism. By applying these protocols to LLM-based agents, we can systematically study how multi-agent systems might exhibit conformity or independence in different collaborative scenarios. In addition, these protocols allow us to draw parallels between agent behavior and well-documented human social phenomena, providing insights into the potential limitations and biases of multi-agent systems.

2.3 IMPLEMENTATION DETAILS

The Raw protocol involves a simple question-answer interaction between the questioner and the subject agent, without introducing additional agents. For the Correct Guidance and Wrong Guidance protocols, six additional agents are introduced to provide either correct or incorrect responses, respectively. The second group of protocols extends the question-answering process further, introducing several historical discussions on top of the original process. Our configuration draws inspiration from the seminal Asch conformity experiments [42, 44], where the subject agent is strategically positioned to respond last. Complete prompts of all protocols are given in Appendix §B.3.

3 CONFORMITY ON BENCHFORM

3.1 TARGET LLMS

We conduct experiments on 11 popular LLMs, including two closed-source ones (GPT-3.5 [45], GPT-4o [46]) and nine open-source LLMs (Llama3 [47], Llama3.1 [48], Gemma2 [49], and Qwen2 [50] series). Detailed model settings and complete experimental results of all LLMs are given in Appendix §B.2 and §C.1, respectively.

3.2 EVALUATION METRICS

Given that the subject agent is tested on a fixed QA set Q, we track the subject agent's response on Q under a certain protocol P. Each protocol P is represented by its initial for brevity (e.g., Raw is represented by R). Q✓^P and Q✗^P refer to the correctly answered and wrongly answered questions under a specific protocol P, respectively. Two metrics are devised to evaluate conformity:

Acc^P = |Q✓^P| / |Q|,  CR^P = |Q✗^P ∩ Q✓^R| / |Q✓^R|,  (1)

where Acc^P refers to the average accuracy and CR^P denotes the average conformity rate across Q under protocol P. As seen, CR^P represents the proportion of questions that are originally answered correctly but are answered incorrectly when P is applied. Hence, CR can serve as a quantitative measure of the subject agent's level of conformity. Note that since other agents provide correct answers as context under Correct Guidance, CR^C is defined with a slight variation:

CR^C = |Q✗^R ∩ Q✓^C| / |Q✗^R|.  (2)

Note that while CR^C reflects conformity tendencies, this characteristic could be beneficial when LLMs learn from group interactions. In addition, to measure the ability of the subject agent to make independent decisions, we further devise the independence rate (i.e., IR) metric as follows:
IR = |Q✓^T ∩ Q✓^D ∩ Q✓^R| / |Q✓^R|.  (3)

As seen, IR represents the proportion of questions that are answered correctly across protocols. Note that only the Trust and Doubt protocols are included, as long-term interactions are shown to be closely relevant to independent decision-making [41].

3.3 MAIN RESULTS AND FINDINGS

The largest version is selected as the representative for each LLM series. By evaluating them on BENCHFORM with the proposed interaction protocols, we have the following findings:

**Finding I: All the evaluated LLMs show a tendency to conform.** Table 1 shows high ∆P rates, indicating LLMs' susceptibility to group pressure. For instance, Gemma2-27B exhibits notable conformance, with ∆D reaching 38.6%. Even state-of-the-art LLMs like GPT-4o and Llama3.1-405B show substantial conformity, particularly in ∆T (i.e., GPT-4o: 22.6%; Llama3.1-405B: 2.5%) and ∆D (i.e., GPT-4o: 13.0%; Llama3.1-405B: 30.2%). Although Llama3.1-405B demonstrates strong resistance under the Correct Guidance protocol, it is vulnerable under the Doubt protocol. These results indicate that none of the evaluated LLMs are fully immune to all four interaction protocols.

Table 1: Results (%) of five series on BENCHFORM. Each protocol is represented by its initial for brevity. ∆P denotes |Acc^P − Acc^R|.

| Model | ∆C↓ | ∆W↓ | ∆T↓ | ∆D↓ |
|---|---|---|---|---|
| Gemma2 [49] | 24.1±0.1 | 22.8±1.1 | 9.5±0.3 | 38.6±0.2 |
| Llama3 [47] | 4.5±0.1 | 2.2±0.2 | 25.5±0.1 | 44.7±0.2 |
| Qwen2 [50] | 16.1±0.1 | 17.5±0.1 | 16.2±0.1 | 15.9±0.2 |
| GPT-4o [46] | 13.2±3.7 | 14.9±1.2 | 22.6±0.9 | 13.0±4.2 |
| Llama3.1 [48] | 1.0±0.1 | 2.5±0.2 | 2.5±0.5 | 30.2±0.2 |

We further explore which protocol is most likely to induce LLMs to make mistakes. As shown in Table 2, among the three protocols designed to mislead LLMs (Wrong Guidance, Trust, and Doubt), the Doubt protocol is the most effective in guiding LLMs into making errors, with CR^D surpassing that of the other protocols in most cases. For the five representative LLMs, the average CR^W, CR^T, and CR^D are 23.5%, 31.3%, and 47.2%, respectively. The higher rates of CR^D and CR^T compared to CR^W suggest that established relationships influence conformity. In addition, CR^D surpassing CR^T indicates that LLMs are more prone to establish doubt relationships than trust during previous discussions.

Table 2: Results (%) of CR under three misleading protocols on BENCHFORM.

| Model | CR^W | CR^T | CR^D↓ |
|---|---|---|---|
| Gemma2 [49] | 39.7±1.5 | 28.4±0.6 | 66.1±0.1 |
| Llama3 [47] | 14.7±0.7 | 44.4±0.1 | 69.9±0.1 |
| Qwen2 [50] | 28.9±0.4 | 30.5±0.6 | 30.0±0.2 |
| GPT-4o [46] | 24.4±0.6 | 37.9±0.1 | 26.6±1.1 |
| Llama3.1 [48] | 10.0±0.1 | 15.3±0.4 | 43.2±0.1 |
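For concreteness, here is a small sketch, our own rather than code from the paper, showing how Acc, CR, and IR in Eqs. (1)-(3) can be computed once each protocol's set of correctly answered question IDs is known; the names `metrics` and `correct` are illustrative.

```python
# Sketch: computing Acc, CR, and IR (Eqs. 1-3) from sets of question IDs
# that the subject agent answers correctly under each protocol.
def metrics(Q, correct):
    """`Q` is the set of all question IDs; `correct` maps a protocol
    initial ('R', 'C', 'W', 'T', 'D') to the IDs answered correctly."""
    R = correct["R"]
    acc = {p: len(s) / len(Q) for p, s in correct.items()}
    # CR^P: right under Raw but wrong under the misleading protocol P.
    cr = {p: len(R - correct[p]) / len(R) for p in ("W", "T", "D")}
    # CR^C is defined the other way: wrong under Raw, right under C.
    cr["C"] = len(correct["C"] - R) / len(Q - R)
    # IR: answered correctly across Raw, Trust, and Doubt.
    ir = len(correct["T"] & correct["D"] & R) / len(R)
    return acc, cr, ir
```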
**Finding II: Model size correlates positively with independence rates.** As depicted in Fig. 4, a clear trend emerges: as LLMs increase in size, their independence rates also rise. For instance, as the Qwen2 series scales from 7B to 72B parameters, its independence rate rises from 19.6% to 57.6%. Llama3.1-405B, the largest evaluated model, shows the second-highest independence rate at 56.1%. This might indicate that larger LLMs are more capable of making independent decisions.

[Figure 4: Results (%) of IR on BENCHFORM.]

Idea Generation Category:
1: Cross-Domain Application
st77ShxP1K
# LOOPED TRANSFORMERS FOR LENGTH GENERALIZATION

**Ying Fan**¹, **Yilun Du**², **Kannan Ramchandran**³, **Kangwook Lee**¹
¹ University of Wisconsin-Madison  ² Massachusetts Institute of Technology  ³ UC Berkeley

ABSTRACT

Recent work has shown that Transformers trained from scratch can successfully solve various arithmetic and algorithmic tasks, such as adding numbers and computing parity. While these Transformers generalize well on unseen inputs of the same length, they struggle with length generalization, i.e., handling inputs of unseen lengths. In this work, we demonstrate that looped Transformers with an _adaptive number of steps_ significantly improve length generalization. We focus on tasks with a known iterative solution, involving multiple iterations of a RASP-L operation, a length-generalizable operation that can be expressed by a finite-sized Transformer. We train looped Transformers using our proposed learning algorithm and observe that they learn highly length-generalizable solutions for various tasks.

1 INTRODUCTION

Most algorithmic tasks such as coding, writing mathematical proofs, and reasoning are defined with inputs of variable _length_. The length of an input often correlates with the difficulty of the problem instance. For example, the longer the input, the more difficult the problem tends to be. We say a model perfectly _length-generalizes_ if it can solve an algorithmic task on inputs of any length, even if it was only trained on data with inputs up to a finite length (Anil et al., 2022). Generally, it is hard to expect models to be trained on inputs with all possible lengths, and we need to rely on length generalization. Also, if a model can length-generalize, it means the model has truly learned the correct algorithmic solution to the task, not just a spurious solution that works only for certain lengths. Recently, many works on Large Language Models (LLMs) have shown that we can get more powerful AI models by scaling both compute and data at training time. This scaling approach has indeed succeeded in improving accuracies on various benchmarks. However, even the largest and latest LLMs like Achiam et al. (2023), trained on much of the existing text on the Internet, still struggle with length generalization (Wu et al., 2023; Anil et al., 2022; Lee et al., 2024). One possible cause is the particular computing model. LLMs are built mostly on the Transformer architecture (Vaswani et al., 2017). While Transformers can accept a variable length of inputs (that can be processed in parallel), they usually have a fixed depth. This might be sufficient for certain tasks, but not always. To learn a model that can effectively generalize to longer problems, it is important to consider architectures that can adaptively adjust the computational budget to the difficulty of the tasks (Anil et al., 2022; Du et al., 2022; 2024). One approach to achieve this is to explicitly generate intermediate output tokens, similar to writing down a scratchpad, which improves LLMs' capability for solving harder problems (Nye et al., 2021). In theory, LLMs may generate more scratchpad tokens representing intermediate computation when solving a more difficult task, indicating that they can allocate elastic computation according to the length and difficulty of the given instance. This approach can be learned by explicitly training a model on data with intermediate computation steps (Ling et al., 2017; Cobbe et al., 2021).
Alternatively, it can be achieved via Chain-of-Thought (CoT) reasoning with few-shot examples (Wei et al., 2022) or even in a zero-shot manner (Kojima et al., 2022). Notice that these approaches still use fixed-depth models. While these approaches help solve more complex reasoning tasks, they are still far from achieving near-perfect length generalization for simple algorithmic tasks. For instance, Lee et al. applied CoT for arithmetic tasks but observed that Transformers cannot length-generalize even for simple addition tasks (Lee et al., 2024). Recently, there has been growing interest in using recurrent architectures for reasoning (Dehghani et al., 2018; Bai et al., 2019; Bansal et al., 2022; Yang et al., 2024). Unlike standard RNN-type architectures that process the input sequence incrementally, one can consider a recurrent architecture that processes the entire input sequence multiple times, passing the intermediate output to the next iteration's input, possibly along with the original input. In particular, if the base model in each iteration is a Transformer, this model is called a Looped Transformer (Yang et al., 2024). The Looped Transformer can naturally break the limitation of the fixed depth in the standard Transformer architecture: _one can adjust the number of looped steps based on the computational complexity of the underlying algorithmic solution_.

Consider a problem set where 1) the problems can be solved by a loop of one RASP-L (Zhou et al., 2024a) program¹, i.e., each step in the loop can be performed by a decoder-only Transformer with a fixed depth; and 2) the number of steps needed in the loop depends on the problem's complexity, i.e., more difficult problems could potentially require more steps to solve. Under the length generalization scheme, we consider the number of steps to depend on the problem length, and define this problem set as _n_-RASP-L problems. For _n_-RASP-L problems, if we can learn these length-independent steps, we can utilize an adaptive number of steps to achieve length generalization.

[Figure 1: **Method Overview.** During training, we supervise the output of the model to match the target data only after a certain number of steps of applying the same decoder block, helping the model learn intermediate steps that can be reused and can handle input of arbitrary lengths. All grey blocks share the same parameters. Examples are from the Copy task with _n_ symbols. "#" indicates EOS, "*" indicates ignored output, and ">" indicates the end of the query (EOQ).]

Inspired by this observation, we study training Looped Transformer models for length generalization. Specifically, we consider a training setup where we do not require any intermediate supervision data (such as reasoning steps or scratchpad). We only assume access to end-to-end supervision (input and output) and the number of steps needed. Depending on the number of steps, we iteratively apply the same decoder block and then decode the final answer; see Figure 1 for illustration. At inference time, the model could either decide when to stop with predefined stopping criteria or stop when reaching the ground-truth number of steps. A minimal sketch of this looped forward pass is shown below.
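The following is a minimal PyTorch sketch of the weight-tied looped forward pass with input injection described above, under our own assumptions: the class name, the stand-in `block` module, and all dimensions are illustrative, not the paper's implementation.

```python
# Sketch of a looped forward pass with input injection (cf. Figure 1).
# `block` stands in for the paper's fixed-depth decoder-only Transformer
# block; every loop step reuses the same parameters.
import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    def __init__(self, vocab_size, d_model, block):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # NoPE: no positional embedding
        self.block = block                              # shared across all loop steps
        self.unembed = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, num_steps):
        x = self.embed(tokens)        # original input embeddings, kept for injection
        h = torch.zeros_like(x)
        for _ in range(num_steps):    # num_steps = T(n), or set by a stopping rule
            h = self.block(h + x)     # input injection at every step
        return self.unembed(h)        # FOP-style: decode all positions at once

# Toy usage with a stand-in block (a real setup would use causal attention layers).
block = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
model = LoopedTransformer(vocab_size=8, d_model=64, block=block)
logits = model(torch.randint(0, 8, (2, 10)), num_steps=5)  # (batch, T, vocab)
```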
Empirically, we show that looped Transformers with an adaptive number of steps can successfully length-generalize to longer lengths simply by appropriately adapting the number of loops at inference time, indicating that our approach encourages the model to implicitly learn the necessary steps to solve a task. Our contributions can be summarized as follows: **(1)** We first formally define _n_-RASP-L problems, and provide examples of _n_-RASP-L solutions to the Copy, Parity, and Addition tasks (Section 3); **(2)** We propose to learn _n_-RASP-L problems with Looped Transformers where we supervise the final answer in a step-dependent way, which enables us to use an adaptive number of steps depending on the problem complexity (Section 4); **(3)** Empirically, we show that our proposed method outperforms the baseline approaches in terms of length generalization performance (Section 6).

2 BACKGROUND

2.1 RASP-L

A decoder-only Transformer is a type of Transformer architecture that consists of only the decoder part of the original Transformer model introduced by Vaswani et al. (2017), where a causal mask is applied to the attention weights to prevent the model from attending to future tokens. RASP (Restricted Access Sequence Processing) (Weiss et al., 2021) is a computational model for the Transformer architecture in the form of a programming language. RASP-L (Zhou et al., 2024a) is a learnable subset of the RASP language. Some key points about RASP-L are:
- RASP-L programs accept an input sequence and return an output sequence of the same length for an _arbitrary length_, like decoder-only Transformers.
- The core operations in RASP-L include element-wise operations on sequences and a specific type of non-elementwise operation called kqv, which simulates a causal attention layer.
- RASP-L has restrictions on the allowed operations to ensure learnability: it does not allow arbitrary index arithmetic, and restricts operations on token indices to order comparisons and computing successor/predecessor.
- RASP-L _does not allow control flow_ statements like branching or loops. Programs must be straight-line code, with each line being a call to a core function or another RASP-L program.

¹ Here we consider a more general way to loop, i.e., predicting all missing tokens at the end of the loop, not necessarily in the way of predicting the single next token at a time. See more discussions in Section 2.2.

In Zhou et al. (2024a), they show that algorithmic tasks that can be written as a RASP-L program can be easily learned by a Transformer in a length-generalizable way with next-token prediction. The length-generalizable tasks include counting, finding the mode, copying the input sequence (consisting of unique tokens), and sorting. However, they also showed that for algorithmic tasks whose RASP-L program representation is not known to exist, such as addition, parity, and copying the input sequence, it is hard to learn in a length-generalizable way. In other words, once the Transformer is trained on in-distribution data up to a particular length, it fails to generalize to unseen lengths.

2.2 NEXT-TOKEN PREDICTION AND FULL-OUTPUT PREDICTION

Decoder-only Transformers are naturally convenient for next-token prediction (NTP), which can be efficiently trained in parallel.
In Zhou et al. (2024a), their setup and RASP-L solutions are both constrained to predicting the single next token: during training, the full sequence (both the query and the answer) is provided as input and the output is expected to be the shifted sequence. During inference, only the query part is provided, and the model continues to output the next token and append it to the current sequence until the output token is EOS. The output locations before the end-of-query (EOQ) sign are ignored. See (a) in Figure 2 for illustration.

[Figure 2: Visualization of the next-token prediction (NTP) and full-output prediction (FOP) schemes. "#" indicates EOS, "*" indicates ignored output, and ">" indicates the end of the query (EOQ). Panels: (a) NTP, (b) FOP.]

On the other hand, we can also consider a more general way of predicting the answer: full-output prediction (FOP). During both training and inference time, the input given is just the query part, and the rest of the locations are filled with multiple EOS tokens to keep the input and the output the same length. The model is supposed to output the answer with a shifted location, and the output locations before the EOQ sign are ignored; see (b) in Figure 2. Notice that in FOP, the model is not forced to predict token-by-token as in NTP. Instead, the model is expected to predict all missing tokens after all internal processing steps.

3 _n_-RASP-L

Recall that RASP-L programs do not allow loops. If we consider the next-token prediction (NTP) scheme, it means that we need to find the same RASP-L program (which can be represented with a fixed-depth decoder-only Transformer) to predict the next token given any possible prefix in the answer sequence. Such solutions might not always exist for all problems: there is no known RASP-L program for addition, parity, and copy under the NTP scheme (Zhou et al., 2024a). On the other hand, architectures such as the Looped Transformer have external loops embedded in the architecture, which naturally provides adaptive depth. Thus, a natural question is: what kind of algorithmic tasks can we represent with a decoder-only Transformer in a loop? Specifically, what if we also allow the number of iterations to explicitly depend on the input length, say _n_? Moreover, what if we are not constrained by the NTP scheme, but a more general FOP scheme? Inspired by these questions, we define the following class of algorithmic tasks:

**Definition 3.1** (_n_-RASP-L)**.** _A program P is called an n-RASP-L program if (1) there exists T : ℕ → ℕ, and (2) P can be decomposed into a sequential application of P′ for T(n) steps with a possible pre-processing step P_pre and post-processing step P_post: P = P_pre ∘ (P′)^T(n) ∘ P_post, where P′, P_pre, P_post ∈ RASP-L._
[Figure 3: Visualization of the _n_-RASP-L solutions for Copy, Parity, and Addition with _n_ = 2. Copy is implemented by _n_ iterations of shifting; Parity is implemented by _n_ iterations of shifting and XOR; Addition is implemented by _n_+1 iterations of shifted XOR and AND. The inputs are preprocessed. See details in Section 3.]

We show that _n_-digit addition, _n_-bit parity, and copying _n_ symbols indeed have _n_-RASP-L solutions.

**Proposition 3.2.** _(Parity.) There exists an n-RASP-L program with T(n) = n that solves the n-bit parity check task:_

x₁ … xₙ (n tokens)  >  # … # (n′ tokens, n′ ≥ 0)  ⇒  * … * (n tokens)  y  # … # (n′ tokens),

_where y is the parity check result for the arbitrary binary input sequence {xᵢ}._

_Proof._ See Listing 1 in Appendix A, where the number of steps required in parity_loop is T(n) = n for the input query with n bits.

**Proposition 3.3.** _(Copy.) There exists an n-RASP-L program with T(n) = n that solves the n-symbol copy task:_

x₁ … xₙ (n tokens)  >  # … # (n′ tokens, n′ ≥ n − 1)  ⇒  * … * (n tokens)  x₁ … xₙ (n tokens)  # … # (n′ − n + 1 tokens),

_where {xᵢ} denotes arbitrary binary input symbols._

_Proof._ See Listing 2 in Appendix A, where the number of steps required in copy_loop is T(n) = n for the input query with n symbols.

**Proposition 3.4.** _(Addition.) There exists an n-RASP-L program with T(n) = n + 1 that solves the n-digit addition task:_

x₁ … xₙ (n tokens)  +  y₁ … yₙ (n tokens)  >  # … # (n′ tokens, n′ ≥ n)  ⇒  * … * (2n + 1 tokens)  z₁ … zₙ₊₁ (n + 1 tokens)  # … # (n′ − n tokens),

_where {xᵢ}, {yᵢ} are arbitrary binary summands and {zᵢ} denotes the result of adding {xᵢ} and {yᵢ}²._

_Proof._ See Listing 3 in Appendix A, where the number of steps required in addition_loop is T(n) = n + 1 for the input summands with n digits each.

We present visualizations of the intermediate steps in the loops of our n-RASP-L solutions in Figure 3: for the parity task, P′_parity shifts the input sequence to the right by 1 and calculates the XOR of the answer sequence and the input sequence; for the copy task, P′_copy shifts the input sequence to the right by 1; for the addition task, P′_addition calculates the XOR of the two sequences and shifts the result to the right by 1 position as the partial answer, and calculates the AND of the two sequences as the carry-on sequence³.

² For simplicity, we include the leading 0's to keep the same length of the output for all possible inputs.
³ Here we omit the pre-processing and post-processing steps like handling EOS ("#") and EOQ (">") tokens, which can be done by fixed-depth attention layers outside of the loop (see Listings 1, 2, 3).

4 LEARNING _n_-RASP-L PROBLEMS WITH LOOPED TRANSFORMERS

Consider a task solvable by an _n_-RASP-L program. It is straightforward to learn the looped Transformer model with the supervision of the ground truth intermediate outputs: one can use a fixed-depth TF block and simply supervise the input and output for each step.
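To make these loop steps concrete, here is a plain-Python emulation (our own, not RASP-L and not the paper's Listing 1) of the parity solution's loop: each of the T(n) = n identical steps shifts the sequence right by one and XORs it into an answer track. The intermediate (seq, ans) states are exactly the kind of step-wise outputs that full intermediate supervision would require.

```python
def parity_loop_emulation(x):
    """Plain-Python emulation (not RASP-L) of the n-RASP-L parity solution:
    T(n) = n identical steps, each shifting the sequence right by one and
    XOR-ing it into an answer track. The parity of x accumulates at the
    position right after the input (where 'y' sits in Proposition 3.2)."""
    n = len(x)
    seq = list(x) + [0]          # one extra slot where the answer lands
    ans = [0] * (n + 1)
    for _ in range(n):           # T(n) = n length-independent steps
        seq = [0] + seq[:-1]                     # shift right by 1
        ans = [a ^ s for a, s in zip(ans, seq)]  # elementwise XOR
    return ans[n]

assert parity_loop_emulation([1, 0, 1]) == 0
assert parity_loop_emulation([1, 0, 0]) == 1
```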
However, such intermediate supervision can be difficult to get, just as collecting helpful CoT steps can be difficult. Therefore, a more interesting setup we consider is to learn looped Transformers in an end-to-end manner _without_ intermediate-step supervision. Here we present a novel framework for length generalization: in the absence of ground truth CoT data/intermediate output, we propose to leverage the inherent structure of the problem with the help of "knowing when to stop". We present the setup for training data in Section 4.1, the model architecture and training algorithm in Section 4.2, and the inference algorithm in Section 4.3.

4.1 END-TO-END SUPERVISED DATA WITHOUT INTERMEDIATE STEP SUPERVISION

We consider the following settings for the training data and the tasks:
- There exists an _n_-RASP-L program that solves the given task.
- Training data consists only of (x, y) pairs, but not intermediate steps. That is, we do not have access to P′(x), P′(P′(x)), ....
- T(n), i.e., the pre-defined number of iterations to solve the problem (with some P′), is available in the training data⁴.
- The length n is diversely distributed in the dataset, e.g., n ∈ {1, …, n_max} where n_max is the maximum length in the dataset; the pre-defined number of steps needed, T(n), is also diversely distributed in the dataset, e.g., T(n) ∈ {T(1), …, T(n_max)} where T(n_max) is the maximum number of steps in the dataset⁵.

4.2 LOOPED TRAINING WITH STEP SUPERVISION

4.2.1 ARCHITECTURE OF THE LOOPED TRANSFORMERS

We present the architecture of the Looped Transformer model in Figure 1. The key characteristics are:

**Recurrence:** Instead of having a simple stack of blocks, the Looped Transformer is recurrent (like Giannou et al. (2023) but with a decoder-only structure) in the sense that we reuse the same decoder block (which consists of a certain number of layers) for a number of looped steps, and we can adjust the number of looped steps at will.

**Input injection:** For each step, the original input sequence is injected together with the output from the previous decoder block, i.e., the input embeddings are added to the output embeddings of the previous step as the input of the current step. With input injection, the model can maintain a strong connection to the original input, preventing information loss, with improved performance (Bai et al., 2019; Yang et al., 2024).

**Positional embeddings:** Notice that there is no positional encoding in the RASP-L operations (Zhou et al., 2024a). To follow our _n_-RASP-L assumption and test the effect of the looped training only, we use NoPE (Kazemnejad et al., 2024) in decoder-only Transformers to avoid the impact of different positional embeddings⁶.

4.2.2 TRAINING ALGORITHM

Given a dataset D = {({(x_l)_{l=1}^{L_i}}_i, {(y_l)_{l=1}^{L_i}}_i, T_i, L_i)}_{i=1}^{N}, where {(x_l)_{l=1}^{L_i}}_i is the input with L_i tokens, {(y_l)_{l=1}^{L_i}}_i is the output with L_i tokens, and T_i is the pre-defined number of steps of sample i.
We aim to learn the transformer model M_θ⁷ by minimizing the following loss:

⁴ This assumption is to provide supervision for when to stop during training; for inference, we can either use the pre-defined steps or leverage the confidence of the output as a stopping criterion (see Section 4.3 for details).
⁵ The length of the problem is not necessarily the same as the actual length of the input due to EOS and EOQ tokens; see Section 6.1.1 for the definition of the length of the specific tasks.
⁶ NoPE is shown to inherently learn to use relative positional embeddings in practice (Kazemnejad et al., 2024).
⁷ M_θ only handles the embedding space and we use greedy decoding to get the decoded output.

Idea Generation Category:
0: Conceptual Integration
2edigk8yoU
# WHEN ATTENTION SINK EMERGES IN LANGUAGE MODELS: AN EMPIRICAL VIEW

**Xiangming Gu**∗¹², **Tianyu Pang**†¹, **Chao Du**¹, **Qian Liu**¹, **Fengzhuo Zhang**¹², **Cunxiao Du**¹, **Ye Wang**†², **Min Lin**¹
¹ Sea AI Lab, Singapore  ² National University of Singapore
{guxm, tianyupang, duchao, liuqian, zhangfz, ducx, linmin}@sea.com; wangye@comp.nus.edu.sg

ABSTRACT

Auto-regressive Language Models (LMs) assign significant attention to the first token, even if it is not semantically important, which is known as **attention sink**. This phenomenon has been widely adopted in applications such as streaming/long context generation, KV cache optimization, inference acceleration, model quantization, and others. Despite its widespread use, a deep understanding of attention sink in LMs is still lacking. In this work, we first demonstrate that attention sinks exist universally in auto-regressive LMs with various inputs, even in small models. Furthermore, attention sink is observed to emerge during the LM pre-training, motivating us to investigate how optimization, data distribution, loss function, and model architecture in LM pre-training influence its emergence. We highlight that attention sink emerges after effective optimization on sufficient training data. The sink position is highly correlated with the loss function and data distribution. Most importantly, we find that attention sink acts more like key biases, _storing extra attention scores_, which could be non-informative and not contribute to the value computation. We also observe that this phenomenon (at least partially) stems from tokens' inner dependence on attention scores as a result of softmax normalization. After relaxing such dependence by replacing softmax attention with other attention operations, such as sigmoid attention without normalization, attention sinks do not emerge in LMs up to 1B parameters. The code is available at https://github.com/sail-sg/Attention-Sink.

1 INTRODUCTION

Xiao et al. (2023b) showed that Large Language Models (LLMs) allocate significant attention to the initial tokens, irrespective of their semantic relevance. This interesting phenomenon is termed **attention sink** and has widespread applications, including streaming/long context generation (Xiao et al., 2023b; Han et al., 2024; Yang et al., 2024), KV cache optimization (Ge et al., 2023; Wan et al., 2024; Wu & Tu, 2024), efficient inference (Zhang et al., 2024b; Chen et al., 2024), model quantization (Liu et al., 2024b; Huang et al., 2024), and others. A line of seminal works attempted to understand attention sink. Among them, Cancedda (2024) clarified that attention sink primarily appears only on the first token. They attributed the phenomenon to the large norm of the hidden states of the first token. This is referred to as _massive activations_ (very few activations exhibit extremely large values compared to others) in Sun et al. (2024). Besides, Sun et al. (2024); Yu et al. (2024) observed that attention sink may also appear on several word tokens carrying limited semantic information and having no fixed position. Despite the above research efforts, a deep understanding of attention sink is still absent. Therefore, we conduct a comprehensive study to investigate when attention sink emerges. We defer full discussions on related work to Appendix A.
Based on open-sourced auto-regressive LMs, we show that the first token acts as biases: the angles between the first key and the queries of other tokens are typically small, leading to attention sink.

_∗_ Work done during Xiangming Gu's internship at Sea AI Lab. _†_ Correspondence to Tianyu Pang and Ye Wang.

Figure 1: (_Left_) Architecture of the pre-norm transformer block (the location of the post-norm LN is highlighted with dashed lines). We denote the output of MHSA as $\mathbf{O}^l$ and the output of FFN as $\mathbf{F}^l$. (_Right_) The packing strategy in LM pre-training. All documents are concatenated with BOS (optional) and EOS tokens as boundaries, then chunked into equal-sized sequences with context length $C$.

Then we find that attention sink universally exists in auto-regressive LMs across different inputs, even in small models or with random token sequences. Additionally, attention sink is observed to emerge during LM pre-training, before continual instruction tuning (Ouyang et al., 2022). This motivates us to focus on LM pre-training, whose objective can be formulated as:

$$\min_\theta\; \mathbb{E}_{\mathbf{X}\sim p_{\mathrm{data}}}\big[\mathcal{L}(p_\theta(\mathbf{X}))\big]. \qquad (1)$$

In the remaining part of this paper, we investigate how the optimization (Section 4), data distribution (Section 5), loss function (Section 6), and model architecture (Section 7) influence the emergence of attention sink. We reach the following conclusions:

- Attention sink emerges after LMs are trained effectively on sufficient training data. It appears less pronounced in LMs trained with small learning rates, while weight decay encourages its emergence.
- The sink position is highly related to the loss function and data distribution, and can be shifted to positions other than the first token.
- Attention sink acts more like key biases, storing extra attention while not contributing to the value computation. This phenomenon (at least partially) stems from tokens' inner dependence on attention scores due to softmax normalization. After relaxing such dependence by replacing softmax attention with other attention operations, e.g., sigmoid attention without normalization, attention sinks do not emerge in LMs with up to 1B parameters.

2 P RELIMINARIES ON LM S AND ATTENTION SINK

Let $f_\theta$ be an auto-regressive LM with $L$ transformer decoder blocks, and let $\mathbf{X} \in \mathbb{R}^{T\times|\mathcal{V}|} := \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T\}$ be the input tokens, where each token $\mathbf{x}_t$ is a one-hot encoding and $|\mathcal{V}|$ is the vocabulary size of the tokenizer $\mathcal{V}$. The LM output is also a sequence $\mathbf{Y} \in \mathbb{R}^{T\times|\mathcal{V}|} := \{\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_T\} = f_\theta(\mathbf{X})$, where $\mathbf{y}_t$ represents the predicted logits of $p(\mathbf{x}_{t+1} \mid \mathbf{x}_{\le t})$.

**Transformer blocks.** In the forward pass, $\mathbf{X}$ is first embedded as $\mathbf{H}^0 \in \mathbb{R}^{T\times d} := \mathbf{X}\mathbf{W}_E + \mathbf{P}$, where $\mathbf{W}_E \in \mathbb{R}^{|\mathcal{V}|\times d}$ is the learnable word embedding, $\mathbf{P} \in \mathbb{R}^{T\times d}$ is the positional embedding, and $d$ is the hidden dimension. We denote by $\mathbf{H}^l \in \mathbb{R}^{T\times d} := \{\mathbf{h}^l_1, \mathbf{h}^l_2, \ldots, \mathbf{h}^l_T\}$, $1 \le l \le L$, the output of the $l$-th block. Each block comprises a multi-head self-attention (MHSA) operation and a feed-forward network (FFN).
The block has either a pre-norm or post-norm structure according to the location of layer normalization (LN) (Ba et al., 2016; Zhang & Sennrich, 2019). Most LLMs adopt a pre-norm block, as also shown in Figure 1 (_Left_):

$$\mathbf{H}^l = \mathrm{FFN}\big(\mathrm{LN}(\mathbf{O}^l + \mathbf{H}^{l-1})\big) + \mathbf{O}^l + \mathbf{H}^{l-1}, \qquad \mathbf{O}^l = \mathrm{MHSA}\big(\mathrm{LN}(\mathbf{H}^{l-1})\big), \qquad (2)$$

while the post-norm transformer block is

$$\mathbf{H}^l = \mathrm{LN}\Big(\mathrm{FFN}\big(\mathrm{LN}(\mathbf{O}^l + \mathbf{H}^{l-1})\big) + \mathrm{LN}(\mathbf{O}^l + \mathbf{H}^{l-1})\Big), \qquad \mathbf{O}^l = \mathrm{MHSA}(\mathbf{H}^{l-1}). \qquad (3)$$

Figure 2: In LLaMA3-8B Base, (_Top_) the first token has a significantly larger $\ell_2$-norm of hidden states, but a much smaller $\ell_2$-norm of keys and values, than the mean of other tokens; (_Bottom_) cosine similarity, rather than the product of norms, contributes to attention sink. More visualizations are deferred to Appendix C.3.

**MHSA layers.** In the MHSA layer, the input $\mathbf{H}^{l-1}$ is first transformed into keys, queries, and values: $\mathbf{K}^{l,h} = \mathbf{H}^{l-1}\mathbf{W}^{l,h}_K$, $\mathbf{Q}^{l,h} = \mathbf{H}^{l-1}\mathbf{W}^{l,h}_Q$, $\mathbf{V}^{l,h} = \mathbf{H}^{l-1}\mathbf{W}^{l,h}_V$ for each head $1 \le h \le H$ (we omit the LN notation in the pre-norm design for simplicity). Here $\mathbf{W}^{l,h}_K, \mathbf{W}^{l,h}_Q, \mathbf{W}^{l,h}_V \in \mathbb{R}^{d\times d_h}$ with $d_h = d/H$. The attention output is then computed as

$$\mathbf{A}^{l,h} = \mathrm{Softmax}\big(\mathbf{Q}^{l,h}\mathbf{K}^{l,h\top}/\sqrt{d_h} + \mathbf{M}\big), \qquad \mathbf{O}^l = \mathrm{Concat}_{h=1}^{H}\big(\mathbf{A}^{l,h}\mathbf{V}^{l,h}\big)\,\mathbf{W}^l_O, \qquad (4)$$

where $\mathbf{M} \in \mathbb{R}^{T\times T}$ is an attention mask. For vanilla causal attention, $\mathbf{M}_{ij} = -\infty$ for $i < j$ and $\mathbf{M}_{ij} = 0$ for $i \ge j$. Finally, the output of the final transformer block $\mathbf{H}^L$ is fed into an unembedding layer for prediction: $\mathbf{Y} = \mathrm{LN}(\mathbf{H}^L)\,\mathbf{W}_{\mathrm{cls}}$, where $\mathbf{W}_{\mathrm{cls}} \in \mathbb{R}^{d\times|\mathcal{V}|}$.
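To ground Eqs. (2) and (4), here is a minimal PyTorch sketch of one pre-norm block with vanilla causal softmax attention. It is our illustrative reconstruction, not the authors' code: the module names are ours, and we use `LayerNorm` where the models studied here typically use RMSNorm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreNormBlock(nn.Module):
    """One pre-norm decoder block: H^l = FFN(LN(O^l + H^{l-1})) + O^l + H^{l-1}."""

    def __init__(self, d: int, n_heads: int):
        super().__init__()
        self.d, self.H, self.dh = d, n_heads, d // n_heads
        self.Wq, self.Wk = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.Wv, self.Wo = nn.Linear(d, d, bias=False), nn.Linear(d, d, bias=False)
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def mhsa(self, x):
        """Eq. (4) for a single sequence x of shape (T, d), already normalized."""
        T = x.shape[0]
        q = self.Wq(x).view(T, self.H, self.dh).transpose(0, 1)  # (H, T, dh)
        k = self.Wk(x).view(T, self.H, self.dh).transpose(0, 1)
        v = self.Wv(x).view(T, self.H, self.dh).transpose(0, 1)
        logits = q @ k.transpose(-2, -1) / self.dh ** 0.5        # (H, T, T)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        A = F.softmax(logits + mask, dim=-1)                     # causal softmax
        out = (A @ v).transpose(0, 1).reshape(T, self.d)         # concat heads
        return self.Wo(out), A

    def forward(self, h):
        o, A = self.mhsa(self.ln1(h))          # O^l = MHSA(LN(H^{l-1}))
        h = self.ffn(self.ln2(o + h)) + o + h  # Eq. (2)
        return h, A
```

Returning the per-head attention maps $\mathbf{A}^{l,h}$ makes it straightforward to compute the sink metrics introduced in Section 3.2; note that the causal mask forces $\mathbf{A}^{l,h}_{1,1} = 1$.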
**Positional embedding.** NoPE (Kazemnejad et al., 2024) considers no explicit positional embedding (PE) in LMs, where $\mathbf{P} = \mathbf{0}$. When using absolute PE (Vaswani et al., 2017), $\mathbf{P}$ is a periodic function of token positions. Devlin et al. (2019); Brown et al. (2020) adopted a learnable PE, meaning $\mathbf{P}$ is a learnable embedding of token positions. With these three PEs, the dot product between each query and key satisfies $\langle\mathbf{q}_i, \mathbf{k}_j\rangle = \mathbf{q}_i\mathbf{k}_j^\top$. In contrast, relative PE (Raffel et al., 2020), ALiBi (Press et al., 2021), and Rotary (Su et al., 2024) have $\mathbf{P} = \mathbf{0}$ and instead modify the dot product $\langle\mathbf{q}_i, \mathbf{k}_j\rangle$. For relative PE and ALiBi, $\langle\mathbf{q}_i, \mathbf{k}_j\rangle = \mathbf{q}_i\mathbf{k}_j^\top + g(i-j)$, where $g(\cdot)$ is a pre-defined function of the distance between two token positions. For Rotary, $\langle\mathbf{q}_i, \mathbf{k}_j\rangle = \mathbf{q}_i\mathbf{R}_{\Theta,j-i}\mathbf{k}_j^\top$, where $\mathbf{R}_{\Theta,(\cdot)}$ is a pre-defined rotation matrix. We include detailed formulations of the above PEs in Appendix B.

**Auto-regressive objective.** The pre-training objective of LMs is to maximize the likelihood of the input data: $\theta^* = \arg\max_\theta \mathbb{E}_{\mathbf{X}\sim p_{\mathrm{data}}}\big[\sum_{t=1}^{T}\log p_\theta(\mathbf{x}_t\mid\mathbf{x}_{<t})\big]$, where $p_{\mathrm{data}}$ refers to the data distribution.

**Packing documents in pre-training.** Given a large corpus $\mathcal{D} = \{\mathbf{d}_1, \mathbf{d}_2, \cdots, \mathbf{d}_{|\mathcal{D}|}\}$, where each $\mathbf{d}_i$ represents a document containing a sequence of tokens, a packing strategy is adopted in LM pre-training, as shown in Figure 1 (_Right_). All documents are concatenated and chunked into sequences with a context length of $C$. Each chunk may start with any token within a document or with the BOS/EOS token. The empirical loss function for each chunk is $\mathcal{L} = \sum_{t=2}^{C}\log p_\theta(\mathbf{x}_t\mid\mathbf{x}_{<t})$. We note that $p_\theta(\mathbf{x}_1)$ is ignored since $\mathbf{y}_1 = f_\theta(\mathbf{x}_1)$ is the prediction for the next token $\mathbf{x}_2$.

**LM inference.** During inference, a BOS token is fed into the model as the prefix for unconditional generation: $\mathbf{x}'_t \sim p_\theta(\mathbf{x}'_t\mid\mathbf{x}'_{<t}, \mathbf{x}, \mathrm{BOS})$, where $\mathbf{x}'_t$ is the $t$-th generated token and $\mathbf{x}$ is the optional prompt. If there are no BOS tokens in pre-training, the EOS token is treated as the BOS.

**Attention sink.** Xiao et al. (2023b) revealed that LLMs allocate significant attention scores to specific token positions, e.g., the first token (not necessarily a BOS token), resulting in "vertical" attention patterns. Formally, the attention scores satisfy $\mathbf{A}^{l,h}_{i,1} \gg \mathrm{mean}(\mathbf{A}^{l,h}_{i,j\ne 1})$.

Figure 3: The metric $\mathrm{Sink}^\epsilon_1$ (averaged over 100 sequences) tends to decrease with larger token length $T$; shown for GPT2-XL, LLaMA2-7B Base, and LLaMA3-8B Base with thresholds $\epsilon \in \{0.2, 0.3, 0.4, 0.5\}$. This tendency becomes more pronounced under a stricter definition of attention sink (larger $\epsilon$).
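As a concrete illustration of the packing strategy and the per-chunk loss above, consider the following sketch. It is a simplification under our own assumptions: real pipelines typically also reset attention across document boundaries, which this toy version omits.

```python
import torch
import torch.nn.functional as F

def pack_documents(docs, eos_id, C, bos_id=None):
    """Concatenate tokenized documents with EOS boundaries, then chunk to length C."""
    stream = []
    for doc in docs:
        if bos_id is not None:
            stream.append(bos_id)
        stream.extend(doc)
        stream.append(eos_id)
    n_chunks = len(stream) // C
    return torch.tensor(stream[: n_chunks * C]).view(n_chunks, C)

def chunk_nll(logits, chunk):
    """Negative log-likelihood over positions 2..C of one chunk; p(x_1) is
    ignored because y_1 = f(x_1) already predicts the next token x_2."""
    return F.cross_entropy(logits[:-1], chunk[1:], reduction="sum")
```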
3 P ROPERTIES OF ATTENTION SINK

3.1 T HE FIRST TOKEN ACTS AS BIASES

**Uniqueness of the first token.** Note that the calculation of hidden states for the first token involves no attention to other tokens: $\mathbf{h}^l_1 = \mathrm{FFN}\big(\mathrm{LN}(\mathbf{o}^l_1 + \mathbf{h}^{l-1}_1)\big) + \mathbf{o}^l_1 + \mathbf{h}^{l-1}_1$, where $\mathbf{o}^l_1 = \mathrm{LN}(\mathbf{h}^{l-1}_1)\big[\mathbf{W}^{l,1}_V\ \mathbf{W}^{l,2}_V \cdots \mathbf{W}^{l,H}_V\big]\mathbf{W}^l_O$. Therefore, $\mathbf{h}^l_1$ and the corresponding queries/keys/values $\mathbf{k}^{l,h}_1 = \mathrm{LN}(\mathbf{h}^{l-1}_1)\mathbf{W}^{l,h}_K$, $\mathbf{q}^{l,h}_1 = \mathrm{LN}(\mathbf{h}^{l-1}_1)\mathbf{W}^{l,h}_Q$, $\mathbf{v}^{l,h}_1 = \mathrm{LN}(\mathbf{h}^{l-1}_1)\mathbf{W}^{l,h}_V$ can be viewed as the MLP output of the input word embedding $\mathbf{x}_1\mathbf{W}_E$. Using LLaMA3-8B Base (Dubey et al., 2024), we show that from a certain transformer block onward, e.g., $l = 2$, the $\ell_2$-norm of $\mathbf{h}^l_1$ is significantly larger than that of the other tokens $\mathbf{h}^l_{t\ne 1}$, as shown in Figure 2 (_Top_). This reproduces the massive activations of Cancedda (2024); Sun et al. (2024). Despite the large $\ell_2$-norm of its hidden states, we observe that the $\ell_2$-norm of the keys and values of the first token is significantly smaller than that of other tokens in the same figure, as also observed in Devoto et al. (2024); Guo et al. (2024b).

**QK angles contribute to attention sink.** In the $l$-th transformer block, we consider the keys and queries after adding PE (Rotary in LLaMA3-8B Base): $\mathbf{k}^{l,h}_t = \mathrm{LN}(\mathbf{h}^{l-1}_t)\mathbf{W}^{l,h}_K\mathbf{R}_{\Theta,-t}$, $\mathbf{q}^{l,h}_t = \mathrm{LN}(\mathbf{h}^{l-1}_t)\mathbf{W}^{l,h}_Q\mathbf{R}_{\Theta,-t}$, where LN is RMSNorm (Zhang & Sennrich, 2019): $\mathrm{LN}(\mathbf{h}) = \frac{\mathbf{h}}{\mathrm{RMS}(\mathbf{h})}\odot\mathbf{g}$ with $\mathrm{RMS}(\mathbf{h}) = \sqrt{\frac{1}{d}\sum_{i=1}^d \mathbf{h}_i^2}$, and $\mathbf{g}$ is a learnable gain parameter. Suppose that $\mathbf{h}^{l-1}_1$ already has massive activations. Since $\mathbf{h}^{l-1}_1$ has a massive magnitude in specific dimensions, the LN operation retains the magnitude in these dimensions and further reduces the magnitude in the other dimensions, so that $\mathbf{q}^{l,h}_1$, $\mathbf{k}^{l,h}_1$, and $\mathbf{v}^{l,h}_1$ are distributed on different manifolds, especially $\mathbf{k}^{l,h}_1$. For the $t$-th query, $\mathbf{q}^{l,h}_t\mathbf{k}^{l,h\top}_1$ typically takes much larger values than $\mathbf{q}^{l,h}_t\mathbf{k}^{l,h\top}_{j\ne 1}$, as visualized in Figure 2 (_Bottom_). We further show that, due to the different manifold of $\mathbf{k}^{l,h}_1$, the angles between $\mathbf{k}^{l,h}_1$ and $\mathbf{q}^{l,h}_t$ play an important role. Considering $\mathbf{q}^{l,h}_t\mathbf{k}^{l,h\top}_j = \|\mathbf{q}^{l,h}_t\|\cdot\|\mathbf{k}^{l,h}_j\|\cdot\cos(\mathbf{q}^{l,h}_t, \mathbf{k}^{l,h}_j)$, we visualize both the cosine similarity between queries and keys and the product of their $\ell_2$-norms in Figure 2 (_Bottom_). Although $\|\mathbf{q}^{l,h}_t\|\cdot\|\mathbf{k}^{l,h}_1\|$ is comparatively small, $\cos(\mathbf{q}^{l,h}_t, \mathbf{k}^{l,h}_1)$ is significantly large, leading to attention sink. This explains why attention sink exists despite the small $\ell_2$-norm of the first token's keys. To conclude, the first token leverages its keys to act as biases, minimizing the angles between $\mathbf{k}^{l,h}_1$ and $\mathbf{q}^{l,h}_t$ and thereby exhibiting attention sink.
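The decomposition $\mathbf{q}^{l,h}_t\mathbf{k}^{l,h\top}_j = \|\mathbf{q}^{l,h}_t\|\cdot\|\mathbf{k}^{l,h}_j\|\cdot\cos(\mathbf{q}^{l,h}_t, \mathbf{k}^{l,h}_j)$ is easy to probe numerically. Below is a sketch on synthetic tensors; the "bias-like" first key is a toy construction of ours, intended only to mimic the small-angle pattern of Figure 2 (_Bottom_), not extracted model weights.

```python
import torch

def qk_decomposition(q, k):
    """Split raw logits q_t k_j^T into norm product and cosine for one head.
    q: (T, dh) queries, k: (T, dh) keys."""
    dots = q @ k.T                                              # (T, T)
    norm_prod = q.norm(dim=-1, keepdim=True) * k.norm(dim=-1)   # ||q_t|| * ||k_j||
    cos = dots / norm_prod.clamp_min(1e-12)
    return dots, norm_prod, cos

torch.manual_seed(0)
q, k = torch.randn(16, 64), torch.randn(16, 64)
k[0] = q.mean(dim=0)  # hypothetical key aligned with all queries (acting as a bias)
_, _, cos = qk_decomposition(q, k)
print(cos[:, 0].mean().item(), cos[:, 1:].mean().item())  # first column dominates
```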
3.2 M EASURING ATTENTION SINK

**Threshold-based metrics.** Xiao et al. (2023b) showcased the appearance of attention sink by visualizing attention logits/scores in different heads/blocks. This makes measuring attention sink quantitatively intractable due to the large number of attention heads and blocks. Therefore, we first explore metrics to measure attention sink. Within each head, we compute the importance score of the $k$-th token as $\alpha^{l,h}_k = \frac{1}{T-k+1}\sum_{i=k}^{T}\mathbf{A}^{l,h}_{i,k}$. We mainly focus on the first token, $\alpha^{l,h}_1$. Note that $\frac{1}{T} \le \alpha^{l,h}_1 \le 1$ since $\mathbf{A}^{l,h}_{1,1} = 1$ and $0 \le \mathbf{A}^{l,h}_{i\ne 1,1} \le 1$. We then adopt a threshold-based metric: a head is considered to have attention sink at the first token if $\alpha^{l,h}_1 > \epsilon$. Considering that the whole model has $L$ blocks and each block has $H$ heads, we use the following metric to measure the attention sink of the whole LM:

$$\mathrm{Sink}^\epsilon_k = \frac{1}{L}\sum_{l=1}^{L}\frac{1}{H}\sum_{h=1}^{H}\mathbb{I}\big(\alpha^{l,h}_k > \epsilon\big).$$

Table 1: (_Left_) Even with random sequences as input, there still exists an obvious attention sink; with repeated tokens, however, the attention sink disappears for Mistral/LLaMA models. (_Right_) Chat models have attention sink metrics comparable to base models.

| LLM ($\mathrm{Sink}^\epsilon_1$, %) | natural | random | repeat |
|---|---|---|---|
| GPT2-XL | 77.00 | 70.29 | 62.28 |
| Mistral-7B | 97.49 | 75.21 | 0.00 |
| LLaMA2-7B Base | 92.47 | 90.13 | 0.00 |
| LLaMA3-8B Base | 99.02 | 91.23 | 0.00 |

| LLM ($\mathrm{Sink}^\epsilon_1$, %) | Base | Chat |
|---|---|---|
| Mistral-7B | 97.49 | 88.34 |
| LLaMA2-7B | 92.47 | 92.88 |
| LLaMA2-13B | 91.69 | 90.94 |
| LLaMA3-8B | 99.02 | 98.85 |

Figure 4: (_Left_) Attention sink also emerges in small LMs. (_Middle_) Dynamics of train/valid loss and $\mathrm{Sink}^\epsilon_1$ during LM pre-training under the default setup; attention sink emerges after a certain number of optimization steps. (_Right_) Training loss (solid lines) and attention sink (dashed lines) dynamics of LMs using different learning rates; with smaller learning rates, attention sink tends to emerge after more optimization steps and to be less obvious.

**Selection of thresholds.** The choice of threshold reflects how strictly attention sink is quantified: a larger $\epsilon$ corresponds to a stricter definition. There is no principled way to find an optimal threshold, and we use this metric only to quantify the emergence of attention sink empirically. Based on Figure 3, we prefer a threshold that is both strict in quantifying attention sink and insensitive to the token length $T$, which gives the selection $\epsilon = 0.3$. For fair comparisons, we fix $T$ when computing the metric, e.g., $T = 64$.
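Once the attention maps of all blocks and heads are stacked into one tensor, the importance score and $\mathrm{Sink}^\epsilon_k$ are a few lines of code. A sketch (0-indexed, with a tensor layout assumed by us):

```python
import torch

def sink_metric(attn, k=0, eps=0.3):
    """Sink^eps_k from attention scores.
    attn: (L, H, T, T) causal attention maps A^{l,h}.
    alpha^{l,h}_k averages column k over rows i >= k;
    the metric is the fraction of (block, head) pairs with alpha > eps."""
    alpha = attn[:, :, k:, k].mean(dim=-1)   # (L, H)
    return (alpha > eps).float().mean().item()
```

With the paper's choices, one would call `sink_metric(attn, k=0, eps=0.3)` on sequences of fixed length $T = 64$.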
3.3 A TTENTION SINK UNDER DIFFERENT INPUTS

**Different data domains.** We first explore the effect of input domains on attention sink. The Pile dataset (Gao et al., 2020), a standard dataset for LM pre-training, has 17 available data domains. As shown in Appendix C.2, input domains have negligible effects on our attention sink metric $\mathrm{Sink}^\epsilon_1$.

**Beyond natural languages.** We also consider two idealized scenarios: (i) randomly sample $T$ tokens from the tokenizer vocabulary $\mathcal{V}$ to construct a sequence, and (ii) randomly sample one token from $\mathcal{V}$ and repeat it $T$ times. As presented in Table 1 (_Left_), attention sink still exists when the inputs are random tokens instead of natural language. However, with repeated tokens, attention sink in Mistral (Jiang et al., 2023) and LLaMA models disappears. In Appendix C.1, we prove that for LMs with NoPE/relative PE/ALiBi/Rotary, if the first $T$ tokens are the same, their corresponding hidden states are the same. They all have massive activations, thus dispersing the attention sink. We also provide closed forms/upper bounds for attention scores in these LMs through Propositions 1-4.

3.4 A TTENTION SINK UNDER DIFFERENT LM S

**Base vs. chat model.** Compared with base models, chat models are typically continually trained through instruction tuning (Ouyang et al., 2022). From Table 1 (_Right_), instruction tuning has an insignificant impact on attention sink, which motivates us to focus on LM pre-training.

**Model scale.** We evaluate the metric $\mathrm{Sink}^\epsilon_1$ for the LLaMA2 Base (Touvron et al., 2023), LLaMA3 Base (Dubey et al., 2024), Pythia (Biderman et al., 2023), GPT2 (Radford et al., 2019), and OPT (Zhang et al., 2022) families. As visualized in Figure 4 (_Left_), attention sink emerges even in small LMs, including Pythia-14M. Only in the Pythia family do larger LMs tend to have more obvious attention sink.

4 E FFECTS OF OPTIMIZATION ON ATTENTION S INK

We pre-train a series of LLaMA models to conduct our experiments, based on the repos of Zhang et al. (2024a) and Liu et al. (2024a). Due to the intractability of replicating LLaMA pre-training, we design small-sized models. Following Liu et al. (2024a), we set the hidden dimension $d = 768$, block number $L = 10$, head number $H = 8$, and intermediate FFN size 1536, resulting in approximately 60M parameters excluding word embeddings and unembeddings. We keep the other design choices the same.
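For reference, the stated setup can be summarized as a small configuration sketch; the field names below are our own illustrative choices, not those of the cited repositories.

```python
# Hypothetical config mirroring the stated ~60M-parameter LLaMA-style setup
# (parameter count excludes word embeddings and unembeddings).
llama_60m_config = {
    "hidden_dim": 768,         # d
    "num_blocks": 10,          # L
    "num_heads": 8,            # H
    "ffn_intermediate": 1536,  # intermediate size of FFN
}
```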
# F REQ P RIOR : I MPROVING V IDEO D IFFUSION M ODELS WITH F REQUENCY F ILTERING G AUSSIAN N OISE **Yunlong Yuan** [1] **, Yuanfan Guo** [2] **, Chunwei Wang** [2] **, Wei Zhang** [2] **, Hang Xu** [2] **, Li Zhang** [1] _[∗]_ 1 School of Data Science, Fudan University 2 Noah’s Ark Lab, Huawei [https://github.com/fudan-zvg/FreqPrior](https://github.com/fudan-zvg/FreqPrior) A BSTRACT Text-driven video generation has advanced significantly due to developments in diffusion models. Beyond the training and sampling phases, recent studies have investigated noise priors of diffusion models, as improved noise priors yield better generation results. One recent approach employs the Fourier transform to manipulate noise, marking the initial exploration of frequency operations in this context. However, it often generates videos that lack motion dynamics and imaging details. In this work, we provide a comprehensive theoretical analysis of the variance decay issue present in existing methods, contributing to the loss of details and motion dynamics. Recognizing the critical impact of noise distribution on generation quality, we introduce FreqPrior, a novel noise initialization strategy that refines noise in the frequency domain. Our method features a novel filtering technique designed to address different frequency signals while maintaining the noise prior distribution that closely approximates a standard Gaussian distribution. Additionally, we propose a partial sampling process by perturbing the latent at an intermediate timestep while finding the noise prior, significantly reducing inference time without compromising quality. Extensive experiments on VBench demonstrate that our method achieves the highest scores in both quality and semantic assessments, resulting in the best overall total score. These results highlight the superiority of our proposed noise prior. 1 I NTRODUCTION Benefiting from notable advancements of diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b) alongside the expansion of large video datasets (Bain et al., 2021; Schuhmann et al., 2022), text-to-video generation has experienced remarkable progress (Ho et al., 2022a; Wu et al., 2022a; Blattmann et al., 2023; Ge et al., 2023; Guo et al., 2024; Singer et al., 2023; Wang et al., 2023; Chen et al., 2023). In ordinary videos, the content between successive frames often shows high similarity, allowing the video to be considered as a sequence of images with motion information. Leveraging this characteristic, the architecture of video diffusion models (Blattmann et al., 2023; Wang et al., 2023; Hong et al., 2023; Guo et al., 2024) commonly incorporates temporal or motion layers into existing image diffusion models. In addition to model architecture, some studies, inspired by the consistent patterns observed across video frames, investigate the relationships within the initial noise prior. Consequently, alongside research focusing on the training and sampling phases (Song et al., 2021a; Karras et al., 2022; Lu et al., 2022; Salimans & Ho, 2022; Song et al., 2023), another important line of research in video diffusion models is to explore noise initialization strategies, since improved noise prior can potentially yield better generation results. Several efforts have been made to explore the noise prior, as the initial noise significantly impacts the generated outcomes (Ge et al., 2023; Qiu et al., 2024; Chang et al., 2024; Gu et al., 2023; Mao et al., 2024; Wu et al., 2024). 
PYoCo (Ge et al., 2023) discovers that the noise maps corresponding to different frames, derived from a pre-trained image diffusion model, cluster in t-SNE space (Van der Maaten & Hinton, 2008), indicating a strong correlation along the temporal dimension. Based on this observation, it introduces two kinds of noise prior with correlations along the frame dimension.

_∗_ Corresponding author lizhangfd@fudan.edu.cn.

Figure 1: _**(Left)**_ **Generated video frames corresponding to Gaussian noise with different variances.** As the variance, denoted as $\sigma^2$, decreases from $1.00^2$ to $0.96^2$, the imaging quality deteriorates and background details are gradually lost. _**(Right)**_ **Comparisons of our method against FreeInit and standard Gaussian noise.** The frames generated using FreeInit appear overly smooth and blurred in the area of the highlighted red box, whereas our method preserves rich image details.

However, this change in the noise prior requires massive fine-tuning. FreeInit (Wu et al., 2024) investigates the low-frequency signal leakage phenomenon in the noise, as also demonstrated in the image domain (Lin et al., 2024), and finds that the denoising process is significantly influenced by the low-frequency components of the initial noise. Leveraging these insights, it uses frequency filtering on the noise prior to enhance the temporal consistency of generated videos. However, despite these efforts, the generated videos suffer from excessive smoothness, limited motion dynamics, and a lack of details. Moreover, additional iterations are necessary to refine the noise, with a full sampling process conducted in each iteration, making FreeInit (Wu et al., 2024) quite time-consuming.

To address this gap, we conduct a mathematical analysis and provide theoretical justification. Our analysis identifies the variance decay issue present in FreeInit (Wu et al., 2024). As depicted in Figure 1, we investigate the significance of the distribution of the initial noise for diffusion models. The impact of the variance on the quality of generated videos is evident: as $\sigma$ decreases from 1 to 0.96, there is a progressive loss of details alongside a reduction in motion dynamics. The frames generated by FreeInit (Wu et al., 2024) are overly smooth and lack details, because the refined noise deviates from the standard Gaussian distribution, resulting in variance decay. It is therefore critically important for diffusion models that the noise prior follows a standard Gaussian distribution.

In this work, we introduce a novel noise prior called **FreqPrior**. At the core of our approach is the noise refinement stage, where we propose a novel frequency filtering method designed for noise, which is essentially a collection of random variables. During this stage, we retain the low-frequency signals while enriching high-frequency signals in the frequency domain, thereby reducing the covariance error and ensuring that the distribution of our refined noise approximates a standard Gaussian distribution. As illustrated in Figure 1, our method does not suffer from the detail loss issue present in FreeInit (Wu et al., 2024). Additionally, retaining low-frequency signals enhances semantic fidelity. Furthermore, to obtain the noise prior, we adjust the diffusion process by perturbing the latent at an intermediate step, resulting in significant time savings without compromising the quality of the generation results.
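The variance-decay effect is easy to reproduce with a scalar analogue of frequency-domain mixing. The NumPy check below, under our own simplified setting, contrasts a traditional filter pair $(m, 1-m)$ with the Gaussian-preserving pair $(m, \sqrt{1-m^2})$ used later in Section 3.2; the gain $m = 0.7$ is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal((2, 1_000_000))  # two independent standard Gaussians

m = 0.7                                     # a low-pass gain at some frequency
traditional = m * u + (1 - m) * v           # FreeInit-style mixing
preserving = m * u + np.sqrt(1 - m**2) * v  # variance-preserving mixing

print(traditional.std())   # ~0.76: variance decays below 1
print(preserving.std())    # ~1.00: still standard Gaussian
```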
We conduct extensive experiments on VBench (Huang et al., 2024b), a comprehensive benchmark, to assess the quality of generated videos. The results demonstrate that our method effectively addresses the issue of limited dynamics while improving overall quality. Moreover, our approach achieves the best total score on VBench, highlighting its superiority. Additionally, our method achieves a time saving of nearly 23% compared to FreeInit (Wu et al., 2024). In summary, our contributions are as follows: **(i)** We propose a novel frequency filtering method designed to refine the noise, acquiring a better prior, termed **FreqPrior**. We provide a rigorous theoretical analysis of the distribution of our prior; numerical experiments reveal that the covariance error of our method is negligible, implying that our prior closely approximates a Gaussian distribution. **(ii)** We propose a partial sampling strategy in our framework, which perturbs the latent at an intermediate timestep and saves substantial time without compromising quality. **(iii)** Extensive experiments validate the effectiveness of **FreqPrior**: our approach improves both video quality and semantic quality, achieving the highest total score over baselines on VBench (Huang et al., 2024b).

2 R ELATED WORK

**Video generative models** In the field of video generation, previous work has explored a range of methods, including VAEs (Kingma & Welling, 2014; Hsieh et al., 2018; Bhagat et al., 2020), GANs (Goodfellow et al., 2014; Tian et al., 2021; Brooks et al., 2022; Skorokhodov et al., 2022), and auto-regressive models (Wu et al., 2021; 2022a; Ge et al., 2022; Hong et al., 2023). Recently, diffusion models (Ho et al., 2020; Song et al., 2021b; Sohl-Dickstein et al., 2015; Dhariwal & Nichol, 2021) have showcased great abilities in image synthesis (Rombach et al., 2022; Saharia et al., 2022; Nichol et al., 2022) and paved the way towards video generation (Ho et al., 2022b; He et al., 2022; Voleti et al., 2022). Many recent works on video synthesis (Ho et al., 2022a; Blattmann et al., 2023; Ge et al., 2023; Guo et al., 2024; Wang et al., 2023; Chen et al., 2023) follow the text-to-video diffusion paradigm, with text as a highly intuitive and informative instruction. Both ModelScope (Wang et al., 2023; Luo et al., 2023) and VideoCrafter (Chen et al., 2023) are built upon the UNet (Ronneberger et al., 2015) architecture: VideoCrafter adds a temporal transformer after a spatial transformer in each block, while in ModelScope each block comprises spatial and temporal convolution layers, along with spatial and temporal attention layers. AnimateDiff (Guo et al., 2024) generates videos by integrating Stable Diffusion (Rombach et al., 2022) with motion modules.

**Noise prior for diffusion models** Given the inherent high correlations within video data, several studies (Ge et al., 2023; Qiu et al., 2024; Chang et al., 2024; Gu et al., 2023; Mao et al., 2024; Wu et al., 2024) have delved into the noise prior of diffusion models.
Both FreeNoise (Qiu et al., 2024) and VidRD (Gu et al., 2023) focus on initialization strategies for long video generation: FreeNoise employs a shuffle strategy to create noise sequences with long-range relationships, while VidRD utilizes the latent features of the initial video clip. The ∫-noise prior interprets noise as a continuously integrated noise field rather than discrete pixel values (Chang et al., 2024); however, it focuses on low-level features, making it more suitable for tasks such as video restoration and video editing. Mao et al. (2024) identify that some pixel blocks of the initial noise correspond to certain concepts, enabling semantic-level generation; nevertheless, collecting these blocks for different concepts is time-consuming, which limits practical application. Motivated by correlations in the noise maps corresponding to different frames, PYoCo (Ge et al., 2023) carefully designs a mixed noise prior and a progressive noise prior. FreeInit (Wu et al., 2024) identifies signal leakage in the low-frequency domain and uses the Fourier transform to refine the noise, making the initial exploration of frequency operations in the noise prior. However, noise is essentially different from signals, making the classic frequency filtering method unsuitable: as a result, the generated videos lack motion dynamics and imaging details due to the variance decay issue. To address these limitations, we propose a novel prior to enhance the overall quality of generated videos.

3 M ETHOD

**FreqPrior** comprises three key stages: **sampling process**, **diffusion process**, and **noise refinement**, as shown in Figure 2. To obtain a new noise prior, our method starts with Gaussian noise, which then goes through these three stages sequentially, repeated several times, to yield a refined noise prior. Once the new prior is established, it serves as the initial latent for video diffusion models to generate a video. The **sampling process** in our framework is DDIM sampling (Song et al., 2021a).

Figure 2: The framework of **FreqPrior**. It consists of three stages: **sampling process**, **diffusion process**, and **noise refinement**. In the noise refinement stage, the noise is refined in three steps: **noise preparation**, **noise processing**, and **post-processing**.

3.1 D IFFUSION PROCESS

During the sampling process, the latent becomes clean. Unlike the conventional diffusion process, which typically diffuses the clean latent to timestep $T$, our approach perturbs the latent with the initial noise $\epsilon$ once sampling reaches a specific intermediate timestep, denoted as $t$. Leveraging the Markov property, the diffusion process can be formulated as

$$\mathbf{z}^i_{\mathrm{noise}} = \sqrt{\frac{\bar\alpha_T}{\bar\alpha_t}}\,\mathbf{z}^i_t + \sqrt{1-\frac{\bar\alpha_T}{\bar\alpha_t}}\,\epsilon, \qquad (1)$$

where $\{\bar\alpha_j\}_{j=0}^{T}$ are the notations corresponding to the diffusion scheduler (Ho et al., 2020), and $i$ denotes the $i$-th iteration.

The rationale for conducting the diffusion process beforehand stems from the observation that when $t$ reaches about timestep 400, the latent $\mathbf{z}^i_t$ has roughly taken shape and resembles the clean latent $\mathbf{z}^i_0$, indicating that the latent has already recovered most of the low-frequency information. Consequently, this modification yields nearly identical outcomes compared to diffusing a pure clean latent.
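A minimal sketch of the one-step jump in Eq. (1) follows; the linear-beta scheduler and the latent shape are illustrative assumptions of ours, not the paper's exact configuration.

```python
import torch

def perturb_to_T(z_t, eps, alpha_bar, t, T=1000):
    """Eq. (1): jump from an intermediate latent z_t back to timestep T,
    z_noise = sqrt(abar_T/abar_t) * z_t + sqrt(1 - abar_T/abar_t) * eps."""
    ratio = alpha_bar[T] / alpha_bar[t]
    return ratio.sqrt() * z_t + (1 - ratio).sqrt() * eps

# Hypothetical linear-beta DDPM scheduler for illustration.
betas = torch.linspace(1e-4, 2e-2, 1001)
alpha_bar = torch.cumprod(1 - betas, dim=0)

z_t = torch.randn(1, 4, 16, 32, 32)   # (batch, channels, frames, H, W) latent
eps = torch.randn_like(z_t)
z_noise = perturb_to_T(z_t, eps, alpha_bar, t=400)
```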
This modification offers a notable advantage in terms of efficiency, as it significantly reduces the number of required sampling steps while maintaining consistent results. We therefore achieve substantial time savings without compromising the fidelity of our results.

3.2 N OISE REFINEMENT

The **noise refinement** stage focuses on processing different frequency components of the noise to improve video generation quality. Low-frequency signals help the model generate videos with better semantics, while high-frequency signals contribute to finer image details. Unlike conventional filtering methods, which typically target signals such as images, our approach processes noise, which is essentially a collection of random variables, distinguishing it from traditional techniques. We therefore propose a novel frequency filtering method designed to handle noise effectively, enhancing overall quality.

**Step 1: Preparation of two sets of noise** We begin by preparing two distinct sets of noise, each serving a specific purpose: one to convey low-frequency information and the other to provide high-frequency information. Initially, we independently sample from a standard Gaussian distribution to obtain $\eta^i_1$, $\eta^i_2$, $\mathbf{y}^i_1$, and $\mathbf{y}^i_2$, where $\mathbf{y}^i_1$ and $\mathbf{y}^i_2$ correspond to high-frequency information. As for low-frequency information, we combine $\mathbf{z}^i_{\mathrm{noise}}$ with $\eta^i_1$ and $\eta^i_2$ to yield $\mathbf{x}^i_1$ and $\mathbf{x}^i_2$ as follows:

$$\mathbf{x}^i_1 = \frac{1}{\sqrt{1+\cos^2\theta}}\left(\cos\theta\cdot\mathbf{z}^i_{\mathrm{noise}} + \sin\theta\cdot\eta^i_1\right), \qquad \eta^i_1 \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),$$
$$\mathbf{x}^i_2 = \frac{1}{\sqrt{1+\cos^2\theta}}\left(\cos\theta\cdot\mathbf{z}^i_{\mathrm{noise}} + \sin\theta\cdot\eta^i_2\right), \qquad \eta^i_2 \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \qquad (2)$$

Here, the ratio $\cos\theta$ controls the proportion of $\mathbf{z}^i_{\mathrm{noise}}$ contained within $\mathbf{x}^i_1$ and $\mathbf{x}^i_2$. It adds flexibility to the framework, allowing us to control the amount of low-frequency information derived from $\mathbf{z}^i_{\mathrm{noise}}$.

**Algorithm 1** FreqPrior
**Require:** $T$: total diffusion steps; $t$: middle timestep; $\{\bar\alpha_j\}_{j=0}^{T}$: scheduler; $n$: number of iterations.
1: Initialize $\mathbf{z}_T = \epsilon$, where $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
2: ▷ _Obtain the noise prior_
3: **for** $i = 0$ **to** $n$ **do**
4:   $\mathbf{z}_t \leftarrow \mathrm{Sampling}(\mathbf{z}_T)$ ▷ Partial sampling process
5:   $\mathbf{z}_{\mathrm{noise}} = \sqrt{\bar\alpha_T/\bar\alpha_t}\cdot\mathbf{z}_t + \sqrt{1-\bar\alpha_T/\bar\alpha_t}\cdot\epsilon$ ▷ Diffusion process
6:   $\mathbf{z}_T \leftarrow \mathrm{NoiseRefine}(\mathbf{z}_{\mathrm{noise}})$ ▷ Noise refinement
7: ▷ _Generate a video from the new noise prior_
8: $\mathbf{z}_0 \leftarrow \mathrm{Sampling}(\mathbf{z}_T)$ ▷ Sampling process
9: video $\leftarrow \mathrm{Decode}(\mathbf{z}_0)$
10: **return** video

**Step 2: Retention of low-frequency signals while enriching high-frequency signals** We apply the Fourier transform to map the noise to the frequency domain:

$$\tilde{\mathbf{x}}^i_1 = \mathcal{F}_{3D}(\mathbf{x}^i_1), \quad \tilde{\mathbf{x}}^i_2 = \mathcal{F}_{3D}(\mathbf{x}^i_2), \quad \tilde{\mathbf{y}}^i_1 = \mathcal{F}_{3D}(\mathbf{y}^i_1), \quad \tilde{\mathbf{y}}^i_2 = \mathcal{F}_{3D}(\mathbf{y}^i_2), \qquad (3)$$

where $\mathcal{F}_{3D}$ denotes the Fourier transform over the temporal and spatial dimensions.
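Step 1 and the transform in Eq. (3) can be sketched as follows; the value θ = π/4 is our illustrative choice, since the paper treats cos θ as a tunable ratio.

```python
import math
import torch

def prepare_noise(z_noise, theta=math.pi / 4):
    """Step 1 (Eq. 2): mix z_noise with fresh Gaussians for the low-frequency
    carriers x1, x2, and draw independent y1, y2 for high frequencies; then
    map everything to the frequency domain with a 3D FFT (Eq. 3)."""
    c, s = math.cos(theta), math.sin(theta)
    scale = math.sqrt(1 + c ** 2)
    eta1, eta2 = torch.randn_like(z_noise), torch.randn_like(z_noise)
    y1, y2 = torch.randn_like(z_noise), torch.randn_like(z_noise)
    x1 = (c * z_noise + s * eta1) / scale
    x2 = (c * z_noise + s * eta2) / scale
    f3d = lambda z: torch.fft.fftn(z, dim=(-3, -2, -1))  # frames, height, width
    return f3d(x1), f3d(x2), f3d(y1), f3d(y2)
```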
We then perform filtering with a low-pass filter $\mathcal{M}$:

$$\tilde{\mathbf{z}}^i_1 = \mathcal{M}\odot\tilde{\mathbf{x}}^i_1 + (\mathbf{1}-\mathcal{M}^2)^{0.5}\odot\tilde{\mathbf{y}}^i_1, \qquad \tilde{\mathbf{z}}^i_2 = \mathcal{M}\odot\tilde{\mathbf{x}}^i_2 + (\mathbf{1}-\mathcal{M}^2)^{0.5}\odot\tilde{\mathbf{y}}^i_2. \qquad (4)$$

Since we are filtering Gaussian variables rather than real image signals, the conventional filtering approach may not be suitable. Whereas a high-pass filter is typically set to $(\mathbf{1}-\mathcal{M})$, we use $(\mathbf{1}-\mathcal{M}^2)^{0.5}$ instead. This adjustment is inspired by a fact in probability: if $\mathbf{u}, \mathbf{v} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ are independent, then for $m \in [0, 1]$, $\mathbf{w} = m\cdot\mathbf{u} + (1-m^2)^{0.5}\cdot\mathbf{v}$ is also standard Gaussian. In traditional filtering operations, the sum of the low-pass and high-pass filters equals one; in our approach, the sum of their squares equals one. This modification enriches the high-frequency signals, maintaining the balance between low-frequency and high-frequency components. As a result, it mitigates the loss of details and motion dynamics, leading to higher fidelity in the generated videos.

**Step 3: Post-processing** After filtering, the frequency features are mapped back into the latent space, followed by post-processing to form the new noise prior $\mathbf{z}^{i+1}_T$. The process is as follows:

$$\mathbf{z}^{i+1}_T = \frac{1}{\sqrt{2}}\left(\Re\big(\mathbf{z}^i_{T,1}\big) + \Im\big(\mathbf{z}^i_{T,1}\big) + \Re\big(\mathbf{z}^i_{T,2}\big) - \Im\big(\mathbf{z}^i_{T,2}\big)\right), \qquad \mathbf{z}^i_{T,\{1,2\}} = \mathcal{F}^{-1}_{3D}\big(\tilde{\mathbf{z}}^i_{\{1,2\}}\big). \qquad (5)$$

Unlike traditional methods that overlook the imaginary component, our approach recognizes the importance of the information contained within these imaginary parts, which is crucial for preserving the variance of the noise prior. Consequently, we retain both the real and imaginary components. Specifically, we take the real parts of both $\mathbf{z}^i_{T,1}$ and $\mathbf{z}^i_{T,2}$, while for the imaginary components we take the positive imaginary part of $\mathbf{z}^i_{T,1}$ and the negative imaginary part of $\mathbf{z}^i_{T,2}$. This is why we prepare two sets of noise in **Step 1**: this symmetric formulation enhances the retention of valuable information while effectively eliminating unnecessary and complex terms.

In summary, our framework comprises two phases: the first phase finds a new noise prior, while the second generates a video based on that prior. The process of finding the noise prior includes the sampling process, diffusion process, and noise refinement, as previously discussed. Our framework is detailed in Algorithm 1.

3.3 A NALYSIS ON THE DISTRIBUTION OF DIFFERENT NOISE PRIOR

The mixed noise prior proposed in PYoCo (Ge et al., 2023) is constructed as follows:

$$\mathbf{z}_j = \frac{1}{\sqrt{2}}\,\epsilon_j + \frac{1}{\sqrt{2}}\,\epsilon_{\mathrm{share}}, \qquad \epsilon_j,\ \epsilon_{\mathrm{share}} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \qquad (6)$$
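Continuing the sketch above, Eqs. (4)-(5) combine into a short filtering-and-recombination routine. The construction of the low-pass mask `M` is left abstract here, since its exact shape is a design choice not pinned down by the equations.

```python
import torch

def freq_filter_and_recombine(x1f, x2f, y1f, y2f, M):
    """Steps 2-3 (Eqs. 4-5). M: real low-pass mask in [0, 1] on the 3D frequency
    grid. High frequencies use (1 - M^2)^0.5 so that, entrywise,
    m*u + sqrt(1 - m^2)*v remains standard Gaussian."""
    hi = (1 - M ** 2).clamp_min(0).sqrt()
    z1f = M * x1f + hi * y1f                     # Eq. (4)
    z2f = M * x2f + hi * y2f
    z1 = torch.fft.ifftn(z1f, dim=(-3, -2, -1))
    z2 = torch.fft.ifftn(z2f, dim=(-3, -2, -1))
    # Eq. (5): real parts of both, +imag of z1 and -imag of z2.
    return (z1.real + z1.imag + z2.real - z2.imag) / 2 ** 0.5
```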
# F EATURE A VERAGING : A N I MPLICIT B IAS OF G RADIENT D ESCENT L EADING TO N ON -R OBUSTNESS IN N EURAL N ETWORKS **Binghui Li** [1] _[,][∗]_ **Zhixuan Pan** [2] _[,][∗]_ **Kaifeng Lyu** [3] **Jian Li** [2] _[,][†]_ 1 Center for Machine Learning Research, Peking University 2 Institute for Interdisciplinary Information Sciences, Tsinghua University 3 Simons Institute, UC Berkeley libinghui@pku.edu.cn, pzx20@mails.tsinghua.edu.cn kaifenglyu@berkeley.edu, lapordge@gmail.com A BSTRACT In this work, we investigate a particular implicit bias in gradient descent training, which we term “Feature Averaging,” and argue that it is one of the principal factors contributing to the non-robustness of deep neural networks. We show that, even when multiple discriminative features are present in the input data, neural networks trained by gradient descent tend to rely on an average (or a certain combination) of these features for classification, rather than distinguishing and leveraging each feature individually. Specifically, we provide a detailed theoretical analysis of the training dynamics of two-layer ReLU networks on a binary classification task, where the data distribution consists of multiple clusters with mutually orthogonal centers. We rigorously prove that gradient descent biases the network towards feature averaging, where the weights of each hidden neuron represent an average of the cluster centers (each corresponding to a distinct feature), thereby making the network vulnerable to input perturbations aligned with the negative direction of the averaged features. On the positive side, we demonstrate that this vulnerability can be mitigated through more granular supervision. In particular, we prove that a two-layer ReLU network can achieve optimal robustness when trained to classify individual features rather than merely the original binary classes. Finally, we validate our theoretical findings with experiments on synthetic datasets, MNIST, and CIFAR-10, and confirm the prevalence of feature averaging and its impact on adversarial robustness. We hope these theoretical and empirical insights deepen the understanding of how gradient descent shapes feature learning and adversarial robustness, and how more detailed supervision can enhance robustness. 1 I NTRODUCTION Deep learning has achieved unprecedented success across a wide range of application domains, including many safety-critical systems such as autonomous driving and diagnostic assistance technologies. Despite these successes, a landmark study by Szegedy et al. (2013) exposed that deep neural networks are extremely vulnerable to adversarial attacks. These attacks involve adding nearly imperceptible and carefully chosen perturbations to input data to confound deep learning models into making incorrect predictions. The perturbed inputs are termed adversarial examples, and their existence has attracted significant attention from the research community. Since then, various attacks (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2018) and defenses (Goodfellow et al., 2014; Madry et al., 2018; Shafahi et al., 2019; Pang et al., 2022) were developed, but the issue of adversarial robustness is still far from being resolved. Previous attempts to explain the adversarial robustness of neural networks have been made from various theoretical perspectives. Daniely & Shacham (2020); Bubeck et al. (2021a); Bartlett et al. _∗_ Equal contribution, alphabet ordering. _†_ Corresponding author. 
1 (2021); Montanari & Wu (2023) proved the existence of adversarial examples for neural networks with random weights across various architectures. Tsipras et al. (2019); Zhang et al. (2019) analyzed the fundamental trade-off between robustness and accuracy. Bubeck et al. (2021b); Bubeck & Sellke (2021); Li et al. (2022a); Li & Li (2023) proved that having a large model size is necessary for achieving robustness in many settings. Ilyas et al. (2019); Tsilivis & Kempe (2022); Kumano et al. (2024); Li & Li (2024) studied the relationship between adversarial examples and the presence of non-robust but predictive features in the data distribution. A related line of work in deep learning theory studies the implicit bias of gradient descent to explain why neural networks generalize so well. Training deep neural networks is a highly non-convex and over-parametrized optimization problem, in which there are many solutions that fit the training data correctly. Recent studies suggest that, without explicit regularization, gradient descent seems to implicitly bias towards solutions that enjoy favorable properties, particularly good generalization. Hence, characterizing various implicit biases in favor of better generalization has been extensively studied in recent years (Gunasekar et al., 2017; Soudry et al., 2018; Arora et al., 2019b; Lyu & Li, 2020; Blanc et al., 2020). However, good generalization properties do not necessarily imply good robustness with respect to inputs. Indeed, even well-trained neural networks are vulnerable to adversarial examples. In fact, recent studies by Vardi et al. (2022) and Frei et al. (2024) proved that the implicit bias of gradient descent can be a “double-edged sword”, in the sense that it leads to generalizable solutions with perfect clean accuracy, but being non-robust (susceptible to small adversarial _ℓ_ 2 -perturbations), even though there exist robust networks with perfect robust accuracy. Under a similar data setup, Min & Vidal (2024) further conjectured that the weight vectors of a two-layer ReLU network trained by gradient flow converge to an average of the cluster centers. In this paper, we perform a detailed analysis of the training dynamics of gradient descent on two-layer ReLU networks (under data distributions similar to Vardi et al. (2022), Frei et al. (2024) and Min & Vidal (2024), and the detailed discussion about the connection between their works and our paper is deferred to Section 2), and rigorously prove that the learned weights exhibit a particular implicit bias, which we term _feature averaging_ . In our setting, feature averaging refers to a particularly simple form: the network trained by gradient descent tends to learn the average of useful features, in the sense that the weight vector associated with each hidden-layer neuron is a weighted average of feature vectors. This also resolves the conjecture made by Min & Vidal (2024). Further, one can easily show that such an average is more susceptible to small adversarial perturbations than individual features, rendering the learned solution non-robust. In our experiments, we empirically observe similar phenomena in several other settings. We argue that feature averaging is a key factor contributing to the non-robustness of deep neural networks, and demonstrate its close relationship with several known phenomena and theoretical models in adversarial robustness research. 
These include the observation that neural networks tend to leverage both robust and non-robust features for classification (Tsipras et al., 2019; Ilyas et al., 2019; Allen-Zhu & Li, 2022; Tsilivis & Kempe, 2022; Li & Li, 2024), the connection between model Lipschitzness (smoothness) and over-parameterization (Bubeck et al., 2021b; Bubeck & Sellke, 2021; Li et al., 2022a; Li & Li, 2023), the simplicity bias of gradient descent that leads to non-robustness (Shah et al., 2020; Lyu et al., 2021), and the dimpled manifold hypothesis (Shamir et al., 2021). Beyond the linear averaging behavior studied in this work, we conjecture that feature averaging may appear in more complex forms in real-world settings. For example, a neural network may tend to combine many localized, semantically meaningful (hence more robust (Ilyas et al., 2019; Tsilivis & Kempe, 2022)) features into one discriminative but non-robust feature.

In light of the feature-averaging phenomenon, we propose to enhance robustness by learning individual features. In particular, we explore a natural and simple, yet less explored, method in the study of adversarial robustness: providing more granular supervised information related to individual features and forcing the model to learn the individual features. Theoretically, we prove that training a two-layer ReLU network with feature-level labels leads to a binary classifier with optimal robustness. Empirically, we design several experiments using synthetic and real datasets, and the experimental results demonstrate that feature-level supervised information can be very effective in enhancing the robustness of the model (even with standard training). These results are consistent with the empirical findings of Sitawarin et al. (2022); Li et al. (2024), which showed that incorporating fine-grained annotations, such as part-level segmentation, can substantially enhance the adversarial robustness of object recognition systems. See Appendix A for a more detailed discussion of these connections, including the relationship of feature averaging to existing robustness phenomena, and how more granular supervision may improve adversarial robustness.

Our technical contributions can be summarized as follows:

1. (Section 4.1) Under certain multi-cluster data distributions (similar to that in Frei et al. (2024)), we prove that two-layer ReLU networks trained by gradient descent converge to feature-averaging solutions (Theorem 4.5). In particular, we show that the weight vector associated with each hidden-layer neuron converges to the average of the cluster-center features, and that the feature-averaging solution is non-robust with respect to perturbations of radius $\Omega(\sqrt{d/k})$, while there exist solutions with optimal robust radius $O(\sqrt{d})$ (where $d$ is the data dimension, $k$ is the number of clusters, and the existence of such optimal robust solutions is shown in Theorem G.3). This result also resolves the conjecture of Min & Vidal (2024) under our settings (Theorem 4.6).

2. (Section 4.2) We show that if the model is provided with feature-level labels (in effect a multi-class classification problem in our multi-cluster data distribution setting), a two-layer network can learn the individual features, which in turn induces a robust model with optimal robust radius $O(\sqrt{d})$ (Theorem 4.7).

3. (Section 5) We validate our theoretical results on synthetic data and on real-world datasets such as MNIST and CIFAR-10. We empirically show that gradient descent learns averaged features.
Our experiments also demonstrate enhanced robustness through the incorporation of fine-grained supervisory information. 2 R ELATED W ORK **Implicit Bias of Gradient Descent.** The implicit bias of gradient descent has been studied from various perspectives. The most prominent line of works establishes an equivalence between neural networks in certain training regimes to kernel regression with Neural Tangent Kernel (NTK) (Du et al., 2019b;a; Allen-Zhu et al., 2019a; Zou et al., 2020; Chizat et al., 2019; Arora et al., 2019b; Ji & Telgarsky, 2020b; Cao & Gu, 2019), but the generalization of kernel regression is usually worse than that of real-world neural networks. Other works prove other types of implicit biases beyond this NTK regime, including margin maximization (Soudry et al., 2018; Nacson et al., 2019; Lyu & Li, 2020; Ji & Telgarsky, 2020a), parameter norm minimization (Gunasekar et al., 2017; 2018; Arora et al., 2019a) and sharpness reduction (Blanc et al., 2020; Damian et al., 2021; HaoChen et al., 2021; Li et al., 2022b; Lyu et al., 2022; Gu et al., 2023). All these works focus on implicit biases that may lead to good generalization except that Vardi et al. (2022) and Frei et al. (2024) connected the line of works on margin to the non-robustness of neural networks, which we discuss shortly. **Feature Learning Theory for Two-Layer Networks.** The feature learning theory of two-layer neural networks as proposed in various recent studies (Wen & Li, 2021; Allen-Zhu & Li, 2022; Chen et al., 2022; Cao et al., 2022; Zhou et al., 2022; Chidambaram et al., 2023; Allen-Zhu & Li, 2023; Kou et al., 2023a; Simsek et al., 2023) aims to explore how features are learned in deep learning. This theory extends the theoretical optimization analysis beyond the scope of the neural tangent kernel (NTK) theory (Jacot et al., 2018; Du et al., 2019b;a; Allen-Zhu et al., 2019b; Arora et al., 2019b). Among these feature learning works, there exist various data assumptions about feature-noise structure. Based on the data assumption of sparse coding model, Wen & Li (2021) study feature learning process of self-supervised contrastive learning, and Allen-Zhu & Li (2022) propose a principle called feature purification to explain the workings of adversarial training. Allen-Zhu & Li (2023) utilize multi-view-based patch-structured data assumption to understand the benefits of ensembles in deep learning. Following the multi-view data proposed in Allen-Zhu & Li (2023), Chidambaram et al. (2023) show that data mix-up algorithm can provably learn diverse features to improve generalization. Cao et al. (2022); Kou et al. (2023a) explore the benign overfitting phenomenon of two-layer convolutional neural networks by leveraging a technique of signal-noise decomposition. Zhou et al. (2022) study feature condensation and prove that, for two-layer network with small initialization, input weights of hidden neurons condense onto isolated orientations at the initial training stage. Simsek et al. (2023) focus on the regression setting and study the compression of the teacher network, and they find that weight vectors, whether copying an individual teacher vector or averaging a set of teacher vectors, are critical points of the loss function. 3 **Comparisons with Vardi et al. (2022), Frei et al. (2024) and Min & Vidal (2024).** Recently, Vardi et al. (2022) and Frei et al. 
(2024) demonstrated that for two-layer ReLU networks, any KKT solution to the maximum margin program (it is known that gradient flow converges to such KKT solution (Lyu & Li, 2020; Ji & Telgarsky, 2020a)) leads to non-robust solutions under the assumption of synthetic cluster data, and Min & Vidal (2024) further conjectured that the weight vectors of two-layer ReLU network converge to an average of cluster-center vectors. Their finding highlights the significance of the optimization process in the (non)robustness of neural networks. Our theoretical results are inspired by theirs, but differ from theirs in the following important aspects: (1) Conceptually, feature averaging is arguably more intuitive and concrete (in the feature level) than the set of KKT properties. Moreover, feature averaging (or its nonlinear extensions) may appear in more complex and general setting even when the solution is far from a KKT point. (2) Technically, we perform a detailed and finite-time analysis of the gradient descent dynamics, in contrast to their result about limiting behavior of gradient descent. In particular, our analysis of gradient descent dynamics reveals the feature learning process. Furthermore, we comment that the time complexity converging from an initialization point to a KKT solution can be slow (i.e., Ω(1 _/_ log( _t_ )) proven in Soudry et al. (2018); Lyu & Li (2020); Kou et al. (2023b)). (3) Our analysis of the GD dynamics requires small initialization, whereas their results depend on starting from a solution that already correctly classifies the training set (an assumption made in (Lyu & Li, 2020) for achieving KKT points). (4) Our result (Theorem 4.5) solves the conjecture proposed by Min & Vidal (2024), where we show that the weight vector associated with each neuron aligns with a weighted average of cluster features, and the ratio between weights of distinct clusters is close to 1. 3 P ROBLEM S ETUP In this section, we introduce some useful notations and concepts, including the multi-cluster data distribution, the two-layer neural network learner and the gradient descent algorithm. **Notations.** We use bold-face letters to denote vectors, e.g., _**x**_ = ( _x_ 1 _, . . ., x_ _d_ ) . For _**x**_ _∈_ R _[d]_, we denote by _∥_ _**x**_ _∥_ the Euclidean ( _ℓ_ 2 ) norm. We denote by 1 ( _·_ ) the standard indicator function.We denote sgn( _z_ ) = 1 if _z >_ 0 and _−_ 1 otherwise. For integer _n ≥_ 1, we denote [ _n_ ] = _{_ 1 _, . . ., n}_ . We denote by _N_ � _µ, σ_ [2] [�] the normal distribution with mean _µ ∈_ R and variance _σ_ [2], and by _N_ ( _**µ**_ _,_ **Σ** ) the multivariate normal distribution with mean vector _**µ**_ and covariance matrix **Σ** . The identity matrix of size _d_ is denoted by _**I**_ _d_ . We use Unif( _A_ ) to denote the uniform distribution on the support set _A_ . We use standard asymptotic notation _O_ ( _·_ ) and Ω( _·_ ) to hide constant factors, and _O_ [˜] ( _·_ ) _,_ Ω( [˜] _·_ ) to hide logarithmic factors. 3.1 D ATA D ISTRIBUTION Following Vardi et al. (2022); Frei et al. (2024), we consider binary classification on the following data distribution with multiple clusters. **Definition 3.1** (Multi-Cluster Data Distribution) **.** Given _k_ vectors _**µ**_ 1 _, . . 
$\boldsymbol{\mu}_1, \dots, \boldsymbol{\mu}_k \in \mathbb{R}^d$, called the *cluster features*, and a partition of $[k]$ into two disjoint sets $J_\pm = (J_+, J_-)$, we define $\mathcal{D}(\{\boldsymbol{\mu}_j\}_{j=1}^k, J_\pm)$ as a data distribution on $\mathbb{R}^d \times \{-1, 1\}$, where each data point $(\mathbf{x}, y)$ is generated as follows:

1. Draw a cluster index $j \sim \mathrm{Unif}([k])$;
2. Set $y = +1$ if $j \in J_+$; otherwise $j \in J_-$ and set $y = -1$;
3. Draw $\mathbf{x} := \boldsymbol{\mu}_j + \boldsymbol{\xi}$, where $\boldsymbol{\xi} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_d)$.

For convenience, we write $\mathcal{D}$ instead of $\mathcal{D}(\{\boldsymbol{\mu}_j\}_{j=1}^k, J_\pm)$ if $\{\boldsymbol{\mu}_j\}_{j=1}^k$ and $J_\pm$ are clear from the context. For $s \in \{\pm 1\}$, we write $J_s$ to denote $J_+$ if $s = +1$ and $J_-$ if $s = -1$. To ease the analysis, we make the following simplifying assumptions on the distribution.

**Assumption 3.2** (Orthogonal Equinorm Cluster Features)**.** The cluster features $\{\boldsymbol{\mu}_j\}_{j=1}^k$ satisfy (1) $\|\boldsymbol{\mu}_j\| = \sqrt{d}$ for all $j \in [k]$; and (2) $\boldsymbol{\mu}_i \perp \boldsymbol{\mu}_j$ for all $1 \le i < j \le k$.

**Assumption 3.3** (Nearly Balanced Classification)**.** The partition $J_\pm$ satisfies $c^{-1} \le |J_+|/|J_-| \le c$ for some absolute constant $c \ge 1$.

Our data distribution is similar to that in Vardi et al. (2022) and Frei et al. (2024). In particular, Vardi et al. (2022) consider a setting where the data comprise $k$ nearly orthogonal data points in $\mathbb{R}^d$. This assumption is further relaxed in Frei et al. (2024), who assume $k$ clusters with nearly orthogonal cluster means $\{\boldsymbol{\mu}_i\}_{i=1}^k$ (i.e., $\left|\left\langle \tfrac{\boldsymbol{\mu}_i}{\|\boldsymbol{\mu}_i\|}, \tfrac{\boldsymbol{\mu}_j}{\|\boldsymbol{\mu}_j\|} \right\rangle\right| = O\!\left(\tfrac{1}{k}\right)$ holds for all $i \ne j$). For simplicity, our work focuses on the setting where the clusters are exactly orthogonal to each other.

3.2 NEURAL NETWORK LEARNER

A training dataset $S := \{(\mathbf{x}_i, y_i)\}_{i=1}^n \subseteq \mathbb{R}^d \times \{-1, 1\}$ of size $n$ is randomly sampled from the data distribution $\mathcal{D}(\{\boldsymbol{\mu}_j\}_{j=1}^k, J_\pm)$ and is used to train a two-layer neural network.

**Network Architecture.** We focus on learning two-layer ReLU networks. Such networks are usually defined as $f_{\boldsymbol{\theta}}(\mathbf{x}) := \sum_{j=1}^M a_j \,\mathrm{ReLU}(\langle \mathbf{w}_j, \mathbf{x} \rangle + b_j)$, where $\boldsymbol{\theta} := \big(\{a_j\}_{j=1}^M, \{\mathbf{w}_j\}_{j=1}^M, \{b_j\}_{j=1}^M\big)$ are the parameters of the network, and $\mathrm{ReLU}(\cdot)$ is the ReLU activation function defined as $\mathrm{ReLU}(z) = \max(0, z)$. For the sake of simplicity, we consider the case where $M = 2m$ is even and fix the second layer as $a_j = \frac{1}{m}$ for $1 \le j \le m$ and $a_j = -\frac{1}{m}$ for $m + 1 \le j \le 2m$, which is a widely adopted setting in the literature of feature learning theory (Allen-Zhu & Li, 2022; Cao et al., 2022; Kou et al., 2023a).
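To make the data model concrete, here is a minimal NumPy sketch of a sampler for Definition 3.1. The choice of $\sqrt{d}$-scaled standard basis vectors as cluster features is one concrete instantiation satisfying Assumption 3.2 (it requires $k \le d$); all names and sizes are illustrative, not taken from the paper.

```python
import numpy as np

def sample_dataset(n, d, k, J_plus, rng):
    """Sample n points from the multi-cluster distribution of Definition 3.1.

    Cluster features are sqrt(d) times the first k standard basis vectors,
    one concrete choice satisfying Assumption 3.2 (equinorm, orthogonal).
    `J_plus` is the set of cluster indices labeled +1.
    """
    assert k <= d
    mus = np.sqrt(d) * np.eye(d)[:k]                 # cluster features mu_1..mu_k
    j = rng.integers(k, size=n)                      # step 1: j ~ Unif([k])
    y = np.where(np.isin(j, list(J_plus)), 1, -1)    # step 2: label from the partition
    x = mus[j] + rng.standard_normal((n, d))         # step 3: x = mu_j + xi, xi ~ N(0, I_d)
    return x, y, j

rng = np.random.default_rng(0)
# A balanced partition (|J_+| = |J_-|) satisfies Assumption 3.3 with c = 1.
X, y, j = sample_dataset(n=512, d=1000, k=10, J_plus={0, 1, 2, 3, 4}, rng=rng)
```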
With this simplification in place, we focus on training only the first layer $(\{\mathbf{w}_j\}_{j=1}^M, \{b_j\}_{j=1}^M)$ and rewrite the network as

$$f_{\boldsymbol{\theta}}(\mathbf{x}) := \frac{1}{m} \sum_{r \in [m]} \mathrm{ReLU}(\langle \mathbf{w}_{+1,r}, \mathbf{x} \rangle + b_{+1,r}) - \frac{1}{m} \sum_{r \in [m]} \mathrm{ReLU}(\langle \mathbf{w}_{-1,r}, \mathbf{x} \rangle + b_{-1,r}),$$

where $\boldsymbol{\theta} = (\{\mathbf{w}_{+1,r}\}_{r=1}^m, \{b_{+1,r}\}_{r=1}^m, \{\mathbf{w}_{-1,r}\}_{r=1}^m, \{b_{-1,r}\}_{r=1}^m)$ are the trainable parameters, and $\mathbf{w}_{+1,r}$ and $b_{+1,r}$ correspond to the neurons with $a_r = \frac{1}{m}$, while $\mathbf{w}_{-1,r}$ and $b_{-1,r}$ correspond to the neurons with $a_r = -\frac{1}{m}$.

**Training Objective and Gradient Descent.** The neural network $f_{\boldsymbol{\theta}}(\cdot)$ is trained to minimize the following empirical loss on the training dataset $S$: $L(\boldsymbol{\theta}) := \frac{1}{n} \sum_{i=1}^n \ell(y_i f_{\boldsymbol{\theta}}(\mathbf{x}_i))$, where $\ell(q) := \log(1 + e^{-q})$ is the logistic loss. We apply gradient descent to minimize this loss:

$$\boldsymbol{\theta}^{(t+1)} = \boldsymbol{\theta}^{(t)} - \eta \nabla L(\boldsymbol{\theta}^{(t)}), \tag{1}$$

where $\boldsymbol{\theta}^{(t)}$ denotes the parameters at the $t$-th iteration for all $t \ge 0$, and $\eta > 0$ is the learning rate. We specify the derivative of the ReLU activation as $\mathrm{ReLU}'(z) = \mathbb{1}(z \ge 0)$ in backpropagation. At initialization, we set $\mathbf{w}_{s,r}^{(0)} \sim \mathcal{N}(\mathbf{0}, \sigma_w^2 \mathbf{I}_d)$ and $b_{s,r}^{(0)} \sim \mathcal{N}(0, \sigma_b^2)$ for some $\sigma_w, \sigma_b > 0$.

**Clean Accuracy and Robust Accuracy.** For a given data distribution $\mathcal{D}$ over $\mathbb{R}^d \times \{-1, 1\}$, the clean accuracy of a neural network $f_{\boldsymbol{\theta}}: \mathbb{R}^d \to \mathbb{R}$ on $\mathcal{D}$ is defined as $\mathrm{Acc}^{\mathcal{D}}_{\mathrm{clean}}(f_{\boldsymbol{\theta}}) := \mathbb{P}_{(\mathbf{x}, y) \sim \mathcal{D}}[\mathrm{sgn}(f_{\boldsymbol{\theta}}(\mathbf{x})) = y]$. In this work, we focus on $\ell_2$-robustness. The $\ell_2$ $\delta$-robust accuracy of $f_{\boldsymbol{\theta}}$ on $\mathcal{D}$ is defined as

$$\mathrm{Acc}^{\mathcal{D}}_{\mathrm{robust}}(f_{\boldsymbol{\theta}}; \delta) := \mathbb{P}_{(\mathbf{x}, y) \sim \mathcal{D}}\left[\forall \boldsymbol{\rho} \in \mathcal{B}_\delta: \mathrm{sgn}(f_{\boldsymbol{\theta}}(\mathbf{x} + \boldsymbol{\rho})) = y\right],$$

where $\mathcal{B}_\delta := \{\boldsymbol{\rho} \in \mathbb{R}^d : \|\boldsymbol{\rho}\| \le \delta\}$ is the $\ell_2$-ball centered at the origin with radius $\delta$. We say that a neural network $f_{\boldsymbol{\theta}}$ is $\delta$-robust if $\mathrm{Acc}^{\mathcal{D}}_{\mathrm{robust}}(f_{\boldsymbol{\theta}}; \delta) \ge 1 - \epsilon(d)$ for some function $\epsilon(d)$ that vanishes to zero, i.e., $\epsilon(d) \to 0$ as $d \to \infty$.

**Robust Networks Exist.** In a setting very similar to ours, Frei et al. (2024) show that there exists a two-layer ReLU network that achieves nearly 100% clean accuracy and $\Omega(\sqrt{d})$-robust accuracy on their data distribution. In our setting, we can also construct a similar network that achieves nearly 100% clean accuracy and $\Omega(\sqrt{d})$-robust accuracy.
In particular, such a network uses one hidden neuron to capture each feature/cluster (i.e., the neuron is activated only if the input point comes from the corresponding cluster). See Theorem G.3 in Appendix G.2 for the details and Figure 1 for an illustration. However, we will soon show that, although such an $\Omega(\sqrt{d})$-robust network exists, gradient descent is incapable of learning it; instead, it converges to a very different solution whose robust radius is $\Theta(\sqrt{k})$ times smaller.
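Putting the pieces together, the following sketch trains the simplified two-layer network of Section 3.2 with plain gradient descent (Eq. (1)) on synthetic multi-cluster data. The learning rate, initialization scales, and problem sizes are illustrative assumptions, not the paper's experimental settings.

```python
import torch
import torch.nn.functional as F

d, k, m, n = 1000, 10, 20, 512          # illustrative sizes
sigma_w = sigma_b = 1e-3                # small initialization, as assumed in the analysis

# Synthetic data: orthogonal equinorm cluster features, balanced label partition.
mus = (d ** 0.5) * torch.eye(d)[:k]
j = torch.randint(k, (n,))
y = torch.where(j < k // 2, 1.0, -1.0)
X = mus[j] + torch.randn(n, d)

# Trainable first layer; second layer fixed to +/- 1/m as in the text.
W = torch.randn(2 * m, d) * sigma_w     # rows 0..m-1: positive neurons, m..2m-1: negative
b = torch.randn(2 * m) * sigma_b
W.requires_grad_(); b.requires_grad_()
a = torch.cat([torch.full((m,), 1 / m), torch.full((m,), -1 / m)])

eta = 0.1                               # illustrative learning rate
for t in range(1000):
    f = (F.relu(X @ W.T + b) * a).sum(dim=1)   # f_theta(x)
    loss = F.softplus(-y * f).mean()           # logistic loss: log(1 + exp(-y f))
    loss.backward()
    with torch.no_grad():
        W -= eta * W.grad; b -= eta * b.grad   # plain gradient descent, Eq. (1)
        W.grad = None; b.grad = None
```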
### IMPROVING SEMANTIC UNDERSTANDING IN SPEECH LANGUAGE MODELS VIA BRAIN TUNING

**Omer Moussa**¹ **Dietrich Klakow**² **Mariya Toneva**¹
¹ Max Planck Institute for Software Systems, ² Saarland University
{omoussa, mtoneva}@mpi-sws.org, dietrich.klakow@lsv.uni-saarland.de

ABSTRACT

Speech language models align with human brain responses to natural language to an impressive degree. However, current models rely heavily on low-level speech features, indicating that they lack brain-relevant semantics, which limits their utility as model organisms of semantic processing in the brain. In this work, we address this limitation by inducing brain-relevant bias directly into the models via fine-tuning with fMRI recordings of people listening to natural stories, a process we name *brain-tuning*. After testing it on 3 different pretrained model families, we show that brain-tuning not only improves overall alignment with new brain recordings in semantic language regions, but also reduces the reliance on low-level speech features for this alignment. Excitingly, we further show that brain-tuning leads to 1) consistent improvements in performance on semantic downstream tasks and 2) a representational space with increased semantic preference. Our results provide converging evidence, for the first time, that incorporating brain signals into the training of language models improves the models' semantic understanding. We make the code available at https://github.com/bridge-ai-neuro/brain-tuning.

1 INTRODUCTION

It is an exciting time for the cognitive neuroscience of language with the rise of language models, which have been shown to align with (i.e., predict) brain activity evoked by natural language to impressive and unprecedented degrees (Wehbe et al., 2014; Jain & Huth, 2018; Toneva & Wehbe, 2019; Schrimpf et al., 2021; Caucheteux & King, 2022; Goldstein et al., 2022; Karamolegkou et al., 2023). Researchers aim to use language models as model organisms (Toneva, 2021) of reading and listening in the brain to learn more about the underlying information processing that leads to brain-like representations of language. However, recent work has questioned whether current popular speech language models can fully serve this role, as their alignment with semantic brain regions was shown to be mostly due to low-level speech features, indicating that speech language models lack brain-relevant semantics (Oota et al., 2024a). Given that most large brain recording datasets are of speech-evoked language (LeBel et al., 2023; Nastase et al., 2021; Deniz et al., 2019; Momenian et al., 2024), having speech models with improved brain-relevant semantics is important for providing better model organisms of auditory language processing. The lack of brain-relevant semantics in speech models (Oota et al., 2024a) may also be related to their incomplete downstream semantic understanding (Choi et al., 2024). To bridge the gap between language understanding in speech models and the human brain, we propose to augment pretrained speech model training directly with brain recordings in a process we call brain-tuning (see Fig. 1a for an illustration of the training approach).
We then evaluate the resulting brain-tuned speech models in three distinct ways (see Fig. 1c for an illustration of the evaluation approach): 1) alignment with new brain recordings in semantic regions of the brain, which we expect to significantly increase if brain-tuning successfully induces brain-relevant semantics; 2) the effect of low-level features, such as Tri-Phones and Articulation, on the alignment with these semantic regions, which we expect to significantly decrease if brain-tuning successfully induces brain-relevant semantics; and 3) downstream performance on tasks that benefit from semantic understanding, which we expect to significantly improve if the brain-relevant semantic understanding induced by brain-tuning is also useful for downstream semantic tasks.

We brain-tune three popular speech language models using the largest available fMRI dataset, recorded while participants listened to natural stories. Across all models, we find that brain-tuning 1) significantly improves alignment with new fMRI recordings in semantic brain regions, 2) significantly reduces the impact of low-level features on this alignment, and 3) significantly improves downstream performance on tasks that benefit from semantic understanding. We show that these results hold when comparing the brain-tuned models to their pretrained counterparts and to two additional strong baselines (i.e., brain-tuning with block-permuted fMRI data, and fine-tuning using representations from a larger speech model). Our results provide converging evidence that augmenting speech models with brain signals from listening to natural language improves semantic understanding in speech models. Excitingly, our findings indicate for the first time that improving alignment with semantic understanding in the brain also translates to downstream gains for the models. We will make all models and code publicly available, and hope that the improved speech models our work provides will contribute to a better understanding of listening in the brain. Our main contributions can be summarized as follows:

1. We provide an approach to fine-tune pretrained speech models using fMRI recordings of people listening to natural stories, and validate it across three popular model families.
2. We conduct extensive analyses to understand the impact of this fine-tuning on the speech model representations and behavior.
3. For the first time, we show that improving alignment with the brain has a substantial and significant downstream benefit for an AI model.

2 RELATED WORK

Our work is most closely related to that of Schwartz et al. (2019), who fine-tune one pretrained text-based language model (BERT (Devlin et al., 2019)) using fMRI and MEG recordings of participants reading a chapter of a book. We instead focus on speech models, validate our method across three model families, and conduct comprehensive analyses to reveal, for the first time, that brain-tuning improves semantic understanding in speech language models. Separately, a growing literature investigates the alignment between human brains and pretrained language models.
A number of studies have shown a degree of alignment between language-evoked brain activity and text-based language models (Wehbe et al., 2014; Jain & Huth, 2018; Toneva & Wehbe, 2019; Caucheteux & King, 2022; Jat et al., 2019; Abdou et al., 2021; Schrimpf et al., 2021; Toneva et al., 2022a;b; Antonello et al., 2021; Oota et al., 2022; Merlin & Toneva, 2022; Aw & Toneva, 2023; Oota et al., 2024b; Lamarre et al., 2022; Antonello et al., 2024), and speech-based language models (Millet et al., 2022; Vaidya et al., 2022; Tuckute et al., 2023; Oota et al., 2023; 2024a; Chen et al., 2024). Our approach of brain-tuning pretrained language models is complementary and can be used in addition to previous methods for analyzing the alignment between language models and brain activity.

3 METHODS

3.1 SPEECH LANGUAGE MODELS

We build on three popular pretrained transformer-based speech language model families: Wav2vec2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021), and Whisper (Radford et al., 2023). We chose versions of these models that have comparable sizes (~90M parameters), the same number of encoder layers (12), and the same embedding size (768). Wav2vec2.0 and HuBERT are self-supervised models that are trained to predict representations of masked portions of the input. They both divide the input into tokens of 20 ms and then use a CNN feature extractor. We use the base architectures, which are trained on ~960 hours of audio. Whisper, unlike Wav2vec2.0 and HuBERT, is trained in a weakly supervised manner, using 680K hours of paired audio-text data, and has an encoder-decoder architecture. Contrary to HuBERT and Wav2vec2.0, Whisper takes a fixed 30 s input and converts it to log-mel spectrograms. We fine-tune only the Whisper encoder for two reasons: 1) to keep the model of comparable size to the other two models, and 2) since the encoder is expected to represent lower-level information than the decoder, it is a good testbed for whether brain-tuning can induce semantic understanding.

Figure 1: Training and Evaluation Approaches. (a) Brain-tuning approach for a given speech model; (b) evaluation of brain alignment and low-level feature impact on the brain alignment; (c) types of evaluation and expected outcomes if brain-tuning successfully improves semantic understanding in speech models: increase of alignment with semantic brain regions, decrease of the impact of low-level features on this alignment, and increase in downstream performance on semantic tasks.

3.2 NATURALISTIC BRAIN DATASET AND DATA PREPROCESSING

We use the largest public dataset of fMRI recordings (LeBel et al., 2024) for brain-tuning. The dataset contains fMRI recordings for 8 participants listening to 27 short stories from the Moth Radio Hour podcast, for a total of 6.4 hours of audio per participant (11,543 fMRI images (TRs) with TR = 2.0045 s). To fine-tune a model using fMRI recordings, we need to build a paired dataset of fMRI recordings and the corresponding audio snippets that were presented to the participants.
We follow previously proposed approaches for this (Oota et al., 2024a; Vaidya et al., 2022; Antonello et al., 2024; Schwartz et al., 2019). Specifically, we first partition the audio input using a sliding window of length $T$ seconds with a stride of $W$ seconds. This way, at each time $t$ in the audio, a window spanning $[t - T, t]$ seconds is provided as input to the speech model. We use $T = 16$ s and $W = 0.1$ s. We next align the stimulus presentation rate with the slower fMRI acquisition rate by downsampling with a 3-lobed Lanczos filter. Lastly, we account for the slowness of the fMRI hemodynamic response by modeling it as a finite impulse response filter spanning 10 seconds (5 TRs). These steps result in an audio-fMRI paired dataset that can be used for brain-tuning or evaluation.

**Estimated noise ceiling.** Noise in fMRI data can impair brain-tuning and evaluation, so it is important to estimate the "noise ceiling" of each voxel in the fMRI recordings. We estimate the voxel-wise noise ceiling for all participants' fMRI data based on the method preferred by the original dataset paper (LeBel et al., 2023), which leverages within-participant repetitions of the same story. This noise ceiling value estimates the amount of explainable variance in the brain signal, ranging from 0 to 1. We use this estimated noise ceiling to filter noisy voxels and to normalize the brain alignment during evaluation. We use a filtration threshold of 0.4, in line with the findings of Antonello et al. (2024). After filtering voxels with a low noise ceiling, 30,000 to 50,000 voxels remain per participant. The final brain-tuning voxel set contains voxels from late language regions and the auditory cortex. Note that because the late language regions are much larger than the auditory cortex, the number of included voxels from the late language regions is naturally much greater (as shown in Fig. 13).

3.3 BRAIN-TUNING SPEECH MODELS

**Brain-tuning approach.** Given an input audio and its corresponding fMRI response, obtained via the method in Section 3.2, we aim to fine-tune a pretrained speech model with the fMRI responses (i.e., brain-tune the model). Specifically, we fine-tune the model to reconstruct the fMRI responses corresponding to the voxels with a high noise ceiling (> 0.4). The approach is illustrated in Fig. 1a. To this end, we add a pooling layer and a projection head on top of the output tokens. The projection head predicts the fMRI response from the pooled model tokens. More formally, given the output tokens $o_1, \dots, o_N$, we have a function $\mathbf{H}$ that predicts the fMRI targets such that $\mathbf{H}(o_1{:}o_N) = FC(P(o_1{:}o_N))$, where $P$ is an average pooling function and $FC$ is a linear function. The training objective is a reconstruction loss ($L_2$ loss) between the outputs of $\mathbf{H}$ and the fMRI voxels. We freeze the feature extractor and backpropagate the loss to fine-tune the projection head and the transformer layers.

**Training details.** We used base learning rates of $5 \times 10^{-5}$ and $10^{-4}$ for the transformer layers and the linear projection head, respectively. Both had a linear decay schedule for the learning rate, with a warmup period for 10% of the epochs. The 27 fMRI stories are split into a training set (24 stories), a validation set (2 stories), and a held-out test set (1 story). Training is stopped when the validation loss saturates or begins to diverge.
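A minimal PyTorch sketch of the pooling-plus-projection head $\mathbf{H}(o_1{:}o_N) = FC(P(o_1{:}o_N))$ and its reconstruction loss is given below. The class name, batch shapes, and voxel count are illustrative (the true voxel count is participant-specific, in the 30,000 to 50,000 range after noise-ceiling filtering).

```python
import torch
import torch.nn as nn

class BrainTuningHead(nn.Module):
    """Sketch of H(o_1:o_N) = FC(P(o_1:o_N)): mean-pool the model's output
    tokens, then linearly project to the target voxels."""
    def __init__(self, embed_dim: int, num_voxels: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_voxels)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, N, embed_dim) -> pooled: (batch, embed_dim)
        pooled = tokens.mean(dim=1)        # P: average pooling over output tokens
        return self.fc(pooled)             # FC: predicted fMRI response

# Reconstruction (L2) loss against the high-noise-ceiling voxels; the
# tensors below are stand-ins for real encoder outputs and fMRI responses.
head = BrainTuningHead(embed_dim=768, num_voxels=40_000)
tokens = torch.randn(8, 499, 768)
target = torch.randn(8, 40_000)
loss = nn.functional.mse_loss(head(tokens), target)
```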
Since the number of voxels differs for each participant, this fine-tuning process is done separately for each fMRI participant. We apply this approach to the 3 pretrained models: Wav2vec2.0, HuBERT, and the Whisper encoder.

3.3.1 COMPARISON MODELS

In addition to comparing the brain-tuned models to their pretrained counterparts, we train several additional baselines for comparison. We briefly summarize these baselines and their purpose below, and provide more details about each baseline in Appendix D.2.

**Random brain-tuned.** This baseline tests how the addition of any fMRI data impacts model performance. It uses the same fine-tuning process as in Fig. 1a, but instead of using the matched fMRI responses for the input stimulus, it uses block-permuted fMRI responses.

**Big spoken language model-tuned (BigSLM-tuned).** This baseline tests the importance of having fMRI responses as the training targets. We replace the fMRI targets for the input stimuli with representations of the same stimuli obtained from a BigSLM. We use Whisper Medium (800M parameters) as the BigSLM and use a concatenation of all its decoder layers' representations.

**Stimulus-tuned.** This baseline tests whether tuning with the fMRI signal yields gains beyond simply further tuning on the stimulus audio alone. Stimulus-tuned models have previously been found to outperform pretrained models specifically for brain alignment (Merlin & Toneva, 2022), but their performance on downstream tasks has not been investigated.

**Text language model-tuned (LM-tuned).** We expect that current text LMs encode richer semantics than current speech LMs, so this baseline tests the importance of added semantics for model performance. For tuning, we use representations from two pretrained text LMs (GPT2 and LLama2). We leverage LM-tuned models to detect which downstream tasks benefit from more semantics.

In the main paper, we focus on two of these baselines, Random Brain-tuned and BigSLM-tuned, and provide results for the remaining baselines in Appendix D.2. Briefly, stimulus-tuned models perform similarly to pretrained models and substantially worse than brain-tuned models on the tested downstream tasks. LM-tuned models improve over the pretrained models on two downstream tasks, the same ones where brain-tuning leads to the biggest gains over the pretrained models. This further supports our conclusion that brain-tuning improves semantic understanding in speech models.

3.4 EVALUATION

We evaluate multiple aspects of the brain-tuned models and illustrate our evaluation strategy in Fig. 1c. If brain-tuning successfully improves semantic understanding in speech models, we expect that brain-tuned models will align better with semantic language regions in new brain recordings, show a lower impact of low-level features on this alignment, and have improved downstream performance on semantic tasks.

3.4.1 BRAIN ALIGNMENT

To compare brain alignment for a model before (i.e., the pretrained version) and after brain-tuning, we compute the normalized brain alignment using standard voxel-wise encoding models and report it for language- and speech-related brain regions. For each region, we statistically test whether brain-tuning leads to significantly better alignment.

**Normalized brain alignment.** We estimate standard voxel-wise encoding models to evaluate the brain alignment of a model representation (Antonello et al., 2024; Vaidya et al., 2022; Oota et al., 2024a).
We carry out this voxel-wise encoding as shown in the original-alignment branch of Fig. 1b. The audio data is processed as detailed in Section 3.2, then a voxel-wise encoding function $\mathbf{h}$ is learned using ridge regression on the training portion of the dataset. The prediction performance of this encoding function is computed over the held-out testing portion of the dataset via Pearson correlation. For a voxel $v$, we define $\rho_v$ (the alignment for voxel $v$) as the Pearson correlation between the predictions of $\mathbf{h}$ and the corresponding brain responses for this voxel across all held-out data samples. Lastly, we define the normalized brain alignment $B$ for a brain region of $V$ voxels as:

$$B = \frac{1}{|V|} \sum_{v \in V} \frac{\rho_v}{NC_v} \tag{1}$$

where $NC_v$ is the noise ceiling for voxel $v$. This serves as a standardized measure of alignment between a model and different brain regions, since it is computed relative to the estimated explainable variance in the brain region.

**Parsing language and primary auditory regions.** To focus the normalized brain alignment comparison on language and primary auditory regions, we use FreeSurfer v7 to project the participants' data, and then use the human cerebral cortex parcellation atlas from Glasser et al. (2016) to parse the regions of interest (ROIs). We focus mainly on the late language regions (e.g., inferior frontal gyrus, angular gyrus, anterior and posterior temporal lobes, and middle frontal gyrus) and the primary auditory regions. The full ROI list and their functions are provided in Appendix A.1.

**Significance testing.** To test whether the brain-tuned models have significantly different alignment than the pretrained ones, we use the Wilcoxon signed-rank test. We indicate significant differences (corresponding to p-value < 0.05) with an asterisk *.

3.4.2 IMPACT OF LOW-LEVEL FEATURES ON BRAIN ALIGNMENT

Previous work showed that the alignment of pretrained speech models with late language regions is mostly due to low-level features (Oota et al., 2024a), which is undesirable. We therefore set out to test the impact of low-level features on the brain-tuned models' alignment with the brain. To enable comparisons with previous work, we estimate the low-level feature impact on brain alignment using the same approach as Oota et al. (2024a). Intuitively, the impact of a specific low-level feature is estimated by comparing the brain alignment of a model before and after this feature is computationally removed from the model. If, after removal of the low-level feature, the alignment is significantly lower than the original one, the low-level feature is said to have a high impact on the brain alignment. We illustrate this process in Fig. 1b and provide details about the method below.

**Low-level features.** We focus on four low-level speech features: Power Spectrum (the time-varying power spectrum across frequency bands), Di-Phones & Tri-Phones (adjacent pairs and triples of phonemes), and Articulation (articulatory characteristics of the phonemes). These features cover different stages of speech and are considered non-semantic. The specifics of obtaining these features from the audio are detailed in Oota et al. (2024a) and Appendix A.3.

**Low-level feature impact.** First, given a low-level feature of the input audio, a linear function $\mathbf{F}$ learns to predict the representations of the model from this feature.
Then, the predicted model representations are subtracted from the true representations, and the brain alignment of this residual is estimated via a standard encoding model (Section 3.4.1). We define the **low-level impact** $R$ as:

$$R = 100 \cdot \frac{B_o - B_r}{B_o} \tag{2}$$

where $B_o$ is the original normalized brain alignment and $B_r$ is the normalized brain alignment of the residual.
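The two quantities above can be sketched in a few lines. The snippet below assumes scikit-learn ridge regression with a fixed illustrative regularization strength (the excerpt does not specify hyperparameter selection), feature and fMRI arrays of shape (time points, dimensions), and a per-voxel noise-ceiling vector `nc`.

```python
import numpy as np
from sklearn.linear_model import Ridge

def normalized_alignment(feats_tr, fmri_tr, feats_te, fmri_te, nc):
    """Eq. (1): voxel-wise ridge encoding, Pearson r per voxel, noise-ceiling normalized."""
    h = Ridge(alpha=1.0).fit(feats_tr, fmri_tr)
    pred = h.predict(feats_te)
    # Pearson correlation per voxel between predictions and held-out responses.
    pc, tc = pred - pred.mean(0), fmri_te - fmri_te.mean(0)
    rho = (pc * tc).sum(0) / (np.linalg.norm(pc, axis=0) * np.linalg.norm(tc, axis=0))
    return np.mean(rho / nc)

def low_level_impact(low_tr, low_te, feats_tr, feats_te, fmri_tr, fmri_te, nc):
    """Eq. (2): remove the linear contribution of a low-level feature, compare alignments."""
    F_fit = Ridge(alpha=1.0).fit(low_tr, feats_tr)   # F: low-level feature -> model features
    res_tr = feats_tr - F_fit.predict(low_tr)        # residual representations
    res_te = feats_te - F_fit.predict(low_te)
    B_o = normalized_alignment(feats_tr, fmri_tr, feats_te, fmri_te, nc)
    B_r = normalized_alignment(res_tr, fmri_tr, res_te, fmri_te, nc)
    return 100.0 * (B_o - B_r) / B_o
```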
# ATTENTION WITH MARKOV: A CURIOUS CASE OF SINGLE-LAYER TRANSFORMERS

**Ashok Vardhan Makkuva**∗ (EPFL), **Marco Bondaschi**∗ (EPFL), **Adway Girish** (EPFL), **Alliot Nagle** (UT Austin), **Martin Jaggi** (EPFL), **Hyeji Kim**† (UT Austin), **Michael Gastpar**† (EPFL)

∗ Equal contribution. † Equal contribution. Correspondence to: Ashok Vardhan Makkuva, ashok.makkuva@epfl.ch.

ABSTRACT

Attention-based transformers have achieved tremendous success across a variety of disciplines, including natural language. To deepen our understanding of their sequential modeling capabilities, there is a growing interest in using Markov input processes to study them. A key finding is that when trained on first-order Markov chains, transformers with two or more layers consistently develop an induction head mechanism to estimate the in-context bigram conditional distribution. In contrast, single-layer transformers, unable to form an induction head, directly learn the Markov kernel but often face a surprising challenge: they become trapped in local minima representing the unigram distribution, whereas deeper models reliably converge to the ground-truth bigram. While single-layer transformers can theoretically model first-order Markov chains, their empirical failure to learn this simple kernel in practice remains a curious phenomenon. To explain this contrasting behavior of single-layer models, in this paper we introduce a new framework for a principled analysis of transformers via Markov chains. Leveraging our framework, we theoretically characterize the loss landscape of single-layer transformers and show the existence of global minima (bigram) and bad local minima (unigram) contingent on data properties and model architecture. We precisely delineate the regimes under which these local optima occur. Backed by experiments, we demonstrate that our theoretical findings are in congruence with the empirical results. Finally, we outline several open problems in this arena. Code is available at https://github.com/Bond1995/Markov.

1 INTRODUCTION

Attention-based transformers have been at the forefront of recent breakthroughs in a variety of disciplines, including natural language processing (Vaswani et al., 2017; Radford and Narasimhan, 2018; Devlin et al., 2018). One of the key workhorses behind this success is the attention mechanism, which allows transformers to capture complex causal structures in the data, thus endowing them with impressive sequential modeling capabilities. Given their success, there is tremendous interest in understanding the sequential modeling abilities of transformers. Notably, a growing body of research explores transformers through Markov input processes to investigate their in-context learning capabilities (Rajaraman et al., 2024a; Nichani et al., 2024; Edelman et al., 2024; Bietti et al., 2023). These studies reveal an interesting insight: transformers with two or more layers develop an induction head to estimate the in-context bigram conditional distribution when trained on first-order Markov chains. In contrast, single-layer transformers, unable to form an induction head (Olsson et al., 2022), directly learn the Markov kernel. Surprisingly, we empirically find that while deeper models reliably converge to the ground-truth bigram, regardless of initialization, single-layer transformers often get stuck in local minima corresponding to the unigram distribution (Fig. 1). Despite their theoretical ability to model first-order
Markov chains, they sometimes fail to learn this simple kernel in practice. Motivated by this stark contrast in behavior based on depth, and our limited understanding of it, we ask: *Can we systematically characterize the learning capabilities of single-layer transformers with Markovian inputs?*

To address this, in this paper we introduce a new framework for a principled theoretical and empirical analysis of transformers via Markov chains. Leveraging our framework, we characterize the loss landscape of single-layer transformers and prove the existence of bad local minima and global minima corresponding to the unigram and bigram, respectively. We further demonstrate that the presence of these local optima depends on the Markov state switching probabilities and the transformer's weight tying, and we precisely delineate the regimes under which this happens. Together, our analysis reveals a complex interplay between the data-distributional properties, the transformer architecture, and the final model performance for single-layer transformers with Markov chains, explaining the aforementioned empirical phenomena. In summary, we make the following contributions:

- We provide a novel framework for a precise theoretical and empirical study of transformers via Markov chains (Sec. 3).
- We characterize the loss landscape of single-layer transformers with first-order Markov chains, highlighting the effect of the data distribution and the model architecture (Sec. 4).
- We show that the Markov switching probabilities and weight tying play a crucial role in the presence of local optima on the loss surface, and we precisely characterize the said conditions (Thms. 2 and 3).

Figure 1: Single-layer transformers get stuck at local minima, corresponding to the unigram model, when the input is a first-order Markov chain with switching probabilities $p = 0.5$ and $q = 0.8$ (Fig. 2b). However, deeper models escape to global minima corresponding to the bigram model. (The plot shows test loss versus iteration for 1-, 2-, 4-, and 8-layer transformers, with the unigram loss (local minimum) and the bigram loss (global minimum) as reference lines.)

Our main findings and observations are:

- We prove that weight tying can introduce bad local minima for single-layer transformers when the Markovian switching factor is greater than one (Thm. 2). Removing the tying, however, resolves the issue (Thm. 3).
- When the Markovian switching factor is less than one, we empirically observe that the model always converges to the global minima irrespective of weight tying (Fig. 3).
- Interestingly, transformers with depth two and beyond always converge to the global minima irrespective of weight tying and switching (Fig. 1).

**Notation.** Scalars are denoted by italic lower-case letters like $x, y$, and Euclidean vectors and matrices are denoted by bold ones $\mathbf{x}, \mathbf{y}, \mathbf{M}$, etc. We use $\|\cdot\|$ to denote the $\ell_2$-norm for Euclidean vectors and the Frobenius norm for matrices. $[k] \triangleq \{1, \dots, k\}$, and for a sequence $(x_n)_{n \ge 1}$, define $x_k^m \triangleq (x_k, \dots, x_m)$ if $k \ge 1$ and $(x_1, \dots, x_m)$ otherwise. For $z \in \mathbb{R}$, the sigmoid $\sigma(z) \triangleq 1/(1 + e^{-z})$ and $\mathrm{ReLU}(z) \triangleq \max(0, z)$.
For events $A$ and $B$, $\mathbb{P}(A)$ denotes the probability of $A$, whereas $\mathbb{P}(A \mid B)$ denotes the conditional probability. Let $(x, y)$ be a pair of discrete random variables on $[k] \times [k]$ with the probability mass function (pmf) of $x$ being $\mathbf{p}_x = (p_1, \dots, p_k) \in [0, 1]^k$. Then its Shannon entropy is defined as $H(x) = H(\mathbf{p}_x) \triangleq -\sum_{i \in [k]} p_i \log p_i$, and the conditional entropy $H(y \mid x) \triangleq H(x, y) - H(x)$. The entropy rate of a stochastic process $(x_n)_{n \ge 1}$ is defined as $\lim_{n \to \infty} H(x_1^n)/n$. Finally, for $p \in (0, 1)$, the binary entropy function $h(\cdot)$ is defined as $h(p) \triangleq H(p, 1 - p) = -p \log p - (1 - p) \log(1 - p)$.

2 BACKGROUND

We describe the transformer architecture and the Markovian input process.

2.1 TRANSFORMERS

We study a single-layer transformer with single-head softmax attention and ReLU non-linearity. We omit the layer norm since its influence is marginal in the settings we consider (Sec. 4). We consider an input vocabulary $\mathcal{X}$ of finite size $|\mathcal{X}|$. For ease of exposition, in this paper we mainly focus on $|\mathcal{X}| = 2$, i.e., $\mathcal{X} = \{0, 1\}$, and outline our results for the multi-state setting in Sec. 4.3. Let $\{x_n\}_{n=1}^N \in \{0, 1\}^N$ be an input sequence of length $N$. Then for each $n \in [N]$, the transformer operations are mathematically given by (Fig. 2a):

$$\mathbf{x}_n = x_n \mathbf{e}_1 + (1 - x_n) \mathbf{e}_0 + \tilde{\mathbf{p}}_n \in \mathbb{R}^d, \qquad (\text{Embedding})$$
$$\mathbf{y}_n = \mathbf{x}_n + \mathbf{W}_O \sum_{i \in [n]} \mathrm{att}_{n,i} \cdot \mathbf{W}_V \mathbf{x}_i \in \mathbb{R}^d, \qquad (\text{Attention})$$
$$\mathbf{z}_n = \mathbf{y}_n + \mathbf{W}_2 \,\mathrm{ReLU}(\mathbf{W}_1 \mathbf{y}_n) \in \mathbb{R}^d, \qquad (\text{FF})$$
$$\mathrm{logit}_n = \langle \mathbf{a}, \mathbf{z}_n \rangle + b \in \mathbb{R}, \qquad (\text{Linear})$$
$$f_{\bar{\boldsymbol{\theta}}}(x_1^n) \triangleq \mathbb{P}_{\bar{\boldsymbol{\theta}}}(x_{n+1} = 1 \mid x_1^n) = \sigma(\mathrm{logit}_n) \in [0, 1]. \qquad (\text{Prediction})$$

Here $\bar{\boldsymbol{\theta}} \triangleq (\mathbf{e}_1, \mathbf{e}_0, \{\tilde{\mathbf{p}}_n\}, \dots, b, \mathbf{a})$ denotes the full list of transformer parameters. $d$ is the embedding dimension, $\mathbf{e}_1$ and $\mathbf{e}_0$ in $\mathbb{R}^d$ are the token embeddings corresponding to $x_n = 1$ and $x_n = 0$ respectively, and $\tilde{\mathbf{p}}_n$ is the (trainable) positional encoding. We have matrices $\mathbf{W}_O \in \mathbb{R}^{d \times m}$ and $\mathbf{W}_V \in \mathbb{R}^{m \times d}$, and the attention weights $\mathrm{att}_{n,i} \in (0, 1)$ are computed using the query and key matrices (§ A). $\mathbf{W}_2 \in \mathbb{R}^{d \times r}$ and $\mathbf{W}_1 \in \mathbb{R}^{r \times d}$ are the weight matrices in the FF layer, whereas $\mathbf{a} \in \mathbb{R}^d$ and $b \in \mathbb{R}$ are the weight and bias parameters of the linear layer. For a multi-layer transformer, we apply the successive attention and feed-forward layers multiple times before the final linear layer. Finally, we compute the probability of the symbol 1 from the logits: $f_{\bar{\boldsymbol{\theta}}}(x_1^n) \triangleq \mathbb{P}_{\bar{\boldsymbol{\theta}}}(x_{n+1} = 1 \mid x_1^n) = \sigma(\mathrm{logit}_n) \in [0, 1]$. Note that a single symbol probability suffices as the vocabulary is binary.
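A minimal PyTorch sketch of this architecture is given below. The softmax query-key attention with $1/\sqrt{m}$ scaling and the causal mask are assumptions standing in for the parametrization deferred to § A of the paper, and the tied readout $\mathbf{a} = \mathbf{e}_1 - \mathbf{e}_0$ anticipates the weight-tying convention of Sec. 3.

```python
import torch
import torch.nn as nn

class SingleLayerTransformer(nn.Module):
    """Sketch of the single-layer, single-head model of Sec. 2.1 (no layer norm)."""
    def __init__(self, d: int, m: int, r: int, N: int, weight_tying: bool = True):
        super().__init__()
        self.e0, self.e1 = nn.Parameter(torch.randn(d)), nn.Parameter(torch.randn(d))
        self.pos = nn.Parameter(torch.randn(N, d))               # trainable positional encodings
        self.WQ, self.WK = nn.Linear(d, m, bias=False), nn.Linear(d, m, bias=False)
        self.WV, self.WO = nn.Linear(d, m, bias=False), nn.Linear(m, d, bias=False)
        self.W1, self.W2 = nn.Linear(d, r, bias=False), nn.Linear(r, d, bias=False)
        self.b = nn.Parameter(torch.zeros(1))
        self.a = None if weight_tying else nn.Parameter(torch.randn(d))

    def forward(self, x):                                        # x: (B, N), entries in {0, 1}
        B, N = x.shape
        xb = x.float().unsqueeze(-1)
        X = xb * self.e1 + (1 - xb) * self.e0 + self.pos[:N]     # Embedding
        att = (self.WQ(X) @ self.WK(X).transpose(1, 2)) / self.WQ.out_features ** 0.5
        mask = torch.triu(torch.ones(N, N, dtype=torch.bool, device=x.device), 1)
        att = att.masked_fill(mask, float('-inf')).softmax(-1)   # causal softmax attention
        Y = X + self.WO(att @ self.WV(X))                        # Attention + skip
        Z = Y + self.W2(torch.relu(self.W1(Y)))                  # FF + skip
        a = (self.e1 - self.e0) if self.a is None else self.a    # weight tying: a = e
        return torch.sigmoid(Z @ a + self.b)                     # P(x_{n+1} = 1 | x_1^n)
```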
**Loss.** The parameters $\bar{\boldsymbol{\theta}}$ are trained using the next-token prediction loss between the predicted probability $f_{\bar{\boldsymbol{\theta}}}(x_1^n)$ and the corresponding ground-truth symbol $x_{n+1}$ across all positions $n \in [N]$:

$$L(\bar{\boldsymbol{\theta}}) \triangleq -\frac{1}{N} \sum_{n \in [N]} \mathbb{E}_{x_1^{n+1}}\left[x_{n+1} \cdot \log f_{\bar{\boldsymbol{\theta}}}(x_1^n) + (1 - x_{n+1}) \cdot \log\left(1 - f_{\bar{\boldsymbol{\theta}}}(x_1^n)\right)\right], \tag{1}$$

where the expectation is over the data distribution of the sequence $\{x_n\}_{n=1}^N$. In practice, it is replaced by empirical averages across sequences $\{x_n\}_{n=1}^N$ sampled from the corpus, with stochastic optimizers like SGD or Adam (Kingma and Ba, 2015) used to update the model parameters.

2.2 MARKOV CHAINS

We model the input as a *first-order Markov chain*, i.e., a Markov chain with (order) memory $m = 1$. For these processes, the next state is influenced only by the current state and none of the past:

$$\mathbf{P}_{ij} \triangleq \mathbb{P}(x_{n+1} = j \mid x_n = i) = \mathbb{P}\left(x_{n+1} = j \mid x_n = i,\ x_1^{n-1} = i_1^{n-1}\right),$$

for any $i_1, \dots, i_{n-1}, i, j \in \mathcal{X}$, $n \ge 1$.

Figure 2: Analysis of transformers via Markov chains. (a) The transformer model with binary input data: for each $x_1^n$, the next-bit prediction probability is $f_{\bar{\boldsymbol{\theta}}}(x_1^n) = \mathbb{P}_{\bar{\boldsymbol{\theta}}}(x_{n+1} = 1 \mid x_1^n)$. (b) State transition diagram and Markov kernel for a first-order Markov chain $\mathbf{P}(p, q) \triangleq \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$ with flipping probabilities $\mathbf{P}_{01} = p$ and $\mathbf{P}_{10} = q$.

Here the Markov kernel $\mathbf{P} = (\mathbf{P}_{ij})$ governs the transition dynamics of the process: if $\boldsymbol{\pi}^{(n)} \in [0, 1]^{|\mathcal{X}|}$ denotes the probability law of $x_n$ at time $n$, then $\boldsymbol{\pi}^{(n+1)} = \boldsymbol{\pi}^{(n)} \cdot \mathbf{P}$. Of particular interest to us in this paper is the kernel $\mathbf{P}(p, q) \triangleq [1 - p,\ p;\ q,\ 1 - q]$ on the binary state space with switching probabilities $\mathbf{P}_{01} = p$ and $\mathbf{P}_{10} = q$, for $p, q \in (0, 1)$. Fig. 2b illustrates the state transition diagram of this kernel. Here we refer to the sum $p + q$ as the *switching factor*. We denote a first-order binary Markov chain $(x_n)_{n \ge 1}$ with transition kernel $\mathbf{P}(p, q)$ and initial law $\boldsymbol{\pi}^{(1)}$ as $(x_n)_{n \ge 1} \sim (\boldsymbol{\pi}^{(1)}, \mathbf{P}(p, q))$. When the initial distribution is understood from context, we simply write $(x_{n+1} \mid x_n)_{n \ge 1} \sim \mathbf{P}(p, q)$. For this process, the entropy rate equals $H(x_{n+1} \mid x_n) = \frac{1}{p+q}\left(q\, h(p) + p\, h(q)\right)$, which is independent of $n$.

**Stationary distribution.** A *stationary distribution* of a Markov chain is a distribution $\boldsymbol{\pi}$ on $\mathcal{X}$ that is invariant to the transition dynamics, i.e., if $\boldsymbol{\pi}^{(n)} = \boldsymbol{\pi}$, then we have $\boldsymbol{\pi}^{(n+1)} = \boldsymbol{\pi}\mathbf{P} = \boldsymbol{\pi}$ and consequently $\boldsymbol{\pi}^{(m)} = \boldsymbol{\pi}$ for all $m \ge n$.
Also referred to as the steady-state distribution, its existence and uniqueness can be guaranteed under fairly general conditions (Norris, 1997), and in particular for $\mathbf{P}(p, q)$ when $p, q \ne 0, 1$. For $\mathbf{P}(p, q)$, the stationary distribution is given by $\boldsymbol{\pi}(p, q) \triangleq (\pi_0, \pi_1) = \frac{1}{p+q}(q, p)$. The higher the flipping probability $q$, the higher the likelihood for the chain to be in state 0 at steady state, and similarly for state 1. One can verify that $\boldsymbol{\pi}$ indeed satisfies $\boldsymbol{\pi}\mathbf{P} = \boldsymbol{\pi}$. For brevity, we drop the dependence on $(p, q)$ and simply write $\boldsymbol{\pi}$ and $\mathbf{P}$ when the parameters are clear from context.

3 FRAMEWORK: TRANSFORMERS VIA MARKOV CHAINS

We present our mathematical framework for a principled analysis of transformers via Markov chains. In this paper we focus on first-order binary Markovian data and single-layer transformers, though our framework readily generalizes to higher orders and deeper architectures (Sec. 4.4), and to multi-state Markov chains (Sec. 4.3).

**Data.** We assume that the vocabulary is $\mathcal{X} = \{0, 1\}$ and the input data $\{x_n\}_{n=1}^N \sim (\boldsymbol{\pi}(p, q), \mathbf{P}(p, q))$, for some fixed sequence length $N \ge 1$ and $(p, q) \in (0, 1)^2$. Recall that $p + q$ is the switching factor. The parameters $p$ and $q$ provide a tractable mechanism to control the input data, which plays a crucial role in transformer learning.

**Model.** We consider a single-layer transformer with single-head attention, without layer norm. As the input is binary, the Embedding layer can be simplified to

$$\mathbf{x}_n = x_n \mathbf{e}_1 + (1 - x_n) \mathbf{e}_0 + \tilde{\mathbf{p}}_n = x_n \mathbf{e} + \mathbf{p}_n, \qquad (\text{Uni-embedding})$$

where $\mathbf{e} \triangleq \mathbf{e}_1 - \mathbf{e}_0$ is the embedding vector and $\mathbf{p}_n \triangleq \mathbf{e}_0 + \tilde{\mathbf{p}}_n$ is the new positional encoding. Note that $x_n \in \{0, 1\}$, and hence the embedding is either $\mathbf{e} + \mathbf{p}_n$ or just $\mathbf{p}_n$ depending on $x_n$. The other layers are the same as in Sec. 2.1:

$$x_n \in \{0, 1\} \xrightarrow{\text{Uni-embedding}} \mathbf{x}_n \xrightarrow{\text{Attention}} \mathbf{y}_n \xrightarrow{\text{FF}} \mathbf{z}_n \xrightarrow{\text{Linear}} \mathrm{logit}_n \xrightarrow{\text{Prediction}} f_{\bar{\boldsymbol{\theta}}}(x_1^n). \tag{2}$$

Let $\bar{\boldsymbol{\theta}} \triangleq (\mathbf{e}, \{\mathbf{p}_n\}_{n=1}^N, \dots, b, \mathbf{a}) \in \mathbb{R}^D$ denote the joint list of parameters from all layers, with $D$ being the total dimensionality. In training large language models, it is common practice to tie the embedding and linear layer weights, i.e., $\mathbf{a} = \mathbf{e}$, referred to as *weight tying* (Press and Wolf, 2017). In this case, the list of all parameters is $\boldsymbol{\theta} = (\mathbf{e} = \mathbf{a}, \{\mathbf{p}_n\}_{n=1}^N, \dots, b) \in \mathbb{R}^{D-d}$, since $\mathbf{a}$ is no longer a free parameter. We analyze both the weight-tied and the general case.

**Loss.** We consider the cross-entropy loss $L$ from Eq. (1).

**Objective.** Towards understanding the phenomenon in Fig. 1, we utilize the aforementioned framework to study single-layer transformers. In particular, our objective is to address the following question: *Can we characterize the loss landscape of single-layer transformers when the input is Markovian?* To build intuition about the loss surface, we first examine its global minima and then provide a detailed characterization of the loss landscape, focusing on local optima, in Sec. 4.
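As a sanity check on the data model, the following sketch samples a chain from $(\boldsymbol{\pi}(p, q), \mathbf{P}(p, q))$ and evaluates the entropy rate $H(x_{n+1} \mid x_n) = \frac{1}{p+q}(q\,h(p) + p\,h(q))$, the value identified below (Thm. 1) as the globally minimal loss; function names are illustrative.

```python
import numpy as np

def sample_markov(p, q, N, rng):
    """Draw x_1..x_N from (pi(p, q), P(p, q)): the binary first-order chain of Fig. 2b."""
    pi1 = p / (p + q)                        # stationary P(x = 1)
    x = np.empty(N, dtype=np.int64)
    x[0] = rng.random() < pi1                # x_1 ~ pi(p, q)
    for n in range(1, N):
        flip = p if x[n - 1] == 0 else q     # P_01 = p, P_10 = q
        x[n] = x[n - 1] ^ (rng.random() < flip)
    return x

def entropy_rate(p, q):
    """H(x_{n+1} | x_n) = (q h(p) + p h(q)) / (p + q)."""
    h = lambda t: -t * np.log(t) - (1 - t) * np.log(1 - t)
    return (q * h(p) + p * h(q)) / (p + q)

rng = np.random.default_rng(0)
x = sample_markov(p=0.5, q=0.8, N=1024, rng=rng)
print(x.mean(), 0.5 / 1.3)                   # empirical frequency vs. stationary pi_1
print(entropy_rate(0.5, 0.8))                # global-minimum loss for this (p, q)
```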
3.1 SINGLE-LAYER TRANSFORMERS: GLOBAL MINIMA

Since the loss $L$ in Eq. (1) is the cross-entropy loss, it achieves its minimum when the predictive probability matches the Markov kernel (Lemma 1): $f_{\bar{\boldsymbol{\theta}}}(x_1^n) = \mathbb{P}(x_{n+1} = 1 \mid x_n)$. In other words, this occurs when the transformer outputs the correct transition probabilities. This raises a natural question: *can a single-layer transformer exactly represent a first-order Markov chain?* Intuitively speaking, this seems plausible since the transformer, even with access to the full past information $x_1^n$ at each $n \in [N]$, can rely solely on the current symbol $x_n$ (Sec. 2). The following result confirms this intuition, showing that such a realization is indeed a *global minimum* of the loss $L(\cdot)$:

**Theorem 1** (Global minimum)**.** *Let the input sequence be $\{x_n\}_{n=1}^N \sim (\boldsymbol{\pi}(p, q), \mathbf{P}(p, q))$ for some fixed $(p, q) \in (0, 1)^2$, and let $\boldsymbol{\theta} \in \mathbb{R}^{D-d}$ be the transformer parameters in the weight-tied case. Then for all $(p, q)$, there exists a $\boldsymbol{\theta}_\star \in \mathbb{R}^{D-d}$ with an explicit construction such that it is a global minimum of the population loss $L(\cdot)$ in Eq. (1) and its prediction matches the Markov kernel, i.e.,*

(i) *$L(\boldsymbol{\theta}) \ge L(\boldsymbol{\theta}_\star)$ for all $\boldsymbol{\theta} \in \mathbb{R}^{D-d}$, and*

(ii) *$\mathbb{P}_{\boldsymbol{\theta}_\star}(x_{n+1} = 1 \mid x_1^n) = \mathbb{P}(x_{n+1} = 1 \mid x_n)$, the Markov kernel or the bigram.*

*Further, $\boldsymbol{\theta}_\star$ satisfies:*

(iii) *$L(\boldsymbol{\theta}_\star) = H(x_{n+1} \mid x_n)$, the entropy rate of the Markov chain.*

(iv) *$\nabla L(\boldsymbol{\theta}_\star) = 0$, i.e., $\boldsymbol{\theta}_\star$ is a stationary point.*

*In addition, the same result holds in the non-weight-tied case when the parameters are in $\mathbb{R}^D$.*

**Remark 1.** In fact, there exist many such global minima, as highlighted in the proof (§ B).

*Proof sketch.* The key idea here is to show that any $\boldsymbol{\theta}$ satisfying $f_{\boldsymbol{\theta}}(x_1^n) = \mathbb{P}(x_{n+1} = 1 \mid x_n)$ is a global minimum and a stationary point with loss equal to the entropy rate (Lemmas 1 and 2). To construct such a $\boldsymbol{\theta}$, we utilize the fact that the Markov kernel is only a function of $x_n$, and thus we can ignore the past information in the Attention layer, using only the skip connection. We defer the full proof to § B.

*Empirical evidence for learning the Markov kernel.* As demonstrated in the proof above, a canonical way for the single-layer transformer to realize the Markov kernel is to rely only on the current symbol $x_n$ and ignore the past in the Attention layer. We now confirm this fact empirically. For our experiments, we use the single-layer transformer (Table 1) and report results averaged across 5 runs, corresponding to the best set of hyper-parameters after a grid search (Table 2). In particular, for $p = 0.2$, $q = 0.3$, and $d = 4$, we generate sequences $\{x_n\}_{n=1}^N \sim (\boldsymbol{\pi}(p, q), \mathbf{P}(p, q))$ of length $N = 1024$ and train the transformer parameters $\boldsymbol{\theta}$ (weight-tied) to minimize the cross-entropy loss in Eq. (1). At inference, we interpret the attention layer and observe that the relative magnitude of the attention contribution to the final attention output $\mathbf{y}_n$ is negligible, i.e.,
the ratio $\left\|\mathbf{W}_O \sum_{i \in [n]} \mathrm{att}_{n,i} \cdot \mathbf{W}_V \mathbf{x}_i\right\| / \|\mathbf{y}_n\| \approx 0.01$. Hence, the attention contribution can be neglected compared to the skip connection, i.e., $\mathbf{y}_n \approx \mathbf{x}_n$. Using this approximation, in § D we derive a formula for the final predicted probability $f_{\boldsymbol{\theta}}(x_1^n)$ as learned by the network. This formula reveals interesting insights about the learned transformer parameters:

- **Constant embeddings.** The positional embedding $\mathbf{p}_n$ is constant across $n$, i.e., independent of the sequence position, reflecting the fact that the model has learned to capture the homogeneity of the Markov chain just from the data.
- **Low-rank weights.** The weight matrices are all approximately rank-one; while it is not fully clear why the training algorithm converges to low-rank solutions, they do indeed provide a canonical and simple way to realize the Markov kernel, as illustrated in § D.

Further, we show in § D that, plugging in the numerical values obtained from the average of five runs, the probability given by our formula matches the theory, i.e., the model learns to correctly output the Markov kernel probabilities. Indeed, Fig. 3b shows that the test loss of the model converges to the theoretical global minimum (Thm. 1), the entropy rate of the source, corresponding to the bigram, when $p = 0.2$ and $q = 0.3$ ($p + q < 1$). We observe a similar phenomenon without weight tying. For the prediction probability, we focus on the zero positions $n = n_k$ such that $x_{n_k} = 0$. Fig. 3d shows that irrespective of the index $k$ and the past $x_1^{n_k - 1}$, if the current bit $x_{n_k}$ is 0, the model correctly predicts the probability of the next bit $x_{n_k + 1}$ being 1, which theoretically equals $p$ (Fig. 2b). More precisely, $f_{\boldsymbol{\theta}}(x_1^{n_k - 1}, x_{n_k} = 0) = p$ for all $x_1^{n_k - 1}$ and $k$, in line with property (ii) of Thm. 1. A similar conclusion holds with $x_{n_k} = 1$ and prediction probability $q$. This indicates that the model has learned to recognize the data as first-order Markovian, relying solely on $x_n$ to predict $x_{n+1}$.

While Thm. 1 and the above empirical results highlight the presence of global minima on the loss surface, they do not address local optima, which appear empirically in Fig. 1. We address this precisely in the next section, analyzing the loss landscape in terms of its local optima.

4 SINGLE-LAYER TRANSFORMERS: LOCAL OPTIMA

In this section we present our main results on the loss landscape of single-layer transformers in terms of local optima. In particular, we prove the existence of bad local minima and saddle points on the loss surface (Thms. 2 and 3), in addition to the global minima discussed above (Thm. 1). Interestingly, the presence of these local optima is influenced by two key factors: the *switching factor* of the Markov chain and the *weight tying* of the transformer, highlighting the intricate interplay between the input data and the model architecture. First, we present the results for the weight-tying scenario.

4.1 WEIGHT TYING: BAD LOCAL MINIMA

When the embedding and linear layers are tied, i.e.,
$\mathbf{e} = \mathbf{a}$, our analysis reveals the following surprising fact: if the switching factor $p + q$ is greater than one, there exist *bad local minima* $\boldsymbol{\theta}_{\boldsymbol{\pi}} \in \mathbb{R}^{D-d}$, where the prediction probability $f_{\boldsymbol{\theta}_{\boldsymbol{\pi}}}(\cdot)$ is the marginal stationary distribution $\boldsymbol{\pi}$ (unigram), disregarding the past and present information (Thm. 2 and Fig. 3c). We now state the result formally. Let $L_\star \triangleq L(\boldsymbol{\theta}_\star)$ denote the global minimal loss from Thm. 1.

**Theorem 2** (Bad local minimum)**.** *Let the input sequence be $\{x_n\}_{n=1}^N \sim (\boldsymbol{\pi}(p, q), \mathbf{P}(p, q))$ for a fixed $(p, q) \in (0, 1)^2$, and let the transformer parameters be weight-tied. If $p + q > 1$, there exists an explicit $\boldsymbol{\theta}_{\boldsymbol{\pi}} \in \mathbb{R}^{D-d}$ such that it is a bad local minimum of the loss $L(\cdot)$, i.e.,*

(i) *there exists a neighborhood $\mathcal{B}(\boldsymbol{\theta}_{\boldsymbol{\pi}}, r)$ with $r > 0$ such that $L(\boldsymbol{\theta}) \ge L(\boldsymbol{\theta}_{\boldsymbol{\pi}})$ for all $\boldsymbol{\theta} \in \mathcal{B}(\boldsymbol{\theta}_{\boldsymbol{\pi}}, r)$, with $L(\boldsymbol{\theta}_{\boldsymbol{\pi}}) > L_\star$.*

*Further, $\boldsymbol{\theta}_{\boldsymbol{\pi}}$ satisfies:*

Figure 3: Effect of weight tying on the test loss and the predicted probabilities $f_{\boldsymbol{\theta}}(x_1^{n_k})$ for zero indices $\{n_k\}_{k=1}^{100}$ such that $x_{n_k} = 0$. Panels: (a) test loss ($p + q > 1$); (b) test loss ($p + q < 1$); (c) predicted probability ($p + q > 1$); (d) predicted probability ($p + q < 1$). For (a), (c): $p = 0.5$, $q = 0.8$. With weight tying, the loss converges to a local minimum, and the predicted probability is $\pi_1 = p/(p+q)$.
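To see how far apart the two optima in Theorems 1 and 2 are, the sketch below compares the bigram loss (the entropy rate, Thm. 1) with the loss of the unigram solution. Expressing the unigram loss as $h(p/(p+q))$, the cross-entropy of always predicting the stationary probability $\pi_1$, is our own derivation from the definitions in the text, not a formula stated in the excerpt.

```python
import numpy as np

def binary_entropy(t):
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

def bigram_loss(p, q):
    """Global minimum: the entropy rate H(x_{n+1} | x_n) (Thm. 1)."""
    return (q * binary_entropy(p) + p * binary_entropy(q)) / (p + q)

def unigram_loss(p, q):
    """Loss of always predicting pi_1 = p / (p + q): the cross-entropy against
    the marginal, i.e., H(pi) -- an assumed closed form, derived from the text."""
    return binary_entropy(p / (p + q))

for (p, q) in [(0.5, 0.8), (0.2, 0.3)]:      # the two regimes of Fig. 3
    print(p, q, unigram_loss(p, q), bigram_loss(p, q))
```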
# QUANTIFYING GENERALIZATION COMPLEXITY FOR LARGE LANGUAGE MODELS

**Zhenting Qi**¹∗, **Hongyin Luo**², **Xuliang Huang**³, **Zhuokai Zhao**⁴,⁵, **Yibo Jiang**⁵, **Xiangjun Fan**⁴, **Himabindu Lakkaraju**¹, **James Glass**²
¹ Harvard University, ² Massachusetts Institute of Technology, ³ University of Illinois at Urbana-Champaign, ⁴ Meta, ⁵ University of Chicago
zhentingqi@g.harvard.edu, hyluo@mit.edu

ABSTRACT

While large language models (LLMs) have shown exceptional capabilities in understanding complex queries and performing sophisticated tasks, their generalization abilities are often deeply entangled with memorization, necessitating more precise evaluation. To address this challenge, we introduce **SCYLLA**, a dynamic evaluation framework that quantitatively measures the generalization abilities of LLMs. SCYLLA disentangles generalization from memorization by assessing model performance on both in-distribution (ID) and out-of-distribution (OOD) data through 20 tasks across 5 levels of complexity. Through extensive experiments, we uncover a non-monotonic relationship between task complexity and the performance gap between ID and OOD data, which we term the *generalization valley*. Specifically, this phenomenon reveals a critical threshold, referred to as the *critical complexity*, where reliance on non-generalizable behavior peaks, indicating the upper bound of LLMs' generalization capabilities. As model size increases, the critical complexity shifts toward higher levels of task complexity, suggesting that larger models can handle more complex reasoning tasks before over-relying on memorization. Leveraging SCYLLA and the concept of critical complexity, we benchmark 28 LLMs, including open-sourced models such as the LLaMA and Qwen families and closed-sourced models like Claude and GPT, providing a more robust evaluation and establishing a clearer understanding of LLMs' generalization capabilities.

1 INTRODUCTION

Large language models (LLMs) have revolutionized natural language processing by exhibiting exceptional abilities in understanding complex queries, generating human-like text, and performing a variety of downstream tasks (OpenAI, 2023; Google, 2024; Bubeck et al., 2023; Hoffmann et al., 2022). Beyond their impressive text-generation capabilities, these models also demonstrate emerging skills in *reasoning* (Wei et al., 2022b; Kojima et al., 2022). Through increased inference-time computation (Chen et al., 2024b; Snell et al., 2024; Bansal et al., 2024; Qi et al., 2024; Wang et al., 2024), LLMs have achieved or even surpassed human-level performance on benchmarks that require nontrivial reasoning abilities (Cobbe et al., 2021; Hendrycks et al., 2021; 2020; Chen et al., 2021; Han et al., 2022). Despite these impressive advancements, research has also demonstrated that LLMs face significant challenges when solving problems that involve terms, patterns, or concepts that are less common in their training data (Razeghi et al., 2022; Kandpal et al., 2023; Chen et al., 2024a; Antoniades et al., 2024). Additionally, concerns have been raised regarding *data contamination* (Magar & Schwartz, 2022b; Carlini et al., 2022; Dong et al., 2024), as many benchmark datasets are sourced from the web and may overlap with the training data, either directly or indirectly, which undermines the reliability of results on such benchmarks.
Consequently, there is ongoing debate about whether LLMs truly possess human-like reasoning abilities or simply rely on memorized patterns when solving problems (Kambhampati, 2024; Schwarzschild et al., 2024). _∗_ [Work completed during Zhenting’s visit to MIT CSAIL. Source code will be available at https://](https://github.com/zhentingqi/scylla) [github.com/zhentingqi/scylla.](https://github.com/zhentingqi/scylla) 1 Several efforts have been made to explore the interplay between generalization and memorization in LLMs’ reasoning behaviors (Wu et al., 2023; Lotfi et al., 2023; Zhu et al., 2023; Antoniades et al., 2024; Dong et al., 2024). Lotfi et al. (2023) present the first non-vacuous generalization bounds for LLMs, demonstrating their ability to discover patterns that generalize to unseen data. Wu et al. (2023) suggest that generalization and memorization often exist on a continuum, as LLMs exhibit above-random performance on counterfactual tasks, though with some degradation compared to default tasks. Their study proposes that the seemingly “reasoning” behaviors of LLMs may stem from a combination of: (1) _generalization behaviors_, such as abstract logic and learned skills, and (2) _memorization behaviors_, including memorized input-output mappings and pattern matching. Despite these recent insights, the relationship between task difficulty, model size, and the balance between generalization and memorization remains poorly understood. Several factors undermine the robustness of current findings. First, reliable methods for quantifying task difficulty are still underdeveloped, and the distinction between problem length and intrinsic task complexity is often overlooked. Additionally, evaluations are usually complicated by data contamination and the entanglement with knowledge, introducing confounding factors to reasoning assessments. In this work, we quantify the generalization ability of LLMs by aligning models with the intrinsic complexity of reasoning tasks. We address two specific research questions: 1) _How does_ _task complexity affect the balance between generalizable (generalization) and non-generalizable_ _(memorization) behaviors?_ 2) _How does model size influence this balance?_ We first develop a novel evaluation framework, **S** **CYLLA**, that is **sc** alable in task complexity, d **y** namic, know **l** edge- **l** ight, and memorization- **a** ware. We explain the necessity of each of these criteria for understanding the working mechanism of generalization, and show that no existing evaluation methods fully meet them. S CYLLA enables the generation of in-distribution (ID) and out-of-distribution (OOD) data of a given task, and the performance gap between them is considered an indicator of reliance on non-generalizable behaviors to solve the task. This allows us to assess how well models generalize learned task skills beyond their training distribution. We evaluate the performance of LLMs on both ID and OOD data across varying levels of quantified task complexity. The results of our experiments lead to two key findings: 1) **Non-monotonic performance gap across** **task complexity:** the performance gap between ID and OOD data initially widens as task complexity increases, reaches a peak, and then narrows as tasks become more complex—a phenomenon we refer to as _generalization valley_ . As shown in Fig. 
1, this non-monotonic relationship suggests that LLMs are most vulnerable to distribution shifts at certain intermediate task complexity levels, where overfitting to training data leads to a greater dependence on memorization; and 2) **Peak of generalization valley shifts rightward with increasing model size:** as the model size increases, the peak of the performance gap, referred to as the _critical complexity_, shifts to the right. As shown in Fig. 1, this rightward shift indicates that larger models are better equipped to handle more complex tasks without over-relying on memorization, maintaining generalization capabilities across a broader range of task difficulties.

Figure 1: An illustration of _generalization valley_, where the reliance on non-generalizable behaviors first increases and then decreases; and _critical complexity shift_, where the peak of the valley shifts rightward as model size increases.

The contributions of this paper are fourfold. First, we present a novel, task-centric evaluation framework that is scalable in task complexity, dynamic, knowledge-light, and memorization-aware, specifically designed to overcome limitations found in existing evaluation methods. Second, through a detailed analysis of performance across varying task complexities and model sizes, we uncover insights into generalization behavior, revealing patterns that distinguish when models increasingly rely on memorization versus generalization. Third, we highlight the impact of model size on generalization, demonstrating that larger models experience a delayed over-reliance on memorization, with peak performance discrepancies occurring at higher task difficulties than in smaller models. Finally, leveraging our proposed framework and insights, we define a new metric that rewards models with strong generalization to OOD data while penalizing those that overfit to ID data, and conduct a comprehensive benchmarking of 28 popular LLMs, focusing on their genuine reasoning capabilities.
Together, these works highlight the ongoing tension between memorization and generalization, and the need for more robust evaluations that differentiate between the two. 2.2 E VALUATION OF LLM S ’ R EASONING A BILITIES Reasoning is recognized as a key component of both human cognition and AI development, driving research to evaluate the reasoning abilities of LLMs (Zhu et al., 2023). Recent research has emphasized tasks requiring logic and deduction—such as those involving math, text, and code—as benchmarks for reasoning across domains, generally divided into static and dynamic categories. Specifically, static benchmarks, including MATH (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021), BoardgameQA (Kazemi et al., 2023), and FOLIO (Han et al., 2022), use mathematical and logical problems to assess reasoning performance. However, these benchmarks, which remain fixed after publication, are vulnerable to data contamination (Magar & Schwartz, 2022a; Golchin & Surdeanu, 2023) and reasoning gap issues (Srivastava et al., 2024). To address these limitations, recent benchmarks have adopted a dynamic approach, either by generating new problem instances at test time or by regularly refreshing test data. CLRS-Text (Veliˇckovi´c et al., 2022), for instance, draws on algorithms from Introduction to Algorithms (Cormen et al., 2009) and synthesizes algorithmic reasoning problems in text form. Similarly, NPHardEval (Fan et al., 2023) is built upon algorithm tasks. It organizes them by complexity class, defines difficulty levels by problem lengths, and refreshes data on a monthly basis to mitigate the risk of overfitting. LiveBench (White et al., 2024) also frequently updates questions from the most recent information sources, but the task set tends to be too knowledge-intensive to be a good testbed for reasoning. DyVal (Zhu et al., 2023) employs a graph-informed algorithm to generate math, logic, and algorithm test cases, but faces challenges in manually specifying problems as graphs and defining valid constraints. Despite these research efforts, they barely disentangle reasoning and generalization from memorization or quantitatively align the intrinsic task complexity and generalization ability. 3 M ETHOD 3.1 M OTIVATION To conduct reliable evaluations of the generalization capabilities of LLMs, we begin by discussing several key features that are essential for an effective benchmark. While prior research has touched upon the importance of scalable and dynamic evaluation (Zhu et al., 2023; Fan et al., 2023), we refine these criteria with clearer definitions and emphasize two additional critical dimensions: knowledgelight and memorization-aware. 3 **Scalable inherent complexity.** The difficulty of an ideal evaluation task should be both quantifiable and scalable (Zhu et al., 2023; Fan et al., 2023). There are two dimensions that influence the difficulty of solving a task: (1) the intrinsic complexity of the task and (2) the length of an input problem instance. The former refers to tasks that inherently require more sophisticated reasoning and a greater number of intermediate steps, with the number of steps increasing as a function of the length of the input instances. Consequently, when benchmarks increase task difficulty by both extending input lengths and introducing tasks of varying complexity, as seen in (Fan et al., 2023; Zhu et al., 2023), it becomes difficult to distinguish whether performance drops stem from the challenges with longer inputs or the tasks’ intrinsic complexity. 
Plus, LLMs are known to struggle with length generalization problem, exhibiting a sharp decline in performance (Anil et al., 2022) or becoming unstable (Zhou et al., 2024) as input length increases. For these reasons, problem length is not an ideal hyperparameter for adjusting task difficulty and should be controlled. In other words, task difficulty should scale independently of input length. **Dynamic question generation.** Optimal benchmark task instances should be generated dynamically to minimize the risk of data contamination. Many widely adopted reasoning benchmarks, such as those for mathematical reasoning (Cobbe et al., 2021; Hendrycks et al., 2021), are based on static datasets. However, evaluations based on static data often encounter challenges such as the reasoning gap problem (Srivastava et al., 2024) and data contamination (Magar & Schwartz, 2022a; Golchin & Surdeanu, 2023) issues, reducing the robustness and reliability of these assessments. **Knowledge-light prerequisite.** Ideal evaluation tasks for reasoning should require minimal background knowledge, containing only simple task descriptions and queries. By minimizing the reliance on external information, we ensure that any performance differences is most likely attributable to the models’ reasoning abilities rather than disparities in their knowledge bases, eliminating the ambiguity of whether a model’s failure is due to a lack of necessary knowledge (Kandpal et al., 2023; Srivastava et al., 2022; Suzgun et al., 2022) or an inherent limitation in reasoning ability. **Memorization-aware evaluation.** The benchmark should explicitly differentiate between task instances that are more likely to have been memorized and those that are less likely. This differentiation helps us accurately attribute the model’s performance to either memorization or generalization. 3.2 S CYLLA B ENCHMARK Considering that none of the current benchmarks or evaluation frameworks meet all of the requirements defined above, we propose a new benchmark, S CYLLA, which distinguishes itself from existing benchmarks through the following key features: - **S** calable inherent **C** omplexity: We utilize algorithmic complexity to quantify task complexity, defining tasks as more complex when they require algorithms of higher complexity for their solution. To ensure consistent complexity across tasks, we impose explicit constraints on problem lengths, ensuring the variation remains within a stable range while keeping the upper bound manageable, so that task complexity is minimally influenced by problem length. Our choice of tasks and their corresponding complexity bounds are detailed in §3.2.1. - D **Y** namic problem generation: All data are generated during the evaluation, ensuring that each evaluation instance is unique and unaffected by pre-exposed data. Details of the data synthesis methodology can be found in §3.2.2. - Know **L** edge- **L** ight prerequisite: Tasks are designed to require no background knowledge, featuring simple and clear descriptions and straightforward instructions. All the tasks are designed to be solvable with basic skills such as additions and comparisons of non-negative integers. - Memorization- **A** ware evaluation: Generalization and reasoning capabilities are more clearly disentangled from memorization through explicit differentiation between in-distribution (ID) and out-of-distribution (OOD) data. 
Performance on ID data reflects a combination of both generalization and memorization, as the model is familiar with both the length and patterns of the task instances. Conversely, performance on OOD data primarily indicates generalization, as the model is only familiar with the length of the task instances but not the specific patterns of the task elements. Therefore, we propose to utilize the performance gap between ID and OOD data as an estimation of the model's reliance on memorization, allowing us to assess how well models generalize learned task skills beyond their familiar instances. Procedures for generating ID and OOD data are detailed in §3.2.2.

Additionally, SCYLLA requires only black-box access to an LLM, enabling users to evaluate and compare different models across platforms without delving into their internal workings. Its task-centric design also ensures high adaptability and extensibility, allowing users to seamlessly customize and expand it by incorporating new tasks and complexity levels to make the evaluation results more precise and reliable. Further comparisons with existing benchmarks or evaluation frameworks can be found in Appendix A.2.

3.2.1 TASKS & COMPLEXITY LEVELS

To categorize and define the tasks in our benchmark, we adopt the notion of time complexity, which measures the order of growth of an algorithm's running time and provides a standardized framework for comparing the efficiency of different algorithms (Cormen et al., 2009). In this context, we _re-purpose LLMs as algorithm executors_, where their ability to handle tasks is expected to depend on the underlying computational complexity of those tasks. Our experiments, detailed later, validate this hypothesis by demonstrating that time complexity provides a meaningful and rigorous method for task classification.

As shown in Fig. 2, we first introduce **anchor tasks** that define the intrinsic complexity levels in our benchmark. Specifically, we utilize six levels of time complexity, from $O(N)$ to $O(2^N)$, to define our complexity intervals. For a specific task, we define its difficulty using a complexity range, denoted as $O([C_1, C_2])$, where $C_1$ and $C_2$ represent the lower and upper bounds of the time complexity for algorithms that solve the task.

[Figure 2, two pie charts: the left chart arranges the anchor tasks (FindMin, FindMax, FindMode, 2Sum, Sort, RmDup, 3Sum, 3SumIR, 3SumMT, 4Sum, 4SumIR, 4SumMT, SbsSum, SbsSumIR, TSP) by complexity interval from $O(N)$ to $O(2^N)$; the right chart shows the probe tasks with their solution-complexity ranges, e.g., $O(N\log N, 2^N)$ and $O(N^2, N^3)$.]
Figure 2: **Left**: Anchor tasks. These tasks form the core of our benchmark, providing a structured set of challenges across varying time complexities. **Right**: Probe tasks. These tasks are used to evaluate the level of complexity that LLMs adopt to solve them.

The anchor tasks are selected based on two key criteria. First, the time complexity of the selected tasks should fall within one or two adjacent complexity levels to maintain consistency in difficulty. This ensures that the tasks within each group are comparable in terms of computational demands, preventing significant changes in difficulty. It also avoids scenarios where models face both very simple and highly complex tasks, which could lead to inconsistent performance measurements and obscure the underlying reasoning capabilities we aim to evaluate.
Second, the tasks must be simple and should avoid reliance on advanced mathematical knowledge, common sense, or natural language understanding – factors outside the scope of the reasoning abilities we aim to evaluate. For instance, matrix multiplication involves specific mathematical concepts, which are not aligned with our focus on reasoning capabilities. In contrast, tasks like "find max", which only require basic number ordering, provide a more appropriate measure of reasoning ability. For anchor tasks, we propose three to four tasks for each complexity interval. The time complexities of these tasks are illustrated in the left of Fig. 2, with complexity increasing in a clockwise direction on the pie chart. Some task names are abbreviated in the figure, but detailed in Table 1.

Table 1: Anchor tasks for each complexity interval.

| Complexity interval | Anchor tasks |
|---|---|
| $O(N)$ | Find min, find max, find mode |
| $O([N, N^2))$ | Find top-k, two sum, sort numbers, remove duplicates |
| $O([N^2, N^3))$ | 3-sum multiple ten, 3-sum in range, 3-sum |
| $O([N^3, N^4))$ | 4-sum multiple ten, 4-sum in range, 4-sum |
| $O(2^N)$ | Subset sum multiple ten, subset sum in range, subset sum, travelling salesman problem |

To explore how LLMs handle tasks with multiple solutions of varying time complexities, we also introduce a set of tasks termed **probe tasks**. These tasks, including longest common subarray (LCS), longest increasing sequence (LIS), and longest consecutive elements (LCE), allow us to observe whether LLMs favor more efficient solutions as task complexity increases. The corresponding time complexities of these tasks are shown in the graph on the right of Fig. 2. The behavior of LLMs in choosing between multiple solutions for these tasks is further discussed in §4.5. For both the anchor tasks and probe tasks, a detailed explanation of their time complexity, along with other relevant information, can be found in Appendix D.
Idea Generation Category:
0Conceptual Integration
jpSLXoRKnH
# METAURBAN: AN EMBODIED AI SIMULATION PLATFORM FOR URBAN MICROMOBILITY

**Wayne Wu**∗**, Honglin He**∗**, Jack He, Yiran Wang, Chenda Duan, Zhizheng Liu, Quanyi Li, Bolei Zhou** University of California, Los Angeles [https://metadriverse.github.io/metaurban](https://metadriverse.github.io/metaurban)

[Figure 1, panel labels: Infinite Interactive Urban Scenes; Multiple Sensors; Flexible User Interface; 10,000 Diverse Obstacles; 1,100 Rigged Human Models; Vulnerable Road Users; Mobile Machines; Terrain Generation System.]
Figure 1: **MetaUrban** enables the construction of _infinite interactive urban scenes_, supports _multiple sensors_, and offers _flexible user interfaces_ such as a mouse, keyboard, joystick, and racing wheel. The platform includes _10,000 diverse obstacles_ in urban scenes, _1,100 rigged human models_ each with 2,314 movements, _vulnerable road users_, _mobile machines_ with varied mechanical structures, and a _terrain generation system_ to create complex ground conditions. We highly recommend visiting our project page for video demonstrations.

ABSTRACT

Public urban spaces such as streetscapes and plazas serve residents and accommodate social life in all its vibrant variations. Recent advances in robotics and embodied AI make public urban spaces no longer exclusive to humans. Food delivery bots and electric wheelchairs have started sharing sidewalks with pedestrians, while robot dogs and humanoids have recently emerged in the street. **Micromobility** enabled by AI for short-distance travel in public urban spaces is a crucial component in future transportation systems. It is essential to ensure the generalizability and safety of AI models used for maneuvering mobile machines. In this work, we present **MetaUrban**, a _compositional_ simulation platform for AI-driven urban micromobility research. MetaUrban can construct an _infinite_ number of interactive urban scenes from compositional elements, covering a vast array of ground plans, object placements, pedestrians, vulnerable road users, and other mobile agents' appearances and dynamics. We design point navigation and social navigation tasks as the pilot study using MetaUrban for urban micromobility research and establish various baselines of Reinforcement Learning and Imitation Learning. We conduct extensive evaluation across mobile machines, demonstrating that heterogeneous mechanical structures significantly influence the learning and execution of AI policies. We perform a thorough ablation study, showing that the compositional nature of the simulated environments can substantially improve the generalizability and safety of the trained mobile agents. MetaUrban will be made publicly available to provide research opportunities and foster safe and trustworthy embodied AI and micromobility in cities. The code and data have been released. _∗_ Equal contribution.

1 INTRODUCTION

Public urban spaces (Whyte, 2012) vary widely in type, form, and size, encompassing streetscapes, plazas, and parks. They are crucial spaces for transit and transport (Geddes, 1949), as well as providing stages to host various social events (Park et al., 1925).
In recent years, these spaces have also become key zones for the growing trend of **micromobility** (Mitchell et al., 2010; Abduljabbar et al., 2021; Oeschger et al., 2020), a term that refers to small, lightweight vehicles like electric scooters, e-bikes, and other mobile machines designed for short-distance travel. Micromobility is becoming an increasingly important solution for improving urban transportation efficiency, reducing environmental impact, and offering flexible alternatives to car ownership in cities. As shown in Figure 2 (Top), food delivery bots navigate on the sidewalk to accomplish the last-mile food delivery task, while elders and physically disabled people maneuver electronic wheelchairs and mobility scooters on the street. Various legged robots like robot dog Spot from Boston Dynamics and humanoid robot Optimus from Tesla are also forthcoming. We can imagine a future of such _automated micromobility_ that harnesses advanced AI models to improve situational awareness and maneuver various mobile machines more intelligently and safely in complex urban environments. Simulation platforms have played a crucial role in enabling systematic and scalable training of embodied AI agents and safety evaluation before real-world deployment. However, most of the existing simulators focus either on _indoor household environments_ (Puig et al., 2018; Kolve et al., 2017; Savva et al., 2019; Shen et al., 2021; Li et al., 2024; Gan et al., 2021) or _outdoor driving_ _environments_ (Krajzewicz et al., 2002; Li et al., 2022b; Dosovitskiy et al., 2017). For example, platforms like AI2-THOR (Kolve et al., 2017), Habitat (Savva et al., 2019), and iGibson (Shen et al., 2021) are designed for household assistant robots in which the environments are mainly apartments or houses with furniture and appliances; platforms like SUMO (Krajzewicz et al., 2002), CARLA (Dosovitskiy et al., 2017), and MetaDrive (Li et al., 2022b) are designed for research on autonomous driving and transportation focusing on roadways and highways. Yet, simulating complex _urban environments_ for micromobility tasks, with diverse layouts, terrains, obstacles, and complex dynamics of pedestrians, is much less explored. Delivery Robot Electric Wheelchair Mobility Scooter Robot Dog Long Horizon Tasks in Large Scale Scenes Multifarious Terrains Diverse Obstacles Crowded Spaces with Pedestrians Figure 2: **Motivation.** (Top) Emerging automated micromobility. (Bottom) Unique challenges in micromobility. Distinct from the household and driving tasks, micromobility plays an essential role in providing accessibility ( _e.g._, electric wheelchairs) and convenience ( _e.g._, food delivery bots) in the public urban space, while it also brings _**unique challenges**_ for mobile machines and the underlying embodied AI agents. Let’s follow the adventure of a last-mile delivery bot, who aims to deliver a lunch order from a nearby pizzeria to the campus (Figure 2 (Bottom)). First, it faces a long-horizon task in **large** **scale** scenes across several street blocks, which span a significantly larger space than the indoor household environment. Second, it needs to deal with **multifarious terrains**, such as fragmented curbs and rugged ground caused by tree roots on sidewalks, which are seldom seen in indoor and driving environments. 
Then, it must safely navigate the cluttered street full of **diverse obstacles** like trash bins, parked scooters, and potted plants, which is absent in driving scenarios while with large obstacles variations compared to indoor scenes. In addition, it needs to handle **crowded spaces with** **pedestrians** to avoid collisions, especially taking care of disabled people in wheelchairs, which do not exist in indoor and driving environments. Thus, the large scales, multifarious terrains, diverse obstacles, and dense pedestrians bring unique challenges to AI-driven mobile machines moving 2 in cities, as well as the design of simulation environments for the training and evaluation of the embodied AI models. In this work, we present **MetaUrban** – a compositional simulation platform aiming to facilitate the research of AI-driven micromobility. First, we introduce _Hierarchical Layout Generation_, a procedural generation approach that can generate infinite layouts hierarchically from street blocks to sidewalks, functional zones, and object locations. It can generate scenes at an arbitrary scale with various connections and divisions of street blocks, obstacle locations, and complex terrains. Then, we design the _Scalable Obstacle Retrieval_, an automatic pipeline for acquiring an arbitrary number of high-quality objects with real-world distribution, to fill the urban space. We first compute the object category distribution from worldwide urban scene data to form a description pool. Then, with the sampled descriptions from the pool, we design a VLM-based open-vocabulary searching schema, which can effectively retrieve objects from large-scale 3D asset repositories. These two modules are critical for improving the _generalizability_ of trained agents. Finally, we propose the _Cohabitant Populating_ method to generate complex dynamics in urban spaces. We first tailor recent 3D digital human and motion datasets to get 1,100 rigged human models, each with 2,314 movements. Then, to form safety-critical scenarios, we integrate Vulnerable Road Users (VRUs) like bikers, skateboarders, and scooter riders. As the subjects of micromobility, we include various mobile machines – the delivery bot, electric wheelchair, mobility scooter, robot dog, and humanoid robot. Then, based on path planning algorithms, we can get complex trajectories among hundreds of environmental agents simultaneously with collision and deadlock avoidance. Also, enabled by MetaUrban’s flexible user interfaces (mouse, keyboard, joystick, and racing wheel), users can directly apply human-operated trajectories to agents, which provides an easy way to collect demonstration data for agent training. In addition, we impose a series of traffic rules to regulate all agent behaviors. It is critical for enhancing the _safety_ of the mobile agents. Based on MetaUrban, we construct a large-scale dataset, MetaUrban-12K, that includes 12,800 training scenes and 1,000 test scenes. The mean area size is 20,000 _m_ [2], while the episode length is 410 _m_ on average. As a pilot study, we introduce Point Navigation and Social Navigation, which are the two most fundamental tasks for mobile machines moving in urban spaces, as a starting point for AI-driven micromobility research. We build comprehensive benchmarks for these two tasks, in which we establish extensive baseline models, covering Reinforcement Learning, Safe Reinforcement Learning, Offline Reinforcement Learning, and Imitation Learning. 
Then, we make extensive evaluations across mobile machines to delve into the performance influence of varied mechanical structures (such as engine force, wheel friction, and wheelbase) on the learning and execution of AI policies. In the ablation study, we demonstrate that the compositional nature of the simulated environments can substantially improve the generalizability and safety of the trained mobile agents. We will make MetaUrban publicly available to enable more research opportunities for the community and foster safe and trustworthy embodied AI and micromobility in cities. 2 R ELATED W ORK Many simulation platforms have been developed for embodied AI research, depending on the target environments – such as indoor homes and offices, driving roadways and highways, and crowds in warehouses and squares. We compare representative ones with the proposed MetaUrban simulator. **Indoor Environments.** Platforms for indoor environments are mainly designed for household assistant robots, emphasizing the affordance, realism, and diversity of objects, as well as the interactivity of environments. VirtualHome (Puig et al., 2018) pivots towards simulating routine human activities at home. AI2-THOR (Kolve et al., 2017) and its extensions, such as ManipulaTHOR (Ehsani et al., 2021), RoboTHOR (Deitke et al., 2020), and ProcTHOR (Deitke et al., 2022b), focus on detailed agent-object interactions, dynamic object state changes, and procedural scene generation, alongside robust physics simulations. Habitat (Savva et al., 2019) offers environments reconstructed from 3D scans of real-world interiors. Its subsequent iterations, Habitat 2.0 (Szot et al., 2021) and Habitat 3.0 (Puig et al., 2023b), introduce interactable objects and deformable humanoid agents, respectively. iGibson (Shen et al., 2021) provides photorealistic environments. Its upgrades, Gibson 2.0 (Li et al., 2021), and OmniGibson (Li et al., 2024), focus on household tasks with object state changes and a realistic physics simulation of everyday activities, respectively. ThreeDWorld (Gan et al., 2021) targets real-world physics by integrating high-fidelity simulations of liquids and deformable objects. However, unlike MetaUrban, these simulators are focused on indoor environments with particular tasks like object rearrangement and manipulation. 3 **Driving Environments.** Platforms for driving environments are mainly designed for autonomous vehicle research and development. Simulators like GTA V (Martinez et al., 2017), Sim4CV (Müller et al., 2018), AIRSIM (Shah et al., 2018), CARLA (Dosovitskiy et al., 2017), and its extension SUMMIT (Cai et al., 2020) offer realistic environments that mimic the physical world’s detailed visuals, weather conditions, and day-to-night transitions. Other simulators enhance efficiency and extensibility at the expense of visual realism, such as Udacity (Team, b), DeepDrive (Team, a), Highway-env (Leurent, 2018), and DriverGym (Kothari et al., 2021). MetaDrive (Li et al., 2022b) trades off between visual quality and efficiency, offering a lightweight driving simulator that can support the research of generalizable RL algorithms for vehicles. Although some of the simulators (Martinez et al., 2017; Dosovitskiy et al., 2017) involve traffic participants other than vehicles, such as pedestrians and cyclists, all of them focus on vehicle-centric driving scenarios and neglect the stage for urban micromobility – public urban spaces like sidewalks and plazas. 
**Social Navigation Environments.** Other than indoor and driving environments, social navigation platforms emphasize the social compatibility of robots. Simulators like Crowd-Nav (Chen et al., 2019), Gym-Collision-Avoidance (Everett et al., 2018), and Social-Gym 2.0 (Sprague et al., 2023), model scenes and agents in 2D maps, focusing more on the development of path planning algorithms. Other simulators, such as HuNavSim (Pérez-Higueras et al., 2023), SEAN 2.0 (Tsoi et al., 2022), and SocNavBench (Biswas et al., 2022), upgrade the environment to 3D space and introduce human pedestrians to support the development of more complex algorithms. Social navigation platforms focus on crowd navigation, with oversimplified objects and surrounding environmental structures in the scenes, making them not applicable to complex urban micromobility tasks. In contrast, MetaUrban supports large-scale urban space simulation with real-world scenes (such as street facilities and terrains), providing significantly rich semantics and superior complexity of environments. In addition, MetaUrban supports the cross-machine evaluation of generalizability and safety with different mechanical structures. These features make it a unique choice for urban micromobility. In summary, none of the recent simulation platforms have been constructed for urban micromobility. The proposed simulator MetaUrban is the first simulator designed for AI-driven urban micromobility research. It differs from previous simulators significantly in terms of complex scenes (with large scales and multifarious terrains), diverse obstacles, vibrant dynamics, different types of mobile machines like delivery bots, electric wheelchairs, and mobility scooters, and intricate simulated interactions. Please refer to Appendix B.6 for a detailed comparison table with existing simulators in the dimensions of scale, sensor, and features, where MetaUrban shows a remarkable superiority. We believe MetaUrban can provide a lot of new research opportunities for AI-driven urban micromobility. 3 M ETA U RBAN S IMULATOR MetaUrban is a compositional simulation platform that can generate infinite training and evaluation environments for AI-driven urban micromobility. We propose a procedural generation pipeline, as the basis of MetaUrban, for constructing infinite interactive scenes with different specifications. As shown in Figure 3, MetaUrban uses a structured description script to create urban scenes. Based on the script information about street blocks, sidewalks, objects, agents, and more, it starts with the street block map, then plans the ground layout by dividing different function zones, then places static objects, and finally populates dynamic agents. Figure 3: **Procedural generation.** MetaUrban can automatically generate complex urban scenes with its compositional nature. From the second to the fourth column, the top row shows the 2D road maps, and the bottom row shows the bird-eye view of 3D scenes. 4 This section highlights three key designs in the MetaUrban simulator to support exhibiting three unique characteristics of urban spaces respectively – complex scenes (with large scales and multifarious terrains), diverse obstacles, and vibrant dynamics. Section 3.1 introduces **Hierarchical Layout** **Generation**, which can infinitely generate diverse layouts with different functional zone divisions, object locations, and terrains that are essential for the _generalizability_ to scene diversity of agents. 
Section 3.2 introduces **Scalable Obstacle Retrieval**, which harnesses worldwide urban scene data to obtain real-world object distributions in different places, and then builds a large-scale, high-quality static object set with VLM-enabled open-vocabulary searching. It is crucial for further enhancing the _generalizability_ to obstacle diversity of agents. Section 3.3 introduces **Cohabitant Populating**, in which we leverage the advancements in digital humans to enrich the appearances, movements, and trajectories of pedestrians and vulnerable road users, as well as incorporate other agents to form a vivid cohabiting environment. It is critical for improving the _safety_ of the mobile agents.

3.1 HIERARCHICAL LAYOUT GENERATION

The complexity of scene layouts, _i.e._, the connection and categories of blocks, the specifications of sidewalks and crosswalks, the placement of objects, as well as the terrains, is crucial for enhancing the generalizability of trained agents maneuvering in public spaces. In the hierarchical layout generation framework, we start with a ground plan that samples categories of street blocks and divides sidewalks into different functional zones. Then, we allocate various objects procedurally conditioned on functional zones. Finally, we implement a terrain generation system to synthesize various ground conditions. With the above procedures, we can get infinite urban scene layouts with different specifications of sizes, object locations, and terrains.

**Ground plan.** We design 5 typical street block categories, _i.e._, straight, intersection, roundabout, circle, and T-junction. In the simulator, to form a large map with several blocks, we can sample the category, number, and order of blocks, as well as the number and width of lanes in one block, to get different maps. Then, each block can simulate its own walkable areas – sidewalks and crosswalks, which are key areas for urban spaces with plenty of interactions. As shown in Figure 4 (Left), according to the Global Street Design Guide (Global Designing Cities Initiative & National Association of City Transportation Officials, 2016), we divide the sidewalk into four functional zones – building zone, frontage zone, clear zone, and furnishing zone. Based on their different combinations of functional zones, we further construct 7 typical templates for sidewalks (Figure 4 (Right)). To form a sidewalk, we can first sample the layout from the templates and then assign proportions for different function zones. For crosswalks, we provide candidates at the start and the end of each roadway, which support specifying the needed crosswalks or sampling them by a density parameter.

Figure 4: **Ground plan.** (Left) Sidewalk is divided into four functional zones – building, frontage, clear, and furnishing zone. (Right) Seven typical sidewalk templates – from (a) to (g).

**Terrain generation.** We develop a procedural terrain generator by connecting sampled terrain primitives, similar to the method adopted in (Lee et al., 2024), which uses the Wave Function Collapse (WFC) method (Gumin, 2016) to ensure smooth transitions between neighboring terrain primitives. We define five types of terrain primitives, including slopes, steps, stairs, ramps, and rough ground at different heights. After the mesh is generated, textures with different friction coefficients are added to the terrain to simulate different materials of the ground (a minimal illustrative sketch of this tiling idea follows).
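Below is a minimal, hypothetical Python sketch of such WFC-style tiling, under one simplifying assumption of ours: each primitive carries a discrete height level, and two tiles may be adjacent only if their levels differ by at most one (a stand-in for the smooth-transition constraint). This is an illustration of the idea, not MetaUrban's actual generator.

```python
import random

# Terrain primitives with assumed discrete height levels; two tiles may be
# adjacent only if their levels differ by at most one, a simplified stand-in
# for WFC's smooth-transition constraint between neighboring primitives.
PRIMITIVES = {"flat": 0, "rough": 0, "ramp": 1, "steps": 1, "stairs": 2, "slope": 2}

def compatible(a, b):
    return abs(PRIMITIVES[a] - PRIMITIVES[b]) <= 1

def generate_terrain(rows, cols, seed=0):
    rng = random.Random(seed)
    # Every cell starts in "superposition": the full set of candidate tiles.
    grid = [[set(PRIMITIVES) for _ in range(cols)] for _ in range(rows)]

    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

    while True:
        # Collapse the most constrained (lowest-entropy) undecided cell first.
        open_cells = [(len(grid[r][c]), r, c) for r in range(rows)
                      for c in range(cols) if len(grid[r][c]) > 1]
        if not open_cells:
            break
        _, r, c = min(open_cells)
        grid[r][c] = {rng.choice(sorted(grid[r][c]))}
        stack = [(r, c)]
        while stack:  # Propagate the adjacency constraint until stable.
            cr, cc = stack.pop()
            for nr, nc in neighbors(cr, cc):
                allowed = {p for p in grid[nr][nc]
                           if any(compatible(p, q) for q in grid[cr][cc])}
                if allowed != grid[nr][nc]:
                    grid[nr][nc] = allowed
                    stack.append((nr, nc))
    return [[cell.pop() for cell in row] for row in grid]

for row in generate_terrain(4, 8):
    print(" ".join(f"{tile:6}" for tile in row))
```

Each iteration collapses the most constrained cell first and propagates the constraint outward, which is the core of WFC; a production version would additionally weight primitives by desired frequency and backtrack on contradictions.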
This method allows for the generation of a wide variety of terrain combinations, reflecting the complex environments that agents may encounter. **Object placement.** After determining the ground layout, we can place objects on the ground. We divide objects into three classes. 1) Standard infrastructure, such as poles, trees, and signs, are placed periodically along the road. 2) Non-standard infrastructure, such as buildings, bonsai, and trash bins, are placed randomly in the designated function zones. 3) Clutter, such as drink cans, bags, and bicycles, are placed randomly across all functional zones. We can get different street styles by specifying an object pool while getting different compactness by specifying a density parameter. Please refer to Appendix B.1 for more details. 5 Idea Generation Category:
2Direct Enhancement
kFsWpSxkFz
# DeFT: DECODING WITH FLASH TREE-ATTENTION FOR EFFICIENT TREE-STRUCTURED LLM INFERENCE

**Jinwei Yao**¹,⁴,∗ **Kaiqi Chen**²,∗ **Kexun Zhang**³ **Jiaxuan You**⁴,† **Binhang Yuan**⁵ **Zeke Wang**²,† **Tao Lin**¹,† jinwei.yao1114@gmail.com; {chiaki_cage,wangzeke}@zju.edu.cn; kexunz@andrew.cmu.edu; jiaxuan@illinois.edu; biyuan@ust.hk; lintao@westlake.edu.cn 1 Westlake University 2 Zhejiang University 3 Carnegie Mellon University 4 University of Illinois Urbana-Champaign 5 Hong Kong University of Science and Technology

ABSTRACT

Large language models (LLMs) are increasingly employed for complex tasks that process multiple generation calls in a tree structure with shared prefixes of tokens, including few-shot prompting, multi-step reasoning, speculative decoding, etc. However, existing inference systems for tree-based applications are inefficient due to improper partitioning of queries and KV cache during attention calculation. This leads to two main issues: (1) a lack of memory access (IO) reuse for the KV cache of shared prefixes, and (2) poor load balancing. As a result, there is redundant KV cache IO between GPU global memory and shared memory, along with low GPU utilization. To address these challenges, we propose DeFT¹ (Decoding with Flash Tree-Attention), a hardware-efficient attention algorithm with prefix-aware and load-balanced KV cache partitions. DeFT reduces the number of read/write operations of the KV cache during attention calculation through _KV-Guided Grouping_, a method that avoids repeatedly loading the KV cache of shared prefixes in attention computation. Additionally, we propose _Flattened Tree KV Splitting_, a mechanism that ensures even distribution of the KV cache across partitions with little computation redundancy, enhancing GPU utilization during attention computations. By reducing 73-99% KV cache IO and nearly 100% IO for partial results during attention calculation, DeFT achieves up to 2.23×/3.59× speedup in decoding/attention latency across three practical tree-based workloads compared to state-of-the-art attention algorithms. Our code is available at [https://github.com/LINs-lab/DeFT](https://github.com/LINs-lab/DeFT).

1 INTRODUCTION

Large language models (LLMs) (Achiam et al., 2023; Touvron et al., 2023a;b) are extensively utilized across a range of tasks like chatbots (Roller et al., 2020), code generation (Mark et al., 2021), reasoning (Yao et al., 2023; Besta et al., 2023; Ning et al., 2023), etc. Traditionally, the interactions between LLMs and application users are sequential: the user sends a new prompt after the completion result of the previous prompt is received. However, many applications are now designed to process sequences with an internal tree structure, including self-consistency (Wang et al., 2022), few-shot prompting (Mann et al., 2020), multi-step reasoning (Yao et al., 2023; Hao et al., 2023; Xie et al., 2024), and speculative decoding (Miao et al., 2023; Cai et al., 2024), etc., as shown in Figure 1. Usually, these applications produce substantially more tokens than traditional ones, to provide a large space for tree search (Graves, 2012; Lu et al., 2022; Liu et al., 2023) or selection, as shown in Table 1. **We need a more efficient decoding algorithm in response to this interaction paradigm change from sequence-based decoding to tree-based decoding.** _∗_ Equal contribution.
Work done during Jinwei's visit to Westlake University. _†_ Corresponding author. ¹ By default, DeFT refers to DeFT-Flatten, which applies **Flattened Tree KV Splitting** before loading the KV cache for attention calculation.

When requests have shared prefixes in a tree structure, existing inference systems (Hugging Face; NVIDIA; Kwon et al., 2023) designed for sequence-based decoding introduce redundancy by failing to be prefix-aware at one or more of the following three levels: (1) _computation_ – for instance, the redundant recomputation of KV caches for shared prompts across requests in a batch (Hugging Face); (2) _memory storage_ – for example, the redundant storage of KV caches for shared prefixes (Hugging Face; Kwon et al., 2023; NVIDIA); (3) _memory access (IO)_ – such as repeatedly loading the KV cache of a shared system prompt during attention calculations (Hugging Face; Kwon et al., 2023; NVIDIA). Although some tree-based inference systems (Zheng et al., 2023; Gim et al., 2023; Cai et al., 2024; Miao et al., 2023) address the first two issues, they largely overlook the third and arguably the most crucial aspect: _memory access_, which is critical in the context of memory-bound LLM inference (Shazeer, 2019; Cai et al., 2024; Kim et al., 2023).

Table 1: **Comparison of efficiency in sequence-based CoT (Wei et al., 2022) and tree-based ToT (Yao et al., 2023) decoding for a reasoning task.** The task is _sorting 128 numbers_ from Besta et al. (2023). The total generated tokens of CoT is only 525 while 38,315 in ToT, resulting in inefficiency in end-to-end latency (second) and IO (TB). IO mainly consists of two parts: (i) _KV cache_: IO-KV; (ii) _partial results during attention calculation like_ $QK^T$ _and softmax_: IO-PA. Baselines: (i) _Flash-Decoding_ (Dao et al., 2023); (ii) _Tree Attention_: tree attention in Medusa (Cai et al., 2024).

| Method | Latency (s) | IO-KV (TB) | IO-PA (TB) |
|---|---|---|---|
| Flash-Decoding + CoT | 21 | 0.6 | 0 |
| Flash-Decoding + ToT | 429.65 | 59.96 | 0 |
| Tree Attention + ToT | 380.87 | 12.40 | 3.69 |
| DeFT-Flatten (ours) + ToT | 94.67 | 12.40 | 0 |
| Speedup over best baseline | 4.02× | – | – |

[Figure 1 diagram: sequence-based decoding (Prompt 1 → Generation 1; Prompt 2 → Generation 2) vs. tree-based decoding, where a shareable KV cache (e.g., t0) branches into non-shareable generations (e.g., t1, t2); legend: shareable KV cache, non-shareable prompt, non-shareable generation.]

To accelerate tree-structured LLM inference, an important question is whether we can leverage the shared patterns in multi-cascaded prefixes to design a faster and more memory-efficient attention algorithm. This task is challenging due to two key issues as follows.
**C1: How to ensure prefix-awareness in memory access of KV cache?** Current memory-efficient attention algorithms (Dao et al., 2022; 2023; Hong et al., 2023) are optimized for sequence-based decoding, which leads to a lack of prefix-awareness during memory access. As a result, shared prefixes in the KV cache are repeatedly loaded. **C2: How to split the tree-structured KV cache for load balancing and high GPU utilization?** For optimal GPU utilization, the current KV splitting strategy for sequence-based decoding – Flash-Decoding (Dao et al., 2023), which splits sequence KV into chunks – cannot be directly applied to tree-structured KV. Tree-structured KV caches also need to be effectively partitioned: however, if we naively split them by nodes, token lengths across different nodes can vary significantly (e.g., in speculative decoding (Cai et al., 2024), some nodes might only have 1 token while the root node could have thousands), making it difficult to maintain load balance and efficient computation.

Figure 1: **An illustration of Sequence-based decoding and Tree-based decoding.**

To address the above challenges, we propose DeFT-Flatten, a prefix-aware tree attention algorithm with a flattened tree KV splitting strategy, based on two key insights. _•_ First, how queries and KV caches are grouped for attention calculation significantly impacts memory access. Existing approaches use a _**Q-Guided Grouping**_ strategy, where each request/query is grouped with all corresponding KV caches. While this eliminates IO redundancy for queries, the prefix KV cache still gets loaded multiple times. To address **C1**, we propose _**KV-Guided Grouping**_: DeFT-Flatten groups the prefix's KV cache with all shared queries, ensuring the prefix KV cache is only loaded once, significantly reducing redundant loading with negligible IO overhead for reloading queries. The IO overhead for queries (Q) is minimal compared to the KV cache, as the maximum query length typically corresponds to the number of root-to-leaf paths in the tree, making the queries relatively short (e.g., dozens of tokens) compared to the KV cache length in each node (e.g., hundreds or thousands of tokens). _•_ Second, since LLM inference is IO-bound, the attention overhead of each QKV group is primarily influenced by the IO of the KV cache. Therefore, it is crucial to ensure that the KV lengths of different QKV groups are nearly balanced. To address **C2**, we propose _**Flattened Tree KV Splitting**_, which enables balanced partitions by dividing the flattened tree KV into even chunks, using bit causal masks to capture causal relationships between queries and KV cache. We summarize our contributions as follows:

- We propose a hardware-efficient tree attention algorithm, DeFT-Flatten, which is IO-aware of shared prefixes' KV cache and load-balanced in computation.
- We implement DeFT-Flatten on OpenAI Triton (Tillet et al., 2019) to gain precise management over memory access and fuse all attention operations into a single GPU kernel.
- We theoretically justify the superiority of DeFT-Flatten over the existing attention algorithms (Wolf et al., 2019; Dao et al., 2023; Cai et al., 2024; Miao et al., 2023) in terms of IO complexity.
- We empirically verify its effectiveness on few-shot prompting, multi-step reasoning, and speculative-decoding tasks. DeFT-Flatten can achieve a decoding latency speedup of **1.3**× for few-shot prompting, **2.2**× for speculative decoding, and **1.1**× for multi-step reasoning, due to an up to **3.59**× faster attention calculation, compared with the baseline implementations (Dao et al., 2023; Cai et al., 2024; Zheng et al., 2023).
- We compare different tree split strategies – DeFT-Node, DeFT-Node-Chunk, and DeFT-Flatten – in ablation studies (see Section 4.4), showing that the balanced partitioning of QKV groups matters.

2 RELATED WORK

**Tree-based Decoding.** Tree-based decoding, exemplified by beam search (Graves, 2012), has been pivotal in NLP, handling lexical and logical constraints (Anderson et al., 2017; Post & Vilar, 2018; Hokamp & Liu, 2017), mitigating gender bias (Lu et al., 2021), achieving communicative goals (Holtzman et al., 2018), and improving alignment (Liu et al., 2023). Based on the structural features of queries and KV cache, we can classify tree-based decoding into two patterns: (i) tree-structured past KV with parallel queries – usually in multi-step reasoning (Yao et al., 2023; Besta et al., 2023; Ning et al., 2023), using search trees with parallel hypothesis generation and selection based on scoring functions that either score candidates per token (Dathathri et al., 2019; Lu et al., 2021; 2022) or per reasoning step (Welleck et al., 2022; Uesato et al., 2022; Xie et al., 2024); (ii) past KV in sequence with tree-structured queries – usually in speculative decoding (Cai et al., 2024; Miao et al., 2023). Further details on these two patterns are discussed in Appendix A.2. Although tree-based search algorithms like A* (Lu et al., 2022) and Monte-Carlo Tree Search (Liu et al., 2023) have been applied, the efficiency of tree-based decoding remains largely under-explored.

**Memory-efficient Attention Algorithms.** Existing memory-efficient attention algorithms target sequence-based decoding. FlashAttention (Dao et al., 2022) improves self-attention computation in LLM training via tiling and kernel fusion, reducing IOs. Flash-Decoding (Dao et al., 2023) extends this, enhancing parallelism by dividing K and V and introducing a global reduction to gather partial attention results, enabling efficient decoding for long sequences. Unfortunately, applying these memory-efficient algorithms to tree-based decoding overlooks redundancy in the IO of the tree-structured KV cache, which is the focus of DeFT.

**Tree Attention.** Integrated into LLM inference, tree attention reduces computation, storage, and kernel launching overheads (Miao et al., 2023). Tree-structured token candidates undergo parallel decoding, with SpecInfer (Miao et al., 2023) introducing a topology-aware causal masked tree attention algorithm, dynamically updating a causal mask to capture relationships among tokens. Medusa (Cai et al., 2024) uses a similar mechanism with a static causal mask, while other works (Zhao et al., 2023; Liu et al., 2024) adopt analogous approaches to enhance attention calculation efficiency. However, unlike DeFT, these existing works utilizing tree attention do not take memory access into consideration.

**Storage Optimization of Tree-based Decoding.** LLM frameworks optimized for tree-based decoding (Kwon et al., 2023; Zheng et al., 2023) focus on memory storage efficiency.
vLLM (Kwon et al., 2023) enhances GPU memory utilization, allowing sequences from the same parent to share KV cache storage. SGLang (Zheng et al., 2023) supports dynamic KV cache management during multi-round interactions with LLMs, improving memory efficiency.

**Discussion on Concurrent Works.** Some concurrent works (Ye et al., 2024a; Juravsky et al., 2024; Athiwaratkun et al., 2024; Ye et al., 2024b; Zhu et al., 2024) also recognize the importance of IO during LLM inference. However, these works have at least one of these flaws: i) they (Ye et al., 2024a; Juravsky et al., 2024; Athiwaratkun et al., 2024; Ye et al., 2024b; Zhu et al., 2024) cannot be easily extended to situations where the decoding tree has more than two levels – they target single-context batch sampling scenarios, a special case of general tree-based decoding with a system prompt as prefix and unique suffixes in the first depth; ii) they (Juravsky et al., 2024; Athiwaratkun et al., 2024) do not consider the inefficiency caused by the varying lengths of different nodes in the decoding tree. See the comparison of DeFT and concurrent works in Appendix A.3.

3 DeFT

In this section, we first introduce the background knowledge of LLM inference, upon which we outline the importance of QKV partitions for attention calculation. We then present an overview of the DeFT algorithm and attention kernel design, with its system support. Finally, we propose an efficient QKV partitioning method for DeFT, which not only reduces memory access of prefixes' KV cache and partial results (e.g., Softmax), but also ensures balanced partitions during attention computation.

3.1 PRELIMINARY

**LLM inference and its bottleneck.** LLM inference involves two stages: (1) prefill and (2) decoding. During the prefill stage, a prompt is tokenized to initialize the LLM. The output of the prefill stage becomes the input for the decoding stage. The decoding stage is auto-regressive, with each output token from the previous step serving as the input token for the next step. Due to the sequential process of auto-regressive decoding, LLM inference is memory-bound (Shazeer, 2019; Kim et al., 2023; Cai et al., 2024), wherein every forward pass requires transferring all model parameters and the KV cache from the slower but larger High-Bandwidth Memory (HBM) to the faster but much smaller shared memory of the GPU (Jia & Van Sandt, 2021)². Another potential bottleneck is low GPU utilization (Dao et al., 2023), which happens when the parallelism (usually limited by the batch size) is much smaller than the number of streaming multiprocessors (SMs) on the GPU (108 for an A100), in which case the operation will only utilize a small portion of the GPU.

**The execution pattern of attention algorithms on GPUs.** We can separate the execution of attention algorithms into two main phases: (1) QKV PREPARATION PHASE: group Query, Key, and Value (QKV) logically into partitions and map QKV groups to different streaming multiprocessors (SMs) of GPUs; (2) ATTENTION CALCULATION PHASE: load QKV partitions into different SMs' shared memory and apply attention algorithms to each group for the final attention results.

**QKV partitions with segmented attention.** In sequence-based decoding, QKV partitioning is crucial when the parallelism (usually limited by the batch size (Dao et al., 2023)) is much smaller than the number of streaming multiprocessors (SMs) on the GPU (108 for an A100), where the operation will only utilize a small portion of the GPU.
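As an illustrative aside, the memory-bound argument above can be quantified with a back-of-the-envelope estimate; the sketch below uses our own assumed model and hardware numbers (a Llama2-7B-like model in fp16 on A100-class HBM), not values from the paper.

```python
# Back-of-the-envelope sketch of the IO floor for one decoding step.
# All numbers are illustrative assumptions: 7B fp16 weights, 32 layers,
# 32 KV heads of dim 128, a 4096-token context, and ~2 TB/s HBM bandwidth.

def decode_step_io_floor(n_params=7e9, bytes_per_param=2,
                         n_layers=32, n_kv_heads=32, head_dim=128,
                         context_len=4096, batch_size=1, bytes_per_val=2,
                         hbm_bw=2e12):
    weight_bytes = n_params * bytes_per_param
    # KV cache per token per layer: K and V, each n_kv_heads * head_dim values.
    kv_bytes = (batch_size * context_len * n_layers
                * 2 * n_kv_heads * head_dim * bytes_per_val)
    total_bytes = weight_bytes + kv_bytes
    return total_bytes, total_bytes / hbm_bw

total_bytes, seconds = decode_step_io_floor()
print(f"~{total_bytes / 1e9:.1f} GB moved per step "
      f"-> >= {seconds * 1e3:.1f} ms/token from HBM bandwidth alone")
```

Because a single request (batch size 1) keeps only a sliver of the GPU's 108 SMs busy while this traffic streams in, splitting the KV cache itself across SMs, as described next, is what restores utilization.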
To enable high GPU utilization, Flash-Decoding (Dao et al., 2023) partitions the queries and KV cache and then calculates the attention in parallel. Details are as follows: (1) QKV P REPARATION P HASE : for each query in the batch, split its sequential KV cache into chunks as different QKV partitions. (2) A TTENTION C ALCULATION P HASE : it calculates segmented attentions $\mathbf{A}_0$, $\mathbf{A}_1$, and $\mathbf{A}_2$ over three segments, respectively, and then gets the final attention by online Softmax merging (Dao et al., 2022; 2023) based on the segmented attention from different QKV partitions. We elaborate on the procedure below.

- Let's say we have a key tensor $\mathbf{K} \in \mathbb{R}^{l_{kv} \times d}$, a value tensor $\mathbf{V} \in \mathbb{R}^{l_{kv} \times d}$, and a query tensor $\mathbf{Q} \in \mathbb{R}^{l_q \times d}$. Considering the general case, $\mathbf{K}$ and $\mathbf{V}$ are partitioned across the sequence (row) dimension into three parts for parallel calculation: $\mathbf{K} = \mathbf{K}_0 \| \mathbf{K}_1 \| \mathbf{K}_2$ and $\mathbf{V} = \mathbf{V}_0 \| \mathbf{V}_1 \| \mathbf{V}_2$, with "$\|$" denoting concatenation along the row axis.
- We calculate the attentions $\mathbf{A}_0$, $\mathbf{A}_1$, and $\mathbf{A}_2$ over the KV chunks in different streaming multiprocessors (SMs) of the GPU, where $\mathbf{A}_i = \langle \mathbf{Q}, \mathbf{K}_i, \mathbf{V}_i \rangle$ and $\langle \mathbf{q}, \mathbf{k}, \mathbf{v} \rangle = \mathrm{Softmax}\big(\mathbf{q}\mathbf{k}^{\top}/\sqrt{d}\big)\,\mathbf{v}$.
- We calculate LogSumExp (LSE) as a weight for merging $\mathbf{A}_0$, $\mathbf{A}_1$, and $\mathbf{A}_2$. We define $\mathrm{LSE}(\mathbf{q}, \mathbf{k}) = \log \sum \exp\big(\mathbf{q}\mathbf{k}^{\top}/\sqrt{d}\big)$.
- We have $\langle \mathbf{Q}, \mathbf{K}, \mathbf{V} \rangle = \mathrm{SegAttn}(\mathbf{A}_0, \mathbf{A}_1, \mathbf{A}_2)$, which means segmented attention with online Softmax (Dao et al., 2022):

$$\mathrm{SegAttn}(\mathbf{A}_0, \mathbf{A}_1, \mathbf{A}_2) = \frac{\mathbf{A}_0\, e^{\mathrm{LSE}(\mathbf{Q}, \mathbf{K}_0)} + \mathbf{A}_1\, e^{\mathrm{LSE}(\mathbf{Q}, \mathbf{K}_1)} + \mathbf{A}_2\, e^{\mathrm{LSE}(\mathbf{Q}, \mathbf{K}_2)}}{e^{\mathrm{LSE}(\mathbf{Q}, \mathbf{K}_0)} + e^{\mathrm{LSE}(\mathbf{Q}, \mathbf{K}_1)} + e^{\mathrm{LSE}(\mathbf{Q}, \mathbf{K}_2)}}, \quad \text{where } e := \exp. \tag{1}$$

2 A100's HBM has 1.5-2TB/s bandwidth and 40-80GB; its shared memory has 19TB/s bandwidth and 20MB.
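To make the merging rule in equation 1 concrete, below is a minimal NumPy sketch (our own illustration; the names `attn_and_lse` and `seg_attn` are hypothetical, not from the Flash-Decoding or D E FT codebases). Each segment returns its partial attention together with its LSE weight, and the final check confirms that merging three KV chunks reproduces unsegmented attention:

```python
import numpy as np

def attn_and_lse(q, k, v):
    """Single-segment attention <q, k, v> plus its LogSumExp weight.
    q: (lq, d); k, v: (lkv, d). Returns (attn: (lq, d), lse: (lq,))."""
    s = q @ k.T / np.sqrt(q.shape[-1])        # scaled scores (lq, lkv)
    lse = np.log(np.exp(s).sum(axis=-1))      # LSE(q, k), one value per query
    attn = np.exp(s - lse[:, None]) @ v       # softmax(s) @ v
    return attn, lse

def seg_attn(parts):
    """Merge per-segment partial attentions with online-softmax weights (Eq. 1)."""
    attns, lses = zip(*parts)
    lses = np.stack(lses, axis=0)             # (n_segments, lq)
    w = np.exp(lses - lses.max(axis=0))       # stabilized exp(LSE_i)
    w = w / w.sum(axis=0)                     # normalized merging weights
    return sum(wi[:, None] * ai for wi, ai in zip(w, attns))

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 8))
k = rng.standard_normal((9, 8))
v = rng.standard_normal((9, 8))
full, _ = attn_and_lse(q, k, v)
merged = seg_attn([attn_and_lse(q, k[i:i + 3], v[i:i + 3]) for i in (0, 3, 6)])
assert np.allclose(full, merged)              # segmented merge == full attention
```

The assert verifies the exactness property that the global reduction relies on: segmented attention with LSE-weighted merging is mathematically identical to attention over the full KV cache.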
Figure 2: **Overview of D E FT.** _Input Metadata_ is prepared in the system elaborated in Appendix A.1. In the _QKV Preparation Phase_ (see Section 3.3), the QKV are grouped logically into partitions with IO-awareness of the shared prefixes' KV cache and load-balancing. These partitions guide the loading of QKV in the _Attention Calculation Phase_ (see Appendix A.4), where the attention calculation is executed.

3.2 O VERVIEW OF D E FT

**The importance of QKV partitions.** For tree-based decoding, logically partitioning QKV is necessary for attention calculation with high parallelism. The branch number of tree-structured generation requests may be insufficient to fully utilize the GPU when the number of tokens in the tree-structured KV cache is large, due to memory capacity limitations. For example, a request for the reasoning task of sorting 128 numbers (Besta et al., 2023) involves around 40K tokens in a Llama2-7B model, whose KV cache occupies 20GB, which means an 80GB A100 can only process at most 4 requests with such token counts.

**Motivation of D E FT.** D E FT aims to address two potential bottlenecks (i.e., IO and GPU utilization) of LLM inference when dealing with tree-structured KV sequences. Let's say we have a simple tree with two cascades, as shown in the left part of Figure 2: for two queries $\mathbf{Q}_a$ and $\mathbf{Q}_b$, the corresponding keys satisfy $\mathbf{K}_a = \mathbf{K}_0 \| \mathbf{K}_1$ and $\mathbf{K}_b = \mathbf{K}_0 \| \mathbf{K}_2$, respectively, and the values obey the same rule. D E FT is designed to: (1) minimize IO by eliminating redundant memory access of the shared prefix's KV cache ($\mathbf{K}_0$ and $\mathbf{V}_0$) for $\mathbf{Q}_a$ and $\mathbf{Q}_b$; (2) ensure balanced workloads for high GPU utilization, so that the overhead of computing each segmented attention $\mathbf{A}_i$ remains nearly identical. Since the global reduction in equation 1 requires all partial attention results, if the overhead for computing $\mathbf{A}_i$ is significantly larger than that of $\mathbf{A}_j$, the SM responsible for calculating $\mathbf{A}_j$ will experience prolonged idleness.

**Technique overview of D E FT.** D E FT aims to be a hardware-efficient attention algorithm by reducing memory access and ensuring load-balancing for tree-based decoding. See details in Figure 2: ➀ In the QKV P REPARATION P HASE, for prefix-aware and load-balanced QKV partitions, we introduce a _KV-Guided Grouping_ strategy to reuse the KV cache IO of the shared prefixes, and a _Flattened Tree KV Splitting_ strategy for high GPU utilization due to balanced and parallel attention calculation. See details in Section 3.3. ➁ During the A TTENTION C ALCULATION P HASE, we design the D E FT A TTENTION K ERNEL [3] to load QKV splits in a memory-efficient way, as logically grouped in the QKV P REPARATION P HASE, and then to perform the attention calculation. Key techniques are as follows, with details deferred to Appendix A.4: 1) Common _Kernel Fusion_ and _Tiling_ strategies avoid significant IO operations for partial results (i.e., $\mathbf{QK}^{\top}$ and Softmax), which Tree Attention-Medusa (Cai et al., 2024) lacks. 2) _Tree-Topology-Aware Global Reduction_, which extends the global reduction mechanism from Flash-Decoding (Dao et al., 2023). This approach efficiently computes the final attention for each query by aggregating partial attention results from QKV groups while considering the tree structure.

**System frameworks of D E FT.** Apart from the efficient D E FT A TTENTION K ERNEL, our system for D E FT has two other advantages: 1) efficient memory management of the KV cache in a tree structure, and 2) flexible control of the tree decoding process with arbitrary user-defined functions to decide when and how to branch/prune. Details of the key components and their coordination in the system are given in Appendix A.1.
3.3 P REFIX - AWARE AND B ALANCED T REE - STRUCTURED KV C ACHE P ARTITIONS

This section delves into the details of the QKV P REPARATION P HASE, which is a key design aspect of D E FT. The discussion of the A TTENTION C ALCULATION P HASE is deferred to Appendix A.4.

3 GPUs utilize a vast array of threads to execute operations known as _kernels_.
# S PREAD P REFERENCE A NNOTATION : D IRECT P REFERENCE J UDGMENT FOR E FFICIENT LLM A LIGNMENT

**Dongyoung Kim** [1] **, Kimin Lee** [1] **, Jinwoo Shin** [1] **, Jaehyung Kim** [2] 1 Korea Advanced Institute of Science and Technology, 2 Yonsei University kingdy2002@kaist.ac.kr, jaehyungk@yonsei.ac.kr

A BSTRACT

Aligning large language models (LLMs) with human preferences has become a key component of obtaining state-of-the-art performance, but it incurs a huge cost to construct a large human-annotated preference dataset. To tackle this problem, we propose a new framework, **S** pread **P** reference **A** nnotation with direct preference judgment (SPA), that boosts the alignment of LLMs using only a very small amount of human-annotated preference data. Our key idea is leveraging the human prior knowledge within the small (seed) data and progressively improving the alignment of the LLM, by iteratively generating the responses and learning from them with the self-annotated preference data. To be specific, we propose to derive the preference label from the logits of the LLM to explicitly extract the model's inherent preference. Compared to previous approaches using external reward models or implicit in-context learning, we observe that the proposed approach is significantly more effective. In addition, we introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within the generated preference data. Our experimental results demonstrate that the proposed framework significantly boosts the alignment of LLMs. For example, we achieve superior alignment performance on AlpacaEval 2.0 with only 3.3% of the ground-truth preference labels in the Ultrafeedback data compared to the cases using the entire data or state-of-the-art baselines. [1]

1 I NTRODUCTION

Recently, large language models (LLMs) have made huge progress in various NLP tasks, leading to real-world applications that are used by millions of users, such as coding assistants and chatbots (Anthropic, 2024; OpenAI, 2022; Team et al., 2023). Aligning LLMs with human feedback, particularly through learning from human preferences, is widely considered a crucial technique for their success (Christiano et al., 2017; Lee et al., 2021; Ziegler et al., 2019). To enhance this alignment, various preference learning algorithms have been extensively explored (Ouyang et al., 2022; Rafailov et al., 2023). Despite these advancements, one of the remaining challenges is the reliance on large-scale human-annotated preference data. As the quality and quantity of the preference data are critical for the successful alignment of LLMs (Bai et al., 2022a; Cui et al., 2023), the huge cost to acquire such data inevitably presents significant obstacles.

To mitigate this challenge, engaging LLMs in constructing preference data and improving their alignment using these data has recently gained attention. For example, a representative approach along this line generates multiple responses for the input prompts, and then approximates human preference between them through the LLM's predictions, often referred to as _LLM-as-judge_ (Bai et al., 2022b; Yuan et al., 2024). However, these approaches are only effective when the given LLM is sufficiently large and well-aligned to mimic human preference via in-context learning.
On the other hand, an external reward model can be used to substitute human preference annotation efficiently (Jiang et al., 2023b; Snorkel, 2024), but it is built on the availability of large human preference data and could also be ineffective if there is a distribution mismatch. Lastly, these approaches carry a risk of potential labeling noise from LLMs, but this aspect has not been explored yet. Therefore, in this work, we aim to develop a method to effectively improve the alignment of LLMs by overcoming these limitations while relying only on a small amount of human annotation.

1 [https://github.com/kingdy2002/SPA](https://github.com/kingdy2002/SPA)

Figure 1: **Illustration of the proposed SPA framework.** SPA progressively improves the alignment of LLMs by iterating (1) the generation of new preference data and (2) the preference learning on the constructed data with self-refinement. Technical details are presented in Section 4.

**Contribution.** We introduce a simple yet effective framework, coined SPA, to improve the alignment of LLMs with only a small amount of human-labeled preference data, by **S** preading **P** reference **A** nnotation via direct preference judgment. Our key idea is to progressively expand the knowledge of human preference within the small (seed) data, by iteratively generating the responses and learning from them through the self-annotated preference labels. Specifically, our technical contributions are three-fold, as described in what follows. First, we judge the preference labels directly using the logits of the LLM to explicitly extract the model's inherent preference. This approach is more effective than previous methods that rely on external reward models or implicit in-context learning. Second, we introduce a confidence-based refinement of preference labels to reduce the risk of noise in preference learning with generated data. Third, to further enhance the effectiveness of this refinement, we propose using a linearly extrapolated prediction between the current and reference models; it approximates the predictions of a more strongly aligned model, leading to better noise identification.

We demonstrate the effectiveness of the proposed SPA by aligning recent LLMs with small human-annotated preference data and evaluating their alignment on the commonly used benchmarks. For example, using only 3.3% of the ground-truth preferences in the Ultrafeedback data (Cui et al., 2023) with the mistral-7b-0.1v SFT model (Jiang et al., 2023a), our framework achieves over a 16.4% increase in AlpacaEval 2.0 (Li et al., 2023a) win rate compared to the initial SFT model (see Figure 2). Additionally, the AlpacaEval 2.0 length-controlled win rate is improved from 7.58% to 15.39%, and the MT-Bench score (Zheng et al., 2023) increased from 6.38 to 6.94. Compared to preference judgment methods like LLM-as-judge (Zheng et al., 2023), and even strong reward models such as PairRM (Jiang et al., 2023b), which have recently shown state-of-the-art performance on the AlpacaEval 2.0 benchmark, our approach consistently outperforms them across all metrics. More interestingly, the proposed SPA successfully improves the alignment of various LLMs, even without the initial human preference data. These results demonstrate that our framework is highly competitive and practical for real-world applications.
2 R ELATED W ORK

Figure 2: **Summary of main result.** Evaluation results on AlpacaEval 2.0 (Li et al., 2023a). Our framework significantly improves the alignment of LLMs, without additional human preference data. See detailed results in Section 5.

**Alignment of LLMs with human preference.** Learning from human preferences now serves as a core component for the state-of-the-art LLMs (Anthropic, 2024; OpenAI, 2023; Team et al., 2023; Touvron et al., 2023) for aligning their responses with users' intent and values (Ouyang et al., 2022; Ziegler et al., 2019). Arguably, one of the most popular frameworks is reinforcement learning with human preference (RLHF) (Christiano et al., 2017; Lee et al., 2021), which first trains a reward model, and then fine-tunes the LLM to maximize that reward with KL divergence regularization to prevent the reward over-optimization of the LLM. On the other hand, various preference learning algorithms have recently been proposed to fine-tune LLMs with human preference more efficiently (Ethayarajh et al., 2024; Hong et al., 2024; Liu et al., 2023; Rafailov et al., 2023; Xu et al., 2023; Zhao et al., 2023; Meng et al., 2024). For example, Rafailov et al. (2023) proposes Direct Preference Optimization (DPO), which allows one to fine-tune LLMs without a separate reward modeling stage, by deriving a training objective mathematically equivalent to RLHF. Ethayarajh et al. (2024) further removes the reliance on pair-wise preference labels by formulating the objective based on a human utility model. However, these methods assume that large human-annotated preference data is available, which requires a huge data acquisition cost.

**Engagement of LLMs for constructing preference data.** For an efficient and scalable alignment procedure, engaging LLMs for preference dataset construction has recently received attention. One common approach involves generating multiple responses to input prompts from the LLM, and using the LLM's predictions to approximate human preferences between them, a technique often referred to as _LLM-as-judge_ (Bai et al., 2022a; Yuan et al., 2024). However, this method is effective only when the LLM is sufficiently large and well-aligned to mimic human preferences through in-context learning. Alternatively, employing an external reward model can efficiently replace human preference judgment (Jiang et al., 2023b; Snorkel, 2024), but this approach relies on the availability of extensive human preference data to pre-train the reward model and may be ineffective if there is a distribution mismatch. Some concurrent works (Rosset et al., 2024; Snorkel, 2024; Wu et al., 2024; Xiong et al., 2024) have proposed alignment procedures with iterative data expansion and preference learning. However, they use an external reward model or a stronger LLM for the preference judgment. In contrast, we only utilize the intrinsic knowledge of the training LLM for new data expansion and preference learning.

3 P RELIMINARIES

Let us denote the LLM as $\pi_\theta$, which generates an output sequence (_e.g._, response) $y$ for a given input sequence (_e.g._, prompt) $x$, _i.e._, $y \sim \pi_\theta(\cdot \mid x)$. Then, our goal is to make $\pi_\theta$ provide human-aligned responses to various input prompts.
To this end, we consider the popular framework of preference learning, which optimizes $\pi_\theta$ to learn human preferences between two different responses (Christiano et al., 2017; Lee et al., 2021; Ouyang et al., 2022). Specifically, we assume that a preference dataset $\mathcal{D} = \{(x, y_l, y_w)\}$ is available, which consists of triplets of an input prompt $x$, a preferred response $y_w$, and a dispreferred response $y_l$. Here, the preference labels were annotated by a ground-truth annotator, usually a human expert.

**Reward modeling and RL fine-tuning.** Since a pairwise preference between $y_w$ and $y_l$ is hard to model directly, one common practice is to introduce a reward function $r(x, y)$ and model the preference based on it using the Bradley-Terry model (Bradley & Terry, 1952):

$$p(y_w \succ y_l \mid x) = \frac{\exp(r(x, y_w))}{\exp(r(x, y_w)) + \exp(r(x, y_l))}. \tag{1}$$

From this formulation, one can introduce a parametrized reward model $r_\phi(x, y)$ by estimating its parameters with the maximum-likelihood objective:

$$\mathcal{L}_R(r_\phi) = -\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\big[\log \sigma\big(r_\phi(x, y_w) - r_\phi(x, y_l)\big)\big], \tag{2}$$

where $\sigma$ is the sigmoid function. After this reward modeling procedure, one can improve the alignment of the LLM $\pi_\theta$ by optimizing it to maximize the reward captured by $r_\phi$. Here, the KL-distance from the reference model $\pi_{\mathrm{ref}}$ is usually incorporated as a regularization to prevent the reward over-optimization of $\pi_\theta$, with a hyper-parameter $\beta > 0$ (Ouyang et al., 2022; Ziegler et al., 2019): [2]

$$\mathcal{L}_{\mathrm{RLHF}}(\pi_\theta) = -\mathbb{E}_{y\sim\pi_\theta,\, x\sim\rho}\big[r_\phi(x, y)\big] + \beta\, \mathbb{D}_{\mathrm{KL}}\big(\pi_\theta(y \mid x)\,\|\,\pi_{\mathrm{ref}}(y \mid x)\big). \tag{3}$$

2 $\pi_{\mathrm{ref}}$ is usually initialized with a supervised fine-tuned (SFT) LLM (Chung et al., 2024; Wei et al., 2022a). Also, $\pi_\theta$ is initialized with $\pi_{\mathrm{ref}}$.

**Direct preference modeling and optimization.** Rafailov et al. (2023) propose an alternative approach to align the LLM $\pi_\theta$ with the preference dataset $\mathcal{D}$, which is called Direct Preference Optimization (DPO). DPO integrates the two-step alignment procedure of reward modeling and RL fine-tuning into a single unified fine-tuning procedure. Specifically, the optimal reward function is derived from the RLHF objective (Eq. 3), with the target LLM $\pi_\theta$ and the reference model $\pi_{\mathrm{ref}}$ (Go et al., 2023; Peng et al., 2019; Peters & Schaal, 2007):

$$r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x), \quad \text{where } Z(x) = \sum_y \pi_{\mathrm{ref}}(y \mid x) \exp\Big(\frac{1}{\beta} r(x, y)\Big). \tag{4}$$

Then, the preference between two responses can be measured using this reward derivation, and $\pi_\theta$ is optimized to maximize this preference of $y_w$ over $y_l$ using the preference dataset $\mathcal{D}$.
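As a minimal illustration of Eqs. 2 and 4 (a PyTorch sketch of ours; the function names are hypothetical, and the $\beta \log Z(x)$ term is dropped because it cancels when comparing two responses to the same prompt $x$), rewards can be derived from summed per-token log-probabilities and plugged into the Bradley-Terry maximum-likelihood loss:

```python
import torch
import torch.nn.functional as F

def bt_reward_loss(r_w, r_l):
    """Bradley-Terry maximum-likelihood loss (Eq. 2):
    -E[log sigma(r(x, y_w) - r(x, y_l))]."""
    return -F.logsigmoid(r_w - r_l).mean()

def dpo_implicit_reward(logp_theta, logp_ref, beta=0.1):
    """Implicit reward from Eq. 4, omitting beta * log Z(x),
    which cancels when two responses share the same prompt."""
    return beta * (logp_theta - logp_ref)

# Toy example: sequence log-probs (summed over tokens) for two responses.
r_w = dpo_implicit_reward(torch.tensor(-12.3), torch.tensor(-14.0))
r_l = dpo_implicit_reward(torch.tensor(-11.0), torch.tensor(-10.2))
loss = bt_reward_loss(r_w, r_l)
```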
Formally, the preference is modeled as

$$p_\theta(y_w \succ y_l \mid x) = \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right), \tag{5}$$

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta) = \mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\big[-\log p_\theta(y_w \succ y_l \mid x)\big]. \tag{6}$$

4 SPA: S PREAD P REFERENCE A NNOTATION TO BOOST A LIGNMENT OF LLM S

**Overview.** In this section, we present SPA: **S** pread **P** reference **A** nnotation via direct preference judgment, to align LLMs while mitigating the huge cost of preference dataset construction. Our main idea is to fully exploit the human prior knowledge within the small (seed) data, and progressively update the LLM to improve the alignment. To be specific, SPA iterates two steps: (1) data expansion with self-generated preference (Section 4.1) and (2) fine-tuning the LLM with self-refined preference learning (Section 4.2). See Figure 1 for the overview.

**Initial stage.** We assume that a small (seed) preference dataset $\mathcal{D}_0$ and an initial LLM $\pi_{\mathrm{init}}$ are given. Here, following the common practice (Ouyang et al., 2022; Rafailov et al., 2023; Ziegler et al., 2019), we use a $\pi_{\mathrm{init}}$ that has been supervised fine-tuned (SFT) on an instruction dataset (Chung et al., 2024; Wei et al., 2022a), but not yet aligned with human preference. Then, we first obtain a weakly aligned LLM $\pi_0$ by fine-tuning $\pi_{\mathrm{init}}$ on $\mathcal{D}_0$ using DPO (Rafailov et al., 2023) (Eq. 6). We adopt DPO among the various preference learning methods due to its simplicity and effectiveness.

4.1 D IRECT PREFERENCE JUDGMENT TO ALIGN LLM S WITH SELF - GENERATED DATA

For the $i$-th iteration ($i = 1, \ldots$), we assume that a new prompt set $X_i = \{x\}$ is available, _i.e._, $X_i \cap X_j = \emptyset$ for all $j = 0, \ldots, i-1$. [3] From $X_i$, we construct the $i$-th artificial preference dataset $\mathcal{D}_i = \{(x, y_l, y_w) \mid x \in X_i\}$, by using the LLM's intrinsic generation and reward modeling capabilities. Specifically, for each input prompt $x \in X_i$, we sample two responses $y_1$ and $y_2$ from $\pi_{i-1}$, _i.e._, $y_1, y_2 \sim \pi_{i-1}(x)$, where $\pi_{i-1}$ is the resulting model from the previous iteration. Then, using the reward captured with $\pi_{i-1}$ and $\pi_{\mathrm{init}}$ (Eq. 4), we measure the preference of $\pi_{i-1}$ between $y_1$ and $y_2$:

$$p_{i-1}(y_1 \succ y_2 \mid x) = \sigma\left(\beta \log \frac{\pi_{i-1}(y_1 \mid x)}{\pi_{\mathrm{init}}(y_1 \mid x)} - \beta \log \frac{\pi_{i-1}(y_2 \mid x)}{\pi_{\mathrm{init}}(y_2 \mid x)}\right). \tag{7}$$

Then, we directly judge the preference label as below and construct $\mathcal{D}_i$ through this:

$$(y_w, y_l) = (y_1, y_2) \ \text{if} \ p_{i-1}(y_1 \succ y_2 \mid x) > 0.5, \ \text{else} \ (y_w, y_l) = (y_2, y_1). \tag{8}$$

4.2 S ELF - REFINEMENT OF GENERATED PREFERENCE DATA FOR EFFECTIVE LEARNING

After the construction of $\mathcal{D}_i$, we conduct the $i$-th preference learning by fine-tuning $\pi_\theta$, which is initialized by $\pi_{i-1}$, using DPO (here, we also use $\pi_{i-1}$ as $\pi_{\mathrm{ref}}$ in Eq. 6). Learning the self-generated preference data $\mathcal{D}_i$ could improve the alignment by effectively spreading the human preference prior from $\mathcal{D}_0$ using the power of the LLM.
However, it also has a risk of potential labeling noise, which could occur from the distribution shift with $X_i$ or insufficient reward modeling with $\pi_{i-1}$. Therefore, we further propose an improved preference learning method by introducing a novel denoising technique: _self-refinement_ of preference labels with _de-coupled noise detection_.

3 $X_0 = \{x \mid (x, y_l, y_w) \in \mathcal{D}_0\}$

**Algorithm 1** SPA algorithm
**Input:** initial LLM $\pi_{\mathrm{init}}$, seed preference dataset $\mathcal{D}_0$, number of improving iterations $T$, new prompt sets $\{X_i\}_{i=1}^{T}$
Obtain an initial weakly aligned model $\pi_0$ using DPO with $\pi_{\mathrm{init}}$ and $\mathcal{D}_0$ (Eq. 6)
**for** $t = 1$ **to** $T$ **do**
- Synthesize preference data $\mathcal{D}_t$ with $\pi_{t-1}$ and $X_t$ (Eqs. 7 and 8)
- Initialize the training and reference models: $\pi_\theta \leftarrow \pi_{t-1}$, $\pi_{\mathrm{ref}} \leftarrow \pi_{t-1}$
- **for** mini-batch $B \sim \mathcal{D}_t$ **do**
  - $z_{\tilde\theta} \leftarrow$ de-coupled noise detection for $B$ from $\pi_\theta, \pi_{\mathrm{ref}}, X_t$ (Eqs. 11 and 12)
  - Calculate the training loss $\mathcal{L}_{\mathrm{rf}}$ with refined preference labels using $z_{\tilde\theta}$ and $\pi_\theta$ (Eq. 10)
  - Update the model parameters: $\theta \leftarrow \theta - \eta \nabla_\theta \mathcal{L}_{\mathrm{rf}}$
- **end for**
- Initialize the next-iteration model $\pi_t$ with the updated parameters $\theta$
**end for**
**return** $\pi_T$

**Self-refinement of preference labels:** Our key intuition is that the derived preference (Eq. 5) can be viewed as the confidence of the currently training LLM $\pi_\theta$ in the labels assigned by $\pi_{i-1}$. Then, $\pi_\theta$ would exhibit lower confidence if the given pair of responses is uncertain to answer, indicating a higher probability of labeling noise. Notably, we also remark that confidence is one of the most popular metrics in the noisy-label learning literature (Han et al., 2018; Reed et al., 2014; Sohn et al., 2020). Under this intuition, we first identify the $K$% least confident samples:

$$z_\theta = 1 \ \text{if} \ p_\theta(y_w \succ y_l \mid x) < \tau, \ \text{else} \ z_\theta = 0, \tag{9}$$

where $\tau$ is the confidence of the $K$-th percentile sample of $\mathcal{D}_i$. Then, with this (potential) noise identification label $z_\theta$, we refine the assigned preference label using label smoothing (Müller et al., 2019), to train $\pi_\theta$ less confidently when the risk of label noise is high (_i.e._, $z_\theta = 1$):

$$\mathcal{L}_{\mathrm{rf}}(\pi_\theta) = \mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}_i}\Big[-\big((1 - \alpha z_\theta)\log p_\theta(y_w \succ y_l \mid x) + \alpha z_\theta \log p_\theta(y_l \succ y_w \mid x)\big)\Big], \tag{10}$$

where $\alpha$ is a hyper-parameter. Then, we train $\pi_\theta$ using $\mathcal{L}_{\mathrm{rf}}(\pi_\theta)$ instead of the naive DPO loss (Eq. 6).

**De-coupled noise preference detection:** While learning with the refined preference labels reduces the risk of $\pi_\theta$ learning the noisy preference, its effectiveness could be limited, as the model $\pi_\theta$ used for noise detection originates from the label generation model $\pi_{i-1}$. Therefore, to further improve the effectiveness of our preference label refinement framework, we introduce a de-coupled noise detection (Han et al., 2018; Li et al., 2020) technique for LLM alignment. Specifically, we identify the preference noise by mimicking the preference prediction of a more strongly aligned LLM $\pi_{\tilde\theta}$: [4]

$$z_{\tilde\theta} = 1 \ \text{if} \ p_{\tilde\theta}(y_w \succ y_l \mid x) < \tau, \ \text{else} \ z_{\tilde\theta} = 0. \tag{11}$$

With this de-coupled identification, $\pi_\theta$ is trained with refined preference labels via Eq. 10, _i.e._, $z_{\tilde\theta}$ is used to substitute $z_\theta$ in Eq. 10.
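A minimal PyTorch sketch of the noise mask and the refined loss (Eqs. 9-11; our own illustration with hypothetical function names and toy confidence values), using the identity $p_\theta(y_l \succ y_w \mid x) = 1 - p_\theta(y_w \succ y_l \mid x)$; the mask can come from either $p_\theta$ (Eq. 9) or the approximated $p_{\tilde\theta}$ (Eq. 11):

```python
import torch

def noise_mask(p_w, K=10):
    """Eqs. 9/11: flag the K% least-confident pairs (tau = K-th percentile of p_w)."""
    tau = torch.quantile(p_w, K / 100.0)
    return (p_w < tau).float()

def refined_dpo_loss(p_w, z, alpha=0.1):
    """Noise-aware loss (Eq. 10), smoothing labels where z flags likely noise."""
    return -((1 - alpha * z) * torch.log(p_w)
             + alpha * z * torch.log(1 - p_w)).mean()

p_w = torch.tensor([0.92, 0.55, 0.98, 0.51])  # toy p_theta(y_w > y_l | x) per pair
z = noise_mask(p_w, K=25)                     # flags the least-confident pair
loss = refined_dpo_loss(p_w, z, alpha=0.1)
```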
Here, we obtain the prediction of $\pi_{\tilde\theta}$ by approximating its logit $h_{\tilde\theta}$ through a linear combination of the logits of $\pi_\theta$ and $\pi_{\mathrm{ref}}$. [5] It is motivated by the recent work (Liu et al., 2024) showing that models aligned via RLHF with varying $\beta$ are geometric mixtures of a reference model and a single aligned model:

$$h_{\tilde\theta}(x, y_{1:t-1}) = (1 + \lambda)\, h_\theta(x, y_{1:t-1}) - \lambda\, h_{\mathrm{ref}}(x, y_{1:t-1}), \tag{12}$$

where $\lambda > 0$ is a hyper-parameter and $y_{1:t-1}$ indicates the output sequence before the $t$-th output. We remark that this de-coupled noise identification by approximating $p_{\tilde\theta}(y_w \succ y_l \mid x)$ _does not require additional computations_ compared to DPO, since the required measurements $h_\theta$ and $h_{\mathrm{ref}}$ are obtained during the calculation of the original DPO objective (Eq. 6). Therefore, SPA only requires a few lines of additional code on top of the original DPO codebase. We present the full procedure of SPA in Algorithm 1.

4 With $\lambda$ in Eq. 12, $\pi_{\tilde\theta}$ is equivalent to a model trained with a $(1 + \lambda)$ times smaller KL term than $\pi_\theta$ via Eq. 3.
5 When $\pi_\theta(\cdot \mid x) := \mathrm{Softmax}(h_\theta(x))$, we refer to $h_\theta(x)$ as the logit of the LLM $\pi_\theta$ for the given input $x$.

5 E XPERIMENTS

In this section, we present our experimental results to answer the following questions:
_◦_ Does SPA improve the alignment of LLMs using only a small amount of human-labeled preference data? (Table 1, Figure 4)
_◦_ Does the proposed method outperform other preference labeling methods? (Table 2, Figure 3)
_◦_ Is SPA generalizable across various choices of seed data and types of LLMs? (Tables 3, 4, 5)
_◦_ What is the effect of each component in SPA? (Tables 6, 7)

5.1 E XPERIMENTAL SETUPS

**Models.** Unless mentioned otherwise, our experiments were conducted using the supervised fine-tuned Mistral-7b-0.1 model (Jiang et al., 2023a) as the initial model $\pi_{\mathrm{init}}$ in Section 4. Specifically, we use the open-sourced model [6] that follows the recipe of Zephyr (Tunstall et al., 2023) and is fine-tuned on the instructions of Ultrachat (Ding et al., 2023). More details are in Appendix B.

**Baselines.** To evaluate the effectiveness of the proposed preference judgment method (Eq. 7), we compare it with other preference judgment methods. Specifically, we consider baselines that train the model via Iterative DPO (Snorkel, 2024; Xu et al., 2023), which iteratively generates preference data and updates the model, using LLM-as-judge (Bai et al., 2022b; Zheng et al., 2023) (_i.e._, in-context learning) or an external powerful reward model (PairRM (Jiang et al., 2023b)) for the preference judgment. Notably, these approaches correspond to SPA with the judgment method changed and self-refinement removed. Details are presented in Appendix B.

**Datasets.** For the preference learning dataset, we utilized UltraFeedback (Cui et al., 2023), following previous works (Snorkel, 2024; Rosset et al., 2024). [7] To be specific, from this dataset, we first construct the seed data, consisting of 2K samples (3.3% of 60K) with prompts, responses, and ground-truth preference labels. We refer to the ground-truth preference labels provided by UltraFeedback as _gold labels_ in Tables 1 and 5. Then, the remaining samples are divided into subsets of 8K, 20K, and 30K samples, leaving only the prompts. These subsets were used as the prompt sets for iteration stages 1, 2, and 3, respectively.
Only for the experiments in Table 3 is the size of the seed data changed.

**Evaluations.** Following the common practice in LLM alignment, we mainly evaluate each model using (1) AlpacaEval 2.0 (Dubois et al., 2023; 2024; Li et al., 2023a). AlpacaEval 2.0 approximately evaluates human preference for instruction following. Using 805 instructions from various datasets, the evaluation is conducted by comparing the responses of GPT-4 (OpenAI, 2023) and the testing model to measure win rates. To mitigate the length bias of the LLM's preference (Wang et al., 2023b; Zheng et al., 2023), both original and length-controlled (LC) win rates are measured. The LC win rate is a win rate adjusted by neutralizing the effect of response length to focus on quality, using a separately trained regression model (Dubois et al., 2024). We also evaluate trained LLMs using (2) MT-Bench (Zheng et al., 2023) to assess different aspects of LLMs. Namely, MT-Bench evaluates a chatbot's overall abilities across multiple categories related to key LLM capabilities such as math, coding, roleplay, writing, etc. The evaluation is conducted by scoring responses to multi-turn questions using GPT-4. These benchmarks provide a thorough evaluation of LLMs' alignment with human preferences and their overall effectiveness in practical applications.

**Implementation details.** After the initialization stage, we conduct three rounds of data expansion with self-generated preference data. For data expansion, we sampled 2 responses independently per prompt with a temperature of 0.7. Then, using the SFT model as the reference model, we assign the preference label (Eq. 7). The initial DPO training to obtain $\pi_0$ was conducted for 3 epochs on the seed dataset. Training on each subsequent iteration was carried out for 1 epoch. For the hyper-parameter $\beta$ of DPO, we used a fixed value of $\beta = 0.1$. The batch size was set to 32, and the learning rate was $5 \times 10^{-7}$. We employed the AdamW optimizer and a cosine learning rate scheduler with a warm-up phase corresponding to 10% of the total training steps. For the hyper-parameters $\alpha$ and $K$% of SPA, we used fixed values of $\alpha = 0.1$ and $K = 10$. Additionally, a warm-up phase was included in the denoising stage, with denoising activated after 20% of the total training steps had been completed. Regarding the hyper-parameter $\lambda$ for de-coupled noise detection, we utilized progressively reduced values of 1/2, 1/4, and 1/8 for iterations 1, 2, and 3, respectively.

6 alignment-handbook/zephyr-7b-sft-full
7 "argilla/ultrafeedback-binarized-preferences-cleaned"

Table 1: **Main results.** Evaluation results on AlpacaEval 2.0 and MT-Bench with different variants of Mistral-7B-v0.1. The best scores are highlighted in **bold**.

| Models | Gold Label (%) | AlpacaEval 2.0 LC Win Rate (%) | AlpacaEval 2.0 Win Rate vs. GPT-4 (%) | MT-Bench Avg. Score (0-10) |
|---|---|---|---|---|
| Mistral-7B-v0.1 | - | 0.17 | 0.50 | 3.25 |
| Zephyr-7b-β | 100 | 11.75 | 10.03 | 6.87 |
| SFT | - | 7.58 | 4.72 | 6.34 |
| DPO | 3.3 | 9.03 | 7.68 | 6.81 |
| SPA (Ours) | 3.3 | **15.39** | **21.13** | **6.94** |

Table 2: **Comparison with baselines for preference judgment.** Evaluation results on AlpacaEval 2.0 and MT-Bench with iteratively trained models (from the SFT model) under different preference judgment methods. The best scores are highlighted in **bold**.

| Methods | External Model | AlpacaEval 2.0 LC Win Rate (%) | AlpacaEval 2.0 Win Rate vs. GPT-4 (%) | MT-Bench Avg. Score (0-10) |
|---|---|---|---|---|
| Iterative DPO (PairRM) | ✓ | 11.87 | 9.46 | **6.98** |
| Iterative DPO (LLM-as-judge) | ✗ | 9.28 | 9.18 | 6.67 |
| SPA (Ours) | ✗ | **15.39** | **21.13** | 6.94 |
5.2 M AIN RESULTS

After completing 3 iterations of data expansion and fine-tuning via SPA, the trained model achieved a 21.13% win rate against GPT-4 on the AlpacaEval 2.0 benchmark, as presented in Table 1. This represents a significant improvement compared to the win rate achieved when using only 3.3% of labeled data with standard DPO training (7.68% → 21.13%), while the length-controlled win rate is also improved (9.03% → 15.39%). In addition, SPA achieved a score of 6.94 on MT-Bench, clearly outperforming the model trained with DPO (6.81) on the same amount (3.3%) of gold-label data. More interestingly, our framework achieved superior performance in both win rate (10.03% vs. 21.13%) and length-controlled win rate (11.75% vs. 15.39%) compared to Zephyr-7b-β, which uses the same base model (Mistral-7B-0.1v) and SFT dataset but significantly more labeled preference data, _i.e._, 100% of the UltraFeedback dataset (vs. 3.3% for SPA). These significant improvements in both win rates clearly affirm the overall enhancement in performance from SPA.

Next, in Table 2, we present additional experimental results to validate the proposed preference judgment method. Namely, the three experiments in Table 2 can be viewed as Iterative DPO variants with different preference judgment methods. One can observe that SPA showed significantly better performance compared to the other methods. Specifically, SPA achieved a win rate of 21.13% against GPT-4 on AlpacaEval 2.0, compared to 9.46% for the baseline with an external reward model, PairRM. In terms of length-controlled win rate, SPA achieved 15.39%, surpassing the reward model's 11.84%. Here, we conjecture that the reason why Iterative DPO training with the proposed direct preference judgment method (using the training LLM) outperforms the case with inferred labels from the external reward model is related to the distribution shift: as the iterations increase, the distribution of the data generated with the LLM shifts further from the distribution of the seed preference data, and the effectiveness of the external reward model inevitably decreases.

Figure 3: **Improvements during iterations.** Length-controlled (LC) win rate (%) measured by AlpacaEval 2.0 is consistently improved by SPA, and it outperforms other baselines.
# E XPLORING THE C AMERA B IAS OF P ERSON R E - IDENTIFICATION

**Myungseo Song & Jin-Woo Park** mAy-I Inc. Seoul, Korea {myungseo.song,jin}@may-i.io
**Jong-Seok Lee** Yonsei University Seoul, Korea jong-seok.lee@yonsei.ac.kr

A BSTRACT

We empirically investigate the camera bias of person re-identification (ReID) models. Previously, camera-aware methods have been proposed to address this issue, but they are largely confined to the training domains of the models. We measure the camera bias of ReID models on unseen domains and reveal that camera bias becomes more pronounced under data distribution shifts. As a debiasing method for unseen domain data, we revisit feature normalization on embedding vectors. While the normalization has been used as a straightforward solution, its underlying causes and broader applicability remain unexplored. We analyze why this simple method is effective at reducing bias and show that it can be applied to detailed bias factors such as low-level image properties and body angle. Furthermore, we validate its generalizability across various models and benchmarks, highlighting its potential as a simple yet effective test-time postprocessing method for ReID. In addition, we explore the inherent risk of camera bias in unsupervised learning of ReID models. The unsupervised models remain highly biased towards camera labels even for seen domain data, indicating substantial room for improvement. Based on observations of the negative impact of camera-biased pseudo labels on training, we suggest simple training strategies to mitigate the bias. By applying these strategies to existing unsupervised learning algorithms, we show that significant performance improvements can be achieved with minor modifications.

1 I NTRODUCTION

Person re-identification (ReID) is the process of retrieving images of a query identity from gallery images. With recent advances in deep learning, a wide range of challenging ReID scenarios have been covered, including object occlusion (Miao et al., 2019; Somers et al., 2023), change of appearance (Jin et al., 2022), and infrared images (Wu et al., 2017; Wu & Ye, 2023). In general, inter-camera sample matching is not trivial, since the shared information among images from the same camera can easily mislead a model. This phenomenon is known as the problem of camera bias, where samples from the same camera tend to gather closer in the feature space. This increases false matching between query-gallery samples, since samples of different identities from the same camera can be considered too similar. To address the issue, camera-aware ReID methods (Luo et al., 2020; Wang et al., 2021; Chen et al., 2021; Cho et al., 2022; Lee et al., 2023) have been proposed, aiming to learn camera-invariant representations by leveraging camera labels of samples during training.

However, previous works on the camera bias of ReID models have mainly focused on the seen domains of the models, while the camera bias of ReID models on unseen domains has been overlooked. We observe that existing ReID models exhibit a large camera bias for unseen domain data. For example, Figure 1 describes the feature distance distributions between samples of a camera-aware model (Cho et al., 2022) trained on the Market-1501 (Zheng et al., 2015) dataset, using samples from the MSMT17 (Wei et al., 2018) dataset. Compared to the distance distributions of the seen domain samples, the distance distributions of the unseen domain samples are more separable.
In this paper, we first investigate the camera bias of existing ReID models on seen and unseen domain data. We observe that, regardless of the model types, there is a large camera bias under distribution shifts, and unsupervised models are vulnerable to camera bias even on seen domains. As a straightforward debiasing technique for unseen domains, we revisit the normalization method on the embedding features of ReID models.

Figure 1: Cosine distance distributions of a camera-aware ReID model on (a) the training domain (Market-1501) and (b) the unseen domain (MSMT17). The distances between samples within the same cameras are more skewed to the left when the data distribution is shifted.

Through comprehensive empirical analysis, we reveal why feature normalization effectively reduces biases towards camera labels and fine-grained factors such as low-level image properties and body angles, and demonstrate its general applicability to various ReID models. Additionally, we explore the inherent risk of camera bias in unsupervised learning (USL) of ReID models, observing the negative impact of camera-biased pseudo labels on training. Based on our analysis, we suggest simple training strategies applicable to existing USL algorithms, which significantly improve the performance. The main contributions of this work are summarized as follows:

- We investigate the camera bias of ReID models on unseen domain data, which has not been thoroughly studied. We provide a comprehensive analysis encompassing various learning methods and model architectures.
- We revisit the debiasing effects of normalization on embedding vectors of ReID models. The empirical analysis explains why it is effective for bias mitigation and shows its applicability to detailed bias factors and multiple models.
- We explore the risk of camera bias inherent in unsupervised learning of ReID models. From this, we show that the performance of existing unsupervised algorithms can be effectively enhanced by simple modifications that reduce the risk.

2 R ELATED WORK

In traditional person ReID methods, convolutional neural network (CNN) architectures have been popularly adopted with cross-entropy and triplet losses (Zheng et al., 2017; Hermans et al., 2017; Luo et al., 2019; Ye et al., 2021). When identity labels of training data are unavailable, pseudo labels are used instead, based on clustering of the extracted features (Fan et al., 2018; Lin et al., 2019; Yu et al., 2019; Zhang et al., 2019; Dai et al., 2022). Recently, transformer backbones (He et al., 2021; Luo et al., 2021b; Chen et al., 2023) and self-supervised pretraining (Fu et al., 2021; 2022; Luo et al., 2021b; Chen et al., 2023) have significantly improved ReID performance. To enhance the generalization ability of the models, a variety of domain-generalizable techniques have also been proposed (Dai et al., 2021; Song et al., 2019; Liao & Shao, 2021; Ni et al., 2023; Dou et al., 2023). However, it has been found that ReID models are biased towards the camera views of the given data. Camera-aware methods have been proposed to alleviate this problem, where camera labels of the samples are utilized in model training as auxiliary information (Luo et al., 2020; Zhuang et al., 2020; Zhang et al., 2021; Wang et al., 2021; Chen et al., 2021; Cho et al., 2022; Lee et al., 2023). For example, an inter-camera contrastive loss is proposed to minimize the variation of features from different cameras within the same class (Wang et al., 2021; Cho et al., 2022).
Zhuang et al. (2020) replace the batch normalization layers of a model with camera-based batch normalization layers conditioned on the camera labels of inputs to reduce the distribution gap. Some other studies (Gu et al., 2020; Luo et al., 2021a) post-process a feature by subtracting the mean feature within its camera view, but this is performed without justification and is limited to an unsupervised domain adaptation task. These previous studies have primarily focused on the bias of the models on the training domain data, while the bias on unseen domain data has been neglected. Meanwhile, we call the methods which do not take the camera views into account camera-agnostic methods.

3 Q UANTITATIVE ANALYSIS ON CAMERA BIAS

In this section, we quantitatively investigate the camera bias in existing ReID models. Camera bias is the phenomenon where the feature distribution is biased towards the camera labels of the samples, which degrades ReID performance. Many camera-aware methods have been proposed to address this problem. However, the scope of the discussion has been primarily limited to the training domain, and the camera bias on unseen domains has not been thoroughly explored. We focus on the camera bias of ReID models on unseen domains, examining various types of models including camera-aware/agnostic, supervised/unsupervised, and domain-generalizable approaches, with widely used backbones such as ResNet (He et al., 2016) and ViT (Dosovitskiy et al., 2021).

To measure the bias, we utilize Normalized Mutual Information (NMI), which quantifies the shared information between two clustering results. We extract the features of the samples and perform clustering on them using InfoMap (Rosvall & Bergstrom, 2008). Then, the camera bias is computed by the NMI between the cluster labels and the camera labels of the samples. The accuracy of the clusters is measured by the NMI between the cluster labels and the identity labels.

Table 1: Camera bias and accuracy of various state-of-the-art ReID models based on clustering results. "SL" and "CA" denote the supervised learning and camera-aware method, respectively. "Bias" and "Accuracy" denote the Normalized Mutual Information (NMI) scores between cluster labels and camera labels, and between cluster labels and identity labels, respectively, in ×100 scale. ISR is trained on external videos and the other models are trained on MSMT17-Train.

| Method | SL | CA | Backbone | MSMT17-Train (Bias / Accuracy) | MSMT17-Test (Bias / Accuracy) | Market-1501 (Bias / Accuracy) | CUHK03-NP (Bias / Accuracy) | PersonX (Bias / Accuracy) |
|---|---|---|---|---|---|---|---|---|
| CC (Dai et al., 2022) | ✗ | ✗ | R50 | 34.7 / 89.3 | 32.5 / 88.0 | 17.1 / 81.0 | 17.6 / 74.6 | 20.6 / 78.9 |
| PPLR (Cho et al., 2022) | ✗ | ✗ | R50 | 31.8 / 90.3 | 30.2 / 89.0 | 15.6 / 81.7 | 15.9 / 77.4 | 15.3 / 82.0 |
| TransReID-SSL (Luo et al., 2021b) | ✗ | ✗ | ViT | 29.3 / 93.1 | 27.1 / 92.8 | 9.7 / 92.2 | 7.0 / 84.2 | 12.5 / 88.8 |
| ISR (Dou et al., 2023) | ✗ | ✗ | ViT | 31.8 / 90.5 | 30.3 / 89.4 | 9.7 / 95.8 | 5.4 / 87.7 | 6.1 / 94.9 |
| PPLR-CAM (Cho et al., 2022) | ✗ | ✓ | R50 | 29.3 / 92.8 | 26.7 / 92.4 | 14.3 / 84.1 | 13.7 / 78.4 | 14.6 / 81.8 |
| TransReID (He et al., 2021) | ✓ | ✓ | ViT | 24.4 / 98.3 | 23.6 / 94.5 | 13.6 / 89.8 | 3.9 / 84.7 | 6.6 / 92.7 |
| SOLIDER (Chen et al., 2023) | ✓ | ✗ | ViT | 23.2 / 98.7 | 21.3 / 96.9 | 7.3 / 96.5 | 1.6 / 90.8 | 2.8 / 93.8 |
| Ground Truth | - | - | - | 21.1 / - | 19.2 / - | 6.4 / - | 0.1 / - | 0.0 / - |
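For concreteness, the Bias and Accuracy entries in Table 1 can be computed as follows (a minimal sketch assuming scikit-learn is available; the toy label arrays stand in for InfoMap cluster assignments and the true camera/identity labels of the same samples):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical toy labels, one entry per sample.
cluster  = np.array([0, 0, 1, 1, 2, 2])   # cluster assignments from InfoMap
camera   = np.array([0, 0, 0, 1, 1, 1])   # camera labels
identity = np.array([5, 5, 7, 7, 9, 9])   # identity labels

bias     = 100 * normalized_mutual_info_score(camera, cluster)    # "Bias" column
accuracy = 100 * normalized_mutual_info_score(identity, cluster)  # "Accuracy" column
```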
The results on MSMT17, Market-1501, CUHK03-NP (Zhong et al., 2017a), and PersonX (Sun & Zheng, 2019) are shown in Table 1, where the bias of the ground truth (_i.e._, the NMI between the identity labels and the camera labels) indicates the inherent imbalance in a dataset. All models except ISR (Dou et al., 2023) are trained on MSMT17, hence the other datasets are unseen domains for them. For ISR, all datasets are unseen domains. We make two notable observations from the results. First, the existing ReID models have a large camera bias on the unseen domains, regardless of their training setups or backbones. Second, the unsupervised models have a large camera bias on the seen domain, even on their training data. These imply that debiasing methods for unseen domains are needed in general, and that there is room for performance improvement of unsupervised methods by reducing the camera bias during training. Relatively, the recent supervised models exhibit less biased results on the training domain.

4 U NDERSTANDING CAMERA BIAS AND FEATURE NORMALIZATION

4.1 C AMERA - SPECIFIC FEATURE NORMALIZATION

In Section 3, we observed that ReID models have a large camera bias on unseen domains. As a straightforward debiasing method, we introduce camera-specific feature normalization, which postprocesses embedding vectors leveraging camera labels at test time. It is performed as follows. Suppose that a test dataset $X = \{(\mathbf{x}_1, \mathbf{y}_1), (\mathbf{x}_2, \mathbf{y}_2), \cdots, (\mathbf{x}_N, \mathbf{y}_N)\}$ with $N$ samples is given, where $\mathbf{x}_i$ and $\mathbf{y}_i$ denote the image and camera label of each sample, respectively. A pretrained encoder $f_\theta$ is used to extract embedding features $F = \{\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_N\}$, where $\mathbf{f}_i = f_\theta(\mathbf{x}_i)$. We split $F$ into $M$ subsets $F_1, F_2, \cdots, F_M$ depending on the camera labels, where the number of cameras is denoted by $M$. Then, the mean and standard deviation vectors for each camera, $\mathbf{m}_c$ and $\boldsymbol{\sigma}_c$, are computed as follows:

$$\mathbf{m}_c = \frac{1}{|F_c|}\sum_{\mathbf{f}_i \in F_c} \mathbf{f}_i \quad \text{and} \quad \boldsymbol{\sigma}_c = \sqrt{\frac{1}{|F_c|}\sum_{\mathbf{f}_i \in F_c} (\mathbf{f}_i - \mathbf{m}_c) \odot (\mathbf{f}_i - \mathbf{m}_c)}, \tag{1}$$

where $\odot$ denotes the element-wise multiplication. The camera-specific feature normalization on $\mathbf{f}_i$ with the camera label $\mathbf{y}_i$ is given by:

$$\hat{\mathbf{f}}_i = \frac{\mathbf{f}_i - \mathbf{m}_{\mathbf{y}_i}}{\boldsymbol{\sigma}_{\mathbf{y}_i}}. \tag{2}$$

This operation has been used in modified forms in camera mean subtraction (Gu et al., 2020; Luo et al., 2021a) and camera-specific batch normalization (Zhuang et al., 2020). In Zhuang et al. (2020), the normalization is followed by an affine transformation learned during training.
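The normalization in Eqs. 1-2 amounts to per-camera standardization of the embedding matrix. Below is a minimal NumPy sketch (ours; the small `eps` is a numerical safeguard not present in Eq. 2, and the function name is hypothetical):

```python
import numpy as np

def camera_normalize(F, cams, eps=1e-8):
    """Camera-specific feature normalization (Eqs. 1-2).
    F: (N, D) embedding features; cams: (N,) camera labels."""
    F_hat = np.empty_like(F)
    for c in np.unique(cams):
        idx = cams == c
        m = F[idx].mean(axis=0)                  # per-camera mean m_c
        s = F[idx].std(axis=0)                   # per-camera std sigma_c (population)
        F_hat[idx] = (F[idx] - m) / (s + eps)    # standardize within each camera
    return F_hat
```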
Figure 2: Analysis on the 384-dimensional embedding space of a ReID model. We measure the similarity of displacement vectors and mAP results while increasing the number of feature dimensions following different orders. (a) Variance of each dimension of camera mean features. (b) Cosine similarity of displacement vectors between samples of the same identities from different cameras along selected dimensions. (c) Result of camera-specific feature centering for selected dimensions.

However, how does the simple camera-specific feature normalization have a debiasing effect? We revisit camera-specific feature normalization by empirically analyzing why it mitigates the camera bias and demonstrating its generalizability through comprehensive experiments.

4.2 A NALYSIS ON FEATURE SPACE

We dive deeply into the feature space of a ReID model (Luo et al., 2021b) trained on MSMT17 using CUHK03-NP samples, to understand why the normalization can play a debiasing role.

**Sensitivity to camera variations differs across dimensions.** We first find that the sensitivity of each dimension of the feature space to camera variations is quite different from the others. We compute the mean features of each camera view and present the element-wise variances of the mean features in descending order in Figure 2(a). It is shown that some dimensions have a relatively large variation, which might be largely related to the camera bias of the model.

**Movements of features due to camera variations.** We indirectly investigate the features' movements due to camera changes using the identity labels and camera labels of the samples. We obtain displacement vectors from feature pairs of two different cameras with the same identities (details in Appendix B.1) and compute their average cosine similarity in selected dimensions, while increasing the number of selected dimensions. Three selection orders are used: (1) "Dimension index" follows the original index order of the dimensions, (2) "Camera sensitive" follows the descending order of the element-wise variances of the camera means, and (3) "Camera insensitive" follows the reverse order of (2). From Figure 2(b), we observe that the similarities of the displacement vectors in the camera-sensitive dimensions are relatively large. In other words, the features tend to move consistently in the camera-sensitive dimensions depending on a camera variation, implying that the effect of a camera change appears as a translation on these embedding dimensions.

**Sensitive dimensions dominate debiasing effects.** Then, can we debias the features by subtracting the camera mean features for those sensitive dimensions? To find out, we apply camera-specific centering on selected dimensions in Figure 2(c). Note that there is a clear difference in the improvement rate of ReID performance depending on the selection order. The performance gains are actually dominated by the camera-sensitive dimensions. For example, centering on the top-50 dimensions (about 13%) with higher variances achieves approximately half of the total gains, while centering on the top-50 dimensions with lower variances shows almost no gain. For the low-variance dimensions, half of the total gains requires centering of as many as 350 dimensions (about 91%). Similar results are obtained for other models in Appendix B.2.
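A sketch of the dimension-selection experiment in Figure 2(c) (our illustration; the function names are hypothetical): rank dimensions by the variance of the per-camera mean features, then apply camera-specific centering only to the selected dimensions:

```python
import numpy as np

def camera_sensitive_order(F, cams):
    """Rank feature dimensions by variance of per-camera mean features (Fig. 2(a))."""
    means = np.stack([F[cams == c].mean(axis=0) for c in np.unique(cams)])
    return np.argsort(means.var(axis=0))[::-1]   # most camera-sensitive first

def center_selected_dims(F, cams, dims):
    """Camera-specific centering applied only on the selected dimensions."""
    F_out = F.copy()
    for c in np.unique(cams):
        idx = cams == c
        F_out[np.ix_(idx, dims)] -= F[idx][:, dims].mean(axis=0)
    return F_out

# Usage sketch: center the top-50 most camera-sensitive dimensions.
# order = camera_sensitive_order(F, cams)
# F_debiased = center_selected_dims(F, cams, order[:50])
```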
4.3 A NALYSIS ON DETAILED BIAS FACTORS

We explore feature normalization for detailed bias factors of ReID models, including image properties and body angle of images. The model (Luo et al., 2021b) trained on MSMT17 is used.

Figure 3: Analysis on low-level properties. (a) Cosine similarity of displacement vectors by image transformations. (b) Property group-specific feature normalization. The dashed line indicates the performance without normalization. (c) (Property group, camera)-specific feature normalization. The dashed line indicates the performance with camera-specific (and property-agnostic) normalization.

**Movements of features due to image transformations.** Given the fine-grained nature of person ReID, the camera bias of a model might be closely related to differences in low-level image properties between cameras. Here, we analyze the changes of features due to image transformations applied to samples from CUHK03-NP, using eight low-level transformation functions with four levels of transformation strength, as shown in Figure 10. The feature of the $i$-th image and the feature of its transformed image at level $k$ are denoted by $\mathbf{f}_i^{(0)}$ and $\mathbf{f}_i^{(k)}$, respectively. For example, for a blurring function, $\mathbf{f}_i^{(4)}$ denotes the feature when the $i$-th image is most strongly blurred. Then, we compute the average cosine similarity between displacement vectors of the features after applying a transformation to the images for each level $k$, which is given by $\mathbb{E}_{i,j}\big[\mathrm{Sim}\big(\mathbf{f}_i^{(k)} - \mathbf{f}_i^{(k-1)}, \mathbf{f}_j^{(k)} - \mathbf{f}_j^{(k-1)}\big)\big]$. The result is shown in Figure 3(a). We observe that, for certain transformations such as decreasing brightness, the displacement vector ($\mathbf{f}_i^{(k)} - \mathbf{f}_i^{(k-1)}$) due to the transformation is similar across different images to some extent, which is analogous to the effect of camera variations.

**Normalization for image properties.** Then, can we reduce the biases of the model towards low-level properties by utilizing the feature normalization? To find out, we calculate the brightness, sharpness, contrast, and area of all samples, as visualized in Figure 11. Note that all samples in the dataset have almost the same contrast values. We divide the samples into $N$ groups of equal size for each property. For example, when dividing the samples into $N = 2$ groups based on brightness, we use the median brightness value as the threshold for group assignment. Here, a small but meaningful correlation between these group labels and the camera labels is observed, as shown in Figure 12. Then, we perform a group-specific feature normalization on the features using the property group labels. As presented in Figure 3(b), the normalization of the features based on the property groups is effective for brightness, sharpness, and area. It does not work for contrast since all contrast values are almost equal. In addition, we subdivide each property group into multiple groups based on the camera labels.
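The property-group assignment can be sketched as follows (ours, with a hypothetical function name; group-specific normalization then reuses the `camera_normalize` routine above, with property-group labels in place of camera labels):

```python
import numpy as np

def property_groups(values, n_groups=2):
    """Split samples into n roughly equal-size groups by a low-level property
    (e.g., brightness); for n_groups=2 the threshold is the median."""
    thresholds = np.quantile(values, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(values, thresholds)

brightness = np.random.default_rng(0).uniform(0, 1, size=100)  # toy property values
groups = property_groups(brightness, n_groups=2)
# Group-specific normalization: F_hat = camera_normalize(F, groups)
```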
2Direct Enhancement
SgymXhOEA5
# TRACKING OBJECTS THAT CHANGE IN APPEARANCE WITH PHASE SYNCHRONY

**Sabine Muzellec** _⋆_ CerCo, CNRS, Université de Toulouse, France; Carney Institute for Brain Science, Brown University, USA; sabine_muzellec@brown.edu **Drew Linsley** _⋆_ Carney Institute for Brain Science, Department of Cognitive & Psychological Sciences, Brown University, USA; drew_linsley@brown.edu **Girik Malik** Northeastern University, Boston, MA, USA **Alekh K. Ashok** Carney Institute for Brain Science, Department of Cognitive & Psychological Sciences, Brown University, USA **Rufin VanRullen** CerCo, CNRS, Université de Toulouse, France **Ennio Mingolla** Northeastern University, Boston, MA, USA **Thomas Serre** Carney Institute for Brain Science, Department of Cognitive & Psychological Sciences, Brown University, USA

ABSTRACT

Objects we encounter often change appearance as we interact with them. Changes in illumination (shadows), object pose, or the movement of non-rigid objects can drastically alter available image features. How do biological visual systems track objects as they change? One plausible mechanism involves attentional mechanisms for reasoning about the locations of objects independently of their appearances — a capability that prominent neuroscience theories have associated with computing through neural synchrony. Here, we describe a novel deep learning circuit that can learn to precisely control attention to features separately from their location in the world through neural synchrony: the complex-valued recurrent neural network (CV-RNN). Next, we compare object tracking in humans, the CV-RNN, and other deep neural networks (DNNs), using FeatureTracker: a large-scale challenge that asks observers to track objects as their locations and appearances change in precisely controlled ways. While humans effortlessly solved FeatureTracker, state-of-the-art DNNs did not. In contrast, our CV-RNN behaved similarly to humans on the challenge, providing a computational proof-of-concept for the role of phase synchronization as a neural substrate for tracking appearance-morphing objects as they move about.

1 INTRODUCTION

Think back to the last time you prepared a meal or built something. You could keep track of the objects around you even as they changed in shape, size, texture, and location. Higher biological visual systems have evolved to track objects using multiple visual strategies that enable object tracking under different visual conditions. For instance, when objects have distinct and consistent appearances over time, humans can solve the temporal correspondence problem of object tracking by "re-recognizing" them (Fig. 1a; Pylyshyn & Storm, 1988; Pylyshyn, 2006). When two or more objects in the world look similar to each other, and re-recognition becomes challenging, a complementary strategy is to track one of them by integrating their motion over time (Fig. 1b; Lettvin et al., 1959; Takemura et al., 2013; Kim et al., 2014; Adelson & Bergen, 1985; Frye, 2015; Linsley et al., 2021). The neural substrates for tracking objects by re-recognition or motion integration have been the focus of extensive studies over the past half-century. The current consensus is that distinct neural circuits are
responsible for each strategy (Lettvin et al., 1959; Takemura et al., 2013; Kim et al., 2014; Adelson & Bergen, 1985; Frye, 2015; Pylyshyn, 2006).

Figure 1: **How do biological visual systems track the object tagged by the yellow arrow?** (a) Sometimes, the object's appearance makes it easy to track (Pylyshyn, 2006; Pylyshyn & Storm, 1988). (b) Other times, when objects look similar, the target can be tracked by following its motion through the world (Lettvin et al., 1959; Takemura et al., 2013; Kim et al., 2014; Adelson & Bergen, 1985; Frye, 2015; Linsley et al., 2021). Here, we investigate a computational problem that has received far less attention: how do biological visual systems track objects when their colors, textures (c), or shapes (d) change over time? (e) We developed the FeatureTracker challenge to systematically evaluate humans and machine vision systems on this problem. In FeatureTracker, observers watch videos containing objects that change in color and/or shape over time, and have to decide if the target object, which begins in the red square (circled in white for clarity), ends up in the blue square by the end of a video. When presented with a FeatureTracker video, one possible strategy suggested by neuroscience theories is that the oscillatory activity of neural populations can keep track of different objects over time. Specifically, the target is encoded by a population of neurons that fire with a timing that differs from that of the population that responds to the distractors (Astrand et al., 2020). We approximate the cycle of the oscillation with complex-valued neurons. In the CV-RNN, the phase of a complex-valued neuron represents the object encoded by this neuron. The CV-RNN thus learns to tag the target with a phase value different from the phase value of the distractors.

Much less progress has been made in characterizing how visual systems track objects as their appearances change (Fig. 1c,d). However, visual attention likely plays a critical role in tracking (Blaser et al., 2000). Visual attention is considered essential to solve many visual challenges that occur during object tracking, such as maintaining the location of an object even as it is occluded from view (Koch & Ullman, 1987; Roelfsema et al., 1998; Busch & VanRullen, 2010; Herrmann & Knight, 2001; Pylyshyn & Storm, 1988; Pylyshyn, 2006). We hypothesize that visual attention similarly helps when tracking objects that change appearance by maintaining information about their location in the world independently of their appearances. How is this type of visual attention implemented in the brain? Prominent neuroscience theories have proposed that the synchronized firing of neurons reflects the allocation of visual attention. Specifically, neural synchrony enables populations of neurons to multiplex the appearance of objects with more complex visual routines controlled by attention (McLelland & VanRullen, 2016; Wutz et al., 2020; Frey et al., 2015). Neural synchrony could, therefore, help keep track of objects regardless of their exact appearance at any point in time. Previous work proposed using complex-valued representations in RNNs (Lee et al., 2022), and in other architectures, to implement neural synchrony in artificial models (Reichert & Serre, 2013; Löwe et al., 2022; Stanić et al., 2023). According to the framework proposed by Reichert & Serre (2013), each neuron in an artificial neural network can be represented as a complex number where the magnitude encodes specific object features, and the phase groups the features of different objects.
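As a toy illustration of this magnitude/phase split (our own sketch, not code from any of the cited models): two objects can share the magnitude channels of a feature vector while being tagged with different phases, and each object's features can then be read back out by projecting onto its phase.

```python
import numpy as np

# Two objects share the same feature dimensions but are tagged with different
# phases; summing their complex codes still lets us recover which features
# belong to which object by projecting onto each object's phase.
features_a = np.array([1.0, 0.0, 0.5])           # magnitudes for object A
features_b = np.array([0.0, 1.0, 0.5])           # magnitudes for object B
phase_a, phase_b = 0.0, np.pi / 2                # per-object phase tags
code = features_a * np.exp(1j * phase_a) + features_b * np.exp(1j * phase_b)

recovered_a = np.abs(code) * np.cos(np.angle(code) - phase_a)  # A's features
recovered_b = np.abs(code) * np.cos(np.angle(code) - phase_b)  # B's features
print(recovered_a.round(2))   # [1.  0.  0.5]
print(recovered_b.round(2))   # [0.  1.  0.5]
```

The readout is exact here only because the two phase tags are orthogonal; desynchronized (well-separated) phases are precisely what keeps the two objects' features from interfering.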
Such representations allow the modeling of various neuroscience theories (Singer & Gray, 2003; Singer, 2007; 2009) related to the role of neural synchrony. Here, we investigate whether the use of complex-valued representations to implement neural synchrony can help to solve the FeatureTracker challenge through large-scale computational experiments (see Fig. 1e).

**Contributions.** The appearances of objects often change as they move through the world. To systematically measure the tolerance of observers to these changes, we introduce the FeatureTracker challenge: a synthetic tracking task where the motion, color, and shape of objects are precisely controlled over time (Fig. 1e). In each FeatureTracker video, a human observer or a machine vision algorithm has to decide if a target object winds up in a blue square after beginning in a red square. The challenge is made more difficult by the presence of non-target objects that also change in appearance over time and which inevitably cross paths with the target, forcing observers to solve the resulting occlusions (Pylyshyn & Storm, 1988; Blaser et al., 2000; Linsley et al., 2021). This challenge can be further modulated by training and testing observers on objects with different appearance statistics. Through a series of behavioral and computational experiments using FeatureTracker, we discover the following:

- Humans are exceptionally accurate at tracking objects in the FeatureTracker challenge as these objects move through the world and change in color, shape, or both.
- On the other hand, DNNs struggle on FeatureTracker, especially when object color spaces differ between training and test.
- Inspired by neuroscience theories on how populations of neurons implement solutions to the binding problem of FeatureTracker, we incorporated a novel mechanism for computing attention through neural synchrony, using complex-valued representations, in a recurrent neural network architecture, which we call the complex-valued recurrent neural network (CV-RNN). The CV-RNN approaches human performance and decision-making on FeatureTracker.
- Our findings establish a proof-of-concept that neural synchrony may support object tracking in humans, and can induce similar capabilities in artificial visual systems.

We release FeatureTracker data, code, and human psychophysics at [https://github.com/S4b1n3/feature_tracker](https://github.com/S4b1n3/feature_tracker) to help the field investigate this gap between human and machine vision.

2 BACKGROUND AND RELATED WORK

**Visual routines** Ullman (1984) theorized that humans can compose atomic attentional operations, like those for segmenting or comparing objects, into rich "visual routines" that support reasoning. He further proposed that the core set of computations that comprise visual routines can be flexibly reused and applied to objects regardless of their appearance, making them a strong candidate for explaining how humans can track objects that change in appearance. Visual routines are likely implemented in brains through feedback circuits that control attention (Roelfsema et al., 2000), and potentially through neural synchrony (McLelland & VanRullen, 2016). Developing a computational understanding of how visual routines contribute to object tracking and how they might be implemented in brains would significantly advance the current state of cognitive neuroscience.
**Computing through neural synchrony** The empirical finding that alpha/beta (12–30 Hz) and gamma (>30 Hz) oscillations tend to be anti-correlated in primate cortex has motivated the development of theories on how the temporal synchronization of different groups of neurons may reflect an overarching computational strategy of brains. In the communication-through-coherence (CTC) theory, Fries (2015) proposed that alpha/beta activity carries top-down attentional signals, which reflect information about the current context and goals. Others have expanded on this theory to suggest that these top-down signals can be spatially localized in the cortex to multiplex attentional computations independently of the features encoded by neurons (Miller et al., 2024). While there have been many different theories proposed on how computing through oscillations works (McLelland & VanRullen, 2016; Lisman & Jensen, 2013; Grossberg, 1976; Milner, 1974; Mioche & Singer, 1989), here we assume an induced oscillation and study synchrony as a mechanism for visual routines and its potential for implementing object tracking in brains.

**Generalization and shortcut learning in DNNs** A drawback of DNNs' great power is their tendency to learn spurious correlations between inputs and labels, which can lead to poor generalization (Barbu et al., 2019; Geirhos et al., 2020b). Moreover, while object classification models have grown more accurate over the past decade — now matching and sometimes exceeding human performance (Shankar et al., 2020) — they have done so by learning recognition strategies that are becoming progressively less aligned with humans (Fel* et al., 2022; Linsley et al., 2023b;a). Synthetic datasets like FeatureTracker are useful for understanding why this misalignment occurs and guiding the development of novel architectures that can address it. The PathFinder challenge, which was originally developed to investigate the ability of observers to trace long curves in clutter (Linsley et al., 2018), was used to optimize Transformer and modern state space model architectures (Tay et al., 2021; Gu et al., 2021; Smith et al., 2022).

Figure 2: **Neural synchrony helps track objects that change in appearance.** (a) The shell game is designed to probe how a neural network, with the functional constraints of biological visual systems, could track objects as they change in appearance between frames one and two. Are the two images the same, or has the objects' color and/or orientation flipped (three possible responses)? (b) We tested a simplified model of the hierarchical visual system on the task, which consisted of two layers of neurons: (i) a convolutional layer with high-resolution feature maps, followed by (ii) a spatial average pooling of neuron responses and a layer of recurrently connected neurons (McLelland & VanRullen, 2016). 1c/2c are object colors, 1o/2o are object orientations; the loss of spatial resolution between the layers causes these object features to interfere. The model can detect the features present in the frame (red and blue color, as well as square and diamond orientations), but fails at binding the color and orientation with the position – hence it cannot differentiate Frame 1 from Frame 2. (c, d) The same architecture can learn to solve the task with a complex-valued mechanism for neural synchrony, in which the magnitude of neurons captures object appearances, and the phase captures object locations.
The most similar challenge to our FeatureTracker is PathTracker, which tested whether observers could track one object in a swarm of identical-looking objects as they briefly occlude each other while they move around (Linsley et al., 2021). Here, we extend PathTracker by adding parametric control over the shape and color of objects to test tracking as object appearances smoothly change.

**Complex-valued representations in artificial neural networks.** The neural network architectures that have powered the deep learning revolution can be seen as modeling the rates of neurons instead of their moment-to-moment spikes. Given this constraint, there have been multiple attempts to introduce neural synchrony into these models by transforming their neurons from real- to complex-valued. Early attempts at this approach showed that object segmentation can emerge from the phase of these complex-valued neurons (Zemel et al., 1995; Weber & Wermter, 2005; Reichert & Serre, 2013; Behrmann et al., 1998). These models relied on shallow architectures, small and poorly controlled datasets, and older training routines like the energy-based optimization methods used in Boltzmann machines that have fallen out of favor over recent years. Recently, there has been a renewed interest in neural synchrony as a mechanism for DNNs (Löwe et al., 2022; Stanić et al., 2023). Unlike these previous attempts, our CV-RNN only uses synchrony with complex-valued representations in its attention module. This makes the model far more scalable than prior attempts, as complex-valued units are at least twice as expensive as real-valued ones (only certain levels of quantization are possible with the former), and enables its use with spatiotemporal data.

3 MOTIVATION

How do biological visual systems track objects while they move through the world and change in appearance? Given that this problem has received little attention until now, we began addressing it through a toy experiment. We developed a simple shell game where observers had to describe how the colors, locations, and shapes of two objects changed from one point in time to the next (Fig. 2a; see SI A.5.1 for additional details). We then created a highly simplified model of a hierarchical and recurrent biological visual system to identify any challenges it may face with this game. The model was composed of an initial convolutional layer with high-resolution spatial feature maps, followed by a global average pooling layer (to approximate the coarser representations found in inferotemporal cortex), and a layer of recurrent neurons with more features than the first layer but no spatial map (McLelland & VanRullen, 2016; Fig. 2b). The convolutional layer was implemented using a standard PyTorch Conv2D layer, whereas the recurrent layer was implemented with the recently developed Index-and-Track (InT) recurrent neural network (RNN), which includes an abstraction of biological circuits for object tracking (Linsley et al., 2021). The combined model was trained on a balanced dataset of 10,000 samples from the shell game using a cross-entropy loss and the Adam optimizer (Kingma & Ba, 2014). In conditions of the game when the objects changed positions, this model performed close to chance (45% accuracy). The loss of spatial resolution between the model's early and deeper layers caused its representations of each object's appearance and location to interfere with each other (Figs. A.1 and A.2; see SI A.5 for more details).
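For concreteness, a minimal sketch of this simplified two-stage model is given below; the layer sizes are our own invention for illustration, and a plain GRU cell stands in for the InT RNN, which we do not reproduce here.

```python
import torch
import torch.nn as nn

class SimplifiedVisualSystem(nn.Module):
    """Toy stand-in for the two-stage model described above: a high-resolution
    convolutional stage, a global average pool that discards spatial
    resolution, and a recurrent stage integrating over frames."""
    def __init__(self, in_channels=3, conv_dim=32, hidden_dim=128, n_classes=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, conv_dim, kernel_size=5, padding=2)
        self.rnn = nn.GRUCell(conv_dim, hidden_dim)   # placeholder for the InT RNN
        self.readout = nn.Linear(hidden_dim, n_classes)

    def forward(self, frames):                        # frames: (B, T, C, H, W)
        h = None
        for t in range(frames.shape[1]):
            z = torch.relu(self.conv(frames[:, t]))   # (B, conv_dim, H, W)
            z = z.mean(dim=(2, 3))                    # global average pool -> (B, conv_dim)
            h = self.rnn(z, h)                        # recurrent integration
        return self.readout(h)                        # 3-way shell-game response

model = SimplifiedVisualSystem()
logits = model(torch.randn(4, 2, 3, 32, 32))          # a batch of two-frame games
```

The pooling step is the crux: once the feature map is averaged over space, "red", "blue", "square", and "diamond" are all present in one vector with no record of where each was, which is exactly the binding failure the shell game exposes.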
**Neural synchrony can implement visual routines for object tracking** The retinotopic organization of hierarchical visual systems provides an important constraint for developing models of object tracking: the spatial resolution of representations decreases as they move through the hierarchy. We need a mechanism that can resolve the interference that this loss of spatial resolution causes to object representations without expanding the capacity of the model. One potential solution to this problem is neural synchrony, which can multiplex different sources of information within the same neuronal population with minimal interference (Sternshein et al., 2011; Drew et al., 2009). Similarly, synchrony has been proposed to implement object-based attention to form perceptual groups based on their gestalt (Woelbern et al., 2002; Elliott & Müller, 2001). We, therefore, hypothesized that neural synchrony could rescue model performance in the shell game (Fig. 2c).

Figure 3: **Implementing neural synchrony through the complex-valued RNN (CV-RNN).** The CV-RNN augments the InT RNN from Linsley et al. (2021) (shown on the left) with neural synchrony attention through the use of complex-valued units (shown on the right). In the CV-RNN, $e_c$ and $z_c$ convert $e$ and $z$ to the complex domain, φ is a recurrent unit maintaining a complex representation of the input, and θ transforms φ into a spatial map of the current frame.

We adapted the recurrent InT circuit used in the second layer of our simplified biological visual system model into a new neural architecture capable of learning neural synchrony using complex-valued representations. This recurrent neural network (RNN) (Linsley et al., 2021), inspired by neural circuit models of motion perception (Berzhanskaya et al., 2007) and executive cognitive function (Wong & Wang, 2006), contains an attention module that can learn to track objects by integrating their motion (see SI A.6.1 for details). We reasoned that augmenting this attention module with neural synchrony could help the entire model learn to solve the shell game. Specifically, complex-valued neurons could enable the attention module to bind object features by synchronizing the phase of its neurons encoding features sharing the same location, and desynchronizing the phases when the location differs.
Idea Generation Category:
0Conceptual Integration
m2gVfgWYDO
# UNPOSED SPARSE VIEWS ROOM LAYOUT RECONSTRUCTION IN THE AGE OF PRETRAIN MODEL

**Yaxuan Huang** [1] _∗_ **Xili Dai** [2] _∗_ **Jianan Wang** [3] **Xianbiao Qi** [4] **Yixing Yuan** [1] **Xiangyu Yue** [5] _†_ 1 Hong Kong Center for Construction Robotics, The Hong Kong University of Science and Technology 2 The Hong Kong University of Science and Technology (Guangzhou) 3 Astribot 4 Intellifusion Inc. 5 Multimedia Lab (MMLab) and SHIAE, The Chinese University of Hong Kong

_∗_ Equal contribution, _†_ Corresponding author

Figure 1: We present a novel method for estimating room layouts from a set of unconstrained indoor images. Our approach demonstrates robust generalization capabilities, performing well on both in-the-wild datasets (Zhou et al., 2018) and out-of-domain cartoon (Weber et al., 2024) data.

ABSTRACT

Room layout estimation from multiple-perspective images is poorly investigated due to the complexities that emerge from multi-view geometry, which requires multi-step solutions such as camera intrinsic and extrinsic estimation, image matching, and triangulation. However, in 3D reconstruction, the advancement of recent 3D foundation models such as DUSt3R has shifted the paradigm from the traditional multi-step structure-from-motion process to an end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a novel method for multi-view room layout estimation leveraging the 3D foundation model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on a room layout dataset (Structure3D) with a modified objective to estimate structural planes. By generating uniform and parsimonious results, Plane-DUSt3R enables room layout estimation with only a single post-processing step and 2D detection results. Unlike previous methods that rely on a single perspective or panorama image, Plane-DUSt3R extends the setting to handle multiple-perspective images. Moreover, it offers a streamlined, end-to-end solution that simplifies the process and reduces error accumulation. Experimental results demonstrate that Plane-DUSt3R not only outperforms state-of-the-art methods on the synthetic dataset but also proves robust and effective on in-the-wild data with different image styles such as cartoon. [Our code is available at: https://github.com/justacar/Plane-DUSt3R](https://github.com/justacar/Plane-DUSt3R)

1 INTRODUCTION

3D room layout estimation aims to predict the overall spatial structure of indoor scenes, playing a crucial role in understanding 3D indoor scenes and supporting a wide range of applications. For example, room layouts could serve as a reference for aligning and connecting other objects in indoor environment reconstruction (Nie et al., 2020). Accurate layout estimation also aids robotic path planning and navigation by identifying passable areas (Mirowski et al., 2016). Additionally, room layouts are essential in tasks such as augmented reality (AR) where spatial understanding is critical. Therefore, 3D room layout estimation has attracted considerable research attention, with continued development of datasets (Zheng et al., 2020; Wang et al., 2022) and methods (Yang et al., 2022; Stekovic et al., 2020; Wang et al., 2022) over the past few decades. Methods for 3D room layout estimation (Zhang et al., 2015; Hedau et al., 2009; Yang et al., 2019) initially relied on the Manhattan assumption with a single perspective or panorama image as input.
Over time, advancements (Stekovic et al., 2020) have relaxed the Manhattan assumption to accommodate more complex settings, such as the Atlanta model, or even no geometric assumption at all. Recently, Wang et al. (2022) introduced a "multi-view" approach, capturing a single room with two panorama images, marking the first attempt to extend the input from a single image to multiple images. Despite this progress, exploration in this direction remains limited, hindered by the lack of well-annotated multi-view 3D room layout estimation datasets. Currently, multi-view datasets with layout annotations are very scarce. Even the few existing datasets, such as Structure3D (Zheng et al., 2020), provide only a small number of perspective views (typically ranging from 2 to 5). This scarcity of observable views highlights a critical issue: wide-baseline sparse-view structure from motion (SfM) remains an open problem. Most contemporary multi-view methods (Wang et al., 2022; Hu et al., 2022) assume known camera poses or start with noisy camera pose estimates. Therefore, solving wide-baseline sparse-view SfM would significantly advance the field of multi-view 3D room layout estimation. The recent development of large-scale training and improved model architectures offers a potential solution. While GPT-3 (Brown, 2020) and Sora (Brooks et al., 2024) have revolutionized NLP and video generation, DUSt3R (Wang et al., 2024) brings a paradigm shift for multi-view 3D reconstruction, transitioning from a multi-step SfM process to an end-to-end approach. DUSt3R demonstrates the ability to reconstruct scenes from unposed images, without camera intrinsics/extrinsics or even view overlap. For example, with two unposed, potentially non-overlapping views, DUSt3R can generate a 3D pointmap while inferring reasonable camera intrinsics and extrinsics, providing an ideal solution to the challenges posed by wide-baseline sparse-view SfM in multi-view 3D room layout estimation.

In this paper, we employ DUSt3R to tackle the multi-view 3D room layout estimation task. Most single-view layout estimation methods (Yang et al., 2022) follow a two-step process: 1) extracting 2D & 3D information, and 2) lifting the results to a 3D layout with layout priors. When extending this approach to multi-view settings, an additional step is required: establishing geometric primitive correspondence across multiple views before the 3D lifting step. Given the limited number of views in existing multi-view layout datasets, this correspondence-establishing step essentially becomes a sparse-view SfM problem. Hence, incorporating a single-view layout estimation method with DUSt3R to handle multi-view layout estimation is a natural approach. However, this may introduce a challenge: independent plane normal estimation for each image fails to leverage shared information across views, potentially reducing generalizability to unseen data in the wild. To this end, we adopt DUSt3R to solve correspondence establishment and 3D lifting simultaneously, jointly predicting plane normals and lifting 2D detection results to 3D. Specifically, we modify DUSt3R to estimate room layouts directly through a dense 3D point representation (pointmap), focusing exclusively on structural surfaces while ignoring occlusions. This is achieved by retraining DUSt3R with the objective of predicting only structural planes; the resulting model is named Plane-DUSt3R.
However, a dense pointmap representation is redundant for room layout, as a plane can be efficiently represented by its normal and offset rather than a large number of 3D points, which may consume significant space. To streamline the process, we leverage a well-established off-the-shelf 2D plane detector to guide the extraction of plane parameters from the pointmap. We then apply post-processing to obtain plane correspondences across different images and derive their adjacency relationships. Compared to existing room layout estimation methods, our approach introduces the first pipeline capable of unposed multi-view (perspective images) layout estimation. Our contributions can be summarized as follows:

1. We propose an unposed multi-view (sparse-view) room layout estimation pipeline. To the best of our knowledge, this is the first attempt at addressing this natural yet underexplored setting in room layout estimation.
2. The introduced pipeline consists of three parts: 1) a 2D plane detector, 2) a 3D information prediction and correspondence establishment method, Plane-DUSt3R, and 3) a post-processing algorithm. The 2D detector was retrained with SOTA results on the Structure3D dataset (see Table 3). Plane-DUSt3R achieves 5.27% and 5.33% improvements in the RRA and mAA metrics for the multi-view correspondence task compared to state-of-the-art methods (see Table 2).
3. In this novel setting, we also design several baseline methods for comparison to validate the advantages of our pipeline. Specifically, we outperform the baselines on four 2D projection metrics and one 3D metric, respectively (see Table 1). Furthermore, our pipeline not only performs well on the Structure3D dataset (see Figure 6), but also generalizes effectively to in-the-wild datasets (Zhou et al., 2018) and scenarios with different image styles such as cartoon style (see Figure 1).

2 RELATED WORK

**Layout estimation.** Most room layout estimation research focuses on single-perspective image inputs. Stekovic et al. (2020) formulates layout estimation as a constrained discrete optimization problem to identify 3D polygons. Yang et al. (2022) introduces line-plane constraints and connectivity relations between planes for layout estimation, while Sun et al. (2019) formulates the task as predicting 1D layouts. Other studies, such as Zou et al. (2018), propose to utilize monocular 360-degree panoramic images for more information. Several works extend the input setting from single panoramic to multi-view panoramic images, _e.g._ Wang et al. (2022) and Hu et al. (2022). However, there is limited research addressing layout estimation from multi-view RGB perspective images. Howard-Jenkins et al. (2019) detects and regresses 3D piece-wise planar surfaces from a series of images and clusters them to obtain the final layout, but this method requires posed images. The most related work is Jin et al. (2021), which focuses on a different task: reconstructing indoor scenes with planar surfaces from wide-baseline, unposed images. It is limited to two views and requires an incremental stitching process to incorporate additional views.

**Holistic scene understanding.** Traditional 3D indoor reconstruction methods are widely applicable but often lack explicit semantic information. To address this limitation, recent research has increasingly focused on incorporating holistic scene structure information, enhancing scene understanding by improving reasoning about physical properties, mostly centered on single-perspective images.
Several studies have explored the detection of 2D line segments using learning-based detectors (Zhou et al., 2019; Pautrat et al., 2021; Dai et al., 2022). However, these approaches often struggle to differentiate between texture-based lines and structural lines formed by intersecting planes. Some research has focused on planar reconstruction to capture higher-level information (Liu et al., 2018; Yu et al., 2019; Liu et al., 2019). Certain studies (Huang et al., 2018; Nie et al., 2020; Sun et al., 2021) have tackled multiple tasks alongside layout reconstruction, such as depth estimation, object detection, and semantic segmentation. Other works operate on constructed point maps; for instance, Yue et al. (2023) reconstructs floor plans from density maps by predicting sequences of room corners to form polygons. SceneScript (Avetisyan et al., 2024) employs large language models to represent indoor scenes as structured language commands.

**Multi-view pose estimation and reconstruction.** The most widely applied pipeline for pose estimation and reconstruction on a series of images involves SfM (Schönberger & Frahm, 2016) and MVS (Schönberger et al., 2016), which typically includes steps such as feature matching, finding correspondences, solving triangulations, and optimizing camera parameters. Most mainstream methods build upon this paradigm with improvements on various aspects of the pipeline. However, recent works such as DUSt3R (Wang et al., 2024) and MASt3R (Leroy et al., 2024) propose a reconstruction pipeline capable of producing globally-aligned pointmaps from unconstrained images. This is achieved by casting the reconstruction problem as a regression of pointmaps, significantly relaxing input requirements and establishing a simpler end-to-end paradigm for 3D reconstruction.

3 METHOD

In this section, we formulate the layout estimation task, transitioning from a single-view to a multi-view scenario. We then derive our multi-view layout estimation pipeline as shown in Figure 2 (Section 3.1). Our pipeline consists of three parts: a 2D plane detector $f_1$, a 3D information prediction and correspondence establishment method Plane-DUSt3R $f_2$ (Section 3.2), and a post-processing algorithm $f_3$ (Section 3.3).

Figure 2: Our multi-view room layout estimation pipeline. It consists of three parts: 1) a 2D plane detector $f_1$, 2) a 3D information prediction and correspondence establishment method Plane-DUSt3R $f_2$, and 3) a post-processing algorithm $f_3$.

3.1 FORMULATION OF THE MULTI-VIEW LAYOUT ESTIMATION TASK

We begin by revisiting the single-view layout estimation task and unifying the formulation of existing methods. Next, we extend the formulation from the single-view to the multi-view setting, providing a detailed analysis and discussion focusing on the choice of solutions. Before formulating the layout estimation task, we adopt the "geometric primitives + relationships" representation from Zheng et al. (2020) to model the room layout.

**Geometric Primitives.**

- **Planes:** The scene layout can be represented as a set of planes $\{\mathbf{P}_1, \mathbf{P}_2, \dots\}$ in 3D space and their corresponding 2D projections $\{p_1, p_2, \dots\}$ in images. Each plane is parameterized by its normal $\mathbf{n} \in \mathbb{S}^2$ and offset $d$. For a 3D point $\mathbf{x} \in \mathbb{R}^3$ lying on the plane, we have $\mathbf{n}^T \mathbf{x} + d = 0$.
- **Lines & Junction Points:** In 3D space, two planes intersect at a 3D line, and three planes intersect at a 3D junction point.
We denote the set of all 3D lines/junction points in the scene as $\{\mathbf{L}_1, \mathbf{L}_2, \dots\}$ / $\{\mathbf{J}_1, \mathbf{J}_2, \dots\}$ and their corresponding 2D projections as $\{l_1, l_2, \dots\}$ / $\{j_1, j_2, \dots\}$ in images.

**Relationships.**

- **Plane/Line relationships:** Adjacency matrices $\mathbf{W}_p$ / $\mathbf{W}_l$ with entries in $\{0, 1\}$ are used to model the relationships between planes/lines. Specifically, $\mathbf{W}_p(i, j) = 1$ if and only if $\mathbf{P}_i$ and $\mathbf{P}_j$ intersect along a line; otherwise, $\mathbf{W}_p(i, j) = 0$. Similarly to the plane relationship, $\mathbf{W}_l(i, j) = 1$ if and only if $\mathbf{L}_i$ and $\mathbf{L}_j$ intersect at a certain junction; otherwise, $\mathbf{W}_l(i, j) = 0$.

The pipeline of single-view layout estimation methods (Liu et al., 2019; Yang et al., 2022; Liu et al., 2018; Stekovic et al., 2020) can be formulated as:

$$I \xrightarrow{f_1} \{2D, 3D\} \xrightarrow{f_3} \{\mathbf{P}, \mathbf{L}, \mathbf{J}, \mathbf{W}\}, \tag{1}$$

where $f_1$ is a function that predicts 2D and 3D information from the input single view. Generally speaking, the final layout result $\{\mathbf{P}, \mathbf{L}, \mathbf{J}, \mathbf{W}\}$ can be directly inferred from the outputs of $f_1$. However, errors arising from $f_1$ usually adversely affect the results. Hence, a refinement step that utilizes prior information about the room layout is employed to further improve the performance. Therefore, $f_3$ typically encompasses post-processing and refinement steps, where the post-processing step generates an initial layout estimation and the refinement step improves the final results. For instance, Yang et al. (2022) chooses the HRNet network (Wang et al., 2020) as the $f_1$ backbone to extract the 2D plane $p$ and line $l$, and to predict the 3D plane normal $\mathbf{n}$ and offset $d$ from the input single view. After obtaining the initial 3D layout from the outputs of $f_1$, the method reprojects the 3D line to a 2D line $\hat{l}$ on the image and compares it with the detected line $l$ from $f_1$. $f_3$ minimizes the error $\|\hat{l} - l\|_2^2$ to optimize the 3D plane normal. In other words, it uses the better-detected 2D line to improve the estimated 3D plane normal. In contrast, Stekovic et al. (2020) uses a different approach: its $f_1$ predicts a 2.5D depth map instead of a 2D line $l$ and uses the more accurate depth results to refine the estimated 3D plane normal. Among the works that follow the general framework of equation (1) (Liu et al., 2019; 2018), Yang et al. (2022) stands out as the best single-view perspective-image layout estimation method without relying on the Manhattan assumption. Therefore, we present its formulation in equation (2) and extend it to multi-view scenarios:

$$I \xrightarrow{f_1} \{p, l, \mathbf{n}, d\} \xrightarrow{f_3} \{\mathbf{P}, \mathbf{L}, \mathbf{J}, \mathbf{W}\}. \tag{2}$$

In room layout estimation from unposed multi-view images, two primary challenges arise: 1) camera pose estimation, and 2) 3D information estimation from multi-view inputs. Camera pose estimation is particularly problematic given the scarcity of annotated multi-view layout datasets. Thanks to the recent advancements in 3D vision with pretrained models, this challenge can be effectively bypassed: DUSt3R (Wang et al., 2024) has demonstrated the ability to reconstruct scenes from unposed images without requiring camera intrinsics or extrinsics, and even without overlap between views.
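As a concrete illustration of the plane parameterization above (our own sketch, not the paper's code): given the 3D points of a pointmap that a 2D detector assigns to one structural plane, the normal $\mathbf{n}$ and offset $d$ satisfying $\mathbf{n}^T\mathbf{x} + d \approx 0$ can be recovered by a least-squares fit.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: recover (n, d) with ||n|| = 1 such that
    n^T x + d ~ 0 for the given (N, 3) array of 3D points, e.g. the pointmap
    pixels assigned to one structural plane by the 2D detector."""
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance: the right-singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n @ centroid                 # the fitted plane passes through the centroid
    return n, d

# Toy check on a noisy z = 0 plane: expect n ~ [0, 0, +/-1] and d ~ 0.
pts = np.random.randn(500, 3) * [1.0, 1.0, 0.01]
n, d = fit_plane(pts)
```

This is also why a dense pointmap is redundant for layout purposes: once the per-plane points are grouped, each wall, floor, or ceiling collapses to the four numbers $(\mathbf{n}, d)$.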
Moreover, the 3D pointmap generated by DUSt3R can provide significantly improved 3D information, such as plane normals and offsets, compared to single-view methods (Yang et al., 2022) (see Table 1 in the experiments section). Therefore, DUSt3R represents a critical advancement in extending single-view layout estimation to multi-view scenarios. Before formulating the multi-view solution, we first present the key 3D representations of DUSt3R: the pointmap $\mathbf{X}$ and the camera pose $\mathbf{T}$. The camera pose $\mathbf{T}$ is obtained through global alignment, as described in DUSt3R (Wang et al., 2024).

- **Pointmap** $\mathbf{X}$: Given a set of RGB images $\{I_1, \dots, I_n\}$, $I_i \in \mathbb{R}^{H \times W \times 3}$, captured from distinct viewpoints of the same indoor scene, we associate each image $I_i$ with a canonical pointmap $\mathbf{X}_i \in \mathbb{R}^{H \times W \times 3}$. The pointmap represents a one-to-one mapping from each pixel $(u, v)$ in the image to a corresponding 3D point in the world coordinate frame: $(u, v) \in \mathbb{R}^2 \mapsto \mathbf{X}(u, v) \in \mathbb{R}^3$.
- **Camera Pose** $\mathbf{T}$: Each image $I_i$ is associated with a camera-to-world pose $\mathbf{T}_i \in SE(3)$.

Now, the sparse-view layout estimation problem can be formulated as shown in equation (3):

$$\{I_1, I_2, \dots\} \xrightarrow{f_1, f_2} \{p, l, \mathbf{X}, \mathbf{T}\} \xrightarrow{f_3} \{\mathbf{P}, \mathbf{L}, \mathbf{J}, \mathbf{W}\}. \tag{3}$$

In this work, we adopt the HRNet backbone from Yang et al. (2022) as $f_1$. In the original DUSt3R (Wang et al., 2024) formulation, the ground truth pointmap $\mathbf{X}^{obj}$ represents the 3D coordinates of the entire indoor scene. In contrast, we are interested in a plane pointmap $\mathbf{X}^{p}$ that represents the 3D coordinates of structural plane surfaces, including walls, floors, and ceilings. This formulation intentionally disregards occlusions caused by non-structural elements, such as furniture within the room. Our objective is to predict the scene layout pointmap without occlusions from objects, even when the input images contain occluding elements. For simplicity, any subsequent reference to $\mathbf{X}$ in this paper refers to the newly defined plane pointmap $\mathbf{X}^{p}$. We introduce Plane-DUSt3R as $f_2$ and directly infer the final layout via $f_3$ without the need for any refinement.

3.2 $f_2$: PLANE-BASED DUST3R

The original DUSt3R outputs pointmaps that capture all 3D information in a scene, including furniture, wall decorations, and other objects. However, such excessive information introduces interference when extracting geometric primitives for layout prediction, such as planes and lines. To obtain a structural plane pointmap $\mathbf{X}$, we modify the data labels from the original depth map (Figure 4(a)) to the **structural plane depth map** (Figure 4(b)), and then retrain the DUSt3R model. This updated objective guides DUSt3R to predict the pointmap of the planes while ignoring other objects. The original DUSt3R does not guarantee output at a metric scale, so we also trained a modified version of Plane-DUSt3R that produces **metric-scale** results. Given a set of image pairs $\mathcal{P} = \{(I_i, I_j) \mid i \neq j,\ 1 \le i, j \le n,\ I \in \mathbb{R}^{H \times W \times 3}\}$, for each image pair, the model comprises two parallel branches. The architecture is shown in Figure 3, and its details can be found in Appendix A. The regression loss function is defined as the
Idea Generation Category:
1Cross-Domain Application
DugT77rRhW
# A NDROID W ORLD : A D YNAMIC B ENCHMARKING E NVIRONMENT FOR A UTONOMOUS A GENTS Christopher Rawles _[∗]_ [1], Sarah Clinckemaillie _[†]_ [2], Yifan Chang _[†]_ [2], Jonathan Waltz [2], Gabrielle Lau [2], Marybeth Fair [2], Alice Li [1], William Bishop [1], Wei Li [1], Folawiyo Campbell-Ajala [1], Daniel Toyama [1], Robert Berry [1], Divya Tyamagundlu [2], Timothy Lillicrap [1], and Oriana Riva [1] 1 _Google DeepMind_ 2 _Google_ A BSTRACT Autonomous agents that execute human tasks by controlling computers can enhance human productivity and application accessibility. However, progress in this field will be driven by realistic and reproducible benchmarks. We present A NDROID W ORLD, a fully functional Android environment that provides reward signals for 116 programmatic tasks across 20 real-world Android apps. Unlike existing interactive environments, which provide a static test set, A NDROID W ORLD dynamically constructs tasks that are parameterized and expressed in natural language in unlimited ways, thus enabling testing on a much larger and more realistic suite of tasks. To ensure reproducibility, each task includes dedicated initialization, success-checking, and tear-down logic, which modifies and inspects the device’s system state. We experiment with baseline agents to test A NDROID W ORLD and provide initial results on the benchmark. Our best agent can complete 30.6% of A NDROID W ORLD ’s tasks, leaving ample room for future work. Furthermore, we adapt a popular desktop web agent to work on Android, which we find to be less effective on mobile, suggesting future research is needed to achieve universal, cross-platform agents. Finally, we also conduct a robustness analysis, showing that task variations can significantly affect agent performance, demonstrating that without such testing, agent performance metrics may not fully reflect practical challenges. A NDROID W ORLD and the experiments in this paper are available at [https://github.com/google-research/android_world.](https://github.com/google-research/android_world) 1 I NTRODUCTION Autonomous agents that interpret natural language instructions and operate computing devices can provide enormous value to users by automating repetitive tasks, augmenting human intelligence, and accomplishing complex workflows. However, a key research challenge remains the realistic evaluation of these agents in real-world settings. Despite growing enthusiasm for building autonomous agents (Deng et al., 2023; Rawles et al., 2023; Zheng et al., 2024a; Koh et al., 2024; Kim et al., 2024; He et al., 2024; Gravitas, 2023; Wu et al., 2023; Xie et al., 2023) most existing approaches for evaluation compare an agent’s actions at each step to a previously collected human demonstration (Deng et al., 2023; Rawles et al., 2023; Yang et al., 2023b; Zhang & Zhang, 2023; L`u et al., 2024; Zhang et al., 2024c; Yan et al., 2023; Li et al., 2024). Measuring performance in this way can be misleading because when performing tasks online in real environments agents can take multiple paths to solve tasks, environments may behave non-deterministically, and agents can dynamically learn from mistakes to correct their actions (Shinn et al., 2023; Liu et al., 2018b; Li et al., 2023b; Pan et al., 2024). For this reason, online evaluation of agents in realistic environments able to reward task outcome provides a gold standard for evaluation. 
While there is an emerging body of work to address this need across different environments (Zhou et al., 2023; Koh et al., 2024; Drouin et al., 2024; Lee et al., 2024; Xie et al., 2024; Bonatti et al., 2024; Zheng et al., 2024b), there is no comprehensive solution for mobile platforms, such as Android, which are used by billions of users and therefore represent environments in which automation agents may be very productively employed.

_∗_ Lead contributor. Contact: crawles@google.com _†_ Equal contribution.

Figure 1: ANDROIDWORLD is an environment for building and testing autonomous agents. Example task templates include "In Simple Calendar Pro, create a calendar event on {year}-{month}-{day} at {hour}h with the title {title} and the description {description}. The event should last for {duration_mins} mins." and "Add a location marker for {location} in the OsmAnd maps app."

We introduce ANDROIDWORLD to address this. At its core, ANDROIDWORLD offers a reliable means of obtaining reward signals for tasks performed by agents in realistic mobile environments. Reward signals are quantitative metrics that indicate functional correctness of a task, i.e., is the stated goal achieved? For example, for the task "Send a text message to Jane confirming I'll be there," a positive reward indicates that the relevant message has been sent. Unlike simulated environments (Tassa et al., 2018; Shridhar et al., 2020) or games (Mnih et al., 2013; Silver et al., 2016; Vinyals et al., 2019; Wang et al., 2023b; Tan et al., 2024; Toyama et al., 2021), real-world apps and websites do not inherently offer explicit reward signals. While human (Rawles et al., 2023; Zheng et al., 2024a; Pan et al., 2024; Kinniment et al., 2023) or LLM-based (Chiang et al., 2024; Zheng et al., 2023; Liu et al., 2023; Du et al., 2023; Ma et al., 2023; Pan et al., 2024; He et al., 2024) judges can be employed to reward the outcome of a task, these approaches scale poorly or are not fully reliable, respectively. Alternatively, environments for autonomous agents which provide automated ground-truth rewards for complex workflows have been developed (Yao et al., 2023; Zhou et al., 2023; Koh et al., 2024; Xie et al., 2024; Bonatti et al., 2024). We find two problems with these environments. First, they are constrained to desktop computing environments, overlooking the mobile domain, which is of paramount importance given the ubiquity and diversity of mobile devices in the real world. Second, they are limited in their real-world diversity and scale. Crucially, unlike in real-world scenarios where conditions and task inputs vary widely, these environments support only static test specifications, meaning that when task parameters deviate, the reward signal is likely to break.
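To make the parameterized-task idea concrete, the following is a hypothetical sketch of what such a task could look like; the class, the `device_state` helper, and all of its methods are our own invention for illustration and not ANDROIDWORLD's actual API.

```python
import random

class AddCalendarEvent:
    """Hypothetical sketch of a parameterized task with dedicated
    initialization, success-checking, and tear-down logic."""
    template = ("In Simple Calendar Pro, create a calendar event on {date} "
                "at {hour}h with the title {title}.")

    def initialize(self, device_state):
        # Randomly generated parameters make every episode a fresh task instance.
        self.params = {
            "date": f"2023-10-{random.randint(1, 28):02d}",
            "hour": random.randint(8, 18),
            "title": random.choice(["Meeting", "Workout", "Call"]),
        }
        self.goal = self.template.format(**self.params)
        device_state.clear_calendar()            # dedicated start-state setup

    def is_successful(self, device_state):
        # Reward is read from the app's own persistent state, not the UI,
        # so it stays durable as the parameters vary.
        events = device_state.query_calendar_events()
        return any(e.title == self.params["title"]
                   and e.start == (self.params["date"], self.params["hour"])
                   for e in events)

    def tear_down(self, device_state):
        device_state.clear_calendar()            # restore a clean device
```

Because the success check queries the same system state the app itself writes to, the reward signal does not break when `{date}`, `{hour}`, or `{title}` change between episodes.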
We seek to develop a comprehensive benchmark that addresses the limitations of the existing approaches above for evaluating automation agents in mobile environments. ANDROIDWORLD does this by spanning 20 Android apps with a total of 116 programmatic tasks that provide ground-truth rewards. Unlike existing test environments (MiniWoB++ (Shi et al., 2017) being a notable exception), each task in ANDROIDWORLD is dynamically instantiated using randomly-generated parameters, challenging agents with millions of unique task goals and conditions. While MiniWoB++ consists of simple, synthetic websites, ANDROIDWORLD leverages actual Android applications. A main challenge that ANDROIDWORLD must address is how to ensure that reward signals are durable when using real-world applications and varying task parameters dynamically. ANDROIDWORLD solves this by leveraging the extensive and consistent state management capabilities of the Android OS, using the same mechanisms that the apps themselves utilize to store and update data. In addition to providing a comprehensive benchmark, ANDROIDWORLD is lightweight, requiring only 2 GB of memory and 8 GB of disk space, and is designed with convenience in mind. It connects agents to the Android OS by leveraging the Python library AndroidEnv (Toyama et al., 2021) to connect to the freely available Android Emulator.[1]

Table 1: Comparison of different datasets and environments for benchmarking computer agents.

| | Env? | # of apps or websites | # task templates | Avg # task instances | Reward method | Platform |
|---|---|---|---|---|---|---|
| GAIA | ✗ | n/a | 466 | 1 | text-match | None |
| Mind2Web | ✗ | 137 | 2350 | 1 | None | Desktop Web |
| WebLINX | ✗ | 155 | 2337 | 1 | None | Desktop Web |
| WebVoyager | ✗ | 15 | 643 | 1 | LLM judge | Desktop Web |
| PixelHelp | ✗ | 4 | 187 | 1 | None | Android |
| MetaGUI | ✗ | 6 | 1125 | 1 | None | Android |
| MoTIF | ✗ | 125 | 4707 | 1 | None | Android (Apps+Web) |
| AitW | ✗ | 357+ | 30378 | 1 | None | Android (Apps+Web) |
| AndroidControl | ✗ | 833 | 15283 | 1 | None | Android (Apps+Web) |
| OmniAct | ✗ | 60+ | 9802 | 1 | None | Desktop (Apps+Web) |
| AndroidArena | ✗ | 13 | 221 | 1 | Action match/LLM | Android (Apps+Web) |
| LlamaTouch | ✗ | 57 | 496 | 1 | Screen match | Android (Apps+Web) |
| MiniWoB++ | ✓ | 1 | 114 | ∞ | HTML/JS state | Web (synthetic) |
| WebShop | ✓ | 1 | 12k | 1 | product attrs match | Desktop Web |
| WebArena | ✓ | 6 | 241 | 3.3 | url/text-match | Desktop Web |
| VisualWebArena | ✓ | 4 | 314 | 2.9 | url/text/image-match | Desktop Web |
| WorkArena | ✓ | 1 | 29 | 622.4 | cloud state | Desktop Web |
| Mobile-Env | ✓ | 1 | 13 | 11.5 | regex | Android (Apps) |
| B-MoCA | ✓ | 4 | 6 | 1.9 | regex | Android (Apps+Web) |
| MMInA | ✓ | 14 | 1050 | 1 | text-match | Desktop web |
| OSWorld | ✓ | 9 | 369 | 1 | device/cloud state | Desktop (Apps+Web) |
| WindowsAgentArena | ✓ | 11 | 154 | 1 | device state | Desktop (Apps+Web) |
| AgentStudio | ✓ | 9 | 205 | 1 | device state | Desktop (Apps+Web) |
| **AndroidWorld** | ✓ | 20 | 116 | ∞ | device state | Android (Apps+Web) |

In addition to the 116 Android tasks, we extend ANDROIDWORLD with web tasks by integrating the MiniWoB++ (Shi et al., 2017; Liu et al., 2018a) benchmark into it. To demonstrate ANDROIDWORLD's usefulness as a benchmark, we build and release a multi-modal agent, M3A (Multimodal Autonomous Agent for Android), and establish state-of-the-art results on ANDROIDWORLD. We analyze M3A's performance using both multimodal and text-only input, and we observe that while multimodal perception can improve performance in some cases, it generally does not outperform the text-only approach. On ANDROIDWORLD, M3A achieves a 30.6% success rate, which surpasses that of a web agent adapted for Android but remains significantly lower than the human success rate of 80.0%.
In pursuit of building robust UI control agents, our study includes comprehensive tests under varied real-world conditions, demonstrating significant performance variations primarily driven by changes in intent parameters. We make the following contributions: (i) the creation of a new, highly diverse and realistic mobile UI control agent environment; (ii) the establishment of benchmark performance with a state-of-the-art multimodal agent; and (iii) a careful analysis demonstrating the need to evaluate agents across variable task parameters and conditions due to the inherent stochasticity in both models and environments.

2 RELATED WORK

Table 1 compares existing evaluation environments for autonomous UI agents.

2.1 INTERACTIVE EVALUATION ENVIRONMENTS

Effective evaluation of autonomous agents requires benchmarks that mimic real-world scenarios, but also interactive environments that provide reward signals upon successful task completion (Rawles et al., 2023; Deng et al., 2023; Abramson et al., 2022; Ruan et al., 2023; Chen et al., 2021). Many existing benchmarking environments target web browsing. MiniWoB++ (Shi et al., 2017; Liu et al., 2018b) consists of small, synthetic HTML pages with parameterizable tasks which allow for unlimited task variability. WebShop (Yao et al., 2023) provides a simulated e-commerce environment, whereas WebArena (Zhou et al., 2023) and VisualWebArena (Koh et al., 2024) consist of simulated websites across up to six domains. WorkArena (Drouin et al., 2024) consists of 29 tasks for enterprise software. GAIA (Mialon et al., 2023) is a static dataset that tests an agent's ability to interact with live web environments. MMInA (Zhang et al., 2024e) is a multihop and multimodal benchmark designed to evaluate agents for compositional Internet tasks. Towards building computer-use agents, OSWorld (Xie et al., 2024), WindowsAgentArena (Bonatti et al., 2024), and AgentStudio (Zheng et al., 2024b) provide a test suite of tasks for desktop computer interfaces and custom execution-based evaluation scripts across 9, 11, and 9 apps, respectively. In the mobile domain, existing benchmarks are limited and do not capture the diversity of real-world mobile interactions, containing low-complexity tasks or covering a limited number of applications. B-MoCA's (Lee et al., 2024) evaluation is based on 6 simple tasks (e.g., "Call 911", "turn on airplane mode") across 4 apps,[2] validated using regular expressions. Mobile-Env (Zhang et al., 2024b) offers task reproducibility limited to 13 task templates for a single app (WikiHow). While ANDROIDWORLD shares the mobile OS focus of B-MoCA and Mobile-Env, it is more comparable to OSWorld (and WindowsAgentArena, which builds on top of OSWorld) in terms of task complexity and the diversity of interactions it supports. ANDROIDWORLD enhances OSWorld's approach by dynamically constructing the start states of an agent's run and varying the task parameters in unlimited ways, thus allowing for a new type of evaluation under varying real-world conditions. Other studies leverage human evaluation (Rawles et al., 2023; Zheng et al., 2024a; Bishop et al., 2024) for tasks where automatic evaluation is not available.

[1] The Android Emulator is packaged as part of Android Studio, which can be downloaded from https://developer.android.com/studio
Lastly, emerging research (Pan et al., 2024; He et al., 2024; Xing et al., 2024; Zheng et al., 2024b) explores the potential of multimodal models to generalize agent evaluations to new settings, though this area requires further research to achieve accuracy comparable to manually-coded rewards. AndroidEnv (Toyama et al., 2021) provides a mechanism to manage communication with the Android emulator, similar to Playwright and Selenium for web environments. While ANDROIDWORLD leverages this functionality, it diverges in its reward system. AndroidEnv's approach requires modifying application source code and implementing task-specific logging statements, making it well-suited for gaming environments with easily verifiable success criteria. In contrast, ANDROIDWORLD implements a non-invasive reward mechanism, allowing it to create a benchmark suite for apps whose source code is unavailable and to reuse validation components across different apps. This approach enables ANDROIDWORLD to cover a broader range of real-world mobile tasks.

2.2 STATIC DATASETS FOR UI AUTOMATION

Datasets derived from human interactions provide proxy metrics that correlate with real-world agent performance (Li et al., 2020; Burns et al., 2021; Deng et al., 2023; Rawles et al., 2023). On mobile platforms, AitW (Rawles et al., 2023), AndroidControl (Li et al., 2024), PixelHelp (Li et al., 2020), AndroidArena (Xing et al., 2024), LlamaTouch (Zhang et al., 2024d), UGIF (Venkatesh et al., 2022), and MoTIF (Burns et al., 2021) consist of demonstrations across Android apps and mobile websites, with screens often represented via accessibility trees. In contrast, desktop web environments typically utilize the DOM for representing website content, with Mind2Web (Deng et al., 2023), OmniAct (Kapoor et al., 2024), and others, across various desktop websites. Mobile-based datasets frequently involve more complex actions, such as scrolling, which are not as useful in DOM-based desktop interactions where the entire action space is readily accessible. Additionally, API-centric datasets like API-Bank (Li et al., 2023a), ToolTalk (Farn & Shin, 2023), and ToolBench (Xu et al., 2023) assess agents' capabilities to manipulate computer systems via APIs.

2.3 INTERACTIVE AGENTS

Prior to today's foundation models, traditional approaches to developing user interface-operating agents primarily used reinforcement learning and behavioral cloning to simulate interactions like mouse clicks and keyboard typing (Liu et al., 2018b; Li et al., 2020; Shvo et al., 2021; Gur et al., 2022a; Humphreys et al., 2022). More recent work leverages off-the-shelf foundational models (Gemini, 2023; OpenAI, 2023; Touvron et al., 2023) with in-context learning (ICL) and fine-tuning applied to mobile (Rawles et al., 2023; Hong et al., 2023; Wang et al., 2023a; Yan et al., 2023; Zhang & Zhang, 2023; Bishop et al., 2024; Zhang et al., 2023), desktop web (Zheng et al., 2024a; Deng et al., 2023; Zhou et al., 2023; Koh et al., 2024; Cheng et al., 2024; Lai et al., 2024; You et al., 2024), and desktop OS (Wu et al., 2024; Zhang et al., 2024a; Xie et al., 2024).

[2] Based on what is reported in the Experiments section of the B-MoCA manuscript as of October 1st, 2024.

Figure 2: Annotators performed the tasks assigned to them, assigned a difficulty level (2a), and estimated the number of steps required to complete each task (2b), using the action space available to an agent. For each task, they selected relevant category tags from a predefined list (2c).
Recent work explores agents that reflect on system state (Shinn et al., 2023; Yao et al., 2022; Madaan et al., 2024) by leveraging exploration, self-evaluation, and retry capabilities for continual learning and adaptation (Li et al., 2023b; Yang et al., 2023b; Pan et al., 2024; Wu et al., 2024; Gao et al., 2023; Murty et al., 2024).

3 ANDROIDWORLD

3.1 ANDROID FOR AUTONOMOUS AGENTS

Android is an ideal environment for developing autonomous agents. It is the most widely used OS globally [3] and is highly flexible for research, while providing an open world of the Web [4] and over 2M apps for agents to operate in. Using emulation, an Android environment is easy to deploy, does not require specialized hardware, and can be run on a laptop. Android Virtual Devices, or emulator images, are well suited for research as they are self-contained, easy to distribute, and configurable. Compared to desktops, mobile environments like Android present unique challenges for computer-use agents. While mobile UIs are simpler due to smaller screens, their action space is more complex, requiring intricate gestures (e.g., navigating carousels, long-pressing, multi-finger zooming) and often more steps to complete tasks. Unlike web-browser-only environments, Android, as an OS, offers greater flexibility, including function-calling APIs (e.g., sending texts) alongside standard UI actions (click, scroll, type).

3.2 THE OBSERVATION AND ACTION SPACE

ANDROIDWORLD provides an interface for agents to receive observations and execute actions on Android. It uses AndroidEnv (Toyama et al., 2021) and the Android Device Bridge to facilitate interaction between Android and the agent. The observation space consists of a full-resolution screenshot and a UI tree representation developed for accessibility purposes. The action space is similar to that which humans use, consisting of gestures (i.e., tapping, swiping), typing, and navigation buttons (i.e., go home and go back). In addition to these naturalistic actions, ANDROIDWORLD exposes a limited set of function-calling APIs, such as send_text_message, to help agents accomplish goals. Appendix C provides more details on the observation format and action space.

3.3 REPRODUCIBLE AND PARAMETERIZED TASKS

ANDROIDWORLD consists of a suite of 116 tasks, spread across 20 diverse applications (see Appendix D for more details). These tasks simulate practical, everyday activities, including note

3 https://gs.statcounter.com/os-market-share
4 Mobile is the most popular platform for accessing the web; https://gs.statcounter.com/platform-market-share/desktop-mobile/worldwide/

Idea Generation Category:
3Other
il5yUQsrjC
# ROBUST CONFORMAL PREDICTION WITH A SINGLE BINARY CERTIFICATE

**Soroush H. Zargarbashi**
CISPA Helmholtz Center for Information Security
zargarbashi@cs.uni-koeln.de

**Aleksandar Bojchevski**
University of Cologne
bojchevski@cs.uni-koeln.de

ABSTRACT

Conformal prediction (CP) converts any model's output to prediction sets with a guarantee to cover the true label with (adjustable) high probability. Robust CP extends this guarantee to worst-case (adversarial) inputs. Existing baselines achieve robustness by bounding randomly smoothed conformity scores. In practice, they need expensive Monte-Carlo (MC) sampling (e.g., $\sim 10^4$ samples per point) to maintain an acceptable set size. We propose a robust conformal prediction method that produces smaller sets even with significantly fewer MC samples (e.g., 150 for CIFAR-10). Our approach binarizes samples with an adjustable (or automatically adjusted) threshold selected to preserve the coverage guarantee. Remarkably, we prove that robustness can be achieved by computing *only one* binary certificate, unlike previous methods that certify each calibration (or test) point. Thus, our method is faster and returns smaller robust sets. We also eliminate a previous limitation that requires a bounded score function.

1 INTRODUCTION

Despite their extensive applications, modern neural networks lack reliability as their output probability estimates are uncalibrated (Guo et al., 2017). Many uncertainty quantification methods are computationally expensive, lack compatibility with black-box models, and offer no formal guarantees. Alternatively, conformal prediction (CP) is a statistical post-processing approach that returns prediction *sets* with a guarantee to cover the true label with high adjustable probability. CP only requires a held-out calibration set and offers a distribution-free, model-agnostic coverage guarantee (Vovk et al., 2005; Angelopoulos & Bates, 2021). The model is used as a black box to compute conformity scores which capture the agreement between inputs $x$ and labels $y$. These prediction sets are shown to improve human decision-making both in terms of response time and accuracy (Cresswell et al., 2024). CP assumes exchangeability between the calibration and the test set (a relaxation of the i.i.d. assumption), making it broadly applicable to images, language models, etc. CP also applies to graph node classification (Zargarbashi et al., 2023; Huang et al., 2023), where uncertainty quantification methods are limited.

However, exchangeability, and therefore the conformal guarantee, easily breaks when the test data is noisy or subjected to adversarial perturbations. Robust conformal prediction extends this guarantee to worst-case inputs $\tilde{x}$ within a maximum radius around the clean point $x$, e.g., $\forall \tilde{x}$ s.t. $\|\tilde{x} - x\|_2 \le r$. In the evasion setting, we assume that the calibration set is clean and test datapoints can be perturbed. Building on the rich literature of robustness certificates (Kumar et al., 2020), recent robust CP baselines (Gendler et al., 2021; Zargarbashi et al., 2024; Jeary et al., 2024) use a conservative score at test time that is a *certified* bound on the conformity score of the clean unseen input. This maintains the guarantee even for the perturbed input since "if CP covers $x$, then robust CP certifiably covers $\tilde{x}$". However, the average set size increases, especially if the bounds are loose.
The certified bounds can be derived through model-dependent verifiers (Jeary et al., 2024) or smoothing-based black-box certificates (Zargarbashi et al., 2024). For the robustness of black-box models, an established approach is to certify the confidence score through randomized smoothing (Kumar et al., 2020), obtaining bounds on the expected smooth score. The tightness of these bounds depends on the information available about the smooth score around the given input, e.g., the mean (Yan et al., 2024) or the CDF (Zargarbashi et al., 2024). Such methods: (i) assume the conformity score function has a bounded range, (ii) compute several certificates, one for each calibration (or test) point, and (iii) need a large number of Monte-Carlo samples to get tight confidence intervals.

Figure 1: [Left] Average set size with different MC sample rates. [Middle] Empirical coverage of vanilla and robust CPs under attack. [Right] Runtime of robust CP as a function of the number of calibration datapoints (after computing the MC samples, which is the number of lower-bound computations).

For the current SOTA method CAS (Zargarbashi et al., 2024), accounting for the sample correction inflates the prediction sets significantly for sample rates below 2000 (see Fig. 1-left). This inefficiency grows to the point of trivially returning the full label set $\mathcal{Y}$ as the prediction set when we run with higher coverage rates or higher radii (see § 6). In contrast, we obtain robust and small prediction sets with only $\sim 150$ MC samples. Additionally, these methods require computing certified bounds for (at least) each calibration point, which we further show is a wasteful computation.

**BinCP.** We observe that smooth inference is inherently more robust. Even without certificates, randomized methods show a slower decrease in coverage under attack (see Fig. 6-right). Given any score function $s(x, y)$ capturing conformity, Zargarbashi et al. (2024) and Gendler et al. (2021) define the smooth score as $\bar{s}(x, y) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma I)}[s(x + \epsilon, y)]$. Instead, we perform binarization via a threshold $\tau$, i.e., $\bar{s}(x, y) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma I)}[\mathbb{I}[s(x + \epsilon, y) \ge \tau]] = \Pr_{\epsilon \sim \mathcal{N}(0, \sigma I)}[s(x + \epsilon, y) \ge \tau]$. Both are valid conformity scores, and both change slowly around any $x$; however, our binarized CP (BinCP) method has several advantages. First, we define robust CP that only computes a single certificate. In comparison, CAS requires at least one certificate per calibration (or test) point. Second, our method can effortlessly use many existing binary certificates out of the box without any additional assumptions or modifications. A direct consequence is that we can use de-randomization techniques (Levine & Feizi, 2021) that completely nullify the need for sample correction under the $\ell_1$ norm. Third, when we do need sample correction, working with binary variables allows us to use tighter concentration inequalities (Clopper & Pearson, 1934) (see § D.2 for a detailed discussion). Thus, even with significantly fewer MC samples, our method still produces small prediction sets (see Fig. 1-left). This improvement is even more pronounced for datasets with a large number of classes (e.g., ImageNet, shown in Fig. 5).
Finally, BinCP does not require the score function to be bounded, which is a limitation of current methods. Our code is available on the [BinCP GitHub repository](https://github.com/soroushzargar/BinCP).

2 BACKGROUND

We assume a holdout set of labeled calibration datapoints $\mathcal{D}_{\mathrm{cal}} = \{(x_i, y_i)\}_{i=1}^n$ which is exchangeable with future test points $(x_{n+1}, y_{n+1})$, both sampled from some distribution $\mathcal{D}$. We have black-box access to a model from which we compute an arbitrary conformity [1] score $s : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$, e.g., the score $s(x, y) = \pi_y(x)$ where $\pi_y(x)$ is the predicted probability for class $y$ (other scores in § A).

1 Conformity scores quantify agreement and are equivalent, up to a sign flip, to non-conformity scores.

**Vanilla CP.** For a user-specified nominal coverage $1 - \alpha$, let $q_\alpha = Q(\alpha; \{s(x_i, y_i)\}_{i=1}^n \cup \{\infty\})$ where $Q(\cdot; \cdot)$ is the quantile function. The sets defined as $\mathcal{C}(x_{n+1}) = \{y : s(x_{n+1}, y) \ge q_\alpha\}$ have a $1 - \alpha$ guarantee to include the true label $y_{n+1}$. Formally, $\Pr[y_{n+1} \in \mathcal{C}(x_{n+1})] \ge 1 - \alpha$ (Vovk et al., 2005), where the probability is over $\mathcal{D}_{\mathrm{cal}} \sim \mathcal{D}$, $x_{n+1} \sim \mathcal{D}$. This guarantee, and later our robust sets, are independent of the mechanics of the model and the score function – the model's accuracy or the quality of the score function is irrelevant. A score function that better reflects input-label agreement leads to more efficient (i.e., smaller) prediction sets. For noisy or adversarial inputs, the exchangeability between the test and calibration set breaks, making the coverage guarantee invalid. Fig. 1-middle and Fig. 6-right show that an adversary (or bounded worst-case noise) can decrease the empirical coverage drastically with imperceptible perturbations on each test point. As a defense, *robust* CP extends this guarantee to worst-case bounded perturbations.

**Threat model.** The adversary's goal is to decrease the empirical coverage probability by perturbing the input. Let $B : \mathcal{X} \to 2^{\mathcal{X}}$ be a ball that returns all admissible perturbed points around an input. For images, a common threat model is defined by the $\ell_2$ norm: $B_r(x) = \{\tilde{x} : \|\tilde{x} - x\|_2 \le r\}$, where the radius $r$ controls the perturbation magnitude. Similarly, we can use the $\ell_1$ norm. For binary data and graphs, Bojchevski et al. (2020) define $B_{r_a, r_d}(x) = \{\tilde{x} : \sum_{i=1}^d \mathbb{I}[\tilde{x}_i = x_i - 1] \le r_d,\ \sum_{i=1}^d \mathbb{I}[\tilde{x}_i = x_i + 1] \le r_a\}$, where the adversary is allowed to toggle at most $r_a$ zero bits and $r_d$ one bits.

**Inverted ball $B^{-1}$.** At test time we are given a (potentially) perturbed $\tilde{x} \in B(x)$. However, to obtain robust sets, we need to reason about (the score of) the unseen clean $x$. Naively, one might assume that $x \in B(\tilde{x})$ – the clean point is in the ball around the perturbed point. However, this only holds in special cases such as the ball defined by the $\ell_2$ norm.
For example, if a binary $\tilde{x}$ was obtained by removing $r_d$ bits and adding $r_a$ bits, to be able to reach the clean $x$ from the perturbed $\tilde{x}$ we need to add $r_d$ bits and remove $r_a$ bits instead, since $B_{r_a, r_d}$, unlike $B_r$, is not symmetric. We define the inverted ball $B^{-1}$ as the smallest ball centered at $\tilde{x} \in B(x)$ that includes the clean $x$. Formally, $B^{-1}$ should satisfy $\forall \tilde{x} \in B(x): x \in B^{-1}(\tilde{x})$. For symmetric balls like $\ell_p$-norms, $B^{-1} = B$. For the binary ball, $B^{-1}_{r_a, r_d} = B_{r_d, r_a}$: we need to swap $r_a$ and $r_d$ to ensure this condition. Zargarbashi et al. (2024) also discuss this subtle but important aspect without formally defining $B^{-1}$.

**Robust CP.** Given a threat model, robust CP defines a *conservative* prediction set $\bar{\mathcal{C}}$ that maintains the conformal guarantee even for worst-case inputs. Formally,
$$\Pr_{\mathcal{D}_{\mathrm{cal}} \cup \{x_{n+1}\} \sim \mathcal{D}}\left[y_{n+1} \in \bar{\mathcal{C}}(\tilde{x}_{n+1}),\ \forall \tilde{x}_{n+1} \in B(x_{n+1})\right] \ge 1 - \alpha \qquad (1)$$
The intuition behind existing methods is as follows: (i) vanilla CP covers $x_{n+1}$ with $1 - \alpha$ probability; (ii) if $y \in \mathcal{C}(x_{n+1})$ then $y \in \bar{\mathcal{C}}(\tilde{x}_{n+1})$. Thus, robust CP covers $\tilde{x}_{n+1}$ with at least the same probability. Here, (ii) is guaranteed via certified lower bounds $\mathrm{c}^{\downarrow}[s, x, B]$ or certified upper bounds $\mathrm{c}^{\uparrow}[s, x, B^{-1}]$.

**Theorem 1** (Robust CP from Zargarbashi et al. (2024)). *Define $s_y(\cdot) = s(\cdot, y)$. With $\mathrm{c}^{\uparrow}[s_y, \tilde{x}, B^{-1}] \ge \max_{x' \in B^{-1}(\tilde{x})} s(x', y)$, let $\bar{\mathcal{C}}_{\mathrm{test}}(\tilde{x}_{n+1}) = \{y : \mathrm{c}^{\uparrow}[s_y, \tilde{x}_{n+1}, B^{-1}] \ge q_\alpha\}$; then $\bar{\mathcal{C}}_{\mathrm{test}}$ satisfies Eq. 1 (test-time robustness). Alternatively, with $\mathrm{c}^{\downarrow}[s_y, x, B] \le \min_{x' \in B(x)} s(x', y)$, define $q^{\downarrow} = Q\big(\alpha; \{\mathrm{c}^{\downarrow}[s_{y_i}, x_i, B]\}_{i=1}^n\big)$. Then $\bar{\mathcal{C}}_{\mathrm{cal}}(\tilde{x}_{n+1}) = \{y : s(\tilde{x}_{n+1}, y) \ge q^{\downarrow}\}$ also satisfies Eq. 1 (calibration-time robustness).*

In Theorem 1, test-time robustness uses $B^{-1}$ since it queries the clean point from the perspective of the perturbed test input. Alternatively, calibration-time robustness uses $B$ since the clean calibration point is given and we are finding the lower bound for the unseen perturbed test point. The intuition is that the lower-bound scores from the clean calibration points are exchangeable with the lower bound of the clean test input. The perturbed test input will surely have a higher score than this lower bound, hence it is covered with at least the same probability. We can obtain the $\mathrm{c}^{\downarrow}, \mathrm{c}^{\uparrow}$ bounds through neural network verifiers (Jeary et al., 2024) or randomized smoothing (Cohen et al., 2019). We focus on the latter since we get model-agnostic certificates with black-box access. The coverage probability is theoretically proved in CP. Similarly, (adversarially) robust CP also comes with a theoretical guarantee. In both cases we can compute the empirical coverage as a sanity check. Another metric of interest in both cases is the average set size (the efficiency) of the conformal sets.
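To make the calibration-time recipe of Theorem 1 concrete, here is a minimal NumPy sketch. It assumes conformity scores (higher is better) and a user-supplied certified lower-bound routine (e.g., from randomized smoothing); the function names are ours and the hypothetical `lower_bound` in the usage comment is an assumption, not an API from the paper's codebase.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """alpha-quantile of scores augmented with +inf (the conservative
    threshold used in vanilla CP with conformity scores)."""
    s = np.sort(np.append(np.asarray(scores, dtype=float), np.inf))
    k = int(np.floor(alpha * len(s)))       # len(s) = n + 1
    return -np.inf if k == 0 else s[k - 1]  # k-th smallest (1-indexed)

def robust_threshold(cal_scores_lower, alpha):
    """Calibration-time robustness: run vanilla CP on the certified *lower
    bounds* c_down[s_{y_i}, x_i, B] of the calibration scores."""
    return conformal_quantile(cal_scores_lower, alpha)

def robust_set(test_scores, q_down):
    """Accept every label whose plain score clears the conservative
    threshold; test_scores[y] = s(x_tilde, y)."""
    return {y for y, s in enumerate(test_scores) if s >= q_down}

# Usage with a hypothetical certificate lower_bound(s_i, r) <= min score in B:
# cal_lower = [lower_bound(s_i, r) for s_i in cal_scores]
# q_down = robust_threshold(cal_lower, alpha=0.1)
```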
**Randomized smoothing.** A smoothing scheme $\xi : \mathcal{X} \to \mathcal{X}$ maps any point to a random nearby point. For continuous data, Gaussian smoothing $\xi(x) = x + \epsilon$ adds isotropic Gaussian noise to the input, $\epsilon \sim \mathcal{N}(0, \sigma I)$. For sparse binary data, Bojchevski et al. (2020) define sparse smoothing as $\xi(x) = x \oplus \epsilon$ where $\oplus$ is the binary XOR and $\epsilon[i] \sim \mathrm{Bernoulli}(p = p_{x[i]})$, where $p_1$ and $p_0$ are two smoothing parameters to account for sparsity. To simplify the notation we write $x + \epsilon$ instead of $\xi(x)$ in the rest of the paper for both Gaussian and sparse smoothing, but our method works for any smoothing scheme beyond additive noise. Regardless of how rapidly a score function $s(x, y)$ changes, the smooth score $\bar{s}(x, y) = \mathbb{E}_\epsilon[s(x + \epsilon, y)]$ changes slowly near $x$. This enables us to compute tight $\mathrm{c}^{\downarrow}, \mathrm{c}^{\uparrow}$ bounds that depend on the smoothing strength. See § 4, § B, and § D.1 for details.

3 BINARIZED CONFORMAL PREDICTION (BINCP)

We define conformal sets by binarizing randomized scores. We first show that this preserves the conformal guarantee for clean data. Then in § 4 we extend the guarantee to worst-case adversarial inputs. As we will see in § 6, our binarization approach has gains in terms of Monte-Carlo sampling budget, computational cost, and average set size.

**Proposition 1.** *For any two parameters $p \in (0, 1)$, $\tau \in \mathbb{R}$, given a smoothing scheme $x + \epsilon$, define the boolean function $\mathrm{accept}[\cdot, \cdot; p, \tau]$ and the prediction set $\mathcal{C}(\cdot; p, \tau)$ as*
$$\mathrm{accept}[x, y; p, \tau] = \mathbb{I}\big[\Pr_\epsilon[s(x + \epsilon, y) \ge \tau] \ge p\big] \quad \text{and} \quad \mathcal{C}(x; p, \tau) = \{y : \mathrm{accept}(x, y; p, \tau)\}$$
*For any fixed $p$, let*
$$\tau_\alpha(p) = \sup_\tau \left\{\tau : \sum_{i=1}^n \mathrm{accept}(x_i, y_i; p, \tau) \ge (1 - \alpha) \cdot (n + 1)\right\} \qquad (2)$$
*then the set $\mathcal{C}(x_{n+1}; p, \tau_\alpha(p))$ has a $1 - \alpha$ coverage guarantee. Alternatively, for any fixed $\tau$, let*
$$p_\alpha(\tau) = \sup_p \left\{p : \sum_{i=1}^n \mathrm{accept}(x_i, y_i; p, \tau) \ge (1 - \alpha) \cdot (n + 1)\right\} \qquad (3)$$
*again the prediction set $\mathcal{C}(x_{n+1}; p_\alpha(\tau), \tau)$ has a $1 - \alpha$ coverage guarantee.*

The correctness of Prop. 1 can be directly seen by noticing that we implicitly define new scores.

**Quantile view.** Let $S_i = s(x_i + \epsilon, y_i)$ be the distribution of randomized scores for $x_i$ and the true class $y_i$. Let $\tau_i(p) = Q(p; S_i)$; we have that $\tau_\alpha(p) = Q(\alpha; \{\tau_i(p)\}_{i=1}^n)$ is a quantile of quantiles. Similarly, define $p_i(\tau) = Q^{-1}(\tau; S_i)$; then $p_\alpha(\tau) = Q(\alpha; \{p_i(\tau)\}_{i=1}^n)$ is a quantile of inverse quantiles. Both $\tau_i(p)$ for a fixed $p$ and $p_i(\tau)$ for a fixed $\tau$ are valid conformity scores for the instance $x_i$, since exchangeability is trivially preserved. Therefore, $\tau_\alpha(p)$ and $p_\alpha(\tau)$ are just the standard quantile thresholds from CP on some new score functions. This directly gives the $1 - \alpha$ coverage guarantee.
This view via the implicit scores is helpful for intuition, but we keep the original formulation since it is more directly amenable to certification, as we show in § 4. We provide an additional formal proof of Prop. 1 via conformal risk control (Angelopoulos et al., 2022) in § C. Using either variant from Prop. 1, let $(p_\alpha, \tau_\alpha)$ equal $(p, \tau_\alpha(p))$ or $(p_\alpha(\tau), \tau)$ as the final pair of parameters. For test points $x_{n+1}$ we accept labels whose smooth score distribution has at least a $p_\alpha$ proportion above the threshold $\tau_\alpha$, i.e., $\mathrm{accept}(x_{n+1}, y; p_\alpha, \tau_\alpha) = 1$. The term "binarization" refers to mapping each score sample above $\tau$ to 1 and all others to 0. For distributions with a strictly increasing and continuous CDF (e.g., isotropic Gaussian smoothing), both variants are equivalent.

**Lemma 1.** *Given distributions $\{S_i\}_{i=1}^n$ with strictly increasing and continuous CDFs, let $\tau_\alpha(p)$ be obtained from Eq. 2 with fixed $p$ and $p_\alpha(\cdot)$ be as defined in Eq. 3. We have $p_\alpha(\tau_\alpha(p)) = p$.*

We defer all proofs to § C. For fixed $p$, Prop. 1 yields $(p, \tau_\alpha(p))$. Fixing $\tau = \tau_\alpha(p)$ we get sets with $(p_\alpha(\tau), \tau) = (p_\alpha(\tau_\alpha(p)), \tau_\alpha(p))$, which also equals $(p, \tau_\alpha(p))$ by Lemma 1.

Figure 2: [Left] Function $\mathrm{accept}(x_i, y_i; p, \tau)$ for different $(p, \tau)$ pairs for four random CIFAR-10 instances. Black equals 1 and white equals 0. [Right] Empirical coverage for different $(p, \tau)$ pairs. Any $(p, \tau)$ pair on the dashed black line (the 0.9 contour) gives conformal sets with 90% coverage.

Fig. 2 shows the $\mathrm{accept}(x, y; p, \tau)$ function for several examples. This function is non-increasing in both parameters $p$ and $\tau$. In general, any arbitrary assignment of $p$ and $\tau$ results in some expected coverage – $\mathrm{accept}(\cdot, \cdot; p, \tau)$ equals 1 for some number of the $(x_i, y_i)$ (Fig. 2-right). Pairs $(p_\alpha, \tau_\alpha)$ obtained from Prop. 1 lie on the $1 - \alpha$ contour of this expectation. The empirical coverage is close to this expectation due to exchangeability (Berti & Rigo, 1997).

**Remarks.** The scores $\tau_i(p)$ (and similarly $p_i(\tau)$) remain exchangeable whether the quantile over the smoothing distribution is computed exactly or estimated from any number of Monte-Carlo samples. That is, Prop. 1 holds regardless. However, we need to be more careful when we consider the certified upper and lower bounds. In § 4 we first derive robust conservative sets that maintain worst-case coverage, assuming that we can compute probabilities and expectations exactly. Since this is not always possible, in § 5 we provide the appropriate sample correction that still preserves the robustness guarantee when using Monte-Carlo samples. We also discuss a de-randomized approach that does not need sample correction.

4 ROBUST BINCP

From Prop. 1 (either variant) we compute a pair $(p_\alpha, \tau_\alpha)$. Following Prop. 1, for clean $x_{n+1}$, we have $\Pr[s(x_{n+1} + \epsilon, y_{n+1}) \ge \tau_\alpha] \ge p_\alpha$ with probability $1 - \alpha$. We will exploit this property. Define $f_y(x) = \mathbb{I}[s(x, y) \ge \tau_\alpha]$; we have $\bar{f}_y(x) = \mathbb{E}_\epsilon[\mathbb{I}[s(x + \epsilon, y) \ge \tau_\alpha]] = \Pr_\epsilon[s(x + \epsilon, y) \ge \tau_\alpha]$.
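As a concrete illustration of the fixed-$\tau$ variant of Prop. 1, the following sketch estimates each calibration point's $p_i(\tau)$ by Monte-Carlo and takes a conservative $\alpha$-quantile. It is a simplification under our own conventions: the names are ours, and the finite-sample (Clopper-Pearson) correction of § 5 is omitted.

```python
import numpy as np

def binarized_probability(score_fn, x, y, tau, sigma, n_mc, rng):
    """MC estimate of p_i(tau) = Pr_eps[s(x + eps, y) >= tau] under
    Gaussian smoothing (sample correction from Sec. 5 omitted here)."""
    noisy = [score_fn(x + rng.normal(0.0, sigma, size=x.shape), y)
             for _ in range(n_mc)]
    return float(np.mean(np.asarray(noisy) >= tau))

def bincp_calibrate(score_fn, cal_xy, tau, alpha, sigma, n_mc=150, seed=0):
    """Fixed-tau variant of Prop. 1: p_alpha(tau) as a conservative
    alpha-quantile of {p_i(tau)}, augmented with the maximal value 1."""
    rng = np.random.default_rng(seed)
    p = [binarized_probability(score_fn, x, y, tau, sigma, n_mc, rng)
         for x, y in cal_xy]
    p = np.sort(np.append(p, 1.0))
    k = int(np.floor(alpha * len(p)))
    return 0.0 if k == 0 else p[k - 1]

def bincp_set(score_fn, x, labels, tau, p_alpha, sigma, n_mc, rng):
    """accept(x, y; p_alpha, tau) for every candidate label."""
    return {y for y in labels
            if binarized_probability(score_fn, x, y, tau, sigma, n_mc, rng)
            >= p_alpha}
```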
**Conventional robust CP.** One way to attain robust prediction sets is to apply the same recipe as Zargarbashi et al. (2024) (CAS) by finding upper or lower bounds on the new score function. CAS uses the smooth score $\bar{s}_y(x) = \mathbb{E}_\epsilon[s(x + \epsilon, y)]$. Instead, we can bound $\bar{f}_y(x)$, which is a smooth binary classifier. Note that, as discussed in § 3 (quantile of inverse quantiles), $\bar{f}_y(x)$ is a conformity score function itself. Therefore, following Theorem 1, the test-time and calibration-time robust prediction sets are
$$\bar{\mathcal{C}}_{\mathrm{test}}(\tilde{x}_{n+1}) = \{y : \mathrm{c}^{\uparrow}[\bar{f}_y, \tilde{x}_{n+1}, B^{-1}] \ge p_\alpha\}, \quad \bar{\mathcal{C}}_{\mathrm{cal}}(\tilde{x}_{n+1}) = \{y : \bar{f}_y(\tilde{x}_{n+1}) \ge q^{\downarrow}\} \qquad (4)$$
where $q^{\downarrow} = Q\big(\alpha; \{\mathrm{c}^{\downarrow}[\bar{f}_{y_i}, x_i, B]\}_{i=1}^n\big)$. In short, we replace the clean $\bar{f}_{y_{n+1}}(x_{n+1})$ with either its certified upper bound $\mathrm{c}^{\uparrow}$ or lower bound $\mathrm{c}^{\downarrow}$. We elaborate on this approach before improving it.

**Computing $\mathrm{c}^{\downarrow}$ and $\mathrm{c}^{\uparrow}$.** Computing exact worst-case bounds on $\bar{f}$ ($\bar{f}_y$ for all $y$) is intractable and requires white-box access to the score function and therefore the model. Following established techniques in the randomized smoothing literature (Lee et al., 2019), we relax the problem. Formally,
$$\mathrm{c}^{\downarrow}[\bar{f}, x, B] = \min_{h \in \mathcal{H}} \min_{\tilde{x} \in B(x)} \Pr_\epsilon[h(\tilde{x} + \epsilon)] \quad \text{s.t.} \quad \Pr_\epsilon[h(x + \epsilon)] = \Pr_\epsilon[f(x + \epsilon)] = \bar{f}(x) \qquad (5)$$
where $\tilde{x} \in B(x)$ and $\mathcal{H}$ is the set of all measurable functions. The upper bound $\mathrm{c}^{\uparrow}[\bar{f}, x, B^{-1}]$ is the solution to a similar *maximization* problem. Since $f \in \mathcal{H}$, we have $\mathrm{c}^{\downarrow}[\bar{f}, x, B] \le \bar{f}(\tilde{x})$ for all $\tilde{x} \in B(x)$.

**Closed form.** For the $\ell_2$ ball with Gaussian smoothing, Eq. 5 has the closed-form solution $\Phi_\sigma(\Phi_\sigma^{-1}(\bar{f}_y(x)) - r)$, where $\Phi_\sigma$ is the CDF of the normal distribution $\mathcal{N}(0, \sigma)$ (Cohen et al., 2019; Kumar et al., 2020). The upper bound is computed similarly by changing the sign of $r$; a minimal sketch of this closed form is given below. Yang et al. (2020) show that the same closed-form solution applies for the $\ell_1$ ball and additionally discuss other perturbation balls and smoothing schemes, most of which are applicable here. For sparse smoothing we can compute the bounds with a simple algorithm with $O(r_a + r_d)$ runtime (Bojchevski et al., 2020), which we discuss in § C. For the $\ell_1$ ball and uniform smoothing, the lower bound equals $\bar{f}_y(x) - r/(2\lambda)$ where $\epsilon \sim \mathcal{U}[0, 2\lambda]^d$ (Levine & Feizi, 2021). This bound can also be de-randomized (see § 5).

**Single binary certificate.** From the closed-form solutions we see that the bounds are independent of the definition of $f$ and the test point $x$; i.e., their output is a function of the scalar $p := \bar{f}_y(x)$ alone. We defer the discussion of why this holds to § B and § D.1; in short, the solution for any $x$ can be obtained from alternative canonical points $u$ and $\tilde{u}$. Therefore, we write $\mathrm{c}^{\downarrow}[p, B] = \mathrm{c}^{\downarrow}[\bar{f}_y, x, B]$ to show that $\mathrm{c}^{\downarrow}$ depends only on $p$ and $B$, and the same for $\mathrm{c}^{\uparrow}$.
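The Gaussian closed form is simple enough to state in a few lines. Below is a sketch using SciPy; the clipping of $p$ away from $\{0, 1\}$ is our own numerical guard, not part of the formula.

```python
import numpy as np
from scipy.stats import norm

def c_lower(p, r, sigma):
    """Closed-form binary certificate for Gaussian smoothing and an l2 ball
    of radius r: a lower bound on the smoothed indicator anywhere in B_r(x),
    given its value p at the clean point (Cohen et al., 2019)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # numerical guard (ours)
    return norm.cdf(norm.ppf(p) - r / sigma)

def c_upper(p, r, sigma):
    """Upper bound over the inverted ball: same expression, sign of r flipped."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return norm.cdf(norm.ppf(p) + r / sigma)

# The inversion property c_lower(c_upper(p)) = p (used in the next paragraph)
# holds by construction, up to floating point:
print(c_lower(c_upper(0.7, 0.5, 0.25), 0.5, 0.25))  # ~0.7
```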
We also notice that, for common smoothing schemes and perturbation balls, it holds that $\mathrm{c}^{\downarrow}[\mathrm{c}^{\uparrow}[p, B^{-1}], B] = p$, which allows us to reduce both calibration-time and test-time robustness to solving a single binary certificate. We formalize this in Lemma 2.

**Lemma 2.** *If $\mathrm{c}^{\downarrow}[\mathrm{c}^{\uparrow}[p, B^{-1}], B] = p$ for all $p$, then $\bar{\mathcal{C}}_{\mathrm{test}}(\tilde{x}_{n+1}) = \bar{\mathcal{C}}_{\mathrm{cal}}(\tilde{x}_{n+1}) = \bar{\mathcal{C}}_{\mathrm{bin}}(\tilde{x}_{n+1})$, where*
$$\bar{\mathcal{C}}_{\mathrm{bin}}(\tilde{x}_{n+1}) = \{y : \mathrm{accept}(\tilde{x}_{n+1}, y; \mathrm{c}^{\downarrow}[p_\alpha, B], \tau_\alpha)\} = \{y : \Pr_\epsilon[s(\tilde{x}_{n+1} + \epsilon, y) \ge \tau_\alpha] \ge \mathrm{c}^{\downarrow}[p_\alpha, B]\}.$$

Idea Generation Category:
2Direct Enhancement
ltrxRX5t0H
# THE KOLMOGOROV TEST: COMPRESSION BY CODE GENERATION

**Ori Yoran** [1,2,∗], **Kunhao Zheng** [1], **Fabian Gloeckle** [1], **Jonas Gehring** [1], **Gabriel Synnaeve** [1], **Taco Cohen** [1]
1 Meta AI (FAIR), 2 Tel Aviv University
ori.yoran@cs.tau.ac.il, {kunhao, fgloeckle, jgehring, gab, tscohen}@meta.com

ABSTRACT

Compression is at the heart of intelligence. A theoretically optimal way to compress any sequence of data is to find the shortest program that outputs that sequence and then halts. However, such *Kolmogorov compression* is uncomputable, and code-generating LLMs struggle to approximate this theoretical ideal, as it requires reasoning, planning, and search capabilities beyond those of current models. In this work, we introduce the KOLMOGOROV-TEST (KT), a compression-as-intelligence test for code-generating LLMs. In KT a model is presented with a sequence of data at inference time and asked to generate the shortest program that produces the sequence. We identify several benefits of KT for both evaluation and training: an essentially infinite number of problem instances of varying difficulty is readily available, strong baselines already exist, the evaluation metric (compression) cannot be gamed, and pretraining data contamination is highly unlikely. To evaluate current models, we use audio, text, and DNA data, as well as sequences produced by random synthetic programs. Current flagship models perform poorly – both GPT4-O and LLAMA-3.1-405B struggle on our natural and synthetic sequences. On our synthetic distribution, we are able to train code generation models with lower compression rates than previous approaches. Moreover, we show that gains on synthetic data generalize poorly to real data, suggesting that new innovations are necessary for additional gains on KT.

1 INTRODUCTION

Compression and code generation are deeply related through the notion of Kolmogorov complexity, denoted $K(x)$, which is defined as the length of the shortest computer program [1] that produces the sequence $x$ as output and hence constitutes the optimal compression of $x$ (Kolmogorov, 1963; Li & Vitányi, 1997; Hutter et al., 2024) (see § 2 for a detailed background). Kolmogorov complexity is *uncomputable* as it reduces to the halting problem, making the search for improved computable *upper bounds* a never-ending challenge and a potential benchmark for intelligence that by definition cannot saturate. We propose to view code generation language models (CODELMS) as upper bounds for Kolmogorov complexity: we task them to identify patterns in an input sequence and to compress it by producing a short program that outputs said sequence, and we measure their accuracy at producing correct programs as well as the compression rates achieved. This is the KOLMOGOROV-TEST (KT), a benchmark for the reasoning capabilities of CODELMS, illustrated in Fig. 1.

∗ Work done during an internship at Meta FAIR.
1 For some universal Turing machine $U$.
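As a concrete (toy) illustration of the task: a highly structured byte sequence admits a program far shorter than the sequence itself, and the program's length is an upper bound on $K(x)$. The sequence and program below are our own example, not taken from the benchmark.

```python
# A 1,000-byte sequence with an obvious pattern...
seq = bytes(i % 256 for i in range(1000))

# ...is reproduced exactly by a ~40-byte program, certifying that
# K(seq) is at most ~40 bytes (up to a language-dependent constant):
prog = "print(bytes(i % 256 for i in range(1000)))"

print(len(seq), len(prog))  # prints: 1000 42
```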
We point out the following benefits of compression by code generation as a benchmark for the reasoning capabilities of language models: (1) the compression metric can be trusted in that it does not produce false positives; (2) diverse and richly structured sequence data is abundantly available; (3) it is highly unlikely that pretrained models have seen many relevant (program, sequence) pairs, making memorization-based solutions infeasible [2]; (4) if the benchmark is saturated, either through improved reasoning and search capabilities or memorization, we can simply increase the sequence length; and (5) following research spanning decades of theoretical computer science, classical compression algorithms such as GZIP (Deutsch, 1996) can serve as strong baselines.

2 Models trained on synthetic data or by RL could very well end up using memorization.

Figure 1: **Data compression by code generation.** Consider compressing a sequence of bytes (presented as numbers in range [0, 255]) that can be produced by composing simpler sub-sequences. Standard compression methods, such as GZIP, focus on repetitions and frequency of characters and fail to exploit the logical patterns in this sequence (although they are strong baselines for long sequences, § 5.3). LLMs are better at finding complex patterns, such as a sequence of incremental numbers, and can be used for compression with arithmetic coding. However, they are sensitive to phase shifts due to their auto-regressive nature, and require the model weights for decoding. Code generation models, inspired by the concept of Kolmogorov complexity, can identify patterns in the input sequence to generate concise programs whose execution produces the original sequence.

To evaluate current CODELMS on KT, we use naturally occurring sequences from three data modalities: text, audio, and DNA (§ 3.2). As we do not know the optimal programs for these sequences, it is not possible to use this data for supervised learning, making it ideal for evaluation. To collect program-sequence pairs for supervised training and evaluation, we design a *compositional* domain-specific language (DSL) coupled with an automatic data generation framework (§ 3.3), inspired by context-free grammars (Chomsky, 1956; Hopcroft et al., 2006).

Our experiments (§ 4) show that sequence compression is an extremely challenging task for current CODELMS. The strong LLAMA-3.1-405B (Grattafiori et al., 2024) and GPT4-O (OpenAI et al., 2024b) models generate Python programs that fail to produce the input sequence 78% and 40% of the time for naturally occurring data, and 66% and 45% of the time for synthetic sequences that follow clear patterns. On our synthetic distribution, we are able to train relatively small CODELMS with 1.5B parameters that outperform state-of-the-art prompted models by more than 40%, but these struggle on real sequences. We further find that the prior used for program compression is an important factor in overall compression performance, with a simple uniform prior over our DSL outperforming GZIP. Finally, we observe modest gains from adding inline execution feedback. In § 5.3, we conduct a thorough analysis of our results.
We find that gains on synthetic data partially generalize to real data – models trained for longer perform better on short sequences, but all current models perform poorly on long sequences, suggesting KT can act as a useful test-bed to evaluate the scaling properties of new methods. Finally, we perform an error analysis and find that models make a wide range of errors, from generating over-simplified programs that do not produce the input sequence, to repeating the input sequence in over-complicated ways. To summarize, our main contributions are: [3]

- We propose the KOLMOGOROV-TEST, an extremely challenging compression-as-intelligence test for CODELMS (§ 3).
- We show that CODELMS can outperform previous compression methods on synthetic distributions where sampling program-sequence pairs is possible and efficient priors can be used, but fare very poorly on real data (§ 5).
- We show that performance on real data scales poorly with increasing synthetic dataset size, suggesting that new breakthroughs may be needed for further progress on KT.

3 To support future progress, we release our code, data, and a public leaderboard.

Figure 2: **Our main experimental settings.**

2 BACKGROUND

**Generative Modelling, Information Theory and Compression.** Generative modelling and compression are deeply related. Given a generative model over sequences (e.g., an autoregressive transformer (Radford et al., 2018)) $p(x_i | x_{<i})$, one can use the arithmetic coding algorithm to compress a particular sequence $x$ to a bitstream of length about $-\sum_i \log_2 p(x_i | x_{<i})$ bits (Rissanen, 1976; Pasco, 1977). If the sequences are sampled from the gold distribution $p^*$, then the expected arithmetic code length is the cross-entropy between $p^*$ and $p$. When $p = p^*$, the cross-entropy equals the entropy, which is the fundamental limit on average compression length for data sampled from $p^*$. Similarly, given a latent variable model $p(x, z)$, one can encode a sequence $x$ by first encoding a code $z$ using the *prior* $p(z)$ and then encoding $x$ using the likelihood $p(x|z)$. Using an optimal code for $p(z)$ and $p(x|z)$, the coding cost will be roughly $-\log p(z) - \log p(x|z)$. If we obtain $z$ by sampling from an encoder network $q(z|x)$, the expected coding cost will be $\mathbb{E}_q[-\log p(z) - \log p(x|z)]$ (Habibian et al., 2019), which (up to an additional entropy bonus $H(q)$) equals the evidence lower bound (ELBO) used, for instance, for VAE training (Kingma & Welling, 2022). Thus, both for autoregressive and latent variable models, maximizing likelihood is maximizing compression; a small sketch of the arithmetic-coding cost follows below.
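The idealized cost above is easy to compute given the model's per-token probabilities. The following sketch (ours) ignores the small constant overhead of a real arithmetic coder, and, as noted in § 4, the model weights themselves are needed for decoding.

```python
import math

def arithmetic_code_length_bits(token_probs):
    """Idealized arithmetic-coding cost of a sequence, given the model's
    per-token probabilities p(x_i | x_<i): about -sum_i log2 p_i bits."""
    return -sum(math.log2(p) for p in token_probs)

# Example: a model that puts probability 0.9 on each next token compresses
# a 1000-token sequence to roughly 152 bits.
probs = [0.9] * 1000
print(arithmetic_code_length_bits(probs))  # ~152.0
```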
**Algorithmic Information Theory, Solomonoff Induction, and Kolmogorov Complexity.** In classical generative modelling and compression, one is concerned with finding a generative model from a particular model class that can be used to compress data from a particular distribution. By contrast, in algorithmic information theory, one considers as "model class" the set of all computable functions. This class trivially contains the optimal compression, and changing the computational model (programming language / universal Turing machine) only incurs a constant overhead that depends on the pair of languages involved, regardless of the input (Grunwald & Vitanyi, 2008, 2.2.2). In analogy to the discussion above, we can sketch the theory as follows, inspired by Solomonoff's theory of inductive inference (Solomonoff, 1964). We consider the *universal prior* $p(\rho) = 2^{-l(\rho)}$ over programs $\rho$ (where $l(\rho)$ denotes the length), which assigns higher probability to shorter programs ("Occam's Razor"). Furthermore, we consider the likelihood $p(x|\rho)$ over output sequences $x$ given $\rho$, which puts all mass on the actual output of $\rho$. Then, treating $\rho$ as latent, we may encode $x$ using a two-part code by finding a *program* $\rho$ that outputs $x$ (e.g., using a neural network $q(\rho|x)$), and encoding it using the *prior* $p(\rho)$. As before, the coding cost will be $\mathbb{E}_q[-\log p(\rho) - \log p(x|\rho)]$, where the second term is $\infty$ whenever $x$ is not the output of $\rho$, and 0 when it is. Clearly, the optimal $\rho$ is the shortest program that outputs $x$. The length of this program $\rho^*$ is called the *Kolmogorov complexity* $K(x)$ (Kolmogorov, 1963; Li & Vitányi, 1997).

3 DATA COLLECTION

3.1 PROBLEM SETTING

The theory presented above suggests an interesting challenge for CODELMS: to generate concise *programs* (under well-defined *priors*) that output a given sequence (Fig. 1 and Alg. 2 in § A.3). We evaluate CODELMS on three naturally occurring modalities: audio, text, and DNA (Fig. 2, left, and § 3.2). We focus on audio and textual data, popular modalities in compression works with strong baselines (Deletang et al., 2024). We also introduce DNA sequences, as these follow simple biological patterns. Although real data is useful for evaluation since it can be obtained in large amounts for many modalities, the corresponding *programs* required for supervised training are not available. As a remedy, we experiment with synthetic settings where we collect program-sequence pairs by sampling programs from a domain-specific language (DSL) and executing the sampled programs (Fig. 2, right, and § 3.3).

3.2 NATURALLY OCCURRING SEQUENCES

In all our experiments on natural data, the model has to compress 1MB of information. The length of the input sequence presented to our models ranges from 16 to 1024 bytes depending on the setting.

**Audio.** We randomly sample audio snippets from the LibriSpeech development and test sets (Panayotov et al., 2015). We parse the data to three audio formats: (a) high-quality audio with 16-bit depth, from which the original data can be perfectly recreated but each sample is split across two bytes; (b) lower-quality audio with 8-bit depth, which is simpler for our baselines (§ 5); and (c) an estimation of the Mel-Frequency Cepstral Coefficients (MFCC) encoding, which does not allow direct reconstruction but describes the main features of the sound (see § A.1 for more details). Each byte is represented as a number in range [0, 255].

**Text.** Following recent work (Deletang et al., 2024), we use the enwik9 Wikipedia corpus (Hutter, 2009). The Unicode text is encoded to a sequence of bytes with UTF-8 encoding, which are represented as a list of numbers.

**DNA.** We use Genome assembly GRCh38, which contains 3.1GB of human DNA in FASTA format (NCBI, 2023). Since the original files include upper- and lower-case variants of each of the four nucleotides, we use a vocabulary of eight characters represented by numbers in range [0, 7] [4].

4 Lower-case variants represent low-complexity or masked regions (e.g., repeats or predicted sequences), which we keep to allow perfect reconstruction of the input sequence.

3.3 SYNTHETIC DATA GENERATION

Our aim is to examine whether CODELMS can be trained to generate concise and correct programs.
Hence, we adhere to the following desiderata for our data generation process: (a) **completeness**: for each sequence, there is some probability of sampling a program that produces it; and (b) **simplicity**: shorter programs are sampled with higher probability.

**Domain Specific Language.** We design a *compositional* DSL, in the sense that the output sequence is created by *composing* simpler sub-sequences (Fig. 2, center). Our DSL supports the following four classes of functions (see § A.2 for a list of all supported functions): [5]

- **Sequence initiators**: functions that take variables as input and return a sequence of numbers. These include a range of increasing numbers, a repetition of a single number, or a fixed (hard-coded) list of numbers.
- **Sequence modifiers**: functions that take a sequence as input and return a modified version of the sequence, e.g., by reversing, repeating, or substituting elements of the sequence. A sub-class of our modifiers includes mathematical operations, such as scan-adding the elements in the sequence or applying a modulo operation.
- **Sequence filters**: functions that filter a sequence, e.g., by keeping only even values.
- **Sequence mergers**: functions that take two sequences and merge them into a single one: concatenation, interleaving, pointwise addition, subtraction, or modulo of the sequences.

5 We note that, for simplicity, our DSL does not allow recursion – it is a proper subset of primitive recursive functions and of Turing-computable functions (Turing, 1937) and thus deviates from the theory in § 2.

Figure 3: **Two examples of program-sequence pairs from our synthetic data generation process.**

**Sampling programs.** Given the compositional nature of our DSL, we sample programs similarly to the way sequences are sampled from a context-free grammar (Chomsky, 1956; Hopcroft et al., 2006). We can also control the distribution of programs by assigning priors over the program distribution, e.g., over the lengths of the initiated sequences or the probability of applying modifications. For simplicity and to avoid biases, we keep the priors uniform [6]. For examples of program-sequence pairs, see Fig. 3; a toy sampler in the same spirit is sketched below. As in the natural data domains, we generate 1MB of evaluation data. For training, we sample 1M pairs from the same distribution. We provide additional details and statistics in § A.2.

6 We note that this does not guarantee that each operator is used with the same probability, as some operators are only applicable in specific contexts (e.g., an addition between two sequences is only applicable when the sequences have the same length and the sum of the matching elements is in the allowed range).

**Encoding programs.** We encode programs written in our DSL using a factorized uniform prior over functions and arguments. As each line includes a call to a single function, the encoding cost of each line is the cost of encoding the function (i.e., $\log_2(|\text{functions}|)$) plus the cost of the input parameters (e.g., the number of bits needed to specify an arbitrary list for *set_list*, or the indices of the sequences in *concatenate*). Our code-length calculation algorithm is presented in § A.2, Alg. 1.
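The toy sampler below illustrates the compositional sampling idea: an initiator followed by a random number of modifiers/filters, executed to obtain the paired sequence. The function set, priors, and program encoding here are illustrative stand-ins of our own, not the DSL of Appendix A.2.

```python
import random

def sample_program(rng, max_ops=3):
    """Sample a tiny straight-line program: one initiator, then 0..max_ops
    randomly chosen modifiers/filters (shorter programs are more likely)."""
    start, stop = rng.randint(0, 9), rng.randint(10, 20)
    lines = [f"seq = list(range({start}, {stop}))"]      # initiator
    modifiers = [
        "seq = seq[::-1]",                               # reverse
        "seq = seq * 2",                                 # repeat
        "seq = [v % 7 for v in seq]",                    # modulo
        "seq = [v for v in seq if v % 2 == 0]",          # filter evens
    ]
    for _ in range(rng.randint(0, max_ops)):
        lines.append(rng.choice(modifiers))
    return "\n".join(lines)

def execute(program):
    """Run a sampled program to obtain its output sequence."""
    env = {}
    exec(program, env)  # safe here: programs are self-generated, not external
    return env["seq"]

rng = random.Random(0)
program = sample_program(rng)
print(program)
print("->", execute(program))
```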
4 EXPERIMENTAL SETUP

**Baselines.** For our classical compression baseline, we use GZIP (Deutsch, 1996). We also include a Language Modeling is Compression (LMIC) baseline (see § B.1 for implementation details), where the LM predictions are used together with arithmetic coding to compress the sequence, as discussed in § 2 (Deletang et al., 2024). A major limitation of LMIC is that the original LLM is needed to decompress the data [7]. We also include REPEAT, a naive baseline that always returns the input sequence, and an UPPER BOUND baseline that returns the matching program for our synthetic pairs.

**Zero-shot prompted baselines.** We prompt our models to generate the shortest Python program that produces the input sequence (see § B.1, Fig. 9 for the full prompt). We use strong open-weight models from the LLAMA-3.1 family with 8, 70, and 405 billion parameters (Grattafiori et al., 2024). In addition, we use GPT4-O as a closed-source alternative (OpenAI et al., 2024b). We also experiment with a chain-of-thought (CoT) (Wei et al., 2022) baseline (see prompt in Fig. 10) and with OPENAI O1-MINI (OpenAI et al., 2024a), a closed-source model trained to think before generating the final answer. We use GZIP to encode programs before measuring their bit length (i.e., we use GZIP as a *prior* over programs) in all experiments, unless stated otherwise.

**Trained models.** For our trained models, we use LLAMA-3.1-8B as the base model, in addition to a 1.5-billion-parameter LLM with the same architecture, which we train on a mixture of open-source code and text (see § B.1 for more details and technical specifications). We further train our models on 10K, 100K or 1M unique program-sequence pairs sampled from our data generator (§ 3.3), and

7 As our focus is on compressing relatively small amounts of data, we report raw LMIC results disregarding the weights of the model – an upper bound on the true performance, which requires the weights used for encoding to be available at decoding time.

Idea Generation Category:
0Conceptual Integration
C45YqeBDUM
### ARTIFICIAL KURAMOTO OSCILLATORY NEURONS

**Takeru Miyato** [1], **Sindy Löwe** [2], **Andreas Geiger** [1], **Max Welling** [2]
1 University of Tübingen, Tübingen AI Center
2 University of Amsterdam

ABSTRACT

It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network. More recently, it was also hypothesized that dynamic (spatiotemporal) representations play an important role in both neuroscience and AI. Building on these ideas, we introduce Artificial Kuramoto Oscillatory Neurons (*AKOrN*) as a dynamical alternative to threshold units, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms. Our generalized Kuramoto updates bind neurons together through their synchronization dynamics. We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, calibrated uncertainty quantification, and reasoning. We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation, and in particular show the importance of dynamical representations. Code: https://github.com/autonomousvision/akorn. Project page: https://takerum.github.io/akorn_project_page/.

1 INTRODUCTION

Before the advent of modern deep learning architectures, artificial neural networks were inspired by biological neurons. In contrast to the McCulloch-Pitts neuron (McCulloch & Pitts, 1943), which was designed as an abstraction of an integrate-and-fire neuron (Sherrington, 1906), recent building blocks of neural networks are designed to work well on modern hardware (Hooker, 2021). As our understanding of the brain has improved over recent years, and neuroscientists have discovered more about its information processing principles, we can ask ourselves again whether there are lessons from neuroscience that can be used as design principles for artificial neural nets. In this paper, we follow a more modern dynamical view of neurons as oscillatory units that are coupled to other neurons (Muller et al., 2018). Similar to how the binary state of a McCulloch-Pitts neuron abstracts the firing of a real neuron, we abstract an oscillating neuron by an $N$-dimensional unit vector that rotates on the sphere (Löwe et al., 2023). We build a new neural network architecture with iterative modules that update $N$-dimensional oscillatory neurons via a generalization of the well-known non-linear dynamical model called the Kuramoto model (Kuramoto, 1984). The Kuramoto model describes the synchronization of oscillators; each Kuramoto update applies forces to connected oscillators, encouraging them to become aligned or anti-aligned. This process is similar to binding in neuroscience and can be understood as distributed and continuous clustering. Thus, networks with this mechanism tend to compress their representations via synchronization. We incorporate the Kuramoto model into an artificial neural network by applying the differential equation that describes the Kuramoto model to each individual neuron.
The resulting artificial Kuramoto oscillatory neurons (*AKOrN*) can be combined with layer architectures such as fully connected layers, convolutions, and attention mechanisms. We explore the capabilities of *AKOrN* and find that its neuronal mechanism drastically changes the behavior of the network. *AKOrN* strongly binds object features, with performance competitive with slot-based models in object discovery, enhances the reasoning capability of self-attention, and increases robustness against random, adversarial, and natural perturbations with surprisingly good calibration.

Figure 1: Our proposed artificial Kuramoto oscillatory neurons (*AKOrN*). The series of pictures on the left are 64×64 oscillators evolving under the Kuramoto updates (Eq. (2)), along with a plot of the energies computed by Eq. (3). Each single oscillator $x_i$ is an $N$-dimensional vector on the sphere and is influenced by (1) connected oscillators through the weights $J_{ij}$, (2) conditional stimuli $c_i$, and (3) $\Omega_i$, which determines the natural frequency of each oscillator. See Fig. 10 for details on $C$ and $J$.

2 MOTIVATION

It was recognized early on that neurons interact via lateral connections (Hubel & Wiesel, 1962; Somers et al., 1995). In fact, neighboring neurons tend to cluster their activities (Gray et al., 1989; Mountcastle, 1997), and clusters tend to compete to explain the input. This "competitive learning" has the advantage that information is compressed as we move through the layers, facilitating the process of abstraction by creating an information bottleneck (Amari & Arbib, 1977). Additionally, the competition encourages different higher-level neurons to focus on different aspects of the input (i.e., they specialize). This process is made possible by synchronization: like fireflies in the night, neurons tend to synchronize their activities with their neighbors', which leads to the compression of their representations. This idea has been used in artificial neural networks before to model "binding" between neurons, where neurons representing features such as square, blue, and toy are bound by synchronization to represent a square blue toy (Mozer et al., 1991; Reichert & Serre, 2013; Löwe et al., 2022). In this paper, we use an $N$-dimensional generalization of the famous Kuramoto model (Kuramoto, 1984) to model this synchronization. Our model has the advantage that it naturally incorporates spatiotemporal representations in the form of traveling waves (Keller et al., 2024), for which there is ample evidence in the neuroscientific literature. While their role in the brain remains poorly understood, it has been postulated that traveling waves are involved in short-term memory, long-range coordination between brain regions, and other cognitive functions (Rubino et al., 2006; Lubenov & Siapas, 2009; Fell & Axmacher, 2011; Zhang et al., 2018; Roberts et al., 2019; Muller et al., 2016; Davis et al., 2020; Benigno et al., 2023). For example, Muller et al. (2016) find that oscillatory patterns in the thalamocortical network during sleep are organized into circular wave-like patterns, which could give an account of how memories are consolidated in the brain.
Davis et al. (2020) suggest that spontaneous traveling waves in the visual cortex modulate synaptic activities and thus act as a gating mechanism in the brain. In the generalized Kuramoto model, traveling waves naturally emerge as neighboring oscillators start to synchronize (see the left of Fig. 1, and Fig. 10 in the Appendix).

Another advantage of using dynamical neurons is that they can perform a form of reasoning. Kuramoto oscillators have been successfully used to solve combinatorial optimization tasks such as k-SAT problems (Heisenberg, 1985; Wang & Roychowdhury, 2017). This can be understood from the fact that Kuramoto models can be viewed as continuous versions of discrete Ising models, where phase variables replace the discrete spin states. Many authors have argued that modern architectures based on, e.g., transformers lack this intrinsic capability of "neuro-symbolic reasoning" (Dziri et al., 2024; Bounsi et al., 2024). We show that *AKOrN* can successfully solve Sudoku puzzles, illustrating this capability. Additionally, *AKOrN* relates to models in quantum physics and active matter (see Appendix B.1). In summary, *AKOrN* combines beneficial features such as competitive learning (i.e., feature binding), reasoning, robustness, and uncertainty quantification, as well as the potential advantages of traveling waves observed in the brain, while being firmly grounded in well-understood physics models.

3 THE KURAMOTO MODEL

The Kuramoto model (Kuramoto, 1984) is a non-linear dynamical model of oscillators that exhibits synchronization phenomena. Even with its simple formulation, the model can represent numerous dynamical patterns depending on the connections between the oscillators (Breakspear et al., 2010; Heitmann et al., 2012). In the original Kuramoto model, each oscillator $i$ is represented by its phase $\theta_i \in [0, 2\pi)$. The differential equation of the Kuramoto model is
$$\dot{\theta}_i = \omega_i + \sum_j J_{ij} \sin(\theta_j - \theta_i), \qquad (1)$$
where $\omega_i \in \mathbb{R}$ is the natural frequency and $J_{ij} \in \mathbb{R}$ represents the connections between oscillators: if $J_{ij} > 0$ the $i$-th and $j$-th oscillators tend to align, and if $J_{ij} < 0$, they tend to oppose each other.
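A minimal Euler simulation of Eq. (1) (our own illustration, not from the paper's codebase) shows the synchronization behavior: with uniform positive coupling, the phases lock and the order parameter approaches 1.

```python
import numpy as np

# Euler simulation of the classic Kuramoto model (Eq. 1) for a small,
# all-to-all coupled population with positive coupling.
rng = np.random.default_rng(0)
n, dt, steps = 32, 0.01, 2000
omega = rng.normal(0.0, 0.5, n)           # natural frequencies omega_i
J = np.full((n, n), 1.0 / n)              # uniform positive coupling J_ij
theta = rng.uniform(0, 2 * np.pi, n)      # random initial phases

for _ in range(steps):
    diff = theta[None, :] - theta[:, None]                  # theta_j - theta_i
    theta += dt * (omega + np.sum(J * np.sin(diff), axis=1))

# Order parameter |mean(exp(i*theta))| is ~0 when disordered, ~1 when synced.
print(abs(np.exp(1j * theta).mean()))
```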
While the original Kuramoto model describes one-dimensional oscillators, we incorporate a *multi-dimensional vector version* of the model (Olfati-Saber, 2006; Zhu, 2013; Chandra et al., 2019; Lipton et al., 2021; Markdahl et al., 2021) with a symmetry-breaking term into neural networks. We denote oscillators by $X = \{x_i\}_{i=1}^C$, where each $x_i$ is a vector on a hypersphere: $x_i \in \mathbb{R}^N$, $\|x_i\|_2 = 1$. $N$ is the dimension of each single oscillator, called the *rotating dimensions*, and $C$ is the number of oscillators. While each $x_i$ is time-dependent, we omit $t$ for clarity. The oscillator index $i$ may have multiple dimensions: if the input is an image, for example, each oscillator is represented by $x_{c,h,w}$ with $c, h, w$ indicating channel, height, and width positions, respectively. The differential equation of our vector-valued Kuramoto model is written as follows:
$$\dot{x}_i = \Omega_i x_i + \mathrm{Proj}_{x_i}\Big(c_i + \sum_j J_{ij} x_j\Big) \quad \text{where} \quad \mathrm{Proj}_{x_i}(y_i) = y_i - \langle y_i, x_i \rangle x_i \qquad (2)$$
Here, $\Omega_i$ is an $N \times N$ anti-symmetric matrix, and $\Omega_i x_i$ is called the natural frequency term, which determines each oscillator's own rotation frequency and angle. The second term governs interactions between oscillators, where $\mathrm{Proj}_{x_i}$ is an operator that projects an input vector onto the tangent space of the sphere at $x_i$. We show a visual description of $\mathrm{Proj}_{x_i}$ and the relation between the vector-valued Kuramoto model and the original one in Appendix A.1. $C = \{c_i\}_{i=1}^C$, $c_i \in \mathbb{R}^N$ is a data-dependent variable, which is computed from the observational input or the activations of the previous layer. In this paper, every $c_i$ is set to be constant across time, but it can be a time-dependent variable. $c_i$ can be seen as another oscillator that has a unidirectional connection to $x_i$. Since $c_i$ is not affected by any oscillators, it strongly binds $x_i$ to the same direction as $c_i$, i.e., it acts as a bias direction (see Fig. 10 in the Appendix). In physics lingo, $C$ is often referred to as a "symmetry-breaking" field.

The Kuramoto model admits a Lyapunov function if we assume certain symmetry properties of $J_{ij}$ and $\Omega_i$ (Aoyagi, 1995; Wang & Roychowdhury, 2017). For example, if $J_{ij} = J_{ij} I$ with $J_{ij} = J_{ji} \in \mathbb{R}$, $\Omega_i = \Omega$, and $\Omega c_i = 0$, each update is guaranteed to minimize the following energy (the proof is found in Sec. F):
$$E = -\frac{1}{2} \sum_{i,j} x_i^{\mathsf{T}} J_{ij} x_j - \sum_i c_i^{\mathsf{T}} x_i \qquad (3)$$
Fig. 1 on the left shows how the oscillators and the corresponding energy evolve with a simple Gaussian kernel as the connectivity matrix. Here, we set $C$ as a silhouette of a fish, where $c_i = \mathbf{1}$ on the outer silhouette and $c_i = \mathbf{0}$ on the inner silhouette. The oscillator state is initially disordered, but gradually exhibits collective behavior, eventually becoming a spatially propagating wavy pattern. We include animations of visualized oscillators, including oscillators of trained *AKOrN* models used in our experiments, in the Supplementary Material. We would like to note that, even without symmetry constraints, we found that the energy value decreases relatively stably, and the models perform better across all tasks we tested compared to models with symmetric $J$. A similar observation is made by Effenberger et al. (2022), where heterogeneous oscillators, such as those with different natural frequencies, are helpful for the network to control the level of synchronization and increase the network capacity. From here on, we assume no symmetry constraints on $J$ and $\Omega$. Having asymmetric (a.k.a. non-reciprocal) connections is aligned with biological neurons in the brain, which also do not have symmetric synapses.

Figure 2: Our proposed Kuramoto-based network (here, for image processing). Each block consists of a Kuramoto layer and a readout module, described in Sec. 4. $C^{(L)}$ is used to make the final prediction of our model. Similar network structures are proposed in (Bansal et al., 2022; Geiping et al., 2025).
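For reference, here is a direct NumPy transcription of the energy in Eq. (3), written (as our own generalization) for a matrix-valued connectivity $J$; it reduces to the symmetric scalar case discussed above when $J_{ij} = J_{ij} I$.

```python
import numpy as np

def kuramoto_energy(X, C, J):
    """Energy of Eq. (3). X, C: (C_osc, N) oscillators and stimuli;
    J: (C_osc, C_osc, N, N) connectivity. Decreases along Kuramoto updates
    under the symmetry assumptions stated above."""
    pair = -0.5 * np.einsum("ia,ijab,jb->", X, J, X)  # -(1/2) sum x_i^T J_ij x_j
    field = -np.sum(C * X)                            # -sum c_i^T x_i
    return pair + field
```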
Similar network structures are proposed in Bansal et al. (2022); Geiping et al. (2025).

4 NETWORKS WITH KURAMOTO OSCILLATORS

We utilize the artificial Kuramoto oscillator neurons (_AKOrN_) as a basic unit of information processing in neural networks (Fig. 2). First, we transform an observation with a relatively simple function to create the initial conditional stimuli $\mathbf{C}^{(0)}$. Next, $\mathbf{X}^{(0)}$ is initialized by either $\mathbf{C}^{(0)}$, a fixed learned embedding, random vectors, or a mixture of these initialization schemes. Each block is composed of two modules: the Kuramoto layer and the readout module, which together process the pair $\{\mathbf{X}, \mathbf{C}\}$. The Kuramoto layer updates $\mathbf{X}$ given the conditional stimuli $\mathbf{C}$, and the readout layer extracts features from the final oscillatory states to create new conditional stimuli. We denote the number of layers by $L$ and the $l$-th layer's output by $\{\mathbf{X}^{(l)}, \mathbf{C}^{(l)}\}$.

**Kuramoto layer** Starting with $\mathbf{X}^{(l,0)} := \mathbf{X}^{(l-1)}$ as initial oscillators, where the second superscript denotes the time step, we update them by the discrete version of the differential equation (2):

$$\Delta \mathbf{x}_i^{(l,t)} = \mathbf{\Omega}_i^{(l)} \mathbf{x}_i^{(l,t)} + \mathrm{Proj}_{\mathbf{x}_i^{(l,t)}}\Big(\mathbf{c}_i^{(l-1)} + \sum_j \mathbf{J}_{ij}^{(l)} \mathbf{x}_j^{(l,t)}\Big), \qquad (4)$$

$$\mathbf{x}_i^{(l,t+1)} = \Pi\Big(\mathbf{x}_i^{(l,t)} + \gamma\, \Delta \mathbf{x}_i^{(l,t)}\Big), \qquad (5)$$

where $\Pi$ is the normalizing operator $\mathbf{x}/\|\mathbf{x}\|_2$ that ensures that the oscillators stay on the sphere. $\gamma > 0$ is a scalar controlling the step size of the update, which is learned in our experiments. We call this update a Kuramoto update or a Kuramoto step from here on. We optimize both $\mathbf{\Omega}^{(l)}$ and $\mathbf{J}^{(l)}$ given the task objective. We update the oscillators $T$ times and denote the oscillators at step $T$ by $\mathbf{X}^{(l,T)}$. This oscillator state is used as the initial state of the next block: $\mathbf{X}^{(l)} := \mathbf{X}^{(l,T)}$.

**Readout module** We read out patterns encoded in the oscillators to create new conditional stimuli $\mathbf{C}^{(l)}$ for the subsequent block. Since the oscillators are constrained to the (unit) hypersphere, all the information is encoded in their directions. In particular, the relative direction between oscillators is an important source of information because patterns after certain Kuramoto steps only differ in global phase shifts (see the last two patterns in Fig. 10 in the Appendix). To capture phase-invariant patterns, we take the norm of the linearly processed oscillators:

$$\mathbf{C}^{(l)} = g(\mathbf{m}) \in \mathbb{R}^{C' \times N}, \quad m_k = \|\mathbf{z}_k\|_2, \quad \mathbf{z}_k = \sum_i \mathbf{U}_{ki}\, \mathbf{x}_i^{(l,T)} \in \mathbb{R}^{N'}, \qquad (6)$$

where $\mathbf{U}_{ki} \in \mathbb{R}^{N' \times N}$ is a learned weight matrix, $g$ is a learned function, and $\mathbf{m} = [m_1, ..., m_K]^{\top} \in \mathbb{R}^K$. $N'$ is typically set to the same value as $N$. In this work, $g$ is just the identity function, a linear layer, or at most a three-layer neural network with residual connections. Because the module computes the norm of the (weighted) $\mathbf{X}^{(l,T)}$, this readout module includes functions that are invariant to a global phase shift in the solution space. Unless otherwise specified, we set $C' = C$ and $K = C \times N$ in all our experiments.
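The discrete updates of Eqs. (4)-(5) and the readout of Eq. (6) can be sketched as follows; for brevity this assumes scalar couplings $J_{ij}$ and scalar mixing weights $U_{ki}$ rather than the full $N \times N$ and $N' \times N$ blocks, and takes $g$ to be the identity.

```python
import numpy as np

def kuramoto_layer(X, C_stim, Omega, J, gamma=0.5, T=8):
    """T Kuramoto steps (Eqs. 4-5). X, C_stim: (n, N); Omega: (N, N)
    anti-symmetric; J: (n, n) scalar couplings (a simplification)."""
    for _ in range(T):
        drive = C_stim + J @ X                                   # c_i + sum_j J_ij x_j
        tangent = drive - (drive * X).sum(-1, keepdims=True) * X # Proj onto tangent space
        X = X + gamma * (X @ Omega.T + tangent)                  # Eq. (4)-(5) step
        X = X / np.linalg.norm(X, axis=-1, keepdims=True)        # Pi: back onto the sphere
    return X

def readout(X, U):
    """Phase-invariant readout (Eq. 6) with g = identity and scalar
    mixing weights U: (K, n); returns m_k = ||sum_i U_ki x_i||_2."""
    Z = U @ X                          # (K, N) mixed oscillators z_k
    return np.linalg.norm(Z, axis=-1)  # norms are invariant to global phase shifts
```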
4.1 CONNECTIVITIES

We implement artificial Kuramoto oscillator neurons (_AKOrN_) within convolutional and self-attention layers. We write down the formal equations of the connectivity for completeness; they simply follow the conventional operation of convolution or self-attention applied to oscillatory neurons flattened w.r.t. the rotating dimension $N$. In short, convolutional connectivity is local, while attentive connectivity is dynamic, input-dependent connectivity.

**Convolutional connectivity** To implement _AKOrN_ in a convolutional layer, oscillators and conditional stimuli are represented as $\{\mathbf{x}_{c,h,w}, \mathbf{c}_{c,h,w}\}$, where $c, h, w$ are channel, height, and width positions, and the update direction is given by:

$$\mathbf{y}_{c,h,w} := \mathbf{c}_{c,h,w} + \sum_d \sum_{h',w' \in R[H',W']} \mathbf{J}_{c,d,h',w'}\, \mathbf{x}_{d,(h+h'),(w+w')}, \qquad (7)$$

where $R[H', W'] = [1, ..., H'] \times [1, ..., W']$ is the $H' \times W'$ rectangular region (i.e., the kernel size) and $\mathbf{J}_{c,d,h',w'} \in \mathbb{R}^{N \times N}$ are the learned weights in the convolution kernel, with $(c, d)$ the output and input channels and $(h', w')$ the height and width positions.

**Attentive connectivity** Similar to Bahdanau et al. (2014); Vaswani et al. (2017), we construct the internal connectivity in the QKV-attention manner. In this case, oscillators and conditional stimuli are represented by $\{\mathbf{x}_{l,i}, \mathbf{c}_{l,i}\}$, where $l$ and $i$ are indices of tokens and channels, respectively. The update direction becomes:

$$\mathbf{y}_{l,i} := \mathbf{c}_{l,i} + \sum_{m,j} \mathbf{J}_{l,m,i,j}\, \mathbf{x}_{m,j} = \mathbf{c}_{l,i} + \sum_{m,j} \sum_{k,h} \mathbf{W}^{O}_{h,i,k}\, A_h(l,m)\, \mathbf{W}^{V}_{h,k,j}\, \mathbf{x}_{m,j}, \qquad (8)$$

$$A_h(l,m) = \frac{e^{d_h(l,m)}}{\sum_{m'} e^{d_h(l,m')}}, \qquad d_h(l,m) = \Big\langle \sum_i \mathbf{W}^{Q}_{h,a,i}\, \mathbf{x}_{l,i},\ \sum_i \mathbf{W}^{K}_{h,a,i}\, \mathbf{x}_{m,i} \Big\rangle, \qquad (9)$$

where $\mathbf{W}^{O}_{h,i,k}, \mathbf{W}^{V}_{h,k,j}, \mathbf{W}^{Q}_{h,a,i}, \mathbf{W}^{K}_{h,a,i} \in \mathbb{R}^{N \times N}$ are learned weights of head $h$. Since the connectivity depends on the oscillator values and is thus not static during the updates, it is unclear whether the energy defined in Eq. (3) is proper. Nonetheless, in our experiments, the energy and oscillator states are stable after several updates (see the Supplementary Material, which includes visualizations of the oscillators of trained _AKOrN_ models and their corresponding energies over timesteps).

5 RELATED WORKS

Many studies have historically incorporated oscillatory properties into artificial neural networks (Baldi & Pineda, 1991; Wang & Terman, 1997; Ketz et al., 2013; Neil et al., 2016; Chen et al., 2021b; Rusch & Mishra, 2020; Laborieux & Zenke, 2022; Rusch et al., 2022; van Gerven & Jensen, 2024). The Kuramoto model, a well-known oscillator model describing synchronization phenomena, has rarely been explored in machine learning, particularly in deep learning. However, several works motivate us to use the Kuramoto model as a mechanism for learning binding features. For example, although tested only in fairly synthetic settings, Liboni et al. (2023) show that cluster features emerge in the oscillators of the Kuramoto model with lateral connections, without optimization. Ricci et al.
(2021) study how data-dependent connectivity can construct synchrony on synthetic examples. Nguyen et al. (2024) relate over-smoothing to the notion of phase synchrony and use the model to mitigate over-smoothing phenomena in graph neural networks. Also, a line of works on neural synchrony (Reichert & Serre, 2013; Löwe et al., 2022; Stanić et al., 2023; Zheng et al., 2023; Löwe et al., 2023; Gopalakrishnan et al., 2024) shares the same philosophy with _AKOrN_. Zheng et al. (2023) model synchrony by using temporal spiking neurons based on biological neuronal mechanisms. Löwe et al. (2023) extend the concept of complex-valued neurons (used by Reichert & Serre (2013); Löwe et al. (2022) to abstract temporal neurons) into multidimensional neurons. They show that, together with a specific activation function called _χ-binding_ that implements the 'winner-take-all' mechanism at the single-neuron level (Löwe et al., 2024), the multidimensional neurons learn to encode binding information in their orientations. Those synchrony-based models are shown to work well on relatively synthetic data but have struggled to scale to natural images. Löwe et al. (2023) show that their model can work with a large pre-trained self-supervised learning (SSL) model as a feature extractor, but its performance improvement is limited compared to slot-based models.

Idea Generation Category:
0Conceptual Integration
nwDRD4AMoN
# KIVA: KID-INSPIRED VISUAL ANALOGIES FOR TESTING LARGE MULTIMODAL MODELS

**Eunice Yiu**¹ **Maan Qraitem**² **Anisa Noor Majhi**¹ **Charlie Wong**¹ **Yutong Bai**¹ **Shiry Ginosar**³,⁴ **Alison Gopnik**¹ **Kate Saenko**²

1 University of California, Berkeley 2 Boston University 3 Google DeepMind 4 Toyota Technological Institute at Chicago

ABSTRACT

This paper investigates visual analogical reasoning in large multimodal models (LMMs) compared to human adults and children. A "visual analogy" is an abstract rule inferred from one image and applied to another. While benchmarks exist for testing visual reasoning in LMMs, they require advanced skills and omit basic visual analogies that even young children can make. Inspired by developmental psychology, we propose a new benchmark of 4,300 visual transformations of everyday objects to test LMMs on visual analogical reasoning and compare them to children (ages three to five) and to adults. We structure the evaluation into three stages: identifying _what_ changed (e.g., color, number, etc.), _how_ it changed (e.g., added one object), and _applying the rule_ to new scenarios. Our findings show that while GPT-o1, GPT-4V, LLaVA-1.5, and MANTIS identify the "what" effectively, they struggle with quantifying the "how" and extrapolating this rule to new objects. In contrast, children and adults exhibit much stronger analogical reasoning at all three stages. Additionally, the strongest tested model, GPT-o1, performs better in tasks involving simple surface-level visual attributes like color and size, correlating with quicker human adult response times. Conversely, more complex tasks such as number, rotation, and reflection, which necessitate extensive cognitive processing and understanding of extrinsic spatial properties in the physical world, present more significant challenges. Altogether, these findings highlight the limitations of training models on data that primarily consists of 2D images and text.¹

1 INTRODUCTION

What is visual cognition? Humans make countless visual inferences every day from observing objects and scenes, quickly detecting even subtle visual changes. We generalize common patterns about changes from different observations and use these insights to solve new problems. If we put a wool sweater in the washing machine and it comes out smaller, we might infer that the wash shrinks wool and avoid washing a wool coat in the future. If cookies disappear, we might infer that someone is eating our treats and proceed to hide the chocolate elsewhere. This ability to draw parallels between situations and apply learned patterns to a new scenario is known as _analogical reasoning_. Formally defined, an analogy is a systematic comparison between structures that uses the properties and relations of objects in a source structure to infer properties and relations of objects in a target structure (Mitchell, 2021; Schunn & Dunbar, 1996). Analogical reasoning is a hallmark of human intelligence and learning (Gentner, 1983; Holyoak, 2012; Mitchell, 2021; Sternberg, 1977). It is what enables us to be flexible, adaptive, and robust learners across a wide variety of settings, finding meaning in patterns and making out-of-distribution generalizations (Chollet, 2019; Mitchell, 2021).
Analogical reasoning is already available to young children (Goddu et al., 2020; Goswami, 2013; Sternberg & Rifkin, 1979), and is crucial for human problem-solving in various contexts, from building scientific models to appreciating metaphors to formulating legal arguments. Today, large multimodal models (LMMs) have made significant progress, but they remain data-hungry and require substantial human effort to adapt to new contexts (Chollet, 2019; Reizinger et al., 2024). As analogical reasoning is instrumental for general-purpose and adaptive machines, it is crucial to examine whether current models have such capabilities.¹ Critically, examining analogical capabilities does not permit models to "cheat" by merely depending on their training data, because it requires context-dependent abstraction beyond general object recognition. In KiVA, the same object may undergo different kinds of transformations, requiring models to combine familiar elements in new, trial-specific ways. Reasoning about analogies involves first classifying _relationships_ between object characteristics, specifying similarities and differences, then extrapolating the _same relationship_ to new objects. This paper focuses on visual analogies, testing models' ability to reason abstractly about visual observations. See Figure 1 for a summary of the KiVA benchmark and results.

¹ Benchmark (code, data, models) is available at: https://github.com/ey242/KiVA

Figure 1: **KiVA: Kid-inspired Visual Analogies. (a)** 5 visual analogy domains examined in KiVA and KiVA-adults: color, size, rotation, reflection, and number (see Figure 3 for the full task format). Unlike KiVA, the starting color, size, orientation, and number of test objects in KiVA-adults further differ from the starting values of the given transformations. **(b)** Performance of children, adults & LMMs in extrapolating a transformation rule to a novel object in KiVA (top) and KiVA-adults (bottom).

There is a growing body of work examining visual reasoning and generalization capabilities in large multimodal models (Ahrabian et al., 2024; Huang et al., 2024; Moskvichev et al., 2023; Petersen & van der Plas, 2023; Webb et al., 2023). Existing benchmarks of visual analogies include (a) ARC (Chollet, 2019) and ConceptARC (Mitchell et al., 2023; Moskvichev et al., 2023), (b) variations of Raven's Progressive Matrices (Huang et al., 2024), and (c) abstract spatial reasoning (Ahrabian et al., 2024) (see prior benchmarks in Figure 2). These prior benchmarks all have several critical limitations. First, they rely on abstract shapes and grids, lacking real-world relevance. This abstraction of stimuli neither aligns with the training data of large multimodal models nor effectively mimics the complexity and variability found in everyday visual tasks, making it less suitable for assessing how well AI models can perform analogical reasoning in practical contexts. Second, the transformations examined involve conjunctions of visual concepts, such as extracting _and_ transposing pixels according to some arbitrary rule, which do not tap into basic visual cognition.
Humans do not require the ability to solve these specific tasks to function effectively in their daily lives, nor to demonstrate their capacity for visual analogical reasoning. Third, while we know that models often perform poorly on these benchmarks, where they fail in the reasoning process needs to be clarified, since existing evaluations focus solely on prediction accuracy rather than the reasoning approach or what is perceived.

We propose a Kid-inspired Visual Analogies (KiVA) benchmark founded on developmental psychology (Figure 1, left) (Goddu et al., 2020; Lehmann et al., 2014). We focus our analysis on basic visual analogical capabilities that are present early in human development and are important for understanding the physical world. _KiVA_ isolates the following fundamental capabilities that emerge early in human development: detecting changes in **color** (Ross-sheehy et al., 2003; Wang & Goldman, 2016) and **size** (Day & McKenzie, 1981; Wang & Goldman, 2016), changes that involve **rotation** and **reflection** (Frick et al., 2013; Quaiser-Pohl, 2003), and changes in small **numbers** of objects (Cherian et al., 2023; Levine et al., 1992). It is solvable by a three-year-old child.

Figure 2: **Prior benchmarks versus KiVA for visual analogies. (a)** Prior benchmarks like **I.** ConceptARC, **II.** Raven's Progressive Matrices, and **III.** CCSE Reasoning involve arbitrary changes of abstract shapes and grids. **(b)** KiVA examines basic changes that even three-year-olds can solve.

_KiVA-adults_ serves as a more challenging version of KiVA that is not solvable by young children but is solvable by adults, requiring deeper generalization from the given transformations (the starting values of objects in the given and test transformations are not aligned) and featuring more variations in the above visual domains (see details in Section 3.2). Refer to Figure 1 for sample test trials of KiVA and KiVA-adults.

KiVA stands out in the following ways. First, our dataset utilizes _real-world_, _physically grounded_ objects curated from established 3D datasets of common household items (Downs et al., 2022) and toys that are familiar to human children (Stojanov et al., 2021), which align more with the training distribution of computer vision models and the visual data of humans than other visual analogical reasoning datasets (Figure 2). Second, our approach is inspired by _developmental psychology_, specifically how children learn to perform analogical reasoning not abstractly, but from simple objects in grounded contexts (Christie & Gentner, 2010; Gentner, 1983; Goddu et al., 2020). We propose a similar approach for large multimodal models, investigating whether they can perform like children on basic visual analogical reasoning tasks related to color, size, orientation, and number, as already reported in child development journals (Coates et al., 2023; Goddu et al., 2020; 2025). Starting with simple, real-world relevant tasks in child development allows models to develop robust reasoning abilities before tackling more advanced tasks, providing a clearer pathway for evaluating and improving cognitive functions in AI. Third, we break down our evaluation to examine the _different steps_ involved in analogical reasoning, to determine which steps a model can perform and where it may fail: _1)_ classifying the domain of a visual transformation, _2)_ specifying the transformation rule, and _3)_ extrapolating the inferred rule to a new item.
This three-stage evaluation (Figure 3) gives us insights into models' reasoning processes beyond simply selecting a correct or incorrect response at the end. Results from KiVA and KiVA-adults demonstrate that state-of-the-art large multimodal models, i.e., GPT-o1 (OpenAI, 2024), GPT-4V (OpenAI, 2023), LLaVA-1.5 (Liu et al., 2024), and MANTIS (Jiang et al., 2024a), still cannot solve visual analogies like humans can. These models do not match even the capabilities of a three-year-old child in reasoning about number and reflection (Figure 1). While LMMs can categorize some transformations, they still struggle to extrapolate those transformations to new objects. In particular, GPT-o1 and GPT-4V outperform LLaVA-1.5 and MANTIS but also demonstrate weaker performance on orientation and number changes than on size and color changes, which are processed more quickly by humans, at an earlier age (Slater et al., 1990; Wang & Goldman, 2016), and in a more primary region of the visual cortex (Zeki et al., 1991; Zeng et al., 2020). Taken together, KiVA and KiVA-adults not only mirror the natural progression of human cognitive development, but also provide a more structured and comprehensive framework for evaluating the capabilities and growth of LMMs. We also release on our project page code for _KiVA-compositionality_, which combines multiple object transformations to probe even more complex compositional reasoning. This serves as the next benchmark for models to surpass after KiVA and KiVA-adults.

2 RELATED WORK

**Evaluating human visual analogical reasoning.** There is a variety of tasks designed in developmental psychology to examine human visual analogical reasoning early in life. Children are asked to compare simple object and relational matches (Christie & Gentner, 2010; Goddu et al., 2020; Kuwabara & Smith, 2012) along dimensions such as color (Milewski & Siqueland, 1975; Ross-sheehy et al., 2003), number (Cherian et al., 2023; Levine et al., 1992), size (Day & McKenzie, 1981; Slater et al., 1990), and spatial orientation (Frick et al., 2013; Quaiser-Pohl, 2003). Older children and adults are evaluated on Raven's Progressive Matrices (RPMs) (Carpenter et al., 1990; Lovett & Forbus, 2017; Raven & Court, 1938) and Bongard Problems (Bongard, 1970; Weitnauer et al., 2023). Even though they tend to be the most representative and largest testbeds for testing advanced visual analogical reasoning, RPMs and Bongard problems use abstract geometric shapes and test recognition of arbitrary patterns that (1) cannot be solved by children before the age of 6 and (2) are not critical to everyday visual processing. KiVA is the first visual analogical reasoning benchmark that includes common real-world objects and more natural visual cognition skills such as counting and spatial transformations, tasks that even a three-year-old child can handle (Goddu et al., 2020). We also examine where people and models fail, with more fine-grained evaluation.

**Evaluating visuo-linguistic reasoning in AI models.** Several proposals for evaluating modern AI systems' visuo-linguistic reasoning capabilities followed the recent successes of large multimodal models.
Many concentrate on a narrow, isolated set of tasks for detecting object properties like size estimation (Chen et al., 2024; Liu et al., 2022), color perception (Abdou et al., 2021; Samin et al., 2024), counting objects (Liang et al., 2023; Paiss et al., 2023), object viewpoint/pose and chirality (Kapelyukh et al., 2023; Lin et al., 2020; Chen et al., 2024), and visuo-linguistic compositionality (Thrush et al., 2022; Kamath et al., 2023; Liu et al., 2023). Typically, the objective of these tasks is to evaluate models' ability to report a correct property about objects in an image. They lack the depth to probe the pattern abstraction and generalization involved in visual analogical reasoning. Broader benchmarks, such as visual question answering setups (Antol et al., 2015; Goyal et al., 2017), attempt to investigate models' understanding of various visual concepts. One approach, taken by Bubeck et al. (2023); Yang et al. (2023), was to try to push the envelope on various tasks to capture anecdotal and qualitative observations regarding the performance of GPT-4. Perception Test (Pătrăucean et al., 2023) proposed a second approach: a visual video-based benchmark including developmentally-inspired tasks such as object permanence, object tracking, spatial relations, etc. Recently, the BLINK benchmark was introduced to show that core visual perception tasks, easily solvable by humans "within a blink," remain challenging for large multimodal models due to their resistance to language-based mediation (Fu et al., 2024). However, all these benchmarks fall short in evaluating the deeper, more complex aspects of visual analogical reasoning and generalization.

Another specific class of benchmarks tests generalization and reasoning within abstract puzzle grids. These include the Abstraction and Reasoning Corpus (ARC) (Chollet, 2019) and ConceptARC (Moskvichev et al., 2023; Mitchell et al., 2023); a direct translation of RPMs-based human evaluation has previously been applied to models by Ahrabian et al. (2024) and Huang et al. (2024) (see these prior benchmarks in Figure 2). However, the stimuli are simple, monotonic shapes like squares and circles, lacking real-world complexity and variability. Moreover, they emphasize complex pattern recognition and logical sequencing without real-world context, neglecting basic visual cognition skills even children possess, and this limited scope may render them unsuitable for training data that typically covers a much broader range of real-world visuals. In summary, although many benchmarks assess advanced visual capabilities in large multimodal models, none evaluate visual cognition that is clearly exhibited by young children (such as predicting simple transformations of real-world objects) or use children as a baseline for comparison.

3 THE KIVA BENCHMARK FOR VISUAL ANALOGICAL REASONING

We introduce KiVA, a Kid-inspired Visual Analogies benchmark, wherein real-world objects undergo common transformations necessary for everyday visual cognition. We focus on isolating and testing basic visual transformations that even a three-year-old child understands (Goddu et al., 2020). As we show in Figure 1, we examine noticing **color changes** (Ross-sheehy et al., 2003; Milewski & Siqueland, 1975), **size changes** (Day & McKenzie, 1981; Slater et al., 1990), **rotation** and **reflection** (Quaiser-Pohl, 2003; Frick et al., 2013), and **number changes** such as addition and subtraction of a small number of objects (Cherian et al., 2023; Levine et al., 1992).
We then build upon this benchmark by proposing KiVA-adults, which involves a greater variety of transformations and demands more abstract forms of generalization. It is solvable by adults but not by children under five.

Figure 3: **An example of a trial in KiVA.** Models and humans are first asked to classify a given transformation (left). If the classification is correct (green arrow), humans and models are further evaluated on their verbal specification of the transformation (middle) and then on visual extrapolation (right). Otherwise, humans and models skip to make a visual extrapolation (yellow arrow). The three prompts are:

1. Verbal Classification (_what_ changed): Which one of the following rules {(1) change in orientation of objects (where things face), (2) change in number of objects, (3) change in size of objects, (4) no change, (5) doesn't apply} best describes the left-to-right transformation on top of the puzzle where the picture on the left transforms to the picture on the right? Answer with the correct rule number surrounded by parentheses, then provide a "step-by-step" reasoning for your choice.

2. Verbal Specification (_how_ it changed): Which one of the following rules {(1) objects become smaller, (2) objects become bigger, (3) no change, (4) doesn't apply} best describes the left-to-right transformation in the top of the puzzle where the picture on the left transforms to the picture on the right? Answer with the correct rule number surrounded by parentheses. Then provide a "step-by-step" reasoning for your choice.

3. Visual Extrapolation (applying the same change to a new object): Which one of three left-to-right object transformations (marked by either (A), (B) or (C)) on the bottom of the puzzle is the same as the left-to-right transformation on the top of the puzzle? Answer with the correct letter surrounded by parentheses (or (D) if none of the options apply), then provide a "step-by-step" reasoning for your choice.

3.1 A THREE-STAGE EXPERIMENTAL PARADIGM

We use our proposed dataset to benchmark computational models' and human subjects' visual analogical reasoning capabilities. We utilize the same testing procedure (Figure 3) for both kinds of subjects. In each trial, we start by presenting a given transformation of an object that changes by a specific rule, following the experimental paradigm of other analogical reasoning benchmarks for humans and computational models (Moskvichev et al., 2023; Bongard, 1970; Goddu et al., 2020). Inspired by the component processes model of analogical reasoning (Sternberg, 1977), we evaluate the subject's ability to determine _what_ changed (_Verbal Classification_), _how_ it changed (_Verbal Specification_), and to apply the same transformation rule to predict the outcome of a new object, i.e., a _Visual Extrapolation_. We break the question down into these three steps to test the different cognitive processes involved in analogical reasoning. The first two assess the necessary prerequisites for accurate analogical reasoning, while the last step represents the core visual analogy task. Critically, KiVA retains the core nonverbal extrapolation task (the last step) from previous benchmarks, and the verbal questions _do not replace_ the core nonverbal tasks. Even without correct verbal responses, humans and models can still tackle the independently-assessed visual extrapolation tasks.
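A minimal sketch of the conditional three-stage flow in Figure 3 (classification first, specification only on a correct classification, with extrapolation always assessed); the dictionary keys and the `subject` callable are hypothetical names for illustration, not taken from the released benchmark code.

```python
def run_trial(subject, trial):
    """Three-stage KiVA evaluation flow (Figure 3). `subject` is any
    callable that answers a multiple-choice question with an option label."""
    results = {}
    # Stage 1: verbal classification ("what" changed)
    results["classify"] = subject(trial["what_q"]) == trial["what_a"]
    if results["classify"]:
        # Green arrow: stage 2, verbal specification ("how" it changed)
        results["specify"] = subject(trial["how_q"]) == trial["how_a"]
    # Yellow or green arrow: stage 3, visual extrapolation, always assessed
    results["extrapolate"] = subject(trial["extrap_q"]) == trial["extrap_a"]
    return results
```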
Thus, KiVA doesn't require specific language skills but provides a window into the analogical reasoning process of humans and models in reaching their final solutions. The first two verbal questions were further paraphrased by developmental psychologists so that they are comprehensible to a three-year-old child (Appendix A.3); models and adults did not benefit from the child-appropriate prompting, so the original prompts in Figure 3 were preserved. We pose all questions in a multiple-choice format for human children, adults, and models, which enables automatic scoring. Option labels for correct responses were randomized so that LMMs' option-label bias does not correlate with task accuracy. Furthermore, we provided the opportunity to select "Doesn't apply" to accommodate responses that the provided choices may not cover. Excluding the "Doesn't apply" option, the chance level is 25% for Verbal Classification (4 choices) and 33% for Verbal Specification and Visual Extrapolation (3 choices). Refer to Figure 3 for the three-stage query pipeline and Appendix A.2 for specific prompts.

**Verbal classification of transformation ("what").** We first evaluate whether the model or human can detect what changed in a given transformation and classify it into the correct visual domain, such as size or number (see Figure 3). We randomly sample incorrect multiple-choice options from other possible transformation domains. "No change" and "Doesn't apply" are always included as options to accommodate alternative forms of reasoning that are not covered by the choices. Suppose the model fails to identify basic changes, such as distinguishing a numerical change from a color change. It will then be unable to predict how new objects change based on the given transformations. This is an inadequacy of existing visual analogical reasoning benchmarks (Moskvichev et al., 2023; Mitchell et al., 2023; Ahrabian et al., 2024; Huang et al., 2024), which focus solely on advanced predictions without ensuring fundamental change-detection capabilities.

**Verbal specification of transformation ("how").** If a subject correctly classifies the transformation, we ask them to further specify the transformation, also in multiple-choice form (see the green arrow in Figure 3). This step is crucial because it ensures the subject can accurately specify the rule governing the

Idea Generation Category:
3Other
vNATZfmY6R
# KGAREVION: AN AI AGENT FOR KNOWLEDGE-INTENSIVE BIOMEDICAL QA

**Xiaorui Su**¹ **Yibo Wang**² **Shanghua Gao**¹ **Xiaolong Liu**² **Valentina Giunchiglia**³ **Djork-Arné Clevert**⁴ **Marinka Zitnik**¹

1 Harvard University 2 University of Illinois Chicago 3 Imperial College London 4 Pfizer

xiaorui_su@hms.harvard.edu, ywang633@uic.edu, shanghua_gao@hms.harvard.edu, xliu262@uic.edu, v.giunchiglia20@imperial.ac.uk, Djork-Arne.Clevert@pfizer.com, marinka@hms.harvard.edu

ABSTRACT

Biomedical reasoning integrates structured, codified knowledge with tacit, experience-driven insights. Depending on the context, quantity, and nature of available evidence, researchers and clinicians use diverse strategies, including rule-based, prototype-based, and case-based reasoning. Effective medical AI models must handle this complexity while ensuring reliability and adaptability. We introduce KGAREVION, a knowledge graph-based agent that answers knowledge-intensive questions. Upon receiving a query, KGAREVION generates relevant triplets by leveraging the latent knowledge embedded in a large language model. It then verifies these triplets against a grounded knowledge graph, filtering out errors and retaining only accurate, contextually relevant information for the final answer. This multi-step process strengthens reasoning, adapts to different models of medical inference, and outperforms retrieval-augmented generation-based approaches that lack effective verification mechanisms. Evaluations on medical QA benchmarks show that KGAREVION improves accuracy by over 5.2% over 15 models in handling complex medical queries. To further assess its effectiveness, we curated three new medical QA datasets with varying levels of semantic complexity, where KGAREVION improved accuracy by 10.4%. The agent integrates with different LLMs and biomedical knowledge graphs for broad applicability across knowledge-intensive tasks. We evaluated KGAREVION on AfriMed-QA, a newly introduced dataset focused on African healthcare, demonstrating its strong zero-shot generalization to underrepresented medical contexts.

1 INTRODUCTION

Biomedical reasoning requires integrating diagnostic and therapeutic decision-making with an understanding of the biology and chemistry of diseases and drugs (Patel et al., 2005). Large language models (LLMs) (OpenAI, 2024; Dubey et al., 2024; Gao et al., 2024) demonstrate strong general capabilities, but their responses to medical questions often suffer from incorrect retrieval, omission of key information, and misalignment with current scientific and clinical knowledge. These models can struggle to generate contextually relevant answers that account for local factors, such as patient demographics, geographic variations, and specialized biomedical domains (Harris, 2023). A limitation arises from their inability to integrate multiple types of evidence by combining _structured, codified_ scientific knowledge with _tacit, non-codified_ expertise (clinical intuition, case-based experience, and learned heuristics), which are essential for contextualizing scientific evidence within real-world medical decision-making (Harris, 2023).

LLM-powered QA models often lack the _multi-source_ and _grounded knowledge_ required for effective medical reasoning. Answering complex medical questions requires an understanding of the intricate and highly specialized nature of biomedical concepts.
However, LLMs trained on general-purpose data struggle to solve medical problems that require _specialized knowledge in the domain_. This challenge stems from their inability to differentiate subtle, domain-specific nuances of medical significance. As a result, LLMs often do not reason effectively in complex scenarios, where successful inference requires recognizing and reasoning about dependencies between multiple interrelated medical concepts within a single question, and interpreting highly similar yet semantically distinct biomedical entities with precision, as illustrated in Fig. 1.

Figure 1: **a)** Performance of existing LLMs (LLaMA3-8B, LLaMA3.1-8B, GPT-4-Turbo) on three new datasets (MedDDx-Basic, MedDDx-Intermediate, MedDDx-Expert) introduced in this paper with questions of varying difficulty. **b)** Sample questions from the new datasets.

To address these limitations, researchers have turned to retrieval-augmented generation (RAG), which follows a retrieve-then-answer paradigm (Shi et al., 2024). These methods enrich LLMs with external knowledge sources, allowing them to retrieve information from biomedical databases and structured repositories (Fan et al., 2024). However, the accuracy of RAG-based answers is heavily dependent on the quality of the retrieved content, making them prone to errors (Karpukhin et al., 2020). Many biomedical knowledge bases contain incomplete, outdated, or incorrect information, leading to unreliable retrieval (Adlakha et al., 2024; Thakur et al., 2023). Furthermore, approaches based on RAG lack post-retrieval verification mechanisms, leaving them unable to assess whether retrieved information is factually accurate, contextually relevant, or whether it omits essential information (Zhao et al., 2023).

Knowledge graphs (KGs) of medical concepts serve as grounded knowledge bases that provide precise and specialized in-domain knowledge for medical QA models (Qi et al., 2024; Murali et al., 2023; Chandak et al., 2023). Although KGs improve these models, they often contain gaps, limiting their ability to capture biomedical relationships. Approaches that retrieve medical concepts based solely on direct associations (edges) in a KG fail to account for implicit or complex relationships. For example, two proteins with different biological functions may lack a direct connection in a KG, even though they share biological similarity at the molecular level (Menche et al., 2015). Advancing LLM-powered models for knowledge-intensive medical QA requires systems that simultaneously capture complex associations among multiple medical concepts, integrate multi-source knowledge systematically, and verify retrieved information to ensure contextual accuracy and relevance.

**Present work.** We introduce KGAREVION, a KG-based LLM agent designed for complex biomedical QA that integrates the non-codified knowledge of LLMs with the structured, codified knowledge found in KGs. As illustrated in Fig. 2, KGAREVION executes four key actions to ensure accurate and context-aware biomedical reasoning. First, it prompts the LLM to generate relevant triplets based on the input question. To effectively leverage structured KG data, KGAREVION fine-tunes the LLM on a KG completion task, incorporating pre-trained structural embeddings of triplets as prefix tokens. The fine-tuned model then evaluates the correctness of the generated triplets.
Next, KGAREVION performs a 'Revise' action to correct erroneous triplets, refining the knowledge base before selecting the final answer. Given the complexity of medical reasoning, KGAREVION adaptively chooses the most appropriate reasoning approach for each question, allowing for nuanced and context-aware QA. This flexibility enables KGAREVION to handle both _multi-choice_ and _open-ended_ QA tasks. Our key contributions include: (1) Developing KGAREVION, a versatile KG agent that dynamically adjusts reasoning strategies, achieving a 6.75% improvement over 15 baseline models on seven datasets, including three challenging newly curated benchmarks. (2) Demonstrating that grounding through generated triplets significantly enhances KGAREVION's capabilities across multiple KGs. (3) Showing that KGAREVION effectively answers complex, knowledge-intensive medical queries in both multi-choice and open-ended QA formats. (4) Evaluating KGAREVION on African healthcare datasets: we benchmark KGAREVION on AfriMed-QA, a newly introduced dataset focused on African healthcare. The results highlight KGAREVION's strong zero-shot generalization, demonstrating its ability to reason effectively in underrepresented medical contexts. (5) Analyzing robustness to input variations: we analyze KGAREVION's sensitivity to changes in question structure, answer ordering, and answer relabeling. Unlike LLMs, which exhibit high variance when answer choices are reordered, KGAREVION maintains stable performance, demonstrating its stronger robustness in real-world settings. KGAREVION is available at https://github.com/mims-harvard/KGARevion.

2 RELATED WORK

**LLM-based reasoning.** General-purpose LLMs (GPT (OpenAI, 2024), the LLaMA family (Dubey et al., 2024; Touvron et al., 2023), Mistral (Jiang et al., 2023)) and LLMs fine-tuned on biomedical data (BioMedLM (Venigalla et al., 2022), Codex (Liévin et al., 2024), MedAlpaca (Han et al., 2023), Med-PaLM (Singhal et al., 2023), PMC-LLaMA (Wu et al., 2024a)) leverage their vast embedded knowledge to perform medical reasoning. Some models enhance reasoning by decomposing complex queries into sub-tasks, solving them step by step using structured prompts, as seen in Chain-of-Thought (CoT) (Wei et al., 2024) and CODEX CoT (Gramopadhye et al., 2024). However, these methods struggle with knowledge-intensive medical queries that require multi-source, domain-specific knowledge, leading to gaps in accuracy and completeness.

**RAG-based models.** Self-RAG (Asai et al., 2024) is a framework that enhances LLM performance through retrieval and self-reflection. LLM-AMT (Wang et al., 2023b) improves medical question answering by integrating authoritative medical textbooks into LLMs with specialized knowledge retrieval and self-refinement techniques. Adaptive-RAG (Jeong et al., 2024) introduces a dynamic RAG framework that adapts retrieval strategies based on the complexity of the questions. However, its performance is restricted by the quality of the knowledge retrieved (Zhang et al., 2024).

**KG-based models.** Models such as QAGNN (Yasunaga et al., 2021), JointLK (Sun et al., 2022), and Dragon (Yasunaga et al., 2022) handle medical questions solely using KGs in an end-to-end manner. However, these methods cannot be easily applied to questions involving unseen nodes or incomplete knowledge within the graphs.
In addition, structured KGs have driven research toward graph-based RAG models, motivating models such as GraphRAG (Edge et al., 2024), KG-RAG (Soman et al., 2023), and MedGraphRAG (Wu et al., 2024b). KG-Rank (Yang et al., 2024) ranks the retrieved triplets and filters out irrelevant knowledge to improve the accuracy of the search. Additionally, GenGround (Shi et al., 2024) uses a Generate-then-Ground pipeline that grounds answers by prompting LLMs to validate retrieved knowledge. However, these approaches rely heavily on semantic dependencies, overlooking the rich structural information within the KGs.

3 KGAREVION AGENT

Given a set of biomedical questions $Q$, each question consists of a question stem $q$ and a set of candidate answers $C$. For example, in Fig. 2a, the sample question has the stem $q$ = "Which gene interacts with the Heat Shock Protein 70 family that acts as a molecular chaperone and is implicated in Retinitis Pigmentosa 59 due to DHDDS mutation?" along with a set of semantically related candidate answers $C$ = {HSPA4, HSPA8, HSPA1B, HSPA1A}. The goal is to identify the correct answer using both an LLM (denoted as $P$) and a knowledge graph (KG, denoted as $G$). Here, a KG is represented as a set of triplets $G = \{(h, r, t)\}$, where each triplet consists of a head entity ($h$), a relation ($r$), and a tail entity ($t$). Table F.2 provides a summary of the notation. We consider both multiple-choice and open-ended reasoning settings (see Results).

KGAREVION is an LLM-powered agent (Wu et al., 2023; Li et al., 2023) that defines agentic actions (Schick et al., 2023; Shen et al., 2023; Nakano et al., 2021) to collaboratively perform complex tasks (Tang et al., 2023; Bran et al., 2023; Boiko et al., 2023). Fig. 2 provides an overview of KGAREVION, which operates through four key actions: Generate (§3.1; generates triplets relevant to the input question), Review (§3.2; assesses the correctness of each generated triplet), Revise (§3.3; corrects any triplet identified as incorrect), and Answer (§3.3; produces the final answer based on the triplets verified by the Review action). This structured reasoning process allows KGAREVION to integrate both LLM-generated knowledge and structured KG-based validation, improving accuracy and robustness in knowledge-intensive medical QA.

Figure 2: **a)** Overview of the KGAREVION agent on the sample question above, showing the Generate, Review, Revise, and Answer actions applied to candidate triplets such as (HSPA8, interacts, DHDDS). **b)** Overview of fine-tuning in the Review action.

3.1 GENERATE ACTION

The Generate action aims to gather comprehensive structured knowledge from input questions. Specifically, this action first identifies all medical concepts involved in the input question stem $q$ and then generates a set of triplets $T$ related to the question based on the extracted medical concepts. Depending on the content of the answer candidates, input questions can be broadly categorized into two types: choice-aware and non-choice-aware. The answer candidates in the choice-aware group have specific contents, whereas the ones in the non-choice-aware group only contain yes-or-no options (as shown in Appendix Table 4). These different types of questions require distinct reasoning processes: choice-aware questions involve analyzing the content of each answer candidate, while non-choice-aware questions only require focusing on the question stem. To handle this, this action prompts the LLM (Ouyang et al., 2022; Wang et al., 2023a) to follow different procedures for generating relevant triplets according to the input question type:

- For choice-aware questions, the Generate action generates triplets based on the contents of each answer candidate and the extracted medical concepts in the question stem $q$;
- For non-choice-aware questions, the Generate action directly generates triplets based on the medical concepts presented in the question stem $q$.

The rationale behind this design is that LLMs have inherent biases in their knowledge, often generating more detailed information on familiar topics compared to less familiar ones when all answer candidates are presented simultaneously (Dai et al., 2024). Additionally, this approach helps reduce the impact of the order in which the answer candidates are presented. The process of the Generate action can be formulated as:

$$T = \begin{cases} \{P(q, a_i)\},\ 1 \le i \le |C|, & \text{if } C \not\subseteq \{\text{Yes}, \text{No}, \text{Maybe}\} \\ P(q), & \text{if } C \subseteq \{\text{Yes}, \text{No}, \text{Maybe}\} \end{cases} \qquad (1)$$

where $a_i$ denotes a candidate answer in $C$, and the LLM $P(\cdot)$ is prompted to extract triplets from the medical concepts involved in its input.
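A sketch of the routing in Eq. (1), where `llm` stands in for the prompted model $P(\cdot)$; the prompt strings and the function name are illustrative, not the paper's actual prompts.

```python
def generate_action(llm, q, candidates):
    """Generate action routing (Eq. 1): choice-aware questions get one
    triplet-extraction call per candidate answer; yes/no/maybe questions
    get a single call on the stem. `llm` is a hypothetical callable that
    returns a list of (head, relation, tail) triplets."""
    if set(candidates) <= {"Yes", "No", "Maybe"}:   # non-choice-aware: P(q)
        return llm(f"Extract medical-concept triplets from: {q}")
    triplets = []                                   # choice-aware: {P(q, a_i)}
    for a in candidates:
        triplets += llm(f"Extract medical-concept triplets from: {q} Candidate: {a}")
    return triplets
```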
3.2 REVIEW ACTION

To enable LLMs to accurately judge the correctness of generated triplets, beyond relying solely on semantic dependencies inferred by LLMs (Shinn et al., 2023), the Review action also leverages the connections and relationships among various medical concepts contained in KGs. This is achieved by fine-tuning the LLM on a KG completion task, explicitly integrating entity embeddings learned from KGs into the LLM. The Review action is then performed by the fine-tuned LLM to assess the correctness of triplets generated by the Generate action, as shown in Fig. 2.

3.2.1 FINE-TUNING STAGE

**Generating KG embeddings and triplet descriptions.** We use the well-known KG representation learning method TransE (Bordes et al., 2013) to learn structural embeddings for both entities and relations in $G$. For a triplet $(h, r, t) \in G$, the learned pre-trained embeddings are denoted as $\mathbf{e}_h \in \mathbb{R}^d$, $\mathbf{e}_r \in \mathbb{R}^d$, and $\mathbf{e}_t \in \mathbb{R}^d$, where $d$ represents the embedding dimension. These embeddings are kept fixed during LLM fine-tuning. In addition, we instruct the LLM to generate a description template for each relation $r \in G$. We store these descriptions in a dictionary $D(\cdot)$, which can be found in Appendix Table 8.

**Aligning embeddings.** Since the embeddings in LLMs are based on token vocabularies (Radford et al., 2019), LLMs cannot directly interpret pre-trained structural embeddings, as the latter lack semantic meaning. To make use of the pre-trained structural embeddings, we align them with the corresponding descriptions to generate new embeddings for the input triplet. Specifically, given the description $D(r)$ for the input triplet $(h, r, t)$, we denote the embedding of $D(r)$ obtained from the LLM as $\mathbf{X} \in \mathbb{R}^{|l| \times d_L}$, where $|l|$ is the maximum number of tokens and $d_L$ is the embedding dimension in the LLM. Next, we concatenate the embeddings of the head entity, the relation, and the tail entity, denoted as $\mathbf{V} = [g(\mathbf{e}_h); g(\mathbf{e}_r); g(\mathbf{e}_t)] \in \mathbb{R}^{3 \times d_L}$, where $g(\cdot): \mathbb{R}^d \to \mathbb{R}^{d_L}$. We then apply an attention block (Vaswani, 2017), followed by a two-layer feed-forward network (FFN, as shown in Fig. 2b), to obtain the aligned triplet embedding matrix $\mathbf{Z} \in \mathbb{R}^{3 \times d_L}$ as follows:

$$\widetilde{\mathbf{V}} = \mathbf{V} + \sigma(\mathbf{V}\mathbf{X}^{\top})\mathbf{X}, \qquad (2)$$

$$\mathbf{Z} = \widetilde{\mathbf{V}} + (\varphi(\widetilde{\mathbf{V}})\mathbf{W}_1)\mathbf{W}_2, \qquad (3)$$

where $\sigma(\cdot)$ is the softmax function, $\varphi(\cdot)$ denotes layer normalization, $\mathbf{W}_1 \in \mathbb{R}^{d_L \times d_h}$ and $\mathbf{W}_2 \in \mathbb{R}^{d_h \times d_L}$ are trainable parameters in the FFN, and $d_h$ is the dimension of the hidden layer in the FFN.

**Fine-tuning the LLM.** After obtaining the aligned embedding $\mathbf{Z}$, we add it to the beginning of the instruction and fine-tune the LLM using LoRA (Hu et al., 2022) with the next-token prediction loss (Radford, 2018). The instruction is: _'Given a triple from a knowledge graph. Each triple consists of a head entity, a relation, and a tail entity. Please determine the correctness of the triple and response with True or False.'_ The output should be either True or False.

3.2.2 INFERENCE STAGE

The fine-tuned LLM is then integrated into the Review action to check the accuracy of each triplet in $T$, which was generated by the Generate action (§3.1).
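The alignment block of Eqs. (2)-(3) is small enough to sketch directly in NumPy; shapes follow the text ($\mathbf{V} \in \mathbb{R}^{3 \times d_L}$, $\mathbf{X} \in \mathbb{R}^{|l| \times d_L}$), and, as written in Eq. (3), no nonlinearity is applied between the two FFN weight matrices. The function names are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def align_triplet(V, X, W1, W2):
    """Eqs. (2)-(3): cross-attend the projected triplet embeddings V (3, d_L)
    to the relation-description token embeddings X (|l|, d_L), then apply a
    two-layer FFN with a residual connection. W1: (d_L, d_h); W2: (d_h, d_L)."""
    V_tilde = V + softmax(V @ X.T) @ X              # Eq. (2): attention + residual
    Z = V_tilde + (layer_norm(V_tilde) @ W1) @ W2   # Eq. (3): FFN + residual
    return Z                                        # (3, d_L) aligned prefix embeddings
```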
Specifically, we first use UMLS codes (Bodenreider, 2004) to map entities in the KG and obtain pre-trained structural embeddings for the head entity, relation, and tail entity, respectively. These embeddings, along with their descriptions and instructions, are fed into the fine-tuned LLM to determine whether the triplet is correct. However, not all entities in a generated triplet $(h, r, t) \in T$ can be mapped to entities in KGs. To address this, the Review action applies a soft constraint rule to distinguish whether a generated triplet is factually wrong or the result of incomplete knowledge in KGs, as follows:

- Factually wrong: if we can map $h$ and $t$ to entities in KGs and the output of the fine-tuned LLM is False, then the triplet $(h, r, t)$ is factually wrong and is removed from $T$.
- Incomplete knowledge: if we cannot map either $h$ or $t$ to entities in KGs, then the triplet $(h, r, t)$ is considered incomplete knowledge and is kept.

In this way, the triplets in $T$ can be grouped into two categories, i.e., the True triplet set $V$ and the False triplet set $F$, where $T = V \cup F$ and $V \cap F = \emptyset$.

3.3 REVISE AND ANSWER ACTIONS

If $F$ contains triplets, KGAREVION calls the Revise action to adjust the triplets in $F$ so that they cover additional medical concepts that help answer the input question. The newly revised triplets are then checked by the Review action to make sure that they are correct and related to the input question. If the Review action outputs "True", the revised triplets are added to the set of True triplets $V$. Otherwise, KGAREVION continues to call the Revise action until the max round $k$ ($k \ge 1$) is reached.

Idea Generation Category:
0Conceptual Integration
tnB94WQGrn
# WHAT'S THE MOVE? HYBRID IMITATION LEARNING VIA SALIENT POINTS

**Priya Sundaresan**∗¹, **Hengyuan Hu**∗¹, **Quan Vuong**², **Jeannette Bohg**¹, **Dorsa Sadigh**¹

∗ Equal contribution. 1 Stanford University, 2 Physical Intelligence

ABSTRACT

While imitation learning (IL) offers a promising framework for teaching robots various behaviors, learning complex tasks remains challenging. Existing IL policies struggle to generalize effectively across visual and spatial variations even for simple tasks. In this work, we introduce **SPHINX** (Salient Point-Based Hybrid ImitatioN and eXecution), a flexible IL policy that leverages multimodal observations (point clouds and wrist images), along with a hybrid action space of low-frequency, sparse waypoints and high-frequency, dense end-effector movements. Given 3D point cloud observations, **SPHINX** learns to infer task-relevant points within a point cloud, or _salient points_, which support spatial generalization by focusing on semantically meaningful features. These salient points serve as anchor points to predict waypoints for long-range movement, such as reaching target poses in free space. Once near a salient point, **SPHINX** learns to switch to predicting dense end-effector movements given close-up wrist images for precise phases of a task. By exploiting the strengths of different input modalities and action representations for different manipulation phases, **SPHINX** tackles complex tasks in a sample-efficient, generalizable manner. Our method achieves **86.7%** success across 4 real-world and 2 simulated tasks, outperforming the next best state-of-the-art IL baseline by **41.1%** on average across **440** real-world trials. **SPHINX** additionally generalizes to novel viewpoints, visual distractors, spatial arrangements, and execution speeds with a **1.7×** speedup over the most competitive baseline. Our website contains code for data collection and training, along with supplementary videos: http://sphinx-manip.github.io.

1 INTRODUCTION

Imitation learning (IL) of visuomotor policies is a widely used framework for teaching robots manipulation tasks given demonstrations collected by humans (Schaal, 1996). While prior works have shown that IL policies can learn a range of behaviors with sufficient data, from simple object pick-and-place to more complex tasks, they typically succeed only in highly controlled settings with low variation. Generalizing to realistic visual and spatial variations remains a significant challenge.

Consider teaching a robot to make a cup of coffee in the morning, which demands precision, long-horizon reasoning, and tolerance to environment variations. The robot must first carefully grasp a mug handle, position it under the machine, insert a pod into a narrow slot, close the lid, and press a button, all with very little margin for error (Fig. 1). Even after mastering this sequence, the policy might struggle with _spatial_ changes like moving the machine, or _visual_ changes such as new coffee pods, spilled grounds, a different camera angle, or varying lighting conditions. This underscores the need for IL policies that can learn complex tasks from a limited number of demonstrations while effectively generalizing to natural and expected variations in the real world.
Conventional IL policies often struggle with both performance and generalization, largely due to limitations in their input and output representations. First, they tend to rely heavily on visual inputs like RGB images, treating irrelevant details like the background, lighting, or viewpoint the same as task-relevant information. This can cause a policy to memorize specific scenes, making it brittle to _visual_ variations (Zhao et al., 2023; Chi et al., 2023). Second, these policies usually predict actions Correspondence to _{_ priyasun@stanford.edu, hengyuan.hhu@gmail.com _}_ 1 Figure 1: **S** **PHINX** is a hybrid IL agent which learns to switch amongst different modes ( _m_ _t_ ) of execution to tackle complex tasks with visuospatial generalization. In _waypoint_ mode, _π_ [waypt] takes a point cloud as input, and predicts a single _waypoint w_ _t_ as an offset ( _ϕ_ _t_ ) to a task-relevant _salient point z_ _t_ (i.e. mug handle, coffee pod, etc. denoted). After reaching a waypoint via a controller, the policy uses learned switching to a dense policy _π_ [dense], which takes wrist-camera images as input and outputs _dense_ actions ( _a_ _t_ ) for precise manipulation around a salient point. On the right, the policy interleaves both modes of execution to complete a long-horizon coffee-making task guided by salient points (●) and mode switches (■). for the next immediate timestep, which hampers _spatial_ reasoning. Simple spatial movements, like reaching, are predicted through hundreds of end-effector actions, increasing the risk of veering off course. To address these limitations, recent works explore 3D scene representations, such as point clouds and voxel grids, to offer better spatial awareness, and propose predicting actions as endeffector poses ( _waypoints_ ) reachable through a controller or motion planner (Goyal et al., 2023; Sundaresan et al., 2023; Yang et al., 2024a;b; Shridhar et al., 2023). This can drastically shorten the action prediction horizon and enable better spatial generalization. However, these methods often lack precision, as point clouds typically lack the necessary resolution to capture small object details preserved in images. On the other hand, recent image-based IL policies attempt to remedy spatial generalization using _hybrid action spaces_ (Belkhale et al., 2023; Shi et al., 2023). Here, the policy has two potential _modes_ of execution: _waypoints_ for long-range motions like reaching, or _dense_ actions — end-effector movements predicted per-timestep — only when precision or reactivity is required. These works ultimately consider training policies with a single input modality type, and optionally hybrid actions. In reality, different phases of a task may lend themselves more favorably to different visual inputs or action modes. A policy which can effectively choose and interchange both the input modality and the underlying action representation during execution remains underexplored. Our key insight is that by encouraging the policy to attend to _salient points_ —task-relevant 3D points—we enable it to choose between different input modalities as well as different action modes when appropriate, improving performance and generalization. Fig. 1 illustrates example salient points in the coffee-making task in red (i.e. the mug handle, pod, etc.). 
We introduce **S** **PHINX** : _Salient Point-based Hybrid ImitatioN and eXecution_, a hybrid IL agent which learns to switch amongst a _waypoint policy_ which predicts waypoint actions given point clouds, and a _dense pol-_ _icy_ which predicts dense actions given close-up wrist-camera images. Specifically, the waypoint policy manages long-range movements by first predicting salient points that narrow the search space of actions around spatially relevant features, promoting _spatial_ generalization. It then predicts waypoint actions relative to these points. After reaching a waypoint, **S** **PHINX** switches to a dense policy which takes wrist-camera images as input. This policy captures close-up object details for precise manipulation and supports _visual_ generalization by staying agnostic to broader scene changes. To support training **S** **PHINX**, we develop a flexible data collection interface that allows demonstrators to specify salient points and switch modes in real-time during teleoperation. Empirically, we show that **S** **PHINX** can tackle a range of precise, long-horizon manipulation tasks, including four real-world scenarios (drawer-opening, cup-stacking, coffee-making, toy train assembly) and two simulated ones. **S** **PHINX** achieves **86.7%** success and outperforms the next best IL baseline by **41.1%** on average, while generalizing better to visual distractors, viewpoints, spatial arrangements, and execution speeds. We open-source our web-based data collection interface for specifying salient points and hybrid teleoperation, alongside code, supplementary material, and videos [at: http://sphinx-manip.github.io.](http://sphinx-manip.github.io) 2 2 R ELATED W ORK **Imitation Learning for Robotics Control:** Imitation learning has long been a foundational approach in robotics for teaching robots to replicate human demonstrations (Schaal, 1996; Atkeson & Schaal, 1997; Pomerleau, 1988). Robotic imitation learning policies typically take images as input and output motor commands, such as joint positions, velocities, or Cartesian end-effector poses. Recent works of that type (Reuss et al., 2023; Chi et al., 2023; Zhao et al., 2023) have demonstrated strong performance on tasks in controlled settings with a limited initial state distribution. However, they struggle to generalize to unseen visual or spatial variations. To address visual generalization, some works augment vision-based policies with diffusion-generated image observations (Yu et al., 2023; Bharadhwaj et al., 2024). While useful and complementary to our approach, these augmentations do not directly enable spatial generalization. Other works propose replacing image inputs with 3D scene representations such as point clouds and voxel grids, and outputting actions as _way-_ _points_, 6-DoF poses reachable through motion planning (Sundaresan et al., 2023; Shridhar et al., 2023; Yang et al., 2024b). While this reduces the complexity of action prediction from hundreds of actions to a single pose, 3D representations such as point clouds often lack the resolution to enable precise manipulation of small objects. Other recent approaches like HYDRA (Belkhale et al., 2023) and AWE (Shi et al., 2023) take image inputs but propose a hybrid output action space of _waypoints_ and _dense_ actions. These distinct action modes are intended for long-range and precise movements, respectively. 
Our method builds on these approaches by leveraging _salient points_ to bridge a hybrid input space of point clouds and wrist-camera images, and a hybrid output action space of waypoints and dense actions. **Action Representations:** Most visual imitation learning works rely on standard 6-DoF action spaces, but recent efforts explore alternatives for better spatial generalization. One approach involves predicting actions as parameterized manipulation primitives instead of low-level end-effector movements. This reduces the dimensionality of the action space and improves sample efficiency, but often requires task-specific engineering (Dalal et al., 2021; Sundaresan et al., 2023; Nasiriany et al., 2022; Agia et al., 2022). Other methods exploit equivariance, ensuring that transformations of visual inputs (e.g., rotations or scaling) are reflected in output actions (Wang et al., 2024; Yang et al., 2024a;b). However, these works often rely on limiting assumptions like access to object states via segmentation, or single-object tasks. In the grasping domain, many policies consider point clouds as inputs and an output action space defined as per-point predictions for the end-effector pose. This has proven effective for learning sample-efficient and generalizable grasping policies (Saxena et al., 2008; Sundermeyer et al., 2021). Inspired by this, our method also parameterizes waypoint actions as offsets to salient points in a point cloud, but we critically learn a _hybrid_ policy which predicts _both_ waypoint and dense actions to tackle longer-horizon and precise tasks beyond grasping. **Data Collection for Imitation Learning:** Despite the advancements in action representations and spatial generalization, the success of visual imitation learning policies still hinges on the quality of teleoperated demonstrations. Human operators typically collect robot data using interfaces like virtual reality controllers (Jedrzej Orbik, 2021), handheld devices (Chi et al., 2024), puppeteering setups (Zhao et al., 2023), or 3D mice (e.g., Spacemouse). However, these interfaces map demonstrator controls directly to robot actions on a _per-timestep_ basis, which presents two key limitations. First, the recorded data only captures (observation, dense action) pairs, lacking compatibility with waypoint actions or useful metadata such as salient points. Second, directly controlling long-range movements can be inefficient, noisy, and tiring for demonstrators. To address these issues, we design an interface (Fig. 2) that seamlessly integrates both waypoint and dense action modes. A custom web-based GUI supports waypoint mode, allowing demonstrators to specify salient points and waypoints with the ease of simple clicks and drags. A controller can then reach the specified waypoint automatically, removing the need for constant teleoperation from the demonstrator. Additionally, the interface is compatible with any external device for dense actions, allowing for easy switching between the computer mouse and the device, as long as it is on hand. This provides a flexible and efficient data collection process for high-quality hybrid datasets, with no post-hoc labeling required. 3 P ROBLEM S TATEMENT In standard IL, we are given a dataset _D_ of _N_ trajectories of expert demonstrations _{τ_ 1 _, . . ., τ_ _N_ _}_ . Each trajectory is a sequence of observation action pairs ( _o_ 0 _, a_ 0 _, . . ., o_ _T_ _, a_ _T_ ). 
The goal is to learn a policy π(a_t|o_t) that matches the expert distribution by minimizing the loss −E_{τ∼D} log π(a_t|o_t). However, this formulation can easily lead to compounding errors for long-horizon tasks where episodes may span hundreds of steps. In this paper, we instead consider a _hybrid_ imitation learning setting where the policy can either output a **dense action** a_t ∈ R^7, a short-range end-effector pose reachable by t+1, or a **waypoint** w_t ∈ R^7, a long-range end-effector pose reachable by a series of interpolated movements. Both waypoint and dense actions capture the end-effector pose, but a waypoint action w_{t′} specified at timestep t′ is translated into a sequence of k_{t′} interpolated actions {a_{t′}, ..., a_{t′+k_{t′}−1}} by a controller a_t = C(o^pose_t, w_{t′}) based on the current pose o^pose_t and the target waypoint w_{t′} specified at t′. In practice, we use a simple controller that linearly interpolates between the current pose and the target waypoint. We can then record a timestep in the dataset as (o_t, a_t, m_t, [w_{t′}]), where m_t ∈ {waypt, dense, terminate} is the mode and w_{t′} is the optional target waypoint, present only when m_t = waypt. Each w_{t′} spans the next k_{t′} steps decided by the controller. Our goal is thus to learn a hybrid policy π(o_t) that first predicts a mode p(m_t|o_t) and then predicts either a waypoint from π^waypt(w_t|o_t) or a dense action from π^dense(a_t|o_t).

4 SPHINX: SALIENT POINT-BASED HYBRID IMITATION AND EXECUTION

We introduce SPHINX: Salient Point-based Hybrid ImitatioN and eXecution, a framework for learning sample-efficient, generalizable imitation policies capable of handling complex, long-horizon manipulation tasks across diverse initial conditions. SPHINX combines a high-level waypoint policy π^waypt for long-range movements and a dense policy π^dense for precise manipulation (Fig. 1). The waypoint policy takes point clouds as input, classifies semantically meaningful **salient points** z_t ∈ R^3, and regresses **waypoint actions** w_t ∈ R^7. Importantly, π^waypt predicts the positional component of w_t as an _offset_ to the salient point z_t. This grounds the desired interaction around a salient point, such as learning to reach for a mug by first identifying the handle. The dense policy π^dense takes over only for precise actions around a salient point, like carefully inserting a coffee pod into its slot (Fig. 1). Since the waypoint policy handles long-range movements, it uses point clouds to provide spatial context. The dense policy uses wrist-camera images as input, capturing detailed object features for precise manipulation and enabling visual generalization to variations in the surrounding scene. Both policies also predict the next mode m_{t+1} to decide which policy to use after completing the current movement. Without loss of generality, we initialize m_0 to waypoint mode. To train SPHINX, we first need to collect demonstrations using the two modes and annotate a salient point for each waypoint. In Section 4.1, we introduce an intuitive web GUI to easily collect such demonstrations in the hybrid format and record salient points with no additional overhead.
Then, we discuss how to learn _π_ [waypt] ( _w_ _t_ _|o_ _t_ ) and _π_ [dense] ( _a_ _t_ _|o_ _t_ ) in Section 4.2 and Section 4.3 respectively. Figure 2: **Data Collection Interface** : The demonstrator visualizes a point cloud _o_ [pcd] _t_ _[′]_ in a web GUI, where they can click a salient point _z_ _t_ _[′]_ [and specify a waypoint action] _[ w]_ _t_ _[′]_ [by clicking and dragging to rotate or translate] a digital twin of the gripper. After the controller _C_ reaches the waypoint to grasp the train, the process repeats for a waypoint above the bridge. The demonstrator then switches to providing dense actions _a_ _t_ with a 3D SpaceMouse to carefully place the train on the bridge and tilt it, causing the train to roll. 4 4.1 D ATA C OLLECTION I NTERFACE FOR **S** **PHINX** Without an existing interface that satisfies our need, we design a data collection system to support waypoint specification, salient point annotation, and mode switching seamlessly. Our hardware setup includes two third-person cameras to provide RGB-D observations to construct a colorized point cloud _o_ [pcd] _t_, and one wrist-mounted camera to provide RGB wrist images _o_ [wrist] _t_ . We develop a custom web-based GUI for specifying waypoints and salient points in waypoint mode. To provide dense actions instead, a demonstrator can seamlessly switch from the computer mouse to any dense teleoperation device like a VR/game console controller or a 3D mouse (Spacemouse) as in this work. The top row of Fig. 2 visualizes the web-based GUI and the process of recording a waypoint action. The GUI streams the point cloud of _N_ points _o_ [pcd] _t_ = _{c_ 1 _, c_ 2 _, . . ., c_ _N_ _}_ to the browser in _real_ _time_ and allows a demonstrator to select a salient point _z_ _t_ _∈{_ 1 _, . . ., N_ _}_ for each waypoint by clicking within the point cloud, (e.g. the red dot on the toy car next to the mouse cursor.) After clicking on the salient point, a digital twin of the gripper appears near the salient point to facilitate waypoint specification. The demonstrator can use click and drag interactions on the virtual gripper to set waypoints relative to these salient points. The salient point _∈_ R [3] specifies the _region_ of interest for interaction while the waypoint, a 7 DoF target end-effector pose, captures _how_ to interact with it. After specifying a waypoint, the linear controller _C_ defined above interpolates and executes actions to reach the waypoint. Critically, this removes the need for the demonstrator to manually teleoperate long-range movements. The entire waypoint motion is recorded as a sequence _{_ ( _o_ _t_ _, a_ _t_ _,_ waypt _, w_ _t_ _′_ _, z_ _t_ _′_ ) _}_ _t∈{t_ _′_ _,...,t_ _′_ + _k_ _t′_ _}_ where _t_ _[′]_ is the timestep when the waypoint is specified and _k_ _t_ _′_ is the number of steps that the controller takes to complete the waypoint _w_ _t_ _′_ . Once the controller reaches a waypoint, the demonstrator may specify another waypoint or switch to dense mode for precise manipulation. To take over with dense mode, the demonstrator simply operates the teleoperation device, such as a 6DoF joystick, and its movements are automatically detected and mapped to delta end-effector movements. This is illustrated by the bottom row of Fig. 2, where the operator uses the teleoperation device (Spacemouse in this case) to precisely align the toy car above the narrow bridge. Each step in dense mode is recorded as ( _o_ _t_ _, a_ _t_ _,_ dense). 
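The paper specifies only that the controller C linearly interpolates between the current pose and the target waypoint. Below is a minimal sketch under the assumption that a 7-D pose is an xyz position followed by an xyzw quaternion; the slerp for the rotational component and the `step_size` knob are our own choices, not details from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def waypoint_to_dense_actions(pose_now, waypoint, step_size=0.01):
    """Expand one waypoint into k interpolated dense actions, mimicking the
    linear controller a_t = C(o_pose_t, w_t').

    Poses are 7-D: xyz position followed by an xyzw quaternion. step_size
    (meters of translation per step) is an assumed tuning knob.
    """
    pose_now, waypoint = np.asarray(pose_now, float), np.asarray(waypoint, float)
    p0, p1 = pose_now[:3], waypoint[:3]
    k = max(int(np.ceil(np.linalg.norm(p1 - p0) / step_size)), 1)
    slerp = Slerp([0.0, 1.0], Rotation.from_quat([pose_now[3:], waypoint[3:]]))
    actions = []
    for alpha in np.linspace(1.0 / k, 1.0, k):
        pos = (1.0 - alpha) * p0 + alpha * p1      # linear in position
        quat = slerp([alpha]).as_quat()[0]         # slerp in rotation (our choice)
        actions.append(np.concatenate([pos, quat]))
    return actions  # {a_t', ..., a_{t'+k-1}}: short-range poses, one per step
```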
Note that regardless of the mode, we record the full set of observations o_t, which includes all camera views and proprioception, to facilitate data augmentation and compatibility with any IL policy.

4.2 THE WAYPOINT POLICY OF SPHINX

The waypoint policy π^waypt in SPHINX takes a point cloud o^pcd_t as input and outputs a 7-DoF end-effector pose w_t for the robot to reach via a controller. We utilize point clouds as input to cast part of the action prediction problem as learning a saliency map over the points. This encourages the policy to attend to the important spatial features (e.g., the handle of a mug) rather than memorize actions. The detailed design of the waypoint architecture is illustrated in Fig. 3. At a high level, we would like to have per-point predictions, such as the probability of a point being salient and the translational offset between the point and the target location of the end-effector, as well as predictions whose targets are not expressed relative to the points, such as rotation, gripper state, and mode. We use a transformer to process the points and add additional tokens for the point-agnostic predictions. We first use farthest-point sampling (FPS) (Qi et al., 2017) to downsample a raw point cloud to D = 1024 points, and then convert the points c_i ∈ R^6 to tokens e_i ∈ R^d via a shared linear projection layer. We then feed the entire set of tokens into a transformer (Vaswani et al., 2017; Radford et al., 2019) to get output embeddings. Since the points in a point cloud are unordered, the transformer has no positional embedding and does not use a causal mask. We pass each point embedding through a shared linear layer to get two predictions per point: the probability p̂_i of the point being a salient point, and the offset φ̂_i = (x_i, y_i, z_i) between the point position and the target waypoint position, illustrated by the middle "Prediction" panel of Fig. 3. Instead of using a hard one-hot target for salient point prediction, we construct a soft salient map over the points, where the probability of each point is given by

p_i ∝ r − ∥c_i − c_k∥_2 if ∥c_i − c_k∥_2 ≤ r, else p_i = 0.  (1)

Here, k is the index of the point selected by the user and r is a hyperparameter defining a neighborhood of points that are salient. Within this radius, the probability of saliency decreases with distance; a sketch of this target construction follows.
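A small sketch of the soft salient-map target in Eq. (1), together with a per-point cross-entropy loss against the predicted logits. Where the extracted equation is ambiguous, we follow the surrounding prose and let the probability decay linearly with distance inside the radius; the value of `r` is an assumed placeholder.

```python
import numpy as np

def soft_salient_map(points, k, r=0.05):
    """Soft salient-map target over a point cloud (Eq. 1).

    points: (D, 3) array of point positions; k: index of the point clicked
    by the demonstrator; r: saliency radius (value here is an assumption).
    Probability decays linearly with distance within r, zero outside.
    """
    d = np.linalg.norm(points - points[k], axis=1)
    weights = np.where(d <= r, r - d, 0.0)
    return weights / weights.sum()  # normalized soft label p_i

def saliency_loss(pred_logits, p_target):
    """Cross-entropy between predicted per-point logits and the soft map."""
    log_q = pred_logits - np.logaddexp.reduce(pred_logits)  # log-softmax
    return -(p_target * log_q).sum()
```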
Idea Generation Category: Conceptual Integration (class 0)
r0pLGGcuY6
## L IMITS OF D EEP L EARNING : S EQUENCE M ODELING THROUGH THE L ENS OF C OMPLEXITY T HEORY **Nikola Zubi´c** [1] **, Federico Soldá** [2] _[,][∗]_ **, Aurelio Sulser** [2] _[,][∗]_ **, Davide Scaramuzza** [1] 1 Robotics and Perception Group, University of Zurich, Switzerland [zubic@ifi.uzh.ch, sdavide@ifi.uzh.ch](mailto:zubic@ifi.uzh.ch) 2 Algorithms and Optimization Group, ETH Zurich, Switzerland [federico.solda@inf.ethz.ch, asulser@student.ethz.ch](mailto:federico.solda@inf.ethz.ch) - Equal contribution A BSTRACT Despite their successes, deep learning models struggle with tasks requiring complex reasoning and function composition. We present a theoretical and empirical investigation into the limitations of Structured State Space Models (SSMs) and Transformers in such tasks. We prove that one-layer SSMs cannot efficiently perform function composition over large domains without impractically large state sizes, and even with Chain-of-Thought prompting, they require a number of steps that scale unfavorably with the complexity of the function composition. Also, the language of a finite-precision SSM is within the class of regular languages. Our experiments corroborate these theoretical findings. Evaluating models on tasks including various function composition settings, multi-digit multiplication, dynamic programming, and Einstein’s puzzle, we find significant performance degradation even with advanced prompting techniques. Models often resort to shortcuts, leading to compounding errors. These findings highlight fundamental barriers within current deep learning architectures rooted in their computational capacities. We underscore the need for innovative solutions to transcend these constraints and achieve reliable multi-step reasoning and compositional task-solving, which is critical for advancing toward general artificial intelligence. 1 I NTRODUCTION Deep learning has revolutionized numerous fields, achieving remarkable success in natural language processing (OpenAI, 2023; Google, 2024; Touvron et al., 2023), computer vision (Nguyen et al., 2022; Zubi´c et al., 2024; Zhu et al., 2024), scientific computing (Merchant et al., 2023; Hansen et al., 2023), and autonomous systems (Kaufmann et al., 2023; Bousmalis et al., 2024). The pursuit of general artificial intelligence now stands as the new frontier, aiming to develop Large Language Models (LLMs) capable of solving novel and complex tasks across diverse domains such as mathematics, coding, vision, medicine, law, and psychology, approaching human-level performance (Bubeck et al., 2023). Mastery of function composition is essential for this objective, as tasks like mathematical problem-solving (Li et al., 2023), learning discrete algorithms (Thomm et al., 2024; Veliˇckovi´c & Blundell, 2021), logical reasoning (Liu et al., 2023b), and dynamic programming (Dziri et al., 2023) are deeply compositional. However, despite impressive capabilities on various language tasks, deep learning models continue to struggle with tasks requiring complex reasoning over sequences, particularly those involving function composition and compositional reasoning (Peng et al., 2024; Dziri et al., 2023). These tasks necessitate breaking down problems into simpler sub-problems and composing the solutions to these subtasks. Current Transformer models (Vaswani et al., 2017), including advanced ones like GPT-4, find it challenging to handle tasks demanding deep compositionality (Dziri et al., 2023). 
For instance, we demonstrate that GPT-4 achieves only about 27% accuracy on basic tasks like 4-by-3-digit multiplication. One explanation for this limitation is the Transformer’s inability to express simple state-tracking problems (Merrill & Sabharwal, 2023a). Structured State Space Models 1 (SSMs) (Gu et al., 2022; Gu & Dao, 2023) have been introduced as an alternative to Transformers, aiming to achieve similar expressive power to Recurrent Neural Networks (RNNs) for handling problems that are naturally sequential and require state tracking. While SSMs have demonstrated impressive capabilities on various sequential tasks (Goel et al., 2022; Schiff et al., 2024), they exhibit similar limitations to Transformer models in solving function composition problems. For the same 4by-3-digit multiplication task, Jamba (Lieber et al., 2024), an SSM-Attention hybrid model, achieves only 17% accuracy. Existing research has experimentally confirmed the inability of Transformers to perform function composition and compositional tasks (Dziri et al., 2023; Zhao et al., 2024), leading to issues such as hallucinations—responses that are incompatible with training data and prompts. Complexity theory analysis further reveals that Transformers belong to a weak complexity class, logspace-uniform **TC** [0] (Merrill & Sabharwal, 2023a), as do SSMs (Merrill et al., 2024), emphasizing their inherent limitations. While the impossibility of function composition for Transformers has been theoretically studied (Peng et al., 2024), a similar theoretical understanding for SSMs remains lacking. In this paper, we address this gap with two main contributions: 1. We provide a theoretical framework using complexity theory to explain the limitations of SSMs in sequence modeling, particularly in their inability to perform function composition efficiently. We prove that one-layer SSMs cannot solve function composition problems over large domains without an impractically large state size (Theorem 1). Additionally, we show that even with Chain-of-Thought prompting, SSMs require a polynomially growing number of steps to solve iterated function composition problems (Theorem 2). 2. We extend our theoretical analysis to multi-layer SSMs, demonstrating that the computation of an _L_ -layer SSM on a prompt of length _N_ can be carried out using _O_ ( _L_ log _N_ ) bits of memory, positioning SSMs within the complexity class **L** (logarithmic space). This implies that SSMs cannot solve problems that are **NL** -complete unless **L** = **NL**, which is widely believed to be false (Peng et al., 2024). We further discuss that SSMs share this limitation with Transformers, highlighting a fundamental barrier in current deep learning architectures (Theorem 3). Our critical insight is the formal proof that SSMs cannot solve iterated function composition problems without a polynomially growing number of Chain-of-Thought steps (Theorems 1 and 2) and that even multi-layer finite-precision SSMs are limited to recognizing regular languages due to their essential equivalence to finite-state machines (Theorem 4). While CoT prompting can, to some extent, enable complex problem-solving by breaking down tasks into intermediate steps, it introduces a trade-off between the model’s state size and the number of input passes required, leading to increased resource demands, which is not optimal. 
These findings underscore the need for innovative solutions beyond current deep learning paradigms to achieve reliable multi-step reasoning and compositional task-solving in practical applications. 2 E QUIVALENCE OF SSM S WITH O THER D EEP L EARNING M ODELS Recent advancements in deep learning architectures have unveiled significant connections between SSMs and other prevalent models such as Linear Transformers. Notably, Dao & Gu (2024) have demonstrated equivalence between Linear Transformers and SSMs, indicating that the computational processes of these models are fundamentally related. Moreover, SSMs can be trained like Convolutional Neural Networks (CNNs) and inferred as Recurrent Neural Networks (RNNs), leveraging the benefits of both convolutional and recurrent architectures. This duality allows SSMs to efficiently capture long-range dependencies like RNNs while benefiting from the parallelism during training characteristic of CNNs. Additionally, Merrill et al. (2024) have shown that SSMs and Transformers belong to the same computational complexity class, specifically logspace-uniform **TC** [0] . This alignment in computational capacity reinforces the notion that the limitations observed in SSMs indicate inherent challenges within the broader landscape of deep learning models. Therefore, by focusing our theoretical and empirical analysis on SSMs, we effectively cover the representational capabilities of current deeplearning models, including Transformers and CNNs. This comprehensive coverage justifies our 2 exploration of the limits of deep learning in sequence modeling through the lens of complexity theory. Our findings highlight the specific shortcomings of SSMs and shed light on the fundamental constraints of deep learning architectures in handling tasks that require reliable multi-step reasoning and compositional task-solving. 3 B ACKGROUND For two natural numbers _n ≤_ _m_, we denote [ _n_ ] = 1 _,_ 2 _, . . ., n_ and [ _n, m_ ] = _n, n_ + 1 _, . . ., m_, with [0] = [ _n, n −_ 1] = _∅_ . We refer to the number of bits used in each computation as computational precision _p_ . Given two domains _B, C_, we denote by _C_ _[B]_ the set of all functions from _B_ to _C_ . **Definition 1** (SSM layer) **.** _Given an input sequence_ _**x**_ 1 _, . . .,_ _**x**_ _n_ _∈_ R _[m]_ _, an SSM layer_ _L_ _is defined in_ _terms of a series of matrices_ _**A**_ _t_ _∈_ R _[d][×][d]_ _,_ _**B**_ _t_ _∈_ R _[d][×][m]_ _,_ _**C**_ _t_ _∈_ R _[m][×][d]_ _, and_ _**D**_ _t_ _∈_ R _[m][×][m]_ _for_ _t ∈_ [ _n_ ] _._ _L defines a sequence of states_ _**h**_ 1 _, . . .,_ _**h**_ _n_ _∈_ R _[d]_ _as_ _**h**_ _t_ = _**A**_ _t_ _**h**_ _t−_ 1 + _**B**_ _t_ _**x**_ _t_ ; (1) _and outputs the sequence_ _**y**_ 1 _, . . .,_ _**y**_ _n_ _∈_ R _[m]_ _as_ _**y**_ _t_ = _**C**_ _t_ _**h**_ _t_ + _**D**_ _t_ _**x**_ _t_ _._ (2) Generally, the matrices _**A**_ _t_ = _**A**_ ( _**x**_ _t_ ), _**B**_ _t_ = _**B**_ ( _**x**_ _t_ ), _**C**_ _t_ = _**C**_ ( _**x**_ _t_ ), and _**D**_ _t_ = _**D**_ ( _**x**_ _t_ ) are functions of the input vector _**x**_ _t_ for each _t ∈_ [ _n_ ] . In the special case when _**A**_ _t_, _**B**_ _t_, _**C**_ _t_, and _**D**_ _t_ are independent from the input sequence _**x**_ 1 _, . . .,_ _**x**_ _n_, we call _L_ a _linear SSM layer_ . Moreover, we call _d_ the embedding dimension. 
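A direct transcription of Definition 1 as code may help fix ideas; the matrices are passed as callables of the input token, and constant callables recover a linear SSM layer. The dimensions below are arbitrary.

```python
import numpy as np

def ssm_layer(xs, A, B, C, D, d):
    """One SSM layer (Definition 1): h_t = A_t h_{t-1} + B_t x_t,
    y_t = C_t h_t + D_t x_t.

    xs: input vectors in R^m. A, B, C, D: callables mapping x_t to the
    (input-dependent) matrices; constant callables give a linear SSM layer.
    d: state (embedding) dimension.
    """
    h = np.zeros(d)
    ys = []
    for x in xs:
        h = A(x) @ h + B(x) @ x       # state update, Eq. (1)
        ys.append(C(x) @ h + D(x) @ x)  # output, Eq. (2)
    return ys, h

# A linear SSM layer: the matrices are independent of the input sequence.
m, d = 4, 8
rng = np.random.default_rng(0)
A_, B_, C_, D_ = (rng.normal(size=s) * 0.1 for s in [(d, d), (d, m), (m, d), (m, m)])
ys, h_final = ssm_layer([rng.normal(size=m) for _ in range(10)],
                        lambda x: A_, lambda x: B_, lambda x: C_, lambda x: D_, d)
```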
**Remark:** Although SSMs can be linked to streaming algorithms due to their limited hidden state, applying communication complexity to analyze their limitations in function composition involves intricate considerations unique to SSMs. No known streaming lower bound directly applies to our specific setting. Our analysis accounts for the particular architectural constraints of SSMs, providing a better understanding of their capabilities than general streaming algorithms. 4 F UNCTION C OMPOSITION R EQUIRES W IDE O NE -L AYER M ODELS Our analysis considers one-layer SSMs to establish fundamental limitations in function composition tasks. The insights gained at the single-layer level highlight critical challenges that persist even in deeper architectures. The function composition problem has been introduced in (Peng et al., 2024) to provide a theoretical understanding of the causes of the hallucination of Transformer models. The aim is to evaluate the model’s capability to combine relational information in the data to understand language, which is the core competence of large language models. Indeed to correctly answer questions like _’what is the birthday of Frédéric Chopin’s father?’_ given the information that _’the_ _father of Frédéric Chopin was Nicolas Chopin’_ and that _’Nicolas Chopin was born on April 15,_ _1771’_, the model needs to be able to compose the functions _’birthday-of’_ and _’father-of’_ (Peng et al., 2024), (Guan et al., 2024). Our analysis focuses on function compositions where the functions map elements from one finite, discrete domain to another, such as mapping individuals to their parents or birthdates. These functions operate over discrete sets, like persons and dates, and not over real-valued or continuous domains. Although this function composition task resembles a database join operation, it is important to note that our analysis focuses on how SSMs handle such compositions given natural language prompts. These prompts specify functions in an informal and potentially incomplete manner, lacking the full intensional knowledge present in formal database schemas. We aim to assess the model’s ability to perform reasoning over such natural language prompts despite their potential incompleteness. Next, we give a precise formulation of the _function composition problem_ due to (Peng et al., 2024). Consider two functions, _g_ mapping a domain _A_ to a domain _B_, and _f_ mapping _B_ to another domain _C_ . These functions will be described in a prompt _X_ . The _N_ tokens of _X_ are divided into four parts: 1. the zeroth part describes the argument _x ∈_ _A_, 2. the first part describes the function _g_ through _|A|_ sentences in simple, unambiguous language separated by punctuation, e.g., _’the father of Frédéric Chopin is Nicolas Chopin’_, 3 3. the second part consists of _|B|_ sentences describing the function _f_, e.g., _’the birthday of_ _Nicolas Chopin is April 15, 1771’_, 4. the third part is the query question asking for the value of _f_ ( _g_ ( _x_ )). In this section, we discuss the theoretical limitations of SSMs for solving the function composition problem. In our analysis, the concept of domain size is crucial. While we primarily consider discrete domains, such as finite sets like [ _n_ ] = _{_ 1 _,_ 2 _, . . ., n}_, it is important to discuss what domain size means in other contexts. 
For continuous domains like the interval [1, n], representing general functions would require infinitely many bits, making function composition intractable for models like SSMs and Transformers. Therefore, in practical settings, the maximum meaningful domain size is constrained by the total number of tokens and the prompt length, as the model's input capacity is limited. In our composition tasks, the functions are described within the prompt, so the prompt length effectively serves as an upper bound on the domain size.

**Theorem 1.** _Consider a function composition problem with input domain size |A| = |B| = n and an SSM layer L with embedding dimension d and computation precision p. Let R = n log n − (d² + d)p ≥ 0. Then the probability that L answers the query incorrectly is at least R/(3n log n) if f is sampled uniformly at random from C^B._

The proof is based on a reduction from a famous problem in communication complexity (Peng et al., 2024; Yao, 1979). Additional background on communication complexity and the relevant problem classes is given in Appendix A. We have three agents, dubbed Faye, Grace, and Xavier. We assume that the agents have unbounded computational capabilities, but the only communication allowed is from Faye and Grace to Xavier. Faye knows a function f : [n] → [n] and the argument x ∈ [n]; Grace knows a function g : [n] → [n] and the argument x; Xavier only knows the argument x ∈ [n]. The goal is for Xavier to compute the value of f(g(x)), minimizing the total number of bits communicated from Faye to Xavier and from Grace to Xavier. We restate a lemma from (Peng et al., 2024), which gives a hardness result for this problem.

**Lemma 1** (Lemma 1 from (Peng et al., 2024)). _Consider the problem described above: if fewer than n log n bits are communicated by Faye to Xavier, then Xavier cannot know the value f(g(x)). In particular, if only n log n − R bits are communicated for some R ≥ 0, then the probability that the composition is computed incorrectly is at least R/(3n log n) if f is sampled uniformly at random from C^B._

We now prove the theorem based on the lemma above.

_Proof of Theorem 1._ To establish the bound on q, we give a reduction from the communication problem above to the function composition problem. Let L be an SSM layer that can solve the function composition problem with probability q. Suppose we have Faye, Grace, and Xavier in the setting above, and Xavier wants to find the value f(g(x)). We construct the following prompt: the zeroth token x_0 is 'the argument of the function is x'; for i ∈ [1, n], let x_i be the token 'g applied to i is g(i)', where the information is provided by Grace; and for i ∈ [n+1, 2n], let x_i be the token string 'f applied to i is f(i)', where the information is provided by Faye. Xavier provides the last token string x_{2n+1} = 'what is the value of f(g(x))'. Since the SSM layer L can solve the composition task with probability q, we have

y_{2n+1} = C_{2n+1} h_{2n+1} + D_{2n+1} x_{2n+1} = f(g(x))  (3)

with probability q. But this allows us to construct the following communication protocol.
Since Grace knows g and the argument x, she knows the values of x_i for i ∈ [0, n], and she iteratively computes

h_i = A_i h_{i−1} + B_i x_i,  (4)

and then sends h_n to Xavier. On the other hand, Faye knows f and hence the values of x_i for i ∈ [n+1, 2n]; she computes the matrix

\bar{A} = \prod_{j=n+1}^{2n} A_j,  (5)

then the vector

b = \sum_{i=n+1}^{2n} \Big( \prod_{j=i+1}^{2n} A_j \Big) B_i x_i,  (6)

and she sends both to Xavier. At this point, Xavier computes

h_{2n+1} = A_{2n+1} (\bar{A} h_n + b) + B_{2n+1} x_{2n+1},  (7)

and finds the value of f(g(x)) with probability q by computing y_{2n+1} = C_{2n+1} h_{2n+1} + D_{2n+1} x_{2n+1}. The total number of bits communicated from Faye to Xavier is (d² + d) · p. By Lemma 1, it follows that q ≤ R/(3n log n). (A numerical sanity check of this protocol appears at the end of this section.)

Our theoretical results in Theorem 1 highlight that SSMs, like other deep neural networks, approximate functions rather than perform symbolic reasoning. Specifically, the probability bound indicates that if we attempt to compose functions over domains of size n with an SSM of embedding dimension d and computational precision p such that (d² + d)p < n log n / 2, the model will output an incorrect result with probability at least 1/6. To achieve a high probability of correctness (e.g., 99%), (d² + d)p must be significantly larger than n log n / 2. This establishes a strong lower bound on the model's width, demonstrating that to accurately perform function composition over large domains, the model's capacity must increase substantially. While Theorem 1 addresses the limitations of one-layer SSMs, a natural question arises: can deeper SSMs overcome these limitations? We conjecture that any SSM with a constant number of layers would still be unable to solve the iterated composition task (as formalized in our Chain-of-Thought Section 5). This is because accurately communicating token embeddings between layers becomes increasingly challenging as the depth grows. The difficulty in preserving and transmitting the necessary information across layers suggests that simply increasing the number of layers, without a corresponding increase in model capacity, does not address the fundamental limitations identified.

5 MANY THOUGHT STEPS ARE NEEDED

A chain of thought (CoT) is a series of intermediate natural language reasoning steps that lead to the final output. In this section, we focus on language models that can generate such a chain of thought: a coherent series of intermediate reasoning steps that lead to the final answer for a problem. Wei et al. (2022) observed that CoT can mitigate the issue of hallucinations by encouraging the LLM to generate prompts that break the task down into smaller steps, eventually leading to the correct answer. In this section, we prove that, in general, many CoT steps are needed to break down compositional tasks. We start the discussion with the formal definition of an SSM with k CoT steps, which adapts the definition for the Transformer model of Merrill & Sabharwal (2024) to the case of SSMs.

**Definition 2** (SSM with CoT). _Let φ : (R^m)^* → R^m be a function mapping a prefix of tokens to a new token._
_The function φ is parametrized by an SSM layer L. Given an input sequence x_1, x_2, ..., x_n ∈ R^m, we call_

φ_k(x_1, x_2, ..., x_n) = φ_{k−1}(x_1, x_2, ..., x_n) · φ(φ_{k−1}(x_1, x_2, ..., x_n), x_1, x_2, ..., x_n),

_where φ_1(x_1, x_2, ..., x_n) = φ(x_1, x_2, ..., x_n) and · denotes concatenation, the output of the SSM layer L with k CoT steps._

In this section, we want to prove that while this procedure can help SSM layers with compositional tasks, it may require many CoT steps to be effective. In particular, we focus on the iterated function composition problem and show a lower bound on the number of CoT steps needed by an SSM layer to solve this problem correctly. In the _iterated function composition_ problem we are given k functions f_1, f_2, ..., f_k : [n] → [n], and we need to calculate f_k(f_{k−1}(... f_2(f_1(x)) ...)) for x ∈ [n]. Here we restrict to the case f_1 = f_2 = ··· = f_k, define f^{(k)}(x) := f(f(... f(x))), and call this the _k-iterated function composition_ problem.
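Before moving on, the communication protocol from the proof of Theorem 1 can be sanity-checked numerically: Grace transmits h_n (d numbers), Faye transmits Ā (d² numbers) and b (d numbers), i.e., (d² + d)p bits at precision p, and Xavier recovers h_{2n+1} exactly. A sketch with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 5, 6, 3
xs = [rng.normal(size=m) for _ in range(2 * n + 2)]    # x_0 .. x_{2n+1}
As = [rng.normal(size=(d, d)) * 0.3 for _ in xs]
Bs = [rng.normal(size=(d, m)) for _ in xs]

def run(xs_s, As_s, Bs_s, h0):
    """Roll the recurrence h_i = A_i h_{i-1} + B_i x_i over a slice."""
    h = h0
    for x, A, B in zip(xs_s, As_s, Bs_s):
        h = A @ h + B @ x
    return h

# Direct computation of h_{2n+1} over the whole prompt.
h_direct = run(xs, As, Bs, np.zeros(d))

# Grace: processes x_0 .. x_n and sends h_n (d numbers), Eq. (4).
h_n = run(xs[: n + 1], As[: n + 1], Bs[: n + 1], np.zeros(d))

# Faye: processes x_{n+1} .. x_{2n}; sends Abar (d^2) and b (d), Eqs. (5)-(6).
Abar, b = np.eye(d), np.zeros(d)
for A, B, x in zip(As[n + 1 : 2 * n + 1], Bs[n + 1 : 2 * n + 1], xs[n + 1 : 2 * n + 1]):
    Abar = A @ Abar      # accumulates A_{2n} ... A_{n+1}
    b = A @ b + B @ x    # accumulates sum_i (prod_j A_j) B_i x_i

# Xavier: combines both messages with the final token, Eq. (7).
h_xavier = As[2 * n + 1] @ (Abar @ h_n + b) + Bs[2 * n + 1] @ xs[2 * n + 1]
assert np.allclose(h_direct, h_xavier)
```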
Idea Generation Category: Other (class 3)
DhdqML3FdM
# - S YNTHESIZING R EALISTIC F MRI: A P HYSIOLOGI CAL D YNAMICS -D RIVEN H IERARCHICAL D IFFUSION M ODEL FOR E FFICIENT F MRI A CQUISITION **Yufan Hu, Yu Jiang, Wuyang Li, Yixuan Yuan** _[∗]_ The Chinese University of Hong Kong yufanhu@link.cuhk.edu.hk, yujiang@cuhk.edu.hk, wymanbest@outlook.com, yxyuan@ee.cuhk.edu.hk A BSTRACT Functional magnetic resonance imaging (fMRI) is essential for mapping brain activity but faces challenges like lengthy acquisition time and sensitivity to patient movement, limiting its clinical and machine learning applications. While generative models such as diffusion models can synthesize fMRI signals to alleviate these issues, they often underperform due to neglecting the brain’s complex structural and dynamic properties. To address these limitations, we propose the Physiological Dynamics-Driven Hierarchical Diffusion Model, a novel framework integrating two key brain physiological properties into the diffusion process: brain hierarchical regional interactions and multifractal dynamics. To model complex interactions among brain regions, we construct hypergraphs based on the prior knowledge of brain functional parcellation reflected by resting-state functional connectivity (rsFC). This enables the aggregation of fMRI signals across multiple scales and generates hierarchical signals. Additionally, by incorporating the prediction of two key dynamics properties of fMRI—the multifractal spectrum and generalized Hurst exponent—our framework effectively guides the diffusion process, ensuring the preservation of the scale-invariant characteristics inherent in real fMRI data. Our framework employs progressive diffusion generation, with signals representing broader brain region information conditioning those that capture localized details, and unifies multiple inputs during denoising for balanced integration. Experiments demonstrate that our model generates physiologically realistic fMRI signals, potentially reducing acquisition time and enhancing data quality, benefiting clinical diagnostics and machine learning in neuroscience. Our [code is available at https://github.com/CUHK-AIM-Group/PDH-Diffusion.](https://github.com/CUHK-AIM-Group/PDH-Diffusion) 1 I NTRODUCTION Functional magnetic resonance imaging (fMRI) is a non-invasive neuroimaging technique that captures spatio-temporal patterns of blood oxygenation in the active brain (D’Esposito et al., 2003). Brain fMRI signals encapsulate the full spectrum of intrinsic functional networks and exhibit highly complex fluctuating patterns, providing very accurate information of neural activity (Strangman et al., 2002). Compared to other medical modalities, fMRI offers superior precision in predicting and diagnosing various neurological and psychiatric conditions (Matthews et al., 2006). Despite its utility, fMRI data also presents unique challenges, particularly during acquisition. A standard fMRI scan is highly time-consuming. For example, in the data collection process of Human Connectome Project (HCP), a resting-state fMRI scan lasts 60 minutes, divided into four 15-minute sessions (Van Essen et al., 2012). While necessary for capturing comprehensive brain activity, these long sessions can be physically taxing for participants. Additionally, fMRI scans impose strict requirements on patient stillness, which is particularly difficult for infants or those unable to stay still for long periods, as even minor movements can introduce noise and artifacts, reducing data quality (Power et al., 2014; Bollmann & Barth, 2021). 
These challenges significantly limit its broader _∗_ Correspondence 1 application in clinical diagnosis and machine learning algorithm development. _As a result, there is a_ _growing need for methods to generate and impute fMRI time series signals, which can reduce acqui-_ _sition times and optimize the quality of the acquired data._ Notably, diffusion models have recently emerged as a prominent generative approach and show promise in their generative capacity within the time series domain (Rasul et al., 2021; Shen et al., 2024; Fan et al., 2024; Li et al., 2024b;a), highlighting their potential for further exploration in fMRI signal generation. While showing potential for application, diffusion models often underperform when applied to brain fMRI signal generation, as most advanced methods are designed for task-agnostic generation and neglect **two key intrinsic physical properties** of brain fMRI signals. First, they tend to treat fMRI data from each region-of-interest (ROI) independently, overlooking the high-dimensional interactions between signals from different ROIs (Logothetis, 2008; Bagley et al., 2017), which are indispensable for examining hypothesized disconnectivity effects in neurodegenerative and psychiatric brain diseases (Van Den Heuvel & Pol, 2010). To address this limitation, we incorporate brain regional interactions using hypergraph (see Figure 2), which is constructed based on brain rsFC [1] . Second, while these models excel at capturing temporal trends, they often fail to account for the unique physiological patterns and dynamics properties of brain signals, specifically fractal characteristics. These dynamics arise from repeated, scale-free information exchange between brain regions, generating fMRI signals that exhibit self-similarity (He, 2014). Scale-free dynamics in fMRI have been shown to vary across brain networks and behavioral conditions (Ciuciu et al., 2012), as well as change with age (Suckling et al., 2008), arousal states (Tagliazucchi et al., 2013), and disease processes (Maxim et al., 2005). Analyzing these dynamics provides critical insights into the brain mechanisms that underlie cognition and behavior (Ciuciu et al., 2014; He et al., 2023; 2024). To overcome this shortcoming, we integrate these dynamics properties into our model, emphasizing the multifractal nature of brain activity. By introducing these two fundamental aspects of brain, our method achieves more realistic and precise fMRI signal generation, better aligning with the brain’s physiological reality. The contribution of this paper can be summarized as follows: We propose a novel framework for generating brain fMRI time series signals, grounded in the above two key observations about fMRI signals, brain regional interactions, and multifractal dynamics. This framework, called the Physiological Dynamics-Driven Hierarchical Diffusion Model, is composed of three main components: 1. The Hypergraph-based Hierarchical Signals Generator: To incorporate intricate interdependencies between brain regions, we model brain rsFC as a hypergraph structure that captures complex interactions among signals of ROIs. This component aggregates fMRI signals based on the intrinsic brain functional connectivity matrix across multiple brain regions, producing hierarchical fMRI signals that encapsulate information at various scales. 2. 
The Dynamics Properties Guiding Module: This module is designed to incorporate the dynamics properties of brain activity, specifically by utilizing the multifractal characteristics of fMRI signals in the diffusion generation process. It includes a predictor that estimates the multifractal spectrum and the generalized Hurst exponent of the fMRI series. The predicted multifractal characteristics are then projected as a conditioning input to guide the diffusion process, ensuring that the generated signals maintain the complex, scale-invariant properties observed in real fMRI data. 3. The Cross-brain Region Guiding Progressive Diffusion Model: To ensure complementary signals across different brain region ranges, progressive generation is employed, where broader regional signal trends are used as conditioning inputs to guide the detailed signals of more localized brain areas. Finally, we dynamically unify multiple conditioning inputs during the denoising phase of diffusion, ensuring balanced and coherent integration of the multiple conditions.

2 PRELIMINARIES

2.1 FUNCTIONAL CONNECTIVITY OF FMRI SIGNALS

The human brain is a complex network of functionally and structurally interconnected regions. Even at rest, there is a high level of ongoing functional connectivity and continuous information processing between the hemispheric motor cortices and between other well-established functional networks, such as the primary visual, auditory, and higher-order cognitive networks (Rogers et al., 2007; Van Den Heuvel & Pol, 2010). This leads to complex correlations between the fMRI time series of ROIs on the brain, represented by functional connectivity. Functional connectivity is formally defined as the temporal dependence of the neuronal activity patterns of anatomically separated brain regions (Aertsen et al., 1989; Friston et al., 1993). It can be described through a functional connectivity matrix (Venkatesh et al., 2020), whose elements indicate the strength of the functional interactions between pairs of regions, with higher values signifying stronger correlations. This matrix reflects how different regions of the brain interact or communicate and is often used to study the brain's network organization and the functional relationships underlying cognitive and physiological processes. Given that functional connectivity reflects the complex interactions between different brain regions, multi-level partitioning provides a powerful method to analyze these interactions across varying spatial scales (Betzel & Bassett, 2017; Betzel et al., 2019). Multi-level partitioning is a common approach in brain analysis, as it allows for the simultaneous examination of brain behavior at both the macro level (e.g., interactions between brain regions) and the micro level (e.g., activity within local neuronal clusters) (Wang et al., 2021; Kan et al., 2023; Varga et al., 2024), providing a more comprehensive understanding of brain function and structure across different spatial scales.

[1] Brain resting-state functional connectivity (rsFC) reflects the synchronized activity and communication between brain regions during the resting state.
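For concreteness, the rsFC matrix used throughout is simply the matrix of pairwise temporal correlations between ROI time series; a minimal sketch, with random data standing in for real fMRI:

```python
import numpy as np

def functional_connectivity(X):
    """Resting-state functional connectivity from ROI time series.

    X: (N, T) array, one fMRI time series per ROI. Returns the (N, N)
    matrix of pairwise Pearson correlations, the standard rsFC estimate.
    """
    return np.corrcoef(X)

# Example: 90 ROIs (e.g., an AAL-style parcellation), 200 time points.
rng = np.random.default_rng(0)
C = functional_connectivity(rng.normal(size=(90, 200)))
assert C.shape == (90, 90) and np.allclose(np.diag(C), 1.0)
```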
2.2 MULTIFRACTALITY OF FMRI SIGNALS

Fractal behavior has been observed ubiquitously in neuroimaging studies and may arise from various mediators such as hemodynamics, respiration, cardiac fluctuations, and brain neural activities (Campbell & Weber, 2022). Extensive research has demonstrated that brain activity, regardless of the neuroimaging technique used to observe it, is inherently arrhythmic and exhibits scale-free temporal dynamics (Racz et al., 2018a; Guan et al., 2022). Scale-invariant dynamics in fMRI are often associated with long-range temporal correlations and have been shown in numerous studies to be closely related to intrinsic ongoing brain activity (Ciuciu et al., 2014).

(Definition: scale-free law and self-similarity) Data series generated by complex systems tend to fluctuate across different time scales (Boeing, 2016). These fluctuations often follow a scale-free law, maintaining consistent and invariant patterns across several orders of magnitude (Proekt et al., 2012). Scale-free dynamics can be described in the spectral domain by a power law, i.e., the power spectrum follows a single power law over the whole range of frequencies. Let X denote the fMRI signal quantifying brain activity and Γ_X(f) its power spectral density (PSD). The scale-free property is classically defined as (Ciuciu et al., 2012)

Γ_X(f) ∝ f^{−β}, β ≥ 0,  (1)

with f_m ≤ f ≤ f_M and f_M/f_m ≫ 1, where β is a constant parameter known as the scaling exponent. The power spectrum of fMRI data follows this power law across a wide range of frequencies, suggesting that multiple frequencies contribute equivalently to its dynamics, rather than only a specific, preselected frequency band as commonly used in brain analysis.

Given that the fMRI signal X follows the power law in equation 1, we further assume that X(t) is a one-dimensional, stationary, jointly Gaussian process, where t is the time step. The covariance function of X(t) can then be expressed as

C_X(τ) ∼ σ_X² (1 + C′ |τ|^{−α}), for τ_m ≤ τ ≤ τ_M,

with α = 1 − β, where C′ is a constant and σ_X² is the variance of X. It is then easy to derive that

E(X(t+τ) − X(t))² = E X(t+τ)² + E X(t)² − 2 E[X(t+τ) X(t)] = c_2 |τ|^{−α},

where c_2 = −2σ_X² C′. The fact that X is Gaussian further implies that for all q > −1,

E|X(t+τ) − X(t)|^q = c_q |τ|^{−qα/2}, τ_m ≤ τ ≤ τ_M.  (2)

Defining Y(t) = ∫^t X(s) ds, it follows that, for τ_m ≤ τ_1, τ_2 ≤ τ_M,

{ (Y(t+τ_1) − Y(t)) / τ_1^H }_{t∈R} =_{fdd} { (Y(t+τ_2) − Y(t)) / τ_2^H }_{t∈R},  (3)

where H = −α/2 = (β − 1)/2 and =_{fdd} denotes equality of all joint finite-dimensional distributions. In other words, for all q > −1 such that E|Y(t)|^q < ∞,

E|Y(t+τ) − Y(t)|^q = c_q |τ|^{qH}, τ_m ≤ τ ≤ τ_M, or equivalently E|Y(t+τ_2) − Y(t)|^q = E|Y(t+τ_1) − Y(t)|^q (τ_2/τ_1)^{qH},  (4)

when τ_m ≤ τ_1, τ_2 ≤ τ_M. A geometric dataset exhibiting scale invariance is considered self-similar if it can be decomposed into smaller parts, each of which resembles the entire original structure (Mishra & Bhatnagar, 2014).
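The scaling exponents above can be estimated directly from data via structure functions: fit log S_q(τ) against log τ, where S_q(τ) = E|Y(t+τ) − Y(t)|^q. For a monofractal H-sssi signal the fitted exponent is linear in q (qH, as in equation 4); a concave exponent curve signals the multifractality formalized next. A sketch using Brownian motion (H = 1/2) as a known test case; the scale grid and q values are arbitrary choices:

```python
import numpy as np

def scaling_exponents(Y, qs, taus):
    """Estimate q-order scaling exponents zeta(q) of a signal Y from
    structure functions S_q(tau) = mean |Y(t+tau) - Y(t)|^q, via a
    log-log least-squares fit over the scales in `taus`.

    The generalized Hurst exponent is h(q) = zeta(q) / q; for a
    monofractal signal h(q) is constant, i.e., zeta(q) is linear.
    """
    log_taus = np.log(taus)
    zetas = []
    for q in qs:
        log_S = [np.log(np.mean(np.abs(Y[tau:] - Y[:-tau]) ** q)) for tau in taus]
        zetas.append(np.polyfit(log_taus, log_S, 1)[0])  # slope = zeta(q)
    return np.array(zetas)

# Sanity check on Brownian motion (H = 0.5): zeta(q) should be close to q/2.
rng = np.random.default_rng(0)
Y = np.cumsum(rng.normal(size=100_000))
qs = np.array([0.5, 1.0, 2.0, 3.0])
taus = np.unique(np.logspace(0, 3, 12).astype(int))
zetas = scaling_exponents(Y, qs, taus)
print(np.round(zetas, 2), np.round(zetas / qs, 2))  # zeta(q), h(q) ~ 0.5
```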
As shown in equations 3 and 4, Y(t) is an example of a self-similar process. From a more general perspective, the equations above hold not only for jointly Gaussian processes but for a broader and more general class, that of self-similar processes with stationary increments, referred to as H-sssi processes and defined in (Samorodnitsky et al., 1996):

{X(t)}_{t∈R} =_{fdd} {a^H X(t/a)}_{t∈R}  (5)

for all a > 0 and H ∈ (0, 1). The parameter H is referred to as the self-similarity exponent. Real physiological fMRI data can also be viewed as the increment process Y(t) = X(t+τ_0) − X(t) of an H-sssi process X, where τ_0 is a constant determined by physiology and the data acquisition setup, and thereby exhibits both scale-free and self-similarity properties. Data from H-sssi processes can typically be characterized by a single, monofractal scaling exponent. In reality, however, brain signals are more complex: while they exhibit global pattern consistency (monofractality), they also demonstrate distinct functional activity patterns in local areas (multifractality) (Racz et al., 2018b; França et al., 2018; La Rocca et al., 2018). This reflects the multi-scale regulation of neural activity in the brain. Larger-scale signals may capture whole-brain functional coordination, aligning with the global self-similarity described by monofractal models, whereas smaller-scale signals represent localized neuronal group activity.

(Definition: multifractality) For fMRI data Y(t), equation 4 holds over a wide range of τ. However, the scaling exponents can deviate significantly from the expected linear behavior qH, manifesting as

E|Y(t+τ) − Y(t)|^q = c_q |τ|^{ζ(q)}, τ_m ≤ τ ≤ τ_M.  (6)

Note that the q-order scaling exponent ζ(q) is necessarily a strictly concave function of q. For this reason, a class broader than H-sssi processes is needed to describe such data, referred to as the class of multifractal processes. Data with multifractal properties are more complex, featuring varying local characteristics described by a range of scaling exponents. Multifractal signals exhibit both small-scale and large-scale local fluctuations, which are absent in monofractal signals. These fluctuations, associated with different statistical moments, enable multifractals to capture fractal properties across multiple scales and represent localized nonlinear dynamics within the data (Lopes & Betrouni, 2009). By extending this capability, multifractal models provide a deeper understanding of brain dynamics, characterizing the interactions across various scales and revealing how cognitive functions emerge from the synergy of processes operating at multiple levels.

3 METHOD

3.1 THE FRAMEWORK

The overall framework of the Physiological Dynamics-Driven Hierarchical Diffusion Model is illustrated in Figure 1. The fMRI time series data of a subject with N ROIs is denoted X_T = (x_1, ..., x_i, ..., x_N)^T ∈ R^{N×T}, where x_i ∈ R^T represents the time series of the i-th ROI. Each x_i spans T timesteps. We are given input data X_{t_0−L:t_0} ∈ R^{N×L}, where L represents the size of the retrospective window and t_0 denotes the initial position of the forecast window.
3 METHOD

3.1 THE FRAMEWORK

The overall framework of the Physiological Dynamics-Driven Hierarchical Diffusion Model is illustrated in Figure 1. The fMRI time series of a subject with $N$ ROIs is denoted $\mathbf{X} = (x_1, \dots, x_i, \dots, x_N)^T \in \mathbb{R}^{N\times T}$, where $x_i \in \mathbb{R}^T$ represents the time series of the $i$-th ROI, spanning $T$ timesteps. We are given input data $X_{t_0-L:t_0} \in \mathbb{R}^{N\times L}$, where $L$ is the size of the retrospective window and $t_0$ denotes the initial position of the forecast window. The objective of the task is to predict the future fMRI values of the $N$ ROIs over a span of $t$ future time steps, $X_{t_0:t_0+t} \in \mathbb{R}^{N\times t}$.

First, we input the fMRI time series $X_{t_0-L:t_0}$ into the Hypergraph-based Hierarchical Signal Generator to produce $R$ time series $X^r_{t_0-L:t_0}$, where $r$ denotes signals aggregated from $r$-level spatial ranges of brain regions. Then, we use a diffusion model to generate the $R$ time series $\hat{X}^r_{t_0:t_0+t}$. Specifically, we first extract the historical embedding $\mathbf{h}^r_{t_0} = \mathrm{RNN}_\theta\big(\mathbf{x}^r_{t_0}, \mathbf{h}^r_{t_0-1}\big)$ of the known time window using an RNN; this serves as the basic historical condition $c_{history}$ for the diffusion process. This embedding is then input into the Dynamics Properties Guiding Module, where a specifically designed loss function $\mathcal{L}_{fractal}$ is used to optimize the predicted multifractal characteristics and generate the corresponding multifractal conditions $c_{fractal}$. Additionally, in the Cross-brain Region Guiding Progressive Diffusion Model, signals from broader brain-region ranges $\hat{X}^{r+1}_{t_0:t_0+t}$ are used as cross-brain-region conditions $c_{region}$ to guide the generation of more detailed signals $\hat{X}^r_{t_0:t_0+t}$ during the diffusion process. As a result, we obtain realistic generated signals $\hat{X}^{r=1}_{t_0:t_0+t}$, which accurately capture brain-region relationships and fractal characteristics while maintaining alignment with the known retrospective-window data.

Figure 1: The framework of the Physiological Dynamics-Driven Hierarchical Diffusion Model (PDH-Diffusion) with three main modules (a, b, and c), where we introduce two key physiological characteristics of brain fMRI to generate more realistic fMRI signals.

3.2 HYPERGRAPH-BASED HIERARCHICAL SIGNALS GENERATOR

The first key module in our framework is the Hypergraph-based Hierarchical Signals Generator, where we model the complex interdependence between fMRI signals from sample points across different brain regions as a hypergraph structure. This structure enables the propagation of information across varying spatial scales of brain regions, resulting in hierarchical fMRI signals that capture multiple levels of information. As mentioned before, the fMRI time series of a subject with $N$ ROIs is denoted $\mathbf{X} = (x_1, \dots, x_i, \dots, x_N)^T \in \mathbb{R}^{N\times T}$. Traditional generation methods (Rasul et al., 2021; Alcaraz & Strodthoff, 2022) fail to capture the complex high-dimensional physiological and structural dependencies between the fMRI data of different ROIs, leading to suboptimal outcomes. In this framework, we address this by modeling the relationships between fMRI signals from different ROIs using a hypergraph structure; a minimal sketch of the construction follows, and the details are given next.
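The following is a minimal sketch (ours, with an illustrative threshold, hop count, and a random stand-in for the functional connectivity matrix) of the two steps described in the next paragraph: thresholding a connectivity matrix into a graph, then forming one hyperedge per vertex from its $k$-hop neighborhood.

```python
import numpy as np

def build_hypergraph(C, threshold=0.5, k=2):
    """Threshold a functional-connectivity matrix C (N x N) into a graph,
    then form one hyperedge per vertex from its k-hop neighborhood.
    Returns the N x N adjacency A and an N x N incidence matrix H with
    H[v, e] = 1 if vertex v lies in hyperedge e."""
    N = C.shape[0]
    A = (np.abs(C) > threshold).astype(int)
    np.fill_diagonal(A, 0)
    # Reachability within k hops: powers of (A + I), thresholded to {0, 1}.
    reach = np.linalg.matrix_power(A + np.eye(N, dtype=int), k) > 0
    H = reach.astype(int).T  # hyperedge e = k-hop neighborhood of vertex e
    return A, H

# Illustrative usage with a random symmetric "connectivity" matrix.
rng = np.random.default_rng(0)
M = rng.uniform(-1, 1, size=(8, 8))
C = (M + M.T) / 2
A, H = build_hypergraph(C, threshold=0.4, k=2)
print(A.shape, H.sum(axis=0))  # number of vertices per hyperedge
```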
To align with real physiological conditions, we first model the fMRI signals as a standard graph structure $G_s$, using the functional connectivity matrix $C \in \mathbb{R}^{N\times N}$ as the adjacency matrix, where $N$ is the number of ROIs and each element of $C$ reflects the interaction pattern between two ROIs (i.e., between distinct fMRI signals). In particular, a threshold is applied to the value distribution of $C$: edges are established between sampling points whose connectivity values exceed the threshold. The resulting graph is denoted $G_s = (V_s, E_s)$, where $v_i \in V_s$ denotes a vertex corresponding to a sampling point, and $e_{ij} \in E_s$ represents an edge connecting vertices $v_i$ and $v_j$. On the basis of the constructed graph $G_s$ of fMRI signals, we further construct a hypergraph $G = (V, E, W)$ by defining hyperedges using the $k$-Hop neighbors method described in (Gao et al., 2022). The hypergraph $G$ consists of a vertex set $V$, a hyperedge set $E$, and a hyperedge weight matrix $W$.

Idea Generation Category:
0Conceptual Integration
zZ6TT254Np
### CONTROL-ORIENTED CLUSTERING OF VISUAL LATENT REPRESENTATION

**Han Qi**∗1, **Haocheng Yin**∗†2 and **Heng Yang**1
1 School of Engineering and Applied Sciences, Harvard University
2 Department of Computer Science, ETH Zürich

ABSTRACT

We initiate a study of the geometry of the visual representation space —the information channel from the vision encoder to the action decoder— in an image-based control pipeline learned from behavior cloning. Inspired by the phenomenon of _neural collapse_ (NC) in image classification (Papyan et al., 2020), we empirically demonstrate the prevalent emergence of a similar _law of clustering_ in the visual representation space. Specifically,

- In _discrete_ image-based control (e.g., Lunar Lander), the visual representations cluster according to the natural discrete action labels;
- In _continuous_ image-based control (e.g., Planar Pushing and Block Stacking), the clustering emerges according to "_control-oriented_" classes that are based on (a) the relative pose between the object and the target in the input or (b) the relative pose of the object induced by expert actions in the output. Each of the classes corresponds to one _relative pose orthant_ (REPO).

Beyond empirical observation, we show such a law of clustering can be leveraged as an _algorithmic tool_ to improve test-time performance when training a policy with limited expert demonstrations. Particularly, we _pretrain_ the vision encoder using NC as a _regularization_ to encourage control-oriented clustering of the visual features. Surprisingly, such an NC-pretrained vision encoder, when finetuned end-to-end with the action decoder, boosts the test-time performance by 10% to 35%. Real-world vision-based planar pushing experiments confirmed the surprising advantage of control-oriented visual representation pretraining. [1]

1 INTRODUCTION

We use a toy example to (a) introduce the concept of neural collapse and the task of policy learning from expert demonstrations, and (b) synchronize readers from the respective communities.

**Minimum-time double integrator** Consider a dynamical system known as the _double integrator_

$$\ddot{q}(t) = u(t), \qquad (1)$$

where $q \in \mathbb{R}$ is the position, and $u \in \mathcal{U} := [-1, 1]$ is the external control that decides the system's acceleration. For an example, imagine $q$ as the position of a car and $u$ as how much throttle or braking is applied to the car. Let $x(t) := (q(t), \dot{q}(t)) \in \mathbb{R}^2$ be the state of the system. Suppose the system starts at $x(0) = \chi$, and we want to find the optimal state-feedback policy that drives the system to the origin in _minimum time_. Formally, this is an optimal control problem written as

$$\min_{u(t)}\; T, \quad \text{subject to} \quad x(0) = \chi,\; x(T) = 0,\; u(t) \in \mathcal{U}\;\;\forall t,\; \text{and (1)}. \qquad (2)$$

Problem (2) admits a closed-form optimal policy (Rao & Bernstein, 2001):

$$u^\star = \pi^\star(x) := \begin{cases} +1 & \text{if } \big(\dot{q} < 0 \text{ and } q \leq \tfrac{1}{2}\dot{q}^2\big) \text{ or } \big(\dot{q} \geq 0 \text{ and } q < -\tfrac{1}{2}\dot{q}^2\big), \\ 0 & \text{if } q = 0 \text{ and } \dot{q} = 0, \\ -1 & \text{otherwise.} \end{cases} \qquad (3)$$

This optimal policy is _bang-bang_: it applies either full throttle or full brake until reaching the origin.
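A minimal sketch (ours, not the authors' code) of the closed-form policy in equation 3, rolled out with forward-Euler integration; the step size, initial state, and stopping tolerance are illustrative assumptions.

```python
def pi_star(q, qd):
    """Closed-form minimum-time policy of equation 3 for the double integrator."""
    if q == 0.0 and qd == 0.0:
        return 0.0
    if (qd < 0 and q <= 0.5 * qd**2) or (qd >= 0 and q < -0.5 * qd**2):
        return +1.0
    return -1.0

# Roll out the policy from x(0) = (1, 0); the true minimum time here is 2s.
q, qd, dt = 1.0, 0.0, 1e-3
for step in range(30_000):
    q, qd = q + dt * qd, qd + dt * pi_star(q, qd)
    if abs(q) < 5e-3 and abs(qd) < 5e-3:
        print(f"reached the origin at t ~ {step * dt:.2f}s")
        break
```

From $(1, 0)$ the policy applies $u = -1$ until the switching curve $q = \tfrac{1}{2}\dot{q}^2$ is reached at $t = 1$, then $u = +1$, arriving at the origin at $t = 2$.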
∗ Equal contribution. † Work done during a visit at the Harvard Computational Robotics Lab.
1 https://computationalrobotics.seas.harvard.edu/ControlOriented_NC

We treat policy learning as a classification problem given input $x$ and output $u$. We design a six-layer MLP ($2 \to 64 \to 64 \to 64 \to 64 \to 3 \to 3$) and train it with the cross-entropy loss. We use $N = 5000$. For the $u = 0$ class, we repeat $N$ times the sample "$x = 0$" to balance the dataset.

**Geometry of the representation space** The MLP is able to learn a good policy, but this is not the purpose of our experiment. We instead look at the geometry of the three-dimensional feature space at the penultimate layer. Particularly, let $\{f_i^{+1}\}_{i=1}^N$, $\{f_i^{0}\}_{i=1}^N$, $\{f_i^{-1}\}_{i=1}^N$ be the three sets of feature vectors corresponding to controls $\{+1, 0, -1\}$. We compute the per-class and global mean vectors:

$$\mu^c = \frac{1}{N}\sum_{i=1}^{N} f_i^c, \quad c = +1, 0, -1; \qquad \mu = \frac{1}{3}\left(\mu^{+1} + \mu^{0} + \mu^{-1}\right). \qquad (4)$$

Let $\tilde{\mu}^c := \mu^c - \mu$ be the _globally-centered_ class mean for each class $c$. Fig. 1 plots the class means $\tilde{\mu}^c$ and the globally-centered feature vectors $\tilde{f}_i^c := f_i^c - \mu$ in different colors, as training progresses. We observe a clear _clustering_ of the features according to their labels. Moreover, the clustering admits a precise geometry: (a) the lengths of the globally-centered class means $\|\tilde{\mu}^c\|$ tend to be equal to each other for $c = +1, 0, -1$, and (b) the angles $\angle(\tilde{\mu}^{c_1}, \tilde{\mu}^{c_2})$ spanned by pairs of class mean vectors also tend to be equal to each other for $c_1 \neq c_2$, as shown by the perfect "tripod" in Fig. 1 at epoch 1000.

**Neural collapse** The clustering phenomenon shown in Fig. 1 was first observed by Papyan et al. (2020) in image classification and dubbed _neural collapse_ (NC). In particular, NC refers to a set of four manifestations in the representation space (i.e., the penultimate layer):

(NC1) Variability collapse: feature vectors of the same class converge to their class mean.
(NC2) Simplex ETF: globally-centered class mean vectors converge to a geometric configuration known as a Simplex Equiangular Tight Frame (ETF), i.e., the mean vectors have the same lengths and form equal angles pairwise (as shown in Fig. 1).
(NC3) Self-duality: the class means and the last-layer linear classifiers are self-dual.
(NC4) Nearest class-center prediction: the network predicts the class whose mean vector has the minimum Euclidean distance to the feature of the test image.

Since its original discovery, NC has attracted significant interest, both empirical (Jiang et al., 2023; Wu & Papyan, 2024; Rangamani et al., 2023) and theoretical (Fang et al., 2021; Han et al., 2021); see Appendix I for a detailed discussion of existing literature. The results we show in Fig. 1 are just another example reinforcing the prevalence of neural collapse, because behavior cloning of the bang-bang optimal controller reduces to a classification problem.

**Our goal** Does a similar _law of clustering_, in the spirit of NC, happen when cloning image-based control policies?
Two motivations underlie this question. First, understanding the structure of learned representations has been a fundamental pursuit towards improving the interpretability of deep learning models. While the discovery of NC has deepened our understanding of visual representation learning for _classification_, such an understanding is missing when using visual representations for _decision-making_. Second, just as related work has shown the benefits of NC for generalization and robustness (Bonifazi et al., 2024; Liu et al., 2023; Yang et al., 2022), we seek to uncover new algorithmic tools that improve model performance by "shaping" the latent representation space.

Towards this goal, we consider a general image-to-action architecture consisting of (a) a _vision encoder_ that embeds high-dimensional images as compact visual features (He et al., 2016; Oquab et al., 2023), and (b) an _action decoder_ that generates control outputs given the latent vectors (Mandlekar et al., 2021; Chi et al., 2023). The entire pipeline is trained end-to-end using expert demonstrations. We study the instantiation of this architecture in three tasks: Lunar Lander from OpenAI Gym (Brockman, 2016), Planar Pushing that is popular in robotics (Chi et al., 2023), and Block Stacking from MimicGen (Mandlekar et al., 2023). The first is a _discrete control_ task, while the second and the third are _continuous control_ tasks. Fig. 2 overviews the architecture and the tasks.

Figure 2: Investigation of a law of clustering, similar to NC, in the visual representation space. The bridge and information channel from vision to control is the visual representation space.

We aim to study the geometry of the visual representation space through the lens of neural collapse. Particularly, we seek to answer two fundamental questions.

(Q1) Does a clustering phenomenon similar to NC happen in the visual representation space? If so, according to which "classes" do the features cluster?
(Q2) Is the extent to which neural collapse happens related to the model's test-time performance? If so, can we turn neural collapse from a "phenomenon" into an "algorithmic tool"?

These two questions have never been answered before —either empirically or theoretically— because vision-based control is in stark contrast with the existing literature on neural collapse for image classification. First, a prerequisite of NC is that the training data need to already be classified. Continuous vision-based control tasks (like planar pushing and block stacking), however, are _regression_ problems where the output is a continuous control signal. There is no supervision coming from classification whatsoever. Second, theoretical analysis of NC typically assumes a linear classifier from the representation space to the model output (Han et al., 2021; Fang et al., 2021; Jiang et al., 2023). This assumption is strongly violated in vision-based control because between the visual representation space and the control output lies a rather nonlinear and complicated action decoder (see Fig. 2).

**Our contribution** Despite the challenges mentioned, we empirically demonstrate that the answers to both questions are affirmative in vision-based control tasks.
Our contributions are:

(C1) **Control-oriented clustering** We first show that, on the discrete-control task Lunar Lander, visual latent features cluster according to the discrete action labels. This forms the natural vision-based generalization of the double-integrator example shown in Fig. 1. We then primarily focus on continuous-control tasks. A natural path to study NC for a regression problem is to give each sample a class label and check whether the visual features cluster according to the labels. The nontrivial question then becomes: what should these classes be? Based on the posit that the visual representation should _convey a goal of control_ for the action decoder, we design two "control-oriented" classification strategies. They compute (a) the relative pose between the object and the target in the input image space, or (b) the relative pose change of the object induced by the sequence of expert actions, and classify the samples into 8 classes, each corresponding to one orthant in the relative pose space (called a REPO). We then demonstrate the prevalent emergence of neural collapse in the visual representation space, in both Planar Pushing and Block Stacking.

(C2) **Control-oriented visual representation pretraining** When the number of expert demonstrations decreases, the strength of NC decreases, and so does the model's test-time performance. This motivates us to leverage NC as an algorithmic tool to improve the model's performance under insufficient demonstrations. Indeed, we show that by using the control-oriented NC metrics as the loss function to pretrain the vision encoder, we obtain a 10% to 35% boost in the model's test-time performance. Real-world Planar Pushing experiments confirmed the advantage of NC pretraining.

**Paper organization** We first show the observation of neural collapse in the discrete-control task Lunar Lander in §2. We then introduce the problem setup of continuous vision-based control tasks, control-oriented classification, and the prevalence of neural collapse in the visual representation space for continuous-control tasks in §3. We describe our method of NC pretraining and show that it improves model performance in §4. We demonstrate real-world robotic experiments in §5 and conclude in §6.

2 NEURAL COLLAPSE IN DISCRETE VISION-BASED CONTROL

As a transition from the _bang-bang_ policy in §1 to more challenging continuous vision-based control tasks, we first study whether the visual representation clusters according to the given discrete action labels in a discrete vision-based control task, _Lunar Lander_ (Brockman, 2016). In a nutshell, Lunar Lander is a task where the lander aims to reach a given target position by deciding between four discrete actions (see Fig. 2). We first train a performant _state-based_ policy using reinforcement learning (RL) and then use the RL expert to collect vision-based demonstrations for behavior cloning (BC). The BC pipeline uses ResNet18 as the vision encoder and an MLP as the action decoder, supervised by cross-entropy loss on the expert demonstrations. Appendix A provides more details. Since the action space of Lunar Lander is discrete with 4 actions, we directly study the clustering of the latent features according to these 4 actions. Suppose the training dataset contains $M$ samples of images (input) and actions (output), and denote the set of visual features as $F = \{f_t\}_{t=1}^M$, where each feature $f_t$ is computed by passing the input images through the ResNet18 encoder.
We assign the action label $c \in [4]$ to each feature $f_t$ and denote it $f_t^c$.

**Neural collapse metrics** We compute three metrics to evaluate (NC1) and (NC2). Define

$$\mu^c = \frac{1}{M_c}\sum_{i=1}^{M_c} f_i^c, \quad c = 1, \dots, C, \qquad \mu = \frac{1}{M}\sum_{c=1}^{C}\sum_{i=1}^{M_c} f_i^c, \qquad (5)$$

as the class mean vectors and the global mean vector, respectively. Note that $M_c$ denotes the total number of samples in class $c$ and $\sum_{c=1}^{C} M_c = M$. Then define $\tilde{\mu}^c := \mu^c - \mu$ as the globally-centered class means, and $\tilde{f}_i^c := f_i^c - \mu$ as the globally-centered feature vectors. Consistent with Wu & Papyan (2024), we evaluate (NC1) using the _class-distance normalized variance_ (CDNV) metric that depends on the ratio of within-class to between-class variabilities:

$$\mathrm{CDNV}_{c,c'} := \frac{\sigma_c^2 + \sigma_{c'}^2}{2\,\|\tilde{\mu}^c - \tilde{\mu}^{c'}\|^2}, \quad \forall c \neq c', \qquad (6)$$

where $\sigma_c^2 := \frac{1}{M_c - 1}\sum_{i=1}^{M_c} \|\tilde{f}_i^c - \tilde{\mu}^c\|^2$ is the within-class variation. Clearly, (NC1) happens when $\mathrm{CDNV}_{c,c'} \to 0$ for any $c \neq c'$. We use the single number CDNV to denote the mean of all $\mathrm{CDNV}_{c,c'}$ for $c \neq c'$. We evaluate (NC2) using the standard deviation (STD) of the lengths of, and the angles spanned by, the $\tilde{\mu}^c$ (with AVE as shorthand for averaging):

$$\mathrm{STDNorm} := \frac{\mathrm{STD}\big(\{\|\tilde{\mu}^c\|\}_{c=1}^{C}\big)}{\mathrm{AVE}\big(\{\|\tilde{\mu}^c\|\}_{c=1}^{C}\big)}, \qquad \mathrm{STDAngle} := \mathrm{STD}\left(\left\{\angle\!\left(\frac{\tilde{\mu}^c}{\|\tilde{\mu}^c\|}, \frac{\tilde{\mu}^{c'}}{\|\tilde{\mu}^{c'}\|}\right)\right\}_{c \neq c'}\right). \qquad (7)$$

Clearly, (NC2) happens if and only if both STDNorm and STDAngle become zero. We do not evaluate (NC3) and (NC4) because they require a linear classifier from the representation space to the output, which does not hold in our vision-based control setup.

**Results** Fig. 3 plots the three NC evaluation metrics w.r.t. training epochs for Lunar Lander. We observe a consistent decrease of the three NC metrics as training progresses, approaching zero at the end of training, which suggests strong control-oriented clustering in the visual representation space.
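A minimal sketch (ours, not the authors' code) of equations 5–7 on raw feature arrays; the unbiased $1/(M_c - 1)$ normalization follows equation 6, while numpy's default (population) standard deviation is used for the STD terms.

```python
import numpy as np
from itertools import combinations

def nc_metrics(F, y):
    """NC metrics of equations 5-7.
    F: (M, d) array of features; y: (M,) array of class labels.
    Returns (mean CDNV, STDNorm, STDAngle)."""
    classes = np.unique(y)
    mu_g = F.mean(axis=0)
    mu_t = {c: F[y == c].mean(axis=0) - mu_g for c in classes}  # centered class means

    def sigma2(c):  # within-class variation around the centered class mean
        Fc = F[y == c] - mu_g
        return np.sum((Fc - mu_t[c])**2) / (len(Fc) - 1)

    cdnv = [(sigma2(a) + sigma2(b)) / (2 * np.sum((mu_t[a] - mu_t[b])**2))
            for a, b in combinations(classes, 2)]
    norms = np.array([np.linalg.norm(mu_t[c]) for c in classes])
    angles = [np.arccos(np.clip(mu_t[a] @ mu_t[b] /
              (np.linalg.norm(mu_t[a]) * np.linalg.norm(mu_t[b])), -1, 1))
              for a, b in combinations(classes, 2)]
    return np.mean(cdnv), norms.std() / norms.mean(), np.std(angles)

# Illustrative usage on random features (no clustering expected here).
rng = np.random.default_rng(0)
print(nc_metrics(rng.standard_normal((400, 16)), rng.integers(0, 4, 400)))
```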
Figure 3: Emergence of neural collapse in the visual representation space for Lunar Lander. From left to right: the three NC metrics w.r.t. training epochs, using ResNet18 as the vision encoder and an MLP as the action decoder, with the four discrete actions as classification labels.

3 NEURAL COLLAPSE IN CONTINUOUS VISION-BASED CONTROL

We now consider continuous vision-based control tasks, where an obvious set of class labels no longer exists. We focus on two tasks: Planar Pushing and Block Stacking.

Planar pushing (Lynch & Mason, 1996; Yu et al., 2016) is a longstanding problem in robotics, and consists of controlling a _pusher_ to push a given _object_ to follow certain trajectories or to a target position. This problem is fundamental because manipulating objects is crucial to deploying robots in the physical world (Bicchi & Kumar, 2000). Despite how effortlessly humans perform this task, planar pushing represents one of the most challenging problems for classical model-based control (e.g., optimization-based control (Wensing et al., 2023)) due to the underlying hybrid dynamics and underactuation (i.e., the controller needs to plan where/when the pusher should make/break contact with the object and whether to slide along or stick to the surface of the object, leading to different "modes" in the dynamics (Goyal et al., 1991; Hogan & Rodriguez, 2020)). Block stacking (Mandlekar et al., 2023; Zhu et al., 2020) is a classical manipulation task, aiming to stack one block onto another target block. This involves lifting up a block, moving the block above the target, rotating the block to match the angle of the target, and putting the block down. For both tasks, we describe how to train a vision-based control policy from expert demonstrations.

**Policy learning from demonstrations** We are given a collection of $N$ expert demonstrations $D = \{D_i\}_{i=1}^N$ where each $D_i$ is a sequence of images and controls

$$D_i = (I_0, u_0, I_1, u_1, \dots, I_{l_i-1}, u_{l_i-1}, I_{l_i}),$$

with $I$ the image, $u$ the control, $l_i$ the length of $D_i$, and in the final image $I_{l_i}$ the object is moved to the target position. From $D$ we extract a set of $M$ training samples $S = \{s_t\}_{t=1}^M$ where each sample $s_t$ consists of a sequence of $K$ images and a sequence of $H$ controls

$$s_t = (I_{t-K+1}, \dots, I_{t-1}, I_t \mid u_t, u_{t+1}, \dots, u_{t+H-1}). \qquad (8)$$

$M$ is usually much larger than $N$. The end-to-end policy is then trained on $S$; it takes as input the sequence of images and outputs the sequence of controls. At test time, the policy is executed in a receding-horizon fashion (Mayne & Michalska, 1988): only the first predicted control $u_t$ is executed, and then a new sequence of controls is re-predicted to incorporate feedback from the new image observations. For every training sample (8), denote

$$f_t = \mathrm{VisionEncoder}(I_{t-K+1}, \dots, I_{t-1}, I_t) \qquad (9)$$

as the visual feature of the sequence of images.

Figure 4: The relative pose triplet $(x, y, \theta)$ between the object and the target, with example control goals (e.g., "move southwest, rotate clockwise"); the middle panel visualizes the relative pose orthants (REPOs).

The essence of neural collapse is a _law of clustering_ in the representation space. To study this, we need to assign every feature vector $f_t$ a class. In image classification and discrete vision-based control (e.g., Lunar Lander), this class is explicitly given. However, in vision-based continuous control the output is a sequence of controls, so what should the "class" be? A natural choice is to perform $k$-means clustering of the output actions. Unfortunately, not only is this classification not interpretable, it also does not lead to the observation of NC (see Appendix B). We then conjecture that for control tasks such as planar pushing and block stacking, the role of the vision encoder is to convey a "control goal" to the action decoder from image observations. For example, looking at the left image in Fig. 4 for Planar Pushing, the vision encoder may set the goal of control to "push the T block southwest and rotate it counter-clockwise". Similarly, in the right image in Fig. 4 for Block Stacking, the vision encoder may convey "move the block southeast and rotate it clockwise". Building upon this intuition, we design two strategies to classify each training sample: one that is _goal-based_ and the other that is _action-based_. Since both strategies lead to similar observations of NC, we only present goal-based classification in the main text and refer the interested reader to Appendix C for action-based classification.

**Goal-based classification** Given a sample $s_t$ as in (8), we look at image $I_t$. We compute the relative pose of the target position with respect to the object. As depicted in Fig. 4, this relative pose is a triplet $(x, y, \theta)$ containing a 2D translation and a 1D rotation for both planar pushing and block stacking. We divide the 3D space that the relative pose triplet $(x, y, \theta)$ lives in into eight classes based on the signs of $x, y, \theta$. In other words, each class corresponds to one orthant of the space (called a _relative pose orthant_, or in short a REPO), as visualized in Fig. 4 middle. A nice property is that the resulting classes are semantically interpretable!

**Remark 1** (Finegrained REPOs)**.** _What will happen if the relative pose space is divided into a larger number of classes?_
_In Appendix G.4, we divide the relative pose space into 64 and 216 classes, and demonstrate that such a law of clustering still holds, albeit to a slightly weaker extent._

Figure 5: Test scores w.r.t. training epochs of four different instantiations of the image-based control pipeline for planar pushing: (a) ResNet + DM, (b) DINOv2 + DM, (c) ResNet + LSTM, (d) DINOv2 + LSTM. In (a) and (b) we show test scores of three random seeds. In (c) and (d) we show test scores of a single seed, because using LSTM as the action decoder leads to poor test-time performance, an observation that is consistent with Chi et al. (2023).

3.2 PREVALENT EMERGENCE OF NEURAL COLLAPSE

Using the control-oriented classification strategy described above, we are ready to study whether a law of clustering similar to NC emerges in the visual representation space for continuous control tasks. Given the set of visual features $F = \{f_t\}_{t=1}^M$, we assign a class label $c \in [C]$ to each feature vector $f_t$ and denote it $f_t^c$. We use the same neural collapse metrics introduced in §2.

3.2.1 PLANAR PUSHING

**Simulation setup** We collect $N = 500$ expert demonstrations on a push-T setup shown in Fig. 2. At each round, the object and target positions are randomly initialized, and the same human expert controls the pusher to push the object into alignment with the target position (through a computer interface provided by pymunk (Blomqvist, 2024)). This provides $M = 55{,}480$ training samples. We train four different instantiations of the image-based control pipeline: using ResNet or DINOv2 as the vision encoder, and a Diffusion Model (DM) or LSTM as the action decoder. The four trained models are evaluated on a test push-T dataset with 100 tasks. We define our evaluation metric as the ratio of the overlapping area between the object and the target to the total area of the target.
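A minimal sketch (ours; the masks and grid size are illustrative assumptions) of this evaluation metric on boolean occupancy masks:

```python
import numpy as np

def overlap_score(object_mask, target_mask):
    """Ratio of the overlapping area between object and target to the total
    area of the target; both inputs are boolean masks on the same pixel grid."""
    target_area = target_mask.sum()
    if target_area == 0:
        return 0.0
    return np.logical_and(object_mask, target_mask).sum() / target_area

# Illustrative usage with two axis-aligned squares on a 100 x 100 grid.
obj = np.zeros((100, 100), dtype=bool); obj[20:60, 20:60] = True
tgt = np.zeros((100, 100), dtype=bool); tgt[40:80, 40:80] = True
print(overlap_score(obj, tgt))  # 0.25: a quarter of the target is covered
```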
**Results** Fig. 5 shows the evaluation scores of the four different models. DINOv2 combined with a diffusion model (DM) attains the best performance, around 70%. ResNet combined with DM (the original diffusion policy from Chi et al. (2023)) is slightly worse but quite close. When an LSTM is used as the action decoder in place of DM, the performance drops significantly, an observation that is consistent with Chi et al. (2023) and confirms the advantage of using DM as the action decoder. For this reason, it is not worthwhile to train the LSTM models from different random seeds.

Fig. 6 plots the three NC evaluation metrics w.r.t. training epochs for the two DM-based models with the goal-based classification strategy described in §3.1. We observe a consistent decrease of the three NC metrics as training progresses, suggesting the prevalent emergence of a law of clustering that is similar to NC. Results with action-based classification are similar and provided in Appendix G.2. Despite the poor test-time performance of the trained models using LSTM, similar neural collapse is observed, as shown in Appendix G.

**From pretrained to finetuned ResNet features** As observed in Chi et al. (2023); Kim et al. (2024); Team et al. (2024), _pretrained_ ResNet features deliver poor performance when used for control (such as planar pushing), and end-to-end finetuning is necessary to adapt the vision features (see Appendix D for concrete evidence). This is intriguing: why are pretrained visual features insufficient for control? We now have a plausible answer. ResNet is pretrained for image classification, and according to NC, the pretrained ResNet features are clustered according to the class labels in image classification, such as dogs and cats. However, under our NC observation in Fig. 6, ResNet features for planar pushing are clustered according to "control-oriented" classes that are related to the relative pose between the object and the target. Therefore, during finetuning, we conjecture the visual features have "re-clustered" according to the new task. We verify this conjecture in Appendix E.

Figure 6: Prevalent emergence of neural collapse in the visual representation space for planar pushing. (a) Three NC metrics w.r.t. training epochs using ResNet as the vision encoder and a diffusion model (DM) as the action decoder, with goal-based classification labels. (b) Three NC metrics w.r.t. training epochs using DINOv2 as the vision encoder and DM as the action decoder. All plots show the mean and standard deviation (shaded band) over three random seeds. The test scores are shown in Fig. 5(a)(b). Similar observations of NC hold when replacing DM with LSTM as the action decoder, as shown in Appendix G.

3.2.2 BLOCK STACKING

**Simulation setup** Designed by Mandlekar et al. (2023), block stacking is implemented as one of the manipulation tasks in the MimicGen dataset using the robosuite framework (Zhu et al., 2020), with MuJoCo (Todorov et al., 2012) as the backend.
To train the behavior cloning pipeline, we used the dataset "core stack_d0" provided by MimicGen, which contains $N = 1000$ demos as our expert demonstrations. This provides $M = 107{,}590$ training samples. We used ResNet18 as the vision encoder and a diffusion model as the action decoder.

**Results** Fig. 7 plots the three NC evaluation metrics w.r.t. training epochs for ResNet+DM with the goal-based classification strategy described in §3.1. We observe a consistent decrease of the three NC metrics as training progresses, suggesting the prevalent emergence of a law of clustering that is similar to NC in the block stacking manipulation task.

Figure 7: Prevalent emergence of neural collapse in the visual representation space for Block Stacking. From left to right: the three NC metrics w.r.t. training epochs, using ResNet as the vision encoder and a DM as the action decoder, with goal-based classification labels.

4 VISUAL REPRESENTATION PRETRAINING WITH NEURAL COLLAPSE

Our experiments in §3.2

Idea Generation Category:
1Cross-Domain Application
pPQPQ7Yd58
# ADARANKGRAD: ADAPTIVE GRADIENT RANK AND MOMENTS FOR MEMORY-EFFICIENT LLMS TRAINING AND FINE-TUNING

**Yehonathan Refael**1, **Jonathan Svirsky**2, **Boris Shustin**3, **Wasim Huleihel**1, **Ofir Lindenbaum**2
1 Tel Aviv University 2 Bar-Ilan University 3 University of Oxford

ABSTRACT

Training and fine-tuning large language models (LLMs) come with challenges related to memory and computational requirements due to the increasing size of the model weights and the optimizer states. Various techniques have been developed to tackle these challenges, such as low-rank adaptation (LoRA), which involves introducing a parallel trainable low-rank matrix to the fixed pre-trained weights at each layer. However, these methods often fall short compared to the full-rank weight training approach, as they restrict the parameter search to a low-rank subspace. This limitation can disrupt training dynamics and require a full-rank warm start to mitigate the impact. In this paper, we introduce a new method inspired by a phenomenon we formally prove: as training progresses, the rank of the estimated layer gradients gradually decreases and asymptotically approaches rank one. Leveraging this, our approach involves adaptively reducing the rank of the gradients during Adam optimization steps, using an efficient online rule for updating the low-rank projections. We further present a randomized SVD scheme for efficiently finding the projection matrix. Our technique enables full-parameter fine-tuning with adaptive low-rank gradient updates, significantly reducing overall memory requirements during training compared to state-of-the-art methods while improving model performance in both pretraining and fine-tuning. Finally, we provide a convergence analysis of our method and demonstrate its merits for training and fine-tuning language and biological foundation models. The code is available on [GitHub.](https://github.com/jsvir/AdaRankGrad)

1 INTRODUCTION

Large language models (LLMs) have gained significant attention due to their impressive ability to handle various tasks, such as dialogue-based systems and text completion. Both supervised fine-tuning and additional pre-training can further enhance their performance across tasks and domains. However, training these models presents significant computational and memory challenges. This is because performing the gradient updates requires storing billions of the LLM's trainable parameters along with the optimizer state (e.g., gradients and moments). In Adam, for example, the gradients and the estimated first and second moments triple the size of the model itself (Xu et al., 2024; Brown et al., 2022; Kim et al., 2023).

To tackle the challenges associated with LLM fine-tuning, researchers have developed various optimization techniques to reduce memory usage during model training. One key approach that has emerged is parameter-efficient fine-tuning (PEFT) (Han et al., 2024), which enables the adaptation of pre-trained language models (PLMs) to different tasks without the need to fine-tune all model parameters. A prominent method within PEFT is the Low-Rank Adaptation (LoRA) algorithm, introduced by Hu et al. (2021). LoRA reparameterizes a weight matrix $\mathbf{W} \in \mathbb{R}^{m\times n}$ into $\mathbf{W} = \mathbf{W}_0 + \mathbf{B}\mathbf{A}$, where $\mathbf{W}_0$ is a frozen full-rank matrix, and $\mathbf{B} \in \mathbb{R}^{m\times r}$ and $\mathbf{A} \in \mathbb{R}^{r\times n}$ are low-rank adaptors. Since $r \ll \min(m, n)$, the low-rank adaptors $\mathbf{A}$ and $\mathbf{B}$ require fewer trainable parameters, reducing memory usage.
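A minimal PyTorch sketch (ours, not the authors' code) of this reparameterization; the rank, the init scales, and the common $\alpha/r$ scaling are illustrative conventions from the LoRA literature rather than values from this paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """W = W0 + B @ A with W0 frozen; only A (r x n) and B (m x r) train."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze W0 (and its bias)
            p.requires_grad_(False)
        m, n = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, n) * 0.01)
        self.B = nn.Parameter(torch.zeros(m, r))  # zero init: starts exactly at W0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(256, 128), r=8)
y = layer(torch.randn(4, 256))                    # shape (4, 128)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)                                  # (m + n) * r = 3072, not m * n
```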
LoRA has been widely adopted for fine-tuning, with several 1 variants emerging, including Adaptive LoRA, which adapts the rank of the matrices during training (Wang et al., 2023), LoRA+, which uses different learning rates for the two matrices (Chen et al., 2023), and Sparse LoRA, which introduces sparsity to the matrices to further reduce computational cost (Xu et al., 2023). These methods have been demonstrated to enhance the efficiency and performance of LLM fine-tuning for various tasks. Despite its advantages, recent research has identified some limitations of low-rank reparameterization. For example, LoRA may not achieve the same performance levels as full-rank fine-tuning (Meng et al., 2024) and might require initial full-rank model training as a warm-up before effectively utilizing the low-rank subspace (Lialin et al., 2023b). These issues may stem from the fact that optimal weight matrices are not inherently low-rank or from changes in gradient training dynamics introduced by the reparameterization. In addition, LoRA keeps the tuning layer shapes in the base model static without dynamic adjustments. Another approach by He et al. (2022) dynamically adjusts tuning parameters during training, and (Zhang et al., 2023; Svirsky et al., 2024) gradually reduces tuning parameters. Until recently, the pre-training of large language models (LLMs) has been primarily limited to corporations and governments with substantial computational and memory resources. The significant challenges posed by the enormous memory and computational requirements made it impractical for the average home user. To illustrate this challenge, we take the following as an example. Training a Mistral 7B model from scratch poses substantial memory challenges. Given its 7 billion parameters, a single update step requires approximately 70 GB of memory: 14 GB for the model parameters, 42 GB for Adam optimizer states and gradients, and 14 GB for activations. Consequently, consumerlevel GPUs like the NVIDIA RTX 3090, which has 24 GB of VRAM, are inadequate for handling such a large-scale training task. To overcome this challenge, the study in (Zhao et al., 2024a) introduced a training strategy called GaLore that enables full-parameter learning while being more memory-efficient than traditional low-rank adaptation methods such as LoRA. The core idea behind GaLore is to exploit the slowly changing low-rank structure of the gradient **G** _∈_ R _[n][×][m]_ of the weight matrix **W**, rather than approximating the weight matrix itself as low-rank. GaLore significantly improved memory efficiency, reducing optimizer state memory usage by up to 65.5%. The following noticeable variant is Q-GaLore (Dettmers et al., 2023), which combines low-rank gradient projection with INT4 quantization to further reduce memory usage, and an additional parallel variant would be ReLoRA (Lialin et al., 2023b) employed in pre-training-by periodically updating **W** 0 using previously learned low-rank adaptors. We found that Galore to be suboptimal since it arbitrarily requires pre-defining a fixed low-rank size for the gradient projection/low-rank-approximation, while gradients rank gradually diminish during training down to rank one. Additionally, GaLore uses a fixed window size for the number of iterations between updates to the subspace onto which the gradients are projected, keeping this window size constant. 
Finally, GaLore does not transform (adjust) the first and second moments when the projection subspace is updated, which we empirically found to degrade the achievable performance. We propose an inner transformation scheme for the moments at every projection update.

**Our approach and theoretical results.** In this paper, we introduce a new training method aimed at optimizing memory efficiency in the training or fine-tuning of large language models (LLMs) while also improving convergence rates and overall performance. Our method leverages two key properties of LLMs. First, we present a novel theoretical finding that shows how the approximate rank of the LLM gradient matrices decreases progressively throughout the training process (even in basic SGD settings), asymptotically approaching rank one. Note that previous studies have only demonstrated an implicit upper bound on the rank of the gradient, which is far from tight. For example, Zhao et al. (2024a) showed that the rank of the gradient satisfies $\mathrm{rank}(\mathbf{G}^{n\times m}) < \min\{n, m\}/2$. Second, as highlighted in previous research (Gromov et al., 2024; Refael et al., 2024; Jaiswal et al., 2024a), the depth of a layer (how far it is from the input/output) and its architectural design contribute differently to the model's performance. Specifically, when perturbations from the same distribution are applied across various layers, the impact on accuracy varies significantly. This indicates that the optimization steps have less influence on the model's performance for certain layers, depending on their depth and architecture type. Noise in these layers has a smaller impact on the overall task, meaning that the gradients in these layers carry less important information. This results in naturally lower-rank update steps (gradients).

Building upon these two insights, and to address the limitations of the LoRA variants, we propose a method that enables full-parameter learning while dramatically reducing memory requirements and computational complexity through adaptive low-rank gradient projections during the Adam update step. For each gradient tensor $\mathbf{G}^j_t \in \mathbb{R}^{n\times m}$ at layer $j \in [L]$ and iteration $t$, AdaRankGrad efficiently identifies a unique set of significant projection directions (a subspace) $\mathbf{P}^j_t \in \mathbb{R}^{r^j_t \times n}$ along which the gradient $\mathbf{G}^j_t$ exhibits the largest changes, where $r^j_t$ is the lowest possible rank that still maintains a predefined information fraction (a given threshold) relative to the original, non-projected gradient. Practically, $\mathbf{P}^j_t \mathbf{G}^j_t$ is the low-rank projected gradient, and $\mathbf{P}^{j\top}_t \mathbf{P}^j_t \mathbf{G}^j_t$ is the best low-rank approximation of the gradient $\mathbf{G}^j_t$ that embodies the required fraction of its information. The projections $\mathbf{P}^j_t$ are adaptively updated throughout training (based on a convergence criterion for the gradients on the projected subspace), and their rank $r^j_t$ is dictated by preserving the given information threshold.

Figure 1: The illustration shows how AdaRankGrad is trained. First, the gradients $\mathbf{G}_t$ are projected into a 3D space (in this example), represented as $\hat{\mathbf{G}}^{3\times m}_t = \mathbf{P}^{3\times n}_t \mathbf{G}^{n\times m}_t$. As convergence occurs, the gradient's dimension decreases to a 2D space and then to a 1D space. This dimensionality reduction indicates convergence while efficiently using memory.
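The following is a schematic sketch (ours, deliberately simplified) of such a projected Adam update for a single layer: the moments live in the $r \times m$ subspace, and the step is mapped back through the projection. Unlike the actual method, the projection here is recomputed by a full SVD at every call, the subspace rank is fixed by the caller, and no moment transformation is applied when the subspace changes.

```python
import numpy as np

def subspace_adam_step(W, G, m1, m2, t, r, lr=1e-3,
                       beta1=0.9, beta2=0.999, eps=1e-8):
    """One low-rank-projected Adam update (simplified sketch).
    W, G: (n, m) weights and gradient; m1, m2: (r, m) moments kept in the
    projected subspace; r: projection rank (adaptive in the real method)."""
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    P = U[:, :r].T                       # (r, n) projection onto top-r directions
    G_low = P @ G                        # (r, m) projected gradient
    m1 = beta1 * m1 + (1 - beta1) * G_low
    m2 = beta2 * m2 + (1 - beta2) * G_low**2
    m1_hat, m2_hat = m1 / (1 - beta1**t), m2 / (1 - beta2**t)
    W = W - lr * (P.T @ (m1_hat / (np.sqrt(m2_hat) + eps)))  # back to (n, m)
    return W, m1, m2

# Illustrative usage: rank-2 updates of an 8 x 6 weight matrix.
rng = np.random.default_rng(0)
W, m1, m2 = rng.standard_normal((8, 6)), np.zeros((2, 6)), np.zeros((2, 6))
for t in range(1, 4):
    W, m1, m2 = subspace_adam_step(W, rng.standard_normal((8, 6)), m1, m2, t, r=2)
```

The optimizer state here is $r \times m$ per moment plus the $r \times n$ projection, matching the $nr_{\text{adap}} + 2mr_{\text{adap}}$ entry in Table 1 below.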
The method ensures the following. (1) It determines the optimal projection dimension for each layer's gradient tensor independently, adjusting it dynamically throughout training. This rank adjustment leverages the property we prove, namely that the effective dimensionality of the full gradients gradually decreases over time, allowing updates to be performed in a lower-dimensional projection space and thereby reducing memory usage. (2) The projection matrix for each layer's gradients is updated based on a convergence criterion within the respective subspace. This ensures updates occur precisely when needed, avoiding premature or delayed transitions between subspaces and resulting in faster overall convergence.

Table 1: Comparison between AdaRankGrad, GaLore, and LoRA. Assume $\mathbf{W} \in \mathbb{R}^{n\times m}$ ($n \geq m$), constant rank $r$, and adaptive rank $r_{\text{adap}}$ (with initial rank $r_{\text{init}} = r$).

| | AdaRankGrad | GaLore | LoRA |
|---|---|---|---|
| Weights | $nm$ | $nm$ | $nm + nr + mr$ |
| Optim states ($r_{\text{adap}} < r$) | $nr_{\text{adap}} + 2mr_{\text{adap}}$ | $nr + 2mr$ | $2nr + 2mr$ |
| Multi-subspace | ✓ | ✓ | ✗ |
| Adaptive subspace dimension | ✓ | ✗ | ✗ |
| Adaptive subspace updates | ✓ | ✗ | ✗ |
| Pre-training | ✓ | ✓ | ✗ |
| Fine-tuning | ✓ | ✓ | ✓ |

2 RELATED WORK AND BACKGROUND

**Memory-efficient optimizers.** Memory-efficient optimization has been a recent focus of research. Multiple studies have aimed to reduce the memory requirements of gradient statistics in adaptive optimization algorithms (Shazeer & Stern, 2018; Anil et al., 2019). One common approach is quantization, which helps decrease the memory footprint of optimizer states (Li et al., 2024). Additionally, recent advancements have suggested reducing the memory used by weight gradients by integrating the backward operation with the optimizer update (Lv et al., 2023a;b). This characteristic has been leveraged to reduce memory usage during training (Gooneratne et al., 2020; Huang et al., 2023; Modoranu et al., 2023).

**Low-rank gradient optimization.** The phenomenon of low-rank gradients arises naturally during the training of neural networks, a subject that has been examined extensively, both theoretically and practically, e.g., Zhao et al. (2022); Cosson et al. (2023); Yang et al. (2023). This characteristic low-rank structure of the gradients has been leveraged to reduce memory usage during training (Gooneratne et al., 2020; Huang et al., 2023; Modoranu et al., 2023), and results in reduced computational complexity compared to standard gradient descent methods.

**Adam optimization.** Arguably, the most popular optimization methods used for training large language models (LLMs) are the _Adam optimizer_ (Kingma & Ba, 2017) and its variant _AdamW_ (Loshchilov & Hutter, 2019), which incorporates weight decay for regularization. However, it is well-established that Adam has higher memory complexity than alternative optimizers. To illustrate this, let us briefly review how the Adam algorithm operates. First, we establish some notation. Consider a neural network denoted $\Phi(\cdot\,; \boldsymbol{\theta})$, which consists of $L$ layers and is parameterized by $\boldsymbol{\theta} \triangleq \big(\mathbf{W}_1^{d_1\times d_0}, \dots, \mathbf{W}_{L-1}^{d_{L-1}\times d_{L-2}}, \mathbf{W}_L^{d_L\times d_{L-1}}\big)$.
Here, $\mathbf{W}_i$ represents the weight tensor parameters associated with the $i$-th layer, for $i \in [L]$. In the following, let $t \in \mathbb{N}$ index the $t$-th step of the Adam optimization algorithm. Specifically, at time step $t$, $\mathbf{G}_t$ denotes the backpropagated gradient matrix, i.e., $\nabla\Phi(\boldsymbol{\theta}_{t-1})$. The exponentially weighted moving averages of the first and second moments are denoted by $\mathbf{M}_t$ and $\mathbf{V}_t$, respectively, with their bias-corrected counterparts $\hat{\mathbf{M}}_t$ and $\hat{\mathbf{V}}_t$. The AdamW optimizer updates the model parameters at step $t$ according to the rule

$$\boldsymbol{\theta}_t = \boldsymbol{\theta}_{t-1} - \alpha\left(\frac{\hat{\mathbf{M}}_t}{\sqrt{\hat{\mathbf{V}}_t} + \epsilon} + \lambda\,\boldsymbol{\theta}_{t-1}\right),$$

where $\lambda \geq 0$ is the weight decay rate (for Adam, $\lambda = 0$), and all operations are performed element-wise. In this equation, $\beta_1$ and $\beta_2$ control the decay rates of the moving averages of the moments, $\alpha$ is the learning rate, and $\epsilon$ is a small constant to avoid division by zero. Notably, since Adam/W requires storing both $\mathbf{M}_t$ and $\mathbf{V}_t$ at each time step, it incurs an additional memory footprint of $2mn$. While existing approaches (Zhao et al., 2024b; Vyas et al., 2024; Okewu et al., 2020) focus on low-rank approximations of the first and second moments with the goal of reducing memory requirements, we propose to approximate the gradients by a low-rank factorization. Consequently, in our scheme the moments are integrally constrained onto this reduced dimension, and we gain both benefits.

3 METHOD AND MAIN RESULTS

3.1 THEORETICAL MOTIVATION: GRADUAL GRADIENT RANK VANISHING

As mentioned in the introduction, several recent empirical results (e.g., Jaiswal et al. (2024b); Zhao et al. (2024a); Lialin et al. (2023a)) demonstrate that the gradients when training or fine-tuning LLMs are "approximately low-rank". This phenomenon can be observed in Figure 2, where it is evident that the squared norms of the gradient's singular values decay to zero exponentially fast.

Figure 2: The figure illustrates the exponential decay of eigenvalues in the MLP layer's gradient at the first iteration of fine-tuning the RoBERTa-Base (Liu, 2019) model on the MRPC task from GLUE (Wang et al., 2019). Notably, the red line indicates that 50% of the gradient information (in terms of squared-norm ratio) is captured by the first eigenvalue, while the green line shows that 90% is contained within the first two eigenvalues.

As hinted above, this phenomenon holds only in an approximate sense; roughly speaking, only a very few eigenvalues hold almost all the information captured by the gradient. Accordingly, a low-rank matrix approximates the underlying gradient up to a negligible approximation error. The practical implication is that while the weight matrices are not necessarily low-rank, training certain high-rank layers with low-rank gradient-based updates is possible; a minimal numerical sketch of this "information fraction" view follows, and Definition 1 below makes the discussion precise.
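A minimal sketch (ours) of the quantities behind Figure 2: the smallest rank $r$ whose leading singular values capture a given fraction of $\|\mathbf{G}\|_F^2$, together with the corresponding truncated-SVD approximation; the synthetic spectrum is an illustrative assumption.

```python
import numpy as np

def info_rank(G, gamma=0.9):
    """Smallest rank r whose top-r singular values capture a fraction gamma
    of the squared Frobenius norm of G, plus the rank-r truncated SVD."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, gamma) + 1)
    G_approx = (U[:, :r] * s[:r]) @ Vt[:r]
    return r, G_approx

# Illustrative: a fast-decaying spectrum is well captured at a tiny rank.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((64, 64)))
V, _ = np.linalg.qr(rng.standard_normal((32, 32)))
s = 2.0 ** -np.arange(32)              # exponentially decaying singular values
G = U[:, :32] @ np.diag(s) @ V.T
r, G_r = info_rank(G, gamma=0.9)
print(r, np.linalg.norm(G - G_r) / np.linalg.norm(G))
```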
**Definition 1** (Approximate low-rank matrix)**.** _A matrix_ $\mathbf{A} \in \mathbb{R}^{n\times m}$ _is called_ $(\eta, \varepsilon)$_-approximately rank-_$r$ _if there exist_ $\eta \in [0, 1)$, $\varepsilon > 0$, _and a matrix_ $\mathbf{A}_{\text{app},r} \in \mathbb{R}^{n\times m}$ _with_ $\mathrm{rank}(\mathbf{A}_{\text{app},r}) = r$ _and_ $r < \min\{n, m\}$_, such that_

$$\|\mathbf{A} - \mathbf{A}_{\text{app},r}\|_F \leq \eta \cdot \|\mathbf{A}\|_F + \varepsilon. \qquad (1)$$

As it turns out, it can be shown (see, e.g., (Golub & Van Loan, 2013)) that the optimal $\mathbf{A}_{\text{app},r}$ minimizing the approximation error on the left-hand side of equation 1 can be obtained by applying an SVD to $\mathbf{A}$ and retaining only the top $r$ singular values and their corresponding singular vectors. Mathematically, $\mathbf{A}_{\text{app},r} = \sum_{i=1}^{r} \sigma_i \mathbf{u}_i \mathbf{v}_i^\top$, where $\{\sigma_i\}_i$ are the singular values of $\mathbf{A}$, and $\{\mathbf{u}_i\}_i$ and $\{\mathbf{v}_i\}_i$ are the corresponding left and right singular vectors, respectively. The approximation error is in turn given by $\|\mathbf{A} - \mathbf{A}_{\text{app},r}\|_F^2 = \sum_{i=r+1}^{\min\{m,n\}} \sigma_i^2$. This construction gives an $(\eta_{\mathbf{A}}, 0)$-approximately rank-$r$ matrix with the minimal $\eta_{\mathbf{A}}$ possible.

Recently, in (Zhao et al., 2024a), the structure of the gradient was studied for a wide family of nonlinear networks known as "reversible networks" (Tian et al., 2021), [1] defined as follows.

**Definition 2** (Reversibility (Tian et al., 2021))**.** _A layer_ $\ell$ _is reversible if there is a_ $\mathbf{G}_\ell(\mathbf{x}; \boldsymbol{\theta}) \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ _such that the pre-activation at layer_ $\ell$ _satisfies_ $\tilde{\mathbf{f}}_\ell(\mathbf{x}; \boldsymbol{\theta}) = \mathbf{G}_\ell(\mathbf{x}; \boldsymbol{\theta})\,\tilde{\mathbf{f}}_{\ell-1}(\mathbf{x}; \boldsymbol{\theta})$ _and the backpropagated gradient after the nonlinearity satisfies_ $\tilde{\mathbf{g}}_{\ell-1} = \mathbf{G}_\ell^\top(\mathbf{x}; \boldsymbol{\theta})\,\mathbf{P}_\ell^\top(\mathbf{x}; \boldsymbol{\theta})\,\tilde{\mathbf{g}}_\ell$ _for some matrix_ $\mathbf{P}_\ell(\mathbf{x}; \boldsymbol{\theta}) \in \mathbb{R}^{n_\ell \times n_\ell}$_. A network is reversible if all its layers are._

For simplicity of notation, we use $\mathbf{G}^\ell_t$ to denote $[\mathbf{G}^\ell(\mathbf{x}; \boldsymbol{\theta})]_t$, where $t \in \mathbb{N}$ is the iteration index in the optimization process. Furthermore, when it is clear from the context, we omit the layer index $\ell$ from our notation. Assuming reversibility and an SGD weight update (i.e., $\mathbf{W}_t = \mathbf{W}_{t-1} + \alpha\mathbf{G}_{t-1}$), it is shown in Zhao et al. (2024a) that for both the $\ell_2$ and cross-entropy losses, the gradient has the structure $\mathbf{G} = \frac{1}{N}\sum_{i=1}^{N}(\mathbf{A}_i - \mathbf{B}_i\mathbf{W}\mathbf{C}_i)$, where $N$ is the batch size, $\{\mathbf{A}_i\}_{i=1}^N$ are input-dependent matrices, and $\{\mathbf{B}_i, \mathbf{C}_i\}_{i=1}^N$ are certain positive semi-definite (PSD) matrices.
Recently in (Zhao et al., 2024a), the structure of the gradient was studied for a wide family of nonlinear networks known as "reversible networks" (Tian et al., 2021),[1] defined as follows.

**Definition 2** (Reversibility (Tian et al., 2021))**.** *A layer* $\ell$ *is reversible if there is a* $\mathbf{G}_\ell(\mathbf{x}; \boldsymbol{\theta}) \in \mathbb{R}^{n_\ell \times n_{\ell-1}}$ *so that the pre-activation at layer* $\ell$ *satisfies* $\tilde{\mathbf{f}}_\ell(\mathbf{x}; \boldsymbol{\theta}) = \mathbf{G}_\ell(\mathbf{x}; \boldsymbol{\theta})\, \tilde{\mathbf{f}}_{\ell-1}(\mathbf{x}; \boldsymbol{\theta})$ *and the backpropagated gradient after the nonlinearity satisfies* $\tilde{\mathbf{g}}_{\ell-1} = \mathbf{G}_\ell^\top(\mathbf{x}; \boldsymbol{\theta})\, \mathbf{P}_\ell^\top(\mathbf{x}; \boldsymbol{\theta})\, \tilde{\mathbf{g}}_\ell$, *for some matrix* $\mathbf{P}_\ell(\mathbf{x}; \boldsymbol{\theta}) \in \mathbb{R}^{n_\ell \times n_\ell}$. *A network is reversible if all its layers are.*

[1] It can be shown that this family includes many different kinds of layers, such as linear layers (MLP and Conv.) and (leaky) ReLU non-linearities.

For simplicity of notation, we use $\mathbf{G}_t^\ell$ to denote $[\mathbf{G}^\ell(\mathbf{x}; \boldsymbol{\theta})]_t$, where $t \in \mathbb{N}$ is the iteration index in the optimization process. Furthermore, when it is clear from the context, we omit the layer index $\ell$ from our notation. Assuming reversibility and the SGD weight update (i.e., $\mathbf{W}_t = \mathbf{W}_{t-1} + \alpha \mathbf{G}_{t-1}$), it is shown in Zhao et al. (2024a) that for both $\ell_2$ and cross-entropy losses, the gradient is of the form $\mathbf{G} = \frac{1}{N} \sum_{i=1}^{N} (\mathbf{A}_i - \mathbf{B}_i \mathbf{W} \mathbf{C}_i)$, where $N$ is the batch size, $\{\mathbf{A}_i\}_{i=1}^{N}$ are input-dependent matrices, and $\{\mathbf{B}_i, \mathbf{C}_i\}_{i=1}^{N}$ are certain positive semi-definite (PSD) matrices. Furthermore, it was proven that if the gradient $\mathbf{G}_t$ has the above structure for all $t \ge t_0$, for some $t_0 \in \mathbb{N}$, then the stable rank $\mathrm{sr}(\mathbf{G}_t) \triangleq \|\mathbf{G}_t\|_F^2 / \|\mathbf{G}_t\|_2^2$ satisfies,

$$\mathrm{sr}(\mathbf{G}_t) \;\le\; \mathrm{sr}\big(\mathbf{G}_{t_0}^{\parallel}\big) + \left( \frac{1 - \eta\lambda_2}{1 - \eta\lambda_1} \right)^{2(t - t_0)} \frac{\big\|\mathbf{G}_{t_0} - \mathbf{G}_{t_0}^{\parallel}\big\|_F^2}{\big\|\mathbf{G}_{t_0}^{\parallel}\big\|_2^2},$$

where $\mathbf{S} \triangleq \frac{1}{N} \sum_{i=1}^{N} \mathbf{C}_i \otimes \mathbf{B}_i$, $\lambda_1 < \lambda_2$ denote its two smallest distinct eigenvalues, and $\mathbf{G}_{t_0}^{\parallel}$ is the projection of $\mathbf{G}_{t_0}$ onto the minimal eigenspace $V_1$ of $\mathbf{S}$ that corresponds to $\lambda_1$. Accordingly, as $t \to \infty$, we get that the final stable rank is upper bounded by $\mathrm{sr}(\mathbf{G}_{t_0}^{\parallel})$. Under the same gradient-structure assumption and for the vanilla setting of the SGD weight update (Battash et al., 2024), we were able to prove the following stronger result: the approximate stable rank of the gradients approaches one as the training process progresses. To state this result, we need a few notations. Let $\mathbf{G}_t = \mathbf{U}_t \boldsymbol{\Sigma}_t \mathbf{V}_t^\top$ be the SVD decomposition of $\mathbf{G}_t$, and let $\mathbf{P}_t(l, r) = \mathbf{U}_t^{[:, l:r]} \mathbf{U}_t^{[:, l:r]\top}$ be the corresponding projection matrix. When clear from the context, we omit the index $l$ and use $\mathbf{P}_t(r) \equiv \mathbf{P}_t(l{=}1, r)$. We have the following result.

**Lemma 1** (Asymptotically rank-one)**.** *If a neural network is trained using vanilla SGD, then the following holds for the gradient of a reversible layer at iteration* $t$:

$$\kappa(t) \;\triangleq\; \frac{\|\mathbf{G}_t - \mathbf{P}_t(1)\,\mathbf{G}_t\|_F^2}{\|\mathbf{G}_t\|_F^2} \;\le\; O\big(C^{-t}\big),$$

*for some constant* $C > 1$.

The above result implies that $\mathbf{G}_t$ approaches its rank-one approximation $\mathbf{P}_t(1)\mathbf{G}_t$ as the iteration number increases; namely, $\mathbf{G}_t$ becomes rank-one. The proof of Lemma 1 is relegated to Section B. Finally, in Fig. 3 and Fig. 4, we demonstrate that for a large language model (RoBERTa-base, Liu (2019)), which also contains non-reversible layers, the rank decays as a function of the number of update steps in a fine-tuning task.

Figure 3: The figure presents the effective rank (see Section 4) measured after every 100 update steps on the RTE dataset, from GLUE (Wang et al., 2019).
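The quantities in the bound and in Lemma 1 can be monitored directly from singular values; a minimal NumPy sketch (ours, intended only for diagnostics):

```python
import numpy as np

def stable_rank(G):
    """sr(G) = ||G||_F^2 / ||G||_2^2."""
    s = np.linalg.svd(G, compute_uv=False)
    return float((s**2).sum() / s[0]**2)

def kappa(G, r=1):
    """||G - P(r) G||_F^2 / ||G||_F^2: the squared-norm fraction missed
    by the top-r singular directions (the Lemma 1 quantity for r=1)."""
    s = np.linalg.svd(G, compute_uv=False)
    return float((s[r:]**2).sum() / (s**2).sum())
```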
Idea Generation Category: Direct Enhancement (label 2)
LvNROciCne
# K NOWLEDGE L OCALIZATION : M ISSION N OT A CCOMPLISHED ? E NTER Q UERY L OCALIZATION !

**Yuheng Chen**1,2, **Pengfei Cao**1,2, **Yubo Chen**1,2, **Kang Liu**1,2, **Jun Zhao**1,2∗

1 The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
chenyuheng2022@ia.ac.cn, {pengfei.cao,yubo.chen,kliu,jzhao}@nlpr.ia.ac.cn

A BSTRACT

Large language models (LLMs) store extensive factual knowledge, but the mechanisms behind how they store and express this knowledge remain unclear. The Knowledge Neuron (KN) thesis is a prominent theory for explaining these mechanisms. This theory is based on the **Knowledge Localization** (KL) assumption, which suggests that a fact can be localized to a few knowledge storage units, namely knowledge neurons. However, this assumption has two limitations: first, it may be too rigid regarding knowledge storage, and second, it neglects the role of the attention module in knowledge expression. In this paper, we first re-examine the KL assumption and demonstrate that its limitations do indeed exist. To address these, we then present two new findings, each targeting one of the limitations: one focusing on knowledge storage and the other on knowledge expression. We summarize these findings as the **Query Localization** (QL) assumption and argue that the KL assumption can be viewed as a simplification of the QL assumption. Based on the QL assumption, we further propose the Consistency-Aware KN modification method, which improves the performance of knowledge modification, further validating our new assumption. We conduct 39 sets of experiments, along with additional visualization experiments, to rigorously confirm our conclusions. Code is available [here](https://github.com/heng840/KnowledgeLocalization).

1 I NTRODUCTION

Large language models (LLMs) are believed to store extensive factual knowledge (MetaAI, 2024; Touvron et al., 2023); however, the mechanisms behind this storage and expression have not been well explained. The Knowledge Neurons (KN) thesis (Dai et al., 2022; Meng et al., 2022; 2023; Niu et al., 2024; Chen et al., 2024b;a) is a prominent theory aiming to explain these mechanisms. It proposes that LLMs recall facts through their multi-layer perceptron (MLP) weights, referring to the units responsible for storing knowledge as knowledge neurons (KNs). Based on this, KN-inspired model editing methods have been proposed (Meng et al., 2022; 2023), which first localize knowledge neurons and then modify them to update knowledge, providing further support for the KN thesis. Beyond these, many works have adopted the KN theory and applied it to study downstream tasks (Chen et al., 2024b;a; Wang et al., 2024c), making its theoretical foundation crucial. In fact, the KN thesis is based on the knowledge localization (**KL**) assumption: a piece of factual knowledge can be localized to several knowledge neurons. However, this assumption has two limitations. (1) In terms of knowledge storage, if we refer to different rephrased queries expressing the same fact as *neighbor queries*, and the corresponding knowledge neurons as *neighbor KNs*, then the KL assumption implies that neighbor KNs are consistent.
However, as Figure 1 illustrates, while the neighbor KNs of Fact 1 exhibit high consistency, those of Fact 2 show low consistency, indicating that the KL assumption does not hold universally. We denote facts that satisfy the KL assumption as **Consistent Knowledge** ($K_C$, e.g., Fact 1), while facts that violate the KL assumption are categorized as **Inconsistent Knowledge** ($K_I$, e.g., Fact 2). Previous research and the KL assumption essentially assume that all factual knowledge belongs to $K_C$. (2) In terms of knowledge expression, the KL assumption overlooks the attention module, yet there must be interconnections between the different modules in LLMs. Similarly, since KL only considers the role of the MLP module in storing knowledge, it does not take into account how the model selects and expresses this knowledge to answer queries. Therefore, we re-examine the KL assumption and raise questions Q1 and Q2:

∗ Corresponding authors.

Figure 1: Heatmaps of the neuron activation values, with darker colors indicating higher values (these can be viewed as knowledge neurons). The left two heatmaps show neuron activations for two neighbor queries of ⟨Suleiman I, position, Shah⟩ (Fact 1), while the right two correspond to ⟨Christoph Ahlhaus, position, mayor⟩ (Fact 2).

**Q1**: Does the KL assumption hold for all facts? If not, is $K_I$ widely prevalent? (§2)

**A1**: We investigate the knowledge localization assumption and find the universal presence of $K_I$, which violates this assumption. (1) **Statistical Evidence.** As shown in Figure 1, if the knowledge neurons corresponding to a fact exhibit low consistency across its neighbor queries, the fact does not conform to the KL assumption. Based on this observation, we propose a metric to evaluate the consistency among neighbor KNs, and the statistical results show that a significant proportion of facts belong to $K_I$. For example, in LLaMA3-8b, this proportion reaches 77%. This directly proves that facts that do not conform to the KL assumption are widespread. (2) **Modification-Based Evidence.** We categorize facts into $K_C$ and $K_I$ based on their consistency scores to perform knowledge erasure and updates. We find that for facts in $K_I$, editing the KNs corresponding to the query itself does not generalize well to neighbor queries. This indirectly indicates that the neighbor KNs for $K_I$ are inconsistent. In summary, the answer to Q1 is: the KL assumption is not always valid, and $K_I$ is widely prevalent.

Figure 2: The Query Localization assumption.

**Q2**: Since the KL assumption has two limitations, what is a more realistic assumption? (§3)

**A2**: Our two findings address the two limitations of the knowledge localization assumption.
(1) **Query-KN Mapping**: In terms of knowledge storage, the KL assumption implies that localization results are static and universally applicable across all queries. However, our findings indicate that for facts in $K_I$, localization results are influenced by the query context rather than being fixed. In other words, knowledge neurons are associated with the query rather than the fact. For instance, Figure 1 shows that different neighbor queries for Fact 2 correspond to different knowledge neurons. Similarly, in Figure 2, neighbor queries $q_1$ and $q_2$ are associated with distinct KNs ($KN_1$ and $KN_2$). (2) **Dynamic KN Selection**: In terms of knowledge expression, the KL assumption overlooks the role of the attention module. Our findings show that LLMs rely on the attention module to select appropriate KNs to answer a specific query. For example, in Figure 2, neighbor queries $q_1$ and $q_2$ are associated with different KNs. Then, when $q_1$ is input, $KN_1$ is activated and selected to provide the answer "Beijing", while the activation value of $KN_2$ remains low, preventing it from being selected. Based on these insights, we propose the **Query Localization (QL)** assumption, which consists of query-KN mapping and dynamic KN selection. To further demonstrate the validity of our assumption, we apply it in model editing experiments. We propose the Consistency-Aware KN modification method, which leverages the QL assumption to improve knowledge modification, achieving 8% and 9% performance improvements over two baselines in the "Erasure" setting on LLaMA3-8b, further validating the QL assumption. In summary, the answer to Q2 is: a more realistic assumption is the Query Localization assumption. Our contributions are summarized as follows:

- We conduct the first in-depth exploration of the Knowledge Localization assumption, a foundational and widely accepted assumption. We classify facts into $K_C$ and $K_I$, and demonstrate that $K_I$, i.e., facts that do not adhere to this assumption, are widely present.
- We propose a more realistic Query Localization assumption, which includes two parts: query-KN mapping and dynamic KN selection. This addresses the limitations of the KL assumption in both knowledge storage and expression.
- We apply the QL assumption to improve knowledge modification methods, further validating the soundness of the QL assumption.

2 E XPLORING K NOWLEDGE L OCALIZATION L IMITATIONS

This section investigates Q1 and demonstrates the existence of Inconsistent Knowledge ($K_I$), which does not satisfy the knowledge localization (KL) assumption. Our experiments adopt GPT-2 (Radford et al., 2019), LLaMA2-7b (Touvron et al., 2023), and LLaMA3-8b (MetaAI, 2024), representing a range of sizes of popular auto-regressive models. This allows us to assess the scalability of our methods and conclusions. Consistent with other knowledge localization methods (Dai et al., 2022; Chen et al., 2024a), we employ the fill-in-the-blank cloze task (Petroni et al., 2019) to assess whether a pretrained model knows a fact. Regarding the dataset, we employ the ParaRel dataset (Elazar et al., 2021); for details of the dataset, see Table 5 in Appendix B.

2.1 S TATISTICAL E VIDENCE FOR THE E XISTENCE OF I NCONSISTENT K NOWLEDGE

In this subsection, we show that the consistency of the knowledge neurons of some facts is very low, which indicates that these facts do not conform to the knowledge localization assumption.
**Consistency Analysis** According to the KL assumption, neighbor queries should be localized to the same KNs, with any deviations primarily attributable to the localization method itself. To assess this, we calculate the corresponding KNs for each query and introduce the KN-Consistency Score (CS) metric. Given a fact with $k$ neighbor queries $\{q_1, \ldots, q_k\}$, we calculate its CS as follows:

$$CS_{\mathrm{orig}} = \frac{\big|\bigcap_{i=1}^{k} N_i\big|}{\big|\bigcup_{i=1}^{k} N_i\big|} \;\;\overset{\text{relaxation}}{\Longrightarrow}\;\; CS = \frac{\big|\{\, n \;:\; \sum_{i=1}^{k} \mathbf{1}_{n \in N_i} > 1 \,\}\big|}{\big|\bigcup_{i=1}^{k} N_i\big|}, \quad (1)$$

where $N_i$ is the set of knowledge neurons corresponding to query $q_i$ and $n$ denotes a knowledge neuron. $\mathbf{1}_{n \in N_i}$ is an indicator function, which equals 1 if $n$ belongs to $N_i$; thus, $\sum_{i=1}^{k} \mathbf{1}_{n \in N_i}$ is the number of times $n$ appears across all KN sets (i.e., the $N_i$). In the original metric, $CS_{\mathrm{orig}}$, the numerator is the intersection of all $N_i$, meaning a KN must appear in all sets to be counted. After relaxation ($CS$), the numerator includes any KN that appears in more than one of the $N_i$ sets, allowing it to be counted even if it is not present in every set. This relaxation reduces the impact of localization errors and provides stronger evidence for the existence of $K_I$.

Then, we use a thresholding technique based on $CS$, classifying facts above a certain threshold as $K_C$ (consistent knowledge) and those below it as $K_I$ (inconsistent knowledge). We consider two types of thresholds: a static threshold and Otsu's threshold[1]. While Otsu's threshold aims to maximize the between-class variance and effectively separate two classes of data, the static threshold reflects the inherent nature of a fact's adherence (or non-adherence) to the KL assumption. See Table 4 in Appendix A for the specific thresholds. To ensure our findings are not method-specific, we compare three advanced knowledge localization methods (Dai et al., 2022; Enguehard, 2023; Chen et al., 2024a), with minor modifications for task adaptation, primarily to the method of Enguehard (2023) (detailed in Appendix D). Finally, we apply Welch's t-test[2] to confirm the statistical significance of the difference between $K_C$ and $K_I$.

[1] https://en.wikipedia.org/wiki/Otsu%27s_method
[2] https://en.wikipedia.org/wiki/Welch%27s_t-test

| Model | T | Method | $R_C$ | $CS_C$ | $R_I$ | $CS_I$ | $t$ | $U_I$ |
|---|---|---|---|---|---|---|---|---|
| GPT-2 | St | Dai et al. (2022) | 0.56 | 0.21 | **0.44** | 0.03 | 236 | **0.42** |
| GPT-2 | St | Enguehard (2023) | 0.54 | 0.23 | **0.46** | 0.03 | 235 | |
| GPT-2 | St | Chen et al. (2024a) | 0.53 | 0.25 | **0.47** | 0.03 | 230 | |
| GPT-2 | Ot | Dai et al. (2022) | 0.41 | 0.24 | **0.59** | 0.06 | 223 | **0.53** |
| GPT-2 | Ot | Enguehard (2023) | 0.44 | 0.29 | **0.55** | 0.05 | 219 | |
| GPT-2 | Ot | Chen et al. (2024a) | 0.40 | 0.29 | **0.60** | 0.06 | 221 | |
| LLaMA2-7b | St | Dai et al. (2022) | 0.40 | 0.21 | **0.60** | 0.04 | 158 | **0.55** |
| LLaMA2-7b | St | Enguehard (2023) | 0.39 | 0.20 | **0.61** | 0.04 | 150 | |
| LLaMA2-7b | St | Chen et al. (2024a) | 0.40 | 0.20 | **0.60** | 0.04 | 160 | |
| LLaMA2-7b | Ot | Dai et al. (2022) | 0.21 | 0.28 | **0.79** | 0.062 | 152 | **0.70** |
| LLaMA2-7b | Ot | Enguehard (2023) | 0.20 | 0.25 | **0.80** | 0.07 | 158 | |
| LLaMA2-7b | Ot | Chen et al. (2024a) | 0.24 | 0.30 | **0.76** | 0.06 | 132 | |
| LLaMA3-8b | St | Dai et al. (2022) | 0.16 | 0.16 | **0.84** | 0.03 | 114 | **0.77** |
| LLaMA3-8b | St | Enguehard (2023) | 0.15 | 0.18 | **0.85** | 0.03 | 105 | |
| LLaMA3-8b | St | Chen et al. (2024a) | 0.18 | 0.19 | **0.82** | 0.03 | 123 | |
| LLaMA3-8b | Ot | Dai et al. (2022) | 0.23 | 0.14 | **0.77** | 0.03 | 128 | **0.70** |
| LLaMA3-8b | Ot | Enguehard (2023) | 0.21 | 0.15 | **0.79** | 0.03 | 107 | |
| LLaMA3-8b | Ot | Chen et al. (2024a) | 0.24 | 0.16 | **0.76** | 0.03 | 130 | |

Table 1: Overall results of the Consistency Analysis. T denotes the threshold type, static (St) or Otsu (Ot); $U_I$ is reported once per (model, T) pair since it aggregates all three methods. The $t$-statistics and $p$-values are from Welch's t-test, with $p < 1e{-}6$ in all cases.

Figure 3: Violin plot for the Consistency Analysis. The x-axis shows the fact relations, and the y-axis the $CS$ value. The width of each violin indicates the density of data at different $CS$ values. We select a threshold of 0.3 as an example; facts below this threshold are classified as $K_I$.

Regarding the evaluation metrics, we calculate the proportions of $K_C$ and $K_I$, denoted $R_C$ and $R_I$, respectively. We also compute the average $CS$ values of these facts, denoted $CS_C$ and $CS_I$. Furthermore, we calculate the proportion of facts classified as $K_I$ by all three methods, denoted $U_I$ (i.e., the union of $K_I$).

**Findings** Figure 3 classifies facts based on their respective relations (e.g., P39 represents the "position" relation), illustrating the distribution of $CS$ when utilizing the knowledge localization method proposed by Dai et al. (2022). Violin plots for the other methods can be found in Figures 7 and 8 in Appendix C. Together, Figure 3 and Table 1 summarize the overall results. (1) Inconsistent knowledge ($K_I$) is widely present across different knowledge localization methods, LLMs, and relations. In Table 1, the consistently high ratio of $K_I$ ($R_I$) and low $CS$ values ($CS_I$) demonstrate that the proportion of facts categorized as $K_I$ is substantial across different methods, with $U_I$ showcasing high classification agreement among all three knowledge localization methods. For LLaMA3, using a static threshold, 77% of the facts are consistently classified into $K_I$. Moreover, in Figure 3, using an example threshold of 0.3, the majority of facts across various relations fall below this threshold and thus belong to $K_I$. (2) Statistical tests reveal a significant difference between $K_C$ and $K_I$. For instance, using the static threshold (St) for LLaMA3-8b, the recorded $t$-statistic is 123, with a $p$-value less than $1e{-}6$. These results reflect a very strong distinction, as the high $t$-statistic and extremely low $p$-value show that the difference is highly reliable. Combining (1) and (2), we conclude that **inconsistent knowledge ($K_I$) is prevalent**. Beyond statistical analysis, we further validate the existence of $K_I$ through knowledge modification experiments.
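For reference, a small sketch of the metric in equation (1) over explicit KN sets; the function and toy data are ours:

```python
from functools import reduce

def consistency_scores(kn_sets):
    """kn_sets: list of k sets of knowledge-neuron ids, one per neighbor
    query. Returns (CS_orig, relaxed CS) from equation (1)."""
    union = set().union(*kn_sets)
    inter = reduce(set.intersection, (set(s) for s in kn_sets))
    counts = {}
    for s in kn_sets:
        for n in s:
            counts[n] = counts.get(n, 0) + 1
    shared = {n for n, c in counts.items() if c > 1}  # appears in > 1 set
    return len(inter) / len(union), len(shared) / len(union)

# toy example: three neighbor queries with partially overlapping KNs
print(consistency_scores([{1, 2, 3}, {2, 3, 4}, {3, 5}]))  # (0.2, 0.4)
```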
2.2 M ODIFICATION -B ASED E VIDENCE FOR THE E XISTENCE OF I NCONSISTENT K NOWLEDGE

In this subsection, we conduct knowledge modification experiments to demonstrate the existence of inconsistent knowledge ($K_I$). We use a static threshold to classify facts into $K_C$ and $K_I$.

**Experimental setups** Let ⟨$s$, $r$, $o$⟩ denote a fact consisting of a subject ($s$), relation ($r$), and object ($o$). We perform two types of knowledge modification: Erasure and Update. Given a fact with $k$ queries $\{q_1, \ldots, q_k\}$ and a query $q_i$, we modify the MLP weights of the LLM as follows:

$$W_{l,p} = \begin{cases} 0, & \text{if Erasure} \\ W_{l,p} - \lambda_1 E(o) + \lambda_2 E(o'), & \text{if Update} \end{cases} \quad (2)$$

where $l$ and $p$ represent the layer and position of the knowledge neuron, and $W_{l,p}$ is the corresponding MLP weight. $E(o)$ and $E(o')$ are the word embeddings of the original object $o$ and the updated object $o'$, respectively, and $\lambda_1$ and $\lambda_2$ are hyperparameters. We perform knowledge modification on two different KN sets: (1) $N_i$, the set of knowledge neurons corresponding to $q_i$; and (2) $N_u$, the union of KNs across all $k$ queries, i.e., $N_u = \bigcup_{i=1}^{k} N_i$.

**Evaluation Metrics** (1) *Knowledge Modification Metrics*: We adopt three metrics (detailed in Appendix E): Reliability (Rel), Generalization (Gen), and Locality (Loc) (Yao et al., 2023). These respectively measure the model's ability to answer the original query, neighbor queries, and unrelated queries after knowledge modification. All three metrics are better when higher. To facilitate comparison, we also report the average of the three (Avg). (2) *General Capability Metrics*: Editing neurons may disrupt the model's performance in generating text (Zhang et al., 2024; Zhao et al., 2023). Similar to other model editing methods (Wang et al., 2024b), we employ the perplexity (PPL) metric to evaluate the model's general capability after modification. Specifically, we randomly select five entries from WikiText2 (Merity et al., 2017) each time and calculate the relative increase in PPL before (b) and after (a) editing the model: $\Delta\mathrm{PPL} = (\mathrm{PPL}_a - \mathrm{PPL}_b)/\mathrm{PPL}_b$. A lower $\Delta\mathrm{PPL}$ is better, as it indicates less disruption to the model.

**Findings** Table 2 presents the results of this experiment, leading us to the following conclusions. (1) **Low Generalization for Inconsistent Knowledge in $N_i$**: Modifying $N_i$, i.e., the KNs corresponding to $q_i$, leads to low generalization for $K_I$. Specifically, under the "Erasure" setting, the generalization scores are only 0.09 for GPT-2 and 0.04 for LLaMA3-8b, indicating unsuccessful modification of neighbor queries. Despite high Reliability and Locality scores on original and unrelated queries, the poor generalization reveals the limitations of this method. In contrast, $K_C$ exhibits higher "Avg" and "Gen" metrics; for LLaMA3, the "Avg" and "Gen" values reach 0.47 and 0.30, respectively, suggesting better consistency among neighbor KNs (i.e., the KNs corresponding to neighbor queries). (2) **High $\Delta$PPL and Lower Locality for Inconsistent Knowledge in $N_u$**: To achieve high generalization for $K_I$, substantial modifications to $N_u$ (the union of the $N_i$) are required, necessitating the alteration of many KNs to impact a single fact. However, this approach significantly increases the perplexity change ($\Delta$PPL), with a peak of 1.05 for LLaMA3-8b under the "Erasure" setting (i.e., a 105% increase in PPL), and causes Locality to drop from 0.80 to 0.50, indicating excessive alterations to model parameters. It is precisely because the neighbor KNs are inconsistent that taking
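A minimal PyTorch-style sketch of the modification rule in equation (2); the indexing convention (one weight vector per knowledge neuron) and all names are our assumptions, not the paper's implementation:

```python
import torch

def modify_kn(weight, pos, mode, emb_o=None, emb_o_new=None,
              lam1=1.0, lam2=1.0):
    """Apply equation (2) to the MLP weight W_{l,p} of one knowledge
    neuron. `weight` is the layer's weight matrix and `pos` the neuron's
    position; we assume each neuron owns one row of the matrix."""
    with torch.no_grad():
        if mode == "erase":
            weight[pos] = 0.0                                 # W_{l,p} = 0
        elif mode == "update":                                # move o -> o'
            weight[pos] = weight[pos] - lam1 * emb_o + lam2 * emb_o_new
    return weight
```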
Idea Generation Category: Other (label 3)
tfyHbvFZ0K
# F EW FOR M ANY : T CHEBYCHEFF S ET S CALARIZATION FOR M ANY -O BJECTIVE O PTIMIZATION

**Xi Lin**1, **Yilu Liu**1, **Xiaoyuan Zhang**1, **Fei Liu**1, **Zhenkun Wang**2, **Qingfu Zhang**1,∗

1 City University of Hong Kong, 2 Southern University of Science and Technology
{xi.lin, yiluliu3, xzhang2523-c, fliu36-c}@my.cityu.edu.hk
wangzhenkun90@gmail.com, qingfu.zhang@cityu.edu.hk

A BSTRACT

Multi-objective optimization can be found in many real-world applications where some conflicting objectives cannot be optimized by a single solution. Existing optimization methods often focus on finding a set of Pareto solutions with different optimal trade-offs among the objectives. However, the number of solutions required to approximate the whole Pareto optimal set well could be exponentially large with respect to the number of objectives, which makes these methods unsuitable for handling many optimization objectives. In this work, instead of finding a dense set of Pareto solutions, we propose a novel Tchebycheff set scalarization method to find a few representative solutions (e.g., 5) that cover a large number of objectives (e.g., > 100) in a collaborative and complementary manner. In this way, each objective can be well addressed by at least one solution in the small solution set. In addition, we further develop a smooth Tchebycheff set scalarization approach for efficient optimization with good theoretical guarantees. Experimental studies on different problems with many optimization objectives demonstrate the effectiveness of our proposed method.

1 I NTRODUCTION

In real-world applications, it is very common that many optimization objectives must be considered at the same time. Examples include manufacturing or engineering design with various specifications to achieve (Adriana et al., 2018; Wang et al., 2020), decision-making systems with different factors to consider (Roijers et al., 2014; Hayes et al., 2022), and molecular generation with multiple criteria to satisfy (Jain et al., 2023; Zhu et al., 2023). For a non-trivial problem, these optimization objectives conflict with one another. Therefore, it is very difficult, if not impossible, for a single solution to accommodate all objectives at the same time (Miettinen, 1999; Ehrgott, 2005). In the past several decades, much effort has been made to develop efficient algorithms for finding a set of Pareto solutions with diverse optimal trade-offs among different objectives. However, the Pareto set that contains all optimal trade-off solutions could be a manifold in the decision space whose dimensionality can be large for a problem with many objectives (Hillermeier, 2001). The number of solutions required to approximate the whole Pareto set well increases exponentially with the number of objectives, which leads to prohibitively high computational overhead. In addition, a large solution set with high-dimensional objective vectors could easily become unmanageable for decision-makers. Indeed, a problem with more than 3 objectives is already called a many-objective optimization problem (Fleming et al., 2005; Ishibuchi et al., 2008), and existing methods struggle to deal with problems with a significantly larger number of optimization objectives (Sato & Ishibuchi, 2023).
In this work, instead of finding a dense set of Pareto solutions, we investigate a new approach for many-objective optimization, which aims to find a small set of solutions (e.g., 5) to handle a large number of objectives (e.g., > 100). In the optimal case, each objective should be well addressed by at least one solution in the small solution set, as illustrated in Figure 1(d). This setting is important for different real-world applications with many objectives to optimize, such as finding complementary engineering designs to satisfy various criteria (Fleming et al., 2005), producing a few different versions of advertisements to serve a large group of diverse audiences (Matz et al., 2017; Eckles et al., 2018), and building a small set of models to handle many different data (Yi et al., 2014; Zhong et al., 2016) or tasks (Standley et al., 2020; Fifty et al., 2021). However, this demand has received little attention from the multi-objective optimization community.

∗ Corresponding author.

Figure 1: **Large Set vs. Small Set for Multi-Objective Optimization. (a)(b)(c) Large Set:** Classic algorithms use 10, 100, and 1,000 solutions to approximate the whole Pareto front for 2- and 3-objective optimization problems. The required number of solutions for a good approximation could increase exponentially with the number of objectives. **(d) Small Set:** This work investigates how to efficiently find a few solutions (e.g., 5) to collaboratively handle many optimization objectives (e.g., 100). Panels: (a) 10 Solutions, (b) 100 Solutions, (c) 1,000 Solutions, (d) 5 Solutions (Ours).

To properly handle this setting, this work makes the following contributions[1]:

- We propose a novel Tchebycheff set (TCH-Set) scalarization approach to find a few optimal solutions in a collaborative and complementary manner for many-objective optimization.
- We further develop a smooth Tchebycheff set (STCH-Set) scalarization approach to tackle the non-smoothness of TCH-Set scalarization for efficient gradient-based optimization.
- We provide theoretical analyses to show that our proposed approaches enjoy good theoretical properties for multi-objective optimization.
- We conduct experiments on various multi-objective optimization problems with many objectives to demonstrate the efficiency of our proposed method.
2 P RELIMINARIES AND R ELATED W ORK

2.1 M ULTI -O BJECTIVE O PTIMIZATION

In this work, we consider the following multi-objective optimization problem:

$$\min_{\mathbf{x} \in \mathcal{X}} \mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \cdots, f_m(\mathbf{x})), \quad (1)$$

where $\mathbf{x} \in \mathcal{X}$ is a solution and $\mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \cdots, f_m(\mathbf{x})) \in \mathbb{R}^m$ are $m$ differentiable objective functions. For a non-trivial problem, there is no single solution $\mathbf{x}^*$ that can optimize all objective functions at the same time. Therefore, we have the following definitions of dominance, (weakly) Pareto optimality, and Pareto set/front for multi-objective optimization (Miettinen, 1999):

**Definition 1** (Dominance and Strict Dominance)**.** *Let* $\mathbf{x}^{(a)}, \mathbf{x}^{(b)} \in \mathcal{X}$ *be two solutions for problem (1);* $\mathbf{x}^{(a)}$ *is said to dominate* $\mathbf{x}^{(b)}$, *denoted as* $\mathbf{f}(\mathbf{x}^{(a)}) \prec \mathbf{f}(\mathbf{x}^{(b)})$, *if and only if* $f_i(\mathbf{x}^{(a)}) \le f_i(\mathbf{x}^{(b)})\ \forall i \in \{1, \ldots, m\}$ *and* $f_j(\mathbf{x}^{(a)}) < f_j(\mathbf{x}^{(b)})$ *for some* $j \in \{1, \ldots, m\}$. *In addition,* $\mathbf{x}^{(a)}$ *is said to strictly dominate* $\mathbf{x}^{(b)}$ *(i.e.,* $\mathbf{f}(\mathbf{x}^{(a)}) \prec_{\mathrm{strict}} \mathbf{f}(\mathbf{x}^{(b)})$*), if and only if* $f_i(\mathbf{x}^{(a)}) < f_i(\mathbf{x}^{(b)})\ \forall i \in \{1, \ldots, m\}$.

**Definition 2** ((Weakly) Pareto Optimality)**.** *A solution* $\mathbf{x}^* \in \mathcal{X}$ *is Pareto optimal if there is no* $\mathbf{x} \in \mathcal{X}$ *such that* $\mathbf{f}(\mathbf{x}) \prec \mathbf{f}(\mathbf{x}^*)$. *A solution* $\mathbf{x}' \in \mathcal{X}$ *is weakly Pareto optimal if there is no* $\mathbf{x} \in \mathcal{X}$ *such that* $\mathbf{f}(\mathbf{x}) \prec_{\mathrm{strict}} \mathbf{f}(\mathbf{x}')$.

**Definition 3** (Pareto Set and Pareto Front)**.** *The set of all Pareto optimal solutions* $\mathbf{X}^* = \{\mathbf{x} \in \mathcal{X} \mid \mathbf{f}(\hat{\mathbf{x}}) \nprec \mathbf{f}(\mathbf{x})\ \forall \hat{\mathbf{x}} \in \mathcal{X}\}$ *is called the Pareto set. Its image in the objective space* $\mathbf{f}(\mathbf{X}^*) = \{\mathbf{f}(\mathbf{x}) \in \mathbb{R}^m \mid \mathbf{x} \in \mathbf{X}^*\}$ *is called the Pareto front.*

[1] Our source code is available at: https://github.com/Xi-L/STCH-Set

Under mild conditions, the Pareto set and front could be on an $(m-1)$-dimensional manifold in the decision or objective space (Hillermeier, 2001), which contains infinitely many Pareto solutions. Many optimization methods have been proposed to find a finite set of solutions to approximate the Pareto set and front (Miettinen, 1999; Ehrgott, 2005; Zhou et al., 2011). If at least $k$ solutions are needed to handle each dimension of the Pareto front, the required number of solutions could be $O(k^{(m-1)})$ for a problem with $m$ optimization objectives. Two illustrative examples with a set of solutions approximating the Pareto front for problems with 2 and 3 objectives are shown in Figure 1. However, the required number of solutions will increase exponentially with the objective number $m$, leading to prohibitively high computational overhead. It could also be very challenging for decision-makers to efficiently handle such a large set of solutions.
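Definitions 1-3 translate directly into code; a small NumPy sketch (ours) for checking dominance and extracting the non-dominated subset of a finite set of objective vectors:

```python
import numpy as np

def dominates(fa, fb, strict=False):
    """Definition 1: does f(x_a) dominate (or strictly dominate) f(x_b)?"""
    fa, fb = np.asarray(fa), np.asarray(fb)
    if strict:
        return bool(np.all(fa < fb))
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def non_dominated(F):
    """Indices of non-dominated rows of an (N, m) objective matrix
    (a finite-sample analogue of the Pareto set in Definition 3)."""
    N = len(F)
    return [i for i in range(N)
            if not any(dominates(F[j], F[i]) for j in range(N) if j != i)]
```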
Indeed, for a problem with many optimization objectives, a large portion of the solutions could become non-dominated and hence incomparable with each other (Purshouse & Fleming, 2007; Knowles & Corne, 2007). In the past few decades, different heuristic and evolutionary algorithms have been proposed to tackle many-objective black-box optimization problems (Zhang & Li, 2007; Bader & Zitzler, 2011; Deb & Jain, 2013). These algorithms typically aim to find a set of a few hundred solutions to handle problems with 4 to a few dozen optimization objectives (Li et al., 2015; Sato & Ishibuchi, 2023). However, they still struggle to tackle problems with significantly more objectives (e.g., > 100), and cannot efficiently solve large-scale differentiable optimization problems. Dimensionality reduction (Deb & Saxena, 2005; Brockhoff & Zitzler, 2006; Singh et al., 2011) is a widely used technique to deal with many-objective optimization problems with potentially redundant objectives. By summarizing all objectives with a few representative objectives, these methods reformulate the originally challenging problem into a simpler problem with far fewer objectives. A detailed discussion of dimensionality reduction can be found in Appendix D.6.

2.2 G RADIENT - BASED M ULTI -O BJECTIVE O PTIMIZATION

When all objective functions are differentiable with gradients $\{\nabla f_i(\mathbf{x})\}_{i=1}^{m}$, we have the following definition of Pareto stationarity:

**Definition 4** (Pareto Stationary Solution)**.** *A solution* $\mathbf{x} \in \mathcal{X}$ *is Pareto stationary if there exists a set of weights* $\boldsymbol{\alpha} \in \boldsymbol{\Delta}^{m-1} = \{\boldsymbol{\alpha} \mid \sum_{i=1}^{m} \alpha_i = 1,\ \alpha_i \ge 0\ \forall i\}$ *such that the convex combination of gradients* $\sum_{i=1}^{m} \alpha_i \nabla f_i(\mathbf{x}) = \mathbf{0}$.

**Multiple Gradient Descent Algorithm** One popular gradient-based approach is to find a valid gradient direction such that the values of all objective functions can be simultaneously improved (Fliege & Svaiter, 2000; Schäffler et al., 2002; Désidéri, 2012). The multiple gradient descent algorithm (MGDA) (Désidéri, 2012; Sener & Koltun, 2018) obtains a valid gradient $\mathbf{d}_t = \sum_{i=1}^{m} \alpha_i \nabla f_i(\mathbf{x})$ by solving the following quadratic programming problem at each iteration:

$$\min_{\alpha_i} \Big\| \sum_{i=1}^{m} \alpha_i \nabla f_i(\mathbf{x}_t) \Big\|_2^2, \quad \text{s.t.} \;\; \sum_{i=1}^{m} \alpha_i = 1, \;\; \alpha_i \ge 0, \;\; \forall i = 1, \ldots, m, \quad (2)$$

and updates the current solution by a simple gradient descent step $\mathbf{x}_{t+1} = \mathbf{x}_t - \eta_t \mathbf{d}_t$. If $\mathbf{d}_t = \mathbf{0}$, there is no valid gradient direction that can improve all objectives at the same time, and therefore $\mathbf{x}_t$ is a Pareto stationary solution (Désidéri, 2012; Fliege et al., 2019). This idea has inspired many adaptive gradient methods for multi-task learning (Yu et al., 2020; Liu et al., 2021a;b; Momma et al., 2022; Liu et al., 2022; Navon et al., 2022; Senushkin et al., 2023; Lin et al., 2023; Liu et al., 2024). Different stochastic multiple gradient methods have also been proposed in recent years (Liu & Vicente, 2021; Zhou et al., 2022; Fernando et al., 2023; Chen et al., 2023; Xiao et al., 2023).
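Problem (2) is a min-norm problem over the simplex and is commonly solved with Frank-Wolfe iterations, as in Sener & Koltun (2018); a compact NumPy sketch under that approach (names and iteration budget are ours):

```python
import numpy as np

def mgda_direction(grads, iters=100):
    """Frank-Wolfe solver for problem (2): min_a ||sum_i a_i g_i||^2 over
    the simplex. grads: (m, d) array of objective gradients at x_t."""
    m = grads.shape[0]
    alpha = np.full(m, 1.0 / m)
    GG = grads @ grads.T                      # Gram matrix of the gradients
    for _ in range(iters):
        i = int(np.argmin(GG @ alpha))        # most promising vertex e_i
        a = alpha @ GG @ alpha                # ||G^T alpha||^2
        b = GG[i] @ alpha                     # <G^T alpha, g_i>
        c = GG[i, i]                          # ||g_i||^2
        denom = a - 2.0 * b + c
        if denom <= 1e-12:                    # already optimal on this face
            break
        gamma = np.clip((a - b) / denom, 0.0, 1.0)   # exact line search
        alpha = (1.0 - gamma) * alpha
        alpha[i] += gamma
    return grads.T @ alpha                    # common descent direction d_t
```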
The location of the solution found by the original MGDA is not controllable, and several extensions have been proposed to find a set of diverse solutions with different trade-offs (Lin et al., 2019; Mahapatra & Rajan, 2020; Ma et al., 2020; Liu et al., 2021c). However, a large number of solutions is still required for a good approximation of the Pareto front. For problems with many objectives, MGDA and its extensions suffer from high computational overhead due to the high-dimensional quadratic programming problem (2) solved at each iteration. In addition, since a large portion of solutions are non-dominated with each other, it could be very hard to find a valid gradient direction that optimizes all objectives at the same time.

Figure 2: **Few Solutions to Address Many Optimization Objectives. (a)-(e):** 5 different solutions tackle different optimization objectives in a complementary manner. **(f):** Together they successfully handle all 100 optimization objectives.

**Scalarization Method** Another popular class of methods for multi-objective optimization is the scalarization approach (Miettinen, 1999; Zhang & Li, 2007). The most straightforward method is linear scalarization (Geoffrion, 1967):

$$\text{(Linear Scalarization)} \quad \min_{\mathbf{x} \in \mathcal{X}} g^{(\mathrm{LS})}(\mathbf{x} \mid \boldsymbol{\lambda}) = \sum_{i=1}^{m} \lambda_i f_i(\mathbf{x}), \quad (3)$$

where $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_m)$ is a preference vector over the $m$ objectives on the simplex $\boldsymbol{\Delta}^{m-1} = \{\boldsymbol{\lambda} \mid \sum_{i=1}^{m} \lambda_i = 1,\ \lambda_i \ge 0\ \forall i\}$. A set of diverse solutions can be obtained by solving the scalarization problem (3) with different preferences. Recently, different studies have shown that well-tuned linear scalarization can outperform many adaptive gradient methods for multi-task learning (Kurin et al., 2022; Xin et al., 2022; Lin et al., 2022; Royer et al., 2023). However, from the viewpoint of multi-objective optimization, linear scalarization cannot find any Pareto solution on the non-convex part of the Pareto front (Das & Dennis, 1997; Ehrgott, 2005; Hu et al., 2023). Many other scalarization methods have been proposed in the past decades. Among them, Tchebycheff scalarization, with its good theoretical properties, is a promising alternative (Bowman, 1976; Steuer & Choo, 1983):

$$\text{(Tchebycheff Scalarization)} \quad \min_{\mathbf{x} \in \mathcal{X}} g^{(\mathrm{TCH})}(\mathbf{x} \mid \boldsymbol{\lambda}) = \max_{1 \le i \le m} \{\lambda_i (f_i(\mathbf{x}) - z_i^*)\}, \quad (4)$$

where $\boldsymbol{\lambda} \in \boldsymbol{\Delta}^{m-1}$ is the preference and $\mathbf{z}^* \in \mathbb{R}^m$ is the ideal point (e.g., $z_i^* = \min f_i(\mathbf{x}) - \epsilon$ with a small $\epsilon > 0$). It is well known that Tchebycheff scalarization is able to find all weakly Pareto solutions for any Pareto front (Choo & Atkins, 1983). However, the max operator makes it nonsmooth, so it suffers from a slow convergence rate under subgradient descent (Goffin, 1977) for differentiable multi-objective optimization. Recently, a smooth Tchebycheff scalarization approach (Lin et al., 2024) has been proposed to tackle the nonsmoothness issue:

$$\text{(Smooth Tchebycheff Scalarization)} \quad \min_{\mathbf{x} \in \mathcal{X}} g^{(\mathrm{STCH})}_{\mu}(\mathbf{x} \mid \boldsymbol{\lambda}) = \mu \log \sum_{i=1}^{m} e^{\lambda_i (f_i(\mathbf{x}) - z_i^*)/\mu}, \quad (5)$$

where $\mu$ is a smoothing parameter with a small positive value (e.g., 0.1).
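Equations (4) and (5) are one-liners; a small sketch (ours) using a numerically stable log-sum-exp:

```python
import numpy as np
from scipy.special import logsumexp

def g_tch(f, lam, z):
    """Equation (4): max_i lam_i * (f_i(x) - z_i*)."""
    return np.max(lam * (f - z))

def g_stch(f, lam, z, mu=0.1):
    """Equation (5): smooth upper bound of g_tch; converges to it
    as mu -> 0."""
    return mu * logsumexp(lam * (f - z) / mu)
```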
According to Lin et al. (2024), this smooth scalarization approach enjoys a fast convergence rate for gradient-based methods, while also having good theoretical properties for multi-objective optimization. A similar smooth optimization approach has also been proposed in He et al. (2024) for robust multi-task learning. Very recently, Qiu et al. (2024) proved and analyzed the theoretical advantages of smooth Tchebycheff scalarization (5) over the classic Tchebycheff scalarization (4) for multi-objective reinforcement learning (MORL). Scalarization methods do not have to solve a quadratic programming problem at each iteration and thus have lower per-iteration complexity than MGDA. However, they still need to solve a large number of scalarization problems with different preferences to obtain a dense set of solutions to approximate the whole Pareto set.

3 T CHEBYCHEFF S ET S CALARIZATION FOR M ANY -O BJECTIVE O PTIMIZATION

3.1 S MALL S OLUTION S ET FOR M ANY -O BJECTIVE O PTIMIZATION

Unlike previous methods, this work does not aim to find a huge set of solutions to approximate the whole Pareto set. Instead, we want to find a small set of solutions in a collaborative and complementary way such that each optimization objective can be well addressed by at least one solution. We have the following formulation of our targeted set optimization problem:

$$\min_{\mathbf{X}_K = \{\mathbf{x}^{(k)}\}_{k=1}^{K}} \mathbf{f}(\mathbf{X}_K) = \Big( \min_{\mathbf{x} \in \mathbf{X}_K} f_1(\mathbf{x}),\; \min_{\mathbf{x} \in \mathbf{X}_K} f_2(\mathbf{x}),\; \cdots,\; \min_{\mathbf{x} \in \mathbf{X}_K} f_m(\mathbf{x}) \Big), \quad (6)$$

where $\mathbf{X}_K = \{\mathbf{x}^{(k)}\}_{k=1}^{K}$ is a set of $K$ solutions to tackle all $m$ objectives $\{f_i(\mathbf{x})\}_{i=1}^{m}$. With a large $K \ge m$, we have a degenerate problem:

$$\min_{\mathbf{X}_K} \mathbf{f}(\mathbf{X}_K) = \Big( \min_{\mathbf{x}^{(1)} \in \mathcal{X}} f_1(\mathbf{x}^{(1)}),\; \min_{\mathbf{x}^{(2)} \in \mathcal{X}} f_2(\mathbf{x}^{(2)}),\; \cdots,\; \min_{\mathbf{x}^{(m)} \in \mathcal{X}} f_m(\mathbf{x}^{(m)}) \Big), \quad (7)$$

where each objective function $f_i$ is independently solved by its corresponding solution $\mathbf{x}^{(i)} \in \mathcal{X}$ via single-objective optimization and the remaining $(K - m)$ solutions are redundant. If $K = 1$, the set optimization problem (6) reduces to the standard multi-objective optimization problem (1). In this work, we are more interested in the case $1 < K \ll m$, which finds a small set of solutions (e.g., $K = 5$) to tackle a large number of objectives (e.g., $m \ge 100$), as illustrated in Figure 2. In the ideal case, if the ground-truth optimal objective group assignment is already known (e.g., which objectives should be optimized together by the same solution), it is straightforward to directly find an optimal solution for each group of objectives. However, for a general optimization problem, the ground-truth objective group assignment is usually unknown, and finding the optimal assignment could be very difficult.

Very recently, a similar setting has been investigated in two concurrent works (Ding et al., 2024; Li et al., 2024). Ding et al. (2024) study the sum-of-minimum (SoM) optimization problem $\frac{1}{m}\sum_{i=1}^{m} \min\{f_i(\mathbf{x}^{(1)}), f_i(\mathbf{x}^{(2)}), \ldots, f_i(\mathbf{x}^{(K)})\}$, which can be found in many machine learning applications such as mixed linear regression (Yi et al., 2014; Zhong et al., 2016). They generalize the classic k-means++ (Arthur, 2007) and Lloyd's algorithm (Lloyd, 1982) for clustering to tackle this problem, but do not take multi-objective optimization into consideration. Li et al. (2024) propose a novel Many-objective multi-solution Transport (MosT) framework to tackle many-objective optimization. With a bi-level optimization formulation, they adaptively construct a few weighted multi-objective optimization problems that are assigned to different representative regions on the Pareto front. By solving these weighted problems with MGDA, a diverse set of solutions can be obtained to cover all objectives well. In this work, we propose a straightforward and efficient set scalarization approach to explicitly optimize all objectives with a small set of solutions. A detailed experimental comparison with these methods can be found in Section 4.

3.2 T CHEBYCHEFF S ET S CALARIZATION

The set optimization formulation (6) is still a multi-objective optimization problem. In non-trivial cases, there is no single small solution set $\mathbf{X}_K$ with $K < m$ solutions that can optimize all $m$ objective functions $\{f_i(\mathbf{x})\}_{i=1}^{m}$ at the same time. To tackle this optimization problem, we propose the following Tchebycheff set (TCH-Set) scalarization approach:

$$\min_{\mathbf{X}_K = \{\mathbf{x}^{(k)}\}_{k=1}^{K}} g^{(\text{TCH-Set})}(\mathbf{X}_K \mid \boldsymbol{\lambda}) = \max_{1 \le i \le m} \Big\{ \lambda_i \Big( \min_{\mathbf{x} \in \mathbf{X}_K} f_i(\mathbf{x}) - z_i^* \Big) \Big\} = \max_{1 \le i \le m} \Big\{ \lambda_i \Big( \min_{1 \le k \le K} f_i(\mathbf{x}^{(k)}) - z_i^* \Big) \Big\}, \quad (8)$$

where $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_m)$ and $\mathbf{z}^* = (z_1^*, \ldots, z_m^*)$ are the preference and ideal point for each objective function. In this way, all objective values $\{f_i(\mathbf{x})\}_{i=1}^{m}$ over the whole solution set $\mathbf{X}_K = \{\mathbf{x}^{(k)}\}_{k=1}^{K}$ are scalarized into a single function $g^{(\text{TCH-Set})}(\mathbf{X}_K \mid \boldsymbol{\lambda})$. In this work, a simple uniform vector $\boldsymbol{\lambda} = (\frac{1}{m}, \ldots, \frac{1}{m})$ is used in all experiments without any specific preference among the objectives; a discussion of the effect of different preferences can be found in Appendix D.2. By optimizing the TCH-Set scalarization function (8), we want to find an optimal small solution set $\mathbf{X}_K^*$ such that each objective can be well addressed by at least one solution $\mathbf{x}^{(k)} \in \mathbf{X}_K$, i.e., with a low worst objective value $\max_{1 \le i \le m} \{\lambda_i (\min_{1 \le k \le K} f_i(\mathbf{x}^{(k)}) - z_i^*)\}$. When the solution set contains only a single solution (i.e., $K = 1$), it reduces to the classic single-solution Tchebycheff scalarization (4). To avoid degenerate cases and focus on the key few-for-many setting, we make the following two assumptions in this work:
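Equation (8) evaluates cheaply once the $K \times m$ matrix of objective values is available; a small sketch (ours), together with one plausible smoothed variant built from soft-min/soft-max, which is our guess at the spirit of STCH-Set rather than the paper's exact formulation:

```python
import numpy as np
from scipy.special import logsumexp

def tch_set(F, lam, z):
    """Equation (8). F: (K, m) matrix with F[k, i] = f_i(x^(k))."""
    best = F.min(axis=0)                  # min over the solution set
    return np.max(lam * (best - z))       # worst weighted gap

def stch_set_smooth(F, lam, z, mu=0.1):
    """A smoothed variant (our sketch, not the paper's STCH-Set):
    soft-min over solutions, then soft-max over objectives."""
    soft_min = -mu * logsumexp(-F / mu, axis=0)        # ~min_k f_i(x^(k))
    return mu * logsumexp(lam * (soft_min - z) / mu)   # ~max_i
```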
Idea Generation Category: Conceptual Integration (label 0)
O4N9kWwV6R
# CARTS: A DVANCING N EURAL T HEOREM P ROVING WITH D IVERSIFIED T ACTIC C ALIBRATION AND B IAS -R ESISTANT T REE S EARCH

**Xiao-Wen Yang**1,2, **Zhi Zhou**1, **Haiming Wang**3,5, **Aoxue Li**4, **Wen-Da Wei**1,2, **Hui Jin**4, **Zhenguo Li**4, **Yu-Feng Li**1,2∗

1 National Key Laboratory for Novel Software Technology, Nanjing University
2 School of Artificial Intelligence, Nanjing University
3 Sun Yat-sen University
4 Noah's Ark Lab, Huawei
5 Moonshot AI

A BSTRACT

Recent advancements in neural theorem proving integrate large language models with tree search algorithms like Monte Carlo Tree Search (MCTS), where the language model suggests tactics and the tree search finds the complete proof path. However, many tactics proposed by the language model are semantically or strategically similar to one another, reducing diversity and increasing search costs through the expansion of redundant proof paths. This issue is exacerbated as computation scales and more tactics are explored per state. Furthermore, the trained value function suffers from false negatives, label imbalance, and domain gaps due to biased data construction. To address these challenges, we propose CARTS (diversified tactic CAlibration and bias-Resistant Tree Search), which balances tactic diversity and importance while calibrating model confidence. CARTS also introduces preference modeling and an adjustment term related to the ratio of valid tactics to improve the bias-resistance of the value function. Experimental results demonstrate that CARTS consistently outperforms previous methods, achieving a pass@1 rate of 49.6% on the miniF2F-test benchmark. Further analysis confirms that CARTS improves tactic diversity and leads to a more balanced tree search. The code for our implementation is available at https://github.com/njuyxw/CARTS.

1 I NTRODUCTION

Automated theorem proving (ATP) (Harrison et al., 2014) is an essential task of artificial intelligence (AI) and a significant challenge. Recently, the development of large language models has brought new vitality and advancements to this field (Han et al., 2022; Jiang et al., 2023; Xin et al., 2024a). For example, AlphaProof (Deepmind, 2024; Trinh et al., 2024) solved four out of six problems from the International Mathematical Olympiad (IMO), reaching the level of a silver medalist in the competition. These advancements stem from the integration of language models and formal theorem proving systems (such as Lean (Moura & Ullrich, 2021) or Isabelle (Paulson, 1994)), which model the theorem proving task as a Markov Decision Process (MDP) (Polu & Sutskever, 2020). The language model functions as a policy network that provides heuristic proof tactics, while tree search methods are utilized to explore the correct sequence of steps that maximizes the reward. Although improvements to language models (Xin et al., 2024a) can significantly improve theorem proving performance, efficient tree search methods remain crucial for theorems with long and complex proof steps. Existing search techniques (Polu & Sutskever, 2020; Wang et al., 2023; Xin et al., 2024a) primarily rely on Best First Search (BFS) or Monte Carlo Tree Search (MCTS) (Kocsis & Szepesvári, 2006). While these methods can achieve impressive performance, they have two significant drawbacks. **Firstly**, the output sampling of auto-regressive language models frequently exhibits significant redundancy, often producing similar tactics.
Although the language model generates a substantial number of tactics that differ at the character level, they often share the same underlying semantics. For instance, both 'intro h' and 'intro H' can be generated by the language model; however, they convey the same meaning in Lean4, as both introduce a hypothesis into the proving context.

∗ Corresponding author: Yu-Feng Li (liyf@lamda.nju.edu.cn)

Figure 1: **Overall Framework.** Diversified tactic calibration can calibrate the model confidence and enhance the diversity of candidate tactics, thus mitigating ineffective exploration. The bias-resistant value function can adapt to the test data and provide more accurate scores for evaluating tactics, thus improving the efficiency of utilization.

This redundancy can also lead to an imbalance in the number of high-level proof strategies. For example, most of the tactics generated by the language model may focus on the strategy of proof by contradiction, while there are relatively few tactics involving mathematical induction. These issues result in a huge amount of ineffective exploration during the tree search process, thus increasing search costs; the problem worsens as computation scales and more tactics are explored per state. **Secondly**, the construction of value function training data often relies on existing policy models to generate negative samples, thus introducing bias. On one hand, this construction may produce a substantial number of negative samples, potentially far exceeding the positive ones. This can result in label imbalance, causing the cross-entropy loss used in training the value function to easily converge to local optima. On the other hand, the samples generated by the policy model may contain false negatives, introducing noise into the dataset. Additionally, the domain gap between the training dataset (e.g., Mathlib) and the test dataset (e.g., IMO problems) exacerbates the bias of the value function during the inference stage. This leads to inaccurate evaluations of the current proof state's value, hindering effective exploitation during the tree search process. Overall, these two issues prevent existing tree search techniques from efficient exploration and effective exploitation, resulting in sub-optimal search performance.

In order to solve these challenges, we propose diversified tactic **CA**libration and bias-**R**esistant **T**ree **S**earch (**CARTS**). Diversified tactic calibration involves reordering and rescoring multiple candidate tactics generated by the language model's sampling output. This approach balances importance and diversity, thereby enhancing exploration efficiency. We implement it with the Maximal Marginal Relevance (MMR) algorithm (Peng et al., 2005), a classical method in the field of information retrieval. Meanwhile, we propose a bias-resistant value function. During the training stage, preference modeling is employed to construct the training dataset, and the Bradley-Terry model (Bradley & Terry, 1952) is utilized to train the value network. This addresses the issues of data imbalance and false negatives. During the inference stage, we introduce an adjustment term related to the ratio of valid tactics into the value function to mitigate the domain gap between the training and test datasets. This stems from an insight that if the number of valid tactics is limited, concerns
This stems from a insight that if the number of valid tactics is limited, concerns 2 may arise regarding the effectiveness of the current policy model, necessitating a reduced value for the current action. Bias-resistant value function can enhance the effectiveness of exploitation during the search process. The complete framework of CARTS is shown in Figure 1. We conducted sufficient experiments on the widely recognized theorem-proving benchmarks, namely miniF2F (Zheng et al., 2022) and ProofNet (Azerbayev et al., 2023) in Lean. Our proposed CARTS demonstrates superior performances compared to all other search methods when the policy network remains unchanged. We achieved a pass@1 success rate of 49.6% on the miniF2F-test dataset, which is the state-of-the-art performance among all one-step tree search methods. To summarize, this paper (i). proposes a diversified tactic calibration assisted monte carlo tree search to improve the exploration efficiency. (ii). proposes a bias-resistant value function to improve the exploitation effectiveness. (iii). demonstrates the effectiveness of the proposed method across different models and benchmarks in experiments. 2 R ELATED W ORK **Neural theorem proving.** In recent years, the advancement of large language models has brought new progress to theorem proving (Li et al., 2024). GPT-f (Polu & Sutskever, 2020) is the first to utilize language models trained on proof data to predict candidate proof steps and employ search algorithms to discover the complete proof path. A series of subsequent studies have employed diverse language model techniques from various perspectives to enhance theorem proving performance. In terms of model training, PACT (Han et al., 2022) employs a set of self-supervised auxiliary tasks to train the model. Curriculum Learning (Polu et al., 2023) introduces curriculum expert iterations to update the network. Llemma (Azerbayev et al., 2024) continues pre-training the CodeLlama models (Roziere et al., 2023) on a math-focused corpus. AlphaGeometry (Trinh et al., 2024) integrates a transformer model trained on synthetic geometry data with a symbolic deduction engine to solve olympiad geometry problems. InterLM2-Math (Ying et al., 2024b) compiles a substantial collection of both formal and informal contest-level math problems (Ying et al., 2024a; Wu et al., 2024). First et al. (2023) incorporates a repair feedback mechanism in proof generation. This feedback is facilitated by an LLM fine-tuned on tuples consisting of incorrect proof, error message and correct proof. In terms of algorithmic design, Reprover (Yang et al., 2023) employs retrieval-augmented generation for proof generation. DSP (Jiang et al., 2023) initially uses informal hints to guide proofs by translating informal proofs into formal sketches, which are then completed with Isabelle’s automated reasoning tactics. LEGOProver (Xin et al., 2024c) enhances DSP with a skill library that expands throughout the proof search. Lyra (Zheng et al., 2024) iterates on DSP by using error feedback to modify the formal sketch, employing automated reasoning tools to correct incorrect proofs of intermediate hypotheses. COPRA (Thakur et al., 2024) utilizes in-context learning agents to augment theorem proving. These methods employ different formal systems. In our paper, we focus on Lean, which has been verified to perform well on IMO-level tasks (Deepmind, 2024). 
**Search methods for theorem proving.** Neural theorem proving primarily consists of two categories: whole proof generation methods (Xin et al., 2024a; Wang et al., 2024b) and tree search methods. Tree search methods are increasingly becoming the mainstream approach in recent years. A typical approach involves using Best First Search (BFS), as seen in methods like GPT-f (Polu & Sutskever, 2020), Reprover (Yang et al., 2023) and others (Lin et al., 2024; Welleck & Saha, 2023). In contrast, Thakur et al. (2024) employs depth-first search (DFS). Inspired by AlphaZero (Silver et al., 2018), many methods utilize MCTS, such as HyperTree Proof Search (Lample et al., 2022). There are also several improvements to the MCTS algorithm for theorem proving tasks. For instance, DT-Solver (Wang et al., 2023) uses virtual nodes and a proof-level value function to dynamically guide the MCTS search. Wang et al. (2024a) introduces a novel method that allows for the emergence of unproven lemmas during the search, which are subsequently proven recursively. The aforementioned methods are all one-step tree search techniques, which generate a single tactic at each step. Recently, multi-step tree search methods have been developed. For instance, DeepSeek-Prover-V1.5 (Xin et al., 2024b) employs MCTS to enhance the whole proof generation process, utilizing intrinsic rewards and discounted upper confidence bounds to guide exploration. Despite the success of these methods, challenges remain, namely the lack of diversity in searched proof paths and bias in the trained value function. This paper focuses on addressing these challenges by employing our diversified tactic calibration and bias-resistant tree search.

**Algorithm 1** Maximal Marginal Relevance for Diversified Tactic Calibration
**Require:** current state s, tactic set {a_1, a_2, ..., a_e}, next-state set {s'_1, s'_2, ..., s'_e}, number of selected tactics k, parameter λ
 S ← {s}
 A ← {}
 **while** |A| < min(k, e) **do**
  a* ← argmax_{a_i} [ λ · v_policy(s, a_i) − (1 − λ) · max_{s'_j ∈ S} f_enc(s'_i)^⊤ f_enc(s'_j) ]
  Add a* to A
  Add the corresponding next state s* to S
 **end while**
 **return** the expanded action set A

3 METHOD

In this section, we present the details of our proposed method CARTS, which consists of two components. We begin by introducing the diversified tactic calibration (3.1), followed by giving details of our bias-resistant value function (3.2). Together, these two approaches enhance the effectiveness of exploration and exploitation during the search process.

3.1 DIVERSIFIED TACTIC CALIBRATION

In practice, the multiple candidate tactics generated by language models often exhibit redundancy. Diversified tactic calibration addresses this by calibrating candidate tactics’ model confidence based on their intrinsic similarity. We implement the calibration using the Maximal Marginal Relevance (MMR) algorithm (Peng et al., 2005) and structure our method’s framework through Monte Carlo Tree Search (Kocsis & Szepesvári, 2006). The standard MCTS method used in theorem proving (Wang et al., 2023; Xin et al., 2024b) involves three steps: _Selection_, _Expansion_ and _Backpropagation_.
We incorporate diversified tactic calibration into the _Expansion_ phase, resulting in our CARTS method, which comprises three steps: _Selection_, _Calibration & Expansion_, and _Backpropagation_. Details of each step are provided as follows.

**Selection.** In the selection phase, the algorithm starts from the root node and traverses the tree down to a leaf node. It uses a tree policy to choose child nodes that balance exploration and exploitation. The tree policy at a tree node s selects an action a that maximizes the weighted upper confidence bound (WUCB) score, which for each tree node s is formulated as follows:

WUCB(s, a) = W(s, a) / N(s, a) + w(s, a) · √N(s, ·) / N(s, a)    (1)

Here, N(s, a) denotes the count of how many times action a has been taken in state s, and N(s, ·) the total number of times any action has been taken in state s during the whole search. W(s, a) denotes the total value accumulated. Unlike the PUCT score used in DT-Solver (Wang et al., 2023), which incorporates probabilities estimated by the language model, we introduce a weight w(s, a) that represents both importance (model confidence) and diversity for the tactic at the current state. We detail the weights in the calibration phase.

**Calibration & Expansion.** At this stage, multiple candidate tactics are generated from the language model, followed by verification through the Lean prover. Verified tactics that pass in Lean are calibrated and expanded into the search tree. Concretely, the proof generation model is designed to generate a one-step proof tactic a from a given proof state s, along with a conditional probability p(a|s). Typically, we use beam search to sample a large collection of tactics (the quantity is E) from the language model, which may result in very low probabilities for each tactic. This would lead to reduced exploration during the search process. To fix this, we apply a length penalty, defining v_policy(s, a) = p(a|s)^(1/l), where l is the token length of tactic a. This value reflects the model’s confidence in, or the importance of, the current action.

After verification by Lean, only e tactics remain, denoted as {a_1, a_2, ..., a_e}. Here, we do not directly expand these tactics into the search tree because they often exhibit significant redundancy. Therefore, we need to capture the similarity between these tactics and then reorder them for both diversity and importance. We can compare the similarities between two actions using a pre-trained and fixed sentence encoder. However, directly using action similarity poses a challenge: similar actions may lead to different next states, or dissimilar actions may result in the same next state. This undermines our goal of enhancing search diversity. Therefore, we assess the similarities of next states after executing these tactics. Formally, we denote the set of next states as {s'_1, s'_2, ..., s'_e}, where s'_i represents the next state of s following the execution of action a_i. We use a sentence encoder f_enc(·), which has already been pre-trained on a large-scale corpus, to accept the textualized state and output high-dimensional embeddings. Then we utilize the MMR algorithm to reorder the tactics and calibrate the model confidence.
The algorithm iteratively selects items (e.g., tactics in our context) from a candidate set to maximize the following objective function:

MMR(s, a_i) = λ · v_policy(s, a_i) − (1 − λ) · max_{s'_j ∈ S} f_enc(s'_i)^⊤ f_enc(s'_j)    (2)

where λ is a parameter that controls the trade-off between importance and diversity; it typically ranges from 0 to 1. The calculation of the MMR score serves as a calibration, effectively penalizing tactics with low diversity, as represented by the second term in the formula. The algorithm begins with an initial state set S = {s} and an empty action set A. While the size of A is less than the predefined value k, which is smaller than E, the action with the highest MMR score is selected and added to A, with its next state added to S. Once k actions have been chosen, the set A is returned. If e < k, we select all the tactics and merely reorder them. The value k serves as a constraint on the maximum number of expansion nodes. It is worth noting that we add the current state s into the initial state set S to mitigate the recurrence of identical states. This is effective because our algorithm assigns a lower MMR score to an action whose next state closely resembles the current state. Algorithm 1 illustrates the complete process of diversified tactic calibration; a code sketch is given at the end of this subsection.

After diversified tactic calibration, we obtain a small set of actions A that balances importance and diversity. These actions are then treated as edges, with their corresponding next states as nodes, which are expanded into the current search tree. For each edge a_i, we assign a weight w(s, a_i) = max{0, MMR(s, a_i)}, utilized during the selection phase to assess the need for exploration. Unlike traditional MCTS (Kocsis & Szepesvári, 2006) or DT-Solver (Wang et al., 2023), our weights place greater emphasis on encouraging the exploration of tactics with high diversity.

**Backpropagation.** At this stage, we update the statistics of the nodes and edges along the search trajectory. We have a bias-resistant value function V(s, a), which will be detailed in the next section, estimating the value of taking action a from the source node s. For a given trajectory, we use the value function to evaluate the value of the leaf node and accumulate this value along all edges in the path. Specifically, we update the weight of each edge recursively as follows: W(s_t, a_t) += V(s, a), where s_t and a_t represent the node and edge along the trajectory. Additionally, we increment the visit count for the edge: N(s_t, a_t) += 1. This process ensures that the statistics reflect the outcomes of the simulations, allowing for improved selection in future iterations.
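To make Algorithm 1 concrete, the following is a minimal NumPy sketch of diversified tactic calibration (Eq. 2). It is an illustrative re-implementation under stated assumptions, not the authors' code: `v_policy` holds the length-penalized confidences v_policy(s, a_i), and `state_emb`/`next_embs` are assumed to be L2-normalized outputs of the sentence encoder f_enc, so inner products act as similarities.

```python
# A minimal sketch of Algorithm 1 (diversified tactic calibration via MMR).
# All names are illustrative placeholders, not the authors' implementation.
import numpy as np

def diversified_tactic_calibration(v_policy, state_emb, next_embs, k, lam=0.9):
    """Select up to k tactics, balancing confidence and diversity (Eq. 2).

    v_policy : (e,)   confidence score per verified tactic.
    state_emb: (d,)   embedding of the current proof state s.
    next_embs: (e, d) embeddings of the next states s'_1..s'_e.
    Returns the indices of selected tactics and their MMR weights w(s, a_i).
    """
    e = len(v_policy)
    selected, weights = [], []
    # S starts with the current state s, penalizing tactics that loop back to it.
    S = [state_emb]
    while len(selected) < min(k, e):
        best_i, best_score = None, -np.inf
        for i in range(e):
            if i in selected:
                continue
            # Second term of Eq. 2: similarity to the closest already-kept state.
            redundancy = max(float(next_embs[i] @ s_emb) for s_emb in S)
            score = lam * v_policy[i] - (1.0 - lam) * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        S.append(next_embs[best_i])
        # Edge weight used by WUCB during selection: w(s, a_i) = max(0, MMR(s, a_i)).
        weights.append(max(0.0, best_score))
    return selected, weights
```

The greedy loop mirrors the while-loop of Algorithm 1: each round picks the remaining tactic with the highest MMR score and adds its next state to S, so subsequent picks are pushed away from states already covered.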
3.2 BIAS-RESISTANT VALUE FUNCTION

In MCTS-based methods, training a value function is crucial (Polu & Sutskever, 2020; Lample et al., 2022), typically involving the creation of positive and negative samples using the policy network on training data. Positive samples consist of correct actions (or trajectories (Wang et al., 2023)) from the dataset, while negative samples are those generated by the policy network that lead to undesirable states. Binary cross-entropy loss is then used to train the value network. Due to the difficulty of verifying the correctness of actions not on the proof path, previous work (Polu & Sutskever, 2020) often treats these actions as negative samples, resulting in an excessive number of negative samples, some of which are even inaccurate. This makes binary loss unsuitable and biases the value function. Furthermore, the domain gap between the training and test datasets also contributes to biases. In this paper, we conduct debiasing during both the training and inference stages, as detailed below.

**Training.** To mitigate bias introduced by data collection, we first structure the dataset into preference pairs of positive and negative samples. We utilize an embedding model f_enc to effectively filter out noisy samples. Specifically, if f_enc(s')^⊤ f_enc(s'_pos) > τ, we discard the action a, where s' is the next state from a sampled negative action a at the current state s, s'_pos is the correct next state, and τ is a threshold. This filtering ensures that the selected negative actions are more likely to be undesirable, thus reducing data noise. Moreover, we adopt the preference modeling framework to train our bias-resistant value function. We employ the Bradley-Terry (BT) model (Bradley & Terry, 1952), a widely used technique for preference modeling. The BT model posits that the probability of action a_pos being preferred over action a_neg given state s is expressed as:

P(a_pos ≻ a_neg | s) = exp(V_θ(s, a_pos)) / (exp(V_θ(s, a_pos)) + exp(V_θ(s, a_neg)))    (3)

Assuming access to the filtered dataset D = {(s^(i), a_pos^(i), a_neg^(i))}_{i=1}^{N}, we can parametrize the value function V_θ(s, a) and estimate the parameters θ by minimizing the negative log-likelihood. Preference modeling offers several advantages. Firstly, by only providing the relative superiority among samples, false negative samples do not require further processing. This is because we can reasonably assume that the correct proof steps provided in the dataset are always optimal and align with the preferences of human theorem proving. Additionally, since the dataset is presented in the form of preference pairs, this effectively oversamples (Shi et al., 2023) the positive pairs, alleviating the issue of class imbalance between positive and negative samples, as demonstrated in some studies (Zhang et al., 2024; Pattnaik et al., 2024).

**Inference.** To mitigate the domain gap between the training and test datasets, we introduce an adjustment term into the value function during the inference stage. As previously mentioned, before calibration in CARTS, all E tactics are processed through the Lean system to filter out e valid tactics. Intuitively, if the number of valid tactics is small, concerns arise about the capability of the current policy model, calling for a reduced reward for the current action. We define this reward adjustment as α = e/E, representing the ratio between the number of valid tactics and the total number of tactics generated by the language model at the current state. This adjustment term serves as a test-time adaptation to the test dataset. The final bias-resistant value function integrates both the trained value network and this adjustment term as:

V(s, a) = 0, if s' has no child nodes;
V(s, a) = 1, else if s' is the proved state;
V(s, a) = ½ (α + V_θ(s, a)), otherwise,    (4)

where s' is the next state. Unlike the intrinsic reward introduced by DeepSeek-Prover-V1.5 (Xin et al., 2024b), which only considers whether the search expands nodes, we consider both the expansion capability of the policy network and the generalizability of the value network, forming our final bias-resistant value function. The adjustment term can be interpreted as a form of test-time adaptation to the distribution of the test data, and thus can mitigate the domain gap.
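As a rough sketch of how Eqs. 3 and 4 could be realized, the snippet below pairs the Bradley-Terry negative log-likelihood with the inference-time adjustment. The function takes pre-computed scores V_θ(s, a) for the two actions of each preference pair; all names are illustrative assumptions, not the authors' implementation.

```python
# A minimal PyTorch sketch of the bias-resistant value function.
import torch
import torch.nn.functional as F

def bradley_terry_loss(v_pos, v_neg):
    """Negative log-likelihood of Eq. 3 for a batch of preference pairs.

    v_pos, v_neg: (B,) tensors of scores V_theta(s, a_pos) and V_theta(s, a_neg).
    """
    # -log sigmoid(V(s,a_pos) - V(s,a_neg)) equals -log P(a_pos > a_neg | s) in Eq. 3.
    return -F.logsigmoid(v_pos - v_neg).mean()

def bias_resistant_value(v_theta, num_valid, num_sampled,
                         is_proved=False, has_children=True):
    """Inference-time value of Eq. 4 with the adjustment term alpha = e / E."""
    if not has_children:   # s' has no child nodes
        return 0.0
    if is_proved:          # s' is the proved state
        return 1.0
    alpha = num_valid / num_sampled   # ratio of Lean-verified tactics, e / E
    return 0.5 * (alpha + v_theta)
```

The logsigmoid form is numerically stable and algebraically identical to the softmax ratio of Eq. 3, since exp(a)/(exp(a)+exp(b)) = sigmoid(a − b).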
4 EXPERIMENTS

In this section, we evaluate the theorem-proving performance of CARTS in Lean. We first describe the experimental setup, then present the main results, followed by an analysis of our method. Currently, theorem-proving methods are primarily categorized into two main types: whole-proof generation methods and tree search methods. Our approach is applicable exclusively to one-step tree search methods; therefore, we focus our comparison solely on this category.

4.1 EXPERIMENTAL SETUP

**Datasets.** We follow InternLM-Math (Ying et al., 2024b; Wu et al., 2024) and DeepSeek-Prover-V1.5 (Xin et al., 2024b), utilizing the miniF2F benchmark (Zheng et al., 2022) and the ProofNet benchmark (Azerbayev et al., 2023) for our evaluation. We specifically use the test split of miniF2F, same as (Xin et al., 2024b), which includes 244 problems spanning basic algebra and number theory and also contains challenging AIME and IMO problems. ProofNet is a benchmark for undergraduate-level mathematics, comprising 371 formal problems derived from widely-used undergraduate pure mathematics textbooks. It covers topics such as real and complex analysis, abstract algebra, and topology. This benchmark presents a greater challenge than miniF2F, posing significant difficulties for theorem provers. Although the original versions of both benchmarks are in Lean 3, we have modified them to Lean 4 for CARTS’s evaluation, aligning with the development of the Lean community.

Table 1: Results on the miniF2F-test for various models and search methods. The highest performance for each search method is highlighted in **bold**.

| Model | Sample Budget | Search Method | miniF2F-test |
|---|---|---|---|
| _Tree Search Methods_ | | | |
| COPRA (Code Llama) (Thakur et al., 2024) | 500 | DFS | 5.7% |
| COPRA (GPT-3.5) (Thakur et al., 2024) | 60 | DFS | 9.0% |
| COPRA (GPT-4) (Thakur et al., 2024) | 60 | DFS | 26.6% |
| Llemma-7B (Azerbayev et al., 2024) | 32 × 100 | BFS | 26.2% |
| Llemma-34B (Azerbayev et al., 2024) | 32 × 100 | BFS | 25.8% |
| LLMStep (Welleck & Saha, 2023) | 32 × 100 | BFS | 27.9% |
| Curriculum Learning (Polu et al., 2023) | 8 × 512 | BFS | 29.6% |
| InternLM2-Math-7B (Ying et al., 2024b) | 32 × 100 | BFS | 30.3% |
| InternLM2-Math-Plus-7B (Ying et al., 2024a) | 32 × 100 | BFS | 43.4% |
| DeepSeek-Prover-V1.5-SFT (Xin et al., 2024b) | 3200 | RMaxTS | 53.5% |
| DeepSeek-Prover-V1.5-RL (Xin et al., 2024b) | 3200 | RMaxTS | 55.0% |
| Reprover-Lean4 (229M) (Yang et al., 2023) | 64 × 100 | BFS | 35.7% |
| | | MCTS | 36.5% |
| | | DT-Solver | 36.0% |
| | | CARTS | **37.7%** |
| InternLM2-Math-Plus-1.8B (Ying et al., 2024b) | 64 × 100 | BFS | 38.9% |
| | | MCTS | 39.3% |
| | | DT-Solver | 38.5% |
| | | CARTS | **41.0%** |
| StepProver (7B) (Wu et al., 2024) | 32 × 300 | BFS | 48.8% |
| | | MCTS | 46.7% |
| | | DT-Solver | 46.3% |
| | | CARTS | **49.6%** |

**Baseline models.** We include baselines representing classical and state-of-the-art neural theorem proving in Lean.
**COPRA** (Thakur et al., 2024) is an in-context learning agent that utilizes general language models to generate tactics for finding the final proof. **Llemma** (Azerbayev et al., 2024) is trained on extensive mathematical corpora. Additionally, we incorporate advanced models such as **LLMStep** (Welleck & Saha, 2023), **Reprover** (Yang et al., 2023), **Curriculum Learning** (Polu et al., 2023), **InternLM2-Math** (Ying et al., 2024b), and **StepProver** (Wu et al., 2024). All these models are based on one-step tree search methods. We also include **DeepSeek-Prover-V1.5** (Xin et al., 2024b), which integrates whole proof generation and tree search. However, it is not suitable for our CARTS, and thus we mark it in gray.

**Search methods.** Baseline models employ various search methods, such as depth-first search (**DFS**) (Thakur et al., 2024), best-first search (**BFS**) (Yang et al., 2023), and Monte Carlo tree search (**MCTS**). Additionally, **DT-Solver** extends MCTS using virtual nodes. We compared the performance of BFS, MCTS, DT-Solver and CARTS on Reprover (Yang et al., 2023), InternLM2-Math (Ying et al., 2024b), and InternLM2-StepProver (Wu et al., 2024). For MCTS and DT-Solver, we replace the value network with the intrinsic reward (Xin et al., 2024b) for simplification. It is noteworthy that multiple tree search attempts with different seeds can be applied and ensembled (Polu & Sutskever, 2020; Lin et al., 2024; Xin et al., 2024a); however, due to computational cost limitations, we only compared the results for a single tree search attempt.

**Metrics.** We evaluate the performance of various search methods using the pass@1 metric with a budget B. Similar to (Xin et al., 2024b), if B is a single value, it indicates the number of model generations used in tree expansions. If B = E × T, E represents the number of tactics generated per expansion, and T denotes the number of expansion iterations.

Table 2: Results on ProofNet for various models and search methods.

| Model | Sample Budget | Search Method | ProofNet |
|---|---|---|---|
| Reprover-Lean4 (Yang et al., 2023) | 64 × 100 | BFS | 11.1% |
| | | MCTS | 11.7% |
| | | CARTS | **11.9%** |
| StepProver (Wu et al., 2024) | 32 × 300 | BFS | 18.1% |
| | | MCTS | 18.3% |
| | | CARTS | **18.8%** |

Figure 2: Improvement curve in pass@1 on the miniF2F-test as the expansion budget varies. Left illustrates Reprover-Lean4 (Yang et al., 2023) and right illustrates StepProver (Xin et al., 2024b).

**Experimental details.** In terms of parameter settings, for Reprover-Lean4 (Yang et al., 2023), we set λ = 0.8, and for InternLM2-Math-Plus-1.8B (Ying et al., 2024b) and StepProver (Wu et al., 2024), we set λ = 0.9. Additionally, we set k = 8 for all models. We use the text embedding model intfloat/e5-small-v2 (Wang et al., 2022) as the encoder f_enc. Details regarding data collection and training for the bias-resistant value function are presented in Appendix A.

4.2 MAIN RESULTS

In Table 1, we report the pass@1 success rate on the miniF2F-test benchmark for various models and search methods. Our proposed CARTS surpasses all compared search methods when the policy model is fixed. The results encompass different architectures and parameter sizes of language models, demonstrating that our method is more effective at the search stage regardless of the policy model. Notably, we achieve a 49.6% success rate on the miniF2F-test, representing state-of-the-art performance among one-step tree search methods. Table 2 demonstrates the results on the ProofNet benchmark.
Due to the dataset’s complexity, current policy models exhibit low accuracy, which constrains search performance. However, our proposed CARTS also achieves the highest performance compared to other search methods when the budget is fixed. Additionally, to demonstrate that CARTS has superior search efficiency, we present a comparison of the pass@1 performance among the four search methods on the miniF2F-test when varying the expansion budget, as illustrated in Figure 2. It is evident that while the performance of different search methods

Idea Generation Category:
Conceptual Integration
VQwI055flA
# LIGHTNING-FAST IMAGE INVERSION AND EDITING FOR TEXT-TO-IMAGE DIFFUSION MODELS

**Dvir Samuel** 1,3,∗ **Barak Meiri** 1,2 **Haggai Maron** 4,5 **Yoad Tewel** 2,5 **Nir Darshan** 1 **Shai Avidan** 2 **Gal Chechik** 3,5 **Rami Ben-Ari** 1

1 OriginAI, 2 Tel-Aviv University, 3 Bar-Ilan University, 4 Technion, 5 NVIDIA Research Israel

Figure 1: Consecutive real image inversions and editing using our GNRI with Flux.1-schnell (Black-Forest, 2024) (0.4 sec on an A100 GPU).

ABSTRACT

Diffusion inversion is the problem of taking an image and a text prompt that describes it and finding a noise latent that would generate the exact same image. Most current deterministic inversion techniques operate by approximately solving an implicit equation and may converge slowly or yield poor reconstructed images. We formulate the problem as finding the roots of an implicit equation and develop a method to solve it efficiently. Our solution is based on Newton-Raphson (NR), a well-known technique in numerical analysis. We show that a vanilla application of NR is computationally infeasible, while naively transforming it into a computationally tractable alternative tends to converge to out-of-distribution solutions, resulting in poor reconstruction and editing. We therefore derive an efficient guided formulation that converges quickly and provides high-quality reconstructions and editing. We showcase our method on real image editing with three popular open-sourced diffusion models: Stable Diffusion, SDXL-Turbo, and Flux, with different deterministic schedulers. Our solution, **Guided Newton-Raphson Inversion**, inverts an image within 0.4 sec (on an A100 GPU) for few-step models (SDXL-Turbo and Flux.1), opening the door for interactive image editing. We further show improved results in image interpolation and generation of rare objects.

1 INTRODUCTION

Text-to-image diffusion models (Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; Balaji et al., 2022) can generate diverse and high-fidelity images based on user-provided text prompts. These models are further used in several important tasks that require _inversion_, namely, discovering an initial noise (seed) that, when subjected to a backward (denoising) diffusion process along with the prompt, generates the input image.

∗ Correspondence to: Dvir Samuel dvirsamuel@gmail.com

Inversion is used in various tasks including image editing (Hertz et al., 2022), personalization (Gal et al., 2023b;a), seed noise interpolation for semantic augmentation (Samuel et al., 2023) and generating rare concepts (Samuel et al., 2024). As inversion became a critical building block in various tasks, several inversion methods have been suggested. Denoising Diffusion Implicit Models (DDIM) (Song et al., 2021) introduced a deterministic and fast sampling technique for image generation with diffusion models. DDIM _inversion_ transforms an image back into a latent noise representation by approximating the inversion equation. Although this approximation makes it very fast, it also introduces an approximation error (as explained in Section 3), causing noticeable distortion artifacts in the reconstructed images. This is particularly noticeable in few-step diffusion models (Luo et al., 2023; Sauer et al., 2023; Esser et al., 2024), which have large gaps between the diffusion time-steps and where inference is achieved with only 2-4 steps.
Several efforts have been made to address inconsistencies in DDIM inversion, which result in poor reconstruction quality (Mokady et al., 2023; Wallace et al., 2023; Pan et al., 2023). For example, (Pan et al., 2023; Garibi et al., 2024) used fixed-point iterations to solve the DDIM-inversion equation. Another approach (Hong et al., 2024) tackled the minimization of the residual error in DDIM inversion via gradient descent. Although these methods demonstrate improvements over previous approaches, their editing quality and computational speed are still limited. In this paper, we frame the deterministic diffusion inversion problem as finding the roots of an implicit function. We propose a solution based on the Newton-Raphson (NR) numerical scheme (Burden et al., 2015), a very fast and well-tested optimization method for root finding. We further define a scalar NR scheme for this problem that can be computed efficiently. However, NR has a main disadvantage in our context. When applied to our highly non-convex function, it tends to find roots that drift outside the distribution of latents that the diffusion model was trained on. To mitigate this, we introduce a guidance term that leverages prior knowledge of the likely solution locations, steering the NR iterations toward in-distribution roots. This is done by adding this prior knowledge on the noise distribution at each diffusion step. Ultimately, our approach enables rapid inversion while maintaining state-of-the-art reconstruction and editing quality. We name our approach GNRI, for **G**uided **N**ewton-**R**aphson **I**nversion. GNRI converges in just a few iteration steps at each diffusion time step and achieves high-quality image inversions (in terms of reconstruction accuracy). In practice, 1-2 iterations are sufficient for convergence that yields significantly more accurate results than other inversion methods. Importantly, GNRI requires no model training, fine-tuning, prompt optimization, or additional parameters, and is compatible with all pre-trained diffusion and flow-matching models with deterministic schedulers. We demonstrate its effectiveness for inverting different deterministic schedulers used with a latent diffusion model (Stable Diffusion) (Rombach et al., 2022), a few-step latent diffusion model (SDXL-Turbo) (Sauer et al., 2023) and a few-step flow matching model (Flux.1) (Black-Forest, 2024). Figure 1 demonstrates the quality and speed of GNRI for iterative editing using few-step diffusion models. Using such models, which require 4 denoising steps, our approach can edit real images within 0.4 seconds. This allows users to edit images _on the fly_, using text-to-image models. We conduct a comprehensive evaluation of GNRI. First, we directly assess the quality of inversions found with GNRI by measuring reconstruction errors compared to deterministic inversion approaches. Our method surpasses all other methods with a ×2 to ×40 speedup. We then demonstrate the benefit of GNRI in two downstream tasks: (1) In _image editing_, GNRI smoothly changes fine details in the image in a consistent and coherent way, whereas previous methods struggle to do so. (2) In _seed interpolation_ and _rare concept generation_ (Samuel et al., 2023), tasks that require diffusion inversion, GNRI yields more accurate seeds, resulting in superior generated images, both qualitatively and quantitatively.
2 RELATED WORK

Text-to-image diffusion models (Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; Balaji et al., 2022; Esser et al., 2024) translate random samples (seeds) from a high-dimensional space, guided by a user-supplied text prompt, into corresponding images. DDIM (Song et al., 2021) is a widely used deterministic scheduler that demonstrates the inversion of an image to its latent noise seed. When applied to the inversion of text-guided diffusion models, DDIM inversion suffers from low reconstruction accuracy, which is reflected in further tasks, particularly when applied to few-step diffusion models that require only 3-4 denoising steps. This happens because DDIM inversion relies on a linear approximation, causing the propagation of errors that result in inaccurate image reconstruction. Recent studies (Mokady et al., 2023; Wallace et al., 2023; Pan et al., 2023; Hong et al., 2024) address this limitation. Null-text inversion (Mokady et al., 2023) optimizes the embedding vector of an empty string. This ensures that the diffusion process calculated using DDIM inversion aligns with the reverse diffusion process. (Miyake et al., 2023) replace the null-text embedding with a prompt embedding instead. This enhances convergence time and reconstruction quality but results in inferior image editing performance. In both (Mokady et al., 2023) and (Miyake et al., 2023), the optimized embedding must be stored, resulting in nearly 3 million additional parameters for each image (using 50 denoising steps of Stable Diffusion (Rombach et al., 2022)). EDICT (Wallace et al., 2023) introduced invertible neural network layers, specifically affine coupling layers, to calculate both backward and forward diffusion paths. While effective, it comes at the cost of prolonging inversion time. BDIA (Zhang et al., 2023) introduced a novel approximation tailored for EDICT, enhancing its computational efficiency while maintaining the accuracy of diffusion inversion. However, it still demands considerably more time, approximately ten times longer than DDIM inversion. TurboEdit (Wu et al., 2024) introduced an encoder-based iterative inversion technique for few-step diffusion models, enabling precise image inversion and disentangled image editing. DirectInv (Ju et al., 2023) separates the source and target diffusion branches for improved text-guided image editing, leading to better content preservation and edit fidelity. AIDI (Pan et al., 2023) and ReNoise (Garibi et al., 2024) used a fixed-point iteration technique at each inversion step to find a better solution for the implicit function posed by the DDIM equations, thereby achieving improved accuracy. ExactDPM (Hong et al., 2024), similar to (Pan et al., 2023; Garibi et al., 2024), proposed gradient-based methods for finding effective solutions to the implicit function. Alternative methods proposed in (Huberman et al., 2023; Brack et al., 2024; Deutch et al., 2024) use a stochastic DDPM inversion instead of a deterministic one. Although this can recover the exact image during reconstruction, the stochastic nature of these approaches usually causes them to struggle with reconstructing fine details during editing. These methods require many steps for high-quality editing, making the process time-consuming. Additionally, they demand a larger memory footprint, as they need to store T + 1 latents for each inverted image, restricting their use to reconstruction and editing tasks only.
We also compare our approach with these methods in terms of editing quality and demonstrate that our method outperforms them all.

3 PRELIMINARIES

We first establish the fundamentals of Denoising Diffusion Implicit Models (DDIMs). In this model, a _backward pass_ (denoising) is the process that generates an image from a seed noise. A _forward pass_ is the process of adding noise gradually to an image until it becomes pure Gaussian noise. _Inversion_ (Song et al., 2021) is similar to the forward pass, but the goal is to end with a specific Gaussian noise that would generate the image if denoised.

**Forward pass in diffusion models.** Diffusion models (Rombach et al., 2022) learn to generate images through a systematic process of iteratively adding Gaussian noise to a latent data sample until the data distribution is mostly noise. The data distribution is subsequently gradually restored through a reverse diffusion process initiated with a random sample (noise seed) from a Gaussian distribution. In more detail, the process of mapping a (latent) image to noise is a Markov chain that starts with z_0 and gradually adds noise to obtain latent variables z_1, z_2, ..., z_T, following a distribution q(z_1, z_2, ..., z_T | z_0) = ∏_{t=1}^{T} q(z_t | z_{t−1}), where ∀t: z_t ∈ ℝ^d, with d denoting the dimension of the space. Each step in this process is a Gaussian transition, that is, q follows a Gaussian distribution

q(z_t | z_{t−1}) := N(z_t; μ_t = γ_t z_{t−1}, Σ_t = β_t I),    (1)

parameterized by a schedule (β_0, γ_0), ..., (β_T, γ_T) ∈ (0, 1)². As discussed below, in DDIM, γ_t = √(1 − β_t), and in Euler scheduling β_t = γ_t = 1 for all t.

**Deterministic-schedule diffusion models.** It has been shown that one can “sample” in a deterministic way from the diffusion model, and this significantly accelerates the denoising process. Several deterministic schedulers have been proposed (Song et al., 2021; Lu et al., 2022; Karras et al., 2022); we describe here two popular ones, the DDIM and Euler schedulers.

_Denoising Diffusion Implicit Models (DDIM)._ Sampling from diffusion models can be viewed as solving the corresponding diffusion Ordinary Differential Equations (ODEs) (Lu et al., 2022). DDIM (Song et al., 2021), a popular deterministic scheduler, proposed to denoise a latent by

z_{t−1} = √(α_{t−1}/α_t) · z_t − √(α_{t−1}) · Δψ(α_t) · ε_θ(z_t, t, p),    (2)

where α_t = 1 − β_t, ψ(α) = √(1/α − 1), Δψ(α_t) = ψ(α_t) − ψ(α_{t−1}), and ε_θ(z_t, t, p) is the output of a network that was trained to predict the noise to be removed.

_Euler schedulers._ Euler schedulers follow a similar deterministic update rule

z_{t−1} = z_t + (σ_{t−1} − σ_t) · v_θ(z_t, t, p),    (3)

where σ_t, σ_{t−1} are scheduling parameters and v_θ is the output of the network that was trained to predict the velocity.
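For concreteness, below is a minimal, hedged sketch of a single deterministic denoising step for each scheduler. The names `eps_model` and `vel_model` stand in for the pretrained predictors ε_θ and v_θ, and the schedule values are passed explicitly; this illustrates Eqs. 2 and 3 under these assumptions, not any particular library's API.

```python
# A minimal sketch of one deterministic denoising step (works with NumPy
# arrays or PyTorch tensors for z_t). All names are illustrative.

def ddim_step(z_t, t, prompt, alpha_t, alpha_prev, eps_model):
    """Eq. 2: z_{t-1} = sqrt(a_{t-1}/a_t) z_t - sqrt(a_{t-1}) dpsi(a_t) eps(z_t,t,p)."""
    psi = lambda a: (1.0 / a - 1.0) ** 0.5      # psi(alpha) = sqrt(1/alpha - 1)
    d_psi = psi(alpha_t) - psi(alpha_prev)      # delta-psi(a_t) = psi(a_t) - psi(a_{t-1})
    eps = eps_model(z_t, t, prompt)             # predicted noise eps_theta(z_t, t, p)
    return (alpha_prev / alpha_t) ** 0.5 * z_t - alpha_prev ** 0.5 * d_psi * eps

def euler_step(z_t, t, prompt, sigma_t, sigma_prev, vel_model):
    """Eq. 3: z_{t-1} = z_t + (sigma_{t-1} - sigma_t) * v(z_t, t, p)."""
    return z_t + (sigma_prev - sigma_t) * vel_model(z_t, t, prompt)
```

Both steps are deterministic in z_t, which is exactly what allows the inversion of the next section to be posed as solving an implicit equation rather than sampling.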
**Diffusion inversion.** We now focus on inversion in the latent representation. We describe our approach first for DDIM; the extension to other schedulers is discussed below. Given an image representation z_0 and its corresponding text prompt p, we seek a noise seed z_T that, when denoised, reconstructs the latent z_0. Several approaches were proposed for this task. DDIM inversion rewrites Eq. (2) as:

z_t = f(z_t),  f(z_t) := √(α_t/α_{t−1}) · z_{t−1} + √(α_t) · Δψ(α_t) · ε_θ(z_t, t, p).    (4)

DDIM inversion approximates this implicit equation in z_t by replacing z_t with z_{t−1}:

f(z_t) ≈ √(α_t/α_{t−1}) · z_{t−1} + √(α_t) · Δψ(α_t) · ε_θ(z_{t−1}, t, p).    (5)

The quality of the approximation depends on the difference z_t − z_{t−1} (a smaller difference yields a smaller error) and on the sensitivity of ε_θ to z_t. See (Dhariwal & Nichol, 2021; Song et al., 2021) for details. By applying Eq. (5) repeatedly for every denoising step t, one can invert an image latent z_0 to a latent z_T in the seed space. DDIM inversion is fast, but the approximation of Eq. (5) inherently introduces errors at each time step. As these errors accumulate, they cause the whole diffusion process to become inconsistent between the forward and the backward passes, leading to poor image reconstruction and editing (Mokady et al., 2023; Wallace et al., 2023; Pan et al., 2023). This is particularly noticeable in few-step and consistency models with a small number of denoising steps (typically 2-4 steps), where there is a significant gap between z_t and z_{t−1} (Garibi et al., 2024). This inversion technique can also be applied to other deterministic schedulers. For instance, for the Euler scheduler, one defines z_t = f(z_t), f(z_t) := z_{t−1} + (σ_t − σ_{t−1}) · v_θ(z_t, t, p), and this is approximated by z_{t−1} + (σ_t − σ_{t−1}) · v_θ(z_{t−1}, t, p).

**Iterative inversion and optimization methods.** Several papers proposed to improve the approximation using iterative methods. AIDI (Pan et al., 2023) and ReNoise (Garibi et al., 2024) proposed to directly solve Eq. (4) using fixed-point iterations (Burden, 1985), a widely-used method in numerical analysis for solving implicit functions. In a related way, (Hong et al., 2024) solves a more precise inversion equation, obtained by employing higher-order terms, using gradient descent.

**Newton-Raphson root finding.** The Newton-Raphson method is a widely used numerical technique for finding roots of a real-valued function F (Burden et al., 2015). It is particularly effective for solving equations of the form F(z) = 0, when F: ℝ^D → ℝ^D for an arbitrary dimension D. It provides fast convergence, typically quadratic, by requiring the evaluation of the function and the inversion of its Jacobian matrix. It has also been shown that, when initialized near a local extremum, NR may converge to that point (oscillating around it) (Kaw et al., 2003). The NR scheme, in general form, is given by

z_t^{k+1} = z_t^k − J(z_t^k)^{−1} F(z_t^k),    (6)

where J(z_t^k)^{−1} denotes the inverse of the Jacobian matrix J ∈ ℝ^{D×D} of F (all derivatives are with respect to z), and k stands for the iteration number. The iteration starts with an initial guess, z = z^0.
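As a simple illustration of the scheme in Eq. 6, the one-dimensional case below reduces the Jacobian to an ordinary derivative. The example function is arbitrary and chosen only to show the behavior of the update; it is not part of the inversion method itself.

```python
# A minimal sketch of the Newton-Raphson update (Eq. 6) in one dimension,
# where J(z)^{-1} F(z) reduces to F(z) / F'(z).

def newton_raphson(F, dF, z0, n_iters=10, tol=1e-8):
    """Iterate z^{k+1} = z^k - F(z^k) / F'(z^k) starting from the guess z0."""
    z = z0
    for _ in range(n_iters):
        step = F(z) / dF(z)    # scalar analogue of J(z)^{-1} F(z)
        z = z - step
        if abs(step) < tol:    # stop once the update is negligible
            break
    return z

# Example: the root of F(z) = z^2 - 2 near z0 = 1.5 converges to sqrt(2)
# in a handful of iterations, reflecting the quadratic convergence rate.
root = newton_raphson(lambda z: z * z - 2.0, lambda z: 2.0 * z, z0=1.5)
```

The next section explains why this scheme cannot be applied verbatim to the high-dimensional latent z_t, and how a scalar residual makes it tractable.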
Figure 2: **Newton-Raphson Inversion** iterates over the implicit function of Eq. 4, using the scheme of Eq. 10, at every time-step in the inversion path. Initialized with z_t^0 = z_{t−1}, it converges within ≈2 iterations to z_t. Each box denotes one inversion step; black circles correspond to intermediate latents in the denoising process; green circles correspond to intermediate Newton-Raphson iterations.

4 OUR METHOD: GUIDED NEWTON-RAPHSON INVERSION

Existing inversion methods are either fast-but-inaccurate, like DDIM inversion, or more precise-but-slow, like (Pan et al., 2023) and (Hong et al., 2024). This paper presents a method that achieves a balance of speed and improved precision compared to these existing approaches. To achieve this, we introduce two key ideas. First, we frame inversion as a scalar root-finding problem that can be solved efficiently, and use the well-known _Newton-Raphson_ root-finding method. Second, we make the observation that an inversion step is equivalent to finding a specific noise vector, and we _use a strong prior_ obtained from the distribution of that noise. During the iterative solution, this prior guides Newton-Raphson towards finding a root that adheres to the correct distribution of latents. See Figure 2 for an illustration.

4.1 FRAMING INVERSION AS EFFICIENT ROOT-FINDING

Diffusion inversion can be done by finding the fixed point of the function f(z) in Eq. 4. This is equivalent to finding the zero-crossings, or roots, z_t of the residual function r: ℝ^D → ℝ^D defined as r(z_t) := z_t − f(z_t). One could now apply _Newton-Raphson_ (NR) to find a root. However, in our case, z_t is a high-dimensional latent (D ≈ 16K in Stable Diffusion (Rombach et al., 2022)), making it computationally infeasible to explicitly compute the Jacobian (Eq. 6) and invert it. To address this computational limitation, we apply a norm over r(z_t), yielding a multi-variable _scalar_ function r̂: ℝ^D → ℝ⁺₀ with the same set of roots:

r̂(z_t) := ||z_t − f(z_t)||_1,    (7)

and seek the roots r̂(z_t) = 0, where ||·||_1 denotes the L1 norm, which simply sums the absolute values of the entries of the residual. Applying a norm to the residual has been done in previous work (Hong et al., 2024; Garibi et al., 2024); however, here it is used to reduce the Jacobian to a vector that can be computed quickly (see derivation in Appendix A). We call this method _Newton-Raphson Inversion_, NRI. This method is efficient and converges quickly in practice. However, NRI in our case has a key limitation: Eq. (4), and thus r̂, can have multiple solutions or local minima. The accelerated convergence of NR, due to its linear “extrapolation”, can cause it to “jump” to solutions outside the latent variable distribution of the diffusion model. Indeed, in practice, we find that solving Eq. (7) for r̂(z_t) = 0 often converges quickly to a solution that results in poor image reconstruction. We quantify and illustrate this phenomenon in Section 5.1. Next, we describe a remedy to this problem.

4.2 GUIDED NEWTON-RAPHSON INVERSION

The NR procedure discussed above may find roots that are far from the modes of the latent distribution. How can we guide NR to converge to useful roots among all possible ones? Note that determining z_t given z_{t−1} is equivalent to finding which noise term is added during the forward pass of the diffusion process.
Luckily, the distribution of that noise is known by the design of the diffusion process, and we can use it as prior information. More precisely, since each step in the diffusion process follows a Gaussian distribution q(z_t | z_{t−1}) (Eq. 1), we define a positive function

Idea Generation Category:
Cross-Domain Application
t9l63huPRt
# DENSE VIDEO OBJECT CAPTIONING FROM DISJOINT SUPERVISION

**Xingyi Zhou*** **Anurag Arnab*** **Chen Sun** **Cordelia Schmid**
Google DeepMind

ABSTRACT

We propose a new task and model for _dense video object captioning_ – detecting, tracking and captioning trajectories of objects in a video. This task unifies spatial and temporal localization in video, whilst also requiring fine-grained visual understanding that is best described by natural language. We propose a unified model, and demonstrate how our end-to-end approach is more accurate and temporally coherent than a multi-stage pipeline combining state-of-the-art detection, tracking, and captioning models. Moreover, we propose a training strategy based on a mixture of disjoint tasks, which allows us to leverage diverse, large-scale datasets which supervise different parts of our model. Although each pretraining task only provides weak supervision, they are complementary and, when combined, result in noteworthy zero-shot ability and serve as strong initialization for additional finetuning to further improve accuracy. We carefully design new metrics capturing all components of our task, and show how we can repurpose existing video grounding datasets (e.g., VidSTG and VLN) for our new task. We show that our model improves upon a number of strong baselines for this new task. Furthermore, we can apply our model to the task of spatial grounding, outperforming prior state-of-the-art on VidSTG and VLN, without explicitly training for it. Our code is [available at https://github.com/google-research/scenic.](https://github.com/google-research/scenic/tree/main/scenic/projects/densevoc)

1 INTRODUCTION

Powered by gigantic datasets and models, _language_ is becoming the output modality of the most capable artificial intelligence models (Team et al., 2023; Alayrac et al., 2022; Ouyang et al., 2022; Li et al., 2023; Liu et al., 2023; Tong et al., 2024; Li et al., 2024a). Language unifies different tasks with the same output space (Raffel et al., 2020; Chen et al., 2023a), is more descriptive than discrete class labels (Wu et al., 2022a; Long et al., 2023), and naturally facilitates zero-shot prediction of novel tasks (Radford et al., 2021; Brown et al., 2020). Inspired by advances in natural language understanding, the vision community has explored language in a number of tasks including image captioning (Chen et al., 2015), dense image captioning (Krishna et al., 2017b), question answering (Antol et al., 2015), video captioning (Monfort et al., 2021) and representation learning (Radford et al., 2021). However, likely due to the scarcity of large-scale, aligned training data, we are not aware of any existing single vision-language model that unifies both fine-grained **spatial** (by detecting objects) and **temporal** (by reasoning across time in videos) understanding. In this paper, we propose a new task and model for _dense video object captioning_ (Dense VOC) – the task of generating captions of trajectories of all objects in a video (Fig. 1). Dense VOC requires understanding across space, time, and language (Fig. 2), and is therefore a superset of existing vision tasks, namely object detection (Everingham et al., 2015; Lin et al., 2014), multi-object tracking (Dendorfer et al., 2021; Dave et al., 2020) and captioning (Chen et al., 2015). It offers a broad range of applications, such as sports analysis, wildlife monitoring, and behavioral analysis.
A prominent challenge for training our model is that datasets with captioned trajectories are scarce. However, annotations for each sub-task, or even each combination of the sub-tasks, are abundant. For example, we can train our object proposal component using image-level object detection labels from COCO (Lin et al., 2014), and the captioning component from video-level captioning datasets like SMiT (Monfort et al., 2021). These disjoint training tasks are complementary, and in combination supervise our entire model. This enables us to perform our Dense VOC task in a zero-shot manner, and we show that we can achieve noteworthy performance despite not having access to any full, captioned object trajectories during training.

∗ Equal contribution. {zhouxy, aarnab}@google.com

[Figure: video frames at t = 1, 2, 3 with the predicted caption “A child in blue clothes plays basketball with another child”]
Figure 1: **Overview of the dense video object captioning (Dense VOC) task.** Given a video, we predict object trajectories (identities denoted by colors) and their natural language description. We show a video from the VidSTG (Zhang et al., 2020) validation set.

Furthermore, this pretraining serves as a powerful initialization for finetuning on the full Dense VOC task, where limited annotations are available. Another challenge in our task is to produce holistic and consistent captions for objects across frames. Note that a baseline of applying a strong, dense image captioning model per-frame, and then linking objects together, is poorly suited to this scenario: the captions at each frame are likely to be different due to subtle appearance changes across frames. This motivates our end-to-end trained model, which includes a novel end-to-end tracking algorithm that aggregates features of the same object across time, enabling the subsequent captioner to leverage global features to produce coherent captions.

Figure 2: **Overview of Dense VOC.** Our problem involves understanding across space, time, and language, and thus encompasses other vision tasks, which typically consider one or two of these axes. We show these subtasks are complementary, and pretraining on them enables zero-shot generalization to Dense VOC.

Although we are the first to our knowledge to study Dense VOC, we can still repurpose existing video grounding datasets for evaluation and domain-specific finetuning. We use VidSTG (Zhang et al., 2020) and VLN (Voigtlaender et al., 2023), originally designed for spatiotemporal sentence grounding: instead of finding an object tube given a sentence query (grounding), we predict object trajectories directly and use the sentence queries as the ground truth captions. In addition, we show that our generative model trained for Dense VOC can perform grounding by simply selecting the bounding boxes with the maximum likelihood of producing the query sentence. We also develop a new metric that jointly measures captioning, detection and tracking accuracy by extending HOTA (Luiten et al., 2021), the most popular metric for multi-object tracking. Experiments show that our end-to-end trained Dense VOC model outperforms baselines consisting of strong, per-task models by a substantial margin, producing more accurate and inherently temporally consistent captions. Moreover, we achieve significant improvements from our disjoint, multi-dataset training.
We additionally show how we can readily apply our model to related domain-specific datasets: by finetuning our model on a recent person tracking and captioning dataset, BenSMOT (Li et al., 2024b), we outperform prior work by 18.2 points. Furthermore, by applying our generative captioning model to the discriminative grounding task, we are able to outperform dedicated spatial grounding models on both VidSTG and VLN. In summary, we propose the following contributions:

1. We propose the new task of Dense Video Object Captioning. We propose novel evaluation metrics, and repurpose existing grounding datasets for evaluation.
2. We design an end-to-end architecture for our task, with a novel tracking algorithm and feature aggregator that ensures temporally consistent captions. Unlike conventional offline trackers, our tracker is trained end-to-end with the model and produces long-term trajectory features for subsequent captioning.
3. We show our model can be trained without full annotations for the task, with a mixture of disjoint datasets which supervise different parts of our model.
4. We further show how our models generalize to downstream video grounding tasks, achieving state-of-the-art results on two datasets, without explicitly being trained for grounding.
5. Moreover, we significantly improve the state-of-the-art on the BenSMOT dataset (Li et al., 2024b) for Semantic Multi-Object Tracking.

2 RELATED WORK

**Image captioning** (Chen et al., 2015; Anderson et al., 2018; Xu et al., 2015; Rennie et al., 2017) describes the content of an image with language. State-of-the-art methods map the input image to output text by using multi-modal models (Jiang et al., 2020; Desai & Johnson, 2021; Li et al., 2020; Zhang et al., 2021a; Li et al., 2023; Yu et al., 2022) pretrained on large datasets (Sharma et al., 2018; Radford et al., 2021). For example, GIT (Wang et al., 2022) simply forwards vision tokens from a ViT encoder (Dosovitskiy et al., 2021) to an auto-regressive language decoder (Vaswani et al., 2017; Devlin et al., 2019). Similar ideas apply to **video captioning** (Xu et al., 2016; Zhou et al., 2018; Monfort et al., 2021), by concatenating (Wang et al., 2022) or pooling (Yan et al., 2022) features from each frame, before feeding them to an auto-regressive text decoder. Our work builds on existing captioning architectures (Wang et al., 2022), and extends them to trajectory captioning using our end-to-end model and weak supervision (Monfort et al., 2021; Krishna et al., 2017b; Lin et al., 2014).

**Dense object captioning**, in contrast, detects objects in an image and describes them with text (Johnson et al., 2016; Li et al., 2019; Shao et al., 2022; Wu et al., 2022a). It was popularized by the Visual Genome (Krishna et al., 2017b) dataset, which contains full annotations for the task. Early work, DenseCap (Johnson et al., 2016), used a one-stage detector (Redmon et al., 2016) followed by an LSTM text decoder (Hochreiter & Schmidhuber, 1997) on dense feature maps. Most recently, GRiT (Wu et al., 2022a) built upon the state-of-the-art image captioning architecture of GIT (Wang et al., 2022), and generated object captions, also with a transformer decoder (Vaswani et al., 2017), from RoI-pooled (He et al., 2017) image features. Our model advances architectures like GRiT to videos and incorporates end-to-end tracking.
We also note that **dense video captioning** in the literature refers to the task of localizing and captioning multiple events _temporally_ in videos (Krishna et al., 2017a; Zhou et al., 2018; Wang et al., 2021a; Yang et al., 2023a). Our task, in contrast, involves tracking and captioning objects in a video, and therefore requires _spatial_ localization, which is why we name our task “dense video object captioning”.

**Multi-object tracking** detects objects and tracks them with a consistent identity label. The predominant approach is tracking-after-detection (Bewley et al., 2016; Zhang et al., 2021b; Du et al., 2021), _i.e._, first running detectors on each frame and then using a separate tracker to link them. While this works well for existing benchmarks with only a few classes (Dendorfer et al., 2021; Geiger et al., 2012; Yang et al., 2019), it is more challenging in our case: we need tracks _before_ captioning to have a single, consistent textual output for the whole trajectory. Thus, our work follows **end-to-end multi-object tracking** (Cheng et al., 2022; Li et al., 2022a; Wang et al., 2021c; Zhou et al., 2022b). We adopt the global tracker GTR (Zhou et al., 2022b), which casts tracking as pairwise association among all objects within a video. Whilst GTR applies a sliding-window-based identity association algorithm during inference as a post-processing step, we design an efficient algorithm to perform this process end-to-end. This is necessary for our task, since our trajectory features are used by a subsequent captioning module which is trained jointly. We are not aware of prior work which efficiently assigns object identities and corresponding features to tracks, and trains end-to-end through this process. Finally, note that **video object tracking and segmentation** (Yang et al., 2021; 2023b; Yang & Yang, 2022; Cheng & Schwing, 2022; Cheng et al., 2024) focuses on following only a _single_ object which is given in the first frame (Perazzi et al., 2016; Xu et al., 2018). This is therefore a different setting from our task of detecting, tracking and captioning multiple objects.

**Video object grounding** (Zhang et al., 2020; Voigtlaender et al., 2023) finds a spatio-temporal tube given a video and a query sentence as inputs. Existing, discriminative methods (Zhang et al., 2020; Yang et al., 2022; Jin et al., 2022; Su et al., 2021) co-embed visual and text inputs, and use the sentence feature to find the corresponding object. In contrast, we use our generative language model for this task by selecting the object with the highest likelihood of producing the query. To our knowledge, we are the first work to explore the alternate paradigm of generative models for this task. Finally, we note that these tasks are also related to video-referring segmentation (Bellver et al., 2020; Wu et al., 2022b; Yu et al., 2016), which grounds textual queries to segmentation masks. Segmentation, however, is not the focus of our work. Concurrent to our work, BeyondMOT (Li et al., 2024b) proposes a video object tracking and captioning benchmark and model. We highlight two differences: 1. Li et al. (2024b) uses a frame-by-frame tracker similar to our baselines (Tab. 2), while we propose a novel end-to-end tracker. 2. Our work aims to track and caption **all objects** in the video, while Li et al. (2024b) handles only **persons**.
[Figure: model diagram with three modules — a class-agnostic detector, a tracking module (grouping, Sec. 3.2, Alg. 1), and a trajectory captioning module (aggregator, Sec. 3.3, followed by a language decoder)]
Figure 3: **Overview of our model.** Our end-to-end model has three modules: First, it produces object proposals per-frame using a class-agnostic detector (left, trained with detection loss, L_object). These object proposals are then passed to an end-to-end tracking module that groups objects into trajectories (middle, trained with association loss, L_assoc). The identities produced by the tracking module are used to aggregate features which are then fed to a language decoder to produce the final caption (right, trained with caption loss L_caption). Our model can be trained end-to-end with partial supervision on different and disjoint datasets to provide zero-shot Dense VOC capabilities.

As a result, our task is much more challenging, and we show that our model yields superior performance on their benchmark. OW-VISCap (Choudhuri et al., 2024), on the other hand, augments a video segmentation model (Cheng et al., 2022) with a language model head (OPT with 2.7 billion parameters (Zhang et al., 2022a)) for video segmentation and captioning. In contrast, our model is trained flexibly using our disjoint pretraining, which enables us to achieve better detection and tracking performance whilst still using a substantially smaller model.

3 METHOD

As shown in Fig. 3, our end-to-end model consists of interlinked heads for object proposal, tracking, and captioning the resulting trajectories. Before introducing our novel components, we review prior techniques for captioning and dense object captioning in images (Wu et al., 2022a; Wang et al., 2022).

3.1 BACKGROUND

Image captioning maps an input image, I ∈ ℝ^{H×W×3}, to a caption c = (y_1, y_2, ..., y_{n_t}), which is a sequence of up to n_t text tokens from a given vocabulary. The minimal set of components is an image encoder, followed by a text decoder (Vaswani et al., 2017). The encoder maps the input image I to a feature representation f ∈ ℝ^{n_v×d} consisting of n_v tokens with dimensionality d.
The subsequent text decoder is auto-regressive (Graves, 2013) – it predicts the next text token, y_i, as a function of both the image features, **f**, and the previously generated text tokens, **y**_{0:i−1}, denoted by y_i = Decode(**f**, **y**_{0:i−1}). Note that the first step of decoding begins with y_0 = BOS, a special beginning-of-sentence token, and the caption ends when the end-of-sentence token, EOS, is output by the model. This simple image captioning model has been demonstrated to be effective and scalable by GIT (Wang et al., 2022), achieving state-of-the-art results across a number of captioning datasets. GRiT (Wu et al., 2022a) extends the approach further to dense object captioning of images: Here, the authors use an object proposal network (Zhou et al., 2019) to produce a set of K class-agnostic bounding boxes, b_1, b_2, ..., b_K. Features corresponding to each of these objects are obtained using RoIAlign (He et al., 2017), resulting in a localized feature, f_k ∈ R^{r×r×d}, where r = 7 is the output resolution of RoIAlign. Each of these grid features is flattened into f_k ∈ R^{r²×d} and decoded independently by the text decoder, as done in GIT. Therefore, the loss used to train a GRiT model consists of L = L_object + L_caption, where L_caption is a cross-entropy loss over all text tokens in the vocabulary, and L_object consists of bounding box regression and objectness terms, as standard in the object detection literature (Zhou et al., 2019; Ren et al., 2015; Lin et al., 2017). We now describe how we extend object captioning to videos by tracking object proposals over time (Sec. 3.2) and aggregating trajectory features and captioning them (Sec. 3.3) in an end-to-end fashion. Section 3.4 explains how we train our model, whilst Sec. 3.5 describes how we apply our model directly to video object grounding tasks.

3.2 E ND - TO - END TRACKING

As shown in Fig. 3 (left), we first produce object proposals separately for each frame. Tracking then aims to assign each object in each frame a unique trajectory identity δ ∈ N. We define **f**_k^t ∈ R^{r²×d} as the ROI feature of object proposal k in frame t, and **F** = [**f**_k^t]_{t=1,k=1}^{T,K} as the concatenation of all object features in the video.

**Algorithm 1:** Identity assignment from association matrix. This greedy algorithm can be implemented efficiently on accelerators, enabling end-to-end training.

Input: Association matrix **A** ∈ R^{TK×TK}  // T: num. frames. K: num. objects per frame.
Hyperparameters: Association score threshold θ
Output: Identities for each object δ ∈ N^{TK}
M ← T × K  // Number of total objects.
A ← preprocess(A)  // Preprocess A so that object pairs in the same frame have a score of 0.
Â ← (A ≥ θ).astype(bool)  // Binary matrix for possible merges.
δ ← zeros(M)  // Initialize output identities, shape (M,).
id_count ← 0  // Initialize ID count.
while Â.any() > 0 do
    track_len ← Â.sum(axis=1)  // Number of objects in each merge.
    i ← track_len.argmax()  // Find the longest track to merge.
    id_count ← id_count + 1  // Create a new identity.
    δ ← δ + id_count · Â_i  // Assign the current track a new ID using row Â_i.
    Â ← Â − (Â_{i,·} | Â_{·,i})  // Remove merged indices. “|” is logical or.
end
return δ
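For clarity, here is a minimal NumPy sketch of Algorithm 1. The paper's accelerator-friendly, end-to-end trainable implementation will differ; the `frame_ids` argument and the self-merge convention are assumptions of this sketch, standing in for the `preprocess` step.

```python
import numpy as np

def assign_identities(A: np.ndarray, frame_ids: np.ndarray, theta: float) -> np.ndarray:
    """Greedy identity assignment from an association matrix (sketch of Alg. 1).

    A:         (M, M) association scores over all object proposals in a video.
    frame_ids: (M,) frame index of each proposal; same-frame pairs must not merge.
    theta:     association score threshold.
    Returns (M,) integer identities (0 = never assigned, e.g. background).
    """
    A = A.copy()
    A[frame_ids[:, None] == frame_ids[None, :]] = 0.0  # preprocess: zero same-frame pairs
    A_hat = A >= theta                                 # binary matrix of possible merges
    np.fill_diagonal(A_hat, True)                      # assumption: each object can merge with itself
    delta = np.zeros(A.shape[0], dtype=np.int64)
    id_count = 0
    while A_hat.any():
        track_len = A_hat.sum(axis=1)        # number of objects in each candidate merge
        i = int(track_len.argmax())          # longest remaining track
        id_count += 1
        delta += id_count * A_hat[i]         # assign this track a fresh identity
        merged = A_hat[i, :] | A_hat[:, i]   # indices that just got merged
        A_hat[merged, :] = False             # interpret the removal step as zeroing
        A_hat[:, merged] = False             # the merged rows and columns
    return delta
```

Because each iteration removes the extracted track's rows and columns, every proposal receives at most one identity, matching the guarantee stated below that each trajectory contains at most one object per frame.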
Let M = |**F**| = Σ_{t=1}^{T} K_t be the total number of objects in all frames, where K_t is the number of object proposals in the t-th frame. Thus, we have **F** ∈ R^{M×r²×d}. From these object features, **F**, we predict a global association matrix, **A** ∈ R^{M×M}, where A_{ij} = 1 if the objects denoted by the i-th row and the j-th column are from the same trajectory (Fig. 3 middle). Otherwise, A_{ij} = 0 means that they are from different trajectories, or that one of them is the background. We use a transformer module, H, with two self-attention layers, similar to Zhou et al. (2022b), to predict the association matrix **A** = σ(H(**F**)), where σ is the sigmoid activation. Given the object trajectory annotations, we construct the ground truth association matrix **Ā** for **A**, where Ā_{ij} = 1 if and only if row i and column j of **A** are matched to the same ground truth trajectory using an Intersection over Union (IoU) criterion of 0.5. The training loss L_assoc for this module is then a binary cross entropy between **A** and **Ā**: L_assoc = (1/M) Σ_{ij} BCE(A_{ij}, Ā_{ij}). After constructing our association matrix, **A**, we need to aggregate object-level features according to the identities δ = [δ_k^t]_{t=1,k=1}^{T,K} to generate trajectory-level captions for the next captioning stage. Here, δ_k^t denotes the identity of the k-th object proposal in the t-th frame. We design a greedy grouping algorithm (Alg. 1) operating on **A** to obtain δ. Concretely, we greedily extract the longest trajectory from the untracked objects, until there are no possible associations left (an association is considered possible when its score is above a threshold θ). This guarantees each trajectory has at most one object in each frame. This algorithm can be implemented efficiently on accelerators, allowing us to backpropagate through it. As aforementioned, prior trackers (Zhang et al., 2021b; Zhou et al., 2020; 2022a) do not explicitly perform identity assignment within the model, but rather as a post-processing step, since tracking is the final output for such methods. Our work efficiently assigns object identities to tracks in an end-to-end trainable network, which enables us to perform joint trajectory-level captioning training as described next.

3.3 T RAJECTORY CAPTIONING

Our end-to-end tracking module produces object features, **f**_k (we omit the frame index t below for clearer notation), paired with their identities, δ_k, which denote their correspondence over time. We now describe two methods for aggregating features along this trajectory in order to caption it. **Soft aggregation.** A straightforward way to leverage object features over time is to compute a weighted sum that combines them into a single, global trajectory feature. We observe that the association matrix, **A** (Sec. 3.2), already serves as a summation weight. Specifically, we set **G** = ‖**A**‖ · **F**, where · denotes matrix multiplication, and ‖·‖ normalizes **A** by rows. Each row of **G**, **g**_k ∈ R^{r²×d}, therefore denotes an aggregated feature over its trajectory for object k.
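Concretely, the soft aggregation above amounts to one row-normalized matrix product. A minimal NumPy sketch, assuming the per-object grid features are flattened to a single feature dimension (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def soft_aggregate(A: np.ndarray, F: np.ndarray) -> np.ndarray:
    """Soft trajectory aggregation: G = ||A|| . F, where ||.|| row-normalizes A.

    A: (M, M) predicted association matrix.
    F: (M, D) per-object features (the r^2 x d grid flattened to D here).
    Returns G: (M, D); row k is the trajectory-weighted feature of object k.
    """
    row_sums = np.clip(A.sum(axis=1, keepdims=True), 1e-6, None)  # avoid division by zero
    return (A / row_sums) @ F  # weighted sum of features along each trajectory
```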
**Hard aggregation.** An alternative to weighted temporal averaging is to concatenate object features and construct new trajectory features. Let **f**_τ = {**f**_{k′} : δ_{k′} = τ} be the set of all object features with identity τ. We note that **f**_τ can be as long as the entire video, and thus it may be expensive to use **f**_τ directly. Therefore, we uniformly
Idea Generation Category:
0Conceptual Integration
auZZ2gN0ZN
## R ETHINKING AND I MPROVING A UTOFORMALIZATION : T OWARDS A F AITHFUL M ETRIC AND A D EPENDENCY -R ETRIEVAL B ASED A PPROACH

**Qi Liu, Xinhao Zheng, Xudong Lu, Qinxiang Cao** _[∗]_ **, Junchi Yan** _[∗†]_ Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University {purewhite, void_zxh, luxudong2001, caoqinxiang, yanjunchi}@sjtu.edu.cn [https://github.com/Purewhite2019/rethinking_autoformalization](https://github.com/Purewhite2019/rethinking_autoformalization)

A BSTRACT

As a central component in formal verification, statement autoformalization has been widely studied, including recent efforts from the machine learning community, but it still remains a widely recognized, difficult and open problem. In this paper, we delve into two critical yet under-explored gaps: 1) absence of a faithful and universal automated evaluation for autoformalization results; 2) agnosia of contextual information, inducing severe hallucination of formal definitions and theorems. To address the first issue, we propose **BEq** (_Bidirectional Extended Definitional Equivalence_), an automated neuro-symbolic method to determine the equivalence between two formal statements, which is formally grounded and well-aligned with human intuition. For the second, we propose **RAutoformalizer** (_Retrieval-augmented Autoformalizer_), augmenting statement autoformalization by _Dependency Retrieval_, retrieving potentially dependent objects from formal libraries. We parse the dependencies of libraries and propose to _structurally informalise_ formal objects by the topological order of dependencies. To evaluate OOD generalization and research-level capabilities, we build a novel benchmark, _Con-NF_, consisting of 961 informal-formal statement pairs from frontier mathematical research. Experiments validate the effectiveness of our approaches: BEq is evaluated on 200 diverse formal statement pairs with expert-annotated equivalence labels, exhibiting significantly improved accuracy (82.50% → 90.50%) and precision (70.59% → 100.0%). For dependency retrieval, a strong baseline is devised. Our RAutoformalizer substantially outperforms SOTA baselines on both the in-distribution ProofNet benchmark (12.83% → 18.18%, BEq@8) and the OOD Con-NF scenario (4.58% → 16.86%, BEq@8).

_Philosophy is written in this grand book, the universe. It is written in the language of mathematics._ Galileo Galilei, _The Assayer_

1 I NTRODUCTION

Theorem provers, such as Lean (Moura & Ullrich, 2021), Coq (Bertot & Castéran, 2013) and Isabelle (Nipkow et al., 2002), can check the validity and correctness of mathematical statements and proofs by strict algorithms, whose own soundness and completeness are proven in theory. However, instead of directly working on natural language mathematics, these tools define their own formal languages, which hinders the democratization of formal mathematics. Statement autoformalization aims at translating mathematical statements from natural language into formal, verifiable statements. Readers unfamiliar with formal theorem proving are advised to read Yang et al. (2024). Due to its rigorously logical nature, this task is widely recognized to be challenging, requiring profound understanding of both informal semantics and formal syntax (Li et al., 2024a).
Beyond being a fundamental component in formal mathematics and software verification, strong autoformalization methods have far broader impacts and could result in the creation of a general-purpose reasoning module (Szegedy, 2020).

_∗_ Equal correspondence. _†_ Also affiliated with Shanghai Artificial Intelligence Laboratory. This work was in part supported by NSFC (92370201, 62222607) and Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0102.

Outside-the-box applications of autoformalization include synthesizing training datasets for formal theorem provers (Wu et al., 2022; Xin et al., 2024), especially AlphaProof (Castelvecchi, 2024), enhancing informal math reasoning by rejection sampling (Zhou et al., 2024), and automating code verification (Lin et al., 2024). Current mainstream methods work in the following process. A large language model (LLM) is either prompted (Wu et al., 2022) or fine-tuned (Azerbayev et al., 2023; Jiang et al., 2023a) to directly generate a formal statement given its informal counterpart. The predicted statements are then evaluated by laborious human annotation (Azerbayev et al., 2023) or unreliable proxy automated metrics, including machine translation metrics such as BLEU (Wu et al., 2022) and perplexity (Wang et al., 2018), symbolic type check pass rate (Lu et al., 2024), or an LLM grader (Ying et al., 2024a). Rethinking this paradigm, we find two key limitations. Firstly, an **effective, human-aligned and universal automated evaluation metric is absent**. Machine translation metrics are fragile to transformations that humans regard as equivalence-preserving, for example _β-reduction_ (function application). Type checking is too weak to filter out syntactically correct but semantically absurd autoformalizations; it is a necessary but not sufficient condition for the ideal equivalence. LLM graders are non-deterministic and highly dependent on prompts, and are easily misled by imperceptible but fundamental differences or by huge but nonessential transformations. Murphy et al. (2024) pioneered the use of SMT solvers for faithful automated evaluation, but their approach is restricted to Euclidean geometry. Secondly, the current paradigm directly generates formal statements, **ignoring the context of previously formalized statements and definitions**. This might result in severe hallucination of identifiers and syntax, especially in out-of-distribution (OOD) cases. A similar issue is reported in Wu et al. (2022), where definition misalignment between informal mathematics and formal libraries is the major cause of failure cases. Our experiments on both in-domain and OOD scenarios, shown in Table 3, demonstrate the severity of this problem and exhibit a promising path to address it. For the first issue, we propose _BEq (Bidirectional Extended Definitional Equivalence)_, a neural-symbolic equivalence relation between formal statements. This metric aligns well with collective human opinions. In formal systems built upon dependent type theory (Univalent Foundations Program, 2013), such as Lean 4 (Moura & Ullrich, 2021), definitional equality is a symbolic equivalence relation under a variety of intuitive transformations, such as bound variable renaming, function application, and definition unfolding. However, it heavily relies on the definitions of objects and conversion rules, hence it is too strict and inflexible from a human perspective. For example, n + 0 and n are definitionally equal for a natural number _n_, but n and 0 + n are not.
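To make this asymmetry concrete, here is a minimal Lean 4 illustration (a hypothetical snippet, not taken from the paper): the first goal closes by `rfl` because `n + 0` reduces to `n` definitionally, while the second needs an explicit lemma.

```lean
-- Nat.add recurses on its second argument, so `n + 0` unfolds to `n`:
example (n : Nat) : n + 0 = n := rfl
-- `0 + n` does not reduce, so `rfl` fails and a proved lemma is required:
example (n : Nat) : 0 + n = n := Nat.zero_add n
```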
Worse still, definitional equality struggles to handle metavariable differences. We extend definitional equivalence by 1) equipping it with a restricted set of symbolic transformation primitives and a neural transformation function aiming to convert one formal statement to be definitionally equivalent to the other, and 2) loosening the equivalence criterion to being bidirectionally “convertible” under the transformation function. To evaluate its performance, we build a benchmark consisting of 200 formal statement pairs with expert-annotated equivalence labels. BEq significantly outperforms previous SOTA methods, improving the precision from 70.59% to 100% and the accuracy from 82.50% to 90.50%. For the second, we propose a new task, _Dependency Retrieval_, and a new method, _RAutoformalizer (Retrieval-augmented Autoformalizer)_. Dependency retrieval seeks to select potentially dependent formal objects given an informal statement. RAutoformalizer uses the retrieved objects to enhance autoformalization. To enable this new paradigm, we propose to parse the dependencies in formal libraries and construct training data by _topological informalization_, informalizing formal objects in topological order. An immense dataset of 243,797 formal objects (including 139,933 theorems) is synthesized from Mathlib 4. We also build the _Con-NF_ benchmark [1] to evaluate out-of-distribution (OOD) generalization and research-level capabilities of current methods. A baseline is built for dependency retrieval, with 35.52% Recall@5 on ProofNet and 24.32% Recall@5 on Con-NF. RAutoformalizer exhibits substantial improvement over previous methods, improving BEq@8 from 12.83% to 18.18% on ProofNet and from 4.58% to 16.86% on Con-NF. To sum up, we identify two key limitations in statement autoformalization: 1) absence of faithful and universal automated evaluation; 2) agnosia of contextual information. The contributions are: 1) We give a neural-symbolic equivalence metric, **BEq** (_Bidirectional Extended Definitional Equivalence_), which extends _definitional equality_ in dependent type theory to align better with human intuition.

1 Based on the Lean 4 Con(NF) library (a formal consistency proof of Quine’s set theory _New Foundations_).

2) We propose a new _dependency retrieval_ task and introduce a novel paradigm, **RAutoformalizer** (Retrieval-Augmented Autoformalizer). We further propose _topological informalization_ to synthesize high-quality training data for these initiatives. To evaluate research-level autoformalization and out-of-distribution (OOD) performance, we create a new benchmark, _Con-NF_, which consists of 961 informal-formal statement pairs from New Foundations (Holmes & Wilshaw, 2024). 3) We validate BEq by expert evaluation on 200 formal statement pairs and set a baseline for dependency retrieval. Extensive experiments on RAutoformalizer show its superior performance on statement autoformalization. Ablation studies further validate the effectiveness of our technical modifications, and also exhibit the great potential of the retrieval-augmented paradigm.

2 R ELATED W ORKS

**Autoformalization.** It aims to automatically translate natural language (informal) mathematics into formally verified code. Current autoformalization methods can be roughly divided into three levels.
Statement autoformalization focuses on autoformalizing statements (Wang et al., 2020; Wu et al., 2022; Azerbayev et al., 2023; Jiang et al., 2023a; Gulati et al., 2024; Poiroux et al., 2024); proof autoformalization focuses on translating informal proofs (and sometimes the corresponding statements) into formal code (Cunningham et al., 2023; Jiang et al., 2023b; Zhao et al., 2023; Murphy et al., 2024; Lu et al., 2024); theory autoformalization, translating a whole theory including definitions, axioms, theorems, and proofs, remains under-explored. Patel et al. (2024) proposes a three-stage plan to break the difficulty into easier subtasks. **Methods of Autoformalization.** Autoformalization is notoriously challenging for prevalent data-driven approaches (Li et al., 2024b). Existing informal-formal parallel corpora are fairly scarce, which impedes machine learning training. To alleviate this, researchers synthesize informal-formal pairs by rule-based informalization (Wang et al., 2018; Cunningham et al., 2023), LLM-based back-translation (Azerbayev et al., 2023; Jiang et al., 2023a), training with multilingual corpora (Jiang et al., 2023a), or utilizing in-context learning (ICL) capabilities (Wu et al., 2022). Ying et al. (2024a) proposes an expert iteration pipeline that iteratively synthesizes and filters training data. A major difference from machine translation is the existence of verifiers. Another line of work focuses on utilizing verifier feedback. Poiroux et al. (2024) uses rejection sampling based on typecheck results to enhance autoformalization; Lu et al. (2024) introduces a neural step-level verifier and performs expert iteration; Jiang et al. (2023b); Murphy et al. (2024) combine LLMs and formal verifiers for proof autoformalization, and Zhao et al. (2023) enhances it with subgoal-based demonstration. **Evaluation of Autoformalization.** There are many benchmarks for statement autoformalization, covering undergraduate-level math problems (Azerbayev et al., 2023), more complex areas from Mathlib 4 (Gulati et al., 2024), and Euclidean geometry (Murphy et al., 2024). Due to the high flexibility of natural language and the rigor of formal language, faithfully evaluating autoformalization is widely recognized to be challenging and under-explored (Szegedy, 2020; Azerbayev et al., 2023; Jiang et al., 2023a; Murphy et al., 2024). Wu et al. (2022); Jiang et al. (2023a); Ying et al. (2024a) evaluate autoformalization results by human experts. Wang et al. (2018) reports identical matching accuracy. Proxy metrics, including perplexity (Wang et al., 2018), BLEU [2] (Wang et al., 2018; Poiroux et al., 2024; Azerbayev et al., 2023; Wu et al., 2022) and compiler typecheck pass rate (Lu et al., 2024; Azerbayev et al., 2023; Jiang et al., 2023a), are utilized to automate evaluation. Ying et al. (2024a); Gulati et al. (2024) prompt LLMs to determine the equivalence between a predicted formal statement and the ground truth. Murphy et al. (2024) proposes to use an SMT solver to evaluate the equivalence between formal statements in Euclidean geometry. For proof autoformalization, current evaluation focuses on theorem proving, only verifying formal proofs’ correctness while potentially overlooking semantic inconsistencies between informal and formal proofs. The evaluation of theory autoformalization is also insufficiently researched. **Retrieval-augmented Generation.** It has been extensively studied in NLP.
For code generation, code documentation (Zhou et al., 2023), APIs (Zan et al., 2022), repository files (Zhang et al., 2023) and dynamic knowledge soup (Su et al., 2024) are retrieved to augment generation. In formal verification, Azerbayev et al. (2023) proposes to augment statement autoformalization by retrieving relevant prompts. ReProver (Yang et al., 2024) enhances theorem proving with premise selection.

2 BLEU (Papineni et al., 2002) is a metric for evaluating machine translation based on n-gram matching.

Figure 1: Illustration of _BEq_ (_Bidirectional Extended Definitional Equivalence_) and _Unidirectional Definitional Implication_. s_P ∼_B s_Q if and only if both s_P ←_U s_Q and s_Q ←_U s_P hold. To determine the first, we assume s_Q holds. Then the transformation function (implemented with an LLM) T is called to generate a transformation (a proof of s_P using s_Q) conditioned on s_Q and the transformation primitive (tactic) set R. If the transformation holds, we conclude that s_P ←_U s_Q. Otherwise, we believe s_P ̸←_U s_Q. Vice versa for the second direction.

3 B IDIRECTIONAL E XTENDED D EFINITIONAL E QUIVALENCE

3.1 B ACKGROUND

A fundamental problem for all generative tasks is to faithfully, effectively, and interpretably evaluate the results. In statement autoformalization, we follow prevalent benchmarks such as ProofNet (Azerbayev et al., 2023) and LeanEuclid (Murphy et al., 2024) to evaluate by comparing model predictions with ground truths: let S denote the set of all formal statements; given a predicted formal statement s_pred ∈ S and the corresponding ground truth s_gt ∈ S, an equivalence relation ∼ on S × S used to determine the correctness of autoformalization should satisfy:
- (· ∼ ·) is an equivalence relation: a binary relation with reflexivity, symmetry and transitivity.
- (· ∼ ·) is well aligned with human intuition.
- (· ∼ ·) is universally applicable in all domains.
Intuitively, equivalence from a human perspective is generally one that can be quickly determined and reasoned about. Hence, the key lies in defining an equivalence relation that can be demonstrated through brief proofs. We choose to build an equivalence relation that aligns with humans by 1) extending _definitional equality_, and 2) restricting the degree of proof automation. **Definitional Equality**. In Lean 4 (Moura & Ullrich, 2021), two expressions are _definitionally equal_ if they are equivalent w.r.t. a series of conversion rules, such as _α-conversion_ (renaming bound variables), _η-expansion_ (modifying unused arguments in functions), _proof irrelevance_ (proofs of the same Prop), _β-reduction_ (function application), _ζ-reduction_ (eliminating let-in definitions), _δ-reduction_ (unfolding variable and constant definitions), and _ι-reduction_ (application of recursive functions defined on inductive types to an explicit constructor) (Bailey et al., 2024). This equality is a binary relation with reflexivity, symmetry, and transitivity, and it is applicable in all math areas formalized in Lean 4. It also has many intriguing characteristics that fit closely with human intuition. For example, fun (b:Nat) => b is equivalent to fun (u:Nat) => u because definitional equality allows _α_-conversion, in which the bound variable b is renamed to u.
However, several critical weaknesses hinder definitional equality from becoming a good and intuitive metric for autoformalization. Firstly, some expressions that are naturally “equivalent” from a human perspective are not definitionally equal. For example, for a natural number n:Nat, n + 0 and n are definitionally equal, but 0 + n and n are not. Definitional equality heavily relies on the definitions of objects and conversion rules, while many intuitive equivalences are neglected. Worse still, typechecking often gets stuck in typeclass instance problems due to metavariables, which hinders evaluating definitional equality between statements.

3.2 E XTENDING D EFINITIONAL E QUALITY

**Formulation.** Suppose there are two formal statements, s_P and s_Q. Without loss of generality, s_P and s_Q are assumed syntactically valid, since it is meaningless to talk about equivalence between invalid formal statements. Definitional equality is denoted as ∼_D. The main reason behind the aforementioned limitations of definitional equality is its strictness on reductions and conversions. We hence loosen this limitation and extend definitional equality to align with human intuition. Let R be the set of all transformation primitives, U(s, R) : S × 2^R → 2^S be the set of all valid formal statements that can be constructed by applying transformations in R ⊂ R on s, and T : (S × (S × 2^R)) → S be a _restricted transformation function_ such that

T(s_P | s_Q, R) =
    s′_P,  if s′_P ∈ U(s_P, R) ∧ s_Q ∼_D s′_P,
    ⊥,    if ∀ s′_P ∈ U(s_P, R), s_Q ≁_D s′_P.        (1)

Intuitively, given transformation primitives R ⊂ R, T transforms s_P to be definitionally equal to s_Q if possible and returns the transformed statement. Otherwise, it returns a dummy statement ⊥, which is not definitionally equal to any valid statement (e.g., an invalid statement). In Lean 4, a formal statement can be converted to a proof goal by entering tactic mode. A proof goal ({s_{P,i}}_{i=1}^n, s_Q) consists of some assumptions {s_{P,i}}_{i=1}^n and a conclusion s_Q, where all s_{P,i} and s_Q are statements, and n can be 0. Then tactics, which are metaprograms, reduce one goal to another, which is often easier to solve from the assumptions. For example, one can transform ({S}, R → S) to ({R, S}, S) by the tactic intro and trivially prove it by exact. A formal statement s_P can be transformed to a proof goal by simply setting the assumptions to be the empty set and the conclusion to be s_P, resulting in the proof goal (∅, s_P). Conversely, a proof goal ({s_{P,i}}_{i=1}^n, s_Q) can be transformed back to a formal statement s_{P,1} ∧ s_{P,2} ∧ ··· ∧ s_{P,n} → s_Q. These transformations occur at the syntax level, leaving semantics unchanged. Therefore, we can determine semantic equivalence in the space of proof goals and concretize R to be the set of all tactics in Lean.
The restricted transformation function T can be approximated by sampling tactic sequences from a large language model and symbolically executing them on the Lean kernel multiple times, until a valid s′_P is found or the time limit is exceeded. With a slight abuse of notation, we denote both the formal statement s_P and its corresponding proof goal as s_P. Then, _Unidirectional Definitional Implication_ (· ←_U ·) is defined as

s_P ←_U s_Q ⟺ s_P ∼_D T(s_Q | s_P, R).        (2)

Intuitively, this implication from s_Q to s_P indicates whether the proof goal of the statement s_P can be made definitionally equal to a restrictively transformed s_Q by T. Correspondingly, **BEq** (_Bidirectional Extended Definitional Equivalence_) (· ∼_B ·) is defined as

s_P ∼_B s_Q ⟺ s_P ←_U s_Q ∧ s_Q ←_U s_P,        (3)

which is
- a superset of definitional equality: let R = ∅; then T becomes the identity mapping ∆(·), and s_P ∼_B s_Q ⟺ s_P ∼_D ∆(s_Q) ∧ s_Q ∼_D ∆(s_P) ⟺ s_P ∼_D s_Q;
- an equivalence relation, i.e. a binary relation with
1. Reflexivity: s_P ∼_B s_P holds because s_P ∼_D s_P.
2. Symmetry: s_P ∼_B s_Q ⟺ s_Q ∼_B s_P holds by unfolding the definition of BEq.
3. Transitivity: if s_P ∼_B s_Q and s_Q ∼_B s_R hold, we have s_P ∼_D T(s_Q | s_P, R) and s_Q ∼_D T(s_R | s_Q, R). Suppose T(s_Q | s_P, R) applies a tactic sequence [t_{QP}^{(i)}]_{i=1}^m to transform the proof goal s_Q to be definitionally equal to s_P, and T(s_R | s_Q, R) applies [t_{RQ}^{(j)}]_{j=1}^n. Then, by applying Concat([t_{RQ}^{(j)}]_{j=1}^n, [t_{QP}^{(i)}]_{i=1}^m) on s_R, we can transform the proof goal s_R to be definitionally equal to s_P. Therefore, s_P ∼_D T(s_R | s_P, R).

**Implementation.** An overview of BEq is depicted in Figure 1. To implement the transformation function T, we perform 5-shot prompting of InternLM-Math-Plus-20B (Ying et al., 2024b) served on vLLM (Kwon et al., 2023). If not mentioned otherwise, model predictions are sampled by beam search with temperature T = 0.0, attempt number n = 8, and beam size b = 8. The choice of transformation primitives is sophisticated and critical for aligning with humans. We set R = {apply, cases’, constructor, exact, exact?, ext, have, intro, intros, rw, use} to extend vanilla definitional equality (for higher recall) while preventing U(·, R) and the equivalence classes from becoming too large (for higher precision). More experiments on the choices of attempt numbers, transformation primitives, and sampling strategies can be found in Appendix A.1.
Idea Generation Category:
0Conceptual Integration
hUb2At2DsQ
# Q UANTITATIVE A PPROXIMATION FOR N EURAL O PERATORS IN N ONLINEAR P ARABOLIC E QUATIONS

**Takashi Furuya** [1] _[,][∗]_ **Koichi Taniguchi** [2] _[,][∗]_ **Satoshi Okuda** [3] 1 Doshisha University, takashi.furuya0101@gmail.com 2 Shizuoka University, taniguchi.koichi@shizuoka.ac.jp 3 Rikkyo University, okudas@rikkyo.ac.jp ∗ These authors contributed equally to this work

A BSTRACT

Neural operators serve as universal approximators for general continuous operators. In this paper, we derive the approximation rate of solution operators for nonlinear parabolic partial differential equations (PDEs), contributing to the quantitative approximation theorem for solution operators of nonlinear PDEs. Our results show that neural operators can efficiently approximate these solution operators without exponential growth in model complexity, thus strengthening the theoretical foundation of neural operators. A key insight in our proof is to transfer PDEs into the corresponding integral equations via Duhamel’s principle, and to leverage the similarity between neural operators and Picard’s iteration—a classical algorithm for solving PDEs. This approach is potentially generalizable beyond parabolic PDEs to a class of PDEs which can be solved by Picard’s iteration.

1 I NTRODUCTION

Neural operators have gained significant attention in deep learning as an extension of traditional neural networks. While conventional neural networks are designed to learn mappings between finite-dimensional spaces, neural operators extend this capability by learning mappings between infinite-dimensional function spaces. A key application of neural operators is in constructing surrogate models of solvers for partial differential equations (PDEs) by learning their solution operators. Traditional PDE solvers often require substantial computational resources and time, especially when addressing problems with high dimensionality, nonlinearity, or complex boundary shapes. In contrast, once trained, neural operators serve as surrogate models, providing significantly faster inference compared to traditional numerical solvers. Neural operators are recognized as universal approximators for general continuous operators. However, the theoretical understanding of their approximation capabilities, particularly as solvers for PDEs, is not yet fully developed. This paper focuses on whether neural operators are suitable for approximating solution operators of PDEs, and on which neural operator architectures might be effective for this purpose. The idea of this paper is to define neural operators by aligning them with Picard’s iteration, a classical method for solving PDEs. Specifically, by associating each forward pass of the neural operator’s layers with a Picard iteration step, we hypothesize that increasing the number of layers would naturally lead to an approximate solution of the PDE. We constructively prove a quantitative approximation theorem for solution operators of PDEs based on this idea. This theorem shows that, by appropriately selecting the basis functions within the neural operator, it is possible to avoid the exponential growth in model complexity (also called “the curse of parametric complexity”) that often arises in general operator approximation, thereby providing a theoretical justification for the effectiveness of neural operators as PDE solvers.

1.1 R ELATED WORKS

Neural operators were introduced by Kovachki et al.
(2023) as one of several operator learning methods, alongside DeepONet (Chen & Chen, 1995; Lu et al., 2019) and PCA-Net (Bhattacharya et al., 2021). Various architectures have been proposed, including Graph Neural Operators (Li et al., 2020b), Fourier Neural Operators (Li et al., 2020a), Wavelet Neural Operators (Tripura & Chakraborty, 2023; Gupta et al., 2021), Spherical Fourier Neural Operators (Bonev et al., 2023), and Laplace Neural Operators (Chen et al., 2024). These architectures have demonstrated empirical success as surrogate models of simulators across a wide range of PDEs, as benchmarked in Takamoto et al. (2022). In the case of parabolic PDEs, which are the focus of this paper, neural operators have also shown promising results, for example, on the Burgers, Darcy, and Navier-Stokes equations (Kovachki et al., 2023), the KPP-Fisher equation (Takamoto et al., 2022), the Allen-Cahn equation (Tripura & Chakraborty, 2023; Navaneeth et al., 2024), and the Nagumo equation (Navaneeth et al., 2024). Universal approximation theorems of operator learning for general operators were established for neural operators (Kovachki et al., 2021; 2023; Lanthaler et al., 2023; Kratsios et al., 2024), DeepONet (Lu et al., 2019; Lanthaler et al., 2022), and PCA-Net (Lanthaler, 2023). This indicates that these learning methods possess the capability to approximate a wide range of operators. However, operator learning for general operators suffers from “the curse of parametric complexity”, where the number of learnable parameters grows exponentially as the desired approximation accuracy increases (Lanthaler & Stuart, 2023). A common approach to mitigating the curse of parametric complexity is to restrict general operators to the solution operators of PDEs. Recently, several quantitative approximation theorems have been established for the solution operators of specific PDEs without exponential growth in model complexity. For instance, Kovachki et al. (2021); Lanthaler (2023) developed quantitative approximation theorems for the Darcy and Navier-Stokes equations using Fourier neural operators and PCA-Net, respectively. Additionally, Lanthaler & Stuart (2023) addressed the Hamilton-Jacobi equations with Hamilton-Jacobi neural operators. Further studies, such as Chen et al. (2023); Lanthaler et al. (2022); Marcati & Schwab (2023), investigated quantitative approximation theorems using DeepONet for a range of PDEs, including elliptic, parabolic, and hyperbolic equations, while Deng et al. (2022) focused on advection-diffusion equations. In line with this research, the present paper also concentrates on restricting the operator to be learned to the solution operators of specific PDEs, namely parabolic PDEs. However, unlike previous studies, this work leverages the similarity with Picard’s iteration in the framework of a relatively general neural operator. This approach is potentially generalizable beyond parabolic PDEs to a range of other equations, including the Navier-Stokes equation, nonlinear dispersive equations, and nonlinear hyperbolic equations, which are solvable by Picard’s iteration. See Kovachki et al. (2024) for other directions on the quantitative approximation of operator learning.

1.2 O UR RESULTS AND CONTRIBUTIONS

In this paper, we present a quantitative approximation theorem for the solution operator of nonlinear parabolic PDEs using neural operators.
Notably, in Theorem 1, we show that for any given accuracy, the depth and the number of neurons of the neural operators do not grow exponentially. The proof relies on Banach’s fixed point theorem. Under appropriate conditions, the solution to nonlinear PDEs can be expressed as the fixed point of a contraction mapping, which can be implemented through Picard’s iteration. This contraction mapping corresponds to an integral operator whose kernel is the Green function associated with the linear equation. By expanding the Green function in a certain basis and truncating the expansion, we approximate the contraction mapping using a layer of neural operator. In this framework, the forward propagation through the layers is interpreted as steps in Picard’s iteration. Our approach does not heavily rely on the universality of neural networks. More specifically, we utilize their universality only to approximate the nonlinearity, which is represented as a one-dimensional nonlinear function, and hence, the approximation rates obtained in Theorem 1 correspond to the one-dimensional approximation rates of universality. As a result, our approach demonstrates that exponential growth in model complexity of our neural operators can be avoided. 1.3 N OTATION We introduce the notation often used in this paper. 2 - For _r ∈_ [1 _, ∞_ ], we write the H¨older conjugate of _r_ as _r_ _[′]_ : 1 _/r_ + 1 _/r_ _[′]_ = 1 if _r ∈_ (1 _, ∞_ ); _r_ _[′]_ = _∞_ if _r_ = 1; _r_ _[′]_ = 1 if _r_ = _∞_ . - Let _X_ be a set. For an operator _A_ : _X →_ _X_ and _k ∈_ N, we denote by _A_ [[] _[k]_ []] the _k_ times compositions of _A_ (or the _k_ times products of _A_ ): _A_ [[0]] means the identity operator on _X_ and _A_ [[] _[k]_ []] := _A ◦· · · ◦_ _A_ _._ ~~�~~ ~~��~~ � _k_ times - Let _X_ and _Y_ be normed spaces with norm _∥· ∥_ _X_ and _∥· ∥_ _Y_, respectively. For a linear operator _A_ : _X →_ _Y_, we denote by _∥A∥_ _X→Y_ the operator norm of _A_ : _∥A∥_ _X→Y_ := sup _∥f_ _∥_ _X_ =1 _∥Af_ _∥_ _Y_ _._ - Let _X_ be a normed space with norm _∥· ∥_ _X_ . We denote by _B_ _X_ ( _R_ ) the closed ball in _X_ with center 0 and radius _R >_ 0: _B_ _X_ ( _R_ ) := _{f ∈_ _X_ : _∥f_ _∥_ _X_ _≤_ _R}._ - For _q ∈_ [1 _, ∞_ ], the Lebesgue space _L_ _[q]_ ( _D_ ) is defined by the set of all measurable functions 1 _f_ = _f_ ( _x_ ) on _D_ such that _∥f_ _∥_ _L_ _q_ := (� _D_ _[|][f]_ [(] _[x]_ [)] _[|]_ _[q]_ _[ dx]_ [)] _q_ _< ∞_ if _q ̸_ = _∞_ and _∥f_ _∥_ _L_ _∞_ := ess sup _x∈D_ _|f_ ( _x_ ) _| < ∞_ if _q_ = _∞_ . For _r, s ∈_ [1 _, ∞_ ], the space _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) is defined by the set of all measurable functions _f_ = _f_ ( _t, x_ ) on (0 _, T_ ) _× D_ such that � _r_ [1] _r_ _< ∞_ _∥f_ _∥_ _L_ _r_ (0 _,T_ ; _L_ _s_ ) := _T_ 0 �� �� _s_ _|f_ ( _t, x_ ) _|_ _[s]_ _dx_ _D_ � _[r]_ _s_ _dt_ (with the usual modifications for _r_ = _∞_ or _s_ = _∞_ ). - For _r, s ∈_ [1 _, ∞_ ], the notation _⟨·, ·⟩_ means the dual pair of _L_ _[s]_ _[′]_ ( _D_ ) and _L_ _[s]_ ( _D_ ) or _L_ _[r]_ _[′]_ (0 _, T_ ; _L_ _[s]_ _[′]_ ( _D_ )) and _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )): _T_ _u_ ( _x_ ) _v_ ( _x_ ) _dx_ or _⟨u, v⟩_ := _D_ � 0 _u_ ( _t, x_ ) _v_ ( _t, x_ ) _dxdt,_ � _D_ _⟨u, v⟩_ := � 0 respectively. 2 L OCAL WELL - POSEDNESS FOR NONLINEAR PARABOLIC EQUATIONS To begin with, we describe the problem setting of PDEs addressed in this paper. Let _D_ be a bounded domain in R _[d]_ with _d ∈_ N. 
We consider the Cauchy problem for the following nonlinear parabolic PDEs: _∂_ _t_ _u_ + _Lu_ = _F_ ( _u_ ) in (0 _, T_ ) _× D,_ (P) � _u_ (0) = _u_ 0 in _D,_ where _T >_ 0, _∂_ _t_ := _∂/∂t_ denotes the time derivative, _L_ is a certain operator (e.g. _L_ = _−_ ∆:= _−_ [�] _[d]_ _j_ =1 _[∂]_ [2] _[/∂x]_ [2] _j_ [is the Laplacian),] _[ F]_ [ :][ R] _[ →]_ [R][ is a nonlinearity,] _[ u]_ [ : (0] _[, T]_ [)] _[ ×][ D][ →]_ [R][ is a solution] to (P) (unknown function), and _u_ 0 : _D →_ R is an initial data (prescribed function). It is important to note that boundary conditions are contained in the operator _L_ . In this sense, the problem (P) can be viewed as an abstract initial boundary value problem on _D_ . See Appendix A for examples of _L_ . As it is a standard practice, we study the problem (P) via the integral formulation _t_ _u_ ( _t_ ) = _S_ _L_ ( _t_ ) _u_ 0 + _S_ _L_ ( _t −_ _τ_ ) _F_ ( _u_ ( _τ_ )) _dτ,_ (P’) � 0 where _{S_ _L_ ( _t_ ) _}_ _t≥_ 0 is the semigroup generated by _L_ (i.e. the solution operators of the linear equation _∂_ _t_ _u_ + _Lu_ = 0). The second term in the right hand side of (P’) is commonly referred to as the Duhamel’s integral. Under suitable conditions on _L_ and _F_, the problems (P) and (P’) are equivalent if the function _u_ is sufficiently smooth. For example, in the case where _L_ = _−_ ∆, it follows from the smoothing effect of _S_ _L_ ( _t_ ) that the solution to (P’) is a classical solution to (P) (see the argument in the proof of Brezis & Cazenave (1996, Theorem 1)). Even in the case where _L_ is a more general operator satisfying Assumption 1, a similar argument can be also done by replacing the differentiation in _x_ with the operator _L_ (together with use of the techniques in the proofs of Iwabuchi et al. (2021, Lemmas 3.5 and 3.10) for instance). The aim of this section is to state the results on local well-posedness (LWP) for (P’). Here, the LWP means the existence of local in time 3 solution, the uniqueness of the solution, and the continuous dependence on initial data. Its proof is based on the fixed point argument (or also called the contraction mapping argument). These results are fundamental to study our neural operator in Sections 3 and 4. In particular, Propositions 1 and 2 below serve as guidelines for setting function spaces as the domain and range of neural operators in Definition 1 (Section 3) and for determining the norm to measure the error in Theorem 1 (Section 4). In this paper we impose the following assumptions on _L_ and _F_ . **Assumption 1.** _For any_ 1 _≤_ _q_ 1 _≤_ _q_ 2 _≤∞, there exists a constant C_ _L_ _>_ 0 _such that_ [1] _∥S_ _L_ ( _t_ ) _∥_ _L_ _[q]_ 1 _→L_ _[q]_ 2 _≤_ _C_ _L_ _t_ _[−][ν]_ [(] _q_ _q_ 2 [)] _,_ _t ∈_ (0 _,_ 1] _,_ (1) [1] _q_ 1 _[−]_ _q_ [1] _for some ν >_ 0 _._ **Assumption 2.** _F ∈_ _C_ [1] (R; R) _satisfies F_ (0) = 0 _and_ _|F_ ( _z_ 1 ) _−_ _F_ ( _z_ 2 ) _| ≤_ _C_ _F_ max (2) _i_ =1 _,_ 2 _[|][z]_ _[i]_ _[|]_ _[p][−]_ [1] _[|][z]_ [1] _[ −]_ _[z]_ [2] _[|][,]_ _for any z_ 1 _, z_ 2 _∈_ R _and for some p >_ 1 _and C_ _F_ _>_ 0 _._ **Remark 1.** _The range_ (0 _,_ 1] _of t in_ (1) _can be generalized to_ (0 _, T_ _L_ ] _, but it is assumed here to be_ (0 _,_ 1] _for simplicity. This generalization is not essential, as the existence time T of the solution is_ _sufficiently small in the fixed point argument later. 
Long time solutions are achieved by repeatedly_ _using the solution operator of_ (P’) _constructed in the fixed point argument (see also Subsection 4.2)._ **Remark 2.** _Typical examples of L are the Laplacian with the Dirichlet, Neumann, or Robin bound-_ _ary condition, the Schr¨odinger operator, the elliptic operator, and the higher-order Laplacian._ _See Appendix A for the details. On the other hand, typical examples of the nonlinearity F are_ _F_ ( _u_ ) = _±|u|_ _[p][−]_ [1] _u, ±|u|_ _[p]_ _(which can be regarded as the main term of the Taylor expansion of a_ _more general nonlinearity F if F is smooth in some extent). See Appendix G for further remarks on_ _Assumptions 1 and 2._ Under the above assumptions, the problem (P’) is local well-posed, where, as a solution space, we use the space _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) with the parameters _r, s_ satisfying _ν_ 1 _r, s ∈_ [ _p, ∞_ ] and (3) _s_ [+ 1] _r_ _[<]_ _p −_ 1 _[.]_ More precisely, we have the following result on LWP. **Proposition 1.** _Assume that r, s satisfy_ (3) _._ _Then, for any u_ 0 _∈_ _L_ _[∞]_ ( _D_ ) _, there exist a time_ _T_ = _T_ ( _u_ 0 ) _∈_ (0 _,_ 1] _and a unique solution u ∈_ _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _to_ (P’) _. Moreover, for any_ _u_ 0 _, v_ 0 _∈_ _L_ _[∞]_ ( _D_ ) _, the solutions u and v to_ (P’) _with u_ (0) = _u_ 0 _and v_ (0) = _v_ 0 _satisfy the con-_ _tinuous dependence on initial data: There exists a constant C >_ 0 _such that_ _∥u −_ _v∥_ _L_ _r_ (0 _,T_ _′_ ; _L_ _s_ ) _≤_ _C∥u_ 0 _−_ _v_ 0 _∥_ _L_ _[∞]_ _,_ _where T_ _[′]_ _<_ min _{T_ ( _u_ 0 ) _, T_ ( _v_ 0 ) _}._ The proof is based on the fixed point argument. Given an initial data _u_ 0 _∈_ _L_ _[∞]_ ( _D_ ) and _T, M >_ 0, we define the map Φ = Φ _u_ 0 by _t_ Φ[ _u_ ]( _t_ ) := _S_ _L_ ( _t_ ) _u_ 0 + _S_ _L_ ( _t −_ _τ_ ) _F_ ( _u_ ( _τ_ )) _dτ_ (4) � 0 for _t ∈_ [0 _, T_ ] and the complete metric space _X_ := _B_ _L_ _r_ (0 _,T_ ; _L_ _s_ ( _D_ )) ( _M_ ) equipped with the metric d( _u, v_ ) := _∥u −_ _v∥_ _L_ _r_ (0 _,T_ ; _L_ _s_ ) _._ Let _R, M >_ 0 be arbitrarily fixed, and let _T >_ 0 (which is taken sufficiently small later). Then, under Assumptions 1 and 2, it can be proved that for any _u_ 0 _∈_ _B_ _L_ _∞_ ( _R_ ), the map Φ : _X →_ _X_ is _δ_ -contractive with a contraction rate _δ ∈_ (0 _,_ 1), i.e. d(Φ[ _u_ ] _,_ Φ[ _v_ ]) _≤_ _δ_ d( _u, v_ ) for any _u, v ∈_ _X,_ where _T_ is taken small enough to depend on _R_, _M_ and _δ_ (not to depend on _u_ 0 itself). Therefore, Banach’s fixed point theorem allows us to prove that there exists uniquely a function _u ∈_ _X_ such that _u_ = Φ[ _u_ ] (a fixed point). This function _u_ is precisely the solution of (P’) with the initial data _u_ (0) = _u_ 0 . Thus, Proposition 1 is shown. See Appendix B for more details of the proof. This proof by the fixed point argument guarantees the following result (see e.g. Zeidler (1986)). 4 **Proposition 2.** _Assume that r, s satisfy_ (3) _. 
Then, for any R, M >_ 0 _and for any δ ∈_ (0 _,_ 1) _, there_ _exists a time T ∈_ (0 _,_ 1] _, depending on R, M and δ, such that the following statements hold:_ (i) _There exists a unique solution operator_ Γ [+] : _B_ _L_ _∞_ ( _R_ ) _→_ _B_ _L_ _r_ (0 _,T_ ; _L_ _s_ ) ( _M_ ) _such that_ Γ [+] ( _u_ 0 ) = _u_ _for any u_ 0 _∈_ _B_ _L_ _∞_ ( _R_ ) _, where u is the solution to_ (P’) _with u_ (0) = _u_ 0 _given in Proposition 1._ (ii) _Given u_ 0 _∈_ _B_ _L_ _∞_ ( _R_ ) _, define Picard’s iteration by_  _u_ [(1)] := _S_ _L_ ( _t_ ) _u_ 0 _,_ _t_  [(] _[ℓ]_ [+1)] [(] _[ℓ]_ [)] [(1)] _u_ [(1)] := _S_ _L_ ( _t_ ) _u_ 0 _,_ _t_ _u_ [(] _[ℓ]_ [+1)] := Φ[ _u_ [(] _[ℓ]_ [)] ] = _u_ [(1)] + � 0  _t_ (5) _S_ _L_ ( _t −_ _τ_ ) _F_ ( _u_ [(] _[ℓ]_ [)] ( _τ_ )) _dτ,_ _ℓ_ = 1 _,_ 2 _, · · ·,_ 0 _that is,_ _u_ [(] _[ℓ]_ [)] := Φ [[] _[ℓ]_ []] [0] = Φ ~~�~~ _◦· · · ◦_ � ~~�~~ Φ�[0] _,_ _ℓ_ = 1 _,_ 2 _, · · ·,_ _ℓ_ _times_ _where_ Φ : _X →_ _X is a δ-contraction mapping defined by_ (4) _._ _Then u_ [(] _[ℓ]_ [)] _→_ _u in_ _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _as ℓ_ _→∞_ _and_ _δ_ _[ℓ]_ d( _u_ [(] _[ℓ]_ [)] _, u_ ) _≤_ _ℓ_ = 1 _,_ 2 _, · · · ._ 1 _−_ _δ_ [d(] _[u]_ [(1)] _[,]_ [ 0)] _[,]_ 3 N EURAL OPERATOR FOR NONLINEAR PARABOLIC EQUATIONS In this section, we aim to construct neural operators Γ that serve as accurate approximation models of the solution operators Γ [+] for nonlinear parabolic PDEs (P). We start by explaining our idea in rough form. Our idea is inspired by the fixed point argument and Picard’s iteration. By Section 2, for any _u_ 0 _∈_ _B_ _L_ _∞_ ( _R_ ), the solution _u_ to (P’) on [0 _, T_ ] _×D_ can be obtained through the Picard’s iteration (5) under appropriate settings. If the semigroup _S_ _L_ ( _t_ ) has an integral kernel _G_ = _G_ ( _t, x, y_ ), which is the Green function _G_ of the linear equation _∂_ _t_ _u_ + _Lu_ = 0, then we can write _S_ _L_ ( _t_ ) _u_ 0 ( _x_ ) = _G_ ( _t, x, y_ ) _u_ 0 ( _y_ ) _dy,_ � _D_ � 0 _t_ _t_ _t_ _S_ _L_ ( _t −_ _τ_ ) _F_ ( _u_ [(] _[ℓ]_ [)] ( _τ, x_ )) _dτ_ = 0 � 0 0 _G_ ( _t −_ _τ, x, y_ ) _F_ ( _u_ [(] _[ℓ]_ [)] ( _τ, y_ )) _dydτ._ � _D_ Suppose that _G_ has an expansion _G_ ( _t −_ _τ, x, y_ ) = � _c_ _m,n_ _ψ_ _m_ ( _τ, y_ ) _φ_ _n_ ( _t, x_ ) = lim _N_ _→∞_ _m,n∈_ Λ � _c_ _m,n_ _ψ_ _m_ ( _τ, y_ ) _φ_ _n_ ( _t, x_ ) _,_ _m,n∈_ Λ _N_ for 0 _≤_ _τ, t ≤_ _T_ and _x, y ∈_ _D_ . For convenience, we always assume that _G_ ( _t, x, y_ ) = 0 for _t ≤_ 0. Here, Λ is an index set that is either finite or countably infinite, and Λ _N_ is a subset of Λ with its cardinality _|_ Λ _N_ _|_ = _N ∈_ N and the monotonicity Λ _N_ _⊂_ Λ _N_ _′_ for any _N ≤_ _N_ _[′]_ . We write the partial sum as _G_ _N_ ( _t −_ _τ, x, y_ ) := � _c_ _m,n_ _ψ_ _m_ ( _τ, y_ ) _φ_ _n_ ( _t, x_ ) _m,n∈_ Λ _N_ for 0 _≤_ _τ, t ≤_ _T_ and _x, y ∈_ _D_ . We always assume that _G_ _N_ ( _t, x, y_ ) = 0 for _t ≤_ 0 as well. 
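As a schematic illustration before the formal definition of Φ_N below (a NumPy sketch under assumed discretizations; all names, shapes, and the quadrature scheme are illustrative, not from the paper), applying the rank-N truncated kernel G_N to a function reduces to N inner products against the ψ_m followed by synthesis in the φ_n basis:

```python
import numpy as np

def apply_truncated_kernel(c, psi, phi, v, w):
    """Rank-N kernel application: (K_N v)(t, x) = sum_{m,n} c[m, n] <psi_m, v> phi_n(t, x).

    c:   (N, N) expansion coefficients c_{m,n} of the Green function.
    psi: (N, P) basis functions psi_m sampled on P space-time quadrature nodes.
    phi: (N, P) basis functions phi_n sampled on the same nodes.
    v:   (P,)  input function values, e.g. F(u) on the nodes.
    w:   (P,)  quadrature weights, so that psi @ (w * v) approximates <psi_m, v>.
    Returns (P,) samples of K_N v on the nodes.
    """
    inner = psi @ (w * v)   # <psi_m, v> for m = 1, ..., N
    coeff = c.T @ inner     # sum_m c_{m,n} <psi_m, v> for each n
    return phi.T @ coeff    # synthesize sum_n coeff[n] * phi_n(t, x)
```

Iterating `u_hat = u_hat_1 + apply_truncated_kernel(c, psi, phi, F(u_hat), w)` then mirrors the approximate Picard's iteration û^(ℓ+1) = Φ_N[û^(ℓ)] defined next, with each step playing the role of one hidden layer of the neural operator.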
We define Φ _N_ by _t_ _G_ _N_ ( _t, x, y_ ) _u_ 0 ( _y_ ) _dy_ + _D_ � 0 _G_ _N_ ( _t −_ _τ, x, y_ ) _F_ ( _u_ ( _τ, y_ )) _dydτ_ � _D_ Φ _N_ [ _u_ ]( _t, x_ ) := � 0 = � � _c_ _m,n_ _⟨ψ_ _m_ (0 _, ·_ ) _, u_ 0 _⟩φ_ _n_ ( _t, x_ ) + � _m,n∈_ Λ _N_ _m,n∈_ � _c_ _m,n_ _⟨ψ_ _m_ _, F_ ( _u_ ) _⟩φ_ _n_ ( _t, x_ ) _m,n∈_ Λ _N_ = � _c_ _m,n_ ( _⟨ψ_ _m_ (0 _, ·_ ) _, u_ 0 _⟩_ + _⟨ψ_ _m_ _, F_ ( _u_ ) _⟩_ ) _φ_ _n_ ( _t, x_ ) _,_ _m,n∈_ Λ _N_ 5 and moreover, we define an approximate Picard’s iteration by  _u_ ˆ [(1)] := _G_ _N_ ( _t, x, y_ ) _u_ 0 ( _y_ ) _dy_ = � _c_ � _D_ _m,n∈_ Λ _N_  _u_ ˆ [(1)] := � � _c_ _m,n_ _⟨ψ_ _m_ (0 _, ·_ ) _, u_ 0 _⟩φ_ _n_ ( _t, x_ ) _,_ _m,n∈_ Λ _N_ _G_ _N_ ( _t, x, y_ ) _u_ 0 ( _y_ ) _dy_ = � _D_  _u_ ˆ [(] _[ℓ]_ [+1)] := Φ _N_ [ˆ _u_ [(] _[ℓ]_ [)] ] _,_ _ℓ_ = 1 _,_ 2 _, · · · ._ Then, for sufficiently large _N_, we expect: 1. Φ _N_ _≈_ Φ and Φ _N_ is also contractive on _X_ _N_ := _B_ _L_ _r_ (0 _,T_ _N_ ; _L_ _s_ ) ( _M_ ) for some _T_ _N_ _>_ 0. 2. There exists a fixed point ˆ _u ∈_ _X_ _N_ of Φ _N_ such that ˆ _u_ [(] _[ℓ]_ [)] _→_ _u_ ˆ as _ℓ_ _→∞_ . 3. ˆ _u ≈_ _u_ on [0 _,_ min _{T, T_ _N_ _}_ ] _× D_ for any _u_ 0 _∈_ _B_ _L_ _∞_ ( _R_ ). 4. Define Γ : _B_ _L_ _∞_ ( _R_ ) _→_ _X_ _N_ by Γ( _u_ 0 ) := ˆ _u_ [(] _[L]_ [)] for _L ∈_ N. Then Γ _≈_ Γ [+] as _N, L →∞_ . The above Γ is the prototype of our neural operator, where _N_ corresponds to the rank and _L_ to the layer depth. In other words, in our neural operator, the Picard’s iteration step corresponds to the forward propagation in the layer direction of the neural operator, which converges to the fixed point (i.e. the solution to (P’)) as the layers get deeper for sufficiently large rank _N_ . The above is only a rough idea. The precise definition of our neural operator is the following: **Definition 1** (Neural operator) **.** _Let T >_ 0 _and r, s ∈_ [1 _, ∞_ ] _. Let φ_ := _{φ_ _n_ _}_ _n_ _and ψ_ := _{ψ_ _m_ _}_ _m_ _be families of functions in L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _and L_ _[r]_ _[′]_ (0 _, T_ ; _L_ _[s]_ _[′]_ ( _D_ )) _, respectively. We define a neural_ _operator_ Γ : _L_ _[∞]_ ( _D_ ) _→_ _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _by_ Γ : _L_ _[∞]_ ( _D_ ) _→_ _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) : _u_ 0 _�→_ _u_ ˆ [(] _[L]_ [+1)] _._ _Here, the output function_ ˆ _u_ [(] _[L]_ [+1)] _is given by the following steps:_ _1._ **(Input layer)** ˆ _u_ [(1)] = (ˆ _u_ [(1)] 1 _[,]_ [ ˆ] _[u]_ [(1)] 2 _[, . . .,]_ [ ˆ] _[u]_ [(1)] _d_ 1 [)] _[ is given by]_ _u_ ˆ [(1)] ( _t, x_ ) := ( _K_ _N_ [(0)] _[u]_ [0] [)(] _[t, x]_ [) +] _[ b]_ [(0)] _N_ [(] _[t, x]_ [)] _[.]_ _Here, K_ _N_ [(0)] [:] _[ L]_ _[∞]_ [(] _[D]_ [)] _[ →]_ _[L]_ _[r]_ [(0] _[, T]_ [;] _[ L]_ _[s]_ [(] _[D]_ [))] _[d]_ [1] _[ and][ b]_ [(0)] _N_ _[∈]_ _[L]_ _[r]_ [(0] _[, T]_ [;] _[ L]_ _[s]_ [(] _[D]_ [))] _[d]_ [1] _[ are defined by]_ ( _K_ _N_ [(0)] _[u]_ [0] [)(] _[t, x]_ [) :=] � _C_ _n,m_ [(0)] _[⟨][ψ]_ _[m]_ [(0] _[,][ ·]_ [)] _[, u]_ [0] _[⟩][φ]_ _[n]_ [(] _[t, x]_ [)] _with C_ _n,m_ [(0)] _[∈]_ [R] _[d]_ [1] _[×]_ [1] _[,]_ _m,n∈_ Λ _N_ _b_ [(0)] _N_ [(] _[t, x]_ [) :=] � _b_ [(0)] _N_ _[φ]_ _[n]_ [(] _[t, x]_ [)] _with b_ [(0)] _N_ _[∈]_ [R] _[d]_ [1] _[.]_ _n∈_ Λ _N_ _2._ **(Hidden layers)** _For_ 2 _≤_ _ℓ_ _≤_ _L,_ ˆ _u_ [(] _[ℓ]_ [)] = (ˆ _u_ [(] 1 _[ℓ]_ [)] _[,]_ [ ˆ] _[u]_ [(] 2 _[ℓ]_ [)] _[, . . 
.,]_ [ ˆ] _[u]_ [(] _d_ _[ℓ]_ _ℓ_ [)] [)] _[ are iteratively given by]_ _u_ ˆ [(] _[ℓ]_ [+1)] ( _t, x_ ) = _σ_ � _W_ [(] _[ℓ]_ [)] _u_ ˆ [(] _[ℓ]_ [)] ( _t, x_ ) + ( _K_ _N_ [(] _[ℓ]_ [)] _[u]_ [ˆ] [(] _[ℓ]_ [)] [)(] _[t, x]_ [) +] _[ b]_ [(] _[ℓ]_ [)] [�] _,_ 1 _≤_ _ℓ_ _≤_ _L −_ 1 _._ _3._ **(Output layer)** ˆ _u_ [(] _[L]_ [+1)] _is given by_ _u_ ˆ [(] _[L]_ [+1)] ( _t, x_ ) = _W_ [(] _[L]_ [)] _u_ ˆ [(] _[L]_ [)] ( _t, x_ ) + ( _K_ _N_ [(] _[L]_ [)] _[u]_ [ˆ] [(] _[L]_ [)] [)(] _[t, x]_ [) +] _[ b]_ [(] _[L]_ [)] _[.]_ _Here, σ_ : R _→_ R _is a nonlinear activation operating element-wise, and W_ [(] _[ℓ]_ [)] _∈_ R _[d]_ _[ℓ]_ [+1] _[×][d]_ _[ℓ]_ [+1] _is a weight matrix of the ℓ-th hidden layer, and b_ [(] _[ℓ]_ [)] _∈_ R _[d]_ _[ℓ]_ _is a bias vector, and K_ _N_ [(] _[ℓ]_ [)] : _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _[d]_ _[ℓ]_ _→_ _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _[d]_ _[ℓ]_ [+1] _is defined by_ ( _K_ _N_ [(] _[ℓ]_ [)] _[u]_ [)(] _[t, x]_ [) :=] � _C_ _n,m_ [(] _[ℓ]_ [)] _[⟨][ψ]_ _[m]_ _[, u][⟩][φ]_ _[n]_ [(] _[t, x]_ [)] _with C_ _n,m_ [(] _[ℓ]_ [)] _[∈]_ [R] _[d]_ _[ℓ]_ [+1] _[×][d]_ _[ℓ]_ _[,]_ _m,n∈_ Λ _N_ _where we use the notation_ _⟨ψ_ _m_ _, u⟩_ := ( _⟨ψ_ _m_ _, u_ 1 _⟩, . . ., ⟨ψ_ _m_ _, u_ _d_ _ℓ_ _⟩_ ) _∈_ R _[d]_ _[ℓ]_ _,_ _for u_ = ( _u_ 1 _, . . ., u_ _d_ _ℓ_ ) _∈_ _L_ _[r]_ (0 _, T_ ; _L_ _[s]_ ( _D_ )) _[d]_ _[ℓ]_ _. Note that d_ _L_ +1 = 1 _._ _We denote by NO_ _[L,H,σ]_ _N,φ,ψ_ _[the class of neural operators defined as above, with the depth][ L][, the number]_ _of neurons H_ = [�] _[L]_ _ℓ_ =1 _[d]_ _[ℓ]_ _[, the rank][ N]_ _[, the activation function][ σ][, and the families of functions][ φ, ψ][.]_ 6 The operators _K_ _N_ [(] _[l]_ [)] [play a crucial role in capturing the non-local nature of PDEs. They are defined by] truncating the basis expansion, a definition for neural operators inspired by Lanthaler et al. (2023). In this context, _K_ _N_ [(] _[l]_ [)] [are finite-rank operators with rank] _[ N]_ [. The model complexity is determined not] only by the depth _L_ and the number _H_ of neurons, but also by the rank _N_ . The families _φ_ and _ψ_ are hyperparameters, which are chosen so that their expansions can approximate the Green function. Examples are the Fourier basis, wavelet basis, orthogonal polynomial, spherical harmonics, and eigenfunctions of _L_ . When we select _φ_ and _ψ_ as the Fourier Idea Generation Category:
0Conceptual Integration
yUefexs79U
# U NI CBE: A U NIFORMITY - DRIVEN C OMPARING B ASED E VALUATION F RAMEWORK WITH U NIFIED M ULTI -O BJECTIVE O PTIMIZATION

**Peiwen Yuan** [1] **,** **Shaoxiong Feng** [2] **,** **Yiwei Li** [1] **,** **Xinglin Wang** [1] **,** **Yueqi Zhang** [1] **,** **Jiayi Shi** [1], **Chuyi Tan** [1], **Boyuan Pan** [2], **Yao Hu** [2], **Kan Li** [1] _[‡]_ 1 School of Computer Science, Beijing Institute of Technology 2 Xiaohongshu Inc {peiwenyuan, liyiwei, wangxinglin, zhangyq}@bit.edu.cn {shijiayi, tanchuyi, likan}@bit.edu.cn {shaoxiongfeng2023}@gmail.com {panboyuan, xiahou}@xiaohongshu.com

A BSTRACT

Human preference plays a significant role in measuring large language models and guiding them to align with human values. Unfortunately, current comparing-based evaluation (CBE) methods typically focus on a single optimization objective, failing to effectively utilize scarce yet valuable preference signals. To address this, we delve into key factors that can enhance the accuracy, convergence, and scalability of CBE: suppressing sampling bias, balancing the descending process of uncertainty, and mitigating updating uncertainty. Following the derived guidelines, we propose U NI CBE, a unified uniformity-driven CBE framework which simultaneously optimizes these core objectives by constructing and integrating three decoupled sampling probability matrices, each designed to ensure uniformity in specific aspects. We further ablate the optimal tuple sampling and preference aggregation strategies to achieve efficient CBE. On the AlpacaEval benchmark, U NI CBE saves over 17% of evaluation budgets while achieving a Pearson correlation with the ground truth exceeding 0.995, demonstrating excellent accuracy and convergence. In scenarios where new models are continuously introduced, U NI CBE can even save over 50% of evaluation costs, highlighting its improved scalability.

1 I NTRODUCTION

The ongoing evolution of large language models (LLMs) has made it increasingly important to assess their alignment with human preferences (Dubois et al., 2024; Zheng et al., 2023). The preference signals provided by humans are crucial for accurately assessing and guiding models toward safe and reliable AGI (Ji et al., 2023; Jiang et al., 2024). However, the rapid iteration of LLMs in training and application scenarios has created a substantial demand for evaluation, complicating the acquisition of sufficient labor-intensive human preferences (Chiang et al., 2024; Cui et al., 2023). Therefore, exploring the use of precious preference signals for efficient model alignment evaluation is of great significance and requires long-term research. Current mainstream model evaluation paradigms include scoring-based evaluation (SBE) (Liu et al., 2023; Cai et al., 2023) and comparing-based evaluation (CBE) (Chiang et al., 2024; Dubois et al., 2024). The former requires the judge to offer preference scores for individual responses, while the latter needs the judge to establish a preference order among multiple candidate model responses. By directly comparing the responses of different models, Zheng et al. (2023); Liu et al. (2024) confirm that CBE can more accurately assess model performance. However, the O(NM²) evaluation overhead limits the practicality of CBE when there are M models to evaluate on N samples (Qin et al., 2024). To achieve efficient CBE, various methods have been explored (Chiang et al., 2024; Zhou et al., 2024; Dubois et al., 2024). As shown in Figure 1, based on existing observational results, these methods iteratively allocate the preference budget to the next (models, sample) tuple according to
As shown in Figure 1, based on existing observational results, these methods iteratively allocate the preference budget to the next (models, sample) tuple according to their respective optimization objectives. Specific preference aggregation methods (e.g., Elo rating (Elo & Sloan, 1978)) are then applied to predict the model capability scores based on these preference results.

Figure 1: Flowchart of the process for comparing-based evaluation.

Nevertheless, as shown in Table 1, the optimization objectives of these methods are often singular, failing to simultaneously achieve accuracy, convergence, and scalability. We will discuss this in detail in §2 and conduct experimental validation in §5.2.

Table 1: Optimization objectives of widely applied CBE methods. The number of '+' indicates the degree of optimization for the objective, which is discussed in §2 and measured in Table 2.

| Methods | Random (Qin et al., 2024) | Arena (Chiang et al., 2024) | AlpacaEval (Dubois et al., 2024) | UniCBE (Ours) |
|---|---|---|---|---|
| Accuracy | + | - | - | ++ |
| Convergence | - | + | - | ++ |
| Scalability | - | - | ++ | ++ |

To develop a method that can accurately assess model performance, quickly converge evaluation results, and ensure good scalability when new models are introduced, we theoretically analyze and summarize the following guidelines:

- Improving the _**accuracy**_ of evaluation results relies on completely **uniform** sampling of tuple combinations, so as to mitigate sampling bias.
- Accelerating the _**convergence**_ process involves ensuring the **uniformity** of the win-rate uncertainty matrix during its descending process, to reduce observation variance.
- Enhancing _**scalability**_ requires that sufficient budgets are allocated to newly added models to ensure **uniform** allocation among models, which helps reduce updating uncertainty.

Based on these insights, we propose UniCBE, a unified uniformity-driven framework that achieves CBE with better accuracy, convergence, and scalability. In each iteration of the evaluation process, we first establish sampling probability matrices under the different optimization objectives, based on real-time preference results. Afterwards, we integrate these matrices to obtain a global sampling probability matrix. Furthermore, we explore various tuple sampling strategies and preference aggregation methods to achieve optimal evaluation results.

To comprehensively validate the effectiveness and generalizability of UniCBE, we conduct experiments involving various types of judges (LLMs and humans), different benchmarks, varied model sets to be evaluated, diverse scenarios (static and dynamic), and multiple evaluation metrics. The main results indicate that, compared to the random sampling baseline, UniCBE saves over 17% of evaluation budgets when achieving the same assessment accuracy (with a Pearson coefficient exceeding 0.995 with the ground truth), demonstrating significantly better convergence and accuracy than the baselines. Furthermore, in scenarios where new models are continuously introduced, UniCBE saves over 50% of evaluation costs compared to random sampling, showcasing excellent scalability.

2 RELATED WORK

Comparative preference signals have long been used for model training (Ouyang et al., 2022; Touvron et al., 2023) and evaluation (Chiang et al., 2024; Yuan et al., 2024). Centered around comparing-based evaluation, we discuss existing budget allocation strategies and preference aggregation methods below.
**Budget Allocation** Many efforts have been made to explore preference budget allocation approaches. The most naive method is to randomly select a (models, sample) tuple for judging each time until the preference budget is reached (Qin et al., 2024). This method ensures relatively uniform sampling across tuple combinations in expectation, thereby guaranteeing the accuracy of evaluation results according to the derivation in §3.2. Arena (Chiang et al., 2024) samples model pairs proportionally to the variance gradient of the win rate at each step, seeking to accelerate the convergence of evaluation by reducing the uncertainty of the observed win-rate matrix in a greedy manner. AlpacaEval (Dubois et al., 2024) measures model performance by comparing the models under evaluation with a fixed reference model. When new models are introduced, the preference budget is prioritized for them to stabilize the estimation of their capabilities, thereby achieving good scalability. Although these methods perform well on their intended objectives, they cannot achieve a balance among accuracy, convergence, and scalability. This makes it imperative to explore a better preference budget allocation strategy that can effectively reconcile all these attributes.

**Preference Aggregation** Because the same group of models may exhibit different ranking relationships across different samples, it is essential to estimate global model capability scores that best fit these non-transitive preference results. Dubois et al. (2024); Zheng et al. (2023) directly use the average pair-wise win rate of each model as a measure of its capability. Feng et al. (2024); Wu & Aji (2023) apply the classical Elo rating system (Elo & Sloan, 1978) (see Appendix B.1 for a detailed introduction) by treating the evaluation process as a sequence of model battles in order to derive model scores. Fageot et al. (2024); Chiang et al. (2024) employ the Bradley-Terry model (Bradley & Terry, 1952) (see Appendix B.2 for a detailed introduction) to estimate model scores by maximizing the likelihood of the comparison results between models. We systematically compare the effectiveness of these preference aggregation methods in §5.3.

3 PRELIMINARY

In this section, we start by symbolically introducing the working process of CBE. Afterwards, we introduce the key objectives for achieving efficient CBE: accuracy, convergence, and scalability, and analyze the factors that influence them. We mainly discuss the pair-wise evaluation scenario (where the judge provides a preference between two models each time) owing to its wide application (Tashu & Horváth, 2018; Qin et al., 2024). List-wise preferences can easily be converted into pair-wise ones, as demonstrated in §5.4, so the discussion below is general for CBE.

3.1 PROCESS OF CBE

Generally, a CBE method $f$ can be divided into three parts: a budget allocation strategy $f^{ba}$, a tuple sampling strategy $f^{ts}$, and a preference aggregation strategy $f^{pa}$.
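To make this decomposition concrete, the following is a minimal sketch of the generic CBE loop; the function signatures (`f_ba`, `f_ts`, `f_pa`, `judge`) are hypothetical placeholders for the three strategies and the preference source, and the formal steps are spelled out next.

```python
from typing import Callable, List, Tuple

# Hypothetical signatures for the three components of a CBE method f:
# f_ba: past results -> sampling probability tensor P[i, j, k]
# f_ts: P -> one tuple (model_i, model_j, sample_k)
# f_pa: all preference results -> estimated model scores u[1:M]

def run_cbe(f_ba: Callable, f_ts: Callable, f_pa: Callable,
            judge: Callable, n_models: int, n_samples: int,
            budget: int) -> List[float]:
    """Generic comparing-based evaluation loop (sketch)."""
    results: List[Tuple[int, int, int, float]] = []  # (i, j, k, r)
    for _ in range(budget):
        P = f_ba(results, n_models, n_samples)   # step 1: budget allocation
        i, j, k = f_ts(P)                        # step 2: tuple sampling
        r = judge(i, j, k)                       # step 3: r in [0, 1], 0.5 = tie
        results.append((i, j, k, r))
    return f_pa(results, n_models)               # aggregate into scores u
```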
Given benchmark $D: s_{1:N}$ and models under evaluation $M: m_{1:M}$, we iterate the following steps:

_step 1._ apply $f^{ba}$ to attain the sampling matrix $P^l$ at iteration $l$, where $P^l_{i,j,k}$ denotes the probability of selecting tuple $(m_i, m_j, s_k)$ for judging;

_step 2._ apply $f^{ts}$ to sample a certain tuple $(m^{l_1}, m^{l_2}, s^l)$ based on $P^l$;

_step 3._ attain the preference result $r^l$ from the judge, where $r^l \in [0, 1]$ denotes the degree to which $m^{l_1}$ wins over $m^{l_2}$ (0.5 means a tie).

We stop this iterative process when the preset preference budget $T$ is reached and then apply $f^{pa}$ on the preference results $\{(m^{l_1}, m^{l_2}, s^l, r^l)\}_{l=1}^{T}$ to attain the estimated model scores $u_{1:M}$.

3.2 ACCURACY

Theoretically, with a budget of $\hat{T} = \frac{NM(M-1)}{2}$, we can explore all tuples to obtain the ground-truth estimation of the model scores $\hat{u}_{1:M}$. However, $T$ is typically much smaller than $\hat{T}$ in practice, considering the preciousness of preference signals. Previous studies (Vabalas et al., 2019; Kossen et al., 2021) have discussed the risks of introducing sampling bias in incomplete sampling scenarios, which we believe could similarly lead to potential risks in CBE. Considering that the content of each sample is $(m^{l_1}, m^{l_2}, s^l)$, we think the sampling bias exists across both samples and models.

**Bias across Samples.** Since different models may excel at answering different types of queries, the model scores can vary depending on the sampled data:

$$u_t = f^{pa}\big(\{(m_i, m_j, s_k, r_{i,j,k})\}_{i \in 1:M,\, j \in i+1:M}\big)_t = \hat{u}_t + \eta_{m_t,\cdot,s_k} \quad \text{for } \forall t, k \qquad (1)$$

where $\eta_{m_t,\cdot,s_k}$ represents the bias between the model score $u_t$ of $m_t$ observed when solely assessing on sample $s_k$ and the ground truth $\hat{u}_t$. To verify this, we conduct experiments on the AlpacaEval benchmark (Dubois et al., 2024) using GPT-4o (OpenAI, 2024) as the judge across 20 randomly selected LLMs (listed in Figure 2(c)).

Figure 2: Analyses of potential sampling bias risks in CBE. (a) Sampling bias with different preference aggregation strategies across samples and models. (b) Interval distribution of bias across samples with $f^{pa}_{BT}$ as the preference aggregation strategy. (c) Bias across models with $f^{pa}_{BT}$ as the preference aggregation strategy.
We first traversed all model pairs on samples $s_{1:N}$ to obtain the corresponding $N$ sets of preference results, and then calculated the respective $|\eta_{m_i,\cdot,s_k}|$ for $i \in 1:M$ and $k \in 1:N$ according to equation 1 (model scores are normalized to an average of 1). We calculate the average value of $|\eta_{m_i,\cdot,s_k}|$ across models and samples using the different preference aggregation strategies $f^{pa}$ discussed in §2. As shown in Figure 2(a), with all kinds of $f^{pa}$, the average difference between the model scores estimated on a single sample and the ground-truth values exceeds 0.25, indicating a significant bias across samples. We further analyze the proportion of samples with different biases using $f^{pa}_{BT}$ in Figure 2(b) and find that they overall follow a Gaussian distribution, showing the wide existence of sample bias in CBE.

**Bias across Models.** Just as humans may perform differently when facing different opponents, models may also have varying scores when competing against different models:

$$u_i = f^{pa}\big(\{(m_i, m_j, s_k, r_{i,j,k})\}_{k \in 1:N}\big)_i = \hat{u}_i + \eta_{m_i,m_j,\cdot} \quad \text{for } \forall i, j \qquad (2)$$

We validate this from two perspectives. (1) We calculate the average $|\eta_{m_i,m_j,\cdot}|$ according to equation 2, following the process above, and show the results in Figure 2(a). Overall, although the bias across models is significantly lower than the bias across samples, it still exists at a scale of around 0.05. We further visualize the pair-wise model score bias in Figure 2(c) to validate its wide existence. (2) We obtain over 1.7 million pairwise preference results across 129 LLMs collected by Chatbot Arena.* After excluding pairs with fewer than 50 comparisons, we calculate the pairwise win rates and find non-transitivity in 81 model triplets (win rate: $A > B$, $B > C$, $C > A$), which also verifies the existence of bias across models.

*https://storage.googleapis.com/arena_external_data/public/clean_battle_20240814_public.json
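The non-transitivity check above is easy to reproduce from a pairwise win-rate matrix; below is a minimal sketch (the matrix `W`, the nan-masking of sparse pairs, and the 0.5 win-rate threshold are illustrative assumptions, not the authors' exact filtering code).

```python
import itertools
import numpy as np

def non_transitive_triplets(W: np.ndarray) -> list:
    """Find triplets (a, b, c) with win rates a > b, b > c, c > a.

    W[i, j] is the empirical win rate of model i over model j
    (np.nan marks pairs with too few comparisons to trust).
    """
    M = W.shape[0]
    triplets = []
    for a, b, c in itertools.permutations(range(M), 3):
        if a < b and a < c:  # count each directed cycle once, anchored at its smallest index
            rates = (W[a, b], W[b, c], W[c, a])
            if not any(np.isnan(r) for r in rates) and all(r > 0.5 for r in rates):
                triplets.append((a, b, c))
    return triplets

# Example: model 0 beats 1, 1 beats 2, and 2 beats 0 -> one non-transitive cycle.
W = np.array([[np.nan, 0.6, 0.4],
              [0.4, np.nan, 0.7],
              [0.6, 0.3, np.nan]])
print(non_transitive_triplets(W))  # [(0, 1, 2)]
```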
**Uniform Allocation Brings the Least Bias.** Based on the discussions above, we analyze the budget allocation strategy that introduces the least bias. Considering the presence of sampling bias, the estimation error of $u_i$ with an evaluation budget of $T$ can be expressed as follows:

$$u_i - \hat{u}_i = \sum_{l=1}^{T} \mathbf{1}_{m^{l_1} = m_i} \times \eta_{m^{l_1}, m^{l_2}, s^l} \qquad (3)$$

Considering that $u = \hat{u}$ when all tuples are traversed, we have the following equation:

$$0 = u_i - \hat{u}_i = \sum_{j=1}^{M} \sum_{k=1}^{N} \eta_{i,j,k} \quad \text{for } \forall i \qquad (4)$$

The goal of obtaining the minimum estimation error for $u_i$ is thus transformed into sampling $T$ numbers (equation 3) from $MN$ numbers that sum to zero (equation 4), such that the absolute value of the sum of these $T$ numbers is minimized. We provide a detailed proof in Appendix A that the best strategy is completely uniform sampling. _**This denotes that the score estimation error can be minimized when the preference budgets are uniformly distributed across models and samples, to bring the least sampling bias.**_

3.3 CONVERGENCE

During the evaluation process, as new preference results are continuously observed, the estimated values of the models' win-rate matrix and model scores also change constantly. To accelerate the convergence process, we analyze the uncertainty of the win-rate matrix as follows. Define

$$X^l_{i,j} = \frac{1}{P^l_{i,j}}\, r^l\, \mathbf{1}_{m^{l_1}=m_i \,\&\, m^{l_2}=m_j} + \frac{1}{P^l_{j,i}}\, (1 - r^l)\, \mathbf{1}_{m^{l_1}=m_j \,\&\, m^{l_2}=m_i} \qquad (5)$$

The unbiased estimated win-rate matrix $\Phi$ at iteration $L$ can be calculated as follows:

$$\Phi^L = \frac{1}{L} \sum_{l=1}^{L} X^l \qquad (6)$$

We further estimate the variance matrix $\Theta$ as:

$$\Theta^L = \frac{1}{L} \sum_{l=1}^{L} (X^l - \Phi^L) \circ (X^l - \Phi^L) \qquad (7)$$

Denoting by $C^l_{i,j,k}$ whether the model pair $(m_i, m_j)$ has been compared on sample $s_k$ after $l$ iterations, the uncertainty (standard deviation) of each element in the win-rate matrix is:

$$\epsilon^l_{i,j} = \sqrt{\frac{\Theta^l_{i,j}}{\sum_{k=1}^{N} C^l_{i,j,k}}} \qquad (8)$$

Allocating the next preference budget to $(m_i, m_j)$ reduces the uncertainty of their win rate by:

$$\sqrt{\frac{\Theta^l_{i,j}}{\sum_{k=1}^{N} C^l_{i,j,k}}} - \sqrt{\frac{\Theta^l_{i,j}}{\sum_{k=1}^{N} C^l_{i,j,k} + 1}} \qquad (9)$$

Considering that our core objective is to conduct accurate capability assessments for all models and estimate their ranking relationship, _**we should globally ensure the uniformity of the win-rate uncertainty matrix during its descending process to achieve smooth evaluation convergence**_.

3.4 SCALABILITY

Due to the continuous emergence of new LLMs, the demand for scalability in evaluation methods is becoming increasingly prominent (Chern et al., 2024). Considering that we have evaluated $m_{1:M}$ with $T$ budgets, when model $m_{M+1}$ is introduced for assessment, a well-scalable CBE method should be able to quickly calibrate the capability estimates of $m_{1:M+1}$ with minimal additional preference budget. In this scenario, at the beginning stage when $m_{M+1}$ is introduced, $\mathrm{avg}(C_{M+1,\cdot,\cdot})$ is much smaller than $\mathrm{avg}(C_{\neq M+1,\cdot,\cdot})$. According to equation 8, the uncertainty at this point mainly arises from $\epsilon_{M+1}$, which is also intuitively easy to understand. _**Therefore, the key to improving scalability lies in allocating sufficient evaluation budgets to the newly added models to ensure uniform allocation among models, reducing the updating uncertainty.**_

4 UNICBE

The discussions above reveal guidelines for strengthening accuracy, convergence, and scalability in CBE. Based on these, we propose UniCBE, a unified uniformity-driven framework that simultaneously enhances all of these objectives.
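Before detailing the three sampling matrices, here is a minimal sketch of the uncertainty bookkeeping from §3.3 (equations 6-9); the array shapes and the function boundaries are our own illustrative choices.

```python
import numpy as np

def win_rate_uncertainty(X: np.ndarray, C: np.ndarray):
    """Uncertainty eps^l_{i,j} of the win-rate matrix (equations 6-8, sketch).

    X: (L, M, M) per-iteration observation matrices X^l from equation 5
    C: (M, M, N) binary indicators, C[i, j, k] = 1 if (m_i, m_j) was judged on s_k
    """
    Phi = X.mean(axis=0)                    # eq. 6: unbiased win-rate estimate
    Theta = ((X - Phi) ** 2).mean(axis=0)   # eq. 7: element-wise variance estimate
    counts = C.sum(axis=2)                  # comparisons per model pair
    with np.errstate(divide="ignore", invalid="ignore"):
        eps = np.sqrt(Theta / counts)       # eq. 8: standard deviation per entry
    return eps, Theta, counts

def uncertainty_reduction(Theta: np.ndarray, counts: np.ndarray) -> np.ndarray:
    """Eq. 9: how much one extra comparison on each pair would shrink eps."""
    return np.sqrt(Theta / counts) - np.sqrt(Theta / (counts + 1))
```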
4.1 BUDGET ALLOCATION

To ensure the uniformity of tuple combination sampling, and thereby minimize the introduction of sampling bias according to §3.2, we construct $P^{acc\text{-}l}$ at iteration $l$ as follows:

$$P^{acc\text{-}l}_{i,j,k} = \alpha^{-\sum_{k=1}^{N} C^l_{i,j,k}} \times \alpha^{-\sum_{i=1}^{M} C^l_{i,j,k}} \times \alpha^{-\sum_{j=1}^{M} C^l_{i,j,k}} \qquad (10)$$

where $\sum_{k=1}^{N} C^l_{i,j,k}$ denotes the number of times model pair $(m_i, m_j)$ has been compared, and $\sum_{i=1}^{M} C^l_{i,j,k}$ and $\sum_{j=1}^{M} C^l_{i,j,k}$ denote the number of times models $m_i$ and $m_j$ have been tested on $s_k$, respectively. If a certain model-model combination or model-sample combination has been sampled multiple times, equation 10 reduces the probability of such a combination being selected again, thereby achieving sufficient uniformity to minimize the bias introduced between models and samples, respectively.

To accelerate the convergence of the evaluation results, we construct $P^{con\text{-}l}$ according to §3.3 as follows:

$$P^{con\text{-}l}_{i,j,k} = \epsilon^l_{i,j} \qquad (11)$$

Sampling a specific model pair helps reduce the uncertainty of their win-rate estimate according to equation 9. By sampling proportionally to the win-rate uncertainty matrix, we can uniformly decrease the uncertainty for each model pair, thereby facilitating convergence.

We construct $P^{sca\text{-}l}$ to allocate more preference budget to newly introduced models, so as to improve scalability according to §3.4:

$$P^{sca\text{-}l}_{i,j,k} = \alpha^{-\sum_{k=1}^{N} \sum_{i=1}^{M} C^l_{i,j,k}} \times \alpha^{-\sum_{k=1}^{N} \sum_{j=1}^{M} C^l_{i,j,k}} \qquad (12)$$

Finally, we integrate the matrices above to obtain $P^l$, ensuring that sampling according to $P^l$ simultaneously balances the accuracy, convergence, and scalability of the evaluation results:

$$P^l = \frac{P^{acc\text{-}l} \circ P^{con\text{-}l} \circ P^{sca\text{-}l}}{\sum \left( P^{acc\text{-}l} \circ P^{con\text{-}l} \circ P^{sca\text{-}l} \right)} \qquad (13)$$

4.2 TUPLE SAMPLING

After obtaining $P^l$, we need to sample tuples for judging based on it. Two tuple sampling strategies are considered:

- **probabilistic sampling** $f^{ts}_p$ samples a tuple directly according to $P^l$;
- **greedy sampling** $f^{ts}_g$ selects the tuple with the maximum probability in $P^l$.

The default tuple sampling strategy of UniCBE is $f^{ts}_g$, which avoids the suboptimal achievement of the objectives caused by randomness in the sampling process.

4.3 PREFERENCE AGGREGATION

As discussed in §2, mainstream preference aggregation strategies include the average win rate $f^{pa}_{avg}$, the Elo rating system $f^{pa}_{Elo}$, and the Bradley-Terry model $f^{pa}_{BT}$. In our preliminary experiment (Figure 2(c)) we observe that $f^{pa}_{BT}$ better alleviates sampling bias, so we choose it as our default setting.

5 EXPERIMENTS

Centered around UniCBE, we empirically compare its performance with baselines and validate its scalability in §5.2, explore the optimal variants in §5.3, and demonstrate its generalizability under different settings in §5.4.

5.1 EXPERIMENTAL SETTINGS

**Benchmarks.** We choose the AlpacaEval (Dubois et al., 2024) and MT-Bench (Zheng et al., 2023) benchmarks for our experiments.
For AlpacaEval, we use its default version, which includes 805 high-quality human-annotated instructions and corresponding responses from multiple LLMs. We randomly choose 20 LLMs (listed in Figure 2(c)) for experiments, with GPT-4o and GPT-3.5-turbo as judges (see Appendix D for the prompt). For MT-Bench, we use the released responses from all 6 LLMs and the corresponding human preferences for experiments.

**Baselines.** We choose the widely applied† methods Random, Arena, and AlpacaEval as baselines, which have been discussed in §2 and listed in Table 1.

**Metrics.** To assess the effectiveness of the CBE methods, we evaluate the accuracy of both the estimated model pair-wise win rates and the model scores. We calculate the average absolute error between the estimated win rates and the corresponding ground truth (the estimates when $T = \hat{T}$). We calculate the Spearman correlation coefficient $r_s$ between the predicted model scores and the corresponding ground truth to evaluate the accuracy of the models' rank-order relationship, and the Pearson correlation coefficient $r_p$ to assess the accuracy of the linear relationship.

**Details.** To ensure the reliability of the experimental results, for each setting we randomly select $M$ (default 15 for AlpacaEval and 5 for MT-Bench) models and $N$ (default 805 for AlpacaEval and 700 for MT-Bench) samples, and report the average results across 10,000 random seeds. We do not observe an obvious performance difference in preliminary experiments when varying $\alpha$ within the range $[1.5, 3]$ (we conduct a detailed discussion of this in Appendix F.1), so we set the default value of $\alpha$ to 2 in our experiments.

†https://tatsu-lab.github.io/alpaca_eval/, https://lmarena.ai/

Figure 3: Results of compared CBE methods with GPT-4o as the judge on the AlpacaEval benchmark. The X-axis (applicable to all plots below) represents the preference budget ($k$). **∆** denotes the mean absolute error of the estimated win rate. $r_s$ and $r_p$ denote the Spearman and Pearson correlations between the estimated model scores and the ground truth, respectively.
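For concreteness, here is a minimal NumPy sketch of the §4.1 allocation rule (equations 10-13); the (M, M, N) array layout and the broadcasting of the index sums are our own reading of the notation, and in practice the diagonal (i = j) would be masked out.

```python
import numpy as np

def unicbe_sampling_matrix(C: np.ndarray, eps: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Global sampling matrix P^l of equation 13 (sketch).

    C:   (M, M, N) counts, C[i, j, k] = 1 if (m_i, m_j) has been judged on s_k
    eps: (M, M) win-rate uncertainty matrix from equation 8
    """
    pair_counts = C.sum(axis=2, keepdims=True)   # sum over k in eq. 10
    sum_over_i = C.sum(axis=0, keepdims=True)    # sum over i in eq. 10
    sum_over_j = C.sum(axis=1, keepdims=True)    # sum over j in eq. 10
    P_acc = alpha ** (-pair_counts) * alpha ** (-sum_over_i) * alpha ** (-sum_over_j)  # eq. 10

    P_con = eps[:, :, None]                      # eq. 11, broadcast over samples k

    tested_as_j = C.sum(axis=(0, 2))             # sum over i and k in eq. 12, shape (M,)
    tested_as_i = C.sum(axis=(1, 2))             # sum over j and k in eq. 12, shape (M,)
    P_sca = (alpha ** (-tested_as_i))[:, None, None] * \
            (alpha ** (-tested_as_j))[None, :, None]                                   # eq. 12

    P = P_acc * P_con * P_sca                    # eq. 13: element-wise (Hadamard) product
    return P / P.sum()                           # normalize to a distribution

# Greedy tuple sampling f^ts_g: pick the argmax entry of P.
M, N = 3, 4
P = unicbe_sampling_matrix(np.zeros((M, M, N)), np.ones((M, M)))
i, j, k = np.unravel_index(np.argmax(P), P.shape)
```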
5.2 MAIN RESULTS

**Accuracy and Convergence.** The results of the compared CBE methods on the AlpacaEval benchmark with GPT-4-turbo as the judge are shown in Figure 3. To better illustrate the results, we also calculate the percentage of the preference budget saved by each method, compared to the Random baseline, when achieving the same performance. In terms of performance, AlpacaEval << Random < Arena < UniCBE. To understand the differences in the performance of each method, we quantitatively analyze them based on the guidelines summarized in §3. To achieve accuracy, convergence, and scalability, the preference budget should be allocated in a way that ensures uniformity across tuples, uniformity across model pairs in win-rate uncertainty, and uniformity across models. We calculate the cosine similarity between the allocation results of these methods and the corresponding expected uniform vectors for each objective as a measure, denoted as $\beta_{acc}$, $\beta_{con}$, and $\beta_{sca}$, respectively (see Appendix E for the calculation process). As shown in Table 2, the fixed inclusion of the reference model in the tuple selection of AlpacaEval compromises uniformity across multiple aspects, resulting in lower $\beta$ values and significantly poorer performance. Arena and Random respectively improve the balance of uncertainty and the suppression of sampling bias, resulting in higher $\beta_{con}$ and $\beta_{acc}$ values. Following our guidelines, UniCBE improves $\beta_{con}$, $\beta_{acc}$, and $\beta_{sca}$ simultaneously, and saves over 17% of the preference budget compared to Random at a ∆ close to 0.01, showcasing improved accuracy and convergence.

Table 2: Measurement of the achievement of the objectives in §3 for the compared methods.

| Methods | Random | Arena | AlpacaEval | UniCBE |
|---|---|---|---|---|
| $\beta_{acc}$ | .5803 | .5725 | .0925 | .7364 |
| $\beta_{con}$ | .9081 | .9172 | .3515 | .9228 |
| $\beta_{sca}$ | .9972 | .9945 | .4987 | .9997 |

**Scalability.** To analyze scalability, we establish a scenario where we initially have 11 models awaiting assessment, and new models are sequentially added every 2000 samplings. As shown in Figure 4, whenever a new model is introduced, UniCBE rapidly stabilizes performance through adaptive preference allocation skewed toward the new model, saving over 50% of the budget compared to the Random baseline. In contrast, Arena and Random exhibit poorer scalability, since they do not consider scalability as an optimization objective. Although the budget allocated to the reference model is significantly more than that for other models, resulting in a lower $\beta_{sca}$ for AlpacaEval, the strategy of automatically allocating the budget to newly introduced models also provides it with good scalability.
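The $\beta$ measures above reduce to a cosine similarity between an empirical allocation vector and its uniform counterpart; a minimal sketch follows (flattening the allocation counts into a single vector is our assumption; Appendix E of the paper has the exact construction per objective).

```python
import numpy as np

def uniformity_beta(allocation_counts: np.ndarray) -> float:
    """Cosine similarity between an allocation vector and the uniform vector."""
    v = allocation_counts.astype(float).ravel()
    u = np.full_like(v, v.sum() / v.size)   # expected uniform allocation
    return float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))

# A perfectly uniform allocation scores 1.0; a skewed one scores lower.
print(uniformity_beta(np.array([5, 5, 5, 5])))   # 1.0
print(uniformity_beta(np.array([17, 1, 1, 1])))  # ~0.58
```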
Figure 4: Results of compared CBE methods in the scenario where new models are continuously introduced every 2000 iterations.

5.3 VARIANT ABLATIONS

**Budget Allocation Objectives.** We test the impact of the different optimization objectives by removing $P^{acc}$, $P^{con}$, and $P^{sca}$ from equation 13 separately.
As shown in Figure 5, the significant performance degradation observed when removing $P^{acc}$ from UniCBE indicates that mitigating sampling bias to improve accuracy is the most critical factor in achieving efficient CBE. Furthermore, we find that $P^{con}$ has a considerable impact on $r_s$. We hypothesize that this is because balancing the uncertainty among different models helps prevent any one model from having a significant ranking bias due to its larger uncertainty. The performance drop when removing $P^{sca}$ also suggests that ensuring uniformity in sampling across models not only enhances scalability but also further reduces sampling bias, thereby improving accuracy.

Figure 5: Ablation studies of UniCBE with GPT-4o as the judge on the AlpacaEval benchmark.

**Tuple Sampling and Preference Aggregation Strategies.** As shown in Figure 5, replacing greedy sampling with probabilistic sampling $f^{ts}_p$ results in a significant performance drop. This is likely because the randomness introduced by $f^{ts}_p$ hinders the achievement of the multiple optimization objectives. In terms of preference aggregation strategies, the Elo rating system $f^{pa}_{Elo}$ shows a slight performance decline compared to the BT model due to its higher instability (Boubdir et al., 2023). Moreover, the strategy of directly using the average win rate $f^{pa}_{avg}$ may introduce additional bias, as it fails to consider the varying strengths of the opponents faced
Idea Generation Category:
3Other
rpwGUtTeA5
# SIMULPL: ALIGNING HUMAN PREFERENCES IN SIMULTANEOUS MACHINE TRANSLATION

**Donglei Yu**¹², **Yang Zhao**¹², **Jie Zhu**³, **Yangyifan Xu**¹², **Yu Zhou**¹²∗, **Chengqing Zong**¹²
¹ School of Artificial Intelligence, University of Chinese Academy of Sciences
² State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
³ Graduate School of Translation and Interpretation, Beijing Foreign Studies University
{yudonglei2021,zhaoyang2015}@ia.ac.cn jojo-josephine@bfsu.edu.cn {yangyifanxu2021,yu.zhou,chengqing.zong}@ia.ac.cn
∗ Corresponding author.

ABSTRACT

Simultaneous Machine Translation (SiMT) generates translations while receiving streaming source inputs. This requires the SiMT model to learn a read/write policy, deciding when to translate and when to wait for more source input. Numerous linguistic studies indicate that audiences in SiMT scenarios have distinct preferences, such as accurate translations, simpler syntax, and no unnecessary latency. Aligning SiMT models with these human preferences is crucial to improving their performance. However, this issue remains unexplored. Additionally, preference optimization for the SiMT task is challenging: existing methods focus solely on optimizing the generated responses, ignoring human preferences related to latency and the optimization of the read/write policy during the preference optimization phase. To address these challenges, we propose Simultaneous Preference Learning (SimulPL), a preference learning framework tailored for the SiMT task. In the SimulPL framework, we categorize SiMT human preferences into five aspects: **translation quality preference**, **monotonicity preference**, **key point preference**, **simplicity preference**, and **latency preference**. By leveraging the first four preferences, we construct human preference prompts to efficiently guide GPT-4/4o in generating preference data for the SiMT task. In the preference optimization phase, SimulPL integrates **latency preference** into the optimization objective and enables SiMT models to improve the read/write policy, thereby aligning with human preferences more effectively. Experimental results indicate that SimulPL exhibits better alignment with human preferences across all latency levels in Zh→En, De→En and En→Zh SiMT tasks. Our data and code will be available at https://github.com/EurekaForNLP/SimulPL.

1 INTRODUCTION

Simultaneous Machine Translation (SiMT) (Grissom II et al., 2014; Gu et al., 2017; Ma et al., 2019) generates translations while receiving streaming source inputs. Therefore, the SiMT model needs to learn not only translation ability but also a read/write policy during training, to decide whether to wait for the next incoming source token (READ) or to generate a new target token (WRITE) (Grissom II et al., 2014; Alinejad et al., 2021). The real-time nature of SiMT scenarios leads to unique human preferences from audiences, which has been demonstrated by relevant linguistic studies (Kurz, 2001; Zwischenberger, 2010). On one hand, audiences prefer translations that are accurate and easy to understand (Moser, 1996; Sridhar et al., 2013; Dayter, 2020); on the other hand, they also prefer translations to be delivered without unnecessary latency. Fulfilling these preferences is an important goal for interpreters (Amini et al., 2013; Kurz, 2001) and should also be considered in SiMT.
However, how to make SiMT models align with human preferences remains unexplored. Existing SiMT methods (Ma et al., 2019; Alinejad et al., 2021) are primarily trained and evaluated on corpora from the normal offline machine translation (OMT) task, which do not reflect real SiMT scenarios. Some studies (Chen et al., 2020; Guo et al., 2023) have proposed constructing monotonic references to avoid hallucinations, but they still fail to comprehensively consider human preferences. Furthermore, aligning preferences in the SiMT task presents its own challenges. Existing preference alignment methods (Rafailov et al., 2024; Xu et al., 2024a; Ethayarajh et al., 2024) are designed for tasks such as OMT and question answering, which focus solely on optimizing the model's generated responses. These methods have limitations in the SiMT context: they do not account for human preferences regarding latency in the SiMT task and fail to consider enhancing the read/write policy of SiMT models during the preference optimization phase. As a result, current preference alignment methods are unsuitable for the SiMT task.

To address these issues, we propose Simultaneous Preference Learning (SimulPL), a preference learning framework tailored for the SiMT task. In the SimulPL framework, based on existing research in linguistics and computational linguistics (Moser, 1996; Zwischenberger, 2010; He et al., 2016; Cho, 2016; Chen et al., 2020; Guo et al., 2023), we categorize human preferences in SiMT scenarios into five aspects: **translation quality preference**, **monotonicity preference**, **key point preference**, **simplicity preference**, and **latency preference**. Based on the first four preferences, SimulPL constructs human preference prompts to effectively guide GPT-4/4o in generating preference data for the SiMT task. During the fine-tuning phase, SimulPL proposes Multi-task Supervised Fine-tuning (MSFT) to jointly train the translation ability and read/write policy of the SiMT model for initial preference alignment. Subsequently, SimulPL employs SimulDPO for further preference optimization. During the SimulDPO phase, SimulPL integrates **latency preference** into the optimization objective and enables the SiMT model to further adjust its read/write policy, thereby facilitating more effective alignment with human preferences.

We evaluate SimulPL on test sets with references that we manually revised to align with human preferences. Experimental results demonstrate that SimulPL achieves higher translation quality across all latency levels. Furthermore, our manual assessment and multi-aspect evaluation indicate that SimulPL exhibits better alignment with human preferences, both from an overall perspective and across the five categorized aspects. To the best of our knowledge, SimulPL is the first preference learning framework for simultaneous tasks like SiMT. Our contributions can be summarized as follows:

- Our work addresses a critical gap in the study of human preferences for SiMT scenarios. We categorize SiMT human preferences into five aspects: translation quality, monotonicity, key points, simplicity, and latency. This categorization enables the construction of human preference prompts to efficiently guide LLMs in generating preference data for SiMT.
- We propose SimulPL, a preference learning framework tailored for SiMT scenarios.
Unlike existing preference learning methods, SimulPL integrates latency preference into the optimization objective and allows the SiMT model to improve its read/write policy during the preference optimization process, enabling better alignment with human preferences.
- Experimental results demonstrate that SimulPL effectively enhances translation quality across various latency levels. Furthermore, our preference evaluation indicates that SimulPL exhibits better alignment with human preferences.

2 RELATED WORK

**Simultaneous Translation** Various SiMT methods introduce different read/write policies. Some approaches propose rule-based fixed policies (Ma et al., 2019; Elbayad et al., 2020), while others focus on adaptive policies that adjust dynamically based on the context. These adaptive policies are modeled in various forms, such as multi-head monotonic attention (Ma et al., 2020b), Transducer (Liu et al., 2021), information transport model (Zhang & Feng, 2022), Hidden Markov model (Zhang & Feng, 2023), and self-modifying process (Yu et al., 2024). More recently, some studies (Wang et al., 2023a; Agostinelli et al., 2024; Wang et al., 2024) have also demonstrated the promising performance of large language models in SiMT tasks. However, these efforts are predominantly validated on OMT datasets. Chen et al. (2020) constructed monotonic pseudo-references to reduce unnecessary reorderings. Wang et al. (2023b) generated monotonic references with two-stage beam search. Guo et al. (2023) employed RL to balance the monotonicity and quality of translations. However, existing work fails to account for real SiMT scenarios and alignment with human preferences.

Figure 1: Overview of our proposed SimulPL framework. With the first four preferences, we construct the human preference prompts to guide GPT-4/4o in generating human-preferred translations. The latency preference is integrated into the preference optimization process.

**LLM Alignment** Aligning LLMs with human preferences has recently become a crucial research challenge. Reinforcement Learning from Human Feedback (RLHF) is one of the key approaches (Ouyang et al., 2022; Bai et al., 2022; Yuan et al., 2023). For stable training and lower costs, Rafailov et al. (2024) proposed Direct Preference Optimization (DPO), which directly optimizes LLMs without relying on a reward model. Similarly, methods such as CPO (Xu et al., 2024a) and KTO (Ethayarajh et al., 2024) were introduced to improve DPO. Besides, preference alignment is also widely applied to enhance specific tasks (Stiennon et al., 2020; Chen et al., 2024b; Yang et al., 2024). Xu et al. (2024b) explored using RLHF to improve translation quality. He et al. (2024) utilized automated evaluation metrics as feedback to enhance translation performance. Nevertheless, existing methods neglect latency preference in the SiMT task and do not improve the read/write policy in the optimization process, both of which negatively impact alignment in the SiMT task.

3 PRELIMINARIES

**Reward Modeling** Existing preference alignment methods typically involve reward modeling and preference optimization. For reward modeling, a human-annotated preference dataset $(\mathbf{x}, \mathbf{y}^w, \mathbf{y}^l)$ is first constructed, where $\mathbf{x}$ represents the input and $\mathbf{y}^w$ is preferred over $\mathbf{y}^l$, denoted as $\mathbf{y}^w \succ \mathbf{y}^l$.
Subsequently, existing methods (Christiano et al., 2017; Kim et al., 2023) often train a reward model based on the Bradley-Terry model (Bradley & Terry, 1952), which is formulated as:

$$p(\mathbf{y}^w \succ \mathbf{y}^l \mid \mathbf{x}) = \frac{\exp(r(\mathbf{x}, \mathbf{y}^w))}{\exp(r(\mathbf{x}, \mathbf{y}^w)) + \exp(r(\mathbf{x}, \mathbf{y}^l))} = \sigma\big( r(\mathbf{x}, \mathbf{y}^w) - r(\mathbf{x}, \mathbf{y}^l) \big) \qquad (1)$$

where $r(\mathbf{x}, \mathbf{y}^w)$ is the score estimated by the reward model, and $\sigma(\cdot)$ is the logistic sigmoid function.

**Preference Optimization** Reinforcement learning (RL) is widely used for preference optimization. Using signals from a reward model, the LLM can be optimized with the following objective:

$$\max_{\pi_\theta}\; \mathbb{E}_{\mathbf{x} \sim D,\, \mathbf{y} \sim \pi_\theta(\mathbf{y} \mid \mathbf{x})}\big[ r(\mathbf{x}, \mathbf{y}) \big] - \beta\, \mathbb{D}_{\mathrm{KL}}\big[ \pi_\theta(\mathbf{y} \mid \mathbf{x})\, \|\, \pi_{\mathrm{ref}}(\mathbf{y} \mid \mathbf{x}) \big] \qquad (2)$$

Table 1: Statistics of our constructed datasets. We present the reference-free COMET scores of our target sentences annotated with GPT-4/4o and of the original target sentences.

| Dataset | Size (train) | Size (test) | Ref-free COMET (GPT-4/4o) | Ref-free COMET (Origin) |
|---|---|---|---|---|
| Zh→En | 13,491 | 2,000 | 79.13 | 73.72 |
| De→En | 15,717 | 2,168 | 78.93 | 75.02 |
| En→Zh | 19,967 | 2,841 | 80.30 | 76.97 |

Figure 2: Human evaluation between our annotated target references and the original target references (Our Win / Tie / Our Lose: Zh→En 63%/5%/32%, De→En 58%/8%/34%, En→Zh 59%/15%/26%). Our newly annotated references are more preferred.

Additionally, several methods, such as DPO (Rafailov et al., 2024), directly conduct preference alignment without a reward model. However, existing preference alignment methods cannot be directly applied to the SiMT task, as their optimization objectives do not account for the latency preference and do not adjust the read/write policy in the optimization process.

4 METHOD: SIMULPL

We propose Simultaneous Preference Learning (SimulPL), a preference learning framework tailored for the SiMT task. The overview of SimulPL is shown in Figure 1. In this framework, we construct human preference prompts based on our categorization of SiMT human preferences to guide GPT-4/4o in generating preference data. During the fine-tuning phase, SimulPL introduces Multi-task Supervised Fine-tuning (MSFT) to jointly learn translation ability and the read/write policy for initial preference alignment. During the preference optimization phase, SimulPL proposes Simultaneous Direct Preference Optimization (SimulDPO), which takes latency preference into account and further improves the read/write policy. The details are discussed in the following.

4.1 CATEGORIZATION OF HUMAN PREFERENCES

In real-time SiMT scenarios, the audience exhibits unique human preferences (Kurz, 2001; Zwischenberger, 2010; Amini et al., 2013). Based on existing research in linguistics and computational linguistics, we categorize SiMT human preferences into five aspects:

- **Translation Quality Preference**: As in OMT, faithful and fluent translations are also preferred in SiMT (Ma et al., 2019; Miao et al., 2021).
- **Monotonicity Preference**: In the SiMT process, translating monotonically in accordance with the source word order allows translations to be delivered with minimal pauses (Yang et al., 2023; Chen et al., 2020), which is favored by the audience (Macías, 2006).
- **Key Point Preference**: According to existing research (Moser, 1996; He et al., 2016), concise translations that highlight important information points are more appealing in SiMT scenarios than those that provide complete information.
- **Simplicity Preference**: In real-time SiMT scenarios, the audience prefers sentences with simpler syntactic structures, which are easier to follow (Sridhar et al., 2013; Dayter, 2020).
- **Latency Preference**: In real-time settings, the audience prefers translations to be delivered without unnecessary latency (Rennert, 2010; Cho, 2016).

It is important to note that latency preference differs from the other four preferences, as it focuses not on the translation content but rather on reducing delays. Therefore, SimulPL aligns with the first four preferences by improving translation ability, and with the latency preference by enhancing the read/write policy.

4.2 DATA CONSTRUCTION

**Annotation of Human-preferred Translations** In our categorization, the first four preferences are reflected in the translation content. Therefore, we utilize them as prior knowledge to construct human preference prompts and leverage GPT-4/4o (Achiam et al., 2023) to efficiently generate human-preferred translations, denoted as $Y^w$. The original references, not fully aligned with human preferences, are denoted as $Y^l$. For the training data, we select subsets from three datasets, WMT15 De→En, WMT22 Zh→En, and MuST-C En→Zh, for annotation. The complete prompt used for annotation is provided in Appendix A.1. Correspondingly, newstest2015 De→En, newstest2021 Zh→En, and tst-COMMON are annotated for evaluation. To ensure the accuracy of the test sets, we first use GPT-4/4o to generate drafts and then manually revise them to produce human-preferred references. Our annotators are all qualified in simultaneous interpretation, ensuring reliable and trustworthy revisions. The statistics for our constructed datasets, along with ref-free COMET scores, are shown in Table 1. Notably, we calculate reference-free COMET scores for both kinds of references, showing that our annotated references match the quality of the originals. To verify that our constructed translation data aligns with human preferences, we randomly sample 100 sentences from each of the three language pairs and conduct a manual evaluation by professional simultaneous interpreters. The results in Figure 2 show that our annotated data achieves a higher win rate, indicating stronger alignment with human preferences. To further validate the quality of our annotated data, we conduct additional comparisons between GPT-generated translations and manually revised translations through human evaluation, along with automatic evaluation from the perspective of the first four preferences. These results are available in Appendix A.2.

**Prefix Pairs Extraction** To enable the SiMT model to learn translation based on source prefixes instead of complete source sentences, we extract prefix pairs from our annotated sentence pairs using word alignment and add them to the training data (a sketch follows below). For each sentence pair $(X, Y^w)$, we use awesome-align (Dou & Neubig, 2021) to obtain the word alignment.
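Concretely, the extraction can be sketched as follows; this is a minimal sketch under our own reading of the criterion (the formal definition is stated next), assuming the alignment is given as 1-indexed (source, target) links, that a multi-aligned target token is represented by its first link, and that an unaligned target token is treated as not yet translatable.

```python
from typing import Dict, List, Set, Tuple

def extract_prefix_pairs(src_len: int, tgt_len: int,
                         alignment: Set[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Candidate (|x|, |y^w|) prefix lengths per the criterion in equation 3."""
    a: Dict[int, int] = {}
    for s_pos, t_pos in sorted(alignment):
        a.setdefault(t_pos, s_pos)       # a_t: source position aligned to target token t
    pairs = []
    for s in range(1, src_len + 1):
        t = 0
        # grow the target prefix while its next token aligns inside x_{<=s}
        while t + 1 <= tgt_len and a.get(t + 1, src_len + 1) <= s:
            t += 1
        # require a_{t+1} > |x| and keep proper, non-empty prefixes
        if 0 < t < tgt_len:
            pairs.append((s, t))
    return pairs

# Monotone toy alignment: target token t aligns to source position t.
print(extract_prefix_pairs(3, 3, {(1, 1), (2, 2), (3, 3)}))  # [(1, 1), (2, 2)]
```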
For target token $y_t$, we denote the corresponding source token as $x_{a_t}$, and the set of extracted prefix pairs is denoted as:

$$D_p^w = \{(\mathbf{x}, \mathbf{y}^w) \mid \text{if } 0 < t \le |\mathbf{y}^w| \text{ then } 0 < a_t \le |\mathbf{x}|;\; a_{|\mathbf{y}^w|+1} > |\mathbf{x}|\} \qquad (3)$$

Intuitively, for the given source prefix $\mathbf{x}$, the target prefix $\mathbf{y}^w$ includes all the translatable content. Similarly, we can extract prefix pairs from sentence pairs $(X, Y^l)$ to obtain $D_p^l$. Then, we merge $D_p^w$ and $D_p^l$ to create the prefix-level preference dataset:

$$D_p = \{(\mathbf{x}, \mathbf{y}^w, \mathbf{y}^l) \mid (\mathbf{x}, \mathbf{y}^w) \in D_p^w,\; (\mathbf{x}, \mathbf{y}^l) \in D_p^l\} \qquad (4)$$

4.3 MULTI-TASK SUPERVISED FINE-TUNING

Based on a pre-trained language model $\pi_{\mathrm{pre}}$, SimulPL introduces Multi-task Supervised Fine-tuning (MSFT) to jointly learn translation ability and the read/write policy on $D_p^w$ for initial preference alignment. For translation ability, the model learns to generate the target prefix $\mathbf{y}^w$ from the source prefix $\mathbf{x}$. For the read/write policy, SimulPL adds an extra confidence layer, consisting of a linear layer and a sigmoid layer, to make read/write decisions. Specifically, when predicting $y_t^w$, an additional confidence $c_t^w$ is estimated by the confidence layer. If $t < |\mathbf{y}^w|$, the model should predict $c_t^w = 1$, indicating the WRITE decision. Otherwise, if $t > |\mathbf{y}^w|$, the model should estimate $c_t^w = 0$, which means it should stop translating and choose the READ decision. The complete training loss for the MSFT phase is calculated as:

$$L_{\mathrm{MSFT}} = -\sum_{t=1}^{|\mathbf{y}^w|} \log \pi_{\mathrm{sft}}\big(y_t^w \mid \mathbf{x}, \mathbf{y}^w_{\le t-1}\big) - \sum_{t=1}^{|\mathbf{y}^w|+1} \Big[ \mathbb{I}(t \le |\mathbf{y}^w|) \log c_t^w + \mathbb{I}(t > |\mathbf{y}^w|) \log (1 - c_t^w) \Big] \qquad (5)$$

where $\pi_{\mathrm{sft}}$ is initialized with the parameters of $\pi_{\mathrm{pre}}$, and $\mathbb{I}(\cdot)$ denotes the indicator function. Note that we train the model to predict $c^w_{|\mathbf{y}^w|+1} = 0$, allowing the SiMT model to learn to stop translating at the appropriate position.

4.4 SIMULTANEOUS DIRECT PREFERENCE OPTIMIZATION

After the MSFT phase, SimulPL introduces Simultaneous Direct Preference Optimization (SimulDPO) to further align with human preferences. In the SimulDPO phase, SimulPL integrates the latency preference into the optimization objective and allows the SiMT model to further improve its read/write policy during preference optimization.
Idea Generation Category:
1Cross-Domain Application
XBF63bHDZw
# TOPOLM: BRAIN-LIKE SPATIO-FUNCTIONAL ORGANIZATION IN A TOPOGRAPHIC LANGUAGE MODEL

**Neil Rathi**∗¹², **Johannes Mehrer**∗¹, **Badr AlKhamissi**¹, **Taha Binhuraib**³, **Nicholas M. Blauch**⁴, **Martin Schrimpf**¹†
¹ EPFL, ² Stanford University, ³ Georgia Institute of Technology, ⁴ Harvard University

ABSTRACT

Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building on work from the vision literature, we develop TopoLM, a transformer language model with an explicit two-dimensional spatial representation of model units. By combining a next-token prediction objective with a spatial smoothness loss, representations in this model assemble into clusters that correspond to semantically interpretable groupings of text and closely match the functional organization in the brain's language system. TopoLM successfully predicts the emergence of a spatially organized cortical language system as well as the organization of functional clusters selective for fine-grained linguistic features empirically observed in human cortex. Our results suggest that the functional organization of the human language system is driven by a unified spatial objective, and provide a functionally and spatially aligned model of language processing in the brain.¹

1 INTRODUCTION

Artificial neural network (ANN) models of language have recently been shown to accurately predict neural activity in the human language system (Schrimpf et al., 2021; Caucheteux & King, 2022; Goldstein et al., 2022). When presented with the same text input, the unit activity at internal layers of especially transformer-based models (Vaswani et al., 2017; Radford et al., 2019) is strikingly similar to the internal activity measured experimentally in human cortex. The most powerful models predict close to 100% of the explainable variance of neural responses to sentences in some brain datasets (Schrimpf et al., 2021). However, while there is strong alignment to the brain's _functional responses_, a crucial element of cortex is entirely lacking from today's language models: the _spatial arrangement_ of neurons on the cortical surface.

In recent models of the visual system, the introduction of _topography_ has led to ANNs that begin to match brain activity functionally as well as spatially (Lee et al., 2020; Margalit et al., 2024; Keller et al., 2021; Blauch et al., 2022; Lu et al., 2023). These models provide a principle for understanding the development of spatial organization in the brain, in the form of minimizing wiring cost, such that neurons with similar response profiles tend to cluster together. These clusters resemble the spatio-functional organization in early cortex, with orientation preferences such as pinwheels (Hubel & Wiesel, 1962; 1968; Maunsell & Newsome, 1987; Felleman & Van Essen, 1991), and in higher-level visual regions with category-selective regions such as face patches (Kanwisher et al., 1997; Haxby et al., 2001; Tsao et al., 2003; 2006; 2008; Freiwald et al., 2009). The topography of the human language system, on the other hand, lacks a comprehensive computational explanation.
Neuroscience experiments suggest both a macro-organization at the level of a distributed cortical network that selectively responds to linguistic processing (Fedorenko et al., 2010; 2011; 2024; Blank et al., 2014), as well as a micro-organization into clusters that correspond to syntactic and semantic categories such as verbs, nouns, and concrete words (Shapiro et al., 2006; Moseley & Pulvermüller, 2014; Hauptman et al., 2024).

∗ Equal contribution by NR and JM. † Correspondence: martin.schrimpf@epfl.ch
¹ Code available at https://github.com/epflneuroailab/topolm.

Figure 1: **Building a topographic language model with brain-like spatio-functional organization. (a)** TopoLM modifies the Transformer architecture with a two-dimensional spatial encoding at the output of each attention and MLP layer. This representation enables the use of a spatial correlation loss that encourages smooth response profiles in adjacent units. This spatial loss is jointly optimized with the cross-entropy task loss during training. **(b)** At each forward pass, we randomly select five neighborhoods in each layer (of which only 3 are shown here for clarity) and compute the pairwise correlation of unit activations within each neighborhood. The spatial loss is computed by comparing these correlations to the inverse distances between the associated unit pairs, with the final loss averaged across unit pairs and neighborhoods. Computing the loss on cortical neighborhoods is an efficient approximation of the spatial loss. **(c)** We use a FWHM filter to simulate the fMRI sampling process such that a simulated voxel's response ('fMRI-like signal') is composed of a combination of responses from neighboring units (Kriegeskorte et al., 2010). We simulate the response as a Gaussian random variable with FWHM 2.0 mm, assuming unit distances of 1.0 mm.

What are the mechanisms underlying this spatio-functional organization of the language system in the brain? Here, we develop **TopoLM**, a neural network model for brain-like topographic language processing. TopoLM is based on the transformer architecture but incorporates an explicit spatial arrangement of units. We train the model via a combined task and spatial loss, which optimizes the model to perform autoregressive language modeling while encouraging local correlation, similar to a recent approach used in vision (Lee et al., 2020; Margalit et al., 2024). The spatio-functional organization that emerges in this model is semantically interpretable and aligned with clusters that have been observed experimentally in brain recordings. Comparing TopoLM with a non-topographic baseline model (i.e., one trained without the spatial loss) on a series of benchmarks, we show that while TopoLM achieves slightly lower scores on some behavioral tasks (BLiMP), its performance on other downstream tasks (GLUE) and on brain alignment benchmarks (using the Brain-Score platform) is on par with the non-topographic control.
Importantly, this spatio-functional organization arises purely as a result of the combined task and spatial loss, as the model is trained solely on naturalistic text _without_ fitting to brain data. This work thus extends the principle of cortical response smoothness proposed in vision (Margalit et al., 2024) into the language system, providing a unified explanation for understanding the functional organization of cortex.

2 RELATED WORK

**Topographic Vision Models.** In contrast to the core human language system, the primate visual cortex shows a clear hierarchy of interconnected regions starting at the primary visual cortex (V1), passing V2 and V4, and reaching inferior temporal cortex (IT), which is thought to underlie representations of complex visual objects such as faces and scenes. Within V1, orientation-selective cortical patches ('hypercolumns') are spatially arranged in circular 'pinwheels,' where the preferred orientation of neurons rotates smoothly around a central point, covering all possible orientations (0 to 180 degrees). This structure is observed across species, including humans and non-human primates (Kaschube et al., 2010). On a more global level, early visual areas (V1, V2) show strong retinotopic organization, where nearby stimuli in the visual field activate nearby locations in early visual regions (Engel et al., 1994; 1997; Tootell et al., 1998). While retinotopic organization weakens considerably yet remains detectable in higher-level regions of the visual cortex (Larsson & Heeger, 2006; Schwarzlose et al., 2008; Kravitz et al., 2010; Groen et al., 2022), the final stage of the ventral visual pathway, IT, shows clear categorical clustering into, e.g., regions selective for faces or scenes (Kanwisher et al., 1997; Haxby et al., 2001). This spatial organization of the primate visual cortex has prompted work on topographic ANNs for vision. First approaches focused on the organization of inferotemporal cortex, restricting topographic organization to later model layers (TDANN, Lee et al. 2020; ITN, Blauch et al. 2022; DNNSOM, Zhang et al. 2021; Doshi & Konkle 2023). Recent models are designed such that all layers are topographic and thus mimic topographic features across the visual cortex—for example, smoothly varying orientation preference maps forming pinwheels in model V1 and category-selective regions in model IT (All-TNNs, Lu et al. 2023; new version of TDANN, Margalit et al. 2024). Our topographic language model belongs to the **Topographic Deep Artificial Neural Network** (TDANN) family of models (Lee et al., 2020; Margalit et al., 2024). Herein, a central claim is that inducing a preference towards _smoothness_ of cortical responses in the model provides a **unifying principle** for the development of topography in the brain. This smoothness optimization is applied to all layers in the model and replicates functional organization in early (e.g., V1) _and_ later (e.g., IT) regions of the visual cortex. The TDANN's spatial smoothness, as implemented with an additional loss term, is an indirect but efficient approach to minimizing local wiring length, and can additionally help to minimize long-range connectivity, which, in neuroscience terms, corresponds to brain size and power consumption (Margalit et al., 2024).

**Topographic Language Models.** Comparatively little work has explored the idea of inducing topography in language models. In particular, the only topographic language model we are aware of is BinHuraib et al.
(2024)'s **Topoformer**, which induces spatial organization onto a single-headed attention Transformer architecture using local connectivity constraints. This model arranges keys and queries on 2D grids, combined with a locally connected layer in the attention mechanism as opposed to full connectivity. Our approach primarily differs from Topoformer in that we use a spatial smoothness _loss_ term to drive the emergence of local correlations, similarly to Lee et al. (2020) and Margalit et al. (2024)'s TDANN vision models. In this sense, our model extends Margalit et al. (2024)'s unifying principle of functional organization from the visual cortex into the language system. TopoLM is thus able to benefit from full connectivity, rather than requiring local connectivity to develop clustering. Because we apply this loss to the output of the entire attention mechanism at each layer (as well as to the MLP), TopoLM can also benefit from multi-head attention, which empirically improves fits to neural data (AlKhamissi et al., 2024); this was not explored in BinHuraib et al. (2024). Finally, our model also uses an autoregressive task loss, rather than a masked autoencoder objective (as used in BinHuraib et al. (2024)), which has been shown to have higher performance on neural alignment benchmarks (e.g., Schrimpf et al., 2021).

3 MODEL DESIGN AND VISUALIZATION

Instead of the convolutional neural network architecture used in topographic vision models (Margalit et al., 2024), we use the Transformer architecture (Vaswani et al., 2017), which is dominant in language modeling. We augment the objective function with a spatial correlation loss, in addition to the cross-entropy task loss. This loss function measures spatial smoothness, which serves as an efficiently computable proxy for neural wiring length: neurons located close to one another should have similar response profiles—i.e., their activations should be correlated (Lee et al., 2020). To introduce a notion of 'space' in the model, we bijectively map the units of each attention layer and MLP to a square grid. We randomly permute these positions for each layer such that each layer has a unique spatial encoding. [2]

2 Our goal is to abstract away from feed-forward propagation as much as possible, as the hierarchical organization of the brain is quite different from that of a language model. This random permutation prevents the model from exploiting the feed-forward nature of the Transformer. Without it, the model minimizes spatial loss by propagating the same spatial pattern through the network; see Figure 12.

On each forward pass, we first compute the pairwise Pearson's correlation vector $\mathbf{r}_k$ between unit activations on the input batch for each layer $k$ (see Figure 1B). If a layer has $N$ units, $\mathbf{r}$ is of dimension $\binom{N}{2}$. Then, the spatial loss for layer $k$ is given by

$$\mathrm{SL}_k = \frac{1}{2}\left(1 - \mathrm{corr}(\mathbf{r}_k, \mathbf{d}_k)\right), \qquad (1)$$

where $\mathbf{d}_k$ is a vector of pairwise inverse distances between units, based on their spatial encoding, and corr is Pearson's $r$. This means that nearby units (i.e., high inverse distance) should have highly correlated activations on the same inputs, and that distant units should be less correlated; this gives us a notion of spatial smoothness. We scale by a factor of 0.5 to ensure that $\mathrm{SL} \in [0, 1]$. We compute this spatial loss for every attention and MLP layer in a Transformer, prior to normalization and addition into the residual stream. Rather than computing the spatial loss for the entire layer, as in Margalit et al.
(2024), we approximate the loss using small neighborhoods, ensuring that the model optimizes for local, rather than global 'long-distance', constraints. [3] For each batch of inputs, the model is then optimized subject to the loss criterion

$$\ell = \mathrm{TL} + \sum_{k \in \text{layers}} \alpha_k \, \mathrm{SL}_k, \qquad (2)$$

where TL is the task loss and $\alpha_k$ is the relative weight of the spatial loss associated with layer $k$. This combined loss encourages the model to learn representations that are both spatially organized and useful (and, in the case of a self-supervised cross-entropy task loss, task-general).

**Model Specification and Training.** In the experiments below, we utilize an adapted GPT-2-small-style architecture (Radford et al., 2019). We use hidden dimension 784 such that we can evenly embed units in a 28 × 28 grid. The model has 12 Transformer blocks, each with 16 attention heads and a GELU activation function. We train our models on a randomly sampled 10B-token subset of the FineWeb-Edu dataset. The task loss is cross-entropy on next-word prediction. We use batch size 48 and block size 1024. For the spatial loss, we set $\alpha_k = 2.5$ across all layers [4] and operationalize the inverse distance vector $\mathbf{d}$ with the $\ell^\infty$ norm. For each batch, we average the spatial loss across 5 randomly selected neighborhoods, each of $\ell^\infty$ radius 5. This allows us to compute the loss more efficiently without significant performance drops. We train both a topographic model and a non-topographic baseline, where $\alpha_k = 0$ and all other hyperparameters remain the same. [5] We trained both models with early stopping after three consecutive increases in validation loss. At the end of training, the topographic model achieved a validation task loss of 3.075 and a spatial loss of 0.108 (summed across layers), while the non-topographic model achieved validation loss 2.966. Models were trained for 5 days on 4x NVIDIA 80GB A100s. In all analyses below, we compare TopoLM to BinHuraib et al. (2024)'s pre-trained Topoformer-BERT, a BERT-style model (Devlin et al., 2019) with local connectivity trained on the BookCorpus dataset (Zhu et al., 2015). [6] Note critically that Topoformer-BERT is a _baseline_, but not a control—it is trained on a much smaller corpus, has only one attention head per layer, and is bidirectional.

3 This could be controlled for using many alternative methods, e.g., a Gaussian smoothing kernel. However, we choose to use neighborhood-level approximations for simplicity.

4 We chose this value of $\alpha$ after extensive hyperparameter search. In particular, lower values of $\alpha$ do not adequately encourage the development of topography, while greater values impede task performance and the development of meaningful representations.

5 After hyperparameter tuning, we optimize using AdamW with $\beta_1 = 0.9$, $\beta_2 = 0.95$ and learning rate $6 \times 10^{-4}$, scheduled with warmup and cosine decay. We use weight decay 0.1, gradient clipping at 1.0, and do not use dropout. Neighborhood size and number of neighborhoods were also determined via hyperparameter search; note, however, that we empirically observed little effect of neighborhood size and number of neighborhoods on task performance or on our topographic metrics.
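To make Eqs. (1) and (2) concrete, below is a minimal PyTorch sketch of the neighborhood-based spatial loss. The function name, tensor shapes, and sampling details are our own illustrative choices, not the authors' released implementation.

```python
import torch

def spatial_loss(acts: torch.Tensor, coords: torch.Tensor,
                 num_neighborhoods: int = 5, radius: int = 5) -> torch.Tensor:
    """Neighborhood approximation of SL_k = 0.5 * (1 - corr(r_k, d_k)).

    acts:   (batch, N) unit activations of one layer on the input batch.
    coords: (N, 2) integer grid positions of the N units.
    """
    losses = []
    for _ in range(num_neighborhoods):
        # pick a random center unit and gather its l-inf neighborhood
        center = coords[torch.randint(coords.shape[0], (1,)).item()]
        mask = (coords - center).abs().max(dim=1).values <= radius
        a, pos = acts[:, mask], coords[mask].float()
        n = int(mask.sum())

        # r: pairwise Pearson correlations between unit activations
        r = torch.corrcoef(a.T)

        # d: pairwise inverse distances, 1 / (dist(i, j) + 1), l-inf metric
        dist = (pos[:, None, :] - pos[None, :, :]).abs().amax(-1)
        d = 1.0 / (dist + 1.0)

        iu = torch.triu_indices(n, n, offset=1)  # unique unit pairs
        rd = torch.stack([r[iu[0], iu[1]], d[iu[0], iu[1]]])

        # SL = 0.5 * (1 - corr(r, d)) lies in [0, 1]
        losses.append(0.5 * (1.0 - torch.corrcoef(rd)[0, 1]))
    return torch.stack(losses).mean()

# combined objective of Eq. (2): loss = task_loss + sum_k alpha_k * SL_k
```

In training, this loss would be evaluated at the output of each attention and MLP layer and added to the cross-entropy task loss with weight $\alpha_k$.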
6 Topoformer-BERT has 16 layers, each with a single attention head, with topographic constraints applied both at the output of each attention layer and after the key matrix product; we therefore compare TopoLM to outputs at these levels. We refer to BinHuraib et al. (2024) for more details.

Figure 2: **Brain-like response profiles across the core language system. (a)** Applying a functional localizer (Fedorenko et al., 2010), we isolate the core language system of TopoLM and find clear brain-like spatial organization (for brevity, we only show Transformer blocks 5-12 here). Response profiles across individual language-selective clusters (shown in yellow) in TopoLM are similar to one another, consistent with **(b)** the language system in human cortex (Fedorenko et al., 2024). **(c)** Across the entire core language system, TopoLM (blue) _mostly_ matches the neural data (green), but not exactly; however, the non-topographic baseline model (orange) fails to capture the neural patterns as well.

**Readout Sampling.** Due to the coarse spatial sampling in fMRI neuroimaging work, voxels contain the aggregated response of a large population of neurons (Kriegeskorte et al., 2010, Figure 1C). In all following analyses, we thus apply a simulated version of fMRI readout sampling to model activations, consisting of smoothing with a Gaussian kernel, to imitate the locally aggregated responses of fMRI voxels. Importantly, we do so before computing selectivity based on these activations, and thus do not apply readout sampling to the functional selectivity maps directly. We set unit distance 1.0 mm and FWHM 2.0 mm.

4 SPATIO-FUNCTIONAL ORGANIZATION OF THE CORE LANGUAGE SYSTEM

Language processing in the brain engages a set of left-lateralized frontal and temporal brain regions. These areas are typically referred to as the 'core language system' (Fedorenko et al., 2010) and respond selectively to linguistic input in contrast to non-linguistic stimuli (see Fedorenko et al., 2024, for an overview). Due to anatomical differences between individuals, the language system is defined via a **functional localizer** that contrasts syntactically and semantically valid sentences against a perceptually matched control, such as strings of nonwords (Fedorenko et al., 2011). Within individuals, the core language system shows clear spatio-functional organization, wherein language-selective neurons cluster together across multiple cortical lobes. Anatomically distinct **subregions** of this system exhibit highly consistent response profiles to stimuli, suggesting that the system operates as a network (e.g., Fedorenko et al., 2011; Tuckute et al., 2024) (Figure 2B). Prior work on neural alignment in language models typically compares neural responses across the _entire_ core language system of the brain to model activations. The topographic organization of our model enables us to test for the emergence of a brain-like spatially organized core language system _in silico_.
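The fMRI-like readout described above amounts to Gaussian smoothing of each layer's activation map. A minimal sketch follows, assuming activations already arranged on the 28 × 28 grid; the stochastic Gaussian sampling of the simulated voxel response mentioned in Figure 1C is omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fmri_readout(act_map: np.ndarray, fwhm_mm: float = 2.0,
                 unit_dist_mm: float = 1.0) -> np.ndarray:
    """Smooth a 2-D unit-activation map to imitate fMRI voxel aggregation."""
    # a Gaussian's FWHM relates to its standard deviation by
    # FWHM = 2 * sqrt(2 * ln 2) * sigma
    sigma = (fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))) / unit_dist_mm
    return gaussian_filter(act_map, sigma=sigma)
```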
A successful spatio-functional alignment between brain and model would mean that (1) distinct language-selective clusters emerge in the model, (2) these clusters all have consistent response profiles, similar to the consistent response profiles across sub-regions of the 'core language system' in humans [7], and (3) the response profiles match the activity profiles in the brain (_sentences_ > {_unconnected words, jabberwocky_} > _nonwords_; AlKhamissi et al., 2024).

**Methods.** To isolate the core language system in TopoLM, we use the same localization stimuli as Fedorenko et al. (2010), which consist of a set of 160 sentences and 160 strings of non-words, all 12 words each. After passing these through the model, at each attention and MLP layer we run a _t_-test across the activations of all layer units. We then define the core language system as all units that are significantly language-selective (_p_ < 0.05 after correction for multiple comparisons across all layers using the false discovery rate (FDR) (Benjamini & Hochberg, 1995)). We then define language-selective clusters using an evolutionary clustering algorithm applied to each contrast map. In each layer, we begin with the most selective unit by _t_-value, and then repeatedly add the most selective neighboring unit to the cluster, until we hit a pre-determined _p_-value threshold associated with the unit's selectivity (here, _p_(FDR) < 0.05). We repeat this process, searching for new clusters until all units in the layer are exhausted. We discard clusters with fewer than 10 units. Within each cluster, we measure responses to the same stimuli used in neuroscience experiments (Figure 2B; Fedorenko et al., 2011): _sentences_ (indexing syntactic and lexical information), _unconnected (scrambled) words_ (lexical but not syntactic), _Jabberwocky sentences_, which are well-formed sentences where content words are replaced by phonotactically plausible non-words (syntactic but not lexical), and _unconnected (scrambled) non-words_ (neither syntactic nor lexical). Note that these stimuli are distinct from those used for localization. We measure the model 'response' as the mean absolute activation across all units in a cluster.

**Results.** TopoLM exhibits clear brain-like spatial organization of the language network, such that (1) multiple language-selective clusters emerge across the topographic tissue (Figure 2A and Figure 8), (2) across most clusters, the response profiles are consistent with one another (Figure 9), and (3) response profiles _mostly_ match the ones in the brain. The response profiles are not a perfect match to the brain data—while sentences have higher activations than Jabberwocky and nonword stimuli, they do not have higher activation than unconnected words as in the brain data. However, looking across the entire language-selective network, the response profile of the non-topographic baseline model similarly fails to capture the neural response profile (Figure 2C), suggesting a general shortcoming of the base transformer model, rather than a weakness of topography. We similarly find evidence for language-selective clustering in Topoformer-BERT (see Figure 13).

7 "The language areas all show a similar response profile (despite slight apparent differences, no region-by-condition interactions come out as reliable, even in well-powered studies)." Fedorenko et al. (2024)
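The localizer and cluster-growing procedure above can be sketched in a few lines; the helper names and the exact neighbor rule (ℓ∞ radius 1 on the grid) are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import fdrcorrection

def language_selective_units(acts_sent, acts_nonw, alpha=0.05):
    """Per-unit t-test (sentences vs. non-words), FDR-corrected.

    acts_sent: (n_sentences, N) activations; acts_nonw: (n_nonwords, N).
    Returns a boolean selectivity mask and t-values, both of length N.
    """
    tvals, pvals = ttest_ind(acts_sent, acts_nonw, axis=0)
    significant, _ = fdrcorrection(pvals, alpha=alpha)
    return significant, tvals

def grow_clusters(tvals, significant, coords, min_size=10):
    """Seed at the most selective unit, then repeatedly absorb the most
    selective significant neighbor until none remain; discard small clusters."""
    remaining = set(np.flatnonzero(significant))
    clusters = []
    while remaining:
        seed = max(remaining, key=lambda i: tvals[i])
        cluster = {seed}
        remaining.discard(seed)
        while True:
            nbrs = {j for j in remaining
                    if any(np.abs(coords[j] - coords[i]).max() <= 1
                           for i in cluster)}
            if not nbrs:
                break
            best = max(nbrs, key=lambda j: tvals[j])
            cluster.add(best)
            remaining.discard(best)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```

Because the significance mask already enforces the _p_(FDR) < 0.05 threshold, cluster growth stops in this sketch once no significant neighbor remains, which approximates the stopping rule described above.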
Figure 3: **Brain-like verb- and noun-selective clusters in TopoLM. (a)** fMRI data from Hauptman et al. (2024) point to verb- (red) and noun-selective (blue) regions in the human cortex with strong clustering (Moran's _I_ = 0.96). **(b)** Quantification of clustering. Relative to high clustering in the brain (green dashed line), the non-topographic baseline shows limited clustering (orange). The topographic model shows moderate clustering at the unit level (light blue) and strong clustering when simulating fMRI sampling (dark blue). On stimuli from Moseley & Pulvermüller (2014) (fMRI data not available) we find qualitatively similar results. **(c)** Exemplary model maps (last MLP layer) showing the verb-/noun contrast (red-blue) in response to stimuli from Hauptman et al. (2024). The non-topographic baseline shows no clustering, while the topographic model develops verb- and noun-selective clusters.

5 SPATIO-FUNCTIONAL ORGANIZATION OF SEMANTIC CLUSTERS

Beyond selectivity for language in general, experimental evidence supports the existence of cortical noun- and verb-selective clusters across human subjects during processing of verbal and nominal stimuli in auditory (Elli et al., 2019; Hauptman et al., 2024), visual (Moseley & Pulvermüller, 2014), and speech production (Shapiro et al., 2006) tasks. Here, we compare spatial activation patterns predicted by TopoLM to two groups of fMRI studies: Elli et al. (2019) / Hauptman et al. (2024), who use the same set of stimuli and the same experimental setup (Figure 3 and Appendix Figures 7 and 10), and Moseley & Pulvermüller (2014) (Figure 4). We evaluate TopoLM predictions quantitatively using Hauptman et al. (2024)'s fMRI data [8] and perform qualitative evaluations where no neural data was available (Elli et al., 2019; Moseley & Pulvermüller, 2014).

5.1 CLUSTERS SELECTIVE FOR VERBS AND NOUNS

**Neuroimaging Study.** Using fMRI data from Hauptman et al. (2024) (Appendix B), we find verb- and noun-selective clusters in the left hemisphere (Figure 3A and Appendix Figure 7), thus replicating their results. To quantify the 'degree' of clustering in these maps, we use Moran's _I_ with Queen contiguity (i.e., ℓ∞ radius, following the distance metric used to train TopoLM), a common measure of spatial autocorrelation, ranging from −1 to 1 (see Appendix A for details). The group-level effects indicate strong clustering (_I_ = 0.96, _p_ < 0.001).

8 [Available on OPENICPSR at https://doi.org/10.3886/E198163V3.](https://doi.org/10.3886/E198163V3)

Figure 4: **Verb- and noun-selectivity in response to concrete and abstract stimuli. (a)** Using stimuli from Moseley & Pulvermüller (2014), we find verb-/noun-selective clusters (verb: red / noun: blue) emerging in TopoLM for concrete (solid lines), but not for abstract words (dashed lines), thus replicating their results. **(b)** We obtain strong verb-/noun-clustering when concrete words are used to compute the verb-/noun-contrast (light blue, solid lines; _I_ = 0.80), but substantially lower clustering for abstract words (_I_ = 0.23, light blue, dashed lines; _t_-test: _p_ < 0.001). However, we do not find evidence for such a difference in verb-/noun-clustering when using the non-topographic control (_I_ = 0.11 vs. 0.12, _t_-test: _p_ > 0.05, orange). Results do not change qualitatively when fMRI readout sampling is performed before computing contrasts and clustering (dark blue). In all presented cases, spatial autocorrelation is computed on un-thresholded maps (for all layers, see Figure 11). Following neuroimaging conventions on defining category-selective cortical clusters, we show model maps thresholded at _p_(FDR) < 0.05.
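Moran's _I_ with Queen contiguity, used throughout these clustering analyses, can be computed directly on a 2-D contrast map. A minimal NumPy sketch (our own, unoptimized implementation):

```python
import numpy as np

def morans_i(grid: np.ndarray) -> float:
    """Moran's I on a 2-D contrast map with Queen contiguity
    (all eight grid neighbors, i.e. l-inf radius 1) as binary weights."""
    h, w = grid.shape
    z = grid.ravel() - grid.mean()  # deviations from the mean
    n = z.size

    num = 0.0   # sum over neighbor pairs of z_i * z_j
    wsum = 0.0  # total weight W
    for i in range(h):
        for j in range(w):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        num += z[i * w + j] * z[ni * w + nj]
                        wsum += 1.0
    # I = (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2
    return float((n / wsum) * num / (z ** 2).sum())
```

Values near 1 indicate strong spatial clustering of same-sign contrast values, near 0 no spatial structure, and negative values a checkerboard-like anticorrelation.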
**Clustering in TopoLM.** To investigate whether similar verb- and noun-selective clusters emerge in TopoLM, we extracted the model activations in response to the same stimuli as Elli et al. (2019) and Hauptman et al. (2024). We find that verb- and noun-selective clustering emerges across layers in TopoLM after contrasting activations to verb and noun stimuli (verb: red / noun: blue, Figure 3A, Figure 10). Similar to our analysis of the neuroimaging data, we quantified this observation using Moran's _I_. The non-topographic baseline model yields a low degree of clustering (_I_ = 0.11). Contrast maps in TopoLM show a strong degree of clustering (_I_ = 0.48), which is further increased when applying fMRI-like readout sampling (_I_ = 0.81, Figure 3B). Applying the same sampling to the non-topographic model also increases clustering, but it remains substantially less brain-like than the topographic model (_I_ = 0.60, Appendix Figure 10).

**Clustering in Topoformer-BERT.** Applying the same procedure to Topoformer-BERT, we find no evidence for noun-verb-selective clustering, with few units coming out significant in the noun-verb contrast (10.61% of units; see Figure 13). However, we do find that, before thresholding for significance (_p_ < 0.05), the model does exhibit a high degree of clustering competitive with TopoLM (Moran's _I_ = 0.66 before sampling, 0.85 after sampling). In other words, though the local connectivity constraint induces clustering in the model, these clusters do not match the spatio-functional organization of the brain. This impression is confirmed by additional analysis using a variant of Moran's _I_ that only considers units with significant _t_-values (Figure 14).

5.2 CLUSTERS SELECTIVE FOR CONCRETE, BUT NOT ABSTRACT VERB-NOUN CONTRASTS

**Neuroimaging Study.** Moseley & Pulvermüller (2014) focus on how cortical noun-verb selectivity relates to semantics, in particular focusing on _concreteness_. Examining specific anatomically defined brain regions in fMRI, this study finds evidence for selectivity between concrete verbs and concrete nouns; yet, critically, there is no evidence for responses to abstract words from the same categories. Here, we investigate whether TopoLM replicates these findings—both the existence of spatially organized noun-verb selectivity in response to concrete words and the _non_-existence of this selectivity in response to abstract words (Figure 4).

**Clustering in TopoLM.** We presented the original experimental stimuli to TopoLM and computed contrasts between abstract and concrete verbs and nouns (see Appendix C for stimulus details).
Rather than comparing response profiles in anatomically defined subregions as in Moseley & Pulvermüller (2014) (since the model lacks defined 'anatomical' regions), we explore whether lexical-class-selective clustering emerges between concrete and abstract words. Consistent with the brain data, we find clustering of verb- and noun-selective model units for concrete stimuli, but only very weak or no clustering for the same contrast on abstract stimuli. We again quantified this impression using Moran's _I_: concrete words yield substantially higher verb-/noun-clustering than abstract words (_I_ = 0.80 vs. _I_ = 0.23 in unthresholded maps for TopoLM, Figure 4, light blue). We obtain qualitatively similar results when simulating the fMRI sampling process before computing contrasts (Figure 4, dark blue). Additionally, we observed overall low degrees of clustering in non-topographic baseline models and no evidence for a difference in the degree of clustering when concrete vs. abstract words were used for computing the verb-/noun-contrast (Figure 4, orange).

**Clustering in Topoformer-BERT.** We again apply the same procedure to Topoformer-BERT and find no evidence for noun-verb-selective clustering, with no units coming out significant in the noun-verb contrast (see Figure 13). Again, before thresholding, we find high clustering (concrete-concrete: Moran's _I_ = 0.60 before sampling, _I_ = 0.84 after; abstract-abstract: _I_ = 0.61 before sampling, _I_ = 0.84 after); yet since none of these units come out significant, we fail to find evidence for brain-like spatio-functional organization in Topoformer-BERT (for details, see Figure 14).

6 DOWNSTREAM PERFORMANCE AND BRAIN ALIGNMENT

Some topographic vision models sacrifice task performance for the sake of spatial

Idea Generation Category: 1 (Cross-Domain Application)
id: aWXnKanInf
# ENERGY-BASED BACKDOOR DEFENSE AGAINST FEDERATED GRAPH LEARNING

**Guancheng Wan**1† **Zitong Shi**1† **Wenke Huang**1† **Guibin Zhang**3 **Dacheng Tao**4 **Mang Ye**1,2∗

1 National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University
2 Taikang Center for Life and Medical Sciences, Wuhan University
3 Tongji University
4 Generative AI Lab, College of Computing and Data Science, Nanyang Technological University

ABSTRACT

Federated Graph Learning is rapidly evolving as a privacy-preserving collaborative approach. However, backdoor attacks increasingly undermine federated systems by injecting carefully designed triggers that lead the model to make incorrect predictions. Trigger structures and injection locations in Federated Graph Learning are more diverse, making traditional federated defense methods less effective. In our work, we propose an effective **Fed**erated Graph Backdoor Defense using **T**opological **G**raph **E**nergy (**FedTGE**). At the client level, it injects distribution knowledge into the local model, assigning low energy to benign samples and high energy to the constructed malicious substitutes, and selects benign clients through clustering. At the server level, the energy elements uploaded by each client are treated as new nodes to construct a global energy graph for energy propagation, making the selected clients' energy elements more similar and further adjusting the aggregation weights. Our method can handle high data heterogeneity, does not require a validation dataset, and is effective under both small and large malicious proportions. Extensive results on various settings of federated graph scenarios under backdoor attacks validate the effectiveness of this approach. The [code is available at https://github.com/ZitongShi/fedTGE.](https://github.com/ZitongShi/fedTGE)

1 INTRODUCTION

Federated Learning (FL) (Yang et al., 2019a; Mammen, 2021) has rapidly emerged as a significant research area in decentralized machine learning. This methodology allows multiple clients to collaboratively train a shared global model while preserving the privacy of sensitive data, thus eliminating the need to aggregate distributed data and ensuring adherence to privacy protocols (Zhang et al., 2021a; Kairouz et al., 2021). Consequently, FL presents a promising solution for training Graph Neural Networks (GNNs) on isolated graph data. Moreover, some existing work has utilized FL to train GNNs (Ju et al., 2024a; Kipf & Welling, 2016; Veličković et al., 2017), which we denote as Federated Graph Learning (FGL). While this distributed nature brings numerous benefits (Gilmer et al., 2017; Bruna et al., 2013), it also introduces additional vulnerabilities, particularly in the form of _**backdoor attacks**_ from malicious participants (Chen et al., 2017; Li et al., 2022). These attacks involve injecting harmful data or models into the training process, embedding hidden behaviors that can trigger incorrect model outputs under specific conditions. These attacks aim to cause local models to learn incorrect information and activate the backdoor at critical times, resulting in erroneous predictions. With the objective of better defending against these malicious attacks, defense methods against backdoor attacks in federated learning have been widely studied (Guerraoui et al., 2018; Yin et al., 2018a; Pillutla et al., 2022).
Certain methods exclude outlier updates based on the statistical characteristics of model outputs. Some approaches examine pairwise distances among local models, or the distances between local models and the global model, to mitigate the influence of anomalous clients (Shejwalkar & Houmansadr, 2021). However, these methods often struggle to perform effectively in FGL environments, where graph data typically exhibit non-iid characteristics and complex topological structures. Some byzantine-robust federated learning methods require a clean and representative validation dataset (Cao et al., 2021). Consequently, they are less effective in scenarios where collecting a validation dataset is challenging, such as in medical (Li et al., 2019b) and financial (Yang et al., 2019b) settings. Although recent studies (Huang et al., 2024b) have explored graph classification, there remains a significant gap in backdoor defense for node classification.

∗ Corresponding author, † denotes equal contributions; each reserves the right to be listed first.

Based on the aforementioned discussion, we review the challenges existing in FGL under backdoor attacks. First of all, to address the high heterogeneity of the data, some methods choose to monitor the similarity of the updates of each client to adjust their contribution to the global model (Fung et al., 2018; Pillutla et al., 2022). However, the inherent topological complexity of graph data allows the trigger location and shape to be more arbitrary. A trigger can be inserted at any position in the graph, leading to non-aligned updates, which hinders the effective identification of malicious clients and inevitably affects defense performance, as demonstrated in Table 1. Based on this observation, we raise the question: 1) _How can we design a backdoor defense method that can address scenarios where triggers exhibit complex topological characteristics?_

Figure 1: Problem illustration. We describe the challenges FGL encounters under backdoor attacks: **I)** Triggers vary in size, shape, and location of injection, making them more hidden. **II)** The structural heterogeneity introduced by FGL makes distinguishing between heterogeneous and malicious entities more difficult.

Secondly, some methods attempt to simply calculate the distances between clients or the similarity of certain distributions, without any additional processing to differentiate between malicious and benign clients (Cao et al., 2021; Huang et al., 2023a), or filter out benign clients based on outlier detection (Shejwalkar & Houmansadr, 2021). However, these methods can easily misclassify perturbations caused by heterogeneity as malicious outliers due to their incomplete modeling of the data distribution. The additional structural heterogeneity introduced by FGL further complicates the ability to capture distributional information, inadvertently providing additional cover for backdoor attacks. This ultimately hinders the ability to effectively distinguish between malicious and benign clients in the metrics used for measurement, leading to the question: 2) _How can we learn structural distributions in a fine-grained manner and differentiate them from backdoor attacks to better filter out malicious clients?_

To address the two mentioned issues, we turn to energy-based models and explore their potential. Energy is an unnormalized probability likelihood (Song & Kingma, 2021), offering a flexible modeling approach that is not constrained by normalization.
The strength of energy-based models lies in their ability to be integrated with virtually any model architecture. In our work, we combine energy with a GCN to form an Energy-based GCN, preserving the GCN's capability to capture complex structural information while benefiting from the flexibility of energy-based modeling. We introduce Topological Energy Client Clustering (TECC) to solve problem 1). TECC quantifies differences in client data distributions. Clients with significant energy distribution differences are marked as malicious and excluded from the aggregation process. We enhance local models by incorporating distribution awareness, combining their predictive capabilities with the ability to distinguish data energy distributions. Specifically, we add a final step in the training process to ensure the model assigns lower energy to benign samples. However, indiscriminately lowering sample energy can lead to a trivial solution. Therefore, we construct perturbed samples to simulate malicious triggers and raise the energy of these samples, ultimately decoupling the distributions of benign and malicious samples. We then cluster the client energy distributions, identifying clients with significantly different energy distributions as potentially malicious.

To solve the second issue, further decoupling the energy distributions of malicious and benign clients, we propose Topological Energy Similarity Propagation (TESP). We collect the energy distributions of each client and establish an energy graph based on the similarity of the energy distributions uploaded by the clients. Specifically, we consider the energy of samples uploaded by each client as its energy element. These are treated as new nodes for constructing the global energy graph. We then establish edges between highly similar energy elements to complete the construction of the energy graph. We enhance the similarity of energy distributions among clients that have established edge indices, thereby increasing the distinction between these clients and unselected malicious ones. Concurrently, energy elements with fewer established indices are considered _outliers_ and are assigned lower transmission and aggregation weights. This energy adjustment in turn improves the clustering effectiveness of TECC. In synergy, this framework enables the model to learn effective topological distributions while achieving fine-grained decoupling of malicious and benign clients. We refer to the combination of these two strategies as **FedTGE**, an effective **Fed**erated Graph Backdoor Defense via **T**opological **G**raph **E**nergy. Our principal contributions are summarized as follows.

- We study a challenging problem: defending against backdoor attacks in Federated Graph Learning. Our focus is on mitigating these attacks while overcoming several assumptions made by existing methods, such as data homogeneity, the availability of validated samples, and the presence of a moderate proportion of malicious clients.
- We propose FedTGE, an innovative approach that addresses backdoor attacks characterized by complex topological triggers and highly arbitrary injection positions in FGL from the energy perspective. Our method enables clients to model the energy of graph structures at a fine-grained level, assigning higher aggregation weights to clients with high similarity in their energy distributions. This enhances the robustness of graph backdoor defenses.
- We conducted experiments on five mainstream datasets under both IID and Non-IID scenarios, as well as with varying proportions of malicious clients. The results demonstrate that our approach outperforms the current state-of-the-art methods from traditional FL.

2 RELATED WORK

2.1 FEDERATED GRAPH LEARNING

Federated Graph Learning (FGL) (Fu et al., 2022; Huang et al., 2023b; Li et al., 2023; Wan et al., 2024; Cai et al., 2024; Li et al., 2025; Fu et al., 2025) combines the characteristics of FL (Ye et al., 2023; Huang et al., 2024a; Liao et al., 2024) and GNNs (Chen et al., 2023b; Yin et al., 2024; Ju et al., 2024b), enabling collaborative learning on graph-structured data while preserving data privacy. In recent years, extensive research has focused on improving the generalization of the global model or obtaining personalized models that can span different graph domains (Wu et al., 2020; Chen et al., 2022; 2023a; 2024). However, the inherent heterogeneity and the complex, dynamic topology of graph data, along with the distributed nature of FGL, create significant vulnerabilities for backdoor attacks. Although there has been extensive work on effectively backdooring GNNs (Xi et al., 2021; Zhang et al., 2021c; Sun et al., 2020), there is a scarcity of research on backdoor defense paradigms specifically suited for FGL. To the best of our knowledge, we are the first to delve into backdoor defense in FGL, striving to create relevant benchmarks and contribute to this field.

2.2 BACKDOOR DEFENSE IN FEDERATED LEARNING

Malicious backdoor attackers pose a serious threat to federated systems (Huang et al., 2024b). To tackle the problem, researchers have proposed numerous defense methods, including vector filtering techniques such as Bulyan (Guerraoui et al., 2018), RFA (Pillutla et al., 2022), and DnC (Shejwalkar & Houmansadr, 2021). Additionally, some defense methods utilize proxy data to further leverage server knowledge to defend against attacks, such as FLTrust (Cao et al., 2021) and Sageflow (Park et al., 2021). While most of these approaches can ensure successful defense under certain assumptions, they fail to provide stable defense performance in scenarios with non-iid data, difficulty in collecting proxy datasets, or a large number of attackers. Compared to traditional FL, FGL backdoor attacks often have triggers with higher randomness and greater stealth due to the more complex topological structure of graph data, making FGL systems more susceptible to attacks. While an excellent defense study has been proposed in FGL (Yang et al., 2024), it primarily focuses on graph classification tasks and is not directly applicable to node classification tasks. We propose using energy as a bridge to address the aforementioned issues and fill the gap in backdoor defense for node classification.

2.3 ENERGY-BASED MODEL

The Energy-Based Model (EBM) is a generative model that directly models the unnormalized probability density function of the underlying data distribution. EBMs represent the probability distribution of data by defining an energy function. A key feature of EBMs is their flexibility: the energy function can be implemented using various forms of neural networks without strict structural constraints. This allows EBMs to be adapted to a wide range of data types and tasks, including images, videos, and text (Deng et al., 2020; Arbel et al., 2020).
In the domain of graphs, EBMs have been applied to tasks such as substructure-preserving molecule design, molecular graph generation, and scene graph generation (Roy et al., 2023; Wu et al., 2023). The success of EBMs in learning high-dimensional and complex molecular structures, such as proteins, underscores their powerful modeling capabilities (Cao & Shen, 2020; Xiao et al., 2023). In our work, we develop an Energy-Based GCN on the client side to model the energy of the entire graph. Benign samples that conform to the data distribution are assigned lower energy values. These energy distributions are subsequently uploaded, and on the server side, we further align the energy distributions among the selected clients. This alignment increases the separation between the energy elements of benign and malicious clients, thereby establishing a robust defense against malicious attacks.

3 PRELIMINARY

3.1 FEDERATED GRAPH LEARNING

We follow the general paradigm of federated graph learning, where multiple clients collaboratively train a shared global model. Consider $K$ clients, indexed by $k$ and defined as $\mathcal{C} = \{c_k\}_{k=1}^{K}$. At the beginning of the $t^{th}$ communication round, we denote the current global model as $M^t$ with parameters $w^t$, and the local model as $M_k^t$ with corresponding parameters $w_k^t$. Each client $c_k$ possesses private data $\mathcal{G}_k = (\mathcal{V}_k, \mathcal{E}_k)$, where $\mathcal{V}_k = \{v_i\}_{i=1}^{N_k}$ represents the set of nodes containing $|\mathcal{V}_k| = N_k$ nodes, and $\mathcal{E}_k = \{e_{mn}\}_{m,n}$ denotes the set of edges. The adjacency matrix of $\mathcal{G}_k$ is defined as $\mathbf{A}_k = \{A_{ij}\}_{i,j}$, where $A_{ij} = 1$ if there is an edge between nodes $v_i$ and $v_j$, and $A_{ij} = 0$ otherwise. Similarly, $\mathbf{X}_k$ represents the node features, and $\mathbf{Y}_k$ represents the corresponding label set.

3.2 ENERGY-BASED MODEL

Consider a sample $\mathbf{x} \in \mathbb{R}^D$. The energy-based model builds a function $E(\mathbf{x}, y): \mathbb{R}^D \to \mathbb{R}$ that maps input instances with given labels to a scalar value, known as _energy_. The Boltzmann distribution expressed in terms of energy is

$$p(y \mid \mathbf{x}) = \frac{\exp(-E(\mathbf{x}, y))}{\sum_{y^*} \exp(-E(\mathbf{x}, y^*))}, \qquad (1)$$

where $\sum_{y^*} \exp(-E(\mathbf{x}, y^*))$ is the partition function. Observe that Eq. (1) is very similar to a discriminative neural classifier. To relate the two, we set $E(\mathbf{x}, y) = -f(\mathbf{x})[y]$, where $f(\mathbf{x})[y]$ is the logit output of the model, and the energy function $E(\mathbf{x})$ can be formulated as follows:

$$E(\mathbf{x}) = -\log \sum_{y^*} \exp(-E(\mathbf{x}, y^*)) = -\log \sum_{y} \exp(f(\mathbf{x})[y]). \qquad (2)$$

4 METHODOLOGY

4.1 OVERVIEW

The overall framework of FedTGE is illustrated in Figure 2, and its algorithmic pseudocode is presented in Algorithm 1. At the client level, we inject structural energy awareness into the local models, lowering the energy of benign samples and raising that of malicious samples, respectively. At the server level, we cluster based on the differences in energy elements across clients to identify benign clusters. From a global perspective, we further construct an energy graph to enhance the similarity of the energy elements of the selected clients and adjust the aggregation weights accordingly.
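In code, the energy of Eq. (2) is a single log-sum-exp over classifier logits; a minimal sketch (the function name is ours):

```python
import torch

def free_energy(logits: torch.Tensor) -> torch.Tensor:
    """E(x) = -log sum_y exp(f(x)[y]), computed per row of logits.

    For a GCN, `logits` would be the (num_nodes, num_classes) output,
    so this yields one energy value per node; lower energy means the
    node looks more typical under the learned distribution.
    """
    return -torch.logsumexp(logits, dim=-1)
```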
4.2 TOPOLOGICAL ENERGY DIFFERENCE CLUSTERING

**Motivation.** In FGL backdoor attacks, the intricate topological structure of the graph data introduces considerable uncertainty in the methods used for trigger injection. This uncertainty is evident in both the selection of injection positions and the diverse shapes that the triggers can assume. Although there are many effective defense paradigms in traditional FL, they often rely on impractical assumptions and are unable to handle triggers with complex topologies and diverse injection positions.

Figure 2: Architecture illustration of FedTGE. We use blue and red arrows to represent the two components of our method, TEDC and TESP, respectively. Best viewed in color. Zoom in for details.

**Meta Energy.** We develop an Energy-Based GCN on the client side to model the energy of the entire graph, enhancing the model by injecting distributional knowledge of the samples. This enables the network to perform node classification while also distinguishing the meta energy of benign and malicious samples. First, we construct an energy-based model on top of the trained classifier:

$$E_\theta(x) = -\log \sum_{y} \exp(f_\theta(\mathbf{x})[y]). \qquad (3)$$

For a node $v_i$, we define its output in the energy-based model simply as its meta energy: $M_e(v_i) = E_\theta(v_i)$. The meta energy represents the unnormalized likelihood of the sample point. Lower energy corresponds to higher likelihood and consequently a greater probability of the sample being benign. We then introduce the concept of perturbed meta energy, $\tilde{M}_e(\tilde{v}_i) = E_\theta(\tilde{v}_i^{adv})$, where $\tilde{v}_i^{adv}$ represents the perturbed version of $v_i$. Specifically, $\tilde{v}_i^{adv}$ is generated by arbitrarily adding or removing edges connected to $v_i$ and perturbing both its features and those of its neighbors. The objective is to inject a meta energy distribution into the original model by lowering $M_e(v_i)$ and raising $\tilde{M}_e(\tilde{v}_i^{adv})$. Let $d_i$ represent the degree of $v_i$, and $p$ denote the perturbation percentage. The generation of $\mathbf{X}^{adv}$ is achieved by perturbing the features of $\mathbf{X}$. Additionally, the adjacency matrix $\mathbf{A}^{adv}$ for $\tilde{v}_i^{adv}$ can be formulated as follows:

$$\begin{cases} A^{adv}_{it} = 1, & t \neq i \ \text{where } A_{it} = 0, \\ A^{adv}_{ij} = 0, & j \neq i \ \text{where } A_{ij} = 1. \end{cases} \qquad (4)$$

**Meta Energy Calibration Objective.** The density function of the energy-based model is given by $p_\theta(\mathbf{x}) = \exp(-E_\theta(\mathbf{x}))/Z_\theta$. Directly maximizing $p_\theta(v_i)$ to minimize $E_\theta(v_i)$ seems like a straightforward approach, but the normalization partition function $Z_\theta = \int \exp(-E_\theta(x))\,dx$ is typically very difficult to compute. Therefore, we consider using score matching to train the EBM. Score matching is a technique for training EBMs by aligning the gradient of the log probability density function. By converting the distribution into its equivalent score, we can train EBMs more efficiently, as $\nabla_x \log p_\theta(x) = -\nabla_x E_\theta(x)$ eliminates the need for the normalization constant $Z_\theta$. However, traditional score matching only focuses on learning the data distribution and does not address the alignment of sample energy. This limitation, combined with the challenges posed by the highly discrete and topological nature of graph data, makes designing an effective score all the more critical.
**Definition 4.1 (Meta Energy Score).** _For $v_i$, we define the score assigned by the energy model as the meta energy score, which we use as a gradient surrogate in the discrete space:_

$$M^s_\theta(v_i) = \nabla E_\theta(v_i) = \left( \frac{M_e(v_1) - \tilde{M}_e(\tilde{v}_1)}{M_e(v_1)}, \ \cdots, \ \frac{M_e(v_H) - \tilde{M}_e(\tilde{v}_H)}{M_e(v_H)} \right). \qquad (5)$$

In fact, Eq. (5) is equivalent to the following equation:

$$M^s_\theta(v_i) = \nabla E_\theta(v_i) = \left( \frac{\log p_\theta(v_1) - \log p_\theta(\tilde{v}_1)}{\log p_\theta(v_1)}, \ \cdots, \ \frac{\log p_\theta(v_H) - \log p_\theta(\tilde{v}_H)}{\log p_\theta(v_H)} \right). \qquad (6)$$

Theoretically, the number of possible $\tilde{v}$ generated in this manner is infinite. We denote by $H$ the number of $\tilde{v}$ participating in the score calculation. This implies that using the gradient surrogate $\nabla E_\theta$ enables the model to learn the energy density distribution of the real data $p_{\text{data}}(v_i)$. With an effective score proxy for the gradient in place, we still follow the traditional score matching objective:

$$D_F(p_{\text{data}}(\mathbf{x}) \,\|\, p_\theta(\mathbf{x})) = \mathbb{E}_{p_{\text{data}}(\mathbf{x})} \left[ \frac{1}{2} \left\| \nabla_{\mathbf{x}} \log p_{\text{data}}(\mathbf{x}) - \nabla_{\mathbf{x}} \log p_\theta(\mathbf{x}) \right\|^2 \right]. \qquad (7)$$

With the energy score surrogate, our optimization objective is formulated as:

$$D_F(p_{\text{data}}(v_i) \,\|\, p_\theta(v_i)) = \mathbb{E}_{p_{\text{data}}(v_i)} \left[ \frac{1}{2} \left\| M^s_{\text{data}}(v_i) - M^s_\theta(v_i) \right\|^2 \right]. \qquad (8)$$

However, since the $M^s_{\text{data}}(v_i)$ of the real data is unknown during the actual training of the model, we need to further optimize Eq. (8). Following (Hyvärinen & Dayan, 2005) and incorporating our gradient proxy while reducing computational complexity, we rewrite it as follows:

$$\mathcal{L}_{MEC} = \frac{1}{N} \sum_{i=1}^{N} \left[ \nabla_{v_i} M^s_\theta(v_i)^T \, \nabla_{v_i} M^s_\theta(v_i) + \frac{1}{2} \left\| M^s_\theta(v_i) \right\|^2 \right]. \qquad (9)$$

In the loss function, we smooth the model output and minimize $\nabla_{v_i} E_\theta(v_i)$. As demonstrated in Definition 4.1, we effectively increase $\tilde{M}_e(\tilde{v}_i)/M_e(v_i)$. This aligns perfectly with our goal of incorporating knowledge of the data distribution into the model, allowing us to assign lower energy to benign samples and higher energy to malicious ones.

**Energy Element Discrepancy Cluster.** We calculate the meta energy for each sample of each client and refer to the collection of meta energies of a client as its energy element set, denoted as $\mathbf{E}_k$. We mark clients whose energy elements differ significantly from those of other clients and have higher energy values as malicious, and exclude them from the aggregation process. To systematically identify these anomalies, we use unsupervised FINCH clustering to filter out malicious clients. A comparison with popular clustering methods is provided in Table 3. As an example with three malicious clients, the pseudocode for the algorithm is shown in Algorithm 1.
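A minimal sketch of this server-side selection step: clients are clustered on simple summary statistics of their uploaded energy elements, and the higher-energy cluster is dropped. The paper uses unsupervised FINCH clustering; two-cluster k-means is used here only as a stand-in, and the choice of mean/std features is our own.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_benign_clients(energy_elements, seed=0):
    """Flag clients whose energy elements deviate from the rest.

    energy_elements[k]: 1-D array of per-sample energies uploaded by client k.
    Returns the indices of clients assigned to the lower-energy cluster.
    """
    feats = np.array([[e.mean(), e.std()] for e in energy_elements])
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(feats)

    # the cluster with the higher mean energy is treated as malicious
    mean0 = feats[labels == 0, 0].mean()
    mean1 = feats[labels == 1, 0].mean()
    benign_label = 0 if mean0 <= mean1 else 1
    return [k for k, lbl in enumerate(labels) if lbl == benign_label]
```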
**Algorithm 1** FedTGE
**Input:** Communication rounds $T$, participant scale $K$, $k^{th}$ client private model $w_k$, and local data $\mathcal{G}_k$
**Output:** The final global model $M^T$
**for** $t = 1, 2, \cdots, T$ **do**
  _Client Side:_
  **for** $k = 1$ **to** $K$ **in parallel do**
    $f_k(\cdot) \leftarrow$ LocalUpdating($w^t, \mathcal{G}_k$)  // Original training strategy
    $f_k^t(\cdot) \leftarrow$ EnergyCalibrating($f_k(\cdot), \mathcal{G}_k$)  // Injecting distribution knowledge
    $\mathbf{E}_k^t = \{E(f_k^t(v_i))\}_{i=1}^{N_k}$  // Calculating energy elements
  _Server Side:_
  // Cluster and find the cluster with the smaller mean
  $\{\mathbf{E}_a^t, \mathbf{E}_b^t\}$ and $\{\mathbf{E}_k^t\}_{k \neq a,b}^{N}$ where $\mathrm{mean}(\mathbf{E}_a^t, \mathbf{E}_b^t) \geq \mathrm{mean}(\{\mathbf{E}_k^t\}_{k \neq a,b}^{N})$
  $\mathbf{S}^t = \{S_{mn}^t\}^N = \cos(\mathbf{E}_m^t, \mathbf{E}_n^t), \ m, n \neq a, b$  // Calculating energy element similarity
  $\mathcal{G}^e = (\mathcal{V}^e, \mathcal{E}^e) \leftarrow (\tau, \mathbf{S}^t)$  // Constructing energy graph
  $\mathbf{E}_k^{t*} \leftarrow (\beta_k^t, \{\mathbf{E}_k^t\}_{k \neq a,b}^{N})$  // Energy graph similarity propagation
  $I_k^t \leftarrow (\beta_k^t, \{\mathbf{E}_k^{t*}\}_{k \neq a,b}^{N})$  // Energy disparity aggregation
  $w^{t+1} = \sum_{k=1}^{N} I_k w_k^t$  // Model parameter update
**return** $M^T$

4.3 TOPOLOGICAL ENERGY SIMILARITY PROPAGATION

**Motivation.** Some defense methods simply measure certain distances between clients or certain distribution similarities, without any additional processing to differentiate between malicious and benign clients. In scenarios with moderate to high proportions of malicious clients, these methods are susceptible to the combined effects of heterogeneity and backdoor attacks. This results in their inability to accurately filter out malicious clients; in some cases, they even misclassify benign clients as malicious due to the heterogeneity.

**Construct Global Energy Graph.** Excluding the identified malicious clients, we utilize the energy elements of benign clients to compute the cosine similarity between each pair and construct a cosine similarity matrix, denoted as $\mathbf{S}$. We define the similarity between clients $c_m$ and $c_n$ as the element $S_{mn}$ in the $m$-th row and $n$-th column of $\mathbf{S}$. Additionally, we set a threshold, denoted as $\tau$, to determine which clients are considered similar. When the value of $S_{mn}$ is at least $\tau$, we consider the energy sequences of these two clients to be sufficiently similar. This implies that we can add an edge between these two clients in the global energy graph:

$$\mathbf{S} = [S_{mn}]_{m,n=1}^{N}, \quad \text{where } S_{mn} = \frac{\mathbf{E}_m \cdot \mathbf{E}_n}{\|\mathbf{E}_m\| \, \|\mathbf{E}_n\|}. \qquad (10)$$

$$\mathcal{E}^e = [e_{mn}]_{m,n=1}^{N}, \quad \text{where } e_{mn} = \begin{cases} 1, & \text{if } S_{mn} \geq \tau, \\ 0, & \text{if } S_{mn} < \tau. \end{cases} \qquad (11)$$

Here, $N$ denotes the number of selected clients, $\|\mathbf{E}_k\|$ represents the norm of the energy sequence of client $c_k$, $e_{mn}$ represents the edge between energy distributions $\mathbf{E}_m$ and $\mathbf{E}_n$, and $\tau$ is the set threshold. If $e_{mn} = 1$, the two clients are sufficiently similar, and an edge is established between them; otherwise, $e_{mn} = 0$.
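The graph construction of Eqs. (10)-(11), together with the propagation and aggregation rules of Eqs. (12)-(13) introduced just below, can be sketched as follows. Reducing each client's propagated energy vector to a scalar via its mean is our own assumption, as is the single propagation round.

```python
import numpy as np

def energy_graph_aggregation(E: np.ndarray, tau: float = 0.9, alpha: float = 0.5):
    """Build the global energy graph, propagate energies, derive weights.

    E: (N, S) matrix; row k holds the S energy elements of benign client k.
    Assumes tau is chosen so that the graph has at least one edge.
    """
    # Eq. (10): cosine similarity between energy elements
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    S = (E @ E.T) / (norms * norms.T)

    # Eq. (11): connect clients whose similarity reaches the threshold tau
    A = (S >= tau).astype(float)
    np.fill_diagonal(A, 0.0)

    # propagation weights beta_k proportional to node degree
    deg = A.sum(axis=1)
    beta = deg / deg.sum()

    # Eq. (12): one round of similarity propagation over the energy graph
    E_star = alpha * beta[:, None] * E
    for k in range(E.shape[0]):
        nbrs = np.flatnonzero(A[k])
        if nbrs.size:
            E_star[k] += (1 - alpha) * (beta[nbrs, None] * E[nbrs]).sum(axis=0)

    # Eq. (13): lower propagated energy and higher degree -> larger weight
    e_scalar = E_star.mean(axis=1)  # summarize each client's energies
    # the shift by e_scalar.min() only stabilizes exp(); it cancels after
    # normalization
    w = beta * np.exp(-(e_scalar - e_scalar.min()))
    return w / w.sum()
```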
**Energy Graph Similarity Propagation.** After establishing edge indices in the previous step, we obtain a global energy graph with $N$ nodes. From the above analysis, it is evident that an $\mathbf{E}_k$ with more established indices has higher similarity with other clients. We consider these clients to be more benign and assign them higher propagation weights. We define energy transmission to occur over multiple rounds, and we consider the update rule for energy transmission as follows:

$$\mathbf{E}_k^{*} = \alpha \, \mathbf{E}_k \beta_k + (1 - \alpha) \sum_{l=1}^{n} \mathbf{E}_k^{l} \beta_l, \quad \text{where } \beta_k = \frac{d_k}{\sum_{l=1}^{N} d_l}. \qquad (12)$$

Here, $n$ represents the number of indices established by $\mathbf{E}_k$, $\mathbf{E}_k^{l}$ denotes the $l$-th neighbor of $\mathbf{E}_k$, and $\beta_k$ represents the energy propagation weight of $\mathbf{E}_k$.

**Energy Disparity Aggregation.** Conventional parameter aggregation treats all elements equally, failing to recognize their varying impacts on the target distribution. In our framework, we consider samples with lower energy to be more _**typical**_. Commonly, low-energy samples are viewed as better fitting the model distribution, indicating they may be more suitable for training the model. Furthermore, a client with lower energy suggests a lower likelihood of malicious intent. Meanwhile, the more indices a client establishes, the higher the likelihood of it being benign. Therefore, we assign higher aggregation weights to such clients. This can be formalized as follows:

$$I_k = \frac{\exp(-\mathbf{E}_k^{*}) \, \beta_k}{\sum_{l \in N} \exp(-\mathbf{E}_l^{*})}. \qquad (13)$$

Here, $I_k$ represents the aggregation weight assigned to client $k$.

$$w^{t+1} = \sum_{k=1}^{N} I_k w_k^t. \qquad (14)$$

In this section, we consider two problematic scenarios: 1) If client $c_k$ shows insufficient similarity with certain clients $c_{m,n,l}$, we consider it a suspected misclassification ($c_k^{sus}$) and revoke its qualification to establish connections between $\mathbf{E}_k^{sus}$ and $\mathbf{E}_{m,n,l}$. 2) If a client $c_k$ has a similarity with all other clients below $\tau$, we consider $c_k$ a malicious client that has been mistakenly clustered with benign clients, revoking its right to participate in parameter aggregation.

**Discussion.** Energy-based models have been widely studied in various domains, including images

Idea Generation Category: 0 (Conceptual Integration)
id: 5Jc7r5aqHJ
# SECOND-ORDER MIN-MAX OPTIMIZATION WITH LAZY HESSIANS

**Lesi Chen**∗
IIIS, Tsinghua University & Shanghai Qizhi Institute
chenlc23@mails.tsinghua.edu.cn

**Chengchang Liu**∗
The Chinese University of Hong Kong
7liuchengchang@gmail.com

**Jingzhao Zhang**†
IIIS, Tsinghua University & Shanghai Qizhi Institute & Shanghai AI Lab
jingzhaoz@mail.tsinghua.edu.cn

ABSTRACT

This paper studies second-order methods for convex-concave minimax optimization. Monteiro & Svaiter (2012) proposed a method to solve the problem with an optimal iteration complexity of $\mathcal{O}(\epsilon^{-2/3})$ to find an $\epsilon$-saddle point. However, it is unclear whether the computational complexity, $\mathcal{O}((N + d^2) d \epsilon^{-2/3})$, can be improved. In the above, we follow Doikov et al. (2023) and take the complexity of obtaining a first-order oracle as $N$ and the complexity of obtaining a second-order oracle as $dN$. In this paper, we show that the computation cost can be reduced by reusing Hessians across iterations. Our methods take an overall computational complexity of $\tilde{\mathcal{O}}((N + d^2)(d + d^{2/3} \epsilon^{-2/3}))$, which improves those of previous methods by a factor of $d^{1/3}$. Furthermore, we generalize our method to strongly-convex-strongly-concave minimax problems and establish the complexity of $\tilde{\mathcal{O}}((N + d^2)(d + d^{2/3} \kappa^{2/3}))$ when the condition number of the problem is $\kappa$, enjoying a similar speedup over the state-of-the-art methods. Numerical experiments on both real and synthetic datasets also verify the efficiency of our method.

1 INTRODUCTION

We consider the following minimax optimization problem:

$$\min_{\mathbf{x} \in \mathbb{R}^{d_x}} \ \max_{\mathbf{y} \in \mathbb{R}^{d_y}} \ f(\mathbf{x}, \mathbf{y}), \qquad (1)$$

where we suppose $f(\mathbf{x}, \mathbf{y})$ is (strongly-)convex in $\mathbf{x}$ and (strongly-)concave in $\mathbf{y}$. This setting covers many useful applications, including functionally constrained optimization (Xu, 2020), game theory (Von Neumann & Morgenstern, 1947), robust optimization (Ben-Tal et al., 2009), fairness-aware machine learning (Zhang et al., 2018), reinforcement learning (Du et al., 2017; Wang, 2017; Paternain et al., 2022; Wai et al., 2018), decentralized optimization (Kovalev et al., 2021; 2020), and AUC maximization (Ying et al., 2016; Hanley & McNeil, 1982; Yuan et al., 2021).

First-order methods are widely studied for this problem. Classical algorithms include ExtraGradient (EG) (Korpelevich, 1976; Nemirovski, 2004), Optimistic Gradient Descent Ascent (OGDA) (Popov, 1980; Mokhtari et al., 2020a;b), Hybrid Proximal Extragradient (HPE) (Monteiro & Svaiter, 2010), and Dual Extrapolation (DE) (Nesterov & Scrimali, 2006; Nesterov, 2007). When the gradient of $f(\cdot,\cdot)$ is $L$-Lipschitz continuous, these methods achieve the rate of $\mathcal{O}(\epsilon^{-1})$ under the convex-concave (C-C) setting and the rate of $\mathcal{O}((L/\mu)\log(\epsilon^{-1}))$ when $f(\cdot,\cdot)$ is $\mu$-strongly-convex in $\mathbf{x}$ and $\mu$-strongly-concave in $\mathbf{y}$ (SC-SC) for $\mu > 0$. They are all optimal in the C-C and SC-SC settings due to the lower bounds reported by (Nemirovskij & Yudin, 1983; Zhang et al., 2022a).

∗ Equal contributions. † The corresponding author.

Second-order methods usually lead to faster rates than first-order methods when the Hessian of $f(\cdot,\cdot)$ is $\rho$-Lipschitz continuous.
A line of works (Nesterov & Scrimali, 2006; Huang et al., 2022) extended the celebrated Cubic Regularized Newton (CRN) method (Nesterov & Polyak, 2006) to minimax problems with local superlinear convergence rates and global convergence guarantees. However, the established global convergence rates of $O(\epsilon^{-1})$ by Nesterov & Scrimali (2006) and $O((L\rho/\mu^2)\log(\epsilon^{-1}))$ by Huang et al. (2022) under the C-C and SC-SC conditions are no better than those of the optimal first-order methods. Another line of work generalizes the optimal first-order methods to higher-order methods. Monteiro & Svaiter (2012) proposed the Newton Proximal Extragradient (NPE) method with a global convergence rate of $O(\epsilon^{-2/3}\log\log(\epsilon^{-1}))$ under the C-C conditions. This result nearly matches the lower bounds (Adil et al., 2022; Lin & Jordan, 2024), except for an additional $O(\log\log(\epsilon^{-1}))$ factor caused by the implicit binary search at each iteration. Bullins & Lai (2022); Adil et al. (2022); Huang & Zhang (2022); Lin et al. (2022) provided a simple proof of NPE motivated by the EG analysis and showed that replacing the quadratic regularized Newton step with the cubic regularized Newton (CRN) step in NPE achieves the optimal second-order oracle complexity of $O(\epsilon^{-2/3})$. Recently, Alves & Svaiter (2023) proposed a search-free NPE method that achieves the optimal second-order oracle complexity with a pure quadratic regularized Newton step based on ideas from homotopy. Over the past decade, researchers have also proposed various second-order methods, in addition to the NPE framework, that achieve the same convergence rate, such as the second-order extensions of OGDA (Jiang & Mokhtari, 2022; Jiang et al., 2024) (which we refer to as OGDA-2) and DE (Lin & Jordan, 2024) (they name the method Perseus). The results for C-C problems can also be extended to SC-SC problems, where Jiang & Mokhtari (2022) proved that OGDA-2 can converge at the rate of $O((\rho/\mu)^{2/3} + \log\log(\epsilon^{-1}))$, and Huang & Zhang (2022) proposed ARE-restart with the rate of $O((\rho/\mu)^{2/3}\log\log(\epsilon^{-1}))$.

Although the aforementioned second-order methods (Adil et al., 2022; Lin & Jordan, 2024; Lin et al., 2022; Jiang & Mokhtari, 2022; Monteiro & Svaiter, 2012) enjoy an improved convergence rate over first-order methods and have achieved optimal iteration complexities, they require querying one new Hessian at each iteration and solving a matrix inversion problem at each Newton step, which leads to an $O(d^3)$ computational cost per iteration. This becomes the main bottleneck that limits the applicability of second-order methods. Liu & Luo (2022a) proposed quasi-Newton methods for saddle point problems that access one Hessian-vector product instead of the exact Hessian at each iteration, which reduces the per-iteration cost to $O(d^2)$. However, their methods do not have a global convergence guarantee under general (S)C-(S)C conditions. Jiang et al. (2023) proposed an online-learning guided Quasi-Newton Proximal Extragradient (QNPE) algorithm, but their method relies on more complicated subroutines than classical Newton methods. Although the oracle complexity of QNPE is strictly better than that of the optimal first-order method EG, their method is worse in terms of total computational complexity.
In this paper, we propose a computation-efficient second-order method, which we call LEN (Lazy Extra Newton method). In contrast to all existing second-order or quasi-Newton methods for minimax optimization problems, which always access new second-order information at each iteration, LEN reuses the second-order information from past iterations. Specifically, LEN solves a cubic regularized sub-problem using the Hessian from the snapshot point, which is updated every $m$ iterations, and then conducts an extra-gradient step using the gradient at the current iteration. We provide a rigorous theoretical analysis of LEN to show that it maintains fast global convergence rates and improves upon the (near-)optimal second-order methods (Monteiro & Svaiter, 2012) in terms of the overall computational complexity. We summarize our contributions as follows (also see Table 1).

- When the objective function $f(\cdot,\cdot)$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$, we propose LEN and prove that it finds an $\epsilon$-saddle point in $O(m^{2/3}\epsilon^{-2/3})$ iterations. Under Assumption 3.4, where the complexity of calculating $\mathbf{F}(\mathbf{z})$ is $N$ and the complexity of calculating $\nabla\mathbf{F}(\mathbf{z})$ is $dN$, the optimal choice is $m = \Theta(d)$. In this case, LEN only requires a computational complexity of $\tilde{O}((N + d^2)(d + d^{2/3}\epsilon^{-2/3}))$, which is strictly better than the $O((N + d^2) d \epsilon^{-2/3})$ of the existing optimal second-order methods by a factor of $d^{1/3}$.
- When the objective function $f(\cdot,\cdot)$ is $\mu$-strongly-convex in $\mathbf{x}$ and $\mu$-strongly-concave in $\mathbf{y}$, we apply the restart strategy to LEN and propose LEN-restart. We prove that the algorithm can find an $\epsilon$-root with $\tilde{O}((N + d^2)(d + d^{2/3}(\rho/\mu)^{2/3}))$ computational complexity, where $\rho$ means the Hessian of $f(\cdot,\cdot)$ is $\rho$-Lipschitz continuous. Our result is strictly better than the $\tilde{O}((N + d^2) d (\rho/\mu)^{2/3})$ in prior works.

Table 1: We compare the required computational complexity to achieve an $\epsilon$-saddle point of the proposed LEN with the optimal choice $m = \Theta(d)$ and other existing algorithms on both convex-concave (C-C) and strongly-convex-strongly-concave (SC-SC) problems. Here, $d = d_x + d_y$ is the dimension of the problem. We assume the gradient is $L$-Lipschitz continuous for EG and the Hessian is $\rho$-Lipschitz continuous for the others. We count each gradient oracle call with $N$ computational complexity and each Hessian oracle with $dN$ computational complexity.
| Setup | Method | Computational Cost |
| --- | --- | --- |
| C-C | EG (Korpelevich, 1976) | $O((N + d)\epsilon^{-1})$ |
| C-C | NPE (Monteiro & Svaiter, 2012) | $\tilde{O}((N + d^2) d \epsilon^{-2/3})$ |
| C-C | search-free NPE (Alves & Svaiter, 2023) | $O((N + d^2) d \epsilon^{-2/3})$ |
| C-C | OGDA-2 (Jiang & Mokhtari, 2022) | $O((N + d^2) d \epsilon^{-2/3})$ |
| C-C | LEN (Theorem 4.3) | $\tilde{O}((N + d^2)(d + d^{2/3}\epsilon^{-2/3}))$ |
| SC-SC | EG (Korpelevich, 1976) | $\tilde{O}((N + d)(L/\mu))$ |
| SC-SC | OGDA-2 (Jiang & Mokhtari, 2022) | $O((N + d^2) d (\rho/\mu)^{2/3})$ |
| SC-SC | ARE-restart (Huang & Zhang, 2022) | $\tilde{O}((N + d^2) d (\rho/\mu)^{2/3})$ |
| SC-SC | Perseus-restart (Lin & Jordan, 2024) | $\tilde{O}((N + d^2) d (\rho/\mu)^{2/3})$ |
| SC-SC | LEN-restart (Corollary 4.1) | $\tilde{O}((N + d^2)(d + d^{2/3}(\rho/\mu)^{2/3}))$ |

**Notations.** Throughout this paper, $\log$ is base 2 and $\log_+(\cdot) := 1 + \log(\cdot)$. We use $\|\cdot\|$ to denote the spectral norm and the Euclidean norm of matrices and vectors, respectively. We denote $\pi(t) = t - (t \bmod m)$, where $m \in \mathbb{N}_+$.

2 RELATED WORKS AND TECHNICAL CHALLENGES

**Lazy Hessian in minimization problems.** The idea of reusing Hessians was initially presented by Shamanskii (1967) and later incorporated into the Levenberg-Marquardt method, the Damped Newton method, and the proximal Newton method (Fan, 2013; Lampariello & Sciandrone, 2001; Wang et al., 2006; Adler et al., 2020). However, the explicit advantage of the lazy Hessian update over the ordinary Newton(-type) update was not discovered until the recent work of Doikov et al. (2023); Chayti et al. (2023). They applied the following lazy Hessian update to cubic regularized Newton (CRN) methods (Nesterov & Polyak, 2006):
$$\mathbf{z}_{t+1} = \arg\min_{\mathbf{z}\in\mathbb{R}^d} \; \langle \mathbf{F}(\mathbf{z}_t), \mathbf{z} - \mathbf{z}_t \rangle + \frac{1}{2}\langle \nabla\mathbf{F}(\mathbf{z}_{\pi(t)})(\mathbf{z} - \mathbf{z}_t), \mathbf{z} - \mathbf{z}_t \rangle + \frac{M}{6}\|\mathbf{z} - \mathbf{z}_t\|^3, \tag{2}$$
where $M \ge 0$ and $\mathbf{F} : \mathbb{R}^d \to \mathbb{R}^d$ is the gradient field of a convex function. They establish convergence rates of $O(\sqrt{m}\,\epsilon^{-3/2})$ for nonconvex optimization (Doikov et al., 2023) and $O(\sqrt{m}\,\epsilon^{-1/2})$ for convex optimization (Chayti et al., 2023), respectively. Such rates lead to total computational costs of $\tilde{O}((N + d^2)(d + \sqrt{d}\,\epsilon^{-3/2}))$ and $\tilde{O}((N + d^2)(d + \sqrt{d}\,\epsilon^{-1/2}))$ by setting $m = \Theta(d)$, which strictly improve the results of classical CRN methods by a factor of $\sqrt{d}$ in both setups. A minimal sketch of this lazy-update schedule is given below.
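To make the schedule in Eq. (2) concrete, the following sketch (our illustration, not the authors' code; the quadratic test problem and the replacement of the exact cubic sub-problem by a fixed quadratic regularization $M I$ are assumptions made for brevity) runs a Newton-type loop that refreshes and refactorizes the Hessian only at the snapshot points $\pi(t) = t - (t \bmod m)$:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lazy_newton(grad, hess, z0, m=10, steps=100, M=1.0):
    """Newton-type iteration that reuses the Hessian factorization for m steps.

    The Hessian is only recomputed at snapshot points pi(t) = t - (t % m);
    between snapshots, every step reuses the cached Cholesky factor, so the
    O(d^3) factorization cost is paid once per m iterations.

    Note: for brevity this replaces the exact cubic-regularized sub-problem of
    Eq. (2) with a fixed quadratic regularization M*I (an illustrative
    simplification, not the paper's exact subroutine).
    """
    z = z0.astype(float)
    cached = None
    for t in range(steps):
        if t % m == 0:                      # snapshot point: refresh Hessian
            H = hess(z)                     # costs ~d gradient evaluations
            cached = cho_factor(H + M * np.eye(len(z)))
        z = z - cho_solve(cached, grad(z))  # cheap O(d^2) solve per step
    return z

# Toy usage on a strongly convex quadratic f(z) = 0.5 z^T A z - b^T z.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A @ A.T + 5 * np.eye(5)
b = rng.standard_normal(5)
z_star = lazy_newton(lambda z: A @ z - b, lambda z: A, np.zeros(5), m=5)
print(np.linalg.norm(A @ z_star - b))  # gradient norm should be ~0
```

The amortization argument is visible in the loop: the $O(d^3)$ factorization is paid once every $m$ iterations, while the intermediate steps only pay the $O(d^2)$ triangular solves.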
We have also observed that the idea of the "lazy Hessian" is widely used in practical second-order methods. KFAC (Martens & Grosse, 2015; Grosse & Martens, 2016) approximates the Fisher information matrix and uses an exponential moving average (EMA) to update its estimate, which can be viewed as a soft version of the lazy update. Sophia (Liu et al., 2024) estimates a diagonal Hessian matrix as a pre-conditioner, which is updated in a lazy manner to reduce the complexity. C2EDEN (Liu et al., 2023) amortizes the communication of local Hessians over several consecutive iterations, which also benefits from the idea of lazy updates.

**Challenge of using lazy Hessian updates in minimax problems.** In comparison to previous work on lazy Hessians, our LEN and LEN-restart methods demonstrate the advantage of using lazy Hessians for a broader class of optimization problems, the _minimax_ problems. Our analysis differs from the ones in Doikov et al. (2023); Chayti et al. (2023). Their methods only take a lazy CRN update (2) at each iteration, which makes it easy to bound the error of lazy Hessian updates using Assumption 3.1 and the triangle inequality in the following way:
$$\|\nabla\mathbf{F}(\mathbf{z}_t) - \nabla\mathbf{F}(\mathbf{z}_{\pi(t)})\| \le \rho\|\mathbf{z}_{\pi(t)} - \mathbf{z}_t\| \le \rho \sum_{i=\pi(t)}^{t-1} \|\mathbf{z}_i - \mathbf{z}_{i+1}\|.$$
Our method, on the other hand, not only takes a lazy (regularized) Newton update but also requires an extra gradient step (Line 4 in Algorithm 1). Thus, the progress of one Newton update $\{\|\mathbf{z}_{i+1/2} - \mathbf{z}_i\|\}_{i=\pi(t)}^{t}$ cannot directly bound the error term $\|\mathbf{z}_t - \mathbf{z}_{\pi(t)}\|$ introduced by the lazy Hessian update. Moreover, for minimax problems the matrix $\nabla\mathbf{F}(\mathbf{z}_{\pi(t)})$ is no longer symmetric, which leads to a different analysis and implementation of the sub-problem solving (Section 4.3). We refer the readers to Section 4 for more detailed discussions.

3 PRELIMINARIES

In this section, we introduce the notation and basic assumptions used in our work. We start with several standard definitions for Problem (1).

**Definition 3.1.** _We say a function $f(\mathbf{x}, \mathbf{y}) : \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \to \mathbb{R}$ has $\rho$-Lipschitz Hessians if we have_
$$\|\nabla^2 f(\mathbf{x}, \mathbf{y}) - \nabla^2 f(\mathbf{x}', \mathbf{y}')\| \le \rho \left\| \begin{bmatrix} \mathbf{x} - \mathbf{x}' \\ \mathbf{y} - \mathbf{y}' \end{bmatrix} \right\|, \quad \forall (\mathbf{x}, \mathbf{y}), (\mathbf{x}', \mathbf{y}') \in \mathbb{R}^{d_x} \times \mathbb{R}^{d_y}.$$

**Definition 3.2.** _A differentiable function $f(\cdot,\cdot)$ is $\mu$-strongly-convex-$\mu$-strongly-concave for some $\mu > 0$ if_
$$f(\mathbf{x}', \mathbf{y}) \ge f(\mathbf{x}, \mathbf{y}) + (\mathbf{x}' - \mathbf{x})^\top \nabla_x f(\mathbf{x}, \mathbf{y}) + \frac{\mu}{2}\|\mathbf{x} - \mathbf{x}'\|^2, \quad \forall \mathbf{x}', \mathbf{x} \in \mathbb{R}^{d_x},\ \mathbf{y} \in \mathbb{R}^{d_y};$$
$$f(\mathbf{x}, \mathbf{y}') \le f(\mathbf{x}, \mathbf{y}) + (\mathbf{y}' - \mathbf{y})^\top \nabla_y f(\mathbf{x}, \mathbf{y}) - \frac{\mu}{2}\|\mathbf{y} - \mathbf{y}'\|^2, \quad \forall \mathbf{y}', \mathbf{y} \in \mathbb{R}^{d_y},\ \mathbf{x} \in \mathbb{R}^{d_x}.$$
_We say $f$ is convex-concave if $\mu = 0$._

We are interested in finding a saddle point of Problem (1), formally defined as follows.

**Definition 3.3.** _We call a point $(\mathbf{x}^*, \mathbf{y}^*) \in \mathbb{R}^{d_x} \times \mathbb{R}^{d_y}$ a saddle point of a function $f(\cdot,\cdot)$ if we have_
$$f(\mathbf{x}^*, \mathbf{y}) \le f(\mathbf{x}^*, \mathbf{y}^*) \le f(\mathbf{x}, \mathbf{y}^*), \quad \forall \mathbf{x} \in \mathbb{R}^{d_x},\ \mathbf{y} \in \mathbb{R}^{d_y}.$$

Next, we introduce all the assumptions made in this work. In this paper, we focus on Problem (1) under the following assumptions.
**Assumption 3.1.** _We assume the function $f(\cdot,\cdot)$ is twice continuously differentiable, has $\rho$-Lipschitz continuous Hessians, and has at least one saddle point $(\mathbf{x}^*, \mathbf{y}^*)$._

We will study convex-concave problems and strongly-convex-strongly-concave problems.

**Assumption 3.2** (C-C setting)**.** _We assume the function $f(\cdot,\cdot)$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$._

**Assumption 3.3** (SC-SC setting)**.** _We assume the function $f(\cdot,\cdot)$ is $\mu$-strongly-convex-$\mu$-strongly-concave. We denote the condition number as $\kappa := \rho/\mu$._

We let $d := d_x + d_y$ and denote the aggregated variable $\mathbf{z} := (\mathbf{x}, \mathbf{y}) \in \mathbb{R}^d$. We also denote the GDA field of $f$ and its Jacobian as
$$\mathbf{F}(\mathbf{z}) := \begin{bmatrix} \nabla_x f(\mathbf{x}, \mathbf{y}) \\ -\nabla_y f(\mathbf{x}, \mathbf{y}) \end{bmatrix}, \qquad \nabla\mathbf{F}(\mathbf{z}) := \begin{bmatrix} \nabla^2_{xx} f(\mathbf{x}, \mathbf{y}) & \nabla^2_{xy} f(\mathbf{x}, \mathbf{y}) \\ -\nabla^2_{yx} f(\mathbf{x}, \mathbf{y}) & -\nabla^2_{yy} f(\mathbf{x}, \mathbf{y}) \end{bmatrix}. \tag{3}$$
The GDA field of $f(\cdot,\cdot)$ has the following properties.

**Lemma 3.1** (Lemma 2.7 of Lin et al. (2022))**.** _Under Assumptions 3.1 and 3.2, we have:_
1. _$\mathbf{F}$ is monotone, i.e., $\langle \mathbf{F}(\mathbf{z}) - \mathbf{F}(\mathbf{z}'), \mathbf{z} - \mathbf{z}' \rangle \ge 0$, $\forall \mathbf{z}, \mathbf{z}' \in \mathbb{R}^d$._
2. _$\nabla\mathbf{F}$ is $\rho$-Lipschitz continuous, i.e., $\|\nabla\mathbf{F}(\mathbf{z}) - \nabla\mathbf{F}(\mathbf{z}')\| \le \rho\|\mathbf{z} - \mathbf{z}'\|$, $\forall \mathbf{z}, \mathbf{z}' \in \mathbb{R}^d$._
3. _$\mathbf{F}(\mathbf{z}^*) = 0$ if and only if $\mathbf{z}^* = (\mathbf{x}^*, \mathbf{y}^*)$ is a saddle point of the function $f(\cdot,\cdot)$._

_Furthermore, if Assumption 3.3 holds, $\mathbf{F}(\cdot)$ is $\mu$-strongly-monotone, i.e., $\langle \mathbf{F}(\mathbf{z}) - \mathbf{F}(\mathbf{z}'), \mathbf{z} - \mathbf{z}' \rangle \ge \mu\|\mathbf{z} - \mathbf{z}'\|^2$, $\forall \mathbf{z}, \mathbf{z}' \in \mathbb{R}^d$._

For the C-C case, the commonly used optimality criterion is the following restricted gap.

**Definition 3.4** (Nesterov (2007))**.** _Let $\mathcal{B}_\beta(\mathbf{w})$ be the ball centered at $\mathbf{w}$ with radius $\beta$, and let $(\mathbf{x}^*, \mathbf{y}^*)$ be a saddle point of the function $f$. For a given point $(\hat{\mathbf{x}}, \hat{\mathbf{y}})$, we let $\beta$ be sufficiently large such that it holds that $\max\{\|\hat{\mathbf{x}} - \mathbf{x}^*\|, \|\hat{\mathbf{y}} - \mathbf{y}^*\|\} \le \beta$, and we define the restricted gap function as_
$$\mathrm{Gap}(\hat{\mathbf{x}}, \hat{\mathbf{y}}; \beta) := \max_{\mathbf{y}\in\mathcal{B}_\beta(\mathbf{y}^*)} f(\hat{\mathbf{x}}, \mathbf{y}) - \min_{\mathbf{x}\in\mathcal{B}_\beta(\mathbf{x}^*)} f(\mathbf{x}, \hat{\mathbf{y}}).$$
_We call $(\hat{\mathbf{x}}, \hat{\mathbf{y}})$ an $\epsilon$-saddle point if $\mathrm{Gap}(\hat{\mathbf{x}}, \hat{\mathbf{y}}; \beta) \le \epsilon$ and $\beta = \Omega(\max\{\|\mathbf{x}_0 - \mathbf{x}^*\|, \|\mathbf{y}_0 - \mathbf{y}^*\|\})$._

For the SC-SC case, we use the following stronger notion.

**Definition 3.5.** _Suppose that Assumption 3.3 holds and let $\mathbf{z}^* = (\mathbf{x}^*, \mathbf{y}^*)$ be the unique saddle point of the function $f$. We call $\hat{\mathbf{z}} = (\hat{\mathbf{x}}, \hat{\mathbf{y}})$ an $\epsilon$-root if $\|\hat{\mathbf{z}} - \mathbf{z}^*\| \le \epsilon$._

Most previous works only consider the complexity metric given by the number of oracle calls, where an oracle takes a point $\mathbf{z} \in \mathbb{R}^d$ as the input and returns a tuple $(\mathbf{F}(\mathbf{z}), \nabla\mathbf{F}(\mathbf{z}))$ as the output. The existing algorithms (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Adil et al., 2022; Lin et al., 2022) have achieved optimal complexity regarding the number of oracle calls. In this work, we focus on the computational complexity of the oracle. More specifically, we distinguish between the different computational complexities of calculating the Hessian matrix $\nabla\mathbf{F}(\mathbf{z})$ and the gradient $\mathbf{F}(\mathbf{z})$. Formally, we make the same assumption as Doikov et al. (2023).

**Assumption 3.4.** _We count the computational complexity of computing $\mathbf{F}(\cdot)$ as $N$ and the computational complexity of $\nabla\mathbf{F}(\cdot)$ as $Nd$._

**Remark 3.1.** _Assumption 3.4 supposes the cost of computing $\nabla\mathbf{F}(\cdot)$ is $d$ times that of computing $\mathbf{F}(\cdot)$. It holds in many practical scenarios, as one Hessian oracle can be computed via $d$ Hessian-vector products on the standard basis vectors $\mathbf{e}_1, \cdots, \mathbf{e}_d$, and one Hessian-vector product oracle is typically as expensive as one gradient oracle (Wright, 2006):_
1. _When the computational graph of $f$ is obtainable, both $\mathbf{F}(\mathbf{z})$ and $\nabla\mathbf{F}(\mathbf{z})\mathbf{v}$ can be computed using automatic differentiation with the same cost for any $\mathbf{z}, \mathbf{v} \in \mathbb{R}^d$._
2. _When $f$ is a black-box function, we can estimate the Hessian-vector product $\nabla\mathbf{F}(\mathbf{z})\mathbf{v}$ via the finite difference $\mathbf{u}_\delta(\mathbf{z}; \mathbf{v}) = \frac{1}{2\delta}(\mathbf{F}(\mathbf{z} + \delta\mathbf{v}) - \mathbf{F}(\mathbf{z} - \delta\mathbf{v}))$, and we have $\lim_{\delta\to 0} \mathbf{u}_\delta(\mathbf{z}; \mathbf{v}) = \nabla\mathbf{F}(\mathbf{z})\mathbf{v}$ under mild conditions on $\mathbf{F}(\cdot)$._
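As an illustration of Remark 3.1 (our own sketch, not from the paper; the toy saddle function, the step size $\delta$, and all names are assumptions), the snippet below builds the GDA field $\mathbf{F}(\mathbf{z})$ of Eq. (3) for a quadratic saddle problem and estimates a Jacobian-vector product $\nabla\mathbf{F}(\mathbf{z})\mathbf{v}$ with the central difference of item 2, at the cost of two evaluations of $\mathbf{F}$:

```python
import numpy as np

# Toy saddle function f(x, y) = 0.5*mu*||x||^2 + x^T B y - 0.5*mu*||y||^2,
# with the GDA field F(z) = (grad_x f, -grad_y f) from Eq. (3).
mu, dx, dy = 1.0, 3, 2
rng = np.random.default_rng(1)
B = rng.standard_normal((dx, dy))

def F(z):
    x, y = z[:dx], z[dx:]
    gx = mu * x + B @ y          # grad_x f
    gy = B.T @ x - mu * y        # grad_y f
    return np.concatenate([gx, -gy])

def hvp_fd(F, z, v, delta=1e-6):
    """Central-difference estimate of the Jacobian-vector product dF(z) v.

    One estimate costs two gradient-field evaluations, which is the point of
    Remark 3.1: a full Hessian costs ~d such products, i.e. ~d gradients.
    """
    return (F(z + delta * v) - F(z - delta * v)) / (2 * delta)

z = rng.standard_normal(dx + dy)
v = rng.standard_normal(dx + dy)
# Exact Jacobian of F for this quadratic toy problem, for comparison.
J = np.block([[mu * np.eye(dx), B], [-B.T, mu * np.eye(dy)]])
print(np.max(np.abs(hvp_fd(F, z, v) - J @ v)))  # tiny (exact up to rounding)
```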
4 ALGORITHMS AND CONVERGENCE ANALYSIS

In this section, we present novel second-order methods for solving the minimax optimization problem (1). We present LEN and its convergence analysis for convex-concave minimax problems in Section 4.1. We generalize LEN to strongly-convex-strongly-concave minimax problems by presenting LEN-restart in Section 4.2. We discuss the details of solving the minimax cubic-regularized sub-problem, present a detailed implementation of LEN, and give the total computational complexity of the proposed methods in Section 4.3.

4.1 THE LEN ALGORITHM FOR CONVEX-CONCAVE PROBLEMS

We propose LEN for convex-concave problems in Algorithm 1. Our method builds on the optimal Newton Proximal Extragradient (NPE) method (Monteiro & Svaiter, 2012; Bullins & Lai, 2022; Idea Generation Category:
0Conceptual Integration
ijbA5swmoK
# COMPUTATIONAL EXPLORATIONS OF TOTAL VARIATION DISTANCE

**Arnab Bhattacharyya**∗ University of Warwick **Sutanu Gayen** IIT Kanpur **Kuldeep S. Meel** University of Toronto, Georgia Institute of Technology **Dimitrios Myrisiotis** CNRS@CREATE LTD. **A. Pavan** Iowa State University **N. V. Vinodchandran** University of Nebraska-Lincoln

∗ Work done while the author was affiliated with the National University of Singapore.

ABSTRACT

We investigate some previously unexplored (or underexplored) computational aspects of total variation (TV) distance. First, we give a simple deterministic polynomial-time algorithm for checking equivalence between mixtures of product distributions, over arbitrary alphabets. This corresponds to a special case, whereby the TV distance between the two distributions is zero. Second, we prove that unless NP $\subseteq$ RP it is impossible to efficiently estimate the TV distance between arbitrary Ising models, even in a bounded-error randomized setting.

1 INTRODUCTION

The total variation (TV) distance between distributions $P$ and $Q$ over a common sample space $D$, denoted by $d_{\mathrm{TV}}(P, Q)$, is defined as
$$d_{\mathrm{TV}}(P, Q) := \max_{S\subseteq D} (P(S) - Q(S)) = \frac{1}{2}\sum_{x\in D} |P(x) - Q(x)| = \sum_{x\in D} \max(0, P(x) - Q(x)).$$
The TV distance satisfies many basic properties, which makes it a versatile and fundamental measure for quantifying the dissimilarity between probability distributions. First, it has an explicit probabilistic interpretation: the TV distance between two distributions is the maximum gap between the probabilities assigned to a single event by the two distributions. Second, it satisfies many mathematically desirable properties: it is bounded and lies in $[0, 1]$, it is a metric, and it is invariant with respect to bijections. Third, it satisfies an interesting composability property: given $f(g_1, g_2, \ldots, g_n)$, suppose we replace $g_2$ with $g_2'$ such that $d_{\mathrm{TV}}(g_2, g_2') \le \varepsilon$; then the TV distance between $f(g_1, g_2, \ldots, g_n)$ and $f(g_1, g_2', \ldots, g_n)$ is at most $\varepsilon$. Because of these reasons, the total variation distance is a central distance measure employed in a wide range of areas including probability and statistics Mitzenmacher & Upfal (2005), machine learning Shalev-Shwartz & Ben-David (2014), information theory Cover & Thomas (2006), cryptography Stinson (1995), data privacy Dwork (2006), and pseudorandomness Vadhan (2012).
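As a quick numerical sanity check of the three equivalent expressions above (our own illustration; the two example distributions are arbitrary assumptions), the following snippet evaluates all of them on a small sample space:

```python
import numpy as np

# Two distributions over a 4-element sample space.
P = np.array([0.4, 0.3, 0.2, 0.1])
Q = np.array([0.1, 0.2, 0.3, 0.4])

half_l1 = 0.5 * np.abs(P - Q).sum()      # (1/2) * sum |P(x) - Q(x)|
pos_part = np.maximum(0.0, P - Q).sum()  # sum max(0, P(x) - Q(x))
# The max over events S is attained at S = {x : P(x) > Q(x)}.
max_event = (P - Q)[P > Q].sum()

print(half_l1, pos_part, max_event)      # all three print 0.4
```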
Lately, the computational aspects of TV distance have attracted a lot of attention. Sahai & Vadhan (2003) established, in a seminal work, that additively approximating the TV distance between two distributions that are samplable by Boolean circuits is hard for SZK (Statistical Zero Knowledge). The complexity class SZK is fundamental to cryptography and is believed to be computationally hard. Subsequent works captured variations of this theme Goldreich et al. (1999); Malka (2015); Dixon et al. (2020): for example, Goldreich et al. (1999) showed that the problem of deciding whether a distribution samplable by a Boolean circuit is close to or far from the uniform distribution is complete for NISZK (Non-Interactive Statistical Zero Knowledge). Moreover, Cortes et al. (2007); Lyngsø & Pedersen (2002); Kiefer (2018) showed that it is undecidable to check whether the TV distance between two hidden Markov models is greater than a threshold or not, and that it is #P-hard to additively approximate it. Finally, Bhattacharyya et al. (2023) showed that (a) exactly computing the TV distance between product distributions is #P-complete, and (b) multiplicatively approximating the TV distance between Bayes nets is NP-hard.

On an algorithmic note, Bhattacharyya et al. (2020) designed efficient algorithms to additively approximate the TV distance between distributions that are efficiently samplable and efficiently computable (including the case of _ferromagnetic_ Ising models). In particular, they designed efficient algorithms for additively approximating the TV distance of structured high-dimensional distributions such as Bayesian networks, Ising models, and multivariate Gaussians. In a similar vein, Pote & Meel (2021) studied a related property testing variant of TV distance for distributions encoded by circuits. Multiplicative approximation of TV distance has received less attention compared to additive approximation. Recently, Bhattacharyya et al. (2023) gave an FPTAS for estimating the TV distance between an arbitrary product distribution and a product distribution with a bounded number of distinct marginals. Feng et al. (2023) designed an FPRAS for multiplicatively approximating the TV distance between two arbitrary product distributions, and Feng et al. (2024) gave an FPTAS for the same task. Finally, Bhattacharyya et al. (2024) gave an FPRAS for estimating the TV distance between Bayes nets of small treewidth. In this paper we address some previously unexplored (or under-explored) computational aspects of total variation distance relating to mixtures of product distributions and Ising models.

1.1 EQUIVALENCE CHECKING FOR MIXTURES OF PRODUCT DISTRIBUTIONS

Mixtures of product distributions constitute a natural and important class of distributions that have been studied in the mathematics and computer science literature. For instance, it is a standard observation that any distribution can be described by some (possibly large) mixture of product distributions (see Observation 9 in Appendix A). Freund & Mansour (1999) gave an efficient algorithm for learning a mixture of two product distributions over the Boolean domain. As part of their analysis, they showed that given two mixtures of two product distributions, their KL divergence can be upper bounded by that of the components and a certain distance between the mixture coefficients. However, this upper bound does not lead to an equivalence checking algorithm. A related problem in machine learning is source identification, whereby one is asked to identify the source parameters of a distribution. Gordon et al. (2021); Gordon & Schulman (2022); Gordon et al. (2023) give algorithms for source identification of a mixture of $k$ product distributions on $n$ bits, when given as input approximate values of multilinear moments. We focus on the equivalence checking problem for mixtures of product distributions. Note that while it is easy to check whether two product distributions are equivalent, that is, by checking whether their respective Bernoulli parameters are equal, it is not clear how to do so for the case of mixtures of product distributions.
This is so because there are mixtures of product distributions that are equal (as distributions) but are described by different sets of Bernoulli parameters. For example, consider the case where we have two mixtures over one bit, namely $P = 1 \cdot P_1 + 0 \cdot P_2$ and $Q = \frac{1}{2} \cdot Q_1 + \frac{1}{2} \cdot Q_2$, where $P_1 = P_2 = \mathrm{Bern}(\frac{1}{2})$ while $Q_1 = \mathrm{Bern}(\frac{1}{3})$ and $Q_2 = \mathrm{Bern}(\frac{2}{3})$. In this case, $P = Q = \mathrm{Bern}(\frac{1}{2})$, but the parameters of $P$ and $Q$ are different. We present a simple deterministic polynomial-time algorithm for checking equivalence between mixtures of product distributions.

Let us first formally define mixtures of product distributions. Let $w_1, \ldots, w_k$ be real numbers (weights) such that $0 \le w_i \le 1$ for all $1 \le i \le k$ and $\sum_{i=1}^k w_i = 1$, and let $P_1, \ldots, P_k$ be $n$-dimensional product distributions over an alphabet $\Sigma$. The distribution $P$ specified by the tuple $(w_1, \ldots, w_k, P_1, \ldots, P_k)$ is a _mixture of products_ if for all $x \in \Sigma^n$ it is the case that $P(x) = \sum_{i=1}^k w_i P_i(x)$. For a distribution $P$, we denote by $P^{\le i}$ its marginal on the first $i$ variables. We may now state our first main result.

**Theorem 1.** _There is a deterministic algorithm $E$ such that, given two mixtures of product distributions $P$ and $Q$, specified by $(w_1, \ldots, w_k, P_1, \ldots, P_k)$ and $(v_1, \ldots, v_k, Q_1, \ldots, Q_k)$, respectively, decides whether $P = Q$ or not. Moreover, if $P \ne Q$, then $E$ outputs some $x \in \Sigma^i$ (with $i \le n$) such that $P^{\le i}(x) \ne Q^{\le i}(x)$. The running time of $E$ is $O(nk^4|\Sigma|^4)$._

(Note that the algorithm outlined in Theorem 1 has input size $\Omega(kn|\Sigma|)$.) The primary conceptual contribution of our work is a connection between equivalence checking for mixtures of distributions and basis construction over an appropriately chosen vector space. The connection lends itself to a construction that makes the algorithm as well as its proof accessible to undergraduates. The one-bit example above is verified numerically in the snippet below.
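To make the one-bit example concrete (our own illustration, not part of the paper), the snippet below evaluates both mixtures and confirms that they define the same distribution despite having different parameters:

```python
import numpy as np

def bern(p):
    """Distribution of one bit: index 0 -> Pr[X=0], index 1 -> Pr[X=1]."""
    return np.array([1 - p, p])

# P = 1 * Bern(1/2) + 0 * Bern(1/2);  Q = 1/2 * Bern(1/3) + 1/2 * Bern(2/3)
P = 1.0 * bern(0.5) + 0.0 * bern(0.5)
Q = 0.5 * bern(1 / 3) + 0.5 * bern(2 / 3)

print(P, Q, np.allclose(P, Q))  # [0.5 0.5] [0.5 0.5] True
```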
1.2 HARDNESS OF APPROXIMATING TOTAL VARIATION DISTANCE BETWEEN ISING MODELS

The Ising model (Ising, 1925; Lenz, 1920), originally developed to describe ferromagnetism in statistical mechanics, serves as a cornerstone in the study of phase transitions and critical phenomena. It consists of discrete variables, known as spins, which can take values of either $+1$ or $-1$. These spins are arranged on a lattice, and their interactions with nearest neighbors lead to a rich tapestry of behavior, including spontaneous magnetization and phase transitions at critical temperatures. One of the most fascinating aspects of the Ising model is its ability to illustrate complex systems using simple rules. For instance, in 2D, it exhibits a second-order phase transition at a critical temperature, where the system changes from a disordered state to an ordered state as temperature decreases. This model has been extensively studied, leading to profound insights not only in physics but also in fields such as biology, sociology, and computer science. For more information the reader is invited to check the survey written by Cipra (1987).

On another note, the computational study of the Ising model has become increasingly relevant. With its relatively simple structure of interacting binary spins on a lattice, the Ising model serves as an ideal platform for exploring computational techniques ranging from Monte Carlo simulations to mean-field approximations. Monte Carlo methods, in particular, are widely used to investigate thermodynamic properties of the Ising model, as they allow for efficient sampling of spin configurations at various temperatures, enabling the computation of quantities like magnetization and susceptibility. Some notable algorithmic results along these lines are the ones by Kasteleyn (1963) and Fisher (1966), who showed that the evaluation of the partition function for planar Ising models can be reduced to appropriate determinant computations. Moreover, Jerrum & Sinclair (1993) devised an efficient Monte Carlo approximation algorithm for estimating the partition function of arbitrary ferromagnetic (whereby all $w_{i,j}$'s are positive) Ising models. On the other hand, there are some works pertaining to the intractability of computing various quantities of interest regarding Ising models (Welsh, 1993), such as the partition function outlined above. For example, Jerrum & Sinclair (1993) show that unless NP = RP, there is no fully polynomial-time randomized approximation scheme (FPRAS) to estimate the partition function of arbitrary Ising models. Moreover, Istrail (2000) proves that computing the partition function (for various kinds of Ising models) is NP-complete. The second part of our work falls in this latter category.

Let us first fix some notation. We focus on Ising models $P$ such that for all $x \in \{-1, 1\}^n$ it is the case that the probability that the underlying system of spins assumes the configuration $x$ is
$$P(x) = \frac{1}{Z}\exp\Big(\sum_{i,j} w_{i,j} x_i x_j + \sum_i h_i x_i\Big) \propto \exp\Big(\sum_{i,j} w_{i,j} x_i x_j + \sum_i h_i x_i\Big),$$
whereby $Z := \sum_y \exp\big(\sum_{i,j} w_{i,j} y_i y_j + \sum_i h_i y_i\big)$ is the partition function of $P$, and $\{w_{i,j}\}_{i,j}$ and $\{h_i\}_i$ are the parameters of the system. We prove that it is hard to estimate the TV distance between Ising models under the very mild complexity-theoretic assumption NP $\not\subseteq$ RP, which states that Boolean formula satisfiability (SAT) does not admit any one-sided-error randomized polynomial-time algorithm, that is, a randomized polynomial-time algorithm that may output a false positive answer with small probability (Arora & Barak, 2009).

**Theorem 2.** _If NP $\not\subseteq$ RP, then there is no FPRAS that estimates the TV distance between any two Ising models._

Our proof draws on the hardness result of Jerrum & Sinclair (1993), and shows that the partition function of Ising models can be reduced to the TV distance between Ising models by a simple, efficient _approximation preserving_ reduction. The main ingredients of this reduction are as follows. First, we prove that estimating the partition function of any Ising model reduces to estimating any atomic marginal of the form $\mathbf{Pr}_P[x_k = \pm 1]$ for any variable $x_k$ and any Ising model $P$ (see Proposition 6).
Then we show that estimating any atomic marginal of the form $\mathbf{Pr}_P[x_k = \pm 1]$, for any variable $x_k$ and any Ising model $P$, can be reduced to estimating the TV distance between Ising models $P, Q$, whereby $Q$ depends on $P$ (see Proposition 7).

1.3 PAPER ORGANIZATION

We give some preliminaries in Section 2. We prove Theorem 1 in Section 3 and Theorem 2 in Section 4. We conclude in Section 5 with some interesting open problems. Observation 9 is proved in Appendix A and Proposition 6 is proved in Appendix B.

2 PRELIMINARIES

We require the following folklore result, which is an application of Gaussian elimination.

**Proposition 3.** _There is a deterministic algorithm $G$ that gets as input a set of vectors $V$, and outputs a maximum-size subset $S \subseteq V$ of linearly independent vectors. The running time of $G$ is $O(|V|^4)$._

An $n$-dimensional product distribution $R$ over an alphabet $\Sigma$ is described by the $n|\Sigma|$ parameters $(\mathbf{Pr}_R[X_i = y])_{i\in[n],\,y\in\Sigma}$, so that $R(x) = \prod_{i=1}^n \mathbf{Pr}_R[X_i = x_i]$ for all $x \in \Sigma^n$. For an $n$-dimensional product distribution $R$ over an alphabet $\Sigma$, we denote its marginal over the first $1 \le j \le n$ coordinates by $R^{\le j}$. Note that for any $x \in \Sigma^j$ we have $R^{\le j}(x) = \prod_{i=1}^j \mathbf{Pr}_R[X_i = x_i]$.

We shall also require the following notion of approximation algorithm.

**Definition 4.** A function $f : \{0,1\}^* \to \mathbb{R}$ admits a _fully polynomial-time randomized approximation scheme (FPRAS)_ if there is a _randomized_ algorithm $A$ such that for every input $x$ (of length $n$) and parameters $\varepsilon, \delta > 0$, the algorithm $A$ outputs an $\varepsilon$-multiplicative approximation of $f(x)$, i.e., a value that lies in the interval $[f(x)/(1+\varepsilon), (1+\varepsilon)f(x)]$, with probability at least $1 - \delta$. The running time of $A$ is polynomial in $n$, $1/\varepsilon$, $1/\delta$.

A minimal sketch of one way to realize the algorithm $G$ of Proposition 3 is given below.
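For concreteness, here is one way (our own sketch, not the paper's pseudocode) to realize the algorithm $G$ of Proposition 3: scan the vectors once and keep a vector exactly when it increases the rank of the set kept so far, a test that Gaussian elimination provides.

```python
import numpy as np

def independent_subset(vectors, tol=1e-9):
    """Greedily select a maximum-size linearly independent subset of `vectors`.

    A vector is kept iff it enlarges the rank of the vectors kept so far;
    the rank test is what Gaussian elimination (here via numpy's rank
    computation) provides.
    """
    kept = []
    for v in vectors:
        candidate = np.array(kept + [v])
        if np.linalg.matrix_rank(candidate, tol=tol) == len(candidate):
            kept.append(v)
    return kept

# Example: four vectors in R^3 with a dependent one among them.
V = [np.array([1.0, 0, 0]),
     np.array([0.0, 1, 0]),
     np.array([1.0, 1, 0]),   # dependent on the first two -> discarded
     np.array([0.0, 0, 1])]
print(len(independent_subset(V)))  # 3
```

Since the linearly independent subsets of a set of vectors form a matroid, this greedy scan indeed returns a maximum-size independent subset.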
3 EQUIVALENCE CHECKING FOR MIXTURES OF PRODUCT DISTRIBUTIONS

Let us now prove Theorem 1. First, observe that $P = Q$ if and only if $P^{\le j} = Q^{\le j}$ for all $1 \le j \le n$. This is so since, if $P = Q$, then every marginal of $P$ matches the respective marginal of $Q$ (in symbols, $P^{\le j} = Q^{\le j}$ for all $1 \le j \le n$); otherwise, there would be some $1 \le j \le n$ and $y \in \Sigma^j$ such that $P^{\le j}(y) \ne Q^{\le j}(y)$, and the latter would then establish the existence of an $x := (y, z) \in \Sigma^n$ (for some $z \in \Sigma^{n-j}$) such that $P(x) \ne Q(x)$. On the other hand, if $P^{\le j} = Q^{\le j}$ for all $1 \le j \le n$, then $P^{\le n} = Q^{\le n}$, which in particular implies that $P = Q$. Note that the condition $P^{\le j} = Q^{\le j}$ for all $1 \le j \le n$ is equivalent, by the definitions of $P$ and $Q$, to the condition $\sum_{i=1}^k w_i P_i^{\le j} = \sum_{i=1}^k v_i Q_i^{\le j}$ for all $1 \le j \le n$. Thus, if $P \ne Q$, then there is some $1 \le j \le n$ so that $\sum_{i=1}^k w_i P_i^{\le j} \ne \sum_{i=1}^k v_i Q_i^{\le j}$. We will use an inductive argument on $1 \le j \le n$ to show that these conditions can be efficiently checked (in either case).

**Base Case.** For $j = 1$, we can efficiently check whether it is the case that $\sum_{i=1}^k w_i P_i^{\le 1} = \sum_{i=1}^k v_i Q_i^{\le 1}$. This is done by checking, for all $x \in \Sigma$, that
$$\sum_{i=1}^k w_i \mathbf{Pr}_{P_i}[X_1 = x] - \sum_{i=1}^k v_i \mathbf{Pr}_{Q_i}[X_1 = x] = 0.$$
If these tests pass, then the algorithm proceeds with the inductive argument outlined below (otherwise, it outputs $x$). Towards this, we will now find a basis $B_1$ for the set of coefficient vectors of the equations
$$\Big\{ \sum_{i=1}^k w_i \mathbf{Pr}_{P_i}[X_1 = x]\, z_i - \sum_{i=1}^k v_i \mathbf{Pr}_{Q_i}[X_1 = x]\, z_{k+i} = 0 \Big\}_{x\in\Sigma}$$
over the variables $z_1, \ldots, z_{2k}$. Note that the size of $B_1$ is at most $\min(2k, |\Sigma|) \le 2k$. We can find $B_1$ as follows. We appeal to Proposition 3 and run the algorithm $G$ outlined there on the set of vectors
$$\big\{ \big( w_1 \mathbf{Pr}_{P_1}[X_1 = x], \ldots, w_k \mathbf{Pr}_{P_k}[X_1 = x],\, -v_1 \mathbf{Pr}_{Q_1}[X_1 = x], \ldots, -v_k \mathbf{Pr}_{Q_k}[X_1 = x] \big) \big\}_{x\in\Sigma}$$
(in time $O(|\Sigma|^4)$). Then, we define $B_1$ to be the set of independent vectors output by $G$.

**Induction Hypothesis.** Assume that for a $j \ge 1$ it is the case that $\sum_{i=1}^k w_i P_i^{\le j} = \sum_{i=1}^k v_i Q_i^{\le j}$, and that we have a basis $B_j$ for the set of coefficient vectors of the equations
$$\Big\{ \sum_{i=1}^k w_i P_i^{\le j}(x)\, z_i - \sum_{i=1}^k v_i Q_i^{\le j}(x)\, z_{k+i} = 0 \Big\}_{x\in\Sigma^j}$$
over the variables $z_1, \ldots, z_{2k}$. Note that $B_j$ is of size at most $\min(2k, |\Sigma|^j) \le 2k$.

**Induction Step.** We will establish that we can check whether $P$ and $Q$ agree up to coordinate $j+1$ and compute a basis $B_{j+1}$ for the respective set of coefficient vectors of the equations that capture this equivalence.
To see whether $P$ and $Q$ agree up to coordinate $j+1$, one needs to check that
$$\sum_{i=1}^k w_i P_i^{\le j}(x)\, \mathbf{Pr}_{P_i}[X_{j+1} = y] - \sum_{i=1}^k v_i Q_i^{\le j}(x)\, \mathbf{Pr}_{Q_i}[X_{j+1} = y] = 0$$
for all $x \in \Sigma^j$ and $y \in \Sigma$. A crucial observation (that follows from the inductive hypothesis) is that we only need to check whether these equations hold for the assignments $x$ that correspond to vectors in $B_j$, and the values $y \in \Sigma$. (Note that each basis vector $b \in B_j$ can be specified by an assignment $x_b \in \Sigma^j$. This follows from the way these basis vectors are constructed; see below, the discussion after Claim 5.) If any of these tests fails, then the algorithm outputs $(x, y)$; else, it continues as follows. To proceed with the induction, it would suffice to show how to construct a basis $B_{j+1}$ for the set of coefficient vectors of the following equations over the variables $z_1, \ldots, z_{2k}$, namely
$$\Big\{ \sum_{i=1}^k w_i P_i^{\le j}(x)\, \mathbf{Pr}_{P_i}[X_{j+1} = y]\, z_i - \sum_{i=1}^k v_i Q_i^{\le j}(x)\, \mathbf{Pr}_{Q_i}[X_{j+1} = y]\, z_{k+i} = 0 \Big\}_{x\in\Sigma^j,\, y\in\Sigma}.$$
Let $B_j = \{b_1 = (b_{1,1}, \ldots, b_{1,2k}), \ldots, b_m = (b_{m,1}, \ldots, b_{m,2k})\}$, whereby $m \le 2k$ and $C := \bigcup_{i=1}^m C_i$ is such that
$$C_1 := \big\{ \big( b_{1,1} \mathbf{Pr}_{P_1}[X_{j+1} = y], \ldots, b_{1,k} \mathbf{Pr}_{P_k}[X_{j+1} = y],\, b_{1,k+1} \mathbf{Pr}_{Q_1}[X_{j+1} = y], \ldots, b_{1,2k} \mathbf{Pr}_{Q_k}[X_{j+1} = y] \big) \big\}_{y\in\Sigma}, \; \ldots$$
Idea Generation Category:
3Other
xak8c9l1nu
# PERPLEXITY-TRAP: PLM-BASED RETRIEVERS OVERRATE LOW PERPLEXITY DOCUMENTS

**Haoyu Wang**^1∗, **Sunhao Dai**^1∗, **Haiyuan Zhao**^1, **Liang Pang**^2, **Xiao Zhang**^1, **Gang Wang**^3, **Zhenhua Dong**^3, **Jun Xu**^1†, **Ji-Rong Wen**^1
1 Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
2 CAS Key Laboratory of AI Safety, Institute of Computing Technology, Beijing, China
3 Huawei Noah's Ark Lab, Shenzhen, China
{wanghaoyu0924,sunhaodai,junxu}@ruc.edu.cn

ABSTRACT

Previous studies have found that PLM-based retrieval models exhibit a preference for LLM-generated content, assigning higher relevance scores to these documents even when their semantic quality is comparable to human-written ones. This phenomenon, known as source bias, threatens the sustainable development of the information access ecosystem. However, the underlying causes of source bias remain unexplored. In this paper, we explain the process of information retrieval with a causal graph and discover that PLM-based retrievers learn perplexity features for relevance estimation, causing source bias by ranking the documents with low perplexity higher. Theoretical analysis further reveals that the phenomenon stems from the positive correlation between the gradients of the loss functions in the language modeling task and the retrieval task. Based on the analysis, a causal-inspired inference-time debiasing method is proposed, called **C**ausal **D**iagnosis and **C**orrection (CDC). CDC first diagnoses the bias effect of the perplexity and then separates the bias effect from the overall estimated relevance score. Experimental results across three domains demonstrate the superior debiasing effectiveness of CDC, emphasizing the validity of our proposed explanatory framework [1].

1 INTRODUCTION

The rapid advancement of large language models (LLMs) has driven a significant increase in AI-generated content (AIGC), leading to information retrieval (IR) systems that now index both human-written and LLM-generated contents (Cao et al., 2023; Dai et al., 2024b; 2025). However, recent studies (Dai et al., 2024a;c; Xu et al., 2024) have uncovered that Pretrained Language Model (PLM) based retrievers (Guo et al., 2022; Zhao et al., 2024) exhibit preferences for LLM-generated documents, ranking them higher even when their semantic quality is comparable to human-written content. This phenomenon, referred to as **source bias**, is prevalent among various popular PLM-based retrievers across different domains (Dai et al., 2024a). If the problem is not resolved promptly, human authors' creative willingness will be severely reduced, and the existing content ecosystem may collapse. So it is urgent to comprehensively understand the mechanism behind source bias, especially when the amount of online AIGC is rapidly increasing (Burtch et al., 2024; Liu et al., 2024).

Existing studies identify perplexity (PPL) as a key indicator for distinguishing between LLM-generated and human-written contents (Mitchell et al., 2023; Bao et al., 2023). Dai et al. (2024c) find that although the semantics of the text remain unchanged, LLM-rewritten documents possess much lower perplexity than their human-written counterparts. However, it is still unclear _whether document perplexity has a causal impact on the relevance score estimation of PLM-based retrievers_ (which may lead to source bias), and _if so, why such causal impact exists_.
In this paper, we delve deeper into the cause of source bias by examining the role of perplexity in PLM-based retrievers. By manipulating the sampling temperature when generating with LLMs, we observe a negative correlation between estimated relevance scores and perplexity. Inspired by this, we construct a causal graph where document perplexity acts as a treatment and document semantics act as a confounder (Figure 2). We adopt a two-stage least squares (2SLS) regression procedure (Angrist and Pischke, 2009; Hartford et al., 2017) to eliminate the influence of confounders when estimating this biased effect; the experimental results indicate that the effect is significantly negative. Based on these findings, the cause of source bias can be elucidated as the unexpected causal effect of perplexity on estimated relevance scores. For semantically identical documents, the documents with low perplexity causally get higher estimated relevance scores from PLM-based retrievers. Since LLM-generated documents typically have lower perplexity than human-written ones, they receive higher estimated relevance scores and are ranked higher, leading to the presence of source bias.

∗ Equal contributions. † Corresponding author. [1] Codes are available at https://github.com/WhyDwelledOnAi/Perplexity-Trap

To further understand why estimated relevance scores of PLM-based retrievers are influenced by perplexity, we provide a theoretical analysis of the overlap between the masked language modeling (MLM) task and the mean-pooling retrieval task. Analysis in the linear decoder scenario shows that the retrieval objective's gradients are positively correlated with the language modeling gradients. This correlation causes the retrievers to consider not only the document semantics required for retrieval but also the bias introduced by perplexity. Meanwhile, this correlation further explains the trade-off between retrieval performance and source bias observed in a previous study (Dai et al., 2024a): the stronger the ranking performance of the PLM-based retriever, the greater the impact of perplexity.

Based on the analysis, we propose an inference-time debiasing method called **CDC** (**C**ausal **D**iagnosis and **C**orrection). With the proposed causal graph, we separate the causal effect of perplexity from the overall estimated relevance scores during inference, achieving calibrated unbiased relevance scores. Specifically, CDC first estimates the biased causal effect of perplexity on a small set of training samples, which is then applied to de-bias the test samples at the inference stage. This debiasing process is inference-time and can be seamlessly integrated into existing trained PLM-based retrievers. We demonstrate the debiasing effectiveness of CDC with experiments across six popular PLM-based retrievers. Experimental results show that the estimated causal effect of perplexity can be generalized to other data domains and LLMs, highlighting its practical potential in eliminating source bias. We summarize the major contributions of this paper as follows:

- We construct a causal graph and estimate the causal effect through experiments, demonstrating that PLM-based retrievers causally assign higher relevance scores to documents with lower perplexity, which is the cause of source bias.
- We provide a theoretical analysis explaining that the effect of perplexity in PLM-based retrievers is due to the positive correlation between the objective gradients of retrieval and language modeling.
- We propose CDC for PLM-based retrievers to counteract the biased effect of perplexity, with experiments demonstrating its effectiveness and generalizability in eliminating source bias.

2 RELATED WORK

With the rapid development of LLMs (Zhao et al., 2023), the internet has quickly integrated a huge amount of AIGC (Cao et al., 2023; Dai et al., 2024b; 2025). Potential bias may occur when these generated contents are judged by neural networks as competitors alongside human works. For example, Dai et al. (2024c) are the first to highlight a paradigm shift in information retrieval (IR): the content indexed by IR systems is transitioning from exclusively human-written corpora to a coexistence of human-written and LLM-generated corpora. They then uncover an important finding that mainstream neural retrievers based on pretrained language models (PLMs) prefer LLM-generated content, a phenomenon termed source bias (Dai et al., 2024a;c). Xu et al. (2024) further discover that this bias extends to text-image retrieval, and similarly, other works observe the existence of source bias in other IR scenarios, such as recommender systems (RS) (Zhou et al., 2024), retrieval-augmented generation (RAG) (Chen et al., 2024), and question answering (QA) (Tan et al., 2024). In the context of LLMs-as-judges, similar bias is discovered as self-enhancement bias (Zheng et al., 2024), likelihood bias (Ohi et al., 2024), and familiarity bias (Stureborg et al., 2024), where an LLM overrates AIGC when serving as a judge.

Existing works provide intuitive explanations suggesting that this kind of bias may stem from coupling between neural judges and LLMs (Dai et al., 2024c; Xu et al., 2024), such as similarities in model architectures and training objectives. However, the specific nature of this coupling, how it operates to cause source bias, and why it exists remain unclear. Ohi et al. (2024) find a correlation between perplexity and bias, while our work is the first to systematically analyze the effect of perplexity on neural models' preference. Given that both PLMs and LLMs are highly complex neural network models, investigating this question is particularly challenging.

[Figure 1: Perplexity and estimated relevance scores of ANCE on positive query-document pairs in three datasets (panels: (a) DL19, (b) TREC-COVID, (c) SCIDOCS), where documents are generated by LLM rewriting with different sampling temperatures. The Pearson coefficients highlight the significant negative correlation between the two variables.]

3 ELUCIDATING SOURCE BIAS WITH A CAUSAL GRAPH

This section first conducts intervention experiments to illustrate the motivation. Subsequently, we construct a causal graph to explain source bias and demonstrate the rationality of the causal graph.

3.1 MOTIVATION: INTERVENTION EXPERIMENTS ON TEMPERATURE

Previous studies have revealed a significant difference in the perplexity (PPL) distribution between LLM-generated content and human-written content (Mitchell et al., 2023; Bao et al., 2023), suggesting that PPL might be a key indicator for analyzing the cause of source bias (Dai et al., 2024c).
To verify whether perplexity causally affects estimated relevance scores, we use LLMs (in the following sections, the LLM we use is Llama2-7B-chat (Touvron et al., 2023) unless stated otherwise) to generate documents with almost identical semantics but varying perplexity, where semantics is expected to be the only associated variable in retrieval. Specifically, we manipulate the sampling temperatures during generation to obtain LLM-generated documents with different PPLs but similar semantic content. Following the method of Dai et al. (2024c), we use the following simple prompt: "_Please rewrite the following text: {human-written text}_". We also recruit human annotators to conduct evaluations to ensure the quality of the generated LLM content. The results, shown in Appendix E.2.1, indicate that there are fewer quality discrepancies between documents generated at different sampling temperatures compared to the original human-written documents. This ensures the reliability of the subsequent experiments.

We then explore the relationship between perplexity and estimated relevance scores on the corpora generated with different temperatures, where perplexity is calculated by BERT masked language modeling following previous work (Dai et al., 2024c); a minimal sketch of this computation is given below. Figure 1 presents the average perplexity and the relevance scores estimated by ANCE across three datasets from different domains. As expected, lower sampling temperatures result in less randomness in LLM-generated content and thus lower PPL. Meanwhile, we find that documents generated with lower temperatures are also more likely to be assigned higher estimated relevance scores. The Pearson coefficients for the three datasets are all below -0.8, emphasizing the strong negative linear correlation between document perplexity and relevance score. Similar results for other PLM-based retrievers are provided in Appendix E.2.2. Since document semantics remain unchanged during rewriting, the synchronous variation between document perplexity and estimated relevance scores reflects a causal effect. These findings offer an intuitive explanation for source bias: LLM-generated content typically has lower PPL, and since documents with lower perplexity are more likely to receive higher relevance scores, LLM-generated content is more likely to be ranked highly, leading to source bias.
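For concreteness, here is a minimal sketch of the masked-LM (pseudo-)perplexity referred to above (our own illustration; the exact formula used by Dai et al. (2024c) may differ): each token is masked in turn and the document is scored by the average negative log-likelihood BERT assigns to the true tokens.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mlm_perplexity(text: str) -> float:
    """Pseudo-perplexity: exp of the mean masked-token negative log-likelihood."""
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    nlls = []
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        logp = torch.log_softmax(logits, dim=-1)[ids[i]]
        nlls.append(-logp)
    return torch.exp(torch.stack(nlls).mean()).item()

print(mlm_perplexity("The cat sat on the mat."))
```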
3.2 CAUSAL GRAPH FOR SOURCE BIAS

Inspired by the findings above, we propose a causal graph to elucidate source bias (Fan et al., 2022), as illustrated in Figure 2. Let $\mathcal{Q}$ denote the query set and $\mathcal{C}$ denote the corpus. During the inference stage of a certain PLM-based retriever, given a query $q \in \mathcal{Q}$ and a document $d \in \mathcal{C}$, the estimated relevance score $\hat{R}_{q,d} \in \mathbb{R}$ is simultaneously determined by both the golden relevance score $R_{q,d} \in \mathbb{R}$ and the document perplexity $P_d \in \mathbb{R}_+$. Since the fundamental goal of IR is to calculate the similarity between document semantics $M_d$ and query semantics $M_q$ for document ranking, $R_{q,d} \to \hat{R}_{q,d}$ is considered an unbiased effect, while the influence of $P_d \to \hat{R}_{q,d}$ is considered a biased effect. Subsequently, we explain the rationale behind each edge in the causal graph as follows.

First, let the document source $S_d$ be a binary variable, where $S_d = 1$ denotes that the document is generated by an LLM and $S_d = 0$ denotes that the document is written by a human. As suggested in (Dai et al., 2024c), LLM-generated documents obtained through rewriting possess lower perplexity than their original documents, even though there is no significant difference in their semantic content. Thus, an edge $S_d \to P_d$ exists. This phenomenon can be attributed to two main reasons: (1) sampling strategies aimed at probability maximization, such as greedy algorithms, discard long-tailed documents during LLM inference (a more detailed analysis and verification can be found in (Dai et al., 2024c)); (2) approximation error during LLM training causes the tails of the document distribution to be lost (Shumailov et al., 2023).

Next, the document semantics $M_d$ reflect the topics of the document $d$, including domain, events, sentiment information, and so on. Since documents with different semantic meanings convey different amounts of information, their difficulties in masked token prediction vary. This means that different document semantics lead to different document perplexities. For example, colloquial conversations are more predictable than research papers due to their less specialized vocabulary. Thus, the content directly affects the perplexity, establishing the edge $M_d \to P_d$.

[Figure 2: The proposed causal graph for explaining source bias, with nodes $S_d$ (source), $M_d$ (document semantics), $M_q$ (query semantics), $P_d$ (perplexity), $R_{q,d}$ (golden relevance score), and $\hat{R}_{q,d}$ (estimated relevance score).]

Finally, as retrieval models are trained to estimate ground-truth relevance, their outputs are valid approximations of the golden relevance scores, making $M_d \to R_{q,d} \leftarrow M_q$ a natural unbiased effect. However, retrieval models may also learn non-causal features unrelated to semantic matching, especially high-dimensional features in deep learning. According to the findings in Section 3.1, document perplexity $P_d$ has emerged as a potential non-causal feature learned by PLM-based retrievers, where higher relevance estimations coincide with lower document perplexity. Moreover, since document perplexity is determined at the time of document generation, which temporally predates the existence of estimated relevance scores, document perplexity should be a cause rather than a consequence of changes in relevance. Hence, a biased effect $P_d \to \hat{R}_{q,d}$ exists.

3.3 EXPLAINING SOURCE BIAS VIA THE PROPOSED CAUSAL GRAPH

Based on the causal graph constructed above, source bias can be explained as follows: although the content generated by LLMs retains similar semantics to the human-written content, LLM-generated content typically exhibits lower perplexity. Coincidentally, retrievers learn and incorporate perplexity features into their relevance estimation processes, consequently assigning higher relevance scores to LLM-generated documents. This leads to the lower ranking of human-written documents. It is worth noting that source bias is an inherent issue in PLM-based retrievers. Before the advent of LLMs, these retrievers had already learned non-causal perplexity features from purely human-written corpora. However, because document ranking was predominantly conducted on human-written corpora, the relationship between PLM-based retrievers and perplexity was not evident. As powerful LLMs have become more accessible, the emergence of LLM-generated content has accentuated the perplexity effect. The content generated by LLMs exhibits a perceptibly different perplexity distribution compared to human-written content.
This disparity in perplexity distribution causes documents from different sources to receive significantly different relevance rankings.

Table 1: Quantified causal effects (and corresponding $p$-values) for document perplexity on estimated relevance scores via two-stage regression. Bold indicates that the estimate passes a significance test with $p$-value $< 0.05$. Significant negative causal effects are prevalent across various PLM-based retrievers in different domain datasets.

| Dataset | BERT | RoBERTa | ANCE | TAS-B | Contriever | coCondenser |
|---|---|---|---|---|---|---|
| DL19 | **-9.32 (1e-4)** | **-28.15 (2e-12)** | **-0.52 (9e-3)** | **-0.96 (1e-2)** | -0.02 (0.33) | **-0.69 (3e-2)** |
| TREC-COVID | **-1.69 (2e-2)** | 2.42 (8e-2) | 0.09 (0.21) | **-0.48 (6e-3)** | **-0.05 (7e-7)** | **-0.32 (8e-3)** |
| SCIDOCS | -2.44 (6e-2) | **-6.42 (2e-3)** | -0.23 (0.15) | -0.39 (0.10) | -0.02 (0.24) | -0.26 (0.41) |

4 E MPIRICAL AND T HEORETICAL A NALYSIS ON THE E FFECT OF P ERPLEXITY

In this section, we conduct empirical experiments and theoretical analysis to substantiate that PLM-based retrievers assign higher relevance scores to documents with lower perplexity.

4.1 E XPLORING THE B IASED E FFECT C AUSED BY P ERPLEXITY

4.1.1 E STIMATION M ETHODS

From the temperature intervention experiments in Section 3.1, we observe a clear negative correlation between document perplexity and estimated relevance scores. Although human evaluation allows us to largely confirm that the document semantics $M_d$ generated at different temperatures are almost the same, estimating the biased effect of $P_d \to \hat{R}_{q,d}$ directly is problematic due to inevitable minor variations in document semantics, which, though subtle, are significant in causal effect estimation. From the causal view, to robustly estimate the causal effect of $P_d \to \hat{R}_{q,d}$, the document semantics $M_d$, query semantics $M_q$, and golden relevance scores $R_{q,d}$ must be treated as confounders. Therefore, directly estimating this biased causal effect is not feasible without addressing these confounding factors. We use two-stage least squares (2SLS) based on instrumental variable (IV) methods (Angrist and Pischke, 2009; Hartford et al., 2017) to more accurately evaluate the causal effect of document perplexity on estimated relevance scores; more details about the method can be found in Appendix D.

According to the causal graph, the document source $S_d$ serves as an IV for estimating the effect of $P_d \to \hat{R}_{q,d}$. The IV is independent of the confounders: query semantics $M_q$, document semantics $M_d$, and golden relevance scores $R_{q,d}$. In the first stage of the regression, we use linear regression to predict document perplexity $P_d$ based on document source $S_d$:

$$P_d = \beta_1 S_d + \tilde{P}_d, \qquad (1)$$

where $\tilde{P}_d$ is independent of the document source $S_d$ and therefore depends solely on the document semantics $M_d$. As a result, we obtain the coefficient $\hat{\beta}_1$ and the predicted document perplexity $\hat{P}_d$. In the second stage, we substitute $P_d$ with $\hat{P}_d = \hat{\beta}_1 S_d$ to estimate the predicted relevance score $\hat{R}_{q,d}$ from a given PLM-based retriever:

$$\hat{R}_{q,d} = \beta_2 \hat{P}_d + \tilde{R}_{q,d}, \qquad (2)$$

where the residual term $\tilde{R}_{q,d}$ represents the part of the estimated relevance scores that cannot be explained by document perplexity. Since $\hat{P}_d$ is independent of the document semantics $M_d$, the estimated coefficient $\hat{\beta}_2$ can accurately reflect the causal effect of perplexity on estimated relevance scores.
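As an illustration of the two-stage procedure above, the following is a minimal NumPy sketch (not the paper's code) of the 2SLS estimate; the arrays `sources`, `ppl`, and `scores` are hypothetical placeholders for the document sources, BERT-computed perplexities, and retriever relevance scores.

```python
import numpy as np

# Hypothetical inputs: for each (query, document) pair, the document source
# (0 = human-written, 1 = LLM-generated), its perplexity, and the retriever's
# estimated relevance score.
sources = np.array([0, 0, 1, 1, 0, 1, 1, 0], dtype=float)
ppl     = np.array([62.1, 58.4, 41.3, 39.8, 60.2, 43.0, 40.5, 57.7])
scores  = np.array([0.71, 0.69, 0.74, 0.75, 0.70, 0.73, 0.76, 0.68])

def ols(x, y):
    """Ordinary least squares of y on x (with intercept); returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Stage 1 (Eq. 1): regress perplexity P_d on the instrument S_d.
a1, b1 = ols(sources, ppl)
ppl_hat = a1 + b1 * sources      # predicted perplexity, purged of semantic confounders

# Stage 2 (Eq. 2): regress relevance scores on the predicted perplexity.
a2, b2 = ols(ppl_hat, scores)
print(f"estimated causal effect of perplexity on relevance: {b2:.4f}")
```

With a binary instrument and no covariates, this reduces to the classic Wald estimator: the ratio of the score gap to the perplexity gap between the two sources.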
4.1.2 E XPERIMENTAL R ESULTS AND A NALYSIS

In this section, we apply the causal effect estimation method described previously to assess the impact of document perplexity $P_d$ on the estimated relevance score $\hat{R}_{q,d}$.

**Models.** To comprehensively evaluate this causal effect, we select several representative PLM-based retrieval models from the Cocktail benchmark (Dai et al., 2024a), including: (1) BERT (Devlin et al., 2019); (2) RoBERTa (Liu et al., 2019); (3) ANCE (Xiong et al., 2020); (4) TAS-B (Hofstätter et al., 2021); (5) Contriever (Izacard et al., 2022); (6) coCondenser (Gao and Callan, 2022). We employ the officially released checkpoints. For more details, please refer to Appendix E.1.

Idea Generation Category:
3Other
U1T6sq12uj
# P IVOT M ESH : G ENERIC 3D M ESH G ENERATION VIA P IVOT V ERTICES G UIDANCE

**Haohan Weng** [1,∗] **Yikai Wang** [2,†] **Tong Zhang** [1] **C. L. Philip Chen** [1] **Jun Zhu** [2,3]

1 South China University of Technology 2 Tsinghua University 3 ShengShu

A BSTRACT

Generating compact and sharply detailed 3D meshes poses a significant challenge for current 3D generative models. Different from extracting dense meshes from neural representations, some recent works try to model the native mesh distribution (i.e., a set of triangles), which generates more compact results akin to human-crafted meshes. However, due to the complexity and variety of mesh topology, most of these methods are typically limited to generating meshes with simple geometry. In this paper, we introduce a generic and scalable mesh generation framework, PivotMesh, which makes an initial attempt to extend native mesh generation to large-scale datasets. We employ a transformer-based autoencoder to encode meshes into discrete tokens and decode them from face level to vertex level hierarchically. Subsequently, to model the complex topology, our model first learns to generate pivot vertices as a coarse mesh representation and then generates the complete mesh tokens with the same auto-regressive Transformer. This reduces the difficulty compared with directly modeling the mesh distribution and further improves the model controllability. PivotMesh demonstrates its versatility by effectively learning from both small datasets like ShapeNet and large-scale datasets like Objaverse and Objaverse-xl. Extensive experiments indicate that PivotMesh can generate compact and sharp 3D meshes across various categories, highlighting its great potential for native mesh modeling. Project Page: [https://whaohan.github.io/pivotmesh](https://whaohan.github.io/pivotmesh)

1 I NTRODUCTION

The field of 3D generation has witnessed remarkable advancements in recent years (Poole et al., 2023; Hong et al., 2023; Xu et al., 2024a). Meshes, the predominant representation for 3D geometry, are widely adopted across various applications from video games and movies to architectural modeling. Despite the promising performance of current methods, they mostly rely on neural 3D representations like triplanes (Hong et al., 2023; Li et al., 2023) and FlexiCubes (Xu et al., 2024a). Post-processed meshes extracted from these representations tend to be dense and over-smoothed, which is unfriendly for modern rendering pipelines, as shown in Figure 1 (bottom). In contrast, meshes crafted by humans are typically more compact with fewer faces, reusing geometric primitives to efficiently represent real-world objects. To avoid extracting dense meshes through post-processing, another promising direction is emerging that focuses on explicitly modeling the mesh distribution (i.e., native mesh generation). This line of work (Nash et al., 2020; Siddiqui et al., 2023; Alliegro et al., 2023) generates meshes by predicting the 3D coordinates of faces, thus producing compact meshes as humans do. However, due to the complexity and variety of topological structures in meshes, most of these methods are typically confined to generating meshes with simple topology, hindering their generalizability across complex objects.
Therefore, _it still remains a challenge to establish a generic generative model for native mesh generation at scale._

In this paper, we propose PivotMesh, a generic and scalable framework to extend mesh generation to large-scale datasets across various categories. PivotMesh consists of two parts: a mesh autoencoder and a pivot-guided mesh generator. First, the autoencoder is based on the Transformer to encode meshes into discrete tokens.

∗ Work done during the internship at ShengShu. † Corresponding author.

Figure 1: Different from 3D generation methods based on neural representations like InstantMesh (Xu et al., 2024a), our method can generate compact and sharp meshes with much fewer faces when producing similar shapes. (Panels: meshes generated by PivotMesh; InstantMesh; InstantMesh (decimated); PivotMesh.)

We also adopt a two-stage decoding strategy to decode mesh tokens from face level to vertex level hierarchically, which further improves the reconstruction performance and mesh surface continuity. Second, we employ an auto-regressive Transformer to learn the joint distribution of pivot vertices and complete mesh tokens, where the pivot vertices serve as the coarse representation to guide the subsequent mesh generation. Specifically, pivot vertices are selected based on vertex degree and dropped randomly to prevent overfitting. As shown in Figure 1 (top), once the model is trained, it can produce meshes from scratch, starting with the generation of pivot vertices followed by the complete mesh token sequence. Furthermore, it can perform conditional generation given the pivot vertices from a reference mesh and supports downstream applications.

PivotMesh is designed to be scalable and extensible. We initially evaluate its effectiveness on the small ShapeNet dataset (Chang et al., 2015), following previous settings (Siddiqui et al., 2023). Next, we carefully curate and train our model on the existing largest 3D datasets, Objaverse (Deitke et al., 2023) and Objaverse-xl (Deitke et al., 2024). By leveraging large datasets, our model can generate generic meshes across various categories to accelerate the mesh creation process. Both the qualitative and quantitative experiments show that the proposed PivotMesh beats previous mesh generation methods like PolyGen (Nash et al., 2020) and MeshGPT (Siddiqui et al., 2023) by a large margin.

The contributions of this paper can be summarized as follows:

- We propose a generic and scalable mesh generation framework, PivotMesh, which makes an initial attempt to extend native mesh generation to large-scale datasets.
- We present a Transformer-based autoencoder to preserve the geometry details and surface continuity in meshes by efficiently decoding from face level to vertex level hierarchically.
- We introduce pivot vertices guidance for complex mesh geometry modeling, which serves as the coarse representation to guide the complete mesh generation in a coarse-to-fine manner.
- PivotMesh achieves promising performance in various applications like mesh generation, variation, and refinement, accelerating the mesh creation process.
2 R ELATED W ORKS

Table 1: **Difference between MeshGPT and two concurrent works with PivotMesh.** Our main contributions fall into two aspects. First, our autoencoder yields great reconstruction performance with a shorter sequence length. Second, our model can produce more complex topology with the pivot guidance. $n$ is the number of faces and $v$ is the number of vertices.

| Difference | MeshGPT | MeshXL | MeshAnything | PivotMesh |
|---|---|---|---|---|
| AE Architecture | GNN-CNN | N/A | Transformer | Transformer |
| AE Decoding | Face-level | N/A | Face-level | Face-level & Vertex-level |
| Sequence Type | Latent | Coordinates | Latent | Pivot-guided Latent |
| Generation Formulation | Direct | Direct | Direct | Coarse-to-fine |
| Sequence Length | $6n$ | $9n$ | $9n$ | $0.1v + 6n$ |
| Compression Ratio ($\downarrow$) | 66.7% | 100% | 100% | 68.9% |

**Neural 3D Shape Generation.** Most previous attempts learn 3D shapes with various representations, e.g., SDF grids (Cheng et al., 2023; Chou et al., 2023; Shim et al., 2023; Zheng et al., 2023) and neural fields (Gupta et al., 2023; Jun & Nichol, 2023; Müller et al., 2023; Wang et al., 2023a; Zhang et al., 2023; Liu et al., 2023b; Lyu et al., 2023). To improve the generalization ability, researchers have started to leverage pretrained 2D diffusion models (Rombach et al., 2022; Saharia et al., 2022; Liu et al., 2023a) with score distillation loss (Poole et al., 2023; Lin et al., 2023; Wang et al., 2023b) in a per-shape optimization manner. Multi-view diffusion models (Shi et al., 2023b; Weng et al., 2023; Zheng & Vedaldi, 2023; Shi et al., 2023a; Chen et al., 2024d; Voleti et al., 2024) are used to further enhance the quality and alleviate the Janus problem. Recently, Large Reconstruction Models (LRM) (Hong et al., 2023; Li et al., 2023; Xu et al., 2023; Wang et al., 2024; Xu et al., 2024b; Tang et al., 2024a; Xu et al., 2024a) train a Transformer backbone on a large-scale dataset (Deitke et al., 2023) to effectively generate generic neural 3D representations, showing strong scaling performance. However, these neural 3D shape generation methods require post conversion (Lorensen & Cline, 1998; Shen et al., 2021) for downstream applications, which is non-trivial and prone to producing dense and over-smoothed meshes.

**Native Mesh Generation.** Compared with the well-developed generative models of neural shape representations, mesh generation remains under-explored. Some pioneering works try to tackle this problem by formulating the mesh representation as surface patches (Groueix et al., 2018), deformed ellipsoids (Wang et al., 2018), mesh graphs (Dai & Nießner, 2019), and binary space partitioning (Chen et al., 2020). PolyGen (Nash et al., 2020) uses two separate auto-regressive Transformers to learn the vertex and face distributions respectively. Polydiff (Alliegro et al., 2023) learns the triangle soups of meshes with a diffusion model. MeshGPT (Siddiqui et al., 2023) is most relevant to our work; it first tokenizes the mesh representation with a GNN-based encoder and learns the mesh tokens with a GPT-style Transformer. Despite its promising results on small datasets, it is non-trivial to extend MeshGPT to large-scale datasets. Our research, along with several concurrent works (Chen et al., 2024a;b;c; Tang et al., 2024b) as shown in Table 1, is designed to build a generic generative model for native mesh generation within large-scale datasets.

3 M ETHOD

In this section, we will introduce the details of the proposed PivotMesh, as shown in Figure 2.
The challenges of scaling up native mesh generation are analyzed in Section 3.1. Meshes formulated as triangle face sequences are first encoded into discrete tokens by the proposed mesh autoencoder (Section 3.2). Then, we use an auto-regressive Transformer to learn the joint distribution of pivot vertices and mesh tokens (Section 3.3).

3.1 C HALLENGES FOR N ATIVE M ESH G ENERATION ON L ARGE D ATASETS

There are two main challenges for scaling up native mesh generation to large datasets.

**Mesh Reconstruction.** It is challenging to tokenize meshes due to the high reconstruction accuracy required to preserve mesh surface continuity. Previous works like MeshGPT (Siddiqui et al., 2023) formulate meshes as face graphs for reconstruction, which only focuses on face-level relationships and neglects the connection and interaction among vertices. Furthermore, the limited network capability of the autoencoder (i.e., GNN and CNN) also hinders its scalability on large-scale datasets. To this end, we propose a Transformer-based autoencoder to preserve the geometry details and surface continuity by decoding from face level to vertex level hierarchically.

Figure 2: **The overall method of PivotMesh.** (a) Hierarchical mesh autoencoder: triangle mesh sequences are tokenized into mesh tokens and hierarchically decoded from face level to vertex level. (b) Pivot-guided mesh Transformer: the auto-regressive Transformer first learns to generate pivot vertices as a coarse mesh representation and then generates the complete mesh tokens in a coarse-to-fine manner.

**Complex Topology Modeling.** Due to the complexity and variety of mesh topology, directly modeling the mesh sequence on large-scale datasets makes it easy to produce trivial meshes with simple geometry (e.g., cubes). To model complex mesh topology, a natural solution is to first generate a coarse representation and then the full mesh sequence. For this purpose, we define pivot vertices (the sequence of high-degree vertices) as the coarse representation of meshes. With the guidance of pivot vertices, our model is capable of generating complex mesh geometry in a coarse-to-fine manner.

3.2 E NCODE M ESHES INTO D ISCRETE T OKENS

A triangle mesh $\mathcal{M}$ with $n$ faces can be formulated as the following sequence:

$$\mathcal{M} := (f_1, f_2, \ldots, f_n) = (v_{11}, v_{12}, v_{13}, v_{21}, v_{22}, v_{23}, \ldots, v_{n1}, v_{n2}, v_{n3}), \qquad (1)$$

where each face $f_i$ consists of 3 vertices and each vertex $v_i$ contains 3D coordinates discretized with a 7-bit uniform quantization. To effectively learn the mesh distribution, we first tokenize the sequence into discrete tokens with the proposed Transformer-based autoencoder.

**Attention-based Tokenizer.** Different from MeshGPT, which is equipped with a GNN-CNN-based autoencoder, we employ a Transformer-based architecture as the backbone for the encoder, capturing the long-range relationships between faces. Furthermore, we replace the vanilla positional encoding in the Transformer with a single-layer GNN to capture the local topology of meshes. This preserves the permutation invariance of faces with higher scalability, yielding a more effective and robust token representation for meshes.

**Hierarchical Decoding.** To further improve the reconstruction performance and mesh surface continuity, we design a hierarchical decoder from face level to vertex level. The face embedding $F_i'$ from the vector quantization module is first passed to a face-level decoder.
Then, the decoded face embedding is converted to a vertex embedding $V_i'$ by a simple MLP. The vertex embedding is then decoded by a vertex-level decoder, whose architecture is similar to the face decoder except that its input sequence is 3 times longer:

$$F_i' = \mathrm{FaceDec}(F_i'), \qquad V_i' = \mathrm{VertexDec}(\mathrm{MLP}_{n \to 3n}(F_i')), \qquad (2)$$

The final decoded vertex embedding $V_i$ is then converted to the quantized 3D coordinate logits $\in (1, 2, \ldots, 2^7)$ for each axis ($x$, $y$, and $z$), and the cross entropy is computed with the input mesh sequence. Such a hierarchical architecture allows connection and interaction at both the face and vertex levels, thus improving the reconstruction accuracy and surface continuity.

Figure 3: **Illustration for pivot guided mesh sequence.** It is hard to directly generate a vertex sequence with geometry details (MeshGPT formulation) on large-scale datasets. In our paper, we found that some vertices (with high degrees) repeatedly occur in the vertex sequence, so we define them as pivot vertices for a coarse mesh representation. By first generating pivot vertices and then the full meshes, our model is capable of producing more complex geometry with higher quality. (Panels: face sequence; vertex sequence (MeshGPT); pivot-guided sequence (PivotMesh), coarse representation followed by full representation.)

3.3 G UIDE M ESH G ENERATION WITH P IVOT V ERTICES

To model complex mesh topology, it is natural to first generate a coarse representation and then the full mesh sequences. However, it is non-trivial to find such a coarse representation that preserves topology information with a short length. As shown in Figure 3, we found that some vertices repeatedly occur in the mesh sequence (since such vertices connect multiple faces); therefore, they are highly informative. Furthermore, these vertices have high degrees and thus preserve more geometry details. To this end, we define these vertices as pivot vertices, and propose to first generate them as the coarse representation for the full mesh sequence.

**Degree-based Pivot Vertices Selection.** First, we need to select the most frequently occurring vertices as the pivot vertices. Specifically, a mesh can be regarded as a graph, where each vertex $v_i$ represents a node and the connections between vertices represent the edges. Then, we compute the vertex degree $deg(v_i)$ in the mesh graph and select the pivot vertices set $P$ with the top-degree vertices. The size of the pivot vertices set is proportional to the number of vertices with a fixed ratio $\eta_{select}$. Furthermore, to prevent overfitting in pivot-to-mesh modeling, we randomly drop some pivot vertices with the ratio $\eta_{drop}$ of all vertices for each training iteration. In our experiments, the select ratio $\eta_{select} = 15\%$ and the dropping ratio $\eta_{drop} = 5\%$, yielding the final pivot vertex ratio $\eta = 10\%$. The benefits of our pivot selecting strategy fall into two aspects. First, it leverages frequently occurring vertices (with higher degree), enabling the Transformer to utilize these as conditional tokens for mesh sequence generation efficiently. Second, it tends to preserve intricate mesh details, as regions with finer geometry typically necessitate more faces and thus larger vertex degrees.
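To make the selection rule concrete, here is a minimal sketch (not the authors' implementation) of degree-based pivot selection with random dropping; the function name and the `(n, 3)` integer face-array input are assumptions, and face-corner counts are used as a simple proxy for graph vertex degree.

```python
import numpy as np

def select_pivot_vertices(faces, num_vertices, eta_select=0.15, eta_drop=0.05, rng=None):
    """Pick top-degree vertices as pivots, then randomly drop a fraction of them.

    faces: (n, 3) int array of vertex indices; num_vertices: total vertex count.
    Returns the sorted indices of the retained pivot vertices.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Proxy for vertex degree: count how many face corners reference each vertex.
    degrees = np.bincount(faces.ravel(), minlength=num_vertices)
    # Keep the top eta_select fraction of all vertices by degree.
    k_select = max(1, int(eta_select * num_vertices))
    pivots = np.argsort(-degrees)[:k_select]
    # Randomly drop eta_drop (measured over all vertices) from the pivot set
    # at each training iteration to prevent overfitting.
    k_drop = min(len(pivots) - 1, int(eta_drop * num_vertices))
    keep = rng.choice(len(pivots), size=len(pivots) - k_drop, replace=False)
    return np.sort(pivots[keep])
```

With $\eta_{select} = 15\%$ and $\eta_{drop} = 5\%$, this yields the paper's effective pivot ratio of about 10% of all vertices per iteration.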
**Coarse-to-fine Mesh Modeling.** As shown in Figure 3, we employ a standard auto-regressive Transformer decoder to learn the joint distribution of the pivot vertex tokens $p_i \in P$ and the complete mesh tokens $t_i \in T$. Learnable start and end tokens are used to identify the beginning and end of the token sequence, while a pad token is used to separate the pivot vertex tokens and the mesh tokens. The order of both the pivot vertex tokens and the full mesh tokens is sorted by $z$-$y$-$x$ coordinates from lowest to highest. Different from the Transformer in Section 3.2, we add absolute positional encoding here to indicate the position in the token sequence. The token sequences are modeled with a Transformer with parameters $\theta$ by maximizing the log probability:

$$\sum_{i=1}^{|T|} \log p(t_i \mid t_{1:i-1}, P; \theta) + \sum_{j=1}^{|P|} \log p(p_j \mid p_{1:j-1}; \theta), \qquad (3)$$

Idea Generation Category:
2Direct Enhancement
WAC8LmlKYf
# PETRA: P ARALLEL E ND - TO -E ND T RAINING OF R EVERSIBLE A RCHITECTURES

**Stéphane Rivaud** [1,5] **Louis Fournier** [1] **Thomas Pumir** [6] **Eugene Belilovsky** [3,4] **Mickael Eickenberg** [2] **Edouard Oyallon** [2,∗]

1 ISIR – Sorbonne Université, Paris 2 Flatiron Institute, New York 3 Mila, Montréal 4 Concordia Université, Montréal 5 LISN – Université Paris-Saclay, CNRS, Inria, Orsay 6 Helm.ai, San Francisco

stephane.a.rivaud@inria.fr

A BSTRACT

Reversible architectures have been shown to perform on par with their non-reversible counterparts, and have been applied in deep learning for memory savings and generative modeling. In this work, we show how reversible architectures can solve challenges in parallelizing deep model training. We introduce PETRA, a novel alternative to backpropagation for parallelizing gradient computations. PETRA facilitates effective model parallelism by enabling stages (i.e., a set of layers) to compute independently on different devices, while only needing to communicate activations and gradients between each other. By decoupling the forward and backward passes and keeping a single updated version of the parameters, the need for weight stashing is also removed. We develop a custom autograd-like training framework for PETRA, and we demonstrate its effectiveness on CIFAR-10, ImageNet32, and ImageNet, achieving competitive accuracies comparable to backpropagation using ResNet-18, ResNet-34, and ResNet-50 models.

1 I NTRODUCTION

First-order methods using stochastic gradients computed via backpropagation on mini-batches are the de-facto standard for computing parameter updates in Deep Neural Networks (LeCun et al., 2015). As datasets and models continue to grow (Alabdulmohsin et al., 2022), there is an urgent need for memory-efficient and scalable parallelization of deep learning training across multiple workers. Data parallelism via mini-batches (LeCun et al., 2015) has been widely adopted in deep learning frameworks (Li et al., 2020). This approach computes gradients across model replicas distributed among workers, yet it requires frequent synchronization to aggregate gradients, leading to high communication costs, as well as substantial memory redundancy. Furthermore, with model sizes growing faster than on-device memory, the forward and backward passes now often exceed a single device's memory capacity (Ren et al., 2021). To further address these issues, methods have attempted to mitigate this memory overhead and to parallelize the sequential backpropagation steps themselves across devices, while computing exact gradients. Techniques like optimizer sharding (Rajbhandari et al., 2020), tensor parallelism (Shoeybi et al., 2019), activation checkpointing (Chen et al., 2016), or pipelining (Huang et al., 2019) have been deployed individually or combined, leading for instance to the development of 3D parallelism (Smith et al., 2022), a popular methodology which improves the efficiency of the backpropagation implementation. On the other hand, the fundamental inefficiency underlying the parallelization of backpropagation has not been addressed by these methods. Indeed, the use of exact gradients restricts algorithmic choices and parallel implementations, as highlighted by Jaderberg et al. (2017).
For instance, backpropagation is _backward locked_: the inputs of each layer must be propagated through the network and preserved until an error signal is backpropagated to the layer of origin.

∗ During a one-year leave; now back at CNRS, Sorbonne University.

Figure 1: **Comparison of PETRA with standard backpropagation.** This approach splits the stages of a model and decouples their forward and backward passes, resulting in a sixfold increase in parallelization speed in this example. (Panels: (a) backpropagation and (b) PETRA, each showing stages 0-2 and the propagation of one batch over time.)

This requirement enforces a synchronous dependency among subsequent layers and requires them to systematically store intermediary activations, potentially impeding overall resource efficiency, as workers must wait for each other to continue their computations and release the memory used for activations. To unlock the potential of backpropagation, inexact backpropagation procedures have been proposed. These procedures are generally conceptualized within the context of model parallelism, where a neural network is split into stages that can process their activations in parallel, potentially on multiple devices. For example, some methods use outdated parameters or activations, such as double-buffered pipelining (Harlap et al., 2018) or delayed gradient approaches (Zhuang et al., 2021b). However, these methods introduce significant memory overhead due to the use of ad hoc buffers for activations, parameters, or both. Following an opposite direction, local learning methods (Nøkland & Eidnes, 2019; Belilovsky et al., 2020), which estimate inexact gradients via a local auxiliary neural network, pave the way to parallel gradient computations but often lead to unrecoverable performance drops (Fournier et al., 2023). This underscores the need for a robust alternative to backpropagation with limited memory overhead.

In this work, we introduce PETRA (Parallel End-to-End Training with Reversible Architectures), a novel method designed to parallelize gradient computations within reversible architectures with minimal computational overhead. Reversible architectures are an ideal candidate for this task, as they can significantly reduce memory overhead during standard backpropagation with limited communication costs. Furthermore, reversibility is a minor requirement, as many studies have demonstrated that standard architectures can be adapted into reversible ones without any performance drops (Gomez et al., 2017; Jacobsen et al., 2018b; Mangalam et al., 2022; Kitaev et al., 2020). By allowing parameters to evolve in parallel and by computing an approximate inversion during the backward pass, we propose an effective alternative to backpropagation which allows high model parallelism with a constant communication overhead and **no additional parameter or activation buffers**. In fact, for a constant increase in communication overhead, PETRA achieves a linear speedup compared to standard backpropagation with respect to the number $J$ of stages the network is split into. We illustrate our approach in Fig. 1 by contrasting the evolution of PETRA with a standard backpropagation pass.

**Contributions.** Our contributions are as follows: **(1)** We introduce PETRA, a streamlined approach for parallelizing the training of reversible architectures. This method leverages a delayed, approximate inversion of activations during the backward pass, allowing for enhanced computational efficiency.
**(2)** Our technique significantly reduces memory overhead by minimizing the necessity to store extensive computational graphs. **(3)** It enables the parallelization of forward and backward pass computations across multiple devices, effectively distributing the workload and reducing training time. **(4)** We validate the efficacy of PETRA through rigorous testing on benchmark datasets such as CIFAR-10, ImageNet-32, and ImageNet, where it demonstrates robust performance with minimal impact on accuracy. **(5)** We observe a significant empirical throughput increase when using PETRA. **(6)** Additionally, we provide a flexible reimplementation of the autograd system in PyTorch, specifically tailored for our experimental setup, which is available at [https://github.com/stephane-rivaud/PETRA.](https://github.com/stephane-rivaud/PETRA)

2 R ELATED WORK

**Reversible architectures.** Reversible DNNs are composed of layers that are invertible, meaning that the input of a layer can be computed from its output. This approach avoids the need to store intermediary activations during the forward pass by reconstructing them progressively during the backward pass (Gomez et al., 2017), at the cost of an extra computation per layer. Invertible networks further improve this method by removing dimensionality reduction steps such as downsamplings, making the networks fully invertible (Jacobsen et al., 2018a). Reversibility is not restricted to one type of architecture or task and has been extensively used for generative models (Dinh et al., 2014), ResNets (Gomez et al., 2017), and Transformers (Mangalam et al., 2022). However, as far as we know, reversible architectures have never been used to enhance parallelization capabilities.

**Alternatives to backpropagation.** Multiple alternatives to backpropagation have been proposed previously to improve over its computational efficiency. For instance, DNI (Jaderberg et al., 2017) is the first to mention the backpropagation inefficiency and its inherent synchronization locks. However, they address those locks with a method that is non-competitive with simple baselines. Local (or greedy) learning (Nøkland & Eidnes, 2019; Belilovsky et al., 2019) proposes to use layerwise losses to decouple the training of layers, allowing them to train in parallel (Belilovsky et al., 2021). Local learning in videos (Malinowski et al., 2021) notably uses the similarity between successive temporal features to remove buffer memory. However, the difference in training dynamics between local training and backpropagation still limits such approaches (Fournier et al., 2023; Wang et al., 2021).

**Pipeline parallelism.** Pipelining encompasses a range of model parallel techniques that divide the components of a network into stages that compute in parallel, while avoiding idle workers. Initially popularized by Huang et al. (2019), a batch of data is divided into micro-batches that are processed independently at each stage. Although more efficient pipelining schedules have been proposed (Fan et al., 2021), notably to mitigate the peak memory overhead, keeping an exact batch gradient computation requires leaving a bubble of idle workers. By alternating one forward and one backward pass for each worker, PipeDream (Narayanan et al., 2019) can eliminate idleness bubbles, but at the expense of introducing staleness in the gradients used. Narayanan et al.
(2021) mitigates this staleness to only one optimization step by accumulating gradients, thus also reducing the parameter memory overhead to only two versions of the parameters. Nevertheless, these approaches still suffer from a quadratic activation memory overhead with regard to the number of stages, as micro-batch activations pile up in buffers, especially for early layers. Some implementations propose to limit this overhead by combining activation checkpointing (Chen et al., 2016) with pipelining (Kim et al., 2020; Liu et al., 2023), although the memory overhead still scales with the number of stages.

**Delayed gradient.** By allowing stale gradients in the update process, these previous methods provide the context for our approach. Delayed gradient optimization methods are model parallel techniques that aim to decouple and process layers in parallel during backpropagation. In these approaches, delays occur stage-wise: the backward pass may be computed with outdated parameters or activations compared to the forward pass. For instance, Huo et al. (2018a) proposes a feature replay approach, where a forward pass first stores intermediary activations, which are then "replayed" to compute the backward pass in parallel. This method still requires heavy synchronization between layers, yielding a lock on computations. In Zhuang et al. (2020) and Zhuang et al. (2021a), stale gradients are computed from older parameter versions differing from the parameters used during the update. This staleness can be mitigated: Zhuang et al. (2021a) "shrinks" the gradient by the delay value, but more advanced techniques also exist (Yang et al., 2021; Kosson et al., 2021). Still, these methods are limited, like the previous pipelining methods, by their memory overhead, as the computational graph is fully stored. A first step to reduce this, as proposed in Diversely Stale Parameters (DSP) (Xu et al., 2019), PipeMare (Yang et al., 2021), and Kosson et al. (2021), is to keep a single set of parameters and approximate the gradients computed during the backward pass with the updated parameters, which differ from the ones used in the forward pass. This requires, like in activation checkpointing, an additional reconstruction of the computational graph. Furthermore, the quadratic activation memory overhead still limits the scalability of these methods for a large number of stages.

Figure 2: **Differences between the residual block of a ResNet and its reversible counterpart.** **(a)** Forward of a residual block. **(b)** Forward and **(c)** reverse forward of a reversible residual block. For reversible blocks, as in Gomez et al. (2017), the input $x_j$ is doubled in size and split equally into $\{x_j^1, x_j^2\}$ along its channels. The function $F_j$ includes a skip-connection while $\tilde{F}_j$ does not.

3 M ETHOD

3.1 S TANDARD BACKPROPAGATION

We consider a DNN composed of $J$ stages (e.g., a layer or a set of layers). An input $x_0$ is propagated through the network, recursively defined by

$$x_j \triangleq F_j(x_{j-1}, \theta_j), \qquad (1)$$

where $F_j$ is the $j$-th stage parameterized by $\theta_j$. The backpropagation algorithm is the ubiquitous algorithm to compute parameter gradients. First, an input is propagated through the network with a forward pass, while storing its intermediate activations. A scalar loss $L$ is then deduced from the corresponding output $x_J$.
Parameter gradients are then computed during the backward pass by taking advantage of the chain rule: starting from the last stage with $\delta_J = \nabla_{x_J} L$, the gradients with regard to the activations are given by

$$\delta_j \triangleq \nabla_{x_{j-1}} L = \partial_x F_j(x_{j-1}, \theta_j)^{\top} \delta_{j+1}, \qquad (2)$$

and the gradients with regard to the parameters are defined as

$$\Delta_j \triangleq \nabla_{\theta_j} L = \partial_\theta F_j(x_{j-1}, \theta_j)^{\top} \delta_{j+1}. \qquad (3)$$

Note that these computations follow a synchronous and sequential order. The parameters $\theta_j$ can then be updated given their gradient estimate $\Delta_j$, using any optimizer.

3.2 R EVERSIBLE ARCHITECTURES

We focus on the reversible neural networks presented in Gomez et al. (2017), although our method is not dependent on this architecture. Note that this is a weak restriction, as many architectures are adaptable to reversible ones (Mangalam et al., 2022). In practice, only a few stages which do not preserve feature dimensionality are not reversible; they correspond to the downsampling blocks in the ResNet. Fig. 2 highlights how reversible residual blocks $F_j$ differ from their standard counterpart. The input is split into two equal-size inputs along the channel dimension, which are propagated forward according to Fig. 2b using an ad-hoc operator $\tilde{F}_j$. It can be reconstructed by reverse propagating the output according to Fig. 2c, by subtracting the output of $\tilde{F}_j$ rather than adding it as in the previous forward.

Table 1: **Comparisons with other methods in an ideal setting for one stage.** We compare several methods to compute a gradient estimate in a model parallel setting: classical backpropagation, its reversible counterpart (Gomez et al., 2017), the delayed gradients approach of Zhuang et al. (2020) and its improvement using checkpointing by Xu et al. (2019), and our proposed approach. Here, $J$ is the total number of stages while $j$ is the stage index. For the sake of simplicity, we assume that a backward pass requires approximately 2 times more FLOPs than a forward pass. _Full Graph_ (FG) indicates that it is required to store the full computational graph of a local forward pass. With a limited increase in communication volume and FLOPs, PETRA requires the least storage of all methods while being _linearly_ faster than backpropagation. We assume that the forward and backward passes can be executed in parallel for PETRA or delayed gradients, making the backward pass responsible for most of the computation time in parallelizable approaches.

| Methods | Storage: Activations | Storage: Params. | Comm. Volume | FLOPs | Mean time per batch |
|---|---|---|---|---|---|
| Backpropagation | Full Graph (FG) | 1 | 1 | $3J$ | $3J$ |
| Reversible backprop. | 0 | 1 | 4 | $4J$ | $4J$ |
| Delayed gradients | $2(J-j) \times$ FG | $2(J-j)$ | 1 | $3J$ | 2 |
| + Checkpointing | $2(J-j)$ | 1 | 1 | $4J$ | 3 |
| PETRA (ours) | 0 | 1 | 4 | $4J$ | 3 |

**Reversible stages.** In order to compute the exact gradients during the backpropagation phase, each reversible stage needs to retrieve its output from the stage above. We note $F_j^{-1}$ the reverse stage function, which reconstructs the input from the output; a minimal sketch of a reversible block and its inverse is given below.
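The following PyTorch sketch shows an additive coupling block and its exact inverse in the style of Gomez et al. (2017); the sub-networks `f` and `g` are hypothetical placeholders, this is not the authors' implementation, and it omits the specific $\tilde{F}_j$/skip-connection layout of Fig. 2.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Additive coupling block: the input is split into halves (x1, x2)
    along channels; `f` and `g` are arbitrary sub-networks."""

    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Reconstruct the input from the output by *subtracting* the
        # sub-network outputs, mirroring the reverse forward of Fig. 2c.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# Usage: the inverse recovers the input up to floating-point error,
# so activations need not be stored during the forward pass.
blk = ReversibleBlock(nn.Linear(8, 8), nn.Linear(8, 8))
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = blk(x1, x2)
r1, r2 = blk.inverse(y1, y2)
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)
```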
We recursively apply the reconstruction to the final activation $x_J$, such that

$$x_{j-1} = F_j^{-1}(x_j, \theta_j), \qquad \delta_j = \partial_x F_j\big(F_j^{-1}(x_j, \theta_j), \theta_j\big)^{\top} \delta_{j+1}. \qquad (4)$$

Note that reconstructing the input in our procedure is computationally equivalent to recomputing the activations in activation checkpointing, meaning it is equivalent to a single forward pass. Thus, this augmented backward procedure is equivalent to one regular forward call and one backward call. However, one should observe that since the input $x_{j-1}$ must be sent to the reversible stages, this doubles the cost of backward communications.

**Non-reversible stages.** In practice, a reversible architecture includes layers that reduce dimensionality for computational efficiency, which thus correspond to non-invertible functions. For those very few stages, we employ a buffer mechanism to store activations and, like activation checkpointing, we recompute the computational graph with a forward pass during the backward pass. Note that this would not be the case when using invertible (i.e., bijective) architectures (Jacobsen et al., 2018a), which use an invertible downsampling.

3.3 A PARALLELIZABLE APPROACH : PETRA

As with any model parallel training technique, PETRA requires partitioning the network architecture into stages $F_j$ that are distributed across distinct devices. Each device $j$ needs only to communicate with its neighboring devices $j-1$ and $j+1$. The pseudo-code in Alg. 1 details the operations performed by each device, and the whole algorithm execution can be summarized as follows. The first device sequentially accesses mini-batches, initiating the data propagation process. When receiving its input $x_{j-1}^t$ from the previous stage, each stage processes it in forward mode and passes it to the next stage, until the final stage is reached. The final stage evaluates the loss and computes the gradients with regard to its input and parameters, thus initiating the backward process, which is performed in parallel with the forward process. In it, each stage processes the input and its associated gradient from the next stage. This means first reconstructing the computational graph, either while reconstructing the input $\tilde{x}_{j-1}^t$ for reversible stages or with a forward pass as in activation check-

Idea Generation Category:
0Conceptual Integration
0fhzSFsGUT
# A LCHEMY : A MPLIFYING T HEOREM -P ROVING C APABILITY THROUGH S YMBOLIC M UTATION

**Shaonan Wu** [1,2,∗] **Shuai Lu** [3,†] **Yeyun Gong** [3] **Nan Duan** [3] **Ping Wei** [1,2,†]

1 National Key Laboratory of Human-Machine Hybrid Augmented Intelligence 2 Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University 3 Microsoft Research Asia

{shaonanwu@stu.,pingwei@}xjtu.edu.cn, {shuailu,yegong,nanduan}@microsoft.com

A BSTRACT

Formal proofs are challenging to write even for experienced experts. Recent progress in Neural Theorem Proving (NTP) shows promise in expediting this process. However, the formal corpora available on the Internet are limited compared to general text, posing a significant data scarcity challenge for NTP. To address this issue, this work proposes _Alchemy_, a general framework for data synthesis that constructs formal theorems through symbolic mutation. Specifically, for each candidate theorem in Mathlib, we identify all invocable theorems that can be used to rewrite or apply to it. Subsequently, we mutate the candidate theorem by replacing the corresponding term in the statement with its equivalent form or antecedent. As a result, our method increases the number of theorems in Mathlib by an order of magnitude, from 110k to 6M. Furthermore, we perform continual pretraining and supervised finetuning on this augmented corpus for large language models. Experimental results demonstrate the effectiveness of our approach, achieving a 4.70% absolute performance improvement on the _Leandojo_ benchmark. Additionally, our approach achieves a 2.47% absolute performance gain on the out-of-distribution miniF2F benchmark based on the synthetic data. To provide further insights, we conduct a comprehensive analysis of synthetic data composition and the training paradigm, offering valuable guidance for developing a strong theorem prover. [1]

1 I NTRODUCTION

Nowadays, some pioneering mathematicians are attempting to verify their proofs using the proof assistant Lean (de Moura et al., 2015; Tao, 2023). Writing proofs for formal statements demands mastery of formal language and domain-specific mathematical knowledge. To mitigate the complexity associated with completing proofs, several research efforts (Polu & Sutskever, 2020; Polu et al., 2023; Trinh et al., 2024) seek to automatically generate formalized proofs through a neural model, known as Neural Theorem Proving (NTP). NTP represents a long-standing challenge for machine learning-based methods (Li et al., 2024), highlighting the limitations in the reasoning abilities of neural models. Prevalent Large Language Models (LLMs) (Brown et al., 2020; Dubey et al., 2024) still struggle with theorem-proving, despite excelling in related reasoning-intensive scenarios such as math reasoning (Reid et al., 2024) or code generation (Guo et al., 2024). The key challenge of theorem-proving lies in data scarcity (Li et al., 2024; Trinh et al., 2024). Due to the difficulties associated with the manual formalization of theorems, formal corpora available on the Internet are relatively scarce compared to general text (Azerbayev et al., 2024). Synthetic data has shown promise in alleviating the data scarcity problem. Some works propose to directly create theorems in symbolic space. For instance, Wang & Deng (2020) attempt to train a neural theorem generator on human-written formal theorems for the lightweight formal system Metamath.
Other efforts focus on generating theorems based on symbolic rules (Wu et al., 2021; Trinh et al., 2024), which are restricted to a specific domain of mathematics, such as inequality theorems and 2D geometry.

∗ Work done during internship at Microsoft Research Asia. † Corresponding author.

1 The code is available at https://github.com/wclsn/Alchemy.

Additionally, there are endeavors focusing on autoformalization (Xin et al., 2024; Ying et al., 2024), which typically translates natural language mathematical problems into formalized statements, samples correct proofs, and retrains the theorem prover iteratively. Autoformalization has yielded promising results in competition-level theorem-proving tasks through the use of large autoformalized datasets (Xin et al., 2024). However, the process of formalizing problems and retrieving proofs is labor-intensive and cost-prohibitive. The distribution of formalized theorems is constrained by the pool of human-collected natural language problems and the intrinsic capabilities of the model. Compared to autoformalization, synthesizing theorems in symbolic space is a more direct process without intermediate translation, and is also easier to scale up on large, cost-effective CPU units.

Building upon the advanced Lean theorem prover (de Moura et al., 2015), we introduce a general method that synthesizes theorems directly in symbolic space. We analogize theorem synthesis to constructing functions in a general programming language and adopt a top-down approach. Initially, a new statement (function declaration) is constructed for each candidate theorem. Specifically, with the mathematical library of Lean, Mathlib [2], as seed data, we aim to find a symbolic manipulation between two existing statements. We posit that Lean's tactics serve as suitable candidates for manipulation because of their efficacy in handling symbolic expressions. _rw_ and _apply_ are two basic tactics frequently used in theorem proving, capable of handling the equality and implication relationships between terms. We assign both tactics to the set of manipulations and retrieve the invocable theorems for each candidate theorem by executing a predefined list of instructions in an interactive Lean environment. Then we mutate the candidate statement by replacing its components with their corresponding equivalent forms or antecedents. Ultimately, we construct the corresponding proof (function body) based on the existing proof and verify its correctness using Lean. The worked example shown in Fig. 1 illustrates the entire procedure of our algorithm. This algorithm is executed on a large CPU-only computing unit for several days. Our method increases the number of theorems in Mathlib by an order of magnitude, from 110,657 to 6,326,649. This significant increase in the number of theorems demonstrates the potential of creating theorems in symbolic space. We pre-train the LLMs on the combination of Mathlib theorems and their mutated variants. Then we fine-tune the models on the extracted state-tactic pairs, comprising both the training split of Mathlib and additional synthesized state-tactic pairs. We demonstrate the effectiveness of our method by evaluating the theorem-proving capability of these provers on the challenging _Leandojo_ benchmark (Yang et al., 2023). Our synthetic data improve the performance by 4.70% (over 70 theorems) on the novel_premises split.
Furthermore, the synthesized data exhibit promise in enhancing the out-of-distribution theorem-proving ability of LLMs, as evidenced by a performance increase of about 2.47% on the competition-level miniF2F benchmark (Zheng et al., 2022).

Our main contributions are as follows. To the best of our knowledge, this work represents the first general data synthesis framework in the symbolic space for the Lean theorem prover, effectively complementing mainstream autoformalization-based methods. Notably, our synthesis pipeline increases the number of theorems in Mathlib by an order of magnitude. Associated code has been made open-source to facilitate further research in data synthesis for formal systems. Also, the synthesized theorems can serve as a valuable supplement to Mathlib. We conduct a comprehensive evaluation on both in-distribution and out-of-distribution benchmarks, providing empirical insights to enhance the theorem-proving capabilities of LLMs.

2 R ELATED W ORK

**Neural Theorem Proving**. Proof assistants such as Lean (de Moura et al., 2015), Isabelle (Paulson, 1994), or Coq (Barras et al., 1997) are gaining traction within the mathematical community. These tools help mathematicians interactively formalize and check the correctness of proofs (Tao, 2024). Neural networks have shown promise in lowering the barrier of using a specific formal language for mathematicians, serving as a copilot (Song et al., 2024; Welleck & Saha, 2023). Polu & Sutskever (2020) propose to prove theorems automatically by training a decoder-only transformer to predict the next proofstep and construct the entire proof through a predefined search strategy. Then a series of works seek to enhance the efficiency of this framework by incorporating auxiliary training objectives (Han et al., 2022), conducting reinforcement learning (Polu et al., 2023; Xin et al., 2024), improving the proof search strategy (Lample et al., 2022; Wang et al., 2023; Xin et al., 2024), refining premise selection (Mikula et al., 2024; Yang et al., 2023), and so on.

2 https://github.com/leanprover-community/mathlib4

Figure 1: The overview of our synthesis pipeline. At the theorem level, we find invocable theorems that can be used to rewrite or apply to the assumptions or assertion of the candidate statement, such as the _iff_ and implication rules about _Coprime_. Then, we construct the new statements by replacing the specific component with its equivalent form or antecedent. At the proof tree level, our method merges two existing proof trees.

**Synthetic Theorem Creation**. Data scarcity is a main challenge for NTP (Li et al., 2024). Synthetic data can effectively alleviate this problem alongside manual data collection (Wu et al., 2024). The current approaches for synthesizing theorems diverge into two pathways. For autoformalization-based methods, the prevalent statement-level autoformalization translates a set of natural language problems into formal statements, followed by expert iteration to sample a collection of proofs for these statements (Wu et al., 2022; Xin et al., 2024; Ying et al., 2024). Proof-level autoformalization (Jiang et al., 2023; Huang et al., 2024) leverages an LLM to generate a proof sketch, which is completed by symbolic engines such as Sledgehammer (Böhme & Nipkow, 2010). In contrast, the second pathway focuses on synthesizing theorems in formal space.
Wang & Deng (2020) propose to train a neural theorem generator to synthesize theorems on a lightweight formal system, Metamath (Megill & Wheeler, 2019), which has only one tactic, _substitute_. Wu et al. (2021) sequentially edit the seed expression according to a predefined set of axioms and an axiom order to create a new statement, concatenating the implications from all steps to build a complete proof. This method is used to create theorems in domains grounded in well-established axioms, such as inequality theorems and ring algebra (Polu & Sutskever, 2020). Beyond these works, AlphaGeometry (Trinh et al., 2024) can solve olympiad geometry without human demonstrations by constructing statements and proofs in symbolic space from scratch, using a carefully designed deduction engine and large-scale computing resources. Our method aims to directly synthesize theorems in symbolic space on the advanced Lean theorem prover, fully utilizing the power of computing.

**Benchmarks for Theorem Proving**. Most neural theorem provers based on Lean are primarily trained on Lean's mathematical library, Mathlib. It encompasses a broad spectrum of mathematical subjects (e.g., algebra and analysis), composed of over 110,000 theorems along with their respective axioms and definitions. Researchers test the capability of neural models to prove in-distribution theorems on a held-out set of Mathlib (Polu & Sutskever, 2020; Han et al., 2022; Polu et al., 2023). Yang et al. (2023) create a challenging data split of Mathlib (the novel_premises split), which requires test proofs to use at least one premise not seen in the training stage, mitigating the overestimation phenomenon of the traditional evaluation setting (the random split). Another widely used benchmark, miniF2F (Zheng et al., 2022), is a cross-system benchmark that includes competition-level problems as well as IMO-level problems in the domains of algebra and number theory.

3 M ETHOD

Theorems written in Lean can be viewed as a special form of code, where declarations and function bodies possess precise mathematical meanings. The initial step in creating a new theorem involves formulating a theorem statement (function declaration) that defines the essence of the theorem. Then, one must verify its correctness by generating a proof block (function body) and submitting it to the proof assistant for validation. The resulting theorems that pass type checking can serve as supplementary data for training a neural theorem prover. Following Polu & Sutskever (2020), we use proofstep prediction as the training objective and best-first search as the search strategy.

3.1 S TATEMENT G ENERATION

**Find invocable theorems**. Constructing a new statement is the first step in creating a Lean theorem. The candidate theorem $t$ has a statement denoted as $s$. In the corresponding Lean repository, there exist $M$ potentially invocable theorems $T_{pinv} = \{t_j\}_{j=0}^{M-1}$. We assume that the challenge in creating a new theorem involves effectively leveraging the possibly invocable theorem $t_j$ to mutate the candidate statement $s$. This understanding arises from two perspectives. Each theorem in Lean can be represented in the form of a proof tree, as presented in Fig. 1. The leaf nodes represent the assumptions, and the root node signifies the assertion. At the tree level, the task of generating a new Lean theorem with existing theorems is equivalent to defining manipulations $\Phi$ that combine the proof trees of $t_j$ and $t$.
To streamline this process, our focus is solely on establishing the connection between the root node of _t_ _j_ and the leaf node (or root node) of the candidate theorem _t_ . From a mathematical standpoint, we can transform a target formula into an equal variant or break it down into multiple subformulas that suffice to prove the original formula, by employing the equality or “only if” relationship between formulas. The mathematical interconnections between formulas provide heuristic insights on how to mutate _s_ to create a new theorem. Similarly, we can substitute the terms in _s_ with their equivalent forms or logical antecedents. For instance, consider the statement _a_ + _b > c_ + _d, m >_ 0 _→_ _m_ ( _a_ + _b_ ) _> m_ ( _c_ + _d_ ) and the known theorems _a > b ⇐⇒_ _e_ _[a]_ _> e_ _[b]_ and _a > c, b > d_ = _⇒_ _a_ + _b > c_ + _d_ . From these, we can derive new theorems: _a_ + _b > c_ + _d, m >_ 0 _→_ _e_ _[m]_ [(] _[a]_ [+] _[b]_ [)] _> e_ _[m]_ [(] _[c]_ [+] _[d]_ [)], and _a > c, b > d, m >_ 0 = _⇒_ _m_ ( _a_ + _b_ ) _> m_ ( _c_ + _d_ ). In summary, identifying manipulations Φ that use _t_ _j_ to modify the assumptions or assertion of _s_ is the primary step in constructing new statements. With their intrinsic mathematical meanings and proficiency in manipulating terms within Lean, tactics are promising candidates for the manipulations Φ. Following the preceding discussion, we choose two frequently used basic tactics, _rw_ and _apply_ to formulate Φ. - **rw** . The “rewriting” tactic _rw_ is mostly used to replace some terms in the target expression with their equivalent forms according to the given identity or _iff_ (a.k.a., if and only if) rules [3] . In the presence of an identity _h_ : _a_ = _b_ or an _iff_ rule _h_ : _P_ _⇐⇒_ _Q_, _rw [h]_ substitutes all occurrences of term on the left side of equality in the proof goal with term on the right side. The direction of substitution can be reversed by adding a back arrow in the bracket ( _rw [←_ _h]_ ). The target of rewriting can also be changed using _at_, e.g. _rw [h] at_ _h_ 1, where _h_ 1 is an arbitrary assumption of the current proof state. 3 Strictly speaking, the _rw_ tactic is used to handle equality in Lean. The identity and _iff_ are just some kinds of equality. 4 Table 1: Templates for instructions designed to be executed in a Lean environment. We determine if a theorem is invocable by running the specific instruction. **Tactic** **Instruction Template** **Description** **Equality** invocable ~~t~~ heorem : _a_ = _b_ or _a ⇐⇒_ _b_ _rw_ rw [invocable ~~t~~ heorem] replace all _a_ s in goal with _b_ rw [ _←_ invocable ~~t~~ heorem] replace all _b_ s in goal with _a_ rw [invocable ~~t~~ heorem] at assumption replace all _a_ s in assumption with _b_ rw [ _←_ invocable ~~t~~ heorem] at assumption replace all _b_ s in assumption with _a_ **Implication** invocable ~~t~~ heorem : _a_ = _⇒_ _b_ _apply_ set assumption as current proof have assumption := by apply invocable ~~t~~ heorem goal, and try to argue backwards - **apply** . The _apply_ tactic is a “suffice-to” tactic. Given an implication, it will match the consequent with the proof goal. If matched, it will transform the goal into the antecedent of the implication. With an implication rule _h_ : _P_ = _⇒_ _Q_ and a proof goal _Q_, then _apply_ _[h]_ will reduce the goal to proving _P_, which means that “proving P suffices to prove Q by implication”. Similarly, _apply_ can be used to modify the assumption by deducing the implication forward. 
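To ground the two manipulations, here is a minimal Lean 4 sketch of a statement mutation (not taken from the paper): the theorem names `cand` and `cand_mut` are hypothetical, while `Nat.add_comm` is a standard lemma playing the role of the invocable theorem.

```lean
-- Candidate theorem t with statement s.
theorem cand (a b c : Nat) (h : a + b = c) : b + a = c := by
  rw [Nat.add_comm b a]
  exact h

-- Invocable equality: Nat.add_comm b a : b + a = a + b.
-- Mutated statement: the assumption `a + b = c` is replaced by the
-- equivalent `b + a = c`; the new proof first undoes the rewrite in
-- the assumption, then reuses the original proof.
theorem cand_mut (a b c : Nat) (h : b + a = c) : b + a = c := by
  rw [Nat.add_comm b a] at h  -- h becomes `a + b = c`
  exact cand a b c h
```

This mirrors the pipeline in Fig. 1: a term in the candidate statement is swapped for an equivalent form via `rw`, and the new proof is assembled from the rewrite step plus the existing proof.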
With an assumption _h1_ : $P$, _apply [h] at h1_ changes _h1_ into $Q$, which means "if P is true, then we can assert Q is true by the implication". A small worked example of the two tactics is sketched after the algorithm below.

**Algorithm 1** Find invocable theorems

Input: candidate statement s, potentially invocable theorems T_pinv, instruction templates I
Output: invocable theorems T_inv    ▷ T_inv: {(init_state, next_state, instruction), ...}
(env, init_state) ← INIT(s)    ▷ initialize a gym-like environment and retrieve the initial state
T_inv ← ∅
for t in T_pinv do
    for i in I do    ▷ for each instruction template
        inst ← FORMAT(t, i)
        next_state ← RUN_TAC(env, init_state, inst)    ▷ run the tactic specified by template i and theorem t
        if VALID(next_state) then    ▷ if a valid proof state is returned
            add (init_state, next_state, inst) to T_inv
        end if
    end for
end for

To generate a new statement, we need to find the relationship between the candidate statement $s$ and the potentially invocable theorems $T_{pinv}$. The pseudocode outlined in Algorithm 1 describes the main procedure for finding invocable theorems. The process involves initializing a gym-like environment to interact with Lean and extracting the initial proof state for the candidate statement. The algorithm then iteratively tests whether each theorem can be used to rewrite or apply to the candidate theorem, leveraging the instruction templates shown in Table 1. If the feedback from the interactive environment is deemed valid according to predefined criteria, the algorithm adds the proof states before and after running the tactic, together with the respective instruction, to the set of invocable theorems $T_{inv}$. More information about this process is described in Appendix C.2.

**Mutate statements**. After obtaining the initial set of invocable theorems, we apply filtering rules to $T_{inv}$ to improve the quality of the data and lower the complexity of mutating statements. With the filtered invocable theorems, we construct new statements by replacing components with their equivalent forms or antecedents. Since we use Lean tactics to formulate the manipulations Φ, most symbolic manipulation is delegated to the Lean proof assistant; what remains is just parsing and replacing. Specifically, for the candidate statement $s$ and instruction $i$, we utilize its abstract syntax tree to pinpoint the exact location within the code that requires modification. Then
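To make the two tactics concrete, here is a minimal Lean 4 sketch (our own illustration, not drawn from the paper's pipeline; the example statements are assumptions) showing how _rw_ rewrites a goal with an equality hypothesis and how _apply_ reduces a goal to the antecedent of an implication:

```lean
-- Minimal illustration (ours) of the two tactics in plain Lean 4.

-- `rw` with an equality h : a = b replaces a with b in the goal.
example (a b c : Nat) (h : a = b) (hb : b + c = 10) : a + c = 10 := by
  rw [h]      -- goal becomes b + c = 10
  exact hb

-- `apply` with an implication hpq : P → Q reduces the goal Q to P.
example (P Q : Prop) (hpq : P → Q) (hp : P) : Q := by
  apply hpq   -- "proving P suffices to prove Q"
  exact hp
```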
Idea Generation Category: Conceptual Integration
ID: 7NL74jUiMg
## TRAINING LANGUAGE MODELS TO SELF-CORRECT VIA REINFORCEMENT LEARNING

**Aviral Kumar**[∗+], **Vincent Zhuang**[∗+], **Rishabh Agarwal**[∗], **Yi Su**[∗], **JD Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani**[†], **Aleksandra Faust**[†] Google DeepMind

ABSTRACT

Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Current methods for training self-correction typically depend on either multiple models, a more advanced model, or additional forms of supervision. To address these shortcomings, we develop a multi-turn online reinforcement learning (RL) approach, _**SCoRe**_, that significantly improves an LLM's self-correction ability using _entirely self-generated data_. To build _**SCoRe**_, we first show that variants of supervised fine-tuning (SFT) on offline model-generated correction traces are often insufficient for instilling self-correction behavior. In particular, we observe that training via SFT falls prey to either a distribution mismatch between mistakes made by the data-collection policy and the model's own responses, or to behavior collapse, where learning implicitly prefers only a certain mode of correction behavior that is often not effective at self-correction on test problems. _**SCoRe**_ addresses these challenges by training under the model's own distribution of self-generated correction traces and using appropriate regularization to steer the learning process into learning a self-correction behavior that is effective at test time, as opposed to fitting high-reward responses for a given prompt. This regularization process includes an initial phase of multi-turn RL on a base model to generate a policy initialization that is less susceptible to collapse, followed by using a reward bonus to amplify self-correction. With Gemini 1.0 Pro and 1.5 Flash models, we find that _**SCoRe**_ achieves state-of-the-art self-correction performance, improving the base models' self-correction by 15.6% and 9.1% respectively on MATH and HumanEval.

1 INTRODUCTION

Large language models (LLMs) are a useful tool for reasoning in scientific domains such as math and coding (Shao et al., 2024; Lozhkov et al., 2024; Team, 2024). An aspirational property of LLMs in such settings is their ability to implement _meta-strategies_ or _algorithms_ that use test-time computation to generate improved responses. However, modern LLMs do not implement such strategies reliably. For instance, consider a problem that requires models to detect and revise (or "self-correct") their own responses in order to eventually arrive at the best possible final response. This self-correction capability has been shown to be severely lacking in current LLMs, especially in the absence of external input (also called _intrinsic self-correction_) (Huang et al., 2023; Kamoi et al., 2024). To make progress towards teaching LLMs to implement meta-strategies for challenging inputs, we study a special instance of training LLMs to perform self-correction to fix their mistakes "on-the-fly". This should be possible: on many queries where current LLMs fail, they still possess the underlying "knowledge" needed to arrive at the correct response but are unable to correctly elicit and draw inferences about their own knowledge when needed (Yang et al., 2024).
For example, strong LLMs can often successfully complete a sub-part of a math proof when prompted with the remainder, but may not be able to complete it from scratch. In a similar vein, leveraging their previous responses should, in principle, enable LLMs to improve their subsequent ones. Despite this, self-correction has remained elusive, highlighting the need to go beyond existing training paradigms.

∗ Equal contribution, + randomly ordered via coin flip, † jointly supervised. Corresponding authors: {vincentzhuang, rishabhagarwal, yisumtv}@google.com, aviralku@andrew.cmu.edu

Figure 1: **Left:** _**SCoRe**_ achieves state-of-the-art self-correction performance on MATH (Gemini 1.5 Flash, direct generation vs. self-correction); **Right:** _**SCoRe**_ inference-time scaling: spending samples on _sequential_ self-correction becomes more effective than spending them only on parallel samples.

**How can we imbue LLMs with self-correction abilities?** Prior attempts at self-correcting LLMs rely either on prompt engineering (Madaan et al., 2023; Kim et al., 2023) or on fine-tuning models specifically for self-correction. While the former approaches often fail to perform meaningful intrinsic self-correction, fine-tuning approaches require running multiple models during inference, such as a separate refinement model (Havrilla et al., 2024b; Welleck et al., 2023), or rely on "teacher" supervision to guide the process of self-correction (Qu et al., 2024). With the use of separate models or teacher supervision, self-correction does not necessarily outperform parallel, independent attempts. We develop an approach that is effective at self-correction _without_ these requirements. Our approach, **S**elf-**Co**rrection via **Re**inforcement Learning (_**SCoRe**_), trains only a single model that can both produce a response to a problem and _also_ correct errors without any oracle feedback. To develop _**SCoRe**_, we start by analyzing the shortcomings of SFT-based approaches (e.g., STaR (Zelikman et al., 2022)) and naïve RL that optimizes final-response correctness for teaching self-correction. We find that such approaches fall prey to either: **(1)** _distribution shift_, where the trained model is able to correct errors made by the base model that generated the data, but these gains do not transfer to self-correction under the learned model's own mistakes; or **(2)** _behavior collapse_, where the learning process simply learns to produce the best first-attempt response followed by superficial or no modifications in the second attempt. To address these issues, _**SCoRe**_ trains for self-correction directly via on-policy, multi-turn RL.
To prevent behavior collapse, _**SCoRe**_ employs two-stage training: in the first stage, it produces an initialization that is less susceptible to behavior collapse by training the model to correct second-attempt responses while constraining the first-turn distribution to be close to the base model; this is followed by training on both attempts to maximize reward in the second stage. Crucially, the second stage of multi-turn RL employs a reward-shaping term that rewards "progress" towards self-correction as opposed to the correctness of the final response. Our contribution is _**SCoRe**_, a multi-turn RL approach for teaching LLMs how to correct their own mistakes. To the best of our knowledge, _**SCoRe**_ is the first approach to attain significantly positive intrinsic self-correction: relative to base Gemini models, our method attains an absolute **15.6%** gain on self-correction for reasoning problems from MATH (Hendrycks et al., 2021) and an absolute **9.1%** gain on coding problems from HumanEval (Chen et al., 2021). We additionally motivate the design of _**SCoRe**_ by extensively studying the failure modes of SFT and standard RL approaches, which broadly indicate that reinforcement learning plays an essential role in _self-trained_ self-correction.

2 RELATED WORK

Prior works study self-correction for LLMs under a variety of assumptions and problem settings. The most prominent settings include problems where external input tokens from an environment are available, such as agentic tasks (Liu et al., 2023), code repair (Jain et al., 2024), and tool use (Chen et al., 2023). While self-correction with external feedback is possible with strong models (Pan et al., 2023), even they struggle in the substantially harder setting with no external input (intrinsic self-correction) (Kamoi et al., 2024; Huang et al., 2023). Prior work that attempts to amplify intrinsic correction abilities is largely based on prompting and fine-tuning.

Figure 2: **Two example traces of self-correction.** In the upper example, _**SCoRe**_ is able to correct an arithmetic mistake it makes in turn 1. In the lower example, the model is able to correct a reasoning error.

**Prompting for intrinsic self-correction.** Recent work demonstrates that naïvely prompting LLMs for self-correction can degrade performance (Huang et al., 2023; Zheng et al., 2024; Tyen et al., 2024; Qu et al., 2024). These results contradict prior work (Madaan et al., 2023; Shinn et al., 2023; Kim et al., 2023) and largely stem from mismatched assumptions about the setting (Kamoi et al., 2024). For example, Shinn et al. (2023) and Kim et al. (2023) use ground-truth answers during self-correction that may not generally be available; Madaan et al. (2023) use weak prompts for initial responses, thereby overestimating the total improvement possible. Therefore, there is no major work showing successful intrinsic self-correction via prompting alone. In the context of code self-repair, Olausson et al. (2023) show that even when strong models are prompted with some form of partial feedback, e.g., test cases but not the desired outcomes, they are unable to correct their mistakes.

**Fine-tuning for intrinsic self-correction.** Several works that go beyond prompting rely on fine-tuning with demonstrations of revisions, e.g., obtaining revisions directly from human annotators (Saunders et al., 2022) or stronger models (Ye et al., 2023; Qu et al., 2024).
Our work aims to train for self-correction entirely without the use of larger models or humans: the _learner itself_ is asked to generate its own training data. Similar to these prior works, we assume access to a reward function for evaluating model-generated outputs (Welleck et al., 2023; Akyürek et al., 2023; Zhang et al., 2024). Perhaps the closest to us from this set is Qu et al. (2024), which utilizes an iterative STaR-like approach to self-correction. While that work largely uses oracle teacher supervision, its preliminary results from training for self-correction show only minor improvements over five turns, consistent with the results we see for STaR. We show that _**SCoRe**_ attains substantially better results. Other approaches train separate models for performing correction (e.g., GLoRE (Havrilla et al., 2024b), Self-Correction (Welleck et al., 2023), Akyürek et al. (2023), and Paul et al. (2023)). While such approaches can be convenient, they require system design for serving multiple models at deployment.

**Multi-turn RL for LLMs.** Prior work at the intersection of LLMs and multi-turn RL builds machinery for optimizing rewards with value-based (Snell et al., 2022; Zhou et al., 2024; Farebrother et al., 2024; Shani et al., 2024), policy-based (Xiong et al., 2024; Shao et al., 2024), and model-based (Hong et al., 2024) approaches. We do not focus on building machinery for RL (we use the approach of Ahmadian et al. (2024)), but rather train for self-correction as an RL problem.

3 PRELIMINARIES AND PROBLEM SETUP

Table 1: **Self-correction performance after training on** $D_{\mathrm{STaR}}$ **and** $D_{\mathrm{SFT}}$. We find that the gap between the second and first attempts ($\Delta$(t1,t2)) is either negative or small. Both approaches erroneously modify a correct response to be incorrect, reflected in a high $\Delta^{c \to i}$(t1,t2) and a low $\Delta^{i \to c}$(t1,t2).

| Method | Accuracy@t1 | Accuracy@t2 | $\Delta$(t1,t2) | $\Delta^{i \to c}$(t1,t2) | $\Delta^{c \to i}$(t1,t2) |
|---|---|---|---|---|---|
| Base model | 52.6% | 41.4% | -11.2% | 4.6% | 15.8% |
| STaR $D_{\mathrm{STaR}}$ | 55.4% | 41.2% | -14.2% | 5.4% | 19.6% |
| STaR $D_{\mathrm{STaR}}^{+}$ | 53.6% | 54.0% | 0.4% | 2.6% | 2.2% |
| Pair-SFT $D_{\mathrm{SFT}}$ | 52.4% | 54.2% | 1.8% | 5.4% | 3.6% |
| Pair-SFT $D_{\mathrm{SFT}}^{+}$ | 55.0% | 55.0% | 0% | 0% | 0% |

Our goal is to develop an approach for training LLMs to improve their own predictions entirely on self-generated data. As discussed so far, we situate ourselves in the intrinsic self-correction setting (Huang et al., 2023), where models attempt to correct their initial responses _without_ any external feedback. Concretely, given a dataset $D = \{(x_i, y_i^*)\}_{i=1}^{N}$ of problems $x_i$ and oracle responses $y_i^*$, we will train an LLM policy $\pi_\theta(\cdot \mid x, \hat y_{1:l}, p_{1:l})$ that, given the problem $x$, previous $l$ model attempts $\hat y_{1:l}$ at the problem, and auxiliary instructions $p_{1:l}$ (e.g., an instruction to find a mistake and improve the response), solves the problem $x$ as correctly as possible. This formalism is akin to the multi-turn MDP in Qu et al. (2024). We also assume access to an oracle reward $\hat r(y, y^*)$, such as an answer checker (Uesato et al., 2022), that evaluates the correctness of a response $y$ by comparing it with the oracle response $y^*$. Critically, we _do not_ assume access to this oracle at test time; instead, the model must deduce whether there was a mistake and correct it if necessary, as is often the case in e.g.
mathematical reasoning problems. Unlike the setup of Qu et al. (2024), we also do not run majority voting for most of our main results. An example of our problem setting is given in Figure 2. We aim to find an LLM policy $\pi(\square \mid \circ)$ mapping input tokens $\circ$ to output tokens $\square$ that maximizes the correctness reward obtained from the verifier at the end of $l + 1$ turns ($l = 1$). Formally:

$$\max_{\pi_\theta} \; \mathbb{E}_{(x,\, y^*) \sim D,\ \hat y_{l+1} \sim \pi_\theta(\cdot \mid [x, \hat y_{1:l}, p_{1:l}])} \left[ \sum_{i=1}^{l+1} \hat r(\hat y_i, y^*) \right] \quad (1)$$

Crucially, note that unlike standard SFT or prevalent RL fine-tuning workflows, which train the policy $\pi$ to directly produce $y^*$ (or any other $y$ with $\hat r(y, y^*) = 1$), Equation 1 trains $\pi$ over multiple attempts _simultaneously_, where intermediate turns are supervised indirectly to maximize the sum.

**Base RL fine-tuning approach we use.** We use a REINFORCE policy gradient training approach with a KL-divergence penalty against a fixed model (Ahmadian et al., 2024), which is widely used in RL fine-tuning of LLMs, primarily in the setting of single-turn RLHF. Formally, these methods train the policy $\pi_\theta(\cdot \mid x)$ to optimize the following, where $\pi_{\mathrm{ref}}$ is a reference policy:

$$\max_{\theta} \; \mathbb{E}_{x_t,\, y_t \sim \pi_\theta(\cdot \mid x_t)} \left[ \hat r(y_t, y^*) - \beta_1 D_{KL}\big(\pi_\theta(\cdot \mid x_t) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x_t)\big) \right] \quad (2)$$

**Metrics.** To measure self-correction performance (we consider two attempts, i.e., $l = 1$, in this paper), we report and analyze the following metrics: **(1) Accuracy@t1**: the model's accuracy at the first attempt; **(2) Accuracy@t2**: the model's accuracy at the second attempt; **(3)** $\Delta$**(t1, t2)**: the net improvement in model accuracy between the first and second attempts, which measures the efficacy of self-correction; **(4)** $\Delta^{i \to c}$**(t1, t2)**: the fraction of problems that are incorrect in the first attempt but become correct at the second attempt, which measures how many _new_ problems self-correction can solve; and **(5)** $\Delta^{c \to i}$**(t1, t2)**: the fraction of problems that are correct in the first attempt but become incorrect at the second attempt, which measures how well the model understands what makes a response correct.

4 SFT ON SELF-GENERATED DATA IS INSUFFICIENT FOR SELF-CORRECTION

A natural approach for training self-correction is to utilize some form of supervised fine-tuning on data collected from a base model. Variants of this approach have been shown to scale well on single-turn reasoning problems (Singh et al., 2023; Zelikman et al., 2022). In this section, we assess the empirical efficacy of two such approaches for self-correction: STaR (Zelikman et al., 2022), and a version of Welleck et al. (2023) that trains only one model. We ultimately find that although these methods improve self-correction over the base model, they fail to achieve substantially positive self-correction ($\Delta$**(t1,t2)**).
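As a concrete illustration of the five metrics above, the following minimal Python sketch (our own; the function and variable names are assumptions, not the paper's code) computes them from per-problem correctness indicators of the two attempts:

```python
def self_correction_metrics(correct_t1: list[bool], correct_t2: list[bool]) -> dict:
    """Compute the five self-correction metrics from per-problem
    correctness of the first and second attempts."""
    n = len(correct_t1)
    acc_t1 = sum(correct_t1) / n
    acc_t2 = sum(correct_t2) / n
    # Fractions of problems flipped incorrect -> correct and correct -> incorrect.
    i2c = sum((not c1) and c2 for c1, c2 in zip(correct_t1, correct_t2)) / n
    c2i = sum(c1 and (not c2) for c1, c2 in zip(correct_t1, correct_t2)) / n
    return {
        "Accuracy@t1": acc_t1,
        "Accuracy@t2": acc_t2,
        "Delta(t1,t2)": acc_t2 - acc_t1,  # equals i2c - c2i
        "Delta_i->c(t1,t2)": i2c,
        "Delta_c->i(t1,t2)": c2i,
    }
```

Note that the net improvement decomposes as $\Delta(\mathrm{t1,t2}) = \Delta^{i \to c} - \Delta^{c \to i}$, which is why Table 1 reports all three quantities.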
By probing these models, we observe two main failure modes: **(1)** a collapse to non-correcting behavior, where the models learn to produce a good response on the first attempt and make only minor (or no) modifications in the second attempt, and **(2)** an inability of offline methods to be robust to distribution shift in the first-attempt responses.

Figure 3: **Edit distance between first-attempt and second-attempt responses** from fine-tuned models, our approach (_**SCoRe**_), and the base model. (a) Histograms of edit distance ratios on MATH 500; (b) STaR edit distance ratios; (c) Pair-SFT edit distance ratios. While training on self-generated error-correction traces primarily learns not to make major edits, SFT learns to make some edits but is still quite conservative.

**Analysis setup: methods and dataset construction.** We prompt Gemini 1.5 Flash to obtain a large number of two-turn self-correction traces on MATH (Hendrycks et al., 2021). The **STaR** approach filters these trajectories to retain only those that successfully revise incorrect responses and runs SFT on the resulting dataset. Another approach is to use base-model data from above to construct "synthetic" repair traces by pairing incorrect responses with correct ones (Welleck et al., 2023). We study a variant of this method that we call **Pair-SFT**, which does not train a separate corrector model and does not augment this initial dataset with multi-turn traces. Formally, we denote the datasets for STaR and Pair-SFT as $D_{\mathrm{STaR}}$ and $D_{\mathrm{SFT}}$ respectively. We run 3 iterations for STaR following the protocol in Singh et al. (2024), and only one iteration for Pair-SFT, following the protocol in Welleck et al. (2023) and other standard SFT workflows.

**Main empirical findings.** We present the self-correction results before and after fine-tuning on $D_{\mathrm{STaR}}$ and $D_{\mathrm{SFT}}$ in Table 1. We find that although $\Delta$**(t1, t2)** is substantially higher for Pair-SFT relative to the base model, there is only little benefit to self-correction (a 1.8% gain). This gain is of a similar order to the findings of Qu et al. (2024). By studying $\Delta^{i \to c}$ and $\Delta^{c \to i}$, we find that SFT mainly reduces the number of correct problems that are mistakenly changed to incorrect in the second attempt, but does not significantly increase the fraction of incorrect first attempts that are corrected. This result is consistent with prior works on intrinsic self-correction that have found negligible or negative $\Delta$**(t1, t2)** values.

Figure 4: **Self-correction performance** on different sets of first-attempt responses: (a) "fixed": the first response is sampled from the initial model; (b) "self-generated": the first response is generated by the learner itself. Throughout training, the correction rate on fixed responses increases for both train and validation problems, but degrades substantially on self-generated responses.
This indicates that training on a fixed offline dataset of correction traces suffers from distribution shift. We also find that unlike Pair-SFT, training on $D_{\mathrm{STaR}}$ does not reduce $\Delta^{c \to i}$, indicating that the STaR policy does not have a clear understanding of when to make modifications and when not to. Observing this, we also trained on extended versions $D_{\mathrm{STaR}}^{+}$ (and $D_{\mathrm{SFT}}^{+}$), which additionally contain tuples with two correct responses. We would expect the addition of such "correct-to-correct" data to prevent the model from erroneously revising a correct response. As shown in Table 1, the inclusion of this data helps STaR substantially but results in only a 0.4% change in $\Delta$**(t1, t2)**. On the other hand, for SFT, inclusion of this data overly biases the model against changing its answer.

(a) **Training accuracy curves.** When training with standard multi-turn RL, the responses at the two attempts become tightly coupled, leading to poor coverage for subsequent iterations and worse learning progress. Stage I in _**SCoRe**_ is explicitly designed to alleviate this and achieves a much higher $\Delta$(t1,t2), leading to increased exploration and better final performance. (b) **Frequency with which the learner proposes a different answer in the second turn.** Without explicitly modifying the policy initialization as in _**SCoRe**_, the policy quickly learns to rarely change its answer, leading to poor exploration. Stage I in _**SCoRe**_ prevents this issue and learns non-collapsed behavior in Stage II.

Figure 5: **Behavior collapse in standard multi-turn RL** for training self-correction. These results indicate that some explicit approach to avoid collapse is necessary, i.e., Stage I in _**SCoRe**_.

**Diving deeper: analyzing self-correction behavior.** To further understand how these STaR and SFT models edit their responses, we measured their **edit distance ratios**, defined as the edit distance between the two responses normalized by the total length of both responses. As shown in Figure 3a, while the base model sometimes makes substantially large edits to the original response, models fine-tuned on $D_{\mathrm{STaR}}$ and $D_{\mathrm{SFT}}$ are overly conservative and often make no edits at all. This is akin to a form of _**behavior collapse**_: training to maximize likelihoods on off-policy revision traces does not teach the desired correction "behavior", even though it improves first-attempt accuracy. Similar observations of LLMs ignoring nuanced behaviors (e.g., producing a mistake in a response and then correcting it in subsequent steps) have been made in Ye et al. (2024b). We also compared the distributions of edit distance ratios on training and test-time self-correction traces in Figures 3b/3c. While STaR produces qualitatively similar edit distance ratios on both the train and validation sets, we still observe some discrepancies between the train and validation edit distance ratios for SFT, implying that Pair-SFT is not very effective at generalizing to new problems from the same distribution. We visualized this by plotting the self-correction performance of the SFT model on a fixed set of first attempts and on self-generated first attempts in Figure 4.
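For concreteness, here is a minimal sketch of the edit distance ratio used in this analysis (our own implementation; the paper does not specify tokenization, so this version operates on characters):

```python
def edit_distance(a: str, b: str) -> int:
    # Dynamic-programming Levenshtein distance over characters.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def edit_distance_ratio(y1: str, y2: str) -> float:
    # Edit distance normalized by the total length of both responses,
    # as defined in the "Diving deeper" analysis above.
    return edit_distance(y1, y2) / (len(y1) + len(y2))
```

A ratio near 0 corresponds to the collapsed behavior (second attempt nearly identical to the first), while larger ratios indicate substantive revisions.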
We observe vastly different behaviors between static and self-generated first-attempt distributions: while the model is able to optimize training correction accuracy and also slightly improves on first attempts appearing in the validation set (distributed i.i.d. with the training distribution), its self-correction accuracy degrades. Hence, _**distribution shift**_ is a significant challenge for offline methods such as Pair-SFT.

5 _**SCoRe**_: SELF-CORRECTION VIA MULTI-TURN REINFORCEMENT LEARNING

The above results highlight that an effective approach for training LLMs to self-correct entirely via self-generated data must address both distribution shift and behavior collapse. Utilizing on-policy RL is a natural way to address distribution shift, and our method will do so by extending Equation 2 to multiple turns under the hierarchical framework of Zhou et al. (2024). However, _**is behavior collapse an issue for standard multi-turn RL (i.e., optimizing reward at the end of the second attempt)?**_ To answer this question, we run standard multi-turn RL training to optimize Equation 1 only on $(x_2, y_2)$ pairs appearing in the second attempt. Since this objective maximizes second-attempt performance given self-generated first attempts but without training the first attempt, we expect the self-correction $\Delta$(t1,t2) of the model to increase. However, as shown in Figure 5, while the performance of each attempt improves with training, the difference $\Delta$(t1, t2) does not. In other words, standard multi-turn RL collapses to being overly biased against changing its response, resulting in no self-correction ability and a behavior collapse similar to what we saw with STaR.

Figure 6: **An overview of our approach (**_**SCoRe**_**).** _**SCoRe**_ trains a model in two stages. **Stage I:** instead of running SFT (which produces pathological amplification of biases) to initialize RL training, we train a good initialization that can produce high-reward responses in the second attempt while mimicking the base model's initial response at the first attempt. **Stage II:** jointly optimize both attempts, where the latter uses a shaped reward to incentivize the discovery of the self-correction strategy instead of the simple strategy of producing the best first response followed by making only minor edits to it in the second attempt.

_**Why does RL still suffer from collapse?**_ Note that there are two equally good solutions when optimizing a policy with RL on the training prompts: **(i)** learning to correct the first attempt, or **(ii)** learning to produce the best first-attempt response, followed by no meaningful correction. Of course, only the former strategy produces self-correction behavior on new problems, but RL on a pre-trained LLM may not learn **(i)** over **(ii)**, since both strategies can appear equally good on _the training set_. Abstractly, learning the "meta" strategy of self-correction during training is difficult unless the "direct" strategy that optimizes reward appears less viable on the training data. Conceptually, this is similar to the memorization challenge in meta-learning (Yin et al., 2019), which suggests that when provided with mutually exclusive tasks, meta-learning is likely to recover the supervised learning solution (without using context from the few shots) that directly predicts the output. Here, this is analogous to not self-correcting past attempts and directly producing an answer.
**Method overview.** We saw that standard RL leads to a collapse to non-correcting behavior, which optimizes the accuracy of both attempts but does not incentivize self-correction. Hence, our key insight in _**SCoRe**_ is that we must more explicitly encourage self-correction behavior, which we accomplish via a two-stage approach. The first stage (**Stage I**) serves the role of initialization, where we train the model to decouple its behavior across the two attempts by optimizing second-attempt accuracy while explicitly constraining the distribution of first attempts to the base model. From there, **Stage II** jointly optimizes the reward of both attempts. To ensure that Stage II does not collapse to the "direct" solution, we bias the reward to reinforce self-correction progress.

5.1 STAGE I: TRAINING AN INITIALIZATION THAT DECOUPLES ATTEMPTS

The goal of Stage I of _**SCoRe**_ is to obtain an initialization by improving the coverage of the base model's second attempts given the first attempt, so that subsequent training with on-policy multi-turn RL is less prone to behavior collapse. While this would typically be done via SFT, our results in Section 4 show that SFT itself suffers from collapse. Therefore, we use RL in this stage to decouple the two attempts. To do so, we explicitly fine-tune the base model to produce _high-reward_ responses at the second attempt, while forcing the model not to change its first attempt by constraining it to be close to the base model with a KL-divergence penalty. While this may appear sub-optimal, as it constrains the first attempt to the base model, we find this stage to be critical in reducing the base model's bias towards collapsing the first- and second-attempt distributions, thus avoiding behavior collapse when the actual multi-turn RL is run. Formally, the objective is:

$$\max_{\theta} \; \mathbb{E}_{x_1,\, y_1 \sim \pi_\theta(\cdot \mid x_1),\ y_2 \sim \pi_\theta(\cdot \mid [x_1, p_1])} \left[ \hat r(y_2, y^*) - \beta_2 D_{KL}\big(\pi_\theta(\cdot \mid x_1) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x_1)\big) \right] \quad (3)$$

where $\beta_2$ is a hyperparameter designed to enforce a strict KL penalty _**only on the first attempt**_ (the KL-divergence term in Eq. 3) to avoid shifting the first-turn responses. As we use RLOO (Equation 2) for training the policy, there is still a default KL-divergence penalty, but with a much smaller weight; it is omitted from Eq. 3 for clarity. We show that unlike standard multi-turn RL, Stage I is effective at decoupling the two responses (Figure 5b) and leads to better Stage II performance.

5.2 STAGE II: MULTI-TURN RL TO OPTIMIZE BOTH ATTEMPTS

The second stage of _**SCoRe**_ is initialized from Stage I and now jointly optimizes the performance of both attempts. Concretely, Stage II trains $\pi_\theta(\cdot \mid \cdot)$ using the objective (Eq. 2 applied to Eq. 4):

$$\max_{\theta} \; \mathbb{E}_{x_1,\, y_1 \sim \pi_\theta(\cdot \mid x_1),\ y_2 \sim \pi_\theta(\cdot \mid [x_1, p_1])} \left[ \sum_{i=1}^{2} \hat r(y_i, y^*) - \beta_1 D_{KL}\big(\pi_\theta(\cdot \mid x_i) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x_i)\big) \right] \quad (4)$$

where $x_i$, $i \in \{1, 2\}$, corresponds to the set of input tokens passed as context to the model.

**Reward shaping to prevent behavior collapse.** Optimizing Equation 4 via multi-turn RL can still learn to couple responses, because we still attempt to maximize rewards for both attempts in Equation 4.
To prevent the learning process from collapsing to a non-self-correcting policy in Stage II, we found it crucial to bias the RL problem towards learning self-correction behavior. We implement this via _reward shaping_: rewarding transitions that make "progress" towards the desired self-correction behavior. Concretely, given a two-turn rollout sampled from the policy, $\tau = \{x_1, \hat y_1, \hat r(\hat y_1, y^*), x_2, \hat y_2, \hat r(\hat y_2, y^*)\}$, we modify the reward $\hat r(\hat y_2, y^*)$ at the second attempt in Equation 4 with a bonus

$$\hat b(\hat y_2 \mid \hat y_1, y^*) := \alpha \cdot \big(\hat r(\hat y_2, y^*) - \hat r(\hat y_1, y^*)\big),$$

where $\alpha$ is a positive constant multiplier, ideally larger than 1.0. Adding this bonus to the second attempt measures a notion of progress by emphasizing only transitions that flip the correctness of the response, and it assigns a heavy negative penalty to transitions that change a correct response to incorrect in the second attempt. Thus, the addition of this bonus regularizes the training process away from collapsing onto the "direct" solution, which also appears optimal on the training set but does not learn self-correction.

5.3 PUTTING IT TOGETHER AND IMPLEMENTATION DETAILS

Our approach is illustrated in Figures 6 & 11. We detail all hyperparameters in Appendix B. In practice, one can also use an adaptive $\beta_2$ that attempts to balance the magnitudes of the
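To make the shaped reward concrete, here is a minimal Python sketch (our own; the default value of `alpha` is illustrative, not the paper's setting):

```python
def shaped_reward_t2(r1: float, r2: float, alpha: float = 1.5) -> float:
    """Second-attempt reward with the progress bonus
    b(y2 | y1, y*) = alpha * (r(y2, y*) - r(y1, y*)).

    With binary correctness rewards, flipping incorrect -> correct earns
    1 + alpha, while flipping correct -> incorrect earns -alpha: a heavy
    penalty that discourages the collapsed "direct" strategy of never
    revising the first attempt.
    """
    return r2 + alpha * (r2 - r1)
```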
Idea Generation Category: Direct Enhancement
ID: CjwERcAU7w
# RESuM: RARE EVENT SURROGATE MODEL FOR PHYSICS DETECTOR DESIGN

**Ann-Kathrin Schuetz**[1] **Alan W. P. Poon**[1] **Aobo Li**[2][∗] aschuetz@lbl.gov awpoon@lbl.gov aol002@ucsd.edu 1 Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA 2 Halıcıoğlu Data Science Institute, Department of Physics, UC San Diego, La Jolla, CA 92093, USA ∗ Corresponding Authors

ABSTRACT

The experimental discovery of neutrinoless double-beta decay (NLDBD) would answer one of the most important questions in physics: why is there more matter than antimatter in our universe? To maximize the chances of detection, NLDBD experiments must optimize their detector designs to minimize the probability of background events contaminating the detector. Given that this probability is inherently low, design optimization either requires extremely costly simulations to generate sufficient background counts or must contend with significant variance. In this work, we formalize this dilemma as a Rare Event Design (RED) problem: identifying optimal design parameters when the design metric to be minimized is inherently small. We then designed the Rare Event Surrogate Model (RESuM)[1] for physics detector design optimization under RED conditions. RESuM uses a pretrained Conditional Neural Process (CNP) model to incorporate additional prior knowledge into a Multi-Fidelity Gaussian Process (MFGP) model. We applied RESuM to optimize neutron moderator designs for the LEGEND NLDBD experiment, identifying an optimal design that reduces neutron background by (66.5 ± 3.5)% while using only 3.3% of the computational resources compared to traditional methods. Given the prevalence of RED problems in other fields of the physical sciences, the RESuM algorithm has broad potential for simulation-intensive applications.

1 INTRODUCTION

Why is there more matter than antimatter in our universe? This question remains one of the most important yet unsolved questions in physics. Several Nobel Prizes have been awarded for groundbreaking discoveries that advanced our understanding of this question, including the discovery of CP violation in kaons (Cronin and Fitch, 1980), the detection of cosmic neutrinos (Koshiba, 2002), and the development of the Kobayashi-Maskawa theory of CP violation (Kobayashi and Maskawa, 2008). Despite these monumental achievements, the reason for the dominance of matter over antimatter remains unknown. One of the most promising next steps toward answering this question is the potential discovery of Neutrinoless Double-Beta Decay (NLDBD) (Dolinski et al., 2019). Such a discovery would represent a major milestone in this direction and would undoubtedly be considered a Nobel-Prize-level breakthrough in physics. Owing to its utmost importance, the entire U.S. nuclear physics community gathered for a year-long discussion in 2023 and recommended the experimental search for NLDBD as the second-highest priority (Committee, 2023) for the next 10 years. The most challenging aspect of the NLDBD search is dealing with background events: physical events that are not NLDBD but are indistinguishable from it. Since NLDBD is hypothesized to occur less than once every three years (LEGEND-Collaboration et al., 2021; Dolinski et al., 2019), even a single background event entering the detector could potentially ruin the entire detection effort.
Therefore, designing ultra-pure NLDBD detectors with optimal parameters to minimize the probability of background events entering the detector becomes the utmost goal of all NLDBD experiments.

[1] GitHub repository: https://github.com/annkasch/resum-legend

Traditionally, the detector design procedure is conducted with simulations: we first simulate our detector and $N_1$ background events under a certain design parameter $\theta_1$, then count the number of background events that eventually enter the detector, $m_1$. We then repeat the simulation process with another design parameter $\theta_2$ and count $m_2$. If $m_1/N_1 < m_2/N_2$, this suggests that design $\theta_1$ is better than $\theta_2$. This simulation process can be repeated multiple times until an optimal design is found. An obvious shortcoming of this traditional approach is the computational cost: due to the ultra-pure nature of the NLDBD detector, $N$ needs to be very large for $m$ to even be non-zero. This is amplified by the complexity of the design space, which involves numerous and often non-linearly interdependent parameters such as detector geometry, material properties, and environmental conditions. An obvious solution to this problem is to build a surrogate model that can significantly accelerate our simulations (Li et al., 2023; Ravi et al., 2024). However, due to the rare-event nature of the problem, $m$ is either 0 or a small, discrete integer, which leads to high variance in our design metric $m/N$. This variance renders training traditional continuous surrogate models extremely difficult. In this paper, we formulate this problem as a Rare Event Design (RED) problem and present RESuM, a Rare Event Surrogate Model, to solve it. RESuM navigates a complex landscape and approximates the complex relationships between the design parameters $\theta$ and the rare-event design metric $m/N$. Benchmarking shows that RESuM reduces the LEGEND neutron background by (66.5 ± 3.5)% using only 3.3% of the computational power compared to traditional methods. Due to the broad presence of RED problems in the physical sciences, RESuM has the potential to be applied to other domains, including astronomy and materials science.

2 RELATED WORKS

Due to the computational cost of particle physics simulations, generative models like VAEs (Z. Fu et al., 2024), GANs (Kansal et al., 2021; Vallecorsa, 2018; Hashemi et al., 2024), and diffusion models are widely used as surrogate models for fast simulation (Kansal et al., 2023). Although these deep generative models, usually trained on large datasets, robustly reproduce rich high-dimensional data, their black-box nature renders them non-interpretable and lacking clear statistical meaning. Meanwhile, the CNP model (Garnelo et al., 2018), as a probabilistic generative model, offers the distinct advantage of few-shot learning and provides a clear statistical interpretation. It has demonstrated good performance in few-shot problems, including classification tasks (Requeima et al., 2019), statistical downscaling (Vaughan et al., 2022), and hydrogeological modeling (Cui et al., 2022). In this study, we explore a novel surrogate modeling approach that focuses solely on key detector design metrics. The CNP model is used not as a generative model, but as a predictive model to smooth out the discreteness of rare design metrics. Another related field is rare event simulation and modeling in reliability engineering.
The rare event problem there focuses on estimating extremely low failure probabilities $P_f$. Since direct Monte Carlo simulation becomes intractable as $P_f$ approaches zero, specialized techniques, including adaptive sampling and FORM/SORM methods, have been developed. The development progressed from FORM by Hasofer (1974) to its extension to non-normal distributions (Fiessler et al., 1979), comprehensively reviewed in Der Kiureghian et al. (2005). Methodological advances include adaptive sampling (Bucher, 1988), surrogate-based methods (Li and Xiu, 2010; Li et al., 2011), sequential importance sampling (Papaioannou et al., 2016), and multi-fidelity approaches (Peherstorfer et al., 2016; 2018). Recent work has introduced multilevel sampling (Wagner et al., 2020) and ensemble Kalman filters (Wagner et al., 2022). While Adaptive Importance Sampling (AIS) can potentially solve the RED problem, its implementation in the LEGEND simulation presents several significant challenges, as discussed in Appendix 15. In contrast, the RESuM model proposed in this work provides a simple yet efficient solution to the RED problem in LEGEND.

3 RARE EVENT DESIGN PROBLEM

**Definition** Let $\theta \in \Theta$ be the vector of design parameters, where $\Theta$ represents the space of all possible design parameters. Consider a simulation involving $N$ events, or data points, under design parameter $\theta$; each event can either trigger a signal[2] or not. Define a stochastic process $\{X_1, \ldots, X_N\}$, where each random variable $X_i$ corresponds to the $i$-th event in the simulation: $X_i = 1$ if the $i$-th event triggers a signal and $X_i = 0$ if it does not. Each random variable $X_i$ is statistically independent of all other $X_j$ for $j \neq i$. Each simulated event $i$ is considered independent, and the outcome of $X_i$ depends on two sets of parameters: a set of design parameters $\theta$, which is universal across all events, and a set of event-specific parameters $\phi_i \in \Phi$, where $\Phi$ represents the space of all possible event-specific parameters. The probability that the $i$-th event triggers a signal is thereby defined as a function of both $\theta$ and $\phi_i$, denoted $t(\theta, \phi_i)$. Let $m$ represent the number of events that trigger a signal. The design metric $y$ is then defined as:

$$y = \frac{m}{N} = \frac{\sum_{i=1}^{N} X_i}{N} \quad (1)$$

**Rare Event Assumption** The number of triggered events $m$ follows a binomial distribution with triggering probability $t(\theta, \phi_i)$. Under the rare-event assumption that $m \ll N$ and that the triggering probability $t(\theta, \phi_i)$ of each event is small, the number of triggered events $m$ can be approximated by a Poisson distribution, $m \sim \mathrm{Poisson}(N \bar t(\theta))$, where $\bar t(\theta)$ is the expected triggering probability over all simulated events as $N$ goes to infinity:

$$\bar t(\theta) = \int t(\theta, \phi)\, g(\phi)\, d\phi \quad (2)$$

The function $g(\phi)$ denotes a predefined probability density function (PDF) from which $\phi_i$ is sampled during the simulation process.

[2] "Trigger a signal" could represent any event of interest, depending on the task setup. In the case of the NLDBD background minimization task, it means a background event successfully leaks into the detector.
$\bar t(\theta)$ is obtained by marginalizing $t(\theta, \phi)$ over $g(\phi)$. Therefore, the ultimate metric that we want to minimize is $\bar t$, which is the expectation of $y$:

$$\theta^* = \arg\min_{\theta \in \Theta} \bar t(\theta) \quad (3)$$

Since $\bar t$ depends on $\theta$, minimizing $\bar t$ requires extensive sampling of different $\theta$ values within the design space $\Theta$ to identify the optimal parameter.

**Large N Scenario** Assume that $\bar t(\theta)$ remains fixed. When $N$ becomes large, according to the central limit theorem, $m$ will tend to follow a normal distribution: $m \sim \mathcal{N}(N \bar t(\theta),\, N \bar t(\theta))$. Since $y = m/N$, this means that $y$ will also follow a normal distribution with symmetric, well-defined statistical uncertainty $\bar t(\theta)/N$: $y \sim \mathcal{N}(\bar t(\theta),\, \bar t(\theta)/N)$. As $N \to +\infty$, $y$ asymptotically approximates $\bar t(\theta)$ with statistical uncertainty approaching 0.

**Small N Scenario** When $N$ becomes small, the total number of instances $m$ that trigger a signal has higher variance, as each individual instance has a significant impact on $m$. The measure $y = m/N$ can no longer be approximated with a normal distribution, which makes $y$ more sensitive to statistical fluctuations of a few simulated events. Furthermore, there is a non-negligible probability that no event triggers a signal, in which case $m = 0$ and $y = m/N = 0$. In summary, in the small-$N$ scenario, the design metric $y$ of interest takes on only a discrete set of values, $y \in \left\{\frac{0}{N}, \frac{1}{N}, \ldots, \frac{m}{N}, \ldots\right\}$.

4 RARE EVENT SURROGATE MODEL

The Rare Event Surrogate Model (RESuM) aims to solve the RED problem under the constraint of limited access to large-$N$ simulations and an unknown triggering probability $t(\theta, \phi_i)$. Consider a scenario where we run $K$ simulation trials with different design parameters $\theta$, indexed by $k$; each simulation trial contains $N$ events indexed by $i$. The RESuM model includes three components: a Conditional Neural Process (CNP) (Garnelo et al., 2018) model trained at the event level; a Multi-Fidelity Gaussian Process (MFGP) (Kennedy and O'Hagan, 2000; Qian and Wu, 2008) model trained at the simulation-trial level; and active learning techniques to sequentially sample the parameter space after training. The conceptual framework and details of our model design are outlined in the following subsections.

4.1 BAYESIAN PRIOR KNOWLEDGE WITH CONDITIONAL NEURAL PROCESS

The random variable $X_{ki}$ represents whether the $i$-th event triggered a signal or not. In traditional particle physics, the value of $X_{ki}$ is determined through a Monte Carlo simulation process: first, a parameter $\phi_{ki}$ is sampled from the distribution $g(\phi)$ to generate the event. This event then propagates through the detector, characterized by the design parameter $\theta_k$. The outcome of the simulation, which implicitly involves the joint distribution $t(\theta_k, \phi_{ki})$, is observed only as $X_{ki}$. As discussed before, $X_{ki}$ can only be 0 or 1. In the small-$N$ scenario, the root cause of the discreteness of $y$ is this binary nature: 1 if a signal is triggered, 0 if not. This produces significant statistical variance in $y$.
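To illustrate the small-$N$ versus large-$N$ behavior numerically, here is a minimal simulation sketch (our own; the value of the triggering probability is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
t_bar = 1e-3  # assumed expected triggering probability t̄(θ)

for N in (1_000, 1_000_000):
    # m ~ Binomial(N, t̄), approximately Poisson(N * t̄) when m << N.
    m = rng.binomial(N, t_bar, size=10_000)
    y = m / N
    # At small N, y is discrete (often exactly 0) and highly variable;
    # at large N, y concentrates around t̄ with shrinking spread.
    print(f"N={N}: mean(y)={y.mean():.2e}, std(y)={y.std():.2e}, "
          f"P(m=0)={np.mean(m == 0):.3f}")
```

For $N = 1{,}000$ and $\bar t = 10^{-3}$, roughly 37% of trials yield $m = 0$, so the raw metric $y$ carries little gradient information per trial; this is exactly the regime the CNP score is designed to smooth out.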
Suppose we want to model this simulation process with a Bernoulli distribution:

$$X_{ki} \sim \mathrm{Bernoulli}(t(\theta_k, \phi_{ki})) \quad (4)$$

The goal of incorporating prior knowledge is to smooth the binary $X_{ki}$ into a continuous, floating-point score $\beta$ between 0 and 1. The score $\beta_{ki}$ should approximate $t(\theta_k, \phi_{ki})$ given design parameter $\theta_k$ and event-specific parameter $\phi_{ki}$. This work provides an alternative solution by adopting an idea similar to the CNP model. A CNP works by learning a representation of input-output relationships from context data to predict outputs for new inputs (Garnelo et al., 2018). In our case, the inputs are $\theta_k$ and $\phi_{ki}$, and the output is the random variable $X_{ki}$. The random process that generates $X_{ki}$ from the inputs is the Bernoulli process controlled by $t(\theta, \phi)$. We then adopt the same representation-learning idea used in the CNP, which approximates the random process by sampling from a Gaussian distribution conditioned on observed data. The mean and variance are modeled with neural networks:

$$\mathrm{Bernoulli}(t(\theta, \phi)) \approx \mathrm{Bernoulli}(\beta) \quad (5)$$

$$\beta \sim \mathcal{N}\big(\mu_{NN}(\theta, \phi, w),\, \sigma^2_{NN}(\theta, \phi, w)\big) \,\big|\, X_{ki}, \phi_{ki}, \theta_k \quad (6)$$

where $\beta$ is the CNP-generated score in general and $\beta_{ki} = \beta \mid \theta_k, \phi_{ki}$ is the CNP score of a specific event (the $i$-th event in the $k$-th simulation trial). The nuisance parameters $w$ represent the trainable parameters of the neural network (Garnelo et al., 2018), including the weights and biases, which are optimized during training by maximizing the likelihood of the observed data. Importantly, the neural networks are not trained to predict the binary observable $X$, but rather to estimate the continuous floating-point score $\beta$. A comprehensive description of the CNP model, along with the interpretation of the score $\beta$ and the associated loss function (likelihood), is provided in Appendix 13. The score $\beta$ for each simulated event serves as prior information that is incorporated into the MFGP surrogate model.

4.2 MODEL DESCRIPTION

Building on the conceptual framework described in 4.1, we provide an end-to-end overview of RESuM as shown in Figure 1. We generate two types of simulations: low-fidelity (LF) and high-fidelity (HF). Detailed descriptions of these simulations can be found in Section 5.1. The primary distinction between them lies in the number of simulated events $N$, where $N_{HF} \gg N_{LF}$. Another key difference is the distribution $g(\phi)$ from which the parameter $\phi_i$ of each event is sampled; the HF simulation uses a more complicated, physics-oriented $g(\phi)$. The low computational cost of the LF simulation allows us to simulate more trials, thereby exploring a broader range of $\theta$. The first step is to train the CNP model. The CNP comprises three primary components: an encoder, an aggregator, and a decoder. The parameters $\theta_k$, $\phi_{ki}$, and $X_{ki}$ of each simulated event are first concatenated into a context point. The encoder, implemented as a Multi-Layer Perceptron (MLP), transforms each context point into a low-dimensional representation.
These representations are then aggregated through averaging to form a unified representation of $t(\theta)$. The decoder uses $t(\theta)$ and the $\phi_{ki}$ of new data to output parameters $\mu_{ki}$ and $\sigma^2_{ki}$ for each event $i$. We then use $\mu_{ki}$ and $\sigma^2_{ki}$ to form a normal distribution and sample a CNP score $\beta_{ki}$ from it. The scores $\beta_{ki}$ are chosen to naturally fit a normal-like distribution bounded between 0 and 1. Since the CNP is trained at the event level, $\beta_{ki}$ is the same regardless of whether the event is generated in the HF or the LF simulation. Based on the trained CNP model, the next step involves calculating three design metrics at different fidelities. The first is $y_{Raw} = m/N$ from HF simulations, which is the ultimate design metric we want our surrogate model to emulate.

Figure 1: Overview of the RESuM framework for solving RED problems. The left side illustrates the CNP used for modeling both LF and HF simulation data. The CNP aggregates event-specific parameters $\phi_i$ and design parameters $\theta$ from LF and HF simulations to produce $y^{LF}_{CNP}$ and $y^{HF}_{CNP}$, which, together with the HF simulation output $y^{HF}_{Raw}$, serve as inputs to the surrogate model. The right side shows the MFGP that combines predictions $\hat y_{CNP}$ from LF and HF to estimate the HF design metric $\hat y^{HF}_{Raw}$.

The second metric is also derived from HF simulations but is defined as the average CNP score of all simulated events:

$$y_{CNP} = \frac{1}{N} \sum_{i=1}^{N} \beta_{ki} \quad (7)$$

The third metric is $y_{CNP}$ calculated over LF simulations. These three design metrics are then incorporated into an MFGP model to train the surrogate model. Co-kriging is used to account for correlations among the different design metrics. The mathematical details of the MFGP can be found in Appendix 11. After training the MFGP model, we adopt active learning to select new sampling points $\theta_{new}$ at which to generate $y_{Raw}$ with HF simulations. Since HF simulation is expensive, to determine which point to collect next, we use a gradient-based optimizer to find $\theta_{n+1} = \arg\max_{\theta \in \mathcal{X}} I(\theta)$ (Paleyes et al., 2023). The acquisition function $I(\theta)$ determines the next data point to explore by balancing exploration (high variance) and exploitation (high mean). We chose the integrated variance reduction method, where the next point $\theta_{n+1}$ is selected to maximally reduce the total variance of the model (Sacks et al., 1989). More detail about the active learning method can be found in Appendix 12.

5 EXPERIMENT AND RESULT

The Large Enriched Germanium Experiment for Neutrinoless Double-Beta Decay (LEGEND) is a next-generation, pioneering experiment in the search for NLDBD, with over 300 international collaborators. One of the major background event types in LEGEND is $^{77(m)}$Ge, which is produced through a three-step physics process: (1) **Cosmic muons** are high-energy particles that constantly shower down from the sky. (2) When cosmic muons enter the LEGEND outer detector, they can interact with materials in the outer detector, generating many **neutrons**. (3) Neutrons then propagate through the LEGEND detector system. If a neutron enters the inner detector, it has a chance to produce $^{77(m)}$**Ge** by neutron capture, which is the primary background of concern.
$^{77(m)}$Ge is particularly challenging because it can mimic NLDBD events, making it nearly impossible to distinguish and reject once produced[3]. The most viable solution to mitigate this background is the design of a neutron moderator, i.e., a neutron shield that slows down neutrons and reduces the neutron flux entering the inner detector system between steps (2) and (3). Figure 2 provides an overview of the LEGEND detector and a proposed neutron moderator design. Our goal with RESuM is to optimize the geometric design of the neutron moderator to prevent most neutrons from leaking into the inner detector. The optimization process consists of two key steps: generating simulations under different design parameters, and adopting the RESuM model to surrogate and identify the optimal design of neutron moderators.

[3] Currently, there are no efficient methods to eliminate $^{77(m)}$Ge once created, aside from employing complex active tagging algorithms (Neuberger et al., 2021) that introduce additional dead time.
Idea Generation Category: Conceptual Integration
ID: lqTILjL6lP
# HARMAUG: EFFECTIVE DATA AUGMENTATION FOR KNOWLEDGE DISTILLATION OF SAFETY GUARD MODELS

**Seanie Lee**[1∗] **Haebin Seong**[2∗] **Dong Bok Lee**[1] **Minki Kang**[1] **Xiaoyin Chen**[3,4] **Dominik Wagner**[5] **Yoshua Bengio**[3,4,6] **Juho Lee**[1] **Sung Ju Hwang**[1,7] 1 KAIST 2 Theori 3 Université de Montréal 4 Mila – Québec AI Institute 5 Technische Hochschule Nürnberg Georg Simon Ohm 6 CIFAR AI Chair 7 DeepAuto.ai lsnfamily02@kaist.ac.kr, hbseong97@gmail.com, {markhi, zzx1133}@kaist.ac.kr, xiaoyin.chen@mila.quebec, dominik.wagner@th-nuernberg.de, yoshua.bengio@mila.quebec, {juholee, sjhwang82}@kaist.ac.kr

ABSTRACT

Safety guard models that detect malicious queries aimed at large language models (LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications. However, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency. To reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. To bridge the gap between small and large models, we propose **HarmAug**, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as "Make a single harmful instruction prompt that would elicit offensive content", we add an affirmative prefix (e.g., "I have an idea for a prompt:") to the LLM's response. This encourages the LLM to continue generating the rest of the response, leading to the sampling of harmful instructions. Another LLM generates a response to the harmful instruction, and the teacher model labels the instruction-response pair. We empirically show that our HarmAug outperforms other relevant baselines. Moreover, a 435-million-parameter safety guard model trained with HarmAug achieves an F1 score comparable to larger models with over 7 billion parameters, and even outperforms them in AUPRC, while operating at less than 25% of their computational cost. Our code, safety guard model, and synthetic dataset are publicly available (https://github.com/hbseong97/HarmAug).

1 INTRODUCTION

The deployment of large language models (LLMs) in the wild requires precautions (Lee, 2016; Bender et al., 2021). Malicious users can exploit vulnerabilities in LLMs, including those fine-tuned with safety alignment, and jailbreak the models to generate harmful content (Zou et al., 2023; Liu et al., 2024a; Paulus et al., 2024; Yuan et al., 2024). To improve upon the built-in guardrails of LLMs, additional LLM-based safety guard models (Inan et al., 2023; Han et al., 2024) are deployed to detect and block malicious jailbreak attempts aimed at bypassing the model's safeguards. Indeed, safety guard models have successfully defended against many jailbreak attacks (Chao et al., 2024). However, deploying large safety guard models, which have over 7 billion parameters, alongside an LLM is impractical on mobile devices due to their expensive memory cost and latency.
Integrating a 7-billion-parameter LLM into current mobile devices, such as the iPhone 15 or Google Pixel 8 Pro, remains infeasible, even with 8-bit weight quantization (Liu et al., 2024b). These devices are equipped with 6GB to 12GB of DRAM (Hristov, 2022; Google, 2023), and mobile applications are usually restricted to utilizing only a small portion of this available memory. This underscores the need for sub-billion parameter safety guard models that can efficiently maintain robust defenses.

(Figure 1: Using exemplars from labeled datasets and a prompt for generating harmful instructions, we add an affirmative prefix "I have an idea for a prompt:" to an LLM's response. The LLM completes the response with a harmful instruction, while another LLM samples harmful and refusal responses to the instruction. LlamaGuard-3 labels these pairs and the synthetic data is used to distill the model into a 435M-parameter DeBERTa.)

Another advantage of smaller safety guard models is that they enable efficient red-teaming and further fine-tuning. Red-teaming refers to discovering adversarial prompts that can elicit harmful responses from LLMs before deployment (Perez et al., 2022). This process involves iteratively querying a prompt to the LLM and evaluating the harmfulness of that prompt with the safety guard model, which is time- and memory-consuming due to the expensive calls to both the LLM and the large safety guard model. Utilizing a small and efficient safety guard model can help reduce these high costs. During the deployment of the safety guard model alongside the LLM, the safety guard model needs to be regularly updated to defend against new attacks. A smaller safety guard model can also help save costs associated with further fine-tuning the model to detect those attacks. To achieve efficiency, we distill a large safety guard model (the teacher) into a smaller model using a labeled dataset of instruction-response pairs with binary labels indicating the harmfulness of each pair. However, the limited diversity of harmful instructions in the existing dataset causes the smaller model to underperform compared to the teacher model.
To address this limitation, we propose a data augmentation method called **HarmAug**, which involves prompting an LLM to generate additional harmful instructions.

(Figure 2: Avg. AUPRC of each model as a function of its size, comparing existing guard models, DeBERTa + baseline KD models, and DeBERTa + HarmAug KD models.)

Since the LLM is fine-tuned to meet safety constraints, naive prompting is ineffective, as the model often refuses to generate harmful instructions. To bypass this safety mechanism, we use the prompt, "Make a single harmful instruction that would elicit offensive content," along with exemplars from the labeled dataset, and prepend an affirmative prefix (e.g., "I have an idea for a prompt:") to the LLM's response, as illustrated in Fig. 1. This encourages the model to complete the response, effectively generating harmful instructions. A second LLM generates harmful and refusal responses to these instructions, and the teacher safety guard model labels the instruction-response pairs. These synthetic samples are then augmented with the existing dataset and used to distill the teacher model into a smaller DeBERTa (He et al., 2023) model. We empirically show that our proposed HarmAug outperforms other relevant augmentation approaches on OpenAI Moderation (Markov et al., 2023), ToxicChat (Lin et al., 2023),
HarmBench (Mazeika et al., 2024), and WildGuardMix (Han et al., 2024) datasets. A 435-million-parameter DeBERTa model trained with our HarmAug achieves an F1 score comparable to large safety guard models with over 7 billion parameters. As shown in Fig. 2, our model even outperforms them in terms of Area Under the Precision-Recall Curve (AUPRC), while reducing the computational cost of the teacher by 75% (Table 2). Moreover, our efficient safety guard model, employed as a reward model for red-teaming, reduces the red-teaming runtime by half while still effectively discovering adversarial prompts (Table 3). Lastly, our model effectively detects jailbreak attacks and can be efficiently fine-tuned to defend against new attacks (Fig. 4b and Fig. 5). Our contributions and findings are summarized as follows:

- For efficient deployment of safety guard models in the wild, we propose to distill large models into small sub-billion parameter models.
- To bridge the performance gap between small and large safety guard models, we propose a data augmentation method where an LLM is prompted to complete the remainder of a prepended affirmative response to a prompt describing how to generate harmful instructions.
- We empirically validate that a small model trained with our data augmentation method achieves a performance comparable to larger models while significantly reducing computational cost.
- [We release our synthetic dataset, safety guard model, and code as open-source resources](https://huggingface.co/datasets/AnonHB/HarmAug_generated_dataset), allowing the research community to fully access, reproduce, and extend our work on improving detection of harmful conversations and computational efficiency of safety guard models.

2 Related Work

**Safety guard models.** The detection of harmful, offensive, and toxic language has been a subject of extensive research. Deep models (Caselli et al., 2021; Hada et al., 2021; Vidgen et al., 2021) have been widely employed to identify hate speech on social media platforms. Recently, instruction-tuned LLMs have been prompted as safety guards to assess the harmfulness of conversations between users and LLMs (Chao et al., 2024). In addition to prompting, several works (Inan et al., 2023; Ghosh et al., 2024; Han et al., 2024) have curated datasets and fine-tuned LLMs on these datasets to detect harmful sentences. However, deploying large safety guard models to detect harmful responses from another deployed LLM in real-world applications (e.g., on mobile devices) is impractical due to their high latency and memory requirements.

**Data augmentation.** There is an extensive body of literature on data augmentation in the text domain. Various methods have been proposed, including replacing words with synonyms (Wei & Zou, 2019), back-translation using neural machine translation (Sennrich et al., 2016), masking and reconstructing tokens with a masked language model (Ng et al., 2020), as well as perturbing word embeddings (Lee et al., 2021). Recently, leveraging LLMs for synthetic data generation has gained popularity. Wang et al. (2022) generate samples using LLMs conditioned on keywords and target labels. For example, Wang et al. (2023) sample exemplars from a pool and perform in-context learning to synthesize samples. However, these prompting methods are not directly applicable to our objective of generating harmful instructions.
The LLM's safety alignment causes it to refuse the generation of harmful content when prompted using naive methods.

**Jailbreaks.** The term jailbreak generally refers to bypassing the built-in safety guard of models. Initially, jailbreaks were discovered through manual trial and error, exploiting the varied objectives for which models were trained (Wei et al., 2023a). Recently, automated jailbreak attacks have become more prevalent. These attacks employ techniques such as genetic algorithms (Liu et al., 2024a), iterative gradient-based methods (Zou et al., 2023), automated prompting with auxiliary LLMs (Chao et al., 2023), in-context learning (Wei et al., 2023b), or training an LLM for jailbreaking prefix generation (Paulus et al., 2024) to optimize query prompts. In this work, we circumvent the safety guardrails of LLMs and prompt the LLM to sample harmful instructions.

**Knowledge distillation (KD).** KD aims to compress a large teacher model into a smaller student model while retaining the performance of the teacher model (Hinton et al., 2014). It trains the student model under the guidance of the teacher through various methods, such as minimizing the Kullback-Leibler divergence between their outputs (Liang et al., 2021), matching hidden representations (Jiao et al., 2020; Sun et al., 2019), matching attention scores (Wang et al., 2020), or enforcing the student to directly imitate the teacher's predictions (Kim & Rush, 2016; Ho et al., 2023; Kang et al., 2024).

3 Method

3.1 Preliminaries

**Problem Definition.** In our problem setup, we assume a training dataset $D = \{(\mathbf{x}_i, \mathbf{y}_i, c_i)\}_{i=1}^{n}$, where $\mathbf{x}_i$ is an input sequence (instruction), $\mathbf{y}_i$ is the response to the instruction, and $c_i \in \{0, 1\}$ is a binary label indicating the harmfulness of the pair $(\mathbf{x}_i, \mathbf{y}_i)$. Additionally, we define a safety guard model $p_\theta(\cdot \mid \mathbf{x}, \mathbf{y})$ parameterized by $\theta$, which assigns a probability to the pair of sequences $(\mathbf{x}, \mathbf{y})$ being harmful. Our goal is to distill the teacher $p_\theta$ into a smaller safety guard model $q_\phi(\cdot \mid \mathbf{x}, \mathbf{y})$, while minimizing accuracy degradation, to improve the efficiency of the safety guard model in the wild. The efficiency of this distilled safety guard model reduces the computational cost, i.e., latency, floating point operations (FLOPs), and memory usage, during both the development and deployment phases of LLMs. Before deploying an LLM, developers typically conduct iterative prompting to generate harmful responses and evaluate their harmfulness with a safety guard model to identify and address vulnerabilities (Perez et al., 2022). However, this approach is resource-intensive and costly. During LLM deployment, the safety guard model is employed alongside the LLM to detect harmful responses generated from malicious user input. Moreover, the safety guard model needs to be regularly updated to effectively counter newly emerging jailbreak attacks.
**Learning Objective.** A widely used objective for knowledge distillation (Hinton et al., 2014) is to enforce the student $q_\phi$ to imitate the output of the teacher $p_\theta$ while minimizing the negative log-likelihood (binary cross-entropy; BCE) of the training dataset $D$:

$$\operatorname*{minimize}_{\phi} \; \frac{1}{n}\sum_{i=1}^{n} \Big[(1-\lambda)\cdot D_{\mathrm{KL}}\big(p_\theta(\cdot \mid \mathbf{x}_i, \mathbf{y}_i)\,\|\,q_\phi(\cdot \mid \mathbf{x}_i, \mathbf{y}_i)\big) + \lambda \cdot \mathcal{L}_{\mathrm{BCE}}(\mathbf{x}_i, \mathbf{y}_i, c_i)\Big], \quad (1)$$

$$\mathcal{L}_{\mathrm{BCE}}(\mathbf{x}_i, \mathbf{y}_i, c_i) = -\,c_i \cdot \log q_\phi(c=1 \mid \mathbf{x}_i, \mathbf{y}_i) - (1-c_i)\cdot \log q_\phi(c=0 \mid \mathbf{x}_i, \mathbf{y}_i),$$

where $D_{\mathrm{KL}}$ denotes the Kullback-Leibler (KL) divergence and $\lambda \in [0, 1]$ is a hyperparameter that controls the weighting between the KL divergence and the binary cross-entropy loss.

3.2 Data Augmentation: HarmAug

Training the student model on the training dataset $D$ with Eq. (1) is suboptimal, as it easily overfits to the training data distribution and fails to generalize in detecting new malicious instructions under distribution shifts (Quionero-Candela et al., 2009; Subbaswamy et al., 2019). To address this issue, we propose a data augmentation method that leverages LLMs to generate harmful instructions $\mathbf{x}$ and their corresponding responses $\mathbf{y}$. Suppose we are given an LLM $p_{\mathrm{LLM}}$, pretrained on large-scale text corpora and fine-tuned with reinforcement learning from human feedback (RLHF; Christiano et al., 2017). The LLM has acquired significant knowledge of harmfulness since the pretraining corpora contain a substantial amount of biased and offensive content (Bender et al., 2021). However, naively prompting the LLM to generate new harmful instructions is ineffective due to its built-in safety guardrails. During the RLHF fine-tuning stage, the LLM has been explicitly trained to refuse to generate offensive content (Bai et al., 2022a;b; Touvron et al., 2023), which leads it to also reject generating harmful instructions.

**Prefix attack to bypass safety guardrails of LLMs.** To address this issue, we propose a simple prefix attack to bypass the safety guardrail of $p_{\mathrm{LLM}}$. In addition to a set of $k$ exemplars $\{\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_k}\}$ randomly sampled from $D$, similar to (Wei et al., 2023b), and a prompt describing how to generate harmful instructions, such as "Make a single harmful instruction prompt that would elicit offensive content.", we prepend an affirmative prefix (e.g., "I have an idea for a prompt:") to the LLM's response. This prefix attack is similar to prefix injection (Wei et al., 2023a), which asks the LLM to answer with a prefix by adding guidelines to the user prompt. However, our attack prefills the prefix in the LLM's response and enforces the LLM to complete the rest of the response. Given the prompt with the affirmative prefix, denoted as $\mathbf{z}_j$, the LLM completes the response, i.e., $\hat{\mathbf{x}}_j \sim p_{\mathrm{LLM}}(\cdot \mid \mathbf{z}_j)$, leading to the sampling of harmful instructions. We refer to our method as **HarmAug**. Empirically, we found that our prefix attack effectively bypasses the built-in guardrails of the LLM, allowing for the generation of harmful instructions (Table 4). This jailbreak vulnerability may be attributed to a weakness in the current RLHF process for safety alignment.
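To ground the two pieces just described, here is a minimal, hedged sketch of the prefix attack and of the distillation objective in Eq. (1). The model choice, the chat-template handling, and all function and tensor names (`sample_harmful_instruction`, `distillation_loss`, `student_logits`, `teacher_probs`) are our own illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
llm = AutoModelForCausalLM.from_pretrained("google/gemma-1.1-2b-it")

def sample_harmful_instruction(exemplars, prefix="I have an idea for a prompt:"):
    """Prefix attack: prefill the assistant turn with an affirmative prefix so
    the LLM continues the response instead of refusing."""
    user = ("Make a single harmful instruction prompt that would elicit "
            "offensive content.\nExamples:\n" + "\n".join(exemplars))
    chat = [{"role": "user", "content": user},
            {"role": "assistant", "content": prefix}]
    ids = tok.apply_chat_template(chat, continue_final_message=True,
                                  return_tensors="pt")
    out = llm.generate(ids, max_new_tokens=128, do_sample=True, top_p=0.9)
    return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

def distillation_loss(student_logits, teacher_probs, labels, lam=0.5):
    """Eq. (1): (1 - lam) * KL(p_theta || q_phi) + lam * BCE on gold labels.

    student_logits: (batch, 2) raw scores from q_phi over {harmless, harmful};
    teacher_probs:  (batch, 2) probabilities from the teacher p_theta;
    labels:         (batch,) binary harmfulness labels c_i.
    """
    log_q = F.log_softmax(student_logits, dim=-1)
    kl = F.kl_div(log_q, teacher_probs, reduction="batchmean")
    bce = F.nll_loss(log_q, labels)  # -log q_phi(c_i | x_i, y_i)
    return (1.0 - lam) * kl + lam * bce
```

Setting `lam` between 0 and 1 trades off matching the teacher's soft outputs against fitting the binary labels, mirroring the role of $\lambda$ in Eq. (1).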
Humans rarely respond with a refusal immediately following an affirmative answer to a request, and the LLM is supervised fine-tuned to replicate such human behavior before the RLHF process. As a result, the model is heavily biased towards generating refusal responses to harmful instructions, but it is rarely penalized for generating responses after an affirmative prefix during RLHF, despite the prompt being harmful.

After sampling synthetic harmful instructions, we utilize two different LLMs to generate responses to those instructions. The first LLM generates a refusal, denoted as $\hat{\mathbf{y}}_{j1}$, to each harmful instruction $\hat{\mathbf{x}}_j$. Similarly, the second LLM, which is fine-tuned on few-shot adversarial examples, samples a harmful response $\hat{\mathbf{y}}_{j2}$ to each $\hat{\mathbf{x}}_j$. Additionally, we pair the prompt with an empty sequence $\hat{\mathbf{y}}_{j3}$. The rationale for including the empty sequence is to train versatile safety guard models capable of handling both instruction classification and instruction-response pair classification tasks. Then, the teacher $p_\theta$ labels each instruction-response pair:

$$c_{jl} = \mathbb{1}\{p_\theta(c=1 \mid \hat{\mathbf{x}}_j, \hat{\mathbf{y}}_{jl}) > \tau\} \quad (2)$$

for $l \in \{1, 2, 3\}$, where $\mathbb{1}$ is an indicator function and $\tau \in (0, 1)$ is a threshold for the pair of sequences to be classified as harmful. Finally, we augment the training dataset with our synthetic dataset $\hat{D} = \{(\hat{\mathbf{x}}_j, \hat{\mathbf{y}}_{jl}, c_{jl})_{l=1}^{3}\}_{j=1}^{m}$ and train the small safety guard model $q_\phi$ with Eq. (1).

4 Experiments

We first introduce datasets, baselines, and evaluation metrics, followed by experimental results on multiple benchmarks (Sec. 4.1), red-teaming language models (Sec. 4.2), further fine-tuning against new jailbreak attacks (Sec. 4.3), and ablations (Sec. 4.4).

**Datasets.** For the training dataset $D$, we use the train split of WildGuardMix (Han et al., 2024) combined with our synthetic dataset. We evaluate the safety guard models on four public benchmark datasets: OpenAI Moderation (OAI; Markov et al., 2023), ToxicChat (Lin et al., 2023), HarmBench (Mazeika et al., 2024), and the test split of WildGuardMix. The first two datasets are targeted at instruction classification (i.e., the response is always an empty sequence), while the others are designed for instruction-response pair classification.

**Safety Guard Models.** We use [DeBERTa-v3-large (He et al., 2023)](https://huggingface.co/microsoft/deberta-v3-large) as the language model (LM) backbone for the safety guard model $q_\phi$ and compare our method against the following baselines:

1. **EDA** (Wei & Zou, 2019): This method employs synonym replacement, random insertion, random swap, and random deletion to augment the dataset $D$ for training DeBERTa.
2. **GFN** (Lee et al., 2024): This approach trains an LM with GFlowNet (GFN; Bengio et al., 2021) to sample harmful instructions proportional to the mixture of the harmful score distribution induced by the safety guard model $p_\theta$ and a reference language model's likelihood. We augment the training dataset $D$ with instructions generated by the LM fine-tuned with GFlowNet and train DeBERTa on the augmented dataset. More details are provided in Appendix B.2.
3. **Existing safety guard models**: These models include LMs fine-tuned for safety guarding, such as [RoBERTa-R4 (Vidgen et al., 2021)](https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target), HateBERT (Hartvigsen et al., 2022), Llama-Guard-1, [Llama-Guard-2](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B), Llama-Guard-3 (Inan et al., 2023), WildGuard (Han et al., 2024), and [Aegis-Guard (Ghosh et al., 2024)](https://huggingface.co/nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0).

**Evaluation metrics.** Following prior works (Inan et al., 2023; Han et al., 2024), we evaluate the safety guard models using F1 score and AUPRC. More details are provided in Appendix C.

4.1 Main Results

**Experimental setups.** We use Llama-Guard-3 as the teacher safety guard model $p_\theta$ and DeBERTa-v3-large (He et al., 2023) as the student model $q_\phi$. We utilize [Gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) for $p_{\mathrm{LLM}}$ to generate 100,000 harmful instructions, except for the ablation studies in Table 5 and Fig. 7.

Table 1: We run experiments three times with different random seeds and report the average of F1 and AUPRC scores. The best results are bolded and the second-best are underlined. For results including standard deviations, please refer to Table 9 in Appendix D.

| Model | Size | OAI F1 | OAI AUPRC | ToxicChat F1 | ToxicChat AUPRC | HarmBench F1 | HarmBench AUPRC | WildGuardMix F1 | WildGuardMix AUPRC | Avg. F1 | Avg. AUPRC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-Guard-1 | 7B | 0.7520 | 0.8452 | 0.5818 | 0.7001 | 0.5012 | 0.8067 | 0.4793 | 0.7204 | 0.5786 | 0.7681 |
| Llama-Guard-2 | 8B | **0.8139** | 0.8824 | 0.4233 | 0.4368 | **0.8610** | 0.8945 | 0.6870 | 0.7833 | 0.6963 | 0.7492 |
| Llama-Guard-3 | 8B | 0.8061 | **0.8869** | 0.4859 | 0.4823 | 0.8551 | **0.8999** | 0.6852 | 0.8129 | 0.7080 | 0.7720 |
| WildGuard¹ | 7B | 0.7268 | n/a | 0.6547 | n/a | 0.8596 | n/a | 0.7504 | n/a | **0.7479** | n/a |
| Aegis-Guard | 7B | 0.6982 | 0.8532 | **0.6687** | 0.7455 | 0.7805 | 0.8178 | 0.6686 | 0.7386 | 0.7040 | 0.7888 |
| RoBERTa-R4 | 125M | 0.5625 | 0.6970 | 0.2217 | 0.3339 | 0.0288 | 0.6958 | 0.0477 | 0.3925 | 0.2152 | 0.5298 |
| HateBERT | 110M | 0.6442 | 0.7443 | 0.3148 | 0.4867 | 0.1423 | 0.6669 | 0.0789 | 0.3763 | 0.2951 | 0.5685 |
| OpenAI Moderation | n/a | 0.7440 | 0.8746 | 0.4480 | 0.6206 | 0.5768 | 0.7763 | 0.4881 | 0.6393 | 0.5644 | 0.7089 |
| DeBERTa | 435M | 0.7092 | 0.7869 | 0.6118 | 0.6837 | 0.8379 | 0.8806 | 0.7507 | 0.8337 | 0.7274 | 0.7962 |
| DeBERTa + EDA | 435M | 0.6858 | 0.8394 | 0.5964 | 0.7141 | 0.8430 | 0.8793 | 0.7279 | 0.8315 | 0.7133 | 0.8161 |
| DeBERTa + GFN | 435M | 0.6939 | 0.7793 | 0.6259 | 0.7191 | 0.8463 | 0.8842 | 0.7443 | **0.8376** | 0.7276 | 0.8050 |
| **DeBERTa + HarmAug** | 435M | 0.7236 | 0.8791 | 0.6283 | **0.7553** | 0.8331 | 0.8841 | **0.7576** | 0.8265 | 0.7357 | **0.8362** |

Table 2: Computational cost of our model on the WildGuardMix test split, compared to Llama-Guard-3 and WildGuard.
We measure the actual total inference cost on an [A100 GPU instance of RunPod](https://www.runpod.io/).

| Model | F1 (↑) | Size (↓) | FLOPs/token (↓) | Latency/token (↓) | Peak Memory (↓) | Monetary Cost (↓) |
|---|---|---|---|---|---|---|
| WildGuard | **0.7504** (107%) | 7B (88%) | 131.87 G (106%) | 722.08 µs (418%) | 22.63 GB (79%) | $0.180 (216%) |
| Llama-Guard-3 | 0.6998 (100%) | 8B (100%) | 124.01 G (100%) | 172.62 µs (100%) | 28.82 GB (100%) | $0.083 (100%) |
| **DeBERTa + HarmAug** | **0.7576 (108%)** | **435M (5%)** | **743.55 M (0.6%)** | **43.22 µs (25%)** | **3.37 GB (12%)** | **$0.022 (26%)** |

For each generated instruction, we generate a refusal response and a harmful response with [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [boyiwei/pure_bad_100-7b-full](https://huggingface.co/boyiwei/pure_bad_100-7b-full), respectively. Llama-Guard-3 then labels each instruction-response pair. The threshold for the harmfulness score $\tau$ is set to 0.5. We fine-tune DeBERTa-v3-large for 3 epochs with a batch size of 256, a weight decay of 0.1, $\lambda$ of 0.5, and a learning rate of $3 \cdot 10^{-5}$. We use the AdamW (Loshchilov & Hutter, 2019) optimizer and linearly decay the learning rate from the initial value $3 \cdot 10^{-5}$ to 0.

**Quantitative Results.** As shown in Table 1, our HarmAug significantly outperforms other augmentation baselines, including GFN and EDA. Remarkably, on the OAI and ToxicChat benchmark datasets, DeBERTa trained with our data augmentation method HarmAug achieves a higher AUPRC than any other model, including its teacher Llama-Guard-3, as well as other models with 7 or 8 billion parameters. Additionally, our model, comprising only 435 million parameters, shows the highest average AUPRC and the second-best average F1 score among all evaluated models. These results demonstrate the effectiveness and efficiency of our approach, challenging the trend of fine-tuning large autoregressive models for safety tasks, which is both slow and costly.

**Computational Cost.** To evaluate the efficiency of our model relative to WildGuard and the teacher model Llama-Guard-3, we measure the operational costs of each model by analyzing the average FLOPs and latency per token, peak GPU memory usage, and the financial expense of running the models on an [A100 GPU instance from RunPod](https://www.runpod.io/) while processing all instances in the test split of WildGuardMix. As shown in Table 2, our model significantly reduces the monetary cost, FLOPs, latency, and peak memory usage relative to WildGuard and Llama-Guard-3, while achieving a higher or comparable F1 score. These experimental results highlight the efficiency and efficacy of our safety guard model.

**Qualitative Results.** To study how our data augmentation method changes the distribution of instructions, we cluster the prompts from the union of the original dataset $D$ and our synthetic dataset $\hat{D}$, and compare the result against clustering only the original dataset. We use Hugging Face's [text clustering library](https://github.com/huggingface/text-clustering), which embeds instructions with a language model and runs DBSCAN (Ester et al., 1996) for clustering. As shown in Fig. 3, our data augmentation significantly increases the number of clusters from 65 to 332. This suggests our data augmentation method, HarmAug, improves the diversity of instructions in the training dataset.
Generated instructions are presented in Table 12 of Appendix E.

4.2 Case Study I: Efficient Reward Models for Red-Teaming Language Models

**Background.** Red-teaming, which involves discovering diverse prompts that can elicit harmful responses from a target LLM $p_{\mathrm{target}}$ (Perez et al., 2022), aims to discover and address potential harmful effects of LLMs prior to their deployment. (Footnote 1: We report "n/a" for AUPRC since the [WildGuard library](https://github.com/allenai/wildguard) does not provide the probability of harmfulness.)

(Figure 3: Clustering results of the original dataset (a, without HarmAug) and our augmented dataset (b, with HarmAug). Our data augmentation HarmAug significantly increases the number of clusters, identified by DBSCAN, from 65 to 332.)

Table 3: The prompt generator $p_\psi$, trained with each small safety guard model, samples 1,024 prompts. We assess the harmfulness of the prompts using the oracle safety guard model $p_\theta$.

| Reward Model | Train Reward (↑) | Test Reward (↑) | Diversity (↑) | Runtime |
|---|---|---|---|---|
| Llama-Guard-3 (Oracle) | - | 0.99 | 0.65 | 17h 23m |
| RoBERTa-R4 | **0.84** | 0.00 | 0.55 | 12h 19m |
| HateBERT | 0.84 | 0.00 | 0.59 | 8h 32m |
| **DeBERTa + HarmAug** | 0.83 | **0.82** | **0.74** | 9h 8m |

However, this process is computationally expensive. Previous works (Perez et al., 2022; Hong et al., 2024; Lee et al., 2024) iteratively train a language model policy $p_\psi$ to generate prompts, using harmfulness scores from LLM-based safety guards like Llama-Guard-3 as rewards. However, this process incurs significant computational costs. Lee et al. (2024) propose to fine-tune the language model $p_\psi$ with the GFlowNet objective (Bengio et al., 2021), which allows sampling a prompt $\mathbf{x}$ proportional to a reward distribution. The reward of the prompt $\mathbf{x}$ is defined as:

$$R(\mathbf{x}) = \exp\Big(\frac{1}{\beta}\,\mathbb{E}_{\mathbf{y}\sim p_{\mathrm{target}}(\mathbf{y}\mid\mathbf{x})}\big[\log p_\theta(c=1 \mid \mathbf{x}, \mathbf{y})\big]\Big) \cdot p_{\mathrm{ref}}(\mathbf{x})^{1/\gamma}, \quad (3)$$

where $\beta$ and $\gamma$ are positive constants that control the peakiness of the reward, $p_\theta$ is a safety guard model, and $p_{\mathrm{ref}}$ is a reference language model measuring the likelihood of $\mathbf{x}$ to enforce the generation of natural sentences. The language model $p_\psi$ is then trained to minimize the following trajectory balance objective (Malkin et al., 2022):

$$\mathcal{L}_{\mathrm{TB}}(\mathbf{x}; \psi) = \Big(\log \frac{Z_\psi \cdot p_\psi(\mathbf{x})}{R(\mathbf{x})}\Big)^2, \quad (4)$$

where $Z_\psi > 0$ is a learnable scalar approximating the partition function. Note that the training example $\mathbf{x}$ can be sampled either from the on-policy $p_\psi$ or from off-policies such as a replay buffer. However, computing the reward $R(\mathbf{x})$ is costly due to the approximation of the expectation in Eq. (3). Each reward evaluation requires sampling multiple responses $\mathbf{y}$ from the target LLM $p_{\mathrm{target}}$ and then calculating the harmfulness score for each $(\mathbf{x}, \mathbf{y})$ pair using the safety guard model $p_\theta$.

**Experimental setup.** To reduce the computational cost of calculating the reward $R(\mathbf{x})$, we train the harmful prompt generator $p_\psi$ using Eq. (4), replacing the large safety guard model $p_\theta$ (Llama-Guard-3) with our smaller model $q_\phi$ (DeBERTa-v3-large), which has been trained using HarmAug. After training, the generator $p_\psi$ samples $k = 1{,}024$ prompts, which are then evaluated based on their harmfulness score and diversity.
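For concreteness, a minimal sketch of the trajectory balance loss in Eq. (4), assuming the log-probabilities and log-rewards have already been computed (e.g., with the small safety guard $q_\phi$ standing in for $p_\theta$ in Eq. (3)); the variable names are illustrative.

```python
import torch

log_Z = torch.nn.Parameter(torch.zeros(()))  # learnable log-partition estimate log Z_psi

def tb_loss(log_p_psi, log_reward):
    """Eq. (4): (log Z_psi + log p_psi(x) - log R(x))^2, averaged over a batch.

    log_p_psi:  (batch,) log-probabilities of sampled prompts under p_psi;
    log_reward: (batch,) log R(x), e.g. the average log-harmfulness of sampled
                responses divided by beta, plus log p_ref(x) / gamma.
    """
    return ((log_Z + log_p_psi - log_reward) ** 2).mean()
```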
We use the oracle safety guard model $p_\theta$ to assess the harmfulness of the prompts as

$$\frac{1}{5}\sum_{j=1}^{5} p_\theta(c=1 \mid \mathbf{x}^{(i)}, \mathbf{y}^{(j)}), \qquad \mathbf{x}^{(i)} \overset{\mathrm{iid}}{\sim} p_\psi(\mathbf{x}), \quad \mathbf{y}^{(\ldots}$$

Idea Generation Category:
0Conceptual Integration
y3zswp3gek
# On a Connection Between Imitation Learning and RLHF

**Teng Xiao**♠, **Yige Yuan**♣, **Mingxiao Li**▲, **Zhengyu Chen**♦, **Vasant G Honavar**♠

♠ Pennsylvania State University ♣ University of Chinese Academy of Sciences ▲ Tencent AI Lab ♦ Meituan Inc

`tengxiao@psu.edu`, `vhonavar@psu.edu`

Abstract

This work studies the alignment of large language models with preference data from an imitation learning perspective. We establish a close theoretical connection between reinforcement learning from human feedback (`RLHF`) and imitation learning (`IL`), revealing that `RLHF` implicitly performs imitation learning on the preference data distribution. Building on this connection, we propose `DIL`, a principled framework that directly optimizes the imitation learning objective. `DIL` provides a unified imitation learning perspective on alignment, encompassing existing alignment algorithms as special cases while naturally introducing new variants. By bridging `IL` and `RLHF`, `DIL` offers new insights into alignment with `RLHF`. Extensive experiments demonstrate that `DIL` outperforms existing methods on various challenging benchmarks. The code for `DIL` is available at `https://github.com/tengxiao1/DIL`.

1 Introduction

Aligning large language models (LLMs) with human preferences is essential to ensure that the responses generated by LLMs align with human expectations (Bai et al., 2022; Ouyang et al., 2022; Stiennon et al., 2020). Recently, Reinforcement Learning from Human Feedback (`RLHF`) (Ouyang et al., 2022; Christiano et al., 2017) has become a widely adopted framework for fine-tuning language models according to human preference data. This approach typically involves training a reward model based on human feedback and subsequently employing reinforcement learning (RL) algorithms, such as PPO (Schulman et al., 2017), to optimize the model to maximize the reward signal. `RLHF` has demonstrated impressive efficacy across a diverse range of tasks, from programming to creative writing. However, its dependence on two-step reinforcement learning presents challenges, such as computational inefficiency and instability during training (Engstrom et al., 2020; Rafailov et al., 2024). To mitigate these limitations, alternative one-step approaches, such as direct preference optimization (`DPO`) and its variants, have been proposed (Rafailov et al., 2024; Meng et al., 2024; Tajwar et al., 2024). These methods replace `RLHF` with supervised learning, eliminating the need for explicit reward modeling. Instead, they directly define an implicit reward based on the likelihood of preference data, resulting in significant gains in efficiency while preserving competitive performance. While `DPO` theoretically aims to discover the same optimal policies as `RLHF`, it and its variants fundamentally adhere to the reward maximization objective determined by parametric models such as the Bradley-Terry (BT) model (Bradley & Terry, 1952), making them prone to overfitting (Yuan et al., 2024; Xiao et al., 2024b) and resulting in suboptimal alignment with preference data (Xu et al., 2024c; Wu et al., 2024). This raises a fundamental and open research question: _Can we understand and design effective preference optimization algorithms from a new perspective?_ In this paper, we revisit `RLHF` from the perspective of imitation learning.
In particular, we show that `RLHF` is a special case of a general imitation learning problem expressed exclusively in terms of pairwise preferences. We theoretically demonstrate that alignment with `RLHF` closely resembles imitation learning and implicitly optimizes the same objective. We then leverage this insight to design `DIL`, a general framework for effective alignment based on density ratio reward estimation. Our primary technical contributions are as follows: (i) We prove that `RLHF` for alignment is essentially an imitation learning problem and provide a novel analysis that offers explicit guidance for alignment algorithm design. (ii) We propose `DIL`, a simple and generalized imitation learning framework for alignment. `DIL` unifies imitation learning on preference data and bridges the gap between density ratio estimation and preference alignment. (iii) Empirically, we validate the effectiveness of `DIL` on widely used benchmarks, demonstrating that it outperforms previous alignment methods.

2 Related Work

**Reinforcement Learning from Human Feedback.** `RLHF` has emerged as an effective approach for aligning LLMs with human preferences (Christiano et al., 2017). It involves first training a reward model from human feedback through supervised learning, which is then used to optimize a policy via RL algorithms, such as PPO (Schulman et al., 2017). `RLHF` has been successfully applied to a wide range of tasks, including summarization (Stiennon et al., 2020), instruction following (Ouyang et al., 2022), safety improvement (Bai et al., 2022), and truthfulness enhancement (Tian et al., 2023).

**Offline Preference Optimization.** Recent literature highlights the inherent complexity of `RLHF`, prompting the search for more efficient offline alternatives. A significant advancement in this area is `DPO` (Rafailov et al., 2024). Unlike `RLHF`, which first learns an explicit reward model and then fits the policy to rewards, `DPO` bypasses this second approximation by directly learning a policy from collected data, without the need for reward modeling. `DPO` implicitly optimizes the same objective as existing `RLHF` algorithms, but it is simpler to implement and more straightforward to train. Other offline alignment methods, such as IPO (Azar et al., 2024), KTO (Ethayarajh et al., 2024), and others (Zhao et al., 2023; Xiao & Wang, 2021; Xu et al., 2024a; Meng et al., 2024; Xiao et al., 2025), have also been proposed. In contrast, we rethink and design the alignment objective from a novel offline imitation learning perspective. We show that `RLHF` can theoretically be viewed as imitation learning, which fits the chosen response distribution by minimizing the reverse KL divergence.

**Imitation Learning.** Classical imitation learning (IL) methods often frame IL as inverse reinforcement learning (IRL) to better utilize expert demonstrations (Sammut et al., 1992; Abbeel & Ng, 2004). In the seminal work (Ho & Ermon, 2016), the authors introduce GAIL, which bypasses inner-loop reinforcement learning (RL) by establishing a connection between IL and generative adversarial networks (GANs) (Goodfellow et al., 2020). GAIL and its successor, AIRL (Fu et al., 2018), have made significant strides. However, these online methods typically require substantial environmental interactions, limiting their deployment in cost-sensitive or safety-sensitive domains.
To address this issue, recent work on offline IL (Garg et al., 2021) focuses on learning a reward function from offline datasets to understand and generalize the intentions underlying expert behavior. IQ-Learn (Garg et al., 2021) simplifies AIRL's game-theoretic objective over policy and reward functions into an optimization over the soft Q-function, which implicitly represents both reward and policy. Recently, some works (Sun & van der Schaar, 2024; Wulfmeier et al., 2024; Xiao et al., 2024a) have applied imitation learning to the alignment of large language models. In contrast to the above works, in this paper we aim to build a close theoretical connection between `RLHF` and imitation learning, revealing that `RLHF` implicitly performs imitation learning on the chosen data distribution.

3 Notations and Preliminaries

**Problem Setup.** Let the text sequence $\mathbf{x} = [x_1, x_2, \ldots]$ denote the input prompt, and $\mathbf{y}_w = [y_1, y_2, \ldots]$ and $\mathbf{y}_l$ denote two responses, typically sampled from the same reference policy $\pi_{\mathrm{ref}}(\mathbf{y} \mid \mathbf{x})$. The response pairs are then presented to human labelers (or an oracle) who express preferences for responses given the prompt, denoted as $\mathbf{y}_w \succ \mathbf{y}_l \mid \mathbf{x}$, where $\mathbf{y}_w$ and $\mathbf{y}_l$ denote the preferred and dispreferred responses, respectively. The preference distribution is typically expressed as:

$$p(\mathbf{y}_w \succ \mathbf{y}_l \mid \mathbf{x}) = g\big(r(\mathbf{x}, \mathbf{y}_w) - r(\mathbf{x}, \mathbf{y}_l)\big), \quad (1)$$

where $g$ is the sigmoid function $\sigma(x) = \frac{1}{1+e^{-x}}$, based on the Bradley-Terry (BT) preference assumption (Bradley & Terry, 1952). Given a preference dataset $\mathcal{D}$ containing feedback $(\mathbf{x}, \mathbf{y}_w, \mathbf{y}_l)$, the goal of alignment is to learn an LLM policy $\pi(\mathbf{y} \mid \mathbf{x})$ based on the preference data.

**Reinforcement Learning from Human Feedback.** Given the estimated reward function $r(\mathbf{x}, \mathbf{y})$ dictating the human preferences, `RLHF` fine-tunes the policy $\pi_{\boldsymbol{\theta}}$ by optimizing the following objective:

$$\max_{\pi_{\boldsymbol{\theta}}} \; \mathbb{E}_{\mathbf{y}\sim\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})}\big[r(\mathbf{x}, \mathbf{y})\big] - \beta\, \mathbb{D}_{\mathrm{KL}}\big(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\big), \quad (2)$$

where $\beta > 0$ is an appropriate KL penalty coefficient. `RLHF` typically optimizes the objective in Equation (2) using RL algorithms, such as PPO (Ouyang et al., 2022; Schulman et al., 2017).

**Reward Modeling.** One standard approach to reward modeling is to fit a reward function $r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})$ with the BT preference model in Equation (1). Specifically, the reward function $r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})$ can be estimated by maximizing the log-likelihood over preference feedback $(\mathbf{x}, \mathbf{y}_w, \mathbf{y}_l)$:

$$\mathcal{L}_{\mathrm{RM}}(\boldsymbol{\phi}; \mathcal{D}) = \mathbb{E}_{(\mathbf{x}, \mathbf{y}_w, \mathbf{y}_l)\sim\mathcal{D}}\big[-\log\sigma\big(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_w) - r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_l)\big)\big]. \quad (3)$$

**Supervised Fine-tuning (SFT).**
Given a demonstration dataset, the objective of SFT is to minimize the negative log-likelihood over the demonstration data:

$$\mathcal{L}_{\mathrm{SFT}}(\boldsymbol{\theta}; \mathcal{D}) = -\,\mathbb{E}_{(\mathbf{x}, \mathbf{y})\sim\mathcal{D}}\big[\log \pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})\big]. \quad (4)$$

SFT is equivalent to behavior cloning (BC) (Pomerleau, 1988), a classical offline imitation learning method that minimizes the forward KL divergence between the learned policy and the data policy:

$$\min_{\boldsymbol{\theta}} \; \mathrm{KL}\big(\pi_{\mathrm{data}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})\big) = -\,\mathbb{E}_{\pi_{\mathrm{data}}(\mathbf{y}\mid\mathbf{x})}\big[\log \pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})\big]. \quad (5)$$

It is easy to see that the BC problem above shares the same optimal solutions as SFT in expectation.

**Direct Preference Optimization.** To simplify the optimization process of `RLHF`, `DPO` uses the log-likelihood of the learned policy to implicitly represent the reward function:

$$r_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{y}) = \beta\big(\log \pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x}) - \log \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\big) + \beta \log Z_{\boldsymbol{\theta}}(\mathbf{x}), \quad (6)$$

where $Z_{\boldsymbol{\theta}}(\mathbf{x}) = \sum_{\mathbf{y}} \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x}) \exp(r_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{y})/\beta)$ is the partition function. By incorporating this reward into the BT model in Equation (1), the `DPO` (Rafailov et al., 2024) objective enables the comparison of response pairs, facilitating the discrimination between preferred and dispreferred responses:

$$\mathcal{L}_{\mathrm{DPO}}(\boldsymbol{\theta}; \mathcal{D}) = \mathbb{E}_{(\mathbf{x}, \mathbf{y}_w, \mathbf{y}_l)\sim\mathcal{D}}\Big[-\log\sigma\Big(\beta \log \frac{\pi_{\boldsymbol{\theta}}(\mathbf{y}_w\mid\mathbf{x})}{\pi_{\mathrm{ref}}(\mathbf{y}_w\mid\mathbf{x})} - \beta \log \frac{\pi_{\boldsymbol{\theta}}(\mathbf{y}_l\mid\mathbf{x})}{\pi_{\mathrm{ref}}(\mathbf{y}_l\mid\mathbf{x})}\Big)\Big]. \quad (7)$$

**Energy-based Models.** Energy-based models (EBMs) (LeCun et al., 2006) define a distribution through an energy function. For $\mathbf{y} \in \mathbb{R}^D$, the probability density can be expressed as:

$$p_{\boldsymbol{\theta}}(\mathbf{y}) = \exp(-E_{\boldsymbol{\theta}}(\mathbf{y}))/Z_{\boldsymbol{\theta}}, \quad (8)$$

where $E_{\boldsymbol{\theta}}(\mathbf{y}) : \mathbb{R}^D \to \mathbb{R}$ is the energy function, mapping the data point $\mathbf{y}$ to a scalar, and $Z_{\boldsymbol{\theta}} = \sum_{\mathbf{y}} \exp(-E_{\boldsymbol{\theta}}(\mathbf{y}))$ is the unknown normalization constant (Song & Kingma, 2021).

4 Methodology

4.1 RLHF is a Form of Imitation Learning

In this section, we connect `RLHF` to the imitation learning framework. We show that `RLHF` is a special case of an imitation learning problem on the distribution of chosen responses with the reverse KL divergence. Specifically, we first define the following policy based on EBMs (Haarnoja et al., 2017):

$$\pi_{\boldsymbol{\phi}}(\mathbf{y}\mid\mathbf{x}) = \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\exp\big(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big)/Z_{\boldsymbol{\phi}}(\mathbf{x}), \quad (9)$$

where $\boldsymbol{\phi}$ denotes the parameters and $Z_{\boldsymbol{\phi}}(\mathbf{x}) = \sum_{\mathbf{y}} \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\exp(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}))$. To learn the parameters $\boldsymbol{\phi}$, one can apply behavior cloning (Pomerleau, 1988), a classical and widely used imitation learning method, which frames the task as minimizing the KL divergence between the policy $\pi_{\boldsymbol{\phi}}$ and the expert policy $\pi_{\mathrm{chosen}}$ generating the chosen response $\mathbf{y}_w$.
In other words, `IL` learns the parameters $\boldsymbol{\phi}$ such that the model distribution imitates the distribution of chosen responses in the preference dataset:

$$\min_{\boldsymbol{\phi}} \; \mathrm{KL}\big(\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\boldsymbol{\phi}}(\mathbf{y}\mid\mathbf{x})\big). \quad (10)$$

Minimizing the above forward KL divergence with the chosen responses in the preference data gives us:

$$\min_{\boldsymbol{\phi}} \; \mathbb{E}_{(\mathbf{x}, \mathbf{y}_w)\sim\mathcal{D}}\big[-\log \pi_{\mathrm{ref}}(\mathbf{y}_w\mid\mathbf{x})\exp(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_w))/Z_{\boldsymbol{\phi}}(\mathbf{x})\big] \;\Rightarrow\; \min_{\boldsymbol{\phi}} \; \mathbb{E}_{(\mathbf{x}, \mathbf{y}_w)\sim\mathcal{D}}\Big[-r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_w) + \log \sum_{\mathbf{y}} \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\exp\big(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big)\Big]. \quad (11)$$

There are several options for sampling from the reference distribution $\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})$. A choice that simplifies the above expression and yields `RLHF` in practice is $\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x}) = \frac{1}{2}\mathbb{I}(Y = \mathbf{y}_l) + \frac{1}{2}\mathbb{I}(Y = \mathbf{y}_w)$. In this case, the sample-based approximation of the second term gives us:

$$\min_{\boldsymbol{\phi}} \; \mathbb{E}_{(\mathbf{x}, \mathbf{y}_w, \mathbf{y}_l)\sim\mathcal{D}}\Big[-r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_w) + \log\big(\exp(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_w)) + \exp(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_l))\big)\Big] = \mathbb{E}_{(\mathbf{x}, \mathbf{y}_w, \mathbf{y}_l)\sim\mathcal{D}}\Big[-\log\sigma\big(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_w) - r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y}_l)\big)\Big]. \quad (12)$$

One can note that the above imitation learning loss over the energy-based policy is exactly the reward loss based on the BT assumption in Equation (3) of `RLHF`. By optimizing this loss function, we can directly obtain the optimal energy-based policy in Equation (9). Unfortunately, even if we use the estimate $r_{\boldsymbol{\phi}}$, it is still expensive to estimate the partition function $Z_{\boldsymbol{\phi}}(\mathbf{x})$, making this representation difficult to use in practice and incurring significantly higher inference cost (Rafailov et al., 2024). To address this problem, we can utilize reverse knowledge distillation, which distills the optimal policy in Equation (9) into an analytical policy using the following reverse KL divergence, so that the final policy $\pi_{\boldsymbol{\theta}}$ requires only a single sample at inference time:

$$\min_{\boldsymbol{\theta}} \; \mathrm{KL}\big(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\exp(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})/\beta)/Z_{\boldsymbol{\phi}}(\mathbf{x})\big), \quad (13)$$

where $\beta$ is the temperature hyperparameter of the distillation process. This gives the following objective function after removing multiplicative and additive constants:

$$\mathcal{L}(\boldsymbol{\theta}) = -\,\mathbb{E}_{\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})}\big[r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big] + \beta\,\mathrm{KL}\big(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\big). \quad (14)$$

One can observe that this distillation objective exactly corresponds to the RL objective in Equation (2). In summary, we provide two key insights: (i) Reward learning in `RLHF` is equivalent to an imitation learning problem against the chosen responses, achieved by minimizing the forward KL divergence between $\pi_{\mathrm{chosen}}$ and $\pi_{\boldsymbol{\phi}}$ based on the EBMs, as shown in Equation (12). (ii) The RL step in `RLHF` can be interpreted as a reverse knowledge distillation process, where the imitated policy $\pi_{\boldsymbol{\phi}}$, based on EBMs, is distilled into a final analytical policy $\pi_{\boldsymbol{\theta}}$ by minimizing the reverse KL divergence in Equation (13), with the temperature determining the level of KL regularization.
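As a concrete check of the identity in Eq. (12), the following minimal PyTorch sketch confirms that the two-sample approximation of the log-partition term reduces to the BT reward loss of Eq. (3); the function names are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def il_loss(r_w, r_l):
    """Eq. (12), left side: -r(x, y_w) + log(exp(r(x, y_w)) + exp(r(x, y_l)))."""
    return (-r_w + torch.logsumexp(torch.stack([r_w, r_l], dim=-1), dim=-1)).mean()

def bt_loss(r_w, r_l):
    """Eq. (3) / Eq. (12), right side: -log sigma(r(x, y_w) - r(x, y_l))."""
    return F.softplus(-(r_w - r_l)).mean()  # -log(sigmoid(d)) == softplus(-d)

r_w, r_l = torch.randn(8), torch.randn(8)  # rewards for chosen/rejected pairs
assert torch.allclose(il_loss(r_w, r_l), bt_loss(r_w, r_l), atol=1e-5)
```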
Formally, we have:

**Proposition 4.1.** _Consider the chosen response distribution $\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})$, the EBM $\pi_{\boldsymbol{\phi}}(\mathbf{y}\mid\mathbf{x})$, and the model $\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})$. KL-regularized `RLHF` with $\beta = 1$ can be viewed as the following problem:_

$$\min_{\pi_{\boldsymbol{\theta}}} \; \mathrm{KL}(\pi_{\boldsymbol{\theta}} \,\|\, \pi_{\boldsymbol{\phi}}^*) \quad \text{s.t.} \quad \pi_{\boldsymbol{\phi}}^* = \arg\min_{\pi_{\boldsymbol{\phi}}} \mathrm{KL}(\pi_{\mathrm{chosen}} \,\|\, \pi_{\boldsymbol{\phi}}), \quad (15)$$

_where $\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x}) = \pi_{\boldsymbol{\phi}}(\mathbf{y}\mid\mathbf{x}) = \pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})$ is the equilibrium._

Thus, conducting imitation learning on the chosen responses corresponds to solving a standard KL-regularized `RLHF` problem. Additionally, we observe that the upper-level objective essentially optimizes a reverse KL (RKL) divergence, $\mathrm{KL}(\pi_{\boldsymbol{\theta}} \,\|\, \pi_{\mathrm{chosen}})$, given that $\pi_{\boldsymbol{\phi}}^* = \pi_{\mathrm{chosen}}$ is the optimum achieved by the lower-level objective. An interesting question is why `SFT`, which directly optimizes the forward KL (FKL) $\mathrm{KL}(\pi_{\mathrm{chosen}} \,\|\, \pi_{\boldsymbol{\theta}})$ in Equation (5), performs worse than `RLHF` for alignment. Theoretically, minimizing the `SFT` and `RLHF` objectives should lead to the same optimal solution $\pi_{\boldsymbol{\theta}}$. However, achieving this in practice requires full data coverage and infinite computation, conditions that are rarely met. Consequently, in practical settings, minimizing either KL divergence results in learned policies with distinct properties, as discussed in (Murphy, 2012; Tajwar et al., 2024). Specifically, the FKL $\mathrm{KL}(\pi_{\mathrm{chosen}} \,\|\, \pi_{\boldsymbol{\theta}})$ promotes mass-covering behavior, whereas the RKL $\mathrm{KL}(\pi_{\boldsymbol{\theta}} \,\|\, \pi_{\mathrm{chosen}})$ encourages mode-seeking behavior (Tajwar et al., 2024; Nachum et al., 2016; Agarwal et al., 2019). Mass-covering encourages assigning equal probability to all responses in the dataset, leading to an overestimation of the long tail of the target distribution, while mode-seeking concentrates the probability mass on specific high-reward regions. Thus, alignment focuses on generating a certain subset of high-reward responses, which is more effectively achieved by minimizing the reverse KL, as theoretically shown by (Tajwar et al., 2024; Ji et al., 2024).

4.2 Direct Imitation Learning

In the last section, we revisited `RLHF` from the perspective of imitation learning. Our analysis explicitly suggests that `RLHF` is essentially optimized to align closely with the distribution of chosen responses.

Table 1: Summary of the variants of `DIL` with different $h$-functions for the Bregman divergence: $\mathcal{L}_{\mathrm{DIL}}(\boldsymbol{\theta}) = \mathbb{E}_{\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})}[\ell_1(f_{\boldsymbol{\theta}})] + \mathbb{E}_{\pi_{\mathrm{rejected}}(\mathbf{y}\mid\mathbf{x})}[\ell_{-1}(f_{\boldsymbol{\theta}})]$ as a function of the log ratio $f_{\boldsymbol{\theta}} = \log(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})/\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x}))$.

| $h$-Bregman Density Ratio Estimation | $h$-function | $\ell_1(f_{\boldsymbol{\theta}})$ | $\ell_{-1}(f_{\boldsymbol{\theta}})$ |
|---|---|---|---|
| LSIF (Kanamori et al., 2009) | $h(r) = (r-1)^2/2$ | $-e^{f_{\boldsymbol{\theta}}}$ | $\frac{1}{2}e^{2f_{\boldsymbol{\theta}}}$ |
| BCE (Hastie et al., 2009) | $h(r) = r\log r - (r+1)\log(r+1)$ | $\log(1+e^{-f_{\boldsymbol{\theta}}})$ | $\log(1+e^{f_{\boldsymbol{\theta}}})$ |
| UKL (Nguyen et al., 2010) | $h(r) = r\log r - r$ | $-f_{\boldsymbol{\theta}}$ | $e^{f_{\boldsymbol{\theta}}}$ |

The sample-based approximation of EBMs in `RLHF` results in a reward loss similar to the BT model, as shown in Equation (12).
However, the BT assumption may not always hold true, as discussed in (Azar et al., 2024; Munos et al., 2023; Sun & van der Schaar, 2024). Based on these insights, we propose a novel alignment method, `DIL`, that does not rely on the BT assumption. We directly formulate the objective of imitation learning as minimizing the reverse KL divergence between $\pi_{\boldsymbol{\theta}}$ and the unknown distribution of chosen responses $\pi_{\mathrm{chosen}}$ (Kostrikov et al., 2019; Fu et al., 2018):

$$\min_{\boldsymbol{\theta}} \; \mathcal{L}_{\mathrm{DIL}}(\boldsymbol{\theta}) = \mathrm{KL}\big(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})\big) = \mathbb{E}_{\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})}\Big[\log\big(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})/\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})\big)\Big], \quad (16)$$

where we minimize the RKL divergence rather than the FKL divergence used by `SFT` in Equation (5). However, mode-seeking with the reverse KL divergence is generally challenging. Directly optimizing Equation (16) does not effectively leverage the chosen preference data, particularly because the data policy $\pi_{\mathrm{chosen}}$ is unknown. In the RL literature, these challenges have been addressed through adversarial training (Ho & Ermon, 2016; Fu et al., 2018). However, these methods require learning a reward function using complex and unstable adversarial training, which is impractical for large models. In this paper, we propose a straightforward alternative that leverages preference data without learning a reward function via adversarial training. We reformulate the `DIL` objective as follows:

$$\max_{\boldsymbol{\theta}} \; \mathbb{E}_{\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})}\Big[\log \frac{\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})}{\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})} - \log \frac{\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})}{\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})}\Big] = \mathbb{E}_{\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x})}\big[\log r(\mathbf{x}, \mathbf{y})\big] - \mathrm{KL}\big(\pi_{\boldsymbol{\theta}}(\mathbf{y}\mid\mathbf{x}) \,\|\, \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\big), \quad (17)$$

where $r(\mathbf{x}, \mathbf{y}) \triangleq \pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})/\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})$ can be viewed as an auxiliary reward function. Equations (16) and (17) are equivalent, obtained by adding and subtracting the same term $\log \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})$ inside the expectation. Interestingly, we find that even when only preference data is available, this objective takes a form similar to the `RLHF` objective in Equation (2). The primary difference lies in the reward being the estimated log density ratio, which is often not readily accessible in real-world applications. Optimizing this objective, which involves the density ratio $r(\mathbf{x}, \mathbf{y})$, is not straightforward. In the next section, we demonstrate how to efficiently optimize it by effectively utilizing offline human preference data. (Figure 1: Illustration of the different losses (LSIF, BCE, and UKL), as shown in Table 1.)

4.3 Density Ratio Reward Estimation

Before delving into the problem in Equation (17), we first describe how to calculate the auxiliary reward function in terms of the density ratio. In the tabular setting, we can directly compute $\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})$ and $\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})$. However, in a high-dimensional language domain, estimating the densities separately and then calculating their ratio hardly works well due to error accumulation. In this paper, we choose to directly estimate the density ratio $\pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})/\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})$ based on the Bregman divergence (Sugiyama et al., 2012).
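To make Table 1 concrete, here is a small PyTorch sketch of the per-sample losses $\ell_1$ and $\ell_{-1}$ for the three $h$-functions, applied to the log ratio $f_{\boldsymbol{\theta}}$; this is our own illustrative rendering of the table, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dil_losses(f_theta, variant="BCE"):
    """Per-sample (l_1, l_-1) from Table 1, as functions of
    f_theta = log(pi_theta(y|x) / pi_ref(y|x))."""
    if variant == "LSIF":   # h(r) = (r - 1)^2 / 2
        return -torch.exp(f_theta), 0.5 * torch.exp(2 * f_theta)
    if variant == "BCE":    # h(r) = r log r - (r + 1) log(r + 1)
        return F.softplus(-f_theta), F.softplus(f_theta)  # log(1 + e^{-f}), log(1 + e^{f})
    if variant == "UKL":    # h(r) = r log r - r
        return -f_theta, torch.exp(f_theta)
    raise ValueError(variant)

def dil_objective(f_chosen, f_rejected, variant="BCE"):
    """L_DIL = E_chosen[l_1(f_theta)] + E_rejected[l_-1(f_theta)]."""
    l1, _ = dil_losses(f_chosen, variant)
    _, lm1 = dil_losses(f_rejected, variant)
    return l1.mean() + lm1.mean()
```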
Suppose $r^*(\mathbf{x}, \mathbf{y}) = \pi_{\mathrm{chosen}}(\mathbf{y}\mid\mathbf{x})/\pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})$ is the target density ratio to be estimated with a parameterized discriminator $r_{\boldsymbol{\phi}}$. Then, we have:

$$\min_{\boldsymbol{\phi}} \; D_h(r^* \,\|\, r_{\boldsymbol{\phi}}) = \sum_{\mathbf{y}} \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\, \mathrm{B}_h\big(r^*(\mathbf{x}, \mathbf{y}) \,\|\, r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big) = \sum_{\mathbf{y}} \pi_{\mathrm{ref}}(\mathbf{y}\mid\mathbf{x})\Big[h\big(r^*(\mathbf{x}, \mathbf{y})\big) - h\big(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big) - \partial h\big(r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big)\big(r^*(\mathbf{x}, \mathbf{y}) - r_{\boldsymbol{\phi}}(\mathbf{x}, \mathbf{y})\big)\Big], \quad (18)$$

where $\mathrm{B}_h$ is the data-level Bregman divergence. For a twice continuously differentiable convex function $h$ with a bounded derivative $\partial h$, this divergence quantifies the discrepancy between two … Idea Generation Category:
3Other
2QdsjiNXgj
# Lawma: The Power of Specialization for Legal Annotation

**Ricardo Dominguez-Olmedo**¹, **Vedant Nanda**², **Rediet Abebe***¹³, **Stefan Bechtold***⁴, **Christoph Engel***⁵, **Jens Frankenreiter***⁶, **Krishna Gummadi***², **Moritz Hardt***¹, and **Michael Livermore***⁷

1 Max Planck Institute for Intelligent Systems, Tübingen, and Tübingen AI Center; 2 Max Planck Institute for Software Systems, Saarbrücken; 3 ELLIS Institute, Tübingen; 4 ETH Zurich; 5 Max Planck Institute for Research on Collective Goods, Bonn; 6 Washington University in St. Louis School of Law; 7 University of Virginia School of Law (*Alphabetical order.)

Abstract

Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are often delegated to trained research assistants. Motivated by the advances in language modeling, empirical legal scholars are increasingly turning to commercial models, hoping to alleviate the significant cost of human annotation. In this work, we present a comprehensive analysis of large language models' current abilities to perform legal annotation tasks. To do so, we construct CaselawQA, a benchmark comprising 260 legal text classification tasks, nearly all new to the machine learning community. We demonstrate that commercial models, such as GPT-4.5 and Claude 3.7 Sonnet, achieve non-trivial accuracy but generally fall short of the performance required for legal work. We then demonstrate that small, lightly fine-tuned models vastly outperform commercial models. A few dozen to a few hundred labeled examples are usually enough to achieve higher accuracy. Our work points to a viable alternative to the predominant practice of prompting commercial models. For concrete legal annotation tasks with some available labeled data, researchers are likely better off using a fine-tuned open-source model. Code, datasets, and fine-tuned models are available at https://github.com/socialfoundations/lawma.

1 Introduction

The legal system generates a staggering volume of complex documents. United States federal courts alone process hundreds of thousands of cases a year, each with substantial case files. Much empirical legal research involves the systematic collection and analysis of such data in order to understand how laws function in practice and what impact they have on society. What limits researchers across the board is the cost of annotating and classifying legal documents. Legal classification tasks vary in complexity, but often require substantial expertise and effort. Employing trained research assistants stretches to a few thousand documents at a time, but is no match for the sheer scale of legal data. There has long been an interest among empirical legal scholars in NLP tools for feature extraction (i.e., annotation) in lieu of human annotators (Livermore & Rockmore, 2019), starting from sentiment analysis and topic models, and extending now to large language models. The cost and error of existing methods are the single most important bottleneck in the empirical legal studies pipeline. Yet, the use of large language models to annotate legal text remains a critically understudied area. Nonetheless, motivated by the rapid advances in language models, legal scholars increasingly try out commercial models, such as GPT-4, on a variety of legal tasks, hoping to boost the efficiency of legal research.
The underlying assumption is that large models such as GPT-4 provide the best solution to the problem that is currently available. In this work, we critically examine this assumption. _∗_ Alphabetical order. 1 Figure 1: The cost of generality: Performance of various language models on the CaselawQA benchmark for legal annotation. Lawma 8B, specialized for legal annotation, outperforms all other models. Figure 2: Performance of the Lawma models. The smallest Lawma model, Lawma 135M, is competitive with the best-performing commercial model, Claude 3.7 Sonnet. 1.1 O UR CONTRIBUTIONS We introduce and study a collection of 260 legal classification tasks, nearly all new to the machine learning community. The tasks we introduce are actual legal annotation tasks based on the U.S. Supreme Court (Spaeth et al., 2023) and Court of Appeals (Songer) databases. These databases offer rich annotations for court cases, which we utilize as labels to create challenging multi-class classification tasks. We aggregate these tasks into an easy-to-use benchmark, which we call CaselawQA. We detail in Section 2 the process used to construct this benchmark. Our primary finding is that small, fine-tuned models substantially outperform large commercial models (Figure 1). Specifically, we fine-tune a series of small language models, ranging from 135M to 8B parameters, which we collectively refer to as the Lawma models. Our Lawma 8B model achieves **87%** accuracy on CaselawQA, outperforming all commercial models by at least **9** percentage points, with the best-performing commercial model, Claude 3.7 Sonnet, attaining **78%** accuracy. Although it is expected that fine-tuning helps, the superiority of fine-tuning an open-weights model at a much smaller scale is surprising. After all, commercial models are orders of magnitude larger. Our results demonstrate that, for legal annotation, researchers are better off using small specialized models rather than large general-purpose LLMs. We conduct various large-scale fine-tuning experiments that further demonstrate the benefits and practicality of specializing models for legal annotation: - Larger models respond better to fine-tuning than smaller models. Accuracy of the Lawma models increases steadily with model size (Figure 2). However, we observe signs of diminishing returns. This suggests that, in the future, major improvements may not come from model scale alone. - Fine-tuning is data efficient. A few hundred examples typically suffice to achieve higher accuracy than commercial models (Section 4.2, Figure 10). This is crucial, since labeling a few hundred data points is often financially feasible for many legal scholars, whereas labeling many thousands may not. 2 - Fine-tuning generalizes to unseen tasks. Fine-tuning Llama 3 8B Inst _only_ on the Court of Appeals tasks improves its average accuracy on Supreme Court tasks by 18.8 accuracy points (Appendix 4.3, Figure 11). - We can simultaneously fine-tune on all 260 tasks. There is not a large loss compared with fine-tuning on a specific task (Section D, Figure 13). This is desirable in practice, as it obviates the need to train and maintain a separate model for each task. - We contextualize our accuracy numbers with intercoder agreement rates. Our analysis reveals task heterogeneity in the relationship between model accuracy and intercoder agreement (Appendix C). Our results speak to the power of specialization for legal annotation. 
Our insights suggest that the empirical legal community should invest in an ecosystem of fine-tuned models for relevant annotation tasks. Such an ecosystem could radically expand the capacity of legal scholars to engage in quantitative work. From a benchmarking perspective, the tasks presented in this work are of independent interest. They are challenging multi-class classification problems that require some amount of legal expertise. The best models achieve non-trivial but modest performance, and even fine-tuned models do not reach intercoder agreement rates. These legal classification tasks therefore provide diverse, non-trivial evaluation tasks for future model advances. Finally, our work challenges the prevailing narrative about the suitability of “generalist” models. In commercial APIs, users are generally limited to prompting generalist models, as fine-tuning is costly for the model provider. But as we show, generalist models are neither sufficiently accurate nor the best-performing option for many practical tasks. Specializing models to concrete tasks of interest, even with relatively small base models and few labeled examples, can provide a simple, practical, and far more accurate solution. 1.2 R ELATED WORK **Benchmarks for legal tasks.** LegalBench (Guha et al., 2023) is a recent multi-task benchmark for natural language understanding in legal domains. As of writing, LegalBench consists of 162 tasks gathered from 40 contributors. LegalBench draws on numerous earlier benchmarking efforts in different legal domains, specifically, inference on contracts (Koreeda & Manning, 2021; Hendrycks et al., 2021), merger agreement understanding (Wang et al., 2023), identifying the legal holding of a case (Zheng et al., 2021), statutory reasoning (Holzenberger & Van Durme, 2021), privacy compliance and policy (Wilson et al., 2016; Zimmeck et al., 2019; Ravichander et al., 2019), and identifying unfair clauses in terms of service (Lippi et al., 2019). Bhambhoria et al. (2024) evaluate the performance of general-purpose models on legal question-answering tasks and advocate for the development of open-source models tailored to the legal domain. We extend and strengthen these valuable efforts to benchmark large language models in legal settings. We focus on core legal classification tasks based on the U.S. Supreme Court Database (Spaeth et al., 2023) and the U.S. Courts of Appeals database (Songer). Our evaluation suite measures the performance of models in annotating court opinions, focusing on tasks that are of interest to the field of empirical legal studies. The tasks we study are complementary to those in LegalBench. We do not evaluate our model on LegalBench, since our model is specialized to the Supreme Court and Appeals Court data. **Large language models for the legal domain.** General-purpose language models are likely to be trained on a substantial amount of legal data because much of this data is publicly available on the internet. For example, the FreeLaw dataset includes a large collection of court opinions (Gao et al., 2021). Legal-BERT (Chalkidis et al., 2020) is a BERT-like transformer model that was pretrained on a few hundred thousand legal documents. The more recent SaulLM models (Colombo et al., 2024b;a) adapt the open-weights Mistral (Jiang et al., 2023; 2024) models to the legal domain both by continual pretraining and instruction-tuning on legal text. In contrast to Lawma, we consider SaulLM to be a general-purpose model for the legal domain, not tailored to any specific legal task.
Our approach differs significantly; we focus on developing models specialized for annotation tasks of practical interest to empirical legal studies. We demonstrate that specialization is highly effective, with our Lawma models significantly outperforming all other evaluated LLMs. For a discussion on the adoption of large language models in the legal community, refer to Appendix A. What follows is an opinion from the Supreme Court of the United States. Your task is to identify whether the opinion effectively says that the decision in this case "overruled" one or more of the Court’s own precedents. Alteration also extends to language in the majority opinion that states that a precedent of the Supreme Court has been "disapproved," or is "no longer good law". Note, however, that alteration does not apply to cases in which the Court "distinguishes" a precedent. [COURT OPINION] Question: Did the decision of the court overrule one or more of the Court’s own precedents? A. Yes B. No Think step by step. At the end, respond with "The final answer is [final_answer]", where [final_answer] is either a single uppercase letter (A-Z) or a numerical value (e.g., 9, 121). Figure 3: Example task corresponding to the Supreme Court “precedent alteration” variable. **Data annotation and labeling.** Hall & Wright (2008) provide an overview of the use of human annotators in empirical legal studies. Student coders have been deployed to extract a wide variety of features from legal data. Although student researchers are much less expensive than private attorneys, the costs can quickly become prohibitive. Depending on the size of the document and the complexity of the task, research assistants can label roughly dozens of examples per hour. Projects involving the labeling of hundreds of documents are financially feasible for many legal scholars, but projects involving many thousands of documents are largely impractical. In an example of a larger annotation effort, Frankenreiter et al. (2021) employed human coders to annotate several thousand corporate charters. Using ChatGPT for a similar task, Frankenreiter & Talley (2024) estimated that employing human coders would have been approximately ten times more costly. Data annotation and labeling also play a major role in machine learning benchmarks and applications; see, e.g., Aroyo & Welty (2015); Gray & Suri (2019); Hardt & Recht (2022) for background. Dorner & Hardt (2024) give an extended discussion of label quality and annotator disagreement in the context of machine learning benchmarks. 1.3 L IMITATIONS While our fine-tuned models substantially outperform commercial models, we emphasize that they are still far from perfect, and the variance in accuracy across tasks remains high. Although our work meets the ethical and technical recommendations by Kapoor et al. (2024) for “developers of legal AI”, we maintain caution about the use of large language models for consequential legal tasks. The extent to which these models are suitable for use in specific applications requires additional substantive investigation. We add that the legal documents we consider are exclusively from either the U.S. Supreme Court or appellate courts in the United States. We cannot speak to how these results may change for tasks in other legal domains within the United States or for legal systems in other countries. 2 C ASELAW QA In this work, we focus on legal classification tasks.
Legal classification tasks range in complexity, from extremely simple tasks that require little specialized knowledge, to highly sophisticated tasks that involve specific legal knowledge, familiarity with legal principles or discourse, and the ability to engage in nuanced analogical or conceptual reasoning. For example, labeling the ideological valence of a decision requires the annotator to understand how specific legal issues map onto contemporary political debates. Labeling the standard of review applied by an appellate court requires detailed knowledge of these standards as well as the ability to parse procedural history. Many legal doctrines are quite complicated, involving multipart tests, nuanced exceptions, and balancing inquiries. Our reasons to study legal classification tasks are both technical and substantive. From a technical machine learning perspective, these tasks provide highly non-trivial classification problems where even the best models leave much room for improvement. From a substantive legal perspective, efficient solutions to such classification problems have rich and important applications in legal research; see Appendix A.1 for a detailed discussion. 2.1 D ATA SOURCES Central to our study are the U.S. Supreme Court Database (Spaeth et al., 2023) (SCDB) and the U.S. Courts of Appeals database (Songer) (USCAD). The SCDB compiles comprehensive information on U.S. Supreme Court decisions from 1946 onward, and includes variables such as case outcomes, issue areas, legal provisions, and vote counts. The USCAD contains detailed information about decisions made by the U.S. Courts of Appeals from 1925 to 1988. It includes data on judicial decisions, panel compositions, and case characteristics. Both databases provide essential tools for scholars conducting quantitative analyses of the judicial system, decision-making, ideological trends, and the impact of various factors on case outcomes. The SCDB and USCAD have been instrumental in advancing research on judicial decision making within the fields of political science and empirical legal studies (Epstein et al., 2013; Segal & Spaeth, 2002; Martin & Quinn, 2002). These datasets have been used to drive a substantial research program by allowing scholars to systematically analyze large numbers of court cases, uncovering patterns, trends, and factors influencing judicial outcomes. By providing detailed information on case characteristics, judge attributes, and decision outcomes, these databases have enabled researchers to test theories of judicial behavior, examine the impact of ideology on court decisions, and explore the dynamics of judicial decision-making at different levels of the court system. The insights gained from research using these databases have had significant implications for legal practitioners, policymakers, and the broader legal community, contributing to a better understanding of how courts operate and how legal outcomes are shaped. 2.2 C ONSTRUCTION OF THE CLASSIFICATION TASKS We use the variables of the SCDB and the USCAD to construct a set of classification tasks. We construct a total of 260 distinct classification tasks, 28 of them corresponding to the Supreme Court database and 232 to the U.S. Courts of Appeals. The annotations in the SCDB and USCAD serve as labels for these classification tasks. For each task, we additionally construct a prompt template consisting of a general description of the task, followed by a multiple-choice question containing each of the possible variable codes.
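Such prompt templates can be assembled mechanically from a variable description and its codes. The sketch below mirrors the structure of the example in Figure 3; the function and field names are hypothetical illustrations, not the released CaselawQA schema.

```python
import string

def build_prompt(task_description: str, opinion: str, question: str, choices: list) -> str:
    """Assemble a CaselawQA-style multiple-choice prompt (hypothetical schema)."""
    letters = string.ascii_uppercase
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
    return (
        f"{task_description}\n\n"
        f"[COURT OPINION]\n{opinion}\n\n"
        f"Question: {question}\n{options}\n\n"
        'Respond with "The final answer is [final_answer]".'
    )

prompt = build_prompt(
    "What follows is an opinion from the Supreme Court of the United States. "
    "Your task is to identify whether the opinion says that a precedent was overruled.",
    "<full text of the majority opinion>",
    "Did the decision of the court overrule one or more of the Court's own precedents?",
    ["Yes", "No"],
)
print(prompt)
```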
We formulate the task description, question, and answer choices by closely following the databases’ variable descriptions. See Figure 3 for an example task. For every case contained in the SCDB and the USCAD, we use the provided case citations to search for its corresponding majority opinion of the court on the Caselaw Access Project, a database of digitized court opinions. We match a total of 24,916 court cases, which we divide into a 70%/10%/20% train/validation/test split. That is, models may not train on any of the court cases used for evaluation. Since many of the classification tasks contain heavily imbalanced classes, we subsample the majority class such that there are at most as many task examples in the majority class as task examples in all other classes combined. As a result, a constant classifier that outputs the majority class label will never achieve more than 50% accuracy on any individual task. This results in a more honest measure of model performance, as models cannot attain high accuracy simply because a task is heavily imbalanced. We report in Appendix E results without subsampling of the majority class. We plot some statistics of the tasks in Figure 4. First, court opinions tend to be long, with 12% having more than 8,000 tokens, the typical maximum context size for current state-of-the-art models such as Llama 3. Second, some tasks have a large number of classes, with 28% of tasks having more than 10 classes. Third, there is large variability in the number of task examples, ranging from a couple dozen to 18,500. Our final dataset comprises 718,971 task examples. To reduce the compute required for evaluating the benchmark, we select at random 5,000 examples from the Supreme Court tasks and 5,000 examples from the Court of Appeals tasks. We include only court cases where the court opinion, including the head matter, contains at least 2,000 characters, ensuring the opinion is at least a few sentences long. These 10,000 task examples comprise the test set. Idea Generation Category:
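The majority-class subsampling rule described above is straightforward to implement; the following is a minimal sketch over (text, label) pairs with hypothetical data, not the authors' released code.

```python
import random
from collections import Counter

def subsample_majority(examples, seed=0):
    """Downsample the majority class so it has at most as many examples as all
    other classes combined (so a constant classifier gets at most 50% accuracy)."""
    rng = random.Random(seed)
    counts = Counter(label for _, label in examples)
    majority, n_maj = counts.most_common(1)[0]
    n_rest = sum(counts.values()) - n_maj
    if n_maj <= n_rest:
        return examples
    maj = [ex for ex in examples if ex[1] == majority]
    rest = [ex for ex in examples if ex[1] != majority]
    return rest + rng.sample(maj, n_rest)

data = [("op1", "B"), ("op2", "B"), ("op3", "B"), ("op4", "A")]
print(subsample_majority(data))  # keeps at most one "B" alongside the single "A"
```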
3Other
7El7K1DoyX
# S TABILIZING R EINFORCEMENT L EARNING IN D IFFERENTIABLE M ULTIPHYSICS S IMULATION **Eliot Xing & Vernon Luk & Jean Oh** Carnegie Mellon University {etaoxing, vluk, jeanoh}@cmu.edu A BSTRACT Recent advances in GPU-based parallel simulation have enabled practitioners to collect large amounts of data and train complex control policies using deep reinforcement learning (RL), on commodity GPUs. However, such successes for RL in robotics have been limited to tasks sufficiently simulated by fast rigid-body dynamics. Simulation techniques for soft bodies are comparatively several orders of magnitude slower, thereby limiting the use of RL due to sample complexity requirements. To address this challenge, this paper presents both a novel RL algorithm and a simulation platform to enable scaling RL on tasks involving rigid bodies and deformables. We introduce Soft Analytic Policy Optimization (SAPO), a maximum entropy first-order model-based actor-critic RL algorithm, which uses first-order analytic gradients from differentiable simulation to train a stochastic actor to maximize expected return and entropy. Alongside our approach, we develop Rewarped, a parallel differentiable multiphysics simulation platform that supports simulating various materials beyond rigid bodies. We re-implement challenging manipulation and locomotion tasks in Rewarped, and show that SAPO outperforms baselines over a range of tasks that involve interaction between rigid bodies, [articulations, and deformables. Additional details at rewarped.github.io.](https://rewarped.github.io) 1 I NTRODUCTION Progress in deep reinforcement learning (RL) has produced policies capable of impressive behavior, from playing games with superhuman performance (Silver et al., 2016; Vinyals et al., 2019) to controlling robots for assembly (Tang et al., 2023), dexterous manipulation (Andrychowicz et al., 2020; Akkaya et al., 2019), navigation (Wijmans et al., 2020; Kaufmann et al., 2023), and locomotion (Rudin et al., 2021; Radosavovic et al., 2024). However, standard model-free RL algorithms are extremely sample inefficient. Thus, the main practical bottleneck when using RL is the cost of acquiring large amounts of training data. To scale data collection for online RL, prior works developed distributed RL frameworks (Nair et al., 2015; Horgan et al., 2018; Espeholt et al., 2018) that run many processes across a large compute cluster, which is inaccessible to most researchers and practitioners. More recently, GPU-based parallel environments (Dalton et al., 2020; Freeman et al., 2021; Liang et al., 2018; Makoviychuk et al., 2021; Mittal et al., 2023; Gu et al., 2023) have enabled training RL at scale on a single consumer GPU. However, such successes of scaling RL in robotics have been limited to tasks sufficiently simulated by fast rigid-body dynamics (Makoviychuk et al., 2021), while physics-based simulation techniques for soft bodies are comparatively several orders of magnitude slower. Consequently for tasks involving deformable objects, such as robotic manipulation of rope (Nair et al., 2017; Chi et al., 2022), cloth (Ha & Song, 2022; Lin et al., 2022), elastics (Shen et al., 2022), liquids (Ichnowski et al., 2022; Zhou et al., 2023), dough (Shi et al., 2022; 2023; Lin et al., 2023), or granular piles (Wang et al., 2023; Xue et al., 2023), approaches based on motion planning, trajectory optimization, or model predictive control have been preferred over and outperform RL (Huang et al., 2020; Chen et al., 2022). 
How can we overcome this data bottleneck to scaling RL on tasks involving deformables? Model-based reinforcement learning (MBRL) has shown promise at reducing sample complexity, by leveraging some known model or learning a world model to predict environment dynamics and rewards (Moerland et al., 2023). In contrast to rigid bodies, however, soft bodies have more complex dynamics and higher-dimensional state spaces. (_Figure 1:_ **Visualizations of tasks implemented in Rewarped.** These are manipulation and locomotion tasks involving rigid and soft bodies. AntRun and HandReorient are tasks with articulated rigid bodies, while RollingFlat, SoftJumper, HandFlip, and FluidMove are tasks with deformables.) This makes learning to model dynamics of deformables highly nontrivial (Lin et al., 2021), often requiring specialized systems architecture and material-specific assumptions such as volume preservation or connectivity. Recent developments in differentiable physics-based simulators of deformables (Hu et al., 2019b; Du et al., 2021; Huang et al., 2020; Zhou et al., 2023; Wang et al., 2024; Liang et al., 2019; Qiao et al., 2021a; Li et al., 2022b; Heiden et al., 2023) have shown that first-order gradients from differentiable simulation can be used for gradient-based trajectory optimization and achieve low sample complexity. Yet such approaches are sensitive to initial conditions and get stuck in local optima due to non-smooth optimization landscapes or discontinuities induced by contacts (Li et al., 2022a; Antonova et al., 2023). Additionally, existing soft-body simulations are not easily parallelized, which limits scaling RL in them. Overall, there is no existing simulation platform that is parallelized, differentiable, and supports interaction between articulated rigid bodies and deformables. In this paper, we approach the sample efficiency problem using first-order model-based RL (FO-MBRL), which leverages first-order analytic gradients from differentiable simulation to accelerate policy learning, without explicitly learning a world model. Thus far, FO-MBRL has been shown to achieve low sample complexity on articulated rigid-body locomotion tasks (Freeman et al., 2021; Xu et al., 2021), but has not yet been shown to work well for tasks involving deformables (Chen et al., 2022). We hypothesize that entropy regularization can stabilize policy optimization over analytic gradients from differentiable simulation, such as by smoothing the optimization landscape (Ahmed et al., 2019). To this end, we introduce a novel maximum entropy FO-MBRL algorithm, alongside a parallel differentiable multiphysics simulation platform for RL. **Contributions. i)** We introduce Soft Analytic Policy Optimization (SAPO), a first-order MBRL algorithm based on the maximum entropy RL framework. We formulate SAPO as an on-policy actor-critic RL algorithm, where a stochastic actor is trained to maximize expected return and entropy using first-order analytic gradients from differentiable simulation. **ii)** We present Rewarped, a scalable and easy-to-use platform which enables parallelizing RL environments of GPU-accelerated differentiable multiphysics simulation and supports various materials beyond rigid bodies.
**iii)** We demonstrate that parallel differentiable simulation enables SAPO to outperform baselines over a range of challenging manipulation and locomotion tasks re-implemented using Rewarped that involve interaction between rigid bodies, articulations, and deformables such as elastic, plasticine, or fluid materials. 2 R ELATED W ORK We refer the reader to (Newbury et al., 2024) for an overview of differentiable simulation. We cover _non-parallel_ differentiable simulation and model-based RL in Appendix A. _Table 1:_ **Comparison of physics-based parallel simulation platforms for RL.** [The table compares Isaac Gym, Isaac Lab / Orbit, ManiSkill, TinyDiffSim, Brax, MJX, DaXBench, DFlex, and Rewarped (ours) by differentiability (∇) and supported materials (rigid, articulated, elastic, plasticine, fluid); the per-cell support marks are not recoverable from this extraction.] We use * to indicate incomplete feature support at the time of writing. i) Isaac Lab / Orbit: missing deformable tasks due to breaking API changes and poor simulation stability / scaling (https://github.com/isaac-sim/IsaacLab/issues/748). ii) ManiSkill: the latest version, ManiSkill3, does not yet support the soft-body tasks introduced in v2 (https://github.com/haosulab/ManiSkill/issues/223). iii) MJX: stability issues with autodifferentiation and gradients (https://github.com/google-deepmind/mujoco/issues/1182). iv) DaXBench: the plasticine task was omitted from the benchmark and requires additional development (https://github.com/AdaCompNUS/DaXBench/issues/5). v) DFlex: while later work (Murthy et al., 2021; Heiden et al., 2023) has built on DFlex to support elastic and cloth materials, their simulations were not parallelized. **Parallel differentiable simulation.** There are few prior works on parallel differentiable simulators capable of running many environments together while also computing simulation gradients in batches. TinyDiffSim (Heiden et al., 2021) implements articulated rigid-body dynamics and contact models in C++/CUDA that can integrate with various autodifferentiation libraries. Brax (Freeman et al., 2021) implements a parallel simulator in JAX for articulated rigid-body dynamics with simple collision shape primitives. Recently, MJX has been building on Brax to provide a JAX re-implementation of MuJoCo (Todorov et al., 2012), a physics engine widely used in RL and robotics, but it does not have feature parity with MuJoCo yet. These aforementioned parallel differentiable simulators are only capable of modeling articulated rigid bodies. DaXBench (Chen et al., 2022) also uses JAX to enable fast parallel simulation of deformables such as rope and liquid by the Material Point Method (MPM), or cloth by mass-spring systems, but does not support articulated rigid bodies. DFlex (Xu et al., 2021) presents a differentiable simulator based on source-code transformation (Griewank & Walther, 2008; Hu et al., 2020) of simulation kernel code to C++/CUDA that integrates with PyTorch for tape-based autodifferentiation. Xu et al. (2021) use DFlex for parallel simulation of articulated rigid bodies for high-dimensional locomotion tasks. Later work (Murthy et al., 2021; Heiden et al., 2023) also used DFlex to develop differentiable simulations of cloth and elastic objects, but these were not parallelized and did not support interaction with articulated rigid bodies. To the best of our knowledge, there is no existing differentiable simulation platform that is parallelized with multiphysics support for interaction between rigid bodies, articulations, and various deformables.
In this paper, we aim to close this gap with Rewarped, our platform for parallel differentiable multiphysics simulation, and in Table 1 we compare Rewarped against existing physics-based parallel simulation platforms. **Learning control with differentiable physics.** Gradient-based trajectory optimization is commonly used with differentiable simulation of soft bodies (Hu et al., 2019b; 2020; Huang et al., 2020; Li et al., 2023a; Zhou et al., 2023; Wang et al., 2024; Si et al., 2024; Du et al., 2021; Li et al., 2022b; Rojas et al., 2021; Qiao et al., 2020; 2021a; Liu et al., 2024; Chen et al., 2022; Heiden et al., 2023). Differentiable physics can provide physical priors for control in end-to-end learning systems, such as for quadruped locomotion (Song et al., 2024), drone navigation (Zhang et al., 2024), robot painting (Schaldenbrand et al., 2023), or motion imitation (Ren et al., 2023). Gradients from differentiable simulation can also be directly used for policy optimization. PODS (Zamora et al., 2021) proposes first- and second-order policy improvements based on analytic gradients of a value function with respect to the policy’s action outputs. APG (Freeman et al., 2021) uses analytic simulation gradients to directly compute policy gradients. SHAC (Xu et al., 2021) presents an actor-critic algorithm, where the actor is optimized over a short horizon using analytic gradients, and a terminal value function helps smooth the optimization landscape. AHAC (Georgiev et al., 2024) modifies SHAC to adjust the policy horizon by truncating stiff contacts based on contact forces or the norm of the dynamics Jacobian. Several works also propose different ways to overcome bias and non-smooth dynamics resulting from contacts, by reweighting analytic gradients (Gao et al., 2024; Son et al., 2024) or explicit smoothing (Suh et al., 2022; Zhang et al., 2023; Schwarke et al., 2024). In this work, we propose a maximum entropy FO-MBRL algorithm to stabilize policy learning with gradients from differentiable simulation. 3 B ACKGROUND **Reinforcement learning** (RL) considers an agent interacting with an environment, formalized as a Markov decision process (MDP) represented by a tuple $(\mathcal{S}, \mathcal{A}, P, R, \rho_0, \gamma)$. In this work, we consider discrete-time, infinite-horizon MDPs with continuous action spaces, where $\mathbf{s} \in \mathcal{S}$ are states, $\mathbf{a} \in \mathcal{A}$ are actions, $P : \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ is the transition function, $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is a reward function, $\rho_0(\mathbf{s})$ is an initial state distribution, and $\gamma$ is the discount factor. We want to obtain a policy $\pi : \mathcal{S} \to \mathcal{A}$ which maximizes the expected discounted sum of rewards (return) $\mathbb{E}_\pi\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$ with $r_t = R(\mathbf{s}_t, \mathbf{a}_t)$, starting from state $\mathbf{s}_0 \sim \rho_0$. We also denote the state distribution $\rho_\pi(\mathbf{s})$ and state-action distribution $\rho_\pi(\mathbf{s}, \mathbf{a})$ for trajectories generated by a policy $\pi(\mathbf{a}_t \,|\, \mathbf{s}_t)$. In practice, the agent interacts with the environment for $T$ steps in a finite-length episode, yielding a trajectory $\tau = (\mathbf{s}_0, \mathbf{a}_0, \mathbf{s}_1, \mathbf{a}_1, \ldots, \mathbf{s}_{T-1}, \mathbf{a}_{T-1})$.
We can define the $H$-step return:
$$R_{0:H}(\tau) = \sum_{t=0}^{H-1} \gamma^t r_t, \quad (1)$$
and the standard RL objective, which optimizes the parameters $\theta$ of a policy $\pi_\theta$ to maximize the expected return:
$$J(\pi) = \mathbb{E}_{\mathbf{s}_0 \sim \rho_0,\, \tau \sim \rho_\pi}\big[R_{0:T}\big]. \quad (2)$$
Typically, the policy gradient theorem (Sutton et al., 1999) provides a useful expression of $\nabla_\theta J(\pi)$ that does not depend on the derivative of the state distribution $\rho_\pi(\cdot)$:
$$\nabla_\theta J(\pi) \propto \int_{\mathcal{S}} \rho_\pi(\mathbf{s}) \int_{\mathcal{A}} \nabla_\theta \pi(\mathbf{a} \,|\, \mathbf{s})\, Q^\pi(\mathbf{s}, \mathbf{a})\, d\mathbf{a}\, d\mathbf{s}, \quad (3)$$
where $Q^\pi(\mathbf{s}_t, \mathbf{a}_t) = \mathbb{E}_{\tau \sim \rho_\pi}[R_{t:T}]$ is the $Q$-function (state-action value function). We proceed to review zeroth-order versus first-order estimators of the policy gradient, following the discussion in (Suh et al., 2022; Georgiev et al., 2024). We denote a single zeroth-order estimate:
$$\hat{\nabla}^{[0]}_\theta J(\pi) = R_{0:T} \sum_{t=0}^{T-1} \nabla_\theta \log \pi(\mathbf{a}_t \,|\, \mathbf{s}_t), \quad (4)$$
where the zeroth-order batched gradient (ZOBG) is the sample mean $\bar{\nabla}^{[0]}_\theta J(\pi) = \frac{1}{N} \sum_{i=1}^{N} \hat{\nabla}^{[0]}_\theta J(\pi)$ and is an unbiased estimator, under some mild assumptions ensuring the gradients are well-defined. The ZOBG yields an $N$-sample Monte-Carlo estimate commonly known as the REINFORCE estimator (Williams, 1992) in the RL literature, or the score-function / likelihood-ratio estimator. Policy gradient methods may use different forms of Equation 4 to adjust the bias and variance of the estimator (Schulman et al., 2015b). For instance, a baseline term can be used to reduce the variance of the estimator, by substituting $R_{0:T}$ with $R_{0:T} - R_{l:H+l}$. **Differentiable simulation** as the environment provides gradients for the transition dynamics $P$ and rewards $R$, so we can directly obtain an analytic value for $\nabla_\theta R_{0:T}$ under policy $\pi_\theta$. In this setting, a single first-order estimate is:
$$\hat{\nabla}^{[1]}_\theta J(\pi) = \nabla_\theta R_{0:T}, \quad (5)$$
and the first-order batched gradient (FOBG) is the sample mean $\bar{\nabla}^{[1]}_\theta J(\pi) = \frac{1}{N} \sum_{i=1}^{N} \hat{\nabla}^{[1]}_\theta J(\pi)$, which is also known as the pathwise derivative (Schulman et al., 2015a) or reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014; Titsias & Lázaro-Gredilla, 2014). **First-order model-based RL** (FO-MBRL) aims to use differentiable simulation (and its first-order analytic gradients) as a known differentiable model, in contrast to vanilla MBRL, which either assumes a given non-differentiable model or learns a world model of dynamics and rewards from data. **Analytic Policy Gradient** (APG, Freeman et al. (2021)) uses FOBG estimates to directly maximize the discounted return over a truncated horizon:
$$J(\pi) = \sum_{l=t}^{t+H-1} \mathbb{E}_{(\mathbf{s}_l, \mathbf{a}_l) \sim \rho_\pi}\big[\gamma^{l-t} r_l\big], \quad (6)$$
and is also referred to as Backpropagation Through Time (BPTT, Werbos (1990); Mozer (1995)), particularly when the horizon is the full episode length (Degrave et al., 2019; Huang et al., 2020).
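The contrast between the estimators in Eqs. (4) and (5) is easiest to see on a toy problem. The sketch below (our own illustration, not code from the paper) estimates both gradients for a one-dimensional Gaussian policy with a differentiable reward.

```python
import torch

torch.manual_seed(0)
theta = torch.tensor(1.0, requires_grad=True)  # mean of a Gaussian "policy"
N, sigma = 4096, 0.3

# Reparameterized actions a = theta + sigma * eps, differentiable reward r(a) = -(a - 2)^2.
eps = torch.randn(N)
a = theta + sigma * eps
r = -(a - 2.0) ** 2

# First-order (FOBG / pathwise) estimate, Eq. (5): differentiate the reward directly.
fobg = torch.autograd.grad(r.mean(), theta, retain_graph=True)[0]

# Zeroth-order (ZOBG / REINFORCE) estimate, Eq. (4): score-function weighting,
# which never differentiates through the reward itself.
logp = torch.distributions.Normal(theta, sigma).log_prob(a.detach())
zobg = torch.autograd.grad((logp * r.detach()).mean(), theta)[0]

print(float(fobg), float(zobg))  # both approximate dE[r]/dtheta = -2*(theta - 2) = 2
```

The FOBG estimate typically has far lower variance here, which is exactly the property FO-MBRL methods exploit when the simulator is differentiable.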
**Short-Horizon Actor-Critic** (SHAC, Xu et al. (2021)) is a FO-MBRL algorithm which learns a policy $\pi_\theta$ and a (terminal) value function $V_\psi$:
$$J(\pi) = \mathbb{E}_{(\mathbf{s}_l, \mathbf{a}_l) \sim \rho_\pi}\Big[\sum_{l=t}^{t+H-1} \gamma^{l-t} r_l + \gamma^{H} V(\mathbf{s}_{t+H})\Big], \quad (7)$$
$$\mathcal{L}(V) = \sum_{l=t}^{t+H-1} \mathbb{E}_{\mathbf{s}_l \sim \rho_\pi}\big[\|V(\mathbf{s}_l) - \tilde{V}(\mathbf{s}_l)\|^2\big], \quad (8)$$
where $\tilde{V}(\mathbf{s}_t)$ are value estimates for state $\mathbf{s}_t$ computed starting from time step $t$ over an $H$-step horizon. TD($\lambda$) (Sutton, 1988) is used for value estimation, which computes $\lambda$-returns $G^{\lambda}_{t:t+H}$ as a weighted average of value-bootstrapped $k$-step returns $G_{t:t+k}$:
$$\tilde{V}(\mathbf{s}_t) = G^{\lambda}_{t:t+H} = (1-\lambda)\Big(\sum_{l=1}^{H-1-t} \lambda^{l-1} G_{t:t+l}\Big) + \lambda^{H-t-1} G_{t:t+H}, \quad (9)$$
where $G_{t:t+k} = \big(\sum_{l=0}^{k-1} \gamma^{l} r_{t+l}\big) + \gamma^{k} V(\mathbf{s}_{t+k})$. The policy and value function are optimized in an alternating fashion, per the standard actor-critic formulation (Konda & Tsitsiklis, 1999). The policy gradient is obtained by FOBG estimation, with the single first-order estimate:
$$\hat{\nabla}^{[1]}_\theta J(\pi) = \nabla_\theta\big(R_{0:H} + \gamma^{H} V(\mathbf{s}_H)\big), \quad (10)$$
and the value function is optimized as usual by backpropagating $\nabla_\psi \mathcal{L}(V)$ of the mean-squared loss in Eq. 8. Combining value estimation with a truncated short-horizon window where $H \ll T$ (Williams & Zipser, 1995), SHAC optimizes over a smoother surrogate reward landscape compared to BPTT over the entire $T$-step episode. 4 S OFT A NALYTIC P OLICY O PTIMIZATION (SAPO) Empirically, we observe that SHAC, a state-of-the-art FO-MBRL algorithm, is still prone to suboptimal convergence to local minima in the reward landscape (Appendix, Figure 5). We hypothesize that entropy regularization can stabilize policy optimization over analytic gradients from differentiable simulation, such as by smoothing the optimization landscape (Ahmed et al., 2019) or providing robustness under perturbations (Eysenbach & Levine, 2022). We draw on the maximum entropy RL framework (Kappen, 2005; Todorov, 2006; Ziebart et al., 2008; Toussaint, 2009; Theodorou et al., 2010; Haarnoja et al., 2017) to formulate Soft Analytic Policy Optimization (SAPO), a maximum entropy FO-MBRL algorithm (Section 4.1). To implement SAPO, we make several design choices, including modifications building on SHAC (Section 4.2). In Appendix B.1, we describe how we use visual encoders to learn policies from high-dimensional visual observations in differentiable simulation. Pseudocode for SAPO is shown in Appendix B.2, and the computational graph of SAPO is illustrated in Appendix Figure 4. 4.1 M AXIMUM ENTROPY RL IN DIFFERENTIABLE SIMULATION **Maximum entropy RL** (Ziebart et al., 2008; Ziebart, 2010) augments the standard (undiscounted) return-maximization objective with the expected entropy of the policy over $\rho_\pi(\mathbf{s}_t)$:
$$J(\pi) = \sum_{t=0}^{\infty} \mathbb{E}_{(\mathbf{s}_t, \mathbf{a}_t) \sim \rho_\pi}\big[r_t + \alpha \mathcal{H}_\pi[\mathbf{a}_t \,|\, \mathbf{s}_t]\big], \quad (11)$$
Idea Generation Category:
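The $\lambda$-return targets used in Eq. (9) are usually computed with a standard backward recursion rather than the explicit weighted sum. A minimal sketch (toy inputs, not the paper's implementation) follows.

```python
import numpy as np

def td_lambda_targets(rewards, values, gamma=0.99, lam=0.95):
    """Backward-recursive TD(lambda) value targets over an H-step window.

    rewards: r_t, ..., r_{t+H-1}; values: V(s_t), ..., V(s_{t+H}) (H+1 entries,
    the last bootstrapping the tail). Uses the standard recursion
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
    a common equivalent form of the weighted average in Eq. (9).
    """
    H = len(rewards)
    targets = np.empty(H)
    next_target = values[H]  # bootstrap from the terminal value V(s_{t+H})
    for i in reversed(range(H)):
        next_target = rewards[i] + gamma * ((1 - lam) * values[i + 1] + lam * next_target)
        targets[i] = next_target
    return targets

print(td_lambda_targets(np.ones(4), np.zeros(5)))
```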
2Direct Enhancement
DRiLWb8bJg
# M ULTI -R OBOT M OTION P LANNING WITH D IFFUSION M ODELS **Yorai Shaoul** [*, 1] **, Itamar Mishani** [*, 1] **, Shivam Vats** [*, 2] **, Jiaoyang Li** [1] **& Maxim Likhachev** [1] 1 Carnegie Mellon University 2 Brown University - Equal contribution _{_ yshaoul,imishani,svats,jiaoyanl,maxim _}_ @cs.cmu.edu A BSTRACT Diffusion models have recently been successfully applied to a wide range of robotics applications for learning complex multi-modal behaviors from data. However, prior works have mostly been confined to single-robot and small-scale environments due to the high sample complexity of learning multi-robot diffusion models. In this paper, we propose a method for generating collision-free multi-robot trajectories that conform to underlying data distributions while using only single-robot data. Our algorithm, Multi-robot Multi-model planning Diffusion (MMD), does so by combining learned diffusion models with classical search-based techniques—generating data-driven motions under collision constraints. Scaling further, we show how to compose multiple diffusion models to plan in large environments where a single diffusion model fails to generalize well. We demonstrate the effectiveness of our approach in planning for dozens of robots in a variety of simulated scenarios motivated by logistics environments. 1 I NTRODUCTION Multi-robot motion planning (MRMP) is a fundamental challenge in many real-world applications where teams of robots have to work in close proximity to each other to complete their tasks. In automated warehouses, for example, hundreds of mobile robots and robotic manipulators need to coordinate with each other to transport and exchange items while avoiding collisions. Learning motions from demonstrations can oftentimes allow robots to complete tasks they couldn’t otherwise, like navigating a region in a pattern frequently followed by human workers; however, it is unclear how to best incorporate demonstrations in MRMP. In fact, MRMP at its simplest form, where robots are only concerned with finding short trajectories between start and goal configurations, is already known to be computationally intractable (Hopcroft & Wilfong, 1986)—significantly harder than single-agent motion planning due to the complexity of mutual interactions between robots. Attempting to simplify the problem, various approximate formulations have been proposed in the literature. For example, a popular approach is to formulate the problem as a multi-agent path finding problem (MAPF) (Stern et al., 2019) by discretizing space and time. While the latest MAPF planners (Li et al., 2021; Okumura, 2024) can compute near-optimal plans and scale to hundreds of agents, they make strong assumptions, such as constant velocities and rectilinear movements that limit their real-world application and reduce their ability to generate complex behaviors learned from demonstrations. In single-agent motion planning, methods that learn to plan from data (Xiao et al., 2022) have been widely used to circumvent similar limitations resulting from inaccurate models (Vemula et al., 2021), 1 partial observability (Choudhury et al., 2018) and slow planning (Sohn et al., 2015; Qureshi et al., 2020). More recently, diffusion models (DM) have emerged as the generative model of choice for learning visuomotor manipulation policies from demonstrations (Chi et al., 2024), motion planning (Carvalho et al., 2023), and reinforcement learning (Janner et al., 2022). 
However, there has been relatively little work on extending these ideas to multi-robot motion planning. This is due to the twin challenges of generating high-quality multi-agent data and the _curse of dimensionality_, i.e., the significantly higher sample complexity of learning multi-robot models. In this paper, we propose a data-efficient and scalable multi-robot diffusion planning algorithm, **M** ulti-robot **M** ulti-model planning **D** iffusion (MMD), that addresses both these challenges by combining constraint-based MAPF planners with diffusion models. Importantly, our approach calls for learning only _single-robot diffusion models_, which does away with the difficulty of obtaining multi-robot interaction data and breaks the curse of dimensionality. MMD generates collision-free trajectories by _constraining_ single-robot diffusion models using our novel spatio-temporal guiding functions and choosing constraint placement via strategies inspired by MAPF algorithms. Our contributions in this paper are threefold: (1) We propose a novel data-efficient framework for multi-robot diffusion planning inspired by constraint-based search algorithms. (2) We provide a comparative analysis of the performance of five MMD algorithms, each based on a different MAPF algorithm, shedding light on their applicability to coordinating robots that leverage diffusion models for planning. (3) We show that we can scale our approach to arbitrarily large and diverse maps by learning and composing multiple diffusion models for each robot. Our experimental results from varied motion planning problems in simulated scenarios motivated by logistics environments suggest that our approach scales favorably with both the number of agents and the size of the environment when compared to alternatives. Video demonstrations and code are available at https://multi-robot-diffusion.github.io/. 2 P RELIMINARY In this section, we define the MRMP problem and provide relevant background on constraint-based MAPF algorithms and on planning with diffusion models. Sec. 3 elaborates on how we combine these concepts to coordinate numerous robots that plan with diffusion models. 2.1 M ULTI -R OBOT M OTION P LANNING (MRMP) Given $n$ robots $R_i$, MRMP seeks a set of collision-free trajectories, one for each robot, that optimize a given objective function. Let $\mathcal{S}^i$ be the state space of a single robot and a state be $\mathbf{s}^i := [\mathbf{q}^i, \dot{\mathbf{q}}^i]^\top \in \mathcal{S}^i$, where $\mathbf{q}^i$ and $\dot{\mathbf{q}}^i$ are the configuration and velocity of the robot. Each robot has an assigned start state $\mathbf{s}^i_{\mathrm{start}} \in \mathcal{S}^i$ and a binary termination (goal) condition $T^i : \mathcal{S}^i \to \{0, 1\}$. An MRMP solution is a multi-robot trajectory $\boldsymbol{\tau} = \{\boldsymbol{\tau}^1, \cdots, \boldsymbol{\tau}^n\}$, where $\boldsymbol{\tau}^i : [0, T^i] \to \mathcal{S}^i$ represents the trajectory of robot $R_i$ over the time interval $[0, T^i]$, with $T^i$ being the terminal time. In practice, we uniformly discretize the time horizon into $H$ time steps and optimize over a sequence of states $\boldsymbol{\tau}^i = \{\mathbf{s}^i_1, \mathbf{s}^i_2, \ldots, \mathbf{s}^i_H\}$. Subscripts, e.g., $\boldsymbol{\tau}^i_t$, indicate indexing into a trajectory. Each trajectory $\boldsymbol{\tau}^i$ must avoid collisions between robots and with obstacles in the environment.
In MRMP, robots share a workspace $\mathcal{W}$ (i.e., $\mathcal{W} \subseteq \mathbb{R}^3$ for general robots and $\mathcal{W} \subseteq \mathbb{R}^2$ for robots on the plane) and occupy some volume or area within $\mathcal{W}$, which we denote as $R_i(\mathbf{q}^i) \subseteq \mathcal{W}$ for robot $R_i$ in configuration $\mathbf{q}^i$. The usual MRMP objective is to minimize the sum of the single-robot costs (e.g., the cumulative motion) across all robots. General cost functions can be defined on the trajectories, and the objective then becomes $J(\boldsymbol{\tau}) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{cost}(\boldsymbol{\tau}^i)$. When learning from data, we are interested in _data adherence_, i.e., the trajectories should match the underlying trajectory distribution. We define $\mathrm{cost}_{\mathrm{data}}(\boldsymbol{\tau}) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{cost}_{\mathrm{data}}(\boldsymbol{\tau}^i)$ to quantify how well, on average, trajectories in $\boldsymbol{\tau}$ follow their underlying data distribution. This metric is task-specific; we provide some examples in Sec. 4. 2.2 M ULTI -A GENT P ATH F INDING (MAPF) The MAPF problem, a simpler form of MRMP, seeks the shortest collision-free _paths_ $\Pi = \{\boldsymbol{\pi}^1, \boldsymbol{\pi}^2, \cdots, \boldsymbol{\pi}^n\}$ for $n$ agents on a graph. This graph approximates their configuration space, with vertices corresponding to configurations and edges to transitions. Each _path_ $\boldsymbol{\pi}^i = \{\mathbf{q}^i_1, \cdots, \mathbf{q}^i_H\}$ is a trajectory without velocity that need not be dynamically feasible. In MAPF studies, constraint-based algorithms have become popular due to their simplicity and scalability. These algorithms are effective, in part, because they avoid the complexity of the multi-agent configuration space by delegating planning to single-agent planners and avoid collisions via constraints. For instance, if a configuration $\mathbf{q}^i$ for $R_i$ leads to a collision at time (or interval) $t$, this can be prevented by applying the constraint set $C = \{\langle R_i, \mathbf{q}^i, t \rangle\}$ to the path $\boldsymbol{\pi}^i$, thereby preventing the configuration from being used at that time. Several MAPF algorithms, including Prioritized Planning (PP) (Erdmann & Lozano-Perez, 1987) and Conflict-Based Search (CBS) (Sharon et al., 2015), use this mechanism to force single-agent planning queries to avoid states that would lead to collisions. We detail these methods in Sec. 3 and explain how, despite traditionally being used for MAPF, their principles can be applied to coordinating robots in continuous space that generate data-driven trajectories via diffusion models.
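A MAPF constraint ⟨R_i, q^i, t⟩ is just a forbidden (robot, configuration, time) triple, and checking a discrete path against a constraint set is a lookup. The following minimal sketch uses illustrative names of our own, not an existing MAPF library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VertexConstraint:
    robot: int      # index i of the constrained robot R_i
    config: tuple   # forbidden configuration q^i
    time: int       # time step t at which q^i is forbidden

def violates(robot: int, path: list, constraints: set) -> bool:
    """Check whether a discrete path {q_1, ..., q_H} violates any constraint."""
    return any(
        c.robot == robot and c.time < len(path) and path[c.time] == c.config
        for c in constraints
    )

C = {VertexConstraint(robot=0, config=(2, 3), time=4)}
path = [(0, 3), (1, 3), (2, 3), (2, 3), (2, 3)]
print(violates(0, path, C))  # True: robot 0 occupies (2, 3) at t = 4
```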
2.3 P LANNING WITH D IFFUSION M ODELS Motion planning diffusion models are generative models that learn a denoising process to recover a dynamically feasible trajectory from noise (Carvalho et al., 2023; Janner et al., 2022). Given a dataset of multi-modal trajectories, diffusion models aim to generate new trajectories that follow the underlying distribution of the data. Additionally, these trajectories may be conditioned on a task objective $\mathcal{O}$, for example, a goal condition and collision avoidance. Specifically, given a task objective $\mathcal{O}$, motion planning diffusion models aim to sample from the posterior distribution of trajectories:
$$\arg\max_{\boldsymbol{\tau}^i} \log p(\boldsymbol{\tau}^i \,|\, \mathcal{O}) = \arg\min_{\boldsymbol{\tau}^i} \big(J(\boldsymbol{\tau}^i) - \log p(\boldsymbol{\tau}^i)\big). \quad (1)$$
The first term of the objective, $J(\boldsymbol{\tau}^i)$, can be interpreted as a standard motion planning objective (Carvalho et al., 2023), in which we try to minimize a cost function (or, equivalently, maximize a reward function). The second term, $\log p(\boldsymbol{\tau}^i)$, is the prior corresponding to the data adherence discussed in Sec. 2.1. Diffusion models are a type of score-based model (Song et al., 2021), where the focus is on learning the score function (the gradient of the data distribution’s log-probability) rather than learning the probability distribution directly. The score function is learned using _denoising score matching_, a technique for learning to estimate the score by gradually denoising noisy samples. The diffusion inference process consists of a $K$-step denoising process that takes a noisy trajectory ${}^K\boldsymbol{\tau}^i$ and recovers a feasible trajectory ${}^0\boldsymbol{\tau}^i$, which also follows the data distribution. We use the notation ${}^0\boldsymbol{\tau}^i, {}^1\boldsymbol{\tau}^i, \cdots, {}^K\boldsymbol{\tau}^i$ to denote the evolution of the trajectory in the diffusion process. To generate a trajectory ${}^0\boldsymbol{\tau}^i$ from a noise trajectory ${}^K\boldsymbol{\tau}^i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, we use Langevin dynamics sampling (Ho et al., 2020), an iterative process that is a type of Markov chain Monte Carlo method. At each denoising step $k \in \{K, \ldots, 1\}$, a trajectory-space mean $\mu^i_{k-1}$ is sampled from the network $\mu_\theta$:
$$\mu^i_{k-1} = \mu_\theta({}^k\boldsymbol{\tau}^i). \quad (2)$$
Now, with the variance prescribed by a deterministic schedule $\{\beta_k \,|\, k \in \{K, \cdots, 1\}\}$, the next trajectory in the denoising sequence is sampled from the following distribution:
$${}^{k-1}\boldsymbol{\tau}^i \sim \mathcal{N}\big(\mu^i_{k-1} + \underbrace{\eta\, \beta_{k-1}\, \nabla_{\boldsymbol{\tau}} J(\mu^i_{k-1})}_{\text{Guidance}},\ \beta_{k-1}\big). \quad (3)$$
The term $\nabla_{\boldsymbol{\tau}} J(\mu^i_{k-1})$ is the gradient of additional trajectory-space objectives (described in Eq. 1) imposed on the generation process. This term, also called _guidance_, can include multiple weighted cost components, each optimizing for a different objective. For instance, we can have $J = \lambda_{\mathrm{obj}} J_{\mathrm{obj}} + \lambda_{\mathrm{smooth}} J_{\mathrm{smooth}}$ to penalize trajectories in collision with objects via $J_{\mathrm{obj}}$ and to encourage the trajectory to be dynamically feasible via $J_{\mathrm{smooth}}$. We denote the trajectory generation process queried with a start state $\mathbf{s}^i_{\mathrm{start}}$, goal condition $T^i$, and constraint set $C$ (Sec. 3.1) as $f^i_\theta(\mathbf{s}^i_{\mathrm{start}}, T^i, C)$. 3 M ETHOD We present Multi-robot Multi-model planning Diffusion (MMD), an algorithm for flexibly scaling diffusion planning to multiple robots and long horizons using only single-robot data. MMD imposes constraints on diffusion models to generate collision-free trajectories, addressing three main questions: _how_, _when_, and _where_ to impose them. First, we discuss integrating spatio-temporal constraints into the diffusion denoising process through guiding functions. Next, we introduce five MMD algorithms, each inspired by a MAPF algorithm regarding constraint placement and timing. Finally, we demonstrate how to sequence multiple models for long-horizon planning.
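Before detailing the method, note that one guided sampling step of Eqs. (2)-(3) is compact in code. In the sketch below, `mu_net` (standing in for the learned denoiser µ_θ) and `guidance_cost` (standing in for J) are hypothetical placeholders, not the paper's implementation.

```python
import torch

def guided_denoise_step(tau_k, mu_net, guidance_cost, beta_k, eta=1.0):
    """One guided step: sample tau_{k-1} ~ N(mu + eta*beta*grad J(mu), beta), per Eq. (3)."""
    mu = mu_net(tau_k)                                  # Eq. (2): predicted trajectory-space mean
    mu = mu.detach().requires_grad_(True)
    J = guidance_cost(mu)                               # scalar trajectory-space objective
    grad_J = torch.autograd.grad(J, mu)[0]
    mean = mu.detach() + eta * beta_k * grad_J          # shift the mean along the guidance gradient
    return mean + beta_k ** 0.5 * torch.randn_like(mean)  # add noise with variance beta_k

# Toy usage: identity "denoiser" and an objective whose ascent pulls states toward the origin.
tau = torch.randn(16, 2)  # 16 states, 2D configurations
step = guided_denoise_step(tau, lambda t: t, lambda m: -(m ** 2).sum(), beta_k=0.01)
print(step.shape)
```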
**Algorithm 1:** MMD-CBS sketch (lines that are colored in the original appear only in MMD-PP or MMD-ECBS; they are marked below). **Input:** Starts, goal conditions, and single-robot diffusion models $\{(\mathbf{s}^i_{\mathrm{start}}, T^i, f^i_\theta)\}_{i=1}^{n}$. **Output:** Trajectories $\boldsymbol{\tau} = \{\boldsymbol{\tau}^i\}_{i=1}^{n}$.

    N_root ← new CT node;  N_root.C^i ← ∅  ∀ i ∈ {1, ..., n}
    for i ∈ {1, ..., n} do
        C^i_strong, C^i_weak ← ∅, ∅                  // Empty constraint sets.
        C^i_strong ← {⟨R_i, N_root.τ⟩}               // Avoid other robots. [MMD-PP]
        C^i_weak ← {⟨R_i, N_root.τ⟩}                 // Penalize collisions. [MMD-ECBS]
        N_root.τ^i ← f^i_θ(s^i_start, T^i, C^i_strong ∪ C^i_weak)
    end
    return N_root.τ                                  // [MMD-PP]
    CT ← {N_root}                                    // Initialize CT.
    while CT ≠ ∅ do
        N ← argmin_{N' ∈ CT} numConflicts(N'.τ);  remove N from CT
        if N.τ is conflict-free then
            return N.τ                               // Return if collision-free.
        end
        p, t, R_i, R_j ← getOneConflict(N.τ)
        for k ∈ {i, j} do                            // Split N; constrain conflicting robots.
            N' ← N.copy
            N'.C^k ← C^k ∪ {⟨R_k, S_r(p), t⟩}
            C^k_weak ← {⟨R_k, N'.τ⟩}                 // Penalize collisions. [MMD-ECBS]
            N'.τ^k ← f^k_θ(s^k_start, T^k, N'.C^k ∪ C^k_weak)
            CT ← CT ∪ {N'}
        end
    end

3.1 C ONSTRAINTS IN D IFFUSION M ODELS Figure 1 illustrates how MMD-CBS generates collision-free trajectories with constrained diffusion models: (a) two robots aim to switch positions, and blindly generated single-robot trajectories collide; (b) the diffusion denoising process for the left robot in (a), under a temporally-activated constraint (in red), yields multimodal trajectories; (c) the collision-free solution. An intuitive and effective constraint for multi-robot motion planning in robotics is the _sphere constraint_ [1] (Li et al., 2019; Shaoul et al., 2024b). It is defined by a point $p \in \mathcal{W}$ and restricts robots from being closer to $p$ than a radius $r \in \mathbb{R}$ at a certain time range $\mathbf{t} := [t - \Delta t, t + \Delta t]$. The sphere constraint can be _imposed as a soft-constraint_ on a diffusion model by incorporating it into its guiding function $J(\cdot)$. This can be done by adding a cost term $J_c$ that repels the robots from the sphere’s center point $p$. Let ${}^k\boldsymbol{\tau}^i$ be the generated trajectory for $R_i$ at step $k$ of the diffusion denoising process, and $\langle R_i, S_r(p), \mathbf{t} \rangle$ be a sphere constraint centered at $p$ with radius $r$ over time interval $\mathbf{t}$. The guidance cost term for $R_i$ can be defined as:
$$J_c({}^k\boldsymbol{\tau}^i) := \sum_{t \in \mathbf{t}} \max\big(\epsilon \cdot r - d\big(R_i({}^k\boldsymbol{\tau}^i_t), p\big),\ 0\big), \quad (4)$$
with $d\big(R_i({}^k\boldsymbol{\tau}^i_t), p\big)$ as the distance from point $p$ to $R_i$ at ${}^k\boldsymbol{\tau}^i_t$, and $\epsilon \geq 1$ a padding factor. 3.2 C ONSTRAINING S TRATEGIES To determine _when_ and _where_ to apply constraints on diffusion models, MMD draws on MAPF strategies like CBS and PP. We propose five MMD variants, each inspired by a state-of-the-art search algorithm. Alg. 1 provides a summary of these methods, and we elaborate upon them here [2]. [1] The sphere constraint generalizes the MAPF vertex constraint, as it constrains robots from visiting the point of collision itself instead of a single colliding configuration corresponding to a vertex in a graph. In MAPF, the point of collision and the graph vertex coincide. [2] In MMD, we use the search or prioritization logic found in MAPF algorithms for placing “strong” constraints on diffusion models, while all other aspects of MMD are more loosely inspired by MAPF algorithms.
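For a point robot (so that $d$ reduces to a Euclidean distance), the sphere soft-constraint cost of Eq. (4) is differentiable almost everywhere, and its gradient can be used directly as guidance. A minimal sketch under that point-robot simplification:

```python
import torch

def sphere_constraint_cost(traj, p, r, t_range, eps=1.1):
    """Eq. (4)-style cost: sum over t in t_range of max(eps*r - d(traj_t, p), 0).

    traj: (H, dim) trajectory of configurations (point-robot simplification).
    p: (dim,) sphere center; r: radius; t_range: iterable of constrained time steps.
    """
    cost = traj.new_zeros(())
    for t in t_range:
        d = torch.linalg.norm(traj[t] - p)   # distance from the robot to the sphere center
        cost = cost + torch.clamp(eps * r - d, min=0.0)
    return cost

traj = torch.zeros(10, 2, requires_grad=True)  # trajectory parked at the origin
p, r = torch.tensor([0.5, 0.0]), 1.0
J_c = sphere_constraint_cost(traj, p, r, t_range=range(3, 6))
J_c.backward()                                  # gradient descent repels constrained states from p
print(J_c.item(), traj.grad[4])
```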
**MMD-PP.** Prioritized Planning sequentially plans _paths_ for robots $R_i$, $\forall i \in \{1, \ldots, n\}$. This ordering of robots is treated as a priority ordering in that, on each call, robot $R_i$ must generate a path $\boldsymbol{\pi}^i$ that avoids all $R_j$ that previously planned. Robot $R_i$ does so by respecting the constraints $C := \{\langle R_i, \mathbf{q}^i, t \rangle \,|\, R_i(\mathbf{q}^i) \cap R_j(\boldsymbol{\pi}^j_t) \neq \emptyset\ \forall t\}$. To translate this approach to _trajectory_ generation with diffusion models, _MMD-PP_ represents robot volumes using spheres, as is common in robotics, and uses the sphere representation of higher-priority robots as sphere soft-constraints for lower-priority robots. Specifically, let a high-priority robot $R_j$ be modeled with $M_j$ spheres, and let $p^j_m$ and $r^j_m$ be the position and radius of the $m$-th sphere at time $t$. Then, a lower-priority robot $R_i$ generates a trajectory under the constraint set $\{\langle R_i, S_{r^j_m}(p^j_m), t \rangle \,|\, m \in \{1, \cdots, M_j\},\ j \prec i\}$, where $\prec$ indicates priority precedence. In Alg. 1, $\langle R_i, \boldsymbol{\tau} \rangle$ means that all trajectories of $R_{j \neq i}$ in $\boldsymbol{\tau}$ must be similarly avoided. **MMD-CBS.** CBS is a popular MAPF solver that combines “low-level” planners for individual agents with a “high-level” constraint tree (CT) to resolve conflicts (i.e., collisions). The algorithm initiates by creating the root node $N_{\mathrm{root}}$ in the CT, planning paths for each agent independently, and storing these paths in $N_{\mathrm{root}}.\Pi$. CBS repeatedly extracts nodes $N$ from the CT and inspects $N.\Pi$ for conflicts. If no conflicts exist, the algorithm terminates, returning $N.\Pi$. Otherwise, CBS selects a conflict time $t$ where agents $R_i$ and $R_j$ collide at positions $\mathbf{q}^i = \boldsymbol{\pi}^i_t$ and $\mathbf{q}^j = \boldsymbol{\pi}^j_t$ in $N.\Pi$. CBS then splits node $N$ into two new CT nodes, $N_i$ and $N_j$, each inheriting the (initially empty) constraint set $N.C$ and paths $N.\Pi$ from $N$, and incorporating a new constraint preventing the respective agent from occupying the conflict position at time $t$; for example, $N_i.C \leftarrow N.C \cup \{\langle R_i, \mathbf{q}^i, t \rangle\}$ for $R_i$. Paths for $R_i$ and $R_j$ are then replanned using low-level planners under the updated constraints in $N_i.C$ and $N_j.C$. The new CT nodes, with updated paths in $N_i.\Pi$ and $N_j.\Pi$, are added to the CT. _MMD-CBS_ follows the general CBS structure. It keeps a CT of nodes $N$, each with trajectories $N.\boldsymbol{\tau}$, and uses motion planning diffusion models as low-level planners. The algorithm identifies a _collision point_ $p$ for each conflict and resolves it by imposing sphere soft-constraints centered at $p$ on affected robots (see Fig. 1 for an illustration and Sec. A.4 for parameter values). **MMD-ECBS.** Enhanced-CBS (ECBS) (Barer et al., 2014) informs CBS low-level planners of the paths of other robots in the same CT node and steers the search towards solutions that are more likely to be collision-free. To emulate this in diffusion-based trajectory generation, MMD-ECBS imposes two types of soft constraints: “weak” and “strong.” For each robot $R_j$ with a trajectory in the CT node $N$, a weak soft-constraint is imposed that forbids $R_i$ from colliding with any other $R_j$ with $\boldsymbol{\tau}^j \in N.\boldsymbol{\tau}$.
**Reusing Experience in CBS.** Recent studies indicate that leveraging previous single-robot solutions to guide replanning enhances the efficiency of CBS (Shaoul et al., 2024a). This is primarily because the motion planning problem between a CT node and its successors is nearly identical, the only difference being a single constraint, which makes planning from scratch wasteful. This can be exploited in MMD replanning by initially adding noise to the stored trajectory for a limited number of steps (3 in our experiments; regular inference uses 25 steps) and then denoising with the new soft-constraints (see the sketch at the end of this section). This approach, in the context of single-robot planning, was first proposed in Janner et al. (2022) and further refined in Zhou et al. (2024). Adding this functionality to MMD-CBS and MMD-ECBS yields our two final MMD algorithms, **MMD-xCBS** and **MMD-xECBS**, respectively. Both reuse previous solutions to inform replanning and are otherwise unchanged.

3.3 SEQUENCING DIFFUSION MODELS FOR LONG-HORIZON PLANNING

Diffusion models have shown notable success in learning trajectory distributions within specific contexts. However, they face challenges in modeling complex trajectory distributions and in generalizing to diverse contexts (e.g., significantly different obstacle layouts). We propose utilizing an ensemble of local diffusion models for each robot to facilitate varying-context planning. Each local model is trained to capture a particular motion pattern, i.e., a trajectory distribution generated by a hidden cost function defined by a specific task dataset. For example, near a conveyor belt, we can define a motion pattern requiring robots to pass through either the top corridor right-to-left or the bottom corridor left-to-right. By sequentially combining multiple local models, each corresponding to a local map segment, we enable long-horizon single-robot planning that is easier to learn, generalizes well to different contexts, and scales effectively to large maps.
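Returning to the experience-reuse warm start described above, a minimal sketch assuming a DDPM-style planner; the `q_sample`/`p_sample` method names and signatures are our assumptions (common in diffusion codebases), not MMD's actual API:

```python
import torch

def replan_with_experience(model, stored_traj, new_constraints, noise_steps=3):
    # Warm-start replanning: forward-noise the stored single-robot solution for
    # a few steps only (3 here, vs. 25 denoising steps for regular inference),
    # then denoise those same steps under the new soft-constraints.
    traj = stored_traj.clone()
    for k in range(noise_steps):                 # partial forward diffusion
        traj = model.q_sample(traj, step=k)      # assumed noising interface
    for k in reversed(range(noise_steps)):       # guided reverse diffusion
        traj = model.p_sample(traj, step=k, guidance_cost=new_constraints)
    return traj
```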
Idea Generation Category: Conceptual Integration
AUCYptvAf3
# ANALOGGENIE: A GENERATIVE ENGINE FOR AUTOMATIC DISCOVERY OF ANALOG CIRCUIT TOPOLOGIES

**Jian Gao**^1, **Weidong Cao**^2, **Junyi Yang**^1, **Xuan Zhang**^1
1 Northeastern University, 2 The George Washington University
{gao.jian3,yang.juny,xuan.zhang}@northeastern.edu, {weidong.cao}@gwu.edu

ABSTRACT

The massive and large-scale design of foundational semiconductor integrated circuits (ICs) is crucial to sustaining the advancement of many emerging and future technologies, such as generative AI, 5G/6G, and quantum computing. Excitingly, recent studies have shown the great capabilities of foundational models in expediting the design of digital ICs. Yet, applying generative AI techniques to accelerate the design of analog ICs remains a significant challenge due to critical domain-specific issues, such as the lack of a comprehensive dataset and of effective representation methods for analog circuits. This paper proposes **AnalogGenie**, a **Gen**erat**i**ve **e**ngine for the automatic design/discovery of **Analog** circuit topologies, the most challenging and creative task in the conventional manual design flow of analog ICs. AnalogGenie addresses two key gaps in the field: building a foundational, comprehensive dataset of analog circuit topologies and developing a scalable sequence-based graph representation universal to analog circuits. Experimental results show the remarkable generation performance of AnalogGenie in broadening the variety of analog ICs, increasing the number of devices within a single design, and discovering unseen circuit topologies far beyond any prior art. Our work paves the way to transforming the longstanding, time-consuming manual design flow of analog ICs into an automatic and massive one powered by generative AI. Our source code is available at https://github.com/xz-group/AnalogGenie.

1 INTRODUCTION

Semiconductor integrated circuits (ICs) are the foundational hardware cornerstone of many emerging technologies such as generative AI, 5G/6G, and quantum computing. The demand for and the scale of ICs are soaring to unprecedented levels with the ever-increasing information and computing workloads (e.g., training foundation models with billions of parameters) (Achiam et al., 2023). Thus, accelerating the design of advanced ICs is key to sustaining the development of future technologies. Excitingly, recent breakthroughs in generative AI have presented transformative opportunities to expedite the conventional design flows of ICs. Domain-specific large language models (LLMs) have been developed to free human designers by automatically generating and correcting Hardware Description Languages (HDL) (Zhong et al., 2023; Blocklove et al., 2023; Chang et al., 2023; Thakur et al., 2024; 2023; Fu et al., 2023; Liu et al., 2023b; Wu et al., 2024; Liu et al., 2023a), which can be seamlessly used to synthesize digital ICs with desired functionalities. As an example, NVIDIA's ChipNeMo (Liu et al., 2023a), a powerful domain-adapted LLM, can rapidly generate valuable digital designs with just a few prompts.
Yet, applying generative AI to speed up the design of analog ICs remains significantly understudied, even though analog ICs are essential in ubiquitous electronic systems that bridge the interfaces between the physical world and cyberspace, ranging from enhancing performance in computing systems (e.g., high-speed memory interfaces and I/O links) to providing critical functionalities in communication and sensing systems (e.g., 5G/6G and quantum computing). The fundamental challenge arises from the intricate design complexities of analog ICs. Unlike digital ICs, which can be universally and hierarchically abstracted into Boolean logic representations and easily described with high-level hardware description languages (e.g., Verilog and VHDL) or programming languages (e.g., C), analog ICs remain intractable to such abstraction due to their lack of systematic hierarchical representation and the heuristic, knowledge-intensive nature of their design process (Gielen & Rutenbar, 2000). This makes it extremely hard to automate the design of analog ICs by developing programming languages similar to those used for digital ICs. As such, domain experts have followed a longstanding manual flow to design analog ICs. This process involves a number of time-consuming stages, such as selecting/creating an existing (new) circuit topology (i.e., defining the connections between devices), optimizing device parameters based on the topology to achieve desired performance, and designing the physical layout of the optimized circuit for manufacturing. Importantly, the topology generation stage is the foundation and most creative part of the analog IC design process, posing a formidable and perennial challenge to design automation. Addressing it is the key to accelerating the development of analog ICs.

There have been several studies tackling this problem with generative AI techniques. The early pioneering work, CktGNN (Dong et al., 2023), formulates topology design as a graph generation task, as circuit topologies of analog ICs can be naturally represented as graph structures. It uses a graph variational autoencoder (VAE) to generate various circuit topologies for a specific type of analog IC, i.e., operational amplifiers (Op-Amps). More recently, foundational models have also been explored for designing analog circuit topologies. LaMAGIC (Chang et al., 2024), a fine-tuned masked language model (MLM), has been proposed to generate analog circuits with a fixed number of graph nodes. It shows a high success rate in designing a specific type of analog IC, i.e., power converters (with fewer than 4 devices). AnalogCoder (Lai et al., 2024), another LLM-based work, uses domain-specific prompt engineering to generate analog circuits from well-established LLM models (e.g., GPT-4). Instead of directly generating circuit topologies, it generates PySpice code that can be converted to a SPICE (Simulation Program with Integrated Circuit Emphasis) netlist, a textual high-level description of device connections used for circuit simulation. AnalogCoder can generate a range of conventional analog circuits that often have a limited number of devices on the order of ten. These methods have demonstrated the potential of applying generative AI to analog IC design. Yet, a vast untapped frontier remains.

This work proposes **AnalogGenie**, a **Gen**erat**i**ve **e**ngine (model) for automatic discovery of **analog** circuit topologies.
In contrast to previous methods (Dong et al., 2023; Chang et al., 2024; Lai et al., 2024) that are limited to a smaller scale of generation (e.g., generating a single type of analog IC, small-size analog ICs, or conventional analog ICs), **AnalogGenie addresses the problem of scalable and general design**. It can significantly broaden the variety of analog ICs, increase the number of devices within a single design, and discover unseen circuit topologies.

A major obstacle to advancing generative models for scalable analog circuit design automation is the lack of a comprehensive dataset of analog circuit topologies. We bridge this gap by building an extensive dataset that consists of more than 3000 distinct analog circuit topologies with diverse functionalities (e.g., Op-Amps, Low-Dropout Regulators (LDO), Bandgap references, Comparators, Phase-Locked Loops (PLL), Low-Noise Amplifiers (LNA), Power Amplifiers (PA), Mixers, Voltage-Controlled Oscillators (VCO), etc.) from public resources (Razavi, 2000; Razavi & Behzad, 2012; Johns & Martin, 2008; Gray et al., 2009; Allen & Holberg, 2011; Camenzind, 2005). In addition, we apply data augmentation techniques to expand these circuit topologies by over 70×. To the best of our knowledge, this is the largest circuit dataset that effectively incorporates and enhances existing real-world analog circuit topologies to the greatest extent. This enables AnalogGenie to effectively learn various analog topologies and significantly enhance its generation capabilities, surpassing all previous methods (Dong et al., 2023; Chang et al., 2024; Lai et al., 2024).

Nonetheless, another key barrier to the scalable design of analog circuits is the lack of a scalable and unambiguous representation of circuit topologies. AnalogCoder (Lai et al., 2024) relies on high-level text representations that use multiple tokens to describe a single connection between devices, making the generation prone to errors. CktGNN (Dong et al., 2023) and LaMAGIC (Chang et al., 2024) use graph-based representations with a fixed number of nodes, where each node represents a circuit device or subgraph. This ignores the critical low-level details essential in analog circuit design, leading to ambiguous and unscalable circuit generation. We propose a scalable sequence-style data structure that captures fundamental analog circuit design details while efficiently describing large circuit graphs. Specifically, we represent each circuit topology as an undirected graph where each node is a device pin (Figure 3). We then sequentialize it into an Eulerian circuit, i.e., a trail that visits every edge exactly once and starts and ends at the same node. This unique representation allows AnalogGenie to generate circuit topologies in a scalable, flexible, and efficient manner.
This dataset and these techniques thus endow AnalogGenie with exceptional capabilities to produce diverse, large, and unseen analog circuit topologies. The advancement holds both profound engineering and scientific significance, demonstrating that generative AI can not only meet human expertise but also unlock possibilities beyond human capability.

Figure 1: Current state of analog circuit topology generation. (a) Typical data representations for an analog circuit topology: the schematic, its graph (Vdd, PM1, Vout, Vin, NM1, Vss), and its SPICE netlist (M2 (Vout Vout VDD VDD) pmos4; M1 (Vout Vin VSS VSS) nmos4). (b) Existing analog circuit topology generation paradigms. A graph provides a clear one-to-one mapping between the graph generation process and the circuit design process: a new device or connection is added by simply adding a node or an edge. PySpice code is a high-level representation: adding a new device or connection requires generating a complete line of tokens, each of which can introduce an error, making its generation process more prone to error.

The key contributions of this paper are: (1) We propose a generative engine, AnalogGenie, built on a GPT model to generate diverse analog circuits by predicting the next device pin to connect in the circuit; (2) We introduce a sequence-based, pin-level graph representation that efficiently and expressively captures large analog circuit topologies; (3) We develop a comprehensive dataset of analog circuit topologies to advance research in analog electronic design automation using generative AI and introduce an augmentation scheme to enhance data diversity; (4) Experimental results show that AnalogGenie is capable of automatically generating far more numerous, large-scale, valid, unseen, and high-performance topologies compared to existing graph generation and foundation model work.

2 PRELIMINARIES AND RELATED WORKS

2.1 DESIGN PROCESSES OF ANALOG CIRCUITS

The design process of analog circuits begins with creating the circuit topology, which involves determining the device types (i.e., NMOS/PMOS transistors, capacitors, resistors, inductors, etc.) and the number of devices, and defining how they are interconnected. Following this, designers perform device sizing, i.e., optimizing the physical dimensions of devices to achieve desired performance. Finally, the physical layout (i.e., mask design) is developed to prepare for manufacturing. Note that a physical design is the representation of an IC in terms of planar geometric shapes corresponding to the different stacked physical layers (e.g., metal, oxide, or semiconductor) during the fabrication process. Of all these stages, topology design demands the most creative effort, as it needs to be conceptualized from scratch by human designers. While significant progress has been made in automating device sizing (Wang et al., 2020; Cao et al., 2022; Gao et al., 2023; Cao et al., 2024) and layout design (Kunal et al., 2019; Xu et al., 2019), topology generation remains a challenging problem due to its abstract and complex nature. This work aims to address this thorny issue.
2.2 GENERATIVE AI FOR ANALOG CIRCUIT TOPOLOGY GENERATION

An analog circuit topology can be naturally represented as a graph structure, providing a clear one-to-one mapping between graph generation and topology design (Figure 1(a)). For instance, adding a node to a graph corresponds directly to adding a new device to a circuit topology, while adding an edge between nodes represents a new connection between devices. This intuitive representation has led most existing topology generation methods to focus on graph generation (Figure 1(b)), such as CktGNN (Dong et al., 2023) and LaMAGIC (Chang et al., 2024). Yet, these approaches are often limited to generating only a single type of circuit topology (e.g., Op-Amps or power converters). This is because they rely on one-shot generation, predicting the adjacency matrix directly with a pre-defined number of nodes, and thereby suffer from low scalability (Zhu et al., 2022). In contrast, our work employs sequential graph generation (Section 3), offering far greater flexibility and adaptability to various types of circuit designs.

An analog circuit can also be compiled into a SPICE (Simulation Program with Integrated Circuit Emphasis) netlist (Figure 1(a)). A SPICE netlist is a text-based, high-level description of the connections between devices (i.e., nets) in a circuit topology, which is used in circuit performance simulation. Leveraging the powerful code and text generation capabilities of LLMs, recent work, AnalogCoder (Lai et al., 2024), applies domain-specific prompt engineering to existing LLMs to generate Python-style SPICE (i.e., PySpice) netlists for analog circuits. However, the availability of publicly accessible SPICE netlist data remains significantly limited compared to the wealth of publicly available analog circuit topologies. This is because analog circuit topologies are human-readable illustrations commonly found in textbooks and scientific publications, whereas netlists are software-oriented representations often used together with confidential semiconductor technologies to extract circuit performance with simulation tools. Another key challenge faced by code generation approaches is their reliance on high-level, text-based circuit topology representations. Specifically, to add a new device or connection, autoregressive models must predict multiple tokens to generate a complete line of code (Figure 1(b)), making them more prone to errors compared to graph-based methods, which require only a single action per step. AnalogCoder (Lai et al., 2024) shows that even advanced models (e.g., GPT-4) struggle to correctly generate simple circuits with fewer than 10 devices. Thus, our work focuses on graph generation to achieve more robust and scalable generation.

2.3 OPEN-SOURCE ANALOG CIRCUIT DATASETS

The lack of a comprehensive analog circuit dataset fundamentally hinders the development of generative AI-based methods to automate the design of analog ICs. While some circuit datasets exist in the field, such as those provided by Align (Kunal et al., 2019), CktGNN (Dong et al., 2023), and AMSNet (Tao et al., 2024), they are often limited to specific types of analog circuits (i.e., Op-Amps) without any labels (e.g., circuit performance). In addition, most of their topologies are synthesized by permuting pre-defined templates, resulting in non-unique designs.
To address this fundamental gap, we have created a thorough dataset by collecting 3350 distinct analog circuit topologies with diverse functionalities (e.g., LDO, Bandgap reference, Comparator, PLL, LNA, PA, Mixer, VCO, etc.) from public resources (Razavi, 2000; Razavi & Behzad, 2012; Johns & Martin, 2008; Gray et al., 2009; Allen & Holberg, 2011; Camenzind, 2005). To ensure accurate connections, each schematic is manually drawn in an industry-standard circuit design tool for performance simulation. We also labeled each circuit with its performance metrics.

3 APPROACH

AnalogGenie is a domain-specific GPT model designed to generate various analog circuit topologies with greatly improved scalability. To achieve this, we first introduce an expressiveness-enhanced graph representation that models each device pin as an individual node, ensuring that every connection and interaction between circuit devices is explicitly represented. Next, we develop a sequence-style data structure to effectively handle large-scale analog circuits, which can typically be modeled as large and sparse graphs. To further enhance the generation quality of AnalogGenie, we propose a data augmentation technique to address both data scarcity and the permutation invariance issue inherent in sequence data. Building upon these innovations, we customize a tokenizer to pre-train AnalogGenie and perform finetuning afterward, enabling AnalogGenie to generate specific types of high-performance circuits.

Figure 2: Overview of AnalogGenie. AnalogGenie represents each topology as a sequence and generates all sorts of analog circuit topologies from scratch by predicting the device pin to connect.

3.1 EXPRESSIVENESS-ENHANCED GRAPH REPRESENTATION FOR TOPOLOGY MODELING

Prior works (Dong et al., 2023; Lu et al., 2023) rely on high-level graph representations to generate circuit topologies, where each node represents a device or a subgraph. This method omits essential low-level device details, leading to the issue of ambiguous generation, i.e., a single generated graph can be interpreted as multiple unique topologies. To understand this, consider that an NMOS transistor (NM) has four device pins: drain (D), gate (G), source (S), and body (B). When an entire device is abstracted into a single node, it becomes challenging to interpret to which pin an edge connects (Figure 3(a)). Therefore, to ensure a unique one-to-one mapping between the graph and the circuit topology, where every circuit connection is explicitly represented, an analog circuit has to be represented at the pin level (Figure 3(b)). Furthermore, previous methods restricted the graph representation of analog circuit topologies to directed acyclic graphs (DAGs), greatly limiting the types of circuit topologies that can be learned and generated. In this work, we adopt a more expressive and flexible representation of analog circuit topologies. Specifically, we represent the topology of an analog circuit as a finite connected undirected graph G = (V, E), where V = {1, 2, . . . , n} is the node set representing the device pins, with |V| = n, and E ⊆ V × V is the edge set. For each node v in a graph G, we let N(v) = {u ∈ V | (u, v) ∈ E} denote the set of neighboring nodes of v.
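As a hedged illustration of this pin-level representation, the sketch below encodes the two-transistor circuit from Figure 1 as an undirected NetworkX graph whose nodes are device pins; the pin-naming scheme and the convention of connecting all pins on a shared net into a clique are our own choices, not necessarily the paper's exact construction:

```python
import itertools
import networkx as nx

# Nets of the two-transistor circuit from Figure 1 (SPICE lines
# "M2 (Vout Vout VDD VDD) pmos4" and "M1 (Vout Vin VSS VSS) nmos4"),
# mapped to the device pins (D/G/S/B) tied to each net.
nets = {
    "Vout": ["PM1.D", "PM1.G", "NM1.D"],
    "Vin":  ["NM1.G"],
    "VDD":  ["PM1.S", "PM1.B"],
    "VSS":  ["NM1.S", "NM1.B"],
}

G = nx.Graph()
for pins in nets.values():
    G.add_nodes_from(pins)
    # Pins sharing a net are mutually connected (one clique per net).
    G.add_edges_from(itertools.combinations(pins, 2))

print(sorted(G.neighbors("PM1.D")))  # N(v) for the PM1 drain: ['NM1.D', 'PM1.G']
```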
3.2 SEQUENTIAL GRAPH REPRESENTATION OF SCALABLE ANALOG CIRCUIT TOPOLOGIES

Previous methods (Dong et al., 2023; Lu et al., 2023) use adjacency matrices to represent circuit graphs. Yet, an adjacency matrix requires O(n²) space to store n nodes, regardless of the number of edges, which is inefficient for sparse graphs. Analog circuit topologies are typically sparse because most devices are connected only to their immediate neighbors. As a result, the number of edges e is far smaller than n², leaving the adjacency matrix filled with zeros and wasting significant space on non-existent edges. For example, the graph in Figure 2 has six nodes and six undirected edges, or 12 directed edges. An adjacency matrix needs 6 × 6 = 36 entries to represent them, wasting 24 entries on storing nothing. In contrast, our work represents the graph as an Eulerian circuit that stores only existing edges, making it much more efficient than adjacency matrices, particularly for handling large analog circuit topologies (see the sketch below). More examples of the advantages of using the Eulerian circuit to represent large sparse graphs can be found in Appendix A.3.
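A minimal sketch of the sequentialization step on a toy graph, using NetworkX's Eulerian-circuit routine. Note that an Eulerian circuit exists only when the graph is connected and every node has even degree; how AnalogGenie handles graphs that do not satisfy this as-is (e.g., by duplicating edges) is not modeled here:

```python
import networkx as nx

# Toy stand-in for the Figure 2 example: 6 nodes, 6 undirected edges.
G = nx.cycle_graph(6)  # every node has even degree, so an Eulerian circuit exists
assert nx.is_eulerian(G)

# Walk the Eulerian circuit and record the node sequence, closing at the start.
seq = [u for u, v in nx.eulerian_circuit(G, source=0)]
seq.append(0)
print(seq)  # e.g. [0, 1, 2, 3, 4, 5, 0]: e + 1 = 7 tokens vs. 36 adjacency entries
```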
Idea Generation Category: Conceptual Integration
jCPak79Kev
# AN ONLINE LEARNING THEORY OF TRADING-VOLUME MAXIMIZATION

**Tommaso Cesari**
EECS, University of Ottawa, Ottawa, Canada
tcesari@uottawa.ca

**Roberto Colomboni**
DEIB / Dept. of Computer Science, Politecnico di Milano / University of Milan, Milano, Italy
robertocolomboni@polimi.it

ABSTRACT

We explore brokerage between traders in an online learning framework. At any round t, two traders meet to exchange an asset, provided the exchange is mutually beneficial. The broker proposes a trading price, and each trader tries to sell their asset or buy the asset from the other party, depending on whether the price is higher or lower than their private valuation. A trade happens if one trader is willing to sell and the other is willing to buy at the proposed price. Previous work provided guidance to a broker aiming at enhancing traders' total earnings by maximizing the _gain from trade_, defined as the sum of the traders' net utilities after each interaction. This classical notion of reward can be highly unfair to traders with small profit margins, and far from the real-life utility of the broker. For these reasons, we investigate how the broker should behave to maximize the trading volume, i.e., the _total number of trades_. We model the traders' valuations as an i.i.d. process with an unknown distribution. If the traders' valuations are revealed after each interaction (full feedback), and the cumulative distribution function (cdf) of the traders' valuations is continuous, we provide an algorithm achieving logarithmic regret and show its optimality up to constants. If only their willingness to sell or buy at the proposed price is revealed after each interaction (2-bit feedback), we provide an algorithm achieving poly-logarithmic regret when the traders' valuation cdf is Lipschitz and show its near-optimality. We complement our results by analyzing the implications of dropping the regularity assumptions on the unknown cdf of the traders' valuations. If we drop the continuous cdf assumption, the regret rate degrades to Θ(√T) in the full-feedback case, where T is the time horizon. If we drop the Lipschitz cdf assumption, learning becomes impossible in the 2-bit feedback case.

1 INTRODUCTION

In modern financial markets, Over-the-Counter (OTC) trading platforms have emerged as dynamic and decentralized hubs, offering diverse alternatives to traditional exchanges. In recent years, these markets have experienced remarkable growth, solidifying their central role in the global financial ecosystem: OTC asset trading in the US surpassed 50 trillion USD in value in 2020 (Weill, 2020), with an upward trend documented since 2016 (www.bis.org, 2022). Brokers play a crucial role in OTC markets. Beyond acting as intermediaries between traders, they utilize their understanding of the market to identify the optimal prices for assets. Additionally, traders in these markets often respond to price changes: higher prices usually lead to selling, while lower prices typically result in buying (Sherstyuk et al., 2020). This adaptability appears across various asset classes, including stocks, derivatives, art, collectibles, precious metals and minerals, energy commodities (like gas and oil), and digital currencies (cryptocurrencies) (Bolić et al., 2024).
Our study draws inspiration from recent research analyzing the bilateral trade problem from an online learning perspective (Cesa-Bianchi et al., 2021; Azar et al., 2022; Cesa-Bianchi et al., 2023; 2024a; Bolić et al., 2024; Bernasconi et al., 2024; Bachoc et al., 2024a;b). In particular, we build on insights from Bolić et al. (2024), which addresses the brokerage problem in OTC markets where traders may decide to buy or sell their assets depending on prevailing market conditions.

1.1 MOTIVATIONS FOR CHOOSING TRADING VOLUME AS REWARD

Previous works have entirely focused on scenarios where brokers aim at maximizing the so-called cumulative _gain from trade_: the sum of the net utilities of the traders over the entire sequence of interactions with the broker. This classical approach has the two following pitfalls.

**Traders' Perspective.** Gain-from-trade maximization can cause unfairness in settings where the majority of traders make a living off small margins (e.g., in micro trading or high-frequency trading), and only a handful of high-payoff trades have the potential to occur. In these cases, gain-from-trade maximization can lead to sacrificing the majority of the population in favor of a small minority of traders who are lucky enough to be paired with people willing to be greatly underpaid for the good on sale. In contrast, trading-volume maximization gives the same dignity to all traders, granting everybody the same opportunity to trade, independently of their buying power. For a striking concrete example of this pitfall, see Section 3.

**Broker's Perspective.** From the broker's perspective, too, it might not be beneficial to potentially miss out on traders' exchanges by maximizing the gain from trade, given that, typically, brokers only earn when trades occur. For example, in settings where traders have to pay a small fee for each trade, it is clear that the broker's ultimate goal is to maximize trading volume. Another example where maximizing trading volume is superior to maximizing the gain from trade is the one discussed in the Traders' Perspective paragraph (and Section 3). In this case, a gain-from-trade-maximizing broker would risk alienating the vast majority of the population which, realistically, would end up leaving a broker that does not give them trading opportunities, consequently hurting the broker's bottom line.

For these reasons, in this work, we aim at providing strategies that boost the trading volume by maximizing the _number of trades_ in the broker-traders interaction sequence.

1.2 SETTING

In what follows, for any two real numbers a, b, we denote their minimum by a ∧ b and their maximum by a ∨ b. We now describe the brokerage online learning protocol. For any time t = 1, 2, . . . :

- Two traders arrive with their private valuations V_{2t−1} and V_{2t}
- The broker proposes a trading price P_t
- If the price P_t is between the lowest valuation V_{2t−1} ∧ V_{2t} and the highest valuation V_{2t−1} ∨ V_{2t} (meaning the trader with the lower valuation is willing to sell at P_t and the trader with the higher valuation is willing to buy at P_t), the transaction occurs, with the higher-valuation trader purchasing the asset from the lower-valuation trader at the price P_t
- The broker receives some feedback

As commonly assumed in the existing bilateral trade literature, we assume valuations and prices belong to [0, 1].
While previous literature aims at maximizing the cumulative _gain from trade_, defined as the sum of traders' net utilities [1] over the whole interaction sequence, our objective is to maximize the _number of trades_. Formally, for any p, v_1, v_2 ∈ [0, 1], our utility when posting a price p while the valuations of the traders are v_1 and v_2 is

$$g(p, v_1, v_2) := \mathbb{I}\{v_1 \wedge v_2 \le p \le v_1 \vee v_2\}.$$

The goal of the broker is to minimize the _regret_, defined, for any time horizon T ∈ ℕ, as

$$R_T := \sup_{p \in [0,1]} \mathbb{E}\left[\sum_{t=1}^{T} \big(G_t(p) - G_t(P_t)\big)\right],$$

where G_t(q) := g(q, V_{2t−1}, V_{2t}) for all q ∈ [0, 1] and t ∈ ℕ, and the expectation is taken over the randomness present in (V_t)_{t∈ℕ} and the (possible) randomness used by the broker's algorithm to generate the prices (P_t)_{t∈ℕ}.

[1] Formally, for any p, v_1, v_2 ∈ [0, 1], the gain from trade of a price p when the valuations of the traders are v_1 and v_2 is GFT(p, v_1, v_2) := (v_1 ∨ v_2 − v_1 ∧ v_2) · 𝕀{v_1 ∧ v_2 ≤ p ≤ v_1 ∨ v_2}.

Table 1: Overview of all the regret regimes (ln T, ln(MT), √T, and T), depending on the feedback (full or 2-bit) and the assumption on the cdf (M-Lipschitz, continuous, or no assumptions).

| Feedback | M-Lipschitz | Continuous | General |
|---|---|---|---|
| Full | Ω(ln T), Thm 2 | O(ln T), Thm 1 | Θ(√T), Thm 5+6 |
| 2-Bit | O(ln(MT) ln T), Ω(ln(MT)), Thm 3+4 | Ω(T), Thm 7 | Ω(T) |

As in Bolić et al. (2024), we assume that the traders' valuations V, V_1, V_2, . . . are generated i.i.d. from an _unknown_ distribution ν, a practical assumption for large and stable markets. [2] Finally, we consider the following two different types of feedback commonly studied in the online learning bilateral trade literature:

- _Full feedback._ At each round t, after having posted the price P_t, the broker has access to the traders' valuations V_{2t−1} and V_{2t}.
- _2-bit feedback._ At each round t, after having posted the price P_t, the broker has access to the indicator functions 𝕀{V_{2t−1} ≤ P_t} and 𝕀{V_{2t} ≤ P_t}.

The full-feedback model draws its motivation from _direct revelation mechanisms_, where the traders disclose their valuations V_{2t−1} and V_{2t} before each round, but the mechanism has access to this information only after having posted the current bid P_t (Cesa-Bianchi et al., 2021; 2024a). The 2-bit feedback model corresponds to _posted price_ mechanisms, where the broker has access only to the traders' willingness to buy or sell at the proposed posted price, and the valuations V_{2t−1} and V_{2t} are _never_ revealed.

[2] For further discussion on this assumption, see Appendix B.

1.3 OVERVIEW OF OUR CONTRIBUTIONS

In the full-feedback case, if the distribution ν of the traders' valuations has a _continuous_ cdf, we design an algorithm (Algorithm 1) suffering O(ln T) regret in the time horizon T (Theorem 1), and we provide a matching lower bound (Theorem 2). We complement these results by showing that dropping the continuous cdf assumption leads to a worse regret rate of Ω(√T) (Theorem 5), and we design an algorithm (Algorithm 3) achieving O(√T) regret (Theorem 6). In the 2-bit feedback case, if the cdf of the traders' valuations is M-Lipschitz, we design an algorithm (Algorithm 2) achieving regret O(ln(MT) ln T) (Theorem 3), where T is the time horizon, and provide a near-matching lower bound Ω(ln(MT)) (Theorem 4). We complement these results by showing that the problem becomes unlearnable if we drop the Lipschitzness assumption (Theorem 7). For a full summary of our results, see Table 1.
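Before detailing the techniques, here is a hedged simulation sketch of the full-feedback setting in which the broker simply posts the empirical median of all valuations seen so far. This follows the spirit of the median-based approach formalized in Lemma 1 below, but it is our own simplification, not necessarily Algorithm 1 as specified in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2_000
V = rng.beta(2, 5, size=(T, 2))          # i.i.d. valuations in [0, 1], continuous cdf
lo, hi = V.min(axis=1), V.max(axis=1)

trades, history = 0, []
for t in range(T):
    p = float(np.median(history)) if history else 0.5  # post the empirical median
    trades += int(lo[t] <= p <= hi[t])                 # a trade iff p separates them
    history += [V[t, 0], V[t, 1]]                      # full feedback: both revealed

# Regret against the best fixed price in hindsight, estimated on a grid.
grid = np.linspace(0, 1, 1001)
best = max(int(np.sum((lo <= p) & (p <= hi))) for p in grid)
print(f"trades={trades}, best fixed price={best}, regret≈{best - trades}")
```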
1.4 TECHNIQUES AND CHALLENGES

Online learning with a continuous action domain and full feedback is usually tackled by discretizing the action domain and then playing an optimal expert algorithm on the discretization, or by directly running exponential weights algorithms in the continuum (Maillard & Munos, 2010; Krichene et al., 2015; Cesa-Bianchi et al., 2024b). These approaches require that the (expected) reward function is Lipschitz and lead to a regret rate of order Õ(√T). In contrast, our expected reward function is _not_ Lipschitz in general. To overcome this challenge, we leverage the specific structure of the problem by proving Lemma 1, which enables us to design an algorithm that achieves an exponentially better regret rate of O(ln T) even when the underlying cdf, and hence the associated reward function, is only continuous. Moreover, we establish a matching Ω(ln T) lower bound that, surprisingly, applies even when the reward function is Lipschitz, demonstrating that additional Lipschitz regularity beyond continuity does not contribute to faster rates in this setting. This lower bound construction is particularly challenging because the shape of the function p ↦ E[G_t(p)] can only be controlled indirectly through the traders' valuation distribution: to avoid exceedingly complex calculations, extra care is required in selecting appropriate instances. Even then, we needed a subtle and somewhat intricate Bayesian argument to obtain the lower bound.

In the 2-bit feedback model, we remark that the available feedback is enough to reconstruct _bandit_ feedback. Consequently, when the underlying cdf, and hence the expected reward function, is M-Lipschitz, a viable approach is to discretize the action space [0, 1] with K uniformly spaced points and run an optimal bandit algorithm on the discretization. This approach immediately yields a regret rate of order O(MT/K + √(KT)). This bound leads to a regret of order O(M^{1/3} T^{2/3}) by tuning K := Θ(M^{2/3} T^{1/3}) when M is known to the learner, or of order O(M T^{2/3}) by tuning K := Θ(T^{1/3}) when the learner does not possess this knowledge. In contrast, we exploit the extra information provided by the 2-bit feedback and the intuition provided by Lemma 1 to devise a binary search algorithm achieving the exponentially better rate of O(ln(MT) ln T), with the additional feature of being oblivious to M. Our corresponding lower bound shows that this rate is optimal (up to a ln T factor), demonstrating through an information-theoretic argument that some sort of binary search is essentially a necessary step for optimal learning.

1.5 RELATED WORK

Bilateral trade was originally studied in a one-shot setting where a broker has to devise a mechanism to make a buyer and a seller trade, and classical properties like incentive compatibility, individual rationality, budget balance, and efficiency were investigated.
Since the pioneering work of Myerson and Satterthwaite and their celebrated impossibility result (Myerson & Satterthwaite, 1983), the study of bilateral trade has grown significantly, particularly from a game-theoretic and approximation perspective (McAfee, 2008; Colini-Baldeschi et al., 2016; 2017; Blumrosen & Mizrahi, 2016; Brustle et al., 2017; Colini-Baldeschi et al., 2020; Babaioff et al., 2020; Dütting et al., 2021; Deng et al., 2022; Kang et al., 2022; Fei, 2022; Archbold et al., 2023; Xu et al., 2024). For a comprehensive overview, refer to Cesa-Bianchi et al. (2024a). In recent years, the focus has expanded to include online learning settings for bilateral trade. Given their close relevance to our paper, we concentrate our discussion on these works.

In Cesa-Bianchi et al. (2021); Azar et al. (2022); Cesa-Bianchi et al. (2024a; 2023); Bernasconi et al. (2024); Cesa-Bianchi et al. (2024b), the authors examined bilateral trade problems where the reward function is the _gain from trade_ and each trader has a fixed role as either a seller or a buyer. In Cesa-Bianchi et al. (2021), the authors investigated a scenario where seller and buyer valuations form two distinct i.i.d. sequences. In the full-feedback case, they achieved a regret bound of Õ(√T), which was later refined to O(√T) in Cesa-Bianchi et al. (2024a). They also demonstrated a worst-case regret of Ω(√T) even when sellers' and buyers' valuations are independent of each other and their cdfs are Lipschitz. For the 2-bit feedback scenario under i.i.d. valuations, Cesa-Bianchi et al. (2021) proved that any algorithm must suffer linear regret, even under either the M-Lipschitz joint cdf assumption or the traders' valuation independence assumption. However, when both conditions are simultaneously satisfied, they proposed an algorithm achieving a regret rate of Õ(M^{1/3} T^{2/3}), later refined to O(M^{1/3} T^{2/3}) in Cesa-Bianchi et al. (2024a). Cesa-Bianchi et al. (2021) also established a worst-case regret lower bound of Ω(T^{2/3}) in this case, which, however, does not display any dependence on M. Cesa-Bianchi et al. (2021; 2024a) also showed that the adversarial bilateral trade problem is unlearnable even with full feedback. To achieve learnability beyond the i.i.d. case, Cesa-Bianchi et al. (2023; 2024b) explored weakly budget-balanced mechanisms, allowing the broker to post different selling and buying prices as long as the buyer pays more than what the seller receives. They demonstrated that learning can be achieved using weakly budget-balanced mechanisms in the 2-bit feedback setting at a regret rate of Õ(M T^{3/4}) when the joint seller/buyer cdf may vary over time but is M-Lipschitz. Furthermore, for the same setting, they provided a matching Ω(T^{3/4}) lower bound in the time horizon, even when the process is required to be i.i.d., but their lower bound does not feature any dependence on M. Azar et al. (2022) investigated whether learning is possible in the adversarial case by considering α-regret, achieving Θ̃(√T) bounds for 2-regret in full feedback, and an Õ(T^{3/4}) upper bound in 2-bit feedback. Following another direction, Bernasconi et al. (2024) explored globally budget-balanced mechanisms in the adversarial case, showing a Θ(√T) regret rate in full feedback and an Õ(T^{3/4}) rate in the 2-bit feedback case.
The closest to our work is Bolić et al. (2024), where the authors studied the same i.i.d. version of our trading problem with flexible seller and buyer roles, but with the target reward function being the _gain from trade_. Under the M-Lipschitz cdf assumption, they obtained tight Θ(M ln T) regret in the full-feedback case. Surprisingly, in the same full-feedback case, but using our different reward function, we achieve a Θ(ln T) regret rate even when the cdf is only continuous: in our case, the additional Lipschitz regularity does not offer any speedup once the continuity assumption is fulfilled. Furthermore, under the M-Lipschitz cdf assumption, Bolić et al. (2024) proved a sharp rate of Θ(√(MT)) in the 2-bit feedback case. Interestingly, using our different reward function, we achieve an exponentially faster upper bound of O(ln(MT) ln T), which is tight up to a ln T factor. If the Lipschitz cdf assumption is removed, the learning rate for both our problem and the one in Bolić et al. (2024) degrades to Θ(√T) in the full-feedback case, and the problem becomes unlearnable in the 2-bit feedback case.

2 THE MEDIAN LEMMA

In this section, we present the Median Lemma (Lemma 1), a simple but crucial result for what follows, and the key upon which our main algorithms are based. At a high level, Lemma 1 states that a broker who aims at maximizing the number of trades should post prices that are as close as possible to the _median_ of the (unknown) traders' valuation distribution ν, and that the instantaneous regret which the broker incurs by playing any price is (proportional to) the _square_ of the distance between the median and the price, if distances are measured with respect to the pseudo-metric induced by the traders' valuation cdf.

**Lemma 1** (The median lemma)**.** _If the cdf F of ν is continuous, then, for any t ∈ ℕ and any p ∈ [0, 1],_

$$\mathbb{E}[G_t(p)] = 2F(p)\big(1 - F(p)\big) \qquad \text{and} \qquad \frac{1}{2} - \mathbb{E}[G_t(p)] = 2\left(\frac{1}{2} - F(p)\right)^2.$$

_In particular, the function p ↦ E[G_t(p)] is maximized at any point m ∈ [0, 1] such that F(m) = 1/2._

Before presenting the proof of Lemma 1, we remark that points m ∈ [0, 1] satisfying F(m) = 1/2 do exist by the intermediate value theorem, because F(0) = 0, F(1) = 1, and F is continuous.

_Proof._ For each t ∈ ℕ and each p ∈ [0, 1], we have that

$$\mathbb{E}[G_t(p)] = \mathbb{P}\big[\{V_{2t-1} \le p < V_{2t}\} \cup \{V_{2t} \le p \le V_{2t-1}\}\big] = \mathbb{P}[V_{2t-1} \le p]\,\mathbb{P}[p < V_{2t}] + \mathbb{P}[V_{2t} \le p]\,\mathbb{P}[p \le V_{2t-1}] = 2F(p)\big(1 - F(p)\big),$$

where the second equality follows from additivity and independence, while in the last equality we leveraged the continuity of F to obtain P[p ≤ V_{2t−1}] = P[p < V_{2t−1}] = 1 − F(p). To conclude, it is enough to note that, for each p ∈ [0, 1], it holds that 1/4 − F(p)(1 − F(p)) = (1/2 − F(p))². ∎
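A quick numerical sanity check of Lemma 1 (our own illustration, with an arbitrary continuous distribution): the empirical trade frequency at a price p should match 2F(p)(1 − F(p)) and peak at the median.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
v1, v2 = rng.beta(2, 5, n), rng.beta(2, 5, n)  # i.i.d. valuations, continuous cdf

for p in (0.10, 0.264, 0.50):                  # 0.264 is roughly the Beta(2,5) median
    freq = np.mean((np.minimum(v1, v2) <= p) & (p <= np.maximum(v1, v2)))
    F = np.mean(v1 <= p)                       # empirical cdf at p
    print(f"p={p:.3f}  trade freq={freq:.4f}  2F(1-F)={2 * F * (1 - F):.4f}")
```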
3 TRADING VOLUME VS GAIN FROM TRADE

In this section, we leverage Lemma 1 to show with a formal example that, unlike trading-volume-maximizing brokers, gain-from-trade-maximizing brokers can be heavily biased towards small segments of the population and, as a result, end up hurting their own bottom lines.

Assume that the traders' valuations V, V_1, V_2, . . . have common density f defined, for all x ∈ [0, 1], by

$$f(x) := \left(\frac{1}{\varepsilon} - 1\right) \mathbb{I}\left\{\tfrac{1}{2} - \varepsilon \le x \le \tfrac{1}{2}\right\} + \mathbb{I}\{1 - \varepsilon \le x \le 1\},$$

for some ε ∈ (0, 1/2). At a high level, this population of traders is clustered into two segments: a _low_-valuation cluster L that believes that the good on sale has a value slightly smaller than 1/2, and a _high_-valuation cluster H that believes the value is slightly smaller than 1. If ε ≈ 0, the overwhelming majority of the population belongs to the low-valuation cluster L. In this case, we will prove that a gain-from-trade-maximizing
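To see the contrast the example is building towards numerically, here is a hedged Monte Carlo sketch (our own illustration; the two candidate prices and the value of ε are arbitrary choices): a price inside the low cluster trades roughly half the time but earns tiny per-trade gains, while a price between the clusters captures only the rare, high-gain cross-cluster trades.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n = 0.01, 1_000_000

def sample(k):
    # Draw from f: in [1/2 - eps, 1/2] w.p. 1 - eps, in [1 - eps, 1] w.p. eps.
    in_low = rng.random(k) < 1 - eps
    return np.where(in_low,
                    rng.uniform(0.5 - eps, 0.5, k),
                    rng.uniform(1 - eps, 1.0, k))

v1, v2 = sample(n), sample(n)
lo, hi = np.minimum(v1, v2), np.maximum(v1, v2)

for p in (0.5 - eps / 2, 0.75):        # inside cluster L vs. between the clusters
    trade = (lo <= p) & (p <= hi)
    print(f"p={p:.3f}  volume={trade.mean():.4f}  "
          f"GFT={((hi - lo) * trade).mean():.5f}")
# Typically: the in-cluster price trades ~25x more often, while the
# between-cluster price collects a larger cumulative gain from trade.
```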
Idea Generation Category: Direct Enhancement
OvU9u6wS2J
# CG-BENCH: CLUE-GROUNDED QUESTION ANSWERING BENCHMARK FOR LONG VIDEO UNDERSTANDING

**Guo Chen**^{1,∗}, **Yicheng Liu**^{1,∗}, **Yifei Huang**^{2,3,∗}, **Yuping He**^1, **Baoqi Pei**^{2,4}, **Jilan Xu**^{2,5}, **Yali Wang**^2, **Tong Lu**^1, **Limin Wang**^{1,2,†}
1 State Key Laboratory for Novel Software Technology, Nanjing University
2 Shanghai Artificial Intelligence Laboratory
3 The University of Tokyo
4 Zhejiang University, 5 Fudan University
chenguo1177@gmail.com
∗ Equal contribution. † Corresponding author.

ABSTRACT

The existing video understanding benchmarks for multimodal large language models (MLLMs) mainly focus on short videos. The few benchmarks for long video understanding often rely on multiple-choice questions (MCQs). Due to the limitations of MCQ evaluations and the advanced reasoning abilities of MLLMs, models can often answer correctly by combining short video insights with elimination, without truly understanding the content. To bridge this gap, we introduce CG-Bench, a benchmark for clue-grounded question answering in long videos. CG-Bench emphasizes the model's ability to retrieve relevant clues, enhancing evaluation credibility. It includes 1,219 manually curated videos organized into 14 primary, 171 secondary, and 638 tertiary categories, making it the largest benchmark for long video analysis. The dataset features 12,129 QA pairs in three question types: perception, reasoning, and hallucination. To address the limitations of MCQ-based evaluation, we develop two novel clue-based evaluation methods: clue-grounded white-box and black-box evaluations, assessing whether models generate answers based on accurate video understanding. We evaluate multiple closed-source and open-source MLLMs on CG-Bench. The results show that current models struggle significantly with long videos compared to short ones, and there is a notable gap between open-source and commercial models. We hope CG-Bench will drive the development of more reliable and capable MLLMs for long video comprehension. All annotations and video data are available at https://cg-bench.github.io/leaderboard/.

1 INTRODUCTION

Recently, video understanding has made significant progress with the advent of multimodal large language models (MLLMs). To evaluate these models, many recent efforts have been made to create video understanding benchmarks (Li et al., 2023b; Mangalam et al., 2024; Liu et al., 2024e), providing assessments of model comprehension capabilities and clues for future improvement. Since early benchmarks only focus on short video clips, recent works have started to create benchmarks (Fu et al., 2024a; Wu et al., 2024b; Zhou et al., 2024; Huang et al., 2024) for longer videos (≥ 10 minutes). However, these works employ multiple-choice questions (MCQs), where the difficulty level is heavily influenced by the configuration of negative options. In such scenarios, models (Chen et al., 2023d; Li et al., 2024a; Zhang et al., 2024b; Lin et al., 2024) tend to focus only on general video knowledge and use elimination to avoid selecting the negative options. As a result, the models can achieve correct answers without genuinely engaging with the relevant video content, leading to a lack of trustworthiness. One illustration can be found in question 2 of Figure 1: the option 'A' can be easily eliminated based purely on textual information.
Recently, the NExT-GQA (Xiao et al., 2024) benchmark has tried to address the problem of credible models by incorporating temporal grounding into MCQ. However, NExT-GQA is limited to the NextQA (Xiao et al., 2021) dataset, which lacks diversity and primarily consists of short videos. A comprehensive benchmark for credibly evaluating _generalist_ MLLMs on long video understanding is still missing in the research community.

Figure 1: _Left:_ examples of CG-Bench's clue-grounded annotation. To correctly answer the questions, models need to ground their reasoning in the correct clue. _Right:_ CG-Bench provides an evaluation suite with two novel credibility evaluation criteria while supporting both MCQ and open-ended evaluations.

To fill this gap, we introduce **CG-Bench**, illustrated in Figure 1, a novel benchmark designed to evaluate clue-grounded question answering in long videos. In contrast to traditional benchmarks that focus primarily on the accuracy of question answering, **CG-Bench** goes a step further by evaluating whether the model bases its answers on relevant clues within the video. **CG-Bench** designs two novel clue-based evaluation methods to provide more reliable model performance assessments: 1) the _clue-grounded white-box evaluation_ requires the model to directly provide the clue interval corresponding to the question while selecting the correct answer; 2) the _clue-grounded black-box evaluation_ requires the model's video-level and clue-level MCQ accuracies to align. Furthermore, we propose a novel heuristic method, aided by human-annotated clues, for open-ended QA evaluation, to effectively balance cost and performance.

CG-Bench features 1,219 meticulously curated videos and 12,129 human-annotated question-answer-clue (QAC) triplets, establishing it as the largest held-out VideoQA and question grounding benchmark for long videos. It employs a highly detailed manual classification system, organizing each video into 14 primary categories, 171 secondary categories, and 638 tertiary categories. The benchmark includes three main question types: perception, reasoning, and hallucination. Perception questions are further divided into 10 subcategories, such as object and attribute recognition, while reasoning questions are categorized into 12 subcategories, including relation reasoning, etc.

We evaluate a range of closed-source and open-source MLLMs using this benchmark. The commercial models GPT-4o (OpenAI, 2024) and Gemini-1.5 Pro (Anil et al., 2023) achieve scores of 53.9 and 43.4, respectively, with 128 frames on long-video multiple-choice questions. The leading open-source MLLM, Qwen2-VL-72B (Wang et al., 2024b), scores 51.4 under the same conditions, indicating that open-source models are starting to rival GPT-4o. However, our credibility assessments and open-ended evaluations reveal a significant drop in accuracy for existing MLLMs, with scores decreasing from 53.9 to 21.7. This underscores the considerable room for improvement in current MLLMs for long video understanding. We hope this benchmark can become a vital tool for advancing the research and development of more reliable and capable MLLMs.

2 RELATED WORK

**Multimodal Large Language Models (MLLMs)** have rapidly gained popularity due to their proficiency in integrating visual and textual information (Liu et al., 2024a; 2023; Chen et al., 2023d; Wang et al., 2022; 2024d).
Recent advancements, such as LLaVA-Next-Video (Zhang et al., 2024b), LLaVA-OneVision (Li et al., 2024a), InternVL2 (Chen et al., 2024e), and Eagle-2 (Li et al., 2025), focus on enhancing MLLMs by integrating LLM backbones with visual encoders and specialized adapters, or by creating higher-quality multimodal instruction data. This results in improved performance across tasks that involve both text and images. Another area of focus is multimodal video understanding. Most models (Chen et al., 2024e; Li et al., 2023a; Maaz et al., 2023; Pei et al., 2024; Huang et al., 2018; 2020b) are optimized for short videos, typically a few seconds or at most a few minutes, without exploring their visual understanding in longer contexts. In response, researchers have explored methods such as compressing video frames into fewer visual tokens to allow for the handling of longer videos, as seen in models like LLaMA-VID (Li et al., 2023c), MovieChat (Song et al., 2024), MA-LMM (He et al., 2024), VideoChat-Flash (Li et al., 2024b), and Oryx (Liu et al., 2024f). In addition, LongVA (Zhang et al., 2024a) and LongViLA (Xue et al., 2024) explore system-level optimization for long-context MLLMs, which can natively support long video understanding. Despite the continuous proposal of various MLLMs, their real-world performance in long video understanding is still underexplored.

Figure 2: Distribution of video root categories, displaying the number of videos within each category.

Figure 3: Distribution of question root types, illustrating the frequency of different question types.

**MLLM Benchmarks.** The development of benchmarks is becoming increasingly essential, especially for evaluating MLLM performance in video understanding tasks. As the field develops, various benchmarks have been established to assess MLLMs across different modalities and video lengths. Previous efforts primarily focused on short videos, with traditional specialized VideoQA datasets like TVQA (Lei et al., 2018) and NextQA (Xiao et al., 2021), and benchmarks for MLLMs like VideoBench (Ning et al., 2023), MVBench (Li et al., 2023b), and EgoSchema (Mangalam et al., 2024). MVBench provides a comprehensive framework for evaluating general temporal understanding capabilities through question answering on short clips, while EgoSchema focuses on egocentric video understanding with multiple-choice questions. The videos in these benchmarks typically range from a few seconds to several tens of seconds, making them similar to image benchmarks and thus hindering the development of general video LLMs. Recently, several works such as VideoMME (Fu et al., 2024a), CinePile (Rawal et al., 2024), MLVU (Zhou et al., 2024), LongVideoBench (Wu et al., 2024b), MoVQA (Zhang et al., 2023b), HourVideo (Chandrasegaran et al., 2024), and LVBench (Wang et al., 2024c) have introduced long video benchmarks to evaluate MLLMs. VideoMME constructs a diverse video MCQ dataset, incorporating multimodal evaluations with visuals, subtitles, and audio. MLVU designs a range of tasks that focus on granular detail understanding to assess long video comprehension capabilities. However, a common limitation of these benchmarks is their reliance on MCQs, where the difficulty is heavily influenced by the construction of negative options. This allows MLLMs to often eliminate incorrect answers using sparse frames and common-sense reasoning, which can inflate performance.
With our clue interval annotations, CG-Bench enhances the evaluation quality of MLLMs in long video understanding by introducing new credibility-oriented evaluation mechanisms.

3 CG-BENCH

3.1 DATASET CONSTRUCTION

The dataset construction process of CG-Bench consists of three steps: video collection, question-answer-clue annotation, and quality review iteration. We provide details as follows.

**Video Collection.** To avoid using videos that have been used for pre-training by existing MLLMs, we manually collect videos from the internet and provide new annotations for them. To facilitate the collection of raw videos from the Internet, we define 14 root domains as listed in Figure 2. During the collection process, we manually assign a brief tag (4-8 words) to categorize the content of each video. This supplementary tagging helps ensure the diversity of the videos. We define a video to be long if it exceeds 10 minutes in duration. Accordingly, we collected videos longer than 10 minutes while considering the distribution of video durations. Furthermore, we retain the accompanying subtitles and audio to provide multimodal information. We carefully review and filter the videos manually over 7 rounds. More details about the video collection can be found in the supplementary material.

**Question-Answer-Clue Annotation.** After collecting the raw video data, we annotate it with high-quality question-answer-clue (QAC) triplets. To ensure question diversity, we establish a taxonomy with three types: Perception, Reasoning, and Hallucination. As shown in Figure 3, Perception and Reasoning questions are further divided into 10 and 14 subcategories, respectively, while Hallucination questions combine elements of both. Annotators are instructed to include negative options to create a multiple-choice QA format, facilitating straightforward and cost-effective assessments. To minimize expression loss, annotators use their native language during the annotation process. Each video is annotated with 6 to 15 QAC triplets, depending on its duration. To ensure consistency in QAC triplets, we standardized the annotation process by first annotating the QA pairs and then identifying the clues. Annotators must watch the entire video, select a question type from the predefined categories, and then annotate a new question and its corresponding answer. Next, they select one or more intervals from the video to form a QAC triplet. Since the actual clue intervals often consist of multiple short moments, annotating each fragment is costly. Therefore, annotators are required to mark intervals that cover these short moments while ensuring the completeness of each event.

**Review Iteration.** To ensure the difficulty and quality of the dataset, we conduct a repetitive review and iteration process to enhance annotation quality. We reject annotations that do not meet our quality standards and request annotators to revise them.
Our quality requirements for annotations and the measures taken to ensure them are as follows: 1) _The rationality of the question, options, and answer_: we conduct manual reviews; 2) _The video dependency of the question, options, and answer_: we input questions and options into GPT-4 and filter out QA pairs that can be answered solely based on pure text; 3) _The difficulty of negative options in multiple-choice questions_: we input the video, questions, and options into MLLMs and filter out QA pairs that can be answered using only sparse frames and small models; 4) _The positional diversity of clue intervals_: we monitor the distribution of clue duration and position and provide timely guidance to annotators.

3.2 DATASET STATISTICS & COMPARISONS

We present the detailed statistics of our dataset to provide a more comprehensive understanding, including meta-information, QAC triplets, qualitative analysis, and comparison to previous works.

3.2.1 DATASET STATISTICS

**Video Meta.** Our dataset comprises a total of 1219 videos with multimodal information, including vision, audio, and subtitles. The duration of the videos varies between 10 and 80 minutes, with a distribution illustrated in Figure 4. Notably, videos that last between 20 and 30 minutes are the most prevalent. The selection process is manual and based on content relevance, which mirrors real-world duration distributions and highlights a long-tail effect for longer videos. As illustrated in Figure 2, each video is classified using a three-tiered tagging system that succinctly encapsulates its content and assigns it to fundamental categories. The primary classification is augmented by a secondary layer of 171 tags and a tertiary layer consisting of 638 tags. This multi-level tagging mechanism guarantees a broad diversity of data content. For a more detailed classification of tags, please consult the supplementary materials.

**QAC Annotation.** CG-Bench includes 12,129 annotations consisting of questions, answers, and clues. Table 1 presents the sentence lengths and totals for the annotated questions and answers, highlighting the linguistic diversity within our dataset. Each QAC triplet is annotated with 4 to 7 negative samples, resulting in an approximately uniform distribution with option ratios (A to H) of 12.4%, 14.7%, 12.1%, 14.8%, 15.1%, 16.1%, 11.6%, and 3.1%. There are a total of 14,362 clue intervals across all QAC triplets, with an average duration of 19.24 seconds each. Additionally, we conduct a further analysis of the positions of clue intervals within the video. Figure 5 illustrates the frequency with which each normalized timestamp is covered by intervals. This demonstrates the unbiased nature of our interval annotations and highlights the diversity of our QA content in temporal position.

Figure 4: Video duration distribution, showing the number of videos for different duration intervals.
Figure 5: Clue time coverage, illustrating the frequency of clues across different time bins.

Table 1: Annotation statistics, detailing the number of QAC triplets, questions, options, and clues.

| Annotation Statistic | Value |
|---|---|
| #QAC Triplets | 12,129 |
| #Avg. QAC per video | 9.95 |
| #Avg. options per QAC | 6.96 |
| #Avg. clues per QAC | 1.18 |
| #Avg. words of questions | 20.07 |
| #Avg. words of options | 22.88 |
| #Avg. duration of clues (s) | 19.24 |

Table 2: Comparison of benchmarks across key aspects: number of videos (#Video), average duration (#Dur.(s)), number of QA pairs (#QA Pairs), number of clues (#Clue), annotation method (M/A for manual/automatic), Open-Domain (OD), Open-Ended (OE), Multi-modal (MME), and Credibility (CE) Evaluation.

| Benchmark | #Video | #Dur.(s) | #QA Pairs | #Clue | Anno. | OD | OE | MME | CE |
|---|---|---|---|---|---|---|---|---|---|
| _Question-Clue Grounding_ | | | | | | | | | |
| NextGQA (Xiao et al., 2024) | 1,000 | 39.5 | - | 10,531 | M | ✗ | - | - | - |
| Ego4D-NLQ val (Grauman et al., 2022) | 415 | 499.7 | - | 4,554 | M | ✗ | - | - | - |
| Ego4D-NLQ test (Grauman et al., 2022) | 333 | 493.7 | - | 4,005 | M | ✗ | - | - | - |
| MultiHop-EgoQA test (Chen et al., 2024c) | 360 | - | - | 1,080 | A&M | ✗ | - | - | - |
| E.T. Bench test (Liu et al., 2024d) | - | 129.3 | - | 2,011 | M | ✓ | - | - | - |
| RexTime test (Chen et al., 2024a) | - | 141.1 | - | 2,143 | A&M | ✗ | - | - | - |
| **CG-Bench-QG** | 1,219 | 1624.4 | - | 14,362 | M | ✓ | - | - | - |
| _Short-Video QA_ | | | | | | | | | |
| TVQA (Lei et al., 2018) | 2,179 | 11.2 | 15,253 | 15,253 | M | ✗ | ✗ | ✗ | ✗ |
| STAR (Wu et al., 2024a) | 914 | 11.9 | 7,098 | 7,098 | A | ✗ | ✗ | ✗ | ✗ |
| NextQA (Xiao et al., 2021) | 1,000 | 44.0 | 8,564 | ✗ | A | ✗ | ✓ | ✗ | ✗ |
| EgoSchema (Mangalam et al., 2024) | 5,063 | 180.0 | 5,063 | ✗ | A&M | ✗ | ✗ | ✗ | ✗ |
| TempCompass (Liu et al., 2024e) | 410 | 11.4 | 7,540 | ✗ | A&M | ✗ | ✗ | ✗ | ✗ |
| RexTime test (Chen et al., 2024a) | - | 141.1 | - | 2,143 | A&M | ✗ | ✗ | ✗ | ✓ |
| MVBench (Li et al., 2023b) | 3,641 | 16.0 | 4,000 | ✗ | A&M | ✗ | ✗ | ✗ | ✗ |
| MMBench-Video (Fang et al., 2024) | 600 | 165.4 | 1,998 | ✗ | M | ✓ | ✓ | ✗ | ✗ |
| **CG-Bench-Clue** | 12,129 | 22.8 | 12,129 | - | M | ✓ | - | ✓ | - |
| _Long-Video QA_ | | | | | | | | | |
| EgoTimeQA test (Di & Xie, 2024) | 148 | 492 | 500 | ✗ | A | ✗ | ✗ | ✗ | ✗ |
| MovieChat-1K (Song et al., 2024) | 130 | 500.0 | 1,950 | ✗ | M | ✗ | ✗ | ✗ | ✗ |
| Video-MME (Fu et al., 2024a) | 900 | 1017.9 | 2,700 | ✗ | M | ✓ | ✗ | ✓ | ✗ |
| LongVideoBench (Wu et al., 2024b) | 966 | 1408.0 | 6,678 | ✗ | M | ✓ | ✗ | ✗ | ✗ |
| MLVU (Zhou et al., 2024) | 757 | 720.0 | 2,593 | ✗ | M | ✗ | ✗ | ✗ | ✗ |
| **CG-Bench** | 1,219 | 1624.4 | 12,129 | 14,362 | M | ✓ | ✓ | ✓ | ✓ |

3.2.2 COMPARISON WITH PREVIOUS BENCHMARKS

CG-Bench is characterized by its diverse features, allowing it to be compared with three distinct types of benchmarks, as depicted in the three sections of Table 2: Question-Clue Grounding, Short-Video QA, and Long-Video QA benchmarks. For the question-clue grounding benchmarks, NextGQA (Xiao et al., 2024), Ego4D-NLQ (Grauman et al., 2022), MultiHop-EgoQA (Chen et al., 2024c), E.T. Bench (Liu et al., 2024d), and RexTime (Chen et al., 2024a) are primarily centered around action and egocentric domains, and their videos are sampled from academic datasets.
In comparison, the question-clue grounding part of CG-Bench, CG-Bench-QG, stands out with the highest number of videos and the longest average length, a diversity that fosters a broad spectrum of question-grounding queries. Furthermore, we transform the QAC triplets into a novel Short-Video QA benchmark, termed CG-Bench-Clue. When contrasted with prior short video benchmarks such as TempCompass (Liu et al., 2024e), MVBench (Li et al., 2023b) and MMBench-Video (Fang et al., 2024), our CG-Bench-Clue emerges as the _**largest**_, _**held-out**_, _**open-domain**_ and _**multimodal**_ Short-Video QA benchmark. As for the Long-Video QA benchmark, CG-Bench excels in the number of videos, video length, quantity of questions, and annotation quality. Owing to our clue interval annotations, CG-Bench further facilitates reliable evaluations for long videos and open-ended evaluations with clue assistance, a feature that sets it apart from existing long video benchmarks like Video-MME (Fu et al., 2024a) and MLVU (Zhou et al., 2024).

Idea Generation Category:
3 (Other)
le4IoZZHy1
# PREFERENCE OPTIMIZATION FOR REASONING WITH PSEUDO FEEDBACK

**Fangkai Jiao** 1,3,† **Geyang Guo** 4,† **Xingxing Zhang** 2 **Nancy F. Chen** 3,1 **Shafiq Joty** 5,1 **Furu Wei** 2

1 Nanyang Technological University 2 Microsoft Research 3 I2R, A*STAR 4 Georgia Institute of Technology 5 Salesforce Research

ABSTRACT

Preference optimization techniques, such as Direct Preference Optimization (DPO), are frequently employed to enhance the reasoning capabilities of large language models (LLMs) in domains like mathematical reasoning and coding, typically following supervised fine-tuning. These methods rely on high-quality labels for reasoning tasks to generate preference pairs; however, the availability of reasoning datasets with human-verified labels is limited. In this study, we introduce a novel approach to generate pseudo feedback for reasoning tasks by framing the labeling of solutions to reasoning problems as an evaluation against associated _test cases_. We explore two forms of pseudo feedback based on test cases: one generated by frontier LLMs and the other by extending self-consistency to multiple test cases. We conduct experiments on both mathematical reasoning and coding tasks using pseudo feedback for preference optimization, and observe improvements across both tasks. Specifically, using Mathstral-7B as our base model, we improve MATH results from 58.3 to 68.6, surpassing both NuminaMath-72B and GPT-4-Turbo-1106-preview. On GSM8K and College Math, our scores increase from 85.6 to 90.3 and from 34.3 to 42.3, respectively. Building on Deepseek-coder-7B-v1.5, we achieve a score of 24.3 on LiveCodeBench (from 21.1), surpassing Claude-3-Haiku. [1]

1 INTRODUCTION

Large language models (LLMs) have demonstrated exceptional capabilities in reasoning tasks such as math reasoning and coding (Roziere et al., 2023; Dubey et al., 2024; Guo et al., 2024). A _de facto_ pipeline for enhancing the reasoning capabilities of LLMs involves further exposing them to reasoning-specific data through continued pre-training or supervised fine-tuning (Roziere et al., 2023; Dubey et al., 2024; Yu et al., 2023; Tang et al., 2024; Dong et al., 2023), followed by preference learning techniques such as direct preference optimization (DPO; Rafailov et al. (2023)) or proximal policy optimization (PPO; Schulman et al. (2017)). Both DPO and PPO depend on reliable labels for reasoning problems to generate preference pairs and train reward models (Lightman et al., 2024; Uesato et al., 2022). Unfortunately, reasoning datasets with large-scale, human-verified labels remain limited, and scaling them through domain experts is becoming increasingly time-consuming and expensive, particularly as LLMs continue to evolve in capabilities (Burns et al., 2024; Bowman et al., 2022), which greatly limits the potential of preference learning methods such as DPO and PPO.

Scalable oversight (Bowman et al., 2022) demonstrates that the annotation effort of human experts can be significantly reduced with the assistance of non-expert LLMs. However, complete elimination of human annotation remains unattainable. Building on this, Khan et al. (2024a) further reduced labeling costs by incorporating a debating mechanism, though this approach is constrained to reasoning tasks with a finite answer space (e.g., multiple-choice questions).

† Work done during internship at Microsoft Research.
1 The code is released at: [https://github.com/microsoft/unilm/tree/master/PFPO](https://github.com/microsoft/unilm/tree/master/PFPO)

Other works have employed self-consistency-based answers or their variants as pseudo-labels to filter self-generated solutions (Huang et al., 2022; Yang et al., 2024c), but these methods struggle to generalize to reasoning tasks that lack explicit answer labels (e.g., coding). To address these challenges, we frame the labeling of solutions to reasoning problems as the evaluation of these solutions against the _test cases_ of the problems. For tasks with explicit answer labels (e.g., mathematical reasoning and multiple-choice questions), we treat them as cases with a single test pair, where the input is empty and the output is the answer label. In contrast, for tasks without explicit answer labels (e.g., coding), we consider them as problems with multiple test case pairs. A solution to a reasoning problem is deemed correct if and only if it passes all associated test cases. Sample solutions generated by an LLM for the same problem can be validated using the test case suite, with correct and incorrect solutions used to construct preference pairs for DPO training or to train a reward model for PPO.

In this paper, we propose two types of pseudo feedback (i.e., pseudo test cases) for reasoning problems, both of which eliminate the need for human experts and can be applied at scale. First, we explore pseudo feedback from frontier LLMs, where we decompose the process of creating pseudo test cases into multiple steps to ensure that each step is manageable for frontier LLMs. Intuitively, if an LLM can pass test cases carefully curated by a stronger LLM, it is likely to provide a correct solution. Previous work (Wang et al., 2022; Snell et al., 2024) has demonstrated that self-consistency improves the reasoning performance of LLMs. Based on this insight, we introduce a second form of pseudo feedback, utilizing self-consistency from our policy LLM, which is of vital importance when frontier LLMs are no longer available. Unlike the method in Wang et al. (2022), which is limited to single-test-case problems, our self-consistency feedback is designed to generalize to problems with multiple test cases. We also find that these two types of pseudo feedback complement each other and can be applied iteratively in a pipeline.

We conducted experiments on both mathematical reasoning and coding using pseudo feedback for preference optimization, and we observe improvements across both tasks. Specifically, using Mathstral-7B as our base model, we improved our MATH results from 58.3 to 68.6, surpassing both NuminaMath-72B and GPT-4-Turbo-1106-preview. On GSM8K and College Math, our results increased from 85.6 to 90.3 and from 34.3 to 42.3, respectively. Building on Deepseek-coder-7B-v1.5, we achieved a score of 24.3 on LiveCodeBench (from 21.1), surpassing Claude-3-Haiku.

In a nutshell, our contribution in this paper can be summarized as follows:

- We formulate the labeling of solutions to reasoning problems as the process of evaluating them against the associated _test cases_, which facilitates preference optimization.
- We explore two types of pseudo feedback based on _test cases_: one created from frontier LLMs and the other derived from generalized self-consistency w.r.t. multiple test cases.
- Experiments on mathematical reasoning and coding demonstrate the superiority of these two types of feedback.
We also find they can be applied in a pipeline and iteratively to further improve reasoning performance.

2 RELATED WORK

LLMs exhibit remarkable capabilities when tuned on high-quality data annotated by experts or more advanced models (Achiam et al., 2023). However, these external annotations can be costly, posing a challenge to further enhancing model performance. Inspired by the natural evolution of human intelligence, researchers explore self-evolution methods (Tao et al., 2024) that enable models to autonomously acquire, refine, and learn from their own knowledge. Some works (Wang et al., 2023b; Ding et al., 2024) reformulate the training objective to directly model performance improvement. Others tune the model with its own responses: they first filter the model's outputs relying on ground truth labels (Zelikman et al., 2022; Wang et al., 2024b), expert annotations (Dubey et al., 2024), or more advanced models (Yang et al., 2024a; Kirchner et al., 2024), and then use the resulting refined examples for supervised or contrastive learning (Chen et al., 2024b; Yuan et al., 2024). However, they still depend on external supervision and cannot extend to larger unlabeled datasets. Recent work (Huang et al., 2022) constructs pseudo labels via self-consistency, but the improvement is limited, possibly due to model collapse (Shumailov et al., 2023; Alemohammad et al., 2023).

For related work on mathematical reasoning (Wei et al., 2022; He-Yueya et al., 2023; Chen et al., 2021b; 2022; Lightman et al., 2024; Wang et al., 2024a; Jiao et al., 2024; Lai et al., 2024; Cobbe et al., 2021b; Li et al., 2022a; Weng et al., 2022; Yu et al., 2023; Luo et al., 2023; Mitra et al., 2024; Yue et al., 2023) and code generation (Guo et al., 2024; DeepSeek-AI et al., 2024; Nijkamp et al., 2023b;a; Zelikman et al., 2022; Li et al., 2023a; 2022b; Wei et al., 2024; Le et al., 2022; Liu et al., 2023; Dou et al., 2024; Weyssow et al., 2024), we refer to Appendix F due to space limitations.

3 METHOD

In reasoning tasks such as mathematical reasoning and coding, the solution to a problem can be verified using a _standard_ answer or a set of test cases. This property makes it possible to automatically create preference pairs for an LLM solving reasoning tasks and to further improve the reasoning capabilities of the LLM with preference optimization. However, annotating reasoning problems with answers or test cases manually is expensive and time-consuming; as a result, this process is difficult to execute at large scale. Therefore, we propose PFPO (Pseudo-Feedback Preference Optimization), a method to automatically create _pseudo_ answers or test cases to facilitate preference learning. In this section, we first introduce preference optimization for reasoning in Section 3.1 (assuming gold answers or test cases are available). Then we go into the details of PFPO, which creates pseudo answers or test cases.

3.1 PREFERENCE OPTIMIZATION FOR REASONING

Suppose we have a set of reasoning problems $x$ with their test cases $T$: $\mathcal{D} = \{(x_i, T_i)\}_{i=1}^{|\mathcal{D}|}$, where $T = \{\langle i_1, o_1 \rangle, \langle i_2, o_2 \rangle, \ldots, \langle i_{|T|}, o_{|T|} \rangle\}$ and $\langle i_k, o_k \rangle$ is the input-output pair of a test case. Note that $T$ is a generalized representation of either a collection of test cases or the gold answer for problem $x$. If $x$ is a coding problem, $T$ is a set of test cases to verify the correctness of the corresponding solution of $x$. If $x$ is another kind of reasoning problem, such as a mathematical reasoning or multiple-choice science question, there is only one test case in $T = \{\langle i, o \rangle\}$ and the input $i$ is empty. For example, "compute 1 + 1" is a math question with $i = \emptyset$ as its test case input and $o = 2$ as its test case output.
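To make this generalized representation concrete, the following minimal Python sketch (our illustration, not code from the released repository; the class and field names are hypothetical) shows how a math question and a coding problem fit the same problem/test-case structure:

```python
# Illustrative sketch: a unified representation of reasoning problems, where
# T is either a single (empty-input, answer) pair for math questions or a
# list of (input, output) pairs for coding problems.
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class TestCase:
    input: Optional[str]   # None (i.e., empty) for math-style problems
    output: Any            # gold answer or expected program output

@dataclass
class Problem:
    prompt: str
    tests: List[TestCase] = field(default_factory=list)

# A math question carries a single test case with an empty input ...
math_problem = Problem(prompt="Compute 1 + 1.",
                       tests=[TestCase(input=None, output="2")])

# ... while a coding problem carries several input-output pairs.
code_problem = Problem(
    prompt="Write a function vector_add(a, b) returning elementwise sums.",
    tests=[TestCase(input="[1,2,3], [3,4,5]", output="[4,6,8]"),
           TestCase(input="[7,7,7], [1,1,1]", output="[8,8,8]")])
```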
Given a reasoning problem $x$ and its test cases $T$, we can evaluate the correctness of a solution $y$ produced by an LLM $\pi_\theta$ as follows:

$$r = \frac{1}{|T|} \sum_{k=1}^{|T|} \mathbb{1}\big(g(y, i_k) = o_k\big) \tag{1}$$

where $g(\cdot, \cdot)$ is a function that either executes the solution $y$ or extracts the answer from $y$. In the strictest form, $y$ is a correct solution to problem $x$ when $r = 1$; otherwise (i.e., $r < 1$), $y$ is an incorrect solution. Note that in mathematical reasoning there is only one test case, so $r \in \{0, 1\}$. Also note that, given a problem $x$ and its corresponding test cases $T$, verifying an arbitrary solution $y$ does not need any human labeling effort.

We can construct preference pairs for an LLM automatically as follows. First, we use an LLM $\pi_\theta$ to sample $N$ solutions $Y = \{y_1, y_2, \ldots, y_N\}$ for problem $x$ and obtain their verification results $R = \{r_1, r_2, \ldots, r_N\}$. To further improve $\pi_\theta$, we can use PPO (Schulman et al., 2017) to optimize with this feedback online or use DPO (Rafailov et al., 2023) to perform preference optimization offline. In this work, we employ DPO due to its simplicity. We then create preference pairs from $R$, where a valid pair $(y_w, y_l)$ requires $r_w = 1$ and $r_l < 1$:

$$P = \{(y_w, y_l) \mid r_w = 1,\; r_l < 1,\; r_w \in R,\; r_l \in R\} \tag{2}$$

Given these valid preference pairs, we optimize our LLM $\pi_\theta$ using the following objective:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}; \mathcal{D}) = -\mathbb{E}_{x \sim \mathcal{D},\, (y_w, y_l) \sim \pi_\theta(\cdot \mid x)} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right] \tag{3}$$

where $\pi_{\mathrm{ref}}$ is the reference model before the DPO stage (usually the model from the supervised fine-tuning stage) and $\beta$ is a hyper-parameter controlling the distance between $\pi_\theta$ and $\pi_{\mathrm{ref}}$.
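The pipeline of Equations (1)-(3) is compact enough to sketch end-to-end. Below is a minimal illustration (continuing the hypothetical `Problem`/`TestCase` sketch above), where `g` is a user-supplied execution/answer-extraction function and the log-probabilities are assumed to be summed over solution tokens; none of these names come from the released code.

```python
import torch.nn.functional as F

def verify(solution, tests, g):
    """Equation (1): the fraction of test cases that `solution` passes."""
    return sum(g(solution, t.input) == t.output for t in tests) / len(tests)

def strict_pairs(solutions, tests, g):
    """Equation (2): pair each fully correct solution with each imperfect one."""
    scored = [(y, verify(y, tests, g)) for y in solutions]
    return [(yw, yl) for yw, rw in scored for yl, rl in scored
            if rw == 1.0 and rl < 1.0]

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Equation (3) for one pair, given summed token log-probabilities
    (as tensors) of the chosen (w) / rejected (l) solutions under the
    policy and the frozen reference model."""
    margin = beta * (logp_w - ref_logp_w) - beta * (logp_l - ref_logp_l)
    return -F.logsigmoid(margin)
```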
Figure 1: The process of employing self-consistency (i.e., majority voting) to construct pseudo test cases for a code generation problem. The outputs with the highest frequency are treated as pseudo outputs for verifying generated programs.

3.2 PSEUDO FEEDBACK PREFERENCE OPTIMIZATION FOR REASONING

In this section, we introduce how to obtain pseudo feedback for reasoning problems and how to leverage it for preference learning.

3.2.1 PSEUDO FEEDBACK FROM FRONTIER LLMS

**Single-Test-Case Feedback** For single-test-case reasoning tasks such as mathematical reasoning, the input is _explicitly_ given in the problem itself; therefore, we do not need to create new test cases. Given a problem $x$, we can use a frontier LLM to generate a solution $\tilde{y}$ and extract its _pseudo_ answer $g(\tilde{y}, \cdot)$ as our pseudo feedback (see Equation 1). The solution $y \sim \pi_\theta(\cdot \mid x)$ from our model is likely to be correct if $g(y, \emptyset) = g(\tilde{y}, \emptyset)$:

$$r = \mathbb{1}\big(g(y, \emptyset) = g(\tilde{y}, \emptyset)\big) \tag{4}$$

Since the solutions of many reasoning datasets used for LLM supervised fine-tuning are created by frontier LLMs (Taori et al., 2023; Tang et al., 2024), we can re-use the SFT datasets and extract the pseudo feedback as a _free lunch_.

**Multi-Test-Case Feedback** For multi-test-case reasoning tasks such as coding, test cases are usually not available and manually labeling them is expensive. We choose to generate pseudo test cases by prompting frontier LLMs. There are three steps, as shown in Figure 1:

- Step 1: Given a problem $x$, generate input test cases $\tilde{I} = \{\tilde{i}_1, \tilde{i}_2, \ldots, \tilde{i}_K\}$ by prompting a _general_ [2] LLM.
- Step 2: Generate pseudo (code) solutions $Y' = \{y'_1, y'_2, \ldots, y'_{|Y'|}\}$ for problem $x$ using a frontier LLM.
- Step 3: Generate pseudo output test cases $O' = \{o'_1, o'_2, \ldots, o'_K\}$ by majority voting, executing all solutions in $Y'$ on each input test case in $\tilde{I}$.

The output test case $o'_k$ corresponding to the input $\tilde{i}_k$ is obtained as follows: after executing all pseudo solutions, we obtain a set of candidate pseudo outputs $O'_k = \{g(y'_1, \tilde{i}_k), g(y'_2, \tilde{i}_k), \ldots, g(y'_{|Y'|}, \tilde{i}_k)\}$. The output test case $o'_k$ is the most frequent element in $O'_k$:

$$o'_k = \arg\max_{o \in O'_k} f(o) \tag{5}$$

where $f(o) = |\{z \in O'_k \mid z = o\}|$ is a frequency function that gives the number of times an element $o$ appears in $O'_k$. The resulting set of pseudo test cases is $T' = \{\langle \tilde{i}_1, o'_1 \rangle, \langle \tilde{i}_2, o'_2 \rangle, \ldots, \langle \tilde{i}_K, o'_K \rangle\}$. At this point, we can verify an arbitrary solution $y$ to problem $x$ as in Equation 1.

2 Here we differentiate a _general_ LLM from a _frontier_ one, as generating only the inputs is much easier than solving the problem itself. Thus this process does not necessarily rely on SOTA LLMs.

Figure 2: The overall training workflow of our method. For simplicity, we only show a single step with outcome feedback for mathematical reasoning. Given an arbitrary prompt, we sample multiple solutions from the current policy model and construct preference pairs according to pseudo feedback from a frontier LLM or self-consistency. Finally, the constructed preference pairs are used to improve the policy model through DPO.

Note that we do not generate both input and output test cases in a single step by prompting LLMs, as Gu et al. (2024) have pointed out that generating the test case output for a given input is a challenging task that requires strong reasoning capabilities of LLMs.
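As a concrete reference for Step 3, the following sketch (our illustration; `execute` is a hypothetical stand-in for $g$ that runs a program on one input and raises on failure) derives pseudo outputs by majority vote:

```python
from collections import Counter

def pseudo_output_tests(pseudo_solutions, test_inputs, execute):
    """Step 3 / Equation (5): majority-vote a pseudo output for each input."""
    pseudo_tests = []
    for i_k in test_inputs:
        candidates = []
        for y in pseudo_solutions:          # Step-2 samples from the LLM
            try:
                # Compare outputs via repr so arbitrary values are countable.
                candidates.append(repr(execute(y, i_k)))
            except Exception:               # crashed/timed-out runs never vote
                pass
        if candidates:
            o_k, _ = Counter(candidates).most_common(1)[0]
            pseudo_tests.append((i_k, o_k))
    return pseudo_tests
```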
Also note that the single-test-case pseudo feedback described earlier is essentially a special case of the multi-test-case feedback in which the number of test cases equals one and the test case input is empty.

3.2.2 PSEUDO FEEDBACK FROM SELF-CONSISTENCY

The methods above leverage frontier LLMs to create pseudo feedback. We can alternatively create feedback from our own policy model $\pi_\theta$ to facilitate self-improvement without external guidance. We start from the method for multi-test-case reasoning tasks, since the single-test-case counterpart is a special case of it. Specifically, we re-use the input test cases generated in Step 1 (Section 3.2.1). The main difference starts from Step 2: in the second step, we use our policy model $\pi_\theta$ to sample pseudo solutions. The pseudo outputs in the third step are likewise based on executing all pseudo solutions from our policy model $\pi_\theta$. We can apply the same process to single-test-case reasoning tasks such as mathematical reasoning, which is equivalent to using the majority-voted answer from $\pi_\theta$ samples as pseudo feedback. We can again use Equation 1 to verify the correctness of solutions from $\pi_\theta$.

3.2.3 PREFERENCE LEARNING UNDER PSEUDO FEEDBACK

Given the problem $x$ and the pseudo feedback (i.e., test cases) $T'$ we have created, the preference optimization process is as follows. We first sample $N$ solutions $Y = \{y_1, y_2, \ldots, y_N\}$ to problem $x$ from our policy $\pi_\theta$. We then obtain the verification results $R = \{r_1, r_2, \ldots, r_N\}$ using Equation 1 (i.e., executing all solutions on all test cases). We then create preference pairs using a relaxed criterion compared with Equation 2:

$$P_o = \{(y_w, y_l) \mid r_w \geq \epsilon,\; r_w - r_l > \sigma,\; r_w \in R,\; r_l \in R\} \tag{6}$$

where $\epsilon$ and $\sigma$ are two hyper-parameters controlling the quality lower bound of positive samples and the margin, respectively. Because our pseudo test cases may contain errors, if a solution were required to pass all test cases, we might end up with no positive solutions for problem $x$. As a result, if a solution passes enough tests ($r_w \geq \epsilon$) and significantly more tests than another solution ($r_w - r_l > \sigma$), we treat $(y_w, y_l)$ as an eligible preference pair.

The preference pairs in $P_o$ above are based on outcome feedback from test cases. Recent studies (Wang et al., 2024a; Jiao et al., 2024) demonstrate that outcome feedback can be used to estimate the expected returns of intermediate reasoning steps, which can help the model produce better reasoning trajectories. Motivated by this, we also construct step-level process preference data. Following pDPO (Jiao et al., 2024), given a solution prefix $\hat{y}$, we employ the same policy model to sample $M$ completions following $\hat{y}$, and treat the averaged outcome feedback $\hat{r}$ of the completions as the expected return of $\hat{y}$:

$$\hat{r} = \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x, \hat{y})}\big[r(\hat{y} \circ y)\big] \tag{7}$$

where $\circ$ is the concatenation operator. After that, the process preference data can be defined as:

$$P_s = \{(\hat{y}_w, \hat{y}_l) \mid \hat{r}_w \geq \epsilon,\; \hat{r}_w - \hat{r}_l > \sigma,\; \hat{r}_w \in R,\; \hat{r}_l \in R\} \tag{8}$$

The final preference data we use is the union of the outcome and process preference datasets, $P = P_o \cup P_s$. We use the DPO objective (Equation 3) to optimize the policy model $\pi_\theta$.
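A minimal sketch of Equations (6)-(8) follows; the helper `sample_completions` is hypothetical and the default thresholds are arbitrary illustrations, not the paper's settings.

```python
def verify(solution, tests, g):
    # Equation (1), repeated from the earlier sketch for self-containment.
    return sum(g(solution, t.input) == t.output for t in tests) / len(tests)

def outcome_pairs(scored, eps=0.9, sigma=0.3):
    """Equation (6): `scored` is a list of (solution, r) tuples; keep pairs
    whose winner clears the quality floor and beats the loser by a margin."""
    return [(yw, yl) for yw, rw in scored for yl, rl in scored
            if rw >= eps and rw - rl > sigma]

def prefix_return(prompt, prefix, tests, g, sample_completions, M=4):
    """Equation (7): Monte-Carlo estimate of the expected return of a
    solution prefix, averaging Equation-(1) feedback over M rollouts."""
    completions = sample_completions(prompt, prefix, n=M)
    return sum(verify(prefix + c, tests, g) for c in completions) / M
```

Process-level pairs (Equation (8)) are then built by applying `outcome_pairs` to prefixes scored with `prefix_return` instead of full solutions.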
**Iterative Training** PFPO can be applied after the supervised fine-tuning (SFT) stage, and we can train the policy model iteratively with both the feedback from frontier LLMs and from self-consistency. Empirically, we find that applying LLM feedback first, followed by self-consistency feedback, achieves better results than the opposite order. Figure 2 illustrates the process of a single step.

4 EXPERIMENT

4.1 EXPERIMENTAL SETUP

**Prompt Collection** For mathematical reasoning, we followed Tang et al. (2024) to create 800K prompts with the help of GPT-4o. 500K of them are paired with one solution written by GPT-4o to construct pseudo feedback from a frontier LLM. This 500K subset is named MathScale-500K, and the other prompts are called MathScale-300K. We also filtered out around 790K prompts from NuminaMath [3] by removing those from which we cannot extract the predicted answers or that appear in the test set of MATH. For validation, we randomly sampled 2,000 question-solution pairs from the training set of MWPBench (Tang et al., 2024) after removing the questions from GSM8K (Cobbe et al., 2021a). For code generation, we collected problems from the training sets of APPs (Hendrycks et al., 2021a), Magicoder (Wei et al., 2024) and xCodeEval (Khan et al., 2024b), which contain 5,000, 9,000 and 6,400 questions, respectively. We remove all prompts for which test case inputs failed to be synthesized. We randomly sampled 500 questions from the training set of APPs for validation. For APPs and xCodeEval, we use GPT-4o to generate the test case inputs. For Magicoder, we employ Mistral-Large-Instruct-2407 [4] for test case input generation, because of the large size of the original Magicoder dataset. The detailed prompts can be found in Appendix C.1.

**Evaluation** For mathematical reasoning, performance is evaluated on the test sets of MATH (Hendrycks et al., 2021b), GSM8K (Cobbe et al., 2021a), and College Math (Tang et al., 2024) by Accuracy. For code generation, we evaluate the models on HumanEval (Chen et al., 2021a), MBPP (sanitized version) (Austin et al., 2021), APPs (Hendrycks et al., 2021a), and LiveCodeBench (Jain et al., 2024) by Pass@1. Unless otherwise specified, all evaluations are conducted using zero-shot prompting and greedy decoding. For simplicity, we only highlight the main results of our method. For more details, including the detailed source of prompts and the checkpoints from which the models are initialized, please refer to Appendix A. All hyper-parameters for the different experiments can be found in Appendix B.

4.2 EXPERIMENTAL RESULTS

4.2.1 MATHEMATICAL REASONING

We took two models for experiments: Llama-3.1-8B-base (Dubey et al., 2024) and Mathstral-7B-v0.1. We first conducted SFT on the 500K MathScale data with GPT-4o annotations, followed by our method with GPT-4o-generated labels as pseudo feedback. As shown in Table 1, the pseudo feedback from GPT-4o achieves consistent improvements on Llama-3.1-8B-base and Mathstral-7B. On MATH and College Math, PFPO-LLM yields average absolute improvements of 1.2 and 4.1 over Llama-3.1-8B w/ SFT and Mathstral-7B w/ SFT, respectively.

3 [https://huggingface.co/datasets/AI-MO/NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT)
4 [https://huggingface.co/mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407)

Table 1: Overall results on mathematical reasoning benchmarks.
PFPO-LLM refers to the training phase employing pseudo feedback from a frontier model (GPT-4o), while PFPO-Self indicates the phase using pseudo feedback constructed from self-generated solutions. NuminaMath-72B-CoT is built on Qwen2-72B by fine-tuning on NuminaMath. †: Results are from Chan et al. (2024). We employ an evaluation strategy similar to Yang et al. (2024b).

| Model | MATH | GSM8K | College Math |
|---|---|---|---|
| GPT-4o-2024-0512 | 78.7 | 95.8 | 46.7 |
| GPT-4-Turbo-2024-0409 | 72.8 | 94.8 | 44.2 |
| GPT-4-Turbo-1106-preview† | 64.3 | — | — |
| GPT-4-0613 | 55.0 | 93.5 | 39.0 |
| NuminaMath-72B-CoT (Beeching et al., 2024) | 67.1 | 91.7 | 39.8 |
| Llama-3.1-8B-Instruct (Dubey et al., 2024) | 47.5 | 84.5 | 27.5 |
| Llama-3.1-70B-Instruct (Dubey et al., 2024) | 68.1 | 95.5 | 41.8 |
| Llama-3.1-8B-base (Dubey et al., 2024) | 20.3 (4-shot) | 56.7 (8-shot) | 20.1 (4-shot) |
| w/ SFT | 53.8 | 85.1 | 34.6 |
| w/ PFPO-LLM Iter. 0 | 55.0 | 86.6 | 35.8 |
| w/ PFPO-Self Iter. 1 | 55.9 | 87.6 | 36.6 |
| w/ PFPO-Self Iter. 2 | 56.6 | 88.9 | 37.0 |
| w/ PFPO-Self Iter. 3 | 57.0 | 88.8 | 36.7 |
| w/ PFPO-Self Iter. 4 | 57.4 | 89.1 | 37.6 |
| w/ PFPO-Self Iter. 5 | 57.8 | 89.6 | 38.0 |
| Mathstral-7B-v0.1 (Mistral AI Team, 2024b) | 58.3 | 85.6 | 34.3 |
| w/ SFT | 61.4 | 87.3 | 38.4 |
| w/ PFPO-LLM Iter. 0 | 66.7 | 90.0 | 41.3 |
| w/ PFPO-Self Iter. 1 | 67.8 | 90.8 | 42.0 |
| w/ PFPO-Self Iter. 2 | 68.6 | 90.3 | 42.2 |
| w/ PFPO-Self Iter. 3 | 68.2 | 90.4 | 42.3 |

In the second phase, we started iterative pDPO training on unseen prompts with self-consistency-based pseudo feedback. For Llama-3.1-8B, we used the prompts from NuminaMath-790K to synthesize solutions and construct pseudo feedback via self-consistency. The prompts are divided into non-overlapping splits for iterative training. As shown in the table, by employing pseudo feedback, the models achieve continuous improvements across iterations. Specifically, our method achieves the best results at Iteration 5: compared with Llama-3.1-8B w/ PFPO-LLM Iter. 0, it achieves consistent improvements of 2.8 on MATH, 3.0 on GSM8K, and 2.2 on College Math, revealing the potential of iterative preference optimization via pseudo feedback from self-consistency.

For Mathstral-7B, we use the prompts in MathScale-300K for iterative training with pseudo feedback, since we did not observe improvements with NuminaMath. The prompts across different iterations are the same. As shown in the table, Mathstral-7B w/ PFPO-Self Iter. 2 achieves a 1.9 absolute improvement on MATH over the PFPO-LLM Iter. 0 model, and it outperforms stronger counterparts like NuminaMath-72B-CoT [5], Llama-3.1-70B-Instruct, and GPT-4-Turbo-1106-preview with only 7B parameters, demonstrating the effectiveness of pseudo feedback. Besides, we find that performance saturates after several iterations; we discuss possible reasons in Appendix A.2.

4.2.2 CODE GENERATION

For code generation, we selected Deepseek-coder-7B-v1.5-Instruct (Guo et al., 2024) for experiments. We first use GPT-4o to generate 11 program solutions for each question in the APPs training set, and use the ground-truth test cases to remove those with failed tests. The remaining solutions are kept to fine-tune the base model; the resulting model is referred to as _w/ SFT (APPs)_.

**Direct Preference Optimization via Test Case Execution Feedback** As described in Section 3.2.2, we construct the preference dataset by executing the generated programs over real or synthetic test cases. The evaluation results are shown in Tables 2 and 3.
5 [https://huggingface.co/AI-MO/NuminaMath-72B-CoT](https://huggingface.co/AI-MO/NuminaMath-72B-CoT) 7 Table 2: Overall results (Pass@1) on program generation benchmarks. PFPO-Self refers to our training from pseudo feedback method, and the content in the brackets afterwards indicates the source of prompts. Specifically, _M.C._ refers to the prompt set of Magicoder (Wei et al., 2024), and _xCode._ is the short for xCodeEval (Khan et al., 2024b). _Introductory_, _Interview_, and _Competition_ indicate the three difficulty levels of APPs. w/ (p)DPO (APPs) refers to that the execution feedback is synthesized based on the groundtruth test cases annotated in APPs training set. |Col1|APPs<br>Overall Introductory Interview Competition|HumanEval MBPP| |---|---|---| |GPT-4-0613<br>GPT-4o-2024-0513|35.1 61.8 34.4 10.6<br>34.0 56.6 32.2 16.7|87.8 82.1<br>93.3 87.2| |Llama-3.1-8B-Instruct (Dubey et al., 2024)<br>Llama-3.1-70B-Instruct (Dubey et al., 2024)<br>Codestral-22B-V0.1 (Mistral AI Team, 2024a)<br>CodeQwen1.5-7B-chat (Qwen Team, 2024)<br>Qwen2.5-Coder-7B-Instruct (Hui et al., 2024)<br>Deepseek-coder-33B-Instruct (Guo et al., 2024)|11.5 29.4 8.5 2.7<br>24.9 51.8 21.3 9.1<br>20.3 45.2 16.9 5.8<br>8.6 24.1 16.8 2.0<br>15.7 37.3 12.3 4.1<br>18.4 44.2 14.5 4.4|72.6 71.2<br>80.5 83.3<br>81.1 78.2<br>85.6 80.5<br>85.4 86.0<br>77.4 79.0| |Deepseek-coder-v1.5-Instruct<br>w/ SFT (APPs)|14.3 35.7 10.8 3.2<br>15.4 37.8 11.6 4.1|75.6 73.9<br>72.0 72.8| |w/ DPO (APPs)<br>w/ pDPO (APPs)|16.3 36.2 13.3 5.3<br>16.9 37.3 13.8 6.1|74.4 74.3<br>73.8 73.2| |w/ PFPO-LLM Iter. 0 (APPs)|17.9 38.3 14.7 7.1|73.8 75.9| |w/ PFPO-Self Iter. 0 (APPs)<br>w/ PFPO-Self Iter. 1 (APPs & M.C.)<br>w/ PFPO-Self Iter. 2 (APPs & M.C. & xCode.)|17.4 37.5 14.8 5.4<br>18.0 39.2 14.9 6.2<br>19.1 40.9 15.9 6.9|73.2 75.1<br>79.3 75.5<br>73.8 75.1| Table 3: Overall results on LiveCodeBench. We follow the recommended setting by sampling 10 solutions for each problem with temperature as 0.2, and estimating the Pass@1 results. The cutoff date of the test questions is from **2023-09-01** to **2024-09-01** . All results except those of our models [are referenced from the official leaderboard ( https://livecodebench.github.io/).](https://livecodebench.github.io/leaderboard.html) |Col1|Overall Easy Medium Hard| |---|---| |Claude-3.5-Sonnet<br>Claude-3-Sonnet<br>Claude-3-Haiku<br>GPT-3.5-Turbo-0125|51.3 87.2 45.3 11.0<br>26.9 67.2 7.3 1.4<br>24.0 61.3 5.5 0.9<br>24.0 55.0 11.6 0.3| |Llama-3.1-70B-Instruct (Dubey et al., 2024)<br>Llama-3-70B-Instruct (Dubey et al., 2024)<br>CodeQwen1.5-7B-Chat (Qwen Team, 2024)<br>DeepSeekCoder-V2-236B (DeepSeek-AI et al., 2024)<br>Deepseek-Coder-33B-Instruct (Guo et al., 2024)|31.8 67.9 17.3 4.1<br>27.4 59.4 15.6 1.3<br>16.8 35.9 10.9 0.3<br>41.9 79.9 32.0 4.9<br>23.4 56.1 8.6 0.9| |Deepseek-coder-7B-v1.5-Insturct<br>w/ SFT (APPs)|21.1 51.3 7.4 0.2<br>22.9 53.0 10.6 0.2| |w/ DPO (APPs)<br>w/ pDPO (APPs)|22.9 53.7 9.4 1.0<br>22.9 55.0 8.1 1.3| |w/ PFPO-LLM Iter. 0 (APPs)|24.0 56.8 9.3 1.4| |w/ PFPO-Self Iter. 0 (APPs)<br>w/ PFPO-Self Iter. 1 (APPs & M.C.)<br>w/ PFPO-Self Iter. 2 (APPs & M.C. & xCode)|23.4 54.2 10.3 0.7<br>23.7 55.8 9.5 1.1<br>24.3 56.8 9.8 1.6| First, we aim to discuss the effectiveness of fully synthetic test Table 4: The averaged number cases, a topic that has not yet been extensively explored. We use of test cases of each problem in _w/ DPO_ and _w/ pDPO_ to denote methods utilizing ground truth the training set of APPs. test cases to gather execution feedback, while PFPO-Self Iter. 
Idea Generation Category:
2 (Direct Enhancement)
jkUp3lybXf
# LEARNING NEURAL NETWORKS WITH DISTRIBUTION SHIFT: EFFICIENTLY CERTIFIABLE GUARANTEES

**Gautam Chandrasekaran, Adam R. Klivans, Lin Lin Lee, Konstantinos Stavropoulos**
The University of Texas at Austin
{gautamc,klivans,kstavrop}@cs.utexas.edu, llee3@utexas.edu

ABSTRACT

We give the first provably efficient algorithms for learning neural networks with respect to distribution shift. We work in the Testable Learning with Distribution Shift framework (TDS learning) of Klivans et al. (2024a), where the learner receives labeled examples from a training distribution and unlabeled examples from a test distribution, and must either output a hypothesis with low test error or reject if distribution shift is detected. No assumptions are made on the test distribution. All prior work in TDS learning focuses on classification, while here we must handle the setting of nonconvex regression. Our results apply to real-valued networks with arbitrary Lipschitz activations and work whenever the training distribution has strictly sub-exponential tails. For training distributions that are bounded and hypercontractive, we give a fully polynomial-time algorithm for TDS learning one-hidden-layer networks with sigmoid activations. We achieve this by importing classical kernel methods into the TDS framework using data-dependent feature maps and a type of kernel matrix that couples samples from both train and test distributions.

1 INTRODUCTION

Understanding when a model will generalize from a known training distribution to an unknown test distribution is a critical challenge in trustworthy machine learning and domain adaptation. Traditional approaches to this problem prove generalization bounds in terms of various notions of distance between train and test distributions (Ben-David et al., 2006; 2010; Mansour et al., 2009) but do not provide efficient algorithms. Recent work due to Klivans et al. (2024a) departs from this paradigm and defines the model of Testable Learning with Distribution Shift (TDS learning), where a learner may reject altogether if significant distribution shift is detected. When the learner accepts, however, it outputs a classifier and a proof that the classifier has nearly optimal test error.

A sequence of works has given the first set of efficient algorithms in the TDS learning model for well-studied function classes where no assumptions are taken on the test distribution (Klivans et al., 2024a;b; Chandrasekaran et al., 2024; Goel et al., 2024). These results, however, hold for classification and therefore do not apply to (nonconvex) regression problems, and in particular to a long line of work giving provably efficient algorithms for learning simple classes of neural networks under natural distributional assumptions on the training marginal (Goel & Klivans, 2019; Diakonikolas et al., 2020a;c; 2022; Chen et al., 2022b; 2023; Wang et al., 2023; Gollakota et al., 2024a; Diakonikolas & Kane, 2024).

The main contribution of this work is the first set of efficient TDS learning algorithms for broad classes of (nonconvex) regression problems. Our results apply to neural networks with arbitrary Lipschitz activations of any constant depth. As one example, we obtain a fully polynomial-time algorithm for learning one-hidden-layer neural networks with sigmoid activations with respect to any bounded and hypercontractive training distribution.
For bounded training distributions, the running times of our algorithms match the best known running times for ordinary PAC or agnostic learning (without distribution shift). We emphasize that, unlike all prior work in domain adaptation, we make no assumptions on the test distribution.

**Regression Setting.** We assume access to labeled examples from the training distribution and unlabeled examples from the marginal of the test distribution. We consider the squared loss $L_D(h) = \sqrt{\mathbb{E}_{(\mathbf{x}, y) \sim D}[(y - h(\mathbf{x}))^2]}$. The error benchmark is analogous to the benchmark for TDS learning in classification (Klivans et al., 2024a) and depends on two quantities: the optimum training error achievable by a classifier in the learnt class, $\mathrm{opt} = \min_{f \in \mathcal{F}} L_D(f)$, and the best joint error achievable by a single classifier on both the training and test distributions, $\lambda = \min_{f' \in \mathcal{F}} [L_D(f') + L_{D'}(f')]$. Achieving an error of $\mathrm{opt} + \lambda$ is the standard goal in domain adaptation (Ben-David et al., 2006; Blitzer et al., 2007; Mansour et al., 2009). We now formally define the TDS learning framework for regression:

**Definition 1.1** (Testable Regression with Distribution Shift)**.** For $\epsilon, \delta \in (0, 1)$ and a function class $\mathcal{F} \subseteq \{\mathbb{R}^d \to \mathbb{R}\}$, the learner receives i.i.d. labeled examples from some unknown training distribution $D$ over $\mathbb{R}^d \times \mathbb{R}$ and i.i.d. unlabeled examples from the marginal $D'_{\mathbf{x}}$ of another unknown test distribution $D'$ over $\mathbb{R}^d \times \mathbb{R}$. The learner either rejects, or it accepts and outputs a hypothesis $h : \mathbb{R}^d \to \mathbb{R}$ such that the following are true.

1. (Soundness) With probability at least $1 - \delta$, if the algorithm accepts, then the output $h$ satisfies $L_{D'}(h) \leq \min_{f \in \mathcal{F}} L_D(f) + \min_{f' \in \mathcal{F}} [L_D(f') + L_{D'}(f')] + \epsilon$.
2. (Completeness) If $D_{\mathbf{x}} = D'_{\mathbf{x}}$, then the algorithm accepts with probability at least $1 - \delta$.

1.1 TECHNICAL STATEMENT OF RESULTS

Our results hold for classes of Lipschitz neural networks. In particular, we consider functions $f$ of the following form. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be an activation function. Let $\mathbf{W} = (W^{(1)}, \ldots, W^{(t)})$ with $W^{(i)} \in \mathbb{R}^{s_i \times s_{i-1}}$ be the tuple of weight matrices. Here, $s_0 = d$ is the input dimension and $s_t = 1$. Define recursively the function $f_i : \mathbb{R}^d \to \mathbb{R}^{s_i}$ as $f_i(\mathbf{x}) = W^{(i)} \cdot \sigma(f_{i-1}(\mathbf{x}))$ with $f_1(\mathbf{x}) = W^{(1)} \cdot \mathbf{x}$. The function $f : \mathbb{R}^d \to \mathbb{R}$ computed by the neural network $(\mathbf{W}, \sigma)$ is defined as $f(\mathbf{x}) := f_t(\mathbf{x})$. The depth of this network is $t$. We now present our main results on TDS learning for neural networks.

| Function Class | Runtime (Bounded) | Runtime (Subgaussian) |
|---|---|---|
| One hidden-layer Sigmoid Net | $\mathrm{poly}(d, M, 1/\epsilon)$ | $d^{\mathrm{poly}(k \log(M/\epsilon))}$ |
| Single ReLU | $\mathrm{poly}(d, M) \cdot 2^{O(1/\epsilon)}$ | $d^{\mathrm{poly}(\log(M)/\epsilon)}$ |
| Sigmoid Nets | $\mathrm{poly}(d, M) \cdot 2^{O((\log(1/\epsilon))^{t-1})}$ | $d^{\mathrm{poly}(k \log(M) (\log(1/\epsilon))^{t-1})}$ |
| 1-Lipschitz Nets | $\mathrm{poly}(d, M) \cdot 2^{\tilde{O}(k\sqrt{k}\, 2^{t-1}/\epsilon)}$ | $d^{\mathrm{poly}(k 2^{t-1} \log(M)/\epsilon)}$ |

Table 1: In the above table, $k$ denotes the number of neurons in the first hidden layer and $M$ denotes a bound on the labels of the train and test distributions. One hidden-layer Sigmoid Net refers to depth-2 neural networks with sigmoid activation.
The bounded distributions considered in the above table have support on the unit ball. We assume that all relevant parameters of the neural network are bounded by constants. For more detailed statements and proofs, see Definition A.12, (1) Corollaries B.4 and B.6 and Theorems B.3 and B.5 for the bounded case, and (2) Theorems C.9 and C.10 for the subgaussian case.

From the above table, we highlight that in the cases of bounded distributions with (1) one-hidden-layer Sigmoid Nets, and (2) a single ReLU with $\epsilon < 1/\log d$, we obtain TDS algorithms that run in polynomial time in all parameters. Moreover, for the last row, regarding Lipschitz Nets, each neuron is allowed to have a different and unknown Lipschitz activation. Therefore, in particular, our results capture the class of single-index models (see, e.g., Kakade et al. (2011); Gollakota et al. (2024a)). In the results of Table 1, we assume bounded labels for both the training and test distributions. This assumption can be relaxed to a bound on any moment whose degree is strictly higher than 2 (see Corollary D.2). In fact, such an assumption is necessary, as we show in Proposition D.1.

1.2 OUR TECHNIQUES

**TDS Learning via Kernel Methods.** The major technical contribution of this work is devoted to importing classical kernel methods into the TDS learning framework. A first attempt at testing distribution shift with respect to a fixed feature map would be to form two corresponding covariance matrices of the expanded features, one from samples drawn from the training distribution and the other from samples drawn from the test distribution, and test whether these two matrices have similar eigendecompositions. This approach only yields efficient algorithms for linear kernels, however, since here we are interested in spectral properties of covariance matrices in the feature space corresponding to low-degree polynomials, whose dimension is too large.

Instead, we form a new data-dependent and concise reference feature map $\phi$ that depends on examples from both $D_{\mathbf{x}}$ and $D'_{\mathbf{x}}$. We show that this feature map approximately represents the ground truth, i.e., some function with both low training and test error (this is due to the representer theorem, see Proposition 3.7). To certify that error bounds transfer from $D_{\mathbf{x}}$ to $D'_{\mathbf{x}}$, we require _relative-error_ closeness between the covariance matrix $\Phi' = \mathbb{E}_{\mathbf{x} \sim D'_{\mathbf{x}}}[\phi(\mathbf{x}) \phi(\mathbf{x})^\top]$ of the feature expansion $\phi$ over the test marginal and the corresponding matrix $\Phi = \mathbb{E}_{\mathbf{x} \sim D_{\mathbf{x}}}[\phi(\mathbf{x}) \phi(\mathbf{x})^\top]$ over the training marginal. We draw fresh sets of verification examples and show how the kernel trick can be used to efficiently achieve these approximations even though $\phi$ is a nonstandard feature map. For more technical details, see Section 3.1.

By instantiating the above results using a type of polynomial kernel, we can reduce the problem of TDS learning neural networks to the problem of obtaining an appropriate polynomial approximator. Our final _training_ algorithm (as opposed to the testing phase) will essentially be kernelized polynomial regression.
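For intuition, the following toy sketch (ours, not the paper's algorithm) implements the naive first-attempt test above for an explicit degree-2 feature map: it compares the empirical second-moment matrices of the expanded features and rejects when the test matrix is large in a direction where the train matrix is small. The paper avoids the exponential feature dimension via the data-dependent map $\phi$ and the kernel trick; the threshold `tol` here is an arbitrary illustrative choice.

```python
import numpy as np

def poly_features(X, degree=2):
    """Explicit monomial expansion (1, x_i, x_i * x_j) up to degree 2."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    if degree >= 2:
        cols += [X[:, i] * X[:, j] for i in range(X.shape[1])
                 for j in range(i, X.shape[1])]
    return np.stack(cols, axis=1)

def shift_test(X_train, X_test, tol=0.5):
    P, Q = poly_features(X_train), poly_features(X_test)
    Phi, PhiP = P.T @ P / len(P), Q.T @ Q / len(Q)
    # Relative-error closeness: eigenvalues of Phi^{-1/2} Phi' Phi^{-1/2}
    # should all be close to 1 when the two marginals agree.
    w, V = np.linalg.eigh(Phi + 1e-8 * np.eye(Phi.shape[0]))
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    gap = np.max(np.abs(np.linalg.eigvalsh(inv_sqrt @ PhiP @ inv_sqrt) - 1.0))
    return "accept" if gap <= tol else "reject"

rng = np.random.default_rng(0)
same = rng.normal(size=(5000, 3))
print(shift_test(same, rng.normal(size=(5000, 3))))   # typically "accept"
print(shift_test(same, 2.0 * rng.normal(size=(5000, 3))))  # typically "reject"
```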
**TDS Learning and Uniform Approximation.** Prior work in TDS learning has established connections between polynomial approximation theory and efficient algorithms in the TDS setting. In particular, the existence of low-degree sandwiching approximators for a concept class is known to imply dimension-efficient TDS learning algorithms for binary classification. The notion of sandwiching approximators for a function $f$ refers to a pair of low-degree polynomials $p_{\mathrm{up}}, p_{\mathrm{down}}$ with two main properties: (1) $p_{\mathrm{down}} \leq f \leq p_{\mathrm{up}}$ everywhere and (2) the expected absolute distance between $p_{\mathrm{up}}$ and $p_{\mathrm{down}}$ over some reference distribution is small. The first property is of particular importance in the TDS setting, since it holds everywhere and, therefore, it holds for any test distribution unconditionally.

Here we make the simple observation that the incomparable notion of uniform approximation suffices for TDS learning. A uniform approximator is a polynomial $p$ that approximates a function $f$ pointwise, meaning that $|p - f|$ is small at every point within a ball around the origin (there is no known direct relationship between sandwiching and uniform approximators). In our setting, uniform approximation is more convenient, due to the existence of powerful tools from polynomial approximation theory regarding Lipschitz and analytic functions. Contrary to the sandwiching property, the uniform approximation property cannot hold everywhere if the approximated function class contains high-(or infinite-)degree functions. When the training distribution has strictly sub-exponential tails, however, the expected error of approximation outside the radius of approximation is negligible. Importantly, this property can be certified for the test distribution by using a moment-matching tester. See also Section 4.
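As a quick numerical illustration of uniform approximation for a Lipschitz function, the sketch below (our own; the degrees and grid are arbitrary choices) Chebyshev-interpolates the ReLU on $[-1, 1]$ and reports the sup-norm error, which shrinks as the degree grows:

```python
import numpy as np
from numpy.polynomial import Chebyshev

relu = lambda x: np.maximum(x, 0.0)
grid = np.linspace(-1.0, 1.0, 10001)      # dense grid to estimate the sup norm
for deg in (4, 16, 64):
    p = Chebyshev.interpolate(relu, deg, domain=[-1.0, 1.0])
    sup_err = np.max(np.abs(relu(grid) - p(grid)))
    print(f"degree {deg:3d}: sup error on [-1, 1] ~ {sup_err:.4f}")
```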
1.3 RELATED WORK

**Learning with Distribution Shift.** The field of domain adaptation has been studying the distribution shift problem for almost two decades (Ben-David et al., 2006; Blitzer et al., 2007; Ben-David et al., 2010; Mansour et al., 2009; David et al., 2010; Mousavi Kalan et al., 2020; Redko et al., 2020; Kalavasis et al., 2024; Hanneke & Kpotufe, 2019; 2024; Awasthi et al., 2024), providing useful insights regarding the information-theoretic (im)possibilities for learning with distribution shift. The first efficient end-to-end algorithms for non-trivial concept classes with distribution shift were given for TDS learning in Klivans et al. (2024a;b); Chandrasekaran et al. (2024) and for PQ learning, originally defined by Goldwasser et al. (2020), in Goel et al. (2024). These works focus on binary classification for classes like halfspaces, halfspace intersections, and geometric concepts. In the regression setting, we need to handle unbounded loss functions, but we are also able to use Lipschitz properties of real-valued networks to obtain results even for deeper architectures. For the special case of linear regression, efficient algorithms for learning with distribution shift are known to exist (see, e.g., Lei et al. (2021)), but our results capture much broader classes. Another distinction between the existing works in TDS learning and our work is that our results require significantly milder assumptions on the training distribution. In particular, while all prior works on TDS learning require both concentration and anti-concentration for the training marginal (Klivans et al., 2024a;b; Chandrasekaran et al., 2024), we only assume strictly subexponential concentration in every direction. This is possible because the function classes we consider are Lipschitz, which is not the case for binary classification.

**Testable Learning.** More broadly, TDS learning is related to the notion of testable learning (Rubinfeld & Vasilyan, 2023; Gollakota et al., 2023; 2024c; Diakonikolas et al., 2023; Gollakota et al., 2024b; Diakonikolas et al., 2024; Slot et al., 2024), originally defined by Rubinfeld & Vasilyan (2023) for standard agnostic learning, aiming to certify optimal performance for learning algorithms without relying directly on any distributional assumptions. The main difference between testable agnostic learning and TDS learning is that in TDS learning we allow for distribution shift, while in testable agnostic learning the training and test distributions are the same. Because of this, TDS learning remains challenging even in the absence of label noise, in which case testable learning becomes trivial (Klivans et al., 2024a).

**Efficient Learning of Neural Networks.** Many works have focused on providing upper and lower bounds on the computational complexity of learning neural networks in the standard (distribution-shift-free) setting (Goel et al., 2017; Goel & Klivans, 2019; Goel et al., 2020a;b; Diakonikolas et al., 2020a;b;c; 2022; Chen et al., 2022a;b; 2023; Wang et al., 2023; Gollakota et al., 2024a; Diakonikolas & Kane, 2024; Li et al., 2020; Gao et al., 2019; Zhang et al., 2019; Vempala & Wilmes, 2019; Allen-Zhu et al., 2019; Bakshi et al., 2019; Manurangsi & Reichman, 2018; Ge et al., 2019; 2018; Du et al., 2018; Goel et al., 2018; Tian, 2017; Li & Yuan, 2017; Brutzkus & Globerson, 2017; Zhong et al., 2017; Zhang et al., 2016b; Janzamin et al., 2015). The majority of the upper bounds either require noiseless labels and shallow architectures or work only under Gaussian training marginals. Our results not only hold in the presence of distribution shift, but also capture deeper architectures, under any strictly subexponential training marginal, and allow adversarial label noise.

The upper bounds that are closest to our work are those given by Goel et al. (2017). They consider ReLU as well as sigmoid networks, allow for adversarial label noise, and assume that the training marginal is bounded but otherwise arbitrary. Our results in Section 3 extend all of the results in Goel et al. (2017) to the TDS setting, by assuming additionally that the training distribution is hypercontractive (see Definition 3.9). This additional assumption is important to ensure that our tests will pass when there is no distribution shift. For a more thorough technical comparison with Goel et al. (2017), see Section 3. In Section 4, we provide upper bounds for TDS learning of Lipschitz networks even when the training marginal is an arbitrary strictly subexponential distribution. In particular, our results imply new bounds for standard agnostic learning of single ReLU neurons, where we achieve runtime $d^{\mathrm{poly}(1/\epsilon)}$. The only known upper bounds work under the Gaussian marginal (Diakonikolas et al., 2020a), achieving similar runtime. In fact, in the statistical query framework (Kearns, 1998), it is known that $d^{\mathrm{poly}(1/\epsilon)}$ runtime is necessary for agnostically learning the ReLU, even under the Gaussian distribution (Diakonikolas et al., 2020b; Goel et al., 2020b).

2 PRELIMINARIES

We use standard vector and matrix notation. We denote with $\mathbb{R}, \mathbb{N}$ the sets of real and natural numbers, respectively. We denote with $D$ labeled distributions over $\mathbb{R}^d \times \mathbb{R}$ and with $D_{\mathbf{x}}$ the marginal of $D$ on the features in $\mathbb{R}^d$.
For a set $S$ of points in $\mathbb{R}^d$, we define the empirical probabilities (resp. expectations) as $\mathbf{Pr}_{\mathbf{x} \sim S}[E(\mathbf{x})] = \frac{1}{|S|} \sum_{\mathbf{x} \in S} \mathbb{1}\{E(\mathbf{x})\}$ (resp. $\mathbb{E}_{\mathbf{x} \sim S}[f(\mathbf{x})] = \frac{1}{|S|} \sum_{\mathbf{x} \in S} f(\mathbf{x})$). We denote with $\bar{S}$ the labeled version of $S$, and we define the clipping function $\mathrm{cl}_M : \mathbb{R} \to [-M, M]$ that maps a number $t \in \mathbb{R}$ either to itself if $t \in [-M, M]$, or to $M \cdot \mathrm{sign}(t)$ otherwise.

**Loss function.** Throughout this work, we denote with $L_D(h)$ the squared loss of a hypothesis $h : \mathbb{R}^d \to \mathbb{R}$ with respect to a labeled distribution $D$, i.e., $L_D(h) = \sqrt{\mathbb{E}_{(\mathbf{x}, y) \sim D}[(y - h(\mathbf{x}))^2]}$. Moreover, for any function $f : \mathbb{R}^d \to \mathbb{R}$, we denote with $\|f\|_D$ the quantity $\|f\|_D = \sqrt{\mathbb{E}_{\mathbf{x} \sim D_{\mathbf{x}}}[(f(\mathbf{x}))^2]}$. For a set of labeled examples $\bar{S}$, we denote with $L_{\bar{S}}(h)$ the empirical loss on $\bar{S}$, i.e., $L_{\bar{S}}(h) = \sqrt{\frac{1}{|\bar{S}|} \sum_{(\mathbf{x}, y) \in \bar{S}} (y - h(\mathbf{x}))^2}$, and similarly for $\|f\|_S$.

**Distributional Assumptions.** In order to obtain efficient algorithms, we will either assume that the training marginal $D_{\mathbf{x}}$ is bounded and hypercontractive (Section 3) or that it has strictly subexponential tails in every direction (Section 4). We make no assumptions on the test marginal $D'_{\mathbf{x}}$. Regarding the labels, we assume some mild bound on the moments of the training and test labels, e.g., (a) that $\mathbb{E}_{y \sim D_y}[y^4], \mathbb{E}_{y \sim D'_y}[y^4] \leq M$, or (b) that $y \in [-M, M]$ a.s. for both $D$ and $D'$. Although, ideally, we want to avoid any assumptions on the test distribution, as we show in Proposition D.1, a bound on some constant-degree moment of the test labels is necessary.

3 BOUNDED TRAINING MARGINALS

We begin with the scenario where the training distribution is known to be bounded. In this case, it is known that one-hidden-layer sigmoid networks can be agnostically learned (in the classical sense, without distribution shift) in fully polynomial time, and single ReLU neurons can be learned up to error $O(\frac{1}{\log(d)})$ in polynomial time (Goel et al., 2017). These results are based on a kernel-based approach, combined with results from polynomial approximation theory. While polynomial approximations can reduce the nonconvex agnostic learning problem to a convex one through polynomial feature expansions, the kernel trick enables further pruning of the search space, which is important for obtaining polynomial-time algorithms.
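To recall why the kernel trick matters computationally, here is a compact, self-contained sketch (ours, not the paper's procedure): kernel ridge regression with a polynomial kernel fits a degree-4 polynomial over $\mathbb{R}^{10}$ by solving only an $m \times m$ linear system in the number of samples $m$, never materializing the high-dimensional feature expansion. The kernel, degree, and regularization are arbitrary illustrative choices.

```python
import numpy as np

def poly_kernel(X, Z, degree=4):
    return (1.0 + X @ Z.T) ** degree        # K(x, z) = (1 + <x, z>)^degree

def kernel_ridge_fit(X, y, lam=1e-2, degree=4):
    K = poly_kernel(X, X, degree)            # m x m Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    # Representer theorem: the fitted function is a combination of kernel
    # evaluations at the m training points, whatever the dimension of H is.
    return lambda Z: poly_kernel(Z, X, degree) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) / np.sqrt(10)          # roughly unit-norm inputs
y = np.maximum(X @ np.ones(10) / np.sqrt(10), 0.0)    # a single ReLU target
predict = kernel_ridge_fit(X, y)
print("train MSE:", np.mean((predict(X) - y) ** 2))
```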
Our work demonstrates another useful implication of the kernel trick: it leads to efficient algorithms for testing distribution shift. We will require the following standard notions:

**Definition 3.1** (Kernels (Mercer, 1909)) **.** A function $K : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ is a kernel. If for any set of $m$ points $\mathbf{x}_1, \dots, \mathbf{x}_m$ in $\mathbb{R}^d$, the matrix $(K(\mathbf{x}_i, \mathbf{x}_j))_{i,j\in[m]}$ is positive semidefinite, we say that the kernel $K$ is positive definite. The kernel $K$ is symmetric if for all $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^d$, $K(\mathbf{x}, \mathbf{x}') = K(\mathbf{x}', \mathbf{x})$.

Any positive definite and symmetric kernel is associated with some Hilbert space $\mathcal{H}$ and some feature map from $\mathbb{R}^d$ to $\mathcal{H}$.

**Fact 3.2** (Reproducing Kernel Hilbert Space) **.** _For any positive definite and symmetric (PDS) kernel $K$, there is a Hilbert space $\mathcal{H}$, equipped with the inner product $\langle\cdot,\cdot\rangle : \mathcal{H}\times\mathcal{H}\to\mathbb{R}$, and a function $\psi : \mathbb{R}^d \to \mathcal{H}$ such that $K(\mathbf{x}, \mathbf{x}') = \langle\psi(\mathbf{x}), \psi(\mathbf{x}')\rangle$ for all $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^d$. We call $\mathcal{H}$ the reproducing kernel Hilbert space (RKHS) for $K$ and $\psi$ the feature map for $K$._

There are three main properties of the kernel method. First, although the associated feature map $\psi$ may correspond to a vector in an infinite-dimensional space, the kernel $K(\mathbf{x}, \mathbf{x}')$ may still be efficiently evaluated, due to its analytic expression in terms of $\mathbf{x}, \mathbf{x}'$. Second, the function class $F_K = \{\mathbf{x} \mapsto \langle\mathbf{v}, \psi(\mathbf{x})\rangle : \mathbf{v} \in \mathcal{H}, \langle\mathbf{v}, \mathbf{v}\rangle \le B\}$ has Rademacher complexity independent of the dimension of $\mathcal{H}$, as long as the maximum value of $K(\mathbf{x}, \mathbf{x})$ for $\mathbf{x}$ in the domain is bounded (Thm. 6.12 in Mohri et al. (2018)). Third, the time complexity of finding the function in $F_K$ that best fits a dataset is polynomial in the size of the dataset, due to the representer theorem (Thm. 6.11 in Mohri et al. (2018)). Taken together, these properties constitute the basis of the kernel method, implying learners with runtime independent of the effective dimension of the learning problem.

In order to apply the kernel method to learn some function class $F$, it suffices to show that the class $F$ can be represented sufficiently well by the class $F_K$. We give the following definition.

**Definition 3.3** (Approximate Representation) **.** Let $F$ be a function class over $\mathbb{R}^d$ and $K : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ a PDS kernel, where $\mathcal{H}$ is the corresponding RKHS and $\psi$ the feature map for $K$. We say that $F$ can be $(\epsilon, B)$-approximately represented within radius $R$ with respect to $K$ if for any $f \in F$, there is $\mathbf{v} \in \mathcal{H}$ with $\langle\mathbf{v}, \mathbf{v}\rangle \le B$ such that $|f(\mathbf{x}) - \langle\mathbf{v}, \psi(\mathbf{x})\rangle| \le \epsilon$ for all $\mathbf{x} \in \mathbb{R}^d$ with $\|\mathbf{x}\|_2 \le R$.

For the purposes of TDS learning, we will also require the training marginal to be hypercontractive with respect to the kernel at hand. This is important to ensure that our test will accept whenever there is no distribution shift. More formally, we require the following.

Idea Generation Category:
0Conceptual Integration
ed7zI29lRF
# EVENT-DRIVEN ONLINE VERTICAL FEDERATED LEARNING

**Ganyu Wang**¹ **Boyu Wang**¹,² **Bin Gu**³∗ **Charles Ling**¹,²∗

¹ Western University ² Vector Institute ³ Jilin University

gwang382@uwo.ca bwang@csd.uwo.ca jsgubin@gmail.com charles.ling@uwo.ca

ABSTRACT

Online learning is more adaptable to real-world scenarios in Vertical Federated Learning (VFL) than offline learning. However, integrating online learning into VFL presents challenges due to the unique nature of VFL, where clients possess non-intersecting feature sets for the same sample. In real-world scenarios, the clients may not receive the data streams for the disjoint features of the same entity synchronously. Instead, the data are typically generated by an _event_ relevant to only a subset of clients. We are the first to identify these challenges in online VFL, which have been overlooked by previous research. To address these challenges, we propose an event-driven online VFL framework. In this framework, only a subset of clients is activated during each event, while the remaining clients passively collaborate in the learning process. Furthermore, we incorporate _dynamic local regret (DLR)_ into VFL to address the challenges posed by online learning problems with non-convex models within a non-stationary environment. We conduct a comprehensive regret analysis of our proposed framework, specifically examining the DLR under non-convex conditions with event-driven online VFL. Extensive experiments demonstrate that our proposed framework is more stable than the existing online VFL framework under non-stationary data conditions, while also significantly reducing communication and computation costs.

1 INTRODUCTION

Vertical Federated Learning (VFL) (Vepakomma et al., 2018; Yang et al., 2019; Liu et al., 2019; Chen et al., 2020; Gu et al., 2020; Zhang et al., 2021b;a; Wang et al., 2023; Qi et al., 2022; Wang et al., 2024) is a privacy-preserving machine learning paradigm wherein multiple entities collaborate to construct a model without sharing their raw data. In VFL, each participant possesses non-intersecting features for the same set of samples, which is significantly different from Horizontal Federated Learning (HFL) (McMahan et al., 2017; Karimireddy et al., 2020; Li et al., 2020; 2021; Marfoq et al., 2022; Mishchenko et al., 2019), where each client possesses non-overlapping samples with the same feature set.¹

Current research on VFL primarily focuses on the offline scenario, characterized by a pre-established dataset. However, the limitations of offline learning become obvious when building real-world applications of VFL. First, offline learning is unsuitable for scenarios where the dataset undergoes continual updates, which is typical in real-world applications. For example, in VFL scenarios involving companies as clients (Wei et al., 2022; Vepakomma et al., 2018), new data is constantly generated as new customers engage with the companies, or as existing customers update their records through ongoing activities. Similarly, in VFL scenarios involving edge devices (Wang & Xu, 2023; Liu et al., 2022), the sensors, acting as the clients, continuously receive data streams from the environment rather than maintaining a static dataset. Second, the dynamic nature of real-world environments leads to data distribution drift, which is particularly evident in edge devices.
In response, the offline learning paradigm requires retraining the model from scratch to accommodate shifts in data distribution. While this retraining process may not present significant challenges in centralized learning, it becomes prohibitively expensive in distributed learning scenarios, as the training process imposes substantial communication and computation costs. Consequently, online learning may provide greater adaptability in VFL by allowing models to update continuously as new data arrives and to handle dynamic environments.

∗ Corresponding authors.
¹ The term Federated Learning commonly refers to HFL. However, this is not the setting of this study.

However, applying online learning to VFL is not straightforward due to its inherent nature. First, in online VFL, clients receive non-intersecting features of the data from the environment. In real-world scenarios, it is rare for all clients to receive all the features of a sample simultaneously. Instead, it is more common for only a subset of clients to obtain the relevant features in response to a specific _event_. For example, in VFL implementations within large companies (Wei et al., 2022; Hu et al., 2019; Vepakomma et al., 2018), when a customer takes an action such as making a payment or a purchase, this action typically involves only one company in the VFL, while the data at the other companies remain unchanged. Similarly, in VFL involving sensor networks (Wang & Xu, 2023; Liu et al., 2022), only the sensor triggered by an event will be activated, while others remain inactive (Suh, 2007; Heemels et al., 2012; Trimpe & D’Andrea, 2014; Beuchert et al., 2020). These scenarios create a demand for an event-driven online VFL framework, wherein certain participants are activated by events during each round, thereby dominating the learning process, while the rest of the participants remain inactive or passively cooperate with the learning.

The second challenge in online VFL lies in addressing non-convex models and dynamic environments, which are prevalent in practical applications. Current research in online VFL still primarily focuses on online convex optimization, assuming both convex models and stationary data streams (Wang & Xu, 2023). While convex models are easier to optimize, they fail to capture the complex patterns necessary for tackling more challenging tasks. Additionally, the assumption of stationary data streams is unrealistic in dynamic real-world environments, where data distributions can shift over time. For example, an environmental sensor that monitors air quality may experience dynamic changes due to natural phenomena. These are challenges that current online VFL frameworks have yet to resolve.

To address the aforementioned challenges, we propose a novel event-driven online VFL framework, which is well-suited to online learning with non-convex models in non-stationary environments. Figure 1 depicts a schematic graph of our framework. In our framework, a subset of the clients is activated by the event at each round, while the others passively contribute to the training. This approach substantially reduces communication and computation costs and helps each client model learn relevant content. Moreover, we adapt the dynamic local regret approach proposed by Aydore et al. (2019) to our event-driven online VFL framework to effectively handle online learning with non-convex models in non-stationary environments.
Figure 1: Event-driven online VFL.

In summary, the contributions of our paper are:

- We identify the unrealistic assumption of synchronous data reception in online VFL research and propose a novel event-driven online VFL paradigm that is better suited to real-world scenarios.
- We adapt the dynamic local regret approach to our event-driven online VFL to effectively handle non-convex models in non-stationary data streaming scenarios, and we theoretically prove the dynamic local regret bound for this framework, which incorporates partial activation of the clients.
- Our experiments demonstrate that our event-driven online VFL exhibits greater stability compared to existing methods when confronted with non-stationary data conditions. Additionally, it significantly reduces communication and computation costs.

**Notation.** We use a square bracket with multiple items to denote concatenation for convenience. For instance, given $w_A \in \mathbb{R}^{d_A}$ and $w_B \in \mathbb{R}^{d_B}$, we define $[w_A, w_B] \triangleq [w_A^\top, w_B^\top]^\top$. The superscript $t$ attached to a parameter $w$ indicates the round number (time step). A square bracket enclosing a single integer represents the set of natural numbers from 1 to that integer. For instance, $[M] = \{1, 2, \dots, M\}$.

2 RELATED WORK: ONLINE HFL AND ONLINE VFL ²

**Online HFL.** Most existing online federated learning research focuses on HFL because extending the HFL framework to online learning is relatively straightforward. In HFL, each participant possesses the complete feature set of its local samples. Therefore, online HFL can be easily achieved by assigning each client a unique data stream containing non-overlapping samples. Current research on online HFL focuses on speeding up optimization (Mitra et al., 2021; Eshraghi & Liang, 2020), reducing communication cost (Hong & Chae, 2021), and dealing with concept drift (Ganguly & Aggarwal, 2023). Mitra et al. (2021) applied online mirror descent within the Federated Learning framework, demonstrating sub-linear regret in convex scenarios. Hong & Chae (2021) introduced a randomized multi-kernel algorithm for online federated learning, which maintains the performance of the multi-kernel algorithm while mitigating the linearly increasing communication cost. Kwon et al. (2023) incorporated client sampling and quantization into online federated learning. Ganguly & Aggarwal (2023) proposed a non-stationary detection and restart algorithm for online federated learning, addressing concept drift during online learning.

**Online VFL.** In VFL, each client possesses a non-overlapping feature set, which presents challenges when integrating online learning into this framework. The existing approach to online VFL was proposed by Wang & Xu (2023), who apply online convex optimization to synchronous VFL. However, they naïvely assume that all clients receive a synchronous data stream, which does not align with real-world applications. Apart from this study, no other research has explored online VFL and the characteristics of data streaming in this context. Through the exploration of event-driven mechanisms, we open up new possibilities for real-time data stream processing across distributed nodes within the VFL framework.

3 METHOD

3.1 PROBLEM DEFINITION

In the VFL framework, there is a single server and $M$ clients.
The server produces the label $y^t$ at round $t$, while each client may receive non-intersecting features $x_m^t$ from the environment during the same round. The model of client $m$, denoted as $h_m(w_m; x_m^t)$, is parameterized by $w_m \in \mathbb{R}^{d_m}$ and takes the local feature $x_m^t$ as input to produce an embedding. We denote by $\mathbf{w}$ the concatenation of the parameters of all clients, i.e., $\mathbf{w} = [w_1, \cdots, w_M]$. The server, parameterized by $w_0$, receives the embeddings $h_m(\cdot)$ from all clients and then calculates the loss with the label $y^t$. We define the VFL framework in composite form:

$$f^t(w_0, \mathbf{w}, x^t; y^t) = f\big(w_0, h_1(w_1; x_1^t), \cdots, h_M(w_M; x_M^t); y^t\big) \tag{1}$$

where $f(\cdot)$ denotes the model on the server. For brevity, we denote $f^t(w_0, \mathbf{w}, x^t; y^t)$ by $f^t(w_0, \mathbf{w})$ throughout all following sections.

Following the work of Aydore et al. (2019), we employ dynamic local regret analysis for online non-convex optimization. To reformulate the dynamic local regret in the context of the VFL framework, we begin by introducing the exponentially weighted sliding-window average as the basis for its computation.

**Definition 1.** _**Exponentially weighted sliding-window average:**_ Let $w_0^t \in \mathbb{R}^{d_0}$ denote the server's parameter at time $t$, $w_m^t \in \mathbb{R}^{d_m}$ be client $m$'s parameter at time $t$, and $\mathbf{w} = [w_1, w_2, \cdots, w_M]$. Let $l$ denote the length of the sliding window. Then the exponentially weighted sliding-window average is defined as:

$$S_{t,l,\alpha}(w_0^t, \mathbf{w}^t) \triangleq \frac{1}{W}\sum_{i=0}^{l-1} \alpha^i f^{t-i}(w_0^{t-i}, \mathbf{w}^{t-i}) \tag{2}$$

where $0 < \alpha < 1$ and the superscript $i$ of $\alpha^i$ indicates the exponent. $W = \sum_{i=0}^{l-1}\alpha^i$ serves as the normalization constant for the exponential average, ensuring that $\frac{1}{W}\sum_{i=0}^{l-1}\alpha^i = 1$. It is worth noting that this window gives more weight to recent values, with the weight decaying exponentially, and the loss $f^{t-i}(w_0^{t-i}, \mathbf{w}^{t-i})$ is computed on the past parameters at round $t-i$.

² Extra discussion on the related work of VFL and online learning is in Appendix E.

Then, the DLR can be formally defined based on the accumulated squared norm of the gradient of the exponentially weighted sliding-window average.

**Definition 2.** _**Dynamic $l$-local regret:**_ Let $S_{t,l,\alpha}(w_0^t, \mathbf{w}^t)$ be the sliding-window average defined above, $w_0$ be the server's parameter, and $\mathbf{w}$ be the aggregated clients' parameter. The dynamic $l$-local regret is defined as:

$$DLR_l(T) \triangleq \sum_{t=1}^{T} \big\|\nabla S_{t,l,\alpha}(w_0^t, \mathbf{w}^t)\big\|^2 \tag{3}$$

3.2 ADAPTING DLR TO ONLINE VFL

We integrate the dynamic exponentially time-smoothed online gradient descent method introduced by Aydore et al. (2019) into the VFL framework by incorporating a buffer to store the past gradients.

**Server update.** Given the structure of the dynamic local regret, the server is required to maintain a buffer of the past intermediate derivative values of length $l$.
At each time step, the server computes the gradient and updates the buffer by enqueuing the latest gradient and dequeuing the oldest. Subsequently, the server uses the buffer to compute the dynamic exponentially time-smoothed gradient. Specifically, the partial derivative w.r.t. the server is shown in Eq. 4, and the buffer stores the $l$ past gradients:

$$\nabla_{w_0} S_{t,l,\alpha}(w_0^t, \mathbf{w}^t) = \frac{1}{W}\sum_{i=0}^{l-1} \alpha^i \underbrace{\nabla_{w_0} f^{t-i}(w_0^{t-i}, \mathbf{w}^{t-i})}_{\text{Server Buffer},\; i = 0, 1, \cdots, l-1} \tag{4}$$

Finally, the server updates its model with stochastic gradient descent, i.e., $w_0^{t+1} \leftarrow w_0^t - \eta_0 \cdot \nabla_{w_0} S_{t,l,\alpha}(w_0^t, \mathbf{w}^t)$, where $\eta_0$ is the learning rate of the server.

**Client update.** The clients cannot calculate the partial derivatives w.r.t. their models by themselves because they do not hold the label. Consequently, they depend on the server to transmit the partial derivative $v_m^t = \frac{\partial f^t(w_0^t, \mathbf{w}^t)}{\partial h_m(w_m^t; x_m^t)}$ w.r.t. the client's model output $h_m$ to facilitate model updates. The partial derivative w.r.t. client $m$'s model is computed through the chain rule:

$$\nabla_{w_m} \tilde{S}_{t,l,\alpha}(w_0^t, \mathbf{w}^t) = \frac{1}{W}\sum_{i=0}^{l-1} \alpha^i \nabla_{w_m} f^{t-i}(w_0^{t-i}, \mathbf{w}^{t-i}) = \frac{1}{W}\sum_{i=0}^{l-1} \alpha^i \underbrace{\frac{\partial f^{t-i}(w_0^{t-i}, \mathbf{w}^{t-i})}{\partial h_m(w_m^{t-i}; x_m^{t-i})} \cdot \frac{\partial h_m(w_m^{t-i}; x_m^{t-i})}{\partial w_m^{t-i}}}_{\text{Client Buffer},\; i = 0, 1, \cdots, l-1} \tag{5}$$

After receiving $v_m^t$ from the server, the clients update their buffer by enqueuing $v_m^t \cdot \frac{\partial h_m(w_m^t; x_m^t)}{\partial w_m^t}$. Finally, the client is updated with $w_m^{t+1} \leftarrow w_m^t - \eta_m \nabla_{w_m} \tilde{S}_{t,l,\alpha}(w_0^t, \mathbf{w}^t)$.

3.3 EVENT-DRIVEN ONLINE VFL FRAMEWORK

We design the event-driven online VFL framework and formalize the procedures for both the clients and the server in Algorithm 1.³ When an event occurs at round $t$, each activated client $m$ receives the data $x_m^t$. Subsequently, it sends the embedding $h_m(w_m^t; x_m^t)$ to the server. The server then requests embeddings from the passive clients $m \in \bar{A}_t \subset [M]$. After gathering the embeddings, the server calculates the partial derivative of $f^t(\cdot)$ w.r.t. its local model $w_0^t$ and w.r.t. the outputs of the activated clients $h_m(w_m^t; x_m^t)$. Subsequently, the server updates the buffer of the sequence of partial derivatives in the DLR. Following this, the server updates its local model with the partial derivative w.r.t. its model $w_0$ (Eq. 4) and sends the partial derivative w.r.t. the client's output, $v_m^t$, to the activated clients. After receiving it from the server, each activated client $m \in A_t$ calculates the partial derivative w.r.t. its parameters via the chain rule (Eq. 5), and then updates its parameters accordingly.

³ A synchronous version of Algorithm 1 is provided in Appendix C.1.
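For intuition, the following minimal Python sketch (our own illustration; the class name, learning rate, and random stand-in gradients are assumptions, and the real framework distributes this buffer between the server and the clients) maintains the length-$l$ gradient buffer and forms the exponentially weighted smoothed gradient of Eq. 4.

```python
import numpy as np
from collections import deque

# Sketch of the buffer-based smoothed gradient in Eq. 4 (illustrative only).
class SmoothedGradientBuffer:
    def __init__(self, l: int, alpha: float):
        assert 0.0 < alpha < 1.0
        self.alpha = alpha
        self.buffer = deque(maxlen=l)  # enqueue newest, auto-dequeue oldest

    def push(self, grad: np.ndarray) -> None:
        self.buffer.append(grad)       # per-round gradient of f^t

    def smoothed(self) -> np.ndarray:
        # Weight alpha^i on the i-th newest entry; W normalizes the weights
        # actually present in the window, so they sum to one.
        weights = [self.alpha ** i for i in range(len(self.buffer))]
        W = sum(weights)
        grads = reversed(self.buffer)  # newest first
        return sum(w * g for w, g in zip(weights, grads)) / W

# One simulated server-side SGD loop (eta0 and gradients are stand-ins).
buf, w0, eta0 = SmoothedGradientBuffer(l=5, alpha=0.9), np.zeros(3), 0.1
for t in range(10):
    buf.push(np.random.randn(3))       # stand-in for grad of f^t w.r.t. w0
    w0 -= eta0 * buf.smoothed()        # update per Eq. 4
```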
At each round, we denote the set of activated clients as $A_t \subset [M]$, where $[M]$ represents the set of all clients. The set of passive clients is denoted by $\bar{A}_t = [M] \setminus A_t$.

**Algorithm 1** Event-driven online VFL

**Input:** window length $l$, coefficient $\alpha$, learning rates $\{\eta_m\}_{m=0}^{M}$
**Output:** server model $w_0$, client models $w_m$, $m \in [M]$
0: initialize the model $w_m$ for all participants $m \in \{0, 1, \dots, M\}$
1: **Client procedure:**
2: **if** activated by an event **then**:
3:  sample the environment to obtain $x_m^t$
4:  send $h_m(w_m^t; x_m^t)$ to the server
5:  receive $v_m^t$ from the server
6:  enqueue $v_m^t \cdot \frac{\partial h_m(w_m^t; x_m^t)}{\partial w_m^t}$ into the client's buffer
7:  calculate $\nabla_{w_m}\tilde{S}_{t,l,\alpha}(w_0^t, \mathbf{w}^t)$ with the client's buffer
8:  update the parameter $w_m^{t+1} \leftarrow w_m^t - \eta_m \cdot \nabla_{w_m}\tilde{S}_{t,l,\alpha}(w_0^t, \mathbf{w}^t)$
9: **else if** the server's query $t$ is received **then**:
10:  sample the environment to obtain $x_m^t$
11:  send $h_m(w_m^t; x_m^t)$ to the server
12:  enqueue $\mathbf{0}$ into the client's buffer
13: **Server procedure:**
14: **when** the server receives $h_m(w_m^t; x_m^t)$ from the activated clients $m \in A_t$, **do**:
15:  send query $t$ to all passive clients $m \in \bar{A}_t$
16:  calculate $\nabla_{w_0} f^t(w_0^t, \mathbf{w}^t)$
17:  update the server model $w_0^{t+1} \leftarrow w_0^t - \eta_0 \cdot \nabla_{w_0} S_{t,l,\alpha}(w_0^t, \mathbf{w}^t)$
18:  send $v_m^t = \frac{\partial \tilde{S}_{t,l,\alpha}(w_0^t, \mathbf{w}^t)}{\partial h_m(w_m^t; x_m^t)}$ to all activated clients $m \in A_t$

4 REGRET ANALYSIS

4.1 ASSUMPTIONS

Assumptions 1 to 5 are the assumptions for the analysis of the dynamic local regret bound in the non-convex case. Specifically, Assumption 1 models the smoothness of the loss function $f(\cdot)$, with which we can link the difference of the gradients to the difference of the inputs over the domain. These are the basic assumptions for solving non-convex optimization problems in VFL (Liu et al., 2019; Zhang et al., 2021a; Chen et al., 2020; Castiglia et al., 2022). Assumptions 2 and 3 are common assumptions in the analysis of stochastic non-convex optimization (Aydore et al., 2019; Hazan et al., 2017). The unbiased-gradient assumption means that the expected value of the stochastic gradient equals the true gradient under the underlying distribution of the samples. The bounded-variance assumption ensures that the variability of the stochastic gradient estimates is limited. Assumption 4 bounds the magnitude of the gradient for all participants' models. This is a common assumption for non-convex optimization in VFL (Gu et al., 2020; Castiglia et al., 2022; Wang et al., 2023; Zhang et al., 2021a) and for the regret analysis of online learning in VFL (Wang & Xu, 2023). This assumption is specifically employed to bound the difference between the gradient with missing elements (due to the event-driven framework) and the ideal gradient without such omissions.
Assumption 5 assumes that the loss value is bounded, which is mainly used to bound the differences between the sliding-window averages and to simplify the theoretical result. This is a common assumption in the regret analysis of online learning in the non-convex case (Aydore et al., 2019; Hazan et al., 2017).

Idea Generation Category:
2Direct Enhancement
FCBbh0HCrF
# TETSPHERE SPLATTING: REPRESENTING HIGH-QUALITY GEOMETRY WITH LAGRANGIAN VOLUMETRIC MESHES

Figure 1: (a) Eulerian vs. Lagrangian geometry representations: Compared to Eulerian methods that rely on a fixed grid, TetSphere splatting, a Lagrangian method, uses a set of volumetric tetrahedral spheres that deform to represent the geometry. TetSphere splatting supports applications such as reconstruction, image-to-3D, and text-to-3D generation (b-d).

ABSTRACT

We introduce TetSphere Splatting, a Lagrangian geometry representation designed for high-quality 3D shape modeling. TetSphere splatting leverages an underused yet powerful geometric primitive – volumetric tetrahedral meshes. It represents 3D shapes by deforming a collection of tetrahedral spheres, with geometric regularizations and constraints that effectively resolve common mesh issues such as irregular triangles, non-manifoldness, and floating artifacts. Experimental results on multi-view and single-view reconstruction highlight TetSphere splatting's superior mesh quality while maintaining competitive reconstruction accuracy compared to state-of-the-art methods. Additionally, TetSphere splatting demonstrates versatility by seamlessly integrating into generative modeling tasks, such as image-to-3D and text-to-3D generation. Code is available at https://github.com/gmh14/tssplat.

1 INTRODUCTION

Accurate 3D shape modeling is critical for many real-world applications. Recent advancements in reconstruction (Mildenhall et al., 2020; Wang et al., 2021b; Kerbl et al., 2023), generative modeling (Poole et al., 2022; Liu et al., 2023b;c; Long et al., 2023), and inverse rendering (Mehta et al., 2022; Nicolet et al., 2021; Palfinger, 2022) have significantly improved the geometric precision and visual quality of 3D shapes, pushing the boundaries of automatic digital asset generation.

∗ Both authors contributed equally to this research.

Figure 2: Visual comparison of mesh quality across widely used shape representations, including NeRF (Mildenhall et al., 2020), FlexiCubes (Shen et al., 2023b) (Eulerian), DMesh (Son et al., 2024), and Gaussian Splatting (Huang et al., 2024) (Lagrangian). These methods exhibit mesh quality issues, such as irregular or degenerated triangles, non-manifoldness, and floating artifacts. Our method demonstrates uniform surface triangles, improved mesh quality, and structural integrity.

Central to these advancements are geometry representations, which can be broadly categorized into two types: _Eulerian_ and _Lagrangian_ representations. _Eulerian_ representations describe a geometry on a set of pre-defined, fixed coordinates in 3D world space, where each coordinate position is associated with properties, such as occupancy within the volume or distance from the surface. Widely used Eulerian representations include neural networks that take _continuous_ spatial coordinates as input to model density fields (Mildenhall et al., 2020) or signed distance functions (Wang et al., 2021b; Mehta et al., 2022), as well as deformable grids that use _discrete_ coordinates, with signed distance values defined at grid vertices (Shen et al., 2021; Gao et al., 2022; Shen et al., 2023b).
Despite their popularity, Eulerian representations face a trade-off between computational complexity and geometry quality: capturing intricate geometric details of the shape requires either a high-capacity neural network or a high-resolution grid, both of which are computationally expensive to optimize in terms of time and memory. This trade-off often limits Eulerian representations when modeling thin, slender structures, as their pre-defined resolution is often insufficient to capture fine details. Recently, there has been a growing shift in the community towards _Lagrangian_ representations, which are typically more computationally efficient than Eulerian methods (Kerbl et al., 2023; Guédon & Lepetit, 2024; Huang et al., 2024; Son et al., 2024; Chen et al., 2024). Lagrangian representations describe a 3D shape by tracking the movement of a set of geometry primitives in 3D world space. These geometry primitives can be adaptively positioned based on the local geometry of the shape, typically requiring fewer computational resources than Eulerian methods, especially when modeling shapes with fine geometric details. An illustrative comparison between Eulerian and Lagrangian geometry representations is shown in Fig. 1 (a).

Two of the most commonly used Lagrangian primitives are 3D Gaussians (Kerbl et al., 2023), which represent geometry using point clouds, and surface triangles (Son et al., 2024; Chen et al., 2024), which are coupled with their connectivity to form surface meshes. While these Lagrangian representations are favored for their computational efficiency, they often struggle with poor _mesh quality_ due to their reliance on tracking individual points or triangles, which can lack overall structural coherence. For instance, Gaussian point clouds can move freely in space, often resulting in noisy meshes, while surface triangles, when coupled with connectivity, can form non-manifold surfaces or irregular, and sometimes degenerated, triangles. The resulting geometry, exhibiting these geometric issues, is unsuitable for downstream tasks such as rendering and simulation, where high-quality meshes are crucial for high fidelity.

To address these challenges, we propose a novel Lagrangian geometry representation, _TetSphere Splatting_, designed to construct geometry with an emphasis on producing high-quality meshes. Our key insights stem from the fact that existing Lagrangian primitives are too fine-grained to ensure high-quality meshes. Mesh quality depends not only on individual primitives but also on their interactions: for example, the absence of irregular or degenerated triangles relies on the proper alignment of primitives, while manifoldness depends on how well they are connected. Our representation uses volumetric tetrahedral spheres, termed _TetSpheres_, as geometric primitives. Unlike existing primitives that are individual points or triangles, each TetSphere is a volumetric sphere composed of a set of points connected through tetrahedralization. Initialized as a uniform sphere, each TetSphere can be deformed into complex shapes. Together, a collection of these deformed TetSpheres represents a 3D shape, which is in line with the Lagrangian approach. This more structured primitive allows geometric regularization and constraints to be imposed among points within each TetSphere, ensuring mesh quality is maintained throughout deformation.
The volumetric nature of TetSpheres also establishes a cohesive arrangement of points throughout the volume, ensuring structural integrity and effectively reducing common surface mesh issues such as irregular triangles or non-manifoldness.

We further present a computational framework for TetSphere splatting. Similar to Gaussian splatting (Kerbl et al., 2023), our method "splats" TetSpheres to conform to the target shape. We formulate the deformation of TetSpheres as a geometric energy optimization problem, consisting of a differentiable rendering loss, the bi-harmonic energy of the deformation gradient field, and non-inversion constraints, all effectively solvable via gradient descent.

For evaluation, we conduct quantitative comparisons on two tasks to assess the geometry quality: multi-view and single-view reconstruction. In addition to the commonly used metrics for evaluating reconstruction accuracy, we introduce three metrics to evaluate mesh quality, focusing on key aspects of 3D model usability: surface triangle uniformity, manifoldness, and structural integrity. Compared to state-of-the-art methods, TetSphere splatting demonstrates superior mesh quality while maintaining competitive performance on other metrics. Furthermore, as demonstrations of its versatility, we showcase TetSphere splatting's utility in downstream applications of 3D shape generation from both single images and text.

2 RELATED WORK

**Eulerian and Lagrangian geometry representations.** The differentiation between Eulerian and Lagrangian representations originates from computational fluid dynamics (Chung, 2002) but extends more broadly into computational geometry and physics. Using fluid simulation as an analogy, an Eulerian view would analyze fluid presence at fixed points in space, whereas a Lagrangian perspective follows specific fluid particles. Neural implicit representations, such as DeepSDF (Park et al., 2019) and NeRF (Mildenhall et al., 2021), are modern adaptations of Eulerian concepts, processing 3D positions as inputs to neural networks. These methods theoretically allow for infinite resolution through NN-based parameterization but can result in slow optimization due to NN training. Explicit or hybrid Eulerian representations, such as DMTet (Shen et al., 2021) and TetGAN (Yang et al., 2019), incorporate explicit irregular grids but can still incur substantial memory usage for high-resolution shapes. Mosaic SDF (Yariv et al., 2024) uses Lagrangian volumetric grids that move in space but is designed only for 3D generation tasks where ground-truth shapes are required. Gaussian splatting (Tang et al., 2023a) exemplifies a Lagrangian approach by moving Gaussian point clouds in space. Surface triangles (Son et al., 2024; Chen et al., 2024) are another example of a Lagrangian approach, where the surface is discretized into a collection of connected triangular elements tracked individually. Our TetSphere can be viewed as introducing constraints among points through tetrahedral meshing, with enhanced mesh quality.

**3D object reconstruction.** 3D reconstruction is an inherently ill-posed problem, and extensive research has been dedicated to addressing it (Fu et al., 2021; Fahim et al., 2021).
Early approaches utilized a combination of 2D image encoders and 3D decoders trained on 3D data with both explicit representations, including voxels (Chen & Zhang, 2019; Xie et al., 2019; 2020), meshes (Wang et al., 2018; Gkioxari et al., 2019), and point clouds (Mandikal et al., 2018; Groueix et al., 2018), and implicit representations such as NeRF (Yu et al., 2020; Jang & Agapito, 2021; Müller et al., 2022), SDF (Park et al., 2019; Mittal et al., 2022; Xu et al., 2019), and occupancy networks (Mescheder et al., 2019; Bian et al., 2021). Recently, an active research direction has been leveraging 2D generative models for 3D reconstruction, including the use of SDS and supplementary losses (Lin et al., 2023; Sun et al., 2023; Liu et al., 2023b; Melas-Kyriazi et al., 2023; Shen et al., 2023a; Xu et al., 2022; Gu et al., 2023; Deng et al., 2022). The recent introduction of large-scale 3D datasets propelled feed-forward large reconstruction models (Hong et al., 2023b; Wang et al., 2023b; Xu et al., 2023; Weng et al., 2024; Tang et al., 2024; He & Wang, 2023; Tochilkin et al., 2024; Wei et al., 2024; Zhang et al., 2024; Xu et al., 2024b). Their feed-forward inference accelerates 3D reconstruction, but often at the cost of relatively low resolution and geometry quality. There is also a body of work from the inverse rendering community focused on 3D reconstruction (Nicolet et al., 2021; Palfinger, 2022), including works leveraging explicit Lagrangian representations with differentiable rendering losses (Nicolet et al., 2021; Palfinger, 2022; Vicini et al., 2022; Worchel et al., 2022). Nicolet et al. (2021) use Laplacian preconditioning when computing the gradient. Palfinger (2022) introduces an adaptive remeshing algorithm based on estimating the optimal local edge length. Both methods use a single surface sphere, whereas our method uses multiple tetrahedral spheres as primitives. Detailed discussion and experimental comparisons are provided in Appendix A.

Figure 3: Overall pipeline: TetSphere splatting represents a 3D shape using a collection of TetSpheres. Each TetSphere is a tetrahedral sphere that can be deformed from its initial uniform state through the deformation gradient. The deformation process is optimized by minimizing rendering loss and geometric energy terms.

Our method also relates to text-to-3D content generation, as it is one of the applications of TetSphere splatting. We leave a detailed discussion of these related works to Appendix J.

3 TETSPHERE SPLATTING

We use tetrahedral spheres as our primitive of choice. Unlike point clouds, tetrahedral meshes enforce structured local connectivity between points owing to tetrahedralization. This preserves the geometric integrity of the 3D shape and also enhances surface quality by imposing geometric regularization across the entire mesh interior. We formulate the reconstruction of shapes through TetSphere splatting as a deformation of tetrahedral spheres. Starting from a set of tetrahedral spheres, we adjust the positions of their vertices to align the rendered images of these meshes with the corresponding target multi-view images. Vertex movement is constrained by two geometric regularizations on the tetrahedral meshes, derived from the field of geometry processing (Bærentzen et al., 2012).
These regularizations, which penalize non-smooth deformation (via bi-harmonic energy) and prevent the inversion of mesh elements (via local injectivity), have proven effective in ensuring that the resulting tetrahedral meshes are of superior quality and maintain structural integrity. Fig. 3 illustrates the overall pipeline.

3.1 TETRAHEDRAL SPHERE PRIMITIVE

The primitive of TetSphere splatting is a tetrahedralized sphere, called a _TetSphere_, with $N$ vertices and $T$ tetrahedra. Following principles from the Finite Element Method (FEM) (Sifakis & Barbic, 2012), the mesh of each sphere is composed of tetrahedral elements, with each tetrahedron constituting a 3D discrete piecewise-linear volumetric entity. We denote the position vector of all vertices of the $i$-th deformed sphere mesh as $x_i \in \mathbb{R}^{3N}$. The deformation gradient of the $j$-th tetrahedron in the $i$-th sphere is denoted as $\mathbf{F}_{\mathbf{x}}^{(i,j)} \in \mathbb{R}^{3\times 3}$, which quantitatively describes how each tetrahedron's shape transforms (Sifakis & Barbic, 2012). Essentially, the deformation gradient $\mathbf{F}_{\mathbf{x}}^{(i,j)}$ serves as a measure of the spatial changes a tetrahedron undergoes from its original configuration to its deformed state. Refer to Fig. 3 for a visual explanation and Appendix F for an in-depth derivation.

Rather than using a single sphere, our representation employs a collection of spheres to accurately represent arbitrary shapes. Consequently, the complete shape is the union of all spheres. By adopting multiple spheres, this approach ensures that each local region of a shape is detailed independently, enabling a highly accurate representation. Moreover, it allows for the representation of shapes with arbitrary topologies. Such a claim is theoretically guaranteed by the paracompactness property of manifold shapes (James, 2000).

Figure 4: TetSphere splatting with deforming tetrahedral spheres. Color-coded regions represent the bi-harmonic energy values (red: high, blue: low) across tetrahedra, one of the geometric regularizations employed in our deformation optimization process.

Using tetrahedral spheres offers several technical benefits compared with prevalent representations for object reconstruction, as demonstrated in Fig. 2:

- Compared to neural representations (e.g., NeRF), our tetrahedral representation does not rely on neural networks, thus inherently accelerating the optimization process.
- Compared to Eulerian representations (such as DMTet), our approach entirely avoids the need for iso-surface extraction – an operation that often degrades mesh quality owing to the predetermined resolution of the grid space.
- Compared to other Lagrangian representations, such as Gaussian point clouds and triangle meshes, our method offers a volumetric representation through the use of tetrahedral meshes. Each tetrahedron imposes constraints among vertices, leading to superior mesh quality.

3.2 TETSPHERE SPLATTING AS SHAPE DEFORMATION

To reconstruct the geometry of the target object, we deform the initial TetSpheres by changing their vertex positions. Two primary goals govern this process: ensuring the deformed TetSpheres align with the input multi-view images and maintaining high mesh quality that adheres to the necessary geometric constraints. Fig. 4 illustrates the iterative process of TetSphere splatting.
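To make the deformation gradient concrete, here is a minimal NumPy sketch (our own illustration under the standard FEM convention of Sifakis & Barbic (2012), not code from this work): for one tetrahedron, $\mathbf{F} = \mathbf{D}_s \mathbf{D}_m^{-1}$, where the columns of $\mathbf{D}_m$ and $\mathbf{D}_s$ are the rest-state and deformed edge vectors. The local-injectivity constraint discussed next corresponds to checking $\det(\mathbf{F}) > 0$.

```python
import numpy as np

# Per-tetrahedron deformation gradient, standard FEM convention
# (Sifakis & Barbic, 2012); illustrative sketch only.
def deformation_gradient(X: np.ndarray, x: np.ndarray) -> np.ndarray:
    """X: (4, 3) rest-pose vertices; x: (4, 3) deformed vertices."""
    Dm = (X[1:] - X[0]).T          # columns = rest edge vectors from vertex 0
    Ds = (x[1:] - x[0]).T          # columns = deformed edge vectors
    return Ds @ np.linalg.inv(Dm)  # F maps rest edges to deformed edges

# A unit tetrahedron, slightly deformed; det(F) > 0 means the element has
# not inverted, matching the local-injectivity constraint in the optimization.
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
x = X + 0.05 * np.random.default_rng(0).normal(size=(4, 3))
F = deformation_gradient(X, x)
print("det(F) =", np.linalg.det(F), "| inverted:", np.linalg.det(F) <= 0)
```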
To maintain mesh quality, we apply bi-harmonic energy – defined in the geometry processing literature (Botsch & Sorkine, 2007) as an energy quantifying the smoothness of a field – to the deformation gradient field. This geometric regularization ensures the smoothness of the deformation gradient field throughout the deformation process, thus preventing irregular meshes or bumpy surfaces. It is important to highlight that this bi-harmonic regularization does _not_ lead to over-smoothing of the final result. This is because the energy targets the deformation gradient field, which measures the _relative_ changes in vertex positions, rather than the _absolute_ positions themselves. Such an approach allows for the preservation of sharp local geometric details, akin to techniques used in physical simulations (Wen et al., 2023).

Furthermore, we introduce a geometric constraint to guarantee local injectivity of all deformed elements (Schüller et al., 2013). This ensures that the elements maintain their orientation during the deformation, avoiding inversions or inside-out configurations. This constraint can be mathematically expressed as $\det(\mathbf{F}_{\mathbf{x}}^{(i,j)}) > 0$. Importantly, these two terms – bi-harmonic energy for smoothness and local injectivity for element orientation – are universally applicable to any tetrahedral mesh, stemming from their fundamental basis in geometry processing (Bærentzen et al., 2012). More details are discussed in Appendix L.

Let $\mathbf{x} = [x_1, \dots, x_M] \in \mathbb{R}^{3NM}$ denote the positions of vertices across all $M$ TetSpheres, and $\mathbf{F}_{\mathbf{x}} = [\mathrm{vec}(\mathbf{F}_{\mathbf{x}}^{(1,1)}), \dots, \mathrm{vec}(\mathbf{F}_{\mathbf{x}}^{(M,T)})] \in \mathbb{R}^{9MT}$ denote the flattened deformation gradient fields of all TetSpheres, where $N$ and $T$ denote the number of vertices and tetrahedra within each TetSphere, respectively. In the bi-harmonic energy, the Laplacian matrix is defined based on the connectivity of the tetrahedron faces, denoted as $\mathbf{L} \in \mathbb{R}^{9MT \times 9MT}$. This matrix is block symmetric.

Idea Generation Category:
0Conceptual Integration
8enWnd6Gp3
# OMNIRE: OMNI URBAN SCENE RECONSTRUCTION

**Ziyu Chen**¹,⁶∗ **Jiawei Yang**⁶ **Jiahui Huang**⁵ **Riccardo de Lutio**⁵ **Janick Martinez Esturo**⁵ **Boris Ivanovic**⁵ **Or Litany**²,⁵ **Zan Gojcic**⁵ **Sanja Fidler**³,⁵ **Marco Pavone**⁴,⁵ **Li Song**¹ **Yue Wang**⁵,⁶

¹ Shanghai Jiao Tong University ² Technion ³ University of Toronto ⁴ Stanford University ⁵ NVIDIA Research ⁶ University of Southern California

ABSTRACT

We introduce OmniRe, a comprehensive system for efficiently creating high-fidelity digital twins of dynamic real-world scenes from on-device logs. Recent methods using neural fields or Gaussian Splatting primarily focus on vehicles, hindering a holistic framework for all dynamic foregrounds demanded by downstream applications, e.g., the simulation of human behavior. OmniRe extends beyond vehicle modeling to enable accurate, full-length reconstruction of diverse dynamic objects in urban scenes. Our approach builds scene graphs on 3DGS and constructs multiple Gaussian representations in canonical spaces that model various dynamic actors, including vehicles, pedestrians, cyclists, and others. OmniRe allows holistically reconstructing any dynamic object in the scene, enabling advanced simulations (~60 Hz) that include human-participated scenarios, such as pedestrian behavior simulation and human-vehicle interaction. This comprehensive simulation capability is unmatched by existing methods. Extensive evaluations on the Waymo dataset show that our approach outperforms prior state-of-the-art methods quantitatively and qualitatively by a large margin. We further extend our results to 5 additional popular driving datasets to demonstrate its generalizability on common urban scenes. Code and results are available at https://ziyc.github.io/omnire/.

1 INTRODUCTION

Creating photorealistic digital twins of the 4D real world is valuable for enabling high-fidelity simulation, robust algorithm training, and evaluation. As autonomous driving algorithms increasingly adopt end-to-end models, the need for scalable and domain-gap-free simulation environments, where these systems can be evaluated in closed loop, is becoming more evident. While traditional artist-generated assets are reaching their limits in scale, diversity, and realism, recent advances in data-driven methods offer a strong alternative by creating realistic digital twins directly from real-world sensor data. Indeed, neural radiance fields (NeRFs) (Mildenhall et al., 2020; Barron et al., 2021; Yang et al., 2023b; Guo et al., 2023; Yang et al., 2023a; Wu et al., 2023b) and Gaussian Splatting (GS) (Kerbl et al., 2023; Yan et al., 2024) have emerged as powerful tools for reconstructing 3D scenes with high levels of visual and geometric fidelity. However, accurately and holistically reconstructing dynamic urban scenes remains a significant challenge, especially due to the diverse dynamic actors and their complex rigid and non-rigid motions in real-world environments.

Several works have already tried to tackle this challenge. Early methods typically ignore dynamic actors and reconstruct only static parts of the scene (Tancik et al., 2022; Martin-Brualla et al., 2021; Rematas et al., 2022; Guo et al., 2023).
Subsequent works aim to reconstruct dynamic scenes by either **(i)** modeling the scenes as a combination of a static and a time-dependent dynamic neural field, where the static-dynamic decomposition is an emergent property (Yang et al., 2023a; Turki et al., 2023), or **(ii)** building a scene graph, in which dynamic actors and the static background are represented as nodes and reconstructed in their canonical frames. The nodes of the scene graph are connected with edges that encode relative transformations representing the motion of each actor through time (Ost et al., 2021; Kundu et al., 2022; Yang et al., 2023b; Wu et al., 2023b; Tonderski et al., 2024; Fischer et al., 2024b).

∗ Work done during a research internship at University of Southern California. Contact: Ziyu Chen <ziyu.sjtu@gmail.com>, Yue Wang <yue.w@usc.edu>

Figure 1: (a) Decomposition of different parts of a scene. (b) Out-of-distribution categories that are overlooked by previous methods can be accurately handled by OmniRe. (c) OmniRe enables diverse applications including vehicle editing (c1, c2), human-vehicle interaction (c3), human behavior simulation (c4), etc.

But both approaches fall short of meeting the requirements for comprehensive and interactive digital twins: while providing a more general formulation, methods of type **(i)** lack editability and cannot be directly controlled with classical behavior models. Previous methods following **(ii)** still focus primarily on vehicles, which can be represented as rigid bodies, thereby largely neglecting other vulnerable road users (VRUs) such as pedestrians and cyclists that are fundamental and critical in urban scene simulation. To fill this critical gap, our work aims to model all dynamic actors, including vehicles, pedestrians, cyclists, and many others, in a manner that allows for interactive simulation. This leads to two primary challenges: **(i)** developing a unified approach for modeling diverse non-rigid dynamic actors, given the wide range of non-rigid categories in real-world scenes; **(ii)** giving specific focus to humans, as their behavior is critical for decision-making in scenarios like driving, where pedestrian actions directly impact safety. Thus, precise joint-level reconstruction (Lei et al., 2023; Jiang et al., 2022; Kocabas et al., 2024) is crucial for fine control of human behavior in the simulator.

To address the specific challenge of modeling human actors, we must consider several additional complexities. First, in-the-wild scenarios present significant challenges for capturing human motion dynamics due to unfavorable sensor observations and cluttered environments with frequent occlusions (Wang et al., 2024; Yang et al., 2021; Wang et al., 2023). Furthermore, reconstructing high-fidelity human appearance from sparse sensor data, beyond mere geometry dynamics, adds additional complexity. Lastly, interactions with large equipment, such as wheelchairs or strollers, which cannot be represented by explicit templates (e.g., SMPL), further complicate both appearance and geometry modeling.

To address these challenges, we propose an "omni" system capable of handling diverse actors for urban digital twins. Our method, OmniRe, efficiently reconstructs high-fidelity urban scenes that include static backgrounds, driving vehicles, and non-rigidly moving dynamic actors (see Fig. 1).
Specifically, we construct a dynamic neural scene graph (Ost et al., 2021) based on 3D Gaussian Splatting (Kerbl et al., 2023), with dedicated Gaussian representations for different kinds of dynamic actors in their local canonical spaces. In our framework, backgrounds and vehicles are represented as static Gaussians, while vehicles undergo rigid-body transformations to simulate their motion over time. For non-rigid actors, we incorporate the SMPL model to enable joint-level control of pedestrians using dynamic Gaussians, as SMPL provides a prior template geometry for 3DGS initialization and explicit control for modeling desired human behaviors, which is advantageous for downstream simulation applications. To extract SMPL parameters for human motion modeling, we designed a novel human body pose estimation pipeline dedicated to driving logs with multi-camera setups and severe in-the-wild occlusions. For other template-less dynamic actors, we propose a shared deformation field approach trained in a self-supervised manner. This framework enables a unified representation of all non-rigid categories and achieves specialized joint-level control for pedestrians. Thus, OmniRe allows for accurate representation and controllable reconstruction of most objects of interest in the scene. Notably, our representation is directly amenable to behavior and animation models that are commonly used in AV simulation (_e.g._, Fig. 1-(c)). To summarize, we make the following contributions:

- We introduce OmniRe, a holistic framework for dynamic urban scene reconstruction that embodies the "omni" principle of dynamic category coverage and representation flexibility. OmniRe leverages dynamic neural scene graphs based on Gaussian representations to unify the reconstruction of static backgrounds, driving vehicles, and non-rigidly moving dynamic actors (§ 4). It enables high-fidelity scene reconstruction and human-centered simulations, such as pedestrian behavior and human-vehicle interaction, capabilities unmatched by existing methods (§ 5).
- We address the challenges of modeling humans and other dynamic actors from driving logs, such as occlusion, cluttered environments, and the limitations of existing human pose prediction models (§ 4.2). We demonstrate our method on six popular driving datasets to show its generalizability in urban driving scenes (see the project page). While our findings are based on AV scenarios, they can generalize to other domains.
- We perform extensive experiments and ablations to demonstrate the benefits of our holistic framework. OmniRe achieves state-of-the-art performance in scene reconstruction and novel view synthesis (NVS), significantly outperforming previous methods in terms of full-image metrics (+1.88 PSNR for reconstruction and +2.38 PSNR for NVS). The differences are pronounced for dynamic actors, such as vehicles (+1.18 PSNR) and humans (+4.09 PSNR for reconstruction and +3.06 PSNR for NVS) (Tab. 1).

2 RELATED WORK

**Dynamic Scene Modeling.** Neural representations are dominating novel view synthesis (Mildenhall et al., 2020; Barron et al., 2022; 2021; Müller et al., 2022; Fridovich-Keil et al., 2022; Kerbl et al., 2023). These have been extended in different ways to enable dynamic scene reconstruction.
_Deformation-based_ approaches (Pumarola et al., 2020; Park et al., 2021a; Tretschk et al., 2021; Park et al., 2021b; Cai et al., 2022), and recently DeformableGS (Yang et al., 2023c) and Wu et al. (2023a), propose to model dynamic scenes using a 3D neural representation for the canonical space, coupled with a deformation network mapping time-dependent observations to canonical deformations. These are generally limited to small scenes with limited movement, making them inadequate for challenging urban dynamic scenes. _Modulation-based_ techniques operate by directly feeding the image timestamps (or latent codes) as an additional input to a neural representation (Xian et al., 2021; Li et al., 2021; 2022; Luiten et al., 2024). However, this generally results in an underconstrained formulation, therefore requiring additional supervision, such as depth (Li et al., 2021) and optical flow (Xian et al., 2021), or multi-view inputs captured from synchronized cameras (Li et al., 2022; Luiten et al., 2024). D²NeRF (Wu et al., 2022) proposed to expand on this formulation by partitioning the scene into static and dynamic fields. Following this, SUDS (Turki et al., 2023) and EmerNeRF (Yang et al., 2023a) have shown impressive reconstruction ability for dynamic autonomous driving scenes. However, they model all dynamic elements using a single dynamic field, rather than modeling each separately; thus, they lack controllability, limiting their practicality as sensor simulators. _Explicit decomposition_ of the scene into separate agents enables controlling them individually. These agents can be represented as bounding boxes in a scene graph as in Neural Scene Graphs (NSG) (Ost et al., 2021), which is widely adopted in UniSim (Yang et al., 2023b), MARS (Wu et al., 2023b), NeuRAD (Tonderski et al., 2024), ML-NSG (Fischer et al., 2024b), and the recent Gaussian-based works StreetGaussians (Yan et al., 2024), DrivingGaussians (Zhou et al., 2023), and HUGS (Zhou et al., 2024). However, these approaches handle only rigid objects due to limitations of time-independent representations (Ost et al., 2021; Wu et al., 2023b; Yang et al., 2023b; Zhou et al., 2023; 2024; Yan et al., 2024; Tonderski et al., 2024; Fischer et al., 2024b) or limitations of deformation-based techniques (Yang et al., 2023c; Huang et al., 2023). A recent concurrent work, Fischer et al. (2024a), also considers non-rigid modeling using a deformation field, addressing a subset of the challenges in modeling holistic dynamics, but does not address fine-grained human models that allow flexible control. To address them, OmniRe proposes a
(2023; 2024) aim to recreate natural and accurate human motion from partial observations. However, these methods focus solely on shape and pose estimation and are limited in appearance modeling. In contrast, our method not only models human appearance but also integrates this modeling within a holistic scene framework to achieve a comprehensive solution. Urban scenes typically involve numerous pedestrians with sparse observations, often accompanied by severe occlusion. We analyze these challenges in detail and address them in § 4.2.

3 PRELIMINARIES

**3D Gaussian Splatting.** First introduced in Kerbl et al. (2023), 3D Gaussian Splatting (3DGS) represents scenes via a set of colored blobs $G = \{g\}$ whose intensity distribution is a Gaussian. Each Gaussian (blob) $g = (o, \mu, q, s, c)$ contains the following attributes: opacity $o \in (0, 1)$, mean position $\mu \in \mathbb{R}^3$, rotation $q \in \mathbb{R}^4$ represented as a quaternion, anisotropic scaling factors $s \in \mathbb{R}^3_+$, and view-dependent colors $c \in \mathbb{R}^F$ represented as spherical harmonics (SH) coefficients. To compute the color $C$ of a pixel, the Gaussians overlapping with this pixel are sorted by their distance to the camera center and $\alpha$-blended in order $i \in \mathcal{N}$: $C = \sum_{i \in \mathcal{N}} c_i \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j)$, where $\alpha_i = o_i \exp\big(-\tfrac{1}{2}(p - \mu_i)^\top \Sigma_i^{-1} (p - \mu_i)\big)$ and $\Sigma_i$ is the 2D projection covariance. We further define the application of a rigid transformation $T = (R, t) \in \mathrm{SE}(3)$ to all Gaussians in the set as $T \otimes G = (o, R\mu + t, \mathrm{Rot}(R, q), s, c)$, where $\mathrm{Rot}(\cdot)$ denotes rotating the quaternion by the rotation matrix.
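To make the $\alpha$-blending rule concrete, here is a minimal NumPy sketch of the per-pixel compositing formula above. It assumes the Gaussians have already been projected and their per-pixel $\alpha$ values computed; the array names are illustrative, not taken from any released 3DGS codebase.

```python
import numpy as np

def alpha_blend(colors, alphas, depths):
    """Front-to-back alpha blending for one pixel:
    C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j),
    with Gaussians sorted by their distance to the camera."""
    order = np.argsort(depths)                 # sort by depth, nearest first
    c, a = colors[order], alphas[order]
    # transmittance prod_{j<i} (1 - alpha_j), with T_0 = 1
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - a)[:-1]))
    return (c * (a * transmittance)[:, None]).sum(axis=0)

# toy usage: three Gaussians overlapping a pixel
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
alphas = np.array([0.5, 0.4, 0.3])
depths = np.array([2.0, 1.0, 3.0])
print(alpha_blend(colors, alphas, depths))     # [0.3, 0.4, 0.09]
```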
**Skinned Multi-Person Linear (SMPL) Model.** SMPL (Loper et al., 2015) is a parametric human body model that combines the advantages of a parametric mesh with linear blend skinning (LBS) to manipulate body shape and pose. At its core, SMPL uses a template mesh $M_h = (V, F)$ defined in a canonical rest pose, parameterized by $n_v$ vertices $V \in \mathbb{R}^{n_v \times 3}$. The template mesh can be shaped and transformed using shape parameters $\beta$ and pose parameters $\theta$: $V_S = V + B_S(\beta) + B_P(\theta)$, where $B_S(\beta) \in \mathbb{R}^{n_v \times 3}$ and $B_P(\theta) \in \mathbb{R}^{n_v \times 3}$ are the $xyz$ offsets to individual vertices (Kocabas et al., 2024) and $V_S$ are the vertex locations in the shaped space. To further deform the shaped vertices $V_S$ into a desired pose $\theta'$, SMPL uses pre-defined LBS weights $W \in \mathbb{R}^{n_k \times n_v}$ and joint transformations $G$ to define the deformation of each vertex $v_i$: $v_i' = \big(\sum_k W_{k,i}\, G_k\big)\, v_i$, where $n_k$ is the number of joints and the joint transformations $G$ are derived from the source pose $\theta$, the target pose $\theta'$, and the shape $\beta$. The pose parameters include the body pose component $\theta_b \in \mathbb{R}^{23 \times 3 \times 3}$ and the global orientation component $\theta_g \in \mathbb{R}^{3 \times 3}$. For more details of SMPL, we refer readers to Loper et al. (2015). Our method obtains pose parameters $\theta$ for each pedestrian across all frames, as well as their individual shape parameters $\beta \in \mathbb{R}^{10}$; these pose sequences initialize the non-rigid dynamics of pedestrians. The detailed process is described in § 4.2.
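As a concrete illustration of the LBS formula $v_i' = (\sum_k W_{k,i} G_k)\, v_i$, here is a minimal NumPy sketch using homogeneous coordinates and 4x4 joint transforms. The shapes and the toy example are illustrative assumptions, not SMPL's actual template data.

```python
import numpy as np

def lbs(vertices, weights, joint_transforms):
    """Linear blend skinning: v_i' = (sum_k W[k, i] * G[k]) @ v_i.
    vertices: (n_v, 3); weights: (n_k, n_v); joint_transforms: (n_k, 4, 4)."""
    v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)
    # blended per-vertex transform T[v] = sum_k W[k, v] * G[k], shape (n_v, 4, 4)
    T = np.einsum("kv,kij->vij", weights, joint_transforms)
    v_out = np.einsum("vij,vj->vi", T, v_h)
    return v_out[:, :3]

# toy usage: 2 joints, 3 vertices; the second joint translates by +1 in x
G = np.stack([np.eye(4), np.eye(4)])
G[1, 0, 3] = 1.0
W = np.array([[1.0, 0.5, 0.0],      # skinning weights, joints x vertices
              [0.0, 0.5, 1.0]])
V = np.zeros((3, 3))
print(lbs(V, W, G))                 # x-offsets: 0.0, 0.5, 1.0
```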
4 METHOD

As overviewed in Fig. 2, we build a comprehensive 3DGS framework that holistically reconstructs both the static background and diverse _movable_ entities. We discuss our systematic approach of representing different semantic classes with diverse Gaussian representations in § 4.1, highlighting that this complex yet efficient system-level framework is one of our primary contributions. Modeling humans in unconstrained environments is particularly challenging due to the complexity of human motions and the difficulty of accurately recovering geometry and appearance under severe in-the-wild occlusions. We present our approach to this problem in § 4.2, which significantly expands our effectiveness in common driving scenes. Lastly, we show how the scene representation is end-to-end optimized to obtain faithful and controllable reconstructions in § 4.3.

Figure 2: **Method Overview.** Gaussians of all foreground models are defined in their local or canonical spaces. At a given time $t$, the Gaussians are deformed and transformed into the world space, forming a Gaussian scene graph together with background Gaussians to model the entire scene. The Gaussians in the scene graph are rasterized to render images and depth, and are jointly optimized using reconstruction losses. We utilize SMPL Gaussians to model non-rigid human bodies and deformable Gaussians to handle out-of-distribution non-rigid categories.

4.1 DYNAMIC GAUSSIAN SCENE GRAPH MODELING

**Gaussian Scene Graph.** To allow for flexible control of diverse _movable_ objects in the scene without sacrificing reconstruction quality, we opt for a _Gaussian Scene Graph_ representation. Our scene graph is composed of the following nodes: (1) a _Sky Node_ representing the sky far away from the ego-car, (2) a _Background Node_ representing the static scene background such as buildings, roads, and vegetation, (3) a set of _Rigid Nodes_, each representing a rigidly movable object such as a vehicle, and (4) a set of _Non-rigid Nodes_ that model non-rigid individuals, _e.g._, pedestrians and cyclists. Nodes of types (2), (3), and (4) can be converted directly into world-space Gaussians, which we introduce next. These Gaussians are concatenated and rendered using the rasterizer proposed in Kerbl et al. (2023). The Sky Node is represented by an optimizable environment texture map, similar to Chen et al. (2023), rendered separately and composited with the rasterized Gaussian image via simple alpha blending.

**Background Node.** The background node is represented by a set of static Gaussians $G^{\mathrm{bg}}$. These Gaussians are initialized by accumulating the LiDAR points together with additional points generated randomly, following the strategy described in Chen et al. (2023).

**Rigid Nodes.** Gaussians representing vehicles (_e.g._, cars or trucks) are defined as $\bar{G}_v^{\mathrm{rigid}}$ in the object's local space (denoted by the upper bar), where $v$ is the index of the vehicle/node. While the Gaussians of a vehicle do not change over time in the local space, their positions in world space change according to the vehicle's pose $T_v \in \mathrm{SE}(3)$. At a given time $t \in \mathbb{R}$, the Gaussians are transformed into world space by simply applying the pose transformation:

$$G_v^{\mathrm{rigid}}(t) = T_v(t) \otimes \bar{G}_v^{\mathrm{rigid}}. \quad (1)$$
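A minimal sketch of the world-space transformation in Eq. (1), assuming SciPy is available and quaternions follow SciPy's (x, y, z, w) convention. Only means and rotations change, matching the $T \otimes G$ definition in § 3; opacity, scales, and colors are carried through untouched.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def transform_gaussians(R, t, means, quats):
    """Apply a rigid transform T = (R, t) to a set of Gaussians:
    means -> R @ mu + t, and each quaternion is rotated by R."""
    new_means = means @ R.T + t
    new_quats = (Rotation.from_matrix(R) * Rotation.from_quat(quats)).as_quat()
    return new_means, new_quats

# toy usage: rotate local-space Gaussians 90 degrees about z, then translate
R = Rotation.from_euler("z", 90, degrees=True).as_matrix()
t = np.array([1.0, 0.0, 0.0])
means = np.array([[1.0, 0.0, 0.0]])
quats = Rotation.identity(1).as_quat()
print(transform_gaussians(R, t, means, quats))   # mean moves to [1, 1, 0]
```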
**Non-Rigid Nodes.** Non-rigid individuals are often overlooked by previous methods (Zhou et al., 2024; Yan et al., 2024; Zhou et al., 2023; Fischer et al., 2024b) due to their modeling complexity, despite their importance for human-centered simulation. Unlike rigid vehicles, non-rigid dynamic classes such as pedestrians and cyclists require consideration of both their global movements in world space and their continuous deformations in local space to accurately reconstruct their dynamics. To enable a reconstruction that fully explains the underlying geometry, we further subdivide the non-rigid nodes into two categories: _SMPL Nodes_ for walking or running pedestrians, with SMPL templates that enable joint-level control, and _Deformable Nodes_ for out-of-distribution non-rigid instances (such as cyclists and other template-less dynamic entities).

**Non-Rigid SMPL Nodes.** As introduced in § 3, SMPL provides a parametric way of representing human poses and deformations, and we hence use the model parameters $(\theta(t), \beta)$ to drive the 3D Gaussians within the nodes. Here $\theta(t) \in \mathbb{R}^{24 \times 3 \times 3}$ represents the human posture that changes over time $t$. For each node indexed by $h$, we tessellate the SMPL template mesh $M_h$, instantiated from the resting pose (the _'Da'_ pose), with 3D Gaussians $\bar{G}_h^{\mathrm{SMPL}}$ using a strategy similar to Lei et al. (2023), where each Gaussian is bound to its corresponding vertex of $M_h$. The world-space Gaussians for each node can then be computed as:

$$G_h^{\mathrm{SMPL}}(t) = T_h(t) \otimes \mathrm{LBS}\big(\theta(t), \bar{G}_h^{\mathrm{SMPL}}\big). \quad (2)$$

Here $T_h(t) \in \mathrm{SE}(3)$ is the global pose of the node at time $t$, and $\mathrm{LBS}(\cdot)$ is the linear blend skinning operation that deforms the Gaussians according to the SMPL pose parameters. To compute the LBS operator, one first precomputes the skinning weights of each Gaussian in $\bar{G}_h^{\mathrm{SMPL}}$ w.r.t. the SMPL key joints. When $\theta$ changes over time, the key joints' transformations are updated and linearly interpolated onto the Gaussians to obtain the final deformed positions and rotations, while the other Gaussian attributes remain unchanged. Crucially, it is highly challenging to accurately optimize the SMPL poses $\theta(t)$ from scratch using only sensor observations, even in single-person or indoor scenarios (Jiang et al., 2022; Lei et al., 2023; Kocabas et al., 2024). Hence a rough initialization of $\theta(t)$ is typically needed; its details are deferred to a dedicated section, § 4.2.

**Non-Rigid Deformable Nodes.** These nodes act as a unified representation for other significant non-rigid instances, including those beyond the scope of SMPL modeling, such as extremely faraway pedestrians for which even state-of-the-art SMPL estimators cannot provide accurate estimates, or out-of-distribution, template-less non-rigid individuals. Hence, we propose a general deformation network $F_\phi$ with parameters $\phi$ to fit the non-rigid motions within the nodes. Specifically, for node $h$, the world-space Gaussians are defined as:

$$G_h^{\mathrm{deform}}(t) = T_h(t) \otimes \big(\bar{G}_h^{\mathrm{deform}} \oplus F_\phi(\bar{G}_h^{\mathrm{deform}}, e_h, t)\big), \quad (3)$$

where the deformation network generates the changes of the Gaussian attributes from the canonical-space Gaussians $\bar{G}_h^{\mathrm{deform}}$ to time $t$, outputting the changes in position $\delta\mu_h(t)$, rotation $\delta q_h(t)$, and scaling factors $\delta s_h(t)$. The changes are applied back to $\bar{G}_h^{\mathrm{deform}}$ with the $\oplus$ operator, which internally performs a simple arithmetic addition yielding $(o, \mu + \delta\mu(t), q + \delta q(t), s + \delta s(t), c)$. Notably, previous approaches such as Yang et al. (2023c) utilize a single deformation network for the entire scene, which usually fails in highly complex outdoor dynamic scenes with rapid movements. In contrast, we define a per-node deformation field, which has much more representation power. To maintain computational efficiency, the network weights $\phi$ are shared across nodes, and the identities of the nodes are disambiguated via an instance embedding parameter $e_h$. Experimental results in § 5.2 show that deformable Gaussians are essential for achieving good reconstruction quality.

**Sky Node.** We use a separate optimizable environment map to fit the sky color from viewing directions. Compositing the sky image $C_{\mathrm{sky}}$ with the rendered Gaussians $C_G$, consisting of $(G^{\mathrm{bg}}, \{G_v^{\mathrm{rigid}}\}, \{G_h^{\mathrm{SMPL}}\}, \{G_h^{\mathrm{deform}}\})$, we obtain the final rendering as:

$$C = C_G + (1 - O_G)\, C_{\mathrm{sky}}, \quad (4)$$

where $O_G = \sum_{i=1}^{N} \alpha_i \prod_{j=1}^{i-1}(1 - \alpha_j)$ is the rendered opacity mask of the Gaussians.
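To illustrate the shared deformation field in Eq. (3), here is a minimal PyTorch sketch: a single MLP is shared across all deformable nodes, and an instance embedding $e_h$ disambiguates node identities. The layer sizes, embedding dimension, and interface are illustrative assumptions, not OmniRe's released architecture.

```python
import torch
import torch.nn as nn

class SharedDeformationField(nn.Module):
    """One MLP F_phi shared across all deformable nodes; nodes are
    disambiguated by a learned per-instance embedding e_h (Eq. 3)."""
    def __init__(self, num_nodes, embed_dim=16, hidden=64):
        super().__init__()
        self.instance_embed = nn.Embedding(num_nodes, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + embed_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),   # delta mu, delta q, delta s
        )

    def forward(self, mu, node_id, t):
        # mu: (N, 3) canonical means; node_id: scalar index; t: scalar time
        e = self.instance_embed(torch.tensor(node_id)).expand(len(mu), -1)
        t_col = torch.full((len(mu), 1), float(t))
        out = self.mlp(torch.cat([mu, e, t_col], dim=-1))
        return out.split([3, 4, 3], dim=-1)

field = SharedDeformationField(num_nodes=8)
d_mu, d_q, d_s = field(torch.zeros(100, 3), node_id=2, t=0.5)
print(d_mu.shape, d_q.shape, d_s.shape)     # (100,3) (100,4) (100,3)
```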
4.2 RECONSTRUCTING IN-THE-WILD HUMANS

Reconstructing humans from driving logs is challenging, as in-the-wild pose estimators (Goel et al., 2023; Rajasegaran et al., 2022) are typically designed for single-video input and often miss predictions under occlusion. We design a pipeline that addresses these limitations to predict accurate and temporally consistent human poses from multi-view videos with frequent occlusions. Formally, given a set of 3D tracklets for $N$ pedestrians $\mathbf{T} = \{T_h\}_{h=0}^{N-1}$ from the dataset, our goal is to obtain the corresponding SMPL pose sets $\theta = \{\theta_h\}_{h=0}^{N-1}$. Here, $T_h$ and $\theta_h$ (for brevity, $(t)$ is omitted) represent the box sequence and body pose sequence of the $h$-th human. We apply 4D-Humans (Goel et al., 2023) to each camera's video independently in our multi-camera setup. This yields separately processed human tracklets and poses, $\hat{\mathbf{T}} = \bigcup_{c=0}^{C-1} \hat{\mathbf{T}}^c$ and $\hat{\theta} = \bigcup_{c=0}^{C-1} \hat{\theta}^c$, where $\hat{\mathbf{T}}^c = \{\hat{T}^c_j\}_{j \in D^c}$ and $\hat{\theta}^c = \{\hat{\theta}^c_j\}_{j \in D^c}$ denote the predicted tracklets and poses from camera $c$, respectively, and $D^c$ is the set of detected human indices in camera $c$. Our task is to reconstruct $\theta$ from $\hat{\theta}$. We achieve this through the following steps:

Figure 3: **Human Pose Processing.** (a) Human ID matching ensures consistent identification across cameras. (b) Missing pose completion recovers the poses of occluded individuals.

**Tracklet Matching:** We define a matching function $M$ that finds the most similar predicted tracklet for each ground-truth tracklet by maximizing the mean IoU of their 2D projections:

$$\tilde{\theta}_h = M(h, \hat{\theta}, \mathbf{T}, \hat{\mathbf{T}}). \quad (5)$$

This function computes a matching between ground-truth and predicted tracklets, then outputs the corresponding matched pose sequences. Consider a 3-camera setup as an example (Fig. 3(a)): if the $h$-th ground-truth tracklet matches detected tracklets $j_0, j_1, j_2$ in cameras 0 to 2, respectively, then $\tilde{\theta}_h = \{\hat{\theta}^0_{j_0}, \hat{\theta}^1_{j_1}, \hat{\theta}^2_{j_2}\}$, where $\hat{\theta}^c_{j_k}$ is the pose sequence from camera $c$ for the detected tracklet $j_k$.

**Pose Completion:** As visualized in Fig. 3(b), 4D-Humans (Goel et al., 2023) fails to predict SMPL poses for occluded individuals in driving scenes, so we design a process to recover the missing poses:

$$\theta_h = H(\tilde{\theta}_h, \mathbf{T}, \hat{\mathbf{T}}). \quad (6)$$

Here, the function $H$ identifies missing detections by comparing the ground-truth and predicted tracklets, and interpolates the missing poses to complete $\theta_h$ from $\tilde{\theta}_h$.
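The matching function $M$ in Eq. (5) reduces, for each ground-truth tracklet, to an argmax over the mean 2D IoU with every predicted tracklet. A minimal sketch under the assumption that all boxes are axis-aligned and already projected to each camera's image plane (the data layout is illustrative):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two 2D boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def match_tracklet(gt_boxes, pred_tracklets):
    """Pick the predicted tracklet with maximum mean IoU against one
    ground-truth tracklet, over frames where both exist.
    gt_boxes / each predicted tracklet: dict frame_idx -> box."""
    def mean_iou(pred):
        common = set(gt_boxes) & set(pred)
        if not common:
            return 0.0
        return np.mean([box_iou(gt_boxes[f], pred[f]) for f in common])
    scores = [mean_iou(p) for p in pred_tracklets]
    return int(np.argmax(scores)), max(scores)

# toy usage: the second predicted tracklet matches the ground truth exactly
gt = {0: (0, 0, 10, 10), 1: (1, 0, 11, 10)}
preds = [{0: (5, 5, 15, 15)}, {0: (0, 0, 10, 10), 1: (1, 0, 11, 10)}]
print(match_tracklet(gt, preds))    # (1, 1.0)
```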
4.3 OPTIMIZATION

We simultaneously optimize all the parameters mentioned in § 4.1 in _a single stage_ to reconstruct the entire scene. These parameters include: **(1)** all the Gaussian attributes (opacity, mean positions, scaling, rotation, and appearance) in their local spaces, namely $G^{\mathrm{bg}}, \{\bar{G}_v^{\mathrm{rigid}}\}, \{\bar{G}_h^{\mathrm{SMPL}}\}, \{\bar{G}_h^{\mathrm{deform}}\}$; **(2)** the poses of both rigid and non-rigid nodes for each frame $t$, i.e., $\{T_v(t)\}, \{T_h(t)\}$; **(3)** the human poses of all the SMPL nodes for each frame $t$, i.e., $\{\theta(t)\}$, and the corresponding skinning weights; **(4)** the weights $\phi$ of the deformation network $F$; and **(5)** the weights of the sky model. We use the following objective function for optimization:

$$L = (1 - \lambda_r) L_1 + \lambda_r L_{\mathrm{SSIM}} + \lambda_{\mathrm{depth}} L_{\mathrm{depth}} + \lambda_{\mathrm{opacity}} L_{\mathrm{opacity}} + L_{\mathrm{reg}}, \quad (7)$$

where $L_1$ and $L_{\mathrm{SSIM}}$ are the L1 and SSIM losses on rendered images, $L_{\mathrm{depth}}$ compares the rendered depth of the Gaussians with sparse depth signals from LiDAR, $L_{\mathrm{opacity}}$ encourages the opacity of the Gaussians to align with the non-sky mask, and $L_{\mathrm{reg}}$ comprises various regularization terms applied to the different Gaussian representations. Detailed descriptions of the loss terms are provided in the Appendix.

5 EXPERIMENTS

**Dataset.** We conduct experiments on the Waymo Open Dataset (Sun et al., 2020), which comprises real-world driving logs. We tested up to 32 dynamic scenes in Waymo, including eight highly complex dynamic scenes that, in addition to typical vehicles, also contain diverse dynamic classes such as pedestrians and cyclists. Each selected segment contains approximately 150 frames. The segment IDs are listed in Tab. 12 and Tab. 6. To further demonstrate our effectiveness on common driving scenes, we extend our results to five additional popular driving datasets: NuScenes (Caesar et al., 2020), Argoverse2 (Wilson et al., 2023), PandaSet
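Stepping back to the training objective: a minimal PyTorch sketch of how the terms in Eq. (7) might be combined. The `ssim_fn` callable, the binary-cross-entropy form of the opacity term, and the default $\lambda$ values are assumptions for illustration; the paper's exact loss definitions are in its appendix.

```python
import torch
import torch.nn.functional as F

def total_loss(render, gt, depth, lidar_depth, opacity, sky_mask,
               ssim_fn, reg_terms, lam_r=0.2, lam_depth=0.1, lam_op=0.05):
    """Sketch of Eq. (7); lambda values are illustrative, not the paper's."""
    l1 = F.l1_loss(render, gt)
    l_ssim = 1.0 - ssim_fn(render, gt)          # SSIM turned into a loss
    valid = lidar_depth > 0                     # LiDAR gives sparse depth
    l_depth = F.l1_loss(depth[valid], lidar_depth[valid])
    # push Gaussian opacity toward 1 off-sky and 0 in sky regions
    l_op = F.binary_cross_entropy(opacity.clamp(1e-5, 1 - 1e-5), 1.0 - sky_mask)
    return ((1 - lam_r) * l1 + lam_r * l_ssim
            + lam_depth * l_depth + lam_op * l_op + sum(reg_terms))
```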
## HOMOMORPHISM EXPRESSIVITY OF SPECTRAL INVARIANT GRAPH NEURAL NETWORKS

**Jingchu Gai** [1] **Yiheng Du** [1] **Bohang Zhang** [1] _[∗]_ **Haggai Maron** [2,3] **Liwei Wang** [1]
1 Peking University 2 Technion 3 NVIDIA Research
gaijingchu@stu.pku.edu.cn, zhangbohang@pku.edu.cn, duyiheng@stu.pku.edu.cn, hmaron@nvidia.com, wanglw@pku.edu.cn

∗ Project lead.

ABSTRACT

Graph spectra are an important class of structural features on graphs that have shown promising results in enhancing Graph Neural Networks (GNNs). Despite their widespread practical use, the theoretical understanding of the power of spectral invariants, particularly their contribution to GNNs, remains incomplete. In this paper, we address this fundamental question through the lens of homomorphism expressivity, providing a comprehensive and quantitative analysis of the expressive power of spectral invariants. Specifically, we prove that spectral invariant GNNs can homomorphism-count exactly a class of specific tree-like graphs which we refer to as _parallel trees_. We highlight the significance of this result in various contexts, including establishing a quantitative expressiveness hierarchy across different architectural variants, offering insights into the impact of GNN depth, and understanding the subgraph counting capabilities of spectral invariant GNNs. In particular, our results significantly extend Arvind et al. (2024) and settle their open questions. Finally, we generalize our analysis to higher-order GNNs and answer an open question raised by Zhang et al. (2024b).

1 INTRODUCTION

The graph spectrum, defined as the eigenvalues of a graph matrix, is an important class of graph invariants. It encapsulates rich structural information, including graph connectivity, bipartiteness, node clustering patterns, diameter, and more (Brouwer & Haemers, 2011). Besides eigenvalues, generalized spectral information may also include projection matrices, which further encode node relations such as distances and random walk properties, enabling the definition of more fine-grained graph invariants (Fürer, 2010). These spectral invariants possess strong _expressive power_. For example, a well-known conjecture raised by Van Dam & Haemers (2003); Haemers & Spence (2004) claims that almost all graphs are uniquely determined by their spectra up to isomorphism. The rare exceptions, known as cospectral graphs, tend to be highly similar in structure and continue to be an active area of research in graph theory (Lorenzen, 2022). In the machine learning community, spectral invariants have recently gained increasing popularity in the design of Graph Neural Networks (GNNs) (Bruna et al., 2013; Defferrard et al., 2016; Lim et al., 2023; Huang et al., 2024; Feldman et al., 2023; Zhang et al., 2024b; Black et al., 2024), owing to several reasons. From a practical perspective, graph spectra have been shown to be closely related to applications such as molecular property prediction (Bonchev, 2018). Moreover, a recent line of work (Xu et al., 2019; Morris et al., 2019; Li et al., 2020; Chen et al., 2020; Zhang et al., 2023b) has pointed out that the expressive power of classic message-passing GNNs (MPNNs) is inherently limited and cannot encode important graph structure like connectivity or distance. Incorporating spectral invariants into the design of MPNNs can naturally alleviate these limitations.
Therefore, from both theoretical and practical perspectives, it is beneficial to develop a systematic understanding of the power of spectral invariants and their corresponding GNNs. The earliest study in this area may be traced back to Fürer (2010), who first linked the power of several spectral invariants to the classic Weisfeiler-Lehman test (Weisfeiler & Lehman, 1968) by proving that these invariants are upper bounded by 2-FWL. More recently, Rattan & Seppelt (2023) further revealed a strict expressivity gap between Fürer's spectral invariants and 2-FWL. Zhang et al. (2024b) and Arvind et al. (2024) analyzed _refinement-based_ spectral invariants, which offer insights into the power of real GNN architectures. Yet, all of these works study expressiveness through the lens of Weisfeiler-Lehman tests, which has inherent limitations. So far, there remains a lack of _comprehensive_ understanding of the _practical_ power of spectral invariants and their corresponding GNN architectures.

**Current work.** In this paper, we investigate the aforementioned questions from a novel perspective based on _graph homomorphism_. Specifically, Zhang et al. (2024a) recently proposed homomorphism expressivity as a quantitative framework to better understand the expressive power of various GNN architectures. As homomorphism expressivity is a fine-grained and practical measure, it naturally addresses several limitations of the WL test. However, extending this framework to other architectures, such as spectral invariant GNNs, poses significant challenges. In fact, whether homomorphism expressivity exists for a given architecture remains an open research direction (see Zhang et al. (2024a)). In our context, this problem becomes even more challenging since homomorphisms and spectral invariants correspond to two orthogonal branches of graph theory. Here, we provide affirmative answers to all these questions by formally proving that the homomorphism expressivity of spectral invariant GNNs exists and can be elegantly characterized as a special class of _parallel trees_ (Theorem 3.3). This offers deep insights into a series of previous studies, extending their results and answering several open questions. We summarize our results below:

- **Separation power of spectral invariants/GNNs.** We offer a new proof that projection-based spectral invariants and the corresponding GNNs are strictly bounded by 2-FWL (Corollary 3.4). Moreover, we establish a _quantitative hierarchy_ among raw spectral information, projections, refinement-based spectral invariants, and various combinatorial variants of WL tests (see Figure 4). This (i) recovers and extends results in Rattan & Seppelt (2023), and (ii) provides clear insights into the hierarchy established in Zhang et al. (2024b).

- **The power of refinement.** We offer a systematic understanding of the role of refinement in spectral invariant GNNs. We show that increasing the number of iterations always leads to a strict improvement in expressive power (Corollary 3.11), thus settling a key open question raised in Arvind et al. (2024). Moreover, our counterexamples establish a tight lower bound on the number of iterations required to achieve maximal expressivity, which is of the same order as the graph size. This advances a line of research regarding iteration numbers in WL tests (Fürer, 2001; Kiefer & Schweitzer, 2016; Lichter et al., 2019).

- **Substructure counting power of spectral invariants/GNNs.**
On the practical side, we precisely characterize the power of spectral invariants/GNNs in counting certain subgraphs, as well as the required number of iterations. For example, they can count all cycles with up to 7 vertices, while 1 iteration already suffices to count all cycles with up to 6 vertices (Corollary 3.15). Empirically, a set of experiments on both synthetic and real-world tasks validates our theoretical results, showing that the homomorphism expressivity of spectral invariant GNNs well reflects their performance on downstream tasks.

2 PRELIMINARIES

**Notations.** We use $\{\,\}$ and $\{\{\,\}\}$ to denote sets and multisets, respectively. The cardinality of a given (multi)set $S$ is denoted $|S|$. In this paper, we consider finite, undirected, simple graphs with no self-loops or repeated edges, and without loss of generality we only consider connected graphs. Let $G = (V_G, E_G)$ be a graph with vertex set $V_G$ and edge set $E_G$, where each edge in $E_G$ is a set $\{u, v\} \subset V_G$ of cardinality two. The _neighbors_ of a vertex $u$ are denoted $N_G(u) := \{v \in V_G \mid \{u, v\} \in E_G\}$. A _walk_ of length $k$ is a sequence of vertices $u_0, \dots, u_k \in V_G$ such that $\{u_{i-1}, u_i\} \in E_G$ for all $i \in [k]$. It is further called a _path_ if $u_i \ne u_j$ for all $i < j$, and it is called a _cycle_ if $u_0, \dots, u_{k-1}$ is a path and $u_0 = u_k$. The shortest path distance between two nodes $u, v \in V_G$, denoted $\mathrm{dis}_G(u, v)$, is the minimum length of a walk from $u$ to $v$. A graph $F = (V_F, E_F)$ is a _subgraph_ of $G$ if $V_F \subset V_G$ and $E_F \subset E_G$. We use $P_n$ (resp. $C_n$) to denote a graph corresponding to a path (resp. cycle) on $n$ vertices. A graph is called a tree if it is connected and contains no cycle as a subgraph. We denote by $T^r$ the rooted tree $T$ with root $r$. The depth of a rooted tree $T^r$ is defined as $\mathrm{dep}(T^r) = \max_{u \in V_T} \mathrm{dis}_T(r, u)$, and the depth of $T$ is defined as $\mathrm{dep}(T) = \min_{r \in V_T} \mathrm{dep}(T^r)$.

2.1 SPECTRAL INVARIANT GNNS

Let $G$ be a graph on $n$ vertices with $V_G = [n]$, and denote by $A \in \{0, 1\}^{n \times n}$ the adjacency matrix of $G$. The _spectrum_ of $G$ is defined as the multiset of all eigenvalues of $A$. In addition to eigenvalues, eigenspaces also provide important spectral information. Formally, the eigenspace associated with an eigenvalue $\lambda$ can be characterized by its projection matrix $P_\lambda$. There exists a unique set of orthogonal projection matrices $\{P_\lambda\}_{\lambda \in \Lambda}$, where $\Lambda$ is the set of all distinct eigenvalues of $A$, such that $A = \sum_{\lambda \in \Lambda} \lambda P_\lambda$, and the following conditions hold: $\sum_\lambda P_\lambda = I$, $P_\lambda P_{\lambda'} = 0$ for $\lambda \ne \lambda'$, and $A P_\lambda = P_\lambda A$ for all $\lambda \in \Lambda$. Combining the projection matrices with the associated eigenvalues naturally defines an invariant on node pairs, which we denote by $P$:

$$P(u, v) := \{\{(\lambda, P_\lambda(u, v)) \mid \lambda \in \Lambda\}\} \quad \text{for } u, v \in V_G.$$
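The eigenvalue/projection structure above is straightforward to compute numerically. Here is a minimal NumPy sketch that builds the projection matrices $P_\lambda$ by grouping (numerically) equal eigenvalues and then evaluates the node-pair invariant $P(u, v)$; the rounding tolerance is an implementation assumption.

```python
import numpy as np

def spectral_projections(A, tol=1e-8):
    """Group the eigenvalues of A into distinct values and return the
    orthogonal projections P_lambda onto the eigenspaces, so that
    A = sum_lambda lambda * P_lambda and sum_lambda P_lambda = I."""
    evals, evecs = np.linalg.eigh(A)      # A is symmetric for undirected graphs
    projections, i = {}, 0
    while i < len(evals):
        j = i
        while j < len(evals) and abs(evals[j] - evals[i]) < tol:
            j += 1
        U = evecs[:, i:j]                 # orthonormal basis of the eigenspace
        projections[round(evals[i], 8)] = U @ U.T
        i = j
    return projections

def P_invariant(projections, u, v):
    """The node-pair invariant P(u, v) = {{(lambda, P_lambda(u, v))}}."""
    return sorted((lam, round(P[u, v], 8)) for lam, P in projections.items())

# sanity check on a 4-cycle
A = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
projs = spectral_projections(A)
assert np.allclose(sum(projs.values()), np.eye(4))
assert np.allclose(sum(l * P for l, P in projs.items()), A)
print(P_invariant(projs, 0, 1))
```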
Then, one can define the so-called "spectral invariant" of a graph as follows. Consider the following color refinement process, treating $P(u, v)$ as the edge feature between vertices $u$ and $v$:

$$\chi_G^{\mathrm{Spec},(d+1)}(u) = \mathrm{hash}\Big(\chi_G^{\mathrm{Spec},(d)}(u),\ \{\{(\chi_G^{\mathrm{Spec},(d)}(v), P(u, v)) \mid v \in V_G\}\}\Big) \quad \text{for } u \in V_G,\ d \in \mathbb{N},$$

where all initial colors $\chi_G^{\mathrm{Spec},(0)}(u)$ ($u \in V_G$) are constant, and hash is a perfect hash function. For each iteration $d$, the mapping $\chi_G^{\mathrm{Spec},(d)}$ induces an equivalence relation over the vertex set $V_G$, and this relation gets _refined_ as $d$ increases. Therefore, after a sufficiently large number of iterations $d \le |V_G|$, the relation becomes _stable_. The spectral invariant $\chi_G^{\mathrm{Spec},(\infty)}(G)$ is then defined to be the multiset of stable node colors. We can similarly define $\chi_G^{\mathrm{Spec},(d)}(G)$ to be the multiset of node colors after $d$ iterations (Arvind et al., 2024). We remark that $\chi_G^{\mathrm{Spec},(1)}(G)$ is exactly Fürer's (weak) spectral invariant proposed in Fürer (2010). Owing to the relation between GNNs and color refinement algorithms, one can easily transform the above refinement process into a GNN architecture by replacing the hash function with a continuous, non-linear, parameterized function, while maintaining the same expressive power (Xu et al., 2019; Morris et al., 2019). We call the resulting architecture Spectral Invariant GNNs (see Zhang et al. (2024b) for concrete implementations of a spectral invariant GNN layer). Without ambiguity, we may also refer to $\chi_G^{\mathrm{Spec},(d)}(G)$ as the graph representation computed by a $d$-layer spectral invariant GNN.
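A minimal sketch of the refinement $\chi_G^{\mathrm{Spec},(d)}$ itself, with the perfect hash realized by re-indexing signatures on the given graph; it can consume the $P(u, v)$ tuples produced by the previous sketch.

```python
def spectral_refinement(P_pair, n, num_iters):
    """Color refinement chi^{Spec,(d)}: each round, a node's new color hashes
    its old color with the multiset of (neighbor color, P(u, v)) pairs over
    all nodes v. Entries of P_pair must be hashable (e.g., tuples)."""
    colors = {u: 0 for u in range(n)}           # constant initialization
    for _ in range(num_iters):
        sigs = {u: (colors[u],
                    tuple(sorted((colors[v], P_pair[u][v]) for v in range(n))))
                for u in range(n)}
        # re-index signatures to small integers (a perfect hash on this graph)
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {u: palette[sigs[u]] for u in range(n)}
    return sorted(colors.values())              # multiset of node colors

# usage with the projections sketch above (4-cycle: all nodes equivalent)
P_pair = {u: {v: tuple(P_invariant(projs, u, v)) for v in range(4)}
          for u in range(4)}
print(spectral_refinement(P_pair, 4, num_iters=2))   # [0, 0, 0, 0]
```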
2.2 HOMOMORPHISM EXPRESSIVITY

Given two graphs $F$ and $G$, a homomorphism from $F$ to $G$ is a mapping $f: V_F \to V_G$ that preserves edge relations, i.e., $\{f(u), f(v)\} \in E_G$ for all $\{u, v\} \in E_F$. We denote by $\mathrm{Hom}(F, G)$ the set of all homomorphisms from $F$ to $G$ and define $\mathrm{hom}(F, G) = |\mathrm{Hom}(F, G)|$, which counts the number of homomorphisms. If $f$ is further surjective on both the vertices and edges of $G$, we call $G$ a _homomorphic image_ of $F$. A mapping $f: V_F \to V_G$ is called an isomorphism if $f$ is a bijection and both $f$ and its inverse $f^{-1}$ are homomorphisms. We denote by $\mathrm{sub}(F, G)$ the number of subgraphs of $G$ that are isomorphic to $F$. In Zhang et al. (2024a), the authors introduced the concept of homomorphism expressivity to quantify the expressive power of a color refinement algorithm (or GNN). It is formally defined as follows:

**Definition 2.1.** Let $M$ be a color refinement algorithm (or GNN) that outputs a graph invariant $\chi_G^M(G)$ given a graph $G$. The homomorphism expressivity of $M$, denoted by $\mathcal{F}^M$, is a family of connected graphs¹ satisfying the following conditions: a) For any two graphs $G, H$, $\chi_G^M(G) = \chi_H^M(H)$ _iff_ $\mathrm{hom}(F, G) = \mathrm{hom}(F, H)$ for all $F \in \mathcal{F}^M$; b) $\mathcal{F}^M$ is maximal, i.e., for any connected graph $F \notin \mathcal{F}^M$, there exists a pair of graphs $G, H$ such that $\chi_G^M(G) = \chi_H^M(H)$ and $\mathrm{hom}(F, G) \ne \mathrm{hom}(F, H)$.

¹ For simplicity, we focus on _connected_ graphs in this paper. The results can be easily generalized to disconnected graphs following Seppelt (2024).

By characterizing the set $\mathcal{F}^M$ for different GNN models $M$, one can quantitatively understand the expressivity gap between two models by simply computing their set inclusion relation and set difference. Zhang et al. (2024a) examine several representative GNNs under this framework, including the standard MPNNs and Folklore GNNs (Maron et al., 2019; Azizian & Lelarge, 2021), and recent architectures such as Subgraph GNNs (Bevilacqua et al., 2022; Qian et al., 2022; Cotta et al., 2021) and Local GNNs (Morris et al., 2020; Zhang et al., 2023a). However, one implicit challenge not reflected in Definition 2.1(a) is that the set $\mathcal{F}^M$ may not even exist for a general GNN $M$. Proving its existence corresponds to an involved research topic known as homomorphism distinguishing closedness (Roberson, 2022; Seppelt, 2024; Neuen, 2023), which is highly non-trivial. In the next section, we give affirmative results showing that the homomorphism expressivity of spectral invariant GNNs does exist, together with an elegant description of the graph family.

3 HOMOMORPHISM EXPRESSIVITY OF SPECTRAL INVARIANT GNNS

In this section, we investigate the homomorphism expressivity of spectral invariants and the corresponding GNNs. We provide a complete characterization of the set $\mathcal{F}^{\mathrm{Spec},(d)}$ for arbitrary model depth $d \in \mathbb{N} \cup \{\infty\}$. This allows us to analyze spectral invariants from a novel perspective, significantly extending prior research and resolving previously unanswered questions.

3.1 MAIN RESULTS

Our idea is motivated by the previous finding that the homomorphism expressivity of MPNNs is exactly the family of all trees (Zhang et al., 2024a). Note that in the definition of spectral invariant GNNs, if one replaces $P(u, v)$ by the standard adjacency $A_{uv}$, the resulting architecture is just an MPNN. This relationship suggests that the homomorphism expressivity of spectral invariant GNNs also comprises "tree-like" graphs. We will show that this is indeed true. To present our results, we define a special class of graphs, referred to as _parallel trees_:

**Definition 3.1 (Parallel Edge).** A graph $G$ is called a _parallel edge_ if there exist two distinct vertices $u, v \in V_G$ such that the edge set $E_G$ can be partitioned into a sequence of simple paths $P_1, \dots, P_m$, where all paths share the endpoints $(u, v)$. We refer to $(u, v)$ as the endpoints of $G$.

**Definition 3.2 (Parallel Tree).** A graph $F$ is called a _parallel tree_ if there exists a tree $T$ such that $F$ can be obtained from $T$ by replacing each edge $\{u, v\} \in E_T$ with a parallel edge that has endpoints $\{u, v\}$. We refer to $T$ as the _parallel tree skeleton_ of graph $F$. Given a parallel tree $F$, the _parallel tree depth_ of $F$ is the minimum depth over all parallel tree skeletons of $F$.

Figure 1: Illustration of a parallel edge with endpoints $(u, v)$ in (a) and a parallel tree with its skeleton in (b).

We give an illustration of a parallel edge and a parallel tree in Figure 1.
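Definition 2.1 is phrased in terms of homomorphism counts, which are easy to compute by brute force on tiny graphs. A minimal sketch (exponential in $|V_F|$, for sanity checks only; the example graphs are illustrative):

```python
from itertools import product

def hom(F_edges, n_F, G_adj):
    """Brute-force hom(F, G): count maps V_F -> V_G that send every edge
    of F to an edge of G."""
    n_G = len(G_adj)
    return sum(
        all(G_adj[f[u]][f[v]] for u, v in F_edges)
        for f in product(range(n_G), repeat=n_F)
    )

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(hom([(0, 1)], 2, triangle))                  # hom(P_2, K_3) = 2|E| = 6
print(hom([(0, 1), (1, 2), (2, 0)], 3, triangle))  # hom(C_3, K_3) = 6
```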
With the above definitions, we are ready to state our main theorem:

**Theorem 3.3.** _For any $d \in \mathbb{N}$, the homomorphism expressivity of spectral invariant GNNs with $d$ iterations exists and can be characterized as follows:_

$$\mathcal{F}^{\mathrm{Spec},(d)} = \{F \mid F \text{ has parallel tree depth at most } d\}.$$

_Specifically, the following properties hold:_

- _Given any graphs $G$ and $H$, $\chi_G^{\mathrm{Spec},(d)}(G) = \chi_H^{\mathrm{Spec},(d)}(H)$ if and only if, for all connected graphs $F$ with parallel tree depth at most $d$, $\mathrm{hom}(F, G) = \mathrm{hom}(F, H)$._

- _$\mathcal{F}^{\mathrm{Spec},(d)}$ is maximal; that is, for any connected graph $F \notin \mathcal{F}^{\mathrm{Spec},(d)}$, there exist graphs $G$ and $H$ such that $\chi_G^{\mathrm{Spec},(d)}(G) = \chi_H^{\mathrm{Spec},(d)}(H)$ and $\mathrm{hom}(F, G) \ne \mathrm{hom}(F, H)$._

We present a concise proof sketch of Theorem 3.3 in Section 3.3. Next, in Section 3.2, we interpret this result in the context of GNNs and discuss its significance, including how it extends previous findings and addresses open problems identified in earlier studies.

3.2 IMPLICATIONS

Our theory has a wide range of applications, which we discuss separately in detail below.

3.2.1 COMPARISON WITH 2-FWL

Firstly, we compare the expressive power of spectral invariant GNNs with that of the standard Weisfeiler-Lehman (WL) test. It immediately follows that the expressive power of spectral invariant GNNs lies strictly between the 1-WL and 2-FWL tests.

**Corollary 3.4.** _The expressive power of spectral invariant GNNs is strictly stronger than 1-WL and strictly weaker than 2-FWL._

_Proof._ According to Zhang et al. (2024a), the homomorphism expressivity of 2-FWL encompasses the set of all graphs with treewidth at most 2. A classical result in graph theory states that any subgraph of any series-parallel graph has treewidth at most 2 (Diestel, 2017). Since any parallel tree is clearly a subgraph of some series-parallel graph, its treewidth is at most 2. It follows that the homomorphism expressivity of spectral invariant GNNs is contained within that of 2-FWL. To show the gap, we give a counterexample graph in Figure 2. This implies that the expressive power of spectral invariant GNNs is strictly weaker than that of 2-FWL. The proof for the case of 1-WL is similar and we omit it for clarity.

Figure 2: A counterexample graph in $\mathcal{F}^{2\text{-FWL}} \setminus \mathcal{F}^{\mathrm{Spec},(\infty)}$.

3.2.2 HIERARCHY

Theorem 3.3 not only provides insights into the relationship between the expressive power of spectral invariant GNNs and 2-FWL, but also allows for a comparison with a wide range of graph invariants and the corresponding GNNs. Specifically, similar to the analysis in Corollary 3.4, for any GNN models $A$ and $B$ whose homomorphism expressivity exists, if $\mathcal{F}^A \subsetneq \mathcal{F}^B$, then $A$ is strictly weaker than $B$ in expressive power. We now use this property to establish a comprehensive hierarchy by linking spectral invariant GNNs to other fundamental graph invariants and GNNs.

**Corollary 3.5.** _Spectral invariant GNNs with 1 iteration are strictly weaker than subgraph GNNs (also referred to as (1,1)-WL in Rattan & Seppelt (2023))._

_Proof._ According to Zhang et al. (2024a), the homomorphism expressivity of subgraph GNNs contains all graphs that become a forest upon the deletion of a specific vertex.
On the other hand, Theorem 3.3 states that the homomorphism expressivity of spectral invariant GNNs with one iteration contains all parallel trees of depth 1. Since any parallel tree of depth 1 becomes a forest when its root vertex is deleted, $\mathcal{F}^{\mathrm{Spec},(1)}$ is a subset of the homomorphism expressivity of subgraph GNNs. Finally, one can easily construct a counterexample graph to prove the strict separation.

**Remark 3.6.** Our result recovers and strengthens the main result in Rattan & Seppelt (2023), which only studied spectral invariants with 1 iteration (Fürer's weak spectral invariant). We next show that this result does _not_ hold for more than 1 iteration.

**Corollary 3.7.** _Spectral invariant GNNs with 2 iterations are incomparable to subgraph GNNs._

We provide a counterexample in Figure 3. Nevertheless, we can still bound the expressive power of spectral invariant GNNs with multiple iterations by that of Local 2-GNNs, as stated in the following:

**Corollary 3.8.** _For any $d \in \mathbb{N}_+ \cup \{\infty\}$, spectral invariant GNNs with $d$ iterations are strictly weaker than Local 2-GNNs (Morris et al., 2020; Zhang et al., 2024a)._

_Proof._ According to Zhang et al. (2024a), the homomorphism expressivity of Local 2-GNNs contains all graphs that admit a strong nested ear decomposition. Since any parallel edge can be partitioned into ears with the same endpoints, one can easily construct a nested ear decomposition for any parallel tree. This shows that $\mathcal{F}^{\mathrm{Spec},(d)}$ is a subset of the homomorphism expressivity of Local 2-GNNs. The expressivity gap can be seen using the same counterexample graph as in Figure 2.

Figure 3: Counterexamples for Corollary 3.7 (a) and Corollary 3.11 (b).

**Remark 3.9.** Corollaries 3.7 and 3.8 significantly extend the findings of Arvind et al. (2024, Theorem 17) and provide additional insights into Zhang et al. (2024b, Theorem 4.3).

**The power of projection.** We next conduct a fine-grained analysis by separating eigenvalues and projections to better understand their individual contributions to the expressive power of GNN models. We first prove the following theorem:

**Theorem 3.10.** _The homomorphism expressivity of graph spectra is the set of all cycles $C_n$ ($n \ge 3$) plus the paths $P_1$ and $P_2$, i.e., $\{C_n \mid n \ge 3\} \cup \{P_1, P_2\}$._

The proof of Theorem 3.10 is provided in Appendix C and has the same structure as that of Theorem 3.3. Previously, Van Dam & Haemers (2003); Dell et al. (2018) proved that the spectra of two graphs $G$ and $H$ are identical if and only if $\mathrm{hom}(F, G) = \mathrm{hom}(F, H)$ for every cycle $F$. We extend their result by further proving the maximality property (Definition 2.1(b)), which adds only the two trivial graphs $P_1$ and $P_2$ to the homomorphism expressivity. From this result, one can easily see that using eigenvalues alone already improves the expressive power of an MPNN, since the homomorphism expressivity of MPNNs contains only trees (but not cycles). To understand the role of projection, one can compare the set $\{C_n \mid n \ge 3\} \cup \{P_1, P_2\}$ with $\mathcal{F}^{\mathrm{Spec},(1)}$ (the homomorphism expressivity of Fürer's spectral invariant). Clearly, the set of all parallel trees of depth 1 is strictly larger than $\{C_n \mid n \ge 3\} \cup \{P_1, P_2\}$, confirming that adding projection information significantly enhances the expressive power beyond graph spectra.
**The power of refinement.** We finally investigate the power of the number of iterations $d$ (or the number of GNN layers) in enhancing the model's expressive power. We have the following result:

**Corollary 3.11.** _For any $d \in \mathbb{N}$, spectral invariant GNNs with $d + 1$ iterations are strictly more powerful than spectral invariant GNNs with $d$ iterations._

_Proof._ For any $k \in \mathbb{N}$, we can construct a counterexample by replacing each edge in the path graph $P_{2k+2}$ with a parallel edge. We illustrate the construction in Figure 3(b). One can easily see that the resulting graph is in $\mathcal{F}^{\mathrm{Spec},(k+1)}$ but not in $\mathcal{F}^{\mathrm{Spec},(k)}$.

**Remark 3.12.** Corollary 3.11 settles the key open question posed in Arvind et al. (2024), who conjectured that spectral invariant GNNs converge within a _constant_ number of iterations. Specifically, the authors asked whether, for $d \ge 4$, spectral invariant GNNs with $d + 1$ iterations are as powerful as those with $d$ iterations. We disprove this conjecture by providing a family of example graphs that cannot be distinguished in $d$ iterations but can be distinguished in $d + 1$ iterations. Our counterexamples further lead to the following result:

**Corollary 3.13.** _For any $d \in \mathbb{N}_+$, there exist two graphs with $O(d)$ vertices such that spectral invariant GNNs require at least $d$ iterations to distinguish them._

Corollary 3.13 establishes a tight bound on the number of layers needed for spectral invariant GNNs to reach maximal expressivity, showing that it scales with the graph size. This advances an important research topic that studies the relation between the expressiveness and the iteration number of color refinement algorithms (Fürer, 2001; Kiefer & Schweitzer, 2016; Lichter et al., 2019). To summarize the above results, we illustrate the hierarchy established for spectral invariant GNNs and other mainstream GNNs in Figure 4.

Figure 4: Hierarchy of spectral invariant GNNs (abbreviated as Spectral IGN) and other mainstream GNNs. Each arrow points to the strictly stronger architecture.

3.2.3 SUBGRAPH COUNT

In fact, our results go beyond the WL framework and reveal the expressive power of spectral invariant GNNs from a more practical perspective. As an example, we show below how Theorem 3.3 can be used to understand the subgraph counting capabilities of spectral invariant GNNs. Given any graph $F$, we say a GNN model $M$ can subgraph-count a substructure $F$ if for any graphs $G$ and $H$, the condition $\chi_G^M(G) = \chi_H^M(H)$ implies $\mathrm{sub}(F, G) = \mathrm{sub}(F, H)$. Denote by $\mathrm{Spasm}(F)$ the set of all homomorphic images of $F$. Previous results have shown that, if the homomorphism expressivity $\mathcal{F}^M$ exists for a model $M$, then $M$ can subgraph-count $F$ if and only if $\mathrm{Spasm}(F) \subset \mathcal{F}^M$ (Seppelt, 2023; Zhang et al., 2024a). This allows us to precisely analyze which substructures can be subgraph-counted by spectral invariant GNNs.

**Corollary 3.14.** _Spectral invariant GNNs can count cycles and paths with up to 7 vertices._

_Proof._ For cycles or paths with at most 7 vertices, one can check by enumeration that their homomorphic images are all parallel trees. For cycles or paths with at least 8 vertices, the 4-clique is a valid homomorphic image but is not a parallel tree.

We can further strengthen the above results by studying the number of iterations needed to count substructures.
We have the following results:

**Corollary 3.15.** _The following hold:_

_1. Spectral invariant GNNs can subgraph-count all cycles with up to 7 vertices within 2 iterations._

_2. The above upper bound is tight: spectral invariant GNNs with only 1 iteration (i.e., Fürer's weak spectral invariant) cannot subgraph-count the 7-cycle._

_3. Spectral invariant GNNs with 1 iteration suffice to subgraph-count all cycles with up to 6 vertices._

**Remark 3.16.** The subgraph counting power of spectral invariants has long been studied in the literature. Cvetkovic et al. (1997) proved that graph angles (which can be determined from the projections) can subgraph-count all cycles of length at most 5. In comparison, our results significantly extend their findings and even match the cycle counting power of 2-FWL (Arvind et al., 2020). Moreover, we show that Fürer's weak spectral invariant can already count 6-cycles, thus extending the work of Fürer (2017).

3.3 PROOF SKETCH

In this section, we provide a proof sketch of Theorem 3.3; the complete proof is presented in the Appendix. We begin by demonstrating that the information encoded by spectral invariants is closely related to encoding _walk information_ in the aggregation process of GNNs. This corresponds to the following lemma (proved in Appendix B.2; see also Arvind et al. (2024)):

**Lemma 3.17.** (**Equivalence of encoding walk and spectral information**) _Let $G = (V_G, E_G)$ be a graph with adjacency matrix $A$. For vertices $x, y \in V_G$, define $\omega_G^k(x, y) = A^k_{x,y}$ for all $k \in \{0, 1, 2, \dots, |V_G|\}$, which is the number of $k$-walks from vertex $x$ to vertex $y$. Define the tuple $\omega_G^*(x, y) = (\omega_G^0(x, y), \omega_G^1(x, y), \dots, \omega_G^{n-1}(x, y))$, where $n = |V_G|$. Define the walk-encoding GNN with
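The core identity behind Lemma 3.17 is easy to verify numerically: since $A^k = \sum_{\lambda} \lambda^k P_\lambda$, the walk counts $\omega_G^k(x, y)$ are determined by the spectral data. A minimal NumPy check, reusing the `spectral_projections` sketch from § 2.1 above:

```python
import numpy as np

# walk counts omega^k(x, y) = (A^k)_{x, y} match the spectral expansion,
# because A^k = sum_lambda lambda^k P_lambda
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
projs = spectral_projections(A)     # from the sketch in Section 2.1 above
for k in range(6):
    assert np.allclose(np.linalg.matrix_power(A, k),
                       sum(lam ** k * P for lam, P in projs.items()))
print("A^k == sum_lambda lambda^k P_lambda for k = 0..5")
```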
# INVERSEBENCH: BENCHMARKING PLUG-AND-PLAY DIFFUSION PRIORS FOR INVERSE PROBLEMS IN PHYSICAL SCIENCES

**Hongkai Zheng** [1,∗] **, Wenda Chu** [1,∗] **, Bingliang Zhang** [1,∗] **, Zihui Wu** [1,∗] **, Austin Wang** [1] **, Berthy T. Feng** [1], **Caifeng Zou** [1], **Yu Sun** [2], **Nikola Kovachki** [3], **Zachary E. Ross** [1], **Katherine L. Bouman** [1], **Yisong Yue** [1]
1 California Institute of Technology, 2 Johns Hopkins University, 3 NVIDIA

ABSTRACT

Plug-and-play diffusion priors (PnPDP) have emerged as a promising research direction for solving inverse problems. However, current studies primarily focus on natural image restoration, leaving the performance of these algorithms in scientific inverse problems largely unexplored. To address this gap, we introduce INVERSEBENCH, a framework that evaluates diffusion models across five distinct scientific inverse problems. These problems present unique structural challenges that differ from existing benchmarks, arising from critical scientific applications such as optical tomography, medical imaging, black hole imaging, seismology, and fluid dynamics. With INVERSEBENCH, we benchmark 14 inverse problem algorithms that use plug-and-play diffusion priors against strong, domain-specific baselines, offering valuable new insights into the strengths and weaknesses of existing algorithms. To facilitate further research and development, we open-source the codebase, along with datasets and pre-trained models, at https://devzhk.github.io/InverseBench/.

1 INTRODUCTION

Inverse problems are fundamental in many domains of science and engineering, where the goal is to infer an unknown source from indirect and noisy observations. Example domains include astronomy (Chael et al., 2019), geophysics (Virieux & Operto, 2009), optical microscopy (Choi et al., 2007), medical imaging (Lustig et al., 2007), and fluid dynamics (Iglesias et al., 2013), among others. These inverse problems are often challenging due to their ill-posedness, the complexity of the underlying physics, and unknown measurement noise. The use of diffusion models (DMs) (Sohl-Dickstein et al., 2015; Dhariwal & Nichol, 2021) for solving inverse problems has become increasingly popular. One attractive approach is PnPDP methods that use the DM as a plug-and-play prior (Wang et al., 2022; Dou & Song, 2024), where the inference objective is decomposed into the prior (using a pre-trained diffusion model) and the likelihood of fitting the observations (using a suitable forward model). The advantage of this idea is twofold: (1) As a powerful class of generative models, DMs can efficiently encode complex and high-dimensional prior distributions, which is essential for overcoming ill-posedness. (2) As plug-and-play priors, DMs can accommodate different problems without any re-training by decoupling the prior and likelihood. However, current algorithms are primarily evaluated and compared on a fairly narrow set of image restoration tasks such as inpainting, super-resolution, and deblurring (Kadkhodaie & Simoncelli, 2021; Song et al., 2023a; Mardani et al., 2024). These problems differ greatly from those arising in science and engineering applications such as geophysics (Virieux & Operto, 2009), astronomy (Porth et al., 2019), and oceanography (Carton & Giese, 2008), among many other fields, which have very different structural challenges rooted in the underlying physics.
It is unclear how much insight can be carried over from image restoration to scientific inverse problems. In this paper, we introduce INVERSEBENCH, a comprehensive benchmarking framework designed to evaluate PnP diffusion prior approaches in a systematic and easily extensible manner.

∗ These authors contributed equally to this work.

Figure 1: Illustration of the five benchmark problems in INVERSEBENCH. $G$ represents the forward model that produces observations from the source; $G^\dagger$ represents the inverse map. In the linear inverse scattering problem (left two), the observation is the recorded data from the receivers and the unknown source we aim to infer is the permittivity map of the object. The bottom panel displays efficiency and accuracy plots for the benchmarked algorithms. Certain characteristics of each problem cause the efficiency-accuracy trade-offs of the algorithms to vary across tasks. In these plots, a larger point radius indicates greater interaction with the forward function $G$, as measured by the number of forward model evaluations.

We curate a diverse set of five inverse problems from distinct scientific domains: optical tomography, black hole imaging, medical imaging, seismology, and fluid dynamics. These problems present structural challenges that differ significantly from natural image restoration tasks (cf. Figure 1 and Table 2) and encompass a broad spectrum of complexities across multiple scientific fields. Most notably, the forward model (which maps the source to observations) is defined by various types of physics-based models, which can be highly nonlinear and difficult to evaluate. We select 14 representative plug-and-play diffusion prior algorithms proposed for solving inverse problems, providing a thorough comparison of their performance across different scientific inverse problems and further insights into their efficacy and limitations. Additionally, we establish strong, domain-specific baselines for each inverse problem, providing a meaningful reference point for assessing the effectiveness of diffusion model-based approaches against traditional methods. Through extensive experiments, we find that PnP diffusion prior methods generally exhibit strong performance given a suitable dataset for training a diffusion prior. This performance is consistent even as we vary the forward model (a strength of the PnP approach), given appropriate tuning. However, for forward models that require certain constraints on the input (e.g., those involving a PDE solver), performance can be very sensitive to hyperparameter tuning. Moreover, the strength of using a diffusion prior can also be a limitation, as PnP diffusion prior methods have difficulty when the source image is out of the prior distribution (i.e., the use of diffusion models makes it difficult to recover "surprising" results). Additionally, we find that PnP methods that use multiple queries of the forward model tend to outperform simpler methods like DPS, at the cost of requiring additional tuning and computation, which points to an interesting direction for future method development. INVERSEBENCH is implemented as a highly modular framework that can interface with new inverse problems and algorithms to run experiments at scale.
We open-source the codebase, along with datasets and pre-trained models, at https://devzhk.github.io/InverseBench/.

2 PRELIMINARIES

2.1 INVERSE PROBLEMS

Following the typical setup, we have _observations_ $y \in \mathbb{C}^m$ of an unknown source $z \in \mathbb{C}^n$ via a _forward model_ $G: \mathbb{C}^n \to \mathbb{C}^m$. The inverse problem is to design a mapping $G^\dagger$ to infer $z$ from $y$:

$$z \leftarrow G^\dagger(y), \quad \text{where } y = G(z, \xi). \quad (1)$$

Here, $\xi$ represents noise in the forward model. In scientific applications, $G$ represents the measurement or sensing device (telescopes, infrared cameras, seismometers, electron microscopes, etc.). Inverse problems typically present four major challenges: (1) Many inverse problems are ill-posed, meaning that a solution may not exist, may not be unique, or may not be stable (Hadamard, 2014). For example, in black hole imaging, there may be multiple solutions that match the same sparse measurements. (2) The measurement noise is generally not separately observed (it is part of the observations $y$), and accounting for it in the inverse problem can be challenging, especially for poorly characterized noise profiles (e.g., non-Gaussian). (3) The forward model may be highly nonlinear and lack a closed-form expression, leading to computational and numerical challenges in method design. (4) Designing an appropriate prior for the unknown source is also a critical challenge. For some problems, the designed prior must capture the complex structure of the solution space while remaining computationally tractable. All these challenges necessitate some kind of regularization. While classic optimization approaches often employ simple regularizers (e.g., local isotropic smoothness), these fail to capture global or anisotropic properties. The use of diffusion models as priors is attractive as a way to capture these more complex properties.

2.2 DIFFUSION MODELS

Diffusion models are a powerful class of deep generative models that can capture complicated high-dimensional distributions such as natural images (Rombach et al., 2022), proteins (Fu et al., 2024), small molecules (Luo et al., 2024), and robotic trajectories (Chi et al., 2023), among other domains. Given their strong performance and compatibility with Bayesian inference, using diffusion models to model the solution space as a prior is a promising idea (Chung et al., 2023; Song et al., 2022). We consider the continuous formulation of diffusion models proposed by Song et al. (2020), which expresses the forward diffusion and backward denoising processes as stochastic differential equations (SDEs). The forward process transforms a data distribution $x_0 \sim p_{\mathrm{data}}$ into an approximately Gaussian one, $x_T \sim \mathcal{N}(0, \sigma^2(T) I)$, by gradually adding Gaussian noise according to:

$$\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t, \quad (2)$$

where $f$ is a predefined vector-valued drift, $g$ is the diffusion coefficient, and $w$ is the standard Wiener process, with time $t$ flowing from 0 to $T$.
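A minimal Euler-Maruyama simulation of the forward SDE in Eq. (2), with zero drift as in a variance-exploding process; the constant diffusion coefficient is an illustrative choice, not any benchmarked schedule.

```python
import numpy as np

def forward_sde(x0, g, T=1.0, n_steps=1000, rng=None):
    """Euler-Maruyama simulation of Eq. (2) with zero drift (f = 0):
    dx = g(t) dW, so x_T ~ N(x0, (int_0^T g^2 dt) * I)."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = x0.copy()
    for i in range(n_steps):
        t = i * dt
        x += g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# with g = 1 the variance accumulated by time T = 1 is 1
x_T = forward_sde(np.zeros(10000), g=lambda t: 1.0)
print(x_T.std())    # approximately 1.0
```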
The backward process sequentially denoises Gaussian noise into clean data and is given by the reverse-time SDE:

$\mathrm{d}\mathbf{x}_t = \big[ f(\mathbf{x}_t, t) - g^2(t)\, \nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t) \big]\, \mathrm{d}t + g(t)\, \mathrm{d}\bar{\mathbf{w}}_t,$ (3)

where $p_t(\mathbf{x}_t)$ is the probability density of $\mathbf{x}_t$ at time $t$ and $\bar{\mathbf{w}}_t$ is the reverse-time Wiener process. The diffusion model is trained to learn the score function $\nabla_{\mathbf{x}_t} \log p_t(\mathbf{x}_t)$. Once trained, the diffusion model can generate new samples from the learned data distribution by solving Eq. (3).

2.3 PLUG-AND-PLAY DIFFUSION PRIORS FOR INVERSE PROBLEMS

We use the term *Plug-and-Play Diffusion Prior* (PnPDP) to refer to the class of recent methods that use diffusion models (or the denoising network within) as plug-and-play priors (Venkatakrishnan et al., 2013) for solving inverse problems. The basic idea is to either modify or use Eq. (3) to generate samples from $p(\mathbf{x} \mid \mathbf{y})$ rather than the prior $p(\mathbf{x})$, which under Bayes' rule can be expressed as $p(\mathbf{x} \mid \mathbf{y}) \propto p(\mathbf{x})\, p(\mathbf{y} \mid \mathbf{x})$. The first term $p(\mathbf{x})$ can be modeled using a diffusion prior, and the second term $p(\mathbf{y} \mid \mathbf{x})$ can be computed using the forward model. Broadly speaking, existing PnPDP approaches can be grouped into the four categories described below. Table 1 lists the 14 representative algorithms we selected and notes their different requirements on the forward model. To avoid confusion, we use Courier font when referring to a specific algorithm in the main text throughout the paper (e.g., PnP-DM for Wu et al. (2024)).

**Guidance-based methods** Arguably the most popular approach to solving inverse problems with a pretrained diffusion model is guidance-based methods (Song et al., 2023a; Wang et al., 2022; Kawar et al., 2022; Rout et al., 2023; Chung et al., 2023), which modify Eq. (3) by adding a likelihood score term, $\nabla_{\mathbf{x}_t} \log p_t(\mathbf{y} \mid \mathbf{x}_t)$, along the diffusion trajectory. This term is related to the forward model $G$: if the final clean $\mathbf{x}_0$ is a candidate source $\mathbf{z}$, then $p(\mathbf{y} \mid \mathbf{x}_0)$ can be estimated by querying $G$. However, $\log p_t(\mathbf{y} \mid \mathbf{x}_t)$ is generally intractable, so various approximations have been proposed (Song et al., 2022; Chung et al., 2023; Song et al., 2023a; Boys et al., 2023).

Table 1: Requirements on the forward model of the algorithms evaluated in our experiments.

| Category | Method | SVD | Pseudo inverse | Linear | Gradient |
|---|---|---|---|---|---|
| Linear guidance | DDRM (Kawar et al., 2022) | ✓ | ✓ | ✓ | – |
| Linear guidance | DDNM (Wang et al., 2022) | ✗ | ✓ | ✓ | – |
| Linear guidance | ΠGDM (Song et al., 2023a) | ✗ | ✓ | ✗ | – |
| General guidance | DPS (Chung et al., 2023) | ✗ | ✗ | ✗ | ✓ |
| General guidance | LGD (Song et al., 2023b) | ✗ | ✗ | ✗ | ✓ |
| General guidance | DPG (Tang et al., 2023) | ✗ | ✗ | ✗ | ✗ |
| General guidance | SCG (Huang et al., 2024) | ✗ | ✗ | ✗ | ✗ |
| General guidance | EnKG (Zheng et al., 2024) | ✗ | ✗ | ✗ | ✗ |
| Variable-splitting | DiffPIR (Zhu et al., 2023) | ✗ | ✗ | ✗ | ✓ |
| Variable-splitting | PnP-DM (Wu et al., 2024) | ✗ | ✗ | ✗ | ✓ |
| Variable-splitting | DAPS (Zhang et al., 2024) | ✗ | ✗ | ✗ | ✓ |
| Variational Bayes | RED-diff (Mardani et al., 2023) | ✗ | ✗ | ✗ | ✓ |
| Sequential Monte Carlo | FPS (Dou & Song, 2024) | ✗ | ✗ | ✓ | – |
| Sequential Monte Carlo | MCGDiff (Cardoso et al., 2024) | ✓ | ✓ | ✓ | – |

**Variable splitting** Variable splitting is a widely used strategy for solving regularized optimization problems and conducting Bayesian inference (Vono et al., 2019; Chen et al., 2022; Lee et al., 2021). The core idea is to split the inference into two alternating steps (Wu et al., 2024; Zhu et al., 2023; Li et al., 2024a; Song et al., 2024; Zhang et al., 2024; Xu & Chi, 2024). The first step uses the forward model to update or sample in the neighborhood of the most recent $\mathbf{x}_t$. The second step runs unconditional inference on $p(\mathbf{x}_t)$, which amounts to running Eq. (3) for a small amount of time.

**Variational Bayes** Variational Bayes methods approximate intractable distributions such as $p(\mathbf{x} \mid \mathbf{y})$ using some simpler parameterized distribution $q_\theta$ (Zhang et al., 2018). The key idea is to find a $q_{\theta^*}$ that, in a KL-divergence sense, both fits the observations $\mathbf{y}$ and agrees with the prior $p(\mathbf{x})$. Instead of directly sampling according to Eq. (3), these methods use the diffusion model as a prior within a variational inference framework (Mardani et al., 2023; Feng et al., 2023; Feng & Bouman, 2024).

**Sequential Monte Carlo** Sequential Monte Carlo (SMC) methods draw samples iteratively from a sequence of probability distributions. These methods represent probability distributions by a set of particles with associated weights, which asymptotically converge to a target distribution following a sequence of proposal and reweighting steps. Recent works have extended SMC methods to the sequential diffusion sampling process (Wu et al., 2023; Trippe et al., 2023; Cardoso et al., 2024; Dou & Song, 2024), enabling zero-shot posterior sampling with diffusion priors. However, these methods are typically applicable only to inverse problems with linear forward models.

3 INVERSEBENCH

In this section, we introduce the formulation and specific challenges of the five scientific inverse problems considered in INVERSEBENCH: linear inverse scattering, compressed sensing MRI, black hole imaging, full waveform inversion, and the Navier-Stokes equation. The characteristics of these inverse problems are summarized in Table 2. Their computational characteristics are summarized in Figure 6. Detailed descriptions and formal definitions can be found in Appendix B.

Table 2: Characteristics of the different inverse problems in INVERSEBENCH, from left to right: whether the forward model is linear, whether one can compute the SVD of the forward model, whether the inverse problem operates in the complex domain, whether the forward model can be solved in closed form, whether one can access gradients of the forward model, and the noise type.

| Problem | Linear | SVD | Complex domain | Closed-form forward | Gradient access | Noise type |
|---|---|---|---|---|---|---|
| Linear inverse scattering | ✓ | ✓ | ✓ | ✓ | ✓ | Gaussian |
| Compressed sensing MRI | ✓ | ✗ | ✓ | ✓ | ✓ | Real-world |
| Black hole imaging | ✗ | ✗ | ✗ | ✓ | ✓ | Non-additive |
| Full waveform inversion | ✗ | ✗ | ✗ | ✗ | ✓ | Noise-free |
| Navier-Stokes equation | ✗ | ✗ | ✗ | ✗ | ✗ | Gaussian |

**Linear inverse scattering** Inverse scattering is an inverse problem that arises from optical microscopy, where the goal is to recover the unknown permittivity contrast $\mathbf{z} \in \mathbb{R}^n$ from the measured scattered lightfield $\mathbf{y}_{\mathrm{sc}} \in \mathbb{C}^m$.
We consider the following formulation of inverse scattering:

$\mathbf{y}_{\mathrm{sc}} = \mathbf{H}(\mathbf{u}_{\mathrm{tot}} \odot \mathbf{z}) + \mathbf{n} \in \mathbb{C}^m \quad \text{where} \quad \mathbf{u}_{\mathrm{tot}} = \mathbf{G}(\mathbf{u}_{\mathrm{in}} \odot \mathbf{z}).$ (4)

Here $\mathbf{G} \in \mathbb{C}^{n \times n}$ and $\mathbf{H} \in \mathbb{C}^{m \times n}$ are the discretized Green's functions that model the responses of the optical system, $\mathbf{u}_{\mathrm{in}}$ and $\mathbf{u}_{\mathrm{tot}}$ are the input and total lightfields, $\odot$ is the elementwise product, and $\mathbf{n}$ is the measurement noise. Since this problem is a linearized version of the general nonlinear inverse scattering problem based on the first Born approximation, we refer to it as linear inverse scattering. This problem allows us to test algorithms designed specifically for linear problems.

**Compressed sensing MRI** Compressed sensing MRI is a technique that accelerates the scan time of MRI via subsampling. We consider the parallel imaging (PI) setup of CS-MRI, which is widely adopted in research and practice. Mathematically, PI CS-MRI can be formulated as an inverse problem that aims to recover an image $\mathbf{z} \in \mathbb{C}^n$ from

$\mathbf{y}_j = \mathbf{P}\mathbf{F}\mathbf{S}_j \mathbf{z} + \mathbf{n}_j \in \mathbb{C}^m \quad \text{for } j = 1, \ldots, J,$

where $\mathbf{P} \in \{0,1\}^{m \times n}$ is a subsampling operator and $\mathbf{F}$ is the Fourier transform; $\mathbf{y}_j$, $\mathbf{S}_j$, and $\mathbf{n}_j$ are the measurements, sensitivity map, and noise of the $j$-th coil, respectively. Compressed sensing MRI is a linear problem, but it poses significant challenges due to its high-dimensional nature, involvement of priors in the complex domain, and attention to fine-grained details.

**Black hole imaging** The measurements for black hole imaging (BHI) are obtained through Very Long Baseline Interferometry (VLBI). In this technique, each pair of telescopes $(a, b)$ provides a *visibility* (van Cittert, 1934; Zernike, 1938): a measurement that samples a particular spatial Fourier frequency of the source image, related to the projected baseline between the telescopes at time $t$:

$V^t_{a,b} = g^t_a g^t_b e^{-i(\phi^t_a - \phi^t_b)}\, \mathbf{I}^t_{a,b}(\mathbf{z}) + \boldsymbol{\eta}^t_{a,b}.$ (5)

The ideal visibilities $\mathbf{I}^t_{a,b}(\mathbf{z})$, representing the Fourier components of the image $\mathbf{z}$, are corrupted by Gaussian thermal noise $\boldsymbol{\eta}^t_{a,b}$ as well as telescope-dependent amplitude errors $g^t_a$, $g^t_b$ and phase errors $\phi^t_a$, $\phi^t_b$ (EHTC, 2019a). To mitigate the impact of these amplitude and phase errors, derived data products called *closure quantities*, namely *closure phases* and *log closure amplitudes*, can be used to constrain inference (Blackburn et al., 2020):

$\mathbf{y}^{\mathrm{cp}}_{t,(a,b,c)} = \angle\big(V^t_{a,b}\, V^t_{b,c}\, \overline{V^t_{a,c}}\big) \in \mathbb{R}, \qquad \mathbf{y}^{\mathrm{logca}}_{t,(a,b,c,d)} = \log \frac{|V^t_{a,b}|\,|V^t_{c,d}|}{|V^t_{a,c}|\,|V^t_{b,d}|} \in \mathbb{R}.$ (6)

Here, $\angle$ and $|\cdot|$ denote the complex angle and amplitude. Given a total of $M$ telescopes, the number of closure phase measurements $\mathbf{y}^{\mathrm{cp}}_{t,(a,b,c)}$ at time $t$ is $\frac{(M-1)(M-2)}{2}$, and the number of log closure amplitude measurements $\mathbf{y}^{\mathrm{logca}}_{t,(a,b,c,d)}$ is $\frac{M(M-3)}{2}$, after accounting for redundancy. Closure quantities are nonlinear transformations of the visibilities, making a forward model that uses them for black hole imaging non-convex. The inverse problem is further complicated by the need for super-resolution imaging beyond the intrinsic resolution of the Event Horizon Telescope (EHT) observations (i.e., the maximum probed spatial frequency), as well as phase ambiguities, which can lead to multiple modes in the posterior distribution (Sun & Bouman, 2021; Sun et al., 2024). Another challenge of BHI is that the measurement noise is non-additive due to the usage of the closure quantities.

**Full waveform inversion** Full waveform inversion (FWI) aims to infer subsurface physical properties (e.g., compressional and shear wave velocities) using the full information of recorded waveforms. In this work, we consider the problem of recovering the compressional wave velocity $v := v(\mathbf{x})$ (discretized as $\mathbf{z} \in \mathbb{R}^n$) from the observed wavefield $u_r$ (discretized as $\mathbf{y} \in \mathbb{R}^m$):

$\mathbf{y} = \mathbf{P}\mathbf{u},$ (7)

where $\mathbf{P}$ is the sampling operator for the receivers where observational data is available, and $\mathbf{u}$ is the discretization of the pressure wavefield $u := u(\mathbf{x}, t)$, a function of location $\mathbf{x}$ and time $t$. Here, $u$ is the solution to the acoustic (scalar) wave equation that models seismic wave propagation in heterogeneous acoustic media with constant density:

$\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} - \nabla^2 u = q,$ (8)

where $q := q(\mathbf{x}, t)$ is the source function (discretized as $\mathbf{q}$). Eq. (8) can be discretized as $\mathbf{A}\mathbf{u} = \mathbf{q}$, where $\mathbf{A}$ represents the discretized operator $\frac{1}{v^2}\frac{\partial^2}{\partial t^2} - \nabla^2$. Since we typically only have observations at the free surface, the inverse problem has non-unique solutions. One of the major challenges for FWI is the prohibitive computational expense, especially for large problems, as it usually requires numerous calls to the forward modeling process. Moreover, the conventional method for FWI, called the adjoint-state method, casts it as a local optimization problem (Virieux et al., 2017; Virieux & Operto, 2009). This means that a sufficiently accurate initial model is required, as the solution is only sought in its vicinity. FWI conventionally needs to start with a smoothed model derived from simpler ray-based methods (Liu et al., 2017; Maguire et al., 2022), which imposes a significantly strong prior. A general method with less reliance on initialization is highly desired.

**Navier-Stokes equation** The Navier-Stokes equation is a classic benchmarking problem from fluid dynamics (Iglesias et al., 2013). Its applications range from ocean dynamics to climate modeling, where observations of the atmosphere are used to calibrate the initial condition for downstream numerical forecasting. We consider the forward model given by the following 2D Navier-Stokes equation for a viscous, incompressible fluid in vorticity form on a torus.
$\partial_t \mathbf{w}(\mathbf{x}, t) + \mathbf{u}(\mathbf{x}, t) \cdot \nabla \mathbf{w}(\mathbf{x}, t) = \nu \Delta \mathbf{w}(\mathbf{x}, t) + f(\mathbf{x}), \quad \mathbf{x} \in (0, 2\pi)^2,\; t \in (0, T]$
$\nabla \cdot \mathbf{u}(\mathbf{x}, t) = \mathbf{0}, \quad \mathbf{x} \in (0, 2\pi)^2,\; t \in [0, T]$ (9)
$\mathbf{w}(\mathbf{x}, 0) = \mathbf{w}_0(\mathbf{x}), \quad \mathbf{x} \in (0, 2\pi)^2$

where $\mathbf{u} \in C\big([0, T];\, H^r_{\mathrm{per}}((0, 2\pi)^2; \mathbb{R}^2)\big)$ for any $r > 0$ is the velocity field, $\mathbf{w} = \nabla \times \mathbf{u}$ is the vorticity, $\mathbf{w}_0 \in L^2_{\mathrm{per}}\big((0, 2\pi)^2; \mathbb{R}\big)$ is the initial vorticity, $\nu \in \mathbb{R}_+$ is the viscosity coefficient, and $f \in L^2_{\mathrm{per}}\big((0, 2\pi)^2; \mathbb{R}\big)$ is the forcing function. The solution operator $G$ maps the initial vorticity to the vorticity at time $T$, i.e., $G : \mathbf{w}_0 \to \mathbf{w}_T$. We consider the problem of recovering the initial vorticity field $\mathbf{z} := \mathbf{w}_0$ from a noisy partial observation $\mathbf{y}$ of the vorticity field $\mathbf{w}_T$ at time $T$, given by $\mathbf{y} = \mathbf{P}\mathbf{L}(\mathbf{z}) + \mathbf{n}$, where $\mathbf{P}$ is the sampling operator, $\mathbf{n}$ is the measurement noise, and $\mathbf{L}(\cdot)$ is the discretized solution operator of Eq. (9). The Navier-Stokes equation does not admit a closed-form solution, and thus no closed-form gradient is available for the solution operator. Moreover, obtaining an accurate numerical gradient via automatic differentiation through the numerical solver is challenging due to the extensive computation graph expanded over thousands of discrete time steps.

4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

Here we provide a brief summary of our experimental setup. More details about the inverse problems and their corresponding datasets can be found in Appendix B. Technical details of diffusion model pretraining can be found in Appendix B.6.

**Black hole imaging** We leverage a dataset of General Relativistic MagnetoHydroDynamic (GRMHD) (Mizuno, 2022) simulated black hole images as our training data. The training set consists of 50,000 resized 64×64 images. Since this dataset is not publicly available, we generate synthetic images from a pre-trained diffusion model for both the validation and test datasets. Specifically, we use 5 sampled images for the validation set and 100 sampled images for the test set.

**Full waveform inversion** We adapt the CurveFaultB dataset (Deng et al., 2022), which contains velocity maps with faults caused by shifted rock layers. We resize the original data to resolution 128×128 with bilinear interpolation and anti-aliasing. The training set consists of 50,000 velocity maps. The test and validation sets contain 10 and 1 velocity maps, respectively.

**Linear inverse scattering** We create a dataset of fluorescence microscopy images using an online simulator (Wiesner et al., 2019). The training set consists of 10,000 HL60 nucleus permittivity images. The test and validation sets contain 100 and 10 permittivity images, respectively. We curate the test and validation samples so that every test sample has cosine similarity below 0.6 to all samples in the training set.

**Compressed sensing MRI** We use the multi-coil raw $k$-space data from the fastMRI knee dataset (Zbontar et al., 2018).
We exclude the first and last 5 slices of each volume for training and validation, as they do not contain much anatomical information, and resize all images down to 320×320 following the preprocessing procedure of Jalal et al. (2021). In total, we use 25,012 images for training, 6 images for hyperparameter search, and 94 images for testing.

**Navier-Stokes** We create a dataset of non-trivial initial vorticity fields by first sampling from a Gaussian random field and then evolving Eq. (9) for five time units. The equation setup follows Iglesias et al. (2013); Li et al. (2024b). We set the Reynolds number to 200 and the spatial resolution to 128×128. The training set consists of 10,000 samples. The test and validation sets contain 10 and 1 samples, respectively.

**Pretraining of diffusion model priors** For each problem, we train a diffusion model on the training set using the pipeline from Karras et al. (2022), and use the same checkpoint for all diffusion plug-and-play methods on each problem for a fair comparison. See more details in Appendix B.6.

4.2 EVALUATION METRICS

**Accuracy metrics** We use the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) as generic ways to quantify recovery of the true source. For all problems except black hole imaging, we use the $\ell_2$ error $\|G(\hat{\mathbf{z}}) - \mathbf{y}\|_2$ to measure consistency with the observation $\mathbf{y}$. For black hole imaging, the closure quantities are invariant under translation, so we measure the best fit under any shift alignment. We also assess the Blur PSNR, where images are blurred to match the target resolution of the telescope. We evaluate data misfit via the $\boldsymbol{\chi}^2$ statistic on the two closure quantities: the closure phase ($\boldsymbol{\chi}^2_{\mathrm{cp}}$) and the log closure amplitude ($\boldsymbol{\chi}^2_{\mathrm{logca}}$). A $\boldsymbol{\chi}^2$ value close to 1 indicates better data fitting. To facilitate comparison between underfitting ($\boldsymbol{\chi}^2 > 1$) and overfitting ($\boldsymbol{\chi}^2 < 1$), we report a unified metric defined as

$\tilde{\boldsymbol{\chi}}^2 = \boldsymbol{\chi}^2 \cdot \mathbb{1}\{\boldsymbol{\chi}^2 \geq 1\} + \frac{1}{\boldsymbol{\chi}^2} \cdot \mathbb{1}\{\boldsymbol{\chi}^2 < 1\}.$ (10)

For the FWI and Navier-Stokes experiments, we also use the relative $\ell_2$ error $\|\hat{\mathbf{z}} - \mathbf{z}\|_2 / \|\mathbf{z}\|_2$, as it is a commonly used primary accuracy metric in PDE problems (Iglesias et al., 2013).

**Efficiency metrics** We define a set of efficiency metrics in Table 9 to evaluate the computational complexity of inverse algorithms more thoroughly. These metrics fall into two categories: (1) total metrics that measure the overall computational cost; and (2) sequential metrics that help identify bottlenecks where forward model or diffusion model queries cannot be parallelized.

**Ranking score** To assess the relative ranking of different PnP diffusion models across various problems, we define the following ranking score for each problem. Given a set of accuracy or efficiency metrics $\{h_k\}_{k=1}^K$, we rank the algorithms according to each individual metric. Suppose algorithm $l$ has rank $R_k(l)$ out of $L$ algorithms under metric $k$. Its ranking score on this metric is given by $\mathrm{score}_k(l) = 100 \times (L - R_k(l) + 1)/L$.
For each problem, we calculate the average ranking score to assess overall performance:

$\mathrm{score}^{\mathrm{problem}}(l) = \frac{1}{K} \sum_{k=1}^{K} \mathrm{score}_k(l).$

Figure 2: Qualitative comparison showing representative examples of PnP-DP methods and domain-specific baselines. (Panel rows compare ground truth with: FISTA, DDRM, DDNM, ΠGDM, DPS, LGD, pseudo-inverse, DiffPIR, PnP-DM, DAPS, RED-diff, FPS, MCG-diff; EHT-Imaging, SMILI, DPS, LGD, DiffPIR, PnP-DM, DAPS, RED-diff; Adam, Adam*, LBFGS*, DPS, LGD, DiffPIR, PnP-DM, DAPS, RED-diff; and EKI, DPS-GSG, SCG, DPG, EnKG.)
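As a concrete reference, here is a minimal numpy sketch of the per-problem ranking score defined above. The metric-direction flags and the tie-breaking behavior of `argsort` are illustrative assumptions not specified in the text.

```python
import numpy as np

def ranking_scores(metric_values, higher_is_better):
    """Average ranking score for L algorithms over K metrics.

    metric_values: array of shape (K, L), metric k evaluated for algorithm l.
    higher_is_better: length-K booleans giving the direction of each metric.
    Returns score^problem(l) = (1/K) sum_k score_k(l) for each algorithm l.
    """
    K, L = metric_values.shape
    scores = np.zeros((K, L))
    for k in range(K):
        vals = -metric_values[k] if higher_is_better[k] else metric_values[k]
        order = np.argsort(vals)                 # best algorithm first; ties broken by index
        ranks = np.empty(L, dtype=int)
        ranks[order] = np.arange(1, L + 1)       # R_k(l): rank 1 = best under metric k
        scores[k] = 100.0 * (L - ranks + 1) / L  # score_k(l) = 100 (L - R_k(l) + 1) / L
    return scores.mean(axis=0)

# Example: 2 metrics (PSNR up, l2 error down) for 3 algorithms.
print(ranking_scores(np.array([[30.0, 28.0, 32.0],
                               [0.1, 0.3, 0.2]]), [True, False]))
```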
# READING YOUR HEART: LEARNING ECG WORDS AND SENTENCES VIA PRE-TRAINING ECG LANGUAGE MODEL

**Jiarui Jin**¹,²,³,⁴,∗, **Haoyu Wang**¹,⁴,∗, **Hongyan Li**²,³,†, **Jun Li**¹, **Jiahui Pan**⁴,†, **Shenda Hong**¹,†

1 National Institute of Health Data Science, Peking University
2 State Key Laboratory of General Artificial Intelligence, Peking University
3 School of Intelligence Science and Technology, Peking University
4 School of Artificial Intelligence, South China Normal University
hongshenda@pku.edu.cn, panjiahui@m.scnu.edu.cn, leehy@pku.edu.cn

∗ Equal contribution. † Corresponding authors.

ABSTRACT

Electrocardiogram (ECG) is essential for the clinical diagnosis of arrhythmias and other heart diseases, but deep learning methods based on ECG often face limitations due to the need for high-quality annotations. Although previous ECG self-supervised learning (eSSL) methods have made significant progress in representation learning from unannotated ECG data, they typically treat ECG signals as ordinary time-series data, segmenting the signals using fixed-size and fixed-step time windows, which often ignores the form and rhythm characteristics and latent semantic relationships in ECG signals. In this work, we introduce a novel perspective on ECG signals, treating heartbeats as words and rhythms as sentences. Based on this perspective, we first design the QRS-Tokenizer, which generates semantically meaningful ECG sentences from raw ECG signals. Building on these, we then propose HeartLang, a novel self-supervised learning framework for ECG language processing that learns general representations at the form and rhythm levels. Additionally, we construct the largest heartbeat-based ECG vocabulary to date, which will further advance the development of ECG language processing. We evaluated HeartLang across six public ECG datasets, where it demonstrated robust competitiveness against other eSSL methods. Our data and code are publicly available at https://github.com/PKUDigitalHealth/HeartLang.

1 INTRODUCTION

Electrocardiogram (ECG) is a common type of clinical data used to monitor cardiac activity, and it is frequently employed in diagnosing cardiac diseases or conditions impairing myocardial function (Hong et al., 2020; Liu et al., 2021). A primary limitation of using supervised deep learning methods for ECG signal analysis is their dependency on large-scale, expert-reviewed, annotated high-quality data. Moreover, even with sufficient data, these methods are often designed to address specific tasks, which curtails the generalization ability of the model.

Figure 1: Two perspectives on ECG signals.

To overcome these challenges, ECG self-supervised learning (eSSL) has demonstrated efficacy by training on vast amounts of unlabeled ECG recordings to learn generic ECG signal representations, which are then fine-tuned for specific downstream tasks (Pup & Atzori, 2023).

Current eSSL methods can be primarily classified into two categories: contrastive-based methods and reconstruction-based methods. The core principle of contrastive-based methods involves creating positive and negative sample pairs, aiming to maximize the similarity of positive pairs and minimize the similarity of negative pairs (Zhang et al., 2023b).
Reconstruction-based methods focus on training a model to reconstruct the original input from partial or transformed data, thus learning effective data representations (Zhang et al., 2023c). However, almost all methods treat ECG signals as ordinary time-series data, which has two significant drawbacks:

**Ignoring Form and Rhythm Characteristics of ECG.** Multi-level characteristics are essential to ECG diagnosis (Hong et al., 2019). For example, myocardial infarction is diagnosed by observing ST-segment elevation of a single heartbeat (Vogel et al., 2019). Likewise, cardiac rhythm characteristics are critical, as arrhythmias like atrial fibrillation (AF) are identified based on the overall cardiac rhythm (Carrington et al., 2022). However, existing eSSL methods typically employ fixed-size and fixed-step time windows to segment the signal (Song et al., 2024). This perspective treats ECG signals as ordinary time-series signals, thereby ignoring the unique form and rhythm characteristics inherent to ECG signals, ultimately leading to a decline in the effectiveness of self-supervised learning for both.

**Ignoring Latent Semantic Relationships of ECG.** Due to significant differences in heart rate and other factors between different subjects, and even among different samples from the same subject (Lan et al., 2022), using fixed-size and fixed-step time windows to segment data leads to substantial discrepancies among samples. The differences between samples disrupt the potential semantic relationships between different heartbeats, which in turn negatively impacts the effectiveness of learning a generalized representation in self-supervised learning.

To address these challenges, we propose a self-supervised learning framework named **HeartLang** for ECG language processing (ELP). A distinguishing feature of ECG signals is the clear visibility of heart rate patterns, where individual heartbeats are easily identifiable. The core concept of our framework treats heartbeats as words and rhythms as sentences, enabling self-supervised learning at both the form and rhythm levels to capture multi-level general representations. Our method consists of four key components: (1) the QRS-Tokenizer, which generates ECG sentences from the raw ECG signals; (2) the ST-ECGFormer, which leverages spatio-temporal information to enhance latent semantic extraction from ECG sentences; (3) the construction of the largest ECG vocabulary to date, where heartbeat quantization and reconstruction enable form-level representation learning; and (4) masked ECG sentence pre-training, which facilitates rhythm-level general representation learning. Through these approaches, our method can learn both form-level and rhythm-level representations of ECG signals without labels, and extract latent semantic representations from ECG sentences. The main contributions of this work are summarized below:

- We propose HeartLang, a novel self-supervised learning framework for ECG language processing, designed to learn general representations at the form and rhythm levels and extract latent semantic relationships from unlabeled ECG signals.
- We present a paradigm-shifting perspective on ECG signals, treating them as a language with distinct words (heartbeats) and sentences (rhythms), and design a QRS-Tokenizer that generates ECG sentences from raw ECG signals based on this perspective.
- We design ST-ECGFormer, a novel transformer-based backbone network for ECG signal analysis, which leverages the spatio-temporal features in ECG signals to enhance representation learning and optimize latent semantic relationship extraction for ECG sentences.
- To the best of our knowledge, we have constructed the largest ECG vocabulary based on heartbeats to date. This ECG vocabulary includes a wide variety of heartbeat morphological representations across different cardiac conditions, which will further advance the development of ECG language processing.

2 RELATED WORK

2.1 SELF-SUPERVISED LEARNING FOR ECG SIGNALS

In recent years, ECG self-supervised learning (eSSL) has demonstrated its ability to learn general representations from unlabeled ECG signals, significantly improving the performance of downstream tasks (Lai et al., 2023). eSSL methods can be broadly categorized into two types: contrastive-based methods and reconstruction-based methods. For contrastive-based approaches, CLOCS (Kiyasseh et al., 2021) enhances contrastive learning by leveraging cross-space, time, and patient-level relationships in ECG signals, while ASTCL (Wang et al., 2024) employs adversarial learning to capture spatio-temporal invariances in ECG signals. ISL (Lan et al., 2022) enhances cross-subject generalization ability through inter-subject and intra-subject contrastive learning, while BTFS (Yang & Hong, 2022) enhances ECG signal classification performance by combining time-domain and frequency-domain contrastive learning. On the other hand, reconstruction-based methods like MaeFE (Zhang et al., 2023a) and ST-MEM (Na et al., 2024) adopt a spatio-temporal approach, learning general ECG representations by masking and reconstructing temporal or spatial content. CRT (Zhang et al., 2023c) obtains general representations of ECG signals by mutually reconstructing the time-domain and frequency-domain data. However, existing eSSL methods predominantly focus on spatio-temporal or time-frequency domain representation learning of ECG signals, treating them as ordinary time-series data. This perspective often neglects the morphologically rich semantic information embedded in individual heartbeats.

2.2 ECG LANGUAGE PROCESSING

ECG language processing (ELP) is an emerging paradigm for handling ECG signals, first proposed by Mousavi et al. (2021). Since ECG signals inherently possess significant and clear semantic information in heartbeats, they can be processed using methods similar to natural language processing (NLP). Both Mousavi et al. (2021) and Choi et al. (2023) employ approaches that segment different waves within heartbeats to construct vocabularies for modeling. However, when dealing with ECG signals of varying quality, existing methods struggle to accurately segment fine-grained waveforms. Moreover, current ELP methods have relatively small vocabularies (no more than 70 clusters), which limits the richness of the semantic information. In addition, research on ELP remains sparse, highlighting it as a field in urgent need of further exploration. To address these limitations, we propose a new perspective that directly treats heartbeats as words for modeling, and we have built the largest ECG vocabulary to date, consisting of 5,394 words, which will significantly advance the development of the ELP research field.

3 METHOD

In this section, we provide a detailed explanation of the specific structure of the HeartLang framework.
We first define multi-lead ECG data as $X \in \mathbb{R}^{C \times T}$, where $C$ represents the number of ECG leads (electrodes) and $T$ represents the total number of timestamps. The configuration of ECG leads follows the standard 12-lead ECG setup. An overview of the framework is shown in Figure 2. The use of the framework can be divided into four steps. First, the QRS-Tokenizer is used to generate ECG sentences from the raw ECG signals, as described in Section 3.1. Second, the ECG vocabulary is constructed through the steps in Section 3.3. Third, masked ECG sentence pre-training of the framework is performed as described in Section 3.4. Finally, fine-tuning is performed for downstream tasks.

3.1 GENERATING ECG SENTENCES USING THE QRS-TOKENIZER

**QRS Detection.** A key concept of our method is to treat heartbeats as words, thus making the segmentation of the original ECG signal into semantic heartbeat patches essential. We introduce the QRS-Tokenizer, a tokenizer that generates ECG sentences from the raw ECG signals based on QRS waveforms. Initially, the lead-I signal is bandpass filtered between 5 and 20 Hz, followed by moving wave integration (MWI) using a Ricker wavelet on the filtered signal, and the squared integration signal is saved. The local maxima of the MWI signal are then traversed, with each maximum that occurs after the refractory period and exceeds the QRS detection threshold being classified as a QRS complex. Following detection, we obtain the indices of the detected QRS complexes $Q = \{q_i \mid i = 1, \ldots, N\}$, where $N$ denotes the number of detected QRS indices per sample, which varies between ECG recordings. Let $t$ denote the time window size. For each lead, we center a window on each index in $Q$, using the midpoint between every two adjacent indices as the interval boundaries, and independently segment the QRS complex patches for each lead. If a segmented region is smaller than $t$, it is padded with zeros to match the required size. We refer to these segmented heartbeat patches as individual ECG words, as they are independently extracted from each subject and lack cross-subject generalization.

Figure 2: Framework of HeartLang.

**Generating ECG Sentences.** After segmentation, we concatenate the individual ECG words of the 12 leads in sequence, forming the overall ECG sentence $x \in \mathbb{R}^{l \times t}$, where $l$ represents the sequence length and $t$ denotes the time window size. Given the variability in heart rates across samples, the resulting sequence lengths are inconsistent. As in natural language processing, we set $l$ to the maximum length of an ECG sentence. If the length of an ECG sentence is less than $l$, it is padded to $l$ with zero-filled patches; if the length exceeds $l$, the sequence is truncated to $l$. In this paper, $l$ is set to 256 and $t$ is set to 96.

3.2 ST-ECGFORMER BACKBONE NETWORK

To more effectively capture spatio-temporal features and latent semantic relationships within ECG sentences, we design a backbone network called ST-ECGFormer. This backbone network is employed in various components of HeartLang, including vector-quantized heartbeat reconstruction (VQ-HBR) training, masked ECG sentence pre-training, and downstream task fine-tuning.

**Token Embedding.** ECG signals exhibit high temporal resolution, and the QRS complexes that form ECG sentences contain rich temporal features.
These QRS complexes are mapped into a higher-dimensional token feature space, allowing their distinguishing features to be more effectively extracted and encoded. We apply a 1-D convolutional mapping function to transform each individual ECG word into a corresponding token. After this transformation, the ECG sentence can be represented as $x_t \in \mathbb{R}^{l \times D}$, where $D$ denotes the dimension of the token feature space.

**Spatio-temporal and Position Embedding.** To enable the spatial and temporal information of the ECG sentence to be better captured by HeartLang, a temporal embedding set $TE = \{te_0, te_1, te_2, \ldots, te_{10}\}$ and a spatial embedding set $SE = \{se_0, se_1, se_2, \ldots, se_{12}\}$, both $D$-dimensional and learnable during the training process, are initialized. For the spatial embedding, we divide the original 12-lead ECG signals into 12 segments, with each lead corresponding to a spatial embedding. The spatial embedding of each individual ECG word is mapped back to the lead from which it originated. For the temporal embedding, the original 10-second ECG signal is divided into 10 segments, where each second corresponds to a temporal embedding. We assign the temporal embedding of each individual ECG word according to the time interval of its QRS complex indices $Q$. For zero-filled patches, the temporal and spatial embeddings are set to $te_0$ and $se_0$, respectively, to ensure feature consistency. Next, a class token is added at the beginning of the sequence to enhance the representation. Additionally, a position embedding list $PE = \{pe_0, pe_1, pe_2, \ldots, pe_l\}$ is introduced to reinforce the sequential relationships between individual ECG words. Thus, the ECG sentence can be described by the following formula:

$x' = x_t + TE_u + SE_v + PE, \quad u \in \{te_0, te_1, te_2, \ldots, te_{10}\},\; v \in \{se_0, se_1, se_2, \ldots, se_{12}\}.$

**Transformer Encoder.** Finally, the ECG sentence is input into the transformer encoder (Vaswani et al., 2017). To ensure stability during the training process, we employ the pre-layer normalization strategy (Xiong et al., 2020), which applies layer normalization to the input of the attention mechanism:

$Q = LN(x')\,w^Q, \quad K = LN(x')\,w^K, \quad V = LN(x')\,w^V,$

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_{head}}}\right)V,$

where $d_{head}$ denotes the dimension of each head in the multi-head attention, and $LN$ represents layer normalization.
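To make the pre-LN attention step concrete, here is a minimal single-head numpy sketch; the weight shapes and the omission of multi-head splitting and residual connections are simplifying assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(a):
    a = a - a.max(-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(-1, keepdims=True)

def pre_ln_attention(x, wq, wk, wv):
    """Single-head pre-LN self-attention over an ECG sentence x of shape (l, D).

    Layer normalization is applied to the *input* of the attention block,
    matching the pre-LN strategy above; wq, wk, wv have shape (D, d_head).
    """
    h = layer_norm(x)
    q, k, v = h @ wq, h @ wk, h @ wv
    d_head = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d_head))  # (l, l) attention weights
    return attn @ v                            # (l, d_head) attended values
```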
This concept is inspired by VQ-NSP (Jiang et al., 2024), which encodes EEG signals into discrete latent representations and decodes them.

**Vector Quantization.** We first define an ECG vocabulary $V = \{v_i \mid i = 1, \ldots, k\} \in \mathbb{R}^{k \times d}$, where $k$ is the number of collective ECG words in the vocabulary and $d$ is the dimension of each collective ECG word. Given an ECG signal sample $X \in \mathbb{R}^{C \times T}$, it is first converted by the QRS-Tokenizer into an ECG sentence $x \in \mathbb{R}^{l \times t}$. After the ECG sentence is input into the ST-ECGFormer, a set of individual ECG word embeddings $P = \{p_i \mid i = 1, \ldots, l\}$ is obtained. Then, a quantizer is used to convert these into collective ECG word embeddings: the ECG vocabulary looks up the nearest neighbor of each representation $p_i$ in $V$. We use cosine similarity to find the closest collective ECG word embedding. This procedure can be formulated as

$z_i = \arg\min_j \|\ell_2(p_i) - \ell_2(v_j)\|_2,$

where $v_j$ is a collective ECG word embedding and $\ell_2$ represents $\ell_2$ normalization.

**Heartbeat Reconstruction.** Due to the high signal-to-noise ratio of ECG signals, reconstructing the raw signals directly can efficiently train an ECG vocabulary and effectively learn form-level features of heartbeats. After being labeled by the quantizer, the normalized discrete collective ECG word embeddings $\{\ell_2(z_i) \mid i = 1, \ldots, l\}$ are fed into the transformer decoder. This process can be represented as

$\hat{x} = \sum_{i=1}^{l} f_d\big(\ell_2(v_{z_i})\big),$

where $\hat{x}$ is the reconstructed ECG sentence and $f_d$ is the decoder. To make the update of the ECG vocabulary more stable, we adopt an exponential moving average (EMA) strategy. The mean squared error (MSE) loss is utilized to guide the quantization and reconstruction processes. Finally, the loss function for training the VQ-HBR process is defined as

$\mathcal{L}_V = \sum_{x \in \mathcal{D}} \sum_{i=1}^{l} \Big( \|\hat{x}_i - x_i\|_2^2 + \big\|sg\big(\ell_2(p_i)\big) - \ell_2\big(v_{z_i}\big)\big\|_2^2 + \big\|\ell_2(p_i) - sg\big(\ell_2\big(v_{z_i}\big)\big)\big\|_2^2 \Big),$

where $sg(\cdot)$ denotes the stop-gradient operator.
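The nearest-neighbor lookup above admits a compact sketch: after $\ell_2$ normalization, minimizing Euclidean distance coincides with maximizing cosine similarity, so the code below uses the latter. Array shapes are assumptions for illustration.

```python
import numpy as np

def quantize(p, codebook):
    """Map each embedding p_i to its nearest collective ECG word.

    p: (l, d) patch embeddings; codebook: (k, d) ECG vocabulary.
    For unit-norm vectors, argmin ||a - b||_2 equals argmax a.b,
    so the cosine-similarity lookup matches the rule for z_i above.
    """
    pn = p / np.linalg.norm(p, axis=-1, keepdims=True)
    cn = codebook / np.linalg.norm(codebook, axis=-1, keepdims=True)
    z = np.argmax(pn @ cn.T, axis=-1)   # indices z_i of the nearest code words
    return z, cn[z]                      # indices and quantized (normalized) embeddings
```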
# NONLINEAR MULTIREGION NEURAL DYNAMICS WITH PARAMETRIC IMPULSE RESPONSE COMMUNICATION CHANNELS

**Matthew Dowling & Cristina Savin**∗
Center for Neural Science, New York University
{md6276,csavin}@nyu.edu

∗ Center for Data Science, NYU

ABSTRACT

Cognition arises from the coordinated interaction of brain regions with distinct computational roles. Despite improvements in our ability to extract the dynamics underlying circuit computation from population activity recorded in individual areas, understanding how multiple areas jointly support distributed computation remains a challenge. As part of this effort, we propose a multi-region neural dynamics model composed of two building blocks: i) within-region (potentially driven) nonlinear dynamics and ii) communication channels between regions, parameterized through their impulse response. Together, these choices make it possible to learn nonlinear neural population dynamics and to understand the flow of information between regions by drawing from the rich literature of linear systems theory. We develop a state-noise-inversion-free variational filtering and learning algorithm for our model and show, through neuroscientifically inspired numerical experiments, how the proposed model can reveal interpretable characterizations of the local computations within, and the flow of information between, neural populations. We further validate the efficacy of our approach using simultaneous population recordings from areas V1 and V2.

1 INTRODUCTION

Perception, choice, and action engage neural circuits distributed across the brain (Chen et al., 2024; Khilkevich et al., 2024; Noel et al., 2024; Pinto et al., 2022; Machado et al., 2022; Ebrahimi et al., 2022). Despite technological advances that facilitate recording from multiple, anatomically distinct populations of neurons (Steinmetz et al., 2021), understanding neural computation at the level of multiple interacting populations remains a statistical and theoretical challenge. Making progress requires new theoretical frameworks describing how global computations arise from multiple interacting circuits (Perich & Rajan, 2020), each with potentially complex local nonlinear dynamics, and new statistical tools that extract such structure directly from recorded neural activity during behavior.

One prominent set of approaches for measuring interarea interactions from neural data focuses on *communication subspaces* (Semedo et al., 2019). Rather than modeling local circuit dynamics explicitly, these approaches aim to partition population response variability into 'private' dimensions, which are local to an area, and 'shared' dimensions reflecting the flow of information across areas. In its simplest form, this partitioning is formalized as low-rank regression or canonical correlation analysis, for directional or undirectional communication, respectively (Semedo et al., 2020). Additionally, Gaussian Process (GP) priors for the latents enforce temporal regularities and explicitly model features like communication delays (Gokcen et al., 2022; 2024), frequency and phase delays (Li et al., 2024), or additional task-relevant covariates (Balzani et al., 2023).

Building upon a decade of progress in latent state estimation from neural population activity (Paninski et al., 2010; Cunningham & Yu, 2014; Duncker & Sahani, 2021), other approaches directly model the dynamics within each area and the interactions between them. The simplest such models use linear dynamical systems (LDS) for capturing within-area dynamics. For instance, gLARA (group latent autoregressive analysis) (Semedo et al., 2014) assumes that within- and between-population dynamics are both governed by LDSs. More recently, the state-space representation of finitely differentiable GPs (Li et al., 2024) blurs the distinction between GP-prior-based communication subspaces and LDS methods, although these representations are mainly leveraged for efficient inference. Oftentimes, multi-area approaches can be seen as special cases of single-area models with additional parameter constraints. For instance, Glaser et al. (2020) adapt recurrent switching linear dynamical systems (rSLDS) to construct a multi-population sticky rSLDS (mp-srSLDS) of neural dynamics. This allows for nonlinear within-area dynamics and instantaneous linear information flow between areas. The most complex multi-area model is MR-SDS (multi-region switching dynamical systems), which uses neural networks to parameterize arbitrary nonlinearities for within-area dynamics and across-area communication, and uses switching between such nonlinear systems to capture global transitions in behavioral state (Karniol-Tambour et al., 2022). Closest to the circuit level, multi-region recurrent neural networks (RNNs) can be fit directly to single-neuron responses (Perich et al., 2020), which provides direct current estimates but leaves understanding the low-dimensional dynamical systems structure of the solution to post-hoc investigation. Overall, different approaches provide different trade-offs between flexibility and interpretability (see Appendix Table S1). None of the existing methods fully reflect the nature of distributed computation as formalized in current circuit-level theories (Bredenberg et al., 2024; Langdon et al., 2023; Mišić & Sporns, 2016).

Here we develop a probabilistic generative model that accounts for the nonlinear nature of neural dynamics and characterizes communication between regions using channels that are parameterized by their impulse response – blending expressive nonlinear region-specific dynamics with interpretable characterizations of the flow of information between regions. Our major methodological contributions include: i) a generative model of latent neural dynamics that combines node-specific nonlinear dynamical systems parameterized by deep neural networks with linear communication channels between regions, which we term MRDS-IR (for MultiRegion Dynamical Systems with Impulse Response communication channels); ii) an end-to-end variational methodology, using a state-noise-inversion-free filtering algorithm, streamlining the treatment of approximate inference in state-space graphical models with hybrid stochastic/deterministic transitions. Through several neuroscientifically inspired numerical experiments, including integration, gating of information flow, and rhythmic timing, we demonstrate the use of our approach to make sense of the underlying computation behind observed multi-population neural responses. We also show that our approach reveals meaningful features of neural activity in joint population recordings from visual areas V1 and V2.

2 MODELING MULTI-AREA NEURAL DYNAMICS DURING BEHAVIOR

2.1 BACKGROUND

**State-space models.** State-space graphical models provide a principled framework for data-driven learning of neural population dynamics (Paninski et al., 2010).
For a single neural population, recorded neural activity $\mathbf{y}_t \in \mathbb{R}^N$ at time $t$ is modeled as reflecting a lower-dimensional population latent state $\mathbf{z}_t \in \mathbb{R}^L$, which evolves as a dynamical system parameterized by $\boldsymbol{\theta}$:

$\mathbf{z}_t = \mathbf{f}_{\boldsymbol{\theta}}(\mathbf{z}_{t-1}, \mathbf{c}_t) + \mathbf{w}_t$ (latent process)
$\mathbf{y}_t \mid \mathbf{z}_t \sim p(\mathbf{y}_t \mid \mathbf{z}_t)$ (observation model) (1)

where $\mathbf{w}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$ and $\mathbf{c}_t$ denotes (optional) inputs/stimuli. Generalizing this formalism to simultaneous recordings from $K$ regions, a natural choice is to partition the latent space into $K$ groups of latent variables, with population responses in any given region depending only on the latents of that region (Gokcen et al., 2022; Li et al., 2024; Karniol-Tambour et al., 2022; Semedo et al., 2014):

$p(\mathbf{y}_t^{(k)} \mid \mathbf{z}_t^{(1)}, \mathbf{z}_t^{(2)}, \ldots, \mathbf{z}_t^{(K)}) = p(\mathbf{y}_t^{(k)} \mid \mathbf{z}_t^{(k)}),$

where $\mathbf{y}_t^{(k)}$ and $\mathbf{z}_t^{(k)}$ are the population activity and latent state associated with region $k$, respectively. Within this common structure, different multi-region models make different choices for the functional form of the latent space and the dependencies linking latents across regions. This structure determines not only the model's expressiveness but also its ability to capture crucial aspects of neural population dynamics. One important such feature is the latency in communication between regions, reflecting the time delays inherent in signal propagation, which is absent in most process models (Glaser et al., 2020; Karniol-Tambour et al., 2022) or realized by introducing dependence on a finite state history (Semedo et al., 2014). To provide a more flexible framework for modeling signal propagation between regions, we consider principles from linear systems theory.

**Characterizing communication channels via their impulse response.** The general premise of our approach for modeling communication channels is that signal propagation between regions can be well approximated by sufficiently expressive linear filters, allowing for propagation delays and temporal filtering, e.g., preferential transfer of information in a specific frequency band (Bastos et al., 2015), while keeping the model tractable. Two fundamental concepts for understanding a linear system are i) its impulse response and ii) its transfer function; they offer complementary perspectives on the system's input-to-output map, characterizing information flow in both time and frequency (Kailath, 1980; Chen, 1984; Brockett, 2015). Consider an $N_{\mathrm{in}}$-dimensional input signal $\mathbf{u}_t$ driving a linear system to produce an $N_{\mathrm{out}}$-dimensional output signal $\mathbf{x}_t$. The impulse response of the system, $\mathbf{h}_t$, is an $N_{\mathrm{out}} \times N_{\mathrm{in}}$-dimensional matrix whose entry $(i, j)$ is the output $[\mathbf{x}_t]_i$ when $[\mathbf{u}_t]_j$ is the unit impulse.
By superposition, $\mathbf{x}_t$ and $\mathbf{u}_t$ are related by convolution, so that

$\mathbf{x}_t = \sum_{\tau=-\infty}^{t} \mathbf{h}_{t-\tau}\, \mathbf{u}_\tau.$ (2)

An alternative characterization, better suited to understanding frequency-domain properties, is the transfer function, which for discrete-time systems is the $Z$-transform of the impulse response:

$\mathrm{H}(z) = \sum_{t=-\infty}^{\infty} z^{-t}\, \mathbf{h}_t,$ (3)

where the transfer function $\mathrm{H}(z)$ is also an $N_{\mathrm{out}} \times N_{\mathrm{in}}$-dimensional matrix whose $(i, j)$ entry characterizes how frequency content changes from input dimension $j$ to output dimension $i$. If $\mathbf{x}(z)$ and $\mathbf{u}(z)$ are the $Z$-transforms of $\mathbf{x}_t$ and $\mathbf{u}_t$, respectively, then in the $Z$-domain they are related by $\mathbf{x}(z) = \mathrm{H}(z)\mathbf{u}(z)$. Importantly, if the entries of $\mathrm{H}(z)$ are all rational in $z$ and the degree of the denominator exceeds that of the numerator, then a finite-dimensional *realization* of that system can be implemented by an LDS.[1] This means that for any strictly proper[2] transfer function satisfying those properties, there exists a tuple $(\mathbf{A}, \mathbf{B}, \mathbf{C})$ parameterizing an LDS,

$\mathbf{x}_t = \mathbf{C}\boldsymbol{\gamma}_t, \qquad \boldsymbol{\gamma}_t = \mathbf{A}\boldsymbol{\gamma}_{t-1} + \mathbf{B}\mathbf{u}_t,$ (4)

whose impulse response and transfer function match $\mathbf{h}_t$ and $\mathrm{H}(z)$, respectively, and can be written in terms of the LDS parameters as

$\mathbf{h}_t = \mathbf{C}\mathbf{A}^{t-1}\mathbf{B}, \qquad \mathrm{H}(z) = \mathbf{C}(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}.$ (5)

Consequently, impulse response descriptions of communication channels can be directly incorporated into state-space model descriptions of multi-region neural dynamics. For understanding temporal characteristics of communication channels, such as delays, the impulse response provides an informative description; how information may be attenuated or amplified at different frequencies is better understood through the transfer function.

2.2 THE MRDS-IR GENERATIVE MODEL

We consider region-specific latent states driven by their own recurrent dynamics, subject to the filtered content of other regions' latent state histories, with dynamics of the form

$\mathbf{z}_t^{(k)} = \mathbf{f}_k(\mathbf{z}_{t-1}^{(k)}) + \sum_{\ell \neq k} H_{k,\ell}(\mathbf{z}_{1:t-1}^{(\ell)}) + G_k(\mathbf{c}_t^{(k)}) + \mathbf{w}_t^{(k)},$ (6)

where $G_k$ maps region-specific stimuli/inputs to the latent space, and $H_{k,\ell}$ transforms the latent state history of region $\ell$ into an input to region $k$ – acting as a directed and causal[3] *channel* that controls the transmission of information between regions. This state-space structure is mathematically general, with many existing multi-region neural dynamics models in the literature as special cases. We model channels between regions, $H_{k,\ell}$, as linear filters parameterized by their impulse response, as explained above, which allows us to build a fully Markovian representation in a higher-dimensional state-space (Åström & Wittenmark, 2013). We structure the latent state-space according to the following coupled difference equations:

$\mathbf{z}_t^{(k)} = \mathbf{f}_k(\mathbf{z}_{t-1}^{(k)}) + \sum_{\ell \neq k} \mathbf{C}_{k,\ell}\,\boldsymbol{\gamma}_{t-1}^{(k,\ell)} + \mathbf{G}_k\mathbf{c}_t^{(k)} + \mathbf{w}_t^{(k)}$ (7)
$\boldsymbol{\gamma}_t^{(k,\ell)} = \mathbf{A}_{k,\ell}\,\boldsymbol{\gamma}_{t-1}^{(k,\ell)} + \mathbf{B}_{k,\ell}\,\mathbf{z}_{t-1}^{(\ell)}$ (8)

[1] There exist infinitely many state-space realizations of minimum state dimension (Rosenbrock, 1970).
[2] Hence the lack of the $\mathbf{D}$ matrix that may be familiar from a more general treatment (Chen, 1984).
[3] 'Causal' is used in the sense of systems theory, to mean that current outputs do not depend on future inputs.
We structure the latent state-space according to [˚] the following coupled difference equations, **z** [(] _t_ _[k]_ [)] = **f** _k_ ( **z** [(] _t−_ _[k]_ [)] 1 [) +] � _ℓ_ = _̸_ _k_ **[C]** _[k,ℓ]_ _**[γ]**_ _t_ [(] _−_ _[k,ℓ]_ 1 [)] [+] **[ G]** _[k]_ **[c]** _t_ [(] _[k]_ [)] + **w** _t_ [(] _[k]_ [)] (7) _**γ**_ _t_ [(] _[k,ℓ]_ [)] = **A** _k,ℓ_ _**γ**_ _t_ [(] _−_ _[k,ℓ]_ 1 [)] [+] **[ B]** _[k,ℓ]_ **[z]** [(] _t−_ _[ℓ]_ [)] 1 (8) 1 There exist infinite state-space model realizations of minimum state dimension (Rosenbrock, 1970). 2 Hence the lack of the **D** matrix that may be familiar in a more general treatment (Chen, 1984) 3 ‘Causal’ is used in the sense of systems theory, to mean that current outputs do not depend on future inputs. 3 where **z** [(] _t_ _[k]_ [)] are _L_ _k_ dimensional region specific states, **w** _t_ [(] _[k]_ [)] _∼N_ ( **0** _,_ **Q** _k_ ), _**γ**_ _t_ [(] _[k,ℓ]_ [)] are _L_ _ℓ_ _M_ _k,ℓ_ dimensional states of the channel from _ℓ_ to _k_, **A** _k,ℓ_ is _L_ _ℓ_ _M_ _k,ℓ_ _× L_ _ℓ_ _M_ _k,ℓ_, **B** _k,ℓ_ is _L_ _ℓ_ _M_ _k,ℓ_ _× L_ _ℓ_ and **C** _k,ℓ_ is _L_ _k_ _× L_ _ℓ_ _M_ _k,ℓ_ . We parameterize **f** _k_ ( _·_ ) as deep neural networks capable of learning highly nonlinear transition operators (specifically, we use the minimally gated unit (Heck & Salem, 2017), similar to Schimel et al. (2021)). The channel parameters, ( **A** _k,ℓ_ _,_ **B** _k,ℓ_ _,_ **C** _k,ℓ_ ), are learned alongside the other generative model parameters, and are constrained so that channel dynamics remain stable. To streamline notation, we define **Γ** [(] _t_ _[k]_ [)] as an extended latent state containing the latent state vectors of the LDS that processes messages entering node _k_, **Γ** [(] _t_ _[k]_ [)] := _**γ**_ _t_ [(] _[k,]_ [1)] _, . . .,_ _**γ**_ _t_ [(] _[k,k][−]_ [1)] _,_ _**γ**_ _t_ [(] _[k,k]_ [+1)] _. . .,_ _**γ**_ _t_ [(] _[k,K]_ [)] � � For further brevity we also define **s** [(] _t_ _[k]_ [)] := ( **z** [(] _t_ _[k]_ [)] _,_ **Γ** [(] _t_ _[k]_ [)] ), and adopt the notational convention that variables without a superscript represent the concatenation of all variables with that name, **z** _t_ := **z** [(1)] _t_ _[, . . .,]_ **[ z]** [(] _t_ _[K]_ [)] **Γ** _t_ = **Γ** [(1)] _t_ _[, . . .,]_ **[ Γ]** [(] _t_ _[K]_ [)] **s** _t_ = ( **z** _t_ _,_ **Γ** _t_ ) � � � � So that the full latent dynamics model, _p_ _**θ**_ ( **s** _t_ _|_ **s** _t−_ 1 ) = _N_ ( **s** _t_ _|_ **m** _**θ**_ ( **s** _t−_ 1 ) _,_ **Q** ) factors as, _̸_ _p_ _**θ**_ ( **z** [(] _t_ _[k]_ [)] _|_ **z** [(] _t−_ _[k]_ [)] 1 _[,]_ **[ Γ]** [(] _t−_ _[k]_ [)] 1 [)] � _k_ _ℓ_ = _̸_ _k_ _p_ _**θ**_ ( **s** _t_ _|_ **s** _t−_ 1 ) = � _̸_ � _δ_ ( _**γ**_ _t_ [(] _[k,ℓ]_ [)] _|_ _**γ**_ _t_ [(] _−_ _[k,ℓ]_ 1 [)] _[,]_ **[ z]** [(] _t−_ _[ℓ]_ [)] 1 [)] (9) _ℓ_ = _̸_ _k_ _̸_ where _δ_ ( _·_ ) is the Dirac delta function, and appears as a consequence of communication channel dynamics having no noise component. More than notational brevity, this representation also simplifies the algebraic complexity of developing an efficient message passing algorithm for posterior inference; since, as we discuss shortly, a state-noise inversion free algorithm can be developed, allowing us to formulate deterministic transitions as degenerate Gaussian distributions, or delta measures. For the observation model, similar to other latent variable models of multiregion communication, each region’s instantaneous activity is made dependent only on the latent variables associated with that region. 
This leads to a factorized likelihood, which in the linear Gaussian or Poisson GLM (generalized linear model) case we parameterize for each region as

$p(\mathbf{y}_t^{(k)} \mid \mathbf{z}_t^{(k)}) = \mathcal{N}\big(\mathbf{y}_t^{(k)} \mid \mathbf{D}_k\mathbf{z}_t^{(k)} + \mathbf{d}_k, \mathbf{R}_k\big)$ (10)
$p(\mathbf{y}_t^{(k)} \mid \mathbf{z}_t^{(k)}) = \mathrm{Poisson}\big(\mathbf{y}_t^{(k)} \mid \exp(\mathbf{D}_k\mathbf{z}_t^{(k)} + \mathbf{d}_k)\big)$ (11)

An important concept worth noting is that, because one region's latent variables do not *instantaneously* affect another region's activity, causal message passing algorithms should also not use observations of one region to update the filtering belief of another. For linear and Gaussian observation models this structure arises naturally, but for amortized approximate inference (full details in Appendix D) we make sure to adhere to this principle when causally updating our filtered beliefs.

**Parameterizing channels.** The recurrent dynamics of the between-region channel filters are parameterized as real representations of a diagonal matrix with $M$ complex-conjugate roots:

$\mathbf{A}_{k,\ell} = \mathrm{diag}\left( \begin{bmatrix} a_{k,\ell,1}\mathbf{I}_{L_\ell} & -b_{k,\ell,1}\mathbf{I}_{L_\ell} \\ b_{k,\ell,1}\mathbf{I}_{L_\ell} & a_{k,\ell,1}\mathbf{I}_{L_\ell} \end{bmatrix},\; \ldots,\; \begin{bmatrix} a_{k,\ell,M}\mathbf{I}_{L_\ell} & -b_{k,\ell,M}\mathbf{I}_{L_\ell} \\ b_{k,\ell,M}\mathbf{I}_{L_\ell} & a_{k,\ell,M}\mathbf{I}_{L_\ell} \end{bmatrix} \right).$ (12)

Increasing $M$ increases the *order* of the linear filter and makes it possible to learn linear filters with increasingly nuanced frequency responses (as a result of adding additional pole-zero structures) (Stoica et al., 2005). Constraining $\mathbf{A}_{k,\ell}$ to be diagonal might at first seem like a restrictive choice; however, with $\mathbf{B}_{k,\ell}$ and $\mathbf{C}_{k,\ell}$ free, this parameterization is able to capture any rational transfer function with greatest common denominator of order $M$ and no repeated poles (Aoki, 2013), and it is the basis for Gilbert's method of constructing minimal realizations (Gilbert, 1963). Additionally, considering that diagonalizable matrices are dense in the space of square matrices (Golub & Van Loan, 2013), it is not possible to learn non-trivial Jordan block structures through gradient descent without enforcing those structures. We parameterize the complex-conjugate roots of each block using their representation in polar coordinates and enforce stability during optimization through clipping whenever a root's radius exceeds 1; an alternative that allows for unconstrained optimization would be the stable exponential parameterization introduced in Orvieto et al. (2023). For the readout/readin matrices $\mathbf{C}_{k,\ell}$ and $\mathbf{B}_{k,\ell}$, the parameters are optimized without any additional constraints or structure. While we make this choice for practical simplicity, more sophisticated a priori pole-zero specifications could be introduced by considering the sparsity structure of these matrices (Kailath, 1980; Kay, 1988).

**Inference and end-to-end learning.** While the choice of nonlinear dynamics for local population activity is well motivated by the inability of linear dynamics models to capture key features of neural computation, such as attractor structure (Khona & Fiete, 2022), this choice also renders the exact posterior intractable – necessitating approximate inference.
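Before turning to inference, the following minimal numpy sketch assembles one channel's $(\mathbf{A}, \mathbf{B}, \mathbf{C})$ under the parameterization of Eq. (12) and evaluates its impulse response via Eq. (5). The random $\mathbf{B}$ and $\mathbf{C}$ stand in for learned values, and treating $M$ as the number of conjugate-root pairs (so the channel state is $2ML$-dimensional) is a labeling assumption.

```python
import numpy as np

def channel_matrices(radii, thetas, L_out, L_in, seed=0):
    """Build (A, B, C) for one communication channel, per Eq. (12).

    radii/thetas: length-M polar parameters of the complex-conjugate roots,
    with a_m = r_m cos(theta_m), b_m = r_m sin(theta_m). Each root pair
    contributes a 2x2 rotation-scaling block Kronecker'd with I_{L_in}.
    """
    rng = np.random.default_rng(seed)
    M = len(radii)
    r = np.clip(radii, 0.0, 0.999)       # clip for stability: |root| < 1
    p = 2 * M * L_in                     # channel state dimension
    A = np.zeros((p, p))
    for m in range(M):
        a, b = r[m] * np.cos(thetas[m]), r[m] * np.sin(thetas[m])
        block = np.kron(np.array([[a, -b], [b, a]]), np.eye(L_in))
        s = 2 * m * L_in
        A[s:s + 2 * L_in, s:s + 2 * L_in] = block
    B = rng.standard_normal((p, L_in))   # readin: unconstrained (learned in practice)
    C = rng.standard_normal((L_out, p))  # readout: unconstrained (learned in practice)
    return A, B, C

def impulse_response(A, B, C, T):
    """h_t = C A^{t-1} B for t = 1..T (Eq. (5)); returns shape (T, L_out, L_in)."""
    h, AtB = [], B.copy()
    for _ in range(T):
        h.append(C @ AtB)
        AtB = A @ AtB
    return np.array(h)
```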
We approach this inference problem using an end-to-end variational inference methodology (Blei et al., 2017), so that gradients of the evidence lower bound (ELBO),

$$\mathcal{L}(q) = \sum_{t,k} \mathbb{E}_{q_t}\Big[\log p\big(\mathbf{y}_t^{(k)} \mid \mathbf{z}_t^{(k)}\big)\Big] - \mathbb{E}_{q_{t-1}}\Big[\mathrm{D}_{\mathrm{KL}}\Big(q\big(\mathbf{s}_t^{(k)} \mid \mathbf{s}_{t-1}^{(k)}\big)\,\Big\|\,p_{\boldsymbol{\theta}}\big(\mathbf{s}_t^{(k)} \mid \mathbf{s}_{t-1}^{(k)}\big)\Big)\Big] \quad (13)$$

can be used to optimize the parameters of an approximation $q(\mathbf{s}_t \mid \mathbf{s}_{t-1}) \approx p(\mathbf{s}_t \mid \mathbf{s}_{t-1}, \mathbf{y}_t)$ and the parameters of the generative model (derivation of the ELBO in Appendix B). A Monte-Carlo approximation of the ELBO, suitable for gradient-based optimization, can be obtained by recursively sampling $\mathbf{s}_t^s \sim q(\mathbf{s}_t \mid \mathbf{s}_{t-1}^s)$ from the conditional variational approximation. For efficient sampling amenable to the reparameterization trick (Kingma & Welling, 2014), we parameterize the variational conditional as the product of a Gaussian potential, depending on $\mathbf{y}_t$, and the prior transition model by setting,

$$q(\mathbf{s}_t \mid \mathbf{s}_{t-1}) = \mathcal{N}(\mathbf{s}_t \mid \mathbf{m}_{t|t-1}, \mathbf{P}_{t|t-1}) \propto \boldsymbol{\phi}(\mathbf{y}_t \mid \mathbf{s}_t) \times p_{\boldsymbol{\theta}}(\mathbf{s}_t \mid \mathbf{s}_{t-1}) \quad (14)$$

with $p_{\boldsymbol{\theta}}(\mathbf{s}_t \mid \mathbf{s}_{t-1}) = \mathcal{N}(\mathbf{s}_t \mid \mathbf{m}_{\boldsymbol{\theta}}(\mathbf{s}_{t-1}), \mathbf{Q})$. This parameterization, inspired by conjugate-potential amortized inference networks such as the structured variational autoencoder (SVAE) (Johnson et al., 2016), forces the backpropagated gradients of the ELBO to traverse the latent dynamics model, an important component for learning meaningful dynamics models capable of long-horizon forecasts (Karl et al., 2017; Klushyn et al., 2021). Each Gaussian potential has the form,

$$\boldsymbol{\phi}(\mathbf{y}_t \mid \mathbf{s}_t) \propto \exp\Big(\mathbf{k}(\mathbf{y}_t)^\top \mathbf{s}_t - \tfrac{1}{2}\big\|\mathbf{K}(\mathbf{y}_t)^\top \mathbf{s}_t\big\|^2\Big) \quad (15)$$

When the observation model is not conjugate, $\mathbf{k}(\cdot)$ and $\mathbf{K}(\cdot)$ are parameterized by neural networks whose parameters are learned by maximizing the ELBO; in the case of a linear and Gaussian observation model, $p(\mathbf{y}_t \mid \mathbf{s}_t) = \mathcal{N}(\mathbf{y}_t \mid \mathbf{D}\mathbf{s}_t, \mathbf{R})$, they have optimal closed-form solutions,

$$\mathbf{k}(\mathbf{y}_t) = \mathbf{D}^\top \mathbf{R}^{-1} \mathbf{y}_t, \qquad \mathbf{K}(\mathbf{y}_t)\,\mathbf{K}(\mathbf{y}_t)^\top = \mathbf{D}^\top \mathbf{R}^{-1} \mathbf{D} \quad (16)$$

Now, given a sample $\mathbf{s}_{t-1}^s$ and forming the conditional Gaussian approximation statistics,

$$\mathbf{m}_{t|t-1} = \mathbf{m}_{\boldsymbol{\theta}}(\mathbf{s}_{t-1}^s) + \mathbf{Q}\mathbf{g}_t, \qquad \mathbf{P}_{t|t-1} = \mathbf{Q} - \mathbf{Q}\mathbf{K}_t\big(\mathbf{I} + \mathbf{K}_t^\top \mathbf{Q}\mathbf{K}_t\big)^{-1}\mathbf{K}_t^\top \mathbf{Q} \quad (17)$$

with $\mathbf{g}_t = \mathbf{k}_t - \mathbf{K}_t\big(\mathbf{I} + \mathbf{K}_t^\top \mathbf{Q}\mathbf{K}_t\big)^{-1}\mathbf{K}_t^\top\big(\mathbf{Q}\mathbf{k}_t + \mathbf{m}_{\boldsymbol{\theta}}(\mathbf{s}_{t-1}^s)\big)$, we can sample $\mathbf{s}_t^s \sim q(\mathbf{s}_t \mid \mathbf{s}_{t-1}^s)$. Having formulated the recursive belief updates without requiring inversion of the state noise further allows us to treat hybrid/stochastic latent transitions similarly (Appendix A). Proceeding with this recursion until time $T$ produces a completely differentiable trajectory sampled from the series of causally constructed beliefs, which can be used to evaluate the ELBO.
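The update in Eq. (17) transcribes almost directly into code. The sketch below is a minimal NumPy version, assuming the prior mean $\mathbf{m}_{\boldsymbol{\theta}}(\mathbf{s}_{t-1}^s)$ has already been computed, and ignoring the block structure exploited in Appendix D; only the small system $(\mathbf{I} + \mathbf{K}_t^\top \mathbf{Q}\mathbf{K}_t)$ is solved, and $\mathbf{Q}$ itself is never inverted, which is what permits degenerate (delta) transitions.

```python
import numpy as np

def conditional_gaussian_sample(m_prior, Q, k_t, K_t, rng, jitter=1e-9):
    """Eq. (17): form q(s_t | s_{t-1}) = N(m_{t|t-1}, P_{t|t-1}) and draw one
    reparameterized sample, without ever inverting the state noise Q."""
    S = np.eye(K_t.shape[1]) + K_t.T @ Q @ K_t               # small, well conditioned
    g = k_t - K_t @ np.linalg.solve(S, K_t.T @ (Q @ k_t + m_prior))
    m = m_prior + Q @ g
    P = Q - Q @ K_t @ np.linalg.solve(S, K_t.T @ Q)
    # P can be rank deficient when parts of Q are degenerate; a small jitter
    # keeps the Cholesky factorization usable for sampling.
    L = np.linalg.cholesky(P + jitter * np.eye(P.shape[0]))
    return m, P, m + L @ rng.normal(size=m.shape)
```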
We offer a more in-depth discussion of the approximate filtering algorithm in Appendix D; there, we also cover in greater detail how block structures appearing in $\mathbf{k}_t$ and $\mathbf{K}_t$, due to the multi-region observation model and the deterministic channel transitions, can be exploited to reduce the computational complexity of inference.

3 RESULTS

3.1 MRDS-IR RECOVERS GROUND TRUTH DYNAMICS

We first validated the efficacy of our inference algorithm by simulating a synthetic dataset with matched generative model structure (Fig. 1A). The synthetic three-region system was crafted so that each area's recurrent dynamics are characterized by slow dynamic structures believed to play important roles in neural computation (Fig. 1B, from left to right: a stable limit cycle produced by van der Pol's oscillator, a stable spiral, and a ring attractor), with a limited set of connections between them, themselves modeled as linear with predefined impulse response functions (Fig. 1C and D, black). From this system we generate 1000 trials of 200 time points each and project each region's latent state to a 100-dimensional observation space via a linear Gaussian likelihood. We fitted an MRDS-IR model with three nodes, all-to-all connections between them, and a linear Gaussian observation model to this simulated data, and assessed whether the estimated model could i) recover individual region dynamics, ii) recover the linear filters between channels, and iii) correctly identify whether a channel was 'open' or 'closed.' Examining the estimated flow fields of each of the regions (Fig. 1B, bottom row), one can see that our estimator was able to learn the true autonomous dynamical systems structure, up to expected model invariances (axis rotation and re-scaling).

Figure 1: **Ground truth model recovery. A)** Structure of the ground truth data: three nodes and the flow of information between them. **B)** Single-node population dynamics for the (top) ground truth data and (bottom) learned dynamical system after fitting the model; trajectories from the autonomous dynamics in light gray. **C)** Learned and ground truth impulse responses for the open/used (left) and closed/unused (right) channels. **D)** Pole-zero plot of the impulse response channel (black: ground truth, red: estimates). **E)** Negative ELBO and predicted neural responses $R^2$ for MRDS-IR (ours) and the linear (LN) and nonlinear (NL) baseline models; see text for details.

Fig. 1C shows the ground truth and recovered impulse responses of the (left) open channels and (right) closed channels. We found that the recovered impulse responses match in periodicity, although the units of amplitude are arbitrary. Importantly, the model learned to prune inactive channels by setting their amplitude close to zero.[4] The pole-zero plot in Fig. 1D confirms that the estimated active channels have frequency responses matching ground truth. Reassuringly, we found that our model, which matches the true data statistics, is revealed as the better fit in model comparison using either the ELBO or the $R^2$ of latent trajectory predictions for a forecast horizon of 150 time points regressed to ground truth examples (Fig. 1E), where the latter metric helps to assess the prediction capability of the learned dynamics.
This is not a given, as a simpler model could in principle fit the data better (due to finite data, fewer parameters, and a smaller inductive bias), but we confirmed that our model fitting procedure can still recover a fuller description of the underlying multi-region population activity.

3.2 REVERSE ENGINEERING DISTRIBUTED COMPUTATION IN AN INTEGRATION TASK

Next, we tested the ability of MRDS-IR to reveal the principles underlying multi-region neural computation in a distributed temporal integration task (Fig. 2A), which requires long time scales in the dynamics and gating of information flow between regions. Rather than engineering a multi-region dynamical system computation directly, we chose to use trained RNNs and reverse engineered their function using either the ground truth trained RNN parameters or the corresponding MRDS-IR estimates. We also included CURBD (Perich et al., 2020) as a baseline comparison. Unlike the previous experiment, here there is a model mismatch between ground truth and the MRDS-IR estimator. Since CURBD uses RNNs for multi-region dynamics, it provides a particularly stringent comparison, but our explicit input conditioning, which links the underlying dynamics to task meaning, may help MRDS-IR better identify the underlying computations behind the measured neural activity. Concretely, the simulated circuit used three regions, each with a low-rank RNN architecture (Mastrogiuseppe & Ostojic, 2018; Beiran et al., 2023) so that low-dimensional dynamics could be easily visualized and compared. The activity in each region $k$ evolves as

$$\mathbf{y}_t^{(k)} = \Big(1 - \frac{\Delta}{\tau_k}\Big)\mathbf{y}_{t-1}^{(k)} + \frac{\Delta}{\tau_k}\Big[\mathbf{W}_k\,\phi\big(\mathbf{y}_{t-1}^{(k)}\big) + \sum_{\ell \neq k} \mathbf{W}_{k,\ell}\,\phi\big(\mathbf{y}_{t-1}^{(\ell)}\big) + \mathbf{G}_k \mathbf{c}_t^{(k)} + \boldsymbol{\epsilon}_t^{(k)}\Big], \quad (18)$$

where $\mathbf{c}_t^{(k)}$ is the input to region $k$, read out linearly by $\mathbf{G}_k$, $\mathbf{W}_k = \mathbf{M}_k \mathbf{N}_k^\top$ and $\mathbf{W}_{k,\ell} = \mathbf{M}_{k,\ell} \mathbf{N}_{k,\ell}^\top$ are low-rank within/between population weight matrices, of ranks 1 and 2, respectively, and $\boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$. Each RNN region had 128 neurons with tanh nonlinearities, and linear readouts for region-specific outputs. The task requires each of the three nodes to integrate their respective inputs (which are constant over the course of a trial), with their computation gated by delays and region-specific go cues. Fig. 2B shows one particular example trial. At the beginning of each trial, continuous 'cue 1' and

[4] Loosely defined, the closed channel amplitude is much smaller than local signals in the receiving area, although this can be more precisely validated by model comparison.

Figure 2: **A distributed working memory circuit. A)** A diagram of circuit connectivity. **B)** Structure of a single trial. Cues 1 and 2 determine the speed of temporal integration; go signals start integration in the corresponding region; computation in region 2 is gated by the state of region 1 with a temporal delay. **C)** Ground truth versus inferred dynamics comparison.
(left) RNN autonomous dynamics and example trajectories colored by the sum of the cues received by populations 1 and 2; (middle) the dynamics learned and (right) corresponding single trajectories estimated by our MRDS-IR. **D)** Estimated inter-area communication compared to ground truth RNN signals and CURBD; numbers indicate goodness of fit measured by the $R^2$ to single-trial RNN currents. **E)** Comparison of MRDS-IR and CURBD in terms of their ability to predict single-trial future neural responses over a forecast horizon of up to 225 bins ($R^2$ of model-predicted observations matched to true data, left), with example neuron predictions for each region (right); the dashed grey line marks the beginning of forecasting. Idea Generation Category:
0Conceptual Integration
LbgIZpSUCe
# Stem-OB: GENERALIZABLE VISUAL IMITATION LEARNING WITH STEM-LIKE CONVERGENT OBSERVATION THROUGH DIFFUSION INVERSION

**Kaizhe Hu** [123] _∗_ **Zihang Rui** [1] _∗_ **Yao He** [4] **Yuyao Liu** [1] **Pu Hua** [123] **Huazhe Xu** [123]

1 Tsinghua University 2 Shanghai Qi Zhi Institute 3 Shanghai AI Lab 4 Stanford University

hukaizhe22@mails.tsinghua.edu.cn, huazhe_xu@mail.tsinghua.edu.cn

Figure 1: **Left:** The tree of _Stem-OB_ inversion is composed of different objects progressively inverted through a diffusion inversion process. Moving downward along the tree's branches, objects of different textures, appearances, and categories gradually get closer, eventually converging into the same root of Gaussian noise, where they are completely indistinguishable. **Right:** Real-world task success rates, where _Stem-OB_ showcases a significant improvement.

ABSTRACT

Visual imitation learning methods demonstrate strong performance, yet they lack generalization when faced with visual input perturbations like variations in lighting and textures. This limitation hampers their practical application in real-world settings. To address this, we propose _**Stem-OB**_, which leverages the inversion process of pretrained image diffusion models to suppress low-level visual differences while maintaining high-level scene structures. This image inversion process is akin to transforming the observation into a shared representation, from which other observations also stem. _Stem-OB_ offers a simple yet effective plug-and-play solution that stands in contrast to data augmentation approaches. It demonstrates robustness to various unspecified appearance changes without the need for additional training. We provide theoretical insights and empirical results that validate the efficacy of our approach in simulated and real settings. _Stem-OB_ shows an exceptionally significant improvement in real-world robotic tasks, where challenging light and appearance changes are present, with an average increase of **22.2%** in success rates compared to the best baseline. See our website for more videos: https://github.com/hukz18/Stem-Ob/

1 INTRODUCTION

Visual Imitation Learning (IL), where an agent learns to mimic the behavior of the demonstrator by learning a direct mapping from visual observations to low-level actions, has gained popularity in recent real-world robot tasks (Chi et al., 2023; Zhao et al., 2023; Wang et al., 2023a; Chi et al., 2024; Ze et al., 2024). Despite the versatility demonstrated by visual IL, learned policies are often brittle and fail to generalize to unseen environments; even minor perturbations such as altering the lighting conditions or the texture of the object may lead to failure of the learned policy (Xie et al., 2023; Yuan et al., 2024b). The underlying reason is that the high-dimensional visual observation space is redundant, with virtually infinite variations in appearance that are irrelevant to the task and hard to generalize over. As human beings, we can easily manipulate objects that have different appearances. For example, we can pick up a coffee cup regardless of its color, texture, or the lighting condition of the room.

_∗_ Indicates equal contribution.
This is partially because our visual system is capable of abstracting the high-level semantics of the scene, such as the silhouette of the object and the structure and arrangement of different objects, in a hierarchical manner (Hochstein & Ahissar, 2002), effectively merging scenes with perceptual differences into similar "meta" observations. Augmentation techniques such as Spectrum Random Masking (SRM) (Huang et al., 2022) and Mixup (Zhang et al., 2018) remove details from observations to encourage the model to focus on structural features; however, they lack the ability to distinguish between low-level and high-level features. It is preferable if we can sweep away the photometric differences while maintaining the high-level structure of the scene. Achieving this requires a semantic understanding of the observations, and naively perturbing the data with Gaussian noise can lead to irreversible information loss. Pretrained large image diffusion models, such as Stable Diffusion (Rombach et al., 2022; Esser et al., 2024), embed essential world knowledge for visual understanding. Apart from synthesizing new images from random noise, these models are capable of performing a reverse procedure called inversion (Song et al., 2022), which converts an image back to the space of random noise. A recent study (Yue et al., 2024) indicates that this inversion process selectively eliminates information from the image. Rather than uniformly removing information from different semantic hierarchies, it pushes images with similar structures closer together in the early stages of the inversion process. Inversion is like the reprogramming of a differentiated cell back to a stem cell, which bears the totipotency to differentiate into any cell type. This characteristic aligns perfectly with our goal of enhancing the robustness and generalizability of visual IL algorithms to visual variations. To distill this property into a visual IL policy, we propose an imitation learning pipeline that applies diffusion inversion to the visual observations. We name our method _Stem-OB_ to highlight the similarity between the inverted observation and the stem cell in biology, as illustrated in Figure 1. To be specific, our method is as simple as inverting the images for a reasonable number of steps before sending them to the downstream visual IL algorithms. The number of steps is chosen empirically to balance removing irrelevant details against erasing essential high-level information. From this perspective, our approach fundamentally differs from generative augmentation methods, which aim to enrich the training dataset with more unseen objects and appearances (Yu et al., 2023; Mandlekar et al., 2023). Moreover, _Stem-OB_ is indifferent to many unspecified appearance changes, in contrast to augmentation-based methods that must concentrate on a few selected types of generalization, thereby introducing inevitable inductive biases. We provide theoretical analysis and a user study to support our claim that _Stem-OB_ can effectively merge scenes with perceptual differences into similar "stem observations". An empirical study demonstrates the effectiveness of our approach across a variety of simulated and real-world tasks and a range of different perturbations. _Stem-OB_ proves to be particularly effective in real-world tasks where appearance and lighting changes hamper the other baselines, establishing an overall improvement in success rate of **22.2%**.
Better still, no inversion is required at inference time for _Stem-OB_ to take effect, making the deployment of our method virtually free of additional computational cost.

2 RELATED WORKS

2.1 VISUAL IMITATION LEARNING AND GENERALIZATION

Visual Imitation Learning (VIL) is a branch of Imitation Learning (IL) that focuses on learning action mappings from visual observations. It typically follows two approaches: directly imitating expert policies, as in behavior cloning and DAgger (Ross et al., 2011), or inferring a reward function that aligns the agent's behavior with expert demonstrations, as in inverse reinforcement learning (Ng & Russell, 2000) and GAIL (Ho & Ermon, 2016).

Figure 2: **Overview of** _**Stem-OB**_ : **(a)** _Stem-OB_ has been evaluated in both real-world and simulated environments. **(b)** The trained visual IL policies are directly applied to the original observation space $\mathcal{O}$, demonstrating robustness to unseen environmental disturbances. **(c)** We train the visual IL policy $\boldsymbol{\pi}$ on the diffusion-inversed latent space $\hat{\mathcal{O}}^{\hat{t}/T}$, where $\hat{t}$ denotes a specific inversion step out of a total of $T$. Each composite rectangle in the diffusion inversion process, made up of three smaller sections, represents the latent vector of an image, with the smaller sections depicting the finer attributes (gray). During the inversion process, finer attributes converge earlier than coarser ones.

The former approach, favored in recent works due to its scalability and practicality in complex, real-world tasks, has led to several advancements. Notable methods include Diffusion Policy (Chi et al., 2023), which leverages a diffusion model (Ho et al., 2020) to maximize the likelihood of expert actions, and the Action Chunk Transformer (Zhao et al., 2023), which uses a Transformer (Vaswani, 2017). To enhance the generalization and robustness of visual imitation learning algorithms, various approaches have been explored (Cetin & Celiktutan, 2021). For instance, Li et al. (2023) leverage trajectory- and step-level similarity through an estimated reward function, while Wan et al. (2023) improve visual robustness by separating task-relevant and irrelevant dynamics models before applying the GAIL framework (Ho & Ermon, 2016). Zhang et al. (2023a) use mutual information constraints to create compact representations that generalize to unseen environments. However, these approaches are not directly applicable to methods like Diffusion Policy, which focus on imitation without reward functions or dynamics models. Zheng et al. (2023) propose to filter out extraneous action subsequences, yet their focus is not on visual perturbations. Most relevant to our setting, several works in robust visual reinforcement learning have explored adding noise in the image or frequency domain to improve generalizability (Huang et al., 2022; Lee & Hwang, 2024; Yuan et al., 2024a); however, they lack a semantic understanding of the augmentation process.

2.2 INVERSION OF DIFFUSION MODELS AND ITS APPLICATION

Diffusion model inversion aims to recover an initial noise distribution from a given image, enabling image reconstruction from the same noise via backward denoising. A common approach is DDIM inversion (Song et al., 2022), which estimates the previous noise using predictions from the current diffusion step, though the approximation will introduce cumulative errors.
To address this issue, several methods employ learnable text embeddings (Mokady et al., 2023; Miyake et al., 2023), fixed-point iterations (Pan et al., 2023; Garibi et al., 2024), or gradient-based techniques (Samuel et al., 2024) to refine the result. Another approach, based on the stochastic DDPM scheduler (Ho et al., 2020), reconstructs noisy images at each diffusion step (Huberman-Spiegelglas et al., 2024; Brack et al., 2024). In contrast to the DDIM inversion methods, the noise $\tilde{\boldsymbol{\epsilon}}_t$ of each step is statistically independent, making these methods ideal for our application, since we can obtain the noisy image of a certain step without the need to recover other steps, greatly reducing the calculation cost. Diffusion inversion is a crucial part of diffusion-based image editing methods (Meng et al., 2021; Kawar et al., 2023), which typically involve first inverting the diffusion process to recover the noise latent, then denoising the latent with the desired editing conditions. Recent works also explore applying attention control over the denoising process to improve the fidelity of the edited image (Hertz et al., 2022; Tumanyan et al., 2023), and have shown promising application in robot learning tasks (Gao et al., 2024; Ju et al., 2025; Zhu et al., 2024). Beyond that, inversion is also used in tasks like concept extraction (Huang et al., 2023) and personalization (Gal et al., 2022). Most recently, Wang & Chen (2024) proposed the use of diffusion inversion to interpolate between image categories to improve classification performance, and Wang et al. (2023b) use diffusion inversion to erase sub-optimal trajectories from the dataset.

3 PROBLEM DEFINITION

Given a dataset of observation $\mathcal{O}$ and action $\mathcal{A}$ pairs, the goal of VIL is to learn a policy $\pi_\theta(\mathcal{A} \mid \mathcal{O})$ that maps observations to actions. The policy is typically parameterized by a neural network with parameters $\theta$, and is trained to minimize the negative log-likelihood of the actions. To achieve the goal of generalizing to unseen environments, we seek a method to preprocess or transform the observations $\mathcal{O}$ such that task-irrelevant details are suppressed while the high-level semantic structure critical for the task is preserved. The problem can thus be transformed into learning a transformation $\mathcal{T}$ as the input to the policy $\pi_\theta(\mathcal{A} \mid \mathcal{T}(\mathcal{O}))$, where $\mathcal{T}(\mathcal{O})$ is the transformed observation emphasizing high-level semantics while removing irrelevant details.

4 PRELIMINARY

We begin by outlining the fundamentals of diffusion inversion. A diffusion model operates with two passes: a backward denoising pass, which generates an image from noise, and a forward pass, where noise is incrementally added to an image until it becomes pure Gaussian noise. This forward process is a Markov chain that starts with $\mathbf{x}_0$ and gradually adds noise to obtain latent variables $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T$. Each step in this process is a Gaussian transition following the common form

$$\mathbf{x}_t = \sqrt{\alpha_t}\,\mathbf{x}_{t-1} + \sigma_t \boldsymbol{\epsilon}_t \sim \mathcal{N}\big(\mathbf{x}_t \mid \sqrt{\alpha_t}\,\mathbf{x}_{t-1},\ \sigma_t^2 \mathbf{I}\big) \quad (1)$$

where $\alpha_t \in (0,1)$ represents the scheduler parameter at each step $t$, while $\sigma_t$ characterizes the variance of the Gaussian noise $\boldsymbol{\epsilon}_t$ introduced at each step. In Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020), $\sigma_t = \sqrt{1-\alpha_t}$. Consequently, Eq. (1) can be reformulated as Eq.
(2) by applying the cumulative product $\bar\alpha_t = \prod_{i=1}^{t} \alpha_i$:

$$\mathbf{x}_t = \sqrt{\bar\alpha_t}\,\mathbf{x}_0 + \sqrt{1-\bar\alpha_t}\,\boldsymbol{\epsilon}_t \sim \mathcal{N}\big(\mathbf{x}_t \mid \sqrt{\bar\alpha_t}\,\mathbf{x}_0,\ (1-\bar\alpha_t)\mathbf{I}\big) \quad (2)$$

Diffusion inversion is similar to the forward process in that both map an image to a noise; however, inversion tries to preserve the image's information and obtain the specific noise that can reconstruct the image during a backward denoising process.

**DDPM inversion.** We follow the DDPM inversion proposed in Huberman-Spiegelglas et al. (2024),

$$\mathbf{x}_t = \sqrt{\bar\alpha_t}\,\mathbf{x}_0 + \sqrt{1-\bar\alpha_t}\,\tilde{\boldsymbol{\epsilon}}_t \quad (3)$$

The DDPM inversion we consider here differs slightly from Eq. (2), as the $\tilde{\boldsymbol{\epsilon}}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ are mutually independent, in contrast to the highly correlated $\boldsymbol{\epsilon}_t$ in Eq. (2). As mentioned by Huberman-Spiegelglas et al. (2024), the independence of $\tilde{\boldsymbol{\epsilon}}_t$ results in a sequence of latent vectors where the structures of $\mathbf{x}_0$ are more strongly imprinted into the noise maps. An error-reduction step is conducted in reverse order after the diffusion forward process to improve image reconstruction accuracy during the denoising process:

$$\hat{\mathbf{z}}_t = \big(\mathbf{x}_{t-1} - \hat{\mu}(\mathbf{x}_t)\big)/\sigma_t, \qquad \mathbf{x}_{t-1} = \hat{\mu}(\mathbf{x}_t) + \sigma_t \hat{\mathbf{z}}_t \quad (4)$$

**DDIM inversion.** We follow the DDIM inversion proposed in Song et al. (2022), where at each forward diffusion step

$$\mathbf{x}_t = \sqrt{\frac{\bar\alpha_t}{\bar\alpha_{t-1}}}\,\mathbf{x}_{t-1} + \sqrt{\bar\alpha_t}\left(\sqrt{\frac{1}{\bar\alpha_t}-1} - \sqrt{\frac{1}{\bar\alpha_{t-1}}-1}\right)\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t-1}, t, C) \quad (5)$$

Note that the noise $\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t-1}, t, C)$ is now generated by a network trained to predict the noise based on the previous vector $\mathbf{x}_{t-1}$ and the text embedding $C$, which, in our case, is $\emptyset$.

5 METHOD

In this section, we introduce the intuition and implementation of our framework. We first motivate applying inversion to observations through a theoretical analysis based on the attribute loss, a diffusion-based measurement of image semantic similarity. Then, we conduct an illustrative experiment and a user study to validate our intuition. Finally, we explain how to practically implement _Stem-OB_ and incorporate diffusion inversion into a visual imitation learning framework.

5.1 INTUITION DERIVATION BY ATTRIBUTE LOSS

Intuitively, as the diffusion inversion process moves forward, a source image and a variation of it become increasingly indistinguishable. The variation here could be a low-level change such as lighting conditions, but it also includes semantic changes such as replacing an object. Given two different variations, we want to show that, as the inversion step increases, the pair with minor alterations becomes indistinguishable sooner than the pair with larger, structural changes. We borrow the definition of attribute loss from Yue et al. (2024) to quantify the semantic overlap of the two images at time step $t$ during an inversion process:

$$loss(\mathbf{x}_0, \mathbf{y}_0, t) = \frac{1}{2}\,\mathrm{OVL}\big(q(\mathbf{x}_t \mid \mathbf{x}_0),\ q(\mathbf{y}_t \mid \mathbf{y}_0)\big) \quad (6)$$

where $\mathbf{x}_0$ and $\mathbf{y}_0$ are the latent variables of the two images, and OVL is the overlapping coefficient quantifying the overlapping area of two probability density functions.
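Since the $\tilde{\boldsymbol{\epsilon}}_t$ in Eq. (3) are mutually independent, the noisy latent at any single step can be drawn directly, which is exactly the property that makes DDPM inversion cheap for our purposes. The sketch below illustrates this; the toy noise schedule is an illustrative assumption, not the schedule of any particular pretrained model, and it covers only the forward sampling of Eq. (3), not the error-reduction pass of Eq. (4).

```python
import numpy as np

def ddpm_inversion_latent(x0, alpha_bar, t, rng):
    """Eq. (3): draw x_t directly from x_0 with an independent eps~_t,
    without recovering any of the intermediate steps 1..t-1."""
    eps_t = rng.normal(size=x0.shape)        # eps~_t ~ N(0, I), independent across t
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps_t

# Toy linear schedule; real schedules ship with the pretrained diffusion model.
T = 1000
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 2e-2, T))
x0 = np.zeros((4, 64, 64))                   # stand-in for an image latent
x_t = ddpm_inversion_latent(x0, alpha_bar, t=400, rng=np.random.default_rng(0))
```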
For an inversion process where each step follows a Gaussian transition of the form $\mathbf{x}_t = \sqrt{\bar\alpha_t}\,\mathbf{x}_0 + \sigma\boldsymbol{\epsilon} \sim \mathcal{N}(\sqrt{\bar\alpha_t}\,\mathbf{x}_0, \sigma^2\mathbf{I})$, the OVL can be further calculated as the overlapping area of two Gaussian distributions, i.e.,

$$loss(\mathbf{x}_0, \mathbf{y}_0, t) = \frac{1}{2}\left(1 - \mathrm{erf}\left(\frac{\|\sqrt{\bar\alpha_t}\,(\mathbf{y}_0 - \mathbf{x}_0)\|}{2\sqrt{2}\,\sigma}\right)\right) \quad (7)$$

where erf is the error function, which is strictly increasing. Given a source image $\mathbf{x}_0$ and its variations $\hat{\mathbf{x}}_0$ and $\tilde{\mathbf{x}}_0$, with $\tilde{\mathbf{x}}_0$ undergoing a larger variation than $\hat{\mathbf{x}}_0$, the following conclusion can be easily observed under the same diffusion scheduling:

$$\tau(\mathbf{x}_0, \hat{\mathbf{x}}_0, \rho) < \tau(\mathbf{x}_0, \tilde{\mathbf{x}}_0, \rho), \quad \text{s.t.}\ \|\hat{\mathbf{x}}_0 - \mathbf{x}_0\| < \|\tilde{\mathbf{x}}_0 - \mathbf{x}_0\| \quad (8)$$

Here, $\tau(\mathbf{x}_0, \mathbf{y}_0, \rho) = \inf\{t > 0 \mid loss(\mathbf{x}_0, \mathbf{y}_0, t) > \rho\}$ represents the earliest step at which the loss between $\mathbf{x}_0$ and $\mathbf{y}_0$ exceeds the threshold $\rho$, and $\|\cdot\|$ measures the difference between an image and its variation. Eq. (8) provides a theoretical grounding for our intuition: images with fine-grained attribute changes tend to become indistinguishable sooner than those with coarse-grained modifications under identical diffusion schedules. We can further derive the attribute loss for DDPM inversion,

$$loss_{DDPM}(\mathbf{x}_0, \mathbf{y}_0, t) = \frac{1}{2}\left(1 - \mathrm{erf}\left(\frac{\|\sqrt{\bar\alpha_t}\,(\mathbf{y}_0 - \mathbf{x}_0)\|}{2\sqrt{2(1-\bar\alpha_t)}}\right)\right) \quad (9)$$

Additionally, we derive that the attribute loss for DDIM inversion exhibits a similar form under certain assumptions (the detailed derivation can be found in Appendix A.2):

$$loss_{DDIM}(\mathbf{x}_0, \mathbf{y}_0, t) = \frac{1}{2}\left(1 - \mathrm{erf}\left(\frac{\|\mathbf{y}_0 - \mathbf{x}_0\|}{2\sqrt{2\sum_{i=1}^{t}\bar\alpha_i\left(\sqrt{\frac{1}{\bar\alpha_i}-1} - \sqrt{\frac{1}{\bar\alpha_{i-1}}-1}\right)^2}}\right)\right) \quad (10)$$

Because $\bar\alpha_t \in (0,1)$ is strictly decreasing, the attribute loss tends to increase as the time step increases. Furthermore, as discussed in Yue et al. (2024), this attribute loss is equivalent to how likely the DM is to falsely reconstruct $\mathbf{x}_t$ sampled from $q(\mathbf{x}_t \mid \mathbf{x}_0)$ as being closer to $\mathbf{y}_0$ instead of $\mathbf{x}_0$, and vice versa. Idea Generation Category:
0Conceptual Integration
xaYlO03tIk
# BEYOND NEXT TOKEN PREDICTION: PATCH-LEVEL TRAINING FOR LARGE LANGUAGE MODELS

**Chenze Shao, Fandong Meng** _∗_ **, Jie Zhou**
Pattern Recognition Center, WeChat AI, Tencent Inc, China
_{_ chenzeshao,fandongmeng,withtomzhou _}_ @tencent.com

ABSTRACT

The prohibitive training costs of Large Language Models (LLMs) have emerged as a significant bottleneck in the development of next-generation LLMs. In this paper, we show that it is possible to significantly reduce the training costs of LLMs without sacrificing their performance. Specifically, we introduce patch-level training for LLMs, in which multiple tokens are aggregated into a unit of higher information density, referred to as a 'patch', to serve as the fundamental text unit for training LLMs. During patch-level training, we feed the language model shorter sequences of patches and train it to predict the next patch, thereby processing the majority of the training data at a significantly reduced cost. Following this, the model continues token-level training on the remaining training data to align with the inference mode. Experiments on a diverse range of models (370M-2.7B parameters) demonstrate that patch-level training can reduce the overall training costs to 0.5×, without compromising the model performance compared to token-level training. Source code: https://github.com/shaochenze/PatchTrain

1 INTRODUCTION

Large Language Models (LLMs, Achiam et al., 2023; Touvron et al., 2023a;b; Team et al., 2023; Bai et al., 2023) have achieved remarkable progress in language understanding and generation, which is primarily attributed to their unprecedented model capacity and the corresponding growth in the volume of training data they require (Kaplan et al., 2020; Hoffmann et al., 2022). However, this scaling up comes with a substantial rise in computational costs, making the training efficiency of LLMs a critical concern. Despite the ongoing efforts on efficient LLMs (Wan et al., 2023), it remains a formidable challenge to reduce training costs without compromising the model performance. Specifically, the amount of compute (FLOPs) required for training LLMs is approximately proportional to both the number of model parameters $N$ and the number of text units (i.e., tokens) $D$ in the training data. This relationship can be expressed as:

$$C \approx 6ND. \quad (1)$$

Therefore, strategies for reducing training costs can target either the reduction in the number of model parameters $N$ or the number of text units $D$. One prominent approach to reduce the parameter size $N$ is called model growth (Gong et al., 2019; Yang et al., 2020; Chen et al., 2022). Rather than equipping the model with full parameters from the beginning, it advocates for a progressive expansion of the model's parameter size throughout the training phase, thereby reducing the average parameter size during training. Nonetheless, a model's performance hinges on an adequate number of parameters to store extensive knowledge and develop intricate reasoning capabilities. The inadequacy of parameters inherently limits the scope of knowledge and capabilities a model can develop, rendering it challenging to match the performance of training with the full parameter set. The second pathway to lowering training costs is reducing the number of text units $D$ within the training data.
This direction remains largely unexplored but intuitively holds more promise, as the knowledge embedded within training data is sparsely distributed across numerous tokens, with each token encapsulating a minimal amount of information. This sparse distribution of information results in a scenario where, despite the substantial computational costs incurred during each learning step, only a small fraction of model parameters that are relevant to the current token are effectively updated. By increasing the amount of information the model processes at each learning step, that is, by augmenting the information density of text units and thus reducing the number of text units $D$, we could potentially boost training efficiency significantly without sacrificing model performance.

_∗_ Corresponding author.

Figure 1: Visualization of overall training costs with patch compression for a fraction $\lambda$ of training data and patch size $K$.

Figure 2: Negative log-likelihood (NLL) loss on the test set w.r.t. the number of processed tokens during the training of 370M-parameter Transformers.

Building on these insights, this paper introduces patch-level training for large language models, in which multiple tokens are aggregated into a unit of higher information density, referred to as a 'patch', to serve as the fundamental text unit for training LLMs. Specifically, we divide the training process into two stages: patch-level training and token-level training. During patch-level training, we feed the language model shorter sequences of patches and train it to predict the next patch, thereby processing the majority of the training data at a significantly reduced cost. The resulting parameters are used to initialize the token-level model, which then continues training on the remaining data to adapt the knowledge gained during patch-level training to the token level. Figure 1 illustrates the efficiency advantage of patch-level training, where the area of the shape represents the overall training costs. With a patch size of $K$, the amount of compute required for patch-level training is $1/K$ of that required for token-level training. When a fraction $\lambda$ of the training data is compressed into patches, the overall training costs are reduced to $\lambda/K + 1 - \lambda$ times the original costs. For instance, to halve the training costs, one could set the patch size $K = 4$ and conduct patch-level training on $\lambda = 2/3$ of the training data. Employing the above settings ($K = 4, \lambda = 2/3$), we train a series of LLMs of varying sizes (370M-2.7B parameters) on the Pile dataset (Gao et al., 2020). Figure 2 illustrates the trend of NLL loss against the number of training tokens for the 370M model. After initialization with patch-level training, the model experiences a rapid decrease in loss as it continues token-level training on the remaining data. Remarkably, it achieves an even lower loss in comparison with training from scratch, while reducing training costs by half. By further adjusting the hyperparameter settings, even higher acceleration rates can be achieved, with only a slight sacrifice in model performance.

Figure 3: Overview of patch-level training. Every consecutive $K$ token embeddings are averaged to form the patch embedding. The sequence model is fed the patch sequence and trained to predict the next patch. The cross-entropy loss is computed based on each patch prediction vector and all the subsequent $K$ tokens in its next patch.
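The cost accounting from Figure 1 reduces to one line of arithmetic; the helper below (our own naming) reproduces the 0.5× and 0.4× figures quoted above.

```python
def relative_training_cost(K: int, lam: float) -> float:
    """Overall cost relative to pure token-level training: a fraction `lam`
    of the data is processed as patches at 1/K of the token-level cost, and
    the remaining 1 - lam at full cost (Figure 1)."""
    return lam / K + (1.0 - lam)

assert abs(relative_training_cost(K=4, lam=2 / 3) - 0.5) < 1e-12  # the headline setting
assert abs(relative_training_cost(K=4, lam=4 / 5) - 0.4) < 1e-12  # most aggressive setting
```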
2 PATCH-LEVEL TRAINING

In this section, we outline the patch-level training approach for large language models, as illustrated in Figure 3. Initially, the token sequence is transformed into a patch sequence by compressing every $K$ consecutive tokens into a single patch. This patch sequence is then fed into the sequence model, and the model is trained to predict all tokens in the next patch. The knowledge acquired during patch-level training is subsequently transferred to the token-level model. Specifically, we use the parameters obtained from the patch-level model to initialize the token-level model, and then proceed with token-level training on the remaining data. While formulating the patch-level model structure, our goal is to minimize the discrepancy between the patch-level and token-level models, thereby ensuring that the knowledge gained during patch-level training can be smoothly transferred to the token-level model. In practice, we observed that it is crucial for at least one end (input or output) of the patch-level model to remain consistent with the token-level model, which acts as an anchor to align their representations. The detailed ablation is presented in Section 3.7. Given the context length $T$ for token-level training, we set the context length for patch-level training to $KT$, which is then compressed to a patch sequence of length $T$ to maintain consistency with the subsequent token-level training. To avoid introducing unnecessary parameters during token-to-patch compression, we represent the patch embedding as the average of its associated token embeddings. Let $p_i$ be the $i$-th patch, $x_{iK+k}$ be the $k$-th token in the $i$-th patch, and $E$ be the embedding function. The patch embedding is:

$$E(p_i) = \frac{1}{K}\sum_{k=0}^{K-1} E(x_{iK+k}). \quad (2)$$

The patch-level model is trained through next patch prediction, i.e., predicting the $K$ tokens in the next patch. The simultaneous prediction of multiple tokens has been explored in speculative decoding, which typically employs multiple output heads, with each head responsible for predicting a distinct token (Cai et al., 2024; Lin et al., 2024). However, this approach would also entail additional parameters that may be unfavorable for the subsequent knowledge transfer. Instead, we maintain a single output head and make its prediction cover all tokens in the next patch. Specifically, we calculate the cross-entropy loss for all the subsequent $K$ tokens based on the same patch prediction $P(\cdot \mid p_{<i})$, resulting in the following loss function:

$$\mathcal{L}_{patch} = -\sum_{i=1}^{T}\sum_{k=0}^{K-1} \log P(x_{iK+k} \mid p_{<i}). \quad (3)$$

Since the model finally works at the token level, it is essential to reserve some training data to adapt the patch-level model to the token level. Specifically, we conduct patch-level training on a fraction $\lambda$ of the training data, and then use the resulting parameters to initialize the token-level model. Following this, the token-level model continues training on the remaining data to adapt the knowledge gained during patch-level training to the token level. As illustrated in Figure 1, the overall training costs are reduced to $\lambda/K + 1 - \lambda$ times the original costs of token-level training. When the amount of training data is limited, this approach can also be utilized for efficient multi-epoch training.
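To make Eqs. (2)-(3) concrete, here is a minimal PyTorch sketch of the patch-level loss. It assumes a decoder-style `model` mapping a sequence of input embeddings to next-unit logits over the vocabulary; the function name and shift bookkeeping are illustrative rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def patch_level_loss(model, embed, tokens, K):
    """Average each run of K token embeddings into a patch embedding (Eq. 2),
    then score all K tokens of the next patch against the single patch
    prediction vector (Eq. 3)."""
    B, TK = tokens.shape
    T = TK // K
    tok_emb = embed(tokens[:, :T * K])                    # (B, T*K, d)
    patch_emb = tok_emb.view(B, T, K, -1).mean(dim=2)     # (B, T, d), Eq. (2)
    logits = model(patch_emb)                             # (B, T, vocab)
    targets = tokens[:, K:T * K].reshape(B, T - 1, K)     # tokens of patches 1..T-1
    preds = logits[:, :-1].unsqueeze(2).expand(-1, -1, K, -1)
    return F.cross_entropy(preds.reshape(-1, preds.size(-1)),
                           targets.reshape(-1))           # Eq. (3)
```

In the data-limited regime, the same $\lambda$ split carries over directly from data fractions to epochs.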
For example, given a budget of $N$ epochs, we can conduct patch-level training for the first $N\lambda$ epochs, and then switch to token-level training for the remaining $N(1-\lambda)$ epochs.

3 EXPERIMENTS

3.1 SETUP

**Datasets.** We evaluate our approach on standard language modeling tasks, using the Pile dataset (Gao et al., 2020) containing 360B tokens for training.[1] We assess the performance of LLMs from multiple aspects, including their perplexity, zero-shot accuracy, and instruction-following ability. Perplexity is calculated on the WikiText-103 test set (Merity et al., 2017). We evaluate the zero-shot capabilities of language models on 6 NLP benchmarks, including MMLU (Hendrycks et al., 2021), HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020), WinoGrande (Sakaguchi et al., 2020), ARC-E, and ARC-C (Clark et al., 2018).[2] For the pre-trained LLMs, we conduct instruction fine-tuning using the Alpaca dataset by GPT-4 (Taori et al., 2023), and then evaluate their instruction-following abilities on MT-Bench (Zheng et al., 2024).

**Models.** We use the Transformer backbone (Vaswani et al., 2017) and adopt most of the architecture designs from LLaMA (Touvron et al., 2023a). We apply pre-normalization using RMSNorm (Zhang & Sennrich, 2019), use the SwiGLU activation function (Shazeer, 2020), and use rotary positional embeddings (Su et al., 2021). We also apply FlashAttention-2 (Dao, 2024) to accelerate attention computation. We scale the model dimension and obtain 4 different sizes of Transformers: Transformer-370M (hidden_size=1024, intermediate_size=2752, hidden_layers=24, attention_heads=16), Transformer-780M (hidden_size=1536, intermediate_size=4128, hidden_layers=24, attention_heads=16), Transformer-1.3B (hidden_size=2048, intermediate_size=5504, hidden_layers=24, attention_heads=16), and Transformer-2.7B (hidden_size=2560, intermediate_size=6880, hidden_layers=32, attention_heads=32).

**Implementation Details.** Unless otherwise specified, the patch size $K$ is 4. The context length for token-level training is 2048. For patch-level training, the context length is $K \times 2048$. The global batch size is 2M tokens, and the total number of training steps is $N = 180000$. For patch-level training, the number of training steps is $N\lambda$, after which the model proceeds with token-level training for $N(1-\lambda)$ steps. After patch-level training, only the obtained model parameters are used for initialization; all other states, such as the optimizer and learning rate scheduler, are reset. We use the tokenizer of LLaMA2, whose vocabulary size is 32000. Our models are optimized by the AdamW optimizer (Loshchilov & Hutter, 2019) with $\beta_1 = 0.9$, $\beta_2 = 0.95$, $\epsilon = 1e{-8}$. The learning rate is $3e{-4}$ and a cosine learning rate schedule is applied with a warmup of 2000 steps. We use a weight decay of 0.1 and gradient clipping of 1.0, and no dropout is applied during training.

3.2 MAIN RESULTS

We train a series of LLMs of varying sizes (370M-2.7B parameters) on the Pile dataset. We employ patch-level training with the settings $K = 4$, $\lambda = 2/3$, which theoretically reduces the training costs to $0.5\times$. Please refer to Appendix B for the actual speed measurement. For the Transformer-370M, we also explore other choices of $\lambda$ to evaluate its impact. Table 1 presents the performance comparison of the resulting models.
Remarkably, our approach consumes only half of the compute and incurs almost no performance loss. It matches the baseline model in terms of perplexity and even demonstrates a consistent gain in zero-shot evaluations, raising the average accuracy by approximately 0.5%. The model performance is also influenced by the choice of $\lambda$. Within the range of values we set, a smaller $\lambda$ leads to better model performance but also entails larger training costs. A more detailed study of the hyperparameter $\lambda$ is presented in Section 3.6.

[1] Previous works generally refer to the Pile dataset as having 300B tokens, but our actual measurement is 360B. The discrepancy is likely due to differences in tokenizers; we use the LLaMA2 tokenizer, which has a relatively small vocabulary, possibly resulting in more tokens. The perplexity scores are also incomparable with models using other tokenizers.
[2] https://github.com/EleutherAI/lm-evaluation-harness

Table 1: Performance comparison of Transformers trained on the Pile dataset. $\lambda$ denotes the proportion of training data used for patch-level training, with the patch size $K$ fixed at 4. 'PPL' represents the perplexity score on the WikiText-103 test set. For zero-shot evaluations, we report the normalized accuracy across 6 NLP benchmarks. 'Average' means the average zero-shot accuracy.

| Model Type | Cost | PPL | MMLU | HellaSwag | PIQA | WinoG | ARC-E | ARC-C | Average |
|---|---|---|---|---|---|---|---|---|---|
| Transformer-370M | 1.0× | 10.9 | 22.9 | 40.8 | 67.5 | 53.1 | 44.3 | 24.7 | 42.2 |
| + Patch (λ = 1/2) | 0.625× | 10.6 | 23.5 | 42.0 | 67.9 | 52.1 | 46.1 | 25.6 | 42.9 |
| + Patch (λ = 2/3) | 0.5× | 10.7 | 23.7 | 41.1 | 68.0 | 51.9 | 46.0 | 24.2 | 42.5 |
| + Patch (λ = 4/5) | 0.4× | 11.0 | 23.3 | 40.5 | 67.5 | 51.7 | 44.9 | 24.5 | 42.1 |
| Transformer-780M | 1.0× | 9.2 | 24.4 | 48.5 | 69.0 | 55.4 | 49.0 | 26.7 | 45.5 |
| + Patch (λ = 2/3) | 0.5× | 9.1 | 24.1 | 49.1 | 70.6 | 54.8 | 51.3 | 28.2 | 46.3 |
| Transformer-1.3B | 1.0× | 8.2 | 23.9 | 54.5 | 71.2 | 57.3 | 55.1 | 28.9 | 48.5 |
| + Patch (λ = 2/3) | 0.5× | 8.2 | 24.3 | 54.1 | 71.6 | 57.8 | 55.6 | 30.4 | 49.0 |
| Transformer-2.7B | 1.0× | 7.1 | 25.3 | 62.2 | 74.3 | 61.5 | 61.2 | 34.3 | 53.1 |
| + Patch (λ = 2/3) | 0.5× | 7.2 | 25.4 | 61.9 | 74.9 | 62.4 | 61.9 | 34.6 | 53.5 |

Figure 4: Instruction-following abilities evaluated on MT-Bench, a multi-turn question set.

We further conduct instruction fine-tuning using the Alpaca dataset by GPT-4 to examine the impact of patch-level training on the model's instruction-following ability. We evaluate our models using MT-Bench, a multi-turn question set, and present the experimental results in Figure 4. As can be seen, our approach maintains a similar instruction-following ability to the original models, with some experiencing a score decrease (Transformer-370M, Transformer-1.3B) and others showing an improvement (Transformer-780M, Transformer-2.7B), which can be viewed as regular variation. Our primary motivation for patch-level training is to enhance the model's knowledge acquisition efficiency. Interestingly, the experimental results show that this approach can sometimes lead to performance improvements, which is beyond our initial expectation. We initially thought that the extended context length in patch-level training contributes to the improvements. However, when we decreased the context length during patch-level training from $KT = 8192$ to $T = 2048$ for Transformer-370M ($\lambda = 1/2$), the model performance only experienced a slight decline (PPL ↑ 0.06, zero-shot accuracy ↓ 0.2), yet still surpassed the baseline, implying that context length is not the primary factor.
We hypothesize that two other factors might be responsible for these improvements: firstly, the patch-level initialization could potentially serve as a form of regularization; secondly, by compressing consecutive tokens into patches, the model might more effectively recognize and capture long-range dependencies due to the reduced token distance.

Table 2: Performance comparison of Transformers trained on 60B tokens for 6 epochs.

| Model Type | Cost | PPL | MMLU | HellaSwag | PIQA | WinoG | ARC-E | ARC-C | Average |
|---|---|---|---|---|---|---|---|---|---|
| Transformer-370M | 1.0× | 11.0 | 23.6 | 40.8 | 66.5 | 50.8 | 44.8 | 25.2 | 42.0 |
| + Patch (λ = 1/2) | 0.625× | 10.4 | 23.9 | 43.3 | 67.5 | 55.6 | 44.4 | 26.1 | 43.5 |
| + Patch (λ = 2/3) | 0.5× | 10.5 | 24.7 | 42.4 | 67.9 | 51.9 | 45.3 | 24.7 | 42.8 |
| + Patch (λ = 4/5) | 0.4× | 10.7 | 23.0 | 41.5 | 67.0 | 52.0 | 45.1 | 25.4 | 42.3 |

3.3 MULTI-EPOCH TRAINING

Given that patch-level training consumes training data more rapidly, it is more data-hungry than token-level training. Consequently, it is essential to consider scenarios where training data is relatively limited and to assess the performance of patch-level training when training data is reused for multi-epoch training (Muennighoff et al., 2023). We randomly extract a subset of 60B tokens from the Pile dataset and increase the number of training epochs to $N = 6$. In this way, the model is first trained at the patch level for $N\lambda$ epochs, followed by $N(1-\lambda)$ epochs of token-level training. The results presented in Table 2 demonstrate that patch-level training maintains its superiority in terms of training efficiency and performance in multi-epoch training. Intriguingly, when both consume 360B tokens, patch-level training for multiple epochs even outperforms its single-epoch variant in Table 1. This unexpected advantage may stem from the synergistic effect of integrating patch-level and token-level training on the same data, which likely enhances model regularization. It also suggests that our approach can be data-efficient by initializing the model with patch-level training for one or multiple epochs, offering a promising direction for boosting model performance.

3.4 SCALING PROPERTIES

In the above, we have validated the effectiveness of patch-level training across several model sizes (370M-2.7B), using a training set of 360B tokens. However, state-of-the-art LLMs are generally trained with model sizes and datasets that are at least an order of magnitude larger than our settings. Therefore, it is crucial to know the scaling properties of patch-level training, i.e., how it performs when applied to larger training datasets and models. In Table 1, we notice a trend related to the model size: the performance advantage of patch-level training appears to decrease as the model size increases. Table 3 describes this trend more precisely, indicating that the model with patch-level training experiences smaller performance gains from the increase in model size. On the other hand, Table 4 presents the changes in cross-entropy loss when maintaining a constant model size and varying the size of the training data. As the data size increases, the performance of patch-level training improves at a faster rate compared to the baseline model.

Table 3: Test losses when scaling the model size from 370M to 2.7B and training on the Pile dataset (360B tokens). '↓' indicates the loss reduction compared to the previous model size.
| Model Size | 370M | 780M | 1.3B | 2.7B |
|---|---|---|---|---|
| Transformer | 2.392 | 2.217 (↓0.175) | 2.102 (↓0.115) | 1.961 (↓0.141) |
| + Patch (λ = 2/3) | 2.368 | 2.208 (↓0.160) | 2.108 (↓0.100) | 1.980 (↓0.128) |

Table 4: Test losses of Transformer-370M when scaling the size of training data from 45B to 360B. '↓' indicates the loss reduction compared to the previous data size. The batch size is adjusted to maintain a consistent number of training steps.

| Data Size | 45B | 90B | 180B | 360B |
|---|---|---|---|---|
| Transformer | 2.526 | 2.460 (↓0.066) | 2.423 (↓0.037) | 2.392 (↓0.031) |
| + Patch (λ = 2/3) | 2.553 | 2.468 (↓0.085) | 2.413 (↓0.055) | 2.368 (↓0.045) |

Figure 5: Test losses of Transformer-370M w.r.t. the number of processed tokens. Models are initialized by patch-level training with patch size $K$ (legend: $K$ = 2, 4, 8, 16, and a model without patch-level initialization).

This phenomenon can be explained from the perspective of knowledge transfer. As the data size increases, more training data is employed to adjust the model from patch level to token level, facilitating a smoother knowledge transfer process. However, an increase in model size implies a greater number of model parameters to be transferred to the token level, which raises the level of transfer difficulty and necessitates more training data. Based on this explanation, patch-level training is better suited for scenarios with abundant training data. Note that the above conclusions are drawn under the settings $K = 4$, $\lambda = 2/3$, and may vary with changes in the patch size $K$ and the patch-level data fraction $\lambda$. At present, we have not identified a general scaling law for patch-level training that incorporates $K$ and $\lambda$. Instead, we have made some observations regarding their effects on model performance, which are discussed in the following.

3.5 EFFECT OF PATCH SIZE $K$

We investigate the effect of patch size under the settings of 90B training tokens, 370M model parameters, a batch size of 512K, and $\lambda = 1/2$. The results are shown in Figure 5. Across different patch sizes, the loss curves for patch sizes $K = 2$ and $K = 4$ are nearly indistinguishable, while further increasing the patch size to 8 or 16 results in a certain performance decline. Despite this, these models still exhibit significant performance improvements when compared to the model trained from scratch, which does not benefit from the initialization of patch-level training. Overall, the patch size $K = 4$ strikes a favorable trade-off between training efficiency and performance. Considering that larger patch sizes can process more data at the same cost, we also experiment with patch-level training using $K = 8$ on 90B tokens, which costs similar compute to $K = 4$ on 45B tokens. Following this, both models proceed with token-level training on 45B tokens, and, coincidentally, their loss curves are nearly identical. In this context, the advantage of $K = 4$ lies in its data efficiency, as it achieves similar performance while consuming less data.

3.6 EFFECT OF $\lambda$

The hyperparameter $\lambda$ allocates the ratio of training data between patch-level and token-level training. A larger $\lambda$ results in more tokens being compressed into patches, leading to a higher acceleration rate, but it may also leave insufficient data to adjust the model to the token level.
In this section, we investigate the effect of $\lambda$ under the settings of 370M model parameters, a batch size of 512K, and a patch size of $K = 4$. We consider two scenarios:

1. Unlimited computational budget: We assess the impact of varying $\lambda$ while keeping the data size constant (90B tokens). The results are shown in Figure 6.
2. Unlimited training data: We identify the optimal $\lambda$ under a fixed amount of computational budget (tokens + patches = 56.25B). For example, when $\lambda = 1/2$, the size of the training data should be 90B tokens, with 45B tokens being compressed into 11.25B patches. The results are shown in Figure 7.

Figure 6: Effect of varying $\lambda$ while keeping the data size constant.

Figure 7: Effect of varying $\lambda$ while keeping the computational cost constant.

Figure 6 shows that the model performance initially rises and later falls as $\lambda$ increases, with a turning point near $\lambda = 1/4$. The performance improvements when $\lambda < 1/4$ can be attributed to the inherent benefits of patch-level training, as analyzed in Section 3.2. When $\lambda$ exceeds $3/4$, further increasing $\lambda$ leaves insufficient data to adjust the model to the token level, leading to a rapid decline in performance. Figure 7, on the other hand, shows that when the computational budget is limited, the optimal value for $\lambda$ is around $2/3$. Note that these conclusions are specific to the current settings and should be used as a reference only. The optimal $\lambda$ may vary depending on factors such as data size and patch size. To determine the optimal value of $\lambda$ in any scenario, it is essential to establish the scaling law for patch-level training.

3.7 EFFECT OF ARCHITECTURE

In the current setup, the architecture of the patch-level model is identical to that of the token-level model, facilitating smoother model transfer but also raising some concerns. For instance, the token-to-patch transformation and the prediction of the next patch do not take into account the order of tokens within a patch. One may conjecture that patch-level models might benefit from adopting architectures better suited for representing and predicting patches, where the additional parameters introduced in such architectures could simply be discarded in the next stage. We evaluate this strategy under the settings of 90B training tokens, 370M model parameters, a batch size of 512K, and $\lambda = 1/2$. Specifically, we incorporate a linear projection layer at both the input and output sides of the model. On the input side, the conversion of token embeddings into patch embeddings is facilitated through a linear projection $w_{in} \in \mathbb{R}^{Kd \times d}$. On the output side, a linear projection $w_{out} \in \mathbb{R}^{d \times Kd}$ is employed to transform patch-level representations back into token-level representations, followed by $K$ softmax layers to obtain the probability distribution of each token. The effects of these two modules are detailed in Table 5.

Table 5: Impact of architecture modifications in patch-level models. '+InputProj' and '+OutputProj' denote the incorporation of linear projections at the model's input and output, respectively.
Table 5: Impact of architecture modifications in patch-level models. '+InputProj' and '+OutputProj' denote the incorporation of linear projections at the model's input and output, respectively. 'Patch PPL' and 'Token PPL' are the perplexities of the patch-level and token-level models, respectively.

| **Model** | **Transformer** | **+InputProj** | **+OutputProj** | **+Both** |
|---|---|---|---|---|
| Patch PPL | 159.17 | 146.93 | 99.48 | 86.50 |
| Token PPL | 11.46 | 11.63 | 11.50 | 12.33 |

Overall, while these modifications are effective in reducing the patch-level loss, they do not translate into benefits for the subsequent token-level training. In particular, when linear projections are applied at both the model input and output, the performance of the subsequent token-level model declines significantly. This suggests that at least one end (input or output) of the patch-level model must remain consistent with the token-level model, acting as an anchor that aligns their representations. It also shows that there is no direct correlation between the patch-level loss and the final performance of the token-level model, so a large loss during patch-level training does not imply ineffective learning. We therefore opt to preserve the original Transformer architecture for patch-level training.

Figure 8: Percentage of activated neurons for models of different patch sizes. Output neurons of each model layer (FFN output) with an absolute value greater than 0.5 are classified as activated.

3.8 N EURON A CTIVATION

In this section, we quantitatively explain why patch-level training leads to better learning efficiency, from the perspective of neuron activation. The training of LLMs is essentially a process of embedding knowledge from the training set into the model's parameters. During this process, the model employs all of its parameters to encode every token and updates the relevant parameters based on gradient feedback. We argue that this is an inefficient process for large models: the knowledge encapsulated in each token is associated with only a small subset of model parameters, so the number of effectively activated and updated parameters is limited. We substantiate this by measuring the percentage of activated neurons for models of different patch sizes, as depicted in Figure 8 (a sketch of this measurement follows).

In the token-level model (K = 1), only a small proportion of neurons are activated, suggesting that the model has enough capacity to handle text units of higher information density and thus indicating significant room for improvement in training efficiency. By grouping multiple tokens into a patch, the information density processed at each step increases, which manifests as higher neuron activation rates. From this observation, it becomes evident that patch-level training makes more comprehensive use of the model's capabilities, and therefore achieves higher training efficiency.
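A minimal sketch of the measurement behind Figure 8: count the fraction of FFN output neurons whose absolute value exceeds the 0.5 cutoff stated in the caption. The hook-based usage in the trailing comment is an illustrative assumption about where the activations are collected.

```python
import torch

def activation_rate(ffn_output: torch.Tensor, threshold: float = 0.5) -> float:
    """Fraction of FFN output neurons classified as activated (|value| > threshold)."""
    return (ffn_output.abs() > threshold).float().mean().item()

# Illustrative usage: attach a forward hook to one layer's FFN and collect rates.
# rates = []
# layer.mlp.register_forward_hook(lambda m, inp, out: rates.append(activation_rate(out)))
```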
4 R ELATED W ORK

**Model Growth.** Our approach draws inspiration from transfer learning, reducing training costs by transferring knowledge acquired at a lower training cost (patch level) to a model with a higher training cost (token level). A similar strategy has been employed in studies of model growth, which train large models at relatively low cost by progressively increasing the model size during training. For example, Gong et al. (2019) and Yang et al. (2020) improve training efficiency by transferring knowledge from a shallow model to a deep model, where model layers are progressively stacked during training. Gu et al. (2021) further propose progressive compound growth, where the model grows along multiple dimensions during training, including the context length, model width, and number of layers. Subsequent studies primarily focus on the initialization problem in the model-growth process, i.e., how to expand the small model into a large one. Chen et al. (2022) and Yao et al. (2024) aim to achieve function-preserving growth (Chen et al., 2015), in which the post-growth model computes the same function as the pre-growth model, intuitively ensuring smooth knowledge transfer. Wang et al. (2023) and Pan et al. (2023) introduce learnable linear operators that map the parameters of the small model to initialize the large model. Compared to model growth, patch-level training is more flexible and generalizable, as it requires neither specialized model architectures nor carefully crafted model-mapping strategies. More importantly, patch-level training merely alters the information density of text units while preserving the full set of model parameters. It therefore enables the model to develop its complete capabilities without compromising performance.

**Multi-Token Prediction.** Our approach improves training efficiency by concurrently predicting all tokens in the next patch. Similar attempts at multi-token prediction have been made in the past to improve inference efficiency, including non-autoregressive generation (Gu et al., 2018) and speculative decoding (Stern et al., 2018; Leviathan et al., 2023; Chen

Idea Generation Category:
1 (Cross-Domain Application)
dDpB23VbVa
# P OLICY D ESIGN IN L ONG -R UN W ELFARE D YNAMICS

Jiduan Wu [1,2], Rediet Abebe [1,3,*], Moritz Hardt [1,*], and Ana-Andreea Stoica [1,*]

1 _Max Planck Institute for Intelligent Systems, Tübingen and Tübingen AI Center_
2 _Department of Computer Science, ETH Zürich_
3 _ELLIS Institute, Tübingen_

A BSTRACT

Improving social welfare is a complex challenge requiring policymakers to optimize objectives across multiple time horizons. Evaluating the impact of such policies presents a fundamental challenge, as those that appear suboptimal in the short run may yield significant long-term benefits. We tackle this challenge by analyzing the long-term dynamics of two prominent policy frameworks: Rawlsian policies, which prioritize those with the greatest need, and utilitarian policies, which maximize immediate welfare gains. Conventional wisdom suggests these policies are at odds, as Rawlsian policies are assumed to come at the cost of reducing the average social welfare, which their utilitarian counterparts directly optimize. We challenge this assumption by analyzing these policies in a sequential decision-making framework where individuals' welfare levels stochastically decay over time, and policymakers can intervene to prevent this decay. Under reasonable assumptions, we prove that interventions following Rawlsian policies can outperform utilitarian policies in the long run, even when the latter dominate in the short run. We characterize the exact conditions under which Rawlsian policies can outperform utilitarian policies. We further illustrate our theoretical findings using simulations, which highlight the risks of evaluating policies based solely on their short-term effects. Our results underscore the necessity of considering long-term horizons in designing and evaluating welfare policies; the true efficacy of even well-established policies may only emerge over time.

1 I NTRODUCTION

An important application of sequential decision making is the problem of promoting long-run social welfare through a sequence of targeted interventions in a population. Policies for this problem face a two-fold challenge. On the one hand, they must be effective at optimizing the long-term objective. On the other hand, they must appeal to the political and normative expectations of policymakers. In particular, simple policies supported by established moral and political arguments are desirable.

Two families of policies have been particularly influential in the context of Western welfare programs. One targets the individuals with the largest immediate welfare gain. The other targets those most seriously in need. While the former derives from utilitarian moral principles, the latter is associated with Rawls's theory of justice. Many scholars, however, have criticized Rawlsian policy for its presumed failure to maximize social welfare. Indeed, there is no obvious reason why allocating resources to those of lowest welfare should also maximize average welfare in the long run.

In this work, we study a stochastic dynamic model of long-term welfare in a population. Surprisingly, under reasonable assumptions on the welfare dynamics, the Rawlsian policy turns out to outperform an idealized utilitarian policy that chooses the individual of largest treatment effect at each step. This is the case even though the Rawlsian policy is suboptimal on a short-term horizon. Although our motivation is social welfare, our results hold a broader lesson for sequential decision making.
Simple policies can be highly effective, but their long-run efficiency may not be apparent on a short time horizon.

*Alphabetical order.

1.1 O UR C ONTRIBUTIONS

We propose a multi-agent stochastic dynamical model to describe long-run welfare in a population of individuals. Our model draws from classical economic theory of industrial project management, extending so-called _attention allocation policies_ (Radner & Rothschild, 1975) into social policies. In our model, each individual i has a welfare level U_i(t) at each timestep t. At each timestep, a social planner allocates an intervention to one or more of N agents using some policy π. The welfare values evolve according to a stochastic dynamical system. Absent an intervention, an individual's welfare decays in expectation according to a function g_i(U_i(t)) > 0. When the social planner allocates an intervention to an individual, however, the individual's welfare increases in expectation according to a function f_i(U_i(t)) > 0. We are interested in comparing Rawlsian and utilitarian policies based on the _long-term social welfare_ they achieve, i.e., the asymptotic individual welfare increase, defined as $\lim_{t \to \infty} (U_i(t) - U_i(0))/t$, averaged over all individuals (a finite-horizon estimator of this quantity is sketched at the end of this subsection).

We make two substantive assumptions about the welfare dynamics. The well-known Matthew effect (Merton, 1968; Rigney, 2010), or "rich-get-richer" while "poor-get-poorer" dynamic, suggests that inequality amplifies over time. We capture this effect by assuming that the return function f_i(·) is increasing with welfare, while the decay function g_i(·) decreases with welfare. The other assumption is a uniform boundedness assumption: the bounds of the return-on-intervention and decay functions are the same for all individuals. In other words, no individual can achieve a highest or lowest possible level of return or decay that is much higher or much lower than anyone else's.

Under these assumptions, we find a sufficient condition for comparing policies. This condition states that a policy can, in principle, avoid the decay of any individual's welfare below 0. We call this a _survival condition_ and note that it rests on the functional form and bounds of the return and decay functions. Informally, our main result shows:

Under the survival condition, Matthew effect, and uniform boundedness, a Rawlsian policy will achieve better long-term social welfare than a utilitarian policy almost surely.

We complement this result by characterizing a condition under which the reverse is true: under a so-called "ruin condition" (when a policy cannot prevent an individual's unbounded welfare decay), a utilitarian policy will achieve better long-term social welfare than a Rawlsian policy almost surely.

To prove our results, we present a series of theoretical results that characterize in closed form the rate of growth of individual welfare under Rawlsian and utilitarian policies (Sections 3 and 4). Our proof extends the elegant argument of Radner & Rothschild (1975), who studied a fully homogeneous case in which the return and decay functions are constant. This generalization in turn requires a non-trivial departure from the original proof, including a variant of Lundberg's classical inequality for submartingale processes. The proof may be of independent interest for similar problems arising in sequential decision making and reinforcement learning.
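As a minimal illustration of the quantity being compared, the sketch below estimates the long-term social welfare from a finite simulated trajectory; the function name and the finite-horizon truncation of the limit are assumptions.

```python
import numpy as np

def long_term_social_welfare(U: np.ndarray) -> float:
    """Finite-horizon estimate of the long-term social welfare.

    U: array of shape (T+1, N) holding welfare trajectories U_i(t).
    Approximates R_i = lim_{t->inf} (U_i(t) - U_i(0)) / t by its value at
    the final timestep, then averages over the N individuals.
    """
    T = U.shape[0] - 1
    R = (U[-1] - U[0]) / T  # per-individual growth rates
    return float(R.mean())
```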
We illustrate our theoretical results by simulating our model with initial conditions drawn from real data from the Survey of Income and Program Participation (SIPP) of the U.S. Census Bureau in Section 5. We see a delayed effect of the Rawlsian policy: it obtains lower social welfare in the short term, yet quickly converges to a higher social welfare value than the utilitarian policy. We highlight limitations of our work and directions for future study in Section 6. Finally, we discuss potential extensions of our work (e.g., when the functions f_i, g_i violate the uniform boundedness assumption) in the appendix.

1.2 R ELATED WORK

Welfare-based social policies have a long history in economics research (Sen, 1979; Kaplow & Shavell, 2000; Adler, 2011). Although Rawlsian principles are based on distributive justice and egalitarian goals (Harsanyi, 1975; Blau & Abramovitz, 2010), debates remain regarding their efficiency compared to utilitarian policies (Arrow, 1973; Sen, 1976). The direct comparison between Rawlsian and utilitarian policies generally remains an open area of research, with some empirical and model-based comparisons made in the context of optimal taxation policies (Atkinson, 1995) and income inequality (Mongin & Pivato, 2021).

We build on the model proposed by Radner & Rothschild (1975) in the context of industrial project management, which analyzes the behavior of the system under different attention allocation mechanisms. We generalize and re-purpose their model by equipping it with various functional forms of the return and decay functions that capture societal behaviors and by analyzing several additional policies. Our modeling choices include the Matthew effect (Merton, 1968): individuals with higher levels of welfare may benefit the most from interventions ("rich-get-richer"), whereas individuals with low wealth experience more severe income shocks absent any interventions from the social planner ("poor-get-poorer"). Such effects have been documented in the context of economic inequality (Rigney, 2010; Stiglitz, 2012) and optimal taxation policy for reducing societal inequality (Atkinson, 2015).

Closely related to our work are recent modeling frameworks for wealth fluctuations and policy design. Two recent papers develop algorithms for selecting the optimal candidates for intervention, subject to different objectives: Abebe et al. (2020) analyze two policy objectives in a population that undergoes income shocks and propose algorithms for allocating subsidies optimally; their objectives aim to minimize the probability of ruin for any given individual. Arunachaleswaran et al. (2022) analyze the theoretical complexity and give approximation algorithms for the optimal selection of candidates under a social welfare and a Rawlsian objective, considering a transition matrix of welfare states. In addition, Heidari & Kleinberg (2021) study the optimal policy for allocating interventions in a population with two welfare states (advantaged and disadvantaged) over a finite time horizon. Acharya et al. (2023) study the effect of interventions in a welfare-based dynamic system with feedback loops in societal inequality. Their interventions include allocating subsidies to those most in need, without a comparison of the effects of different types of policies on social welfare. In contrast, we study the effect of different policies in the long run, formulating a sufficient condition for a Rawlsian policy to achieve better welfare than a utilitarian policy.
A related line of work focuses on reinforcement learning algorithms for deriving optimal policies. In particular, Zheng et al. (2020) propose a framework for integrating AI into a two-level optimization problem in the context of optimal taxation policy, with subsequent work improving the generality of the model (Curry et al., 2023). Offline and online algorithms have been proposed for finding optimal policies with fairness considerations (Zimmer et al., 2021; Zhou, 2024) as well as in contexts with strategic agents (Liu et al., 2022). The problem of optimal policy selection can be tackled using a continuous-state MDP under average-reward criteria, with early works considering bounded reward rates (Doshi, 1976) and subsequent extensions that do not require boundedness (Guo & Rieder, 2006). These works provide theoretical guarantees for the _existence_ of optimal policies, convergence rates, as well as optimality gaps. Often, such works do not find tractable, closed-form solutions for the optimal policy, but rather build heuristics with theoretical guarantees that closely approximate an optimal policy.

Finally, the problem of allocating resources through objectives such as a maximin rule includes lines of work in fair division (Procaccia & Wang, 2014) as well as machine learning, often as a constraint in a larger optimization problem (Binns, 2018; Heidari et al., 2019). Other works have studied Rawlsian principles under a finite time horizon (Dwork et al., 2012; Zafar et al., 2017; Diana et al., 2021) or as a static optimization problem (Chen & Hooker, 2020; Stark et al., 2014). Some works have studied the long-term effects of fair algorithms in the context of hiring (Hu & Chen, 2018) and resource allocation (Liu et al., 2018). Kube et al. (2019) and Azizi et al. (2018) offer data-driven approaches for optimally assigning subsidies to individuals who experience homelessness; their approach uses a prioritization scheme that aims to minimize the probability that an individual re-enters homelessness, based on an automated prediction.

2 A MODEL OF WELFARE DYNAMICS AND SOCIAL POLICIES

**Preliminaries.** Consider N individuals indexed by i = 1, …, N. Each individual i has a welfare value U_i(t) at each timestep t ≥ 0. The initial welfare values U_i(0) are drawn from a distribution (e.g., a capped normal distribution; different choices of the initial distribution do not change our results). Here, welfare may represent the household income level, expenditure, monthly income, or other variables that define individual welfare.

An intervention at time t is defined through a vector **a**(t) := (a_i(t))_i, where an amount of budget a_i(t) is allocated to individual i by the social planner. The exact decision of _who_ receives an amount of budget and _how much_ they receive is made by the social planner through a _social policy_. The social planner has a budget M for allocating interventions at every timestep t ≥ 0:

$$\sum_i a_i(t) = M, \quad \text{for } M \in \mathbb{N},\ 1 \le M \le N,\ \text{and } 0 \le a_i(t) \le 1.$$

In this first analysis, we consider the case where the social planner can only allocate an integer unit to each individual, so $a_i(t) \in \{0, 1\}$.

2.1 A DYNAMIC MODEL OF WELFARE FLUCTUATIONS
Absent any intervention, we assume that the welfare of individuals fluctuates at every timestep according to a function $g_i : \mathbb{R} \to \mathbb{R}_+$, defined as a function of the welfare value of each individual i. We call $g_i(\cdot)$ the **decay function**, capturing the welfare decrease under natural conditions (e.g., income shocks due to accidents, economic conditions, or natural disasters). In contrast, we model the impact of interventions on individuals' welfare at each timestep through a function $f_i : \mathbb{R} \to \mathbb{R}_+$, defined for all individuals i. We refer to $f_i(\cdot)$ as the intervention **return function**, capturing the effect of intervening on an individual (e.g., a new job through an employment program, social benefits, or cash transfers).

Let $\mathcal{F}_t$ be a $\sigma$-algebra denoting the space of events up to time step t. We model the expected change of individual welfare between timesteps under interventions as

$$\mathbb{E}[U_i(t+1) - U_i(t) \mid \mathcal{F}_t] = a_i(t) \cdot f_i(U_i(t)) - (1 - a_i(t)) \cdot g_i(U_i(t)). \tag{1}$$

Treatment ($a_i(t) = 1$) in our model has two effects. On the one hand, the treated individual realizes the return $f_i(U_i(t))$. On the other hand, the treated individual avoids the decay $-g_i(U_i(t))$. The _individual treatment effect_ of allocating an intervention to individual i at time t therefore corresponds to the expression $f_i(U_i(t)) + g_i(U_i(t))$. Note that this quantity varies both in time and by individual. Conceptually, targeted individuals realize a positive return, whereas non-targeted individuals suffer a decay in their welfare.

2.2 S OCIAL POLICIES

A _policy π_ selects individuals for treatment at each step. This corresponds to setting the coefficients $\{a_i(t)\}$ at each timestep t. We restrict our attention to policies that allocate M units of resources to M individuals, with each individual receiving exactly one unit at each time step. Let $\mathrm{Top}_s(S)$ denote the set of s largest elements of set S. A natural utilitarian policy chooses the individuals of largest treatment effect. We call this the **max-fg policy**:

$$a_i(t) = \begin{cases} 1, & i \in \mathrm{Top}_M\left(\{f_k(U_k(t)) + g_k(U_k(t))\}_{k=1}^{N}\right), \\ 0, & \text{otherwise}. \end{cases} \qquad \text{(max-fg)}$$

Note that this policy requires full information about individual treatment effects at each time step, which may be an unrealistic requirement in many applications. A simpler alternative targets the individuals with the highest current welfare. We call this the **max-U policy**:

$$a_i(t) = \begin{cases} 1, & i \in \mathrm{Top}_M\left(\{U_k(t)\}_{k=1}^{N}\right), \\ 0, & \text{otherwise}. \end{cases} \qquad \text{(max-U)}$$

The max-U policy is _welfare-based_ and requires only welfare measurements for its implementation. This utilitarian welfare-based policy directly contrasts with the Rawlsian policy that chooses the individuals of minimum welfare at each step. We call this the **min-U policy**:

$$a_i(t) = \begin{cases} 1, & i \in \mathrm{Top}_M\left(\{-U_k(t)\}_{k=1}^{N}\right), \\ 0, & \text{otherwise}. \end{cases} \qquad \text{(min-U)}$$

Radner & Rothschild (1975) studied these policies under the names "putting out fires" for min-U and "staying with a winner" for max-U with M = 1. A minimal simulation of these dynamics and policies is sketched below.
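The following sketch simulates one step of the dynamics in Equation (1) under the three policies defined so far. The additive zero-mean Gaussian noise and its scale are illustrative assumptions, since the model only specifies the conditional expectation of the welfare increment; ties are broken by lowest index, matching the tie-breaking rule stated shortly.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_m(scores: np.ndarray, M: int) -> np.ndarray:
    """Indices of the M largest scores; stable sort breaks ties by lowest index."""
    return np.argsort(-scores, kind="stable")[:M]

def step(U: np.ndarray, f, g, M: int, policy: str = "min-U") -> np.ndarray:
    """One update of the welfare dynamics in Equation (1).

    U: current welfare levels (N,); f, g: vectorised return/decay functions.
    The noise realisation is an assumption; the text only pins down the
    conditional expectation of the increment.
    """
    if policy == "max-fg":
        scores = f(U) + g(U)
    elif policy == "max-U":
        scores = U
    elif policy == "min-U":
        scores = -U
    else:
        raise ValueError(policy)
    a = np.zeros_like(U)
    a[top_m(scores, M)] = 1.0
    noise = rng.normal(0.0, 0.1, size=U.shape)  # assumed noise model
    return U + a * f(U) - (1.0 - a) * g(U) + noise
```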
We explore a variation of the utilitarian policy that uses only knowledge of the intervention return functions $f_i(\cdot)$: the policy allocates a unit of effort to the individuals with the highest intervention return:

$$a_i(t) = \begin{cases} 1, & i \in \mathrm{Top}_M\left(\{f_k(U_k(t))\}_{k=1}^{N}\right), \\ 0, & \text{otherwise}. \end{cases} \qquad \text{(max-f)}$$

We call this **max-f**. In contrast to max-fg, the max-f policy requires only partial information about the interventions, measured through the return on interventions, which may be less costly to obtain. By analogy, we consider a variant of the Rawlsian policy that uses only knowledge of the decay functions $g_i(\cdot)$. That is, the **max-g** policy allocates a unit of effort to the individuals with the highest decay:

$$a_i(t) = \begin{cases} 1, & i \in \mathrm{Top}_M\left(\{g_k(U_k(t))\}_{k=1}^{N}\right), \\ 0, & \text{otherwise}. \end{cases} \qquad \text{(max-g)}$$

_Tie-breaking rule:_ Among individuals with the same welfare, we favor the one with the lowest index $i \in [N]$. This applies to all policies. For the policies that use treatment-effect information, max-f and max-fg, we break ties in favor of the individual with the lowest index. For the max-g policy, among individuals with the same $g_i$ value, we break ties in favor of the individual with the lowest welfare value, arguing that this best captures a Rawlsian principle. When max-g instead prioritizes the lowest-index individual, the results do not qualitatively change (see Appendix E, Figure 4).

**Policy goal.** The goal of a policy is to promote long-term social welfare. Our main results focus on the long-term social welfare comparison of Rawlsian and utilitarian policies. We capture long-term social welfare as the average asymptotic welfare gain among individuals, defined as follows.

**Definition 1** (Long-term social welfare)**.** _The long-term average social welfare induced by policy π on a population of N individuals is defined as_

$$\bar{R}_\pi := \frac{1}{N} \sum_{i=1}^{N} R_i, \qquad R_i := \lim_{t \to \infty} \frac{U_i(t) - U_i(0)}{t}, \tag{2}$$

_where $R_i$ defines the asymptotic rate of growth of individual i's welfare._

Note that the welfare level $U_i(t)$ depends on the policy π, as π determines **a**(t) at every timestep, and therefore the subsequent $U_i(t+1)$ through the model described in Equation 1.

2.3 M ODELING CHOICES

The comparison between Rawlsian and utilitarian policies depends on an important condition, called a 'survival' condition. Survival means that no individual in the population will reach negative welfare. The survival condition is necessary and sufficient for a positive probability of survival for all individuals under some policy, as noted by Radner & Rothschild (1975). Such a policy exists only under the survival condition, and Rawlsian policies are in fact examples, as we show later in Section 3. This is a sufficient condition for comparing policies in the long run. Formally, the survival condition can be stated in terms of a weighted sum of the bounds of the $f_i(\cdot)$ and $g_i(\cdot)$ functions (assuming those exist):

**Assumption 1** (Survival condition)**.** _We assume_ $\bar{\zeta}\big((f_1^-, \ldots, f_N^-), (g_1^+, \ldots, g_N^+)\big) > 0$, _where_ $\bar{\zeta} : \mathbb{R}^{2N} \to \mathbb{R}$ _is defined as_

$$\bar{\zeta}\big((x_1, \ldots, x_N), (y_1, \ldots, y_N)\big) := \left( M - \sum_{i=1}^{N} \frac{y_i}{x_i + y_i} \right) \left[ \sum_{j=1}^{N} \frac{1}{x_j + y_j} \right]^{-1}, \tag{3}$$

_and_ $f_i^+ := \sup f_i(x) > 0$, $f_i^- := \inf f_i(x) > 0$, $g_i^+ := \sup g_i(x) > 0$, $g_i^- := \inf g_i(x) > 0$.

Next, we formally state the modeling conditions that capture a Matthew effect, as motivated in the introduction, as well as a uniform boundedness condition.

Idea Generation Category:
0 (Conceptual Integration)
d8hYXbxX71
# T OWARDS N EURAL S CALING L AWS FOR T IME S ERIES F OUNDATION M ODELS

**Qingren Yao** [1,2], **Chao-Han Huck Yang** [3], **Renhe Jiang** [4], **Yuxuan Liang** [2]∗, **Ming Jin** [1], **Shirui Pan** [1]∗

1 Griffith University 2 The Hong Kong University of Science and Technology (Guangzhou) 3 NVIDIA Research 4 The University of Tokyo

∗ Correspondence to: Y. Liang <yuxliang@outlook.com> and M. Jin <mingjinedu@gmail.com>

A BSTRACT

Scaling laws offer valuable insights into the design of time series foundation models (TSFMs). However, previous research has largely focused on the scaling laws of TSFMs for in-distribution (ID) data, leaving their out-of-distribution (OOD) scaling behavior and the influence of model architectures less explored. In this work, we examine two common TSFM architectures—encoder-only and decoder-only Transformers—and investigate their scaling behavior on both ID and OOD data. These models are trained and evaluated across varying parameter counts, compute budgets, and dataset sizes. Our experiments reveal that the negative log-likelihood of TSFMs exhibits similar scaling behavior in both OOD and ID settings. We further compare the scaling properties across different architectures, incorporating two state-of-the-art TSFMs as case studies, showing that model architecture plays a significant role in scaling. The encoder-only Transformers demonstrate better scalability than the decoder-only Transformers on ID data, while the architectural enhancements in the two advanced TSFMs primarily improve ID performance but reduce OOD scalability. While scaling up TSFMs is expected to drive performance breakthroughs, the lack of a comprehensive understanding of TSFM scaling laws has hindered the development of a robust framework to guide model scaling. We fill this gap by synthesizing our findings and providing practical guidelines for designing and scaling larger TSFMs with enhanced model capabilities.

1 I NTRODUCTION

Time series analysis is an important area of data mining, facilitating decision-making and scientific inference across various domains (Zhang et al., 2023). As an important analysis task, time series forecasting has long been studied and drives a wide range of practical applications, from energy, climate, and quantitative finance to urban computing and system management (Jin et al., 2023; Nie et al., 2024; Wen et al., 2024). Various methods have been proposed for this task, ranging from classical statistical models (Hyndman & Athanasopoulos, 2013) and bespoke dynamical models (Prado, 2020) to more recent deep-learning-based approaches (Wen et al., 2022). Despite their competitive performance, these methods are typically designed for specific tasks and generalize poorly to other domains (Fan et al., 2023; Rasul et al., 2023).

Concurrently, we are witnessing a paradigm shift in time series forecasting from task-specific models to universal models, with the emergence of time series foundation models (TSFMs). Timer (Liu et al., 2024), Moirai (Woo et al., 2024), and the more recently proposed Time-MoE (Shi et al., 2024b) show trends of scaling in both data volume and model size, aiming to achieve performance breakthroughs through greater resource investment. The neural scaling law (Kaplan et al., 2020) quantitatively describes how model performance grows with the scaling of three basic training factors: model parameters, computational resources, and training dataset size.
Establishing such scaling laws is crucial for developing TSFMs, as it provides a framework for predicting expected performance gains, enabling the community to rationally allocate effort toward key designs. The exploration of scaling laws for TSFMs is still at an initial stage; recent research has primarily focused on studying ID scaling behavior (Edwards et al., 2024; Shi et al., 2024a). In practical applications, TSFMs primarily face challenges from unseen scenarios (Wang et al., 2024), where OOD forecasting capability is most critical. This raises an unresolved question: _do neural scaling laws also apply to predict out-of-distribution forecasting performance?_ Moreover, a variety of TSFM architectures have emerged, but they typically focus on performance improvements at specific scales. No studies have investigated the scaling behaviors across different architectures, leaving a key question unanswered: _how do model architectures affect scalability?_ Although we are seeing increasing investment in training resources for TSFMs, the bottlenecks and potential driving factors for developing larger TSFMs remain unclear. This raises another practical question: _how to design TSFMs from the perspective of scalability?_

In this paper, we aim to provide empirical answers to the above research questions. To investigate the scaling laws in OOD scenarios, we trained a family of encoder-only Transformer-based TSFMs, varying three basic training factors: model size, compute budget, and training set size. We evaluated their performance on both ID and OOD test sets and established scaling laws for the three training factors in each scenario. To examine the impact of model architecture on scaling behavior, we trained decoder-only Transformer-based TSFMs and compared them with the encoder-only versions. Additionally, we included two state-of-the-art TSFMs, Moirai (Woo et al., 2024) and Chronos (Ansari et al., 2024), as case studies for detailed analysis. Our experimental results suggest that the negative log-likelihood loss of TSFMs exhibits similar scaling behavior in both OOD and ID scenarios; encoder-only Transformers have similar scalability to decoder-only Transformers, with a slight advantage on ID data; and the architectural modifications introduced by the two advanced TSFMs mainly improve ID performance but compromise OOD scalability. Based on these findings and our comparative analysis, we provide design principles for TSFMs from a scaling perspective. Our contributions are summarized as follows:

- **Scaling laws across data distributions.** We extend the scaling laws for TSFMs from ID scenarios to OOD scenarios across three core training factors: model size, computational resources, and dataset size, establishing a foundation for predicting expected OOD performance gains of TSFMs.
- **Scaling laws across model architectures.** We investigate the scaling patterns of different TSFM architectures, showing that scalability varies depending on the model architecture and the design of specific modules.
- **Scaling laws-guided design principles.** We provide practical design principles for TSFMs from the perspective of data, model, and compute scaling, by analyzing the commonalities and differences in scaling behaviors across data distributions and model architectures.

2 P RELIMINARY

To investigate the scaling laws of TSFMs, we curated a large, diverse, and balanced dataset for pre-training.
Leveraging this dataset, we trained both _encoder-only_ and _decoder-only_ Transformers, together with two state-of-the-art TSFMs, Moirai and Chronos. For comparative analysis, we evaluated these models on (i) _in-distribution_ and (ii) _out-of-distribution_ test sets, focusing on key performance metrics to examine the scaling behavior across architectures.

2.1 D ATASETS

A large-scale, diverse, balanced, and high-quality pre-training dataset is the foundation for building FMs. To this end, we constructed our time series corpus for TSFM pre-training from the large-scale open time series archive LOTSA (Woo et al., 2024). The corpus comprises approximately 17B time points from 39 datasets spanning seven distinct domains. To ensure that the model performs fairly across all domains, we maintained a balanced ratio of data from different domains. Furthermore, we performed quality filtering on the corpus by requiring the signal-to-noise ratio of a time series to exceed 20 dB, ensuring that the pre-training corpus exhibits strong predictability. A detailed breakdown of the data sources is provided in Appendix A, with a summary in Table 1.

Table 1: **Dataset summary.** M indicates million and B indicates billion.

| Domain | Transport | Climate | Energy | CloudOps | Health | Sales | Web | Total |
|---|---|---|---|---|---|---|---|---|
| Datasets | 8 | 2 | 14 | 3 | 9 | 1 | 2 | 39 |
| Time points | 4.82B | 4.73B | 4.76B | 2.15B | 232M | 140M | 40M | 16.8B |
| Proportion | 28.52% | 28.06% | 28.21% | 12.76% | 1.38% | 0.83% | 0.24% | 100% |

To assess the impact of pre-training data scale on model performance, we partitioned the corpus into subsets containing 10M, 100M, and 1B time points, ensuring that each subset maintained similar diversity. For each subset, 95% of the data was allocated for model training, with the remaining 5% reserved as a validation set to evaluate in-distribution forecasting performance. Additionally, we used a subset from a widely recognized long-sequence prediction benchmark (Wu et al., 2023) to test the models' _out-of-distribution_ forecasting capabilities. To further enhance reliability, we also incorporated a subset of the Monash dataset (Godahewa et al., 2021) as additional OOD test data. The details of the dataset composition and properties are provided in Appendix A, Table 3.

Figure 1: **Architectures of Baseline Time Series Foundation Models.** As the two most widely used Transformer architectures, the encoder-only and the decoder-only Transformer are selected as our baselines. A time series is divided into multiple patches, each treated as a token and fed into the Transformer model. The shaded patches represent the future horizon to be predicted.

2.2 M ODELS

TSFMs are predominantly built upon the Transformer architecture (Wen et al., 2022). For our baseline models, we selected two widely adopted architectures: the encoder-only Transformer (Woo et al., 2024) (Moirai) and the decoder-only Transformer (Ansari et al., 2024) (Chronos). The primary distinction between them lies in the attention mechanisms applied to the inputs, as illustrated in Figure 1. To better adapt them to time series forecasting, we introduce three key modifications in the input layer, positional encoding, and prediction head. More details are given in Appendix B.

**Patch Embedding.** There are several approaches for generating inputs for Transformer-based TSFMs, including point embedding, patch embedding, and lagged-feature embedding.
Due to the high computational cost of point embedding for long sequences and the limited robustness of lagged-feature embedding, we adopt patch embedding in our models. This method, initially introduced by Vision Transformers (Dosovitskiy et al., 2020) and later adapted by PatchTST (Nie et al., 2023) for time series forecasting, divides the time series into non-overlapping segments, which are then projected into a feature space.

**Rotary Position Embedding.** Rotary position embedding (RoPE) has rapidly gained popularity as a positional encoding method in recent large language models (Su et al., 2024). Given the improved pre-training efficiency observed with RoPE (Woo et al., 2023), we adopt it as a replacement for the original Transformer's positional encoding. RoPE encodes absolute positions using a rotation matrix while embedding relative position dependencies directly into the self-attention mechanism.

**Mixture of Distributions.** Our models are designed to predict the probability distribution of future time series. However, real-world time series often exhibit complex distributions, including outliers, heavy tails, and extreme skew, which pose significant challenges for accurate modeling. To address these complexities, we incorporate a more flexible output likelihood based on Student-t mixture models (Flunkert et al., 2017). Compared to the commonly used Gaussian mixture models, Student-t mixture models offer greater robustness to outliers and heavy-tailed distributions. An empirical comparison between the two mixture distributions is shown in Appendix B, Figure 9.

Our models are characterized by several key hyper-parameters: the number of layers ($n_{\text{layer}}$), the input/output dimension of the residual stream ($d_{\text{m}}$), the dimension of the intermediate feed-forward layers ($d_{\text{ff}}$), the number of attention heads per layer ($n_{\text{heads}}$), and the dimension of the attention output ($d_{\text{head}}$). The overall model size can be expressed as

$$N \approx n_{\text{layer}} \left( 4 d_{\text{m}} n_{\text{heads}} d_{\text{head}} + 2 d_{\text{m}} d_{\text{ff}} \right) = 2 d_{\text{m}} n_{\text{layer}} \left( 2 n_{\text{heads}} d_{\text{head}} + d_{\text{ff}} \right) = 12 n_{\text{layer}} d_{\text{m}}^2, \tag{1}$$

with the standard choices $n_{\text{heads}} \cdot d_{\text{head}} = d_{\text{m}} = d_{\text{ff}}/4$, where the embedding layer, prediction head, biases, and other sub-leading terms are excluded for cleaner scaling laws. The embedding layer uses a patch size of 32 with $32 d_{\text{m}}$ parameters. The mixture-distribution prediction head comprises multiple independent linear layers that separately predict each Student-t mixture parameter for a patch, with $512 d_{\text{m}}$ parameters in total. In this study, we explore models with ∼10³ to ∼10⁸ trainable parameters (a sketch of this parameter count is given below).
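As a quick worked check of Equation (1) and the layer sizes quoted above, the following sketch computes the leading-order parameter count together with the embedding and prediction-head terms; the function names are illustrative.

```python
def approx_param_count(n_layer: int, d_m: int) -> int:
    """Leading-order count from Equation (1): N ≈ 12 · n_layer · d_m²,
    assuming n_heads · d_head = d_m and d_ff = 4 · d_m."""
    return 12 * n_layer * d_m ** 2

def full_param_count(n_layer: int, d_m: int) -> int:
    """Adds the patch embedding layer (32 · d_m) and the mixture-distribution
    prediction head (512 · d_m) described in the text."""
    return approx_param_count(n_layer, d_m) + 32 * d_m + 512 * d_m

# e.g. a 6-layer model with d_m = 256: ~4.7M core parameters.
print(approx_param_count(6, 256), full_param_count(6, 256))
```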
2.3 T RAINING AND E VALUATION D ETAILS

In this study, we focus exclusively on univariate time series forecasting to avoid the confounding effects introduced by multivariate time series, such as variable interactions, correlations, and the complexity of modeling multivariate relationships. Future research will address these factors, aiming to establish more comprehensive scaling laws for multivariate time series models.

**Training Details.** Our training objective is to optimize the mixture-distribution log-likelihood. We use the AdamW optimizer with a batch size of 128 and a maximum learning rate of 10⁻³, with a linear warm-up over 10⁴ training steps followed by cosine decay for the remaining 9 × 10⁴ steps. To facilitate learning data representations across diverse domains with varying series lengths and sample sizes, we visit each sample with probability $p_i = t_i / T$, where $t_i$ is the number of time points in the series and $T$ is the total number of time points in the corpus. In addition, we follow the approach used in Moirai (Woo et al., 2024) and Timer (Liu et al., 2024) by capping the sampling probability at 0.05 to ensure a more balanced contribution from each dataset. We then randomly select a segment from each chosen sample.

**Evaluation Details.** We evaluate the model on a randomly selected 10% subset of the test data every 10³ steps to reduce computational costs. For performance measurement, we observed that non-normalized metrics such as MAE and MSE are highly sensitive to the amplitude of the time series, often causing the overall average to be disproportionately influenced by high-amplitude datasets. To mitigate this issue, we primarily use the normalized metric mean absolute percentage error (MAPE), along with the negative log-likelihood (NLL), to assess forecasting performance. For a more comprehensive understanding of TSFM scaling laws, we also include additional results using symmetric mean absolute percentage error (SMAPE), mean absolute scaled error (MASE), and continuous ranked probability score (CRPS) in Appendix D.4. Detailed descriptions of these metrics are provided in Appendix C.3.

3 S CALING L AWS FOR T IME S ERIES F OUNDATION M ODELS

In this section, we first present experimental results using the encoder-only Transformer to explore scaling laws across different data distributions. We then conduct a comparative study of the scaling behavior of encoder-only and decoder-only TSFMs, Chronos and Moirai, to investigate how various scaling factors influence the characteristics of time series models.

3.1 S CALING L AWS A CROSS D ATA D ISTRIBUTIONS

**Parameter Scaling.** In Figure 2, we display the ID and OOD performance of a wide variety of encoder-only Transformers, ranging from small models with 1K parameters to large models with 100M parameters. We trained the models on the full pre-training corpus to convergence and report the minimum NLL and MAPE.

Figure 2: **Parameter Scaling.** The scaling effect of total trainable model parameters on the in-distribution (ID) and out-of-distribution (OOD) forecasting performance, evaluated using the NLL and MAPE metrics. When evaluated with NLL, both ID and OOD results follow an approximate power-law scaling with parameter count, exhibiting consistent trends across data distributions. The blue and red horizontal dashed lines represent the baselines of the exponential smoothing (ETS) forecasting method.

We can see that both ID and OOD performance roughly follow power-law behavior over five orders of magnitude in model size. Formally, the power law can be expressed as

$$L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}, \tag{2}$$

where $L$ is the performance metric (i.e., MAPE or NLL), $N$ is a given parameter count, $N_c$ is the normalization coefficient, and $\alpha_N$ is the exponent indicating the degree of performance improvement expected as $N$ scales up. Observing the NLL metric, the lines fitting the scaling laws for ID and OOD data exhibit a roughly constant shift and close slopes. This implies that while models incur a consistent performance bias when transferred to OOD data, their scaling patterns correlate well with their performance on ID data. A sketch of fitting such a power law is given below.
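A minimal sketch of how a power law of the form in Equation (2) can be fitted; the paper does not specify its fitting procedure, so ordinary least squares in log-log space is an assumption.

```python
import numpy as np

def fit_power_law(N: np.ndarray, L: np.ndarray):
    """Fit L(N) ≈ (N_c / N)^{α_N} (Equation 2) by linear regression in log space.

    Since log L = α_N · log N_c − α_N · log N, the slope of log L against
    log N is −α_N and the intercept recovers N_c.
    """
    slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
    alpha_N = -slope
    N_c = np.exp(intercept / alpha_N)
    return alpha_N, N_c

# Example with synthetic data following an exact power law:
N = np.logspace(3, 8, 6)
L = (1e9 / N) ** 0.1
print(fit_power_law(N, L))  # -> (≈0.1, ≈1e9)
```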
When evaluated using MAPE, the power law for the OOD scenario shows a larger exponent than that for the ID scenario. This indicates that increasing model size yields greater improvements in OOD performance than in ID performance. In other words, for models with weak OOD generalization capabilities, increasing model size may enable them to perform equally well on both ID and OOD data.

To evaluate whether the benefits of large-scale pre-training are warranted, we compare the pre-trained models with the classical exponential smoothing (ETS) forecasting method. The results indicate that the pre-trained models consistently outperform ETS on ID data and progressively surpass it on OOD data as the model size increases. This suggests that pre-trained models must reach a certain scale, at least 3M parameters in this case, to demonstrate a level of superiority on OOD data that justifies their high pre-training cost.

**Compute Scaling.** Following a method similar to Kaplan et al. (2020), we estimate the compute budget using the formula $C = 6NBS$, where $B$ is the batch size, $S$ is the number of parameter updates, and the factor 6 accounts for the forward and backward passes. The ID and OOD test losses for compute budgets varying over six orders of magnitude are shown in Figure 3. We see that the optimal result for each compute budget is achieved by a different model size $N$, but the lowest loss decreases according to an approximate power law with respect to the amount of training compute. The lowest losses appear as the heavy lines, which can be fit with

$$L(C) \approx \left( \frac{C_c}{C} \right)^{\alpha_C}. \tag{3}$$

Idea Generation Category:
2 (Direct Enhancement)
uCqxDfLYrB
## H IGH -D IMENSIONAL B AYESIAN O PTIMISATION WITH G AUSSIAN P ROCESS P RIOR V ARIATIONAL A UTOENCODERS

**Siddharth Ramchandran** ∗ Department of Computer Science, Aalto University, Espoo, Finland
**Harri Lähdesmäki** Department of Computer Science, Aalto University, Espoo, Finland
**Manuel Haussmann** Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark

∗ Correspondence to: siddharth.ramchandran@aalto.fi

A BSTRACT

Bayesian optimisation (BO) using a Gaussian process (GP)-based surrogate model is a powerful tool for solving black-box optimisation problems but does not scale well to high-dimensional data. Previous works have proposed to use variational autoencoders (VAEs) to project high-dimensional data onto a low-dimensional latent space and to implement BO in the inferred latent space. In this work, we propose a conditional generative model for efficient high-dimensional BO that uses a GP surrogate model together with GP prior VAEs. A GP prior VAE extends the standard VAE by conditioning the generative and inference model on auxiliary covariates, capturing complex correlations across samples with a GP. Our model incorporates the observed target quantity values as auxiliary covariates, learning a structured latent space that is better suited for the GP-based BO surrogate model. It handles partially observed auxiliary covariates using a unifying probabilistic framework and can also incorporate additional auxiliary covariates that may be available in real-world applications. We demonstrate that our method improves upon existing latent space BO methods on simulated datasets as well as on commonly used benchmarks.

1 I NTRODUCTION

Bayesian optimisation (BO) (Mockus, 1989; Shahriari et al., 2015; Frazier, 2018) is a technique for complex optimisation problems where the true functional form of a target quantity of interest is unknown. This target quantity may be expensive to compute or may require time-consuming experiments to obtain its value. Hence, one would like to minimise the number of evaluations required to optimise it. Although BO offers an approach for black-box optimisation problems, it does not efficiently scale to high-dimensional data settings (Shahriari et al., 2015).

Variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) are a popular family of latent-variable models that are often used to learn low-dimensional representations of high-dimensional data. The low-dimensional latent space afforded by VAEs, which is representative of the high-dimensional, potentially discrete-valued data on which it is trained, offers a powerful scaling strategy for BO: BO is performed on the inferred low-dimensional continuous-valued manifold instead of the high-dimensional data space (Gómez-Bombarelli et al., 2018). This method of combining the benefits of VAEs with BO, known as VAE BO, is a general-purpose high-dimensional black-box optimisation method with many practical applications, such as molecule discovery (Gómez-Bombarelli et al., 2018; Griffiths & Hernández-Lobato, 2020; Jin et al., 2018), neural architecture search (Kandasamy et al., 2018; Ru et al., 2021), and chemical synthesis (Felton et al., 2020; Shields et al., 2021; Korovina et al., 2020).

Figure 1: _An overview of our model._ (The figure shows an encoder-decoder loop: Step 0, fit GP prior VAE to data; Step 1, encode molecules; Step 4, repeat; with the latent space coloured by the quantity of interest.) Consider the example application of discovering novel drug-like molecules.
Our method uses a GP prior VAE with an additive kernel over various partially observed auxiliary covariates, such as molecular weight, number of hydrogen bonds, and total polar surface area, together with the partially observed quantity of interest (represented by $x^{(r)}$ in this figure for the $r$-th additive kernel), to learn a structured latent space. The black-box function evaluates the quantity of interest for the chosen molecule.

Sohn et al. (2015) proposed conditional VAEs (cVAEs) as an extension that conditions a generative model on auxiliary covariates. However, similar to standard VAEs, this family of models ignores possible correlations between data samples. The Gaussian process (GP) prior VAE (Casale et al., 2018) extends the conditional VAE framework by replacing the i.i.d. standard Gaussian prior on the latent variables with a GP prior, in order to capture arbitrary, but preferably smooth, correlations between data samples. These models have been shown to compare favourably to VAEs and cVAEs, and to handle missing data in the observations effectively. Ramchandran et al. (2024) introduced a method to impute missing auxiliary covariates in cVAEs, thereby enhancing their applicability to real-world datasets.

**Our Contribution** We propose a novel conditional deep generative model for high-dimensional BO that improves upon existing VAE BO methods. Our proposed model uses a GP prior VAE to learn a low-dimensional, structured latent representation of the data samples, and implements the GP surrogate model to optimise the target quantity (or quantities) of interest in the repeatedly re-trained latent space. We use the partially observed target quantity values directly as auxiliary covariates to condition the GP prior VAE model. The model also incorporates additional (partially or fully) observed auxiliary covariates that may be available for a given application. Furthermore, it can effectively handle missing values in both the high-dimensional observations and the auxiliary covariates, using a principled technique developed specifically for learning conditional VAEs. Fig. 1 summarises our model. Our contributions can be summarised as follows:

- We introduce a conditional VAE-based method for efficiently performing Bayesian optimisation on high-dimensional datasets.
- We learn structured latent representations of high-dimensional data points using a GP prior VAE that handles missing values in the observations, target quantity values, and other possible auxiliary covariates.
- We demonstrate the efficacy of our method on a synthetic dataset and on common benchmarks. The source code is available at https://github.com/SidRama/GP-prior-VAE-BO.

2 R ELATED W ORKS

Bayesian optimisation is a popular black-box optimisation technique that is challenging to scale to high-dimensional data (Mockus, 1989; Shahriari et al., 2015; Frazier, 2018). Binois & Wycoff (2022) review recent advancements in improving the efficiency of BO for high-dimensional problems, particularly through various structural model assumptions. To address the curse of dimensionality, Griffiths & Hernández-Lobato (2020) use an autoencoder to learn a low-dimensional, non-linear manifold to scale BO to high-dimensional datasets.
They perform a constrained BO over the latent space in order to incorporate application-specific idiosyncrasies and thereby generate a high proportion of valid reconstructions. Stanton et al. (2022) integrate denoising autoencoders with a discriminative multi-task Gaussian process head into BO to learn a latent space that captures meaningful features of biological sequences. As autoencoders cannot be used to sample novel observations from their representation space, VAEs offer an approach that makes it possible to leverage the low-dimensional latent representation for generative purposes (Kusner et al., 2017; Gómez-Bombarelli et al., 2018). However, a vanilla VAE BO is sub-optimal, as the learnt latent space is not constructed by leveraging the black-box function labels (Urtasun & Darrell, 2007; Siivola et al., 2021; Grosnit et al., 2021). Building upon this, some methods use an automatic-statistician perspective by learning the kernel combination of the surrogate GP (Lu et al., 2018), use manifold GPs in the encoder and manifold multi-output GPs in the decoder (Moriconi et al., 2020), reformulate the encoder to act both as the encoder for the VAE and as a deep kernel for the surrogate model within a local Bayesian optimisation framework using a trust-region method (Maus et al., 2022), or use label guidance in the latent space (Eissman et al., 2018; Tripp et al., 2020; Maus et al., 2022). Furthermore, Grosnit et al. (2021) proposed a method that combines VAEs with deep metric learning. They make use of label guidance from the labelled data points by incorporating various metric losses (e.g., triplet loss, contrastive loss, log-ratio loss). However, this method does not incorporate additional information in the form of auxiliary covariates, and the triplet loss requires an additional matching procedure as a pre-processing step, which can be time consuming. Other relevant works include Notin et al. (2021); Maus et al. (2023); Lee et al. (2024).

Variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014) are popular deep learning methods that map high-dimensional, complex data to a low-dimensional space and vice versa. Most VAE-based models assume the data to be fully observed or substitute unobserved values of the encoder input with zeros (Nazabal et al., 2020; Mattei & Frellsen, 2019). Conditional variational autoencoders (Sohn et al., 2015) include information about the auxiliary covariates in both the inference and generative networks. Building upon this idea, Gaussian process prior VAEs have been proposed as an extension that incorporates arbitrary correlations as well as auxiliary covariates via Gaussian process priors (Casale et al., 2018; Fortuin et al., 2020; Ramchandran et al., 2021). These methods have shown competitive performance and handle missing values in the observed data. Ramchandran et al. (2024) proposed a conditional VAE-based learning approach that can robustly handle missing values in the auxiliary covariates.

3 B ACKGROUND

Throughout the paper, we use the following notation: $\mathbf{y} \in \mathcal{Y}$ is a high-dimensional observation, $c \in \mathbb{R}$ is the target quantity that we want to optimise, $\mathbf{x} = [x_1, \ldots, x_Q] \in \mathcal{X}$ denotes additional auxiliary covariates, and $\mathbf{z} \in \mathcal{Z} = \mathbb{R}^L$ is an $L$-dimensional latent variable. We define $\tilde{\mathbf{x}} = [c, \mathbf{x}] \in \mathbb{R} \times \mathcal{X}$. A set of $N$ observations is denoted as $Y = [\mathbf{y}_1, \ldots, \mathbf{y}_N]$, with $X$, $\tilde{X}$, and $Z$ defined analogously.
The target quantity $\mathbf{c} = [c_1, \ldots, c_N]^T$ is typically partially observed.

3.1 B AYESIAN O PTIMISATION

Bayesian optimisation is a technique for performing efficient global optimisation of black-box functions (or unknown scoring functions) that are difficult to compute and whose functional form may not be known (Kushner, 1962; 1964; Mockus, 1989; Frazier, 2018). Given a function $f : \mathcal{Y} \to \mathbb{R}$, we aim to find a point $\mathbf{y} \in \mathcal{Y}$ that corresponds to the global optimum of $f$. The black-box function $f$ is also referred to as a utility function, as it measures the target quantity, $c = f(\mathbf{y})$, that we are trying to optimise and informs us of the quality of the chosen sample. The problem can be written as (assuming maximisation) $\mathbf{y}^* = \arg\max_{\mathbf{y} \in \mathcal{Y}} f(\mathbf{y})$. Since the unknown function $f$ is assumed to be difficult or expensive to evaluate, Bayesian optimisation requires a surrogate model of the true function $f$ as well as an acquisition function, which is a function of the posterior and guides the process of choosing the next sample point until a stopping criterion is met or the evaluation budget $B$ is exhausted.

**Gaussian Processes and the Surrogate Model** We use a non-parametric Gaussian process as the surrogate model of $f$, as GPs define a probability distribution over functions and, for Gaussian likelihood models, the posterior distribution is analytically tractable. Moreover, they maintain smoothness and uncertainty estimates to guide the exploration of new points, and can represent prior beliefs (Schulz et al., 2018). Following Williams & Rasmussen (2006), for inputs $\mathbf{y}, \mathbf{y}' \in \mathcal{Y}$, a GP is defined as $g(\mathbf{y}) \sim \mathcal{GP}(\mu(\mathbf{y}), k(\mathbf{y}, \mathbf{y}'))$, where $\mu(\mathbf{y})$ is the mean and $k(\mathbf{y}, \mathbf{y}')$ is a kernel function given by $k(\mathbf{y}, \mathbf{y}') = \mathrm{cov}(g(\mathbf{y}), g(\mathbf{y}'))$. For $N$ data points $Y = [\mathbf{y}_1, \ldots, \mathbf{y}_N]$, the induced prior probability density of $g(Y) = [g(\mathbf{y}_1), \ldots, g(\mathbf{y}_N)]^T$ is a multivariate Gaussian distribution: $g(Y) \sim \mathcal{N}(\mathbf{0}, K_{Y,Y})$. We assume $\mu(\mathbf{y}) \equiv 0$ throughout this work. The elements of the covariance matrix are defined by the kernel function, $[K_{Y,Y}]_{i,j} = k(\mathbf{y}_i, \mathbf{y}_j)$. GPs are intractable for large datasets, as the time complexity scales as $\mathcal{O}(N^3)$. Several approximate methods have been proposed to address this through sparse Gaussian processes (Smola & Bartlett, 2000; Lawrence et al., 2002; Quiñonero-Candela & Rasmussen, 2005) or via (stochastic) variational formulations (Titsias, 2009; Hensman et al., 2013) for sparse approximations.

**Acquisition Functions** An acquisition function is a function of the posterior that captures the trade-off between exploration and exploitation of our surrogate of the function $f$, given the known evaluations. It is responsible for selecting the next candidate point in $\mathcal{Y}$ that should be evaluated or measured. We use an acquisition function $\alpha(\mathbf{y})$ to choose the next sample point, $\mathbf{y}_{N+1} = \arg\max_{\mathbf{y}} \alpha(\mathbf{y})$. A good acquisition function exploits regions around the current maximum by selecting points to query from that region, while also suggesting points from unexplored regions in order to escape local maxima.
3.2 VARIATIONAL AUTOENCODERS

We define a latent variable generative model as p_ω(y, z) = p_ψ(y | z) p_θ(z), which is parameterised by ω = {ψ, θ}, and where z is unobserved. We are generally interested in inferring this latent variable z given y. The posterior distribution, p_ω(z | y) = p_ψ(y | z) p_θ(z) / p_ω(y), is usually intractable due to the lack of a closed-form marginalisation over the latent space (Murphy, 2023). The standard VAE model comprises the generative model (the probabilistic decoder) p_ψ(y | z) and an inference model (the probabilistic encoder) q_ϕ(z | y) that approximates the true posterior. VAEs use amortised variational inference, exploiting the inference model q_ϕ(z | y) to obtain approximate distributions for each z_n. The encoder and decoder are typically parameterised by deep neural networks. In variational inference, we minimise the Kullback-Leibler (KL) divergence from q_ϕ(z | y) to p_ω(z | y), or equivalently maximise the ELBO of the marginal log-likelihood w.r.t. ϕ. For VAEs, approximate inference is typically conducted alongside learning the generative model's parameters, that is, w.r.t. ϕ, ψ, θ:
$$\log p_\omega(Y) \;\ge\; \mathcal{L}(\phi, \psi, \theta; Y) \;\triangleq\; \sum_{n=1}^{N} \Big( \mathbb{E}_{q_\phi}\big[\log p_\psi(\mathbf{y}_n \mid \mathbf{z}_n)\big] - \mathrm{KL}\big[q_\phi(\mathbf{z}_n \mid \mathbf{y}_n) \,\|\, p_\theta(\mathbf{z}_n)\big] \Big) \;\to\; \max_{\phi, \psi, \theta}.$$
It is straightforward to apply computationally efficient mini-batch based stochastic gradient descent to the above equation.

4 OUR METHOD

4.1 BAYESIAN OPTIMISATION WITH VAES

The low-dimensional nonlinear latent manifold learnt by a VAE can be used to perform BO (Kusner et al., 2017; Gómez-Bombarelli et al., 2018; Tripp et al., 2020). The VAE is first pre-trained on the high-dimensional observations without access to the utility function values. As described in Sec. 3.2, the encoder q_ϕ(z | y) of the learnt VAE is used to map the observations y ∈ Y onto a low-dimensional latent representation z ∈ Z. The VAE-based methods then perform latent space optimisation (LSO) (Tripp et al., 2020) by fitting a surrogate model over the latent space to model the utility function of interest. The VAE BO aims to identify a z* such that the corresponding y*, obtained from the pre-trained decoder, maximises the utility function of interest, f(y*). In other words, we would like to obtain a z* such that we maximise the expectation of the utility function evaluated on y* ∼ p_ψ(y* | z*), i.e., arg max_{z∈Z} E_{y∼p_ψ(·|z)}[f(y)]. Once we have a new y* and its associated utility function value c, we append them to the training dataset and update the parameters ϕ and ψ either after each BO step or at a chosen frequency. Tripp et al. (2020) use this approach together with a weighted retraining scheme that weights training points according to their utility function values.
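A hedged sketch of the LSO loop just described: encode data to latent codes, fit a GP surrogate over the latent space, maximise an acquisition, decode the chosen code, evaluate the utility, and append. The linear "decoder", the synthetic utility f, and the UCB acquisition are stand-in assumptions for brevity.

```python
# A minimal latent space optimisation (LSO) loop, assuming a pre-trained VAE
# (replaced here by a random linear decoder) and a synthetic utility f.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 2))                    # stand-in decoder weights
decode = lambda Z: Z @ W.T                       # z (2-d) -> y (100-d)
f = lambda Y: -np.linalg.norm(Y - 1.0, axis=-1)  # stand-in black-box utility

Z = rng.normal(size=(10, 2))                     # initial latent codes
c = f(decode(Z))                                 # their utility values

for step in range(20):                           # evaluation budget B
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(Z, c)
    cand = rng.normal(size=(512, 2))             # candidate latent points
    mu, sd = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(mu + 2.0 * sd)]      # UCB acquisition
    c_next = f(decode(z_next[None]))[0]          # decode, then evaluate
    Z = np.vstack([Z, z_next[None]])
    c = np.append(c, c_next)
    # (weighted) VAE retraining would be triggered here at a chosen frequency

print("best utility found:", c.max())
```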
4.2 GAUSSIAN PROCESS PRIOR VAES FOR BO

A limitation of standard VAE BO is that it infers an unconditional latent-variable model without any guidance from the observed target quantities. Departures from this limitation have been proposed, e.g., in (Eissman et al., 2018; Tripp et al., 2020; Maus et al., 2022). Recently, Grosnit et al. (2021) built upon VAE BOs by using deep metric learning to actively steer the generative model to maintain a latent manifold that is useful for the BO task. We propose to use GP prior VAEs that guide the generative model by conditioning the GP prior with auxiliary covariates.

Figure 2: _Our proposed model._ Solid lines refer to the generative model and dashed lines to the inference model. Empty circles are unobserved, shaded circles are observed, and partially shaded circles are partially observed. Target quantity c′ and possible additional covariates x′ refer to the new candidate observation that will be added to the training set.

The key distinction of GP prior VAEs is that the factorisable conditional prior defined over the latent space, p_θ(Z | X) = ∏_{i=1}^N p_θ(z_i | x_i), is replaced by a GP prior. Assuming a function τ : X → Z, which maps auxiliary covariates to the L-dimensional latent space, we denote z = τ(x) = (τ_1(x), ..., τ_L(x))^T. GP prior VAEs model each latent dimension with an independent GP, τ_l(x) ∼ GP(µ_l(x), k_l(x, x′ | θ_l)), where µ_l(x) is the mean, k_l(x, x′ | θ_l) is the covariance function, and θ_l denotes the parameters of the covariance function. The GP prior for the l-th latent dimension can be written as a joint multivariate Gaussian distribution for the function values z̄_l = τ_l(X) = (τ_l(x_1), ..., τ_l(x_N))^T, such that
$$p_\theta(\bar{\mathbf{z}}_l \mid X) = p_\theta(\tau_l(X)) = \mathcal{N}\big(\bar{\mathbf{z}}_l \mid \mathbf{0},\; K_{XX}^{(l)}\big), \quad \text{where } \{K_{XX}^{(l)}\}_{i,j} = k_l(\mathbf{x}_i, \mathbf{x}_j \mid \theta_l).$$
Our joint conditional prior is p_θ(Z | X) = ∏_{l=1}^L p_θ(z̄_l | X) = ∏_{l=1}^L N(z̄_l | 0, K_XX^(l)).

We propose to learn a low-dimensional latent embedding for BO using a GP prior VAE that is conditioned on the target quantity of interest, i.e., p_θ(Z | c). We hypothesise that using the target quantity as the conditioning variable will automatically guide the latent embeddings to a smooth manifold that is beneficial for the BO task. Since the target quantity c ∈ R, the GP prior VAE can be defined using any of the commonly used smooth kernel functions, such as the squared exponential kernel.
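A minimal sketch of this per-dimension prior, drawing z̄_l ∼ N(0, K_cc^(l)) with a squared exponential kernel on the target quantity c; the sizes, lengthscale, and toy targets are assumptions for illustration.

```python
# A minimal sketch of the GP prior over the latent dimensions, conditioned
# on the scalar target quantity c via a squared exponential kernel.
import numpy as np

def sq_exp_kernel(a, b, lengthscale=0.5, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, L = 50, 8                         # observations, latent dimensions
c = rng.uniform(0, 1, size=N)        # toy target quantities

K = sq_exp_kernel(c, c) + 1e-6 * np.eye(N)  # K_cc with jitter for stability
Z_bar = np.stack([
    rng.multivariate_normal(np.zeros(N), K)  # one independent GP per dim l
    for _ in range(L)
])
print(Z_bar.shape)  # (L, N): latent function values across observations
```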
Following the same reasoning, if data points y have any additional known properties x, we can incorporate those in the GP prior VAE framework as well by conditioning the latent variable generation with both c and x (we denote x̃ = [c, x]), i.e., p_θ(Z | X̃). If all auxiliary covariates in x̃ are continuous, we could incorporate x̃, e.g., via a single squared exponential kernel with a shared lengthscale parameter, or use an automatic relevance determination (ARD) kernel to define covariate-specific length-scales. In practice, however, some of the auxiliary covariates may be, e.g., binary or categorical. Ramchandran et al. (2021) have shown that it is possible to have flexible and expressive covariance functions depending on the nature of the auxiliary covariates. In this work, we similarly assume Q + 1 additive covariance functions,
$$k_l(\tilde{\mathbf{x}}, \tilde{\mathbf{x}}' \mid \theta_l) = k_l(c, c' \mid \theta_l) + \sum_{r=1}^{Q} k_{l,r}(x_r, x'_r \mid \theta_{l,r}) + \sigma_{z_l}^2,$$
implying that
$$K_{\tilde{X}\tilde{X}}^{(l)} = K_{cc}^{(l)} + \sum_{r=1}^{Q} K_{X_r X_r}^{(l,r)} + \sigma_{z_l}^2 I_N,$$
where the choice of the kernels depends on the application and X_r denotes the r-th auxiliary variable.

Idea Generation Category:
0Conceptual Integration
SIuD7CySb4
# SCALING WEARABLE FOUNDATION MODELS

**Girish Narayanswamy**◦,†,1,3∗, **Xin Liu**◦,†,1, **Kumar Ayush**1, **Yuzhe Yang**1, **Xuhai Xu**1, **Shun Liao**1, **Jake Garrison**1, **Shyam Tailor**1, **Jake Sunshine**1,3, **Yun Liu**1, **Tim Althoff**1,3, **Shrikanth Narayanan**1, **Pushmeet Kohli**2, **Jiening Zhan**1, **Mark Malhotra**1, **Shwetak Patel**1,3, **Samy Abdel-Ghaffar**1, **Daniel McDuff**†,1

◦ Co-first, † Corresponding, 1 Google Research, 2 Google DeepMind, 3 University of Washington
`girishvn@uw.edu`, {`xliucs,dmcduff`}`@google.com`

ABSTRACT

Wearable sensors have become ubiquitous thanks to a variety of health tracking features. The resulting continuous and longitudinal measurements from everyday life generate large volumes of data. However, making sense of these observations for scientific and actionable insights is non-trivial. Inspired by the empirical success of generative modeling, where large neural networks learn powerful representations from vast amounts of text, image, video, or audio data, we investigate the scaling properties of wearable sensor foundation models across compute, data, and model size. Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, accelerometer, electrodermal activity, skin temperature, and altimeter per-minute data from over 165,000 people, we create `LSM`, a multimodal foundation model built on the largest wearable-signals dataset with the most extensive range of sensor modalities to date. Our results establish the scaling laws of `LSM` for tasks such as imputation, interpolation, and extrapolation across both time and sensor modalities. Moreover, we highlight how `LSM` enables sample-efficient downstream learning for tasks including exercise and activity recognition.

1 INTRODUCTION

Wearable devices that monitor physiological and behavioral signals have become ubiquitous. Increasing evidence suggests that these devices can significantly contribute to promoting healthy behaviors (Ringeval et al., 2020), detecting diseases (Yang et al., 2022), and enhancing the design and implementation of treatments (Munos et al., 2016). These devices generate large volumes of continuous, longitudinal, and multimodal data. However, such wearable time-series data can be difficult for consumers and experts to interpret. To this end, algorithms have been developed to translate time-series sensor data into human-readable representations, such as step counts and heart rates. Historically, such algorithms for wearable sensors have relied on supervised, discriminative models designed to detect specific events or activities (Lubitz et al., 2022). This approach faces several significant limitations. First, the _limited volume and severe data imbalance_ of labeled events results in large amounts of valuable _unlabeled_ data being left unused. Second, supervised models are typically trained for _a single task_ (e.g., classification), producing representations that may not generalize well to other tasks. Third, training data is often collected from _small study populations_ (usually involving only tens or hundreds of participants), leading to a lack of diversity in the data.
Self-supervised learning (SSL) using generic pretext tasks (Noroozi et al., 2017; Caron et al., 2018; Yang et al., 2023) can yield versatile representations that are useful for a wide range of downstream applications. SSL allows for the use of a much larger proportion of available data without being restricted to labeled data regions (e.g., a limited number of subjects who self-report labels for exercises/activities). These advantages have motivated efforts to apply similar training strategies to build models from large volumes of unlabeled wearable data (Adaimi et al., 2024; Thapa et al., 2024; Yuan et al., 2024; Abbaspourazad et al., 2023) (see Table 1 for a summary). Building on this, the empirical and theoretical success of scaling laws in neural models (Kaplan et al., 2020; Bahri et al., 2024) suggests that model performance improves predictably as compute, data, and model parameters increase.

∗ Work done during an internship at Google.

Figure 1: **Scaling Foundation Models on Wearable Data.** Self-supervised pretraining enhances the performance of models trained on wearable sensor data. **(A)** We present a systematic scaling analysis of sensor models using up to 40 million hours of multimodal data from over 165,000 people. **(B)** Using a random masking pretext task, we evaluate on tasks of imputation, forecasting, and downstream classification. **(C)** Experiments show scaling compute, data, and model size are all effective. Scaling, in this figure, is shown on the random imputation task.

These findings raise a critical research question: **Do scaling laws apply to models trained on wearable sensor data?** We aim to investigate whether the principles that drive the scaling of neural networks in domains like language and vision also extend to large-scale, multimodal wearable sensor data. Understanding how scaling manifests in this context could not only shape model design but also enhance generalization across diverse tasks and datasets.

In this paper, we present the results of our scaling experiments on the largest and most diverse wearable dataset published to date, comprising up to 40 million hours of multimodal sensor data from over 165,000 users (Fig. 1). Leveraging this data, we train a foundation model, referred to as the **Large Sensor Model** (`LSM`), which is designed to capture generalizable representations across diverse populations, wearable sensor modalities, and downstream tasks. We demonstrate the scaling properties of `LSM` with respect to compute, data size, and model parameters, leading to substantial performance gains on generative imputation, interpolation, and extrapolation, as well as on downstream discriminative tasks. Our contributions can be summarized as follows:

- Implementation of the largest study to date on the scaling of sensor models, encompassing 40 million hours, over 165,000 users, and multiple sensor modalities, including photoplethysmography (PPG), accelerometer, electrodermal activity (EDA), skin temperature, and altimeter signals.
- Identification of key strategies for training large-scale wearable sensor foundation models (`LSM`), and the scaling properties of `LSM`s with respect to compute, data size, and model parameters.
- Demonstration of `LSM`'s ability to impute, interpolate, and extrapolate across temporal and sensor modalities, with a particular focus on generalization to unseen users.
- Verification that the `LSM` learned representations can be applied to downstream classification tasks, such as exercise and activity recognition, using ecologically valid, user-annotated events.

2 RELATED WORK

**Sensor Foundation Models.** Recent advances have demonstrated improved accuracy, robustness, and generalizability of models for sensor data by utilizing self-supervised pretraining on large-scale corpora of behavioral and physiological signals. Existing sensor foundation models primarily leverage contrastive learning, creating positive and negative data pairs. Yuan et al. (2024) employ time-domain augmentations (e.g., reversal, warping, permutation) to formulate the SSL task for motion data. Abbaspourazad et al. (2023) adopt a similar strategy, incorporating Gaussian noise, time and magnitude warping, and channel swapping. Thapa et al. (2024) generate data pairs using different sensory modalities. Most recently, RelCon (Xu et al., 2025) extends these ideas to show the utility of a relative contrastive loss in building a model more robust to false negatives and positives. In contrast, we focus on _masked input modeling_ due to its generative capabilities and explore the resulting properties when scaling compute, data, and model size. Compared to prior work, we consider more sensor inputs, a larger data sample, and systematically investigate scaling laws (Table 1).

Table 1: **Comparisons of Studies on Wearable Sensor Foundation Models.**

| Study | Participants (000s) | Hours (000s) |
| --- | --- | --- |
| Adaimi et al. (2024) | 0.05 | 0.20 |
| Abbaspourazad et al. (2023) | 141 | 400 |
| Yuan et al. (2024) | 100 | 15,700 |
| `LSM` **(Ours)** | **165** | **40,000** |

ECG: Electrocardiography, PPG: Photoplethysmography, ACC: Accelerometer, SCL: Skin Conductance Level, TMP: Skin Temperature, ALT: Altimeter

**Time-Series Foundation Models.** Wearable sensor data typically takes the form of multivariate time-series signals. Historically, time-series models such as TiDE (Das et al., 2023), PatchTST (Nie et al., 2023), and TimesNet (Wu et al., 2023) have focused on tasks such as anomaly detection, and single-target forecasting and imputation in specific domains such as energy use, transportation, finance, and climate. More recently, models such as TimesFM (Das et al., 2024), Chronos (Ansari et al., 2024), and MOMENT (Goswami et al., 2024) have shown the utility of self-supervised pretraining in building representations that better generalize to diverse applications. Recent works have also explored the potential of language foundation models to zero-shot reason on temporal data (Liu et al., 2023; Merrill et al., 2024), and to bootstrap time-series models (Zhou et al., 2023). Despite these advances, signals from disparate domains may exhibit considerably different properties. As such, we focus our exploration of generalist models on the _wearable sensor_ domain.

**Scaling Laws in Deep Learning.** The scaling of computational resources, data volume, and model size has driven remarkable advances in deep learning (Zhai et al., 2022; Kaplan et al., 2020; Xie et al., 2023). Recent investigations indicate that testing loss follows a power-law relationship with each of these three resources when the other two are held constant (Kaplan et al., 2020). Power-law behavior has been observed across various domains, including large language models (Kaplan et al., 2020), large vision models (Zhai et al., 2022), transfer learning (Hestness et al., 2017), and multimodal models (Aghajanyan et al., 2023). In this work, we further this research direction and investigate the scaling behavior of models trained with multimodal _wearable sensor_ data.
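To make the power-law form concrete, a minimal sketch fitting L(x) = a·x^(−b) + c to synthetic (resource, loss) measurements; the data points, constants, and fitted values are illustrative assumptions, not results from this paper.

```python
# A minimal sketch of fitting the power-law form L(x) = a * x**(-b) + c
# (test loss vs. one scaled resource, holding the other two fixed).
import numpy as np
from scipy.optimize import curve_fit

power_law = lambda x, a, b, c: a * np.power(x, -b) + c

x = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # e.g., dataset hours (assumed)
y = 2.0 * x ** -0.3 + 0.5                  # synthetic loss measurements
y += np.random.default_rng(0).normal(0, 0.002, x.size)

(a, b, c), _ = curve_fit(power_law, x, y, p0=(1.0, 0.5, 0.0), maxfev=10000)
print(f"fitted exponent b = {b:.3f}")      # ~0.3 on this toy data
```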
3 DATA FOR WEARABLE FOUNDATION MODELS

3.1 SENSOR DATA AND PROCESSING

Fitbit Sense 2 and Pixel Watch 2 have five _sensors_ of highest relevance to this work: photoplethysmography, accelerometer, skin conductance, skin temperature, and altimeter/pressure sensors. From these input signals we compute a set of 26 _signals_ (features), as described in Table 18 of Appendix G. Raw sensor data is not stored at this scale, as it would impact the battery life and memory of the device. Thus, we focus on one-minute resolution signals (see Appendix G for additional discussion).

**Photoplethysmography (PPG).** A validated algorithm (Nissen et al., 2022) is used to extract heart rate (HR) once per second via PPG. The per-minute HR data was calculated by taking the mean of the interpolated, per-second data across non-overlapping one-minute windows. An on-device peak detection algorithm identified PPG-based R-wave peaks, from which RR intervals were calculated. RR intervals are susceptible to noise from multiple sources, including movement, electronic noise, and missed heartbeats. To account for noise, outliers were removed using a sliding 5-minute window median filter (Natarajan et al., 2020). The percentage of valid RR intervals for the 5-minute window is then calculated. Nine standard heart rate variability (HRV) metrics (Shaffer & Ginsberg, 2017) are calculated every minute over a sliding 5-minute window: RR 80th percentile, RR 20th percentile, RR median, RR mean, RR Shannon entropy, RR-differences Shannon entropy, percentage of RR-interval differences greater than 30 ms (PNN30), root mean squared difference of RR intervals, and standard deviation of RR, along with a boolean indicator of whether the optical sensor was on the wrist.
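A minimal sketch of computing a subset of the per-window HRV statistics listed above from RR intervals. PNN30 here uses the common successive-difference definition, and the toy RR series is an assumption; the on-device pipeline may differ.

```python
# A minimal sketch of per-window HRV features from RR intervals (ms).
import numpy as np

def hrv_features(rr):
    """rr: 1-D array of RR intervals (ms) within one 5-minute window."""
    d = np.diff(rr)                              # successive differences
    return {
        "rr_p80": np.percentile(rr, 80),
        "rr_p20": np.percentile(rr, 20),
        "rr_median": np.median(rr),
        "rr_mean": rr.mean(),
        "sdnn": rr.std(),                        # standard deviation of RR
        "rmssd": np.sqrt((d ** 2).mean()),       # RMS of successive diffs
        "pnn30": (np.abs(d) > 30).mean() * 100,  # % successive diffs > 30 ms
    }

rr = np.random.default_rng(0).normal(800, 50, size=300)  # toy RR series
print(hrv_features(rr))
```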
**Accelerometer (ACC).** Ten signals are extracted from the 3-axis accelerometer: jerk, steps, accelerometer log energy, covariance, log energy ratio, the mean and standard deviation of the number of zero crossings, arm tilt, kurtosis, and sleep coefficient. These signals are extracted by converting the 3-axis accelerometer to root mean squared magnitude (1D) and applying a high-pass filter (HPF) to remove the DC component. In parallel, the 3-axis accelerometer signal is put through a second-order band-pass filter (BPF), and the principal component of the filtered 3-axis signal covariance matrix is calculated and updated every 25 seconds. Jerk is a measure based on the time-derivative of the acceleration calculated from the principal component; it is the logarithm of the ratio of the absolute value of the t=1 autocorrelation lag over the t=0 autocorrelation lag. Steps is a per-minute count of steps taken, calculated by a machine-learned classifier. Log energy is the logarithm of the sum of the squared HPF signal over the window. Covariance is the log of the condition number of the acceleration covariance matrix. Log energy ratio is the logarithm of the ratio of the energy computed from the principal component over the magnitude of the HPF signal. Zero-crossings are the number of crossings in the principal component; the mean and standard deviation are calculated from the sample window. Arm tilt is the log of the mean square root of the squared X and Z axes. Kurtosis is calculated from the BPF signal. Each of these features is originally computed every 51 seconds and then resampled to a minutely resolution. The sleep coefficient is calculated as the sum of the 3-axis max-min range, binned into 16 log-scaled bins, before being input into a classifier to predict sleep probability.

**Skin Conductance (SCL).** The electrodermal activity (EDA) sensor is used to infer sympathetic arousal via changes in micro-sweat levels, a physiological response to stress. Two electrodes on the back of the device measure changes in skin conductance level (SCL), which varies with skin moisture levels. SCL data is sampled at 200 Hz, downsampled to 25 Hz via a boxcar filter, and smoothed with a 5-minute median low-pass filter (McDuff et al., 2024). Per-minute tonic SCL slope and magnitude are then calculated. Due to the nature of the sensing mode of operation, SCL data is only collected during non-exercise wake periods.

**Skin Temperature (TMP).** A temperature sensor located near the wrist-facing surface of the device takes a measurement every 10 seconds. Per-minute slope and magnitude values are calculated via linear regression. Skin temperature signals are available whenever EDA signals are available.

**Altimeter (ALT).** The standard deviation of the altimeter (pressure sensor) measurements.

All sensor signals were globally normalized (z-score) to remove differences in magnitude due to different units of measurement. As the masked autoencoder cannot process missing data, we imputed minutes that had missing values. Within each 300-minute window, missing data between valid data points was linearly interpolated, and leading missing minutes were backfilled.

3.2 BUILDING A LARGE-SCALE PRETRAINING SENSOR DATASET

To build the large dataset for our experiments, we sampled wearable data from 165,090 subjects during the period January 1st, 2023 to July 2nd, 2024. The subjects wore Fitbit Sense 2 or Google Pixel Watch 2 devices and consented for their data to be used for research and development of new health and wellness products and services. We sub-selected individuals wearing one of these devices, as older device generations included fewer sensors. The subjects were asked for self-reported sex, age, and weight. Table 2(a) summarizes the characteristics of the pretraining data. All data were de-identified and not linked with any other information. To create a dataset that maximized the number of subjects, we randomly sampled 10 5-hour windows of data from each subject, for a total of 8 million hours (6.6 million pretrain hours, 1.7 million test hours). We explore the extremes of data scaling by experimenting with a subject-imbalanced 40 million hour pretraining dataset (Appendix C.1).
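A minimal sketch of this window construction: 5-hour, minute-resolution windows sampled per subject, with interior gaps linearly interpolated and leading gaps backfilled as described in Sec. 3.1; the array shapes and toy records are assumptions.

```python
# A minimal sketch of per-subject window sampling with gap filling.
import numpy as np

rng = np.random.default_rng(0)
WINDOW = 300    # 5 hours at 1-minute resolution
N_WINDOWS = 10  # windows sampled per subject

def fill_missing(window):
    """Linearly interpolate interior NaNs per signal; np.interp also
    backfills leading NaNs (and forward-fills trailing ones)."""
    out = window.copy()
    t = np.arange(out.shape[0])
    for j in range(out.shape[1]):
        col, valid = out[:, j], ~np.isnan(out[:, j])
        if valid.any():
            out[:, j] = np.interp(t, t[valid], col[valid])
    return out

def sample_windows(record, n=N_WINDOWS):
    starts = rng.integers(0, record.shape[0] - WINDOW, size=n)
    return np.stack([fill_missing(record[s:s + WINDOW]) for s in starts])

# Toy minute-level records: (minutes, 26 signals) per subject
records = [rng.normal(size=(7 * 24 * 60, 26)) for _ in range(2)]
windows = np.concatenate([sample_windows(r) for r in records])
print(windows.shape)  # (20, 300, 26)
```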
Table 2: **Details of the Datasets.** Summary of the demographic composition of our pretraining set and class distribution of our downstream set samples.

(a) **Demographics of the Pretraining Set.**

| Category | Group | # People | % |
| --- | --- | --- | --- |
| Sex | Female | 110,780 | 67.1% |
|  | Male | 53,895 | 32.6% |
|  | Not Specified | 415 | 0.3% |
| Age | 18-39 | 55,653 | 33.7% |
|  | 40-59 | 75,627 | 45.8% |
|  | 60-79 | 32,251 | 19.6% |
|  | ≥ 80 | 1,548 | 0.9% |
|  | Not Specified | 11 | 0.0% |
| BMI | Healthy (< 25) | 57,015 | 34.6% |
|  | Overweight (25-30) | 52,950 | 32.1% |
|  | Obese (≥ 30) | 54,727 | 33.1% |
|  | Not Specified | 398 | 0.2% |
| **Total** |  | 165,090 | 100% |

(b) **Class Distribution of the Downstream Set.**

| Class | # Training | # Testing |
| --- | --- | --- |
| Exercise | 3,272 | 671 |
| Non-Exercise | 6,195 | 1,329 |
| **Total** | 9,467 | 2,000 |
| Biking | 1,191 | 412 |
| Elliptical | 152 | 49 |
| High Intensity Training | 332 | 104 |
| Strength Training | 229 | 425 |
| Swimming | 2,332 | 441 |
| Running | 1,860 | 315 |
| Walking | 7,607 | 1,418 |
| Weightlifting | 669 | 98 |
| **Total** | 14,372 | 3,262 |

Note that these datasets are comprised of wearable data from daily living, including diverse timestamps and a range of life events, and are not biased toward specific events or activities. The dataset was split 80-20, based on subjects, into train-test splits (132,072 subjects in training, 33,018 subjects in testing), as described in Table 2(a). We then created several "slices" of the training set to conduct the scaling experiments. The test set remains identical throughout all experiments. In the "sample-scaling" experiments, we shuffled the training data and took N samples per experiment. In the "subject-scaling" experiments, we grouped the training data by subject identifier and took all samples from M subjects per experiment.

4 SENSOR MODELING TASKS

4.1 GENERATIVE TASKS

We posit that defining generative tasks in the training of wearable sensor models may not only result in learned representations that are useful for downstream classification tasks, but also produce models that can impute missing or incomplete data (interpolate) and extrapolate future sensor values (forecast). To train the model and to test these capabilities, we define several tasks (see Fig. 2).

**Random Imputation.** Our primary pretext task involves removing patches randomly from the input sample across the time-axis and signal-axis. During training, this requires the model to infer missing values and make predictions based on the partial input.

**Temporal Interpolation.** Sensor inputs can be missing for a number of reasons. Devices need to be removed from the wrist for charging, and certain sensors might be turned off for periods to save on battery life (McDuff et al., 2024). Interpolation of sensor data is an important and necessary step for many algorithms. In this task, we test the model's ability to fill gaps in the data where all sensor data is missing for a period of time, usually between two observations.

**Sensor Imputation.** Sensor imputation refers to the process of inferring a subset of partially missing sensor streams from other continuously online sensing modalities. By leveraging correlations between different physiological signals, sensor imputation ensures that insights can be derived even when some sensor modalities are absent, enhancing the overall versatility and capabilities of multi-sensor systems. Under the constraints of hardware limitations (battery, wireless connectivity, etc.), sensor imputation can enable the delivery of more realistic metrics to the user (e.g., step count, average resting heart rate) even when sensors are not continuously online.
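A minimal sketch of the random masking pretext task described above, dropping patches along both the time and signal axes; the patch size and mask ratio are assumptions for illustration, and a learned mask token would replace the zero fill in practice.

```python
# A minimal sketch of random patch masking over a (time, signal) window.
import numpy as np

rng = np.random.default_rng(0)

def random_patch_mask(window, patch=(10, 2), ratio=0.8):
    """window: (T, S) array. Returns a masked copy and the boolean mask."""
    T, S = window.shape
    pt, ps = patch
    grid = rng.random((T // pt, S // ps)) < ratio        # patches to hide
    mask = np.kron(grid, np.ones((pt, ps), dtype=bool))  # expand to minutes
    masked = window.copy()
    masked[mask] = 0.0    # zero fill stands in for a learned mask token
    return masked, mask

window = rng.normal(size=(300, 26))  # one 5-hour window, 26 signals
masked, mask = random_patch_mask(window)
print(mask.mean())  # fraction of masked entries, ~0.8
```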
Temporal extrapolation involves predicting future sensor values.

Idea Generation Category:
1Cross-Domain Application
yb4QE6b22f
# CBQ: CROSS-BLOCK QUANTIZATION FOR LARGE LANGUAGE MODELS

**Xin Ding**1∗† **Xiaoyu Liu**1∗† **Zhijun Tu**2 **Yun Zhang**3 **Wei Li**2 **Jie Hu**2 **Hanting Chen**2 **Yehui Tang**2 **Zhiwei Xiong**1 **Baoqun Yin**1‡ **Yunhe Wang**2‡

1 University of Science and Technology of China 2 Huawei Noah's Ark Lab 3 Hong Kong University of Science and Technology (GZ)

ABSTRACT

Post-training quantization (PTQ) has played a pivotal role in compressing large language models (LLMs) at ultra-low costs. Although current PTQ methods have achieved promising results by addressing outliers and employing layer- or block-wise loss optimization techniques, they still suffer from significant performance degradation at ultra-low bit precision. To dissect this issue, we conducted an in-depth analysis of quantization errors specific to LLMs and surprisingly discovered that, unlike traditional sources of quantization errors, the growing number of model parameters, combined with the reduction in quantization bits, intensifies inter-layer and intra-layer dependencies, which severely impact quantization accuracy. This finding highlights a critical challenge in quantizing LLMs. To address this, we propose CBQ, a cross-block reconstruction-based PTQ method for LLMs. CBQ leverages a cross-block dependency to establish long-range dependencies across multiple blocks and integrates an adaptive LoRA-Rounding technique to manage intra-layer dependencies. To further enhance performance, CBQ incorporates a coarse-to-fine pre-processing mechanism for processing weights and activations. Extensive experiments show that CBQ achieves superior low-bit quantization (W4A4, W4A8, W2A16) and outperforms existing state-of-the-art methods across various LLMs and datasets. Notably, CBQ takes only 4.3 hours to produce a weight-only 4-bit quantization of the LLAMA1-65B model, achieving a commendable trade-off between performance and efficiency.

1 INTRODUCTION

Large language models (LLMs) (Wei et al. (2022a); Radford et al.; Zhang et al.; Brown et al. (2020b); Dettmers et al. (2022)) have sparked immense academic and industrial interest owing to their remarkable performance in handling complex natural language tasks (Hendrycks et al. (2020b); Bisk et al. (2020b); He et al. (2017); Ainslie et al. (2023); Liu et al. (2024b)). Due to the significant computational resources required for inference and deployment, the post-training quantization (PTQ) technique (Choi et al. (2018); Frantar et al. (2022a); Nagel et al. (2019); Wei et al. (2023); Li et al. (2025)), which operates with limited calibration data and computational resources, is in high demand for compressing LLMs. Existing PTQ methods typically optimize models on a layer or block basis, addressing outliers (Wei et al. (2022b; 2023); Chee et al. (2024); Liu et al. (2024a)) and employing first- or second-order optimization techniques (predominantly optimizing models on a layer-by-layer or block-by-block basis) (Shao et al. (2023); Frantar et al. (2022b); Liu et al. (2023a)). However, these approaches often suffer from significant performance degradation, particularly in low-bit settings such as W2A16 and W4A4, as illustrated in Table 1, due to inherent limitations. Previous work, like AdaRound (Nagel et al. (2020)), analyzed rounding errors and showed that simple rounding is not always the optimal quantization strategy, greatly improving quantization for CNNs.
(∗ Equal contribution. † This work was done during Xin Ding and Xiaoyu Liu's internship at Huawei Noah's Ark Lab. ‡ Corresponding authors: bqyin@ustc.edu.cn, yunhe.wang@huawei.com.)

This inspired us to analyze quantization loss for LLMs, comparing high-bit and low-bit scenarios. We found that in low-bit quantization, intra-layer and inter-layer dependencies within models become more pronounced, especially as model size increases. This indicates that previous methods, whether focused on optimizing quantization parameters within a layer or block through first- or second-order techniques, or on refining rounding errors, fall short of achieving optimal outcomes. Instead, it is essential to fully account for the inter-layer and intra-layer relationships.

To address this, we propose CBQ, a cross-block reconstruction-based PTQ method tailored for LLMs, surpassing traditional layer-wise and block-wise reconstruction techniques. CBQ introduces a cross-block dependency (CBD) into block-wise reconstruction, maintaining the integrity of the model's internal dependencies during quantization. Our approach optimizes multiple transformer blocks within a sliding window with overlap, allowing for more effective and non-local optimization of quantization parameters. Using the CBD method, CBQ incorporates a LoRA-Rounding technique, employing two low-rank matrices to learn adaptive compensation values for quantized weights. Notably, we jointly optimize the compensation matrices and the step sizes of weights and activations within the overlapping window, which helps manage intra-layer dependencies to rectify weight quantization errors while preserving training efficiency. Furthermore, CBQ introduces a novel unified coarse-to-fine pre-processing (CFP) strategy from a statistical perspective to evaluate outliers in weights and activations, precisely handling outliers while minimizing damage to normal channels. CFP employs a quartile criterion to initially estimate the range of outliers and then assesses the intra-class and inter-class distances between outliers and normal values to precisely identify their locations. This approach facilitates the truncation of weight outliers and the application of equivalent scaling to activation outliers.

The contributions of this paper are summarized as follows:

- We performed a comprehensive analysis of the error sources in low-bit quantization scenarios for LLMs, and theoretically demonstrated the significant impact of intra-layer and inter-layer dependencies on the effectiveness of model quantization.
- We propose CBQ, a unified PTQ method designed for LLMs, incorporating a cross-block reconstruction strategy that introduces a cross-block dependency (CBD) mechanism to preserve the model's internal dependencies during quantization, and LoRA-Rounding to utilize intra-layer dependencies for optimizing adaptive compensation matrices.
- We design a coarse-to-fine pre-processing strategy (CFP) that can simultaneously detect and manage outliers in both weights and activations, effectively preventing disruption to normal activation channels and weights.
- Extensive experiments demonstrate the effectiveness of our method in ultra-low bit quantization settings such as W4A4, W4A8, and W2A16. Notably, it outperforms state-of-the-art methods across diverse models and benchmark datasets.
2 MOTIVATION

To analyze the sources of quantization errors in large models when quantizing weights or activations, we assume a matrix M representing a set of weights or activations as the current quantization target, and L denotes the quantization loss of the model under this matrix. Let ε denote a small perturbation introduced by quantization and L(M) represent the task loss that we aim to minimize. Then, we can derive the following equation via a Taylor expansion:
$$\mathbb{E}\big[\mathcal{L}(M+\varepsilon) - \mathcal{L}(M)\big] \approx \mathbb{E}\Big[\varepsilon^T \cdot \frac{\partial \mathcal{L}}{\partial M} + \frac{1}{2}\,\varepsilon^T \frac{\partial^2 \mathcal{L}}{\partial M^2}\,\varepsilon + O(\|\varepsilon\|^3)\Big] \approx \varepsilon^T \cdot g^{(M)} + \frac{1}{2}\,\varepsilon^T \cdot \mathbf{H}^{(M)} \cdot \varepsilon \quad (1)$$
As discussed in previous work (Frantar et al. (2022b)), when the error ε introduced by quantization is sufficiently small, the higher-order terms in the Taylor expansion can be neglected. Therefore, we analyze the first- and second-order terms, g^(M) and H^(M), which can be defined as follows:
$$g^{(M)} = \mathbb{E}\big[\nabla_M \mathcal{L}(M)\big] = \sum_{i}^{K} \frac{\partial \mathcal{L}}{\partial M_i} \quad (2)$$
$$\mathbf{H}^{(M)} = \mathbb{E}\big[\nabla^2_M \mathcal{L}(M)\big] = \sum_{i}^{K} \sum_{j}^{K} \frac{\partial^2 \mathcal{L}}{\partial M_i\, \partial M_j} \quad (3)$$

Figure 1: (a) Visualization of the absolute values of the Hessian matrix for weights within a single layer of LLAMA-7B, (b) Hessian-matrix visualization of the loss with respect to the scale across 32 layers of LLAMA-7B, and (c) the relationship between the average scale of the first two transformer blocks in LLAMA-7B and the corresponding loss.

Let K denote the number of elements in the LLM involved in the quantization. Using Equations 2 and 3, the influence of any two elements i and j on the final quantization loss can be calculated. From Equations 1, 2, and 3, it can be observed that when the quantization perturbation ε is small, ||ε||² is also small, allowing us to disregard the implications of Equation 3. In this case, the quantization error is primarily related to the current quantization target M, analogous to high-bit quantization. However, when performing low-bit quantization, ||ε||² increases, necessitating consideration of the impact described by Equation 3. This indicates that when i ≠ j, relationships between different elements of M are introduced. This relationship manifests in two aspects: when quantizing a single layer, it reflects intra-layer dependencies among parameters, and when quantizing the entire model, inter-layer dependencies must also be considered. Furthermore, given that the complexity of the Hessian matrix H is proportional to O(n²), where n represents the number of parameters, the growth in model size, both in terms of parameters and layers, leads to a marked intensification of intra-layer and inter-layer dependencies.

To better illustrate intra-layer and inter-layer dependencies, we visualize Equation 3 for both individual layers and the entire model using LLAMA-7B. Additionally, we present visualizations of the dependencies between adjacent blocks, as referenced in Figure 1. By analyzing Figure 1, we observe a notable increase in the values of off-diagonal elements during lower-bit quantization. This increase indicates a strengthening of both inter-layer and intra-layer dependencies, with closer elements exhibiting stronger correlations. Furthermore, comparisons of the scales between adjacent layers provide a clearer understanding of the substantial impact that inter-layer dependencies have on final quantization outcomes in low-bit scenarios.
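To make the role of the second-order term concrete, a minimal PyTorch sketch estimates the two terms of Eq. (1) for a quantization perturbation, using autograd for the gradient and a Hessian-vector product; the toy loss and step size are assumptions standing in for an actual LLM task loss.

```python
# A minimal sketch of Eq. (1): estimate eps^T g + 0.5 eps^T H eps for a
# quantization perturbation eps via autograd and a Hessian-vector product.
import torch

M = torch.randn(64, requires_grad=True)   # weights being quantized (toy)
loss = (M ** 4).sum() / 4                  # toy task loss L(M)

g = torch.autograd.grad(loss, M, create_graph=True)[0]   # first-order term

step = 0.1                                                  # toy step size
eps = (torch.round(M / step) * step - M).detach()           # quant. error

Hv = torch.autograd.grad(g, M, grad_outputs=eps)[0]         # H @ eps
delta_L = eps @ g.detach() + 0.5 * (eps @ Hv)               # Eq. (1) estimate
print(delta_L)
```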
Therefore, taking into account both intra-layer and inter-layer dependencies, we present the quantization framework for LLMs under low-bit settings, which can be expressed by the following equation:
$$\arg\min_{h \subseteq \mathbf{H}^{+}} \sum_{k \in h} \mathrm{E}\big(T_k(W^k, X^k),\; QT_k(Q(W^k) + \Delta_W^k,\; Q(X^k))\big), \quad (4)$$
where T and QT represent the floating-point and quantized transformer blocks, respectively, Q(·) represents the quantization process, and E(·) represents the metric used to evaluate the reconstruction error between the outputs of the quantized block and the full-precision block. We jointly optimize all transformer blocks with inter-layer dependencies while compensating for intra-layer relationships using $\{\Delta_W^k \,|\, k \subseteq \mathbf{H}^{+}\}$.

Figure 2: Workflow of the proposed CBQ. CBQ first utilizes a coarse-to-fine pre-processing step to handle the outliers of weights and activations, and then employs a cross-block optimization strategy to learn quantization step sizes and weight adaptive rounding matrices with supervision from the corresponding full-precision model. This sequential block-wise method minimizes aggregate error propagation through cross-block dependency modeling.

3 METHOD

In this section, we introduce the proposed cross-block quantization framework tailored to LLMs. As illustrated in Fig. 2, CBQ first handles the outliers of weights and activations, and then jointly learns the step sizes of weights and activations and the weight-compensation matrices in a cross-block manner. CBQ reconstructs the output feature of the last block in each sliding window based on the corresponding supervision of the full-precision model.

3.1 CROSS-BLOCK RECONSTRUCTION

To maintain inter-layer dependencies, it is necessary to optimize the layers with significant dependencies together. As shown in Figure 1, the strongest dependencies are typically observed between adjacent layers. Therefore, we introduce a cross-block dependency (CBD) scheme using a sliding-window approach. This scheme enables the simultaneous optimization of multiple blocks within the window. Furthermore, two adjacent sliding windows have overlapping blocks, ensuring that the blocks between the windows are also interconnected. The CBD scheme enhances the connectivity and cooperation between blocks, enabling them to jointly contribute to the quantization process. This holistic optimization strategy leads to better overall performance and addresses the limitations of block-wise reconstruction in capturing cross-block dependencies. We formulate the optimization with the CBD scheme as
$$\arg\min_{S_X^{i,k},\, S_W^{i,k},\, \Delta_W^{i,k}} \mathrm{E}\big(T_{i,k}(W^{i,k}, X^{i,k}),\; T_{i,k}(Q(W^{i,k}),\, Q(X^{i,k}))\big), \quad (5)$$
where 1 ≤ i ≤ k ≤ K, T_{i,k} represents the blocks from block i to block k within one sliding window, and the same applies to the symbols S_X^{i,k}, S_W^{i,k}, and Δ_W^{i,k}. The optimization objective L_rec is as follows:
$$\mathcal{L}_{rec} = \mathrm{E}\big(T_{i,k}(W^{i,k}, X^{i,k}),\; T_{i,k}(Q(W^{i,k}),\, Q(X^{i,k}))\big) \quad (6)$$
For the distance metric, we incorporate L2 and Kullback-Leibler divergence (KLD) losses (Kullback & Leibler (1951)) to measure the reconstruction error. The KLD term compares the likelihood distributions of output features after the softmax function; it tends to suppress outliers in the feature space and enhance the robustness of the optimization process. By incorporating both terms, our method captures both the spatial distance and the distribution discrepancy, leading to a more comprehensive and robust optimization process. The distance metric is formulated as:
$$\mathrm{E}(h_1, h_2) = \|h_1 - h_2\|_2 + D_{KL}\big(\sigma(h_1), \sigma(h_2)\big), \quad (7)$$
where h_1 and h_2 are hidden states from the outputs of full-precision blocks and quantized blocks, respectively, σ is the softmax function, ||·||_2 represents the L2 distance, and D_KL(·) represents the KLD distance. We provide an ablation study on the loss functions in Appendix B, Table 5.
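A minimal PyTorch sketch of the distance metric in Eq. (7); the tensor shapes are assumptions, and the softmax is taken over the last dimension.

```python
# A minimal sketch of Eq. (7): L2 distance plus KL divergence between
# softmax distributions of full-precision and quantized block outputs.
import torch
import torch.nn.functional as F

def reconstruction_error(h1, h2):
    """h1: full-precision block output; h2: quantized block output."""
    l2 = torch.norm(h1 - h2, p=2)
    # KL(softmax(h1) || softmax(h2)) over the last dimension
    p = F.softmax(h1, dim=-1)
    log_p = F.log_softmax(h1, dim=-1)
    log_q = F.log_softmax(h2, dim=-1)
    kld = (p * (log_p - log_q)).sum(dim=-1).mean()
    return l2 + kld

h1 = torch.randn(4, 128, 768)            # e.g., (batch, sequence, hidden)
h2 = h1 + 0.01 * torch.randn_like(h1)    # perturbed stand-in for quantized
print(reconstruction_error(h1, h2))
```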
3.2 LORA-ROUNDING FOR WEIGHT QUANTIZATION

AdaRound (Nagel et al. (2020)) introduces learning a better weight-rounding matrix for post-training quantization that adapts to the data and the task loss. As shown in Eq. 8, we can obtain the weight-rounding matrix ΔW ∈ R^{d×k} from a learnable matrix V ∈ R^{d×k} with a rectified sigmoid function:
$$\Delta W = \mathrm{Clip}\big(\mathrm{Sigmoid}(V)(\zeta - \gamma) + \gamma,\; 0,\; 1\big), \quad (8)$$
where ζ and γ are stretch parameters fixed to 1.1 and -0.1, and Clip(·) clamps the inputs into a given range. The size of the weight-rounding matrix ΔW is the same as that of the original weights. When the transformer blocks are within the overlap of the CBD sliding-window mechanism, the rounding matrix can serve as an effective representation of intra-layer dependencies. We utilize it as a compensation matrix and jointly optimize it with the quantization step sizes for weights and activations, which can be expressed as follows:
$$\arg\min_{S_X^{i,k},\, S_W^{i,k},\, \Delta_W^{j,k}} \mathrm{E}\big(T_{i,k}(W^{i,k}, X^{i,k}),\; T_{i,k}(Q(W^{i,k}) + \Delta_W^{j,k},\, Q(X^{i,k}))\big) \quad (9)$$
$$\text{s.t.}\quad j = k + 1 - \mathit{overlap} \quad (10)$$
However, as shown in Table 3b of the experiments, we found that LLMs with billion-level parameters result in an exceptionally large Δ_W^{j,k}, which can lead to significant computational overhead and substantially impact the convergence of training. Shao et al. (2023) have also noted that AdaRound cannot be applied to models with billions of parameters due to the vast solution space, which aligns with our experimental findings. Thus, we employ low-rank adaptive learning on the compensation matrices, decomposing V into much smaller low-rank matrices, and only optimize these in post-training quantization. The decomposition is defined as:
$$\Delta W = A_1 \times A_2, \quad A_1 \in \mathbb{R}^{d \times r},\; A_2 \in \mathbb{R}^{r \times k}, \quad (11)$$
where the rank r << min(d, k). We utilize a random Gaussian initialization for A_1 and zero for A_2, so that V = A_1 A_2 is zero at the beginning of post-training quantization. During training, each element of ΔW is encouraged towards 0 or 1 with a regularizer loss:
$$\mathcal{L}_{com} = \sum_{i,j} 1 - \big|2\Delta W(i,j) - 1\big|^{\beta}, \quad (12)$$
where β is an annealing factor. Following Nagel et al. (2020), β is set higher in the initial phase and lower in the later phase of the optimization to encourage each element to converge to 0 or 1. We also apply ΔW = ⌊ΔW⌉ in the later phase of the optimization to force each element into {0, 1} exactly.

**Compared with vanilla AdaRound for LLMs.** The proposed LoRA-Rounding reduces the number of learnable parameters from d × k to (d + k) × r and changes the training strategy, significantly accelerating the optimization process; we conduct ablation experiments in Section 5.3.
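A minimal PyTorch sketch of the LoRA-Rounding parameterisation (Eqs. 8, 11, and 12); the matrix shapes, rank, and the floor-plus-offset quantization step are assumptions for illustration.

```python
# A minimal sketch of LoRA-Rounding: the rounding offset is a rectified
# sigmoid of the low-rank product A1 @ A2, with a {0,1}-pushing regulariser.
import torch
import torch.nn as nn

class LoRARounding(nn.Module):
    def __init__(self, d, k, r=8, zeta=1.1, gamma=-0.1):
        super().__init__()
        self.A1 = nn.Parameter(torch.randn(d, r) * 0.01)  # Gaussian init
        self.A2 = nn.Parameter(torch.zeros(r, k))  # zero init: A1 @ A2 = 0
        self.zeta, self.gamma = zeta, gamma

    def delta(self):
        V = self.A1 @ self.A2                      # Eq. (11)
        return torch.clamp(torch.sigmoid(V) * (self.zeta - self.gamma)
                           + self.gamma, 0.0, 1.0)  # Eq. (8)

    def regulariser(self, beta):
        # Eq. (12): pushes each entry of delta towards 0 or 1 as beta anneals
        d = self.delta()
        return (1.0 - (2.0 * d - 1.0).abs() ** beta).sum()

d, k = 768, 3072
rounding = LoRARounding(d, k)
W = torch.randn(d, k)
step = W.abs().mean() / 7.0                             # toy 4-bit-ish step
W_q = (torch.floor(W / step) + rounding.delta()) * step  # round-down + offset
loss_com = rounding.regulariser(beta=20.0)
```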
3.3 OVERALL LOSS

In summary, leveraging CBD and a low-rank decomposition of the weight-compensation matrix, we slide the window to the last block with a fixed interval and update all the quantization parameters S_W, S_X, A_1, A_2 within the window, ensuring the preservation of both intra-layer and inter-layer relationships of the model, thereby achieving optimal performance. The total loss for optimizing the i-th block to the k-th block within a sliding window is formulated as
$$\mathcal{L}_{total} = \mathcal{L}_{rec} + \gamma \mathcal{L}_{com}, \quad (13)$$
where γ is a hyper-parameter that balances the reconstruction error and the compensation error.

3.4 COARSE-TO-FINE PRE-PROCESSING

Outlier handling is crucial in quantizing LLMs. Figure 3 in Appendix F illustrates the prevalent outliers in weights and activations, which pose significant challenges to the quantization process. Although there are many existing studies on the outlier problem, these studies typically focus on outliers in either weights or activations individually, such as in (Chee et al. (2024); Wei et al. (2022b);

Idea Generation Category:
0Conceptual Integration
eW4yh6HKz4
# EDGERUNNER: AUTO-REGRESSIVE AUTO-ENCODER FOR ARTISTIC MESH GENERATION

**Jiaxiang Tang**1∗ **Zhaoshuo Li**2 **Zekun Hao**2 **Xian Liu**2 **Gang Zeng**1 **Ming-Yu Liu**2 **Qinsheng Zhang**2

1 State Key Laboratory of General Artificial Intelligence, Peking University. 2 NVIDIA Research.

**[https://research.nvidia.com/labs/dir/edgerunner/](https://research.nvidia.com/labs/dir/edgerunner/)**

∗ This work was done while interning with NVIDIA.

Figure 1: **EdgeRunner** efficiently generates diverse, high-quality artistic meshes conditioned on point clouds or single-view images.

ABSTRACT

Current auto-regressive mesh generation methods suffer from issues such as incompleteness, insufficient detail, and poor generalization. In this paper, we propose an Auto-regressive Auto-encoder (ArAE) model capable of generating high-quality 3D meshes with up to 4,000 faces at a spatial resolution of 512³. We introduce a novel mesh tokenization algorithm that efficiently compresses triangular meshes into 1D token sequences, significantly enhancing training efficiency. Furthermore, our model compresses variable-length triangular meshes into a fixed-length latent space, enabling the training of latent diffusion models for better generalization. Extensive experiments demonstrate the superior quality, diversity, and generalization capabilities of our model in both point-cloud and image-conditioned mesh generation tasks.

1 INTRODUCTION

Automatic 3D content generation, particularly the generation of widely used polygonal meshes, holds the potential to revolutionize industries such as digital gaming, virtual reality, and filmmaking. Generative models can make 3D asset creation more accessible to non-experts by drastically reducing the time and effort involved. This democratization opens up opportunities for a wider range of individuals to contribute to and innovate within the 3D content space, fostering greater creativity and efficiency across these sectors.

Previous research on 3D generation has explored a variety of approaches. For example, optimization-based methods, such as those using score distillation sampling (SDS) (Poole et al., 2022; Lin et al., 2023; Liu et al., 2023b; Tang et al., 2023a), lift 2D diffusion priors into 3D without requiring any 3D data. In contrast, large reconstruction models (LRM) (Hong et al., 2023; Wang et al., 2023b; Xu et al., 2023b; Li et al., 2023; Weng et al., 2024b) directly train feed-forward models to predict neural radiance fields (NeRF) or Gaussian Splatting from single or multi-view image inputs. Lastly, 3D-native latent diffusion models (Zhang et al., 2024c; Wu et al., 2024b; Li et al., 2024c) encode 3D assets into latent spaces and generate diverse contents by performing diffusion steps in the latent space. However, all these approaches rely on continuous 3D representations, such as NeRF or SDF grids, which lose the discrete face indices of triangular meshes during conversion. Consequently, they require post-processing, such as marching cubes (Lorensen & Cline, 1998) and re-meshing algorithms, to extract triangular meshes. These meshes differ significantly from artist-created ones, which are more concise, symmetric, and aesthetically structured.
Additionally, these methods are limited to generating watertight meshes and cannot produce single-layered surfaces. Recently, several approaches (Siddiqui et al., 2024a; Chen et al., 2024d;b; Weng et al., 2024a; Chen et al., 2024e) have attempted to tokenize meshes into 1D sequences and leverage auto-regressive models for direct mesh generation. Specifically, MeshGPT (Siddiqui et al., 2024a) proposes to empirically sort the triangular faces and apply a vector-quantization variational auto-encoder (VQ-VAE) to tokenize the mesh. MeshXL (Chen et al., 2024b) directly flattens the vertex coordinates and does not use any compression other than vertex discretization. Since these methods learn directly from mesh vertices and faces, they can preserve the topology information and generate artistic meshes.

However, these auto-regressive mesh generation approaches still face several challenges. (1) Generation of a large number of faces: due to inefficient face tokenization algorithms, most prior methods can only generate meshes with fewer than 1,600 faces, which is insufficient for representing complex objects. (2) Generation of high-resolution surfaces: previous works quantize mesh vertices to a discrete grid of only 128³ resolution, which results in significant accuracy loss and unsmooth surfaces. (3) Model generalization: training auto-regressive models with difficult input modalities is challenging. Previous approaches often struggle to generalize beyond the training domain when conditioning on single-view images.

In this paper, we present a novel approach named **EdgeRunner** to address the aforementioned challenges. Firstly, we introduce a mesh tokenization method based on EdgeBreaker (Rossignac, 1999) that compresses the sequence length by 50% and reduces long-range dependencies between tokens, significantly improving training efficiency. Secondly, we propose an Auto-regressive Auto-encoder (ArAE) that compresses variable-length triangular meshes into fixed-length latent codes. This latent space can be used to train latent diffusion models conditioned on other modalities, offering better generalization capabilities. We also enhance the training pipeline to support higher quantization resolution. These improvements enable EdgeRunner to generate diverse, high-quality artistic meshes with up to 4,000 faces and vertices discretized at a resolution of 512³, resulting in sequences that are twice as long and four times higher in resolution compared to previous methods. In summary, our contributions are as follows:

1. We introduce a novel mesh tokenization algorithm, adapted from EdgeBreaker, which supports lossless face compression, prevents flipped faces, and reduces long-range dependencies to facilitate learning.
2. We propose an Auto-regressive Auto-encoder (ArAE), comprising a lightweight encoder and an auto-regressive decoder, capable of compressing variable-length triangular meshes into fixed-length latent codes.
3. We demonstrate that the latent space of ArAE can be leveraged to train latent diffusion models for better generalization, enabling conditioning on different input modalities such as single-view images.
4. Extensive experiments show that our method generates high-quality and diverse artistic meshes from point clouds or single-view images, exhibiting improved generalization and robustness compared to previous methods.
2 RELATED WORK

2.1 OPTIMIZATION-BASED 3D GENERATION

Early 3D generation methods relied on SDS-based optimization techniques (Jain et al., 2022; Poole et al., 2022; Wang et al., 2023a; Mohammad Khalid et al., 2022; Michel et al., 2022) due to limited 3D data. Subsequent works advanced generation quality (Lin et al., 2023; Wang et al., 2023d; Chen et al., 2023c;e; Sun et al., 2023; Qiu et al., 2024), reduced generation time (Tang et al., 2023a; Yi et al., 2023; Lorraine et al., 2023; Xu et al., 2024a), enabled 3D editing (Zhuang et al., 2023; Singer et al., 2023; Raj et al., 2023; Chen et al., 2024c), and conditioned on images (Xu et al., 2023a; Tang et al., 2023b; Melas-Kyriazi et al., 2023; Liu et al., 2023b; Qian et al., 2023; Shi et al., 2023). Other approaches first predict multi-view images, then apply reconstruction algorithms to generate the final 3D models (Long et al., 2023; Li et al., 2024b; Pang et al., 2024; Tang et al., 2024b). Recently, Unique3D (Wu et al., 2024a) introduced a method that combines high-resolution multi-view diffusion models with an efficient mesh reconstruction algorithm, achieving both high quality and fast image-to-3D generation.

2.2 FEED-FORWARD 3D GENERATION

With the introduction of large-scale datasets (Deitke et al., 2023b;a), more recent works propose to use feed-forward 3D models. The Large Reconstruction Model (LRM) (Hong et al., 2023) demonstrated that end-to-end training of a triplane-NeRF regression model scales effectively to large datasets and generates 3D assets within seconds. While LRM significantly accelerates generation speed, the resulting meshes often exhibit lower quality and a lack of diversity. Subsequent research has sought to improve generation quality by incorporating multi-view images as inputs (Xu et al., 2024b; Li et al., 2023; Wang et al., 2023b; He & Wang, 2023; Siddiqui et al., 2024b; Xie et al., 2024; Wang et al., 2024) and by adopting more efficient 3D representations (Zhang et al., 2024a; Li et al., 2024a; Wei et al., 2024; Zou et al., 2023; Tang et al., 2024a; Xu et al., 2024d; Zhang et al., 2024b; Chen et al., 2024a; Yi et al., 2024).

2.3 DIFFUSION-BASED 3D GENERATION

Analogous to 2D diffusion models for image generation, significant efforts have been made to develop 3D-native diffusion models capable of conditional 3D generation. Early approaches typically rely on uncompressed 3D representations, such as point clouds, NeRFs, tetrahedral grids, and volumes (Nichol et al., 2022; Jun & Nichol, 2023; Gupta et al., 2023; Cheng et al., 2023; Ntavelis et al., 2023; Zheng et al., 2023; Zhang et al., 2023; Liu et al., 2023c; Müller et al., 2023; Chen et al., 2023d; Cao et al., 2023; Chen et al., 2023a; Wang et al., 2023c; Yariv et al., 2023; Liu et al., 2023a; Xu et al., 2024c; Yan et al., 2024) to train diffusion models. However, these methods are often limited by small-scale datasets and struggle with generalization or producing high-quality assets. More recent approaches have focused on adapting latent diffusion models to 3D (Zhao et al., 2023; Zhang et al., 2024c; Wu et al., 2024b; Li et al., 2024c; Lan et al., 2024; Hong et al., 2024; Tang et al., 2023c; Chen et al., 2024f). These methods first train a VAE to compress 3D representations into a more compact form, which enables more efficient diffusion model training. Unlike the straightforward image representations in 2D, 3D latent diffusion models involve numerous design choices, leading to varied performance outcomes.
For example, CLAY (Zhang et al., 2024c) has demonstrated that a transformer-based 3D latent diffusion model can scale to large datasets and generalize well across diverse input conditions.

Figure 2: **Pipeline of our method**. Our ArAE model (Auto-regressive Auto-encoder) compresses a variable-length mesh into a fixed-length latent code, which can be further used to train latent diffusion models conditioned on other input modalities, such as single-view images.

2.4 AUTO-REGRESSIVE MESH GENERATION

The above works require additional post-processing steps to extract triangular meshes and fail to model the mesh topology. Recently, approaches using auto-regressive models to directly generate meshes have emerged. MeshGPT (Siddiqui et al., 2024a) pioneered this approach by tokenizing a mesh through face sorting and compression with a VQ-VAE, followed by using an auto-regressive transformer to predict the token sequence. This method allows for the generation of meshes with varying face counts and incorporates direct supervision from topology information, which is often overlooked in other approaches. Subsequent works (Chen et al., 2024b; Weng et al., 2024a; Chen et al., 2024d) have explored different model architectures and extended this approach to conditional generation tasks, such as point cloud generation. However, these methods are limited to meshes with fewer than 1,000 faces due to the computational cost of mesh tokenization, and they exhibit limited generalization capabilities. A concurrent work, MeshAnythingV2 (Chen et al., 2024e), introduces an improved mesh tokenization technique, increasing the maximum number of faces to 1,600. Our approach also falls under the category of auto-regressive mesh generation but aims to further extend the maximum face count and provide control over the target face number during inference.

3 EDGERUNNER

3.1 COMPACT MESH TOKENIZATION

Auto-regressive models process information in the form of discrete token sequences. Thus, compact tokenization is crucial, as it allows information to be represented accurately with fewer tokens. For example, text tokenizers have been a central research direction for large language models (LLMs). The GPT and Llama series (Touvron et al., 2023; Brown et al., 2020) utilize the byte-pair encoding (BPE) tokenizer, which combines sub-word units into single tokens for highly compact and lossless compression. In contrast, tokenization techniques used in prior auto-regressive mesh generation works suffer from two main issues. Some prior works use _lossy_ VQ-VAEs (Siddiqui et al., 2024a; Chen et al., 2024d; Weng et al., 2024a), which sacrifice mesh generation quality. Others opt for _zero-compression_ by not using face tokenizers (Chen et al., 2024b), which poses training challenges due to inefficiency. In this paper, we introduce a tokenization scheme that allows us to represent a mesh compactly and efficiently, based on the well-established triangular mesh compression algorithm EdgeBreaker (Rossignac, 1999). **The key insight for mesh compression is to maximize edge sharing between adjacent triangles**.
By sharing an edge with the previous triangle, the next triangle requires only one additional vertex instead of three. We illustrate our mesh tokenization process with an example below, and provide more details in the appendix.

Figure 3: **Illustration of our mesh tokenizer**. Our tokenizer traverses the 3D mesh triangle-by-triangle and converts it into a 1D token sequence. Through edge sharing, we reach a compression rate of 50% (4 or 5 tokens per face on average) compared to naïve tokenization of 9 tokens per face. For example, a mesh with 944 faces needs 3968 tokens, a ratio of #Tokens / (9 × #Faces) = 46.70%; a mesh with 600 faces needs 2490 tokens, a ratio of 46.11%.

**Half-edge.** EdgeBreaker (Rossignac, 1999) uses half-edge data structures (Weiler, 1986) for triangular face traversal; an illustration is provided in Figure 4 (half-edge representation for triangular faces). We use $H_{ab}^{c}$ to denote a half-edge: for example, $H_{41}^{3}$ is the half-edge pointing from vertex 4 to 1, with vertex 3 across the face. Starting from $H_{41}^{3}$, we can traverse to the _next_ half-edge $H_{13}^{4}$ and the _next twin_ half-edge $H_{31}^{2}$. Reversely, the _previous_ half-edge is $H_{34}^{1}$ and the _previous twin_ half-edge is $H_{43}^{5}$. The half-edge data structure has also been used in recent learning-based mesh generation work (Shen et al., 2024).

**Vertex Tokenization.** To tokenize a mesh into a discrete sequence, vertex coordinates require discretization. Following previous works (Siddiqui et al., 2024a), we normalize the mesh to a unit cube and quantize the continuous vertex coordinates into integers according to a quantization resolution, which is 512 in this work. Each vertex is therefore represented by three integer coordinates, which are then flattened in XYZ order as tokens. With some abuse of notation, we denote the XYZ tokens of a vertex as a single vertex token, e.g., ⟨1⟩.

**Face Tokenization.** We traverse through all faces following the half-edges. To illustrate the process, we use the mesh example in Figure 3. The process starts with one half-edge, where $H_{23}^{1}$ is picked as the beginning of the current traversal. We signify the start of a traversal with ⟨B⟩. We then append the vertex across the half-edge, ⟨1⟩, as the first vertex token. Within the same triangular face, the two vertices ⟨2⟩ ⟨3⟩ are also appended, following the direction of $H_{23}^{1}$. During traversal, we visit the _next twin_ half-edge whenever possible, and only reverse the half-edge direction to the _previous twin_ half-edge when we exhaust all triangles in the current traversal. Returning to the example in Figure 3, we follow $H_{23}^{1}$ and reach $H_{13}^{4}$. Thus, we append ⟨N⟩ to signify the _next twin_ traversal direction, and we only need to append ⟨4⟩ since ⟨1⟩ ⟨3⟩ are shared. The same process is repeated for $H_{43}^{5}$, with ⟨N⟩ ⟨5⟩ added to the current sub-sequence. We have completed the current traversal, as no adjacent faces can be found for $H_{43}^{5}$. The sub-sequence for the current traversal is thus ⟨B⟩ ⟨1⟩ ⟨2⟩ ⟨3⟩ ⟨N⟩ ⟨4⟩ ⟨N⟩ ⟨5⟩. To begin a new sub-sequence, we reversely retrieve the last-added half-edges to traverse in the opposite directions.
As the last-added half-edge $H_{43}^{5}$ doesn't have any adjacent faces, we skip it and instead consider $H_{13}^{4}$. We go opposite to its _previous twin_ half-edge $H_{14}^{6}$. As this is a new sub-sequence, ⟨B⟩ ⟨6⟩ ⟨1⟩ ⟨4⟩ are added. We continue finding the un-visited faces in the neighborhood of $H_{14}^{6}$, arriving at its _previous twin_ half-edge $H_{16}^{7}$. Thus, we add ⟨P⟩ ⟨7⟩ to the current sub-mesh sequence, as ⟨6⟩ ⟨1⟩ are shared.
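As a concrete illustration of the vertex tokenization step described above, the following is a minimal sketch of coordinate quantization at a resolution of 512. The function name and the NumPy-based implementation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def quantize_vertices(vertices: np.ndarray, resolution: int = 512) -> np.ndarray:
    """Normalize a mesh to the unit cube and discretize each vertex
    coordinate into an integer token id in [0, resolution).
    vertices: (V, 3) float array of XYZ coordinates."""
    vmin = vertices.min(axis=0)
    scale = (vertices.max(axis=0) - vmin).max()   # uniform scale preserves aspect ratio
    unit = (vertices - vmin) / scale              # coordinates now lie in [0, 1]^3
    tokens = np.rint(unit * (resolution - 1)).astype(np.int64)
    return np.clip(tokens, 0, resolution - 1)     # each vertex -> 3 integer tokens (XYZ order)
```

Each face then contributes either three such vertex tokens (at the start of a traversal) or one new vertex plus a control token (⟨N⟩ or ⟨P⟩), which is where the roughly 50% compression over the naïve 9-tokens-per-face scheme comes from.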
Idea Generation Category: Direct Enhancement (class 2)
ID: 81cta3WQVI
# CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer

**Zhuoyi Yang**⋆‡ **Jiayan Teng**⋆‡ **Wendi Zheng**‡ **Ming Ding**† **Shiyu Huang**† **Jiazheng Xu**‡ **Yuanming Yang**‡ **Wenyi Hong**‡ **Xiaohan Zhang**† **Guanyu Feng**† **Da Yin**† **Yuxuan Zhang**† **Weihan Wang**† **Yean Cheng**† **Bin Xu**‡ **Xiaotao Gu**† **Yuxiao Dong**‡ **Jie Tang**‡
†Zhipu AI ‡Tsinghua University

Figure 1: CogVideoX can generate long-duration, high-resolution videos with coherent actions and rich semantics.

Abstract

We present CogVideoX, a large-scale text-to-video generation model based on diffusion transformer, which can generate 10-second continuous videos that align seamlessly with text prompts, with a frame rate of 16 fps and a resolution of 768 × 1360 pixels. Previous video generation models often struggled with limited motion and short durations. It is especially difficult to generate videos with coherent narratives based on text. We propose several designs to address these issues. First, we introduce a 3D Variational Autoencoder (VAE) to compress videos across spatial and temporal dimensions, enhancing both the compression rate and video fidelity. Second, to improve text-video alignment, we propose an expert transformer with expert adaptive LayerNorm to facilitate the deep fusion between the two modalities. Third, by employing progressive training and multi-resolution frame packing, CogVideoX excels at generating coherent, long-duration videos with diverse shapes and dynamic movements. In addition, we develop an effective pipeline that includes various pre-processing strategies for text and video data. Our innovative video captioning model significantly improves generation quality and semantic alignment. Results show that CogVideoX achieves state-of-the-art performance in both automated benchmarks and human evaluation. We publish the code and model checkpoints of CogVideoX, along with our VAE model and video captioning model, at https://github.com/THUDM/CogVideo.

*Equal contributions. Core contributors: Zhuoyi, Jiayan, Wendi, Ming, Shiyu and Xiaotao. {yangzy22,tengjy24}@mails.tsinghua.edu.cn; corresponding author: jietang@tsinghua.edu.cn. Visit our demo website https://yzy-thu.github.io/CogVideoX-demo/ to watch more videos!

1 Introduction

The rapid development of text-to-video models has been phenomenal, driven by both the Transformer architecture (Vaswani et al., 2017) and diffusion models (Ho et al., 2020). Early attempts to pretrain and scale Transformers to generate videos from text have shown great promise, such as CogVideo (Hong et al., 2022) and Phenaki (Villegas et al., 2022). Meanwhile, diffusion models have recently made exciting advancements in video generation (Singer et al., 2022; Ho et al., 2022). By using Transformers as the backbone of diffusion models, i.e., Diffusion Transformers (DiT) (Peebles & Xie, 2023), text-to-video generation has reached a new milestone, as evidenced by the impressive Sora showcases (OpenAI, 2024). Despite these rapid advancements in DiTs, it remains technically unclear how to achieve long-term consistent video generation with dynamic plots. For example, previous models had difficulty generating a video based on a prompt like "a bolt of lightning splits a rock, and a person jumps out from inside the rock".
In this work, we train and introduce CogVideoX, a set of large-scale diffusion transformer models designed for generating long-term, temporally consistent videos with rich motion semantics. We address the challenges mentioned above by developing a 3D Variational Autoencoder, an expert Transformer, a progressive training pipeline, and a video data filtering and captioning pipeline, respectively. First, to efficiently consume high-dimensional video data, we design and train a 3D causal VAE that compresses the video along both spatial and temporal dimensions. Compared to the previous method (Blattmann et al., 2023) of fine-tuning a 2D VAE, this strategy significantly reduces the sequence length and associated training compute, and also helps prevent flicker in the generated videos, i.e., it ensures continuity among frames. Second, to improve the alignment between videos and texts, we propose an expert Transformer with expert adaptive LayerNorm to facilitate the fusion between the two modalities. To ensure temporal consistency in video generation and capture large-scale motions, we propose to use 3D full attention to comprehensively model the video along both temporal and spatial dimensions. Third, as most video data available online lacks accurate textual descriptions, we develop a video captioning pipeline capable of accurately describing video content. This pipeline is used to generate new textual descriptions for all video training data, which significantly enhances CogVideoX's ability to grasp precise semantics.

Figure 2: The performance of openly-accessible text-to-video models in different aspects.

In addition, we adopt and design progressive training techniques, including multi-resolution frame packing and resolution-progressive training, to further enhance the generation performance and stability of CogVideoX. Furthermore, we propose Explicit Uniform Sampling, which stabilizes the training loss curve and accelerates convergence by setting different timestep sampling intervals on each data parallel rank. To date, we have completed CogVideoX training at two sizes: 5 billion and 2 billion parameters. Both machine and human evaluations suggest that CogVideoX-5B outperforms well-known video models and that CogVideoX-2B is very competitive across most dimensions. Figure 2 shows the performance of CogVideoX-5B and CogVideoX-2B in different aspects, and indicates that CogVideoX is scalable: as model parameters, data volume, and training compute increase, performance is expected to improve further.

Our contributions can be summarized as follows:

- We propose CogVideoX, a simple and scalable structure with a 3D causal VAE and an expert transformer, designed for generating coherent, long-duration, high-action videos. It can generate long videos with multiple aspect ratios, up to 768 × 1360 resolution, 10 seconds in length, at 16 fps.
- We evaluate CogVideoX through automated metric evaluation and human assessment, compared with openly-accessible top-performing text-to-video models. CogVideoX achieves state-of-the-art performance.
- We publicly release our 5B and 2B models, including text-to-video and image-to-video versions, the first commercial-grade open-source video generation models. We hope this can advance the field of video generation.

2 The CogVideoX Architecture

In this section, we present the CogVideoX model. Figure 3 illustrates the overall architecture.
Given a pair of video and text inputs, we design a **3D causal VAE** to compress the video into the latent space; the latents are then patchified and unfolded into a long sequence denoted as $z_{\text{vision}}$. Simultaneously, we encode the textual input into text embeddings $z_{\text{text}}$ using T5 (Raffel et al., 2020). Subsequently, $z_{\text{text}}$ and $z_{\text{vision}}$ are concatenated along the sequence dimension. The concatenated embeddings are then fed into a stack of **expert transformer** blocks. Finally, the model outputs are unpatchified to restore the original latent shape, which is then decoded using the 3D causal VAE decoder to reconstruct the video. We illustrate the technical design of the 3D causal VAE and expert transformer in detail.

Figure 3: **The overall architecture of CogVideoX.**

2.1 3D Causal VAE

Videos contain both spatial and temporal information, typically resulting in much larger data volumes than images. To tackle the computational challenge of modeling video data, we propose to implement a video compression module based on 3D Variational Autoencoders (Yu et al., 2023b). The idea is to incorporate three-dimensional convolutions to compress videos both spatially and temporally. This helps achieve a higher compression ratio with largely improved quality and continuity of video reconstruction.

Table 1: Ablation with different variants of 3D VAE. The baseline is the SDXL (Podell et al., 2023) 2D VAE. Flickering calculates the L1 difference between each pair of adjacent frames to evaluate the degree of flickering in the video. We use variant B for pretraining.

| Variants | Baseline | A | B | C | D | E |
|---|---|---|---|---|---|---|
| Compression | 8×8×1 | 8×8×4 | 8×8×4 | 8×8×4 | 8×8×8 | 16×16×8 |
| Latent channel | 4 | 8 | 16 | 32 | 32 | 128 |
| Flickering ↓ | 93.2 | 87.6 | 86.3 | 87.7 | 87.8 | 87.3 |
| PSNR ↑ | 28.4 | 27.2 | 28.7 | 30.5 | 29.0 | 27.9 |

Figure 4: (a) The structure of the 3D VAE in CogVideoX. It comprises an encoder, a decoder, and a latent space regularizer, achieving an 8×8×4 compression from pixels to the latents. (b) The context parallel implementation on the temporally causal convolution.

Figure 4 (a) shows the structure of the proposed 3D VAE. It comprises an encoder, a decoder, and a Kullback-Leibler (KL) regularizer. The encoder and decoder consist of symmetrically arranged stages that respectively perform 2× downsampling and upsampling through interleaved stacks of ResNet blocks. Some blocks perform 3D downsampling (upsampling), while others only perform 2D downsampling (upsampling). We adopt the temporally causal convolution (Yu et al., 2023b), which places all the padding at the beginning of the convolution space, as shown in Figure 4 (b). This ensures that future information does not influence the present or past predictions. We also conducted ablation studies comparing different compression ratios and latent channels in Table 1. After using 3D structures, the reconstructed video shows almost no jitter, and as the latent channels increase, the restoration quality improves. However, when the spatiotemporal compression is too aggressive (16×16×8), the convergence of the model becomes extremely difficult even if the channel dimensions are correspondingly increased. Exploring VAEs with larger compression ratios is future work. Given that processing long-duration videos introduces excessive GPU memory usage, we apply context parallelism along the temporal dimension of the 3D convolution to distribute computation among multiple devices.
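To make the temporally causal convolution concrete, here is a minimal sketch of the padding scheme described above, written in PyTorch. The class name is illustrative, zero padding is assumed for simplicity, and this is not the released CogVideoX implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv3d(nn.Module):
    """3D convolution whose temporal padding is placed entirely at the
    front, so the output at frame t never depends on frames after t."""
    def __init__(self, in_ch: int, out_ch: int, kernel=(3, 3, 3)):
        super().__init__()
        kt, kh, kw = kernel
        # F.pad order for a 5D input: (W_left, W_right, H_top, H_bottom, T_front, T_back)
        self.pad = (kw // 2, kw // 2, kh // 2, kh // 2, kt - 1, 0)
        self.conv = nn.Conv3d(in_ch, out_ch, kernel, padding=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W); output keeps the same T, H, W
        return self.conv(F.pad(x, self.pad))
```

Because all temporal padding sits at the beginning, splitting the sequence across devices only requires each rank to receive the trailing k − 1 frames from the previous rank, which is exactly the context-parallel communication pattern discussed next.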
As illustrated by Figure 4 (b), due to the causal nature of the convolution, each rank simply sends a segment of length $k - 1$ to the next rank, where $k$ indicates the temporal kernel size. This results in relatively low communication overhead. During training, we first train a 3D VAE at 256 × 256 resolution with 17 frames to save computation, randomly sampling 8 or 16 fps to enhance the model's robustness. We observe that the model can encode larger-resolution videos well without additional training, as it has no attention modules, but this does not hold when encoding videos with more frames. Therefore, we conduct a two-stage training by first training on 17-frame videos and then fine-tuning with context parallelism on 161-frame videos. Both stages utilize a weighted combination of the L1 reconstruction loss, LPIPS (Zhang et al., 2018) perceptual loss, and KL loss. After a few thousand training steps, we additionally introduce the GAN loss from a 3D discriminator.

2.2 Expert Transformer

We introduce the design choices in the Transformer for CogVideoX, including the patching, positional embedding, and attention strategies.

**Patchify.** The 3D causal VAE encodes a video latent of shape $T \times H \times W \times C$, where $T$ represents the number of frames, $H$ and $W$ represent the height and width of each frame, and $C$ represents the number of channels. The video latents are then patchified along the temporal dimension (factor $q$) and the spatial dimensions (factor $p$), generating a sequence $z_{\text{vision}}$ of length $\frac{T}{q} \cdot \frac{H}{p} \cdot \frac{W}{p}$. When $q > 1$, we repeat the first frame of videos and images at the beginning of the sequence to enable joint training of images and videos.

**3D-RoPE.** Rotary Position Embedding (RoPE) (Su et al., 2024) is a relative positional encoding that has been demonstrated to capture inter-token relationships effectively in LLMs, particularly excelling in modeling long sequences. To adapt to video data, we extend the original RoPE to 3D-RoPE. Each latent in the video tensor can be represented by a 3D coordinate $(x, y, t)$. We independently apply 1D-RoPE to each dimension of the coordinates, occupying 3/8, 3/8, and 2/8 of the hidden states' channels, respectively. The resulting encoding is then concatenated along the channel dimension to obtain the final 3D-RoPE encoding.

**Expert Adaptive Layernorm.** We concatenate the embeddings of both text and video at the input stage to better align visual and semantic information. However, the feature spaces of these two modalities differ significantly, and their embeddings may even have different numerical scales. To better process them within the same sequence, we employ the Expert Adaptive Layernorm to handle each modality independently. As shown in Figure 3, following DiT (Peebles & Xie, 2023), we use the timestep $t$ of the diffusion process as the input to the modulation module. Then, the Vision Expert Adaptive Layernorm (Vision Expert AdaLN) and Text Expert Adaptive Layernorm (Text Expert AdaLN) apply this modulation to the vision hidden states and text hidden states, respectively. This strategy promotes the alignment of feature spaces across the two modalities while minimizing additional parameters.

Figure 5: The separated spatial and temporal attention makes it challenging to handle the large motion between adjacent frames.
In the figure, the head of the person in frame $i+1$ cannot directly attend to the head in frame $i$; instead, visual information can only be implicitly transmitted through other background patches. This can lead to inconsistency issues in the generated videos.

**3D Full Attention.** Previous works (Singer et al., 2022; Guo et al., 2023) often employ separated spatial and temporal attention to reduce computational complexity and facilitate fine-tuning from text-to-image models. However, as illustrated in Figure 5, this separated attention approach requires extensive implicit transmission of visual information, significantly increasing the learning complexity and making it challenging to maintain the consistency of large-movement objects. Considering the great success of long-context training in LLMs (AI@Meta, 2024) and the efficiency of FlashAttention (Dao et al., 2022), we propose a 3D text-video hybrid attention mechanism. This mechanism not only achieves better results but can also be easily adapted to various parallel acceleration methods.
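As a sketch of the 3D-RoPE construction described in Section 2.2, the following applies a standard 1D rotary embedding independently to the x, y, and t coordinates with the stated 3/8, 3/8, 2/8 channel split. Function names are illustrative, and the hidden size is assumed divisible by 16 so each per-axis slice has an even width; this is not the released implementation.

```python
import torch

def rope_1d(pos: torch.Tensor, dim: int, base: float = 10000.0):
    """Angles for standard 1D RoPE. pos: (N,) integer positions.
    Returns cos/sin tables of shape (N, dim // 2)."""
    freqs = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    angles = pos.float()[:, None] * freqs[None, :]
    return angles.cos(), angles.sin()

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    """Rotate consecutive feature pairs of x: (N, dim)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(x: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
    """3D-RoPE sketch: split channels 3/8, 3/8, 2/8 across (x, y, t),
    apply 1D RoPE per axis, and keep the concatenated channel layout.
    x: (N, dim) token features; coords: (N, 3) integer (x, y, t)."""
    dim = x.shape[-1]
    dx = dy = 3 * dim // 8
    dt = dim - dx - dy                      # the remaining 2/8 of the channels
    parts, start = [], 0
    for d, c in zip((dx, dy, dt), (coords[:, 0], coords[:, 1], coords[:, 2])):
        cos, sin = rope_1d(c, d)
        parts.append(apply_rope(x[:, start:start + d], cos, sin))
        start += d
    return torch.cat(parts, dim=-1)
```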
Idea Generation Category: Conceptual Integration (class 0)
ID: LQzN6TRFg9
# KRONECKER MASK AND INTERPRETIVE PROMPTS ARE LANGUAGE-ACTION VIDEO LEARNERS

**Jingyi Yang**¹∗, **Zitong Yu**²³∗, **Xiuming Ni**⁴, **Jia He**⁴, **Hui Li**¹†
¹University of Science and Technology of China, ²Great Bay University, ³Dongguan Key Laboratory for Intelligence and Information Technology, ⁴Anhui Tsinglink Information Technology Co., Ltd.
yangjingyi@mail.ustc.edu.cn, yuzitong@gbu.edu.cn, {nixm,hejia}@tsinglink.com, mythlee@.ustc.edu.cn

ABSTRACT

Contrastive language-image pretraining (CLIP) has significantly advanced image-based vision learning. A pressing topic subsequently arises: how can we effectively adapt CLIP to the video domain? Recent studies have focused on adjusting either the textual or the visual branch of CLIP for action recognition. However, we argue that adaptations of both branches are crucial. In this paper, we propose **CLAVER**: a **C**ontrastive **L**anguage-**A**ction **V**ideo Learn**er**, designed to shift CLIP's focus from the alignment of static visual objects and concrete nouns to the alignment of dynamic action behaviors and abstract verbs. Specifically, we introduce a novel Kronecker mask attention for temporal modeling. Our tailored Kronecker mask offers three benefits: 1) it expands the temporal receptive field for each token, 2) it serves as an effective spatiotemporal heterogeneity inductive bias, mitigating the issue of spatiotemporal homogenization, and 3) it can be seamlessly plugged into transformer-based models. Regarding the textual branch, we leverage large language models to generate diverse, sentence-level, and semantically rich interpretive prompts of actions, which shift the model's focus towards verb comprehension. Extensive experiments on various benchmarks and learning scenarios demonstrate the superiority and generality of our approach. Code is available at https://github.com/yjyddq/CLAVER.

1 INTRODUCTION

Video action recognition has long been a representative topic in video understanding. Over the past decade, there has been a continuous pursuit of learning spatiotemporal representations, giving rise to diverse architectures, such as traditional two-stream networks Simonyan & Zisserman (2014); Wang et al. (2016); Zhou et al. (2018); Karpathy et al. (2014); Xie et al. (2024), 3D convolutional neural networks Carreira & Zisserman (2017); Feichtenhofer (2020); Feichtenhofer et al. (2019); Hara et al. (2017); Qiu et al. (2017); Tran et al. (2015; 2018); Wang et al. (2018); Xie et al. (2018), and Video Vision Transformers Arnab et al. (2021); Bertasius et al. (2021); Fan et al. (2021); Liu et al. (2022); Patrick et al. (2021); Zhao et al. (2022); Li et al. (2022a); Yan et al. (2022). Recently, there has been increasing interest in leveraging visual-language models (VLMs) like CLIP Radford et al. (2021), Florence Yuan et al. (2021), and ALIGN Jia et al. (2021) for various video tasks, owing to the superior generalization abilities of these models. Several studies Wang et al. (2021); Lin et al. (2022); Ni et al. (2022); Ju et al. (2022); Rasheed et al. (2023); Tu et al. (2023); Chen et al. (2023) have been devoted to adapting CLIP for video action recognition, but they often focus on adjusting a single branch. According to predecessor studies, transferring CLIP from the image domain to the video domain involves two key considerations: 1) how to perform effective temporal modeling, and
2) how to design suitable text descriptions for verb understanding that align with the rich text semantics in the VLM's pre-training dataset. We argue that addressing both issues simultaneously is crucial.

∗ Equal Contribution. † Corresponding Author.

(Figure 1, left panel: commands, examples, and an action concept, i.e., a verb or phrase such as "abseiling", "air drumming", or "answering questions", are fed to LLMs such as LLaMA-3 and ChatGPT to generate action interpretive prompts.)
(Figure 1, right panel: frames are encoded by a shared image encoder and fused by the Kronecker mask temporal transformer.) Figure 1: An overview of **CLAVER**. (**Right**) Image encoder and KMT transformer are assembled as a video encoder. (**Left**) How to get the interpretive prompts for actions.

To address issue 1), several studies Wang et al. (2021); Ju et al. (2022); Chen et al. (2023); Rasheed et al. (2023) implement straightforward and simple strategies, such as mean pooling or 1D temporal convolution across the temporal dimension, or temporal attention among class tokens. X-CLIP Ni et al. (2022) and CLIP-ViP Xue et al. (2022) introduce extra tokens for cross-frame communication. Alternatively, some studies Lin et al. (2022); Tu et al. (2023) engineer tailored modules. In our work, we aim to elucidate the distinctions and intrinsic correlations between space and time, as well as design more general spatiotemporal modeling approaches. Regarding issue 2), some studies Hendricks & Nematzadeh (2021); Thrush et al. (2022) indicate that VLMs tend to focus on the correspondence between visual objects and nouns rather than action behaviors and verbs. We consider the essential gap to be that visual objects are static, presented in lower dimensions, and nouns are concrete and easily understandable, whereas action behaviors are dynamic, presented in higher dimensions, and verbs are intricate and abstract. Several existing methods Ni et al. (2022); Lin et al. (2022); Rasheed et al. (2023); Tu et al. (2023) use verbs or phrases as direct text descriptions. ActionCLIP Wang et al. (2021) integrates prompt templates to expand verbs or phrases into sentences. Ju et al. (2022) propose trainable continuous prompts to construct virtual prompt templates. However, these methods do not fundamentally address the aforementioned issues. Alternatively, VFC Momeni et al. (2023) and MAXI Lin et al. (2023) consider leveraging large language models (LLMs) to provide positive and negative text samples for contrastive learning or multiple-instance learning, while ASU Chen et al. (2023) presents the concept of semantic units to supplement the semantic information of action labels. To address these issues, we propose a **C**ontrastive **L**anguage-**A**ction **V**ideo Learn**er** (**CLAVER**, Fig. 1) to efficiently adapt CLIP for video action recognition. Specifically, for issue 1), we first obtain the frame-level visual representation from the image encoder, then apply a tailored Kronecker mask for temporal modeling with a wider temporal receptive field to establish long-range and wide-range dependencies among frames, while mitigating spatiotemporal homogenization. Additionally, we reveal the intrinsic correlations between space and time from the perspective of Kronecker mask attention. Regarding issue 2), we leverage LLMs to effectively generate diverse, sentence-level, and semantically rich interpretations of actions, augmenting text descriptions during training and testing. This approach allows the text descriptions to be presented in a more flexible, sentence-level form during inference.
In summary, our main contributions are four-fold:

- We propose the **C**ontrastive **L**anguage-**A**ction **V**ideo Learn**er** (**CLAVER**) to adapt both the visual and textual branches, efficiently shifting the alignment in CLIP from visual objects and nouns to action behaviors and verbs.
- We propose the Kronecker mask temporal attention and Kronecker mask causal temporal attention for temporal modeling, aiming to capture the long-range and wide-range dependencies among frames with spatiotemporal heterogeneity.
- We introduce interpretive prompts of actions to facilitate the alignment of action behaviors and verbs, thereby improving zero-shot and few-shot generalization capabilities.
- Extensive qualitative and quantitative experiments demonstrate the effectiveness of CLAVER. Our method achieves superior or competitive performance on Kinetics-400 and Kinetics-600 under the fully-supervised scenario, and on HMDB-51 and UCF-101 under zero-shot and few-shot scenarios.

2 RELATED WORK

**Video Recognition.** Among early video recognition methods, 3D convolution is widely employed Qiu et al. (2017); Tran et al. (2015; 2018); Xie et al. (2018); Feichtenhofer et al. (2019); Feichtenhofer (2020); Yu et al. (2021). Some studies Qiu et al. (2017); Tran et al. (2018); Xie et al. (2018) propose to factorize convolutional operations across spatial and temporal dimensions, while others design specific temporal modules to embed into 2D CNNs Li et al. (2020b); Lin et al. (2019); Liu et al. (2021). Over the past few years, there has been an influx of transformer-based video works Arnab et al. (2021); Neimark et al. (2021); Bertasius et al. (2021); Fan et al. (2021); Liu et al. (2022); Yan et al. (2022); Li et al. (2022a), demonstrating promising performance. For example, some methods Arnab et al. (2021); Neimark et al. (2021); Girdhar & Grauman (2021) adopt a factorized encoder structure for spatial-temporal fusion. Alternatively, another family employs a factorized attention structure, such as TimeSformer Bertasius et al. (2021), ViViT Arnab et al. (2021), and ATA Zhao et al. (2022), which proposes alignment-guided temporal attention, while Video Swin Transformer Liu et al. (2022) employs 3D window attention. The representative attention calculation forms in these works can be summarized into joint attention and factorized (divided) spatiotemporal attention. In this paper, to make minimal modifications to the original structure, we adopt a factorized encoder structure for the video stream.

**Visual-language Representation Learning.** Visual-textual multi-modality has been a hot topic in recent years. Several studies based on masked image modeling (MIM) have achieved commendable performance Li et al. (2020a); Lu et al. (2019); Su et al. (2019); Tan & Bansal (2019). There are also efforts focused on video-language representation learning Miech et al. (2019); Sun et al. (2019a;b); Zhu & Yang (2020); Xu et al. (2021). Concurrently, contrastive language-image pretraining Radford et al. (2021); Jia et al. (2021); Yuan et al. (2021) has achieved remarkable progress, particularly in demonstrating impressive zero-shot generalization capacities. CLIP Radford et al. (2021) is one of the most representative works, and numerous follow-up studies have explored adapting it for downstream tasks, e.g., object detection, semantic segmentation, video retrieval, and captioning Gu et al. (2021); Vinker et al. (2022); Li et al. (2022b); Luo et al. (2022); Xu et al. (2022).
Additionally, there are many applications in video action recognition Wang et al. (2021); Ni et al. (2022); Lin et al. (2022); Ju et al. (2022); Pan et al. (2022); Rasheed et al. (2023); Chen et al. (2023); Tu et al. (2023); Lin et al. (2023); Momeni et al. (2023); Yang et al. (2023). For instance, ViFiCLIP Rasheed et al. (2023) aims to minimize modifications to original models and facilitate efficient transfer, while Ju et al. (2022) suggest optimizing a few prompt vectors for adapting CLIP to various video understanding tasks. X-CLIP Ni et al. (2022) proposes an efficient cross-frame attention module. ILA Tu et al. (2023) designs implicit mask-based alignment to align features of two adjacent frames, and EVL Lin et al. (2022) proposes an image-encoder and video-decoder structure. Regarding the language branch, most previous works directly use verbs or phrases, which lack rich semantics and overlook the importance of semantics. With the advancement of large language models like GPT-3 Brown et al. (2020), PaLM Chowdhery et al. (2023), the LLaMAs Touvron et al. (2023a;b); Abhimanyu Dubey et al. (2024), and ChatGPT OpenAI (2024), LLMs can replace manual labor Chen & Huang (2021); Qian et al. (2022) and automatically generate texts that meet human expectations to benefit visual-textual learning. For example, LaCLIP Fan et al. (2024) employs LLMs to rewrite the text descriptions associated with each image for text augmentation. VFC Momeni et al. (2023) and MAXI Lin et al. (2023) leverage LLMs to generate diverse positive and negative texts for language-video learning.

3 METHODOLOGY

In Sec. 3.1, we overview our proposed contrastive language-action video learner architecture. Then, we elaborate on the details of the Kronecker mask attention in Sec. 3.2. Finally, we present the technical details of the action interpretive prompt in Sec. 3.3.

3.1 OVERVIEW

Our contrastive language-action video learner architecture is illustrated in Fig. 1. We utilize a video encoder to obtain video representations, comprising two transformer-based components: an image encoder (ViT) from CLIP and a Kronecker mask temporal transformer. The text encoder aims to align text representations with the video representations.

Figure 2: (**Left**) Red indicates the currently focal patch; green patches are visible in spatial attention, orange patches are visible in temporal attention, and purple patches are visible in joint attention. Panels (a)-(d) illustrate joint attention, spatial & Kronecker mask temporal attention, spatial & pipeline temporal attention, and spatial & alignment-guided temporal attention. (**Right**) Kronecker mask attention: several attentions can be seen as employing tailored Kronecker masks for joint attention.

Concretely, given a video clip $V = [v_0, \cdots, v_t, \cdots, v_{T-1}] \in \mathbb{R}^{T \times H \times W \times 3}$, $v_t \in \mathbb{R}^{H \times W \times 3}$, and corresponding text descriptions $C = [c_0, \cdots, c_m, \cdots, c_{M-1}] \in \mathbb{R}^{M \times N}$, $c_m \in \mathbb{R}^{N}$, where $T, H, W$ are the number of frames, height, and width, respectively, $M$ is the number of diverse text descriptions (sharing the same central concept) for an action category, and $N$ is the max sequence length.
We feed texts $C$ into the text encoder $f_{\theta_C}(\cdot)$ to obtain text representations $\mathbf{C} = [\mathbf{c}_0, \cdots, \mathbf{c}_{M-1}]$. For the video stream, we first input the video clip $V$ to the image encoder $f_{\theta_I}(\cdot)$ to obtain frame-level representations $\mathbf{I}_t$:

$$\mathbf{I}_t = f_{\theta_I}(\mathrm{PE}(v_t) + \mathbf{e}^{pos}), \quad \mathbf{C} = f_{\theta_C}(C), \tag{1}$$

where $\mathrm{PE}(\cdot)$ is the patch embedding. Each frame is split into $L = \frac{H}{P} \times \frac{W}{P}$ patches, where $L$ is the number of patches and $P$ is the patch size; $\mathbf{e}^{pos}$ is the absolute positional embedding, $\mathrm{PE}(v_t) + \mathbf{e}^{pos} = [v_{t,0} + \mathbf{e}^{pos}_0, \mathrm{PE}(v_{t,1}) + \mathbf{e}^{pos}_1, \cdots, \mathrm{PE}(v_{t,L}) + \mathbf{e}^{pos}_L] = [\mathbf{z}_{t,0}, \cdots, \mathbf{z}_{t,L}]$, where $v_{t,0}$ is the class token, and $\mathbf{I}_t = f_{\theta_I}([\mathbf{z}_{t,l}]_{l \in L+1}) = [\mathbf{I}_{t,0}, \cdots, \mathbf{I}_{t,L}]$. Then, we add the absolute temporal embedding $\mathbf{e}^{tem}$ to $\mathbf{I}_t, t \in T$, and feed them into the Kronecker mask temporal transformer $f_{\theta_V}(\cdot)$. Finally, by selecting the class token from each frame and averaging them, we obtain a video representation $\mathbf{v}$ with the same dimension as $\mathbf{c}_m, m \in M$:

$$\mathbf{V} = f_{\theta_V}([\mathbf{I}_t]_{t \in T} + \mathbf{e}^{tem}), \quad \mathbf{v} = \mathrm{Avg}([\mathbf{V}_{t,0}]_{t \in T}), \tag{2}$$

where $\mathbf{V} = f_{\theta_V}([\mathbf{I}_t + \mathbf{e}^{tem}_t]_{t \in T}) = [[\mathbf{V}_{0,0}, \cdots, \mathbf{V}_{0,L}], \cdots, [\mathbf{V}_{t,0}, \cdots, \mathbf{V}_{t,L}], \cdots, [\mathbf{V}_{T-1,0}, \cdots, \mathbf{V}_{T-1,L}]]$ and $\mathrm{Avg}(\cdot)$ is the average pooling function. Our optimization goal is to maximize the cosine similarity between the video representation $\mathbf{v}$ and its corresponding text representations $\mathbf{c}_m \in \mathbf{C}$:

$$\mathrm{sim}(\mathbf{v}, \mathbf{c}_m) = \frac{\langle \mathbf{v}, \mathbf{c}_m \rangle}{\|\mathbf{v}\| \cdot \|\mathbf{c}_m\|}. \tag{3}$$

3.2 KRONECKER MASK ATTENTION

For an image $\in \mathbb{R}^{H \times W \times 3}$, it is first split into patches and then flattened into a token sequence after patch embedding. The resulting feature shape is $(L, D)$, $L = \frac{H}{P} \times \frac{W}{P}$, where $D$ is the hidden dimension. This process is a standard operation in ViT Dosovitskiy et al. (2020), denoted as Spatial Attention (SA):

$$\mathbf{Z}_{(L,D)} = \mathrm{Softmax}(\mathbf{Q}_{(L,D)} \mathbf{K}^{T}_{(L,D)} / \sqrt{D}) \mathbf{V}_{(L,D)}, \tag{4}$$

where $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ represent the query, key, and value matrices, respectively. For a video $\in \mathbb{R}^{T \times H \times W \times 3}$, where $T$ is the number of frames, previous studies Arnab et al. (2021); Bertasius et al. (2021); Neimark et al. (2021); Guo et al. (2021); Tong et al. (2022); Feichtenhofer et al. (2022) typically employ either joint attention Bertasius et al. (2021); Feichtenhofer et al. (2022) or factorized (divided) attention Bertasius et al. (2021); Arnab et al. (2021).
Joint attention flattens a video into a longer token sequence, resulting in a feature shape of $(T \times L, D)$. This token interaction mode is illustrated in Fig. 2 **Left** (a). Joint attention encounters a spatiotemporal homogenization issue: randomly shuffling tokens does not affect (or only slightly affects) the final pooled result. In contrast, factorized attention factorizes joint attention into spatial attention and temporal attention to avoid spatiotemporal homogenization. These methods utilize the same spatial attention as Eqn. 4, while the temporal attention varies. Two common temporal attentions are pipeline temporal attention Arnab et al. (2021); Bertasius et al. (2021) and class-token-only temporal attention Arnab et al. (2021); Ni et al. (2022); Wang et al. (2021), with feature shapes $(L, T, D)$ and $(T, D)$, respectively. Pipeline temporal attention has a limited temporal receptive field, restricted to a fixed time pipeline (tube), hence the term "pipeline temporal attention", as shown in Fig. 2 **Left** (c). It has a limited capacity to capture dynamic information, since objects of interest do not always appear in the same 2D location across frames. Although ATA Zhao et al. (2022) utilizes alignment techniques to bend the time pipeline to capture dynamic objects, it still has a limited temporal receptive field (Fig. 2 **Left** (d)). Similarly, class-token-only temporal attention retains only the class-token time pipeline and discards the others, so it faces both a limited receptive field and the loss of potentially valuable information. It is worth noting that in pipeline temporal attention, mean pooling is necessary across all tokens; otherwise, it is equivalent to class-token-only temporal attention. To address the aforementioned drawbacks, we propose Kronecker Mask Temporal Attention (KMTA). Specifically, we allow each patch (token) at timestamp $t$ to interact with all other patches (tokens), excluding those sharing the same timestamp $t$, as illustrated in Fig. 2 **Left** (b). Compared to pipeline temporal attention, KMTA expands the temporal receptive field of each token. KMTA can be achieved through joint attention incorporating a Kronecker mask, as shown in Fig. 2 **Right** (bottom left). Additionally, KMTA alleviates the impact of spatiotemporal homogenization due to the presence of the Kronecker mask. The trick for obtaining the Kronecker mask is the Kronecker product $\otimes$:

$$\mathbf{A}_{m \times n} \otimes \mathbf{B}_{p \times q} = \begin{bmatrix} a_{11}\mathbf{B} & a_{12}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ a_{21}\mathbf{B} & a_{22}\mathbf{B} & \cdots & a_{2n}\mathbf{B} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}\mathbf{B} & a_{m2}\mathbf{B} & \cdots & a_{mn}\mathbf{B} \end{bmatrix}_{mp \times nq} \tag{5}$$

Eqn. 5 is the definition of $\otimes$, where $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{p \times q}$; hence the name Kronecker mask. Kronecker Mask Temporal Attention (KMTA) can be formulated as:

$$\mathbf{M}_{(T \times L, T \times L)} = [\mathbf{I}_{(T,T)} \otimes (\mathbf{J}_{(L,L)} - \mathbf{I}_{(L,L)})]_{1 \Rightarrow -\mathrm{inf}}, \tag{6}$$

$$\mathbf{Z}_{(T \times L, D)} = \mathrm{Softmax}(\mathbf{Q}_{(T \times L, D)} \mathbf{K}^{T}_{(T \times L, D)} / \sqrt{D} + \mathbf{M}_{(T \times L, T \times L)}) \mathbf{V}_{(T \times L, D)}, \tag{7}$$

where $\mathbf{I}_{(-,-)}$ is an identity matrix, $\mathbf{J}_{(-,-)}$ is an all-ones matrix, $[\,\cdot\,]_{1 \Rightarrow -\mathrm{inf}}$ means replacing the 1s in the matrix with negative infinity ($-\mathrm{inf}$), and $\mathbf{M}_{(T \times L, T \times L)}$ is the Kronecker mask. A transformer equipped with KMTA is referred to as a Kronecker Mask Temporal (KMT) transformer.
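The masks in Eqn. (6) (and the causal variant in Eqn. (8) below) are straightforward to construct with torch.kron. The following minimal sketch, with an illustrative function name, builds the additive attention mask; it is a reading of the equations, not the authors' released code.

```python
import torch

def kronecker_temporal_mask(T: int, L: int, causal: bool = False) -> torch.Tensor:
    """Additive attention mask per Eqns. (6)/(8): positions equal to 1 in
    the Kronecker-product pattern are replaced with -inf. KMTA blocks
    same-timestamp interactions (except each token's attention to itself);
    the causal variant (KMCTA) additionally blocks attention to future frames."""
    I_T, J_L, I_L = torch.eye(T), torch.ones(L, L), torch.eye(L)
    pattern = torch.kron(I_T, J_L - I_L)        # same frame, different patch
    if causal:
        U = torch.triu(torch.ones(T, T))        # upper-triangular ones (incl. diagonal)
        pattern = pattern + torch.kron(U - torch.eye(T), torch.ones(L, L))
    mask = torch.zeros(T * L, T * L)
    mask[pattern == 1] = float("-inf")
    return mask  # add to QK^T / sqrt(D) before the softmax, as in Eqn. (7)
```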
In fact, both the spatial and temporal attention mentioned above can be derived by combining a tailored Kronecker mask with joint attention. Therefore, we collectively refer to them as Kronecker Mask Attention, as depicted in Fig. 2 (**Right**). The Kronecker mask serves as a prior spatiotemporal heterogeneity inductive bias: spatial attention allows intra-frame interactions but blocks inter-frame interactions, while Kronecker mask temporal attention allows inter-frame interactions but blocks intra-frame interactions, exhibiting a spatiotemporal structural complementarity.

**Kronecker Mask Causal Temporal Attention.** Moreover, we design another type of Kronecker mask for temporal modeling, known as Kronecker mask causal temporal attention (KMCTA), which aims to alleviate the low-rank bottleneck, as shown in Fig. 3.

Figure 3: Kronecker mask causal temporal attention.

The mask $\tilde{\mathbf{M}}_{(T \times L, T \times L)}$ of KMCTA can be formulated as:

$$\tilde{\mathbf{M}}_{(T \times L, T \times L)} = [\mathbf{I}_{(T,T)} \otimes (\mathbf{J}_{(L,L)} - \mathbf{I}_{(L,L)}) + (\mathbf{U}_{(T,T)} - \mathbf{I}_{(T,T)}) \otimes \mathbf{J}_{(L,L)}]_{1 \Rightarrow -\mathrm{inf}}, \tag{8}$$

where $\mathbf{U}_{(-,-)}$ is an upper triangular matrix whose upper-triangle elements are all 1. A transformer equipped with KMCTA is referred to as a KMCT transformer. KMCTA ensures the causality
Idea Generation Category: Conceptual Integration (class 0)
ID: RUF7j1cJzK
# SCALING OFFLINE MODEL-BASED RL VIA JOINTLY OPTIMIZED WORLD-ACTION MODEL PRETRAINING

**Jie Cheng**¹,², **Ruixi Qiao**¹,², **Yingwei Ma**³, **Binhua Li**³, **Gang Xiong**¹,², **Qinghai Miao**², **Yongbin Li**³∗, **Yisheng Lv**¹,²∗
¹State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences
²School of Artificial Intelligence, University of Chinese Academy of Sciences
³Alibaba Group

ABSTRACT

A significant aspiration of offline reinforcement learning (RL) is to develop a generalist agent with high capabilities from large and heterogeneous datasets. However, prior approaches that scale offline RL either rely heavily on expert trajectories or struggle to generalize to diverse unseen tasks. Inspired by the excellent generalization of world models in conditional video generation, we explore the potential of image observation-based world models for scaling offline RL and enhancing generalization on novel tasks. In this paper, we introduce JOWA: a **J**ointly-**O**ptimized **W**orld-**A**ction model, an offline model-based RL agent pretrained on multiple Atari games with 6 billion tokens of data to learn general-purpose representation and decision-making ability. Our method jointly optimizes a world-action model through a shared transformer backbone, which stabilizes temporal difference learning with large models during pretraining. Moreover, we propose a provably efficient and parallelizable planning algorithm to compensate for the Q-value estimation error and thus search out better policies. Experimental results indicate that our largest agent, with 150 million parameters, achieves 78.9% human-level performance on pretrained games using only 10% subsampled offline data, outperforming existing state-of-the-art large-scale offline RL baselines by 71.4% on average. Furthermore, JOWA scales favorably with model capacity and can sample-efficiently transfer to novel games using only 5k offline fine-tuning data points (approximately 4 trajectories) per game, demonstrating superior generalization. The code and checkpoints will be released at https://github.com/CJReinforce/JOWA.

1 INTRODUCTION

In recent years, building large-scale generalist models capable of solving multiple tasks has become a dominant research focus in natural language processing (NLP) and multi-modality (Yue et al., 2024). Their success is largely driven by the scaling law (Kaplan et al., 2020), which posits that increasing model size and data leads to improved performance. However, similar scaling trends have not been extensively observed in reinforcement learning (RL). Unlike in vision and language domains, RL has traditionally favored smaller models tailored to single tasks or multiple tasks within the same environment. Concerningly, previous studies have shown that scaling model capacity can lead to instabilities or performance degradation (Kumar et al., 2020a; Ota et al., 2021; Sokar et al., 2023). While some efforts have been made to scale offline RL across multiple tasks (Lee et al., 2022; Xu et al., 2022), they predominantly rely on supervised learning (SL) approaches, such as conditional behavior cloning, rather than temporal difference (TD) learning, and depend heavily on large amounts of expert trajectories. Kumar et al.
(2023) scaled offline Q-learning using a ResNet-based representation network with separate Q-heads for each game, but this approach only learns generalizable representations and still requires substantial data and gradient steps to adapt to unseen games due to the reset of Q-heads during fine-tuning.

∗ Equal corresponding author

Therefore, _scaling TD-based offline RL for simultaneous **general-purpose representation and decision-making** remains a critical challenge._ While Hansen et al. (2024) attempted to address this challenge for continuous control tasks with low-dimensional proprioceptive states using model-based RL, their approach lacks generalization due to heterogeneous proprioceptors across task domains. Meanwhile, the world model has decoupled from model-based RL and evolved into a distinct research area within computer vision and multi-modality, primarily focusing on conditional video generation (Hong et al., 2022; Blattmann et al., 2023; Yu et al., 2023; Yang et al., 2024; Bruce et al., 2024). Notably, SORA (Brooks et al., 2024) has demonstrated superior generalization performance as a world simulator through large-scale training of generative models on time-series image data. This motivates our investigation into a compelling question: "_Can an image observation-based world model scale offline RL across multiple tasks while enhancing generalization to diverse unseen tasks?_" To address this question, we introduce JOWA: a **J**ointly-**O**ptimized **W**orld-**A**ction model, an offline model-based RL agent pretrained across multiple visual games with approximately 6 billion tokens of data. Crucially, JOWA unlocks scaling trends and achieves sample-efficient adaptation to novel games. By utilizing a shared transformer backbone for both the world model and Q-value criticism, JOWA learns both generalizable representations and decision-making skills. This architecture allows the transformer to absorb gradients back-propagated from both the world modeling and TD losses, enabling joint optimization. The world modeling loss acts as a regularizer, stabilizing TD-learning for large models. Additionally, we propose a provably efficient and parallelizable planning algorithm to compensate for the Q-value estimation error, allowing for consistent identification of the optimal policy at inference time and sample-efficient transfer to novel games. To evaluate the performance of JOWA, we train a single model to play 15 Atari games, similar to Lee et al. (2022) but using a reduced yet sufficient dataset of 10M transitions per game, termed the low-data regime to highlight data efficiency. This setup presents a significant challenge due to the unique dynamics, visuals, and agent embodiments of the games. To further test JOWA's generalization, we perform offline fine-tuning on 5 unseen games, using minimal fine-tuning data. Our contributions are threefold: First, we introduce JOWA, an offline model-based RL method capable of training a single high-performing generalist agent across multiple Atari games. JOWA attains 78.9% human-level performance on pretrained games using only 10% of the original dataset (Agarwal et al., 2020), outperforming existing state-of-the-art large-scale offline RL baselines by 71.4% on average. Second, we demonstrate that JOWA unlocks scaling trends, with performance improving as model capacity increases. Third, JOWA enables sample-efficient transfer to diverse unseen games with a 64.7% DQN-normalized score using only 5k transitions per game, surpassing baselines by 69.9% on average.
Our ablation studies highlight the significance of two key design features of JOWA: joint optimization and planning, along with other training choices. We release all the training, evaluation, and fine-tuning code, as well as model weights, to support future research.

2 RELATED WORK

**Offline Reinforcement Learning.** Offline RL algorithms learn a policy entirely from a static offline dataset without online interactions. Model-free offline RL incorporates conservatism to mitigate extrapolation error (Jin et al., 2021), primarily through policy constraints (Fujimoto et al., 2019; Kumar et al., 2019; Fujimoto & Gu, 2021; Cheng et al., 2024; Liu et al., 2024; 2025) and value regularization (Kumar et al., 2020b). Model-based offline RL approximates the environment using world models and performs conservative policy optimization (Yu et al., 2020; 2021). While these works focus on single-task settings, our work explores scaling offline model-based RL across diverse, challenging multi-task Atari games (Lee et al., 2022; Kumar et al., 2023; Wu et al., 2024), aiming for sample-efficient transfer to novel games.

**Multi-Task Reinforcement Learning.** Multi-task reinforcement learning (MTRL) aims to learn a shared policy for diverse tasks. A common approach is to formulate the multi-task model as task-conditioned, as in language-conditioned tasks (Ahn et al., 2022; Jang et al., 2022) and goal-conditioned RL (Plappert et al., 2018). In multi-task offline RL, conditional sequence modeling approaches based on the decision transformer (Chen et al., 2021) or diffusion models (Janner et al., 2022) typically rely on large amounts of expert trajectories (Xu et al., 2022; Lee et al., 2022; Wu et al., 2024; Hu et al., 2024; He et al., 2024). Beyond that, FICC (Ye et al., 2022) pretrains multi-task representation and dynamics models with action-free videos and then fine-tunes a model-based agent on each task for fast adaptation. Scaled-QL (Kumar et al., 2023) scales offline Q-learning using a shared feature network across tasks with separate Q-value heads for each task. Our work advances offline TD learning to multi-task settings without task-specific Q-value heads through a jointly-optimized world-action model. Table 1 compares the experimental environments and open-source status of multi-task offline RL algorithms. Following Lee et al. (2022); Kumar et al. (2023); Wu et al. (2024), we focus on the multi-game regime, which presents greater challenges due to high-dimensional observations and diverse, stochastic environment dynamics.

Table 1: Comparison of methods in multi-task offline RL. ♣ and ♠ represent two training paradigms of agents, conditional BC and TD-learning, respectively. The last three columns indicate open-sourced training code, evaluation code, and checkpoints.

| Method | Benchmark | Observation space | Action space | Dynamics | Train | Eval | Checkpoints |
|---|---|---|---|---|---|---|---|
| HarmoDT♣ (Hu et al., 2024) | Meta-World or DMControl | state | continuous | deterministic; same or similar across tasks | ✓ | ✓ | ✗ |
| TD-MPC2♠ (Hansen et al., 2024) | Meta-World or DMControl | state | continuous | deterministic; same or similar across tasks | ✓ | ✓ | ✓ |
| FICC♠ (Ye et al., 2022)¹ | Atari 2600 | pixels | discrete | stochastic; diverse across tasks | ✓ | ✗ | ✗ |
| MGDT♣ (Lee et al., 2022) | Atari 2600 | pixels | discrete | stochastic; diverse across tasks | ✗ | ✓ | ✓ |
| Elastic DT♣ (Wu et al., 2024) | Atari 2600 | pixels | discrete | stochastic; diverse across tasks | ✓ | ✓ | ✗ |
| Scaled-QL♠ (Kumar et al., 2023)² | Atari 2600 | pixels | discrete | stochastic; diverse across tasks | ✓ | ✗ | ✗ |
| JOWA (ours)♠ | Atari 2600 | pixels | discrete | stochastic; diverse across tasks | ✓ | ✓ | ✓ |

3 PRELIMINARIES AND PROBLEM SETUP

3.1 ONLINE DISTRIBUTIONAL RL (C51)

In distributional RL, the distribution $Z$ over returns replaces the Q-value in the Bellman optimality equation.
The Q-value is the mean of the value distribution $Z$, which can be computed through a distributional Bellman optimality operator (Bellemare et al., 2017):

$$\mathcal{T}^* Z(s, a) \overset{D}{:=} R(s, a) + \gamma Z\big(s',\, \arg\max_{a'} Q(s', a')\big) \qquad (1)$$

where $Y \overset{D}{:=} U$ denotes equality of probability laws, that is, the random variable $Y$ is distributed according to the same law as $U$. The C51 algorithm (Bellemare et al., 2017) models $Z(s, a)$ using a discrete distribution supported on $N$ fixed atoms $z_1 \le \cdots \le z_N$ uniformly spaced over a predetermined interval. Given a current value distribution, C51 applies a projection step to map the target distribution onto its finite support and optimizes as follows:

$$\mathcal{L}_{\mathrm{TD}} = D_{\mathrm{KL}}\big(\mathcal{T}^* Z_{\theta^-}(s, a)\, \|\, Z_\theta(s, a)\big) \qquad (2)$$

3.2 VALUE REGULARIZATION BASED OFFLINE RL (CQL)

To be conservative on unseen actions, CQL (Kumar et al., 2020b) introduces a regularizer to the TD-loss, which minimizes Q-values for unseen actions while maximizing Q-values for actions in the dataset to counteract excessive underestimation. The loss function for CQL is given by:

$$\mathcal{L}_{\mathrm{CQL}} = \alpha \Big( \mathbb{E}_{s \sim \mathcal{D}} \big[ \log \textstyle\sum_{a'} \exp\big(Q_\theta(s, a')\big) \big] - \mathbb{E}_{s, a \sim \mathcal{D}} \big[ Q_\theta(s, a) \big] \Big) + \mathcal{L}_{\mathrm{TD}} \qquad (3)$$

3.3 PROBLEM SETUP

We consider a multi-task offline RL problem: given a static dataset of transitions $\mathcal{D} = \{(s_t, a_t, r_t, d_t, s_{t+1})_i\}$ collected from various environments with arbitrary behaviour policies, our goal is to learn a single policy that maximizes the expected return $R_t = \sum_{k \ge t} \gamma^{k-t} r_k$ on all considered environments and can be efficiently fine-tuned to new tasks, where $\gamma$ is the discount factor. Considering the computational budget, we use 15 Atari games for pretraining and 5 games for out-of-distribution (OOD) experiments. The whole training process took around 12 days on A100 GPUs.

¹ FICC only open-sourced the world model pretraining code without the agent fine-tuning code.
² Scaled-QL released preliminary code which is not runnable out of the box.
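To ground Equations (1)-(3), the following is a minimal PyTorch sketch of the CQL regularizer on top of a categorical (C51) TD loss. It assumes the target distribution has already been projected onto the $N$ fixed atoms; all tensor shapes and the function name `cql_c51_loss` are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def cql_c51_loss(logits, target_probs, actions, q_values, alpha=0.1):
    """logits: (B, A, N) return-distribution logits for every action.
    target_probs: (B, N) projected target distribution T*Z^-(s, a), Eq. (1),
        assumed already mapped onto the N fixed atoms (no gradient).
    actions: (B,) dataset actions; q_values: (B, A) with Q = sum_i z_i p_i.
    """
    B = logits.shape[0]
    # Distributional TD loss, Eq. (2): KL(T*Z^- || Z_theta) reduces to
    # cross-entropy because the target distribution is held fixed.
    log_p = F.log_softmax(logits[torch.arange(B), actions], dim=-1)  # (B, N)
    td_loss = -(target_probs * log_p).sum(-1).mean()
    # CQL regularizer, Eq. (3): push down the log-sum-exp over all actions,
    # push up the Q-value of the action actually taken in the dataset.
    lse = torch.logsumexp(q_values, dim=-1)        # (B,)
    data_q = q_values[torch.arange(B), actions]    # (B,)
    return alpha * (lse - data_q).mean() + td_loss
```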
Figure 1: Architecture of JOWA. We use a shared transformer backbone for both world modeling and Q-value criticism to enable joint optimization. VQ-VAE tokenizes images into visual tokens. The sum of vocabulary embeddings, position embeddings, and task embeddings forms the input embedding space for the transformer backbone.

Our offline dataset is derived from the DQN-Replay dataset (Agarwal et al., 2020), which consists of 84×84 grayscale images as observations and a full action space with 18 discrete actions.

4 JOINTLY-OPTIMIZED WORLD-ACTION MODEL

In this section, we first detail the architecture of JOWA and the loss functions for joint optimization in Section 4.1. Next, we introduce the provably efficient and parallelizable planning algorithm employed to compensate for Q-value estimation error in Section 4.2. Finally, we present the overall training pipeline in Section 4.3.

4.1 WORLD-ACTION MODEL

4.1.1 ARCHITECTURE

Figure 1 illustrates JOWA's architecture, which uses a transformer backbone (Vaswani, 2017) to simultaneously learn world dynamics and Q-values across environments. This dual capability is achieved through distinct prediction heads that process the transformer's output embedding $e$. The world dynamics are modeled via supervised learning using three heads: a next-observation token predictor $p_o$, a reward predictor $p_r$, and a termination predictor $p_d$. The Q-value head $h_Q$ learns the Q-function through TD-learning, based on implicit representations of historical trajectories. In the following sections, we refer to the {transformer, $p_o$, $p_r$, $p_d$} components as the "world-part" with parameters $\theta$, and the {transformer, $h_Q$} components as the "action-part" with parameters $\phi$.

We use VQ-VAE, a discrete autoencoder, to tokenize image observations, representing high-dimensional images as a sequence of $K$ tokens. The VQ-VAE is trained with an equally weighted combination of $L_1$ reconstruction loss, commitment loss (Van Den Oord et al., 2017), and perceptual loss (Esser et al., 2021). The transformer then operates on the interleaved observation and action tokens $(z_0^1, \ldots, z_0^K, a_0, \ldots, z_L^1, \ldots, z_L^K, a_L)$, where $L$ is the maximum number of timesteps of the trajectory segments. For multi-task learning, we incorporate learnable task embeddings for both observation and action tokens.

4.1.2 TRAINING OF WORLD-PART MODULE

At each timestep $t$, the world-part module models the following distributions:

$$\text{Dynamics predictor:}\quad \hat{z}_t^k \sim p_o\big(\hat{z}_t^k \mid f(z_{\le t-1}, z_t^{<k}, a_{\le t-1}, u)\big) \qquad (4)$$
$$\text{Reward predictor:}\quad \hat{r}_t \sim p_r\big(\hat{r}_t \mid f(z_{\le t}, a_{\le t}, u)\big) \qquad (5)$$
$$\text{Termination predictor:}\quad \hat{d}_t \sim p_d\big(\hat{d}_t \mid f(z_{\le t}, a_{\le t}, u)\big) \qquad (6)$$

where $f$ represents the transformer backbone, $u$ is the task ID used to index the corresponding task embedding, $k \in \{1, \cdots, K\}$, and $t \in \{1, \cdots, L\}$. To unify the category of loss functions for balanced training (Vandenhende et al., 2021), we convert scalar rewards to ternary quantities $\{-1, 0, 1\}$ using the sign function. This allows all three predictors to be optimized as classification problems with cross-entropy loss. Given $L$-timestep segments sampled from the offline dataset, the loss function for the world-part module is formulated as:

$$\mathcal{L}_{\mathrm{world}}(\theta) = \frac{1}{L} \sum_{t=1}^{L} \bigg[ \frac{1}{K} \sum_{k=1}^{K} -\ln p_o\big(z_t^k \mid f(z_{\le t-1}, z_t^{<k}, a_{\le t-1}, u)\big) - \ln p_r\big(r_t \mid f(z_{\le t}, a_{\le t}, u)\big) - \ln p_d\big(d_t \mid f(z_{\le t}, a_{\le t}, u)\big) \bigg] \qquad (7)$$
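A compact sketch of Equation (7): three cross-entropy terms over observation tokens, sign-ternarized rewards, and termination flags. Tensor shapes and all argument names are assumptions for illustration; the default `mean` reduction of `cross_entropy` supplies the $1/L$ and $1/K$ averaging.

```python
import torch.nn.functional as F

def world_loss(obs_logits, obs_tokens, rew_logits, rewards, done_logits, dones):
    """obs_logits: (B, L, K, V) next-token logits from p_o over vocab size V.
    obs_tokens: (B, L, K) ground-truth VQ-VAE token ids (long).
    rew_logits: (B, L, 3) logits from p_r over ternary rewards {-1, 0, 1}.
    rewards:    (B, L) raw scalar rewards, sign-converted to class ids below.
    done_logits: (B, L, 2); dones: (B, L) binary termination flags.
    """
    token_ce = F.cross_entropy(obs_logits.flatten(0, 2), obs_tokens.flatten())
    rew_cls = rewards.sign().long() + 1            # {-1, 0, 1} -> {0, 1, 2}
    rew_ce = F.cross_entropy(rew_logits.flatten(0, 1), rew_cls.flatten())
    done_ce = F.cross_entropy(done_logits.flatten(0, 1), dones.long().flatten())
    return token_ce + rew_ce + done_ce
```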
4.1.3 TRAINING OF ACTION-PART MODULE

We use CQL for offline TD-learning during pretraining. To ensure training stability and enhance scaling performance (Farebrother et al., 2024), we employ the distributional TD-error (Bellemare et al., 2017) instead of the mean-squared TD-error, maintaining consistency with the $\mathcal{L}_{\mathrm{world}}$ loss category. The action-part module computes the return distribution for all actions given an observation and historical information. The value function $Q(o_t, a_t)$ is the mean of $Z(o_t, a_t)$. Then the loss function for the action-part module is formulated as:

$$\mathcal{L}_{\mathrm{action}}(\phi) = \alpha \Big[ \log \Big( \textstyle\sum_a \exp\big(Q(o_t, a)\big) \Big) - Q(o_t, a_t) \Big] + D_{\mathrm{KL}}\big(\mathcal{T}^* Z^-(o_t, a_t)\, \|\, Z(o_t, a_t)\big) \qquad (8)$$

where $\mathcal{T}^*$ is the distributional Bellman optimality operator defined in Equation (1), and $Z^-$ is the target distribution computed through a target Q-value head $h_{Q^-}$. We set $\alpha = 0.1$ in experiments. Therefore, the joint optimization objective for the world-action model is formulated as:

$$\mathcal{L}(\theta, \phi) = \beta \mathcal{L}_{\mathrm{world}}(\theta) + \mathcal{L}_{\mathrm{action}}(\phi) \qquad (9)$$

with coefficient $\beta > 0$. We set $\beta = 0.1$ in our experiments. Both $\mathcal{L}_{\mathrm{world}}$ and $\mathcal{L}_{\mathrm{action}}$ back-propagate gradients to the transformer. Previous works observed that TD methods suffer from greater instability with larger model sizes (Kumar et al., 2020a; Sokar et al., 2023). However, through jointly optimizing the world-action model, $\mathcal{L}_{\mathrm{world}}$ serves as a regularizer that stabilizes TD-learning in large-scale models. Moreover, the world-part module enables planning at decision time for optimal inference and sample-efficient transfer, detailed in the following sections.

4.2 PARALLELIZABLE PLANNING AT INFERENCE TIME

The world-part module enables planning at decision time to compensate for inaccurate Q-value estimates, allowing JOWA to consistently search out the optimal policy. We model this process as a tree-search problem and present a practical, parallelizable search algorithm. Given an estimated optimal Q-value $\hat{Q}^*$ and a learned world model with dynamics predictor $\hat{P}$ and reward predictor $\hat{r}$, our objective is to find the optimal action $a_0^*$ maximizing the ground-truth optimal Q-value $Q^*$, starting from state $s_0$. To do so, we rewrite the Bellman optimality equation as:

$$Q^*(s_0, a_0) = \max_{\pi_{Q^*}} \; \mathbb{E}_{\substack{a_1, \cdots, a_{H-1} \sim \pi_{Q^*} \\ s_1, \cdots, s_H \sim P}} \bigg[ \sum_{t=0}^{H-1} \gamma^t r(s_t, a_t) + \gamma^H \max_{a_H} Q^*(s_H, a_H) \bigg] \qquad (10)$$

where $\pi_{Q^*}$ is the policy induced by the optimal Q-function. For Equation (10), the optimal policy is the greedy policy based on $Q^*$. The proof is provided in Appendix A. To derive the search objective function, we follow three steps: (i) replace the ground-truth functions on the right side of Equation (10) with estimated or learned functions; (ii) leverage the estimated optimal Q-function $\hat{Q}^*$ to reduce the policy space for search, restricting actions to those with the top-$K$ highest Q-values, and denote the constrained policy space as $\Pi_{\hat{Q}^*}$, where $\forall \pi \in \Pi_{\hat{Q}^*}, \forall s \in \mathcal{S}, \forall a \notin \text{top-}K(\hat{Q}^*(s, \cdot))$, we have $\pi(a \mid s) = 0$; (iii) maximize over $a_0$ on both sides of Equation (10).
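The following sketch illustrates the idea behind this decision-time search: expand only the top-$K$ first actions under $\hat{Q}^*$, imagine rollouts with the learned world model, and score leaves with the truncated return of Equation (10). For brevity it uses a sequential greedy rollout per candidate, whereas the paper's algorithm searches a top-$K$ tree in parallel; `world_model.step` and `q_net` are assumed interfaces, not the paper's API.

```python
import torch

@torch.no_grad()
def plan(q_net, world_model, state, k=3, horizon=2, gamma=0.99):
    # Candidate first actions: top-K under the estimated optimal Q-value.
    q0 = q_net(state)                        # (A,) estimated Q-hat*(s0, .)
    best_a, best_val = None, -float("inf")
    for a0 in q0.topk(k).indices:
        s, a, ret, disc = state, a0, 0.0, 1.0
        for _ in range(horizon):             # imagined rollout in the world model
            s, r = world_model.step(s, a)    # assumed: next state and reward
            ret += disc * r
            disc *= gamma
            a = q_net(s).argmax()            # greedy continuation inside the rollout
        val = ret + disc * q_net(s).max()    # bootstrap with Q-hat* at depth H
        if val > best_val:
            best_a, best_val = a0, val
    return best_a
```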
Idea Generation Category:
0Conceptual Integration
T1OvCSFaum
# LIGHTWEIGHT PREDICTIVE 3D GAUSSIAN SPLATS

**Junli Cao**¹,² **Vidit Goel**² **Chaoyang Wang**² **Anil Kag**² **Ju Hu**² **Sergei Korolev**² **Chenfanfu Jiang**¹ **Sergey Tulyakov**² **Jian Ren**²
1 University of California, Los Angeles 2 Snap, Inc.

Figure 1 (teaser): the Garden scene reconstructed by 3DGS (5,641,234 points, PSNR 27.25), Compact GS (2,215,928 points, PSNR 26.81), Light GS (1,990,120 points, PSNR 26.73), and Ours (1,205,064 points, PSNR 27.63).

While rendering such representations is indeed extremely efficient, storing and transmitting them is often prohibitively expensive. To represent large-scale scenes, one often needs to store millions of 3D Gaussians, which can occupy up to gigabytes of storage. This creates a significant practical barrier, preventing widespread adoption on resource-constrained devices. In this work, we propose a new _representation_ that dramatically reduces the hard-drive footprint while featuring similar or improved quality compared to standard 3D Gaussian splats. This representation leverages the inherent feature sharing among splats in close proximity using a hierarchical tree structure, with which only the _parent_ splats need to be stored. We present a method for constructing tree structures from naturally _unstructured_ point clouds. Additionally, we propose _adaptive tree manipulation_ to prune redundant trees in space while spawning new ones from significant _children_ splats during the optimization process. On the benchmark datasets, we achieve a 20× storage reduction in hard-drive footprint with improved fidelity compared to vanilla 3DGS and a 2×-5× reduction compared to existing compact solutions. More importantly, we demonstrate the practical application of our method in real-world rendering on _mobile devices_ and _AR glasses_ [in our Webpage.](https://anonymous0submissions.github.io/LPGS/)

1 INTRODUCTION

Gaussian Splatting (3DGS)-based methods are taking the graphics and vision communities by storm (Luiten et al., 2023; Wu et al., 2023; Yang et al., 2023). They strike the right balance between high-fidelity rendering, fast convergence, and efficient inference (Kerbl et al., 2023). The latter two benefits make 3DGS-based methods superior to Neural Radiance Fields (NeRF)-based techniques (Mildenhall et al., 2020; Martin-Brualla et al., 2021; Barron et al., 2022b). Indeed, while NeRFs (Barron et al., 2022a) show high-fidelity renderings too, apart from several exceptions (Cao et al., 2023; Wang et al., 2022; Chen et al., 2023b; Müller et al., 2022a), their training and inference time is often prohibitive for real-time and edge-based applications. 3DGS-based approaches represent a 3D scene using an explicit, point-based representation (Aliev et al., 2020). The 3D Gaussians are efficiently rasterized to 2D images, with much faster rendering than neural volumetric rendering approaches (Kerbl et al., 2023). However, to represent sophisticated geometry and texture, especially for large-scale scenes, a significant number of splats along with their attributes need to be stored, which can amount to gigabytes of storage. In a world of connected devices, real-time experiences, and applications, this storage requirement imposes a heavy toll on the hard drive and the transmission bandwidth.
Hence, several initial solutions have been proposed to reduce the storage for 3DGS, such as incorporating a sparse voxel grid (Lu et al., 2023) or applying more aggressive pruning of the 3D points (Fan et al., 2024; Lee et al., 2024). Yet, existing studies still suffer either from large storage requirements (Lu et al., 2023) or inferior rendering quality compared to 3DGS (Fan et al., 2024; Lee et al., 2024). In this work, we present a lightweight hierarchical Gaussian splats representation that takes advantage of the spatial relationships among unstructured and isolated splats, offering improved rendering quality while significantly reducing storage requirements. Intuitively, splats in close proximity exhibit similar geometry and texture. Therefore, we leverage feature sharing among nearby splats and propose structuring them into a hierarchical tree, where the _parent_ splats are employed to neural-predict splats that share similar features. We call these neural-predicted splats the _children_ splats. Note that _children_ splats do not have to be stored and can be neural-predicted on-the-fly instead. We use a hash-grid (Müller et al., 2022b) to encode the offsets used to estimate the 3D locations of _children_ splats. In addition, within the same hash grid, we first query the features of both the _parent_ and _children_ splats and apply an attention-based mechanism to attend to them. This attention is crucial for facilitating feature sharing within the tree. The attended features are then input into a shallow MLP to predict the Gaussian attributes. We opt for the hash-grid due to its ability to facilitate feature sharing in close proximity by interpolating spatially adjacent feature vectors. Our representation is _independent_ of grid-based structures; any representation that encourages feature sharing can be utilized (_e.g_., K-planes (Fridovich-Keil et al., 2023)). To build such tree structures, we first allow every point obtained from SfM to be considered a _parent_ splat and used to predict its _children_ splats. Since the splats in our representation are structured and treated as a cohesive unit, we further introduce the _Adaptive Tree Manipulation_ (ATM) module to manage the trees during the optimization process. Specifically, we do not impose a limit on the depth of the tree. This means that a _children_ splat can serve as a _parent_ in the next optimization iteration and have its own _children_ splats if it is deemed significant. Additionally, insignificant _parent_ splats are pruned along with their insignificant _children_. Note that significant _children_ are promoted to _parents_ regardless of the significance of their _parent_. For instance, an insignificant _parent_ may be removed in the next optimization iteration, but it can still have significant _children_ that are promoted to _parents_. This flexible tree manipulation enables certain areas with complex geometry to include more splats for more accurate modeling. Fig. 1 shows the Garden scene (Barron et al., 2022a) reconstructed by the standard Gaussian Splats (Kerbl et al., 2023), Compact GS (Lee et al., 2024), Light GS (Fan et al., 2024), and the proposed approach. First, we observe a significantly reduced density of points in the point cloud reconstructed by our approach. This, together with predicting the attributes instead of storing them, significantly reduces the storage requirement of our method.
Second, we show improved PSNR scores and visual quality when zooming into the details of the rendered images. We summarize our contributions as follows:
1. We propose a hierarchical tree structure to model the inherent spatial relationships among splats and an attention mechanism to enhance the relationships within the hierarchy.
2. We propose Adaptive Tree Manipulation in conjunction with the hierarchical representation to effectively refine the tree for improved modeling.
3. Our representation achieves a 20× reduction on average in hard-drive footprint, with improved PSNR and comparable SSIM and LPIPS compared to 3DGS, and a 2×-5× storage reduction compared to existing works. Additionally, we showcase practical real-world rendering applications of our method on mobile devices and AR glasses.

2 RELATED WORK

**Novel View Synthesis.** Research on rendering scenes from unseen viewpoints with photorealism has evolved over several decades (Greene, 1986; Chen & Williams, 2023; Levoy & Hanrahan, 2023; Buehler et al., 2023; Srinivasan et al., 2019). Traditional approaches typically rely on explicit depth estimation to warp pixels for generating novel views (Kalantari et al., 2016; Penner & Zhang, 2017; Choi et al., 2019; Riegler & Koltun, 2021). However, the accuracy of depth-estimation algorithms is critical, and handling disocclusions during rendering adds complexity. An alternative approach involves Multiplane Images (MPI) (Zhou et al., 2018; Srinivasan et al., 2019; Flynn et al., 2019), which learn a representation associating objects within the scene with fronto-parallel layers. This structured representation facilitates efficient rendering from different viewpoints while preserving depth relationships and occlusions. More recently, Neural Radiance Fields (NeRF) (Mildenhall et al., 2020) have gained popularity for their ability to achieve highly realistic rendering, even in scenarios involving complex view-dependent lighting effects such as transparency and reflectance. However, the weakness of NeRF lies in its volumetric rendering formulation, which necessitates sampling a large number of points per ray to render a single pixel. This high computational cost limits the usage of NeRF for real-time or on-device applications. While efforts to reduce the computational requirements of volumetric rendering have been a focus of recent research (Liu et al., 2020; Neff et al., 2021; Garbin et al., 2021; Reiser et al., 2021; Lindell et al., 2021; Yu et al., 2021; Müller et al., 2022a; Fridovich-Keil et al., 2022; Lombardi et al., 2021; Cao et al., 2023; Gupta et al., 2024), point-based rendering, particularly 3D Gaussian Splatting (3DGS) (Kerbl et al., 2023), presents another promising direction for real-time view synthesis.

**Efficient Representation for 3D Gaussian Splatting.** Despite the benefits of 3DGS (Kerbl et al., 2023), the disadvantage of bulky storage is noteworthy. As a result, several approaches (Fan et al., 2024; Lee et al., 2024; Lu et al., 2023; Girish et al., 2023; Niedermayr et al., 2024; Morgenstern et al., 2023; Navaneet et al.) have been proposed for compressing 3DGS. Several compression techniques have been explored, such as pruning redundant Gaussians (Fan et al., 2024; Lee et al., 2024) and utilizing codebooks (Fan et al., 2024; Lee et al., 2024; Niedermayr et al., 2024; Navaneet et al.).
LightGS (Fan et al., 2024) introduces a point pruning and recovery process to minimize redundancy in Gaussian splats, utilizes distillation and pseudo-view augmentation to distill spherical harmonics to a lower degree, and employs quantization to further reduce storage. While LightGS achieves considerable storage reduction, it results in noticeable fidelity degradation compared to the original Gaussian splatting due to quantization. CompactGS (Lee et al., 2024) proposes using a grid-based neural field to implicitly represent view-dependent colors rather than explicitly storing spherical harmonics per point, offering promising storage efficiency without significant fidelity loss. Eagles (Girish et al., 2023) utilizes quantized embeddings for the per-point attributes and a pruning strategy to remove redundant Gaussians, leading to lower storage memory. ScaffoldGS (Lu et al., 2023) exploits spatial feature sharing by distributing local splats using anchor points, reparameterizing splat positions relative to these anchors to enable anchor-based point growing and pruning strategies for redundancy reduction in 3DGS. Our method shares similarities with CompactGS (Lee et al., 2024) and ScaffoldGS (Lu et al., 2023) while exhibiting crucial _differences_. _First_, unlike CompactGS, we utilize a combination of neural fields and self-attention layers to predict not only view-dependent colors but also geometric properties. _Second_, in contrast to both approaches, which explicitly store the position of every splat in the point cloud, our method only stores a small subset of splats, referred to as _parents_, while predicting the remaining points on-the-fly during rendering. This substantially reduces the memory footprint. _Third_, an anchor-based representation essentially creates a tree structure but restricts the depth of the tree to one, and its growth strategy focuses solely on the anchor points, neglecting growth directly from the splats. In contrast, our hierarchical tree representation combined with the proposed ATM takes into account the significance of both _parent_ and _children_ splats, allowing for a strategy of sub-tree expansion.

3 PRELIMINARIES

3D Gaussian Splatting (3DGS) (Kerbl et al., 2023) represents a scene with 3D points $\mathbf{x}$. The points are initialized with a coarse point cloud obtained using Structure-from-Motion (SfM) (Schonberger & Frahm, 2016). These Gaussians, $G(\mathbf{x})$, serve as anisotropic volumetric splats defined by their position (mean $\mu$) and 3D covariance ($\Sigma$) as $G(\mathbf{x}) = e^{-\frac{1}{2}(\mathbf{x}-\mu)^T \Sigma^{-1} (\mathbf{x}-\mu)}$.

Figure 2: _Top_: Overview of our tree construction pipeline with the initial _parent_ splats derived from SfM. The _children_ splats are inferred **on-the-fly** from the _parent_ splats by querying the displacement hash-grid. To estimate Gaussian attributes such as scale, rotation, color, and opacity, attribute features $f_a$ and $f_{a_k}$, obtained **on-the-fly** from the attributes hash-grid, are aggregated with self-attention. _Bottom_: Tree manipulation through ATM. The significant _children_ splats are promoted to _parents_ (regardless of the status of their _parent_, _e.g_., pruned) such that they have their own _children_ in the next iteration. Bad trees (_e.g_., with a transparent _parent_) are removed together with the insignificant _children_.
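To make the ATM rules in the Figure 2 caption concrete, here is an illustrative sketch: children with a large positional gradient are promoted to parents regardless of their parent's fate, while low-opacity parents are pruned together with their insignificant children. The `Node` class, field names, and threshold values are all assumptions (the thresholds echo common 3DGS densification defaults), not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    opacity: float
    pos_grad: float                      # accumulated positional-gradient magnitude
    children: List["Node"] = field(default_factory=list)

def adaptive_tree_manipulation(parents, grad_tau=0.0002, opacity_eps=0.005):
    next_parents = []
    for p in parents:
        for c in p.children:
            if c.pos_grad > grad_tau:    # significant child: promoted to parent,
                next_parents.append(c)   # independent of the parent's fate
        if p.opacity >= opacity_eps:     # transparent parent: pruned along with
            next_parents.append(p)       # its insignificant children
    return next_parents                  # survivors spawn children next iteration
```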
To ensure $\Sigma$ remains positive semi-definite during optimization, it is represented with an equivalent yet effective formulation using the scaling matrix $S$ and the rotation matrix $R$, such that $\Sigma = RSS^T R^T$. The attributes of the 3D splats (_e.g_., location, covariance, and opacity), together with the directional appearance of the radiance field, represented via spherical harmonics (SH) (Sara Fridovich-Keil and Alex Yu et al., 2022), are learned end-to-end using optimization. To render an image, the 3D $G(\mathbf{x})$ are first transformed into 2D Gaussians (denoted as $G'(\mathbf{x})$) (Zwicker et al., 2001). 3DGS uses an efficient tile-based rasterizer that presorts primitives for the entire image, allowing fast $\alpha$-blending of anisotropic splats. The color $C$ of a pixel is computed by blending the $N$ 2D Gaussians that overlap at the pixel as

$$C = \sum_{i \in N} c_i \alpha_i G'_i(\mathbf{x}) \prod_{j=1}^{i-1} \big(1 - \alpha_j G'_j(\mathbf{x})\big),$$

where $c_i$ represents the view-dependent color of each splat and $\alpha_i$ is its opacity. With the rasterizer highly optimized for modern GPUs, 3DGS renders high-fidelity scenes in real time across many platforms. These benefits come at a cost: 3DGS requires a significant number of 3D Gaussians, sometimes needing gigabytes for complex large-scale scenes. This requirement limits its application on edge devices, as downloading gigabytes over the network and storing them is hardly feasible or practical.
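As a sanity check, the per-pixel blending formula above can be transcribed directly; this naive version is for intuition only (the real renderer is a tiled, presorted rasterizer), and the variable names are illustrative.

```python
import torch

def blend_pixel(colors, alphas, gauss):
    """colors: (N, 3); alphas: (N,); gauss: (N,) values of G'_i(x) at one pixel,
    all sorted front-to-back."""
    w = alphas * gauss                               # effective opacity per splat
    # Transmittance before splat i: prod_{j<i} (1 - alpha_j G'_j(x)).
    trans = torch.cumprod(1.0 - w, dim=0)
    trans = torch.cat([torch.ones(1), trans[:-1]])   # shift so splat 0 sees T = 1
    return (trans * w).unsqueeze(-1).mul(colors).sum(0)   # final RGB, shape (3,)
```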
4 METHOD

We show a high-level overview of our approach in Fig. 2. Our primary motivation is to use a hierarchical representation (_i.e_., a tree) to model the spatial relationships among the splats. We show that the locations of _children_ splats and their associated attributes (position, color, scale, _etc_.) can be derived from the _parent_ using a small neural network. This allows us to store only the _parent_ splats along with the weights of the neural network. To achieve this, we initially represent a 3D scene as a forest of depth-1 tree structures, where the _parent_ splats are initialized from SfM (Schönberger & Frahm, 2016) and the _children_ are neural-predicted on-the-fly. The trees are then refined and expanded into sub-trees during the optimization process using _Adaptive Tree Manipulation_. Formally, we represent a scene using $S = \{X_1, X_2, \ldots, X_n\}$, where $X_i$ is a tree and each node contains attributes such as position ($x$), color ($c$), opacity ($\alpha$), scale ($s$), and rotation ($r$). This representation can be stored very efficiently: for each tree we need to save only the positions and scales of the _parent_ splats, plus a small neural network shared across the trees, to predict all the other attributes of the tree.

4.1 NEURAL REPRESENTATION FOR LIGHTWEIGHT PREDICTIVE SPLATS

We model the close relationship between parent and children nodes. Specifically, we assume that the children nodes are in the vicinity of the parent node and have similar geometric and appearance attributes such as shape, color, and opacity. We satisfy these requirements by using a hash-grid-based approach (Müller et al., 2022b; Chen et al., 2023a) as our representation, which has the inherent property of returning similar features when queried with nearby points via feature interpolation. Below we describe how a tree ($X_i$) can be represented in a storage-efficient manner. In what follows we drop the index $i$ and use the notation _node_ to refer to a splat in the context of a _tree_. For a hash-grid $H(\cdot)$ shared across the trees and a _parent_ node position $x_p$, we query the features as $f = H(x_p)$ and use them to predict the displacements of the _children_ and the attributes of the tree. We divide $f$ into two halves, $f \equiv \{f_\Delta \in \mathbb{R}^{D/2}, f_a \in \mathbb{R}^{D/2}\}$, where the first half ($f_\Delta$) represents displacement and is used to predict the positions of the children; the second half ($f_a$) is used to predict the other attributes.

**Predicting Position.** We want _children_ and _parent_ nodes to represent similar geometry and appearance. Hence, _children_ should be located in the vicinity of their _parent_ nodes. We model the position of _children_ as their displacement from their _parent_ nodes. For the _parent_, we predict the position of the $k$-th child using $x_k = x_p + g_{\mathrm{pos}}(f_\Delta)[k]$, where $g_{\mathrm{pos}}$ is an MLP with output shape $K \times 3$. Having the positions of all nodes in the tree, we can predict the rest of the attributes, such as scale, rotation, color, and opacity. We reuse the hash-grid to get the attribute feature ($f_{a_k}$) for the $k$-th child node using $H(x_k)$.

A naive approach to extract the remaining attributes using $f_a$ and $f_{a_k}$ is to pass the latter to an MLP to get scale, rotation, color, and opacity. We found such an approach to be suboptimal. A hash-grid representation implicitly makes the representations of spatially close points similar, but there is no mechanism to share information between the features after they are computed. Since there is a relation between the physical attributes of the _parent_ and _children_ nodes, having such an information-sharing mechanism is beneficial. To this end, we propose a modified self-attention mechanism to better capture the inter-dependencies between _children_ and _parent_ nodes. To do so, we first obtain the aggregated feature $F_a \in \mathbb{R}^{(K+1) \times D/2}$ by concatenating the features of all the nodes in the tree, i.e., $F_a = \mathrm{Concat}(\{f_a, f_{a_1}, \ldots, f_{a_K}\})$, where Concat is a concatenation operation. We then apply a modified self-attention operation on $F_a$ to get the final feature $F'_a$:

$$F'_a = F_a + \lambda\, \sigma\!\left(\frac{P_1(F_a) * P_2(F_a)^T}{\sqrt{d}}\right) * F_a, \qquad (1)$$

where $\sigma(\cdot)$ is the Softmax function, $P_i(\cdot)$ is a projection matrix, $d$ is a scaling factor set to $D/2$, $\lambda$ is a hyper-parameter balancing the information trade-off from the attention mechanism, and $*$ denotes matrix multiplication. Unlike vanilla attention (Vaswani et al., 2017), we do not apply positional embedding, so that Eq. 1 is permutation invariant, an important property to maintain while working with point clouds (Qi et al., 2016). Further, we use the unprojected $F_a$ when multiplying with $\sigma(\cdot)$, since we empirically found no performance gain from projecting $F_a$. Next, we split $F'_a$ into $K+1$ attribute feature vectors to predict the remaining attributes for each node in the tree.
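A minimal sketch of the modified self-attention in Eq. (1): no positional embeddings (preserving permutation invariance) and the unprojected $F_a$ on the value side. The module name, dimensions, and the default $\lambda = 0.1$ are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class TreeAttention(nn.Module):
    def __init__(self, dim, lam=0.1):
        super().__init__()
        self.p1 = nn.Linear(dim, dim, bias=False)   # projection P_1
        self.p2 = nn.Linear(dim, dim, bias=False)   # projection P_2
        self.lam = lam                              # lambda in Eq. (1)
        self.scale = math.sqrt(dim)                 # d = D/2

    def forward(self, fa):
        """fa: (K+1, dim), the parent feature followed by K children features."""
        attn = torch.softmax(self.p1(fa) @ self.p2(fa).T / self.scale, dim=-1)
        # Residual update; the value side stays unprojected, as in the text.
        return fa + self.lam * attn @ fa
```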
**Predicting Scale and Rotation.** It is vital to properly initialize the scale of Gaussians for stable training. For instance, Gaussians with small scales make minimal contributions to the rendering quality, mainly because of their limited volume. In contrast, large Gaussians can potentially contribute to every pixel during rasterization, consuming a significant amount of GPU memory. Hence, to make training stable and minimize storage needs at the same time, we adopt a middle-ground strategy. More specifically, we represent the scales of _children_ as a scaled version of their _parents_' ($s_p$): $s_k = \hat{s}_k s_p$, where $\hat{s}_k$ is predicted by an MLP. In the case of rotation, we directly regress it for both _parent_ and _children_ nodes using the corresponding attribute feature vector. We share the weights of the MLP used to regress both scale and rotation. We experimentally found that including the position of the node ($x_k$) and the distance of the point to the center of the axis-aligned bounding box ($b_k$), along with the attribute feature ($f'_{a_k}$), improves performance: $\hat{s}_k, r_k = g_{rs}(f'_{a_k}, x_k, b_k)$.

**Predicting Color and Opacity.** 3DGS uses degree-3 spherical harmonics (SH) for view-dependent color representation (Kerbl et al., 2023). However, we find such a design unnecessary: the color can be directly predicted from the feature vectors and a viewing direction. We use an MLP that takes them as input and directly predicts the color as output, $c_k = g_c(f'_{a_k}, d_k)$, where $d_k$ is the viewing direction of the node in the tree. This reduces storage by a significant amount, as previously each splat stored its spherical harmonics individually. To predict opacity, we use another MLP with $f'_{a_k}$ and the position of the node as inputs to get the corresponding opacity, $o_k = g_o(f'_{a_k}, x_k)$.

We described all the operations above for a single tree; the same operations extend to all the trees. Further, the neural networks for all the operations share their weights across all the trees. To summarize, the proposed representation efficiently represents the tree structure with hash-grid-based neural representations $H(\cdot)$ and a few MLPs. We only store the positions and scales of the parent nodes and the weights of our neural networks, while the rest of the properties are regressed as described above.
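Putting the attribute heads together, the following sketch mirrors $g_{rs}$, $g_c$, and $g_o$ as described above: children scales as multipliers of the parent scale, color conditioned on the viewing direction, and opacity on the node position. Single linear layers stand in for the MLPs, and all dimensions, activations, and the exponential scale parameterization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AttributeHeads(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.g_rs = nn.Linear(feat_dim + 3 + 1, 3 + 4)  # scale multiplier + quaternion
        self.g_c = nn.Linear(feat_dim + 3, 3)           # RGB from feature + view dir
        self.g_o = nn.Linear(feat_dim + 3, 1)           # opacity from feature + position

    def forward(self, f_a, x, b, d, parent_scale):
        """f_a: (feat_dim,) attended feature f'_{a_k}; x: (3,) node position;
        b: (1,) distance to the bbox center; d: (3,) viewing direction."""
        sr = self.g_rs(torch.cat([f_a, x, b], dim=-1))
        scale = torch.exp(sr[..., :3]) * parent_scale          # s_k = s_hat_k * s_p
        rot = nn.functional.normalize(sr[..., 3:], dim=-1)     # unit quaternion
        color = torch.sigmoid(self.g_c(torch.cat([f_a, d], dim=-1)))
        opacity = torch.sigmoid(self.g_o(torch.cat([f_a, x], dim=-1)))
        return scale, rot, color, opacity
```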
Idea Generation Category:
2Direct Enhancement
PbheqxnO1e