MacTok: Robust Continuous Tokenization for Image Generation
Abstract
MacTok, a masked augmenting continuous tokenizer, prevents posterior collapse in variational frameworks through image masking and semantic guidance while achieving efficient, high-fidelity visual tokenization with significantly reduced token requirements.
Continuous image tokenizers enable efficient visual generation, and those based on variational frameworks can learn smooth, structured latent representations through KL regularization. Yet this often leads to posterior collapse when using fewer tokens, where the encoder fails to encode informative features into the compressed latent space. To address this, we introduce MacTok, a Masked Augmenting 1D Continuous Tokenizer that leverages image masking and representation alignment to prevent collapse while learning compact and robust representations. MacTok applies both random masking to regularize latent learning and DINO-guided semantic masking to emphasize informative regions in images, forcing the model to encode robust semantics from incomplete visual evidence. Combined with global and local representation alignment, MacTok preserves rich discriminative information in a highly compressed 1D latent space, requiring only 64 or 128 tokens. On ImageNet, MacTok achieves a competitive gFID of 1.44 at 256×256 and a state-of-the-art 1.52 at 512×512 with SiT-XL, while reducing token usage by up to 64×. These results confirm that masking and semantic guidance together prevent posterior collapse and achieve efficient, high-fidelity tokenization.
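To make the random-masking component concrete, the sketch below illustrates one plausible way to drop a random subset of patch embeddings before encoding, which is the general mechanism the abstract describes. This is a minimal illustration, not the paper's implementation; the function name `random_mask_patches`, the zeroing-out strategy, and the mask ratio are all assumptions for illustration.

```python
import numpy as np


def random_mask_patches(patches: np.ndarray, mask_ratio: float,
                        rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Zero out a random subset of patch embeddings (hypothetical sketch).

    patches: (N, D) array of patch embeddings.
    mask_ratio: fraction of patches to mask out.
    Returns (masked_patches, keep) where keep is a boolean array,
    True for patches that were kept visible.
    """
    n = patches.shape[0]
    num_masked = int(n * mask_ratio)
    # Choose which patches to hide uniformly at random.
    perm = rng.permutation(n)
    keep = np.ones(n, dtype=bool)
    keep[perm[:num_masked]] = False
    # Masked patches are zeroed; the encoder must reconstruct
    # semantics from the remaining visible evidence.
    masked = patches * keep[:, None]
    return masked, keep


# Example: 16 patches of dimension 8, mask half of them.
rng = np.random.default_rng(0)
patches = rng.standard_normal((16, 8))
masked, keep = random_mask_patches(patches, mask_ratio=0.5, rng=rng)
```

A DINO-guided variant would replace the uniform permutation with a ranking derived from semantic saliency scores, preferentially masking (or preserving) the most informative regions.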