Improve model card for RobusTok (Image Tokenizer Needs Post-Training)

#4
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +82 -1
README.md CHANGED
@@ -1 +1,82 @@
- Image Tokenizers Needs Post-Training

---
pipeline_tag: unconditional-image-generation
---

# Image Tokenizer Needs Post-Training

This repository contains **RobusTok**, a novel image tokenizer presented in the paper [Image Tokenizer Needs Post-Training](https://huggingface.co/papers/2509.12474).

<div align="center">

[![Project Page](https://img.shields.io/badge/%20project%20page-lightblue)](https://qiuk2.github.io/works/RobusTok/index.html)&nbsp;
[![GitHub Code](https://img.shields.io/badge/%20GitHub%20Code-brightgreen)](https://github.com/qiuk2/RobusTok)&nbsp;
[![🤗 Weights](https://img.shields.io/badge/%F0%9F%A4%97%20Weights-yellow)](https://huggingface.co/qiuk6/RobusTok)&nbsp;

</div>

<div align="center">
<img src="https://github.com/qiuk2/RobusTok/raw/main/assets/teaser.png" alt="Teaser" width="95%">
</div>

---

## About RobusTok

Recent image generative models typically rely on a frozen image tokenizer to capture the image distribution in a latent space. However, a significant discrepancy exists between the reconstruction and generation distributions, because current tokenizers are optimized for the reconstruction task without accounting for the errors that arise during generative sampling.

**RobusTok** addresses this with a novel tokenizer training scheme that comprises both main training and post-training:
* **Main training:** Constructs a robust latent space by simulating sampling noise and unexpected tokens.
* **Post-training:** Further optimizes the tokenizer decoder with respect to a well-trained generative model, mitigating the distribution gap between generated and reconstructed tokens.

This approach significantly enhances the robustness of the tokenizer, boosting generation quality and convergence speed.

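The main-training idea of simulating sampling noise can be illustrated with a minimal, framework-free sketch (the function name, codebook size, and perturbation rate below are illustrative assumptions, not the repository's actual API): a fraction of the latent token indices is randomly replaced before decoding, so the decoder learns to tolerate the "unexpected tokens" a generator may emit.

```python
import random

def perturb_tokens(tokens, codebook_size, p, rng=None):
    """Replace each token index with a random codebook entry with
    probability `p`, mimicking sampling noise at generation time."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [rng.randrange(codebook_size) if rng.random() < p else t
            for t in tokens]

clean = list(range(16))  # toy latent token sequence
noisy = perturb_tokens(clean, codebook_size=1024, p=0.3)
# During main training, the decoder is asked to reconstruct the clean
# image from `noisy`, making the latent space robust to such errors.
```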
## Key Highlights of Post-Training

- 🚀 **Better generative quality**: Achieves notable gFID improvements (e.g., 1.60 → 1.36 with a ~400M-parameter generator).
- 🔑 **Generalizability**: Applicable to both autoregressive and diffusion models.
- ⚡ **Efficiency**: Delivers strong results with relatively small generative models.

## Model Zoo

| Generator \ Tokenizer | RobusTok w/o post-training | RobusTok w/ post-training |
|---|---:|---:|
| Base ([weights](https://huggingface.co/qiuk6/RobusTok/resolve/main/rar_b.bin?download=true)) | gFID = 1.83 | gFID = 1.60 |
| Large ([weights](https://huggingface.co/qiuk6/RobusTok/resolve/main/rar_l.bin?download=true)) | gFID = 1.60 | gFID = 1.36 |

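The checkpoint links in the table are standard Hugging Face Hub download URLs. A small helper (`checkpoint_url` is hypothetical, not part of the repository) shows how they are formed; in practice, `huggingface_hub.hf_hub_download` is the more typical route and handles caching for you.

```python
def checkpoint_url(filename: str, repo_id: str = "qiuk6/RobusTok") -> str:
    """Build the direct-download URL for a checkpoint hosted on the Hub."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}?download=true"

print(checkpoint_url("rar_b.bin"))
# Equivalent, with local caching (requires the `huggingface_hub` package):
#   from huggingface_hub import hf_hub_download
#   ckpt_path = hf_hub_download(repo_id="qiuk6/RobusTok", filename="rar_b.bin")
```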
## Usage

Because RobusTok's tokenizer and generator rely on a specialized training and inference pipeline, detailed usage instructions, installation guides, and code examples are provided in the [official GitHub repository](https://github.com/qiuk2/RobusTok). This includes scripts for:

* Environment setup and package installation.
* Dataset preparation.
* Main training for the tokenizer.
* Generator training.
* Post-training for the tokenizer.
* Inference and evaluation (see [Inference Code](https://github.com/qiuk2/RobusTok#inference-code)).

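The post-training step can be illustrated with a deliberately tiny, framework-free sketch (everything below is a toy stand-in under stated assumptions; the real pipeline optimizes a full image decoder with image-space losses): the generator is frozen, and only the decoder is updated so that decoding *generated* tokens matches the target images.

```python
def decode(tokens, w):
    """Stand-in decoder: a single learnable scale `w`."""
    return [w * t for t in tokens]

def mse(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Frozen generator output (latent tokens) and the pixels they should map to.
generated_tokens = [1.0, 2.0, 3.0]
target_pixels = [2.0, 4.0, 6.0]  # an ideal decoder here would have w = 2

w, lr = 0.5, 0.05
for _ in range(100):
    # d(mse)/dw for the linear stand-in decoder
    grad = sum(2 * (w * t - y) * t
               for t, y in zip(generated_tokens, target_pixels)) / len(generated_tokens)
    w -= lr * grad  # update only the decoder; the generator stays frozen

print(round(w, 3))
```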
## Visualization

<div align="center">
<img src="https://github.com/qiuk2/RobusTok/raw/main/assets/ft-diff.png" alt="vis" width="95%">
<p>
Visualization of 256&times;256 image generation before (top) and after (bottom) post-training. Three improvements are observed: (a) OOD mitigation, (b) color fidelity, and (c) detail refinement.
</p>
</div>

---

## Citation

If our work assists your research, feel free to give us a star ⭐ or cite us using:

```bibtex
@misc{qiu2025imagetokenizerneedsposttraining,
  title={Image Tokenizer Needs Post-Training},
  author={Kai Qiu and Xiang Li and Hao Chen and Jason Kuen and Xiaohao Xu and Jiuxiang Gu and Yinyi Luo and Bhiksha Raj and Zhe Lin and Marios Savvides},
  year={2025},
  eprint={2509.12474},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.12474},
}
```