---
license: gemma
license_name: license
license_link: LICENSE
base_model:
- google/gemma-2-2b
pipeline_tag: translation
library_name: transformers
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
---
## Model Description

GemmaX2-28-2B-Pretrain is a language model developed by continually pretraining Gemma2-2B on a mix of 56 billion tokens of monolingual and parallel data covering 28 languages. Please find more details in our paper: [Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study](https://arxiv.org/pdf/2502.02481).

- **Developed by:** Xiaomi
- **Model type:** GemmaX2-28-2B-Pretrain is obtained by continually pretraining Gemma2-2B on a large amount of monolingual and parallel data. GemmaX2-28-2B-v0.1 is subsequently derived from it through supervised finetuning on a small set of high-quality translation instruction data.
- **Languages:** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
- **GitHub:** Please find more details in our [GitHub repository](https://github.com/xiaomi-research/gemmax).

**Note that GemmaX2-28-2B-Pretrain is NOT a translation model.**
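Since the card lists `library_name: transformers`, the checkpoint should load as an ordinary causal LM. The snippet below is a minimal sketch: the hub repository id is an assumption (check this model page for the exact id), and the prompt is just an arbitrary continuation example. For translation, use the finetuned GemmaX2-28-2B-v0.1 instead.

```python
# Minimal sketch: load the checkpoint as a plain causal LM with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelSpace/GemmaX2-28-2B-Pretrain"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A pretrained base model continues text; it does not follow
# translation instructions (use GemmaX2-28-2B-v0.1 for that).
inputs = tokenizer("机器翻译的历史可以追溯到", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```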
## Training Data

We collect monolingual data from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400). For parallel data, we collect all Chinese-centric and English-centric parallel datasets from the [OPUS](https://opus.nlpl.eu/) collection up to August 2024 and apply a series of filtering steps, including language identification, semantic deduplication, and quality filtering.
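The filtering pipeline itself is not released, so as a rough illustration only, here is a minimal sketch of what language-identification and length-ratio checks of this kind might look like. The fastText LID model, the thresholds, and the helper `keep_pair` are assumptions for illustration, not details from the paper; semantic deduplication and the remaining quality filters are omitted.

```python
# Illustrative sketch only: NOT the pipeline used for GemmaX2 training data.
# Assumes the pretrained fastText language-ID model lid.176.bin is available.
import fasttext

lid = fasttext.load_model("lid.176.bin")  # path/model are assumptions

def keep_pair(src: str, tgt: str, src_lang: str, tgt_lang: str) -> bool:
    """Keep a sentence pair only if both sides pass basic checks."""
    if not src.strip() or not tgt.strip():
        return False
    # Language identification: each side must match its declared language.
    (src_pred,), _ = lid.predict(src.replace("\n", " "))
    (tgt_pred,), _ = lid.predict(tgt.replace("\n", " "))
    if src_pred != f"__label__{src_lang}" or tgt_pred != f"__label__{tgt_lang}":
        return False
    # Crude quality heuristic: drop pairs with extreme length mismatch.
    # (Semantic deduplication and further quality filters would follow here.)
    ratio = len(src) / max(len(tgt), 1)
    return 1 / 3 <= ratio <= 3
```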
## Citation

```bibtex
@misc{cui2025multilingualmachinetranslationopen,
      title={Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study},
      author={Menglong Cui and Pengzhi Gao and Wei Liu and Jian Luan and Bin Wang},
      year={2025},
      eprint={2502.02481},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02481},
}
```