# Infinity-Parser-7B
<div align="left">
💻 <a href="https://github.com/infly-ai/INF-MLLM/tree/main/Infinity-Parser">GitHub</a> |
📊 <a href="https://huggingface.co/datasets/infly/Infinity-Doc-55K">Dataset</a> |
📄 <a href="https://arxiv.org/pdf/2506.03197">Paper</a> |
🚀 <a href="https://huggingface.co/spaces/infly/Infinity-Parser-Demo">Demo</a>
</div>
# Introduction
We develop Infinity-Parser, an end-to-end scanned-document parsing model trained with reinforcement learning. By incorporating verifiable rewards based on layout and content, Infinity-Parser preserves the original document's structure and content with high fidelity. Extensive evaluations on benchmarks including OmniDocBench, olmOCR-Bench, PubTabNet, and FinTabNet show that Infinity-Parser consistently achieves state-of-the-art performance across a broad range of document types, languages, and structural complexities. It substantially outperforms both specialized document parsing systems and general-purpose vision-language models, while preserving the model's general multimodal understanding capability.
## Key Features
- LayoutRL Framework: a reinforcement learning framework that explicitly trains models to be layout-aware through verifiable multi-aspect rewards combining edit distance, paragraph accuracy, and reading-order preservation (a minimal sketch of such a reward follows this list).
- Infinity-Doc-400K Dataset: a large-scale dataset of 400K scanned documents that integrates high-quality synthetic data with diverse real-world samples, featuring rich layout variations and comprehensive structural annotations.
- Infinity-Parser Model: a VLM-based parser that achieves new state-of-the-art performance on OCR, table and formula extraction, and reading-order detection benchmarks in both English and Chinese, while maintaining nearly the same general multimodal understanding capability as the base model.
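As a concrete illustration of the multi-aspect reward, here is a minimal, hypothetical sketch combining the three signals named above: a normalized edit distance over the full output, a paragraph-count term, and a reading-order term. The equal weighting and the exact formulas are assumptions for illustration; the reward actually used in training is defined in the paper.

```python
import re

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def layout_reward(pred_md: str, gold_md: str) -> float:
    """Combine content fidelity, paragraph accuracy, and reading order
    into one scalar in [0, 1]. Weights and formulas are illustrative only."""
    # 1. Content: normalized edit distance over the full Markdown string.
    dist = edit_distance(pred_md, gold_md)
    content = 1.0 - dist / max(len(pred_md), len(gold_md), 1)

    # 2. Paragraphs: penalize mismatch in paragraph count.
    p_pred = [p for p in re.split(r"\n\s*\n", pred_md) if p.strip()]
    p_gold = [p for p in re.split(r"\n\s*\n", gold_md) if p.strip()]
    paragraph = max(0.0, 1.0 - abs(len(p_pred) - len(p_gold)) / max(len(p_gold), 1))

    # 3. Reading order: gold paragraphs found verbatim in the prediction
    #    should appear in non-decreasing position.
    found = [pos for pos in (pred_md.find(p) for p in p_gold) if pos >= 0]
    pairs = list(zip(found, found[1:]))
    order = sum(a <= b for a, b in pairs) / len(pairs) if pairs else 1.0

    return (content + paragraph + order) / 3.0
```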
# Architecture
Overview of Infinity-Parser training framework. Our model is optimized via reinforcement finetuning with edit distance, layout, and order-based rewards.

# Performance
## olmOCR-bench

## OmniDocBench

## Table Recognition

# Quick Start
## vLLM Inference
We recommend using the vLLM backend for accelerated inference.
It supports image and PDF inputs, automatically parses the document content, and exports the results in Markdown format to a specified directory.
Before starting, make sure that **PyTorch** is correctly installed according to the official installation guide at [https://pytorch.org/](https://pytorch.org/).
```shell
pip install .
parser --model /path/model --input dir/PDF/Image --output output_folders --batch_size 128 --tp 1
```
Adjust the tensor-parallelism (`tp`) value (1, 2, or 4) and the batch size according to the number of GPUs and the available memory.
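For example, a minimal sketch that suggests a `tp` value from the visible GPU count; the preference order is an assumption for illustration, not project guidance:

```python
# Hypothetical helper: suggest a --tp value from the visible GPU count.
# The mapping below is an assumption; tune tp and batch size to your memory.
import torch

def suggest_tp() -> int:
    n_gpus = torch.cuda.device_count()
    for tp in (4, 2, 1):  # tensor-parallel sizes the parser CLI accepts
        if n_gpus >= tp:
            return tp
    return 1

print(f"Suggested --tp value: {suggest_tp()}")
```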
<details>
<summary>Result folder contents</summary>
The result folder contains the following contents:
```
output_folders/
├── <file_name>/output.md
├── ...
└── ...
```
</details>
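Given that layout, a minimal sketch for gathering the parsed results into memory; the `output_folders` path matches the example command above:

```python
# Collect parsed Markdown, assuming one <file_name>/output.md per input
# document under output_folders/ as shown in the tree above.
from pathlib import Path

def collect_outputs(output_dir: str) -> dict[str, str]:
    """Map each parsed document's name to its Markdown content."""
    return {
        md.parent.name: md.read_text(encoding="utf-8")
        for md in sorted(Path(output_dir).glob("*/output.md"))
    }

outputs = collect_outputs("output_folders")
print(f"Parsed {len(outputs)} document(s)")
```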
## Inference with Transformers
<details>
<summary> Transformers Inference Example </summary>
```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "infly/Infinity-Parser-7B"
prompt = "Please transform the document's contents into Markdown format."

print("Loading model and processor...")
# Default: load the model on the available device(s)
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     model_path, torch_dtype="auto", device_map="auto"
# )

# We recommend enabling flash_attention_2 for better acceleration and memory
# saving, especially in multi-image and video scenarios.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# Default processor
# processor = AutoProcessor.from_pretrained(model_path)
# Recommended processor: bound the visual token budget per image
min_pixels = 256 * 28 * 28   # 448 * 448
max_pixels = 2304 * 28 * 28  # 1344 * 1344
processor = AutoProcessor.from_pretrained(model_path, min_pixels=min_pixels, max_pixels=max_pixels)

print("Preparing messages for inference...")
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://ofasys-multimodal-wlcb-3-toshanghai.oss-accelerate.aliyuncs.com/wpf272043/keepme/image/receipt.png",
            },
            {"type": "text", "text": prompt},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

print("Generating results...")
generated_ids = model.generate(**inputs, max_new_tokens=4096)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
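The example above fetches the image over HTTP; `qwen_vl_utils` also accepts local paths and `file://` URIs, so the same pipeline runs fully offline. A minimal variant of the message payload (the path below is a placeholder):

```python
# Variant payload for a local scan; plain paths and "file://" URIs are both
# accepted by qwen_vl_utils. The path below is a placeholder, not a real file.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/scanned_page.png"},
            {"type": "text", "text": prompt},
        ],
    }
]
```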
# Visualization
## Comparison Examples

# Citation
```bibtex
@misc{wang2025infinityparserlayoutaware,
  title={Infinity Parser: Layout Aware Reinforcement Learning for Scanned Document Parsing},
  author={Baode Wang and Biao Wu and Weizhen Li and Meng Fang and Zuming Huang and Jun Huang and Haozhe Wang and Yanjie Liang and Ling Chen and Wei Chu and Yuan Qi},
  year={2025},
  eprint={2506.03197},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.03197},
}
```
# License
This model is licensed under Apache 2.0.