---
task_categories:
- image-text-to-text
- visual-question-answering
language:
- en
tags:
- visual-question-answering
- multimodal
- reinforcement-learning
- visual-reasoning
- spatial-reasoning
- transit-maps
---
# ReasonMap-Plus Dataset
The **ReasonMap-Plus** dataset is an extended dataset introduced in the paper [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240). It addresses the challenge of sparse rewards in fine-grained visual reasoning for multimodal large language models (MLLMs), particularly in structured, information-rich settings such as transit maps. ReasonMap-Plus provides dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. This repository hosts the `ReasonMap-Plus` data for evaluation; the companion `ReasonMap-Train` data is used for RewardMap training.
- **Paper:** [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240)
- **Project Page:** https://fscdc.github.io/RewardMap
- **Code:** https://github.com/fscdc/RewardMap
## Sample Usage
To get started with the `ReasonMap-Plus` dataset, follow these steps to install dependencies, download the data, and prepare it for training.
### 1. Install dependencies
If you face any issues with the installation, please feel free to open an issue on the GitHub repository.
```bash
pip install -r requirements.txt
```
### 2. Download the dataset
You can download [ReasonMap-Plus](https://huggingface.co/datasets/FSCCS/ReasonMap-Plus) (for evaluation) and [ReasonMap-Train](https://huggingface.co/datasets/FSCCS/ReasonMap-Train) (for RewardMap training) from Hugging Face, or by running:
```bash
python utils/download_dataset.py
```
Then place the downloaded data under the `data` folder.
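Alternatively, you can load the dataset directly with the Hugging Face `datasets` library. The sketch below assumes nothing beyond the repo id `FSCCS/ReasonMap-Plus`; inspect the loaded object to see which splits and columns the dataset actually defines.
```python
# Minimal sketch: load ReasonMap-Plus via the `datasets` library.
# Split and column names are not assumed here; print them to inspect.
from datasets import load_dataset

ds = load_dataset("FSCCS/ReasonMap-Plus")
print(ds)  # shows the available splits and their columns

first_split = next(iter(ds))  # name of the first split
example = ds[first_split][0]  # first record of that split
print({k: type(v).__name__ for k, v in example.items()})
```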
### 3. Prepare data for Supervised Fine-Tuning (SFT)
If you plan to use tools like [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for SFT training, first prepare the datasets by running:
```bash
python utils/prepare_data_for_sft.py --dataset_dir path/to/your_data
```
### 4. Data Format Example
Your data will be converted into a format similar to the following for SFT:
```json
{
  "conversations": [
    {
      "from": "human",
      "value": "<image> Please solve the multiple choice problem and put your answer (one of ABCD) in one \"\\boxed{}\". According to the subway map, how many intermediate stops are there between Danube Station and Ibn Battuta Station (except for these two stops)?\nA) 8\nB) 1\nC) 25\nD) 12\n"
    },
    {
      "from": "gpt",
      "value": "B"
    }
  ],
  "images": [
    "./maps/united_arab_emirates/dubai.png"
  ]
}
```
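Before launching SFT, it can help to sanity-check the converted file. The snippet below is a sketch under assumptions: the output path `data/sft_data.json` is hypothetical (substitute whatever `utils/prepare_data_for_sft.py` actually writes), and it only checks the fields shown in the example above.
```python
# Sketch of a sanity check for the converted SFT file.
# Assumptions: "data/sft_data.json" is a hypothetical output path, and
# image paths are taken to be relative to the current working directory.
import json
import os

with open("data/sft_data.json", "r", encoding="utf-8") as f:
    records = json.load(f)

for i, rec in enumerate(records):
    roles = [turn["from"] for turn in rec["conversations"]]
    assert roles[0] == "human" and roles[-1] == "gpt", f"bad turn order in record {i}"
    for path in rec.get("images", []):
        assert os.path.exists(path), f"missing image {path} in record {i}"

print(f"checked {len(records)} records")
```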
### 5. Training Example
You can train the `RewardMap` model using the provided scripts:
```bash
# RewardMap training
bash scripts/reward_map.sh
```
## Citation
If you find this work useful in your research, please consider citing our paper:
```bibtex
@article{feng2025rewardmap,
  title={RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning},
  author={Feng, Sicheng and Tuo, Kaiwen and Wang, Song and Kong, Lingdong and Zhu, Jianke and Wang, Huan},
  journal={arXiv preprint arXiv:2510.02240},
  year={2025}
}
```