---
task_categories:
- image-text-to-text
- visual-question-answering
language:
- en
tags:
- visual-question-answering
- multimodal
- reinforcement-learning
- visual-reasoning
- spatial-reasoning
- transit-maps
---
# ReasonMap-Plus Dataset
The **ReasonMap-Plus** dataset is an extended dataset introduced in the paper [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240). It addresses the challenge of sparse rewards in fine-grained visual reasoning for multimodal large language models (MLLMs), particularly in structured, information-rich settings such as transit maps. ReasonMap-Plus provides dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. This repository contains the `ReasonMap-Plus` data for evaluation; the companion `ReasonMap-Train` dataset is used for RewardMap training.
- **Paper:** [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240)
- **Project Page:** https://fscdc.github.io/RewardMap
- **Code:** https://github.com/fscdc/RewardMap
## Sample Usage
To get started with the `ReasonMap-Plus` dataset, follow these steps to install dependencies, download the data, and prepare it for training.
### 1. Install dependencies
If you face any issues with the installation, please feel free to open an issue on the GitHub repository.
```bash
pip install -r requirements.txt
```
### 2. Download the dataset
You can download [ReasonMap-Plus](https://huggingface.co/datasets/FSCCS/ReasonMap-Plus) (for evaluation) and [ReasonMap-Train](https://huggingface.co/datasets/FSCCS/ReasonMap-Train) (for RewardMap training) from Hugging Face or by running the following command:
```bash
python utils/download_dataset.py
```
Then, place the downloaded data under the `data` folder.
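After downloading, a quick sanity check of the `data` folder can catch misplaced files early. The sketch below is a minimal, hedged example: the subfolder names `ReasonMap-Plus` and `ReasonMap-Train` are assumptions about the layout (adjust them to whatever the download script actually produces), and the demonstration runs on a temporary directory rather than your real `data` folder.

```python
# Hypothetical sanity check for the downloaded data layout.
# The folder names below are assumptions; match them to the real layout.
from pathlib import Path
import tempfile

def check_data_dir(root: Path) -> list[str]:
    """Return the names of dataset folders found directly under `root`."""
    if not root.is_dir():
        raise FileNotFoundError(f"expected a data folder at {root}")
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Demonstration on a temporary directory standing in for `data/`.
with tempfile.TemporaryDirectory() as tmp:
    data = Path(tmp) / "data"
    (data / "ReasonMap-Plus").mkdir(parents=True)   # assumed folder name
    (data / "ReasonMap-Train").mkdir()              # assumed folder name
    print(check_data_dir(data))  # ['ReasonMap-Plus', 'ReasonMap-Train']
```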
### 3. Prepare data for Supervised Fine-Tuning (SFT)
If you plan to use tools like [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for SFT training, first prepare the datasets by running the following command:
```bash
python utils/prepare_data_for_sft.py --dataset_dir path/to/your_data
```
### 4. Data Format Example
Your data will be converted into a format similar to the following for SFT:
```json
{
  "conversations": [
    {
      "from": "human",
      "value": "<image> Please solve the multiple choice problem and put your answer (one of ABCD) in one \"\\boxed{}\". According to the subway map, how many intermediate stops are there between Danube Station and Ibn Battuta Station (excluding these two stops)?\nA) 8\nB) 1\nC) 25\nD) 12\n"
    },
    {
      "from": "gpt",
      "value": "B"
    }
  ],
  "images": [
    "./maps/united_arab_emirates/dubai.png"
  ]
}
```
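The record format above can be composed and checked programmatically. The sketch below is illustrative only: the helper names are hypothetical, and the invariants checked (alternating roles, one `<image>` placeholder per image path) are our assumptions about what the trainer expects, not an official schema.

```python
# Minimal sketch of composing one SFT record in the conversation format above.
# Helper names are hypothetical; the invariants checked are assumptions.
import json

def make_sft_record(question: str, answer: str, image_path: str) -> dict:
    """Build one conversation-style record for a single VQA pair."""
    return {
        "conversations": [
            {"from": "human", "value": f"<image> {question}"},
            {"from": "gpt", "value": answer},
        ],
        "images": [image_path],
    }

def validate_record(rec: dict) -> None:
    """Check assumed invariants: required keys, role order, image placeholders."""
    assert {"conversations", "images"} <= rec.keys()
    turns = rec["conversations"]
    assert turns[0]["from"] == "human" and turns[-1]["from"] == "gpt"
    # One <image> placeholder per image path.
    n_placeholders = sum(t["value"].count("<image>") for t in turns)
    assert n_placeholders == len(rec["images"])

record = make_sft_record(
    "According to the subway map, how many intermediate stops are there "
    "between Danube Station and Ibn Battuta Station?",
    "B",
    "./maps/united_arab_emirates/dubai.png",
)
validate_record(record)
print(json.dumps(record, indent=2))
```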
### 5. Training Example
You can launch RewardMap training with the provided script:
```bash
# RewardMap training
bash scripts/reward_map.sh
```
## Citation
If you find this work useful in your research, please consider citing our paper:
```bibtex
@article{feng2025rewardmap,
  title={RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning},
  author={Feng, Sicheng and Tuo, Kaiwen and Wang, Song and Kong, Lingdong and Zhu, Jianke and Wang, Huan},
  journal={arXiv preprint arXiv:2510.02240},
  year={2025}
}
```