---
license: cc
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---

## MCiteBench Dataset

MCiteBench is a benchmark for evaluating multimodal citation text generation in Multimodal Large Language Models (MLLMs). For more details, please refer to our [paper](test).

## Data Download

Please download the `MCiteBench_full_dataset.zip`. It contains the `data.jsonl` file and the `visual_resources` folder.
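
If you prefer to fetch the archive programmatically rather than through the web UI, a minimal sketch with `huggingface_hub` might look like the following. The `repo_id` is a placeholder for this dataset's repository, and the extraction directory is an arbitrary choice:

```python
import zipfile

from huggingface_hub import hf_hub_download

# Download the dataset archive from the Hub.
# "your-org/MCiteBench" is a placeholder; substitute the actual repo id.
zip_path = hf_hub_download(
    repo_id="your-org/MCiteBench",
    filename="MCiteBench_full_dataset.zip",
    repo_type="dataset",
)

# Unpack data.jsonl and the visual_resources folder.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("MCiteBench_full_dataset")
```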

## Data Format

The data format for `data_example.jsonl` and `data.jsonl` is as follows:

```yaml
question_id: [str]        # The ID of the question
pdf_id: [str]             # The ID of the associated PDF document
url: [str]                # The URL of the corresponding OpenReview submission
question_type: [str]      # The type of question: "explanation" or "locating"
question: [str]           # The text of the question
answer: [str]             # The answer to the question; a string, list, float, or integer depending on the context

evidence_keys: [list]     # Abstract references to evidence, such as "section x", "line y", "figure z", or "table k".
                          # These are pointers indicating where the evidence can be found, not the content itself.
                          # Example: ["section 2.1", "line 45", "Figure 3"]
evidence_contents: [list] # The resolved evidence content corresponding to `evidence_keys`: text excerpts,
                          # image file paths, or table file paths that provide the actual evidence for the answer.
                          # Each item corresponds to the same-index item in `evidence_keys`.
                          # Example: ["This is the content of section 2.1.", "/path/to/figure_3_add_caption.jpg"]
evidence_modal: [str]     # The modality of the evidence: 'figure', 'table', 'text', or 'mixed'
evidence_count: [int]     # The total number of evidence items for the question
distractor_count: [int]   # The total number of distractors, i.e. information blocks that are irrelevant or misleading for the answer
info_count: [int]         # The total number of information blocks in the document, including text, tables, and images
text_2_idx: [dict[str, str]]  # Maps text content to indices
idx_2_text: [dict[str, str]]  # Maps indices back to text content
image_2_idx: [dict[str, str]] # Maps image paths to indices
idx_2_image: [dict[str, str]] # Maps indices back to image paths
table_2_idx: [dict[str, str]] # Maps table paths to indices
idx_2_table: [dict[str, str]] # Maps indices back to table paths
meta_data: [dict]             # Additional metadata used during data construction
distractor_contents: [list]   # Like `evidence_contents`, but contains distractor (irrelevant or misleading) content
```
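
Each line of `data.jsonl` is a standalone JSON object with the fields above. As a quick sanity check, a minimal loading sketch (field names follow the schema; the file path assumes the zip was extracted to `MCiteBench_full_dataset/`):

```python
import json

# Read one JSON object per line.
with open("MCiteBench_full_dataset/data.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

sample = examples[0]
print(sample["question_id"], sample["question_type"])
print(sample["question"])

# evidence_keys and evidence_contents are aligned by index:
# figure/table evidence resolves to a file path under visual_resources.
for key, content in zip(sample["evidence_keys"], sample["evidence_contents"]):
    print(f"{key} -> {content}")
```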