nielsr (HF Staff) committed
Commit 0ec9d3e · verified · 1 Parent(s): 6cc5ff5

Improve dataset card: Add task category, sample usage, and descriptive intro


This PR enhances the dataset card for `MultiTableQA-HybridQA` by:

- Adding `task_categories: ['table-question-answering']` to the metadata, improving discoverability.
- Including a descriptive introductory sentence to clarify that this repository hosts the `MultiTableQA-HybridQA` dataset, part of the larger MultiTableQA benchmark.
- Incorporating a comprehensive "Sample Usage" section with installation instructions, data preparation steps, and examples for running T-RAG retrieval and downstream LLM inference, directly extracted from the associated `T-RAG` GitHub repository.
- Updating the BibTeX citation to reflect the correct paper title as provided in the GitHub repository.

Files changed (1)

1. README.md +87 -9
README.md CHANGED
@@ -1,15 +1,19 @@
  ---
  configs:
  - config_name: table
-   data_files: "hybridqa_table.jsonl"
  - config_name: test_query
-   data_files: "hybridqa_query.jsonl"
-
- license: mit
  ---
  📄 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)

- For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
  | Dataset | Link |
  |-----------------------|------|
  | MultiTableQA-TATQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |
@@ -21,19 +25,93 @@ For MultiTableQA, we release a comprehensive benchmark, including five different

  MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.

  ---
  # Citation

  If you find our work useful, please cite:

  ```bibtex
- @misc{zou2025gtrgraphtableragcrosstablequestion,
-     title={GTR: Graph-Table-RAG for Cross-Table Question Answering},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
-     url={https://arxiv.org/abs/2504.01346},
  }
- ```

  ---
+ license: mit
  configs:
  - config_name: table
+   data_files: hybridqa_table.jsonl
  - config_name: test_query
+   data_files: hybridqa_query.jsonl
+ task_categories:
+ - table-question-answering
  ---
+
+ This Hugging Face dataset repository contains **MultiTableQA-HybridQA**, one of the datasets released as part of the comprehensive **MultiTableQA** benchmark, introduced in the paper [RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking](https://arxiv.org/abs/2504.01346).
+
  📄 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)

+ For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
  | Dataset | Link |
  |-----------------------|------|
  | MultiTableQA-TATQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |

  MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.

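For a quick look at the HybridQA split, the two configs declared in the front matter can be loaded with the `datasets` library. This is a minimal sketch: the repository id `jiaruz2/MultiTableQA_HybridQA` is assumed from the naming pattern of the sibling datasets above, and plain JSONL data files load under the default `train` split.

```python
from datasets import load_dataset

# Repo id assumed from the sibling dataset naming pattern (jiaruz2/MultiTableQA_*).
tables = load_dataset("jiaruz2/MultiTableQA_HybridQA", "table", split="train")
queries = load_dataset("jiaruz2/MultiTableQA_HybridQA", "test_query", split="train")

print(tables[0])   # one serialized table record from hybridqa_table.jsonl
print(queries[0])  # one benchmark query record from hybridqa_query.jsonl
```
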
+ ---
+
+ ### Sample Usage
+
+ This section provides a quick guide to setting up the environment, preparing the MultiTableQA data, running T-RAG retrieval, and performing downstream inference with LLMs, based on the official [T-RAG GitHub repository](https://github.com/jiaruzouu/T-RAG).
+
+ #### 1. Installation
+
+ First, clone the repository and install the necessary dependencies:
+
+ ```bash
+ git clone https://github.com/jiaruzouu/T-RAG.git
+ cd T-RAG
+
+ conda create -n trag python=3.11.9
+ conda activate trag
+
+ # Install dependencies
+ pip install -r requirements.txt
+ ```
+
+ #### 2. MultiTableQA Data Preparation
+
+ To download and preprocess the MultiTableQA benchmark:
+
+ ```bash
+ cd table2graph
+ bash scripts/prepare_data.sh
+ ```
+
+ This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
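As a rough illustration of what row/column splitting means here, the sketch below decomposes a toy table into row-wise and column-wise sub-tables; the chunk size, column groups, and record layout are hypothetical, not the actual parameters of `prepare_data.sh`.

```python
# Hypothetical illustration of row/column decomposition on a toy table;
# not the actual logic of prepare_data.sh.
header = ["Player", "Team", "Year"]
rows = [
    ["A. Smith", "Reds", "2011"],
    ["B. Jones", "Blues", "2012"],
    ["C. Brown", "Greens", "2013"],
]

# Row splitting: chunk the rows into smaller sub-tables that share the header.
row_subtables = [
    {"header": header, "rows": rows[i:i + 2]} for i in range(0, len(rows), 2)
]

# Column splitting: project the table onto column subsets, keeping a key column.
col_groups = [[0, 1], [0, 2]]
col_subtables = [
    {"header": [header[j] for j in g], "rows": [[row[j] for j in g] for row in rows]}
    for g in col_groups
]

print(len(row_subtables), len(col_subtables))  # 2 row chunks, 2 column projections
```
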
+
+ #### 3. Run T-RAG Retrieval
+
+ To run hierarchical index construction and multi-stage retrieval:
+
+ **Stage 1 & 2: Table-to-Graph Construction & Coarse-grained Multi-way Retrieval**
+
+ ```bash
+ cd src
+ cd table2graph
+ bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
+ ```
+
+ **Stage 3: Fine-grained Sub-graph Retrieval**
+
+ ```bash
+ cd src
+ cd table2graph
+ python scripts/subgraph_retrieve_run.py
+ ```
+
+ *Note: Our method supports different embedding models, such as E5, Contriever, and Sentence-Transformers.*
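As a sketch of how one of these encoders could be swapped in, the snippet below scores serialized tables against a query with E5 via `sentence-transformers`. The model name, the `query:`/`passage:` prefixes (an E5 convention), and the table serialization are assumptions for illustration, not the repository's actual retrieval code.

```python
from sentence_transformers import SentenceTransformer

# Hypothetical encoder choice; E5 expects "query:" / "passage:" prefixes.
model = SentenceTransformer("intfloat/e5-base-v2")

tables = [
    "passage: Player | Team | Year ; A. Smith | Reds | 2011",
    "passage: Country | Capital | Population ; France | Paris | 68M",
]
query = "query: Which team did A. Smith play for in 2011?"

table_emb = model.encode(tables, normalize_embeddings=True)
query_emb = model.encode([query], normalize_embeddings=True)

scores = query_emb @ table_emb.T  # cosine similarity, since embeddings are normalized
print(int(scores.argmax()))       # index of the best-matching table
```
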
81
+
82
+ #### 4. Downstream Inference with LLMs
83
+
84
+ Evaluate T-RAG with an (open/closed-source) LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen):
85
+
86
+ For Closed-source LLM, please first insert your key under `key.json`:
87
+ ```json
88
+ {
89
+ "openai": "<YOUR_OPENAI_API_KEY>",
90
+ "claude": "<YOUR_CLAUDE_API_KEY>"
91
+ }
92
+ ```
93
+
94
+ To run end-to-end model inference and evaluation:
95
+
96
+ ```bash
97
+ cd src
98
+ cd downstream_inference
99
+ bash scripts/overall_run.sh
100
+ ```
101
+
  ---
  # Citation

  If you find our work useful, please cite:

  ```bibtex
+ @misc{zou2025rag,
+     title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
+     url={https://arxiv.org/abs/2504.01346},
  }
+ ```