---
license: mit
configs:
- config_name: table
  data_files: hybridqa_table.jsonl
- config_name: test_query
  data_files: hybridqa_query.jsonl
task_categories:
- table-question-answering
---

This Hugging Face dataset repository contains **MultiTableQA-HybridQA**, one of the datasets released as part of the comprehensive **MultiTableQA** benchmark, introduced in the paper [RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking](https://arxiv.org/abs/2504.01346).

πŸ“„ [Paper](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG)

The MultiTableQA benchmark comprises five datasets covering table fact-checking, single-hop QA, and multi-hop QA:
| Dataset              | Link |
|-----------------------|------|
| MultiTableQA-TATQA    | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA)   |
| MultiTableQA-TabFact  | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TabFact) |
| MultiTableQA-SQA      | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_SQA)     |
| MultiTableQA-WTQ      | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_WTQ)     |
| MultiTableQA-HybridQA | πŸ€— [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_HybridQA)|


MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
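Assuming the standard 🤗 `datasets` API, the two configs declared in this repo's YAML header (`table` and `test_query`) can be loaded roughly as follows. `load_hybridqa` is a hypothetical helper name for illustration, not part of the repository:

```python
def load_hybridqa(config: str = "table"):
    """Load one config of MultiTableQA-HybridQA from the Hugging Face Hub.

    Config names come from this repo's YAML header:
        "table"      -> hybridqa_table.jsonl
        "test_query" -> hybridqa_query.jsonl
    """
    valid = ("table", "test_query")
    if config not in valid:
        raise ValueError(f"unknown config {config!r}; expected one of {valid}")
    from datasets import load_dataset  # pip install datasets
    return load_dataset("jiaruz2/MultiTableQA_HybridQA", config)
```

For example, `tables = load_hybridqa("table")` downloads and parses `hybridqa_table.jsonl`, while `load_hybridqa("test_query")` loads the evaluation queries.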

---

### Sample Usage

This section provides a quick guide to setting up the environment, preparing the MultiTableQA data, running T-RAG retrieval, and performing downstream inference with LLMs, based on the official [T-RAG GitHub repository](https://github.com/jiaruzouu/T-RAG).

#### 1. Installation

First, clone the repository and install the necessary dependencies:

```bash
git clone https://github.com/jiaruzouu/T-RAG.git
cd T-RAG

conda create -n trag python=3.11.9
conda activate trag

# Install dependencies
pip install -r requirements.txt
```

#### 2. MultiTableQA Data Preparation

To download and preprocess the MultiTableQA benchmark:

```bash
cd table2graph
bash scripts/prepare_data.sh
```

This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
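The decomposition itself is handled by `prepare_data.sh`; as a rough illustration only, row/column splitting of a header-plus-rows table might look like the sketch below (function names and the toy table are hypothetical, not the benchmark's actual code or data):

```python
def split_rows(table, chunk_size=2):
    """Split a table (header + data rows) into row-wise sub-tables,
    repeating the header in each chunk."""
    header, rows = table[0], table[1:]
    return [[header] + rows[i:i + chunk_size]
            for i in range(0, len(rows), chunk_size)]

def split_columns(table, col_indices):
    """Project every row of the table onto the selected column indices."""
    return [[row[i] for i in col_indices] for row in table]

# Toy example (not real benchmark data):
table = [
    ["City", "Country", "Population"],
    ["Paris", "France", "2.1M"],
    ["Lyon", "France", "0.5M"],
    ["Rome", "Italy", "2.8M"],
]
row_parts = split_rows(table, chunk_size=2)   # two sub-tables of <= 2 rows
col_part = split_columns(table, [0, 2])       # keep only City and Population
```

Each sub-table remains self-describing (it keeps its header), which is what makes retrieval over the decomposed pieces meaningful.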

#### 3. Run T-RAG Retrieval

To run hierarchical index construction and multi-stage retrieval:

**Stage 1 & 2: Table to Graph Construction & Coarse-grained Multi-way Retrieval**

```bash
cd src
cd table2graph
bash scripts/table_cluster_run.sh  # or python scripts/table_cluster_run.py
```

**Stage 3: Fine-grained Sub-graph Retrieval**

```bash
cd src
cd table2graph
python scripts/subgraph_retrieve_run.py
```

*Note: Our method supports different embedding backbones, such as E5, Contriever, and Sentence-Transformers.*
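Whichever embedder is chosen, the coarse-grained retrieval stage ultimately ranks table embeddings against a query embedding. A minimal NumPy sketch of that ranking step, with toy 3-dimensional vectors rather than the repository's actual pipeline:

```python
import numpy as np

def cosine_topk(query_vec, table_vecs, k=3):
    """Return the indices of the k table embeddings most similar to the
    query embedding, ranked by cosine similarity, plus their scores."""
    q = query_vec / np.linalg.norm(query_vec)
    t = table_vecs / np.linalg.norm(table_vecs, axis=1, keepdims=True)
    scores = t @ q                       # cosine similarity per table
    order = np.argsort(-scores)[:k]      # best-first
    return order, scores[order]

# Toy embeddings: table 0 matches the query exactly, table 2 partially.
query = np.array([1.0, 0.0, 0.0])
tables = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.7, 0.7, 0.0],
])
order, scores = cosine_topk(query, tables, k=2)
```

In the full system this ranking operates over the hierarchical index rather than a flat matrix, but the similarity computation is the same regardless of which embedding backbone produced the vectors.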

#### 4. Downstream Inference with LLMs

Evaluate T-RAG with an (open/closed-source) LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen):

For closed-source LLMs, first add your API keys to `key.json`:
```json
{
    "openai": "<YOUR_OPENAI_API_KEY>",
    "claude": "<YOUR_CLAUDE_API_KEY>"
}
```
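A small helper to read those keys at runtime might look like the following (a hypothetical helper, not part of the T-RAG codebase; the provider names mirror the `key.json` fields above):

```python
import json

def load_api_key(provider: str, path: str = "key.json") -> str:
    """Return the API key for `provider` ("openai" or "claude") from key.json."""
    with open(path) as f:
        keys = json.load(f)
    if provider not in keys:
        raise KeyError(f"no key for provider {provider!r} in {path}")
    return keys[provider]
```

Keeping keys in a local file like this (and out of version control) avoids hard-coding credentials in the inference scripts.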

To run end-to-end model inference and evaluation:

```bash
cd src
cd downstream_inference
bash scripts/overall_run.sh
```

---
# Citation

If you find our work useful, please cite:

```bibtex
@misc{zou2025rag,
      title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.01346},
}
```