|
|
---
license: mit
configs:
- config_name: table
  data_files: hybridqa_table.jsonl
- config_name: test_query
  data_files: hybridqa_query.jsonl
task_categories:
- table-question-answering
---
|
|
|
|
|
This Hugging Face dataset repository contains **MultiTableQA-HybridQA**, one of the datasets released as part of the comprehensive **MultiTableQA** benchmark, introduced in the paper [RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking](https://arxiv.org/abs/2504.01346). |
|
|
|
|
|
📄 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)
|
|
|
|
|
The MultiTableQA benchmark comprises five datasets covering table fact-checking, single-hop QA, and multi-hop QA:
|
|
| Dataset | Link |
|-----------------------|------|
| MultiTableQA-TATQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |
| MultiTableQA-TabFact | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TabFact) |
| MultiTableQA-SQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_SQA) |
| MultiTableQA-WTQ | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_WTQ) |
| MultiTableQA-HybridQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_HybridQA) |
|
|
|
|
|
|
|
|
MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations. |
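To load this dataset directly with 🤗 `datasets`, you can use the config names declared in the YAML header above (`table` and `test_query`). A minimal sketch; a single JSONL data file is assumed to load under the default `train` split:

```python
from datasets import load_dataset

# "table" holds the table corpus (hybridqa_table.jsonl);
# "test_query" holds the questions (hybridqa_query.jsonl).
tables = load_dataset("jiaruz2/MultiTableQA_HybridQA", "table", split="train")
queries = load_dataset("jiaruz2/MultiTableQA_HybridQA", "test_query", split="train")

print(tables[0])
print(queries[0])
```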
|
|
|
|
|
--- |
|
|
|
|
|
### Sample Usage |
|
|
|
|
|
This section provides a quick guide to setting up the environment, preparing the MultiTableQA data, running T-RAG retrieval, and performing downstream inference with LLMs, based on the official [T-RAG GitHub repository](https://github.com/jiaruzouu/T-RAG). |
|
|
|
|
|
#### 1. Installation |
|
|
|
|
|
First, clone the repository and install the necessary dependencies: |
|
|
|
|
|
```bash
git clone https://github.com/jiaruzouu/T-RAG.git
cd T-RAG

conda create -n trag python=3.11.9
conda activate trag

# Install dependencies
pip install -r requirements.txt
```
|
|
|
|
|
#### 2. MultiTableQA Data Preparation |
|
|
|
|
|
To download and preprocess the MultiTableQA benchmark: |
|
|
|
|
|
```bash
cd table2graph
bash scripts/prepare_data.sh
```
|
|
|
|
|
This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits. |
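For intuition, row/column splitting breaks each source table into smaller sub-tables. Below is a minimal, illustrative sketch of such a decomposition; the `chunk_rows` parameter and output format are hypothetical, and the actual `prepare_data.sh` pipeline defines its own splitting granularity:

```python
import pandas as pd

def decompose_table(df: pd.DataFrame, chunk_rows: int = 2):
    """Split a table into row chunks and single-column views (illustrative only)."""
    row_splits = [df.iloc[i:i + chunk_rows] for i in range(0, len(df), chunk_rows)]
    col_splits = [df[[col]] for col in df.columns]
    return row_splits, col_splits

df = pd.DataFrame({"team": ["A", "B", "C"], "wins": [10, 7, 12]})
rows, cols = decompose_table(df)
print(len(rows), len(cols))  # 2 row chunks, 2 column views
```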
|
|
|
|
|
#### 3. Run T-RAG Retrieval |
|
|
|
|
|
To run hierarchical index construction and multi-stage retrieval: |
|
|
|
|
|
**Stage 1 & 2: Table to Graph Construction & Coarse-grained Multi-way Retrieval** |
|
|
|
|
|
```bash
cd src/table2graph
bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
```
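To give a rough picture of what "table to graph construction" means, here is a hedged sketch that links tables sharing column headers. The corpus, table ids, and the header-overlap criterion are illustrative stand-ins, not the paper's actual hierarchical index:

```python
import networkx as nx

# Hypothetical corpus: map table ids to their column headers.
tables = {
    "t1": ["team", "wins"],
    "t2": ["team", "city"],
    "t3": ["country", "gdp"],
}

# Connect tables that share at least one column name (a crude
# stand-in for the repository's graph construction).
G = nx.Graph()
G.add_nodes_from(tables)
for a in tables:
    for b in tables:
        if a < b and set(tables[a]) & set(tables[b]):
            G.add_edge(a, b)

print(G.edges())  # [('t1', 't2')]
```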
|
|
|
|
|
**Stage 3: Fine-grained Sub-graph Retrieval**
|
|
|
|
|
```bash
cd src/table2graph
python scripts/subgraph_retrieve_run.py
```
|
|
|
|
|
*Note: The retrieval pipeline supports multiple embedding models, such as E5, Contriever, and Sentence-Transformers.*
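As a rough picture of coarse-grained retrieval with one of these encoders, here is a hedged sketch using `sentence-transformers`. The model name, the table linearization, and the top-k value are assumptions; the actual scripts configure their own encoders and clustering:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Linearize each table into a string before encoding (simplified).
table_texts = [
    "team: A, B, C | wins: 10, 7, 12",
    "country: US, UK | gdp: 21t, 3t",
]
query = "Which team has the most wins?"

table_emb = model.encode(table_texts, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Rank tables by cosine similarity and keep the top-k candidates.
hits = util.semantic_search(query_emb, table_emb, top_k=1)[0]
print(hits)  # e.g., [{'corpus_id': 0, 'score': ...}]
```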
|
|
|
|
|
#### 4. Downstream Inference with LLMs |
|
|
|
|
|
Evaluate T-RAG with an open- or closed-source LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen).
|
|
|
|
|
For closed-source LLMs, first add your API key(s) to `key.json`:
|
|
```json |
|
|
{
  "openai": "<YOUR_OPENAI_API_KEY>",
  "claude": "<YOUR_CLAUDE_API_KEY>"
}
|
|
``` |
|
|
|
|
|
To run end-to-end model inference and evaluation: |
|
|
|
|
|
```bash
cd src/downstream_inference
bash scripts/overall_run.sh
```
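For reference, a minimal sketch of how a key stored in `key.json` could drive a single closed-source call; the model name and prompt are placeholders, and `overall_run.sh` implements the full inference and evaluation pipeline:

```python
import json
from openai import OpenAI

# Read the API key stored in key.json (see above).
with open("key.json") as f:
    keys = json.load(f)

client = OpenAI(api_key=keys["openai"])
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Answer the question given the retrieved tables: ..."}],
)
print(response.choices[0].message.content)
```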
|
|
|
|
|
--- |
|
|
### Citation
|
|
|
|
|
If you find our work useful, please cite: |
|
|
|
|
|
```bibtex |
|
|
@misc{zou2025rag, |
|
|
title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking}, |
|
|
author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He}, |
|
|
year={2025}, |
|
|
eprint={2504.01346}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CL}, |
|
|
url={https://arxiv.org/abs/2504.01346}, |
|
|
} |
|
|
``` |