---
license: mit
configs:
  - config_name: table
    data_files: hybridqa_table.jsonl
  - config_name: test_query
    data_files: hybridqa_query.jsonl
task_categories:
  - table-question-answering
---

This Hugging Face dataset repository contains **MultiTableQA-HybridQA**, one of the five datasets released as part of the comprehensive MultiTableQA benchmark, introduced in the paper [*RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking*](https://arxiv.org/abs/2504.01346).

[📄 Paper](https://arxiv.org/abs/2504.01346) | [👨🏻‍💻 Code](https://github.com/jiaruzouu/T-RAG)

The full MultiTableQA benchmark comprises five datasets covering table fact-checking, single-hop QA, and multi-hop QA:

| Dataset | Link |
|---------|------|
| MultiTableQA-TATQA | 🤗 dataset link |
| MultiTableQA-TabFact | 🤗 dataset link |
| MultiTableQA-SQA | 🤗 dataset link |
| MultiTableQA-WTQ | 🤗 dataset link |
| MultiTableQA-HybridQA | 🤗 dataset link |

MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
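The per-config data files listed in the metadata (`hybridqa_table.jsonl`, `hybridqa_query.jsonl`) are plain JSON Lines files; a minimal sketch for reading them once downloaded (the field names in the demo record are illustrative, not the actual schema):

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Yield one dict per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Tiny self-contained demo with an illustrative record (not the real schema).
demo = Path("demo.jsonl")
demo.write_text('{"id": "q1", "question": "example?"}\n', encoding="utf-8")
records = list(read_jsonl(demo))
```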


## Sample Usage

This section provides a quick guide to setting up the environment, preparing the MultiTableQA data, running T-RAG retrieval, and performing downstream inference with LLMs, based on the official T-RAG GitHub repository.

### 1. Installation

First, clone the repository and install the necessary dependencies:

```bash
git clone https://github.com/jiaruzouu/T-RAG.git
cd T-RAG

conda create -n trag python=3.11.9
conda activate trag

# Install dependencies
pip install -r requirements.txt
```

### 2. MultiTableQA Data Preparation

To download and preprocess the MultiTableQA benchmark:

```bash
cd table2graph
bash scripts/prepare_data.sh
```

This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
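The decomposition step splits each source table into smaller row- and column-wise sub-tables. The exact strategy lives in `prepare_data.sh`; the idea can be sketched like this (chunk sizes and the list-of-rows representation are our assumptions, not the repo's actual code):

```python
def split_rows(rows, chunk_size=2):
    """Split a table (list of rows) into row-wise sub-tables of at most chunk_size rows."""
    return [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]

def split_columns(header, rows, group_size=2):
    """Split a table into column-wise sub-tables, each keeping its header slice."""
    subtables = []
    for j in range(0, len(header), group_size):
        cols = slice(j, j + group_size)
        subtables.append((header[cols], [row[cols] for row in rows]))
    return subtables

header = ["city", "country", "population", "year"]
rows = [["Paris", "France", "2.1M", "2023"],
        ["Tokyo", "Japan", "14M", "2023"],
        ["Lima", "Peru", "10M", "2023"]]

row_parts = split_rows(rows, chunk_size=2)        # two sub-tables: 2 rows + 1 row
col_parts = split_columns(header, rows, 2)        # two sub-tables of 2 columns each
```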

### 3. Run T-RAG Retrieval

To run hierarchical index construction and multi-stage retrieval:

#### Stage 1 & 2: Table-to-Graph Construction & Coarse-grained Multi-way Retrieval

```bash
cd src/table2graph
bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
```

#### Stage 3: Fine-grained Sub-graph Retrieval

```bash
cd src/table2graph
python scripts/subgraph_retrieve_run.py
```

Note: T-RAG supports multiple embedding backends, such as E5, Contriever, and Sentence-Transformers.
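Whichever backend produces the embeddings, coarse-grained retrieval ultimately scores query vectors against table vectors, typically by cosine similarity. A toy NumPy sketch of that scoring step (hand-made vectors, not T-RAG's actual pipeline):

```python
import numpy as np

def cosine_top_k(query_vec, table_vecs, k=2):
    """Rank table embeddings against a query embedding by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    t = table_vecs / np.linalg.norm(table_vecs, axis=1, keepdims=True)
    scores = t @ q                       # cosine similarity per table
    top = np.argsort(-scores)[:k]        # indices of the k best tables
    return top.tolist(), scores[top].tolist()

# Toy embeddings: table 0 points almost the same way as the query.
query = np.array([1.0, 0.0, 0.0])
tables = np.array([[0.9, 0.1, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.7, 0.7, 0.0]])
idx, scores = cosine_top_k(query, tables, k=2)
```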

### 4. Downstream Inference with LLMs

Evaluate T-RAG with an open- or closed-source LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen).

For closed-source LLMs, first add your API keys to `key.json`:

```json
{
    "openai": "<YOUR_OPENAI_API_KEY>",
    "claude": "<YOUR_CLAUDE_API_KEY>"
}
```
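The inference scripts read the keys from this file. A hypothetical loader sketch (the helper name and error handling are ours, not the repo's; the demo writes a placeholder file rather than a real `key.json`):

```python
import json

def load_api_key(provider, path="key.json"):
    """Return the API key for `provider` (e.g. "openai" or "claude") from a key file."""
    with open(path, encoding="utf-8") as f:
        keys = json.load(f)
    if provider not in keys:
        raise KeyError(f"No key for provider {provider!r} in {path}")
    return keys[provider]

# Self-contained demo with placeholder keys (uses a separate demo file).
with open("demo_key.json", "w", encoding="utf-8") as f:
    json.dump({"openai": "sk-placeholder", "claude": "placeholder"}, f)
key = load_api_key("openai", path="demo_key.json")
```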

To run end-to-end model inference and evaluation:

```bash
cd src/downstream_inference
bash scripts/overall_run.sh
```

## Citation

If you find our work useful, please cite:

```bibtex
@misc{zou2025rag,
      title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.01346},
}
```