Add task category, comprehensive usage, and update citation title

#2
by nielsr (HF Staff)
Files changed (1)
  1. README.md +104 -6
README.md CHANGED
@@ -1,14 +1,27 @@
 ---
 configs:
 - config_name: table
-  data_files: "tatqa_table.jsonl"
 - config_name: test_query
-  data_files: "tatqa_query.jsonl"
-
-license: mit
 ---
 πŸ“„ [Paper](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG)
 
 For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
@@ -21,14 +34,99 @@ For MultiTableQA, we release a comprehensive benchmark, including five different
 
 MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
 
 ---
 # Citation
 
 If you find our work useful, please cite:
 
 ```bibtex
-@misc{zou2025gtrgraphtableragcrosstablequestion,
-      title={GTR: Graph-Table-RAG for Cross-Table Question Answering},
       author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
       year={2025},
       eprint={2504.01346},
 
 ---
+license: mit
 configs:
 - config_name: table
+  data_files: tatqa_table.jsonl
 - config_name: test_query
+  data_files: tatqa_query.jsonl
+task_categories:
+- table-question-answering
 ---
+
 πŸ“„ [Paper](https://arxiv.org/abs/2504.01346) | πŸ‘¨πŸ»β€πŸ’» [Code](https://github.com/jiaruzouu/T-RAG)
 
+## πŸ” Introduction
+
+Retrieval-Augmented Generation (RAG) has become a key paradigm to enhance Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.
+
+This repository provides the implementation of **T-RAG**, a novel table-corpora-aware RAG framework featuring:
+
+- **Hierarchical Memory Index** – organizes heterogeneous table knowledge at multiple granularities.
+- **Multi-Stage Retrieval** – coarse-to-fine retrieval combining clustering, subgraph reasoning, and PageRank.
+- **Graph-Aware Prompting** – injects relational priors into LLMs for structured tabular reasoning.
+- **MultiTableQA Benchmark** – a large-scale dataset with **57,193 tables** and **23,758 questions** across various tabular tasks.
+
 For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
 | Dataset | Link |
 |-----------------------|------|
 
 MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
 
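+Since the card defines the `table` and `test_query` configs above, both can be pulled with the standard `datasets` library. A minimal sketch (the repo id below is a placeholder for this dataset's actual Hub path):
+
+```python
+from datasets import load_dataset
+
+# Placeholder repo id -- substitute this dataset's actual Hub path.
+REPO_ID = "<org-or-user>/<this-dataset>"
+
+# Each config maps to one JSONL file declared in the YAML front matter.
+tables = load_dataset(REPO_ID, "table", split="train")        # tatqa_table.jsonl
+queries = load_dataset(REPO_ID, "test_query", split="train")  # tatqa_query.jsonl
+
+print(tables[0])   # one table record
+print(queries[0])  # one benchmark query
+```
+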
+---
+
+## ✨ Sample Usage
+
+To get started with the T-RAG framework and the MultiTableQA benchmark, follow these steps.
+
+### πŸš€ Installation
+
+```bash
+git clone https://github.com/jiaruzouu/T-RAG.git
+cd T-RAG
+
+conda create -n trag python=3.11.9
+conda activate trag
+
+# Install dependencies
+pip install -r requirements.txt
+```
+
+### 1. MultiTableQA Data Preparation
+
+To download and preprocess the **MultiTableQA** benchmark:
+
+```bash
+cd table2graph
+bash scripts/prepare_data.sh
+```
+
+This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
+
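+The row/column decomposition can be pictured with a small `pandas` sketch (illustrative only; the real splitting logic lives in `scripts/prepare_data.sh`):
+
+```python
+import pandas as pd
+
+# Toy table standing in for one source table from the benchmark.
+df = pd.DataFrame({
+    "company": ["A", "B", "C", "D"],
+    "revenue": [10, 20, 30, 40],
+    "year": [2020, 2021, 2020, 2021],
+})
+
+# Row splitting: break the table into smaller row blocks.
+row_shards = [df.iloc[i:i + 2] for i in range(0, len(df), 2)]
+
+# Column splitting: keep a key column and peel off one attribute at a time.
+col_shards = [df[["company", col]] for col in ("revenue", "year")]
+
+print(len(row_shards), len(col_shards))  # 2 row shards, 2 column shards
+```
+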
+### 2. Run T-RAG Retrieval
+
+To run hierarchical index construction and multi-stage retrieval:
+
+**Stage 1 & 2: Table-to-Graph Construction & Coarse-grained Multi-way Retrieval**
+
+Stages 1 & 2 include:
+- Table Linearization
+- Multi-way Feature Extraction
+- Hypergraph Construction by Multi-way Clustering
+- Typical Node Selection for Efficient Table Retrieval
+- Query-Cluster Assignment
+
+To run this:
+
+```bash
+cd src/table2graph
+bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
+```
+
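+For intuition, here is a minimal, self-contained sketch of the coarse-grained idea: embed linearized tables, cluster them, and route a query to its nearest cluster. It is a stand-in rather than the repo's code: plain k-means replaces the multi-way hypergraph clustering, and the encoder and toy tables are arbitrary choices.
+
+```python
+from sentence_transformers import SentenceTransformer
+from sklearn.cluster import KMeans
+
+# Toy linearized tables (Stage 1 produces these from raw tables).
+tables = [
+    "company | revenue | year : A | 10 | 2020",
+    "player | goals | season : X | 12 | 2019",
+    "country | gdp | year : B | 5.1 | 2021",
+]
+
+encoder = SentenceTransformer("all-MiniLM-L6-v2")
+table_emb = encoder.encode(tables)
+
+# Cluster the table embeddings (k-means here, multi-way clustering in the paper).
+kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(table_emb)
+
+# Query-cluster assignment: route the query to its nearest cluster.
+query_emb = encoder.encode(["Which company had the highest revenue in 2020?"])
+cluster_id = int(kmeans.predict(query_emb)[0])
+candidates = [t for t, c in zip(tables, kmeans.labels_) if c == cluster_id]
+print(cluster_id, candidates)
+```
+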
+**Stage 3: Fine-grained Subgraph Retrieval**
+
+Stage 3 includes:
+- Local Subgraph Construction
+- Iterative Personalized PageRank for Retrieval
+
+To run this:
+
+```bash
+cd src/table2graph
+python scripts/subgraph_retrieve_run.py
+```
+
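+The retrieval step can be illustrated with `networkx`'s personalized PageRank on a toy table graph (a sketch of the idea, not the repo's implementation; nodes and edges are made up):
+
+```python
+import networkx as nx
+
+# Toy table graph: nodes are tables, edges connect related tables
+# (e.g., shared columns or overlapping entities).
+G = nx.Graph()
+G.add_edges_from([("t1", "t2"), ("t2", "t3"), ("t3", "t4"), ("t1", "t4"), ("t4", "t5")])
+
+# Restart mass concentrated on the tables the coarse stage matched to the query.
+personalization = {"t1": 0.5, "t2": 0.5, "t3": 0.0, "t4": 0.0, "t5": 0.0}
+
+scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
+print(sorted(scores, key=scores.get, reverse=True)[:3])  # top-3 tables for the query
+```
+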
+*Note: Our method supports different embedding methods such as E5, Contriever, Sentence-Transformers, etc.*
+
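+Swapping encoders is then a matter of changing the model id, e.g. with `sentence-transformers` (the model ids below are Hub examples, not pinned by this repo):
+
+```python
+from sentence_transformers import SentenceTransformer
+
+encoder = SentenceTransformer("intfloat/e5-base-v2")    # E5
+# encoder = SentenceTransformer("facebook/contriever")  # Contriever
+# encoder = SentenceTransformer("all-MiniLM-L6-v2")     # Sentence-Transformers
+
+# Note: E5 models expect "query: " / "passage: " prefixes for best results.
+emb = encoder.encode(["query: Which company had the highest revenue?"])
+print(emb.shape)
+```
+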
+### 3. Downstream Inference with LLMs
+
+Evaluate T-RAG with an open- or closed-source LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen).
+
+For closed-source LLMs, first add your API keys to `key.json`:
+
+```json
+{
+  "openai": "<YOUR_OPENAI_API_KEY>",
+  "claude": "<YOUR_CLAUDE_API_KEY>"
+}
+```
+
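+For a sense of how such a key file is typically consumed (an illustrative sketch; the repo's inference scripts handle this internally):
+
+```python
+import json
+from openai import OpenAI
+
+# Read the keys written to key.json above.
+with open("key.json") as f:
+    keys = json.load(f)
+
+client = OpenAI(api_key=keys["openai"])
+response = client.chat.completions.create(
+    model="gpt-4o",
+    messages=[{"role": "user", "content": "Answer using the retrieved tables: ..."}],
+)
+print(response.choices[0].message.content)
+```
+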
+To run end-to-end model inference and evaluation:
+
+```bash
+cd src/downstream_inference
+bash scripts/overall_run.sh
+```
+
 ---
 # Citation
 
 If you find our work useful, please cite:
 
 ```bibtex
+@misc{zou2025rag,
+      title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
       author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
       year={2025},
       eprint={2504.01346},