  Retrieval-Augmented Generation (RAG) has become a key paradigm to enhance Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.

This repository provides the implementation of **T-RAG**, a novel table-corpora-aware RAG framework featuring:

- **Hierarchical Memory Index** – organizes heterogeneous table knowledge at multiple granularities.
- **Multi-Stage Retrieval** – coarse-to-fine retrieval combining clustering, subgraph reasoning, and PageRank.
- **Graph-Aware Prompting** – injects relational priors into LLMs for structured tabular reasoning.
- **MultiTableQA Benchmark** – a large-scale dataset with **57,193 tables** and **23,758 questions** across various tabular tasks.

  For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:

| Dataset | Link |
|-----------------------|------|

  MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.

---

## ✨ Sample Usage

To get started with the T-RAG framework and the MultiTableQA benchmark, follow these steps.

### 🚀 Installation

```bash
git clone https://github.com/jiaruzouu/T-RAG.git
cd T-RAG

conda create -n trag python=3.11.9
conda activate trag

# Install dependencies
pip install -r requirements.txt
```

### 1. MultiTableQA Data Preparation

To download and preprocess the **MultiTableQA** benchmark:

```bash
cd table2graph
bash scripts/prepare_data.sh
```

This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits.
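
For intuition, the sketch below shows what row/column decomposition of a single table might look like. `decompose_table` and its splitting policy are illustrative assumptions of ours, not the benchmark's actual preprocessing code.

```python
import pandas as pd

def decompose_table(df: pd.DataFrame, n_row_chunks: int = 2, n_col_chunks: int = 2):
    """Hypothetical row/column decomposition: split one table into several
    smaller sub-tables (illustrative policy, not the official script)."""
    sub_tables = []
    # Row splitting: partition the rows into contiguous chunks.
    rows_per_chunk = max(1, len(df) // n_row_chunks)
    for start in range(0, len(df), rows_per_chunk):
        sub_tables.append(df.iloc[start:start + rows_per_chunk])
    # Column splitting: keep each column group together with the first
    # (assumed key) column so the sub-tables remain joinable.
    key, rest = df.columns[0], list(df.columns[1:])
    cols_per_chunk = max(1, len(rest) // n_col_chunks)
    for start in range(0, len(rest), cols_per_chunk):
        sub_tables.append(df[[key] + rest[start:start + cols_per_chunk]])
    return sub_tables

df = pd.DataFrame({"country": ["FR", "DE", "IT", "ES"],
                   "capital": ["Paris", "Berlin", "Rome", "Madrid"],
                   "population_m": [68, 84, 59, 48]})
print(len(decompose_table(df)))  # 4 sub-tables: 2 row chunks + 2 column chunks
```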

### 2. Run T-RAG Retrieval

To run hierarchical index construction and multi-stage retrieval:

**Stages 1 & 2: Table-to-Graph Construction & Coarse-grained Multi-way Retrieval**

Stages 1 & 2 include:
- Table Linearization (see the sketch after this list)
- Multi-way Feature Extraction
- Hypergraph Construction by Multi-way Clustering
- Typical Node Selection for Efficient Table Retrieval
- Query-Cluster Assignment
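
As a concrete illustration of the linearization step, here is a minimal sketch; the template below is our own assumption, not necessarily the exact format produced by the scripts.

```python
import pandas as pd

def linearize_table(df: pd.DataFrame, caption: str = "") -> str:
    """Flatten a table into one string so it can be embedded by a text
    encoder (illustrative template, not the repo's exact one)."""
    parts = [f"caption: {caption}"] if caption else []
    parts.append("columns: " + " | ".join(map(str, df.columns)))
    for _, row in df.iterrows():
        parts.append("row: " + " | ".join(map(str, row.tolist())))
    return " ; ".join(parts)

df = pd.DataFrame({"country": ["FR", "DE"], "capital": ["Paris", "Berlin"]})
print(linearize_table(df, caption="European capitals"))
# caption: European capitals ; columns: country | capital ; row: FR | Paris ; row: DE | Berlin
```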

To run these stages:

```bash
cd src/table2graph
bash scripts/table_cluster_run.sh  # or: python scripts/table_cluster_run.py
```
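
Conceptually, the coarse-grained stage clusters table embeddings and routes each query to its most similar cluster(s). The sketch below uses k-means over random stand-in embeddings; the actual scripts' multi-way features and clustering method may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
table_embs = rng.normal(size=(1000, 384))  # stand-in for linearized-table embeddings
query_emb = rng.normal(size=(384,))        # stand-in for the query embedding

# Coarse index: cluster the table corpus.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(table_embs)

# Query-cluster assignment: score clusters by centroid similarity and keep
# the tables of the top-scoring clusters as the candidate pool.
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cos(query_emb, c) for c in kmeans.cluster_centers_]
top_clusters = np.argsort(scores)[::-1][:3]
candidates = np.flatnonzero(np.isin(kmeans.labels_, top_clusters))
print(f"{len(candidates)} candidate tables from clusters {top_clusters.tolist()}")
```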

**Stage 3: Fine-grained Subgraph Retrieval**

Stage 3 includes:
- Local Subgraph Construction
- Iterative Personalized PageRank for Retrieval

To run this stage:

```bash
cd src/table2graph
python scripts/subgraph_retrieve_run.py
```
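
For reference, iterative personalized PageRank amounts to a short power iteration in which the random walk keeps restarting at the query-relevant nodes. A minimal, self-contained sketch, independent of the repo's actual implementation:

```python
import numpy as np

def personalized_pagerank(adj: np.ndarray, restart: np.ndarray,
                          alpha: float = 0.85, iters: int = 50) -> np.ndarray:
    """Power iteration for personalized PageRank on a dense adjacency matrix;
    `restart` is a probability vector concentrated on query-relevant nodes."""
    out_deg = adj.sum(axis=0)
    # Column-stochastic transition matrix; zero-degree columns stay all-zero.
    P = np.divide(adj, out_deg, out=np.zeros_like(adj), where=out_deg > 0)
    r = restart.copy()
    for _ in range(iters):
        r = alpha * (P @ r) + (1 - alpha) * restart
    return r

# Toy subgraph of 4 table/row nodes; the walk restarts at node 0.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
restart = np.array([1.0, 0.0, 0.0, 0.0])
print(personalized_pagerank(adj, restart).round(3))  # relevance score per node
```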

*Note: our method supports different embedding models, such as E5, Contriever, and Sentence-Transformers.*

### 3. Downstream Inference with LLMs

Evaluate T-RAG with an open- or closed-source LLM of your choice (e.g., GPT-4o, Claude-3.5, Qwen).

For closed-source LLMs, first add your API keys to `key.json`:

```json
{
  "openai": "<YOUR_OPENAI_API_KEY>",
  "claude": "<YOUR_CLAUDE_API_KEY>"
}
```
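
A minimal sketch of how such a key file might be consumed when prompting a closed-source model; the key layout follows the snippet above, and the graph-aware prompt is our own illustrative example, not the repo's exact template:

```python
import json
from openai import OpenAI  # assumes the official openai>=1.0 client

with open("key.json") as f:
    keys = json.load(f)

client = OpenAI(api_key=keys["openai"])

# Hypothetical graph-aware prompt: linearized tables plus their join links.
prompt = (
    "Answer using the retrieved tables.\n"
    "Table A: columns: country | capital ; row: FR | Paris\n"
    "Table B: columns: capital | population_m ; row: Paris | 2.1\n"
    "Link: Table A.capital joins Table B.capital\n"
    "Question: What is the population (in millions) of the capital of FR?"
)
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```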

To run end-to-end model inference and evaluation:

```bash
cd src/downstream_inference
bash scripts/overall_run.sh
```

---

# Citation