hz1919810 committed on
Commit 6449df0 · verified · 1 Parent(s): f18c68d

Add comprehensive dataset card

Files changed (1)
  1. README.md +202 -0
README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ license: mit
+ task_categories:
+ - robotics
+ - reinforcement-learning
+ tags:
+ - metaworld
+ - robotics
+ - manipulation
+ - multi-task
+ - r3m
+ - vision-language
+ - imitation
+ size_categories:
+ - 1K<n<10K
+ language:
+ - en
+ pretty_name: Short-MetaWorld Dataset
+ dataset_info:
+   features:
+   - name: image
+     dtype: image
+   - name: state
+     sequence: float32
+   - name: action
+     sequence: float32
+   - name: prompt
+     dtype: string
+   - name: task_name
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1900000000
+     num_examples: 40000
+   download_size: 1900000000
+   dataset_size: 1900000000
+ ---
+
+ # Short-MetaWorld Dataset
+
+ ## Overview
+
+ Short-MetaWorld is a curated subset of Meta-World containing the **Multi-Task 10 (MT10)** and **Meta-Learning 10 (ML10)** tasks, with **100 successful trajectories per task** and **20 steps per trajectory**. It is designed for multi-task robot learning, imitation learning, and vision-language robotics research.
+
+ ## 🚀 Quick Start
+
+ ```python
+ from short_metaworld_loader import load_short_metaworld
+ from torch.utils.data import DataLoader
+
+ # Load the dataset
+ dataset = load_short_metaworld("./", image_size=224)
+
+ # Create a DataLoader
+ dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
+
+ # Get a sample
+ sample = dataset[0]
+ print(f"Image shape: {sample['image'].shape}")
+ print(f"State: {sample['state']}")
+ print(f"Action: {sample['action']}")
+ print(f"Task: {sample['task_name']}")
+ print(f"Prompt: {sample['prompt']}")
+ ```
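+
+ Batches from the `DataLoader` can be fed straight into a training loop. Below is a minimal behavior-cloning sketch; it assumes the loader returns PyTorch tensors with the field names shown above, and the small state-to-action policy network is purely illustrative:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Illustrative policy: map the 7-D state to a 4-D action (images/R3M features are ignored here)
+ policy = nn.Sequential(nn.Linear(7, 256), nn.ReLU(), nn.Linear(256, 4))
+ optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
+
+ for batch in dataloader:
+     states = batch["state"].float()    # (B, 7) robot states
+     actions = batch["action"].float()  # (B, 4) demonstrated actions
+     loss = nn.functional.mse_loss(policy(states), actions)
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+ ```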
+
+ ## 📁 Dataset Structure
+
+ ```
+ short-MetaWorld/
+ ├── README.txt                          # Original dataset documentation
+ ├── short-MetaWorld/
+ │   ├── img_only/                       # 224x224 RGB images
+ │   │   ├── button-press-topdown-v2/
+ │   │   │   ├── 0/                      # Trajectory 0
+ │   │   │   │   ├── 0.jpg               # Step 0
+ │   │   │   │   ├── 1.jpg               # Step 1
+ │   │   │   │   └── ...
+ │   │   │   ├── 1/                      # Trajectory 1
+ │   │   │   └── ...
+ │   │   ├── door-open-v2/
+ │   │   └── ...
+ │   └── r3m-processed/                  # R3M processed features
+ │       └── r3m_MT10_20/
+ │           ├── button-press-topdown-v2.pkl
+ │           ├── door-open-v2.pkl
+ │           └── ...
+ └── r3m-processed/                      # Additional R3M data
+     └── r3m_MT10_20/
+         ├── mt50_task_prompts.json      # Task descriptions & prompts
+         ├── short_metaworld_loader.py   # Dataset loader
+         └── requirements.txt
+ ```
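+
+ If you want to bypass the provided loader, the layout above can be read directly. A minimal sketch (paths follow the tree above; the internal structure of the `.pkl` files is an assumption, so inspect the loaded object before relying on specific keys):
+
+ ```python
+ import pickle
+ from pathlib import Path
+ from PIL import Image
+
+ root = Path("short-MetaWorld")
+
+ # Step 0 of trajectory 0 for one task
+ img = Image.open(root / "short-MetaWorld" / "img_only" / "door-open-v2" / "0" / "0.jpg")
+ print(img.size)  # expected (224, 224)
+
+ # R3M-processed features for the same task
+ pkl_path = root / "short-MetaWorld" / "r3m-processed" / "r3m_MT10_20" / "door-open-v2.pkl"
+ with open(pkl_path, "rb") as f:
+     data = pickle.load(f)
+ print(type(data))
+ ```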
+
+ ## 🎯 Tasks Included
+
+ ### Multi-Task 10 (MT10)
+ - `reach-v2` - Reach a target position
+ - `push-v2` - Push object to a target location
+ - `pick-place-v2` - Pick up object and place on target
+ - `door-open-v2` - Open door by pulling handle
+ - `drawer-open-v2` - Open drawer
+ - `drawer-close-v2` - Close drawer
+ - `button-press-topdown-v2` - Press button from above
+ - `peg-insert-side-v2` - Insert peg into hole
+ - `window-open-v2` - Slide window open
+ - `window-close-v2` - Slide window closed
+
+ ### Meta-Learning 10 (ML10)
+ Additional Meta-World tasks for meta-learning evaluation; the snippet below lists the task folders actually present in your download.
+
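+ To check which task folders are included in your copy of the dataset, you can list the image directory (a quick sketch; the path follows the structure section above):
+
+ ```python
+ from pathlib import Path
+
+ img_dir = Path("short-MetaWorld") / "short-MetaWorld" / "img_only"
+ tasks = sorted(p.name for p in img_dir.iterdir() if p.is_dir())
+ print(len(tasks), "tasks:", tasks)
+ ```
+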
+ ## 📊 Data Format
+
+ - **Images**: 224×224 RGB images in JPEG format
+ - **States**: 7-dimensional robot state vectors (joint positions)
+ - **Actions**: 4-dimensional continuous control actions
+ - **Prompts**: Natural language task descriptions in three styles (see the snippet after this list):
+   - `simple`: Brief task description
+   - `detailed`: Comprehensive task explanation
+   - `task_specific`: Context-specific variations
+ - **R3M Features**: Pre-processed visual representations from the R3M model
+
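+ The three prompt styles are stored in `mt50_task_prompts.json`. A minimal sketch for reading them directly (the exact nesting of the JSON is an assumption; inspect the file if the keys differ):
+
+ ```python
+ import json
+
+ with open("mt50_task_prompts.json") as f:
+     prompts = json.load(f)
+
+ # Assumed layout: task name -> {"simple": ..., "detailed": ..., "task_specific": ...}
+ entry = prompts.get("pick-place-v2", {})
+ for style in ("simple", "detailed", "task_specific"):
+     print(style, "->", entry.get(style))
+ ```
+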
+ ## 💾 Loading the Dataset
+
+ The dataset comes with a comprehensive loader (`short_metaworld_loader.py`):
+
+ ```python
+ from short_metaworld_loader import load_short_metaworld
+
+ # Load specific tasks (the MT10 task set)
+ mt10_tasks = [
+     "reach-v2", "push-v2", "pick-place-v2", "door-open-v2",
+     "drawer-open-v2", "drawer-close-v2", "button-press-topdown-v2",
+     "peg-insert-side-v2", "window-open-v2", "window-close-v2"
+ ]
+ dataset = load_short_metaworld("./", tasks=mt10_tasks)
+
+ # Load all available tasks
+ dataset = load_short_metaworld("./")
+
+ # Get dataset statistics
+ stats = dataset.get_dataset_stats()
+ print(f"Total steps: {stats['total_steps']}")
+ print(f"Tasks: {stats['tasks']}")
+
+ # Get task-specific prompts
+ task_info = dataset.get_task_info("pick-place-v2")
+ print(task_info['detailed'])  # Detailed task description
+ ```
+
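+ For experiments you will often want a held-out split. Assuming the loader returns a standard map-style PyTorch dataset (as the indexing above suggests), a simple random split works:
+
+ ```python
+ import torch
+ from torch.utils.data import random_split
+
+ n_val = len(dataset) // 10
+ train_set, val_set = random_split(
+     dataset,
+     [len(dataset) - n_val, n_val],
+     generator=torch.Generator().manual_seed(0),  # reproducible split
+ )
+ print(len(train_set), len(val_set))
+ ```
+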
+ ## 🔬 Research Applications
+
+ This dataset is designed for:
+
+ - **Multi-task Reinforcement Learning**: Train policies across multiple manipulation tasks
+ - **Imitation Learning**: Learn from demonstration trajectories
+ - **Vision-Language Robotics**: Connect visual observations with natural language instructions
+ - **Meta-Learning**: Adapt quickly to new manipulation tasks
+ - **Robot Policy Training**: End-to-end visuomotor control
+
+ ## 📈 Dataset Statistics
+
+ - **Total trajectories**: 2,000 (100 per task × 20 tasks)
+ - **Total steps**: 40,000 (20 steps per trajectory)
+ - **Image resolution**: 224×224 RGB
+ - **State dimension**: 7 (robot joint positions)
+ - **Action dimension**: 4 (continuous control)
+ - **Dataset size**: ~1.9GB
+
+ ## 🛠️ Installation
+
+ ```bash
+ pip install torch torchvision Pillow numpy
+ ```
+
+ ## 📖 Citation
+
+ If you use this dataset, please cite:
+
+ ```bibtex
+ @inproceedings{yu2020meta,
+   title={Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning},
+   author={Yu, Tianhe and Quillen, Deirdre and He, Zhanpeng and Julian, Ryan and Hausman, Karol and Finn, Chelsea and Levine, Sergey},
+   booktitle={Conference on Robot Learning},
+   pages={1094--1100},
+   year={2020},
+   organization={PMLR}
+ }
+
+ @inproceedings{nair2022r3m,
+   title={R3M: A Universal Visual Representation for Robot Manipulation},
+   author={Nair, Suraj and Rajeswaran, Aravind and Kumar, Vikash and Finn, Chelsea and Gupta, Abhinav},
+   booktitle={Conference on Robot Learning},
+   pages={892--902},
+   year={2023},
+   organization={PMLR}
+ }
+ ```
+
+ ## 📧 Contact
+
+ - Original dataset: [email protected]
+ - Questions about this upload: open an issue in the dataset repository
+
+ ## ⚖️ License
+
+ MIT License - see the LICENSE file for details.