---
license: mit
task_categories:
  - robotics
  - reinforcement-learning
tags:
  - metaworld
  - robotics
  - manipulation
  - multi-task
  - r3m
  - vision-language
  - imitation
size_categories:
  - 10K<n<100K
language:
  - en
pretty_name: Short-MetaWorld Dataset
dataset_info:
  features:
    - name: image
      dtype: image
    - name: state
      dtype:
        sequence: float32
    - name: action
      dtype:
        sequence: float32
    - name: prompt
      dtype: string
    - name: task_name
      dtype: string
  splits:
    - name: train
      num_bytes: 1900000000
      num_examples: 40000
  download_size: 1900000000
  dataset_size: 1900000000
---

# Short-MetaWorld Dataset

## Overview

Short-MetaWorld is a curated subset of the Meta-World benchmark covering the Multi-Task 10 (MT10) and Meta-Learning 10 (ML10) task suites, with 100 successful trajectories per task and 20 steps per trajectory. It is designed for multi-task robot learning, imitation learning, and vision-language robotics research.

## 🚀 Quick Start

```python
from short_metaworld_loader import load_short_metaworld
from torch.utils.data import DataLoader

# Load the dataset
dataset = load_short_metaworld("./", image_size=224)

# Create a DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Get a sample
sample = dataset[0]
print(f"Image shape: {sample['image'].shape}")
print(f"State: {sample['state']}")
print(f"Action: {sample['action']}")
print(f"Task: {sample['task_name']}")
print(f"Prompt: {sample['prompt']}")
```

πŸ“ Dataset Structure

```
short-MetaWorld/
├── README.txt                      # Original dataset documentation
├── short-MetaWorld/
│   ├── img_only/                   # 224x224 RGB images
│   │   ├── button-press-topdown-v2/
│   │   │   ├── 0/                  # Trajectory 0
│   │   │   │   ├── 0.jpg           # Step 0
│   │   │   │   ├── 1.jpg           # Step 1
│   │   │   │   └── ...
│   │   │   ├── 1/                  # Trajectory 1
│   │   │   └── ...
│   │   ├── door-open-v2/
│   │   └── ...
│   └── r3m-processed/              # R3M-processed features
│       └── r3m_MT10_20/
│           ├── button-press-topdown-v2.pkl
│           ├── door-open-v2.pkl
│           └── ...
├── r3m-processed/                  # Additional R3M data
│   └── r3m_MT10_20/
├── mt50_task_prompts.json          # Task descriptions & prompts
├── short_metaworld_loader.py       # Dataset loader
└── requirements.txt
```
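
If you prefer to bypass the loader, the layout above can be read directly. A minimal sketch using Pillow and `pickle`; paths follow the tree above relative to the repository root, and the internal structure of the `.pkl` files is not documented here, so inspect it rather than assuming specific keys:

```python
import pickle
from pathlib import Path

from PIL import Image

root = Path("short-MetaWorld")  # nested data folder from the tree above

# Step 0 of trajectory 0 for one task (224x224 RGB JPEG).
img = Image.open(root / "img_only" / "button-press-topdown-v2" / "0" / "0.jpg")
print(img.size)  # (224, 224)

# R3M-processed features for the same task; inspect before relying on keys.
pkl_path = root / "r3m-processed" / "r3m_MT10_20" / "button-press-topdown-v2.pkl"
with open(pkl_path, "rb") as f:
    features = pickle.load(f)
print(type(features))
```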

## 🎯 Tasks Included

### Multi-Task 10 (MT10)

Representative tasks include:

- `button-press-topdown-v2` - Press a button from above
- `door-open-v2` - Open a door by pulling the handle
- `drawer-close-v2` - Close a drawer
- `drawer-open-v2` - Open a drawer
- `peg-insert-side-v2` - Insert a peg into a hole from the side
- `pick-place-v2` - Pick up an object and place it on a target

### Meta-Learning 10 (ML10)

Additional tasks for meta-learning evaluation.

## 📊 Data Format

- Images: 224×224 RGB images in JPEG format
- States: 7-dimensional robot state vectors (joint positions)
- Actions: 4-dimensional continuous control actions
- Prompts: Natural language task descriptions in 3 styles:
  - `simple`: Brief task description
  - `detailed`: Comprehensive task explanation
  - `task_specific`: Context-specific variations
- R3M Features: Pre-processed visual representations using the R3M model
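
A quick sanity check of these dimensions and prompt styles on a loaded sample; this sketch assumes the `get_task_info` keys match the style names listed above:

```python
sample = dataset[0]

# 7-D state and 4-D action, per the format described above.
assert len(sample["state"]) == 7
assert len(sample["action"]) == 4

# Three prompt styles per task.
info = dataset.get_task_info(sample["task_name"])
for style in ("simple", "detailed", "task_specific"):
    print(f"{style}: {info[style]}")
```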

## 💾 Loading the Dataset

The dataset comes with a comprehensive loader (`short_metaworld_loader.py`):

```python
# Load specific tasks
mt10_tasks = [
    "reach-v2", "push-v2", "pick-place-v2", "door-open-v2",
    "drawer-open-v2", "drawer-close-v2", "button-press-topdown-v2",
    "button-press-v2", "button-press-wall-v2", "button-press-topdown-wall-v2"
]
dataset = load_short_metaworld("./", tasks=mt10_tasks)

# Load all available tasks
dataset = load_short_metaworld("./")

# Get dataset statistics
stats = dataset.get_dataset_stats()
print(f"Total steps: {stats['total_steps']}")
print(f"Tasks: {stats['tasks']}")

# Get task-specific prompts
task_info = dataset.get_task_info("pick-place-v2")
print(task_info['detailed'])  # Detailed task description
```
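
Since the loader returns a standard PyTorch-style `Dataset`, the usual utilities apply; for example, a held-out validation split (the 90/10 ratio below is only illustrative):

```python
import torch
from torch.utils.data import DataLoader, random_split

dataset = load_short_metaworld("./", image_size=224)

# Illustrative 90/10 train/validation split.
n_val = len(dataset) // 10
train_set, val_set = random_split(
    dataset,
    [len(dataset) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),
)

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
```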

## 🔬 Research Applications

This dataset is designed for:

- Multi-task Reinforcement Learning: Train policies across multiple manipulation tasks
- Imitation Learning: Learn from demonstration trajectories (see the behavior-cloning sketch below)
- Vision-Language Robotics: Connect visual observations with natural language instructions
- Meta-Learning: Adapt quickly to new manipulation tasks
- Robot Policy Training: End-to-end visuomotor control
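
As referenced in the imitation-learning item above, here is a minimal behavior-cloning sketch on top of the loader. The convolutional policy, loss, and hyperparameters are illustrative assumptions, not part of the dataset or its loader:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class BCPolicy(nn.Module):
    """Illustrative policy: 224x224 RGB image + 7-D state -> 4-D action."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=4), nn.ReLU(),
            nn.Flatten(),
        )
        # 64 x 13 x 13 conv features for a 224x224 input, plus the 7-D state.
        self.head = nn.Sequential(
            nn.Linear(64 * 13 * 13 + 7, 256), nn.ReLU(), nn.Linear(256, 4)
        )

    def forward(self, image, state):
        features = self.encoder(image)
        return self.head(torch.cat([features, state], dim=-1))

policy = BCPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# `dataset` as returned by load_short_metaworld above; assumes float image
# tensors shaped [B, 3, 224, 224] after collation.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in loader:  # one epoch of behavior cloning
    pred = policy(batch["image"].float(), batch["state"].float())
    loss = nn.functional.mse_loss(pred, batch["action"].float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```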

## 📈 Dataset Statistics

- Total trajectories: 2,000 (100 per task × 20 tasks)
- Total steps: ~40,000 (20 steps per trajectory)
- Image resolution: 224×224 RGB
- State dimension: 7 (robot joint positions)
- Action dimension: 4 (continuous control)
- Dataset size: ~1.9 GB

## 🛠️ Installation

```bash
pip install torch torchvision Pillow numpy
```

## 📖 Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{yu2020meta,
  title={Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning},
  author={Yu, Tianhe and Quillen, Deirdre and He, Zhanpeng and Julian, Ryan and Hausman, Karol and Finn, Chelsea and Levine, Sergey},
  booktitle={Conference on Robot Learning},
  pages={1094--1100},
  year={2020},
  organization={PMLR}
}

@inproceedings{nair2022r3m,
  title={R3M: A Universal Visual Representation for Robot Manipulation},
  author={Nair, Suraj and Rajeswaran, Aravind and Kumar, Vikash and Finn, Chelsea and Gupta, Abhinav},
  booktitle={Conference on Robot Learning},
  pages={892--902},
  year={2023},
  organization={PMLR}
}
```

## 📧 Contact

- Original dataset: [email protected]
- Questions about this upload: open an issue in the dataset repository

## ⚖️ License

MIT License - See LICENSE file for details.