---
license: cc-by-4.0
task_categories:
  - robotics
tags:
  - robotics
---

Dataset Description:

This dataset is a multimodal collection of trajectories generated in Isaac Lab. It supports humanoid (GR1) tabletop manipulation tasks in industrial settings. Each dataset entry provides the full context (state, vision, language, action) needed to train and evaluate generalist robot policies for tasks like pouring nuts or sorting pipes by color.

| Dataset Name              | # Trajectories |
|---------------------------|----------------|
| Exhaust-Pipe-Sorting-task | 1000           |
| Nut-Pouring-task          | 1000           |
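
For convenience, the sketch below shows one way to fetch the dataset files locally with the Hugging Face Hub client. The `repo_id` is a placeholder, not the actual repository id of this dataset.

```python
# Minimal download sketch using huggingface_hub.
# The repo_id below is a placeholder (assumption); replace it with this
# dataset's actual Hugging Face repository id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/<this-dataset>",  # placeholder repo id
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)
```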

This dataset is ideal for behavior cloning, policy learning, and generalist robotic manipulation research. It has been used for post-training the GR00T N1 model.

This dataset is ready for commercial use.

Dataset Owner:

NVIDIA Corporation

Dataset Creation Date:

05/09/2025

License/Terms of Use:

This dataset is governed by the Creative Commons Attribution 4.0 International License (CC-BY-4.0).

Intended Usage:

This dataset is intended for:

  • Training robot manipulation policies using behavior cloning (see the sketch after this list).
  • Research in generalist robotics and task-conditioned agents.
  • Sim-to-real / Sim-to-Sim transfer studies.
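
As a rough illustration of the behavior-cloning use case, the sketch below fits a small state-to-action regressor on the 26-DoF joint signals described under Dataset Format. It ignores the vision and language modalities, and the network size, optimizer, and learning rate are illustrative assumptions rather than a prescribed recipe.

```python
# Minimal behavior-cloning sketch (assumption: states/actions are already
# loaded as (N, 26) float tensors from observation.state / action).
# Vision and language conditioning are omitted for brevity.
import torch
import torch.nn as nn

def make_policy(state_dim: int = 26, action_dim: int = 26) -> nn.Module:
    # Small MLP mapping joint positions to desired joint positions.
    return nn.Sequential(
        nn.Linear(state_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, action_dim),
    )

def bc_update(policy, optimizer, states, actions):
    # One behavior-cloning step: regress recorded actions from states (MSE).
    loss = nn.functional.mse_loss(policy(states), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

policy = make_policy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
# states, actions = ...  # (N, 26) tensors built from the dataset
```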

Dataset Characterization:

Data Collection Method

  • Automated
  • Automatic/Sensors
  • Synthetic

Five human-teleoperated demonstrations are collected through Apple Vision Pro in Isaac Lab. From these, the 1,000 demonstrations for each task are generated automatically using MimicGen [1], a synthetic motion trajectory generation framework. Each demo is generated at 20 Hz.

Labeling Method

Not Applicable

Dataset Format:

We provide the 1,000 MimicGen-generated demonstrations per task as HDF5 dataset files, along with GR00T-LeRobot-formatted datasets converted from the HDF5 files (see the loading sketch after the modality lists below). Each demo in the GR00T-LeRobot datasets consists of a time-indexed sequence of the following modalities:

Actions

  • action (FP64): joint desired positions for all body joints (26 DoF)

Observations

  • observation.state (FP64): joint positions for all body joints (26 DoF)

Task-specific

  • timestamp (FP64): simulation time, in seconds, of each recorded data entry.
  • annotation.human.action.task_description (INT64): index referring to the language instruction recorded in the metadata.
  • annotation.human.action.valid (INT64): index indicating the validity of the annotation recorded in the metadata.
  • episode_index (INT64): index indicating the order of each demo.
  • task_index (INT64): index used by the multi-task data loader; not applicable to GR00T N1 post-training and always set to 0.

Videos

  • 256 x 256 RGB videos in MP4 format from a first-person-view camera
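
The sketch below shows one way to inspect a single GR00T-LeRobot episode with pandas. It assumes the usual LeRobot chunked file layout (`data/chunk-000/episode_000000.parquet`); the actual paths in this release may differ.

```python
# Minimal sketch: inspecting one GR00T-LeRobot episode with pandas.
# Assumption: LeRobot-style chunked layout; adjust the path if it differs.
import pandas as pd

episode = pd.read_parquet("data/chunk-000/episode_000000.parquet")

# Columns documented above: action, observation.state, timestamp,
# annotation.human.action.task_description, annotation.human.action.valid,
# episode_index, task_index.
print(episode.columns.tolist())
print("steps:", len(episode))

# The 20 Hz sampling rate implies ~0.05 s between consecutive timestamps.
print("mean timestep (s):", episode["timestamp"].diff().dropna().mean())

# Each row stores 26-DoF vectors for the state and the desired-position action.
print("state dim:", len(episode["observation.state"].iloc[0]))
print("action dim:", len(episode["action"].iloc[0]))
```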

In addition, a set of metadata files describing the following is provided:

  • episodes.jsonl contains a list of all the episodes in the entire dataset. Each episode contains a list of tasks and the length of the episode.
  • tasks.jsonl contains a list of all the tasks in the entire dataset.
  • modality.json contains the modality configuration.
  • info.json contains the dataset information.
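
A minimal sketch for reading these metadata files is shown below. It assumes they sit under a `meta/` directory, as is common for LeRobot-style datasets; adjust the paths if the layout differs.

```python
# Minimal sketch: reading the dataset metadata files listed above.
# Assumption: files live under meta/; adjust paths to the actual layout.
import json

with open("meta/info.json") as f:
    info = json.load(f)           # dataset information
with open("meta/modality.json") as f:
    modality = json.load(f)       # modality configuration

# episodes.jsonl and tasks.jsonl are JSON Lines: one JSON object per line.
with open("meta/episodes.jsonl") as f:
    episodes = [json.loads(line) for line in f if line.strip()]
with open("meta/tasks.jsonl") as f:
    tasks = [json.loads(line) for line in f if line.strip()]

print("episodes:", len(episodes), "tasks:", len(tasks))
```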

Dataset Quantification:

Record Count

Exhaust Pipe Sorting Task

  • Number of demonstrations/trajectories: 1000
  • Number of RGB videos: 1000

Nut Pouring Task

  • Number of demonstrations/trajectories: 1000
  • Number of RGB videos: 1000

Total Storage

26.5 GB

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Reference(s):

[1] Mandlekar, A., Nasiriany, S., Wen, B., Akinola, I., Narang, Y., Fan, L., Zhu, Y., and Fox, D. "MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations." In 7th Annual Conference on Robot Learning (CoRL), 2023.