Open X-Embodiment: Robotic Learning Datasets and RT-X Models Paper • 2310.08864 • Published Oct 13, 2023 • 2
Vision-Based Manipulators Need to Also See from Their Hands Paper • 2203.12677 • Published Mar 15, 2022 • 1
NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis Paper • 2301.08556 • Published Jan 18, 2023
Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success Paper • 2502.19645 • Published Feb 27, 2025 • 1
FAST: Efficient Action Tokenization for Vision-Language-Action Models Paper • 2501.09747 • Published Jan 16, 2025 • 27
openvla/openvla-7b-finetuned-libero-spatial Image-Text-to-Text • 8B • Updated Oct 9, 2024 • 5.38k • 4 (loading sketch after this list)
OpenVLA: An Open-Source Vision-Language-Action Model Paper • 2406.09246 • Published Jun 13, 2024 • 41
Eliciting Compatible Demonstrations for Multi-Human Imitation Learning Paper • 2210.08073 • Published Oct 14, 2022
DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset Paper • 2403.12945 • Published Mar 19, 2024
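The openvla/openvla-7b-finetuned-libero-spatial checkpoint above is a LIBERO-Spatial fine-tune of the OpenVLA-7B model (paper 2406.09246). Below is a minimal loading sketch following the usage pattern from the OpenVLA model card; the dummy image, the example instruction, and the `unnorm_key="libero_spatial"` value are assumptions to verify against the checkpoint's stored normalization statistics.

```python
# Sketch: query the openvla/openvla-7b-finetuned-libero-spatial checkpoint for
# a single action, following the pattern from the OpenVLA model card.
# Assumptions: a CUDA device is available, and "libero_spatial" is a valid
# unnorm_key for this fine-tune (check vla.norm_stats.keys() to confirm).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "openvla/openvla-7b-finetuned-libero-spatial"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# Stand-in observation; on a real robot this frame comes from the camera.
image = Image.new("RGB", (256, 256))
instruction = "pick up the black bowl and place it on the plate"
prompt = f"In: What action should the robot take to {instruction}?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)

# predict_action returns a 7-DoF end-effector action (delta position,
# delta orientation, gripper), un-normalized with the dataset statistics
# selected by unnorm_key.
action = vla.predict_action(**inputs, unnorm_key="libero_spatial", do_sample=False)
print(action)
```

In a control loop, this call runs once per timestep on the latest camera frame, and the returned action is passed to the robot controller.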