VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
Paper: arXiv:2509.09372
VLA-Adapter-Pro models (the underlying architecture is the same as VLA-Adapter, but the implementation has been improved, resulting in significantly better performance).
Inference logs for the Pro models:
- LIBERO-Spatial-Pro: https://huggingface.co/VLA-Adapter/LIBERO-Spatial-Pro/blob/main/Inference--Spatial_Pro--99.6.log
- LIBERO-Long-Pro: https://huggingface.co/VLA-Adapter/LIBERO-Long-Pro/blob/main/Inference--Long_Pro--96.4.log
- CALVIN-ABC-Pro: https://huggingface.co/VLA-Adapter/CALVIN-ABC-Pro/blob/main/Inference--CALVIN_Pro--4.50.log
- LIBERO-Object-Pro: https://huggingface.co/VLA-Adapter/LIBERO-Object-Pro/blob/main/Inference--Object_Pro--99.6.log
- LIBERO-Goal-Pro: https://huggingface.co/VLA-Adapter/LIBERO-Goal-Pro/blob/main/Inference--Goal_Pro--98.2.log
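
As a minimal sketch of how one of these logs could be fetched for inspection (assuming the standard `huggingface_hub` client; the `repo_id` and `filename` below are read directly off the LIBERO-Spatial-Pro URL above, and the other logs follow the same pattern):

```python
# Minimal sketch: download one of the listed inference logs from the Hub.
# repo_id and filename come from the LIBERO-Spatial-Pro link above.
from huggingface_hub import hf_hub_download

log_path = hf_hub_download(
    repo_id="VLA-Adapter/LIBERO-Spatial-Pro",
    filename="Inference--Spatial_Pro--99.6.log",
)

# Preview the first 500 characters of the downloaded log.
with open(log_path, encoding="utf-8") as f:
    print(f.read()[:500])
```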