Robotics
Safetensors
English
hungchiayu committed
Commit a3dc47c · verified · 1 Parent(s): 54733af

Push model using huggingface_hub.

Files changed (3)
  1. README.md +7 -71
  2. config.json +9 -0
  3. model.safetensors +3 -0
README.md CHANGED
@@ -1,74 +1,10 @@
  ---
- base_model:
- - declare-lab/nora-long
- datasets:
- - TomNickson/OpenX-Embodiment
- - jxu124/OpenX-Embodiment
- language:
- - en
- license: mit
- pipeline_tag: robotics
  ---

- # NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-based Preference Rewards
-
- [![Project Website](https://img.shields.io/badge/Project-Website-blue.svg)](https://declare-lab.github.io/nora-1.5)
- [![Model](https://img.shields.io/badge/Model-NORA--1.5-brightgreen)](https://huggingface.co/declare-lab/nora-1.5)
- [![arXiv](https://img.shields.io/badge/arXiv-2511.14659-b31b1b.svg)](https://arxiv.org/abs/2511.14659)
- [![Code](https://img.shields.io/badge/GitHub-Code-black.svg?logo=github)](https://github.com/declare-lab/nora-1.5)
- ![Status](https://img.shields.io/badge/Status-Active-orange)
-
- 🔥 Project NORA is supported by Gemini and Lambda Labs! We are thankful to them.
-
- NORA-1.5 is a **Vision-Language-Action (VLA)** model that improves generalization and real-world decision making through **post-training with world-model-based and action-based preference rewards**.
- The model builds upon the NORA foundation to achieve stronger **instruction following**, **closed-loop control**, and **real-robot success**, demonstrating reliability across **LIBERO** and **SimplerEnv** environments.
-
- <p align="center">
- <img src="https://declare-lab.github.io/assets/images/nora-1.5-arxiv-teaser.png" width="100%">
- </p>
-
- ---
-
- ## 🌐 Project Website
-
- 🔗 **https://declare-lab.github.io/nora-1.5**
-
- ---
-
- ## 🚀 Key Features
-
- **Vision-Language-Action architecture** with improved **task completion rate** and reduced **distraction rate**
- **Action-based preference optimization** using expert preference rewards
- **World-model-based preference learning** for improved planning and consistency
- Strong **closed-loop control**, enabling deployment in real robot settings
- Supports **multi-task**, **long-horizon**, and **few-shot generalization**
- Compatible with **LeRobot**, **LIBERO**, **SimplerEnv**, and custom environments
-
- ---
-
- ## 📆 TODO <a name="todos"></a> (~1 week)
- [ ] Release the inference code of NORA-1.5
- [ ] Release all relevant model checkpoints (pretrained, LIBERO, SimplerEnv, etc.)
- [ ] Release the training/fine-tuning code of NORA-1.5 with the LeRobot dataset
- [ ] Release the SimplerEnv evaluation code
-
- ## Minimal Inference Sample (will update)
- ```python
- from PIL import Image
- from inference.modelling_expert import VLAWithExpert
-
- model = VLAWithExpert()
- model.to('cuda')
-
- image = Image.open('frame.png')  # illustrative path to a camera frame
- instruction = 'pick up the red block'  # example language instruction
- # Returns 7-DoF actions, both normalized and unnormalized
- outputs = model.sample_actions(image, instruction, num_steps=10)
- ```
-
- ## Citation
-
- ```bibtex
- @article{hung2025nora15,
- title={NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-Based Preference Rewards},
- author={Hung, Chia-Yu and Majumder, Navonil and Deng, Haoyuan and Liu, Renhang and Ang, Yankang and Zadeh, Amir and Li, Chuan and Herremans, Dorien and Wang, Ziwei and Poria, Soujanya},
- journal={arXiv preprint arXiv:2511.14659},
- year={2025}
- }
- ```
 
  ---
+ tags:
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
  ---

+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Code: [More Information Needed]
+ - Paper: [More Information Needed]
+ - Docs: [More Information Needed]
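
Models pushed this way can typically be reloaded through the mixin's `from_pretrained` classmethod. Below is a minimal sketch, not part of this commit: it assumes `VLAWithExpert` from the inference sample above subclasses `PyTorchModelHubMixin`, and the import path and repo id are taken from the README and badges, so they may not match this exact checkpoint.

```python
# Sketch only: assumes VLAWithExpert subclasses PyTorchModelHubMixin,
# as the config.json + model.safetensors layout below suggests.
from inference.modelling_expert import VLAWithExpert  # path from the sample above

# from_pretrained (provided by the mixin) downloads config.json, passes its
# fields as constructor kwargs, and loads weights from model.safetensors.
model = VLAWithExpert.from_pretrained("declare-lab/nora-1.5")  # repo id is illustrative
model.to("cuda")
```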
config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "action_chunk_length": 5,
+   "action_dim": 7,
+   "fast_tokenizer_id": "physical-intelligence/fast",
+   "lm_expert_num_attention_head": 6,
+   "lm_expert_width_multiplier": 0.375,
+   "processor_id": "declare-lab/nora",
+   "vlm_model_id": "declare-lab/nora-long"
+ }
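
For reference, a short sketch reading the config added above. The values come from the diff; how each field is consumed by the model is an inference from the field names, not documented in this commit.

```python
# Sketch only: inspect the config.json added in this commit.
import json

with open("config.json") as f:
    cfg = json.load(f)

# Presumably each predicted chunk covers 5 timesteps of 7-DoF actions.
print(cfg["action_chunk_length"] * cfg["action_dim"])  # 35 values per chunk

# The action expert appears to be a narrower variant of the VLM backbone:
# 0.375x hidden width with 6 attention heads.
print(cfg["lm_expert_width_multiplier"], cfg["lm_expert_num_attention_head"])
```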
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c2aa44abe813e131d19d40591e97b78f084d1f716db253d203190cce68ae5ef
+ size 7973435350
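
The weights are stored via Git LFS: `oid` is the SHA-256 of the actual ~8 GB weights file and `size` its length in bytes. A small standard-library sketch for checking a downloaded copy against this pointer:

```python
# Sketch only: verify model.safetensors against the LFS pointer above.
import hashlib
import os

EXPECTED_OID = "8c2aa44abe813e131d19d40591e97b78f084d1f716db253d203190cce68ae5ef"
EXPECTED_SIZE = 7973435350  # bytes, from the pointer

path = "model.safetensors"
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_OID, "hash mismatch"
print("model.safetensors matches the LFS pointer")
```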