
SpatialVLA Merged Model

This model was produced by merging the following LoRA adapter into its base model:

  • Base Model: /remote-home/share/chenglong/Workplace/SpatialVLA/ckpts_pretrained/spatialvla-4b-224-pt
  • LoRA Adapter: /remote-home/share/chenglong/Workplace/SpatialVLA/outputs/spatialvla_4b_finetune/2025-10-27/09-34-46_glasses_sigma12_dataset_spatialvla-4b-224-pt_lr5e-6_bs16_node1_gpu2_r32_a32_ep10_linear+emb+h/checkpoint-30000

Merge Details

  • LoRA Rank (r): 32
  • LoRA Alpha: 32
  • Target Modules: fc1, position_embedding_head.3, v_proj, position_embedding_head.0, q_proj, gate_proj, o_proj, spatial_embed_tokens, down_proj, k_proj, lm_head, linear, fc2, out_proj, up_proj (see the LoraConfig sketch after this list)
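
For reference, these hyperparameters map onto a PEFT LoraConfig roughly as follows. This is a reconstruction from the values listed above, not the exact training configuration; settings such as dropout, bias, and task_type are not recorded in this card.

from peft import LoraConfig

# Hypothetical reconstruction of the fine-tuning LoRA setup from the
# hyperparameters listed above; the actual training script may differ in details.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "fc1", "fc2", "out_proj", "linear",
        "position_embedding_head.0", "position_embedding_head.3",
        "spatial_embed_tokens", "lm_head",
    ],
)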

Usage

This merged model can be used directly without PEFT:

import torch
from transformers import AutoModel, AutoProcessor

model_path = "/remote-home/share/chenglong/Workplace/SpatialVLA/ckpts_merged/glassvla-4b-sam2-lora-percent10-30k-sigma-12"
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
# Load the merged model in bfloat16 and move it to the GPU
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
).eval().cuda()

# Use the model for inference
# ... your inference code here ...
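
For example, following the usage shown in the upstream SpatialVLA model card, inference looks roughly like the sketch below. The predict_action and decode_actions calls come from the model's remote code; the image path, prompt, and unnorm_key are placeholders to replace with your own data and normalization statistics.

from PIL import Image

# Example inference; the image path, prompt, and unnorm_key are illustrative placeholders.
image = Image.open("example.png").convert("RGB")
prompt = "What action should the robot take to pick up the glasses?"

inputs = processor(images=[image], text=prompt, return_tensors="pt")
generation_outputs = model.predict_action(inputs)
actions = processor.decode_actions(generation_outputs, unnorm_key="bridge_orig/1.0.0")
print(actions)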

Notes

  • This is a fully merged model, so the LoRA adapter is no longer needed.
  • The model can be used just like the original base model.
  • All weights have been merged into a single set of parameters and saved in safetensors format (about 4B parameters, bfloat16); a sketch of how such a merge is typically produced is shown below.
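
A merge like this is typically produced with PEFT's merge_and_unload. The sketch below is a minimal example under that assumption, not necessarily the exact script used for this checkpoint; the paths are placeholders for the base model and adapter listed above.

import torch
from peft import PeftModel
from transformers import AutoModel

# Placeholder paths; substitute the base model and LoRA checkpoint from this card.
base_path = "/path/to/spatialvla-4b-224-pt"
adapter_path = "/path/to/checkpoint-30000"

# Load the base model, attach the LoRA adapter, fold the adapter weights
# into the base weights, and save the standalone merged model.
base = AutoModel.from_pretrained(base_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()
merged.save_pretrained("/path/to/merged_output")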