
Models

Get all available VLA model types that can be fine-tuned.

from qualia import Qualia

client = Qualia()
models = client.models.list()

Response

VLAModel[]
[
{
"id": "smolvla",
"name": "SmolVLA",
"description": "Lightweight VLA model for efficient training",
"base_model_id": "lerobot/smolvla_base",
"camera_slots": ["cam_1", "cam_2", "cam_3"],
"supports_custom_model": true
},
{
"id": "pi0",
"name": "Pi0",
"description": "Physical Intelligence Pi0 model",
"base_model_id": "lerobot/pi0_base",
"camera_slots": ["cam_1", "cam_2", "cam_3"],
"supports_custom_model": true
},
{
"id": "pi05",
"name": "Pi0.5",
"description": "Physical Intelligence Pi0.5 model",
"base_model_id": "lerobot/pi05_base",
"camera_slots": ["cam_1", "cam_2", "cam_3"],
"supports_custom_model": true
},
{
"id": "act",
"name": "ACT",
"description": "Action Chunking Transformer model",
"base_model_id": null,
"camera_slots": ["cam_1", "cam_2", "cam_3"],
"supports_custom_model": false
},
{
"id": "gr00t_n1_5",
"name": "GR00T N1.5",
"description": "NVIDIA GR00T N1.5 foundation model",
"base_model_id": null,
"camera_slots": ["cam_1", "cam_2", "cam_3"],
"supports_custom_model": false
},
{
"id": "sarm",
"name": "SARM",
"description": "SARM reward model for Reward-Aware Behavior Cloning",
"base_model_id": null,
"camera_slots": ["cam_1"],
"supports_custom_model": false
}
]
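As a minimal sketch, the returned list can be indexed by `id` for convenient lookups. The list literal below reproduces a subset of the response shown above; it stands in for the live `client.models.list()` call.

```python
# Subset of the documented response from client.models.list()
models = [
    {"id": "smolvla", "base_model_id": "lerobot/smolvla_base",
     "camera_slots": ["cam_1", "cam_2", "cam_3"], "supports_custom_model": True},
    {"id": "act", "base_model_id": None,
     "camera_slots": ["cam_1", "cam_2", "cam_3"], "supports_custom_model": False},
    {"id": "sarm", "base_model_id": None,
     "camera_slots": ["cam_1"], "supports_custom_model": False},
]

# Index models by their id for direct metadata lookups
by_id = {m["id"]: m for m in models}

print(by_id["smolvla"]["base_model_id"])  # → lerobot/smolvla_base
print(by_id["sarm"]["camera_slots"])      # → ['cam_1']
```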
| Type       | Description                         | Custom Model | RA-BC Support |
|------------|-------------------------------------|--------------|---------------|
| smolvla    | Lightweight VLA model               | Yes          | Yes           |
| pi0        | Physical Intelligence Pi0 model     | Yes          | Yes           |
| pi05       | Physical Intelligence Pi0.5 model   | Yes          | Yes           |
| act        | Action Chunking Transformer model   | No           | No            |
| gr00t_n1_5 | NVIDIA GR00T N1.5 foundation model  | No           | No            |
| sarm       | SARM reward model                   | No           | No            |

The camera_slots field indicates which camera inputs the model expects. Use these slot names as keys in camera_mappings when creating finetune jobs.
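A sketch of building camera_mappings from a model's camera_slots. The dataset camera names (`top`, `wrist_left`, `wrist_right`) and the finetune-creation call in the trailing comment are assumptions for illustration, not part of the documented API.

```python
# camera_slots comes from the documented model object; the dataset camera
# names below are hypothetical examples of views recorded in a dataset.
model = {"id": "smolvla", "camera_slots": ["cam_1", "cam_2", "cam_3"]}
dataset_cameras = ["top", "wrist_left", "wrist_right"]

# Keys must be the model's slot names; values name the dataset's cameras.
camera_mappings = dict(zip(model["camera_slots"], dataset_cameras))
print(camera_mappings)
# → {'cam_1': 'top', 'cam_2': 'wrist_left', 'cam_3': 'wrist_right'}

# Hypothetical finetune-job creation (exact method and parameters may differ):
# client.finetune.create(model_type=model["id"], camera_mappings=camera_mappings)
```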

Models with supports_custom_model: false use a fixed base model and do not accept a model_id parameter.
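One way to respect this rule is to include model_id only when the model allows it. The helper below is a sketch; its parameter names mirror the prose above but the request shape is an assumption.

```python
def build_job_params(model, model_id=None):
    """Include model_id only for models that accept a custom base model.

    `model` is a model object from client.models.list(); the returned
    dict is a hypothetical request payload for a finetune job.
    """
    params = {"model_type": model["id"]}
    if model["supports_custom_model"] and model_id is not None:
        params["model_id"] = model_id
    return params

act = {"id": "act", "supports_custom_model": False}
smolvla = {"id": "smolvla", "supports_custom_model": True}

print(build_job_params(act, "my-org/custom-act"))
# → {'model_type': 'act'}  (model_id silently dropped)
print(build_job_params(smolvla, "my-org/custom-smolvla"))
# → {'model_type': 'smolvla', 'model_id': 'my-org/custom-smolvla'}
```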