StaDy4D: Static-Dynamic 4D Multi-View Scene Dataset
Multi-view 4D scene dataset from CARLA simulator for 3D/4D reconstruction research. Each scene is captured with paired dynamic/static renders — identical camera trajectories with and without actors.
TL;DR
- 864 scenes, 15,552 videos (432 short + 432 mid, each with 9 cameras x 2 renders)
- 8 towns, 4 weather presets (cycled over 6 scene slots), paired static/dynamic renders from identical camera trajectories
- Per video: RGB (lossless MP4), depth map, camera extrinsics & intrinsics (safetensors)
- Ground-truth: depth in meters, camera-to-world matrices, 3D actor trajectories with bounding boxes
Quick Start
```python
from huggingface_hub import snapshot_download
from safetensors.numpy import load_file
import torchvision, json

# Download one scene (~50MB)
path = snapshot_download(
    "henry000/StaDy4D", repo_type="dataset",
    allow_patterns="short/train/scene_T01_000/**",
)
scene = f"{path}/short/train/scene_T01_000"
cam = f"{scene}/dynamic/cam_00_car_forward"

meta = json.load(open(f"{scene}/metadata.json"))
rgb = torchvision.io.read_video(f"{cam}/rgb.mp4")[0]      # [N, H, W, 3] uint8
depth = load_file(f"{cam}/depth.safetensors")["depth"]    # [N, H, W] float16
c2w = load_file(f"{cam}/extrinsics.safetensors")["c2w"]   # [N, 4, 4] float32
K = load_file(f"{cam}/intrinsics.safetensors")["K"]       # [N, 3, 3] float32
```
Browse available scenes:
```python
import pandas as pd
from huggingface_hub import snapshot_download

# Download just the index (~15KB)
path = snapshot_download(
    "henry000/StaDy4D", repo_type="dataset",
    allow_patterns="short/train/index.parquet",
)
df = pd.read_parquet(f"{path}/short/train/index.parquet")
print(df[df.town == "T01"])  # filter by town, weather, n_actors, etc.
```
Structure
```
StaDy4D/
├── index.parquet
├── short/                      # 5s @ 10fps = 50 frames
│   ├── train/                  # Town01, 02, 04, 05, 06
│   │   └── scene_T01_000/
│   │       ├── metadata.json
│   │       ├── actors.json
│   │       ├── dynamic/
│   │       │   └── cam_00_car_forward/
│   │       │       ├── rgb.mp4                # lossless H.264 (CRF 0, BGR24)
│   │       │       ├── depth.safetensors
│   │       │       ├── extrinsics.safetensors
│   │       │       └── intrinsics.safetensors
│   │       └── static/
│   │           └── (same cameras, empty world)
│   └── test/                   # Town03, 07, 10
└── mid/                        # 10s @ 10fps = 100 frames
    ├── train/
    └── test/
```
Splits
| Split | Towns | Scenes (per duration) |
|---|---|---|
| Train | Town01, 02, 04, 05, 06 | 270 |
| Test | Town03, 07, 10 | 162 |
Camera Types
| Camera | Type | Description |
|---|---|---|
| `cam_00-02_car_forward` | Attached | Vehicle roof-mounted, autopilot driving |
| `cam_03_drone_forward` | Free | Aerial, 10-20m altitude |
| `cam_04_orbit_building` | Free | Rooftop level, 120-degree pan |
| `cam_05_orbit_crossroad` | Free | Street level, intersection pan |
| `cam_06_cctv` | Free | Fixed surveillance, 30-40m |
| `cam_07-08_pedestrian` | Attached | Eye-level, AI-controlled walking |
File Formats
| File | Key | Shape | Dtype |
|---|---|---|---|
| `rgb.mp4` | - | `[N, H, W, 3]` | uint8 BGR, lossless H.264 (CRF 0) |
| `depth.safetensors` | `depth` | `[N, H, W]` | float16 (meters) |
| `extrinsics.safetensors` | `c2w` | `[N, 4, 4]` | float32 (camera-to-world) |
| `intrinsics.safetensors` | `K` | `[N, 3, 3]` | float32 |
Resolution: 640x360, FOV: 70 degrees.
Dynamic vs Static
Each scene has paired renders:
- dynamic/: 80 vehicles + 50 pedestrians with AI behavior
- static/: identical camera trajectories, empty world (no actors)
This pairing enables research on scene decomposition, dynamic object removal, and static background reconstruction.
Detailed Specifications
Dataset Overview
StaDy4D provides 864 multi-camera scenes (432 short + 432 mid-length) captured across 8 diverse urban environments under 4 weather presets. Each scene contains 9 synchronized cameras (attached vehicle/pedestrian cameras and free-floating cameras) recording both dynamic (with traffic actors) and static (empty world) versions from identical camera trajectories.
Key Properties
| Property | Value |
|---|---|
| Total scenes | 864 (432 short + 432 mid) |
| Scenes per town | 54 |
| Towns | 8 (Town01, 02, 03, 04, 05, 06, 07, 10HD) |
| Weather conditions | 4 presets over a 6-scene cycle (ClearNoon x3, HardRainNoon, MidRainSunset, WetCloudyNoon) |
| Cameras per scene | 9 (3 car + 1 drone + 2 orbit + 1 CCTV + 2 pedestrian) |
| Render types per scene | 2 (dynamic with actors, static without) |
| Frames per scene (short) | 50 (5 seconds @ 10 FPS) |
| Frames per scene (mid) | 100 (10 seconds @ 10 FPS) |
| Image resolution | 640 x 360 pixels |
| Field of view | 70 degrees |
| Dynamic actors per scene | ~130 (80 vehicles + 50 pedestrians) |
| Train/test split | By town: train = {T01, T02, T04, T05, T06}, test = {T03, T07, T10} |
| Total RGB frames | ~1.17M (864 scenes x 9 cameras x 2 renders x [50 or 100] frames) |
Dataset Configurations
| Config | Duration | FPS | Frames/scene | Train scenes | Test scenes |
|---|---|---|---|---|---|
| `short` | 5 seconds | 10 | 50 | 270 | 162 |
| `mid` | 10 seconds | 10 | 100 | 270 | 162 |
Full Directory Structure
```
StaDy4D/
├── index.parquet                      # Global scene index (see schema below)
├── short/                             # 5-second scenes (50 frames)
│   ├── train/                         # 270 scenes from 5 towns
│   │   ├── scene_T01_000/             # Scene: Town01, index 000
│   │   │   ├── metadata.json          # Scene-level metadata (see schema below)
│   │   │   ├── actors.json            # Per-actor 3D trajectories (see schema below)
│   │   │   ├── dynamic/               # Render with traffic actors (80 vehicles + 50 walkers)
│   │   │   │   ├── cam_00_car_forward/          # Camera 0: vehicle roof-mounted, autopilot
│   │   │   │   │   ├── rgb.mp4                  # Lossless H.264 (CRF 0, BGR24), 50 frames
│   │   │   │   │   ├── depth.safetensors        # key="depth", float16 [50, 360, 640] meters
│   │   │   │   │   ├── extrinsics.safetensors   # key="c2w", float32 [50, 4, 4]
│   │   │   │   │   └── intrinsics.safetensors   # key="K", float32 [50, 3, 3]
│   │   │   │   ├── cam_01_car_forward/
│   │   │   │   ├── cam_02_car_forward/
│   │   │   │   ├── cam_03_drone_forward/
│   │   │   │   ├── cam_04_orbit_building/
│   │   │   │   ├── cam_05_orbit_crossroad/
│   │   │   │   ├── cam_06_cctv/
│   │   │   │   ├── cam_07_pedestrian/
│   │   │   │   └── cam_08_pedestrian/
│   │   │   └── static/                # Empty world, same camera trajectories
│   │   │       ├── cam_00_car_forward/
│   │   │       ├── ...
│   │   │       └── cam_08_pedestrian/
│   │   ├── scene_T01_001/
│   │   ├── ...
│   │   └── scene_T06_053/
│   └── test/
│       ├── scene_T03_000/
│       ├── ...
│       └── scene_T10_053/
└── mid/                               # 10-second scenes (100 frames)
    ├── train/
    └── test/
```
RGB Video (rgb.mp4)
- Format: MP4, lossless H.264 (CRF 0, `libx264`, `ultrafast` preset)
- Pixel format: BGR24 (OpenCV convention — reads directly with `cv2.VideoCapture`)
- Frame shape: `[360, 640, 3]` (H, W, C), uint8
- Frame count: 50 (short) or 100 (mid)
- FPS: 10
```python
import cv2

cap = cv2.VideoCapture("rgb.mp4")
frames = []
while True:
    ret, frame = cap.read()  # BGR uint8 [H, W, 3]
    if not ret:
        break
    frames.append(frame)
cap.release()
```
Depth Maps (depth.safetensors)
- Format: safetensors
- Key: `"depth"`
- Shape: `[N, 360, 640]` where N = number of frames (50 or 100)
- Dtype: float16
- Unit: meters
- Range: 0 to 1000 meters
- Encoding: converted from CARLA's 24-bit RGB depth encoding via `(R + G*256 + B*65536) / (2^24 - 1) * 1000.0`
```python
from safetensors.numpy import load_file

depth = load_file("depth.safetensors")["depth"]  # numpy array [N, H, W], float16
```
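The released tensors already store decoded meters, so the encoding formula above only matters if you regenerate data from raw CARLA depth frames. A sketch of the decode step (`decode_carla_depth` is a hypothetical name, not part of the dataset tooling):

```python
import numpy as np

def decode_carla_depth(rgb: np.ndarray) -> np.ndarray:
    """Decode CARLA's 24-bit RGB depth encoding into meters.

    rgb: uint8 array [..., 3] with channels ordered (R, G, B).
    Returns float32 depth in meters (0-1000 m range).
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    # Normalize the 24-bit integer to [0, 1], then scale by the 1000 m far plane.
    normalized = (r + g * 256.0 + b * 65536.0) / (2**24 - 1)
    return (normalized * 1000.0).astype(np.float32)
```

A fully white pixel (R=G=B=255) decodes to the 1000 m far plane; black decodes to 0 m.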
Extrinsics (extrinsics.safetensors)
- Format: safetensors
- Key: `"c2w"`
- Shape: `[N, 4, 4]` where N = number of frames
- Dtype: float32
- Convention: camera-to-world transformation matrix
- Coordinate system: CV convention (X=right, Y=down, Z=forward)
- Rotation: derived from CARLA's (pitch, yaw, roll) Euler angles via `scipy.spatial.transform.Rotation.from_euler("xyz", [pitch, yaw, roll], degrees=True)`
- Translation: `[location.y, location.z, location.x]` (CARLA to CV coordinate swap)
```python
from safetensors.numpy import load_file

c2w = load_file("extrinsics.safetensors")["c2w"]  # [N, 4, 4], float32
rotation = c2w[0, :3, :3]     # 3x3 rotation matrix
translation = c2w[0, :3, 3]   # 3D translation vector
```
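Following the stated convention, a `c2w` matrix can be reassembled from a raw CARLA pose (as found in the `initial_extrinsics` entries of `metadata.json`). This is an illustrative sketch requiring scipy; `build_c2w` is our name and may differ in detail from the generator's exact implementation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def build_c2w(location: dict, rotation: dict) -> np.ndarray:
    """Assemble a 4x4 camera-to-world matrix per the documented convention:
    rotation from CARLA (pitch, yaw, roll), translation as [y, z, x]."""
    R = Rotation.from_euler(
        "xyz",
        [rotation["pitch"], rotation["yaw"], rotation["roll"]],
        degrees=True,
    ).as_matrix()
    c2w = np.eye(4, dtype=np.float32)
    c2w[:3, :3] = R.astype(np.float32)
    # CARLA-to-CV coordinate swap on the translation.
    c2w[:3, 3] = [location["y"], location["z"], location["x"]]
    return c2w
```

With zero Euler angles the rotation block is the identity and only the axis swap on the translation remains.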
Intrinsics (intrinsics.safetensors)
- Format: safetensors
- Key: `"K"`
- Shape: `[N, 3, 3]` where N = number of frames
- Dtype: float32
- Matrix structure: `[[fx, 0, cx], [0, fy, cy], [0, 0, 1]]`
- Default values: `fx = fy = W / (2 * tan(FOV/2))`, `cx = W/2`, `cy = H/2`
- Note: stored per-frame (N copies) to support future variable-intrinsic scenarios; currently constant across frames within a scene
```python
from safetensors.numpy import load_file

K = load_file("intrinsics.safetensors")["K"]  # [N, 3, 3], float32
fx, fy = K[0, 0, 0], K[0, 1, 1]
cx, cy = K[0, 0, 2], K[0, 1, 2]
```
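The default-value formula can be checked numerically. `intrinsics_from_fov` is an illustrative helper (not part of the dataset tooling); for the dataset's 640x360 resolution and 70-degree FOV it reproduces the fx = fy = 457.01 value shown in `metadata.json`:

```python
import math

def intrinsics_from_fov(width: int, height: int, fov_deg: float):
    """Pinhole intrinsics from a horizontal FOV: fx = fy = W / (2 * tan(FOV/2))."""
    fx = width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return fx, fx, width / 2.0, height / 2.0

fx, fy, cx, cy = intrinsics_from_fov(640, 360, 70.0)
# fx = fy ≈ 457.01, cx = 320.0, cy = 180.0
```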
Scene Metadata (metadata.json)
```json
{
  "scene_name": "scene_T01_000",
  "map_name": "Town01",
  "town": "T01",
  "scene_idx": 0,
  "scene_seed": 0,
  "num_cameras": 9,
  "num_frames": 50,
  "fps": 10,
  "resolution": {"width": 640, "height": 360},
  "fov_deg": 70.0,
  "n_vehicles": 80,
  "n_walkers": 50,
  "weather": "ClearNoon",
  "camera_types": [
    "car_forward", "car_forward", "car_forward",
    "drone_forward", "orbit_building", "orbit_crossroad",
    "cctv", "pedestrian", "pedestrian"
  ],
  "intrinsic": {
    "fx": 457.01, "fy": 457.01,
    "cx": 320.0, "cy": 180.0,
    "width": 640, "height": 360,
    "fov_deg": 70.0
  },
  "intrinsic_matrix": [
    [457.01, 0.0, 320.0],
    [0.0, 457.01, 180.0],
    [0.0, 0.0, 1.0]
  ],
  "initial_extrinsics": [
    {
      "camera": "cam_00_car_forward",
      "traj_type": "car_forward",
      "is_attached": true,
      "location": {"x": 1.0, "y": 2.0, "z": 3.0},
      "rotation": {"pitch": -3.0, "yaw": 90.0, "roll": 0.0},
      "c2w_matrix": [[...], [...], [...], [...]]
    }
  ]
}
```
Actor Trajectories (actors.json)
```json
{
  "num_frames": 50,
  "num_actors": 120,
  "tracks": [
    {
      "type_id": "vehicle.tesla.model3",
      "is_ego": false,
      "bbox_extent": {"x": 2.4, "y": 1.0, "z": 0.8},
      "bbox_center": {"x": 0.0, "y": 0.0, "z": 0.8},
      "frames": [
        {
          "frame": 0,
          "location": {"x": 100.5, "y": -50.3, "z": 0.1},
          "rotation": {"pitch": 0.0, "yaw": 90.5, "roll": 0.0},
          "velocity": {"x": 5.2, "y": 0.1, "z": 0.0}
        }
      ]
    }
  ]
}
```
Field details:
- `type_id`: CARLA blueprint ID (e.g., `vehicle.tesla.model3`, `walker.pedestrian.0050`)
- `is_ego`: `true` if an ego camera is attached to this actor
- `bbox_extent`: half-extents of the 3D bounding box in meters (x=length, y=width, z=height)
- `bbox_center`: bounding box center offset relative to actor origin
- `frames`: list of per-frame states (variable length per actor — actors may appear/disappear)
- `location`: world position in meters (CARLA coordinate system)
- `rotation`: Euler angles in degrees
- `velocity`: velocity vector in m/s
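As an illustration of how these fields combine, the sketch below builds world-space bounding-box corners for one actor frame. It is a deliberate simplification: only yaw is applied (flat-ground assumption) and CARLA's left-handed axis conventions are not corrected for; `bbox_world_corners` is a hypothetical helper:

```python
import numpy as np

def bbox_world_corners(track_frame: dict, bbox_extent: dict, bbox_center: dict) -> np.ndarray:
    """Approximate 8 world-space bbox corners for one actor frame (yaw-only)."""
    # Local corners: bbox center offset plus/minus the half-extents.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    local = signs * np.array([bbox_extent["x"], bbox_extent["y"], bbox_extent["z"]])
    local = local + np.array([bbox_center["x"], bbox_center["y"], bbox_center["z"]])
    # Rotate around the vertical axis by the actor's yaw.
    yaw = np.radians(track_frame["rotation"]["yaw"])
    Rz = np.array([
        [np.cos(yaw), -np.sin(yaw), 0.0],
        [np.sin(yaw),  np.cos(yaw), 0.0],
        [0.0, 0.0, 1.0],
    ])
    loc = track_frame["location"]
    return local @ Rz.T + np.array([loc["x"], loc["y"], loc["z"]])  # [8, 3]
```

For the example track above (extent 2.4 x 1.0 x 0.8, center z = 0.8) at zero yaw and the origin, the corners span x in [-2.4, 2.4] and z in [0, 1.6].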
Index File (index.parquet)
Flat index for programmatic scene discovery and filtering.
| Column | Type | Description |
|---|---|---|
| `scene_name` | string | e.g., `scene_T01_000` |
| `path` | string | Relative path from dataset root, e.g., `short/train/scene_T01_000` |
| `town` | string | Town prefix, e.g., `T01`, `T10` |
| `weather` | string | Weather preset, e.g., `ClearNoon` |
| `n_frames` | int | Frames per camera (50 or 100) |
| `n_actors` | int | Actual number of actors observed in the dynamic render (typically 100-135) |
Constants (not in index): 9 cameras, 640x360 resolution, 70 degrees FOV, 80 vehicles + 50 walkers spawned per scene.
```python
import pandas as pd

df = pd.read_parquet("short/train/index.parquet")
clear_scenes = df[df.weather == "ClearNoon"]
```
Scene Naming Convention
Format: `scene_T{NN}_{IDX}`

- `T{NN}`: Town prefix — `T01` through `T07`, `T10` (Town10HD mapped to T10)
- `{IDX}`: 3-digit scene index within that town (000-053)
- Scene index encodes weather: cycles through `[ClearNoon, HardRainNoon, ClearNoon, MidRainSunset, ClearNoon, WetCloudyNoon]` per 6 consecutive indices
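Under this convention, town, index, and weather can be recovered from the scene name alone. `parse_scene_name` and `WEATHER_CYCLE` are illustrative names; the `weather` column in `index.parquet` remains the authoritative source:

```python
WEATHER_CYCLE = [
    "ClearNoon", "HardRainNoon", "ClearNoon",
    "MidRainSunset", "ClearNoon", "WetCloudyNoon",
]

def parse_scene_name(scene_name: str):
    """Split e.g. 'scene_T01_013' into (town, index, weather),
    deriving weather from the 6-scene cycle described above."""
    _, town, idx_str = scene_name.split("_")
    idx = int(idx_str)
    return town, idx, WEATHER_CYCLE[idx % 6]
```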
Camera Types Detail
| ID | Label | Type | Height | Motion | Attached |
|---|---|---|---|---|---|
| 0-2 | `cam_0X_car_forward` | Vehicle dashcam | 2-3m (roof) | Autopilot driving | Yes (vehicle) |
| 3 | `cam_03_drone_forward` | Aerial drone | 10-20m | Forward flight with drift | No |
| 4 | `cam_04_orbit_building` | Building orbit | 30-40m | 120-degree horizontal pan | No |
| 5 | `cam_05_orbit_crossroad` | Intersection orbit | 3-5m | 100-degree horizontal pan | No |
| 6 | `cam_06_cctv` | Surveillance | 30-40m | Static (no motion) | No |
| 7-8 | `cam_0X_pedestrian` | Pedestrian POV | 1.5-1.8m | AI-controlled walking | Yes (walker) |
Attached cameras (`is_attached: true`): physically mounted on actors with autopilot/AI control. The camera trajectory is determined by the actor's motion and recorded frame-by-frame.
Free cameras (`is_attached: false`): follow pre-computed trajectories, independent of actors.
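A minimal sketch for partitioning a scene's cameras by this flag, reading the `initial_extrinsics` list from `metadata.json` (the helper name `split_cameras` is ours):

```python
def split_cameras(metadata: dict):
    """Partition initial_extrinsics entries into attached (actor-mounted)
    and free (pre-computed trajectory) camera names."""
    entries = metadata["initial_extrinsics"]
    attached = [e["camera"] for e in entries if e["is_attached"]]
    free = [e["camera"] for e in entries if not e["is_attached"]]
    return attached, free
```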
Static-Dynamic Pairing
Each scene contains two renders from identical camera trajectories:
- `dynamic/`: World populated with 80 AI-driven vehicles and 50 AI-driven pedestrians
- `static/`: Empty world, no actors — same camera paths replayed
This enables:
- Scene decomposition: separating static background from dynamic foreground
- Dynamic object segmentation: by comparing static vs. dynamic renders
- Background reconstruction: using static renders as ground truth for the static scene
- 4D reconstruction evaluation: benchmarking against known static/dynamic ground truth
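As a concrete illustration of the first two use cases, a naive per-pixel change mask can be computed by differencing a paired frame. `dynamic_mask` and its threshold are illustrative assumptions, not part of the dataset tooling:

```python
import numpy as np

def dynamic_mask(dynamic_rgb: np.ndarray, static_rgb: np.ndarray,
                 thresh: float = 10.0) -> np.ndarray:
    """Naive dynamic-object mask from a paired render.

    dynamic_rgb, static_rgb: uint8 [H, W, 3] frames from the same camera
    and frame index. Returns a boolean [H, W] mask where the mean absolute
    channel difference exceeds `thresh`. A real pipeline would additionally
    handle shadows, lighting changes, and reflections cast by the actors.
    """
    diff = np.abs(dynamic_rgb.astype(np.int16) - static_rgb.astype(np.int16))
    return diff.mean(axis=-1) > thresh
```

Because the renders are lossless and share exact camera trajectories, pixels outside actor regions (and their shadows/reflections) should match bit-for-bit.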
Loading Examples
Download and load a complete scene
```python
from huggingface_hub import snapshot_download
from safetensors.numpy import load_file
from pathlib import Path
import torchvision, json

root = snapshot_download(
    "henry000/StaDy4D", repo_type="dataset",
    allow_patterns="short/train/scene_T01_000/**",
)
scene_path = Path(root) / "short/train/scene_T01_000"

meta = json.load(open(scene_path / "metadata.json"))
actors = json.load(open(scene_path / "actors.json"))

for cam_dir in sorted((scene_path / "dynamic").iterdir()):
    rgb = torchvision.io.read_video(str(cam_dir / "rgb.mp4"))[0]       # [N, H, W, 3]
    depth = load_file(str(cam_dir / "depth.safetensors"))["depth"]     # [N, H, W]
    c2w = load_file(str(cam_dir / "extrinsics.safetensors"))["c2w"]    # [N, 4, 4]
    K = load_file(str(cam_dir / "intrinsics.safetensors"))["K"]        # [N, 3, 3]
    print(f"{cam_dir.name}: rgb {rgb.shape}, depth {depth.shape}, c2w {c2w.shape}")
```
Download a full split
```python
from huggingface_hub import snapshot_download

# Download all short training scenes (~300GB)
root = snapshot_download(
    "henry000/StaDy4D", repo_type="dataset",
    allow_patterns="short/train/**",
)
```
Filter scenes with parquet index
```python
import pandas as pd
from huggingface_hub import snapshot_download

root = snapshot_download(
    "henry000/StaDy4D", repo_type="dataset",
    allow_patterns="short/train/index.parquet",
)
df = pd.read_parquet(f"{root}/short/train/index.parquet")

# All clear-weather scenes from Town01
subset = df[(df.town == "T01") & (df.weather == "ClearNoon")]
print(f"{len(subset)} scenes match")

# Then download just those scenes
for _, row in subset.iterrows():
    snapshot_download(
        "henry000/StaDy4D", repo_type="dataset",
        allow_patterns=f"{row['path']}/**",
    )
```
Project depth to 3D point cloud
```python
from safetensors.numpy import load_file
import numpy as np

depth = load_file("depth.safetensors")["depth"][0]    # [H, W] for frame 0
K = load_file("intrinsics.safetensors")["K"][0]       # [3, 3]
c2w = load_file("extrinsics.safetensors")["c2w"][0]   # [4, 4]

H, W = depth.shape
u, v = np.meshgrid(np.arange(W), np.arange(H))
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

# Unproject to camera space
z = depth.astype(np.float32)
x = (u - cx) * z / fx
y = (v - cy) * z / fy
pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # [H, W, 4]

# Transform to world space
pts_world = (c2w @ pts_cam.reshape(-1, 4).T).T[:, :3]    # [H*W, 3]
```
Generation
Generated using the CARLA 0.9.16 simulator with `data_generate_carla.py`. Source code and configs are available in the dataset repository.
License
CC-BY-SA-4.0
Citation
```bibtex
@dataset{stady4d,
  title={StaDy4D: Multi-View Static-Dynamic 4D Scene Dataset for Reconstruction},
  year={2025},
  url={https://huggingface.co/datasets/henry000/StaDy4D}
}
```