Suikong committed
Commit 673293b · verified · 1 Parent(s): 05044a5

Update EVAL.md

Files changed (1)
  1. EVAL.md +20 -2
EVAL.md CHANGED
@@ -8,7 +8,25 @@ Download json files from [here](https://huggingface.co/AIDC-AI/Omni-View/tree/ma
 
 Download metadata from [EmbodiedScan](https://github.com/OpenRobotLab/EmbodiedScan/tree/main/data). You need to fill out the official form to get access to the dataset. Move the `embodiedscan_infos_*.pkl` to `./dataset/eval/embodiedscan`.
 
-Download images from [Video3DLLM](https://huggingface.co/datasets/zd11024/Video-3D-LLM_data). Move the `scannet` to `./dataset/eval/`.
+Download images from [Video3DLLM](https://huggingface.co/datasets/zd11024/Video-3D-LLM_data). Then,
+
+```shell
+cd Video-3D-LLM_data
+# unzip posed images
+cat posed_images_part* > posed_images.tar.gz
+tar -xzf posed_images.tar.gz
+# unzip mask
+unzip mask.zip
+# unzip pcd
+tar -xzf pcd_with_object_aabbs.tar.gz
+
+mkdir scannet
+mv posed_images/ scannet/
+mv mask/ scannet/
+mv data/scannet/pcd_with_object_aabbs/ scannet/
+```
+
+Move the `scannet` to `./dataset/eval/`.
 
 The whole file structure under `./dataset/eval/` will be as follows.
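As a quick reference, the sketch below shows one way the pieces above could be fetched and arranged, plus a sanity check of the resulting layout. The `huggingface-cli` command and the final `mv`/`ls` steps are assumptions for illustration; only the dataset repo id, the archive contents, and the target directories come from the instructions above.

```shell
# Sketch only: fetch the Video3DLLM archives (assumes the huggingface_hub CLI is installed)
huggingface-cli download zd11024/Video-3D-LLM_data --repo-type dataset --local-dir Video-3D-LLM_data

# Place the EmbodiedScan metadata obtained via the official form
mkdir -p ./dataset/eval/embodiedscan
mv embodiedscan_infos_*.pkl ./dataset/eval/embodiedscan/

# After running the extraction commands from the diff above, move `scannet` into place
mv Video-3D-LLM_data/scannet ./dataset/eval/

# Sanity-check the expected layout
ls ./dataset/eval/embodiedscan/
ls ./dataset/eval/scannet/posed_images | head
ls ./dataset/eval/scannet/mask | head
ls ./dataset/eval/scannet/pcd_with_object_aabbs | head
```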
 
@@ -223,4 +241,4 @@ python inference.py --scene-id 000c3ab189999a83
 
 If `scene-id != pose-id`, we will use the first image of scene-id as the reference image and generate novel views using the camera trajectory of pose-id.
 
-If `(scene-id is None) and (image-path is not None)`, we will use the image in image-path as the reference image and generate novel views using the camera trajectory of pose-id.
+If `(scene-id is None) and (image-path is not None)`, we will use the image in image-path as the reference image and generate novel views using the camera trajectory of pose-id.
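To make the cases above concrete, here is a hedged sketch of possible invocations. Only `--scene-id 000c3ab189999a83` appears verbatim in the hunk header; the `--pose-id` and `--image-path` flag spellings and the placeholder values are assumptions inferred from the prose.

```shell
# scene-id == pose-id: reference image and camera trajectory come from the same scene
python inference.py --scene-id 000c3ab189999a83

# scene-id != pose-id: first image of scene-id as reference, trajectory of pose-id
# (flag name --pose-id assumed from the prose)
python inference.py --scene-id 000c3ab189999a83 --pose-id <other_scene_id>

# no scene-id: an arbitrary image as reference, trajectory of pose-id
# (flag name --image-path assumed from the prose; the path is a placeholder)
python inference.py --image-path ./reference.png --pose-id <other_scene_id>
```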