Update README.md
README.md
CHANGED
@@ -1,8 +1,3 @@
----
-license: cc-by-nc-4.0
----
-
-
 
 
 # UAVScenes
@@ -12,7 +7,7 @@ license: cc-by-nc-4.0
 
 We introduce UAVScenes, a large-scale dataset designed to benchmark various tasks across both 2D and 3D modalities. Our benchmark dataset is built upon the well-calibrated multi-modal UAV dataset MARS-LVIG, originally developed only for simultaneous localization and mapping (SLAM). We enhance this dataset by providing manually labeled semantic annotations for both images and LiDAR point clouds, along with accurate 6-degree-of-freedom (6-DoF) poses. These additions enable a wide range of UAV perception tasks, including detection, segmentation, depth estimation, 6-DoF localization, place recognition, and novel view synthesis (NVS). To the best of our knowledge, this is the first UAV benchmark dataset to offer both image and LiDAR point cloud semantic annotations (120k labeled pairs), with the potential to advance multi-modal UAV perception research.
 
-<img src="https://github.com/sijieaaa/UAVScenes/raw/main/pics/supp_demo.png"
+<img src="https://github.com/sijieaaa/UAVScenes/raw/main/pics/supp_demo.png" alt="pic" style="width:80%; height:auto;">
 
 ## Download
 We provide both the full dataset (interval=1) and the key-frame only dataset (interval=5, 1/5 size). <br>
@@ -44,12 +39,12 @@ Camera-3D map calibrations are in `sampleinfos_interpolated.json`. <br>
 
 - More sensor and scene information can be found at [MARS-LVIG](https://mars.hku.hk/dataset.html).
 
-
- -->
-<img src="https://github.com/sijieaaa/UAVScenes/raw/main/pics/dji_m300.png" alt="dji_m300" width="600px">
-<img src="https://github.com/sijieaaa/UAVScenes/raw/main/pics/summary.png" alt="summary" width="600px">
+<img src="https://github.com/sijieaaa/UAVScenes/raw/main/pics/dji_m300.png" alt="pic" style="width:50%; height:auto;">
+
+- UAVScenes consists of 4 large scenes (AMtown, AMvalley, HKairport, and HKisland). Each scene contains multiple runs (e.g., 01, 02, and 03).
 
+<img src="https://github.com/sijieaaa/UAVScenes/raw/main/pics/summary.png" alt="pic" style="width:80%; height:auto;">
 
 
 ## Baseline Code
 In preparation. Please stay tuned.
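
A practical note on the two variants listed under Download: the key-frame release (interval=5) keeps every 5th frame of the full release (interval=1), and the data is grouped by scene (AMtown, AMvalley, HKairport, HKisland) and run (01, 02, ...). The sketch below only illustrates that selection; the `UAVScenes/<scene>/<run>/images` layout and sorted frame naming are assumptions for illustration, not the documented structure.

```python
from pathlib import Path

# Scenes and runs as listed in the README; the folder layout used here is an assumption.
SCENES = ["AMtown", "AMvalley", "HKairport", "HKisland"]

def keyframe_subset(frame_dir: Path, interval: int = 5):
    """Keep every `interval`-th frame file (interval=5 mirrors the key-frame-only release)."""
    frames = sorted(frame_dir.iterdir())
    return frames[::interval]

root = Path("UAVScenes")  # assumed local dataset root
for scene in SCENES:
    for run_dir in sorted((root / scene).glob("[0-9][0-9]")):  # runs such as 01, 02, 03
        keyframes = keyframe_subset(run_dir / "images", interval=5)
        print(f"{scene}/{run_dir.name}: {len(keyframes)} key frames")
```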
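
The header of the last hunk shows the nearby context line noting that camera-3D map calibrations live in `sampleinfos_interpolated.json`. Below is a minimal loading sketch, assuming a JSON list (or dict) of per-frame records with hypothetical `pose` and `intrinsic` fields; the real schema is not shown in this README and should be checked against the released file.

```python
import json

import numpy as np

# Load per-frame sample infos; path is relative to an assumed dataset root.
with open("sampleinfos_interpolated.json", "r") as f:
    infos = json.load(f)

# Accept either a list of records or a dict keyed by frame id (both are guesses).
records = infos if isinstance(infos, list) else list(infos.values())
first = records[0]

# "pose" (4x4 camera-to-map transform) and "intrinsic" (3x3 camera matrix) are
# hypothetical field names used only to show the access pattern.
pose = np.asarray(first.get("pose", np.eye(4).tolist()), dtype=float)
K = np.asarray(first.get("intrinsic", np.eye(3).tolist()), dtype=float)
print("pose:", pose.shape, "intrinsic:", K.shape)
```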