
BaseBoostDepth: Exploiting Larger Baselines for Self-Supervised Monocular Depth Estimation


(Figure: high edge accuracy.)

Installation Setup

The models were trained with CUDA 11.1, Python 3.7.4 (conda environment), and PyTorch 1.8.0 on Ubuntu 22.04.

Create a conda environment with the PyTorch library:

conda env create --file environment.yml
conda activate baseboostdepth
pip install git+'https://github.com/otaheri/chamfer_distance'
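
As an optional sanity check (not part of the original setup steps), confirm that PyTorch and CUDA are visible inside the activated environment:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"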

Training Datasets

We use the KITTI dataset and follow the download/preprocessing instructions set out by Monodepth2. Download the raw data using the archive list in scripts/:

wget -i scripts/kitti_archives_to_download.txt -P data/KITTI_RAW/

Then unzip the downloaded files:

cd data/KITTI_RAW
unzip "*.zip"
cd ..
cd ..

Then convert all images to .jpg:

find data/KITTI_RAW/ -name '*.png' | parallel 'convert -quality 92 -sampling-factor 2x2,1x1,1x1 {.}.png {.}.jpg && rm {}'
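
Note that the command above relies on ImageMagick's convert and on GNU parallel; if either is missing, it can typically be installed through the system package manager, e.g. on Ubuntu:

sudo apt install imagemagick parallel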

Pretrained Models

KITTI: (Abs_Rel, RMSE, a1)

SYNS: (Edge-Acc, Edge-Comp, Point cloud F-Score, Point cloud IoU)

| Model Name | Abs_Rel | RMSE | a1 | Edge-Acc | Edge-Comp | F-Score | IoU | Model resolution | Model |
|---|---|---|---|---|---|---|---|---|---|
| BaseBoostDepth | 0.106 | 4.584 | 0.883 | 2.453 | 3.810 | 0.275 | 0.174 | 640 x 192 | MD2 |
| BaseBoostDepth (pre) | 0.104 | 4.544 | 0.888 | 2.432 | 4.763 | 0.268 | 0.168 | 640 x 192 | MD2 |
| BaseBoostDepth (pre MonoViT) | 0.096 | 4.201 | 0.906 | 2.409 | 5.314 | 0.300 | 0.191 | 640 x 192 | MonoViT |
| BaseBoostDepth (pre SQLdepth) | 0.084 | 3.980 | 0.920 | 2.505 | 13.164 | 0.246 | 0.151 | 640 x 192 | SQLdepth |
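
For reference, Abs_Rel, RMSE, and a1 follow the standard monocular-depth evaluation definitions. The NumPy sketch below is illustrative only and is not taken from this repository's evaluate_depth.py:

import numpy as np

def kitti_metrics(gt, pred):
    # gt, pred: 1D arrays of valid ground-truth and predicted depths (metres)
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()                # accuracy under threshold delta < 1.25
    abs_rel = np.mean(np.abs(gt - pred) / gt)  # absolute relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))  # root-mean-square error
    return abs_rel, rmse, a1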

Training

Prepare Validation Data

python export_gt_depth.py --data_path data/KITTI_RAW --split eigen_zhou

Then launch training with the provided script:

bash run.sh

KITTI Ground Truth

Ground-truth depth files must be prepared for testing/validation and training:

python export_gt_depth.py --data_path data/KITTI_RAW --split eigen
python export_gt_depth.py --data_path data/KITTI_RAW --split eigen_benchmark
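
To confirm the exports succeeded, check that each split received a gt_depths.npz file (the splits/<split>/ location assumes the Monodepth2 convention and may differ here):

ls splits/eigen/gt_depths.npz splits/eigen_benchmark/gt_depths.npz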

Evaluation KITTI

We provide evaluation code for the KITTI dataset. If the loaded weights come from the MonoViT model, add --ViT to the commands below; for SQLdepth weights, add --SQL.

KITTI

python evaluate_depth.py --load_weights_folder {weights_directory} --eval_mono --kt_path data/KITTI_RAW --eval_split eigen
python evaluate_depth.py --load_weights_folder {weights_directoryMonoViT} --eval_mono --kt_path data/KITTI_RAW --eval_split eigen --ViT
python evaluate_depth.py --load_weights_folder {weights_directorySQL} --eval_mono --kt_path data/KITTI_RAW --eval_split eigen --SQL

KITTI Benchmark

python evaluate_depth.py --load_weights_folder {weights_directory} --eval_mono --kt_path data/KITTI_RAW --eval_split eigen_benchmark
python evaluate_depth.py --load_weights_folder {weights_directoryMonoViT} --eval_mono --kt_path data/KITTI_RAW --eval_split eigen_benchmark --ViT
python evaluate_depth.py --load_weights_folder {weights_directorySQL} --eval_mono --kt_path data/KITTI_RAW --eval_split eigen_benchmark --SQL

SYNS Dataset Creation

Due August
