
# Adabins

[Adabins: Depth estimation using adaptive bins](https://arxiv.org/abs/2011.14141)

## Introduction

[Official Repo](https://github.com/shariqfarooq123/AdaBins)

## Abstract

We address the problem of estimating a high quality dense depth map from a single RGB input image. We start out with a baseline encoder-decoder convolutional neural network architecture and pose the question of how the global processing of information can help improve overall depth estimation. To this end, we propose a transformer-based architecture block that divides the depth range into bins whose center value is estimated adaptively per image. The final depth values are estimated as linear combinations of the bin centers. We call our new building block AdaBins. Our results show a decisive improvement over the state-of-the-art on several popular depth datasets across all metrics. We also validate the effectiveness of the proposed block with an ablation study and provide the code and corresponding pre-trained weights of the new state-of-the-art model.
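The snippet below is a minimal PyTorch sketch (not the official implementation) of the core idea in the abstract: per-image bin widths are normalized into a partition of the depth range, turned into bin centers, and each pixel's depth is a probability-weighted linear combination of those centers. The function name `depth_from_adaptive_bins`, the tensor names, and the depth range defaults are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def depth_from_adaptive_bins(bin_widths_logits, range_attention_maps,
                             min_depth=1e-3, max_depth=10.0):
    """Illustrative sketch of the AdaBins depth composition.

    bin_widths_logits: (B, N) per-image bin-width logits (transformer head).
    range_attention_maps: (B, N, H, W) per-pixel logits over the N bins.
    """
    # Normalize widths so the N bins partition [min_depth, max_depth].
    widths = F.softmax(bin_widths_logits, dim=1)                       # (B, N)
    widths = (max_depth - min_depth) * widths
    # Bin centers: left edge of each bin plus half its width.
    edges = torch.cumsum(widths, dim=1)
    centers = min_depth + edges - 0.5 * widths                         # (B, N)
    # Per-pixel probabilities over the bins, then a linear combination of centers.
    probs = F.softmax(range_attention_maps, dim=1)                     # (B, N, H, W)
    depth = torch.einsum('bn,bnhw->bhw', centers, probs).unsqueeze(1)  # (B, 1, H, W)
    return depth


# Example with 256 adaptive bins on a small feature map.
pred = depth_from_adaptive_bins(torch.randn(2, 256), torch.randn(2, 256, 12, 16))
```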

## Framework

## Citation

```bibtex
@inproceedings{bhat2021adabins,
  title={Adabins: Depth estimation using adaptive bins},
  author={Bhat, Shariq Farooq and Alhashim, Ibraheem and Wonka, Peter},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4009--4018},
  year={2021}
}
```

## Results and models

### KITTI

| Method   | Backbone          | Train Epoch | Abs Rel (+flip) | RMSE (+flip) | Config | Download     |
| -------- | ----------------- | ----------- | --------------- | ------------ | ------ | ------------ |
| Official | EfficientNetB5-AP | 25          | 0.058           | 2.36         | -      | -            |
| Adabins  | EfficientNetB5-AP | 24          | 0.058           | 2.33         | config | log \| model |

### NYU

| Method   | Backbone          | Train Epoch | Abs Rel (+flip) | RMSE (+flip) | Config | Download     |
| -------- | ----------------- | ----------- | --------------- | ------------ | ------ | ------------ |
| Official | EfficientNetB5-AP | 25          | 0.103           | 0.364        | -      | -            |
| Adabins  | EfficientNetB5-AP | 24          | 0.106           | 0.368        | config | log \| model |
| Adabins  | ResNet-50         | 24          | 0.141           | 0.451        | config | log \| model |
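As a reading aid for the "(+flip)" columns above, the sketch below assumes they denote test-time augmentation in which the prediction on a horizontally flipped input is un-flipped and averaged with the normal prediction before Abs Rel and RMSE are computed. The helper `eval_with_flip`, its signature, and the depth-range defaults are hypothetical and not part of this repository.

```python
import torch


def eval_with_flip(model, image, gt, min_depth=1e-3, max_depth=80.0):
    """Illustrative flip-averaged evaluation.

    image: (1, 3, H, W) input image, gt: (1, H, W) ground-truth depth.
    Assumes model(image) returns a (1, 1, H, W) depth map.
    """
    with torch.no_grad():
        pred = model(image)
        # Predict on the horizontally flipped image, then flip the result back.
        pred_flip = torch.flip(model(torch.flip(image, dims=[3])), dims=[3])
    pred = 0.5 * (pred + pred_flip)                 # flip-averaged prediction
    pred = pred.squeeze(1).clamp(min_depth, max_depth)

    valid = (gt > min_depth) & (gt < max_depth)     # validity mask (illustrative)
    p, g = pred[valid], gt[valid]
    abs_rel = torch.mean(torch.abs(p - g) / g)      # Abs Rel
    rmse = torch.sqrt(torch.mean((p - g) ** 2))     # RMSE
    return abs_rel.item(), rmse.item()
```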