
NeurIPS 2020 - One Paper Accepted: Juan Gonzalez

The paper by Juan Gonzalez, a Ph.D. student in our lab, has been accepted to NeurIPS 2020. Congratulations!


This paper presents an unsupervised (self-supervised) method for estimating depth maps from single images, achieving the best depth prediction accuracy among the methods published to date.


Title: Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes

Authors: Juan Luis Gonzalez Bello and Munchurl Kim

Abstract:

Self-supervised depth estimators have recently shown results comparable to the supervised methods on the challenging single image depth estimation (SIDE) task, by exploiting the geometrical relations between target and reference views in the training data. However, previous methods usually learn forward or backward image synthesis, but not depth estimation, as they cannot effectively neglect occlusions between the target and the reference images. Previous works rely on rigid photometric assumptions or on the SIDE network to infer depth and occlusions, resulting in limited performance. On the other hand, we propose a method to “Forget About the LiDAR” (FAL), for the training of depth estimators, with Mirrored Exponential Disparity (MED) probability volumes, from which we obtain geometrically inspired occlusion maps with our novel Mirrored Occlusion Module (MOM). Our MOM does not impose a burden on our FAL-net. Contrary to the previous methods that learn SIDE from stereo pairs by regressing disparity in the linear space, our FAL-net regresses disparity by binning it into the exponential space, which allows for better detection of distant and nearby objects. We define a two-step training strategy for our FAL-net: it is first trained for view synthesis and then fine-tuned for depth estimation with our MOM. Our FAL-net is remarkably light-weight and outperforms the previous state-of-the-art methods with 8× fewer parameters and 3× faster inference speeds on the challenging KITTI dataset. We present extensive experimental results on the KITTI, CityScapes, and Make3D datasets to verify our method’s effectiveness. To the best of the authors’ knowledge, the presented method performs the best among all previous self-supervised methods.
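
For readers curious about the exponential disparity binning mentioned in the abstract, below is a minimal, illustrative sketch (not the authors' released code) of how disparity bins can be spaced exponentially and how a softmax probability volume can be collapsed into a disparity map. The bin count and the d_min/d_max range are assumptions chosen only for this example.

# Minimal sketch, assuming PyTorch; not the authors' implementation.
# The bin count and disparity range below are illustrative assumptions.
import torch
import torch.nn.functional as F

def exponential_disparity_levels(d_min=2.0, d_max=300.0, num_bins=49):
    # Place num_bins disparity levels exponentially between d_max and d_min.
    # Exponential spacing devotes more bins to small disparities (far regions),
    # which is one way to realize the abstract's point about handling both
    # distant and nearby objects.
    n = torch.arange(num_bins, dtype=torch.float32) / (num_bins - 1)
    return d_max * (d_min / d_max) ** n          # shape: (num_bins,)

def disparity_from_probability_volume(logits, levels):
    # Collapse a (B, N, H, W) logit volume into a (B, 1, H, W) disparity map
    # via a softmax-weighted sum over the exponential disparity levels.
    probs = F.softmax(logits, dim=1)             # per-pixel probability volume
    return (probs * levels.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

# Toy usage with a random "network output":
levels = exponential_disparity_levels()
logits = torch.randn(2, levels.numel(), 96, 320)
disparity = disparity_from_probability_volume(logits, levels)  # (2, 1, 96, 320)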


