Underwater scenes are challenging for computer vision methods due to color degradation caused by the water column and detrimental lighting effects such as caustics, caused by sunlight refracting on a wavy surface. These challenges impede the widespread use of computer vision tools that could aid in ecological surveying of underwater environments or in industrial applications. Existing algorithms for removing caustics and descattering images to recover colors are often impractical: they require ground-truth training data, successful alignment of an image within a 3D scene, or other assumptions that are infeasible in practice. In this paper, we propose a solution to these problems in underwater computer vision. Our method is based on two neural networks: CausticsNet, for single-image caustics removal, and BackscatterNet, for backscatter removal. Both networks are trained using an objective formulated with the aid of self-supervised monocular SLAM on a collection of underwater videos. Thus, our method requires no ground-truth color images or caustics labels, and corrects images in real time. We experimentally demonstrate the fidelity of our caustics-removal method, which performs on par with state-of-the-art supervised methods, and show that the color restoration and caustics removal lead to better downstream performance in Structure-from-Motion image keypoint matching than a wide range of methods.
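As an illustrative sketch only, the two-stage pipeline described above (backscatter removal followed by caustics removal) can be expressed as a composition of two per-frame operators. The function names mirror BackscatterNet and CausticsNet, but their bodies here are simple hand-crafted placeholders, not the learned networks from the paper; the additive-backscatter and multiplicative-caustics assumptions are ours for illustration.

```python
import numpy as np

def backscatter_net(image: np.ndarray) -> np.ndarray:
    """Placeholder for BackscatterNet (a learned model in the paper).
    Assumes backscatter is an additive per-channel veil and estimates it
    crudely as the per-channel minimum of the frame."""
    veil = image.min(axis=(0, 1), keepdims=True)
    return np.clip(image - veil, 0.0, 1.0)

def caustics_net(image: np.ndarray) -> np.ndarray:
    """Placeholder for CausticsNet (a learned model in the paper).
    Assumes caustics act as a multiplicative brightness flicker and
    flattens it by normalizing local luminance toward the frame mean."""
    luminance = image.mean(axis=2, keepdims=True)
    gain = luminance.mean() / np.maximum(luminance, 1e-6)
    return np.clip(image * np.minimum(gain, 1.5), 0.0, 1.0)

def correct_frame(image: np.ndarray) -> np.ndarray:
    """Two-stage correction: descatter first, then remove caustics."""
    return caustics_net(backscatter_net(image))

# Example on a synthetic RGB frame with values in [0, 1].
frame = np.random.default_rng(0).uniform(0.2, 0.8, size=(64, 64, 3))
corrected = correct_frame(frame)
print(corrected.shape)  # (64, 64, 3)
```

Because each stage is a stateless per-frame operator, this structure matches the real-time, single-image setting the abstract describes; the learned networks would replace the placeholder bodies.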