

Poster

Unsupervised Moving Object Segmentation with Atmospheric Turbulence

Dehao Qin · Ripon K. Saha · Woojeh Chung · Suren Jayasuriya · Jinwei Ye · Nianyi Li

Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Moving object segmentation in the presence of atmospheric turbulence is highly challenging due to turbulence-induced irregular and time-varying distortions. In this paper, we present an unsupervised approach for segmenting moving objects in videos degraded by atmospheric turbulence. Our key idea is a detect-then-grow scheme: we first identify a small set of pixels that belong to moving objects with high confidence, then gradually grow a foreground mask from those seeds that segments all moving objects in the scene. To disentangle different types of motion, we check rigid geometric consistency across video frames and use the Sampson distance to initialize the seed pixels. After growing per-frame foreground masks, we apply a spatial grouping loss and a temporal consistency loss to further refine the masks and ensure their spatio-temporal consistency. Our method is unsupervised and does not require training on labeled data. For validation, we collect and release the first real-captured long-range turbulent video dataset with ground-truth masks for moving objects. We evaluate our method both qualitatively and quantitatively on this dataset. Results show that our method achieves good accuracy in segmenting moving objects and is robust for long-range videos with various turbulence strengths.
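
For readers unfamiliar with the seeding criterion: for a fundamental matrix F and a point correspondence (x, x'), the Sampson distance (x'^T F x)^2 / ((Fx)_1^2 + (Fx)_2^2 + (F^T x')_1^2 + (F^T x')_2^2) is a first-order approximation of the geometric reprojection error, so correspondences on the rigid background score low while independently moving pixels score high. The Python sketch below illustrates this seeding idea only; the RANSAC-based fundamental-matrix fit, the threshold value, and all function names are assumptions made for illustration, not the authors' released implementation.

import numpy as np
import cv2

def sampson_distance(F, x1, x2):
    """First-order geometric error of correspondences (x1, x2) under
    fundamental matrix F. x1, x2 are Nx2 arrays of pixel coordinates."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])  # to homogeneous coords
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1 = x1h @ F.T    # epipolar lines in image 2, one per row
    Ftx2 = x2h @ F     # epipolar lines in image 1, one per row
    num = np.sum(x2h * Fx1, axis=1) ** 2          # (x'^T F x)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def seed_moving_pixels(x1, x2, thresh=2.0):
    """Hypothetical seeding step: fit F to all correspondences (dominated
    by the rigid background), then flag pixels whose Sampson distance
    exceeds `thresh` as high-confidence moving-object seeds."""
    F, _ = cv2.findFundamentalMat(x1, x2, cv2.FM_RANSAC, 1.0, 0.99)
    d = sampson_distance(F, x1, x2)
    return d > thresh  # boolean seed mask over the correspondences

In practice the correspondences would come from frame-to-frame matching, e.g. dense optical flow; the point of the sketch is the thresholding step that yields the small set of high-confidence seeds from which the foreground masks are grown.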
