

Poster

Towards Real-World Adverse Weather Image Restoration: Enhancing Clearness and Semantics with Vision-Language Models

Jiaqi Xu · Mengyang Wu · Xiaowei Hu · Chi-Wing Fu · Qi Dou · Pheng-Ann Heng

# 295
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

This paper addresses the limitations of existing adverse weather image restoration methods, which are trained on synthetic data and degrade when applied to real-world scenarios. We formulate a semi-supervised learning framework that utilizes vision-language models to enhance restoration performance across diverse adverse weather conditions in real-world settings. Our approach uses vision-language models on real data to assess image clearness and provide semantics, which serve as supervision signals for training restoration models. For clearness enhancement, we employ a dual-step strategy on real-world data, combining pseudo-labels generated by vision-language models with weather prompt learning. For semantic enhancement, we integrate real-world data by adjusting the weather conditions in vision-language model descriptions while preserving their semantic meaning. Additionally, we introduce an efficient training strategy to alleviate the computational burden. Our approach achieves superior results in real-world adverse weather image restoration, as demonstrated through qualitative and quantitative comparisons with state-of-the-art methods.
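The abstract describes scoring real images for clearness with a vision-language model and using the score as a supervision signal. The paper's actual implementation is not reproduced here; the following is a minimal sketch, assuming a CLIP-style model whose image and text encoders produce embedding vectors. The helper names `cosine` and `clearness_score`, and the specific prompt wording, are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def clearness_score(image_emb, clear_emb, weather_embs):
    """Softmax probability that the image embedding matches the 'clear'
    text prompt rather than any adverse-weather prompt. A high score on
    a restored output (vs. its degraded input) can serve as a
    pseudo-label confidence signal, as in the clearness-assessment idea
    described in the abstract. Embeddings are assumed to come from a
    pretrained CLIP-style vision-language model."""
    sims = np.array([cosine(image_emb, clear_emb)] +
                    [cosine(image_emb, w) for w in weather_embs])
    exp = np.exp(sims - sims.max())  # numerically stable softmax
    return float(exp[0] / exp.sum())
```

In a training loop, one would keep restored images whose clearness score exceeds that of the degraded input as pseudo-labeled pairs; the prompt set (e.g. "a photo in heavy rain", "a photo in dense fog") would be tuned per weather condition.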
