This paper addresses the limitations of existing adverse weather image restoration methods, which are trained on synthetic data and degrade when applied to real-world scenarios. We formulate a semi-supervised learning framework that leverages vision-language models to improve restoration performance across diverse adverse weather conditions in real-world settings. Our approach uses vision-language models to assess image clearness and provide semantic context on real data, and these assessments serve as supervision signals for training the restoration model. For clearness enhancement, we exploit real-world data through a dual-step strategy that combines pseudo-labels generated by vision-language models with weather prompt learning. For semantic enhancement, we incorporate real-world data by adjusting the weather conditions described by vision-language models while preserving the underlying semantic meaning. In addition, we introduce an efficient training strategy to alleviate the computational burden. Our approach achieves superior results on real-world adverse weather image restoration, as demonstrated through qualitative and quantitative comparisons with state-of-the-art methods.
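To make the clearness-supervision idea concrete, the following is a minimal illustrative sketch (not the authors' code) of how a CLIP-style vision-language model could score how clear a real image looks by contrasting it with weather-degraded text prompts; the prompt wording, model checkpoint, and softmax scoring rule are assumptions for illustration only.

```python
# Illustrative sketch: using a CLIP-style vision-language model to estimate a
# "clearness" score for a real image, which could serve as a pseudo-label or
# reward signal in semi-supervised training. Prompts and scoring are assumed.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Hypothetical prompt set contrasting a clear scene with adverse weather.
prompts = [
    "a clear photo",
    "a photo taken in heavy rain",
    "a photo taken in dense fog",
    "a photo taken in a snowstorm",
]

def clearness_score(image: Image.Image) -> float:
    """Return the probability mass assigned to the 'clear' prompt."""
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, num_prompts)
    probs = logits.softmax(dim=-1)
    return probs[0, 0].item()  # index 0 corresponds to "a clear photo"

# Example usage: a restored output should score higher than its degraded input,
# and that gap could be used as weak supervision on unlabeled real-world data.
# score_restored = clearness_score(Image.open("restored.png"))
# score_input = clearness_score(Image.open("rainy_input.png"))
```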