

Poster

Arbitrary-Scale Video Super-Resolution with Structural and Textural Priors

Wei Shang · Dongwei Ren · Wanying Zhang · Yuming Fang · Wangmeng Zuo · Kede Ma

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Arbitrary-scale video super-resolution is a challenging task that involves generating high-resolution videos of arbitrary sizes from low-resolution videos while preserving fine details and ensuring temporal consistency between consecutive frames. In this study, we present ArbVSR, an efficient and effective framework for arbitrary-scale video super-resolution. Specifically, ArbVSR builds upon a flow-guided recurrent unit to capture temporal dependencies and uses local window aggregation to exploit future frames. To better leverage scale information, we generate spatially varying maps from all stages of pre-trained deep neural networks as structural and textural priors, which identify regions with a high probability of containing texture. These priors effectively guide the super-resolution process to produce more visually pleasing and accurate results across different scale factors. In the upsampling phase, we propose a scale-sensitive, data-independent hypernetwork that generates continuous upsampling weights for arbitrary-scale video super-resolution; because the weights do not depend on the input, they can be computed during pre-processing to improve efficiency. Extensive experiments demonstrate the significant advantages of our method in terms of both performance and efficiency. The source code and trained models will be made publicly available.
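The abstract's key efficiency claim is that the hypernetwork is data-independent: the upsampling weights depend only on the scale factor and on each output pixel's position relative to the low-resolution grid, never on the video content, so they can be precomputed once per scale. The sketch below illustrates that idea only; the MLP sizes, the (dx, dy, 1/scale) input encoding, and the 2x2-neighborhood blending weights are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def relative_coords(scale, out_size):
    # For each high-resolution pixel, the fractional offset from the
    # nearest low-resolution sample under the given scale factor.
    hr = np.arange(out_size)
    lr_pos = (hr + 0.5) / scale - 0.5
    return lr_pos - np.floor(lr_pos)

def hypernet_weights(scale, out_h, out_w, seed=0):
    # Toy hypernetwork (assumed architecture): a small fixed MLP mapping
    # (dy, dx, 1/scale) -> 4 blending weights over a 2x2 LR neighborhood.
    rng = np.random.default_rng(seed)
    W1, b1 = rng.standard_normal((3, 16)), np.zeros(16)
    W2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

    dy = relative_coords(scale, out_h)
    dx = relative_coords(scale, out_w)
    grid = np.stack(np.meshgrid(dy, dx, indexing="ij"), axis=-1)   # (H, W, 2)
    inv_scale = np.full((out_h, out_w, 1), 1.0 / scale)
    inp = np.concatenate([grid, inv_scale], axis=-1)               # (H, W, 3)

    h = np.maximum(inp @ W1 + b1, 0.0)                             # ReLU layer
    w = h @ W2 + b2
    # Softmax-normalize so each pixel's weights form a valid blending kernel.
    w = np.exp(w - w.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)

# The weights are a function of (scale, coordinates) alone, so for a fixed
# target scale they can be computed once during pre-processing and reused
# for every frame of the video.
weights = hypernet_weights(scale=1.7, out_h=6, out_w=8)
```

Because the same weight tensor is reused across all frames at a given scale, the per-frame cost of arbitrary-scale upsampling reduces to applying a precomputed kernel, which is the efficiency advantage the abstract points to.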
