Poster

Towards Architecture-Agnostic Untrained Network Priors for Image Reconstruction with Frequency Regularization

Yilin Liu · Yunkui Pang · Jiang Li · Yong Chen · Pew-Thian Yap

# 233
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Untrained networks inspired by the deep image prior have shown promising capabilities in recovering a high-quality image from noisy or partial measurements, without requiring training data. Their success has been widely attributed to the spectral bias induced by suitable network architectures, which acts as an implicit regularizer. However, applications of such network-based priors often entail superfluous architectural decisions, overfitting risks, and slow optimization, all of which hinder their practicality. In this work, we propose efficient, architecture-agnostic methods for more direct frequency control over network priors: 1) constraining the bandwidth of the white-noise input, 2) controlling the bandwidth of the interpolation-based upsamplers, and 3) regularizing the Lipschitz constants of the layers. We show that even with just one extra line of code, the overfitting issues in underperforming architectures can be alleviated such that their performance gaps with their high-performing counterparts can be largely closed despite their distinct configurations, mitigating the need for architecture tuning. This makes it possible to employ a more compact model to achieve performance similar or superior to that of larger models, with greater efficiency. Our regularized network priors compare favorably with current supervised and self-supervised methods on MRI reconstruction and image inpainting tasks, serving as a stronger zero-shot baseline reconstructor. Our code will be made publicly available.
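The first of the three controls, constraining the bandwidth of the white-noise input, lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' released code: it low-pass filters the standard white-noise input of an untrained network with an ideal mask in the Fourier domain. The function name `bandlimited_noise` and the `cutoff_ratio` parameter are illustrative assumptions, not names from the paper.

```python
import torch

def bandlimited_noise(shape, cutoff_ratio=0.25, device="cpu"):
    """Sample white noise and zero out its high spatial frequencies.

    `cutoff_ratio` is a hypothetical knob: the fraction of each spatial
    frequency axis that is kept; everything above it is removed with an
    ideal low-pass mask.
    """
    z = torch.randn(shape, device=device)                    # standard white-noise input
    Z = torch.fft.fftshift(torch.fft.fft2(z), dim=(-2, -1))  # centered 2-D spectrum
    h, w = shape[-2], shape[-1]
    fy = torch.linspace(-0.5, 0.5, h, device=device).view(-1, 1)
    fx = torch.linspace(-0.5, 0.5, w, device=device).view(1, -1)
    keep = 0.5 * cutoff_ratio                                # half-width of the pass band
    mask = ((fy.abs() <= keep) & (fx.abs() <= keep)).float()
    Z = Z * mask                                             # suppress high frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(Z, dim=(-2, -1))).real

# Feed the band-limited noise to the untrained network instead of raw noise:
z = bandlimited_noise((1, 32, 256, 256), cutoff_ratio=0.25)
```

In spirit this matches the abstract's "one extra line of code" claim: the only change to a deep-image-prior pipeline is wrapping the noise sampling in a low-pass filter, leaving the architecture itself untouched.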
