

Poster

Learned HDR Image Compression for Perceptually Optimal Storage and Display

Peibei Cao · Haoyu Chen · Jingzhe Ma · Yu-Chieh Yuan · Zhiyong Xie · Xin Xie · Haiqing Bai · Kede Ma

# 11
Strong Double Blind: this paper was not made available on public preprint services during the review process.
[ Paper PDF ]
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

High dynamic range (HDR) imaging records natural scenes with luminance distributions closer to those of the physical world, at the cost of increased storage requirements and display demands. Consequently, HDR image compression for perceptually optimal storage and display is crucial, yet it remains inadequately addressed. In this work, we take steps towards this goal. Specifically, we learn to compress HDR images into two bitstreams for storage: one is used to generate low dynamic range (LDR) images for display, conditioned on the maximum luminance of the scene, while the other serves as side information to aid HDR image reconstruction from the generated LDR image. To measure the perceptual quality of the displayable LDR image, we employ the normalized Laplacian pyramid distance (NLPD), a perceptual quality metric that supports the use of the input HDR image as reference. To measure the perceptual quality of the reconstructed HDR image, we employ a newly proposed HDR quality metric based on a simple inverse display model that enables high-fidelity dynamic range expansion at all luminance levels. Comprehensive qualitative and quantitative comparisons on various HDR scenes demonstrate the perceptual optimality of our learned HDR image compression system for both displayable LDR images and reconstructed HDR images at all bit rates.
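The two-bitstream idea can be illustrated with a minimal sketch: an HDR image is mapped to a displayable LDR image conditioned on the scene's maximum luminance, and the residual with respect to an inverse display model is kept as side information so that the HDR image can be recovered from the LDR one. This is not the authors' code; the function names (`tone_map`, `inverse_display`) and the simple gamma-style operators are assumptions for illustration only, whereas the paper learns these mappings end to end.

```python
# Toy sketch of the storage/display pipeline described in the abstract.
# Assumptions: a fixed gamma curve stands in for the learned tone-mapping
# and inverse display model; the residual stands in for the side bitstream.
import numpy as np

def tone_map(hdr, max_lum, gamma=2.2):
    """Map linear HDR luminance to an 8-bit LDR image, conditioned on max_lum."""
    ldr = np.clip(hdr / max_lum, 0.0, 1.0) ** (1.0 / gamma)
    return np.round(ldr * 255.0).astype(np.uint8)

def inverse_display(ldr, max_lum, gamma=2.2):
    """Simple inverse display model: expand LDR codes back to linear luminance."""
    return ((ldr.astype(np.float64) / 255.0) ** gamma) * max_lum

# "Encoding": the LDR image is one bitstream; the residual is the side information.
hdr = np.random.rand(64, 64) * 4000.0                      # toy HDR luminance map (cd/m^2)
max_lum = float(hdr.max())
ldr_stream = tone_map(hdr, max_lum)                        # displayable LDR bitstream
side_info = hdr - inverse_display(ldr_stream, max_lum)     # residual kept as side information

# "Decoding": show the LDR image directly, or add the side information back
# onto the expanded LDR image to reconstruct the HDR image.
hdr_rec = inverse_display(ldr_stream, max_lum) + side_info
print("max abs reconstruction error:", np.abs(hdr_rec - hdr).max())
```

In the paper, both streams are entropy coded and the tone-mapping and expansion operators are learned under the NLPD and HDR quality metrics, rather than fixed as above.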
