UCMNet: Uncertainty-Aware Context Memory Network for
Under-Display Camera Image Restoration

VILAB, Hanyang University · Agency for Defense Development (ADD)
CVPR 2026

✨TL;DR: UCMNet performs uncertainty-aware adaptive processing to restore high-frequency details in regions with varying degradations.✨

Under-Display Camera Image Restoration Results

Drag the slider on each image to compare Input (left) vs Restored (right).

[Interactive image sliders: three examples each from the TOLED, POLED, and SYNTH benchmarks, with the degraded Input on the left and the Restored output on the right.]

Comparison with Previous Methods

Drag the slider to compare previous methods (left) vs UCMNet (right).

[Interactive image sliders: TOLED and SYNTH examples comparing BNUDC, FSI, and DAGF (left) against UCMNet (right).]

Abstract

Under-display cameras (UDCs) enable full-screen designs by placing the imaging sensor beneath the display. However, light diffraction and scattering through the display layers introduce complex, spatially varying degradations that severely attenuate high-frequency details. Existing PSF-based physical modeling techniques and frequency-separation networks reconstruct low-frequency structures and maintain overall color consistency well, but they still struggle to recover fine details under such complex, spatially varying degradation. To address this, we propose a lightweight Uncertainty-aware Context-Memory Network (UCMNet) for UDC image restoration. Unlike previous methods that apply uniform restoration, UCMNet performs uncertainty-aware adaptive processing to restore high-frequency details in regions with varying degradations. The estimated uncertainty maps, learned through an uncertainty-driven loss, quantify the spatial uncertainty induced by diffraction and scattering, and guide the Memory Bank to retrieve region-adaptive context from the Context Bank. This enables effective modeling of the non-uniform degradation characteristics inherent to UDC imaging. Leveraging this uncertainty as a prior, UCMNet achieves state-of-the-art performance on multiple benchmarks with 30% fewer parameters than previous models.
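The abstract describes two mechanisms: an uncertainty-driven loss that produces per-pixel uncertainty maps, and uncertainty-guided retrieval of region-adaptive context from a learned bank. The sketch below is a generic illustration of how such components are commonly wired, under assumed formulations (a heteroscedastic-style loss and soft key-based retrieval); it is not UCMNet's actual implementation, and all names (`uncertainty_loss`, `retrieve_context`, `context_bank`) are hypothetical.

```python
import numpy as np

def uncertainty_loss(pred, target, log_sigma):
    """A common 'uncertainty-driven' training objective (heteroscedastic
    style): pixels the network flags as uncertain (large sigma) are
    down-weighted, while the log-sigma penalty keeps sigma from growing
    without bound. Illustrative only; not necessarily UCMNet's exact loss."""
    return np.mean(np.abs(pred - target) * np.exp(-log_sigma) + log_sigma)

def retrieve_context(uncertainty, context_bank):
    """Soft retrieval of a per-pixel context vector from a bank of K learned
    entries, keyed by the scalar uncertainty value. Hypothetical formulation."""
    K, C = context_bank.shape
    keys = np.linspace(0.0, 1.0, K)                   # (K,) assumed keys in [0, 1]
    logits = -((uncertainty[..., None] - keys) ** 2)  # (H, W, K) similarity
    weights = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax
    return weights @ context_bank                     # (H, W, C) blended context

# Toy shapes: a 4x4 uncertainty map, a bank of 8 context vectors of dim 16.
H, W, K, C = 4, 4, 8, 16
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, (H, W))   # predicted uncertainty map
bank = rng.normal(size=(K, C))      # learned context bank
ctx = retrieve_context(u, bank)
print(ctx.shape)  # (4, 4, 16)
```

The design intent this models is that regions with high estimated uncertainty (strong diffraction/scattering) pull different context than well-preserved regions, so restoration adapts spatially instead of applying one uniform transform.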

Overview of the Proposed Method

UCMNet Framework Overview

Overview of the proposed UCMNet framework.

Visual Comparisons with Other Methods

UCMNet TOLED Overview

Visual comparisons on the TOLED dataset.

UCMNet POLED Overview

Visual comparisons on the POLED dataset.

Citation

@InProceedings{kim2026UCMNet,
  title     = {{UCMNet}: Uncertainty-Aware Context Memory Network for Under-Display Camera Image Restoration},
  author    = {Kim, Daehyun and Kim, Youngmin and Oh, Yoon Ju and Kim, Tae Hyun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}