Super Resolution for Humans
SIGGRAPH 2025 (Poster)

Our method achieves perceptually lossless acceleration of neural-network-based SR

Abstract

Super-resolution (SR) is crucial for delivering high-quality content at lower bandwidths and supporting modern display demands in VR and AR. Unfortunately, state-of-the-art neural network SR methods remain computationally expensive. Our key insight is to leverage the limitations of the human visual system (HVS) to selectively allocate computational resources, such that perceptually important image regions, identified by our low-level perceptual model, are processed by more demanding SR methods, while less critical areas use simpler methods. This approach, inspired by content-aware foveated rendering, optimizes efficiency without sacrificing perceived visual quality. User studies and quantitative results demonstrate that our method achieves a reduction in computational requirements with no perceptible quality loss. The technique is architecture-agnostic and well-suited for VR/AR, where focusing effort on foveal vision offers significant computational savings.
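The selective-allocation idea in the abstract can be illustrated with a small sketch: tiles whose perceptual importance exceeds a threshold are routed to a demanding SR path, while the rest take a cheap path. All names here (branched_sr, the tile size, the threshold, and the stand-in upscalers) are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def upscale_tile_light(tile, scale):
    # Nearest-neighbor upscaling as a stand-in for a cheap SR path.
    return tile.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_tile_heavy(tile, scale):
    # Placeholder for an expensive neural SR network; here it just
    # reuses the light path so the sketch stays self-contained.
    return upscale_tile_light(tile, scale)

def branched_sr(image, importance, scale=4, tile=8, threshold=0.5):
    """Route each tile to the heavy or light SR path based on its mean
    perceptual-importance score (hypothetical thresholding rule)."""
    h, w = image.shape
    out = np.zeros((h * scale, w * scale), dtype=image.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = image[y:y + tile, x:x + tile]
            score = importance[y:y + tile, x:x + tile].mean()
            up = (upscale_tile_heavy if score >= threshold
                  else upscale_tile_light)(block, scale)
            out[y * scale:(y + block.shape[0]) * scale,
                x * scale:(x + block.shape[1]) * scale] = up
    return out
```

In practice the importance map would come from the paper's low-level perceptual model, and the heavy path would be the full SR network; this sketch only shows the dispatch structure.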

Method Predictions

Visual results of our method compared to the original networks. The right column shows the maps produced by our perceptual model.

VR Application

Our model's predictions based on gaze position, with X4 super-resolution. The first column shows the original image and its corresponding quality map. In the remaining columns, the top row shows the eccentricity map (in degrees) and the bottom row the corresponding quality map.
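An eccentricity map like the one in this figure can be sketched as the angular distance of each pixel from the gaze point. The function name, the fixed pixels-per-degree value, and the small-angle conversion below are all illustrative assumptions; the actual mapping depends on the headset's optics.

```python
import numpy as np

def eccentricity_map(h, w, gaze_xy, ppd=30.0):
    """Angular eccentricity (degrees) of each pixel from the gaze point,
    assuming a fixed pixels-per-degree factor `ppd` (hypothetical value;
    a real VR setup would derive it from display geometry)."""
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    dist_px = np.hypot(xs - gx, ys - gy)
    # Small-angle approximation: degrees ~= pixels / pixels-per-degree.
    return dist_px / ppd
```

A map like this could then be combined with the perceptual quality map to decide how much SR effort each region of the periphery receives.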

User study results

Results of our user study (15 subjects) for the network-branching application, using 24 natural images.



Results of our subjective study (9 participants) for the network channel-depth application.

Additional Results

Citation