A Perception-driven Hybrid Decomposition for Multi-layer Accommodative Displays

Hyeonseung Yu1, Mojtaba Bemana1, Marek Wernikowski2, Michał Chwesiuk2, Okan Tarhan Tursun1, Gurprit Singh1, Karol Myszkowski1, Radosław Mantiuk2, Hans-Peter Seidel1, Piotr Didyk1,3
1Max Planck Institute for Informatics, Saarbrücken; 2West Pomeranian University of Technology, Szczecin; 3Università della Svizzera italiana, Lugano
In IEEE VR 2019 (Conference on Virtual Reality and 3D User Interfaces)
We built a two-plane VR display to test the rendering strategy. The schematic and a photograph of the setup are shown (BS: beam splitter, ET: eye tracker). For each eye, images from two 2560 × 1440 LCD displays (Topfoison TF60010A) are combined with a beam splitter (Edmund Optics #64-408) and magnified with an achromatic lens (Thorlabs AC508-080-A). Eye trackers (Pupil Labs) are placed right behind the two lenses. The optical system for the right eye is mounted on a linear stage for adjusting the interpupillary distance. The dioptric distances to the front and back virtual planes are set to 2.0 D and 1.4 D, respectively.
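
For a two-plane setup like this, the depth-weighted linear blending discussed in the abstract below reduces to a per-pixel interpolation in dioptric space between the two plane distances. The following is a minimal Python sketch under that assumption; the function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

# Plane distances from the setup described above.
FRONT_D = 2.0  # front virtual plane, in diopters (= 0.5 m)
BACK_D = 1.4   # back virtual plane, in diopters (~0.71 m)

def linear_blend_weights(depth_d):
    """Standard depth-weighted linear blending for a two-plane display.

    depth_d: array of per-pixel scene depths in diopters.
    Returns (w_front, w_back), per-pixel weights summing to 1; pixels
    outside the [BACK_D, FRONT_D] range are clamped to the nearer plane.
    """
    t = (depth_d - BACK_D) / (FRONT_D - BACK_D)
    w_front = np.clip(t, 0.0, 1.0)
    return w_front, 1.0 - w_front

# Example: a pixel at 1.7 D contributes equally to both planes.
w_f, w_b = linear_blend_weights(np.array([1.7]))
print(w_f, w_b)  # [0.5] [0.5]
```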

Abstract

Multi-focal-plane and multi-layered light-field displays are promising solutions for addressing all visual cues observed in the real world. Unfortunately, these devices usually require expensive optimizations to compute a suitable decomposition of the input light field or focal stack to drive the individual display layers. Although these methods provide a near-correct image reconstruction, their significant computational cost prevents real-time applications. A simple alternative is a linear blending strategy that decomposes a single 2D image using depth information. This method provides real-time performance but generates inaccurate results at occlusion boundaries and on glossy surfaces. This paper proposes a perception-based hybrid decomposition technique that combines the advantages of the above strategies, achieving both real-time performance and high-fidelity results. The key idea is to apply the expensive optimization only in regions where it is perceptually superior, e.g., at depth discontinuities in the fovea, and to fall back to the less costly linear blending elsewhere. We present a complete, perception-informed analysis and a model that locally determine which of the two strategies should be applied. The prediction is then used by our new synthesis method, which performs the image decomposition. The results are analyzed and validated in user experiments on a custom multi-plane display.
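
As a concrete illustration of the hybrid strategy, the sketch below shows one plausible way to build the per-pixel mask that routes foveated depth discontinuities to the expensive optimization and everything else to linear blending. The gradient threshold, foveal radius, and all identifiers are hypothetical placeholders; the paper's perceptual model is considerably more detailed than this heuristic.

```python
import numpy as np
from scipy import ndimage

def hybrid_mask(depth_d, gaze_xy, foveal_radius_px, edge_thresh_d=0.3):
    """Mark pixels that warrant the expensive decomposition.

    depth_d:          2D array of scene depths in diopters
    gaze_xy:          (x, y) gaze position in pixels from the eye tracker
    foveal_radius_px: radius of the gaze-centered region, in pixels
    edge_thresh_d:    dioptric gradient magnitude counted as a discontinuity
    Returns a boolean mask: True = optimize, False = linear blending.
    """
    # Depth discontinuities: large local dioptric gradient.
    gy, gx = np.gradient(depth_d)
    edges = np.hypot(gx, gy) > edge_thresh_d

    # Foveal region: pixels near the tracked gaze point.
    h, w = depth_d.shape
    yy, xx = np.mgrid[0:h, 0:w]
    foveal = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1]) < foveal_radius_px

    # Dilate slightly so optimized patches blend into linear regions.
    return ndimage.binary_dilation(edges & foveal, iterations=2)
```

In a renderer, such a mask would gate the per-pixel path each frame: the optimized decomposition where it is True, depth-weighted blending elsewhere.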

Material

MPI Project homepage
Paper (preprint)
Supplemental document
Video (137MB, mp4)
