Aylin Görgün

10.08.2022 - 10:16

Is it the real world you are seeing or is it the Research?

June 21st was like Christmas in June for VR enthusiasts like me. Meta shared its progress on its ongoing display research projects. Even though I'm not sure whether this was a statement aimed at us enthusiasts or a publicly traded company explaining its current work to its shareholders, I was thrilled to learn about their approach to tackling these hard problems. And as I read through this amazing dev-log-ish write-up, I felt impatient for what the future holds for us.

TL;DR: Creating a visually convincing, aesthetically pleasing virtual reality headset at an accessible price point is hard and will take time, but fortunately there is a lab full of people working on it.
They also reinterpreted the meaning of the "time machine". In their words, a time machine is far from consumer-ready: it is built to probe what might be possible with years of research and development. That definition inspired me, since it gives us a glimpse of the future, and with the current prototypes things seem achievable.

They showed us four headset prototypes, each targeting a different crucial problem: the Half Dome series, Butterscotch, Starburst, and Holocake. The fifth, Mirror Lake, is still only a concept; it combines the best features of the previous four, drawing on technology Meta has incubated over the past seven years.

Mark Zuckerberg and Douglas Lanman, head of Display Systems Research, set the bar at passing the visual Turing test. The regular Turing test was designed to test the intelligence of a computer: the machine passes if a person cannot distinguish it from another human being. The visual Turing test asks our human eyes whether they are seeing a real or a virtual world. It is a relatively subjective test, and no VR technology can pass it today.

Meta's Display Systems Research focuses on four features to be able to pass the visual Turing test:

- Varifocal technology that adjusts the focus of the display based on what we're looking at
- Resolution that approaches and ultimately exceeds our human vision
- Distortion correction to help address optical aberrations
- And high dynamic range (HDR) technology that expands the range of color, brightness, and contrast you can experience in VR

It took me a few minutes to understand that these are not a new-generation Quest. They are all prototypes for a good reason. Lanman notes the complexity of the task: “Designing and building headsets that incorporate that collection of technologies is difficult and time-consuming work because, with headset displays, all technical systems are interconnected. Everything competes for that same size, weight, power, and cost budget, while also needing to fit into a compact, wearable form factor.” And it's not just a matter of squeezing all the technology into a tight budget: each element of the stack must also be compatible with all the others.

    Half Dome Series 

    Vergence Accommodation Conflict (VAC) is a well-known phenomenon in VR. VAC occurs when binocular disparity, which drives the eyes to converge, disagrees with the distance at which the eyes are focusing (accommodation)^([1]). In simpler terms, it is one of the things that makes you feel nauseous when you are wearing a VR headset. To solve this issue, Reality Labs is working on varifocal technology, which can adjust the focus of the display based on what you're looking at. As we hear, with the first prototypes weight and aesthetics weren't the only issues: the headset was very loud due to its moving mechanical parts. With the HD3, they ditched the motors and replaced them with multiple liquid crystal lenses. Why isn't this already hitting our shelves? Because Meta's consulting CTO, John Carmack, points out two things: it is costly, and an imperfect varifocal system may not be that much better than what we have now. Why is making this real important? Lanman has a great answer, which he discussed in his detailed EI 2020 plenary presentation, Quality Screen Time: Leveraging Computational Displays for Spatial Computing: because “VR is not a television, it is a great personal action space.”
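    To make the mismatch concrete, here is a minimal back-of-the-envelope sketch in Python; the fixed 1.5 m focal plane and the test distances are my own illustrative assumptions, not figures from Meta. It compares where vergence tells the eyes to focus with where a conventional headset's optics actually focus, in diopters (1 / distance in meters).

```python
def accommodation_demand_diopters(distance_m: float) -> float:
    """Accommodation demand in diopters is simply 1 / distance in meters."""
    return 1.0 / distance_m

FIXED_FOCUS_M = 1.5  # assumed fixed focal plane of a conventional headset

for d in (0.3, 0.5, 1.5, 5.0):
    wanted = accommodation_demand_diopters(d)               # where vergence points
    offered = accommodation_demand_diopters(FIXED_FOCUS_M)  # where the optics focus
    print(f"virtual object at {d} m: eyes want {wanted:.2f} D, "
          f"display offers {offered:.2f} D (conflict of {abs(wanted - offered):.2f} D)")
```

    The conflict vanishes only when the virtual object happens to sit at the display's fixed focal distance, which is exactly what a varifocal system tries to guarantee for whatever you are looking at.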


    Four iterations of the Half Dome prototype. Credit: Meta/ Reality Labs


    Butterscotch

    Image clarity is also crucial for passing the visual Turing test. The human eye's retinal resolution is around 60 pixels per degree (ppd), and Butterscotch aims for that and beyond. My understanding from their paper is that it is even possible to go beyond retinal resolution, and some existing screens can already achieve that. When we compare the views side by side, we can clearly see the difference. I remember feeling quite impressed when I moved from previous-generation headsets like the HTC Vive or Rift to the Quest 2, and I can't wait to see the virtual world through Butterscotch's lenses.
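    To put 60 ppd in perspective, here is a quick sketch with my own rough numbers (the ~100-degree per-eye field of view and the ~20 ppd figure for current headsets are illustrative assumptions, not Meta's specs): the pixel count a panel needs grows linearly with both angular resolution and field of view.

```python
def per_eye_pixels(ppd: int, fov_h_deg: int, fov_v_deg: int) -> tuple[int, int]:
    """Per-eye pixel count, assuming pixels are spread uniformly across the
    field of view (real lenses are not uniform, so treat this as a rough bound)."""
    return ppd * fov_h_deg, ppd * fov_v_deg

for label, ppd in (("current headset, ~20 ppd", 20), ("retinal target, 60 ppd", 60)):
    w, h = per_eye_pixels(ppd, fov_h_deg=100, fov_v_deg=100)
    print(f"{label}: roughly {w} x {h} pixels per eye")
```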



    Butterscotch’s clarity in comparison to the Rift and Quest 2. Image credit: Meta/ Reality Labs


    DSR points out that image quality is about more than resolution: distortion matters just as much. Today's VR headsets rely on software to warp images and reduce some of the optical aberrations. On its own this method is not enough, because the corrections it achieves are static while the distortion of the virtual image is dynamic; once varifocal optics start changing focus, a fixed warp is clearly insufficient. So, in addition to the Butterscotch efforts, Meta created a testbed for eye-tracked distortion correction, and they will talk more about this at SIGGRAPH 2022.
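    As a rough illustration of what a static software correction looks like (my own sketch, not Meta's actual pipeline), a common approach is a radial polynomial pre-warp whose coefficients are fixed at calibration time; an eye-tracked correction would instead update those coefficients every frame based on gaze and, with varifocal optics, on the current focal state.

```python
import numpy as np

def radial_prewarp(xy: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """Radially warp normalized image coordinates to counteract lens distortion.
    k1 and k2 are lens-specific coefficients; here they are constants, which is
    exactly the "static" limitation described above."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

# Hypothetical coefficients, for illustration only.
samples = np.array([[0.0, 0.0], [0.5, 0.0], [0.7, 0.7]])
print(radial_prewarp(samples, k1=-0.25, k2=0.05))
```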

    Starburst

    Apparently, high dynamic range (HDR) is the technology most consistently linked to an increased sense of realism and depth^([2]). That makes it an important feature for Meta in order to pass the visual Turing test, and Starburst is their attempt to bring this tech to VR.

    The technology itself is not new: it expands the range of contrast and color, and even our favorite streaming services already serve richer visuals if we have an HDR-enabled display. Dolby Labs research^([3]) created an experimental setup with a peak of 20,000 nits; according to them, the optimal brightness is around 10,000 nits. Compared to the Quest's peak of roughly 100 nits, that is very far ahead.
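    To get a feel for those numbers, here is a tiny bit of arithmetic of my own (the nit figures are the ones quoted above): display folks often talk in photographic stops, where each stop is a doubling of luminance.

```python
import math

def stops_between(low_nits: float, high_nits: float) -> float:
    """Number of photographic stops (doublings of luminance) between two levels."""
    return math.log2(high_nits / low_nits)

quest_peak = 100        # approximate Quest peak brightness, in nits
dolby_optimal = 10_000  # Dolby's reported optimal brightness, in nits

print(f"~{stops_between(quest_peak, dolby_optimal):.1f} stops brighter, "
      f"i.e. a {dolby_optimal // quest_peak}x jump in peak luminance")
```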


    Starburst prototype. Image credit: Meta/ Reality Labs

    Looking at the image above, we can assume that the display is not the only concern when implementing this technology in a VR headset. More brightness requires more power, more power produces more heat, and all of that hardware currently does not fit into a Quest-sized shell. Luckily for us, there are people like Andrew Maimone focusing on reducing the size, weight, and power of existing VR hardware as much as possible. This reminds me of Obadiah's quote from the Iron Man movie.


    Holocake

    Holocake made its first debut in 2020 as a prototype with a limited color gamut. Its approach saves space by replacing traditional refractive lenses with thin holographic lenses. Even though the lens is thin, it still needs empty space between itself and the display panel to focus the image. In the original Holocake, they used polarization-based optical folding^([4]), also known as “pancake” optics, which lets them use that in-between space by having the light traverse it multiple times. The next generation, Holocake 2, is a PC-tethered but fully functional VR headset compatible with existing PC VR titles.
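    As a rough way to picture the space savings (my own simplification with made-up numbers, not figures Meta has published): in a polarization-folded "pancake" path the light crosses the lens-to-display gap roughly three times before it exits, so a given optical path length fits into roughly a third of the physical depth.

```python
def physical_gap_mm(optical_path_mm: float, passes: int = 3) -> float:
    """Physical lens-to-display gap needed when light crosses it `passes` times.
    Three passes is a common simplification for polarization-folded optics;
    real designs differ."""
    return optical_path_mm / passes

# Hypothetical 45 mm optical path, purely for illustration.
print(f"unfolded: 45.0 mm of depth, folded: {physical_gap_mm(45.0):.1f} mm")
```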


    Original Holocake featured in 2020. Image credit: Meta/ Reality Labs


      Holocake 2 without its wire. Image credit: Meta/ Reality Labs

    Mirror Lake

    “A ski-goggles-like concept that begins with the base Holocake 2 architecture, then adds in nearly everything that the team has incubated over the last seven years.” It might be a little unfair to say it, but this isn't the first time we've heard of a ski-goggles-like concept; then again, Meta shared its recent progress with us and Apple didn't.

    To sum things up, Meta is spending a lot of its resources trying to pave the way for future VR headsets. They are focusing on four fundamental problems: enabling correct depth focus with varifocal tech, pushing resolution toward and beyond human vision, correcting distortion to reduce optical aberrations, and adding HDR for more realistic visuals. And of course, they are trying to fit all that new technology into an aesthetically pleasing shell while keeping it accessible for end users. Since I've just tried to summarize Display Systems Research's work, I think it's appropriate to close with Lanman's inspiring quote: “In any case, whatever path we take, our team is certain that passing the visual Turing test is our destination, and that nothing in physics prevents us from getting there. Over the last seven years, we’ve glimpsed that future, and we remain fully committed to finding a practical path to a truly visually realistic metaverse.”

    References:

