Last week, we announced a major update to the Gear VR platform. John Carmack, CTO of Oculus, shared his insights on how the team rebuilt Oculus Home from the ground up.
Originally Published: March 31, 2017
A new version of Oculus Home has started going out to Gear VR users. It’s a staged rollout, so it may take a couple weeks for everyone to see it.
It isn’t obvious, but this is much more than just a rewrite of Home—this is the first application developed for a brand-new Oculus runtime system. There are a lot of interesting things to talk about, but the difference in visual quality is the most noticeable change.
For years now, I’ve lamented that the visual quality gap between what we should be able to do on the Gear VR hardware and what users are actually seeing is very large. Most people think “VR just looks that way (bad)” because that’s all they see. I finally have a pretty good example to show what we should get.
There are a bunch of things that combine to deliver the improvement, but “Cylindrical Timewarp Layers” is the new buzzword.
I discussed using a planar Timewarp layer for another application a couple years ago. I mentioned that, with that setup, the center of the screen looked great, but it started aliasing at the edges due to the VR lens distortion compressing the resolution. You had to make a tradeoff: Size for peak resolution in the center, and the edges would have problems. Size for the edges, and you’d have a blurry center. Using a large fraction of the screen also forces you to read in perspective at the edges, which isn’t ideal.
Many people have independently found that putting UI on a floating cylinder surrounding the user in VR feels good, since you have everything facing directly towards you so there’s no reading in perspective. It turns out that there’s a very happy coincidence here: When directly sampled by Timewarp (as opposed to just drawing to an eye buffer), the cylinder curvature almost perfectly counteracts the lens compression!
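To make the geometry concrete, here is a toy model (my own simplification, not the actual runtime math, and it ignores the lens distortion itself): for a flat layer with uniformly spaced texels, the number of texels falling into each degree of view grows toward the edges, while a cylinder centered on the viewer stays uniform all the way around.

```python
import math

def plane_texels_per_degree(theta_deg, center_tpd=13.0):
    """Flat layer at a fixed distance: a view angle theta hits the plane at
    x = d * tan(theta), so dx/dtheta grows as sec^2(theta) and ever more
    texels crowd into each degree as you look toward the edges."""
    return center_tpd / math.cos(math.radians(theta_deg)) ** 2

def cylinder_texels_per_degree(theta_deg, center_tpd=13.0):
    """Cylinder centered on the viewer: arc length per degree is constant,
    so texel density is the same all the way around the ring."""
    return center_tpd

# 30 degrees off-axis, the flat layer packs ~33% more texels per degree
# than at the center -- extra detail that aliases once the lens has
# compressed the screen's own resolution out there. The cylinder doesn't.
plane_edge = plane_texels_per_degree(30.0)      # ~17.3 texels/degree
cyl_edge = cylinder_texels_per_degree(30.0)     # 13.0 texels/degree
```

This is why the flat layer forces the size-vs-aliasing tradeoff described above, and why the cylinder largely escapes it in the horizontal axis.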
This means that you can have an almost constant pixel density in a ring all the way around the view without any compromises. You still have the tradeoff in the other axis, so a short cylinder could have a higher alias-free density than a taller one, but we settled on a fairly conservative 13 pixels per degree for the UI, because avoiding aliasing was more important to me than absolutely maximizing pixel count. Static images with mip maps or proper prefiltering can go up to around 18 pixels per degree if you really want the most detail in the center.
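The arithmetic for sizing a layer texture at a fixed angular density is straightforward; here is a hypothetical helper (the function name and the 90×60 degree panel are my own illustration, not values from the actual Home UI):

```python
import math

def layer_size_px(h_extent_deg, v_extent_deg, ppd=13.0):
    """Texture size (width, height) in pixels for a cylindrical layer
    covering the given angular extent at a uniform pixels-per-degree."""
    return (math.ceil(h_extent_deg * ppd), math.ceil(v_extent_deg * ppd))

# A hypothetical 90 x 60 degree UI panel at the conservative 13 ppd:
panel = layer_size_px(90, 60)             # (1170, 780)

# A full 360-degree ring at the same density:
ring_width = layer_size_px(360, 60)[0]    # 4680 texels around
```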
If a texture is copied pixel-for-pixel perfectly to a layer (be careful not to be off by a half texel!), it will only be resampled once by Timewarp, instead of the two resamplings you normally get from rendering to eye buffers and then Timewarping. How much this matters is very content dependent; it won’t make any difference on blurry imagery, but if there are crisp edges and narrow lines, it can be significant. This matters for text.
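The half-texel warning can be shown with a one-dimensional bilinear sampler (a sketch of GL-style filtering, not the actual compositor code): sampling at texel centers reproduces the texture exactly, while a half-texel offset averages every crisp edge away.

```python
import math

def bilinear_1d(tex, u):
    """Sample a 1-D texture with GL-style bilinear filtering and
    clamp-to-edge. u is in [0, 1]; texel i's center is (i + 0.5) / len(tex)."""
    w = len(tex)
    x = u * w - 0.5                      # continuous texel coordinate
    i = math.floor(x)
    f = x - i                            # blend fraction between neighbors
    a = tex[max(0, min(w - 1, i))]
    b = tex[max(0, min(w - 1, i + 1))]
    return a * (1 - f) + b * f

tex = [0.0, 1.0, 0.0, 1.0]               # hard black/white edges

# Sampling exactly at texel centers: a perfect pixel-for-pixel copy.
centered = [bilinear_1d(tex, (i + 0.5) / 4) for i in range(4)]

# Off by half a texel: the crisp edges collapse into 50% gray.
shifted = [bilinear_1d(tex, i / 4) for i in range(4)]
```

Here `centered` comes back as `[0.0, 1.0, 0.0, 1.0]`, while `shifted` is `[0.0, 0.5, 0.5, 0.5]` — an extra, destructive resampling of exactly the kind the pixel-perfect copy avoids.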
The one sampling that needs to happen is done in sRGB color space when possible. The details are very graphics-geeky, but the takeaway is that this also helps with crisp, high-contrast edges, especially under slight head motion. This matters for text, again. I am sad to report that Android N has broken sRGB framebuffers on Snapdragon for us, so this result isn’t universal.
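The graphics-geeky detail is that filtering should happen in linear light, with the hardware converting through the sRGB transfer function on read and write. A quick sketch using the standard IEC 61966-2-1 transfer functions shows what goes wrong otherwise when a filter straddles a black/white edge:

```python
def srgb_to_linear(c):
    """Decode one sRGB component in [0, 1] to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light component in [0, 1] back to sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Filtering across a black (0.0) / white (1.0) edge:
wrong = (0.0 + 1.0) / 2                  # naive average in gamma space: 0.5
right = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)

# The naive 0.5 is only ~21% as bright in linear light as the correct
# blend, so high-contrast edges come out visibly too dark under motion.
wrong_brightness = srgb_to_linear(wrong)  # ~0.214
```

The correctly blended value is about 0.735 in sRGB, not 0.5 — which is why sRGB-aware sampling visibly helps crisp, high-contrast edges like text.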
These techniques enable a quality that you couldn’t get with traditional Gear VR rendering, but you still need to put good pixels into the layer texture.
The layer texture is measured in pixels, not abstract floating point units, and designers should think about it in those terms. Every pixel will contribute to four or more pixels on the screen due to filtering, so every pixel matters. While the hardware can obviously do it, I don’t let the designers just scale the layer up and down to adjust the size, because that would give you either aliased or blurry pixels. The pixels are sized correctly. If you want something bigger or smaller, you need to draw it with more or fewer pixels. Ideally everything is mastered at very high resolution offline, then resized appropriately with a high quality filter to exactly the pixel size it will occupy on the layer, which will turn out much better than GPU-only filtering. We still haven’t done this for most of the imagery yet!
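The offline resize step can be as simple as the toy box filter below (a deliberately minimal sketch — real mastering pipelines would use a better kernel such as Lanczos, and an image library rather than nested lists):

```python
def box_downsample(img, factor):
    """Downsample a 2-D grayscale image (list of rows) by an integer
    factor: each output pixel is the mean of a factor x factor block.
    A stand-in for the high-quality offline filter; a GPU bilinear
    fetch would instead read only 4 of the source pixels per output."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 2x2 checkerboard averaged down by 2 gives the true mean, 0.5;
# every source pixel contributed, so nothing aliases away.
small = box_downsample([[0.0, 1.0], [1.0, 0.0]], 2)
```

The point is that the filter consumes every source pixel, so the result lands at exactly the pixel size it will occupy on the layer with no detail dropped or invented.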
Then, there are the design best practices that everyone should have always been doing, like not making the text too small, sticking the gaze cursor depth directly on the UI surface instead of floating above it, putting backgrounds behind almost everything instead of floating text in thin air, etc.