Last April, we announced the release of Asynchronous Spacewarp (ASW) 2.0, which combines the power of ASW 1.0 with Positional Timewarp (PTW) to provide higher-quality and lower-latency VR.
In this blog entry, we build on the
initial ASW 2.0 post to provide more technical details as well as tips on how to make sure your VR application is ASW 2.0-ready. Given the technical nature of this post, be sure to refer to the glossary provided at the end of this article for more info on the Oculus- and VR-specific terminology discussed throughout.
Our Road to PTW
In 2016, once we had Asynchronous Timewarp (ATW) working, it might have made sense to release PTW, except at that time we also had a different solution that was showing promise. That feature was of course ASW, and while ASW can temporally smooth various motion cues such as animation and HMD tracking (including HMD translation), PTW can only temporally correct for HMD tracking. In that sense, ASW would seem like a superset of PTW, but we would soon realize that PTW also had its own benefits that ASW didn't fully address.
In 2017, we introduced Oculus Dash, and along with it, the option for VR apps to submit their depth buffers to help with depth composition using a new layer type called ovrLayerEyeFovDepth. As seen in the screenshot below, the depth buffer is used to apply the X-ray effect when the VR app's content interpenetrates the Oculus Dash content. Simultaneously, our Engine-Integrations Team started to utilize this layer type in Unreal Engine 4 and Unity.

Most VR apps are optimized to run at native HMD refresh rates only when running in isolation. This can lead to dropped app frames when Oculus Dash shows up and pushes the CPU or GPU over its performance budget. In these instances, the Oculus runtime will automatically apply ASW to the VR app. However, since ASW is limited to only one compositor layer at any given time, if Oculus Dash and the VR app are simultaneously failing to run at rate, then we need a way to also smooth Oculus Dash content movement without relying on ASW. This is where PTW made sense to use. As we polished PTW for Oculus Dash, we proceeded to update ASW to play nicely with PTW and further improve our temporal stability. This is how ASW 2.0 was born.
ASW 1.0 vs 2.0
Below is a video comparing ASW 1.0 and 2.0 in action from Robo Recall. The white arrows are debug guides showing the calculated motion vectors per tile, used for content-motion warping by ASW.
The video stills below help illustrate the major differences. With ASW 1.0, notice how the parallax on the bulletin board (due to head-motion) is being compensated only by ASW. With ASW 2.0 now using PTW, the same parallax is registering minimal ASW corrections as it’s mostly being addressed by PTW. As expected, the rotating fan blades are still being corrected by ASW, while the slight corrections on the chair and geometry edges are mostly due to disocclusion and view-dependent shading being caught by ASW motion vectors after PTW.
Combining ASW with PTW
To better understand how ASW works with PTW, we first need to look at the flow of ASW. Here's a high-level breakdown of the steps the Oculus runtime takes to utilize ASW:
- ASW captures textures for previous & current ovrLayerEyeFov frames submitted by the in-focus VR app.
- ASW generates “pre-warp frame” by Timewarping previous frame to use current frame’s pose.
- ASW converts current and pre-warp frame textures to GPU-video-encoder-friendly resources.
- ASW sends both frame textures to GPU-video-encoder for correspondence analysis.
- ASW gathers “motion vectors” from GPU-video-encoder output.
- ASW post-processes and converts motion vectors for frame extrapolation.
- ASW packages up the contents and injects them into the compositor layers as if they came from the VR app.
- Compositor Timewarps and distorts as usual using the ASW-injected ovrLayerEyeFov layer content.
As you can see, steps #2 and #8 rely on Timewarp (TW). In our original implementation of ASW, since we didn't have a depth buffer to use for PTW, the TW-reprojection technique used in those cases was Orientation Timewarp (OTW). However, as more VR apps are starting to provide depth buffers, we are able to leverage that data for PTW. The trick is to make sure the TW-reprojection technique (be it OTW or PTW) used in both steps noted above is of the same kind. This ensures that HMD-movement reprojection is corrected either in ASW or TW, but not in both places at once, as that would lead to visual artifacts. When depth is not available (i.e. the VR app submits an ovrLayerEyeFov instead of an ovrLayerEyeFovDepth), the Oculus runtime automatically reverts to the ASW 1.0 method for that VR app.
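To make that pairing rule concrete, here is a purely illustrative C++ sketch of the selection logic; it is not actual Oculus runtime code, and all names are hypothetical:

// Illustrative only: the reprojection kind is chosen based on the layer type the
// in-focus VR app submits, and the same kind is used for both step #2 (pre-warp)
// and step #8 (compositor Timewarp).
enum class ReprojectionKind { Orientation, Positional };

ReprojectionKind ChooseReprojection(bool appSubmittedDepth) {
  // ovrLayerEyeFovDepth -> PTW (ASW 2.0); ovrLayerEyeFov -> OTW (ASW 1.0 fallback)
  return appSubmittedDepth ? ReprojectionKind::Positional : ReprojectionKind::Orientation;
}

void ProcessAswFrame(bool appSubmittedDepth) {
  ReprojectionKind kind = ChooseReprojection(appSubmittedDepth);
  // Step #2: pre-warp the previous frame to the current pose using 'kind'.
  // Steps #3-#7: motion-vector analysis and extrapolation are unaffected by 'kind'.
  // Step #8: the compositor Timewarps the injected layer with the same 'kind',
  //          so HMD-movement correction happens in exactly one place.
}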
Flavors of PTW
PTW can be implemented in many different ways. The VR-specific nature of our requirements means there are a few very important points to consider:
- Use as few GPU cycles as possible for PTW, allowing the VR app to maximize its GPU usage.
- No need to address significant reprojection deviation from the last view position, as each new VR-app render will be using a new HMD target pose that is very close to the previous one.
- Each reprojection result from PTW will only be visible for a split second (usually < 20 milliseconds) since new images are provided by the VR app at a very high rate.
Those familiar with real-time graphics literature will know of similar techniques such as parallax mapping and parallax-occlusion mapping (aka relief mapping), along with their variants using ray/sphere/cone marching, height-map rasterization and more. In most of these techniques, a shader samples a height-map texture to know how much to offset its texture lookups. The depth buffer used in PTW can also be thought of as a height map rotated to face the camera.
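As a refresher, here is a generic, single-tap parallax-mapping offset written CPU-side in C++ for clarity. This is the textbook technique referenced above, not the Oculus compositor's actual shader, and all names and the scale parameter are illustrative:

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// 'height' is the value sampled from the height map at 'uv', 'viewDirTS' is the
// normalized view direction in tangent space, and 'heightScale' controls the
// strength of the effect.
Vec2 ParallaxOffsetUV(Vec2 uv, float height, Vec3 viewDirTS, float heightScale) {
  // Shift the lookup along the view direction in proportion to the sampled height.
  float parallax = height * heightScale;
  return { uv.x - viewDirTS.x / viewDirTS.z * parallax,
           uv.y - viewDirTS.y / viewDirTS.z * parallax };
}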
Over the years, we experimented with many different techniques for PTW to assess the various trade-offs. Some techniques, such as parallax-occlusion mapping, are more accurate while costing more GPU cycles. The technique we settled on resembles sparse parallax mapping, as it addresses all of the points above. Compared to OTW, the overhead of using our method of PTW is extremely low, while still being good enough to address the judder artifacts seen with OTW due to HMD translation. In most cases, the GPU perf hit from using PTW vs OTW will be lost in frame-timing noise.
Going Deeper on Depth
PTW relies mainly on depth buffers, and while there's plenty of info about depth buffers online, let's dig a bit deeper. Depth buffers play a significant role in real-time rendering, and their internal representations have gotten more complex over time in conjunction with GPU-performance optimizations. However, at its core, a depth buffer is a 2D array of values generated by a GPU while rasterizing a 3D scene, where every element in the buffer stores a depth value for the corresponding color-buffer element. Since depth buffers are usually generated as a by-product of rasterization to be used by the GPU for occlusion culling, the cost of generating one is mostly accounted for. For PTW, once a depth buffer is generated while rendering the VR app, it is expected that the VR app will then submit its contents as part of an ovrLayerEyeFovDepth layer. From that point on, the Oculus runtime compositor handles the rest of the PTW reprojection during the Timewarp & distortion stage.
A depth buffer can hold values in floating-point or normalized-integer formats; however, these raw values do not directly represent the distance of a given pixel. Instead, the depth values are calculated during rasterization using a projection matrix that transforms each vertex, and in turn each pixel, into the final depth value to be stored in memory. The way a projection matrix transforms a linear distance into a depth-buffer value can be thought of as an efficient mapping that dedicates higher precision to elements closer to the viewer.
Different content can call for different mapping schemes and clipping boundaries. For example, in the earlier days of low-precision, integer-based depth formats, the distances of the near and far clipping planes of the camera frustum were a significant source of frustration, while in more recent years, with floating-point formats, projection matrices can map the far-clip plane to infinity. The mapping of linear distance to final depth values is where the precision built into floating-point formats comes in handy. Keep in mind that once vertices are transformed and rasterized into triangles, the elements outside the projected range and frustum are automatically clipped away by the GPU.
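To make the mapping concrete, here is a small sketch using a conventional right-handed, D3D-style projection with a [0, 1] clip range (no reversed depth); these are the textbook formulas, not necessarily the exact matrices any particular engine uses:

float StoredDepthFromDistance(float d, float zNear, float zFar) {
  // Z-related terms of a standard right-handed projection with a [0, 1] clip range
  float p22 = zFar / (zNear - zFar);
  float p23 = (zNear * zFar) / (zNear - zFar);
  // clip.z / clip.w for a point at distance d in front of the camera (view-space z = -d)
  return (p22 * -d + p23) / d;
}

float DistanceFromStoredDepth(float depth, float zNear, float zFar) {
  float p22 = zFar / (zNear - zFar);
  float p23 = (zNear * zFar) / (zNear - zFar);
  return p23 / (depth + p22);  // inverse of the mapping above
}

With zFar much larger than zNear, roughly half of the [0, 1] output range is spent on distances closer than twice the near plane, which is exactly the near-viewer precision bias described above.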
Due to all these considerations, we created helper functions in the Oculus Rift PC SDK for app developers to use when creating their projection matrices, along with enumerations covering the common ways of setting them up. Here's an excerpt from the OVR_CAPI_Util.h file found in the SDK, showing the modifiers provided for specifying a projection matrix. Refer to the PC SDK documentation for a description of each enumeration.
enum ovrProjectionModifier {
  ovrProjection_None = 0x00,              // Right-handed, near < far, D3D-style [0, w] clip range
  ovrProjection_LeftHanded = 0x01,        // The app uses left-handed transformations
  ovrProjection_FarLessThanNear = 0x02,   // Reversed depth; pair with a floating-point depth buffer
  ovrProjection_FarClipAtInfinity = 0x04, // Far clip plane at infinity (requires FarLessThanNear)
  ovrProjection_ClipRangeOpenGL = 0x08    // OpenGL-style [-w, w] clip range instead of D3D's [0, w]
};
For PTW, we are interested in the distance of each pixel to the render camera in our tracking-space units. If a VR app were to only submit a depth buffer and no additional metadata, the Oculus runtime would not have enough information to recalculate the original linear distance of the pixels. By now it should be obvious that the projection matrix used by the VR app is one of the pieces of info needed by the PTW algorithm. However, PTW doesn't need the whole matrix; just the parts that affect the Z and W coordinates of the rendered elements. To extract the necessary components from a projection matrix, the SDK provides a structure that neatly packs this minimum amount of info, ovrTimewarpProjectionDesc, along with a helper function to fill it, ovrTimewarpProjectionDesc_FromProjection. If VR-app developers don't want to convert their own matrix format to the SDK format, they can look at the implementation of that function and trivially extract the necessary components themselves.
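Here is a rough sketch of how these pieces fit together when using the SDK's projection helpers; the helper function name, near/far values, and flag choice are illustrative only:

#include <OVR_CAPI.h>
#include <OVR_CAPI_Util.h>

// Illustrative only: build the projection the app renders with, then hand its
// Z/W-related terms to the runtime through the layer's ProjectionDesc member.
void SetupLayerProjection(ovrFovPort fov, ovrLayerEyeFovDepth& layer) {
  unsigned int projectionFlags = ovrProjection_ClipRangeOpenGL;  // example flag choice

  // Use the same flags for both calls so the extracted terms match the matrix
  // actually used for rendering.
  ovrMatrix4f proj = ovrMatrix4f_Projection(fov, 0.1f /* near */, 1000.0f /* far */, projectionFlags);
  layer.ProjectionDesc = ovrTimewarpProjectionDesc_FromProjection(proj, projectionFlags);

  // Apps with their own matrix format can instead fill ProjectionDesc's
  // Projection22, Projection23 and Projection32 fields directly.
}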
Another piece of data that is needed, and perhaps less obvious, is the world and view scale the VR app uses for its rendering units. In some rendering engines, the world-unit conversion scale is not handled as part of the projection matrix, which requires special attention. Consider a game engine where, say, 1 unit is 1 cm, while in the Oculus Rift PC SDK, tracking values are always treated in units of meters (i.e. 1 unit is 1 meter). When said engine renders a plane that is 4 meters away, the inverse of the projection matrix applied to the depth buffer would generate a distance of 400 units. However, in the PTW algorithm, what we really want is to calculate 4 units. So in this example, the render-scale factor of 0.01 would be provided to the Oculus runtime unless this scale factor is already captured in the projection matrix. The VR app can separately submit this value to the SDK using the ovrViewScaleDesc struct.
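A tiny sketch of the arithmetic from the example above (the names and the centimeter convention are assumptions for this sketch):

// Hypothetical illustration of the unit conversion described above.
float EngineUnitsToMeters(float linearDepthInEngineUnits) {
  const float renderScaleToMeters = 0.01f;  // 1 engine unit == 1 cm == 0.01 m
  return linearDepthInEngineUnits * renderScaleToMeters;
}
// EngineUnitsToMeters(400.0f) == 4.0f: the plane that reconstructs as 400 engine
// units is really 4 meters away in tracking space.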
Known API Limitations
There are some limitations with the current depth-submission API. To name a few:
- The depth and color buffers submitted in the FovDepth layer must use matching resolutions.
- We do not support “color” formats for depth buffers such as OVR_FORMAT_R32_FLOAT.
- OVR_FORMAT_D32_FLOAT without multisampling (i.e. MSAA) is currently the best option (see the sketch below). Other formats can potentially cause a resource copy or resolve in the Oculus runtime.
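For reference, here is a minimal sketch of creating such a depth swap chain for D3D11; the ovrSession, device, and per-eye sizes are assumed to exist, and the helper name is ours:

#include <d3d11.h>
#include <OVR_CAPI_D3D.h>

ovrTextureSwapChain CreateDepthSwapChain(ovrSession session, ID3D11Device* device,
                                         int width, int height) {
  ovrTextureSwapChainDesc desc = {};
  desc.Type = ovrTexture_2D;
  desc.Format = OVR_FORMAT_D32_FLOAT;       // preferred: avoids a runtime-side copy/resolve
  desc.ArraySize = 1;
  desc.Width = width;
  desc.Height = height;
  desc.MipLevels = 1;
  desc.SampleCount = 1;                     // no MSAA, per the note above
  desc.StaticImage = ovrFalse;
  desc.BindFlags = ovrTextureBind_DX_DepthStencil;

  ovrTextureSwapChain depthChain = nullptr;
  if (OVR_FAILURE(ovr_CreateTextureSwapChainDX(session, device, &desc, &depthChain)))
    return nullptr;
  return depthChain;
}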
Summary & Final Thoughts
In order to utilize PTW for a given VR app, the Oculus Rift PC runtime requires the app developer to do the following (a code sketch follows the list):
- Rasterize or copy your depth buffer into an ovrLayerEyeFovDepth layer's DepthTexture swap chain.
- Provide the projection matrix parameters in the layer's ProjectionDesc data member.
- Provide the HmdSpaceToWorldScaleInMeters parameter using the ovrViewScaleDesc struct when submitting layers.
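To tie these together, here is a condensed sketch of a frame submission covering the three steps. It assumes a recent PC SDK (1.17+) where ovrViewScaleDesc carries HmdToEyePose; all helper and variable names are illustrative, and the color/depth swap chains are assumed to be rendered to and committed elsewhere:

#include <OVR_CAPI.h>

ovrResult SubmitFrameWithDepth(ovrSession session, long long frameIndex,
                               ovrTextureSwapChain colorChain[2],
                               ovrTextureSwapChain depthChain[2],
                               const ovrFovPort fov[2], const ovrRecti viewport[2],
                               const ovrPosef renderPose[2], double sensorSampleTime,
                               ovrTimewarpProjectionDesc projectionDesc,
                               const ovrPosef hmdToEyePose[2]) {
  ovrLayerEyeFovDepth layer = {};
  layer.Header.Type = ovrLayerType_EyeFovDepth;
  layer.Header.Flags = 0;
  layer.ProjectionDesc = projectionDesc;          // step 2: Z/W terms of the projection matrix
  layer.SensorSampleTime = sensorSampleTime;
  for (int eye = 0; eye < 2; ++eye) {
    layer.ColorTexture[eye] = colorChain[eye];
    layer.DepthTexture[eye] = depthChain[eye];    // step 1: depth rendered/copied into this chain
    layer.Fov[eye] = fov[eye];
    layer.Viewport[eye] = viewport[eye];
    layer.RenderPose[eye] = renderPose[eye];
  }

  ovrViewScaleDesc viewScale = {};
  viewScale.HmdToEyePose[0] = hmdToEyePose[0];
  viewScale.HmdToEyePose[1] = hmdToEyePose[1];
  viewScale.HmdSpaceToWorldScaleInMeters = 1.0f;  // step 3: this sketch assumes 1 unit == 1 meter

  ovrLayerHeader* layers[] = {&layer.Header};
  return ovr_SubmitFrame(session, frameIndex, &viewScale, layers, 1);
}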
If Unreal Engine 4 or Unity are your tools of choice, then be aware that the latest Oculus engine integrations already provide the necessary facilities to submit the depth buffer to the Oculus Rift PC runtime.
You can also check out our OculusRoomTiny and OculusWorldDemo samples, which have already been updated to show the steps necessary to submit the depth buffer. OculusWorldDemo also provides additional tools in the menu (toggled via the ‘Tab’ key) to see how PTW helps. Simply navigate to the Timewarp menu and increase the frame time to decrease the frame rate, then toggle depth submission on/off under the Layers→Main Layer option.
As usual, should you run into any issues, or have questions, feel free to drop a line in our
forums and we will be sure to help out. Happy coding!
- Volga Aksoy + Dean Beeler
Glossary
VR app: The software application in focus that is submitting rendered content to the Oculus Rift runtime; this content goes through the VR compositor to eventually show up in the Oculus Rift HMD.
TW (Timewarp): An Oculus VR compositor feature in the Oculus Rift runtime, helping reduce latency by reprojecting the VR-app rendered content with a more recent HMD-tracking reading.
OTW (Orientation Timewarp): A version of TW reprojection that only reduces latency for the HMD's rotation (change in orientation/attitude), also referred to as 3-DOF TW. It can be thought of as a method where the VR app's frame content is treated as if it is projected infinitely far away, such that the translation of the HMD has no effect on the reprojection results. OTW is a major improvement in VR comfort as it compensates for the user's precisely-tuned VOR, which is hypersensitive to head (and HMD) rotation.
PTW (Positional Timewarp): A version of TW reprojection that reduces the latency of both rotation and translation (change in position) of the HMD, also referred to as 6-DOF TW. It can be thought of as a superset of OTW where the technique uses the distance of the elements in the VR-app submitted content. The distance info can be deduced in various ways, such as analytically or from rasterized depth. For example, if the compositor knows that the content is projected to a flat plane or cylinder arc (such as a UI quad), then it can calculate the distance for each element of the surface using the analytical plane or cylinder equation. For depth-based PTW, the sampled distance information is gathered from a rasterized depth buffer, provided by the VR app and corresponding to a given color buffer.
STW (Synchronous Timewarp): A method of software scheduling used for TW where the VR compositor applying the final barrel distortion executes synchronously right after the VR app has submitted rendered content. This method was briefly used for the Oculus Rift around the DK2 days before it was replaced by the Asynchronous TW version.
ATW (Asynchronous Timewarp): As opposed to STW, ATW is a version of scheduling used for TW where the VR compositor runs independently of the VR app's frame submission schedule, ideally at the HMD's refresh rate. Its main purpose is to help smooth the visuals by continuously reprojecting the latest content provided by the VR app, even if the VR app is failing to maintain the required frame rate. The scheduling aspect of TW is orthogonal to the method of reprojection (OTW or PTW) employed. ATW helps keep the frame-to-frame HMD latency consistently low.
ASW (Asynchronous Spacewarp): A feature that temporally smooths visuals in the HMD when the VR app is failing to maintain the required frame rate. ASW achieves this by extrapolating the previously submitted frame contents to synthesize a plausible new frame. ASW does not replace ATW, but rather works with it. So when ASW is active, ATW will continue to reduce latency while ASW smooths parts of the visuals that ATW doesn't address.
Motion-To-Photon Latency: The elapsed time from HMD motion to when the rendered image using that motion becomes visible on the display. Sometimes simply referred to as “latency” in real-time applications and VR. Unlike ASW, PTW helps lower HMD motion latency as it executes during the final stage of composition, whereas ASW happens much earlier in the pipeline.
VOR (Vestibulo-Ocular Reflex): The human reflex that links head (neck) movements to countering eye-rotation movements, helping stabilize images on the retina.
VOR gain: The reflexive adjustment of VOR compensation between the neck and eye muscles. This adjustment happens for various reasons, such as wearing glasses with a certain distortion profile, or the extra parallax on closer objects due to eye translation during head rotation. Unlike OTW, PTW helps address VOR-gain fidelity as it corrects for parallax based on distance.