TL;DR: We are introducing a new technique to optimize VR rendering by reprojecting rendered pixels from one eye to the other, then filling in the gaps with an additional rendering pass. In our Unity sample, this idea has proven to work very well for pixel-shader-heavy scenarios, saving more than 20% of GPU cost per frame in our test scene. It is also very easy to integrate into your own Unity projects. The sample is available for direct download on the Unity Asset Store.
Typical virtual reality apps render the scene twice, once from the left eye’s view and once from the right eye’s view. The two rendered images usually look very similar. Intuitively, one would think we could share some pixel rendering work between both eyes, so we implemented a technique called Stereo Shading Reprojection to make pixel sharing possible. Below we’ll provide an overview of this solution, different scenarios for optimization, and integration best practices.
Basic Theory
The basic idea is to avoid rendering pixels twice, saving GPU cost by using information from the depth buffer to reproject the first eye’s rendering result into the second eye’s framebuffer. While this basic solution seems easy to implement, to deliver a comfortable VR experience we have to make sure:
It is still stereoscopically correct, i.e. the sense of depth should be identical to normal rendering
It can recognize pixels not visible in the first eye but visible in the second eye due to slightly different points of view
It has a backup solution for specular surfaces that don’t work well under reprojection
It does not have obvious visual artifacts
And, in order to make this technology practical, it should:
Be easy to integrate into your project
Fit well into a traditional rendering engine, without interfering with other rendering such as transparency passes or post effects
Be easy to turn on and off dynamically, so you don’t have to use it when it isn’t a win
Let’s take a look at how reprojection works. Here is the basic procedure (in a typical forward renderer):
Render left eye: save out the depth and color buffer before the transparency pass
Render right eye: save out the right eye depth (after the right eye depth only pass)
Reprojection pass: using the depth values, reproject the left eye color to the right eye’s position, and generate a pixel culling mask.
Continue the right eye opaque and additive lighting passes, filling in only those pixels not covered by the pixel culling mask.
Reset the pixel culling mask and finish subsequent passes on the right eye.
There are also a few additional important components to highlight.
Reprojection
Instead of reprojecting from the first eye to the second eye directly, we actually handle the reprojection backwards.
First, a fullscreen quad is drawn with the reprojection shader. Inside the shader, the right (second) eye’s depth buffer is used to reconstruct each pixel’s world-space position, which is easily done with the right eye’s inverse view-projection matrix; the pixel is then projected back into the left eye’s frame buffer using the left eye camera’s view-projection matrix. Finally, the corresponding color pixel is fetched and written to the frame buffer.
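To make this concrete, here is a minimal sketch of what the reprojection fragment shader could look like in Unity shader code. The property names (_LeftEyeColor, _RightEyeDepth, _RightEyeInverseVP, _LeftEyeVP) are illustrative rather than the sample’s actual names, and platform details like reversed-z and UV flipping are glossed over:

```hlsl
#include "UnityCG.cginc"

sampler2D _LeftEyeColor;     // left eye color, saved before transparency
sampler2D _RightEyeDepth;    // right eye depth from the depth-only pass
float4x4 _RightEyeInverseVP; // right eye inverse view-projection matrix
float4x4 _LeftEyeVP;         // left eye view-projection matrix

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

float4 frag(v2f i) : SV_Target
{
    // Reconstruct this pixel's world-space position from the right eye depth.
    // NOTE: clip-space z range and UV-y orientation vary by platform;
    // D3D-style conventions are assumed here for clarity.
    float rawDepth = tex2D(_RightEyeDepth, i.uv).r;
    float4 clipPos = float4(i.uv * 2.0 - 1.0, rawDepth, 1.0);
    float4 worldPos = mul(_RightEyeInverseVP, clipPos);
    worldPos /= worldPos.w;

    // Project the world position back into the left eye's frame buffer...
    float4 leftClip = mul(_LeftEyeVP, worldPos);
    float2 leftUV = (leftClip.xy / leftClip.w) * 0.5 + 0.5;

    // ...and pull the corresponding color pixel.
    return tex2D(_LeftEyeColor, leftUV);
}
```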
See the images below. At the end of this reprojection process, the color frame buffer looks something like the image on the left. This image appears correct, but looking a bit closer, there are still visual artifacts on the objects’ edges. The right image shows these artifacts in green. In the next section we’ll discuss how to fill these holes.
Occluded Area Detection
Due to something called “binocular parallax,” our left and right eyes see objects from slightly different positions, which results in some pixels the right eye can see that are simply not available in the left eye’s framebuffer. These pixels are occluded from the left eye’s point of view, which is what causes the edge artifacts. To fix these artifacts, the problematic areas can be re-drawn with correct pixels from the normal right eye camera rendering.
To do this, each reprojected pixel needs to be identified as either a valid-reprojection or a false-reprojection. By masking out the valid-reprojection areas, we know which parts of the image the reprojection handled well and which parts need to be re-rendered.
In the diagram below, the right camera’s pixel O is occluded in the left camera: if you reproject O back into the left eye’s frame buffer, what is stored there is actually pixel X. Pixels O and X have different depth values, whereas a valid-reprojection pixel has the same depth value in both eyes because the two cameras’ view directions are parallel (e.g., pixel A). A pixel is therefore a valid reprojection if the left and right depths are approximately equal.
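In shader terms, the test could look like the sketch below. _LeftEyeDepth, _RightEyeDepth, and _DepthThreshold are hypothetical property names, and the threshold is a tunable parameter:

```hlsl
#include "UnityCG.cginc"

sampler2D _LeftEyeDepth;
sampler2D _RightEyeDepth;
float _DepthThreshold; // tunable; hypothetical property name

// Returns true when the pixel reprojected from rightUV to leftUV is valid.
bool IsValidReprojection(float2 rightUV, float2 leftUV)
{
    // Compare linear eye-space depths: with parallel view directions, the
    // two eyes agree on depth for any correctly reprojected pixel.
    float rightDepth = LinearEyeDepth(tex2D(_RightEyeDepth, rightUV).r);
    float leftDepth  = LinearEyeDepth(tex2D(_LeftEyeDepth,  leftUV).r);

    // Threshold test instead of strict equality, to absorb filtering and
    // floating-point error.
    return abs(rightDepth - leftDepth) < _DepthThreshold;
}
```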
Pixel Culling Mask
Now that we can identify which parts of the image are valid-reprojections, we can mask out these areas to avoid re-rendering them. Most materials work well with reprojection. Mirror or very shiny materials, however, can look wrong since their appearance is very view-dependent. To solve this, our solution gives content creators the ability to disable reprojection on a per-material basis.
Either the depth test or the stencil test can be used for per-pixel culling; the choice depends mainly on the chosen game engine’s architecture.
Stencil rejection: For game engines that have a depth prepass, use the stencil buffer to mask out the valid-reprojection areas. Then, when rendering the opaque pass, enable the stencil test on objects intended for reprojection and disable it on objects that will not be reprojected.
Depth rejection: For a game engine that does not have a depth prepass, the stencil rejection approach may not be the easiest solution. For example, Unity’s depth-only pass is purely a way to generate the depth texture; the opaque lighting pass always starts from a fresh, empty depth buffer. If rendering is skipped on valid-reprojection areas, the depth buffer will remain empty there as well, which creates bugs when rendering transparent materials or non-reprojection-friendly materials. To solve this problem, the following approach can be used (sketched in code after this list):
For non-reprojection-friendly opaque materials, render with alpha = 0 in the left eye camera
When doing the reprojection pass, in addition to the depth comparison, also check the left eye color texture’s alpha channel; if it is 0, treat the pixel as a false-reprojection. Also, since there is no stencil output, the reprojection fullscreen quad is placed at the near camera clip plane with depth writes enabled, so all valid-reprojection areas output a depth value that can cull any later pixels
Render the opaque pass (+additive lighting passes) normally with z-testing enabled
Restore the depth buffer from the depth textures, so that all later transparency passes render correctly
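Below is a minimal sketch of this depth-rejection variant, reusing the hypothetical IsValidReprojection helper and property names from the earlier sketches; ReprojectToLeftEyeUV stands in for the reprojection math shown above. The reprojection quad is assumed to be drawn at the near clip plane with ZWrite On, and the depth-restore pass assumes ZTest Always with color writes masked off (ColorMask 0):

```hlsl
// Reprojection pass: the fullscreen quad sits at the near clip plane with
// ZWrite On, so surviving (valid-reprojection) pixels write a near-plane
// depth that z-culls the later opaque and additive lighting passes.
float4 fragReproject(v2f i) : SV_Target
{
    float2 leftUV = ReprojectToLeftEyeUV(i.uv); // hypothetical helper (first sketch)
    float4 leftColor = tex2D(_LeftEyeColor, leftUV);

    // alpha == 0 marks materials that opted out of reprojection, so treat
    // those pixels as false-reprojections as well.
    if (!IsValidReprojection(i.uv, leftUV) || leftColor.a == 0)
        discard; // writes neither color nor depth; normal right eye passes fill it

    return leftColor;
}

// Depth-restore pass, run after the opaque and additive lighting passes:
// write the saved depth texture back so transparency renders correctly.
float4 fragRestoreDepth(v2f i, out float depth : SV_Depth) : SV_Target
{
    depth = tex2D(_RightEyeDepth, i.uv).r; // raw depth value, written per pixel
    return 0; // color output is ignored with ColorMask 0
}
```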
Note that outputting per-pixel depth from the pixel shader to restore the depth buffer can break the early-z culling optimization on PC hardware. It’s important to do this only after the opaque pass is done, so the penalty is minimized and the opaque pass still benefits from the reprojection.
Conservative Reprojection Filter
If you’ve gotten this far, the reprojection should work. However, there is one more visual artifact that we can’t ignore: edge ghosting. The left side of the following picture shows what edge ghosting looks like. This can be fixed by a method called the Conservative Reprojection Filter, which results in the improvement you see on the right.
Recall that occluded areas are detected by checking whether the reprojected depth matches the source depth; due to reprojection filtering and floating-point errors, a threshold test is used instead of an equality test. This can introduce artifacts where a reprojected pixel is close to the edge of a foreground object. Since the reprojected position can be anywhere between the foreground pixel and the background pixel, simply reducing the threshold won’t work; it would eventually disable the reprojection completely.
However, it is easy to detect this foreground-edge case and mask those areas as false-reprojections, even if they are valid reprojections. Basically, we make the culling more conservative. This can be achieved by using a triangle-shaped filter to take 3 depth samples, as in the diagram below, and selecting only the closest depth value for the depth comparison. This conservative approach won’t hurt the performance gain noticeably, since edges don’t cover a big screen area. In terms of the triangle’s size, a 2-pixel distance from each vertex to the center worked well in our project.
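A sketch of the filter follows, with hypothetical names: take three depth samples arranged in a triangle around the reprojected position and keep the closest one, so any pixel touching a foreground edge fails the depth comparison and gets re-rendered.

```hlsl
#include "UnityCG.cginc"

sampler2D _LeftEyeDepth;
float4 _LeftEyeDepth_TexelSize; // xy = 1/width, 1/height (Unity convention)

// Triangle taps, roughly 2 pixels from each vertex to the center.
static const float2 kTriangleOffsets[3] =
{
    float2( 0.0,  2.0),
    float2(-1.7, -1.0),
    float2( 1.7, -1.0)
};

// Conservative depth sampling: return the closest of the 3 taps.
float ConservativeLeftEyeDepth(float2 leftUV)
{
    float closest = 1e6; // linear eye depth; smaller = closer to the camera
    for (int k = 0; k < 3; k++)
    {
        float2 uv = leftUV + kTriangleOffsets[k] * _LeftEyeDepth_TexelSize.xy;
        closest = min(closest, LinearEyeDepth(tex2D(_LeftEyeDepth, uv).r));
    }
    return closest; // compared against the right eye depth as before
}
```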
Performance Results
To put this all to the test, we implemented the stereo shading reprojection idea in Unity through some simple command buffers and shaders, and it worked very well for pixel-shader-heavy scenes. In our test scene, we intentionally exaggerated the pixel cost by adding multiple dynamic lights. Here is some performance data for the following scene running on a GTX 970:
We can see the opaque pass cost drops from 4.6ms to 3.4ms, about a 26% saving, and this includes all the overhead introduced by the reprojection (about 0.7ms in this case). In total, the whole frame’s GPU cost drops from 5.6ms to 4.4ms, still a ~21% saving. Depending on the chosen MSAA level and framebuffer format, the reprojection overhead can vary with the cost of the MSAA resolve (0.5ms - 1.2ms). Because the reprojection overhead is a constant cost, the more expensive the opaque shading pass, the better the percentage saving. We also profiled on an AMD R290 and a GTX 1080 and observed similar savings. We expect the constant reprojection overhead to become less significant as GPUs get faster.
Limitations
Stereo reprojection may not work for all cases, and has some limitations:
The initial implementation is specific to Unity, but it shouldn’t be hard to integrate with Unreal and other engines.
This is a pure pixel shading GPU optimization. If the shaders are very simple (only one texture fetch), it is likely this optimization won’t help, as the reprojection overhead can be 0.4ms - 1.0ms on a GTX 970-class GPU.
For mobile VR, depth buffer referencing and resolving are slow due to mobile GPU architecture, which adds considerable overhead on top, so we don’t encourage trying this idea on mobile hardware.
The optimization requires both eye cameras to be rendered sequentially, so it is not compatible with optimizations that issue one draw call for both eyes (for example, Unity’s Single-Pass stereo rendering or Multi-View in OpenGL).
For reprojected pixels, this process only shades them from one eye’s point of view, which is not correct for highly view-dependent effects like water or mirror materials. It also won’t work for materials using fake depth information, like parallax occlusion mapping; for those cases, we provide a mechanism to turn off reprojection.
With the depth rejection approach, as used in our Unity implementation, the depth buffer needs to be restored after the opaque lighting pass. This can potentially hurt the transparency pass’s depth culling efficiency.
Notes on Unity Integration
To integrate the Unity implementation into your project, simply attach the StereoReprojectionPass.cs script to your main camera object. If there are highly specular materials that should not be reprojected, modify their shaders to output alpha = 0. An example of this is provided in the Unity sample project for reference.
Depending on your scene’s pixel shading complexity, reprojection might not be beneficial for very simple environments; you can turn it off from script on any frame.
The implementation is designed for the Unity forward renderer.
If you have any custom opaque shaders that use the alpha channel, they can conflict with reprojection, since alpha = 0 is used to mark materials that should not be reprojected (a minimal sketch of the opt-out follows these notes).
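For reference, here is what the per-material opt-out could look like. ShadeMySurface stands in for your material’s normal shading, and _IsLeftEyePass is a hypothetical flag set from script when the left eye renders; neither name comes from the sample project:

```hlsl
float _IsLeftEyePass; // hypothetical flag, set from script per eye

// Per-material reprojection opt-out: the left eye render outputs alpha = 0
// so the reprojection pass treats these pixels as false-reprojections and
// the right eye re-renders them with its own view-dependent shading.
float4 frag(v2f i) : SV_Target
{
    float4 color = ShadeMySurface(i); // your material's usual shading (hypothetical helper)
    if (_IsLeftEyePass > 0.5)
        color.a = 0; // mark: do not reproject this pixel
    return color;
}
```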
Below is a simple diagram which illustrates the whole pipeline in Unity: