VR Unity Sample for Developing High Quality Visuals on Quest: How to Get Started + Tips from Arabian Art Studios
Elie Arabian, Matt Ostgard
Today we hear from the team at Arabian Art Studios, who collectively have over 27 years of game industry experience at studios like Westwood Studios, Electronic Arts, Petroglyph Games and more. The core team members are Elie Arabian, Matt Ostgard, Phill Belanger, John Sleeper and Derek Arabian, and they recently published a new Unity VR game sample to help developers leverage their development pipeline. Today they provide an overview of the sample, steps to get started, and a few tips along the way. As always, feel free to jump into the comments if you have any questions.
We at Arabian Art Studios started creating VR games for the Gear VR in 2016, looking to achieve the best visuals we could under relatively significant system restrictions. The minimum hardware spec was to run on the Samsung Note 5. Knowing this, we spent time in pre-production running performance tests for maximum draw calls, sustainable vertex/polygon counts, baked lighting vs dynamic lighting, skinned mesh performance and more. Based on our tests, we developed a pipeline that would enable the best visuals possible for our game.
Our first demo title to use our art pipeline was Cursed Sanctum (shown in the first set of images below), released in March of 2017 and still available for the Gear VR and the Oculus Go. Also pictured below, our next title, Myth Hunters, was released on December 5, 2019, and is a full game experience that uses many of the techniques we developed across these two releases.
As time has passed, more of the tools that were once unique to a particular software platform have become increasingly common, especially GPU texture baking, so our methods can now be used in most pipelines. Today we’re excited to share Ogre Smithy and His Workshop, an Oculus Quest demo for Unity which showcases our tools and pipeline. In the following blog post, we provide an overview of the sample, including how the 3D assets were prepared, settings for the Unity pipeline, and a general tutorial of our workflow. It’s our hope that this article and the newly released sample will provide you with the methods, assets, and processes to help you create visually immersive game experiences going forward.
3D assets included in the scene
All assets were modeled and textured using standard practices for generating game art. High detail models were created, followed by an optimized/clean topology mesh, and texturing. Our pipeline consisted of ZBrush, 3DS Max, Photoshop, Substance Painter, and Unity.
Scene creation and pipeline
Our pipeline for creating the scene and texture bakes prior to importing into Unity are based on 3D Studio Max (Autodesk) and V-Ray (Chaos Group). We then used the control and rendering capabilities of V-Ray to bake in all of the lighting and high poly detail onto the low poly, in-game assets. With the Ogre demo, we also used Substance Painter to do some of the asset processing and baking of the high poly detail onto the low poly assets.
For this process to work, the high poly and low poly assets must share the same coordinates and sit on top of each other. The screenshot above shows them separated to showcase the level of geometry detail in the high poly asset. For scene construction, the models are always moved together in unison as one asset.
Preparing the assets for use in 3DS Max
The first step is to get the base geometry in 3DS Max applied with V-Ray materials. We export the textures from Substance Painter using the “Vray” Config setting.
Once the textures are exported, they need to be assigned to the V-Ray shader in 3DS Max and applied to the bake object. The material type must be switched to VRayMtl. Every texture type exported from Substance has a similarly named slot in the VRayMtl. For bump mapping, make sure to use the “VRayNormalMap” type. We used both high poly and low poly bake objects in the demo, so which of these to use is really a decision to be made at production time. With a scene relying completely on dense, high poly bake objects, 3DS Max files tend to get huge, growing into the gigabytes. Depending on the scene, the number of unique assets, the size and density of each detail, file sizes, and the end visual goal should all be taken into account when choosing between the two methods.
Even with the export preset from Substance Painter to V-Ray, we found that the green channel on the normal bump needs to be flipped for proper results.
Scene assembly in 3DS Max
Once all assets are individually prepared and each has a high poly “bake” and a low poly “projection” ready asset, the scene can be assembled. Scene assembly is straightforward: simply place the assets together in 3D space to create the full level environment. We highly recommend using instance or reference methods for duplicating assets to keep the scene file memory footprint to a minimum. It is very important to set up a layering system that allows for easy selection of the high poly and low poly objects in the scene. Eventually, the low poly objects will be combined into a single object to be projected onto. With the Ogre demo, we were able to use the low poly item objects as the high poly base, since much of the normal data was already baked in Substance.
The image below features the desert floor high poly object, separated into a “high” layer. The asset weighs in at 2.3 million vertices.
The desert floor is cut into smaller pieces to balance polygon counts per object and to take advantage of occlusion culling when pieces are not in the camera view.
Note the “Bake” layer asset naming and hierarchy conventions; file organization and asset naming make it easier to follow each step in 3DS Max and Unity.
An important part of performance, along with draw call minimization, is polygon reduction. Once the scene is final, be sure to delete unnecessary polygons to reduce the in-game geometry.
Once the scene is built, preparation for texture baking must occur. In this step, groups of individual game assets are attached to become one unique object. By attaching the objects, the UVW coordinates of the combined new object will overlap. This overlap must be resolved prior to texture baking: simply move the UVW coordinates of the combined objects to the UVW2 channel and expand them to remove the overlap. The number of objects to combine is generally based on per-object poly count requirements and the total draw calls in the final scene.
The image below provides another example of the interior objects combined and coordinates remapped to UVW2:
Once all in game assets are combined and remapped, the scene is ready for texture baking. As a prerequisite, the scene must be lit with V-Ray lights. Visualization at this stage can be done as renders of the high poly scene using GPU rendering.
The Ogre scene demonstrates two methods of texture baking, and it is important to make the distinction here. In 3DS Max, an object can self-project, or one set of objects can be projected onto another object through the projection modifier. To assign texture baking, the render-to-texture option under the Rendering tab is used. Select the object that will be assigned the projected texture. Once the object is selected, the required settings need to be set for the texture bake. For this sample, our process called for the “VRayCompleteMap” output type. See the following tutorial for a more in-depth, step-by-step guide from the makers of V-Ray: Basic Texture Baking with V-Ray.
Self Projection
Important note: do not enable the Projection Mapping field, and be sure to set the proper UVW channel.
Projecting to another object
Enable the “Projection Modifier” field and be sure to set the proper UVW channel. Apply a Push modifier to the bake object above the Projection modifier, and set a value on the Push modifier ample enough to remove the interpenetration between the high poly base geometry and the in-game projection geometry; otherwise, tears will appear in the baked textures.
One final note on baking in 3DS Max with V-Ray: at the time of writing [Dec. 2019], the object to be baked needs a VRayMtl that is completely transparent to render correctly. This may be resolved in a later software update.
Although these specific steps are particular to this combination of software, much of this method can be used in combination with other software pipelines.
Textures are always baked as 48-bit PNG files with an alpha channel. Sometimes the baked textures come out dark and need to be processed in Photoshop. Using 48-bit color allows exposure adjustments without color degradation and banding.
We use a set of actions in Photoshop to process and fill in the blanks of the rendered textures.
This is also when textures can be adjusted globally for aesthetic reasons. Because these steps are applied globally to a scene, no seams are created, and very large or small changes can be made without the need to re-render. See below for an example of a texture bake rendered and processed:
As a final check, the game ready assets should have the baked textures applied as self-illuminated materials. This process validates the baked textures and exposes any errors that need to be resolved. Once everything looks correct, the scene is ready to export to Unity. As a general rule, we apply Reset XForms, align pivots, and check the normals on all objects to make sure everything is in good shape prior to exporting.
Unity Pipeline
With so much of the scene setup and production already taken care of, the Unity process is fairly straightforward. A few of the Unity features mentioned can be learned online and will not be covered step by step.
Importing assets to Unity
With all assets properly named, import the FBX and textures to Unity using the folder structure best suited for the project. Generally, we separate the geometry from the textures and shaders. Since the scene is a single export, it will import into Unity with everything in the same relative location; getting the scene looking good is then a simple matter of creating a material per texture and applying it to each piece of geometry.
Once placed in the scene, the newly imported FBX should be set to static, and the mesh renderer’s “Receive Global Illumination“ drop-down should be set to ”Light Probes“. This tells Unity not to generate a lightmap, but still allows the geometry to influence light probes.
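As a rough illustration only (the class name and menu path below are hypothetical, and the receiveGI property is available starting in Unity 2019.2), a small editor helper can apply these settings to every mesh renderer under the imported scene root:

```csharp
// Hypothetical editor helper: marks the imported scene root's geometry as static
// and sets "Receive Global Illumination" to Light Probes on every MeshRenderer.
using UnityEditor;
using UnityEngine;

public static class ImportedSceneSetup
{
    [MenuItem("Tools/Setup Imported Scene")]
    static void SetupSelectedRoot()
    {
        GameObject root = Selection.activeGameObject; // e.g. the placed FBX instance
        if (root == null) return;

        foreach (MeshRenderer renderer in root.GetComponentsInChildren<MeshRenderer>())
        {
            // Static flags: contribute to GI baking and participate in occlusion culling.
            GameObjectUtility.SetStaticEditorFlags(renderer.gameObject,
                StaticEditorFlags.ContributeGI |
                StaticEditorFlags.OccluderStatic |
                StaticEditorFlags.OccludeeStatic);

            // No lightmap is generated for this renderer, but it still influences light probes.
            renderer.receiveGI = ReceiveGI.LightProbes;
        }
    }
}
```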
For the demo, we used the Lightweight Render Pipeline/Unlit shader in Unity.
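Creating those materials can also be scripted; the sketch below is hypothetical (menu path and output folder are placeholders), and “_BaseMap” is the base texture property used by the LWRP Unlit shader in Unity 2019.2:

```csharp
// Hypothetical editor helper: create an unlit material for a selected baked texture.
using UnityEditor;
using UnityEngine;

public static class BakedMaterialCreator
{
    [MenuItem("Tools/Create Unlit Material From Baked Texture")]
    static void CreateMaterial()
    {
        Texture2D bakedTexture = Selection.activeObject as Texture2D;
        if (bakedTexture == null) return;

        // "Universal Render Pipeline/Unlit" on newer Unity versions.
        var material = new Material(Shader.Find("Lightweight Render Pipeline/Unlit"));
        material.SetTexture("_BaseMap", bakedTexture); // baked V-Ray texture as the base map

        string path = "Assets/Materials/" + bakedTexture.name + ".mat"; // folder is a placeholder
        AssetDatabase.CreateAsset(material, path);
        AssetDatabase.SaveAssets();
    }
}
```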
The next step is to apply the skydome, as it will be important during the baking process.
Once the scene is fully set up with shaders and the environment, the next step is to add the Unity light probes and reflection probes. There are no dynamic Unity lights in this pipeline; these probes are used to bake light and reflection maps within Unity, which the shaders of the dynamic objects in the scene use to receive lighting information.
Light probe and reflection probe example configurations. Above: Exterior; Below: Interior
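For reference, the probe components boil down to roughly the following if created from script; in the demo they are placed and sized by hand in the editor, and the positions and sizes below are placeholders:

```csharp
// Hypothetical sketch: a LightProbeGroup plus a baked ReflectionProbe on one GameObject.
using UnityEngine;
using UnityEngine.Rendering;

public class ProbeSetupExample : MonoBehaviour
{
    void Reset()
    {
        // Light probes store baked lighting that dynamic objects sample at runtime.
        var probeGroup = gameObject.AddComponent<LightProbeGroup>();
        probeGroup.probePositions = new[]
        {
            new Vector3(0f, 1f, 0f),
            new Vector3(2f, 1f, 0f),
            new Vector3(0f, 1f, 2f),
        };

        // A baked reflection probe provides specular/reflection data for the same area.
        var reflectionProbe = gameObject.AddComponent<ReflectionProbe>();
        reflectionProbe.mode = ReflectionProbeMode.Baked;
        reflectionProbe.size = new Vector3(10f, 5f, 10f);
    }
}
```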
After the probes are set up, a quick light bake needs to be done. We found that the base textures without a multiplication factor do not allow enough light to be emitted into the probes. To get around this issue, we do the following before baking (a scripted sketch of these steps follows the list):
Temporarily set the shader of our materials to “Lightweight Render Pipeline/Lit”, keeping in mind that this will likely be called “Universal Render Pipeline/Lit“ if you are using the latest version of Unity.
Turn on the Emission checkbox. (First bake only)
Assign the baked texture to the Base Map and Emission Map parameters. (First bake only)
To brighten the influence on the light probes, set the Emission Color to 2, 2, 2 (RGB). (First bake only)
Save the project (File > Save Project) to store the material changes for later. (First bake only)
Bake the scene by going to the Lighting tab (Window > Lighting Settings) and pressing the “Generate Lighting” button.
Select all of the scene materials and set the shaders back to “Lightweight Render Pipeline/Unlit”.
Bake the reflection probes via the Lighting tab, using the drop-down arrow next to the “Generate Lighting” button. Save the project (File > Save Project). Now the scene art is ready to be used in game.
Once the Light probes are baked, it’s important to revert the materials back to their standard exposure range before baking the reflection probes. If this isn’t done, the dynamic objects in the scene will be too bright in game.
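To make the workflow concrete, here is a minimal, hypothetical editor sketch of the material swap and bake described above; the menu path and the use of the Project-window selection are assumptions, and the shader/property names are the LWRP ones from Unity 2019.2:

```csharp
// Hypothetical editor helper sketching the pre-bake material swap described above.
// In newer Unity versions use "Universal Render Pipeline/Lit" and ".../Unlit".
using UnityEditor;
using UnityEngine;

public static class ProbeBakeHelper
{
    [MenuItem("Tools/Bake Probes With Boosted Emission")]
    static void BakeWithBoostedEmission()
    {
        // As an example, operate on the materials selected in the Project window.
        Material[] materials = Selection.GetFiltered<Material>(SelectionMode.Assets);
        Shader lit = Shader.Find("Lightweight Render Pipeline/Lit");
        Shader unlit = Shader.Find("Lightweight Render Pipeline/Unlit");

        foreach (Material mat in materials)
        {
            mat.shader = lit;
            mat.EnableKeyword("_EMISSION");                              // turn on Emission
            mat.SetTexture("_EmissionMap", mat.GetTexture("_BaseMap"));  // baked texture drives emission
            mat.SetColor("_EmissionColor", new Color(2f, 2f, 2f));       // boost the light fed into the probes
            mat.globalIlluminationFlags = MaterialGlobalIlluminationFlags.BakedEmissive; // include emission in the bake
        }
        AssetDatabase.SaveAssets();

        Lightmapping.Bake(); // equivalent to pressing "Generate Lighting"

        // Revert to the performant unlit shader before baking the reflection probes
        // (done from the "Generate Lighting" drop-down); otherwise dynamic objects
        // end up too bright in game.
        foreach (Material mat in materials)
        {
            mat.shader = unlit;
        }
        AssetDatabase.SaveAssets();
    }
}
```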
Positives of the pipeline
We started building this art pipeline over three years ago, searching for ways to reduce draw calls and create the best visuals possible for the Gear VR and Samsung S5; in both instances we were more than happy with the results.
See below for a few of the other reasons to use this pipeline:
Bake individual or multiple selected assets.
Very robust V-Ray controls to fine-tune renders, benefiting both quality and production time.
Very performant shader for static geometry (one texture lookup).
Allows for high poly counts, which works well in VR, where details with depth have a large visual impact.
Better visual quality for rough surfaces, and it looks surprisingly good on fairly glossy surfaces.
Challenges of the pipeline
We did experience a few challenges with the pipeline as well. See below for a few of these challenges:
Iteration of scene geometry can be daunting and time consuming as the scene gets close to final.
Many steps are required to get good results, leaving room for errors as scenes get larger and more complex.
The process of combining bake objects and remapping UVW coordinates becomes time consuming with more complicated groupings.
Base geometry and bake geometry need to be kept in sync.
Dynamic lights (like a flashlight) used in the environment would not work, or at least would not look good.
High VRAM/Disk usage. It is possible that mega-texturing (aka sparse virtual textures) could work around this.
How to download and get started with the sample project
First, download the Unity project. It requires at least Unity 2019.2, which is the recommended version for using this sample. Open OculusOgrePrototype.unity. To build and deploy the scene, you’ll need to enter your own signing certificate, then build and deploy to Quest as usual. See the following docs page for more information on how to build and deploy Unity apps to Quest.
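If you prefer to script the build, a minimal sketch could look like the following; the keystore values, scene path, and output path are placeholders, and building through File > Build Settings works just as well:

```csharp
// Hypothetical editor build script for the Quest (Android) target.
using UnityEditor;

public static class QuestBuildExample
{
    [MenuItem("Tools/Build Ogre Demo For Quest")]
    static void Build()
    {
        // Point Unity at your own signing certificate (placeholder values; the
        // passwords are normally entered in Player Settings).
        PlayerSettings.Android.keystoreName = "user.keystore";
        PlayerSettings.Android.keyaliasName = "user";

        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/OculusOgrePrototype.unity" }, // scene path is an assumption
            locationPathName = "Builds/OgreDemo.apk",
            target = BuildTarget.Android,
            options = BuildOptions.None,
        };
        BuildPipeline.BuildPlayer(options);
    }
}
```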
Conclusion
Our process served us well for Cursed Sanctum; even after two years, the game still gets rave reviews on the quality of its visuals for the Gear and the Go. We continued on to Myth Hunters just as GPU rendering started to come online for V-Ray NEXT, and we’re happy to say that this dropped our average texture bake times from 4 hours per render to 20 minutes. As we look ahead, we are evaluating the changes in Unity and its implementation of GPU-supported texture baking to see if we can use Unity for more of the scene building and baking process. Also, as VR hardware becomes less restrictive, some of the performance bottlenecks of a few years back will be less relevant, allowing us to modify our processes and make the pipeline better for future games. Thank you for taking the time to read our post; we hope this sample serves you well.
- Elie Arabian and Matt Ostgard