This post is about asset pipelines for game development. While this topic isn’t specific to VR, it is nonetheless critical to efficient production. Many of the thoughts shared here are subjective, and every team’s needs are different. This post aims to stay relatively high-level, as the goal is to better educate non-technical artists on the value and purpose of pipelines more so than give technical artists a specific technical blueprint to follow. Also, as most tech artists would agree, putting three tech artists in a room is likely to give you at least three different, equally viable solutions to any one problem. Getting into specifics without individual context is more likely to be a distraction than added value.
The simplest way to explain an asset pipeline is to imagine it, predictably, as a series of pipes connecting your artists and DCC (digital content creation) tools to your game. Assets flow through these pipes and into your game, and if the system is designed correctly, that flow is simple, predictable and easy to maintain. If we abstract the entire pipeline itself as one big black-box, then it might look something like this.
Of course, this is a pretty significant simplification. Pipelines vary in complexity from the almost non-existent (maybe a folder structure on a network drive) to the incredibly complex pipelines employed by world-class animation and VFX studios, built and maintained by teams of dedicated pipeline engineers. Usually game pipelines fall somewhere in between, and the complexity of your pipeline is often directly correlated to the size of your art and engineering teams. What level of pipeline complexity is right for you is something we’ll touch on a little later.
Game development is a constant struggle to balance your product’s needs against your available resources. Rarely do those balance themselves naturally. In fact, it's safe to say that in general we all feel under-resourced during development. In light of that, spending time on pipeline development can feel counterintuitive. Dedicating tech-art or engineering resources, both often in short supply, to something the customer will never see feels risky at best. Likewise, it can be hard to justify continuing pipeline development when you could instead be developing new features that impress investors and/or executives at director reviews.
While all that is true, pipeline development is still incredibly important. It’s the foundation your entire development process will rest on, and time spent early in developing an appropriately scoped pipeline will pay dividends later through increased productivity and reduced debugging and rework times.
To give a concrete example, consider an individual artist working on character art for your upcoming VR dungeon crawler. This artist is highly skilled and jumps between all their DCC applications seamlessly, creating dazzling armor sets and terrifying enemies. They’ve spent the last day polishing up their latest creation and just showed it off at art walkarounds. The low poly model is done and textures are baked, so the question is, when will you see this asset in game? Well, depending on the state of your pipeline the answer could be almost anything.
A spartan asset pipeline that relies on significant manual artist intervention could result in the “final mile” for this asset taking days or even longer as an artist tries to create all the final output files by hand. They’ll need to ensure all the files are in the right format, get them into the right place for ingestion, and reconcile any dependencies. Worse, once the asset ends up in-game, it might look or perform differently. Shaders are different, color spaces are different, miscommunications between character art and animation… all these potential pitfalls can cause rework and delays that will devour even the most padded of schedules in short order. A good asset pipeline can help minimize some of these pitfalls up front, and when rework is inevitably needed, reduce the friction to make and rapidly deploy changes. Multiplying out even minimal gains per asset over the course of an entire production, the potential impact of good pipelining becomes much more apparent.
Another example is a title that is on final approach to release. It’s been a bumpy ride, as it often is, but success is in reach. Then, unexpectedly, a performance issue is discovered on one of your target platforms. After exhausting systemic changes to the code, the unavoidable reality hits you. You have to do an optimization pass on all your character art that will affect character geometry, materials and rigs.
Nothing is going to make this sort of large-scale, late-stage undertaking easy, but a solid, efficiently automated pipeline can at least make it manageable. Any time saved from our previous example would be multiplied out many times over in this scenario. Conversely, all those little annoyances that cost 30 minutes to an hour per asset, but were never a “big deal,” can suddenly be crippling at scale. In addition, as artists get tired, mistakes are easier to make and harder to track down. This is especially true when it comes to the mundane, repetitive aspects of asset and dependency management. The more of that work you’ve offloaded onto your pipeline, the more your team will be able to focus on the development challenge at hand and not just wrangling digital logistics.
There are a nearly infinite number of ways to think about and break down pipelines, but for the purposes of this discussion we’ll break our theoretical pipeline into three major pieces: Asset Ingestion, Asset Management, and Tools/Environment. We could just as easily break a pipeline up by discipline (Character Art, Animation, FX, etc), and when designing a pipeline doing so is a useful exercise, but for a high level discussion these three categories provide a useful framing.
Most simple pipelines are primarily concerned with asset ingestion, which means taking assets from your DCC applications and getting them into your game. This is necessary since the source formats your art is stored in aren’t usually the formats you want to pull directly into your engine. Even in a case where your engine can directly import your DCC’s native file format (Unity, for example, can import Maya’s .ma files directly) you usually won’t want to do so without some sort of ingestion processing. Source files, the files artists work in, are organized to simplify the artist’s workflow. There are often extra objects stored in the file for reference, as well as complex node history and other data that’s invaluable during asset creation but useless, or even a source of bugs, if not cleaned out prior to import.
So, in our simple pipeline, you will likely have some sort of export plugin, script or a combination of the two. The simplest form of this would involve an artist selecting the objects they want to include, clicking an export button, and then choosing where to save the exported file so that the game engine picks it up. That sounds great, but there are a multitude of potential failure points lurking in that simple pipeline. Here’s just a short list of things that can go wrong at this stage:
Asset saved in the wrong place
Asset saved over another asset
Artist missed selecting an object, or selected an object they didn’t intend to
Artist exported wrong version of the source file
Artist forgot to save and check in this version of the source file
Artist didn’t delete history on objects prior to export
Countless artist and tech-artist hours are wasted on each of these understandable human errors, and none of them needs to be a problem. None of these steps should rely on human intervention at all. Every asset should have a deterministic location to which it will be exported (and if it doesn’t, that’s a problem in and of itself).
Even if you store multiple assets in a single source file, which can at times be useful, there’s no reason you can’t define export groups within that file that correspond to a deterministic export location. There’s also no reason your export scripts or plugin can’t check with your source control software to ensure the artist is using the most recent version of the source file, and just as importantly, warn them if they haven’t checked that in (you should never end up with an exported asset in your game without being able to determine which source file and version it came from, because all that can be trivially encoded at export time). Finally, any asset sanitation (deleting history, checking for invalid geometry or out of budget material counts, etc) should be taken care of automatically, and common errors should be programmatically detected prior to exporting in the first place. The key thing is to minimize artist intervention. The closer you can get to a big green export button that automatically validates, sanitizes, exports, names, and versions your asset, the better for you, the better for your artists, and the better for your production schedule.
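To make the idea concrete, here’s a minimal sketch of what deterministic export paths and pre-export validation could look like. The root folder, material budget, and asset fields are all hypothetical, and a real exporter would pull this data from your DCC and source control rather than a dictionary:

```python
from pathlib import Path

# Hypothetical project settings; in practice these come from project config.
EXPORT_ROOT = Path("exports")
MATERIAL_BUDGET = 4

def export_path(category: str, asset_name: str) -> Path:
    """Every asset maps to exactly one deterministic export location."""
    return EXPORT_ROOT / category / f"{asset_name}.fbx"

def validate(asset: dict) -> list:
    """Collect common errors before export, instead of discovering them in-game."""
    errors = []
    if asset.get("has_history"):
        errors.append("construction history not deleted")
    if asset.get("material_count", 0) > MATERIAL_BUDGET:
        errors.append("material count over budget")
    if not asset.get("source_checked_in"):
        errors.append("source file has unsubmitted changes")
    return errors
```

A “big green export button” would simply run `validate`, refuse to continue if it returns anything, then sanitize and write the file to `export_path` with the source file and version stamped into its metadata.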
In addition to the direct task of ingesting assets, there’s an entire asset management aspect to a pipeline. Asset management, in this context, is a lot more than just a file structure and naming convention. There are almost always critical dependencies between assets that need to be managed. Even a simple character mesh has dependent textures and materials. It also likely has dependencies on a deformation skeleton. Structuring and managing these dependencies is often another source of bugs and lost time, and a great opportunity for a more advanced pipeline to help. Let’s look at the common example of updating a character mesh for an existing, rigged, animated asset.
Updating a mesh may seem simple, but it touches several other disciplines and assets. Let’s assume we already have a basic pipeline that handles versioning of source files as well as naming and placement of our export files. Now let’s look at possible dependencies. First and most directly, there are likely textures and materials that need to be updated. Those textures may exist in a PSD file in layer groups and need to be split into several simple, single layer bitmap files. UVs may also have changed, requiring a new bake, or some degree of artist intervention. New UVs could also affect visual effects that are leveraging the character mesh and UVs, and so FX artists may need to be notified so they can recheck their work. You also likely need to update your animation rig and notify animators so they can ensure their work still reads correctly on the new mesh.
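One way to keep track of ripple effects like these is an explicit dependency graph that the pipeline can walk whenever an asset changes. The sketch below uses a hard-coded table with made-up asset names purely for illustration; a real pipeline would derive the graph from asset metadata:

```python
# A toy dependency table; real pipelines usually build this from asset metadata.
DEPS = {
    "orc_mesh":     ["orc_textures", "orc_rig"],
    "orc_rig":      ["orc_anims"],
    "orc_textures": [],
    "orc_anims":    [],
}

def affected_by(asset: str) -> set:
    """Walk the dependency graph to find every asset touched by a change."""
    seen, stack = set(), [asset]
    while stack:
        for dep in DEPS.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

With a graph like this, updating `orc_mesh` automatically surfaces the textures, rig and animations that may need attention, instead of relying on someone to remember them.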
Most of what we’ve described here can be handled through good pipeline design. At its simplest level, maintaining a per-asset list of stakeholders who need notification when certain assets are updated should be straightforward to implement, and can save huge amounts of time and headache. If your rigging system is automated (which it absolutely should be) you can also rebuild your rig and render out a range-of-motion test that gets emailed to the character and animation leads for visual validation, as well as inform any animators using the old rig that it’s been updated, all as an automatic consequence of exporting. You can even detect certain changes (like UVs) by comparing them to a previous version directly during export, or you could develop a more complex system to hash data representative of different aspects of an asset to allow granular diffing directly within a DCC. The topic of detecting changes in source art could fill a post of its own, but the important takeaway is to avoid user intervention as much as possible, and where you need human attention on something, try to help direct that attention as best you can. Computers are great at the boring, managerial stuff, so it’s best to let them do it.
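The hashing-plus-notification idea can be sketched in a few lines. The asset and stakeholder names below are invented, and a real system would serialize actual UV or mesh data rather than arbitrary Python values:

```python
import hashlib
import json

def aspect_hash(data) -> str:
    """Hash a canonical serialization of one aspect of an asset (e.g. its UVs)."""
    return hashlib.sha1(json.dumps(data, sort_keys=True).encode()).hexdigest()

# Hypothetical per-asset stakeholder lists, keyed by the aspect that changed.
STAKEHOLDERS = {
    "orc_grunt": {"uvs": ["fx_lead", "anim_lead"]},
}

def who_to_notify(asset: str, aspect: str, old_hash: str, new_hash: str) -> list:
    """If an aspect's hash changed since the last export, return its stakeholders."""
    if old_hash == new_hash:
        return []
    return STAKEHOLDERS.get(asset, {}).get(aspect, [])
```

Storing one hash per aspect (UVs, topology, materials) alongside each exported asset lets the exporter tell not just that something changed, but which kind of change it was, and therefore who cares.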
This is one of the most overlooked parts of pipeline design, but can also be hugely important. “Environment” in this context does not mean level art. This is referring to the “environment” of your users’ workstations. That includes their DCC software, utilities, plugins, and scripts. At a small scale, many developers handle this manually. Artists or IT workers are expected to update software on their own, while plugins and scripts are often stored on a network share for users to access and update as needed. While this can work, it can also lead to bugs and difficulty debugging later on, and creates another potential point of human error.
A better solution is to include the developer environment in your pipeline design. The simplest way to do this is to keep plugins, scripts and simple tools as part of your pipeline. When a user runs their DCC application you can initiate a script to ensure all the associated scripts and plugins are up to date. You could also run a nightly cron job to do the same. You can even do this for the DCC software itself if you want, for example ensuring all your artists are using the same version of Maya. However, due to DCC license managers and other packaging challenges on Windows (which is much more common in games than visual effects) doing so can be a much larger technical challenge than simply syncing plugins and scripts and may not be practical for small to medium sized teams.
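The startup check itself can be very small. This sketch compares installed tool versions against a shared manifest; the tool names and version numbers are made up, and a real implementation would read the manifest from source control or a network share:

```python
# Compare installed plugin versions against a shared manifest at DCC startup.
# Tool names and versions here are hypothetical.
MANIFEST = {"exporter": "2.3.1", "rig_tools": "1.8.0"}

def out_of_date(installed: dict) -> list:
    """Return the tools that are missing or older than the manifest requires."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return [
        name for name, wanted in MANIFEST.items()
        if name not in installed or as_tuple(installed[name]) < as_tuple(wanted)
    ]
```

Anything this check returns gets re-synced before the artist starts working, so every workstation converges on the same toolset without anyone thinking about it.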
In addition to ensuring timely updates and reducing compatibility issues, this sort of system will also help your technical team precisely reproduce an artist’s environment for debugging, which can be hugely beneficial. At its most advanced level, a system like this will allow you to define groups and what version of software and tools each group is on. That way, instead of rolling out a large DCC or plugin update to your entire team and risking large-scale disruption, you could define a test group (or an opt-in beta group) to push updates to immediately, only rolling those updates out wider once they’ve lived in production for a few days.
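Group-based rollout is just a second lookup on top of the manifest idea. A minimal sketch, with invented group names and versions:

```python
# Hypothetical rollout groups: "beta" receives new tool versions before "default".
GROUPS = {
    "default": {"maya": "2024.2", "exporter": "2.3.1"},
    "beta":    {"maya": "2024.2", "exporter": "2.4.0"},
}

def tools_for(user_group: str) -> dict:
    """Resolve which tool versions a user's workstation should sync."""
    return GROUPS.get(user_group, GROUPS["default"])
```

Promoting an update to the whole team then just means copying the beta versions into the default group once they’ve proven stable.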
Whether simple or advanced, the goal of this area of pipeline design is essentially the same as the asset-focused areas: minimize human intervention. If you’re going to make sure an artist is opening the right version of a file, there’s little reason not to similarly make sure they’re opening that file in the right version of their software with the right tools as well.
We hope this post provided a bit of background about what a pipeline is, and why it matters. The next logical question is, what sort of pipeline is right for your team?
Unfortunately, there’s no easy way to answer that question. There are a multitude of important inputs into that equation, from project size, production schedule, technical team size, and art team size, to whether you have multiple projects that can share a single pipeline and magnify its impact, and what sort of pipeline, if any, you’re starting with. That said, if you aren’t a technical artist and you’re now thinking about pipelining and what it can do for you, then that is a great first step.
We recommend watching for friction in your development process and then asking yourself whether that friction comes from a human taking an action that could just as easily be handled by a computer. If the answer is yes, then you likely have an opportunity for a pipeline win. Be careful though: just as too little pipeline can drown your production in human error, too much can drown it in tech-debt. Balance is the key, and the better you understand the high-level aspects of what a pipeline can be, the better you’ll be able to have those critical conversations about what it should be for your specific project and team.
If you have any questions, please let us know in the comments or developer forum.