Developer Perspectives: Coatsink - Augmented Empire
Oculus Developer Blog | Posted by Oculus VR | September 8, 2017

Coatsink is the development team behind the VR games Esper 2, A Night Sky, and the recently launched Augmented Empire. Established in 2010 by founding developers Tom Beardsmore and Paul Crabb, Coatsink initially focused on shipping its first console title. With the success of Shu, the studio grew to a team of eight and then launched its first virtual reality title, Esper. Coatsink is now a studio of 50 people building games for VR, console, and PC.

The team recently released their newest VR title and their largest game on Gear VR to date, Augmented Empire. We got a chance to catch up with Tom and Paul about Coatsink's approach to developing Augmented Empire and their secrets for creating a long-form game for mobile VR. With today's developer perspective, let's jump into the imagination of the Coatsink team as they fast-forward players to the year 2058 and pop them into a cyberpunk world where the rich live in luxury, the poor live in squalor, and players orchestrate the revolution.

*               *               *

What was the inspiration behind creating Augmented Empire?

The game’s main inspiration came from turn-based strategy games, but it’s also influenced by classic isometric RPGs. Augmented Empire started as a simple demo of table-top-perspective gameplay that one of our programmers put together while we were finishing Esper 2. From there, we started thinking about how we might approach this kind of game for Gear VR. The ideas spiraled, and we liked the concept so much that we eventually worked it into a prototype, which led to the full game.

Augmented Empire is your biggest game release for the Gear VR platform to date. Was there a specific goal driving the game length?

From the start, our goal was a strong narrative experience. We hired a writer, Jon Davies, specifically to help with the scale of the project. We started with world-building: lore, characters, environments, etc. Our main focus in the beginning was to build the missions at the core of the game's story, and then we moved on to missions less crucial to the overall plot.

The only way a project this large could realistically be handled was by breaking it down into smaller chunks, and the mission-by-mission format of the game helped a lot with this. Prioritizing the most important missions, story-wise, told us which environments to begin building artwork for, which characters to model first, and which game mechanics to begin developing and testing. There were, of course, much bigger aspects of the game that couldn’t be broken down so easily, such as the game’s AI, user interface, difficulty balancing, and save system; those had to be iterated on constantly throughout the project.

How did you develop your Gear VR controller implementation?

We ran into design issues when implementing the Gear VR controller as a pointer. Since it feels natural to point from the waist/chest area, the pointer's origin sat very low relative to the ground level of the game's environments, which made it hard to select individual tiles at such a shallow angle. We tried raising the pointer up to head level, which we already knew worked since we had gaze-based tile/unit selection, but this felt unnatural for the player. No matter where we placed the pointer, it either felt awkward or made it too difficult to select floor tiles and units. In the end, we opted to use the controller as a simple button input and kept tile/unit selection controlled by the headset's gaze vector.
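For readers unfamiliar with the technique, here is a minimal sketch of gaze-based selection with a button confirm in Unity C#. The Tile component and all names are illustrative assumptions, not Coatsink's actual code:

    using UnityEngine;

    // Illustrative stand-in for a selectable floor tile.
    public class Tile : MonoBehaviour
    {
        public void SetHighlighted(bool on) { /* e.g. swap material */ }
    }

    public class GazeTileSelector : MonoBehaviour
    {
        public Camera headCamera;        // the VR headset camera
        public float maxDistance = 50f;  // gaze ray length
        private Tile highlighted;        // tile currently under the gaze

        void Update()
        {
            // Cast a ray from the headset along its forward (gaze) vector.
            var ray = new Ray(headCamera.transform.position,
                              headCamera.transform.forward);

            Tile hitTile = null;
            if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
                hitTile = hit.collider.GetComponent<Tile>();

            // Highlight whatever the player is looking at.
            if (hitTile != highlighted)
            {
                if (highlighted != null) highlighted.SetHighlighted(false);
                if (hitTile != null) hitTile.SetHighlighted(true);
                highlighted = hitTile;
            }

            // The controller is reduced to a simple confirm button
            // (mapped here to a generic input name for illustration).
            if (highlighted != null && Input.GetButtonDown("Fire1"))
                Debug.Log("Selected tile: " + highlighted.name);
        }
    }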

What was the biggest technical challenge you faced to bring such a large title to Gear VR? How did you address it?

Given the size of Augmented Empire, we quickly ran into two very large issues during development:

  1. Saving the game’s state was complex, as there was a significant amount of information that needed to be tracked.
  2. For a team of our size, testing such a large game for progression-blocking issues after every change would have been impossible in a reasonable amount of time.

We ended up addressing both of these issues with one solution: an automation system that could play through the whole game at lightning speed.

Throughout the game’s logic, "skip points" were placed both manually and automatically: for example, at the start and end of each mission, each battle, each cutscene, and each player decision. The automated system could then “play” through the game by skipping to each of these points, running all of the code in between, and telling us if it ever failed to reach a skip point. This let us identify any blocking issues immediately whenever we changed the game.

We then used these skip points as our save system by recording the state of relevant data at each point as the player progressed. When the player loads a save, what's actually happening is that the game is playing itself at lightning speed, making all the same decisions the player made up to that point and delivering them to their last checkpoint.
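A minimal sketch of what such a skip-point journal might look like, assuming a simple decision log; all names here are illustrative assumptions, not Coatsink's actual implementation:

    using System;
    using System.Collections.Generic;

    // One recorded decision (e.g. a dialog choice) made at a skip point.
    [Serializable]
    public struct Decision
    {
        public string SkipPointId;
        public int Choice;
    }

    public class SkipPointRunner
    {
        private readonly List<Decision> journal = new List<Decision>();

        // Game code calls this at each skip point as the player progresses;
        // the journal itself is what gets written to disk as the "save".
        public void Record(string skipPointId, int choice)
        {
            journal.Add(new Decision { SkipPointId = skipPointId, Choice = choice });
        }

        // Loading a save (or running the automated test) replays the journal
        // at full speed. The delegate runs all game logic up to the next skip
        // point and returns false if it never arrives: a progression blocker.
        public bool Replay(Func<Decision, bool> runToNextSkipPoint)
        {
            foreach (var decision in journal)
            {
                if (!runToNextSkipPoint(decision))
                    return false; // flag the blocking issue immediately
            }
            return true;
        }
    }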

Performance for such a large game seems like it could be challenging. What did you do to get maximum performance out of the Gear VR hardware?

There were a few immediate performance tricks that helped early on, such as disabling physics, since we knew we didn’t need it. The main performance focus was on reducing draw calls wherever possible. We used a custom batching system to draw groups of models at once and utilized multi-view rendering to draw both eyes of the VR view in one go. We also employed several shader tricks to drive multiple visual effects with mesh vertex properties, wrote a method of detecting and removing polygons that the user would never see (since we can guarantee the user's view is locked in one place), and tweaked Unity's draw orders in ways that favored fewer draw calls over reduced overdraw. Finally, we minimized garbage collection as much as possible.
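Coatsink's batching system is custom, but the general idea of merging many static meshes into one draw call can be sketched in a few lines of Unity C#. This assumes a single shared material; a real system would also have to respect vertex limits and texture atlases:

    using UnityEngine;

    public static class RuntimeBatcher
    {
        // Merges every MeshFilter under `root` into a single mesh so the
        // whole group renders in one draw call. Note: a combined mesh over
        // 65,535 vertices needs a 32-bit index buffer.
        public static void CombineChildren(GameObject root, Material sharedMaterial)
        {
            MeshFilter[] filters = root.GetComponentsInChildren<MeshFilter>();
            var combine = new CombineInstance[filters.Length];

            for (int i = 0; i < filters.Length; i++)
            {
                combine[i].mesh = filters[i].sharedMesh;
                // Bake each child's transform into the combined mesh.
                combine[i].transform = filters[i].transform.localToWorldMatrix;
                filters[i].gameObject.SetActive(false); // hide the originals
            }

            var combined = new Mesh();
            combined.CombineMeshes(combine, mergeSubMeshes: true);

            var holder = new GameObject("CombinedBatch");
            holder.AddComponent<MeshFilter>().sharedMesh = combined;
            holder.AddComponent<MeshRenderer>().sharedMaterial = sharedMaterial;
        }
    }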

Along with custom technology, Augmented Empire is also made with Unity. Could you discuss your approach to using off-the-shelf technology alongside custom technology?

We’ve used Unity for a long time and learned a lot from it. Using an off-the-shelf engine has a lot of advantages. It saves time. It’s well documented, making it easy to learn. It has good support systems, allowing us to troubleshoot issues and have our questions answered quickly. And its popularity means that new applicants tend to already know how to use it, reducing training time.

But there are some obvious constraints to using an off-the-shelf engine. To name a few: less freedom to customize the interface, less control over functions and features, and resources and overhead spent on features your project may not need but that can be difficult to disable.

We created custom tech to accommodate our specific needs, which we use both outside of and in conjunction with Unity. For instance, we have our own light-baking system, a 3D model and texture atlas batching system, and our own custom build pipeline.

This isn’t your first time creating a VR title. How have your previous games influenced the development of Augmented Empire?

We used a lot of technology that we developed and tweaked over the course of our previous titles to optimise the game's performance and visuals. As mentioned above, we have our own light-baking system, and we also wrote a method of batching all available 3D models and textures in each scene at runtime so that they use only a single draw call. We've also continued to develop our own event-based scripting language, first created for Esper 2, which is now being used on our post-AE projects.

Design-wise, we learned from Esper and Esper 2 that we (as a team) have a passion for storytelling and a desire to explore how VR can help us tell stories. This is one of the main reasons we decided to focus on a narrative RPG after Esper 2 rather than another puzzle game.

With 60 different environments in the game, what was the most challenging aspect of creating such a diverse ecosystem?

Creating varied and distinct areas was easy compared to the task of connecting them in a way that seemed logical but still made the city feel huge. For instance, we couldn’t have the player leave the port on the west side of the island, stroll through the centre of town, then immediately end up at the power station on the east; the city would feel tiny. The solution was the subway: both Detritum and Tenor City have carriage-like public transport systems. These gave each mission a few moments of downtime, allowed for great character-building opportunities, and ensured New Savannah felt like more than the sum of its parts.

Transportation also let us divide the cities into more manageable areas. Detritum, for example, is essentially composed of five zones connected via the subway: the high street, the port, the dump, the housing district, and the industrial plant, each of which is controlled by a specific hostile gang (augment bootleggers, street thugs, or the corrupt city watch).

How did you determine your business goals and what are your thoughts on progress toward those goals?

Given that VR is still a new market, we still have a lot to learn about its userbase. It’s hard to determine whether there’s a market for large-scale VR content until something like it is released and the market reacts. This is always a risk, but given that we have a lot of VR enthusiasts both within and close to the company who have a desire for larger-scale VR games, it’s a risk we wanted to be among the first to take. Fortunately for us, Oculus were willing to support us in the endeavour.

What would you do differently if you were to make this game all over again?

Were we to do it again, we would plan out the project a little more carefully. We would focus more on shorter, iterative development cycles of the gameplay itself and less on pushing out content.

Early on we focused on creating a lot of content so that we’d have a game no matter what happened. But this meant that some of the more fundamental systems that needed focus at the earliest points of development were rushed. We also spent a lot of time on content that was cut from the game.

We would also write the game in a completely different way. We wrote the whole thing in Word documents that had to be copied, translated, and parsed into the game, and each of those stages introduced errors. Were we to do it again, we would probably build our own tool for writing text into the game, one that could easily export/import localisation formats, VO scripts, and dialog/cutscene files.
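As a rough illustration of what such a tool's export step might look like, here is a short C# sketch that writes dialog lines to a CSV a localisation pipeline could round-trip. The DialogLine format and column layout are assumptions for illustration only:

    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    public struct DialogLine
    {
        public string Id;       // stable key, e.g. "mission03_cutscene01_line04"
        public string Speaker;  // character name, useful for VO scripts
        public string Text;     // source-language text
    }

    public static class DialogExporter
    {
        // Writes one row per line; translators fill the "Translation" column
        // and the file is re-imported against the stable Id keys.
        public static void ExportCsv(IEnumerable<DialogLine> lines, string path)
        {
            var sb = new StringBuilder();
            sb.AppendLine("Id,Speaker,Text,Translation");
            foreach (var line in lines)
            {
                // Escape embedded quotes so the CSV stays well-formed.
                string text = line.Text.Replace("\"", "\"\"");
                sb.AppendLine($"{line.Id},{line.Speaker},\"{text}\",");
            }
            File.WriteAllText(path, sb.ToString());
        }
    }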

What’s next for Augmented Empire and Coatsink?

We’re already hard at work on the next VR project(s), but we’re keeping a close eye on AE’s feedback and will be releasing regular updates for the foreseeable future.