Today we hear from fellow VR enthusiast, design expert, and author Doug North Cook, who shared his insights on Universal Design for VR earlier in the year. He is a professor and faculty lead at Chatham University, where he focuses on immersive media, and an instructor at the Fallingwater Institute, where his students explore the relationship between architectural theory, multisensory design, and immersive technology.
In the first of a series of interviews focused on design for immersive technology, Doug meets with Sam Gage, who got his start in immersive tech designing motion controller interactions for Silent Hill on the Wii. Sam is a former head of VR development at Oxford and senior technical designer at Sony, and is now principal technical designer at nDreams. In this interview, Sam and Doug discuss VR interaction design systems, disappearing hands, and more.
Doug: I wanted to specifically ask about your time spent working with the team at Oxford and how working with a team focused on immersive design for healthcare changed your perspective on immersive design in general. What are you bringing back to games from that experience?
Sam: What I took from working on [immersive healthcare] was maybe not so much the people that we were working with in terms of design, but more the customer—who was very different from what you’re used to in games. In games, you can assume a certain level of technological literacy. Players got as far as getting a computer and putting the game on it, so you know they know how to use a controller. When it comes to healthcare, some of these people will never have touched anything like this [VR] before—they may not even know it exists. People are often a lot more scared of it, and I think that was a big thing: understanding how you approach users who have absolutely no reference for the tech.
Doug: So how do you design specifically for people who have no reference point for the tech? In VR, most consumers still have no reference for this specific tech.
Sam: Say you’ve got an Oculus Quest. You can understand—that’s your head and those are your hands. You pop it on, and as long as you can explain to people, “That’s how to pick up objects,” then as long as your experience isn’t too complicated, they’ll get it.
In healthcare, the actual experiences themselves are usually very simple—there’s not a lot of branches there because these things have to be clinically driven, so they are a lot more restricted than a game would be. The expected level of interaction is a lot lower, so you can often get away with a single, very simple control. I think that’s the key: just don’t add more than that because you explain two things and they’re going to forget one.
Doug: Many of the conversations that you and I have had, and much of the work that you’ve done, is about crafting specific, flexible, and intimate object interactions. Why is that so important for VR experience design?
Sam: When you’re in a virtual world, it’s very difficult for a designer to show the difference between things you can and can’t interact with. So I think you need to be reasonably clear, but if your game has lots of objects around you or you don’t want to have surfaces that are just empty, and if it’s not going to damage the gameplay in any way, it’s fine for people to just mess around with the incidental stuff. I like those kinds of rich worlds with lots of stuff going on, and obviously if you’re going to make that happen, you don’t want to spend a lot of time custom coding every object. So you need to come up with ways to describe [the interaction system] in generic ways, out of some simple components. That’s basically the core of the work I do: figuring out a simple system that can cover a broad range of interactions.
Doug: Where do you start when you’re designing a new interaction system?
Sam: You know, when we met, it actually changed the way that I looked at it. Up until recently, I was thinking about things in terms of—you want something to be picked up. You think of this object, you start describing the object’s behavior, and you end up with this world built of individual objects that each contain all of their own behavior. There’s not much complexity in the way that your hand is talking to the object. Then you reminded me of Don Norman’s concepts, where you can describe objects more in terms of the different types of behavior that they exhibit based on how you’re interacting with them—the range of possible things that you can do with an object. Here are all the places I can grab an object, and here are all the ways it reacts when I grab it in each of those places. Suddenly you can create these contextual object behaviors. You need to be able to describe an object in its subcomponents, and I think describing it in terms of where you grab it and how it reacts to those grabs is the basis of a system. Then you go on to more complicated things, like how objects move into your grip. Maybe you drive it with physics and hook it up to a physics engine, or you can just drive it via some kind of interpolation. There are a whole lot of peripheral systems that build off of this idea, but the core thing is that you grab and the object responds.
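The idea Sam describes—an object as a set of grab points, each with its own reaction—can be sketched in a few lines. This is a minimal illustration, not code from any shipping engine; every name here (`GrabAffordance`, `Interactable`, the mug example) is hypothetical:

```typescript
// A grab affordance: a place on an object you can grab,
// plus how the object reacts when grabbed there.
interface GrabAffordance {
  name: string;
  position: [number, number, number]; // local-space grab point
  onGrab: () => string;               // the object's reaction when grabbed here
}

// An interactable is just a bag of affordances; no per-object custom code.
class Interactable {
  constructor(public affordances: GrabAffordance[]) {}

  // Find the affordance closest to where the hand is grabbing.
  nearest(point: [number, number, number]): GrabAffordance {
    const dist = (a: GrabAffordance) =>
      Math.hypot(a.position[0] - point[0],
                 a.position[1] - point[1],
                 a.position[2] - point[2]);
    return this.affordances.reduce((best, a) => (dist(a) < dist(best) ? a : best));
  }
}

// A mug: grab the handle and it swings; grab the body and it lifts rigidly.
const mug = new Interactable([
  { name: "handle", position: [0.05, 0, 0], onGrab: () => "swing" },
  { name: "body",   position: [0, 0, 0],    onGrab: () => "rigid" },
]);

console.log(mug.nearest([0.04, 0, 0]).onGrab()); // grabbed near the handle
```

The point is the data layout: the world is built from generic components (affordances), so a new object is authored by listing its grab points and reactions rather than by writing new behavior code.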
Doug: When you’re developing a system like that, how do you keep it flexible enough to adapt to the way that a project will inevitably change over the course of development?
Sam: The core system shouldn’t ever really care what the object does—or even to a large degree which hand is grabbing or how. I like to write these things in a way that is highly generic. Look at the Jedi Force grab, or any kind of behavior you want. You’re always essentially saying, “This kind of agent is interacting with this particular affordance of this object.” That’s the core of it. It doesn’t even necessarily have to be hands. I think what’s useful is thinking of it in this way and having a system that’s built so that your implementation matches your design thinking.
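Sam's point that "it doesn't even necessarily have to be hands" amounts to decoupling the grabbing agent from the object's affordance. A hedged sketch of that separation, with all names (`GrabAgent`, `Affordance`, `interact`, the lever example) invented for illustration:

```typescript
// Any grabbing agent: a hand, a Force pull, a grapple hook, etc.
// The core system only knows a few generic properties of the agent.
interface GrabAgent {
  kind: string;
  reach: number; // how far away this agent can grab from, in metres
}

// An affordance decides its own response to whichever agent engages it.
interface Affordance {
  name: string;
  respond(agent: GrabAgent): string;
}

// The core interaction: some agent engages some affordance.
// Neither side needs to know the other's concrete type.
function interact(agent: GrabAgent, affordance: Affordance): string {
  return `${agent.kind} -> ${affordance.respond(agent)}`;
}

const lever: Affordance = {
  name: "lever",
  // Same affordance, different response depending on how it's engaged.
  respond: (a) => (a.reach > 1 ? "yanked from afar" : "pulled by hand"),
};

const hand: GrabAgent = { kind: "hand", reach: 0.8 };
const forcePull: GrabAgent = { kind: "force-pull", reach: 10 };

console.log(interact(hand, lever));      // "hand -> pulled by hand"
console.log(interact(forcePull, lever)); // "force-pull -> yanked from afar"
```

Because the core system only sees the two interfaces, swapping a hand for a Jedi-style Force grab is a new `GrabAgent` implementation, not a change to any object's code—the implementation matches the design thinking.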
Doug: I think about this in terms of all these different pieces: the object itself, what the object is capable of, how the object reacts and interacts when it’s not in your hand versus in your hand. What new kinds of relationships can you have with other objects while holding this object? I think what you said about not having to hard code that into every individual object, and instead systematizing it, is really important, because otherwise it becomes so tedious. You’re not duplicating work, and you also ensure that the user starts to become familiar with object types. When they grab something, they know how it’s going to react, and then if it doesn’t react that way, they intuitively know that it’s a different type of object.
When you’re building a system to facilitate interactions between users and objects, how do you figure out what a user’s goal is going to be with an object?
Sam: I think that’s quite important; it’s almost like thinking of what the user wouldn’t think of, then adding an interaction and they’re just like, “Ah, it does that,” you know? In The London Heist, you suck on the cigarettes, you blow, and then smoke comes out, and people were like, “How did you do that?” It’s just picking up stuff on the microphone, but they never would’ve thought of it. It’s those little bits of surprise that you can put in there.
Doug: Finally, what immersive experiences have surprised you with the way they’ve implemented object interactions?
Sam: Well, actually, one of the first ones that got me was obviously Job Simulator. When I was at Sony working on interaction, we spent a ton of time trying to figure out how to get hands to feel good. And Owlchemy Labs came along and just said, “How about you just make them disappear?” It’s one of those things; you’re just like, “Ah, yeah, that’s kind of obvious.” Now when you see it, you know, you’re just like, “Why didn’t I think of that?” But actually, it’s a really strange thing to think to do.
You can follow Doug on Twitter at @dougnorthcook as he continues these conversations with other immersive designers over the coming months.
- The Oculus Team