Developer Perspective: Designing Awesome Locomotion In VR
Oculus Developer Blog | Posted by Sam Pattuzzi | June 14, 2018

In this post, Sam Pattuzzi from GameDev.tv takes a look at some of the most successful mechanisms used to implement locomotion in VR. For each technique, he picks apart why it works, when to use it, and where it falls short, along with some example games for “research”. This post is based on a small slice of his more comprehensive Udemy course covering immersive VR development in depth. More on that course at the end of the post.


When you have an awesome idea for a game that you want to bring to life in VR, one of the first challenges you'll encounter is movement or, as it's often called, locomotion. How the player moves around is fundamental to most games and will likely be one of the first design considerations in your development process. Locomotion in VR is completely unique and, when it's done well, it can totally transform the experience for users.

Playspace movement

This is so common you probably didn’t even think of it as a mechanism. But moving around in your playspace is probably the most natural type of movement we are going to discuss.

How does it immerse users?

Simple: we fool the brain into seeing a scene that is not there, and we move it around in perfect lock-step with our real-world movements. This means that our proprioceptive and vestibular systems are happy that what they are feeling matches up with the movement the eyes are seeing. All is well.

Limitations

While playspace movement is great, it has some rather harsh limitations which mean it often needs to be supplemented with other movement paradigms.

  • You are limited by the playspace size (not a big issue if you are doing warehouse VR)
  • Only works for human movement. No superhero flying here, mate.

Implementation tips

    The first challenge with implementing playspace movement is getting the player object (an actor in Unreal parlance) to track the position of the camera.

    Typically, headset movement is reproduced in the engine via movement of the camera relative to its parent object. This means that if you have a camera as a child of the player object (a very common setup) you need to do some trickery to move the player with playspace movement.

    Imagine we have the following setup (I will use Unreal as my example, but the concept also applies to other game engines). Suppose we have a Capsule collider that is the root of our player. Under that, we have a VRRoot scene component (read GameObject for you Unity folks). Under that, we have the Camera. So we end up with a hierarchy of Capsule → VRRoot → Camera.

    What happens to the camera as we move around the playspace?

    As we walk around the playspace, the camera (the headset) ends up offset relative to both the VRRoot and the center of the player. That's no good for knowing where your player object actually is.

    When the camera moves around the playspace, we want the capsule to follow. But we can't make the capsule a child of the camera for two reasons:

    1. In Unreal, the capsule must be the root component of a Character actor.
    2. We don't want to move the capsule up and down, only along the horizontal plane.

    So what we can do instead is move the whole player to the location of the camera. Now, this will move the camera even further away from the center of the player, as it is relatively positioned via the VRRoot. So we need to update the local transform of the VRRoot such that the camera stays in the same place.
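
    As a minimal sketch, assuming a pawn with a `VRRoot` scene component and a `Camera` component set up as above (the class and member names are illustrative, not from the original post), the per-frame logic could look like this:

    // Called every frame on a hypothetical AVRCharacter.
    void AVRCharacter::SyncBodyToPlayspaceCamera()
    {
        // How far the headset has drifted from the capsule, in world space.
        FVector Offset = Camera->GetComponentLocation() - GetActorLocation();
        Offset.Z = 0.f; // Follow the camera only on the horizontal plane.

        // Move the whole actor under the headset...
        AddActorWorldOffset(Offset);

        // ...then shift VRRoot back the other way so the camera stays where it was.
        VRRoot->AddWorldOffset(-Offset);
    }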

    This method can be easily combined with other locomotion techniques.

    Teleporting

    The teleporting mechanism we develop in our course.

    Teleporting has become one of the most common mechanisms because it is flexible and works well in combination with playspace movement. However, it also has significant downsides.

    In case you don’t know, teleporting involves pointing at where you want to go (often with a laser or marker) then pressing a button and ending up there a short while later.

    How does it immerse users?

    During teleporting, the screen will often fade out and then back in. This essentially suppresses the visual system while the movement is occurring, so the eyes never get the chance to disagree with what the vestibular system is feeling. All is well.

    Limitations

    • Not very immersive (but better if it can be part of the gameplay)
    • Not realistic or physical
    • No control while moving - this could be an issue in multiplayer games.

    Implementation tips

    It's very easy to do the basic mechanism (a rough sketch follows the list):

    1. Trace from the controller to the ground.
    2. Render some sort of marker there (I use a cylinder with a fading material).
    3. On trigger pulled, fade out the camera and save the current location.
    4. Move the player actor to the new location.
    5. Fade the camera back in.
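
    Here is a rough sketch of steps 3–5, under some assumptions: the character owns a `DestinationMarker` component for the teleport target and a `TeleportFadeTime` property, and the class name `AVRCharacter` is illustrative rather than taken from the post:

    void AVRCharacter::BeginTeleport()
    {
        // Fade the camera to black over TeleportFadeTime seconds.
        if (APlayerController* PC = Cast<APlayerController>(GetController()))
        {
            PC->PlayerCameraManager->StartCameraFade(0.f, 1.f, TeleportFadeTime, FLinearColor::Black);
        }

        // Once the fade has finished, do the actual move.
        FTimerHandle Handle;
        GetWorldTimerManager().SetTimer(Handle, this, &AVRCharacter::FinishTeleport, TeleportFadeTime);
    }

    void AVRCharacter::FinishTeleport()
    {
        // Move the player actor to the saved marker location, lifted by the capsule half height
        // so the capsule doesn't end up half-buried in the floor.
        FVector Destination = DestinationMarker->GetComponentLocation();
        Destination += GetCapsuleComponent()->GetScaledCapsuleHalfHeight() * FVector::UpVector;
        SetActorLocation(Destination);

        // Fade back in.
        if (APlayerController* PC = Cast<APlayerController>(GetController()))
        {
            PC->PlayerCameraManager->StartCameraFade(1.f, 0.f, TeleportFadeTime, FLinearColor::Black);
        }
    }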

    If you want to create a parabolic path, you can achieve this in Unreal using a combination of `UGameplayStatics::PredictProjectilePath()`, `USplineComponent` and `USplineMeshComponent`.
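
    As a sketch of the prediction step (the `RightController` member, projectile speed and simulation time are illustrative assumptions, not values from the post):

    // Simulate a projectile launched from the hand controller to get the arc of the teleport beam.
    FPredictProjectilePathParams Params(
        /*InProjectileRadius=*/ 5.f,
        /*InStartLocation=*/ RightController->GetComponentLocation(),
        /*InLaunchVelocity=*/ RightController->GetForwardVector() * 800.f,
        /*InMaxSimTime=*/ 2.f,
        /*InTraceChannel=*/ ECC_Visibility,
        /*ActorToIgnore=*/ this);

    FPredictProjectilePathResult Result;
    bool bHit = UGameplayStatics::PredictProjectilePath(this, Params, Result);

    // Result.PathData now holds the points of the arc. Feed those into a USplineComponent,
    // then lay a USplineMeshComponent along each spline segment to render the curve.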

    Jogging on the spot

    Excerpt from HUGE ROBOT's video on Freedom Locomotion

    The newcomer to the group. Jogging on the spot can take many forms, but the main idea is that the speed of in-game motion is driven by how vigorously you move on the spot. This can lead to a very immersive way of moving.

    How does it immerse users?

    Unlike teleporting, we do not suppress the visual system during motion. Instead, we attempt to fool the proprioceptive and vestibular systems with the up-and-down motion of jogging on the spot.

    Remember that the vestibular system is basically an accelerometer. Just like an accelerometer, it cannot accurately keep track of position when it undergoes lots of movement. Just think of the children's party game “Pin the tail on the donkey”: it works only because our sense of location gets messed up by lots of movement.

    Examples

    One of the best implementations that I’ve come across is in the free tech demo of “Freedom Locomotion VR”. In the CAOTS system, the degree of motion is detected from the amount of arm movement and head bobbing. After an initial calibration, it is able to detect whether you are moving and how fast.

    This is then coupled with the use of the thumbstick or trackpad allowing the user to indicate the direction of movement (which can’t be inferred from jogging on the spot).

    The upshot of this system is that it leaves both hands free for grabbing objects or interacting with the environment. You don’t need to look in the direction of movement and you actually get a bit of a workout!

    Limitations

    While this is a great system, it only works when running is what your game actually calls for. Again, no good for your Superman sim!

    Implementation tips

    A simple implementation would need to combine the bobbing of the head with the input direction of the analog thumbstick and the direction of the hand controller. This is fairly straightforward apart from the detection of head movement.

    One simple approach to detecting head movement is to keep track of the headset's velocity in the up-down dimension. If this is too noisy, a simple exponential moving average should be enough to smooth the signal. Of course, the sky is the limit with more complex techniques.
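
    A minimal sketch of that idea, assuming it runs in Tick on a character with a `Camera` component and a movement direction taken from the thumbstick (the member names and the smoothing factor are illustrative):

    // Vertical speed of the headset this frame.
    float CameraHeight = Camera->GetComponentLocation().Z;
    float VerticalSpeed = FMath::Abs(CameraHeight - LastCameraHeight) / DeltaTime;
    LastCameraHeight = CameraHeight;

    // Exponential moving average to smooth the noisy raw signal.
    const float Alpha = 0.1f; // illustrative smoothing factor
    SmoothedBobSpeed = FMath::Lerp(SmoothedBobSpeed, VerticalSpeed, Alpha);

    // Scale movement by how vigorously the player is bobbing, in the direction
    // indicated by the thumbstick and hand controller.
    float Throttle = FMath::Clamp(SmoothedBobSpeed / MaxBobSpeed, 0.f, 1.f);
    AddMovementInput(MoveDirection, Throttle);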

    Cockpit Simulation

    Cockpit from Elite Dangerous in VR.

    So far we have dealt with human movement. What about games where you are driving a machine? These games are ideal for seated experiences as that is exactly what you are doing in the real world. Often, you will sit in front of a control panel with a cockpit around you that doesn’t move. Instead, the world outside moves in response to your controls.

    This could be used for flight simulators, racing games or even mech warrior battles!

    How does it immerse users?

    This system works slightly differently to the others. Rather than fooling any sense, it gives the visual system something stable (the cockpit) to lock on to.

    Limitations

    By having the unmoving cockpit around you, the sensory disagreement is lessened considerably. However, it will return in the event of extreme acceleration. Finally, this only works for cockpit games.

    Examples

    Elite Dangerous (mentioned above) is the classic example; VR racing games, flight sims and mech games take the same approach.

    Blinkers

    Eagle Flight’s dynamic blinker system.

    How does it immerse users?

    Peripheral vision is the most sensitive to movement, so it follows that if we suppress it with blinkers, the degree of motion discomfort should lessen. Blinkers also act a lot like the cockpit method described above, giving your eye something stable to hang on to without limiting the game to cockpit simulation.

    Furthermore, any motion that is nearby gives a much stronger signal of movement, so if you pass close to a fast-moving object, that area of the screen likely needs to be blinkered too.

    Limitations

    Blinkers can be annoying for some and certainly cut down on your field of view. When done subtly though (as in Eagle Flight), they are barely noticeable as you are focusing so intently on where you’re going.

    However, moving in any direction but the one you are looking in will cause even more of the view to be cut. In fact, it might lead to complete blackout depending on how aggressive you make the blinkers.

    Implementation tips

    In Unreal, I used a post-process material to achieve this effect. You have to set the material domain to “Post Process”.

    The effect can then be applied to the user's camera via a `UPostProcessComponent`.
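
    A sketch of wiring this up, assuming the character has a `UPostProcessComponent` member and a blinker material with a scalar “Radius” parameter (the member names and the parameter name are illustrative assumptions):

    // Create a dynamic instance so the blinker size can be driven at runtime.
    BlinkerMaterialInstance = UMaterialInstanceDynamic::Create(BlinkerMaterialBase, this);

    // Blend the post-process material over the player's view.
    PostProcessComponent->AddOrUpdateBlendable(BlinkerMaterialInstance);

    // Later, tighten the blinkers as speed increases by driving the material parameter.
    BlinkerMaterialInstance->SetScalarParameterValue(TEXT("Radius"), 0.4f);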

    Climbing

    The climbing mechanism we develop in our course.

    A cool little bonus for you. Although it’s hyper-specific, climbing is a movement paradigm that feels really natural to many.

    How does it immerse users?

    It works for a few reasons:

    • Movement is not super fast,
    • The controllers act as fixed reference points,
    • The proprioceptive system is fooled by the arm movements.

    Limitations

    It can be used outside of climbing, for example, for movement while prone in a stealth game. However, it’s never going to be a general solution.

    Implementation tips

    The trick is to keep track of where the hand started grabbing. Once you have this location, you need to update the player’s position every frame such that the hand remains in the same place. Basically:

    // Assuming this runs on a hand controller actor attached to the player pawn,
    // every frame while gripping: how far has the hand moved since the grab started?
    FVector HandControllerDelta = GetActorLocation() - ClimbingStartLocation;

    // Move the player (the actor we are attached to) the opposite way,
    // so the grabbing hand stays fixed in the world.
    GetAttachParentActor()->AddActorWorldOffset(-HandControllerDelta);

    Going Further

    Likely, there is not going to be one solution that fits your game. You often need to combine multiple approaches or even invent a whole new mechanism. That said, understanding what works and why gives you an excellent grounding for inventing something new.

    In this post, I have only briefly skimmed over the implementation details of these techniques. However, our Unreal VR course on Udemy covers this and much more in detail. At the moment it is still in development but our syllabus aims to include:

    • Implementing blinkered movement with shaders
    • Using splines to render teleport arcs
    • User interface design principles for VR
    • In-world “diegetic” UI
    • Performance profiling
    • Combining and instancing objects
    • Grabbing and manipulating the environment

    Why Unreal Engine? While you can use many engines for VR, I find that Unreal has excellent support and, furthermore, allows you to program in C++, letting you squeeze every last drop of performance from the hardware. Not to mention it’s stunning…

    Many designers might be scared off by C++. If you are unsure about programming, Unreal also provides Blueprint, a visual language that is a non-intimidating gateway to the C++ underpinnings.


    GameDev.tv's VR Development with Unreal Engine and C++ course is now available at a discounted price on Udemy.