Artificial VR Locomotion - Controls and User Input

Now that we know the different types of artificial locomotion, let’s discuss how to enable the user to control their movement within the virtual space. The purpose of any control system is to capture player intent, and doing so within an artificial locomotion system is no different. Capturing intent is particularly challenging in VR because of the many variables involved, such as head orientation, motion tracking data, avatar physics, and the user’s physical posture. There are many ways to control artificial locomotion, and some of the more useful ones are described below.

Locomotion can be a significant contributor to VR motion discomfort, so it’s important to provide effective control of locomotion while maximizing user comfort. The Techniques and Best Practices section provides further detail on how to improve the comfort of the various control techniques described below.

Thumbstick Driven Locomotion

Using the controller thumbsticks is a very common way to control locomotion systems. However, the thumbstick control behavior is usually more complicated than just moving the avatar in the direction of the stick. While there aren’t any rules for how thumbstick controls should behave, a number of design patterns have emerged which people have come to expect from certain types of VR applications.

Direction-mapping

When using thumbsticks, the direction the avatar moves depends on how the thumbstick directions are mapped to directions in the virtual space. People generally expect to move forward when they push the thumbstick in the upward direction, but what happens if they turn their head while moving? Should forward mean the new direction the headset is facing, or should it remain the direction it was when movement started, or perhaps forward should depend on the orientation of the controller? This is one aspect of locomotion control where people usually appreciate being able to choose their preferred method.

Every mapping type comes with pros and cons in different scenarios, so consider carefully how users will interact with your experience and choose your mappings wisely. It is paramount, both for usability and for preventing motion sickness, that users can easily anticipate how the camera will move through the virtual environment in response to their head movements and controller input. Your experience should give users an opportunity to learn and acclimate to the direction mappings available to them, and ideally they should be able to choose the one that best suits them.

Some of the useful variations in how forward direction is calculated during player movement include:

Head-Relative: With head-relative logic, thumbstick input is interpreted relative to the direction in which the user’s head is facing. Pushing the thumbstick upward causes the avatar to move in whichever direction the user’s head is facing. Turning the head while the thumbstick is pushed in any direction will continuously change the direction of movement so that it is always relative to wherever the headset is facing.

Initial Head-Relative: With initial head-relative logic, pushing the thumbstick forward causes the avatar to move in the direction the headset is facing when the user initiated the movement. Unlike head-relative controls, if the user turns their headset while pushing on the thumbstick, the direction of movement won’t change. For instance, if a user facing north pushes the thumbstick upward, they will move north, and until they release the thumbstick, they will continue moving north, even if they turn their head to look sideways.

Controller-Relative: With controller-relative logic, pushing the thumbstick upward causes the avatar to move in the direction the hand controller is pointed. The headset's orientation has no impact on the direction of movement; it is always relative to the orientation of the controller. This allows someone to steer simply by turning the controller itself while leaving the thumbstick pushed in the same direction.

Initial Controller-Relative: With initial controller-relative logic, pushing the thumbstick upward causes the avatar to move in the direction the hand controller is pointed when movement is initiated. Unlike controller-relative logic, turning the controller while pushing the thumbstick will not change the direction of movement, similar to how initial head-relative movement behaves.
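
To make the four mappings concrete, here is a minimal sketch of how the forward reference direction could be computed each frame. It assumes the engine exposes head and controller yaw in radians plus a per-frame thumbstick vector; all names here (MappingMode, referenceYaw, and so on) are illustrative rather than any particular SDK's API.

```typescript
type MappingMode = "head" | "initialHead" | "controller" | "initialController";

interface LocomotionState {
  mode: MappingMode;
  // Yaw latched when the thumbstick first leaves its deadzone; the two
  // "initial" modes keep using it until the stick is released.
  initialYaw: number | null;
}

function referenceYaw(
  state: LocomotionState,
  headYaw: number,
  controllerYaw: number,
  stickActive: boolean
): number {
  if (stickActive && state.initialYaw === null) {
    // Movement just started: capture the reference for the "initial" modes.
    state.initialYaw = state.mode === "initialHead" ? headYaw : controllerYaw;
  } else if (!stickActive) {
    state.initialYaw = null; // stick released: re-latch on the next move
  }

  switch (state.mode) {
    case "head":
      return headYaw; // follows the headset continuously
    case "controller":
      return controllerYaw; // follows the controller continuously
    case "initialHead":
    case "initialController":
      return state.initialYaw ?? headYaw; // frozen at movement start
  }
}

// Rotate stick input (x = right, y = forward) by the reference yaw to get a
// world-space direction on the horizontal plane (convention: +z is forward).
function moveDirection(stickX: number, stickY: number, yaw: number) {
  return {
    x: stickX * Math.cos(yaw) + stickY * Math.sin(yaw),
    z: stickY * Math.cos(yaw) - stickX * Math.sin(yaw),
  };
}
```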

Controller-relative direction mapping is perhaps the most common approach today; however, the list above is not comprehensive. There are applications that demonstrate subtle but useful variations on these patterns in an effort to infer user intent more accurately and improve the experience. For many designs, supporting these different control schemes is a relatively simple task, so we recommend providing them as options whenever possible.

Turning Controls

Artificial turning based on thumbstick input should be supported whenever it is compatible with the application design. The option will be especially appreciated by people who sit in chairs that don't spin, use wheelchairs, are tethered to a PC, or simply prefer not to spin around.

Everyone has different preferences and needs, and meeting these needs can be the difference between a user enjoying your experience and avoiding it altogether. With this in mind, the three most common artificial turning control schemes are described below:

Quick Turns

The camera turns a fixed angle for each tap on the thumbstick. Each tap typically turns the camera 30 or 45 degrees, over a span of 100 milliseconds or less. People generally have strong preferences for how this should be tuned, so it's a good idea to let users adjust both of these values. The goal is for the turn to be slow enough for people to keep track of their surroundings, but fast enough that it doesn't trigger discomfort.

It is important to register all taps on the thumbstick so that people can continue to provide turning input even while a turn is already in progress. The system should not disregard taps that occur while the camera is turning, and it should immediately change course if accumulated taps move the target orientation the other way. This lets people begin a turn and immediately turn back, or tap a few extra times when they know exactly how far they want to turn.

Visualization: a thumbstick tap, with the player's view turning smoothly in that direction at a medium pace.
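
As a sketch of the tap-accumulation behavior described above, assuming a per-frame update loop and a readable thumbstick X axis (the class, thresholds, and defaults are illustrative, not a specific engine's API):

```typescript
class QuickTurner {
  private targetYaw = 0;     // where the camera should end up (radians)
  private currentYaw = 0;    // what the camera shows right now
  private wasTapped = false; // edge-detects taps on the stick

  constructor(
    private turnAngle = Math.PI / 4, // 45 degrees per tap
    private turnDuration = 0.1       // seconds per increment; 0 = snap turn
  ) {}

  update(stickX: number, dt: number): number {
    // Register a tap on the rising edge of the stick leaving its deadzone.
    // Crucially, taps are accepted even while a turn is still in progress,
    // so each tap always adds exactly one fixed increment.
    const tapped = Math.abs(stickX) > 0.7;
    if (tapped && !this.wasTapped) {
      this.targetYaw += Math.sign(stickX) * this.turnAngle;
    }
    this.wasTapped = tapped;

    // Move the current yaw toward the target at a rate derived from the
    // configured duration.
    if (this.turnDuration <= 0) {
      this.currentYaw = this.targetYaw; // snap-turn behavior
    } else {
      const rate = this.turnAngle / this.turnDuration;
      const delta = this.targetYaw - this.currentYaw;
      const step = Math.sign(delta) * Math.min(Math.abs(delta), rate * dt);
      this.currentYaw += step;
    }
    return this.currentYaw;
  }
}
```

Because taps adjust the target even mid-turn, a tap in the opposite direction immediately reverses the motion, matching the behavior described above.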

Snap Turns

Snap turning is similar to quick turning with respect to accumulating thumbstick taps to set the desired direction, but instead of turning smoothly over time, the camera immediately faces the final direction. If a transition effect (such as a brief fade) delays the player's reorientation, tap accumulation is still necessary so that multiple taps always rotate the player a fixed amount per tap.

It is a common problem for applications to disregard thumbstick taps once a time-consuming turn has begun. This leads to an interface that responds unpredictably to directional controls: people must wait until the turn has completed before triggering another, and if they don't, their input is disregarded.

Visualization: a thumbstick tap, with the player's view snapping instantly to that direction.
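
If you use a structure like the QuickTurner sketch above, snap turning falls out of the same accumulation logic; a zero duration makes the camera jump straight to the accumulated target:

```typescript
// Same accumulation rules as quick turning; the turn itself is instantaneous.
const snapTurner = new QuickTurner(Math.PI / 4, 0); // 45 degrees, no blend
```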

Smooth Turns

The camera turns at a speed relative to how far the thumbstick is pushed left or right. This can be uncomfortable because the movement across the field of view causes users to expect a corresponding sense of physical acceleration that never comes, since they are not physically turning.

Because turning starts as soon as the thumbstick leaves the center position, the angular velocity will vary based on precisely how far from center the thumbstick happens to be, which often leads to further discomfort. While smooth turning is considered uncomfortable by many, it’s possible to improve this behavior as described in the following section: Improved Smooth Turning.

Visualization: the thumbstick held to one side, with the player's view turning smoothly in that direction at a relaxed pace.
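
A deadzone and a response curve go a long way here. The following is a minimal sketch under the assumption that the app reads the stick each frame; the threshold and speed values are illustrative starting points, not sourced recommendations:

```typescript
const DEADZONE = 0.25;          // ignore small stick deflections entirely
const MAX_TURN_SPEED = Math.PI; // radians per second at full deflection

function smoothTurnDelta(stickX: number, dt: number): number {
  const mag = Math.abs(stickX);
  if (mag < DEADZONE) return 0;
  // Remap [DEADZONE, 1] to [0, 1], then square it so the turn speed ramps
  // up gently near the deadzone instead of jumping as soon as the stick moves.
  const t = (mag - DEADZONE) / (1 - DEADZONE);
  return Math.sign(stickX) * t * t * MAX_TURN_SPEED * dt;
}
```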

Teleport Controls

The process of performing a teleport is a sequence of events that generally includes activation, aiming, potentially controlling the landing orientation, and finally, triggering the actual teleport. While it’s common for teleports to be triggered by a simple button or thumbstick movement, some designs will integrate teleport controls into the gameplay, which makes it possible to choose the destination by throwing a projectile, or some other mechanic unique to the application.

There is room for creativity, but if you are looking to implement teleportation in a way that is familiar to many VR users, this section describes a few common approaches.

Thumbstick-triggered Teleportation

A popular approach to initiating a teleport is to activate the process when the thumbstick is moved out of its center position. An aiming beam appears, which people then point at the desired destination. Aiming continues for as long as the thumbstick is pushed, and when it is released, the player is teleported to the destination.

Using a thumbstick to control this process also lets users control the direction they are facing after teleportation. While aiming at the destination, the thumbstick controls the direction of an indicator shown at the end of the teleport beam; when the teleport completes, the player's perspective faces the direction of the indicator. It is generally useful to provide a way to cancel the teleport by pressing another button or clicking the thumbstick, since releasing the thumbstick usually triggers the teleport.

The tricky part of this technique is ensuring the act of releasing the stick and triggering the teleport does not change the landing orientation. One way to approach this problem is to carefully monitor the thumbstick input so that when it moves a small distance back towards the center position, the indicator direction will remain where it was last set.
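
One way to sketch that latch, assuming stick input is polled each frame (the thresholds and state shape here are illustrative assumptions, not a specific SDK's API):

```typescript
const ACTIVATE_RADIUS = 0.5; // deflection that begins the aiming process
const LATCH_RADIUS = 0.3;    // below this, stop updating the landing direction
const RELEASE_RADIUS = 0.1;  // below this, the stick counts as released

interface TeleportAim {
  active: boolean;
  landingYaw: number; // last landing direction confidently set by the stick
}

function updateTeleportAim(
  aim: TeleportAim,
  stickX: number,
  stickY: number
): "idle" | "aiming" | "teleport" {
  const mag = Math.hypot(stickX, stickY);

  if (!aim.active) {
    if (mag <= ACTIVATE_RADIUS) return "idle";
    aim.active = true; // stick left center: begin aiming
  }

  // Only update the landing direction while the stick is clearly deflected.
  // As it travels back toward center on release, the yaw stays latched at
  // its last confident value, so letting go doesn't change the orientation.
  if (mag > LATCH_RADIUS) {
    aim.landingYaw = Math.atan2(stickX, stickY);
  }

  if (mag < RELEASE_RADIUS) {
    aim.active = false;
    return "teleport"; // fully released: perform the teleport at landingYaw
  }
  return "aiming";
}
```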

Button-triggered Teleports

In some cases, the thumbstick on the dominant hand may not be preferred or available to trigger teleportation; in that case, one of the standard buttons on the controller can initiate the teleport. The thumbstick in the user's non-dominant hand can then control landing orientation while teleport aiming is active, or cancel the teleport when clicked down before releasing the button that activated it.

Button-triggered teleportation also frees the dominant-hand thumbstick for other locomotion needs, such as controlling snap turns, although it is recommended to disable snap turns while the teleport button is held.
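
A small sketch of that input routing, where snapTurner refers to a quick/snap turn controller like the earlier sketch (the function shape is an illustrative assumption):

```typescript
// While the teleport button is held, feed a centered stick to the snap-turn
// system so a stray flick during aiming doesn't spin the player.
function routeDominantStick(
  teleportButtonHeld: boolean,
  stickX: number,
  snapTurner: { update(stickX: number, dt: number): number },
  dt: number
): number {
  return snapTurner.update(teleportButtonHeld ? 0 : stickX, dt);
}
```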

Motion-Tracked Locomotion

It’s possible to enable locomotion functionality using motion controllers without using controller buttons or thumbsticks for input. These techniques often consider posture, hand poses, and other physical movements as signals for controlling movement.

Applications that use hand tracking (when controllers are not held) may find the examples below useful for controlling artificial locomotion. Because hand tracking is a relatively new type of input, there's a big opportunity to pioneer new best practices in this space.

Simulated Activities

Simulated activities map the user's physical movements to a real-world or imaginary activity. The intent is to mimic the physical movements you would make if performing the activity in a real-world setting. From swimming and running to flying like a superhero, pretty much anything you can imagine is possible.

Running: Detecting when the user's arms swing the way they do during running, and using that as an input signal to make the avatar run.

Paddling Motion: Tracking the user's arms as they move in a paddling motion, as in propelling a boat. Phantom: Covert Ops by nDreams is an excellent example of paddle-based locomotion.

Simulated activities are amongst the more immersive ways to control locomotion. Some of the most exciting surprises and successes have come from games that replicate real world activities within a VR experience.
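
As one hedged example of what detection for the running case might look like, here is a sketch that maps smoothed vertical hand speed to a run-speed scalar; the hand-position inputs and all thresholds are illustrative assumptions:

```typescript
class ArmSwingDetector {
  private prevLeftY = 0;
  private prevRightY = 0;
  private energy = 0;         // smoothed measure of recent swing speed (m/s)
  private initialized = false;

  // leftHandY / rightHandY: tracked hand heights in meters; dt in seconds.
  // Returns a 0..1 run-speed scalar.
  update(leftHandY: number, rightHandY: number, dt: number): number {
    if (!this.initialized) {
      this.prevLeftY = leftHandY;
      this.prevRightY = rightHandY;
      this.initialized = true;
    }

    // Vertical speed of each hand this frame.
    const leftSpeed = Math.abs(leftHandY - this.prevLeftY) / dt;
    const rightSpeed = Math.abs(rightHandY - this.prevRightY) / dt;
    this.prevLeftY = leftHandY;
    this.prevRightY = rightHandY;

    // Low-pass filter so a single jolt doesn't start or stop movement abruptly.
    const target = (leftSpeed + rightSpeed) / 2;
    this.energy += (target - this.energy) * Math.min(1, dt * 5);

    // Map swing energy to 0..1 with a floor (ignore idle hands) and a cap.
    const THRESHOLD = 0.5; // average hand speed (m/s) before moving at all
    const FULL = 2.0;      // average hand speed (m/s) for full run speed
    return Math.min(1, Math.max(0, (this.energy - THRESHOLD) / (FULL - THRESHOLD)));
  }
}
```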

Pose-Driven Controls

While the simulated activities outlined above require users to move as they would in the corresponding real-world activity, pose-driven controls use learned poses and abstract gestures to control movement. Pose-driven controls are a bit more abstracted from the intended activity and usually require some training or guidance so people know which motions cause each effect. These poses and gestures can take many forms; one example is to point with one hand and initiate the movement with a gesture on the other hand. Widely adopted standards haven't yet emerged for pose-driven controls, which makes this a space ripe for experimentation.

Here are a few useful tips if you’re considering experimenting with abstract gestures:

  • Ensure that your app detects the pose reliably every time. This is critical.
  • Provide consistent, reliable feedback for when a gesture is detected, so the user knows their intent has been understood by the system.
  • Provide clear guidance for how to use the gestures that are available.
  • Keep gestures simple enough to memorize so users can perform them with minimal cognitive effort.
  • Minimize the number of supported gestures, making each one easier to use as a reflex.
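
To illustrate the first two tips, here is a sketch of a gesture latch with a dwell time and hysteresis, where poseConfidence stands in for whatever detection signal your hand-tracking stack provides; everything here is an illustrative assumption:

```typescript
class GestureLatch {
  private heldFor = 0;
  private fired = false;

  constructor(
    private dwellTime = 0.2,       // seconds the pose must be held
    private onDetected: () => void // feedback: e.g. haptic pulse + highlight
  ) {}

  update(poseConfidence: number, dt: number) {
    if (poseConfidence > 0.8) {
      this.heldFor += dt;
      if (!this.fired && this.heldFor >= this.dwellTime) {
        this.fired = true;
        this.onDetected(); // the user knows their intent was understood
      }
    } else if (poseConfidence < 0.6) {
      // Hysteresis: require confidence to drop well below the trigger level
      // before re-arming, so the gesture doesn't flicker on and off.
      this.heldFor = 0;
      this.fired = false;
    }
  }
}
```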

Seated Control Considerations

The goal of supporting a seated mode is to simulate a standing experience while the user is seated. Fatigue is a real issue for many people; even when an application isn't explicitly designed as a seated experience, many people will minimize their movement during long sessions, as long as doing so doesn't detract from the experience.

Seated mode is also important for accessibility. It is recommended to support a stationary, seated position unless the design is fundamentally incompatible, because this will allow more people to experience your content.

When seated mode is supported, there should be messaging early in the experience (possibly within the tutorial) that enables the user to choose between seated and standing modes. If seated, the avatar height, and in turn the camera height, should be set to an elevation that defaults to the player height reported by the system. This height should also be customizable within the game options. If the height were to be based simply on the headset’s elevation from the physical floor, avatars would be shorter and the user perspective would not match what they see when standing.
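
A minimal sketch of that height logic, assuming the platform reports a player height and the app stores a user-adjustable override (the names here are illustrative, not a specific SDK's API):

```typescript
interface HeightOptions {
  seatedMode: boolean;
  heightOverride?: number; // user-customizable standing height, in meters
}

function cameraHeight(
  physicalHeadsetHeight: number, // measured from the physical floor
  systemReportedHeight: number,  // the player height the platform reports
  opts: HeightOptions
): number {
  if (!opts.seatedMode) return physicalHeadsetHeight; // standing: trust tracking
  // Seated: simulate standing at the configured height, defaulting to the
  // system-reported player height rather than the headset's actual elevation.
  return opts.heightOverride ?? systemReportedHeight;
}
```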

When people are physically seated, additional controls may be required to provide the full range of movement they would have in VR if standing and walking around the play space. Support for artificial turning is necessary because people won't always be in a chair that spins. If the experience benefits from physically ducking or crouching, it is important to provide controls to toggle crouching, along with any other poses required, so seated users can engage with specific content as effectively as someone who can physically move around the play space.

It's useful to think of the transition between standing and crouching as a form of artificial locomotion, and to be aware of the comfort risks of moving the camera up and down like this. It's recommended to keep the transition between camera elevations brief. The camera movement shouldn't be instant, because that creates a visual discontinuity that can trigger disorientation; people also don't crouch instantly, so a short transition appears more natural to observers. But if the transition takes too long, it can itself become a trigger for discomfort.
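
One way to sketch such a transition is a constant-rate blend tuned so the full stand-to-crouch range takes a fixed, brief time; the 0.2-second figure below is an illustrative starting point, not a sourced recommendation:

```typescript
const CROUCH_TRANSITION = 0.2; // seconds for the full stand<->crouch range

function updateCrouchHeight(
  current: number,      // camera height this frame
  target: number,       // standing or crouched camera height
  standHeight: number,
  crouchHeight: number,
  dt: number
): number {
  // Move at the constant rate that covers the full range in
  // CROUCH_TRANSITION seconds, so partial transitions feel consistent.
  const rate = Math.abs(standHeight - crouchHeight) / CROUCH_TRANSITION;
  const delta = target - current;
  const step = Math.sign(delta) * Math.min(Math.abs(delta), rate * dt);
  return current + step;
}
```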

What’s Next: Design Challenges + Accessibility Design Guide

Especially with regard to seated control systems, designing for accessibility is essential to locomotion, and to VR app design as a whole. We recommend checking out the full guide on Accessibility Design, especially the section on Accessible Locomotion Design.

If you're looking to learn more about the broader topic of locomotion, we recommend the following section, which covers Locomotion Design Challenges.