Graphical User Interfaces (GUIs) in virtual reality present unique challenges that can be mitigated by following the guidelines in this document. This is not an exhaustive list, but provides some guidance and insight for first-time implementers of Virtual Reality GUIs (VRGUIs).
If any single word can help developers understand and address the challenges of VRGUIs, it is “stereoscopic”. In VR, everything must be rendered from two points of view -- one for each eye. When designing and implementing VRGUIs, frequent consideration of this fact can help bring problems to light before they are encountered in implementation. It can also aid in understanding the fundamental constraints acting on VRGUIs. For example, stereoscopic rendering essentially makes it impossible to implement an orthographic Heads Up Display (HUD), one of the most common GUI implementations for 3D applications -- especially games.
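To make the constraint concrete, here is a minimal C++ sketch (using the GLM math library) of what rendering from two points of view means in practice. The renderScene helper and the fixed kIpdMeters value are illustrative assumptions; a real application would take per-eye poses directly from the HMD runtime.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helper: draws the world into the given eye's buffer.
void renderScene(const glm::mat4& view, const glm::mat4& proj, int eye);

// Illustrative fixed interpupillary distance; real values come from the runtime.
const float kIpdMeters = 0.064f;

void renderStereoFrame(const glm::mat4& headPose /* head-to-world transform */) {
    const glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 1000.0f);
    for (int eye = 0; eye < 2; ++eye) {
        // Offset each eye half the IPD to the left or right of the head's center.
        const float side = (eye == 0) ? -1.0f : 1.0f;
        const glm::mat4 eyePose = headPose *
            glm::translate(glm::mat4(1.0f), glm::vec3(side * kIpdMeters * 0.5f, 0.0f, 0.0f));
        renderScene(glm::inverse(eyePose), proj, eye);  // view matrix = inverse of eye pose
    }
}
```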
Neither orthographic projections nor HUDs in themselves are completely ruled out in VR, but their standard implementation, in which the entire HUD is presented via the same orthographic projection for each eye view, generally is.
Projecting the HUD in this manner requires the user's eyes to converge at infinity when viewing the HUD, because rendering it identically for both eyes gives it zero stereoscopic separation. As far as the user's brain is concerned, this effectively places the HUD behind everything else that is rendered. This confuses the visual system, which perceives the HUD as further away than all other objects even though it remains visible in front of them. This generally causes discomfort and may contribute to eyestrain.
Orthographic projection should instead be used on individual surfaces that are then rendered in world space and displayed at a comfortable distance from the viewer. The ideal distance varies, but is usually between 1 and 3 meters. Using this method, a normal 2D GUI can be rendered and placed in the world, and the user's gaze direction can be used as a pointing device for GUI interaction.
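As a rough sketch of this method (the framebuffer and drawing helpers below are hypothetical), the GUI is first drawn with an ordinary orthographic projection into a texture, and that texture is then shown on a quad placed about 2 meters out in world space, rendered stereoscopically like everything else:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helpers: an offscreen GUI framebuffer and ordinary 2D widget drawing.
void bindGuiRenderTarget();
void drawGuiElements(const glm::mat4& ortho);
unsigned int guiTexture();
void drawTexturedQuad(const glm::mat4& model, const glm::mat4& view,
                      const glm::mat4& proj, unsigned int texture);

void drawWorldSpaceGui(const glm::mat4& view, const glm::mat4& proj) {
    // Pass 1: render the GUI exactly as a normal 2D interface, but into a texture.
    const glm::mat4 guiOrtho = glm::ortho(0.0f, 1024.0f, 768.0f, 0.0f);  // pixel coords, y-down
    bindGuiRenderTarget();
    drawGuiElements(guiOrtho);

    // Pass 2: show that texture on a quad about 2 m in front of the viewer
    // (1-3 m is a comfortable range), using the same stereoscopic view and
    // projection matrices as the rest of the world.
    glm::mat4 quadModel = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -2.0f));
    quadModel = glm::scale(quadModel, glm::vec3(1.33f, 1.0f, 1.0f));  // match GUI aspect ratio
    drawTexturedQuad(quadModel, view, proj, guiTexture());
}
```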
In general, an application should drop the idea of orthographically projecting anything directly to the screen while in VR mode. It will always be better to project onto a surface that is then placed in world space, though this provides its own set of challenges.
Placing a VRGUI in world space raises the issue of depth occlusion. In many 3D applications it is difficult, if not impossible, to guarantee that there will always be enough space in front of the user's view to place a GUI without it being coincident with, or occluded by, another rendered surface. If, for instance, the user toggles a menu during play, it is problematic if the menu appears behind the wall the user is facing. It might seem that rendering without depth testing would solve the problem, but that creates an issue similar to the infinity problem: stereoscopic separation tells the user the menu is further away than the wall, yet the menu draws on top of it.
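One partial mitigation, sketched here under assumptions, is to query the scene when the menu is summoned and pull the menu in closer than the nearest surface along the view direction. raycastScene is a hypothetical scene query, and the distances are illustrative:

```cpp
#include <algorithm>
#include <glm/glm.hpp>

// Hypothetical scene query: writes the distance to the nearest hit, returns false on miss.
bool raycastScene(const glm::vec3& origin, const glm::vec3& dir, float* outDistance);

float chooseMenuDistance(const glm::vec3& eyePos, const glm::vec3& viewDir) {
    const float kPreferred = 2.0f;   // comfortable default, in meters
    const float kMinimum   = 0.75f;  // don't get uncomfortably close to the eyes
    float hitDistance;
    if (raycastScene(eyePos, viewDir, &hitDistance) && hitDistance < kPreferred) {
        // Leave a small margin so the menu is not coincident with the obstructing surface.
        return std::max(kMinimum, hitDistance - 0.1f);
    }
    return kPreferred;
}
```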
Unless the application never needs to display any type of menu (even a configuration menu) while rendering the world view, it is generally impractical to simply disallow all VRGUI interaction and rendering whenever there is not enough room for the VRGUI surfaces.
There is more than one way to interact with a VRGUI, but gaze tracking may be the most intuitive. The direction of the user's gaze can be used to select items in a VRGUI, much as a mouse pointer or a touch on a touch device would. A mouse is the slightly better analogy because, unlike on a touch device, the pointer can be moved around the interface without first initiating a “down” event.
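A sketch of the mouse analogy, assuming the VRGUI lives on a flat world-space quad: intersect the gaze ray with the panel's plane and convert the hit point into normalized 2D panel coordinates that ordinary GUI hit-testing code can consume. All parameter names are assumptions, and panelRight/panelUp are assumed to be unit vectors.

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Returns true and writes normalized (0..1) panel coordinates when the
// gaze ray hits the GUI quad.
bool gazeToPanelUV(const glm::vec3& gazeOrigin, const glm::vec3& gazeDir,
                   const glm::vec3& panelCenter, const glm::vec3& panelNormal,
                   const glm::vec3& panelRight, const glm::vec3& panelUp,
                   float panelWidth, float panelHeight, glm::vec2* outUV) {
    const float denom = glm::dot(gazeDir, panelNormal);
    if (std::fabs(denom) < 1e-6f) return false;  // gaze parallel to the panel
    const float t = glm::dot(panelCenter - gazeOrigin, panelNormal) / denom;
    if (t <= 0.0f) return false;                 // panel is behind the viewer
    const glm::vec3 local = (gazeOrigin + gazeDir * t) - panelCenter;
    const float u = glm::dot(local, panelRight) / panelWidth  + 0.5f;
    const float v = glm::dot(local, panelUp)    / panelHeight + 0.5f;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) return false;  // off the panel
    *outUV = glm::vec2(u, v);
    return true;
}
```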
Like many things in VR, the use of gaze to place a cursor has a few new properties to consider. First, when using the gaze direction to select items, it is important to have a gaze cursor that indicates where gaze has to be directed to select an item. The cursor should, like all other VR surfaces, be rendered stereoscopically. This gives the user a solid indication of where the cursor is in world space. In testing, implementations of gaze selection without a cursor or crosshair have been reported as more difficult to use and less grounded.
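Positioning such a cursor is straightforward once the gaze hit point on the GUI surface is known. The sketch below nudges the cursor slightly toward the viewer to avoid z-fighting while keeping its stereoscopic depth consistent with the item it hovers over; the offset and size are illustrative.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Builds a model matrix for a small cursor quad at the gaze hit point,
// pulled 1 cm toward the viewer so it draws cleanly over the GUI surface.
glm::mat4 cursorModelMatrix(const glm::vec3& hitPoint, const glm::vec3& toViewer) {
    const glm::vec3 pos = hitPoint + glm::normalize(toViewer) * 0.01f;
    glm::mat4 model = glm::translate(glm::mat4(1.0f), pos);
    return glm::scale(model, glm::vec3(0.02f));  // ~2 cm crosshair quad
}
```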
Second, because gaze direction moves the cursor, and because the cursor must move relative to the interface to select different items, it is not possible to present the viewer with an interface that is always within view. In one sense this is true of traditional 2D GUIs as well, since the user can always turn their head away from the screen, but there is an important difference: with a normal 2D GUI, the user does not expect to interact with the interface when they are not looking at the device presenting it, whereas in VR the user is always looking at the device -- they just may not be looking at the VRGUI. As a result, the user can “lose” the interface without realizing it is still active and consuming their input, which leads to confusion when the application does not handle input as the user expects (because no menu is visible in their current view).
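One possible guard against a “lost” interface -- by no means the only one -- is a simple angular test between the gaze direction and the direction to the menu, so the application can recenter or dismiss a menu the user can no longer see. The threshold is an assumption:

```cpp
#include <cmath>
#include <glm/glm.hpp>

// True while the menu center lies within maxAngleRadians of the gaze direction.
bool menuIsInView(const glm::vec3& eyePos, const glm::vec3& gazeDir,
                  const glm::vec3& menuCenter, float maxAngleRadians) {
    const glm::vec3 toMenu = glm::normalize(menuCenter - eyePos);
    return glm::dot(glm::normalize(gazeDir), toMenu) > std::cos(maxAngleRadians);
}
```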
Another frequently unexpected issue with using a gaze cursor is how to handle default actions. In modern 2D GUIs, a button or option is often selected by default when a GUI dialog appears -- perhaps the OK button on a dialog where the user is usually expected to proceed without changing current settings, or the CANCEL button on a dialog warning that a destructive action is about to be taken. In a VRGUI, however, the default action is dictated by where the gaze cursor is pointing on the interface when it appears: OK can only be the default if the dialog pops up with OK under the gaze cursor. If an application does anything other than place a VRGUI directly in front of the viewer (such as placing it on the horizon plane, ignoring the current pitch), then the concept of a default button is not practical unless there is an additional form of input, such as a keyboard, that can be used to interact with the VRGUI.
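Where a dialog is placed directly in front of the viewer, one way to preserve the idea of a default button is to solve for the panel position that puts that button under the gaze ray at spawn time. In this sketch the button's offset from the panel center is given in world units, and the panel is assumed to already face the viewer:

```cpp
#include <glm/glm.hpp>

// Returns the world-space panel center that places the default button,
// rather than the panel center itself, on the gaze ray at dialogDistance.
glm::vec3 placeDialogForDefaultButton(const glm::vec3& eyePos, const glm::vec3& gazeDir,
                                      float dialogDistance,
                                      const glm::vec3& buttonOffsetInPanel) {
    // The point the gaze ray hits at the chosen distance.
    const glm::vec3 gazePoint = eyePos + glm::normalize(gazeDir) * dialogDistance;
    // Shift the panel so the default button lands at that point.
    return gazePoint - buttonOffsetInPanel;
}
```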