The physical side-effects of VR are well-documented and go by many names: simulator sickness, VR sickness, VORTAN[1], stillness illness[2] and others. Despite extensive research, the precise physical mechanisms are still poorly understood. This is not unique to VR – the causes of most forms of motion-induced illness such as seasickness and spacesickness are also poorly understood. Although we don’t understand the physiology well, we do understand many of the things that cause it. A few of these causes are
inherent in the desired experience, but many of them can be solved with good (though complex) engineering. But because all the symptoms are basically the same whatever the cause, it is often difficult to discern whether you’re running into a physiological limit or if you simply have your math wrong. This is especially tricky to track down when many of the symptoms take twenty minutes or more to manifest.
For added challenge, the brain can train itself to ignore all sorts of strange things if given months of daily exposure, which is exactly the sort of exposure you get when developing a VR game. This means that we developers rapidly become immune to all but the most obvious rendering errors, and as a result we are the worst people at testing our own code. It introduces a new and exciting variation of the coder’s defense that “it works on my machine” – in this case, “it works for my brain”. Developers should ideally do a significant amount of testing on “kleenex” subjects before shipping anything, and because people’s responses vary so widely, they need to test on multiple players to ensure there isn’t some terrible howler that will make a significant portion of their players sick.
The problems we see in the current crop of VR experiments fall into two broad categories – things developers can fix with good (though complex) engineering by modifying their existing rendering pipelines, and things developers can’t fix without changing the game or its content in some way. The latter is a really fun area to explore with new game concepts and ideas, and there are already some fascinating experiments, but it’s often beyond the scope of developers porting an existing game to VR.
Things developers can fix
- Incorrect camera calibration and distortion correction. These are a set of numbers and transforms that describe the physical characteristics of the device – its shape and its optical properties. Fundamentally, they describe where on the retina each pixel on the screen will appear. This is technically tricky because it involves lots of finicky measurement and cameras and manufacturing tolerances and so on, and it needs to be tuned for the specific device. Although tricky, it will eventually get done to a high enough precision – it just takes time and effort. There’s nothing developers need to actively do here apart from making sure to keep up-to-date with the latest Oculus SDK. (There’s a short sketch of a typical distortion function at the end of this list.)
- Rendering code in the game not using the calibration & distortion values correctly. This is “just math”. It is slightly tricky math, and there’s a bunch of i’s to dot and t’s to cross, so it does require good discipline to make sure it is being done correctly. It can require a bit of code surgery on existing pipelines, since it’s not as simple as offsetting the cameras and changing the FOV (there’s a sketch of the per-eye setup at the end of this list). But please do take the time to do this right – the difference between “nearly right” and “actually right” looks subtle in screenshots, but has a dramatic effect on how tired or dizzy a user gets while playing. Because this is so tricky to diagnose, Oculus are working on ways to put reference scenes into graphics engines and check the output against known-good screenshots.
- Game code that breaks the VR “rules”. I’ll do a future blog post about these in more detail – and about when you might want to bend or break them – and there were great GDC talks on the subject from Joe Ludwig and Nate Mitchell. In summary, these are generally bad things to do:
- Changing the head orientation without user input.
- Changing the field of view.
- Violent or unprovoked translation of the view.
- Ignoring or overriding head movement, such as freezing the view during cinematics or menus.
- Not providing any head translation in response to user movement – even a simple head-on-a-stick model (sketched at the end of this list) is usually enough to avoid most symptoms.
Even if you think the added “intensity” of some effect is really exciting and immersive, many of your players will just feel sick and will want to stop playing. The variation in intensity felt by different players is huge, and what is mildly interesting to some may be literally unbearable to others – they will stop playing your game because of it. Remember how old FPSes used to have “head bob” turned on by default because it was considered more immersive? Developers stopped doing that because it made a certain fraction of players ill, even in 2D. Finally, try to obey the rules at all times – including cinematics (first or third person), menus, and loading screens.
- Players not measuring their IPD and feeding it into the game’s settings. We completely understand the eagerness to experience VR, and right now the calibration process does take a bit of time. But if this is not correctly set up for each person’s face, it can lead to some people getting queasy in seconds. Oculus now has a user configuration utility, so please encourage people to take the time to do the calibration.
- High latency. Reducing latency is important for all interactive experiences, and in VR it plays a key role both in immersion and in preventing problems. There are many aspects to what causes latency in a VR system; we’ll be discussing this in detail in future blogs, and there are already some excellent posts on the subject from John Carmack and Michael Abrash.
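To give a flavour of what the distortion correction mentioned above involves, here is a minimal sketch of a radial (“barrel”) distortion function of the general kind used to pre-warp the rendered image so that it looks correct through the lenses. The polynomial form is typical, but the coefficient values and names here are placeholders – the real numbers come from the device calibration via the SDK, and in practice this work usually lives in a post-process shader rather than on the CPU.

// Sketch of a radial ("barrel") distortion of the kind used to pre-warp the
// rendered image to counteract the lens distortion. Coefficients are
// placeholders - the real values come from the device calibration.
struct DistortionCoeffs
{
    // Scale factor as a polynomial in r^2: k0 + k1*r^2 + k2*r^4 + k3*r^6.
    float k[4];
};

// (px, py) is a point in lens-centred, normalised view coordinates.
void DistortPoint(const DistortionCoeffs &c, float px, float py,
                  float &outX, float &outY)
{
    float rSq   = px * px + py * py;
    float scale = c.k[0] + rSq * (c.k[1] + rSq * (c.k[2] + rSq * c.k[3]));
    outX = px * scale;
    outY = py * scale;
}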
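And here is a rough sketch of the per-eye camera setup from the “just math” bullet above. It is deliberately simplified and the structures are my own invention, but it shows the shape of the thing: each eye gets a small sideways translation of the view by half the IPD (a translation, never a “toe-in” rotation of the cameras towards each other), plus an asymmetric off-centre projection whose frustum bounds come from the calibration data rather than from a symmetric FOV.

// Sketch of per-eye view offset and projection. The EyeCalibration values
// are placeholders - real numbers come from the device calibration and
// differ (mirrored) between the two eyes.
#include <cstring>

struct Mat4 { float m[16]; };               // column-major, OpenGL-style

struct EyeCalibration
{
    float ipd;                              // interpupillary distance, metres
    float left, right, bottom, top;         // frustum bounds at the near plane
};

// Standard off-centre (asymmetric) perspective frustum.
Mat4 FrustumMatrix(float l, float r, float b, float t, float n, float f)
{
    Mat4 p;
    std::memset(p.m, 0, sizeof(p.m));
    p.m[0]  = 2.0f * n / (r - l);
    p.m[5]  = 2.0f * n / (t - b);
    p.m[8]  = (r + l) / (r - l);
    p.m[9]  = (t + b) / (t - b);
    p.m[10] = -(f + n) / (f - n);
    p.m[11] = -1.0f;
    p.m[14] = -2.0f * f * n / (f - n);
    return p;
}

// Each eye's view transform is the head's view transform plus a sideways
// offset of half the IPD in head space - a pure translation.
float EyeViewOffsetX(const EyeCalibration &cal, bool isLeftEye)
{
    return isLeftEye ? -0.5f * cal.ipd : 0.5f * cal.ipd;
}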
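Finally, a minimal version of the head-on-a-stick model mentioned in the rules above. Even with orientation-only tracking, deriving a small eye translation from the tracked orientation goes a long way. The pivot-to-eye offsets below are rough guesses for illustration, not measured values.

// Head-on-a-stick: the eyes sit above and slightly in front of a pivot at
// the base of the neck, so rotating the head also translates the eyes.
// Coordinate convention assumed here: +Y up, -Z forward.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };          // unit quaternion from the tracker

// Rotate a vector by a unit quaternion: v' = v + w*t + cross(u, t),
// where u = (x, y, z) and t = 2 * cross(u, v).
Vec3 Rotate(const Quat &q, const Vec3 &v)
{
    Vec3 u = { q.x, q.y, q.z };
    Vec3 t = { 2.0f * (u.y * v.z - u.z * v.y),
               2.0f * (u.z * v.x - u.x * v.z),
               2.0f * (u.x * v.y - u.y * v.x) };
    Vec3 c = { u.y * t.z - u.z * t.y,
               u.z * t.x - u.x * t.z,
               u.x * t.y - u.y * t.x };
    return { v.x + q.w * t.x + c.x,
             v.y + q.w * t.y + c.y,
             v.z + q.w * t.z + c.z };
}

Vec3 EyePositionFromOrientation(const Quat &headOrientation, const Vec3 &neckPivot)
{
    const Vec3 pivotToEyes = { 0.0f, 0.15f, -0.09f };   // rough guess, metres
    Vec3 o = Rotate(headOrientation, pivotToEyes);
    return { neckPivot.x + o.x, neckPivot.y + o.y, neckPivot.z + o.z };
}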
Things developers can’t easily fix
- Disparity between focus depth and vergence. Like 3D movies, almost all HMDs, including the Rift, present the image at a single distant focal plane, which means that unlike the real world, the brain cannot sense any depth information from the focus of the lens in the eyes. Instead the brain gets depth from the stereo pair of images by measuring how much the eyes have to rotate toward each other to look at an object – this is called “vergence” (there’s a small worked example of vergence angles at the end of this list).
[Figure: disparity between vergence and focus, from the Journal of Vision. In that diagram the VR focal plane is closer than reality; in the Rift (with the A lenses) the focus is always at infinity, i.e. further away than reality.]
Some people get a very poor sense of 3D from vergence alone (I am one of them), but usually this doesn’t cause any problems – they just don’t see the 3D effect very strongly. But there are a few people for whom the disparity between eye vergence and lens focus causes problems. Usually they feel eye-strain and tiredness rather than the more severe side-effects, but it still limits the play time and enjoyment. Unfortunately there’s currently no practical way to add focus information to individual pixels, so there are no good technical solutions to this problem yet[3], but I include it for completeness.
- Disparity in apparent motion between visual and vestibular stimuli. The disagreement between what your eyes are telling you and what your inner ear is saying is at the heart of most existing forms of motion sickness (such as sea and travel sickness), and the same is true inside VR. In VR your eyes tell you you’re running and jumping through a world, but your inner ear says you’re sitting at a desk. This is actually the reverse of most real-world motion sickness where your eyes say you are stationary, but your ears say you’re being tossed around. Either way, it’s the same fundamental disagreement, and unless your game is one where the protagonist is still most of the time (and there are a few), there’s not a great deal developers can do about it. It’s a complex subject that is not well-understood, but we at Oculus will do a follow-up blog post talking about what we do know.
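As a rough worked example of the vergence cue mentioned above (assuming a 64 mm IPD, a common average): the angle the eyes converge by to fixate an object at distance d is about 2·atan((IPD/2) / d), which works out to roughly 7.3° at half a metre, 1.8° at two metres, and only 0.4° at ten metres – which is why vergence is mostly a near-field cue. A tiny sketch of that calculation:

// Vergence angle for an object at distance d, assuming a 64 mm IPD.
// The cue falls off quickly with distance.
#include <cmath>
#include <cstdio>

int main()
{
    const double pi  = 3.14159265358979323846;
    const double ipd = 0.064;                           // metres
    const double distances[] = { 0.5, 1.0, 2.0, 10.0 }; // metres
    for (double d : distances)
    {
        double angleRad = 2.0 * std::atan((0.5 * ipd) / d);
        std::printf("object at %5.1f m -> vergence angle %.2f degrees\n",
                    d, angleRad * 180.0 / pi);
    }
    return 0;
}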
What developers can do about it
Some of these are fundamentally difficult problems to do with human perception and the limits of technology. But most of them are purely technical matters that we know how to solve, and we also know that if they’re not done correctly, they cause some of the strongest forms of simulator sickness. Anecdotally, many of the reports of side-effects are due to these well-understood factors, and that is frustrating to us both as a company and as a bunch of geeks who want to see VR succeed. For some of them, it’s up to Oculus to improve our tools, libraries and documentation. For others we need the help of developers to do the right thing, use the correct mathematical models, and resist the completely understandable temptation to just “get something on screen”. We’ll go into that in more detail in the SDK and blogs in the future.
We all need to educate new users on what to do before they jump with both feet into virtual worlds. This is difficult, especially right now when people get their exciting new dev kit and there is a room with twenty people all scrambling to try it on. Oculus can help by making the calibration process simpler and more intuitive, and we now have a standalone calibration utility that generates named user profiles that all VR games can read. This allows users to adjust the rendering to their unique facial shape and size, and means they will only have to do it once and can then play without using console variables or editing config files. Please incorporate the data from these profiles into your game code, help us encourage players to use it rather than just accepting the defaults, and please also encourage them to take things slow at first. VR is very intense and can take some getting used to.
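As an illustration of the sort of plumbing that implies – a hypothetical sketch only; the Profile struct and LoadCurrentUserProfile() below are made-up stand-ins for whatever the configuration utility and SDK actually expose, and the fallback numbers are just plausible averages:

// Hypothetical sketch: pull per-user data from a named profile into the
// renderer's stereo settings at startup instead of hard-coding one IPD.
#include <string>

struct Profile
{
    std::string name;
    float ipd;          // interpupillary distance, metres
    float eyeHeight;    // standing eye height, metres
    bool  valid;
};

// Placeholder - in a real game this would query the SDK / config utility.
Profile LoadCurrentUserProfile()
{
    return Profile{ "", 0.0f, 0.0f, false };
}

struct StereoSettings { float ipd; float playerEyeHeight; };

StereoSettings MakeStereoSettings()
{
    StereoSettings s = { 0.064f, 1.67f };   // plausible averages as fallback
    Profile p = LoadCurrentUserProfile();
    if (p.valid)
    {
        s.ipd             = p.ipd;          // prefer the player's own numbers
        s.playerEyeHeight = p.eyeHeight;
    }
    return s;
}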
In summary – do the math right, don’t cut corners, be kind to your sensitive players, and encourage them to take it slowly at first.
Tom Forsyth, Oculus coder.
—
[1] VORTAN is formed from five Rift dev kits, acting as one – dedicated, inseparable, invincible! Alternatively it stands for Vestibulo-Ocular Reflex something something Nausea. If anyone can remember what the T & A stand for (behave!), please let me know.
[2] Pretty sure I saw Stillness Illness open for Sigue Sigue Sputnik at the Apollo in ’83.
[3] People with this problem can try setting the IPD to zero, which will then present the same image to both eyes. This removes all vergence cues, which decreases immersion, but if it also reduces eye-strain that’s probably a good trade-off. They will still get many useful 3D cues from parallax and translation via the head-on-a-stick model which they would not have from a standard monitor or a 3D movie.