Power management is a crucial consideration for mobile VR development.
A current-generation mobile device is remarkably powerful for something you can stick in your pocket: you can reasonably expect to find four 2.6 GHz CPU cores and a 600 MHz GPU. Fully utilized, they can in some cases deliver more performance than an Xbox 360 or PS3.
A governor process on the device monitors an internal temperature sensor and takes corrective action when the temperature rises above certain levels, to prevent malfunctions or scalding surface temperatures. This corrective action consists of lowering clock rates.
If you run hard into the limiter, the temperature will continue climbing even as clock rates are lowered, and CPU clocks may drop all the way down to 300 MHz. The device may even panic under extreme conditions. VR performance will catastrophically drop along the way.
The default clock rate for VR applications is 1.8 GHz on two cores, and 389 MHz on the GPU. If you consistently use most of this, you will eventually run into the thermal governor, even if you have no problem at first. A typical manifestation is poor app performance after ten minutes of good play. If you filter logcat output for "thermal" you will see various notifications of sensor readings and actions being taken. (For more on logcat, see Android Debugging: Logcat.)
A critical difference between mobile and PC/console development is that on mobile, no optimization is ever wasted. On PC and console, absent power considerations, it doesn't matter whether you used 90% of the available frame time or 10%, as long as the frame is ready in time. On mobile, every operation drains the battery and heats the device. Optimization still entails effort that comes at the expense of something else, but it is important to note the tradeoff.
The Fixed Clock Level API allows the application to set a fixed CPU level and a fixed GPU level.
On current devices, the CPU and GPU clock rates are completely fixed to the application set values until the device temperature reaches the limit, at which point the CPU and GPU clocks will change to the power save levels. This change can be detected (see Power State Notification and Mitigation Strategy below). Some apps may continue operating in a degraded fashion, perhaps by changing to 30 FPS or monoscopic rendering. Other apps may display a warning screen saying that play cannot continue.
The fixed CPU level and fixed GPU level set by the Fixed Clock Level API are abstract quantities, not MHz / GHz, so some effort can be made to make them compatible with future devices. For current hardware, the levels can be 0, 1, 2, or 3 for CPU and GPU. 0 is the slowest and most power efficient; 3 is the fastest and hottest. Typically the difference between the 0 and 3 levels is about a factor of two.
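As a rough illustration only (the exact rates are device-specific and deliberately not part of the API contract), the abstract levels might map to clock rates like the following sketch, assuming level 0 corresponds to roughly 1 GHz CPU / 240 MHz GPU and the 0-to-3 range spans about a factor of two. The function names and the intermediate values are ours, not SDK calls:

```cpp
#include <cassert>

// Illustrative mapping from abstract clock levels to approximate rates on a
// hypothetical current device. The API hides the real MHz values; these
// numbers only show the rough factor-of-two spread between levels 0 and 3.
int ApproxCpuMHz( int cpuLevel )
{
    static const int mhz[4] = { 1000, 1300, 1600, 2000 };
    assert( cpuLevel >= 0 && cpuLevel <= 3 );
    return mhz[cpuLevel];
}

int ApproxGpuMHz( int gpuLevel )
{
    static const int mhz[4] = { 240, 300, 389, 480 };
    assert( gpuLevel >= 0 && gpuLevel <= 3 );
    return mhz[gpuLevel];
}
```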
Not all clock combinations are valid for all devices. For example, the highest GPU level may not be available for use with the two highest CPU levels. If an invalid combination is requested, the system will not honor it, and the clock settings will fall back to dynamic mode. VrApi asserts and issues a warning in this case.
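The actual validity matrix is device-specific; as a sketch, a hypothetical check (the restriction shown mirrors the example above, not a real device table) might look like:

```cpp
#include <cassert>

// Hypothetical validity check for a (cpuLevel, gpuLevel) combination.
// The restriction below mirrors the example in the text: the highest GPU
// level is unavailable together with the two highest CPU levels.
bool IsValidClockCombination( int cpuLevel, int gpuLevel )
{
    if ( cpuLevel < 0 || cpuLevel > 3 || gpuLevel < 0 || gpuLevel > 3 )
    {
        return false;
    }
    if ( gpuLevel == 3 && cpuLevel >= 2 )
    {
        return false;   // would fall back to dynamic clock mode
    }
    return true;
}
```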
It is critical to understand that there are no magic settings in the SDK that will fix power consumption for you.
The length of time your application will be able to run before running into the thermal limit depends on two factors: how much work your app is doing, and what the clock rates are. Changing the clock rates all the way down only yields about a 25% reduction in power consumption for the same amount of work, so most power saving has to come from doing less work in your app.
If your app can run at the (0,0) setting, it should never have thermal issues. This is still two cores at around 1 GHz and a 240 MHz GPU, so it is certainly possible to make sophisticated applications at that level, but Unity-based applications might be difficult to optimize for this setting.
There are effective tools for reducing the required GPU performance, such as lowering the eye-buffer resolution and the antialiasing level. These all entail quality tradeoffs which need to be balanced against steps you can take in your application.
In general, CPU load seems to cause more thermal problems than GPU load, and reducing the required CPU performance is much less straightforward. Unity apps should always use the multithreaded renderer option, since two cores running at 1 GHz work more efficiently than one core running at 2 GHz.
If you find that you just aren’t close, then you may need to set MinimumVsyncs to 2 and run your game at 30 FPS, with TimeWarp generating the extra frames. Some things work out okay like this, but some interface styles and scene structures highlight the limitations. For more information on how to set MinimumVsyncs, see the TimeWarp technical note.
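The effect of MinimumVsyncs on the frame budget is simple arithmetic on a 60 Hz display; the helper below is just that arithmetic, not an SDK call:

```cpp
#include <cassert>

// Frame budget in milliseconds for a 60 Hz display: MinimumVsyncs = 1 gives
// the normal 60 FPS budget (~16.7 ms); MinimumVsyncs = 2 gives a 30 FPS
// budget (~33.3 ms), with TimeWarp generating the in-between frames.
double FrameBudgetMs( int minimumVsyncs )
{
    assert( minimumVsyncs >= 1 );
    return minimumVsyncs * ( 1000.0 / 60.0 );
}
```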
In summary, our general advice:
If you are making an app that will probably be used for long periods of time, like a movie player, pick very low levels. Ideally use (0,0), but it is possible to use more graphics if the CPUs are still mostly idle, perhaps up to (0,2).
If you are okay with the app being restricted to ten-minute chunks of play, you can choose higher clock levels. If it doesn’t work well at (2,2), you probably need to do some serious work.
With the clock rates fixed, observe the reported FPS and GPU times in logcat. The GPU time reported does not include the time spent resolving the rendering back to main memory from on-chip memory, so it is an underestimate. If the GPU times stay under 12 ms or so, you can probably reduce your GPU clock level. If the GPU times are low, but the frame rate isn’t 60 FPS, you are CPU limited.
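The tuning procedure above can be summarized as a small heuristic. The 12 ms threshold is the rule of thumb from the text; the type and function names are ours, not SDK calls:

```cpp
#include <cassert>

enum class TuningHint
{
    LowerGpuClockLevel, // GPU has headroom; try a lower GPU clock level
    CpuLimited,         // GPU is nearly idle but the frame rate is below 60 FPS
    Balanced            // GPU time is close to the budget; leave levels alone
};

// Heuristic classification of a frame based on the logcat-reported numbers.
// The reported GPU time excludes the resolve back to main memory, so it is
// an underestimate; that is why the threshold is 12 ms rather than 16.
TuningHint ClassifyFrame( double gpuTimeMs, double fps )
{
    if ( gpuTimeMs < 12.0 )
    {
        return ( fps < 59.0 ) ? TuningHint::CpuLimited
                              : TuningHint::LowerGpuClockLevel;
    }
    return TuningHint::Balanced;
}
```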
Always build optimized versions of the application for distribution. Even if a debug build performs well, it will draw more power and heat up the device more than a release build.
Optimize until it runs well.
For more information on how to improve your Unity application’s performance, see Best Practices and Performance Targets in our Unity documentation.
The mobile SDK provides power level state detection and handling.
Power level state refers to whether the device is operating at normal clock frequencies or if the device has risen above a thermal threshold and thermal throttling (power save mode) is taking place. In power save mode, CPU and GPU frequencies will be switched to power save levels. The power save levels are equivalent to setting the fixed CPU and GPU clock levels to (0, 0). If the temperature continues to rise, clock frequencies will be set to minimum values which are not capable of supporting VR applications.
Once we detect that thermal throttling is taking place, the app has the choice to either continue operating in a degraded fashion or to immediately exit to the Oculus Menu with a head-tracked error message.
In the first case, when the application first transitions from normal operation to power save mode, the SDK automatically reduces the rendering workload, for example by enabling 30Hz TimeWarp.
In this mode, the application may choose to take additional app-specific measures to reduce performance requirements. For Native applications, you may use the following AppInterface call to detect if power save mode is active: GetPowerSaveActive(). For Unity, you may use the following plugin call: OVR_IsPowerSaveActive(). See OVR/Script/Util/OVRModeParms.cs for further details.
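A typical per-frame check might look like the following sketch. GetPowerSaveActive() is the native query named above; to keep the example self-contained we pass its result in as a plain bool, and the mitigation shown (30 FPS, monoscopic rendering) is just one possible app-specific choice:

```cpp
#include <cassert>

// App-specific render settings chosen per frame. In a real app the argument
// would come from the AppInterface's GetPowerSaveActive() (native) or
// OVR_IsPowerSaveActive() (Unity plugin).
struct RenderConfig
{
    int  targetFps;
    bool monoscopic;
};

// Choose degraded settings while thermal throttling is active.
RenderConfig ChooseRenderConfig( bool powerSaveActive )
{
    if ( powerSaveActive )
    {
        return { 30, true };    // halve the frame rate, render one eye view
    }
    return { 60, false };       // normal operation
}
```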
In the second case, when the application transitions from normal operation to power save mode, the Universal Menu will be brought up to display a non-dismissible error message and the user will have to undock the device to continue. This mode is intended for applications which may not perform well at reduced levels even with 30Hz TimeWarp enabled.
You may use the following calls to enable or disable the power save mode strategy:
For Native, set settings.ModeParms.AllowPowerSave in VrAppInterface::Configure() to true for power save mode handling, or to false to immediately show the head-tracked error message.
For Unity, you may enable or disable power save mode handling via OVR_VrModeParms_SetAllowPowerSave(). See OVR/Script/Util/OVRModeParms.cs for further details.
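For the native path, the shape of the code is roughly the following. The ovrModeParms/ovrSettings structs here are minimal stand-ins, included only so the sketch compiles on its own; in a real app you would override VrAppInterface::Configure() and set the field on the settings the SDK passes in:

```cpp
#include <cassert>

// Minimal stand-ins for the SDK types; the real field is
// settings.ModeParms.AllowPowerSave, set inside VrAppInterface::Configure().
struct ovrModeParms
{
    bool AllowPowerSave;
};

struct ovrSettings
{
    ovrModeParms ModeParms;
};

void Configure( ovrSettings & settings )
{
    // true:  keep running in degraded power save mode when throttled.
    // false: immediately show the head-tracked error message instead.
    settings.ModeParms.AllowPowerSave = true;
}
```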