This guide outlines how to integrate the Avatar SDK with a C/C++ game engine or application. The source code samples used in this guide are taken from the Mirror demo, available in OVRAvatarSDK\Samples\Mirror.
To add Avatar support to your Visual C++ project, add the Avatar SDK headers and library to the project and initialize the SDK at startup.
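A minimal initialization sketch is shown below; it assumes OVR_Avatar.h is on your include path and that APP_ID is a placeholder for your own Oculus App ID string.

#include <OVR_Avatar.h>

// APP_ID is a placeholder; use the App ID from your Oculus developer dashboard.
static const char* APP_ID = "your-app-id";

void InitAvatarSDK()
{
    // Initialize the Avatar SDK once at application startup.
    ovrAvatar_Initialize(APP_ID);
}

void ShutdownAvatarSDK()
{
    // Release SDK resources at application shutdown.
    ovrAvatar_Shutdown();
}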
The functions ovrAvatar_RequestAvatarSpecification() and ovrAvatarAsset_BeginLoading() are asynchronous. The avatar message queue contains the results of these operations.
You can retrieve the most recent message with ovrAvatarMessage_Pop(). After you finish processing a message from the queue, call ovrAvatarMessage_Free() to release the memory associated with it.
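A typical per-frame polling loop, as a sketch: the dispatch pattern mirrors the Mirror demo, and _HandleAvatarSpecification and _HandleAssetLoaded are placeholders for your own handlers.

// Drain the avatar message queue once per frame and dispatch the results.
while (ovrAvatarMessage* message = ovrAvatarMessage_Pop())
{
    switch (ovrAvatarMessage_GetType(message))
    {
        case ovrAvatarMessageType_AvatarSpecification:
            // _HandleAvatarSpecification is a placeholder for your own handler.
            _HandleAvatarSpecification(ovrAvatarMessage_GetAvatarSpecification(message));
            break;
        case ovrAvatarMessageType_AssetLoaded:
            // _HandleAssetLoaded is a placeholder for your own handler.
            _HandleAssetLoaded(ovrAvatarMessage_GetAssetLoaded(message));
            break;
        default:
            break;
    }
    // Free the message once it has been processed.
    ovrAvatarMessage_Free(message);
}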
Avatars are composed of avatar components (body, base, hands, controller) which are themselves composed of render parts. Each Oculus user has an Avatar Specification that indicates the mesh and texture assets that need to be loaded to recreate the avatar.
Our Mirror.cpp example code contains good examples of the entire process and includes helper functions, prefixed with _, that we have written to make it easier to render complete avatars.
The complete process goes something like this: request the avatar specification for an Oculus user ID, wait for the specification message to arrive on the message queue, begin loading each asset the specification references, wait for the corresponding asset-loaded messages, and then create and render the avatar from those assets.
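For example, kicking off the process for the local user might look like the following sketch, assuming the Oculus Platform SDK has already been initialized so that ovr_GetLoggedInUserID() returns a valid ID.

// Request the specification for the logged-in user. The result arrives later
// as an ovrAvatarMessageType_AvatarSpecification message on the queue.
ovrID userID = ovr_GetLoggedInUserID();
ovrAvatar_RequestAvatarSpecification(userID);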
To render avatar hands without controllers:
ovrAvatar_SetLeftControllerVisibility(_avatar, 0);
ovrAvatar_SetRightControllerVisibility(_avatar, 0);
To render avatar hands with controllers:
ovrAvatar_SetLeftControllerVisibility(_avatar, 1);
ovrAvatar_SetRightControllerVisibility(_avatar, 1);
You can pass your own custom transforms to the hand pose functions or use our cube and sphere preset hand poses. Here is an example of a custom pose made from freezing the hands in their current pose:
// Freeze the left hand in its current pose.
const ovrAvatarHandComponent* handComp =
    ovrAvatarPose_GetLeftHandComponent(_avatar);
const ovrAvatarComponent* comp = handComp->renderComponent;
const ovrAvatarRenderPart* renderPart = comp->renderParts[0];
const ovrAvatarRenderPart_SkinnedMeshRender* meshRender =
    ovrAvatarRenderPart_GetSkinnedMeshRender(renderPart);
ovrAvatar_SetLeftHandCustomGesture(_avatar,
    meshRender->skinnedPose.jointCount,
    meshRender->skinnedPose.jointTransform);

// Freeze the right hand in its current pose.
handComp = ovrAvatarPose_GetRightHandComponent(_avatar);
comp = handComp->renderComponent;
renderPart = comp->renderParts[0];
meshRender = ovrAvatarRenderPart_GetSkinnedMeshRender(renderPart);
ovrAvatar_SetRightHandCustomGesture(_avatar,
    meshRender->skinnedPose.jointCount,
    meshRender->skinnedPose.jointTransform);

To pose the hands as if to grip cubes:
ovrAvatar_SetLeftHandGesture(_avatar, ovrAvatarHandGesture_GripCube);
ovrAvatar_SetRightHandGesture(_avatar, ovrAvatarHandGesture_GripCube);
To pose the hands as if to grip spheres:
ovrAvatar_SetLeftHandGesture(_avatar, ovrAvatarHandGesture_GripSphere);
ovrAvatar_SetRightHandGesture(_avatar, ovrAvatarHandGesture_GripSphere);
To unfreeze the hand poses:
ovrAvatar_SetLeftHandGesture(_avatar, ovrAvatarHandGesture_Default);
ovrAvatar_SetRightHandGesture(_avatar, ovrAvatarHandGesture_Default);
Voice visualization is an avatar component. It is created as a projection on top of an existing mesh.
Create the microphone:
ovrMicrophoneHandle mic = ovr_Microphone_Create();
if (mic)
{
    ovr_Microphone_Start(mic);
}

Pass an array of voice samples to ovrAvatarPose_UpdateVoiceVisualization().
float micSamples[48000];
size_t sampleCount = ovr_Microphone_ReadData(mic, micSamples, sizeof(micSamples) / sizeof(micSamples[0]));
if (sampleCount > 0)
{
    ovrAvatarPose_UpdateVoiceVisualization(_avatar, (uint32_t)sampleCount, micSamples);
}

The render parts of the voice visualization component are a ProjectorRender type.
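When you walk the render parts of that component during rendering, you can branch on the part type. A sketch, assuming voiceComponent points to the voice visualization component and that the ProjectorRender accessor follows the same pattern as the skinned mesh accessor shown above; _RenderProjector is a placeholder for your own projector rendering path.

const ovrAvatarComponent* voiceComponent = /* the voice visualization component */;
for (uint32_t i = 0; i < voiceComponent->renderPartCount; ++i)
{
    const ovrAvatarRenderPart* part = voiceComponent->renderParts[i];
    if (ovrAvatarRenderPart_GetType(part) == ovrAvatarRenderPartType_ProjectorRender)
    {
        // Fetch the projector-specific render data for this part.
        const ovrAvatarRenderPart_ProjectorRender* projector =
            ovrAvatarRenderPart_GetProjectorRender(part);
        _RenderProjector(projector); // placeholder for your own renderer
    }
}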
The Avatar SDK contains a complete avatar pose recording and playback system. You can save pose data to packets at regular intervals and then transmit these packets to a remote computer to drive the avatar poses there.
Call ovrAvatarPacket_BeginRecording() to begin recording:
ovrAvatarPacket_BeginRecording(_avatar);
After you record as many frames' worth of pose changes as you want, stop the recording with ovrAvatarPacket_EndRecording() and then write your packet out with ovrAvatarPacket_Write().
ovrAvatarPacket* recordedPacket = ovrAvatarPacket_EndRecording(_avatar);

// Write the packet to a byte buffer to exercise the packet writing code
uint32_t packetSize = ovrAvatarPacket_GetSize(recordedPacket);
uint8_t* packetBuffer = (uint8_t*)malloc(packetSize);
ovrAvatarPacket_Write(recordedPacket, packetSize, packetBuffer);
ovrAvatarPacket_Free(recordedPacket);
Transmit your data to your destination using your own network code.
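The SDK leaves the transport entirely to you; any reliable channel works. One possible approach, sketched with placeholder sendAll() and socketFd names, is to prefix the buffer with its size so the receiver knows how many bytes to expect.

// One possible framing: a 4-byte size header followed by the packet bytes.
// sendAll() and socketFd are placeholders for your own networking layer.
uint32_t netSize = packetSize; // consider converting to network byte order
sendAll(socketFd, (const uint8_t*)&netSize, sizeof(netSize));
sendAll(socketFd, packetBuffer, packetSize);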
To read your pose data back into packets:
// Read the buffer back into a packet
ovrAvatarPacket* playbackPacket = ovrAvatarPacket_Read(packetSize, packetBuffer);
free(packetBuffer);
To play the packets back:
float packetDuration = ovrAvatarPacket_GetDurationSeconds(packet);
*packetPlaybackTime += deltaSeconds;
if (*packetPlaybackTime > packetDuration)
{
    ovrAvatarPose_Finalize(avatar, 0.0f);
    *packetPlaybackTime = 0;
}
ovrAvatar_UpdatePoseFromPacket(avatar, packet, *packetPlaybackTime);

The playback routine uses the elapsed time deltaSeconds to interpolate a tween pose, so playback stays smooth even when the remote computer's frame timing differs from the sender's.