Bring your 3D avatar to life with Intel RealSense and faceshift

Intel Developer Blog: How game characters can be trained to recreate a player's expressions

Imagine controlling the facial expressions and speech of a 3D-animated character on screen by simply speaking into a camera. That’s what developers and a few early users are now able to do thanks to facial-tracking and gesture-control technology. Intel and faceshift are helping to democratise these exciting capabilities that were once the preserve of game animators and creators of pre-recorded content.

The idea is simple but has a variety of potential applications: users speak into a camera on their computer (or just frown or smile at it) and the camera scans their expressions. This data is then used by the app to train a character or avatar to recreate those expressions. By using the Intel RealSense camera and the faceshift SDK, developers can create truly expressive, interactive user interfaces.
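Under the hood, face-tracking systems like this typically describe an expression as a set of blendshape weights: each tracked frame assigns a strength to expressions such as "smile" or "frown", and the avatar mesh is posed as the neutral face plus a weighted sum of per-expression offsets. Here is a minimal sketch of that linear-blendshape idea; all names and data are hypothetical placeholders, not the actual faceshift or Intel RealSense API.

```python
# Linear-blendshape sketch: a posed vertex is the neutral position
# plus weighted offsets toward each expression target.
# Everything below is illustrative, not a real SDK interface.

def apply_blendshapes(neutral, shapes, weights):
    """Blend a neutral mesh with expression targets.

    neutral: list of (x, y, z) resting vertex positions.
    shapes:  dict mapping expression name -> list of (x, y, z)
             target positions, one per vertex.
    weights: dict mapping expression name -> weight in [0, 1],
             e.g. as a face tracker might report per frame.
    """
    result = []
    for i, (nx, ny, nz) in enumerate(neutral):
        x, y, z = nx, ny, nz
        for name, w in weights.items():
            sx, sy, sz = shapes[name][i]
            # Move the vertex toward this expression's target,
            # scaled by the tracked weight.
            x += w * (sx - nx)
            y += w * (sy - ny)
            z += w * (sz - nz)
        result.append((x, y, z))
    return result

# Example: one vertex, a "smile" target that lifts it by 1 unit.
neutral = [(0.0, 0.0, 0.0)]
shapes = {"smile": [(0.0, 1.0, 0.0)]}
posed = apply_blendshapes(neutral, shapes, {"smile": 0.5})
```

A half-strength smile therefore moves the vertex halfway to the target, which is why on-screen characters can mirror subtle, partial expressions rather than snapping between fixed poses.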

Gamers can see their on-screen characters mirror their own expressions and really feel like they’re part of the action. Or you could take part in a video call in the form of a zombie, a talking pug or whatever takes your fancy – check out this video of live motion capture using the faceshift app.

And this is just one exciting example of developers harnessing Intel RealSense technology. As 3D tracking software advances, app creators are building sensitive facial-tracking software into more apps so that users can enjoy increasingly immersive experiences.

Visit the Intel Developer Zone for advice and resources, and to see how the Intel RealSense SDK can help you incorporate gesture control and facial tracking into your next development project.

• This blog post is written by Softtalkblog, and is sponsored by the Intel Developer Zone, which helps you to develop, market and sell software and apps for prominent platforms and emerging technologies powered by Intel Architecture.
