With the new system, directors will be able to fine-tune the performances in post-production, rather than on the film set.
Called FaceDirector, the system enables a director to seamlessly blend facial images from multiple video takes to achieve the desired effect.
"It's not unheard of for a director to re-shoot a crucial scene dozens of times, even 100 or more times, until satisfied," said Markus Gross, vice president of research at Disney Research.
"That not only takes a lot of time - it also can be quite expensive. Now our research team has shown that a director can exert control over an actor's performance after the shoot with just a few takes, saving both time and money," Gross added.
FaceDirector is able to create a variety of novel, visually plausible versions of performances of actors in close-up and mid-range shots.
Moreover, the system works with normal 2D video input acquired by standard cameras, without the need for additional hardware or 3D face reconstruction.
The system first analyses both facial expressions and audio cues, then identifies corresponding frames between the takes using a graph-based framework.
Once this synchronization has occurred, the system enables a director to control the performance by choosing the desired facial expressions and timing from either video, which are then blended together using facial landmarks, optical flow and compositing.
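The two-stage pipeline above — synchronize frames across takes, then blend the chosen performances — can be illustrated with a simplified sketch. The paper's actual synchronization uses a graph-based framework over facial-expression and audio features; here, as an assumption for illustration only, dynamic time warping over a 1-D feature track stands in for it, and a plain cross-dissolve stands in for the landmark- and optical-flow-based compositing:

```python
import numpy as np

def dtw_align(a, b):
    """Align two feature tracks (e.g. per-frame audio energy) with
    dynamic time warping -- a simplified stand-in for FaceDirector's
    graph-based synchronization. Returns (i, j) pairs matching frames
    of take `a` to frames of take `b`."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of `a`
                                 cost[i, j - 1],      # skip a frame of `b`
                                 cost[i - 1, j - 1])  # match the frames
    # Backtrack along the cheapest path to recover the frame pairing.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def blend_frames(frame_a, frame_b, alpha):
    """Cross-dissolve two synchronized frames: alpha=0 keeps take A,
    alpha=1 keeps take B. A stand-in for the warped, landmark-guided
    compositing used in the real system."""
    return (1 - alpha) * frame_a + alpha * frame_b
```

With takes synchronized, a director's choice of "expression from take A, timing from take B" reduces to picking, for each output frame, which aligned pair to use and how strongly to weight each side of the blend.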
The researchers are presenting their findings at the ongoing International Conference on Computer Vision (ICCV) 2015 in Santiago, Chile.