I spent all yesterday doing a tute for Motionbuilder automated lip sync with the Voice Device, and took a lot of notes on the workflow to make sure I can do it again without any hiccups. The automation is pretty amazing; you can tweak it in many different ways to an extraordinary level of detail. Then once you're happy with the automation, plot it to the character and keyframe over the top in a different layer for expressions and offsets.
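If you'd rather script that last step than click through it: later versions of Motionbuilder ship with a Python API (pyfbsdk), and the plot-then-layer part looks roughly like the sketch below. The tute itself was all done in the UI, so treat this as a rough equivalent, not the exact steps.

    from pyfbsdk import FBSystem

    # After plotting the Voice Device result to the character, add a
    # fresh animation layer so hand-keyed expressions and offsets sit
    # on top of the automated pass instead of overwriting it.
    take = FBSystem().CurrentTake
    take.CreateNewLayer()
    take.SetCurrentLayer(take.GetLayerCount() - 1)  # new keys land here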
Started a workflow sheet for modelers preparing character models in Lightwave for this pipeline. It will outline:
Phonemes and Expressions
- a minimum set, then expanding on that for characters requiring more expressive face animation: more dialogue-intensive roles, bigger parts, etc.
- clear descriptions of the phonemes, with examples
Naming Convention
- so the models can be plugged into Motionbuilder and just work, and so nobody wastes time tracking down oddly-named morphs
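To keep the naming convention honest, a small checker script can flag oddly-named morphs before a model leaves Lightwave. This is only a sketch: the phoneme names in PHONEME_MORPHS are placeholders for whatever the workflow sheet finally settles on, and the morph list is fed in by hand here.

    # Sketch of a morph-name checker. PHONEME_MORPHS is a placeholder
    # set; the real names come from the workflow sheet's convention.
    PHONEME_MORPHS = {
        "Phoneme.AI", "Phoneme.E", "Phoneme.O", "Phoneme.U",
        "Phoneme.FV", "Phoneme.MBP", "Phoneme.L", "Phoneme.WQ",
    }

    def check_morph_names(morph_names):
        """Report morphs missing from, or not matching, the convention."""
        missing = PHONEME_MORPHS - set(morph_names)
        unknown = [m for m in morph_names if m not in PHONEME_MORPHS]
        return missing, unknown

    # Hypothetical endomorph list read off a Lightwave model:
    missing, unknown = check_morph_names(["Phoneme.AI", "Phoneme.Oh"])
    print("missing:", sorted(missing))
    print("oddly named:", unknown)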
Also going to work up some demos/studies, logging hours to each stage:
- setup
- first pass automated
- second pass keyframed
- final
I was blown away by Motionbuilder's power. In the last hour of a long day I worked out how to go into a scene already set up with automated lip sync, replace the audio file with a different one, and tweak the settings to suit that file. These were 10-second audio files (from the 10 Second Club archives). So with a saved Motionbuilder file where the character model is already rigged up with phonemes and so on, we're talking about the first automated pass for a new audio take taking less than an hour. If the same actor and recording conditions were used for each piece of dialogue for a specific character, you could expect that time to be even less.
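Scripted, the audio-swap part of that would look something like this (again pyfbsdk, so a later-version sketch, and the paths are made up). Repointing the Voice Device at the new clip and re-tweaking its settings is still done in the Devices pane, the same as in the tute.

    from pyfbsdk import FBApplication, FBSystem, FBAudioClip

    # Open the saved scene with the character and Voice Device already
    # rigged up (hypothetical path).
    FBApplication().FileOpen(r"C:/lipsync/character_voice_setup.fbx")

    # Bring the new 10-second take into the scene as an audio clip.
    new_take = FBAudioClip(r"C:/lipsync/takes/take_02.wav")

    # Confirm what audio is now in the scene.
    for clip in FBSystem().Scene.AudioClips:
        print(clip.Name)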
Saturday, July 31, 2004