When I use a campaign sound for my custom transmissions, the portrait's mouth moves. However, when I use my own sounds, it doesn't. I understand, of course, that the lip-sync is somehow embedded in the campaign sounds.
My question is: how does this work from a technical standpoint? Is there a standard lip-sync markup language? Is the lip-sync data embedded in the sound file itself, or is it imported separately but named the same way as the sound clip? And how was the lip-sync produced in the first place?
I'm satisfied with what I've got right now (my sound with some random lip movement), but I'm still curious about how this works.
It is generated by FaceFX. If you check the localization MPQs, you'll see a ton of .fx (or .fxa, I don't remember now) files, which are responsible for the lip-syncing.
Inside each file there are sequences named the same way as the sound files.
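That naming convention means the game can look up lip-sync data purely by matching sequence names against sound clip base names. A minimal sketch of that lookup in Python, assuming hypothetical sequence and file names (the real .fxa format is a proprietary binary, so the lists here stand in for its parsed contents):

```python
from pathlib import PurePosixPath

def match_sequences(fxa_sequences, sound_paths):
    """Pair each FaceFX sequence name with the sound clip of the same base name.

    fxa_sequences: sequence names as stored inside an .fxa actor file
    sound_paths:   archive paths of sound clips (e.g. from a localization MPQ)
    All names below are illustrative, not actual archive contents.
    """
    # Index sound clips by their case-insensitive base name (stem).
    sounds_by_stem = {PurePosixPath(p).stem.lower(): p for p in sound_paths}
    # A sequence with no matching clip maps to None -- the game would then
    # fall back to generic/random mouth movement, as described above.
    return {seq: sounds_by_stem.get(seq.lower()) for seq in fxa_sequences}

pairs = match_sequences(
    ["Marine_Yes01", "Marine_Pissed03"],
    ["Sounds/VO/Marine_Yes01.ogg", "Sounds/VO/Marine_Attack00.ogg"],
)
```

Here `pairs["Marine_Yes01"]` resolves to its clip, while `pairs["Marine_Pissed03"]` has no lip-sync data and maps to `None` — which would explain the random mouth movement you see with custom sounds.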
In the editor you can see the difference between, say, the Hydralisk portrait and the Marine portrait: the Marine is an FXA Portrait and the Hydralisk is not, because the hydra doesn't have any lip-sync information.
FXA stands for FaceFXActor.
Edited by Leru on 3/22/2013 5:53 AM PDT
Indeed; after digging in the repositories I found the .fxa files. After a bit of googling I found this: http://social.bioware.com/wiki/datoolset/index.php/FaceFX
FaceFX seems to be included in the Dragon Age toolset. I wonder whether it can be used to produce content for SC2?