If you use blendshape-driven visemes, you currently can't accurately reproduce the same result with a manual implementation. You can approximate it by combining the Voice float with the Viseme int, but the built-in viseme system seems to handle things differently under the hood.
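
To make the gap concrete, here's a rough sketch of what that approximation looks like (Python pseudocode just to show the mapping, not actual avatar/animator setup; the function and blendshape handling are purely illustrative), assuming Viseme indexes the standard Oculus viseme set and Voice is a 0-1 loudness value:

```python
# Sketch of the Voice + Viseme approximation described above.
# Assumption: Viseme is an int indexing the standard Oculus viseme order
# below, and Voice is a 0-1 microphone loudness value.
VISEME_NAMES = [
    "sil", "PP", "FF", "TH", "DD", "kk", "CH",
    "SS", "nn", "RR", "aa", "E", "ih", "oh", "ou",
]

def approximate_viseme_weights(viseme_index: int, voice_level: float) -> dict:
    """Naive approximation: drive only the currently detected viseme's
    blendshape, scaled by voice loudness (blendshape weight 0-100)."""
    weights = {name: 0.0 for name in VISEME_NAMES}
    if 0 <= viseme_index < len(VISEME_NAMES):
        weights[VISEME_NAMES[viseme_index]] = voice_level * 100.0
    return weights

# Example: viseme 10 ("aa") at half loudness -> "aa" blendshape at weight 50.
print(approximate_viseme_weights(10, 0.5)["aa"])
```

Even this only hard-switches a single blendshape scaled by loudness; whatever extra blending the actual system does on top of that isn't exposed, which is the part you can't recreate.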
It'd be nice to get access to this data so you could recreate the entire system manually if you wanted.