Expose Viseme Floats
Fox P McCloud
If you use blendshape-driven visemes, you currently can't accurately reproduce the same result with a manual implementation. You can approximate it by combining the Voice float with the Viseme int, but the actual viseme system seems to handle things differently. It'd be nice to have access to this data so you could recreate the entire system manually if you wanted.
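A rough sketch of the difference (hypothetical names and logic, not VRChat's actual pipeline): the Voice/Viseme workaround can only drive one viseme blendshape at a time, scaled by the Voice float, while exposed per-viseme floats could drive several blendshapes at once.

```python
# Illustrative sketch only -- assumed data, not VRChat's real viseme implementation.

# Oculus-style viseme set, which blendshape visemes are commonly named after.
VISEME_NAMES = ["sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
                "nn", "RR", "aa", "E", "ih", "oh", "ou"]

def workaround_weights(viseme_index: int, voice: float) -> dict[str, float]:
    """Approximation available today: one viseme at a time, scaled by the Voice float."""
    weights = {name: 0.0 for name in VISEME_NAMES}
    weights[VISEME_NAMES[viseme_index]] = voice
    return weights

def exposed_float_weights(viseme_floats: dict[str, float]) -> dict[str, float]:
    """What this request asks for: per-viseme float levels, allowing blended visemes."""
    return {name: viseme_floats.get(name, 0.0) for name in VISEME_NAMES}

# Example: per-viseme floats could express a mix of "aa" and "oh",
# which the single Viseme int + Voice float combination cannot.
print(workaround_weights(10, 0.8))                    # only "aa" driven
print(exposed_float_weights({"aa": 0.5, "oh": 0.3}))  # mixed visemes
```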
SenkyDragon
Didn't realize there was an existing canny here. Additional discussion is in https://feedback.vrchat.com/feature-requests/p/provide-individual-viseme-levels-as-avatar-float-parameters .
This post was marked as tracked.
․Mystical․
This would be very useful for avatars with two different viseme modes, such as a crocodile that uses normal visemes but swaps to a more constrained set once duct tape is wrapped around its snout.