We would like an official, better-integrated method for eye tracking and face tracking (ET/FT) that receives OSC data from VRCFT and similar applications.
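For context, tracking applications like VRCFT deliver their data as plain OSC messages, typically one float32 per expression parameter. Below is a minimal sketch of that wire format using only the Python standard library; the `/avatar/parameters/FT/v2/JawOpen` address is illustrative (actual parameter names depend on the avatar and tracking setup), and a real receiver would read these packets from a UDP socket.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """NUL-pad a byte string to a multiple of 4 bytes, per the OSC spec."""
    return data + b"\x00" * (4 - len(data) % 4)

def encode_osc_float(address: str, value: float) -> bytes:
    """Build an OSC message carrying a single big-endian float32 argument."""
    return (osc_pad(address.encode())   # NUL-terminated, padded address
            + osc_pad(b",f")            # type tag string: one float
            + struct.pack(">f", value))

def decode_osc_float(packet: bytes) -> tuple[str, float]:
    """Parse an OSC message that carries a single float32 argument."""
    end = packet.index(b"\x00")
    address = packet[:end].decode()
    # Skip the padded address, then the 4-byte padded ",f" type tag.
    offset = (end + 4) // 4 * 4 + 4
    (value,) = struct.unpack(">f", packet[offset:offset + 4])
    return address, value

# Round-trip example with an illustrative face-tracking parameter.
packet = encode_osc_float("/avatar/parameters/FT/v2/JawOpen", 0.5)
print(decode_osc_float(packet))
```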
Community-created facial tracking add-ons achieve excellent quality, but they cause significant CPU frame-time issues because they rely on complex animator processing, such as bit decompression and smoothing of synced expression parameters. Optimizing an animator-driven ET/FT setup also requires building a large direct blend tree, which demands advanced technical skill and is a problem in itself.
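To illustrate the bit-compression idea mentioned above (this is a sketch of the general technique, not VRCFT's exact encoding): each float parameter is quantized and synced as a handful of cheap boolean parameters, and the animator must then reassemble the float on the receiving side. Doing that reconstruction per parameter, per frame, inside animator layers or blend-tree branches is where the frame-time cost accumulates.

```python
def pack_param(value: float, bits: int = 8) -> list[bool]:
    """Quantize a [0, 1] float into `bits` synced boolean parameters."""
    q = round(min(max(value, 0.0), 1.0) * (2 ** bits - 1))
    return [bool((q >> i) & 1) for i in range(bits)]

def unpack_param(flags: list[bool]) -> float:
    """Reconstruct the float from its boolean parameters (the step an
    animator-driven setup must replicate with layers or blend trees)."""
    q = sum(1 << i for i, flag in enumerate(flags) if flag)
    return q / (2 ** len(flags) - 1)

# An 8-bit encoding keeps the value within ~1/255 of the original.
print(unpack_param(pack_param(0.5)))
```

A native implementation could do this decoding (and smoothing) outside the animator entirely, which is part of the appeal of an official feature.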
One possible solution is to extend the Avatar Descriptor with official ET/FT features.