Full Face Tracking blendshape support for Selfie Expression for PC
Rycia
Selfie Expression has been around for a while now.
However, as good as the existing Selfie Expression is, I would love for PC users to have the option of other blendshape formats, such as ARKit and Unified Expressions.
This opens up the possibility of full native face tracking in desktop mode using industry-standard blendshapes, provided the avatar supports them. That's why there should be an "Expression Mode" option that creates a selector (similar to the "Selfie Expression Quality Level" selector) for the different formats.
Let's name the default one (the current basic one) "Universal" or "General", and then have "Unified Expressions" (UE) and "Augmented Reality Kit" (ARKit). These are both open formats that VRChat is more than able to support.
E.g.
Expression Mode < Universal >
(Current system, works with all avatars, default)
Expression Mode < Unified Expressions >
(Most commonly used)
Expression Mode < Augmented Reality Kit >
(Common, more for avatars with VTuber support)
Tooltip: Changes the method Selfie Expression uses to track your face. Choose what your avatar supports.
Nothing beyond these three is really necessary. Creators can easily convert blendshapes if they don't already support these two standards.
These can just use the common blendshapes defined by the two standards. They're open to use, and I'm fairly certain no licensing is required to do so.
Currently, in order to sit at your desk and get full face tracking using ARKit or Unified Expressions, you have to set up a camera on a phone with something like MeowFace (which can no longer be installed outside of finding the APK, and is no longer maintained) and forward the tracking data to PC using the VRCFT app. A lot of steps are involved, which is by no means user-friendly or native, and not even the best quality possible.
It's already possible to use VRCFT while in desktop mode, so its OSC output could conflict with this feature. If this were to be implemented, either:
A) It needs to be up to the user not to run VRCFT.
B) VRChat needs to override whatever comes in from VRCFT over OSC for these parameters while in desktop mode, depending on whether the "Universal" selfie expression is enabled or not.
C) VRChat itself needs to internally drive the OSC values.
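To make option B concrete, here is a minimal sketch of how that precedence could work. The function and parameter names here are my own illustrations, not anything VRChat or VRCFT actually ship; only the "/avatar/parameters/..." address style follows the existing OSC convention.

```python
from typing import Optional

# Hypothetical sketch of option B: while Selfie Expression runs in a full
# blendshape mode on desktop, its internally generated value wins over
# external OSC input (e.g. from VRCFT); everything else passes through.

FACE_TRACKING_PREFIX = "/avatar/parameters/"
FACE_BLENDSHAPES = {"JawOpen", "MouthSmileLeft", "EyeLidLeft"}  # illustrative subset

def resolve_parameter(address: str, osc_value: float,
                      selfie_value: Optional[float],
                      selfie_mode: str) -> float:
    """Decide which source drives a face-tracking parameter."""
    name = address.removeprefix(FACE_TRACKING_PREFIX)
    is_face_param = name in FACE_BLENDSHAPES
    if (is_face_param
            and selfie_mode in ("Unified Expressions", "ARKit")
            and selfie_value is not None):
        return selfie_value  # Selfie Expression overrides incoming OSC
    return osc_value  # non-face parameters and "Universal" mode pass through
```

In "Universal" mode (or for non-face parameters), external OSC keeps working exactly as it does today, so VRCFT users lose nothing.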
A second issue is that this might have a larger performance overhead than the current "Universal" mode. This is okay for a few reasons:
1) Warn about this in the tooltip.
2) Statistically, I would assume that most desktop players who use Selfie Expression spend most of their time in VR, or use VR at some point. It takes a relatively strong PC to run VR, so people who use this feature will mostly already have PCs strong enough to handle these more detailed expression methods. It will be harder to run, but the "Universal" method will always remain an option for any avatar.
3) This can make use of Selfie Expression Quality Level, simply by locking to "Universal" when Auto-Adjust drops it to the lowest "Performance" setting. E.g.
Off = Completely off
Performance = Lock to "Universal"
Balanced+ = UE or ARKit
4) Keep this PC only. Mobile wouldn't be able to handle it.
5) At lower quality levels, like Balanced or High, the system can strip out some minor blendshapes like "Dimple", while "Best Performance" uses all of the blendshapes available.
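Points 3 and 5 together could be sketched roughly like this. The quality level names come from this post; the specific "minor" blendshape list and function names are purely illustrative assumptions, not anything VRChat has defined.

```python
# Hypothetical sketch of gating Expression Mode by the Selfie Expression
# Quality Level, and trimming minor blendshapes at lower levels.

MINOR_BLENDSHAPES = {"DimpleLeft", "DimpleRight",
                     "CheekSquintLeft", "CheekSquintRight"}  # illustrative

def effective_mode(requested_mode: str, quality: str) -> str:
    """Lock to 'Universal' when Auto-Adjust drops to the lowest level."""
    if quality == "Off":
        return "Off"
    if quality == "Performance":
        return "Universal"
    return requested_mode  # Balanced and above honour UE / ARKit

def active_blendshapes(all_shapes: set, quality: str) -> set:
    """Balanced/High drop minor shapes; Best Performance keeps everything."""
    if quality in ("Balanced", "High"):
        return all_shapes - MINOR_BLENDSHAPES
    return set(all_shapes)
```

The idea is that Auto-Adjust never has to choose between face tracking and framerate: it just degrades gracefully down to the current "Universal" behaviour.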
This could also mean introducing a new backend system for Selfie Expression to support something like this, which could take a lot of work; but Selfie Expression has already been around and tested for quite a few months now. In its current state, I'd say it's safe to do even more with it!
This request is similar to https://feedback.vrchat.com/sdk-bug-reports/p/1601-add-support-for-vrcft-unified-expressions-for-selfie-expression , but this one covers how it can actually be done, requests both UE and ARKit, and addresses some problems that can come up with it. I have not found a request for this feature in this specific way.