Researchers from ViRVIG (a European joint research group for visualization, virtual reality, and graphics interaction; in this case from the Technical University of Catalonia and the University of Cyprus) have released SparsePoser, a deep learning-based IK system trained on motion capture data. It is designed to produce high-quality full-body skeletal motion from sparse (i.e. six-point) tracking data, and is built on PyTorch.
There is already an MIT-licensed Unity implementation. The software runs standalone and communicates with Unity over TCP/IP. Anyone could bridge it in today via OSC or SteamVR tracker emulation, but the ideal implementation would be native support inside VRChat, as an option for those with compute power to spare.
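As a rough sketch of the OSC bridging route: VRChat accepts tracker data as OSC messages over UDP (port 9000 by default), so output from the standalone solver could be repacked and forwarded. The snippet below hand-encodes a minimal OSC message using only the standard library; the `/tracking/trackers/1/position` address and the example coordinates are illustrative assumptions, not part of the SparsePoser project.

```python
import socket
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC 1.0 message whose arguments are all float32."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated, then padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))              # address pattern
    msg += pad(("," + "f" * len(floats)).encode())  # type tag string, e.g. ",fff"
    for f in floats:
        msg += struct.pack(">f", f)                 # big-endian float32 args
    return msg

# Assumption: VRChat is listening for OSC on localhost:9000 with OSC enabled.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/tracking/trackers/1/position", 0.0, 1.2, 0.5),
            ("127.0.0.1", 9000))
```

In a real bridge, the TCP/IP output of the solver would be read in a loop and each solved joint repacked into position/rotation messages like the one above.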
GitHub with MIT-licensed Unity demo project: https://github.com/UPC-ViRVIG/SparsePoser
Preprint on arXiv (also published in ACM Transactions on Graphics): https://arxiv.org/pdf/2311.02191.pdf