Currently, judging by how things behave under heavy network traffic, it seems like VRC Pickups constantly send their position and rotation over the network while they're being carried. But VRChat is already sending the bone positions of every player in the world. So why not send a one-time sync of which hand (left or right) picked up the object, plus the object's offset relative to that hand, and then, until the object is dropped, apply that via parenting (or equivalently, copy the hand's matrix and apply it to the stored offset)? That would avoid sending the object's full position and rotation every frame when it's trivial to reconstruct from hand data that every client already has.
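The reconstruction described above is just two rigid-transform multiplies. Here's a minimal sketch in plain Python with hand-rolled 4x4 matrices (the function and variable names are illustrative, not part of the VRChat SDK; in Udon you'd use the engine's own transform types instead):

```python
# Sketch: sync the hand-relative offset once at pickup, then rebuild the
# object's transform each frame from the hand transform every client
# already has from bone data. All names here are hypothetical.

def matmul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    """Invert a rigid transform: R^T for rotation, -R^T * t for translation."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def translation(x, y, z):
    """4x4 transform that only translates (identity rotation)."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

# At pickup: compute the offset once and sync it (plus which hand) once.
hand_at_pickup = translation(1.0, 2.0, 0.0)
object_at_pickup = translation(1.5, 2.0, 0.0)
local_offset = matmul(rigid_inverse(hand_at_pickup), object_at_pickup)

# Every frame afterwards, on every client: no extra network traffic needed.
hand_now = translation(3.0, 2.0, 1.0)  # hand has moved
object_now = matmul(hand_now, local_offset)
print([row[3] for row in object_now[:3]])  # object follows the hand: [3.5, 2.0, 1.0]
```

The key point is that `local_offset` is constant for the whole duration of the carry, so it only needs to cross the network once.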
This is implementable in Udon and works fine; it would just be nice if the default VRCPickup system worked this way too, rather than having to recreate such basic functionality by hand to make it more efficient.