One recent trend on the dark side of VRChat is uploading malicious or highly objectionable avatars as public content on New User accounts, so that moderation action hits those throwaway accounts instead of a Trusted "main" account.
One handy solution to this problem would be to apply safety settings based on the trust rank of the avatar's uploader (the New User) rather than its wearer (the Trusted user). This makes more sense in general: it is the content creator's trust level that should be taken into account, not the content user's, since the creator has ultimate control over what that content is. It would also solve the issue of robot visitors: if they use content uploaded by trusted accounts, there is much less reason to hide them.
The way I see this proposal being implemented, all avatar-related safety settings would be applied according to the uploader's trust rank, while other settings (voice, user icon, and perhaps portals, if those are implemented) would still be applied according to the wearer's own rank. Alternatively, an option to keep the old behavior could be added.
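To illustrate, here is a minimal sketch of the proposed rank-selection logic. All names here (the rank ordering, the feature names, the functions) are hypothetical, since VRChat's actual safety-system internals are not public; the point is only the split between uploader-based and wearer-based checks:

```python
from enum import IntEnum

class TrustRank(IntEnum):
    # Assumed ordering of trust ranks, lowest to highest.
    VISITOR = 0
    NEW_USER = 1
    USER = 2
    KNOWN_USER = 3
    TRUSTED_USER = 4

# Safety features that come from the avatar itself; under this proposal
# these are gated on the uploader's rank, not the wearer's.
AVATAR_FEATURES = {"avatar_display", "shaders", "particles", "animations", "avatar_audio"}

def effective_rank(feature: str, wearer_rank: TrustRank,
                   uploader_rank: TrustRank) -> TrustRank:
    """Pick which rank a safety setting should be checked against:
    the uploader's for avatar content, the wearer's for everything else
    (voice, user icon, ...)."""
    return uploader_rank if feature in AVATAR_FEATURES else wearer_rank

def feature_shown(feature: str, min_rank: TrustRank,
                  wearer_rank: TrustRank, uploader_rank: TrustRank) -> bool:
    # Show the feature only if the relevant rank meets the safety threshold.
    return effective_rank(feature, wearer_rank, uploader_rank) >= min_rank
```

For example, with a safety threshold of `USER`, a Trusted user wearing an avatar uploaded by a New User account would have the avatar's shaders hidden (`feature_shown("shaders", ...)` is false) while their voice would still come through, since voice is gated on the wearer's own rank.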