I am writing to provide constructive feedback regarding the recent moderation actions concerning "NSFW geometry/textures" on avatars.
Currently, the NSFW tag exists but fails to adequately protect creators. Users are frequently banned for public uploads even though the UI does not make clear where the line between allowed content and a bannable offense lies.
I believe the following system-level changes would solve this issue more effectively than reactive ban waves:
  1. Explicit Definition & Enforcement of the NSFW Tag
The NSFW tag currently appears to be intended for private use only, yet creators are punished for public uploads. The system should be binary:
  - The NSFW option is strictly limited to private avatars, OR
  - If allowed on public avatars, it automatically triggers viewing restrictions (age-gating).
  2. Prevention of Public NSFW Uploads
It should not be technically possible to upload an avatar as "Public" with NSFW characteristics and no safeguards in place:
  - If an avatar is public, the NSFW toggle should be disabled, or
  - Checking the NSFW toggle should automatically enforce age verification and serve a "fallback" avatar to non-verified users.
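To make the proposed rule concrete, here is a minimal sketch of the resolution logic, assuming a hypothetical data model (the field names, `FALLBACK_AVATAR_ID`, and the function itself are illustrative, not the platform's actual API):

```python
from dataclasses import dataclass

# Hypothetical model of an upload and a viewer; names are illustrative only.
@dataclass
class AvatarUpload:
    visibility: str   # "public" or "private"
    nsfw: bool        # the creator-set NSFW toggle

@dataclass
class Viewer:
    age_verified: bool

FALLBACK_AVATAR_ID = "fallback-safe-avatar"  # placeholder identifier

def resolve_avatar(upload: AvatarUpload, avatar_id: str, viewer: Viewer) -> str:
    """Decide what a viewer sees: the real avatar, or a safe fallback.

    A public NSFW avatar is never shown to a non-age-verified viewer;
    everyone else sees the original avatar.
    """
    if upload.nsfw and upload.visibility == "public" and not viewer.age_verified:
        return FALLBACK_AVATAR_ID
    return avatar_id
```

The point of the sketch is that enforcement happens at view time, server-side, so the creator is never in a position to be banned for the toggle combination itself.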
  3. Pre-Upload Detection System
Instead of banning users retroactively, please consider adding a detection step to the upload process (scanning mesh geometry or known texture patterns). If potential NSFW content is detected, the uploader should receive a warning: "NSFW content detected. Please set to Private or apply restrictions."
This protects minors from exposure to such content and protects creators from accidental bans caused by third-party assets they may not have fully inspected.
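The pre-upload flow described above could be sketched as follows. This is only an illustration: `nsfw_score` stands in for whatever mesh/texture classifier the platform would run, and the threshold value is an assumption, not a recommendation:

```python
# Hypothetical pre-upload gate. The classifier itself is out of scope here;
# we assume it returns a confidence score in [0.0, 1.0].
NSFW_THRESHOLD = 0.8  # illustrative value, to be tuned by the platform

WARNING = "NSFW content detected. Please set to Private or apply restrictions."

def pre_upload_check(visibility: str, nsfw_score: float) -> tuple[bool, str]:
    """Run before the upload is finalized.

    Returns (allowed, message). A public upload that trips the detector
    is blocked with a warning instead of being published and later banned.
    """
    if visibility == "public" and nsfw_score >= NSFW_THRESHOLD:
        return (False, WARNING)
    return (True, "OK")
```

Because the check runs before publication, the worst outcome for a creator is a blocked upload and a prompt to fix the settings, rather than a retroactive ban.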
Moving from reactive moderation to a proactive system (Age-Gate + Upload Detection) would reduce the moderation workload, prevent false bans, and rebuild trust with the creator community.
Thank you for considering this feedback.