Summary:
Community leaders in VRChat face significant challenges managing disruptive or harmful groups that infiltrate well-moderated spaces. We are requesting a feature that lets community leaders ban entire groups from their group instances rather than relying solely on individual bans.
Problem:
In large communities with thousands of members, banning disruptive users one by one is time-consuming and inefficient, especially when the problematic behavior is clearly tied to organized groups. These include crashers, hate groups (e.g., Nazi-themed groups), and communities that promote offensive ideologies (e.g., a group named "Nine Eleven was the Best Event Ever").
As things currently stand, users from such groups can still join group instances unless they are individually identified and banned. Worse, users can hide or obfuscate their group affiliations, making moderation even more difficult. Reporting such groups to VRChat for dangerous or criminal behavior has, in many cases, led nowhere, leaving community leaders without effective tools to protect their spaces.
Proposed Solution:
Introduce a Group-Based Ban System that allows community leaders to blacklist specific VRChat groups from accessing their group instances (a rough sketch of the join-time check follows the list below). This would allow for:
Preventing members of known harmful groups from joining community instances.
Reducing the burden on moderators by enabling proactive, community-wide protection.
Making it significantly harder for organized bad actors to disrupt well-run communities.
Offering an additional layer of moderation, especially in the absence of effective enforcement on VRChat’s side.
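To make the request concrete, here is a minimal sketch in TypeScript of the join-time check we are proposing. Everything in it is hypothetical: the type names, the function, and the assumption that the platform can see a joining user's group memberships server-side are for illustration only, and none of it reflects real VRChat APIs.

```typescript
// Hypothetical sketch of the proposed group-ban check, performed by the
// platform when a user tries to join a group instance. All names here are
// made up for illustration; this is not VRChat's actual API.

interface GroupInstance {
  groupId: string;
  bannedGroupIds: Set<string>; // the proposed group-level blocklist
}

interface JoinRequest {
  userId: string;
  // Group memberships as known to the platform at join time (assumed to be
  // visible server-side even if the user hides them from other players).
  groupIds: string[];
}

function canJoin(instance: GroupInstance, request: JoinRequest): boolean {
  // Deny entry if any of the user's groups is on the instance's blocklist.
  return !request.groupIds.some((id) => instance.bannedGroupIds.has(id));
}

// Example: a community blocks a known crasher group.
const instance: GroupInstance = {
  groupId: "grp_community",
  bannedGroupIds: new Set(["grp_known_crashers"]),
};

const request: JoinRequest = {
  userId: "usr_example",
  groupIds: ["grp_known_crashers", "grp_unrelated"],
};

console.log(canJoin(instance, request)); // false: user belongs to a banned group
```

Because the check would run on the platform's own membership data rather than on what a user chooses to display, it would presumably still work when users hide their group affiliations, addressing the obfuscation problem described above.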
Benefits:
Improved safety and user experience for community members.
More scalable and efficient moderation tools.
Empowered community leaders with more control over their own spaces.
A step toward reducing the spread and impact of dangerous ideologies and behaviors within VRChat.
Conclusion:
Giving community leaders the ability to ban entire groups aligns with VRChat’s goals of safety, inclusivity, and giving users more control over their experiences. This feature would be a meaningful upgrade to the platform’s moderation toolkit and help protect communities from coordinated harm.