Re-open Moderation Tickets for certain malicious avatars known as "Crashers".
complete
Furriest
VRChat has recently made changes to their Trust and Safety reporting flow. These allow pornographic, profane, graphic, or racist content to be moderated with great speed, efficiency, and scalability.
What these changes do not allow for, however, is the reporting of "Crasher" avatars: avatars designed to close the VRChat client either the moment they are loaded (corrupted asset bundle) or the moment a toggle is activated (crasher shader, mesh, or particle system).
Doing something as simple as re-opening tickets specifically for avatars designed to crash VRChat clients, which by their nature cannot be reported in-game because they close the client, would go a long way toward reducing the volume of crashers on the platform.
Please, at the very least, re-open moderation tickets for avatars that close the VRChat client (AKA Crashers). Otherwise, Public instances, where New Users and Visitors form their first impressions, will remain riddled with bad actors who close VRChat clients via "Crasher Avatars." If there remains no way to report "Crasher" avatars, the opportunity cost to VRChat's user-base growth will be great. I have already noticed an uptick in "Instant Crashers" in Public instances, and I have no way to report them due to these recent moderation changes.
If you want a practical example, I have encountered a user who crashes the "Furry Hideout" world across all of its instances, one by one, using a Trusted-rank account. That ticket was closed due to the moderation changes, and there was no other way to report that type of behavior / malicious avatar. This is a problem, as it allows the malicious behavior to persist, and it still does today.
Tupper - VRChat Head of Community
marked this post as
complete
In cases where you can't report an avatar because it instantly crashes you (and thus makes using in-app reporting hard or impossible), you can still submit a ticket via our Help Desk.
You need to include evidence: an output log is best. A video can help, but usually doesn't provide enough context or information (plus, it's unreasonable to ask you to record 24/7). Screenshots, of course, usually aren't helpful at all.
Please do your best to include the avatar ID -- it's the best way to let us know exactly which avatar caused the issue. Without an avatar ID, it is much, much harder for us to investigate and action the reports we get. You can learn more in this Help Desk article: https://help.vrchat.com/hc/en-us/articles/360062658553-I-want-to-report-someone
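If an avatar crashes you before you can copy its ID in-game, the ID may still appear in your output log. Here's a minimal sketch for pulling candidate IDs out of a log file; it assumes avatar IDs follow the common `avtr_` + GUID format and that the log path noted in the comment matches your install (both may vary):

```python
import re

# Assumption: VRChat avatar IDs are "avtr_" followed by a GUID,
# e.g. avtr_12345678-1234-1234-1234-123456789abc
AVATAR_ID_RE = re.compile(
    r"avtr_[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}"
)

def extract_avatar_ids(log_text: str) -> list[str]:
    """Return unique avatar IDs in order of first appearance."""
    seen: dict[str, None] = {}
    for avatar_id in AVATAR_ID_RE.findall(log_text):
        seen.setdefault(avatar_id)
    return list(seen)

# On Windows, output logs are typically found under
#   %USERPROFILE%\AppData\LocalLow\VRChat\VRChat\output_log_*.txt
# (exact path may vary); read the file's text and pass it to
# extract_avatar_ids() to list every avatar ID it mentions.
```

Skimming the resulting list for an ID logged right before the crash can give you the avatar ID the Help Desk asks for.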
Deantwo
The tooltip for the "Integrity and Authenticity" option is:
"Spamming, botting, impersonation, malicious content, or misinformation"
The name of that option isn't all that good, so I can see why you would be confused. I had to check the tooltip on it to understand what it was about.
WubTheCaptain
OP's concern is that avatar reports (without evidence, or reports that could've been handled with a more streamlined in-app report alone) are getting closed or auto-closed by the Help Desk ticketing system. This is especially true for most tickets originally submitted before January 26, 2026, which are being closed by Trust and Safety team members with a request to resubmit with evidence or in-app (implicitly, if it's still an issue). Some old avatar report tickets (e.g. opened more than 6 months ago) may still linger open in the ticketing system, but are automatically closed by it if replied to.
I posted something here earlier, but decided to delete it. After reporting a user in-app, a web form user report with evidence (such as output logs) is still effective.
Deantwo
Thanks for providing context, WubTheCaptain. The OP was a little hard to read and I likely skimmed too much of it.