Trust and Safety tickets being ignored, despite following instructions.
Tehrasha
I currently have MULTIPLE crasher and NSFW avatars reported via the in-game/website ‘Report’ option which remain publicly available weeks after being reported.
Multiple attempts to address this via T&S with the requested ‘additional evidence’ are met only with the same generic ‘use the in-game’ boilerplate response and closure of the ticket, despite the text of the ticket pointing out that this has already been done.
“If you would like to attach any evidence alongside your report, please resubmit it through the Helpdesk form.”
These ARE the follow-ups your boilerplate message is asking for.
Jan. 08, 2026 https://help.vrchat.com/hc/en-us/requests/653874
Jan. 27, 2026 https://help.vrchat.com/hc/en-us/requests/659622
A more recent ticket was to report a KKK avatar, in which I pointed out that the avatar was too new to appear in the standard avatar search, so the in-game/website report function was impossible to use in this case. I therefore included log files showing the avatar's name and the username of the uploader (a sketch of how I fish those out of the logs is below).
Feb. 06, 2026 https://help.vrchat.com/hc/en-us/requests/662660
This received the standard boilerplate response and closure of the ticket literally seconds after it was submitted.
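For anyone stuck in the same situation, this is roughly how I pull those IDs and names out of the logs. A minimal sketch only: the path is the usual Windows log location, and the exact wording of the log lines varies by build, so it simply prints every line containing an avatar ID and lets you read the surrounding context yourself.

```python
# Scan VRChat output logs for avatar IDs so a ticket can name the exact
# avatar even when it never shows up in the avatar search.
# Assumes the default Windows log location; adjust LOG_DIR elsewhere.
import glob
import os
import re

LOG_DIR = os.path.expandvars(r"%USERPROFILE%\AppData\LocalLow\VRChat\VRChat")
AVATAR_ID = re.compile(r"avtr_[0-9a-fA-F-]{36}")  # "avtr_" + UUID

for path in glob.glob(os.path.join(LOG_DIR, "output_log_*.txt")):
    with open(path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            if AVATAR_ID.search(line):
                # the rest of the line usually carries the avatar name
                # and the wearer's display name
                print(line.strip())
```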
I decided to try moving it up the chain and submitted what you see above to the Help Desk, reporting that I was having trouble with the website. Instead of a 'Submit' button, I was met with a 'Next Step' button, which feeds the ticket to an AI assistant that attempts to solve the issue without having to bother a human being.
“Our Help Desk assistant will scan through our docs and check to see if we can help you right away. If you still need assistance, you can submit a ticket to our User Support team.”
It dissected my message, reiterated all of my points broken out into bullet points, then added some flowery platitudes while repeating the 'use the in-game' mantra, ultimately admitting that there is no workaround or way to escalate a report through the ticket system.
It then said that if this did not address my issue, I should go ahead and click Submit so someone could address it. So I did.
Today, the ticket received this...
-----------------------
Hello,
It looks like you may have sent a moderation-related inquiry to the incorrect channel.
If you would like to resubmit your report, we encourage utilizing VRChat’s In-app reporting system. For more guidance, please refer to the following articles:
- Blocking, Unblocking, and Reporting Users in VRChat
- Reporting Avatars and Worlds
- Reporting Groups
If you would like to attach any evidence alongside your report, please resubmit it through the Help Desk form.
Lilith
VRChat Team
-----------------------
This new “smooth transition into a newer, more streamlined report system” is more akin to a brick wall if the report doesn't fall into a very narrow set of defined violations, with no room for special-case outliers (outliers that are far too common). It is utterly demoralizing for those who want to help make VRChat better and safer.
Sintharas
No ID logs for privately uploaded crasher avatars, so it's not possible to report them in-game (because they crashed you, and they disappear from the report list right after they leave) or through the website (no ID to get). An absolute design flaw in my eyes. (Thank the community for decoding the amplitude cache to actually dump IDs; a rough sketch is below.)
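For the curious, "dumping IDs" boils down to pattern-matching GUIDs in the raw cache bytes. A rough sketch only, assuming the cache file sits next to the logs and is named amplitude.cache as in the community tools; the filename and format may differ between builds:

```python
# Pull avatar IDs out of VRChat's binary analytics cache. The file is
# binary, but the IDs are plain ASCII inside it, so a byte-level regex
# is enough. Path and filename are assumptions from community tooling.
import os
import re

CACHE = os.path.expandvars(
    r"%USERPROFILE%\AppData\LocalLow\VRChat\VRChat\amplitude.cache"
)
AVATAR_ID = re.compile(rb"avtr_[0-9a-fA-F-]{36}")

with open(CACHE, "rb") as cache:
    ids = sorted(set(AVATAR_ID.findall(cache.read())))

for avatar_id in ids:
    print(avatar_id.decode())
```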
No native way to report emojis, stickers, or prints retroactively through any means, only in-game and live. (Thanks to the foxxo for an API push tool that gets the job done; it shouldn't be community work though.)
Zero action on weekends; crasher groups already tend to announce the newest public crashers on Fridays after 4 PM for a weekend full of fun.
Barely any action against repeat offenders and uploaders with clear indications they are acting maliciously (looking at you, Void_Demon and Blackv65). The same goes for the malicious avatar users...
Zero action against reported crasher avatars that are older than a year and have just been announced publicly for maximum damage in publics. You literally have to get on your knees with multiple tickets and massive dumps of logs and videos to get one removed. In my case it was the "Quashole" avatar, which ravaged publics for months after being publicly announced before anything was done.
No filters or blacklists for text inputs like usernames, profiles, pronouns, etc. The number of bans I've pushed for racism or NSFW content in those fields is unfathomable.
Clearly non-functioning anti-cheat and anti-tamper measures for years now. It's nice that the holes are getting patched, but how about taking the initiative and using a different system to authenticate genuine clients and uploads? "Certificates" is a great word that comes to mind (rough idea sketched below), and maybe the behavioral analysis of critical users that the community has already built would do the trick.
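To be concrete about the certificates idea: something as simple as the client signing each upload and the server refusing anything that doesn't verify. A toy sketch only; every name here is hypothetical and VRChat's actual upload pipeline is not public:

```python
# Toy illustration of signed uploads: a genuine client holds a signing
# key, the upload server holds the matching public key and rejects
# anything it can't verify. All names here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # baked into the genuine client
verify_key = signing_key.public_key()       # known to the upload server

bundle = b"example avatar asset bundle bytes"  # stand-in for the real upload
signature = signing_key.sign(bundle)           # client side, at upload time

try:  # server side, before accepting the upload
    verify_key.verify(signature, bundle)
    print("upload accepted: signed by a genuine client")
except InvalidSignature:
    print("upload rejected: tampered or not from a genuine client")
```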
Cookie-cutter AI agent responses on integrity topics, and "smooth transitioning" that closes tickets instead of migrating them, is the most toxic trait I've seen in customer support and is definitely not best practice.
And group or world owners are powerless, except for putting their instances behind an age-verification gate from a shady provider that I'm just waiting to see leak so GDPR and the EUDA blast them.
If this platform didn't have such a strong community, it would have been dead in its tracks from a clear lack of actual brains in its security architecture.
WubTheCaptain
I've posted my personal response/feedback and anecdotes on this topic on VRChat Ask: https://ask.vrchat.com/t/recent-changes-to-abuse-reporting-trust-and-safety/47535/7 - it's a bit hard to digest; feel free to skip the anecdotes. Trying to TL;DR here:
My takeaway from that response is that the new policy works better than web-form ticketing once the instructions and new policy are carefully followed, but the templated message that closes tickets and asks for in-app resubmission could use more clarity or empathy from Trust and Safety, so it reads as approachable rather than being misunderstood as ignorance (which I think is what the OP is running into). I thought the Support team's response telling the OP to resubmit moderation-related inquiries from an incorrect channel (Support) to Trust and Safety was appropriate, even if the web-form response from Trust and Safety may be blunt.
For me, a closed in-app report is the final decision on the matter. I interpret evidence in web-form tickets as "kept on file if it’s needed for investigation", and I try my best to include an avatar ID to have the matter investigated, but I don't get discouraged if a web-form ticket is closed with evidence attached; most of my reports can be resolved with an in-app report alone.
See also tupper's response on Canny to another user: https://feedback.vrchat.com/feature-requests/p/re-open-moderation-tickets-for-certain-malicious-avatars-known-as-crashers