a vulnerability that allows you to ban anyone
Sidุุุุุุຸຸຸ
As I understand it, if you use cheats and send an endless stream of reports through the VRChat API, sooner or later the "AI" responsible for checking an account's legitimacy starts to hallucinate and confabulate, and it will ban you for having any non-English characters.
I can't repeat it myself, sorry, I don't use cheats.
But I can provide screenshots of how a schoolkid using cheats on a Steam account got us banned.
My account was banned for just one day, since I only had a Russian-language preview on my avatar, but my friends were permanently banned.
One of his accounts is already banned.
WubTheCaptain
Social engineering shouldn't be considered as a bug, imo.
As of December 2025, there is no "AI" handing out bans. https://ask.vrchat.com/t/developer-update-december-18-2025/47354#p-86291-first-there-is-no-ai-ban-wave-9
You can submit a moderation ticket on VRChat's Help Desk to appeal a ban.
Sidุุุุุุຸຸຸ
WubTheCaptain Okay, maybe you're right. I don't know all the details, but did all these people really have problems with their profiles? And judging from the posts you sent, Vard confirms that there is an automod.
Sidุุุุุุຸຸຸ
WubTheCaptain I believe it may not be AI, but the vulnerability exists; I just don't know how to properly characterize it.
Sidุุุุุุຸຸຸ
WubTheCaptain Yes, my friends and I have already appealed the bans; we are waiting for a response from support.
WubTheCaptain
Sidุุุุุุຸຸຸ I can share an anecdote (n=1) from summer 2025. One person filed multiple reports, across different categories, for legitimate offenses in a group public instance (VRC+ prints), with minors present nearby. The reported user was handed a permanent ban on the first offense, with no previous bans on record. This "permanent" ban was reversed after a week, after a third moderation appeal - and only after I (as the reporter) submitted a moderation ticket to Trust & Safety explaining why the person I had reported should not receive a permanent ban, and that he was a helpful contributor to the VRChat community. To the best of my awareness, the banned user never admitted to or corrected his bad behavior in his appeals, only asking "can I appeal?", which is why they were initially denied. End result: all content removed, unbanned after a week.
There was no visible automation. The reports were handled several hours after reporting. One report or many generally doesn't matter, in my experience (and from what I think I've seen stated on the VRChat Ask forums or on Reddit by the VRChat Community Team).
There are human disagreements, though. I have seen over 900 closed moderation reports from things I've personally reported for actual safety issues (in public), and some of the resulting actions. A few <>< avatar creators were banned for alleged "NSFW" content too; those bans were overturned after community members contacted the VRChat Community Team directly through friend contacts.
Automated scanning is used for photo uploads (e.g. avatar pictures uploaded from the SDK), but to the best of my awareness it doesn't lead to an automated ban; it only rejects the photo upload. There's at least one bug report somewhere on Canny from last year about this "failed to upload" message not being explicit that photo automoderation rejected it.
But I am not part of the VRChat Team, so take this with a grain of salt.
Please appeal. Since we're not seeing the results of the appeal process here, including the appeal text, I personally doubt this thread is actionable either way.
Discuss this on the VRChat Ask forum.