Transparency Reports for Trust & Safety
Faxmashine
Please keep us updated on what VRChat is doing to keep its users safe.
For example, here's Discord's transparency report: https://discord.com/blog/discord-transparency-report-h1-2021
knah
To add in on the topic of Trust & Safety transparency, making the ban appeal process and ban reasons more informative/transparent would be a nice improvement.
So far I've heard plenty of stories from people who got a confusing ban out of the blue; when they contacted support to appeal, or at least to understand what exactly they did wrong, they only got a generic reply of "Nope, no unban, no additional info on the ban reason, this reply is not automated, goodbye. Oh, and the ban is permanent."
Some of those people were eventually unbanned, implying that the ban was wrongful, that an appeal was possible after all, or that the ban shouldn't have been permanent in the first place. Getting a flat-out "no" from support in these cases is clearly wrong, since the real answer was far more nuanced. Most people, however, just give up and either make alt accounts or leave the platform.
More transparency in this entire process would be highly beneficial. Otherwise, as the userbase grows, someone influential will eventually get one of these seemingly random bans (even if it's for a legitimate reason) and create a lot of bad PR along the lines of "VRChat just randomly bans people without explanation or appeal."
I understand that there needs to be some balance between not exposing too much internal detail and giving people clear information, but the current balance feels far from ideal.
KidKwazine
Piggybacking on this to suggest publishing a warrant canary alongside the transparency report. Warrant canaries are often published with transparency reports to account for "gaps" such as NSL gag orders or FISA court orders.
A warrant canary, as defined by the Electronic Frontier Foundation:
"A warrant canary is a colloquial term for a regularly published statement that a service provider has not received legal process that it would be prohibited from saying it had received. Once a service provider does receive legal process, the speech prohibition goes into place, and the canary statement is removed."
This isn't a complete solution, since providers/partners may not do the same, but it's ultimately a stance on user privacy that I think is very important (and it takes little to no effort to do).
For more info: https://www.eff.org/deeplinks/2014/04/warrant-canary-faq
Salbug
As VRChat's userbase grows exponentially and the platform works toward improving the new user experience, more transparency around moderation will soon become a necessity as VRChat goes mainstream.
Faxmashine
Sometimes I wonder whether VRChat could be doing a better job of protecting its users, but I honestly can't remember their last update on the topic. (It's been at least a year?)
I'm curious what steps they have planned to make VRChat a safer and more trustworthy platform.