For a shader author, writing shaders that require the depth buffer is not a clean task: it demands unnecessary extra steps and is quite bad for avatar performance.
Why is this important?
A number of shaders I write make use of the depth buffer for one reason or another. Two current examples are a shader that draws a surface only where the skybox is behind it, and a shader that creates a fog-like effect based on distance to the viewer.
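For illustration, the depth-fog effect above boils down to sampling Unity's camera depth texture in the fragment shader. A minimal sketch follows; the vertex-to-fragment struct, screenPos member, and the properties _FogDistance, _BaseColor, and _FogColor are illustrative names, not part of any existing shader:

```
// Only works if something has populated the depth texture -- which, on
// avatars, currently requires the directional-light workaround.
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

fixed4 frag (v2f i) : SV_Target
{
    // Sample the scene depth behind this pixel and convert it to
    // linear eye-space depth.
    float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(
        _CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos));
    float sceneDepth = LinearEyeDepth(rawDepth);

    // Fade a fog colour in with distance to the viewer
    // (the exact fog math here is illustrative).
    float fog = saturate(sceneDepth / _FogDistance);
    return lerp(_BaseColor, _FogColor, fog);
}
```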
Currently, it is possible to force the depth buffer on, but this requires adding a directional light to the avatar, which has some serious drawbacks. The takeaway is that to access the depth buffer, I must make the experience worse for every player around me. As such, this is a terrible solution, and I honestly feel terrible for using it.
Why this alternative? What is it?
The solution I propose should be relatively simple: a boolean tag named something like VRCNeedsDepth, declared in the shader's Tags block (alongside other game-specific tags like VRCFallback). Any shader loaded with this tag set would enable the depth texture flag on the main camera.
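In ShaderLab terms, the tag would sit in the shader's existing Tags block. A sketch of what I have in mind (VRCNeedsDepth is the proposed name, not an existing tag, and the shader name and other tag values are just examples):

```
Shader "Example/DepthFog"
{
    SubShader
    {
        Tags
        {
            "RenderType" = "Transparent"
            "Queue" = "Transparent"
            // Existing game-specific tag:
            "VRCFallback" = "Hidden"
            // Proposed: tells the client this shader needs the
            // camera depth texture to be populated.
            "VRCNeedsDepth" = "True"
        }
        // ... passes ...
    }
}
```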
I think this solution is appealing because enabling the depth texture is already possible through the directional light trick; this proposal simply lets authors drop that harmful workaround, so their avatar stats actually reflect the avatar itself (players shouldn't need to turn on Lights in their Safety settings to see some of my shaders; they should just need to turn on Shaders), while narrowing the avatar's performance footprint in the process. It has only benefits, even in the worst-case scenario where the shader in question runs in a forward pass, where the depth buffer must be computed as its own additional render pass: you are still trimming out the need to render lighting on top of that.