I’ve run into inconsistent behaviour in depth testing and I don’t know if this behaviour is specified anywhere.
My question is: when using EQUAL as the depth-test function, what is the expected behaviour when the clear depth value and a primitive's depth value are both exactly 1.0? Is there a defined behaviour? Or is depth testing not specifically intended for comparisons with "clear/empty" areas of the depth buffer?
I'm seeing the depth test pass on Mac (Intel and M3) and Mali-G52, but fail on Snapdragon 685.
To state the question more concretely: if you create a WebGL program with the following configuration:
gl.enable(gl.DEPTH_TEST);
gl.clearDepth(1.0);
gl.depthFunc(gl.EQUAL);
...and then in your vertex shader specify:
gl_Position.z = 1.0;
...would you expect the program to render anything?
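For completeness, here is a minimal sketch of the full setup (canvas creation, shader compilation and the position buffer are the usual boilerplate; prog stands in for the linked program):

// Vertex shader: force clip-space z to 1.0 (w is also 1.0, so NDC z stays 1.0).
const vsSource = `
  attribute vec2 a_position;
  void main() {
    gl_Position = vec4(a_position, 1.0, 1.0);
  }`;

// Fragment shader: flat colour, no depth manipulation.
const fsSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }`;

gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.EQUAL);
gl.clearDepth(1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

gl.useProgram(prog);               // prog is built from vsSource / fsSource
gl.drawArrays(gl.TRIANGLES, 0, 3); // a single triangle covering part of the screen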
On my M3 Mac, Intel Mac and a Mali GPU the primitives are rendered, but on two Snapdragons they are not. On further investigation with a floating-point calculator, I determined that changing the configuration above to gl.clearDepth(0.9999828338623046875) does work on Snapdragon, suggesting that on these devices clip-space depth is scaled (but not clamped) to slightly less than 1.0 during depth testing. I've confirmed in the fragment shader, after depth testing, that the fragment depth is still 1.0.
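The check was along these lines: a fragment shader that visualises gl_FragCoord.z, with the depth function temporarily switched to ALWAYS so that output is visible even where EQUAL fails (the exact colours are just for illustration):

// gl_FragCoord.z is the window-space depth the depth test compares
// against the stored value.
precision mediump float;
void main() {
  // Red where the incoming depth is exactly 1.0, green otherwise.
  gl_FragColor = (gl_FragCoord.z == 1.0) ? vec4(1.0, 0.0, 0.0, 1.0)
                                         : vec4(0.0, 1.0, 0.0, 1.0);
}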
That odd value (0.9999828338623046875) requires 18 bits of mantissa, which is more precision than depth16unorm or a half-precision float offers (admittedly, I have no idea how double-precision JS numbers are converted to normalized integers in WebGL).
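For what it's worth, if the conversion follows the usual unsigned-normalized convention of round(x * (2^b - 1)) (an assumption on my part as far as clearDepth is concerned), the value lands a few hundred steps below the maximum of a 24-bit buffer:

// Hypothetical sketch of a float -> b-bit unsigned-normalized conversion,
// i.e. round(x * (2^b - 1)); whether drivers do exactly this for clearDepth
// is an assumption.
function toUnormInt(x, bits) {
  const maxInt = 2 ** bits - 1;
  return Math.round(x * maxInt);
}

toUnormInt(1.0, 24);                   // 16777215 (all ones)
toUnormInt(0.9999828338623046875, 24); // 16776927, i.e. 288 steps below the maximum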
Is this a bug? Or undefined behaviour? After looking through the spec I can't see anything helpful.
1 Answer
The GLSL ES 1.0 spec (§4.5.2) only specifies minimum ranges for a given precision, not the underlying representation; target-specific casting/rounding happens at the end, when the data is written to the framebuffer, so I'd say you're in undefined-behavior territory. You could try to keep everything on the GPU by rendering a screenspace quad with ALWAYS as the depth function, instead of using clear and clearDepth (note that the depth buffer is only written while the depth test is enabled, so simply disabling the test won't fill it).
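A rough sketch of that approach, assuming a helper program fillProg whose vertex shader forces gl_Position.z to 1.0 for a fullscreen triangle strip (the names are placeholders, not real API):

// Fill the depth buffer by drawing instead of clearing it, so later EQUAL tests
// compare GPU-generated depth against GPU-generated depth.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.ALWAYS);                  // every fragment passes, so its depth is written
gl.depthMask(true);
gl.colorMask(false, false, false, false); // leave the color buffer alone

gl.useProgram(fillProg);                  // placeholder: outputs z = 1.0 across the screen
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);   // fullscreen quad

gl.colorMask(true, true, true, true);
gl.depthFunc(gl.EQUAL);                   // restore the comparison the scene uses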