I've narrowed the problem down to this simple GLSL fragment shader:
#version 460 core
uniform int zeroUniform; // always set to zero; only here so the expression isn't optimized away
out int c;

void main() {
    int a = 8660165;
    int b = 6;
    c = (a + zeroUniform) / b;
}
When I inspect this shader with RenderDoc, it says that c is 1443361! But it should be 1443360, since 6 × 1443360 = 8660160 with remainder 5, and integer division truncates. What the hell is happening? In hex, 8660165 is 0x8424C5, so there's a whole unused byte before the sign bit could affect the calculation. Am I missing something, or is this a GPU bug?
This is OpenGL 4.6 core profile, tested on an AMD RX 5700 XT. I've also tried using uint instead of int, which gives the correct 1443360.
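Roughly, the uint variant is just a mechanical type swap of the same shader (same names, literals suffixed with u):

#version 460 core
uniform uint zeroUniform; // still always set to zero
out uint c;

void main() {
    uint a = 8660165u;
    uint b = 6u;
    c = (a + zeroUniform) / b; // RenderDoc reports the expected 1443360 here
}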
To be clear, the whole expression is int = (int + int) / int; there is no floating-point number involved anywhere, and when implemented correctly there should never be any float -> int conversion in this calculation.
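That said, 1443361 is suspiciously close to what you would get if the quotient were computed in floating point and then rounded to nearest instead of truncated. This is only my guess at what the compiler might be emitting internally, assuming an IEEE round-to-nearest divide:

c = int(round(float(a + zeroUniform) / float(b))); // 8660165.0 / 6.0 == 1443360.875 in float32, which rounds to 1443361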