
I am doing calculations in a WebGL2 fragment shader, and I've noticed that they are not as precise as the same calculations in C++. I know that a high-precision float is 32 bits wide both in the fragment shader and in C++.

I am trying to compute 1.0000001^(10000000) and get around 2.8 in C++ but around 3.2 in the shader. Do you know the reason that the fragment shader calculations differ from the same calculations in C++?

C++ code:

#include <iostream>
int main()
{
  const float NEAR_ONE = 1.0000001;
  float result = NEAR_ONE;

  for (int i = 0; i < 10000000; i++)
  {
    result = result * NEAR_ONE;
  }

  std::cout << result << std::endl; // result is 2.88419
}

Fragment shader code:

#version 300 es
precision highp float;
out vec4 color;
void main()
{
  const float NEAR_ONE = 1.0000001;
  float result = NEAR_ONE;

  for (int i = 0; i < 10000000; i++)
  {
    result = result * NEAR_ONE;
  }    

  if ((result > 3.2) && (result < 3.3))
  {
    // The screen is colored by red and this is how we know 
    // that the value of result is in between 3.2 and 3.3
    color = vec4(1.0, 0.0, 0.0, 1.0); // Red
  }
  else
  {
     // We never come here. 
     color = vec4(0.0, 0.0, 0.0, 1.0); // Black
  }
}

Update: Here one can find the HTML file with the full code for the WebGL2 example.

  • Why don't you just use e directly instead of computing it in such a precision-dependent way? Commented Dec 12, 2019 at 9:19
  • Here is an artificial example to demonstrate that the calculations are not precise.
    – David
    Commented Dec 12, 2019 at 9:21
  • You added the ieee-754 tag, but are you sure that your GPU hardware is compliant with that standard?
    – Bob__
    Commented Dec 12, 2019 at 9:23
  • Actually, rounding mode alone doesn't explain it: godbolt.org/z/eXY_FP It does lead to different results, but none of them near 3.2. Commented Dec 12, 2019 at 9:26
  • @David: No, it should not. In const float NEAR_ONE = 1.0000001, the source text 1.0000001 is rounded during conversion to 32-bit floating-point to 1.00000011920928955078125. The program then attempts to compute (1.00000011920928955078125)**1e7, not (1+1e-7)**1e7. (A sketch illustrating this follows the comments.) Commented Dec 12, 2019 at 12:46
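A minimal C++ sketch illustrating that last comment (an editorial addition, not part of the original thread): it prints the value the literal actually rounds to, then raises that rounded value to the 10,000,000th power in double precision.

#include <cmath>
#include <cstdio>

int main()
{
  // The decimal literal 1.0000001 has no exact binary representation;
  // as a 32-bit float it rounds to 1.00000011920928955078125.
  const float NEAR_ONE = 1.0000001f;
  std::printf("%.25f\n", NEAR_ONE);

  // Raising the rounded value to 1e7 in double precision gives ~3.29397,
  // i.e. neither e ~ 2.71828 nor the float-loop result ~2.88419.
  std::printf("%f\n", std::pow((double)NEAR_ONE, 10000000.0));
}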

2 Answers

OpenGL ES 3.0, on which WebGL2 is based, does not require floating point on the GPU to work the same way it does in C++.

From the spec:

2.1.1 Floating-Point Computation

The GL must perform a number of floating-point operations during the course of its operation. In some cases, the representation and/or precision of such operations is defined or limited; by the OpenGL ES Shading Language Specification for operations in shaders, and in some cases implicitly limited by the specified format of vertex, texture, or renderbuffer data consumed by the GL. Otherwise, the representation of such floating-point numbers, and the details of how operations on them are performed, is not specified. We require simply that numbers' floating-point parts contain enough bits and that their exponent fields are large enough so that individual results of floating-point operations are accurate to about 1 part in 10^5. The maximum representable magnitude for all floating-point values must be at least 2^32. x · 0 = 0 · x = 0 for any non-infinite and non-NaN x. 1 · x = x · 1 = x. x + 0 = 0 + x = x. 0^0 = 1. (Occasionally further requirements will be specified.) Most single-precision floating-point formats meet these requirements.
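To get a feel for how loose "accurate to about 1 part in 10^5" is over ten million operations, here is a minimal C++ sketch (an editorial addition, not from the spec) computing the worst-case accumulated relative error factor:

#include <cmath>
#include <cstdio>

int main()
{
  const double eps = 1e-5;  // allowed relative error per operation
  const double ops = 1e7;   // multiplications in the loop in question

  // If every multiplication were off by a factor of (1 + eps), the
  // accumulated error factor would be (1 + eps)^ops = ~e^100 ~ 2.7e43.
  std::printf("%g\n", std::pow(1.0 + eps, ops));
}

In other words, the spec's tolerance places no useful bound on this particular loop; all of the results below are conforming.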

Just for fun, let's do it and print the results. This uses WebGL1 so it can be tested on more devices.

function main() {
  const gl = document.createElement('canvas').getContext('webgl');
  const ext = gl.getExtension('OES_texture_float');
  if (!ext) { return alert('need OES_texture_float'); }
  // not required - long story
  gl.getExtension('WEBGL_color_buffer_float');

  const fbi = twgl.createFramebufferInfo(gl, [
    { type: gl.FLOAT, minMag: gl.NEAREST, wrap: gl.CLAMP_TO_EDGE, }
  ], 1, 1);
  
  const vs = `
  void main() {
    gl_Position = vec4(0, 0, 0, 1);
    gl_PointSize = 1.0;
  }
  `;
  const fs = `
  precision highp float;
  void main() {
    const float NEAR_ONE = 1.0000001;
    float result = NEAR_ONE;

    for (int i = 0; i < 10000000; i++) {
      result = result * NEAR_ONE;
    } 
    
    gl_FragColor = vec4(result);
  }
  `;
  
  const prg = twgl.createProgram(gl, [vs, fs]);
  gl.useProgram(prg);
  gl.viewport(0, 0, 1, 1);
  gl.drawArrays(gl.POINTS, 0, 1);
  const values = new Float32Array(4);
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.FLOAT, values);
  console.log(values[0]);
}

main();
 
<script src="https://twgljs.org/dist/4.x/twgl.js"></script>

My results:

Intel Iris Pro          : 2.884186029434204
NVidia GT 750 M         : 3.293879985809326
NVidia GeForce GTX 1060 : 3.2939157485961914
Intel UHD Graphics 617  : 3.292219638824464 
  • This merely says there can be something in WebGL2 that behaves differently. It does not tell us what it is. Commented Dec 13, 2019 at 4:08
  • It specifically says it's not specified, which is the point: "the representation of such floating-point numbers, and the details of how operations on them are performed, is not specified." As long as the implementation meets the precision mentioned above, how it does it is up to the GPU. Different GPUs use different methods since they are competing on speed and price.
    – gman
    Commented Dec 13, 2019 at 5:02
  • Sure, the GPU may be conforming to the specification. That may be your point. But it still leaves us uninformed about what the GPU actually is doing. The fact that a specification does not specify particular behavior does not mean we cannot inquire further and seek understanding of what is happening. It would be useful to know what precision the GPU is using and whether the observed results arise out of that precision or some other cause. The results you added suggest the implementations producing results around 3.29 are using 64-bit floating point. Commented Dec 13, 2019 at 8:23
  • @gman Thank you for your answer. In this spec khronos.org/registry/OpenGL/specs/es/3.0/… , chapter 4.5.1 has a table stating that the +, - and * operations must be "correctly rounded". Does that mean that for these operations the precision should follow the IEEE-754 standard?
    – David
    Commented Dec 13, 2019 at 9:02
  • @EricPostpischil This should give the precision, but I didn't find anything about the rounding method (also implementation-defined), which I still suspect should have an impact on the final result.
    – Bob__
    Commented Dec 13, 2019 at 9:02

The difference is precision. In fact, if you compile the C++ snippet using double (64-bit floating point, with a 53-bit significand) instead of float (32-bit floating point, with a 24-bit significand), you obtain 3.29397 as the result, which is the result you get using the shader.
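As a quick check (an editorial sketch, not part of the original answer), running the loop in both precisions reproduces both observed values. Note that the constant must keep its float-rounded value, since that is what the shader's literal becomes:

#include <cstdio>

int main()
{
  const float  NEAR_ONE_F = 1.0000001f;
  const double NEAR_ONE_D = 1.0000001f;  // the float-rounded value, widened

  float  f = NEAR_ONE_F;
  double d = NEAR_ONE_D;
  for (int i = 0; i < 10000000; i++)
  {
    f *= NEAR_ONE_F;  // rounds to a 24-bit significand each step
    d *= NEAR_ONE_D;  // rounds to a 53-bit significand each step
  }

  std::printf("float:  %f\n", f);  // ~2.88419, the C++ result in the question
  std::printf("double: %f\n", d);  // ~3.29397, matching the GPUs near 3.2939
}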

  • Can you show, from documentation or trial, that the WebGL2 implementation is using more precision? Commented Dec 13, 2019 at 4:07
  • The evidence that WebGL2 is using more precision in this case is that the result of the original experiment matches the result when using double precision. In the specification, highp implies using at least 32 bits, but it does not forbid using more than that. What is happening here is that the implementation is using 64 bits, thus more than the 32 bits of the C++ float.
    – luisp
    Commented Dec 13, 2019 at 12:51
