I am doing some calculations in a WebGL2 fragment shader, and I've noticed that they are not as precise as the same calculations in C++. I know that a high precision (highp) float is 32 bits, both in the fragment shader and in C++.
I am trying to compute 1.0000001^10000000 and get around 2.8 in C++ and around 3.2 in the shader. What is the reason the fragment shader calculations are not as precise as the same calculations in C++?
C++ code:
#include <iostream>

int main()
{
    const float NEAR_ONE = 1.0000001;
    float result = NEAR_ONE;

    for (int i = 0; i < 10000000; i++)
    {
        result = result * NEAR_ONE;
    }

    std::cout << result << std::endl; // result is 2.88419
}
Fragment shader code:
#version 300 es
precision highp float;

out vec4 color;

void main()
{
    const float NEAR_ONE = 1.0000001;
    float result = NEAR_ONE;

    for (int i = 0; i < 10000000; i++)
    {
        result = result * NEAR_ONE;
    }

    if ((result > 3.2) && (result < 3.3))
    {
        // The screen is colored red, and this is how we know
        // that the value of result is between 3.2 and 3.3.
        color = vec4(1.0, 0.0, 0.0, 1.0); // Red
    }
    else
    {
        // We never get here.
        color = vec4(0.0, 0.0, 0.0, 1.0); // Black
    }
}
Update: Here one can find the HTML file with the full code for the WebGL2 example.
In const float NEAR_ONE = 1.0000001, the source text 1.0000001 is rounded during conversion to 32-bit floating-point to 1.00000011920928955078125. The program therefore attempts to compute (1.00000011920928955078125)^10000000, not (1 + 10^-7)^10000000.
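To check both parts of that claim, here is a small C++ sketch of my own (not part of the original code): it prints the value the literal actually stores, and then uses std::pow in double precision purely as a reference, an assumption not taken from the question, to see what power the program is really approximating.

#include <cmath>
#include <iomanip>
#include <iostream>

int main()
{
    const float NEAR_ONE = 1.0000001f;

    // Print enough digits to expose the value actually stored in the 32-bit float.
    std::cout << std::setprecision(30) << static_cast<double>(NEAR_ONE) << std::endl;
    // Prints 1.00000011920928955078125, the nearest binary32 value to 1.0000001.

    // Reference value of (stored constant)^10000000, computed in double precision.
    std::cout << std::pow(static_cast<double>(NEAR_ONE), 10000000.0) << std::endl;
    // Prints roughly 3.29.
}

The first line confirms the rounded constant described above; the second shows that raising that rounded constant to the 10000000th power gives a value between 3.2 and 3.3, which is the range the shader's branch detects.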