
I am tearing my hair out over this problem! I have a simple vertex and fragment shader pair that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.

The problem starts when I run the program on my desktop, which has a much newer graphics card (Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader: if I ignore the texture and simply pass the input color straight out again, everything works as expected.

When I include the texture in the shader calculations, as you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of the camera position.

Example screenshot attached

You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.

Vertex shader, just to be safe:

#version 150 core
in vec3 position;
in vec4 color;

out vec4 Color;

uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;

void main() {
   Color = color;
   gl_Position = proj * view * vec4(position, 1.0);
   gl_PointSize = pointSize;
}

And the fragment shader I suspect to be the issue, though I really can't see why:

#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;

void main() {
   vec4 t = texture(tex, gl_PointCoord);

   outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}

Untextured particles use the same vertex shader, but the following fragment shader:

#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
   outColor = Color;
}

The main program has a loop that processes SFML window events and calls two functions, draw and update. update doesn't touch GL at any point; draw looks like this (a sketch of the loop itself follows the draw code below):

void draw(sf::Window* window)
{
    glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    sf::Texture::bind(&particleTexture);

    for (ParticleEmitter* emitter : emitters)
    {
        emitter->useShader();
        camera.applyMatrix(shaderProgram, window);
        emitter->draw();
    }

}
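
The loop itself is the standard SFML pattern, roughly like this (a sketch, since the exact code isn't shown above; the window parameters are made up):

sf::Window window(sf::VideoMode(800, 600), "Particles");

while (window.isOpen())
{
    // Drain pending window events
    sf::Event event;
    while (window.pollEvent(event))
    {
        if (event.type == sf::Event::Closed)
            window.close();
    }

    update();          // no GL calls in here (exact signature assumed)
    draw(&window);     // the draw() shown above
    window.display();  // swap buffers
}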

emitter->useShader() is just a call to glUseProgram() with a GLuint handle to a shader program that is stored in the emitter object on creation.
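
In other words, something like this (a sketch; the member name is an assumption):

void ParticleEmitter::useShader()
{
    // 'program' is the GLuint handle given to the emitter on creation
    glUseProgram(program);
}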

camera.applyMatrix():

GLint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));

emitter->draw() in its entirety:

    // Create and bind a vertex array object (note: a fresh VAO is generated on every draw call)
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // Build a new vertex buffer object
    int vboSize = particles.size() * vboEntriesPerParticle;
    std::vector<float> vertices;
    vertices.reserve(vboSize);

    for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
    {
        Particle* particle = particles[particleIndex];
        particle->enterVertexInfo(&vertices);
    }

    // Bind this emitter's Vertex Buffer
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Send vertex data to GPU
    glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);

    GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(positionAttribute,
        3,
        GL_FLOAT,
        GL_FALSE,
        7 * sizeof(float),
        0);

    GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
    glEnableVertexAttribArray(colorAttribute);
    glVertexAttribPointer(colorAttribute,
        4,
        GL_FLOAT,
        GL_FALSE,
        7 * sizeof(float),
        (void*)(3 * sizeof(float)));

    GLint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
    glUniform1fv(sizePointer, 1, &pointSize);

    // Draw
    glDrawArrays(GL_POINTS, 0, particles.size());

And finally, particle->enterVertexInfo():

// Seven floats per particle: position (x, y, z) followed by color (r, g, b, a),
// matching the 7 * sizeof(float) stride used above.
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);

I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.

All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles (see the sketch below), so that much works as expected. When the particles draw in the center of the screen, the texture itself is drawn correctly, albeit in the wrong place, so it is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
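
For example, a debug fragment shader along these lines shades each sprite by its point coordinate (a reconstruction, not the exact shader I used):

#version 150 core
out vec4 outColor;

void main() {
    // Black at the sprite's top-left corner, yellow at the bottom-right
    outColor = vec4(gl_PointCoord, 0.0, 1.0);
}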

This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!

Edit: Included a ton of code

  • My bet would still be on undefined behavior on the client side. Your changes in the fragment shader can also result in changes in the uniform locations, and maybe even the attribute locations. You should post the code for the uniform and attribute setup as well.
    – derhass
    Commented Apr 3, 2016 at 17:07
  • I think I've included all the relevant code to do with attributes and uniforms. While I've been playing with the shaders, I haven't touched any of their input or output variables, just what's included in the calculations, in an attempt not to introduce new bugs. I'll admit that I don't remember exactly what all the arguments in the code I posted do, so I could well have missed something important. Commented Apr 3, 2016 at 17:24
  • That code looks OK so far. However, you say you are using different shaders for the untextured and the textured variants, which means you could still mess up the attribute and uniform locations between both programs.
    – derhass
    Commented Apr 3, 2016 at 17:30
  • I've added the untextured fragment shader too. In terms of attributes and uniforms, they differ only by the texture, which is set using SFML's sf::Texture::bind function at the beginning of each draw call, as there's only one texture. The attributes are located using the same code already posted; each emitter is just an instance of a class, passed a GLuint that refers to the shader it should use. Each emitter then calls glUseProgram() before it does any drawing. Commented Apr 3, 2016 at 18:18
  • Well, the shader code doesn't help. The question is how and when your attributes and uniforms are set up with respect to switching the program.
    – derhass
    Commented Apr 3, 2016 at 18:30

1 Answer


As it turned out in the comments, the shaderProgram variable that was used for setting the camera-related uniforms did not depend on the actual program in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.

Uniform location assignment is entirely implementation-specific. NVIDIA, for example, tends to assign locations in alphabetical order of the uniform names, so view's location would change depending on whether tex is actually present (and actively used) or not. If the other implementation just assigns them in the order they appear in the code, or by some other scheme, things might work by accident.
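
The fix is to query the uniform locations from the program that is actually bound, for example by passing each emitter's own program handle into applyMatrix. A minimal sketch, assuming a hypothetical getProgram() accessor for the GLuint stored in the emitter:

for (ParticleEmitter* emitter : emitters)
{
    emitter->useShader();                              // binds this emitter's program
    camera.applyMatrix(emitter->getProgram(), window); // locations now match the bound program
    emitter->draw();
}

Alternatively, query and cache the locations once per program right after linking, instead of looking them up on every draw.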
