
In the fancy new versions of OpenGL (3.0 and up), built-in vertex attributes like gl_Vertex are deprecated. The "new way" to render anything is to specify your own vertex attributes for position, color, etc., and then bind these custom attributes to buffers.

My question is this: how can one do this without closely coupling the rendering code and the shader? If I write a shader that uses "position" as the vertex position, the host code using the shader has to know that and pass vertex data as "position". If I want to use a different shader that was written to take vertex data in "vertex_pos", I have to either rewrite that shader first, or modify my host code to send vertex data as "vertex_pos" instead.
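
To make the coupling concrete, here is a minimal sketch of what I mean (the name "position" and the Vertex layout are just examples, and an extension loader is assumed for the GL 2.0+ entry points):

    #include <stddef.h>  /* offsetof */
    #include <GL/gl.h>   /* plus an extension loader for GL 2.0+ entry points */

    typedef struct { float pos[3]; float uv[2]; } Vertex;  /* hypothetical layout */

    /* 'program' is an already-linked GLSL program; a VBO is assumed
     * to be bound to GL_ARRAY_BUFFER. */
    void bind_position_attribute(GLuint program)
    {
        /* The host must know the exact name the shader author chose. */
        GLint loc = glGetAttribLocation(program, "position");
        if (loc == -1)
            return;  /* the shader calls it something else, e.g. "vertex_pos" */

        glEnableVertexAttribArray((GLuint)loc);
        glVertexAttribPointer((GLuint)loc, 3, GL_FLOAT, GL_FALSE,
                              sizeof(Vertex), (void *)offsetof(Vertex, pos));
    }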

Is there a set of best-practice names for standard vertex and fragment attributes that all shaders should be using? Or are there Balkanized engine-specific standards, such that a shader written for one engine cannot work on another without modification? Or are there no standards at all, such that, in general, every object needs its own custom rendering code to match its custom shader?

3 Answers


Just keep calling them by the old names. If you've got a core profile (i.e. no backwards compatibility), the reserved names of the older GLSL specs are freed up, i.e. no longer predeclared; redeclaring them yourself for binding vertex attributes seems to make them available again. In the compatibility profile those variable names are predeclared and bound automatically.

So it boils down to this: keeping the old names in the shaders is a convenience and seems to work with current GLSL compilers. If you want to be safe, use the preprocessor to rewrite the gl_-prefixed reserved names into a self-chosen prefix and bind that one.
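
For illustration, a minimal sketch of that preprocessor approach; the "my_" prefix and attribute index 0 are arbitrary choices, and it assumes the legacy source's own #version directive has been stripped, since the prelude supplies one:

    #include <GL/gl.h>

    /* Prepended to the legacy shader source so that every use of
     * gl_Vertex resolves to an ordinary generic attribute that we
     * declare ourselves. */
    static const char *prelude =
        "#version 150\n"
        "#define gl_Vertex my_Vertex\n"
        "in vec4 my_Vertex;\n";

    void compile_legacy_vertex_shader(GLuint shader, GLuint program,
                                      const char *legacy_source)
    {
        const char *sources[2] = { prelude, legacy_source };
        glShaderSource(shader, 2, sources, NULL);
        glCompileShader(shader);

        /* Bind the renamed attribute to a fixed index *before* the
         * program is linked. */
        glBindAttribLocation(program, 0, "my_Vertex");
    }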

  • Actually, that's not true. GLSL reserves any name that starts with "gl_". If a core compiler allows you to use "gl_Vertex", then it's not conforming with the specification. The 1.50 spec clarifies the redeclaration syntax as being valid only for changing properties of the declared type, so it should not allow you to redeclare them either.
    Commented Jun 11, 2011 at 21:54
  • @Nicol: Now here begins the language lawyering: some people might see deprecated variable names as predefined identifiers from the compatibility profile, and using them in a 1.50 GLSL program as a property-changing redeclaration; IMHO the spec is not clear about this, though §3.3 on #version seems to indicate that core means "no compatibility names available". If one wants to be safe, he can use the preprocessor (#define gl_Vertex glVertex) and use that name in the core profile.
    – datenwolf
    Commented Jun 11, 2011 at 22:18

First, to answer your question: I am unaware of any such standard naming convention.

But... it is more complicated than attribute names. Hidden beneath the question is the notion of attribute semantics. Each of the input attributes has some form of semantics that each shader expects.

And what I learned from the fixed gl_ names is that they under-specify their semantics.

  • Which space is gl_Vertex in? (Answer: it depends entirely on what the host passed in. The engine does not have to pass in something that is in local space; e.g. it may already have transformed the vertices to world space because it needed them there for other reasons. See the sketch after this list.)
  • Is gl_TexCoord1 a texture coordinate or, really, a tangent? What about its range? Is it encoded?
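
To make the first point concrete, here is a hypothetical sketch (the uniform name "transform" and both setup functions are invented for illustration): two engines drive the very same shader input under different conventions, and nothing in the shader text can tell them apart.

    #include <GL/gl.h>

    /* Engine A: vertices are object-space; the shader receives a full
     * model-view-projection matrix and does all the transforming. */
    void engine_a_setup(GLuint program, const float mvp[16])
    {
        glUniformMatrix4fv(glGetUniformLocation(program, "transform"),
                           1, GL_FALSE, mvp);
    }

    /* Engine B: vertices were already transformed to world space on the
     * CPU (say, for culling), so only view-projection is passed. The
     * same shader now renders incorrectly unless it was written for this. */
    void engine_b_setup(GLuint program, const float view_proj[16])
    {
        glUniformMatrix4fv(glGetUniformLocation(program, "transform"),
                           1, GL_FALSE, view_proj);
    }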

It's not clear that a nomenclature addressing all of those issues could even be found, yet one would be required to make different engines compatible.

More problematic, it's not even obvious that a specific engine (or specific asset) would have the capability to provide the specific attributes required by a shader coming from a different engine. What then?

Those are all reasons why we end up with balkanized shader environments.

  • The OpenGL spec is quite clear about what's to be passed through the well-known uniforms and attributes. Things like abusing gl_TexCoord1 to pass a tangent have their origin in the times when only a limited number of vertex attributes were available and data had to be passed by other means. For input, OpenGL 4 doesn't specify any special names at all; it's all freely nameable vertex attributes, though of course the output names must be well defined. Heck, even the standard matrices were removed; now all matrices have to be supplied as uniforms.
    – datenwolf
    Commented Jan 12, 2011 at 10:52
  • @datenwolf: my point is that as soon as the GL provides shaders, the semantics of the inputs become the host's responsibility, not GL's. That's true even when you use the gl_* variables, and it gives you no guarantee that your shader will work on another engine; e.g. I've seen shaders that don't even consume a Position, even though it's required to be provided.
    – Bahbar
    Commented Jan 12, 2011 at 12:10
  • This loss of semantics is why OpenGL-3/4 core no longer has those predefined input variables. It sports only generic vertex attributes, and you're free to call them whatever you like. Shaders are tied to their underlying render engine anyway, since any advanced rendering technique depends on several render passes, which are controlled by the engine, and the shaders must be written to fit those passes.
    – datenwolf
    Commented Jan 12, 2011 at 16:13
  • @datenwolf: sounds like we agree then.
    – Bahbar
    Commented Jan 12, 2011 at 17:34

See my question for a list of possible implementations of attribute/uniform semantics. I'm afraid that even with newer OpenGL versions this issue is not going to be resolved, and you are pretty much left to your own devices to define a contract between your shaders and your code.
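
For illustration, one shape such a contract can take (a sketch under assumed conventions, not any kind of standard) is to let attribute locations, rather than names, carry the semantics, so neither side needs to know the other's naming:

    #include <GL/gl.h>
    #include <stddef.h>  /* offsetof */

    /* Hypothetical contract: each semantic owns a fixed location.
     * Shaders state the same contract on their side, e.g. with GLSL
     * 3.30+ layout qualifiers:
     *
     *     layout(location = 0) in vec3 any_name_the_author_likes;
     *
     * so the host never needs to know the attribute's name. */
    enum {
        ATTRIB_POSITION = 0,
        ATTRIB_NORMAL   = 1,
        ATTRIB_TEXCOORD = 2
    };

    typedef struct { float pos[3]; float nrm[3]; float uv[2]; } Vertex;

    /* Assumes a VBO with this layout is bound to GL_ARRAY_BUFFER. */
    void setup_vertex_layout(void)
    {
        glEnableVertexAttribArray(ATTRIB_POSITION);
        glVertexAttribPointer(ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE,
                              sizeof(Vertex), (void *)offsetof(Vertex, pos));
        glEnableVertexAttribArray(ATTRIB_NORMAL);
        glVertexAttribPointer(ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE,
                              sizeof(Vertex), (void *)offsetof(Vertex, nrm));
        glEnableVertexAttribArray(ATTRIB_TEXCOORD);
        glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE,
                              sizeof(Vertex), (void *)offsetof(Vertex, uv));
    }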
