
I have the following setup:

  • S - the scene that includes terrain, some houses and other man-made objects
  • T - a set of tree configurations that I would like to iterate through and place in S.

Every iteration results in a configuration C(i) = S + T(i) that is rendered onto the screen.

Since S is static and does not change between iterations, I decided to use glBufferSubData() with one large buffer split into two regions: the first region stores S, while the second region contains the current T(i). With every new iteration the big buffer therefore holds a new C(i).

This lets me pre-allocate the big buffer, except for one issue I am facing: the T(i) vary slightly in size. To avoid having to grow the buffer later (e.g. through buffer orphaning, a.k.a. buffer re-specification), I would like to pre-compute the largest T(i), call it T_max, and size the big buffer as C_max = S + T_max.
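
Roughly, what I have in mind looks like this (a sketch only; sceneBytes, maxTreeBytes, sceneData, treeBytes and treeData are placeholders for my real sizes and pointers):

    // Rough sketch of the intended single-buffer layout.
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Allocate C_max = S + T_max once, without uploading anything yet.
    glBufferData(GL_ARRAY_BUFFER, sceneBytes + maxTreeBytes, nullptr, GL_DYNAMIC_DRAW);

    // Upload the static scene S into the first region once.
    glBufferSubData(GL_ARRAY_BUFFER, 0, sceneBytes, sceneData);

    // Every iteration: overwrite the second region with the current T(i).
    glBufferSubData(GL_ARRAY_BUFFER, sceneBytes, treeBytes, treeData);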

I know that glBufferSubData() allows a partial update of the data. The problem is that after the second region is updated with some T(i), old data from a previous T(i-j) may remain if T(i) is smaller than T(i-j).

In this case I will have C(i,j) = S + T(i) + partial(T(i-j)).

I was thinking that in this case I could calculate the offset of the leftover data from T(i-j) and issue a second glBufferSubData() that sets that chunk of memory to NULL or actual zeros (see the bullet points below). The buffer would then look like this:

[                 S               ][         T(i)       ][   NULLs/0s   ]
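
In code, that zero-fill step might look roughly like the following sketch (placeholder names as before; on GL 4.3+ glClearBufferSubData could do the same without a client-side zero array):

    #include <vector>

    // Zero out the stale tail left over from a larger T(i-j).
    GLsizeiptr staleBytes = maxTreeBytes - treeBytes;
    if (staleBytes > 0) {
        std::vector<unsigned char> zeros(staleBytes, 0);
        glBufferSubData(GL_ARRAY_BUFFER,
                        sceneBytes + treeBytes,   // start of the stale region
                        staleBytes,
                        zeros.data());

        // GL 4.3+ alternative: let the driver fill the region with zeros.
        // glClearBufferSubData(GL_ARRAY_BUFFER, GL_R8UI,
        //                      sceneBytes + treeBytes, staleBytes,
        //                      GL_RED_INTEGER, GL_UNSIGNED_BYTE, nullptr);
    }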

I read that:

  • For VBOs this is not really an issue.
  • For IBOs this may lead to:
    • the index being ignored and processing continuing (the most likely scenario),
    • the primitive being drawn being restarted (potentially causing rendering artifacts),
    • in rare cases, a program crash.
  • For UBOs (which I am not using here, at least not yet) this may lead to:
    • numeric data types (float, int, etc.) most likely being interpreted as zeros,
    • types like vec3 or mat4 (vectors or matrices) being treated as all zeros, leading to unexpected behavior in the shader.

I am curious what exactly happens under the hood and how to avoid crashes. My hope is that this is not yet another "it depends on the driver implementation" scenario. :D

  • You know that a VAO can pull from multiple VBOs? Meaning: instead of one large buffer, use two buffers, one for S and the other for T. For each new iteration, use glBufferData instead of glBufferSubData (always a fresh buffer for T of the desired size); a sketch of this approach follows the comments. Recommended: khronos.org/opengl/wiki/Buffer_Object_Streaming – Erdal Küçük
  • Thanks, @ErdalKüçük. Are you referring to the buffer re-specification described in the article you mentioned?
  • Yes, for example. The issue is not partitioning a large buffer; it is that the intended usage pattern has its own problems. For the case where one has to constantly update data (i.e. stream it), there are established solutions. It would be possible to invalidate a specific region, but besides the size management, how would you know that the data in this region has already been consumed (think of the asynchronous nature of the GPU)? – Erdal Küçük
  • All of the problems you describe only apply if you actually read from the zeroed data. But why would that happen in your case? If you have fewer elements in the IBO, then you will certainly also render fewer primitives. – BDL
  • I think you are still on the wrong track. glVertexAttribPointer only specifies the format of the buffer (size and type of one element), not the number of elements in the buffer. As long as you adjust the count parameter of the draw command (which you have to do anyway), there shouldn't be a problem. – BDL
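
For reference, a minimal sketch of the two-buffer approach suggested in the comments: S lives in its own statically filled VBO, T(i) is re-specified each iteration with glBufferData, and the draw count follows the actual amount of data. Names such as vboScene, vboTrees, vaoScene, vaoTrees, sceneBytes, treeBytes, sceneData, treeData and the vertex counts are placeholders, and the VAO attribute setup (glVertexAttribPointer etc.) is assumed to be done elsewhere.

    GLuint vboScene = 0, vboTrees = 0;
    glGenBuffers(1, &vboScene);
    glGenBuffers(1, &vboTrees);

    // Static scene S: uploaded once, never touched again.
    glBindBuffer(GL_ARRAY_BUFFER, vboScene);
    glBufferData(GL_ARRAY_BUFFER, sceneBytes, sceneData, GL_STATIC_DRAW);

    // Every iteration: re-specify T(i) with a fresh data store of exactly
    // the required size; the driver can orphan and recycle the old store.
    glBindBuffer(GL_ARRAY_BUFFER, vboTrees);
    glBufferData(GL_ARRAY_BUFFER, treeBytes, treeData, GL_STREAM_DRAW);

    // The count passed to the draw call, not the buffer size, decides how
    // much the GPU reads, so stale or zeroed bytes are never consumed.
    glBindVertexArray(vaoScene);                       // VAO sourcing vboScene
    glDrawArrays(GL_TRIANGLES, 0, sceneVertexCount);
    glBindVertexArray(vaoTrees);                       // VAO sourcing vboTrees
    glDrawArrays(GL_TRIANGLES, 0, treeVertexCount);    // count matches T(i)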
