Some months ago I wrote a minimal OpenGL 2.5D game engine with pre-rendered backgrounds (like the old Resident Evil, Alone in the Dark and Final Fantasy games). Depth is managed through a grayscale image rendered in Blender with the same camera settings as the pre-rendered background, using these compositor settings:

[Screenshot: compositor node setup used to render the depth image]

In practice I take the depth value, subtract the NEAR distance, divide by FAR − NEAR, and render with the Raw color space setting. This gives me a linear depth buffer snapshot, but when I upload it to OpenGL I have to make it non-linear again in the fragment shader, like this:

in vec2 uvTex;

uniform mat4 projectionMatrix;
uniform sampler2D textureSampler;
uniform float zNear;
uniform float zFar;

void main()
{
   // Linear depth stored in the image: (eyeDistance - zNear) / (zFar - zNear)
   float imageDepth = texture2D(textureSampler, uvTex).b;

   // Reconstruct eye-space z (negative along the view direction in OpenGL)
   imageDepth = -imageDepth;
   vec4 temp = vec4(0.0, 0.0, imageDepth * (zFar - zNear) - zNear, 1.0);

   // Project to clip space and apply the perspective divide
   vec4 clipSpace = projectionMatrix * temp;
   clipSpace.z /= clipSpace.w;

   // Map NDC z from [-1, 1] to the [0, 1] window-space depth range
   clipSpace.z = 0.5 * (clipSpace.z + 1.0);
   gl_FragDepth = clipSpace.z;
}

This approach works quite well (the buffer is not very precise at far distances, but that is a separate problem, which I will solve by using the OpenEXR format for the depth texture).
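For reference, assuming a standard symmetric perspective projection matrix (the gluPerspective form), the whole fragment shader above collapses to a scalar expression. With the stored linear value $d$ and the eye distance $z = z_{near} + d\,(z_{far} - z_{near})$, the depth it writes is

$$\mathrm{depth} = \frac{z_{far}\,(z - z_{near})}{z\,(z_{far} - z_{near})}$$

which maps $z = z_{near}$ to $0$ and $z = z_{far}$ to $1$, so in principle the de-linearization only needs scalar add/multiply/divide operations, not a full matrix multiply.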

My question is: is it possible to "migrate" the buffer "de-linearization" done in the fragment shader into the compositor and obtain the non-linear z-buffer image directly? I saw that there is no matrix manipulation on the Math node, and that it is not possible to create custom nodes with Python (this whole process is part of an automatic Blender batch I wrote in Python); you can only create node groups. A sketch of the kind of node group I have in mind follows.
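To make the question concrete, this is the sort of node group I imagine the batch script could build (a minimal, untested sketch; it assumes the node group API of Blender versions before 4.0, i.e. group.inputs/group.outputs, and build_delinearize_group and its math helper are names of mine):

import bpy

# Minimal sketch (untested): a compositor node group that converts the
# linear depth d into the non-linear window-space depth via
#   z = zNear + d * (zFar - zNear)
#   depth = zFar * (z - zNear) / (z * (zFar - zNear))
# using only Math nodes (no matrix manipulation needed).
def build_delinearize_group(z_near, z_far, name="DelinearizeDepth"):
    group = bpy.data.node_groups.new(name, "CompositorNodeTree")
    group.inputs.new("NodeSocketFloat", "LinearDepth")
    group.outputs.new("NodeSocketFloat", "Depth")

    group_in = group.nodes.new("NodeGroupInput")
    group_out = group.nodes.new("NodeGroupOutput")

    def math(op, a, b):
        # Create a Math node, wiring each argument either as a socket
        # link or as a constant default value.
        node = group.nodes.new("CompositorNodeMath")
        node.operation = op
        for sock, value in zip(node.inputs, (a, b)):
            if isinstance(value, (int, float)):
                sock.default_value = float(value)
            else:
                group.links.new(value, sock)
        return node.outputs[0]

    d = group_in.outputs["LinearDepth"]
    z = math("ADD", math("MULTIPLY", d, z_far - z_near), z_near)
    num = math("MULTIPLY", math("SUBTRACT", z, z_near), z_far)
    den = math("MULTIPLY", z, z_far - z_near)
    group.links.new(math("DIVIDE", num, den), group_out.inputs["Depth"])
    return group

Since the NEAR and FAR values are already known to the batch script, they can simply be baked into the group as constants.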

If the previous options are not possible, is it possible to add a custom shader to the compositor (to do the same job as my OpenGL fragment shader)?

Thanks a lot

Ray
