
I have a Unity URP project, and in the Scriptable Render Pipeline I would like to create a RenderTexture that uses the camera's existing depth and color buffers as its own depth and color buffers. Unfortunately, those buffers only expose getters, so it seems that only copy/blit operations would work. However, I don't want to copy the data; I just want to pass the references. Is there a way to achieve this?

I want to process the camera's depth and color buffers in a single shader pass, but at the moment I have to do it in two steps.

In the shader, I would process both of them like this:

// Writes color and depth in one pass: SV_Target receives the
// sharpened color, SV_Depth the forwarded source depth.
float4 frag(v2f i, out float depth : SV_Depth) : SV_Target
{
    float scale = 2.0;
    float3 color = Rcas(i.uv, scale); // RCAS sharpening of the color buffer

    // Pass the source depth through so the depth buffer is preserved.
    depth = SAMPLE_DEPTH_TEXTURE(_MainTex, i.uv);
    return float4(color, 1.0);
}

Any ideas are appreciated!

Comments:

  • This looks like something to implement as a Render Feature in the Scriptable Render Pipeline. This lets you inject a full-screen pass between the camera rendering the scene and when the result gets shown on the screen, using Unity's internal pool of buffers for reading/writing colours/depths instead of managing your own. I show an example here, though the exact steps may have changed as URP has matured.
    – DMGregory, Mar 19 at 11:27
  • Putting the camera's existing render buffers into a RenderTexture without a copy doesn't make sense, because while it's reading from that buffer, where would the camera write to? But using a Render Feature, Unity can ping-pong between its internal buffers, reading from one to write to the other, without a copy in between. Whichever member of the ping-pong pair it writes to last becomes the output.
    – DMGregory, Mar 19 at 11:42
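The Render Feature approach from the comments could be sketched roughly as below. This is a minimal sketch against the older (pre-RTHandle) URP API; `SharpenFeature`, `SharpenPass`, and `_TempSharpenTarget` are hypothetical names, the material is assumed to use the fragment shader from the question, and recent URP versions replace `cameraColorTarget`/`Blit` with the `RTHandle`/`Blitter` API, so the exact calls may differ in your URP version:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical feature that injects a full-screen sharpening pass
// after post-processing, using Unity's internal buffer pool instead
// of a manually created RenderTexture.
public class SharpenFeature : ScriptableRendererFeature
{
    class SharpenPass : ScriptableRenderPass
    {
        readonly Material material;       // material using the frag() shader above
        RenderTargetIdentifier source;    // camera color target
        RenderTargetHandle temp;          // internal scratch buffer

        public SharpenPass(Material material)
        {
            this.material = material;
            temp.Init("_TempSharpenTarget");
            renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing;
        }

        public void Setup(RenderTargetIdentifier source) => this.source = source;

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("Sharpen");
            var desc = renderingData.cameraData.cameraTargetDescriptor;
            cmd.GetTemporaryRT(temp.id, desc);

            // Ping-pong: read the camera target, write into the scratch
            // buffer through the material, then copy back. No user-managed
            // RenderTexture and no reference-swapping needed.
            cmd.Blit(source, temp.Identifier(), material);
            cmd.Blit(temp.Identifier(), source);

            cmd.ReleaseTemporaryRT(temp.id);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public Material material; // assign the sharpening material in the inspector
    SharpenPass pass;

    public override void Create() => pass = new SharpenPass(material);

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        pass.Setup(renderer.cameraColorTarget);
        renderer.EnqueuePass(pass);
    }
}
```

Note that in some URP versions `renderer.cameraColorTarget` may only be read from within the pass setup callbacks rather than `AddRenderPasses`, and a `Blit` to the color target does not automatically bind the camera's depth buffer, so writing `SV_Depth` may require explicitly setting the render target with a depth attachment.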
