
I want to create 2D particles that are always drawn at exactly the same size, no matter where they are in the scene, at what angle they are viewed, or what size the final render is.

So if my particle texture is 8x8 pixels, I want it to show up as exactly 8x8 pixels in the final render.

This can be done with an alpha overlay in the compositor, but I'd like to apply the effect to moving particles and other objects. It would also be nice if the sprites respected the depth of other objects (so that an object in front of a particle obscures it).

EDIT:

I have found a way to do this with geometry nodes by scaling objects depending on their distance from the camera:

[screenshot: geometry node setup that scales instances by their distance from the camera]
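For reference, the scale such a setup has to output grows linearly with the depth along the camera's view axis. Here is a minimal Python sketch of that math (the function name, parameter names, and the 50 mm / 36 mm defaults are mine, and it assumes a horizontal sensor fit):

```python
def sprite_scale(depth, sprite_px, focal_length_mm=50.0,
                 sensor_width_mm=36.0, res_x=1920):
    """World-space size a plane needs so that `sprite_px` texture pixels
    cover exactly `sprite_px` render pixels at view-axis depth `depth`."""
    # World-space width of one render pixel at this depth.
    world_per_pixel = depth * sensor_width_mm / (focal_length_mm * res_x)
    return sprite_px * world_per_pixel

# An 8x8 sprite 10 units from the camera, default 50 mm lens, 1920 px render:
print(sprite_scale(10.0, 8))  # 0.03 world units
```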

Unfortunately it's not quite right: the sprites frequently get stretched or squashed by 1 px, which looks very distracting, especially in animations:

[screenshot: sprites stretched/squashed by 1 px]

Here are the nodes / .blend

https://blenderartists.org/uploads/short-url/5qs6SVA0ARB0jijR0EDvTBhPTCX.blend

If anyone can figure out how to avoid this, it would be great.

  • Something to do with the "Window" texture coordinate? – TheLabCat (Feb 10, 2022 at 21:01)
  • I would guess the reason is rounding/precision issues. – Chris (Feb 27, 2022 at 8:26)
  • Okay, but how to fix it? – stackers (Feb 27, 2022 at 17:54)
  • The reason is that even for a perfectly aligned plane, in perspective projection the various "pixels" (understood as regions translating to a pixel) have different areas depending on their distance from the camera, which is not constant. (It would be constant only if, instead of a plane, you used part of a sphere with the camera's pinhole at its center.) (Feb 28, 2022 at 12:18)
  • I finally got some time to play around with this, and as soon as I opened Blender, practice brutally defeated the theory... What I said would be true for a human eye, but it is not true for the flat sensor that Blender simulates. (Feb 28, 2022 at 21:06)

1 Answer


If your camera is aligned to the axes, you can use a Separate XYZ node to easily get the distance between the sensor and an instanced plane along the ray perpendicular to them (the central camera ray, the center of the camera's frustum), and then use the formula from Gordon Brinkmann's answer:

Why does reducing Resolution X increase my camera's FOV and increasing it reduce FOV?
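From that relation (assuming a horizontal sensor fit), the world-space width of one render pixel at view-axis depth $d$ works out to

$$w_{\text{px}}(d) = \frac{d \cdot \text{sensor width}}{\text{focal length} \cdot \text{Resolution X}},$$

so a sprite meant to cover $n$ render pixels needs a world-space size of $n \cdot w_{\text{px}}(d)$.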

If you can't afford to have the camera axis-aligned, you can create a vector representing the camera's direction when it has no rotation, (0, 0, −1), rotate it by the camera's rotation, and calculate the Dot Product (a.k.a. projection product) between it and the difference of the camera location and a given point's location.
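The same idea as a script (a sketch only, written with Blender's mathutils; the function name and the sign convention, point minus camera, are my own choices):

```python
import math
from mathutils import Euler, Vector  # Blender's bundled math module

def camera_depth(point, cam_location, cam_rotation):
    """Distance to `point` measured along the camera's view axis --
    the Dot Product idea above; points in front give a positive depth."""
    view_dir = Vector((0.0, 0.0, -1.0))  # an unrotated camera looks down -Z
    view_dir.rotate(cam_rotation)        # rotate it by the camera's rotation
    # Project the camera-to-point offset onto the view direction.
    return (Vector(point) - Vector(cam_location)).dot(view_dir)

# A camera at the origin rotated 90° about X looks along +Y:
print(camera_depth((0, 5, 0), (0, 0, 0), Euler((math.radians(90), 0, 0))))  # 5.0
```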

But since you would then still need to calculate the sensor's XY coordinates in a similar way, I decided to just rotate the coordinates to where they would be if the camera looked straight down, snap them, and then rotate back.
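The rotate → snap → rotate-back idea in script form (again just a sketch under the horizontal-fit assumption; all names and defaults are illustrative, not the exact node tree):

```python
from mathutils import Euler, Vector  # available inside Blender's Python

def snap_to_render_pixels(point, cam_location, cam_rotation,
                          focal_length_mm=50.0, sensor_width_mm=36.0,
                          res_x=1920):
    """Snap a world-space point onto the render's pixel grid."""
    rot = cam_rotation.to_quaternion()
    # Express the point in camera space: camera at the origin, looking down -Z.
    local = rot.inverted() @ (Vector(point) - Vector(cam_location))
    depth = -local.z  # positive for points in front of the camera
    # World-space width of one render pixel at this depth (horizontal fit).
    pixel = depth * sensor_width_mm / (focal_length_mm * res_x)
    # Snap the sensor-plane coordinates to whole pixels.
    local.x = round(local.x / pixel) * pixel
    local.y = round(local.y / pixel) * pixel
    # Rotate back into world space.
    return rot @ local + Vector(cam_location)
```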

The oddness/evenness of a given axis has to be the same for the image and the render; otherwise the center of the image lies between pixels (on a pixel boundary) but gets snapped to a pixel center, or vice versa. The node setup could be improved to deal with that, but it would clutter the main solution.
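One way the setup could compensate, sketched as a hypothetical helper (not part of the posted .blend): shift the snapping grid by half a pixel on any axis where the parities differ.

```python
def parity_offset(render_px, image_px):
    """Half-pixel grid offset for an axis whose render and image
    resolutions have different parity (hypothetical helper)."""
    return 0.0 if render_px % 2 == image_px % 2 else 0.5

print(parity_offset(1920, 8))  # 0.0 -- both even, the centers line up
print(parity_offset(1921, 8))  # 0.5 -- shift the snap grid by half a pixel
```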

Likewise, I only use Resolution X, because I assume either a horizontal sensor fit, or an Auto fit with the horizontal dimension being the larger one.

  • Wow. I really thought this was impossible. :) – Robin Betts (Mar 11, 2022 at 16:48)
  • I'm amazed, it seems like you actually solved it. Still trying to understand it all. The camera distance from a plane rather than from the center of the camera makes sense; I actually tried something similar, but it didn't seem to work. By "having the camera aligned", did you mean pointing at 0,0? – stackers (Mar 12, 2022 at 5:54)
  • @stackers I didn't mean location but orientation. If your camera's rotations are 0;0;0, then you can use my setup without the two Vector Rotate nodes. If your camera has some rotation but is aligned to the axes, for example 90°;0;90°, then you just need to treat the XYZ components differently (sensor Y becomes world Z, sensor X is still world X, and the distance to the camera becomes world Y). (Mar 12, 2022 at 9:03)
  • I've just noticed the textures are actually upside down (a bit hard to tell with my crappy texture). I tried to add some rotation at different points in the nodes, but it seems to move the points or rotate them to an incorrect angle (if I put a Vector Rotate right before the Rotation input of Instance on Points). Any ideas? – stackers (Mar 14, 2022 at 16:14)
