
I have some basic experience with Blender and can use the node editor for basic tasks like combining shaders, textures, etc. The workflow of a node editor looks like a functional programming language to me, but I still don't understand the node editor the way I understand, say, a programming language like Python. This is how I think the node graph works (for materials):

  1. The Texture Coordinate node generates the actual 3D coordinates of the surface at its Generated output socket.

  2. The node graph determines what color should be displayed at that coordinate and passes it to the output node.

  3. This procedure is repeated for each point on the surface in question.
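The three steps above can be sketched as a pure function that maps a surface coordinate to a color, called once per shaded point. This is an illustrative model only, not the Blender API; the names `checker` and `evaluate_material` are made up for the sketch.

```python
# Hypothetical sketch: the material node graph as a pure function,
# evaluated independently for each shaded point on the surface.

def checker(coord, scale=4.0):
    """A minimal procedural texture node: a 3D checker pattern."""
    x, y, z = (int(c * scale) for c in coord)
    return (1.0, 1.0, 1.0) if (x + y + z) % 2 == 0 else (0.0, 0.0, 0.0)

def evaluate_material(generated_coord):
    """Steps 1-2 above: take the Generated coordinate, run it
    through the node graph, return the color for the output node."""
    return checker(generated_coord)

# Step 3, "repeated for each point": the renderer calls this once per sample.
points = [(0.1, 0.1, 0.0), (0.3, 0.1, 0.0)]
colors = [evaluate_material(p) for p in points]
```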

These are all hypotheses. I can't find anything in the official documentation explaining how the node graph actually works. The descriptions of the various nodes seem to assume a certain level of familiarity with Blender internals.

These are the queries I have:

  1. Are the above hypotheses correct?
  2. If yes, can you flesh them out a bit? They are lacking a lot (just like my knowledge of Blender).
  3. If no, what's wrong with them? What's the correct model?
  4. Where should I look if I want to dig deeper into this topic?
  5. How do the material node graph and texture node graph work with each other?
  • You are talking about the Blender Internal nodes, right? Because Cycles is not like you described. But I think your hypothesis for BI material nodes is absolutely correct. I used it once, and I remember that it works exactly like that: it evaluates the color for every pixel in the final image. For that, it passes arguments like position, texture coordinates, and so on to the material nodes, and the nodes calculate the color. It's just like a GLSL shader in OpenGL.
    – HenrikD
    Commented Nov 29, 2018 at 14:56
  • The documentation provided by the Blender Foundation can be of very poor quality. I also had BASE (Blender Aggravation, Shock, and Exasperation) when I first saw the documentation, and even today. Perhaps your understanding is adequate, with occasional help from BSE. Commented Nov 29, 2018 at 17:15
  • Python.org has the goals and standards of publishing high-quality documentation online. I would not say the same of Blender.org. Blender emphasizes features in their written materials. Commented Nov 29, 2018 at 17:33
  • This is late, but kinda, sure. Your pathway is overly specific (generated texture coordinates are not all that exist). "The color that should be displayed" is really a function of what the ray has hit and will hit, in addition to these nodes. And "repeated" isn't quite right, although I'm not sure how pedantic that is: it's pretty essential to understanding nodes that you understand that every sample is evaluated in parallel, not serially.
    – Nathan
    Commented Aug 31, 2022 at 23:24

2 Answers


Think of nodes as methods, just like in a programming language.

Methods have inputs and outputs, and so do nodes.

Your 1st hypothesis is correct, but the 2nd and 3rd are beyond my little 15-year-old brain.

Basically, you provide some inputs to a node, the node does its job, and you get the result as an output.
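The "nodes as methods" idea can be sketched as plain functions whose outputs are wired into other functions' inputs, like noodles between sockets. The node names here (`multiply_node`, `mix_node`) are illustrative stand-ins, not Blender identifiers.

```python
# Illustrative only: each node is a function with inputs and outputs,
# and connecting nodes is just passing one output as another's input.

def multiply_node(value, factor):      # a "Math: Multiply" style node
    return value * factor

def mix_node(color_a, color_b, fac):   # a "Mix" style node
    return tuple(a * (1 - fac) + b * fac for a, b in zip(color_a, color_b))

# Wiring: the output of one node becomes the input of the next.
fac = multiply_node(0.25, 2.0)
result = mix_node((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), fac)
```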


For a formal understanding, you should read up on what a pixel shader is (see the Wikipedia article).

Your hypotheses are absolutely correct. The program is applied individually to every pixel on the screen.

Variable input nodes like Texture Coordinate give you output data directly from a vertex shader. In simpler terms, they give you the relevant data of the geometry found at that pixel.
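A toy model of that per-pixel flow, under the assumptions above: the renderer figures out which geometry each pixel covers, then hands the interpolated attributes to the shader function. The names (`shader`, `covered`, the attribute keys) are hypothetical, not any real renderer's API.

```python
# Sketch (not Blender code): per-pixel evaluation. For each pixel that
# covers geometry, the renderer passes interpolated attributes
# (position, normal, UV, ...) into the node-graph "function".

def shader(attrs):
    # The node graph: here, just visualize the surface normal as a color.
    nx, ny, nz = attrs["normal"]
    return ((nx + 1) / 2, (ny + 1) / 2, (nz + 1) / 2)

framebuffer = {}
# Pretend these two pixels were found to cover some geometry:
covered = {(0, 0): {"normal": (0.0, 0.0, 1.0)},
           (1, 0): {"normal": (1.0, 0.0, 0.0)}}
for pixel, attrs in covered.items():
    framebuffer[pixel] = shader(attrs)
```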

It's the same mechanism for the texture nodes, except that the effect is not dynamic: all the inputs can be calculated ahead of time, and the result can be buffered.

Shader nodes are different, since they almost always depend on the view vector (unless you use no variable or implicit input nodes).

Shader nodes will almost always be a function.

A texture node graph can be seen as a texture itself, or as a 3D texture if time is a variable.

This is the simple version, true for Eevee. For Cycles, being a ray-traced engine, things become a lot more complicated for the shader nodes. The layout still behaves as a single function, but it is no longer run once for each pixel on the screen.

It is run for every intersection of a ray with geometry, and it has a branching, recursive nature at each 'Shader'-type node.

On the first bounce, just like in a rasterizer, a ray is emitted corresponding directly to a pixel on screen. It intersects geometry and evaluates the 'function' you have defined. Each subsequent bounce evaluates that function at a different point, with new inputs for that 'texel'.
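The recursive, per-intersection evaluation described above can be sketched as a toy trace loop. This is not Cycles code; the one-dimensional "scene", `intersect`, and `bounce` are invented purely to show the shape of the recursion.

```python
# Toy sketch of the recursion: each ray-geometry intersection runs the
# shader "function", and that function can spawn a further bounce ray,
# recursing until the ray escapes or the bounce limit is reached.

def intersect(ray_origin):
    # Hypothetical 1D scene: surfaces at 1, 2, 3; rays past 3 hit nothing.
    return ray_origin + 1 if ray_origin < 3 else None

def bounce(hit):
    # Hypothetical bounce: continue from the hit point.
    return hit

def trace(origin, depth=0, max_depth=5):
    if depth >= max_depth:
        return 0.0                    # bounce limit reached
    hit = intersect(origin)
    if hit is None:
        return 1.0                    # ray escaped to the background light
    # The shader at the hit point: absorb half, recurse for the rest.
    return 0.5 * trace(bounce(hit), depth + 1, max_depth)
```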

