
The White Noise node returns a pseudo-random float between 0 and 1, with a uniform distribution of values (i.e. all ranges of values of the same size occur with the same probability). But it is discontinuous: a very small change in the lookup coordinate can create an arbitrarily large change in the output value, which makes it very sensitive to floating-point error.

Other shader noises are continuous, but as far as I know none of them has a uniform distribution.

Does anyone know a way to achieve both continuity and uniform distribution? Is there any way, for example, to 'blur' White Noise? I've tried reducing the precision of the look-up, but maybe I'm not approaching that the right way.

  • First of all I would say, of course it is discontinuous, since it tries to represent technical white noise. It is not like a texture shader. If you say 'blur' White Noise, what about using a Noise Texture? If you choose a very large scale so that the noise has a very high resolution, I could imagine it resembles white noise a little bit - and it's not discontinuous. Commented Oct 13, 2021 at 13:32
  • ...but I guess I'm not quite sure what exactly you need. Commented Oct 13, 2021 at 14:35
  • 1
    $\begingroup$ I'm not sure if an exact result will be possible (the continuous distributions, like Perlin, all seem to have pretty complicated algorithms rather than nice terse probabilistic descriptions), but the GitHub I linked above took an interesting approach---if I understand them right, they computed the cumulative distribution function empirically for a large number of samples, and then used this to determine how to remap the pixel values. This would at least give a somewhat more principled reason for using some specific color ramp settings (to approximate the CDF) $\endgroup$ Commented Oct 14, 2021 at 0:26
  • 1
    $\begingroup$ (specifically, they approximate the empirically observed CDF with a polynomial, but I think you could skip that step and just directly approximate the empirically observed CDF with a color ramp or some other Blender node. This approach corresponds, approximately, to just directly "plugging in" the input random variable into the CDF to get a uniform random variable: en.wikipedia.org/wiki/Probability_integral_transform#Statement An interesting question is which existing continuous distribution in Blender is easiest/best to do this with, mm) $\endgroup$ Commented Oct 14, 2021 at 0:28
  • 1
    $\begingroup$ And, also: should you try to approximate the CDF in 1-d, 2-d, some higher number of dimensions.. (e.g., one per pixel?)? 1-d is probably simplest (just approximate the CDF of all observed pixel values treated as samples from a single simple random variable), but it may improve the "uniformity" to interpret the Perlin noise as a bunch of samples from a 2-D random variable (or higher d) in some way $\endgroup$ Commented Oct 14, 2021 at 0:37

1 Answer


This answer is a bit long, since I wanted to provide context, to help explain some of the choices I made, their consequences, and what alternatives might be possible. Skip to the Implemented Solution Examples section at the end if you just want to see Python code and Blender nodes for actually solving the problem in a few ways, and notes on limitations/possible extensions.

Spatial Uniformity

Before talking about how to achieve uniformity, it might be good, first, to talk about what we mean by 'uniform':

The White Noise node returns a pseudo-random float between 0 and 1, with a uniform distribution of values (i.e. all ranges of values of the same size occur with the same probability).

The most limited but simplest way of interpreting this requirement is: letting I be a single large region (a set of pixels) in any single realized sample from the generated noise image (which may be 2-D, 3-D, 4-D, etc.), for any interval [a,b], we require that the number of pixels in I with color values inside [a,b] be proportional to the interval length |b-a|. (Note: I always work over simple black-and-white images here, and usually use "color" synonymously with "black-white pixel intensity"; but the ideas all extend in the obvious way to RGBA color channels and the like.)
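
To make that concrete, here's a tiny numpy sketch (the helper name is my own, and nothing here is Blender-specific) of how you might test this single-region version of the requirement on a flattened array of pixel values:

import numpy as np

def uniformity_error(pixels, bins=16):
    # Compare the fraction of pixel values landing in each of `bins`
    # equal-width sub-intervals of [0, 1] against the ideal 1/bins,
    # and report the worst absolute deviation.
    counts, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    return np.max(np.abs(counts / pixels.size - 1.0 / bins))

rng = np.random.default_rng(0)
print(uniformity_error(rng.uniform(size=256 * 256)))     # small: uniform
print(uniformity_error(rng.beta(8, 8, size=256 * 256)))  # large: clumped near 0.5

Note this only checks one region at a time.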

But this suggests we can strengthen the requirement further by demanding it hold over more than one sub-region; for example, we might partition the input image I into large rectangles, and require that, within each such rectangle, pixels with values in each [a,b] occur with frequency proportional to |b-a|. Really, in principle, there's no need for us to restrict ourselves to rectangular regions, and there's no reason they must be mutually exclusive; the most general form of this requirement might say something like: let S = [S1, S2, ...] be a (possibly infinite) list of sets of pixels (i.e., each Si corresponds to some arbitrary subset of the image); we might want to require uniformity on every such subset.

Sampling Uniformity

I call the above notions of uniformity spatial uniformity, because they are meant to capture the idea that we want uniformity in the distribution of values in any single 2-D image (which can be understood as a single high-dimensional sample from some input distribution, e.g. Perlin or Voronoi, etc). Note that this notion of uniformity is impossible to satisfy exactly, and becomes more and more difficult to 'reasonably' satisfy, at increasingly small scales: with, say, P possible color values in a single pixel, it is impossible to impose this kind of spatial uniformity requirement on any region Si containing fewer than P pixels (some values will always be missing; of course, we could still work to get 'closer' to uniformity, even in cases where exactly achieving it is not possible); and, the fewer pixels we have to work with, the more our joint goals of "continuity" and "spatial uniformity" are at odds with one another.

In any case, just for the sake of being precise, there is at least one other kind of uniformity we should talk about, because it may be the kind that occurs to someone looking at this problem abstractly, without being focused on graphical applications specifically: given a sequence of independent random samples of our distribution (e.g., maybe we pick new random seeds uniformly at random, and each one yields a different, independent 1-D line-of-numbers or 2-D image or 3-D volume or 4-D volume-over-time sample of our desired distribution), we could require not that uniformity holds within any single realized image, but instead that certain margins over this sampling distribution are uniform. The simplest version of this might require that, for any single pixel, if we draw a large number of random seeds, that pixel takes values in any interval [a,b] with probability proportional to |b-a|.
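
A toy illustration of the distinction (pure numpy, with white noise standing in for the sampler, since it conveniently has both properties):

import numpy as np

rng = np.random.default_rng(1)
# 1000 independent "seeds", each yielding a 32 x 32 white-noise image.
samples = rng.uniform(size=(1000, 32, 32))

# Sampling uniformity: fix a pixel, look at its values across seeds.
pixel_over_seeds = samples[:, 0, 0]

# Spatial uniformity: fix a seed, look at values across one image.
single_image = samples[0].ravel()

# For white noise both margins are uniform; a smooth noise could satisfy
# the first while badly failing the second in small regions.
print(pixel_over_seeds.mean(), single_image.mean())  # both near 0.5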

This second notion of uniformity is, I think, less closely related to what @RobinBetts has in mind in his question, since we only ever see a single sample from our Blender noise distributions. So, it seems reasonable to demand one of the forms of spatial uniformity suggested above, rather than sampling uniformity, and that's what I'll focus on in the following.

The Probability-Integral Transform

The probability-integral transform is a fundamental theorem in probability theory, useful for taking an initial, continuous random variable and identifying a function that, if applied to it, will convert it into a uniform random variable (an aside: inverting the ideas in this theorem also yields the ubiquitous method of inverse-transform sampling, which can be used to identify a function that, when applied to a uniform random variable, will yield samples from a distribution of interest. This method of converting uniform random variables into target random variables of interest is, for example, used extensively in the random sampling primitives implemented in numpy.)

The statement of the probability-integral transform theorem is very simple: if you have a random variable R from a known distribution, and you form its cumulative distribution function CDF_R(), then CDF_R(R) is a random variable that has a uniform distribution. CDF_R() is just the function defined by CDF_R(r) = Pr[R <= r]; you can think of CDF_R() as taking in a possible value r of R and telling you what percentage of the time R falls below r.

Or, a little more casually, we can just think of CDF_R() as converting values of R into "ranks": you feed r into CDF_R(), and it tells you the percentile-rank of r. The probability-integral transform is then just telling us: while the original input variable isn't uniformly distributed, its percentiles certainly are! Looked at this way, the probability-integral transform can be viewed as a general method for taking a non-uniform function and telling you how to remap its values to ranks, which will be uniform.
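
A quick sanity check of the theorem with numpy/scipy (any continuous distribution with a known CDF would do; I use a normal here purely for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r = rng.normal(size=100_000)       # decidedly non-uniform samples

# Push the samples through their own CDF: values -> percentile ranks.
u = stats.norm.cdf(r)

# A Kolmogorov-Smirnov test against Uniform(0, 1) should not reject.
print(stats.kstest(u, "uniform"))  # expect a large p-value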

It may be worth noting that this doesn't generally work for discrete random variables, or for functions that only take values in a discrete range. For example, if we have an R that takes the values 0 with probability 0.9 and 1 with probability 0.1, then CDF_R() is the function CDF_R(r) that returns 0.9 for r=0 and 1.0 for r=1. Obviously, this isn't uniform!

The trouble here is that discrete random variables (and functions of discrete input spaces) can be "clumpy": no matter how far we zoom in on them, there's still a lot more mass on the single value 0 than on the single value 1. But this doesn't happen with continuous functions and random variables: if you zoom in close enough to a continuous function, neighboring values are arbitrarily close to one another, essentially indistinguishable, and the probability of any single value being observed is 0. This lets us "spread the values out" by converting them to ranks, which we could not do if they were clumpy in the manner of discrete functions and random variables.
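
The same experiment for the discrete example above shows the failure directly (a sketch; the CDF is hard-coded from the stated probabilities):

import numpy as np

rng = np.random.default_rng(0)
r = rng.binomial(1, 0.1, size=100_000)   # 0 w.p. 0.9, 1 w.p. 0.1

# CDF_R(0) = 0.9, CDF_R(1) = 1.0: applying the CDF just relabels the
# two clumps, so the output is still supported on only two values.
u = np.where(r == 0, 0.9, 1.0)
print(np.unique(u, return_counts=True))  # ~90% of the mass still sits on 0.9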

A useful observation is that the idea underlying the probability-integral transform isn't really specific to random variables. Given any set of distinct observed values v1, v2, ..., vk from an interval [A,B], we may want to "flatten" them to remove noticeable clumping, and we can always do this by mapping the vi to their ranks and normalizing the ranks over [A,B]. For example, if we have observed the values 0, 9, 10, 11, 12, 20 over the range [0,20], there is clearly some clumping in the center of this interval. But we can convert these to the ranks 0, 1, 2, 3, 4, 5, and then normalize: (0/5)*20, (1/5)*20, ..., (5/5)*20, which gives 0, 4, 8, 12, 16, 20 -- a set of values evenly distributed, with the central clump removed. That we can use probability-integral-transform-like arguments here will be useful below, because when considering all the pixels in a region Si, these aren't really independent samples from a single reference distribution (so we aren't technically approximating an independent random variable's CDF by empirically estimating what proportion of values fall below each threshold); indeed, we need these pixels to exhibit significant correlation with one another (otherwise, we wouldn't have continuity!).
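
In code, that flattening is just a rank computation followed by a rescale; here's the worked example above, verbatim:

import numpy as np

v = np.array([0, 9, 10, 11, 12, 20], dtype=float)  # clumped in the middle
A, B = 0.0, 20.0

ranks = np.argsort(np.argsort(v))                  # [0, 1, 2, 3, 4, 5]
flattened = A + ranks / (len(v) - 1) * (B - A)
print(flattened)                                   # [ 0.  4.  8. 12. 16. 20.]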

Using the Probability-Integral Transform to Approximate Uniformity

Given an input distribution (e.g., Perlin, Voronoi, etc.) and a large sub-region of pixels S, we can use the ideas discussed above to compute, empirically, the rank of each pixel value in S, re-map each pixel to its rank, and then renormalize those ranks to the bounds of the color space. This transformation gives us spatial uniformity over S (a self-contained sketch of the transform appears after the lists below). This approach is still limited in a few ways:

  1. Uniformity will not generally hold over arbitrary sub-regions of S, nor regions that lie outside of S, nor regions that intersect S partially, etc, although the departures from uniformity will be smaller the closer these other regions are to the region S we used to generate our "uniformizing" ranks originally (though we can partially combat this by adding other such "regions" S', S'', etc, and averaging based on distance from each of them, if needed)
  2. Actual pixel values are discrete, not continuous (but we can't do much about this, and there are "enough" levels in most color spaces that we can reasonably pretend they are continuous, even though we know they're not)
  3. We're not trying to compute any sort of theoretically well-defined CDF exactly. An alternative approach might try to define from first principles a kind of graphical noise that obeys some form of weak or strong spatial uniformity, or might start from the exact definitions of known spatially continuous noise distributions (e.g., Perlin noise) and try to show how to transform this to achieve spatial uniformity exactly. The kinds of graphical noise distributions typically in use don't have very simple probabilistic descriptions though, so we'll settle for empirical approximations
  4. To implement our approximation re-mapping of color values in practice, we'll use Blender nodes, which won't allow for exactly representing the re-mapping (and instead use some kind of linear or spline interpolation, etc, depending on options), but are simple and fast

On the other hand, there are two major advantages to our approach:

  1. Empirically estimating how to transform from a given spatially continuous noise distribution back to a spatially uniform distribution is much easier than doing so exactly
  2. Starting from a known spatially continuous distribution and transforming it to have some form of spatial uniformity (rather than defining a new, spatially uniform and continuous distribution from first principles) has the very useful side effect that we'll get different results depending on which input noise distribution we "plug in" (and this will be easy/fast to switch, thanks to 1)
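
Before getting into the Blender plumbing, here is the whole rank-remapping idea in isolation, as a Blender-free numpy sketch (the sum-of-sines image is just a smooth stand-in for a continuous noise; it is not Blender's Perlin):

import numpy as np

def uniformize(image):
    # Map each pixel to its normalized rank among all pixels. Ordering
    # (and hence approximate continuity) is preserved; the histogram of
    # the result is flat by construction.
    flat = image.ravel()
    ranks = np.argsort(np.argsort(flat))
    return (ranks / (flat.size - 1)).reshape(image.shape)

x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 256),
                   np.linspace(0, 4 * np.pi, 256))
smooth = (np.sin(x) * np.cos(y) + 1) / 2       # continuous but clumpy
print(np.histogram(uniformize(smooth), bins=8, range=(0, 1))[0])  # ~equal bins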

Implemented Solution Examples

First, an API caveat for something I haven't figured out: I'm unfortunately not sure if there's a nice way to evaluate, from the Blender Python API, the noise samplers used directly in Blender Shader nodes. The Python API definitely exposes implementations of the samplers used for building textures, and of various intermediate noise utility functions, but while these overlap a bit with the distributions we can access manually in Shader nodes, there are also some differences. The scale, detail, and distortion parameters in the Noise Texture Shader Node, for example, don't seem to be exposed when building textures from the Python API in this way, and some noise texture nodes (like the Brick Texture Shader Node) don't seem to have any BlendDataTextures analogue. Unfortunately, this means I can't keep everything purely within the Python API; I'll work around this by rendering a sample of our texture of interest with an orthographic camera pointed at a 1 x 1 plane (which unfortunately limits this approach to 1-D and 2-D textures), with a manual Blend-file setup that looks like:

[Screenshot: the Blend-file setup, with an orthographic camera pointed directly at a 1 x 1 plane.]

Where the plane has this very simple material setup (and you could play with this, inserting other shader nodes, altering parameter values etc, of course, to find uniformizing mappings for different noise types and parameter settings):

[Screenshot: the plane's material, with a noise texture feeding directly into the Material Output.]
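
For reference, something close to this setup could also be built from Python; the following is a sketch, and the node/socket names are assumptions based on the screenshots rather than a verified part of the workflow:

import bpy

# A 1 x 1 plane at the origin...
bpy.ops.mesh.primitive_plane_add(size=1.0, location=(0, 0, 0))
plane = bpy.context.active_object

# ...viewed head-on by an orthographic camera scaled to frame it exactly.
bpy.ops.object.camera_add(location=(0, 0, 1))
cam = bpy.context.active_object
cam.data.type = 'ORTHO'
cam.data.ortho_scale = 1.0
bpy.context.scene.camera = cam

# Material: a noise texture wired straight into the output.
mat = bpy.data.materials.new("NoiseSample")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
noise = nodes.new("ShaderNodeTexNoise")
links.new(noise.outputs["Fac"], nodes["Material Output"].inputs["Surface"])
plane.data.materials.append(mat)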

With this intermediate step, here's an example code snippet for generating "uniformizing" mappings from observed pixel values (in the original source noise) to what they "should" be if we want to enforce uniformity (here, in just a single target region; not attempting to achieve more complex spatial-uniformity-in-many-subregions-simultaneously, though I think it is possible to do so, with more work and at the expense of increased Shader compilation & rendering time):

import bpy
import numpy as np
import time

RESOLUTION = 8      # render resolution (RESOLUTION x RESOLUTION pixels)
DISPLAY_LIM = None  # limit how many mapping rows to print (None = print all)

def example():
    # Render the noise plane to an image file next to the .blend file.
    bpy.context.scene.render.resolution_x = RESOLUTION
    bpy.context.scene.render.resolution_y = RESOLUTION
    bpy.context.scene.render.filepath = bpy.path.abspath("//") + f"noise_example_{RESOLUTION}.png"
    bpy.ops.render.render(animation=False, write_still=True)
    noise_im = bpy.data.images.load(bpy.path.abspath("//") + f"noise_example_{RESOLUTION}.png")
    assert len(noise_im.pixels)/4 == RESOLUTION**2, f"# noise pixels: {len(noise_im.pixels)} /4 : {len(noise_im.pixels)/4}, Resolution: {RESOLUTION}"

    # Pixels are stored flat as RGBA; take every 4th value (the R channel)
    # as our black-and-white intensity, and sort.
    t0 = time.time()
    sorted_bw_pix = np.sort([noise_im.pixels[4*i] for i in range(RESOLUTION**2)])
    t1 = time.time()
    print(f"Sorting bw pixels took {t1 - t0} secs")

    # Method 1: empirical CDF, formed by normalizing the cumulative sums
    # of the sorted pixel values.
    cumsums = np.cumsum(sorted_bw_pix)
    t2 = time.time()
    print(f"Generating cumulative sums took {t2 - t1} secs")
    empirical_cdf = cumsums/np.sum(sorted_bw_pix)
    t3 = time.time()
    print(f"Normalizing to get empirical CDF took {t3 - t2} secs")

    print("[Method 1] Empirical CDF suggests the mapping:")
    for i, (p, e) in enumerate(zip(sorted_bw_pix, empirical_cdf)):
        if DISPLAY_LIM is None or i < DISPLAY_LIM:
            print(f"# {i}: {p}\t->\t{e}")
    increments = [empirical_cdf[i] - empirical_cdf[i-1] for i in range(1, len(empirical_cdf))]
    print(f"Max CDF increment: {max(increments)}")
    print(f"Min CDF increment: {min(increments)}")

    # Method 2: map each pixel to its normalized rank; the increments are
    # exactly uniform by construction.
    print("[Method 2] Sorted ranks suggest the mapping:")
    num_bw_pix = RESOLUTION**2
    for i, p in enumerate(sorted_bw_pix):
        if DISPLAY_LIM is None or i < DISPLAY_LIM:
            print(f"# {i}: {p}\t->\t{(i/(num_bw_pix-1))}")
    increments = [i/(num_bw_pix-1) - (i-1)/(num_bw_pix-1) for i in range(1, len(sorted_bw_pix))]
    print(f"Max rank-based increment: {max(increments)}")
    print(f"Min rank-based increment: {min(increments)}")

Here's an example of the above run at an impractically small 8 x 8 resolution:

>>> importlib.reload(uniformize_noise); uniformize_noise.example()
<module 'uniformize_noise' from '...uniformize_noise.py'>
Sorting bw pixels took 0.0 secs
Generating cumulative sums took 0.0 secs
Normalizing to get empirical CDF took 0.0 secs
[Method 1] Empirical CDF suggests the mapping:
# 0: 0.6274510025978088    ->    0.014787430759218456
# 1: 0.6313725709915161    ->    0.02966728295190246
# 2: 0.6352941393852234    ->    0.04463955657805201
# 3: 0.6392157077789307    ->    0.05970425163766712
# 4: 0.6392157077789307    ->    0.07476894669728222
# 5: 0.6431372761726379    ->    0.08992606319036288
# 6: 0.6431372761726379    ->    0.10508317968344354
# 7: 0.6431372761726379    ->    0.1202402961765242
# 8: 0.6431372761726379    ->    0.13539741266960484
# 9: 0.6431372761726379    ->    0.1505545291626855
# 10: 0.6431372761726379    ->    0.16571164565576615
# 11: 0.6470588445663452    ->    0.18096118358231236
# 12: 0.6470588445663452    ->    0.19621072150885857
# 13: 0.6509804129600525    ->    0.2115526808688703
# 14: 0.6509804129600525    ->    0.22689464022888206
# 15: 0.6549019813537598    ->    0.24232902102235937
# 16: 0.6549019813537598    ->    0.2577634018158367
# 17: 0.6549019813537598    ->    0.273197782609314
# 18: 0.658823549747467    ->    0.28872458483625685
# 19: 0.658823549747467    ->    0.3042513870631997
# 20: 0.658823549747467    ->    0.3197781892901425
# 21: 0.658823549747467    ->    0.33530499151708537
# 22: 0.658823549747467    ->    0.35083179374402823
# 23: 0.658823549747467    ->    0.3663585959709711
# 24: 0.6627451181411743    ->    0.3819778196313795
# 25: 0.6627451181411743    ->    0.3975970432917879
# 26: 0.6627451181411743    ->    0.4132162669521963
# 27: 0.6627451181411743    ->    0.42883549061260473
# 28: 0.6627451181411743    ->    0.44445471427301314
# 29: 0.6666666865348816    ->    0.46016635936688705
# 30: 0.6666666865348816    ->    0.475878004460761
# 31: 0.6666666865348816    ->    0.49158964955463497
# 32: 0.6666666865348816    ->    0.5073012946485089
# 33: 0.6666666865348816    ->    0.5230129397423828
# 34: 0.6666666865348816    ->    0.5387245848362568
# 35: 0.6666666865348816    ->    0.5544362299301308
# 36: 0.6666666865348816    ->    0.5701478750240048
# 37: 0.6666666865348816    ->    0.5858595201178787
# 38: 0.6666666865348816    ->    0.6015711652117527
# 39: 0.6666666865348816    ->    0.6172828103056266
# 40: 0.6666666865348816    ->    0.6329944553995005
# 41: 0.6705882549285889    ->    0.6487985219268401
# 42: 0.6705882549285889    ->    0.6646025884541795
# 43: 0.6705882549285889    ->    0.6804066549815191
# 44: 0.6705882549285889    ->    0.6962107215088585
# 45: 0.6745098233222961    ->    0.7121072094696637
# 46: 0.6745098233222961    ->    0.7280036974304687
# 47: 0.6745098233222961    ->    0.7439001853912737
# 48: 0.6745098233222961    ->    0.7597966733520788
# 49: 0.6745098233222961    ->    0.7756931613128838
# 50: 0.6745098233222961    ->    0.7915896492736889
# 51: 0.6745098233222961    ->    0.8074861372344939
# 52: 0.6784313917160034    ->    0.8234750466287645
# 53: 0.6784313917160034    ->    0.8394639560230351
# 54: 0.6784313917160034    ->    0.8554528654173057
# 55: 0.6784313917160034    ->    0.8714417748115764
# 56: 0.6784313917160034    ->    0.8874306842058469
# 57: 0.6784313917160034    ->    0.9034195936001176
# 58: 0.6784313917160034    ->    0.9194085029943881
# 59: 0.6823529601097107    ->    0.9354898338221243
# 60: 0.6823529601097107    ->    0.9515711646498605
# 61: 0.6823529601097107    ->    0.9676524954775966
# 62: 0.686274528503418    ->    0.9838262477387983
# 63: 0.686274528503418    ->    1.0
Max CDF increment: 0.016173752261201768
Min CDF increment: 0.014879852192684003
[Method 2] Sorted ranks suggest the mapping:
# 0: 0.6274510025978088    ->    0.0
# 1: 0.6313725709915161    ->    0.015873015873015872
# 2: 0.6352941393852234    ->    0.031746031746031744
# 3: 0.6392157077789307    ->    0.047619047619047616
# 4: 0.6392157077789307    ->    0.06349206349206349
# 5: 0.6431372761726379    ->    0.07936507936507936
# 6: 0.6431372761726379    ->    0.09523809523809523
# 7: 0.6431372761726379    ->    0.1111111111111111
# 8: 0.6431372761726379    ->    0.12698412698412698
# 9: 0.6431372761726379    ->    0.14285714285714285
# 10: 0.6431372761726379    ->    0.15873015873015872
# 11: 0.6470588445663452    ->    0.1746031746031746
# 12: 0.6470588445663452    ->    0.19047619047619047
# 13: 0.6509804129600525    ->    0.20634920634920634
# 14: 0.6509804129600525    ->    0.2222222222222222
# 15: 0.6549019813537598    ->    0.23809523809523808
# 16: 0.6549019813537598    ->    0.25396825396825395
# 17: 0.6549019813537598    ->    0.2698412698412698
# 18: 0.658823549747467    ->    0.2857142857142857
# 19: 0.658823549747467    ->    0.30158730158730157
# 20: 0.658823549747467    ->    0.31746031746031744
# 21: 0.658823549747467    ->    0.3333333333333333
# 22: 0.658823549747467    ->    0.3492063492063492
# 23: 0.658823549747467    ->    0.36507936507936506
# 24: 0.6627451181411743    ->    0.38095238095238093
# 25: 0.6627451181411743    ->    0.3968253968253968
# 26: 0.6627451181411743    ->    0.4126984126984127
# 27: 0.6627451181411743    ->    0.42857142857142855
# 28: 0.6627451181411743    ->    0.4444444444444444
# 29: 0.6666666865348816    ->    0.4603174603174603
# 30: 0.6666666865348816    ->    0.47619047619047616
# 31: 0.6666666865348816    ->    0.49206349206349204
# 32: 0.6666666865348816    ->    0.5079365079365079
# 33: 0.6666666865348816    ->    0.5238095238095238
# 34: 0.6666666865348816    ->    0.5396825396825397
# 35: 0.6666666865348816    ->    0.5555555555555556
# 36: 0.6666666865348816    ->    0.5714285714285714
# 37: 0.6666666865348816    ->    0.5873015873015873
# 38: 0.6666666865348816    ->    0.6031746031746031
# 39: 0.6666666865348816    ->    0.6190476190476191
# 40: 0.6666666865348816    ->    0.6349206349206349
# 41: 0.6705882549285889    ->    0.6507936507936508
# 42: 0.6705882549285889    ->    0.6666666666666666
# 43: 0.6705882549285889    ->    0.6825396825396826
# 44: 0.6705882549285889    ->    0.6984126984126984
# 45: 0.6745098233222961    ->    0.7142857142857143
# 46: 0.6745098233222961    ->    0.7301587301587301
# 47: 0.6745098233222961    ->    0.746031746031746
# 48: 0.6745098233222961    ->    0.7619047619047619
# 49: 0.6745098233222961    ->    0.7777777777777778
# 50: 0.6745098233222961    ->    0.7936507936507936
# 51: 0.6745098233222961    ->    0.8095238095238095
# 52: 0.6784313917160034    ->    0.8253968253968254
# 53: 0.6784313917160034    ->    0.8412698412698413
# 54: 0.6784313917160034    ->    0.8571428571428571
# 55: 0.6784313917160034    ->    0.873015873015873
# 56: 0.6784313917160034    ->    0.8888888888888888
# 57: 0.6784313917160034    ->    0.9047619047619048
# 58: 0.6784313917160034    ->    0.9206349206349206
# 59: 0.6823529601097107    ->    0.9365079365079365
# 60: 0.6823529601097107    ->    0.9523809523809523
# 61: 0.6823529601097107    ->    0.9682539682539683
# 62: 0.686274528503418    ->    0.9841269841269841
# 63: 0.686274528503418    ->    1.0
Max rank-based increment: 0.015873015873015928
Min rank-based increment: 0.015873015873015817

Here, the max and min "increment" sizes are used as a simple measure of how uniform the resulting distribution of values is. In both cases, it's pretty clear the distribution of observed values becomes much more uniform after the transformation. I think the rank-based mapping should be preferred, since it is a bit simpler, faster, and exact; but the final results should be visually indistinguishable at realistic resolutions for the input noise.

An important note: this is very slow at more reasonable resolutions (e.g., it took at least an hour, maybe as much as three, on a 1080 x 1080 example I tested early on). I haven't carefully profiled it, and it could definitely be better optimized (or easily, aggressively parallelized in some places). I would recommend only using the rank-based approach (which cuts out the np.cumsum, at least), and of course commenting out or deleting the parts of this you don't actually need (e.g., the prints); but, in any case, just be aware that this isn't particularly speedy, as written, at realistic resolutions.
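
One concrete piece of low-hanging fruit: the per-pixel indexing into noise_im.pixels is very slow from Python, and in 2.83+ it can be replaced with a single bulk copy via foreach_get, with the rank-based mapping built in one vectorized pass. A sketch, reusing the noise_im handle from the snippet above:

import numpy as np

# Bulk-copy all RGBA floats out of the image in one call, instead of
# indexing noise_im.pixels once per pixel from Python.
buf = np.empty(len(noise_im.pixels), dtype=np.float32)
noise_im.pixels.foreach_get(buf)
bw = buf[0::4]                            # R channel as b/w intensity

# Rank-based mapping in one vectorized pass.
order = np.argsort(bw)
targets = np.linspace(0.0, 1.0, bw.size)
desired_points = list(zip(bw[order].tolist(), targets.tolist()))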

All that is left to do after this is to actually encode the mapping; as @RobinBetts noted in the comments above, a standard Blender Color Ramp node is fine for this. Doing so may be a bit tedious unless new fitting points can be inserted into a Color Ramp programmatically, from Python (alternatively, one could fit a polynomial, spline, etc. using standard numpy or scipy functions and then approximate those with Blender shader nodes; ultimately, this won't be much different from directly fitting the above mappings into a Color Ramp, though). You could also choose to use just a small subset of the overall mapping, rather than encoding all of it in your Color Ramp, if the results look OK visually from a subset of points (and you might use, say, the rate of change in input pixel values to determine where you need more or fewer Color Ramp control points).

In Blender 2.83 (in case this is version-dependent), I played around a bit with the Python API access to ColorRamp nodes, and it looks like control points can be added to them programmatically pretty easily. So, if desired_points = [(0.6274510025978088, 0.0), (0.6313725709915161, 0.015873015873015872), (0.6352941393852234, 0.031746031746031744), (0.6392157077789307, 0.047619047619047616), ...] is a list of (original_pixel_value, new_pixel_value) pairs as generated by (a slight modification of) my code snippet above, an existing ColorRamp Shader Node named ColorRamp in Material.001 (assumed here to be in its default state; specifically, to have 2 control points already, at positions 0.0 and 1.0) could be programmatically modified to add all of the required control points, like:

import bpy

# First pass: add one new control point per mapping entry, placed at the
# *original* pixel value's position.
for i, (old_val, new_val) in enumerate(desired_points):
    print(f"Adding new ColorRamp point # {i} at position {old_val} (type: {type(old_val)})")
    bpy.data.materials["Material.001"].node_tree.nodes["ColorRamp"].color_ramp.elements.new(position=float(old_val))

# Second pass: set each new point's color to the *remapped* value.
# Elements are kept sorted by position, so (skipping the default endpoints
# at indices 0 and -1) they line up with desired_points.
elems = bpy.data.materials["Material.001"].node_tree.nodes["ColorRamp"].color_ramp.elements
print(f"Desired points has {len(desired_points)} points")
for i in range(1, len(elems)-1):
    print(f"Accessing point {i-1}")
    new_val = float(desired_points[i-1][1])
    elems[i].color[0] = new_val  # R
    elems[i].color[1] = new_val  # G
    elems[i].color[2] = new_val  # B

This seems like a nice way to complete the final step (although note that I haven't tried this at realistic resolutions; I would not be surprised if Blender has a practical and/or absolute limit on how many control points can be added, so some care may need to be taken in selecting a subset of control points even so).
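
If you do hit such a limit, one simple option is to thin the mapping before adding points; a sketch (the cap of 32 here is an assumption, not a confirmed Blender constant):

# Keep roughly MAX_POINTS control points, spread evenly through the
# sorted mapping, always retaining the final endpoint.
MAX_POINTS = 32  # assumed cap; adjust to whatever your Blender allows
step = max(1, len(desired_points) // MAX_POINTS)
subset = desired_points[::step]
if subset[-1] != desired_points[-1]:
    subset.append(desired_points[-1])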

Notable Limitations/Extensions

  • behavior outside of the empirically tuned-to region
  • I haven't displayed examples of visual behavior, nor shown how the results may differ depending on the input noise type
  • there are ways to use simple averaging to (at the cost of a more expensive shader-node setup) enforce uniformity over a wider selection of regions (this is pretty easy, possibly useful, and I may edit this to add an example of it later)
  • if one knew how to evaluate the Shader-node noise distributions directly in the Python API, or how to re-implement them, this could be done without rendering as a complicating intermediate step
  • I suspect someone who knows more about OSL than I do might be able to write a custom shader node to replace the Python completely, which would be nice (and should be much faster!). Blender's built-in nodes offer pretty limited ability to combine spatially distinct pixels, unfortunately, so doing things like computing the ranks of all pixels' color values is very difficult
  • in very small sub-regions, as noted above, the goals of uniformity (over the entire domain of possible color values) and continuity are in conflict: if we have just 16 pixel values, say, reorganizing them to be "uniformly distributed" will produce lots of very large jumps. An intriguing tweak for small sub-regions would be to maintain the empirically observed min and max pixel values in these regions, rather than normalizing to [0,1] as the min and max color intensities. By taking this approach, and using a (more expensive) shader-node setup designed to do smoothed averaging between neighboring regions (so that each sub-region's uniformizing map is very strong within that region, very weak outside of it, and smoothly declines in strength the further you are from the region's center), it is probably possible to get a stronger kind of "local uniformity" without compromising continuity
  • due to the render-to-a-plane workaround, this won't work for 3-D or 4-D noise (although you could render a bunch of 2-D slices and then generalize the code to combine them)
  • W.....ow!! I'm going to have to give this some time! But one thing I am gleaning early on is that this is a more subtle problem than I thought it was, and that an accurate answer requires a very close definition of terms. I don't know how to thank you for the time you've put into this... yet :)
    – Robin Betts, Commented Oct 18, 2021 at 9:43
  • No need for thanks, I thought it was a really interesting question and I learned a lot while trying to think through it carefully. There's also a lot that can hopefully be improved about my answer -- and some really interesting possible extensions; for example, I think we could easily extend this same method to convert not to a uniform spatial distribution, but to an arbitrary spatial distribution, which is pretty intriguing. Commented Oct 18, 2021 at 14:36
  • I think I'm going to reach back to the 3D CDF in your ref, as nodes, and check it out by graphing it? My Python isn't super-fast, so I'll need some time. If it works out, I'll post it, too.
    – Robin Betts, Commented Oct 18, 2021 at 17:10
  • You mean from the GitHub I linked in the comments on the original question? That may work -- one problem you might run into is that, if that GitHub repo uses a different parameterization of the target distribution from the one you're relying on in Blender, it may call for different "uniformizing mappings". I'm not sure how large a difference that would make in practice. If it would help, I can add some code to fit a polynomial to the from-Blender-render computed mappings, and once I have a bit more time I can probably add a 3D example (which will re-use the 2D one above over multiple slices). Commented Oct 18, 2021 at 17:58
  • True.. I don't know that Blender's noise is a standard Perlin.. I'll give it a go, anyway. For my own personal practical purposes, I was shocked that White Noise worked. I was calculating the 2D lookup (input vector) for White Noise in 2 different ways, one involving a rotation. I was expecting the f.p. difference in the inputs to return completely different outputs.. but they were close! So long as the input vector was well away from 0. Something a bit dodgy there, but manageable.
    – Robin Betts, Commented Oct 18, 2021 at 18:23
