
To get images that seem more photorealistic, not only is a much wider dynamic range needed, but the color information should also desaturate towards white, as it would in an overexposed photograph.

This all follows from an answer to this question:

The default sRGB output view transform captures a mere two and a bit stops of light above middle grey from a Cycles render. This is entirely unnatural when compared to our learned response of examining photographic-like reproductions, which maps anywhere from six or more stops of light above middle grey to the display / output referred transform

To illustrate the issue, here's a sample scene:

Simple emission shaders: White, Red, Green and Blue. From left to right, each one is twice as bright as the previous (in photography, doubling the light intensity is considered one stop brighter). The red numbers indicate the strength of each emission shader.

[Render of the emission swatch test scene]
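
For reference, here is a quick sketch of that doubling series in plain Python. The 0.1 starting strength and fourteen swatches are an assumption read off the 0.1 to 819.2 range mentioned in the answers below:

```python
from math import log2

strengths = [0.1 * 2 ** i for i in range(14)]  # 0.1, 0.2, 0.4, ... 819.2

for s in strengths:
    # each doubling of intensity is one photographic stop
    print(f"{s:7.1f} = {log2(s / 0.1):+3.0f} stops over the first swatch")
```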

As you can see, once the values are mapped from scene linear to the sRGB transfer-curved values, there is no difference in anything brighter than 1.

Looking at the waveform, the issue is very clear: values over 1 are simply clipped. 1 is white and, of course, there is nothing whiter than white.

[Waveform of the render, showing the clipping at 1.0]

[Vectorscope of the render]

The vectorscope reveals another issue as well: the Red, Green and Blue colors also reach a saturation limit at 1 and keep going past what is possible to represent correctly.

Again from the same answer:

Rendering in a scene referred model extends primaries off to infinity. This means that they do not desaturate nor reach the display referred white in any way that is familiar nor correct.

So how can both of these issues be resolved, so that we can capture more dynamic range from the scene and the color saturation seems more photorealistic? Ideally, the brighter an object gets, the more desaturated it should become as well.

[Example of the desired desaturation toward white]

  • 7
    Part of the issue is that your color swatches are pure red, green, and blue. So their pixel values will never contain any other colors, and thus never go to white since white is all colors. If you make the colors not purely one channel they will desaturate to white as you wish since all channels are multiplied by the emission strength.
    – PGmath
    Commented Feb 13, 2016 at 23:30
  • 9
    His question is deadly accurate and on point. No colour desaturates “correctly” when using a default sRGB View transform, and cheating a colour to add a complement does not remedy the situation. Note that he cites “photorealistic”, which your advice violates in technique.
    – troy_s
    Commented Feb 14, 2016 at 0:26
  • 1
    Hey, there's some great explanation in this video: youtu.be/PG_k2wz9mcU
    – Samoth
    Commented Feb 29, 2016 at 20:21

2 Answers


Update as of June 23, 2019

Plenty has changed since the original posting. I figured it was worth updating this post to highlight two excellent videos provided by two of the celebrities in the community, Gleb Alexandrov and Andrew Price. Both videos are top-shelf quality and well worth viewing.

Mr. Price was largely responsible for kick-starting the monumental interest in camera rendering transforms, including exposing plenty of folks to ACES. The Secret Ingredient to Photorealism has now passed 1.7 million views, attracting viewers from plenty of different domains.

Mr. Alexandrov's video came out shortly thereafter, and has some terrific examples with synthetic imagery, as well as a humorous dive into some of the more nuanced details.

Update as of October 5, 2016

There has been a huge amount of interest in the creative affordances colour management provides an imager. If you want to jump right in and know what you are doing, there is an updated set of tools available in the new Filmic Blender set. The new set offers:

  • Very easy view-based selection of five basic contrasts. This was added to support new imagers who are interested in trying the package out but worry about grading; the new set should be turn-key for this audience.
  • Much improved contrast curve selection. In addition to a simplified naming convention, the basic transfer curves are much more refined, pegging the scene referred middle grey value of 0.18 to 0.50 in every single transform.
  • Much improved desaturation and crosstalk film emulation. This new transform is complex and offers imagers an extremely graceful roll-off to display referred white. In addition to this, there has been a crosstalk element added which carefully mixes the primaries as the values reach peak. This will result in images that are much closer to what one would expect from typical photographic mediums.

It would be excellent if the talented imagers out there would try this new set and render out some sample images for this posting.

Issues

Post your issues to the GitHub repository.

Original Update

If you are seeking the original OpenColorIO set, you can find it at the original link.

Following a recent presentation I gave on the subject, there is a Google Slides deck on dynamic range and intensity. Those interested in this subject, or seeking a deeper understanding, are welcome to view the presentation at this link.

Sample Images

Here are some sample images generated from the OCIO configuration. You should notice immediately how all of the physically based light interactions, such as subsurface scattering and indirect lighting, are enhanced. Also note how the configuration allows for proper photographic highlights captured in the display referred transform. Sample files courtesy of Eugenio Pignataro, Mike Pan, Henri Hebeisen, Tynaud, Mareck, Dmitry Ryabov, Rachel Frick, Marius Kreiser, and Andrew Price.

Mareck:

Bathroom


Dmitry Ryabov:

Cherry Spoon


Rachel Frick

Painted Shoes


Marius Kreiser

Golden Globe


Andrew Price

Sun Crushed Sink


Eugenio Pignataro:

Sculpture, Rainbow Snails, Hairy Snail, Orange Juice, Pomegranate, Banana, Orange, Kiwis, Grapes


Mike Pan:

Cellar, Tesla 3 One, Tesla 3 Two, Tesla 3 Three, Tesla 3 Four


Henri Hebeisen

Audi, Chair, Chair Two


Tynaud

Effects Pedal

Simple Example of the Filmic Desaturation / Crosstalk 3D LUT

Two simple images demonstrate the critical differences at high intensity values, and how purely saturated colours fail to behave photographically. Note how the average greyscale values desaturate and bloom to display referred white as expected, while the purely saturated colours break. Compare against the transformed version, which blooms as one would expect as values increase:

Exposure Array with Filmic; Suzanne Before and After the 3D LUT


The Question

To get images that seem more photorealistic, not only is a much wider dynamic range needed, but the color information should also desaturate towards white, as it would in an overexposed photograph.

The key word here is photorealistic. While many imagers focus on modeling, texturing, and other critical nuances, this is an often overlooked term with some profound implications for imaging.

What is Photorealistic?

While this seems an obvious question, breaking it down into components will help us provide a solution to the initial question. Photorealism has a direct link to the photographic world. This means that to solve the problem at hand, we need to break down exactly what a photograph is and how it has influenced our learned aesthetic response. Once we have done so, we can compare how the photographic model relates to a CGI model, and provide the bridging tissue to derive a solution.

What is Film?

The advent of film provided a unique aesthetic transformation of a physical scene into a convention. This convention brought with it particular nuances of photographic emulsion, and later digital sensors that sought to imitate the medium. These nuances can be loosely broken down into two categories when evaluating CGI and its relationship to the photographic and photorealistic.

Breaking Down Film, and the Photograph

The vectorscope reveals another issue as well: the Red, Green and Blue colors also reach a saturation limit at 1 and keep going past what is possible to represent correctly.

For our purposes, we will examine the later era of colour photographic reproduction. The first concept we need to address is why, when photographing something of intense light, the image desaturates. First, the composition of the film itself:

Colour Film Emulsion

Given that the spectral locus, or range of all visible light, is a strangely curved mapping of wavelengths to colour, we can begin to see some of the reasons that images blow out to white. In the above example, we see that there are three primary layers, each sensitive to a loose region of spectral wavelengths, crystallizing and "recording" them. Here are some simple spectral responses based on the layers:

Basic Spectral Response Chart; Kodak Ektachrome Response Chart

What we learn immediately is that film is not a narrow band recording medium. By contrast, if we think of the primaries (the colours of each RGB channel in an encoding such as sRGB), the primaries are extremely narrow band, representing a unique and singular colour of light for each channel.

What are the Implications of Non-Narrow Band Recording?

Both film and DSLRs use a filtering technique to record their data. With regard to colour, we know that the filtering mechanism, for a variety of complex reasons, is sensitive to a non-narrow range of actual physical wavelengths in the visible spectrum. This means that even though the "green" layer is attempting to record only a specific colour of green, the emulsion / DSLR photosites will also register information from a wider range of wavelengths.

The net sum is that when a specific "blue" light lands on a photograph, it also crosstalks with the other layers or photosites, creating an extremely unique mixture of values. Part of this is colour response, and forms another discussion. The critical part we need to grasp regarding the desaturation of film / DSLRs is that there is no single, physically plausible colour of light that will solely trigger an isolated emulsion layer or photosite. As a result, the stock or sensor will bloom out to white given enough exposure time. This is very much unlike the default sRGB transfer curve applied to Blender's default view.
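
To make the crosstalk concrete, here is a toy numeric sketch in Python. The Gaussian sensitivities and their centres are illustrative assumptions, not measured film data; the point is only that a single wavelength registers, to some degree, in every layer:

```python
import numpy as np

def sensitivity(wavelength_nm, centre_nm, width_nm=40.0):
    # toy Gaussian spectral sensitivity for one layer / photosite type
    return np.exp(-0.5 * ((wavelength_nm - centre_nm) / width_nm) ** 2)

wavelength = 460.0  # a single, purely "blue" wavelength
r, g, b = (sensitivity(wavelength, c) for c in (600.0, 550.0, 450.0))

# even pure blue light registers in the red and green layers
print(f"R: {r:.4f}  G: {g:.4f}  B: {b:.4f}")  # R: 0.0022  G: 0.0796  B: 0.9692
```

Given enough exposure, those small contributions accumulate, which is why the stock blooms to white rather than clipping at a pure primary.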

The Intensity of Light in Relation to a Photograph

Photography has the unique colour characteristics described above, which result in an image "blowing out to white", as well as many more subtle crosstalk features that yield the unique looks of film and DSLRs. At least as important as this facet is the dynamic range of the medium itself.

Film had an extremely unique feature that even DSLRs struggle to match today: a logarithmic encoding scheme. That is, as the particles of silver were exposed to light and crystallised, it became harder and harder for further light to influence the negative; once a granule was exposed, it became physically harder to expose the grains behind or around it. This meant that film responded to light in a logarithmic form, and in doing so, recorded a tremendous range of intensity of light.
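
As a loose illustration of that logarithmic character, a toy characteristic curve can be sketched as follows. Every constant here is illustrative, not a measurement of any real stock; the takeaway is that twelve stops of linear intensity compress into a modest density range:

```python
import numpy as np

def toy_density(exposure, d_min=0.1, d_max=3.0, gamma=0.65):
    # toy H&D curve: density rises with the log of exposure, with a toe
    # near d_min and a shoulder near d_max as grains become harder to expose
    log_e = np.log10(np.maximum(exposure, 1e-8))
    t = 1.0 / (1.0 + np.exp(-gamma * np.log(10.0) * log_e))
    return d_min + (d_max - d_min) * t

for stops in (-6, -3, 0, 3, 6):  # twelve stops of scene intensity
    print(f"{stops:+d} stops -> density {toy_density(2.0 ** stops):.2f}")
```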

Film and a Camera

When we dial in a camera to record a scene, we set an aperture, shutter speed, and typically select an ISO sensitivity. These three facets restrict the scene's intensity values that the logarithmic film records.

To understand this better, we need to consider two CGI terms that break our computerized models down into a more granular format: Scene Referred and Display Referred (aka Output Referred, Device Referred, etc.)

Scene Referred Capture of a Photograph to a Display

If we consider that the scene, or the scene referred data in our photographic examples above, covers a vast range of intensities, we can see a transformation happen at the camera / emulsion level. This is a mapping of the scene referred linear light values to the logarithmic encoding structure of film. The following image shows an arbitrary twelve-and-a-bit stop mapping of scene referred values to the display referred / device referred encoding of film or DSLRs, as viewed on an sRGB device:

Loose Example of Scene Referred Photographic Capture
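
A minimal sketch of such a mapping in Python (the twelve-and-a-bit stop window follows the figure; the exact bounds and the 0.18 middle grey anchor are assumptions for illustration):

```python
import numpy as np

MIDDLE_GREY = 0.18

def log2_encode(scene_linear, low_stops=-8.0, high_stops=4.5):
    # map scene referred linear values to [0, 1] by their position within
    # a fixed window of stops around middle grey (12.5 stops here)
    stops = np.log2(np.maximum(scene_linear, 1e-10) / MIDDLE_GREY)
    return np.clip((stops - low_stops) / (high_stops - low_stops), 0.0, 1.0)

print(log2_encode(np.array([0.0045, 0.18, 4.07])))  # toe, middle grey, near top
```

Note how equal ratios of light, not equal amounts, move the encoded value by equal steps; this is the logarithmic response described above.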

What is Happening in Blender from Cycles?

In Blender, and in particular when using a ray tracing engine such as Cycles, we are generating scene referred values in the internal model and passing those values through a display referred transform to the output. The default "sRGB" display referred viewing transform is a blind hard cut. While some might call this "clipping", it is more accurate to consider it a transform from the scene referred domain to the display referred domain, where the scene referred value of 1.0 happens to match the value of 1.0 in the display referred domain. The values, despite being identical, represent different things.

The "Default" transform is a strict inversion of the sRGB transfer curve that was developed as part of the sRGB specification. Here is roughly what it looks like from a layperson's vantage:

Current sRGB View Transform
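
A minimal sketch of this default behaviour (the piecewise curve is the standard sRGB transfer function; the hard clip is the point at issue):

```python
import numpy as np

def default_srgb_view(scene_linear):
    # clip scene referred values to [0, 1], then apply the sRGB transfer
    # curve; everything above 1.0 is simply discarded
    x = np.clip(scene_linear, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

# 1.0, 4.0 and 819.2 all land on identical display white
print(default_srgb_view(np.array([1.0, 4.0, 819.2])))  # [1. 1. 1.]
```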

This transform is particularly confusing for a number of reasons:

  1. It grossly ignores much of the scene referred data present in Cycles.
  2. It maps the range of light intensity to the display referred output image in a manner utterly uncharacteristic of photographic media.
  3. Imagers unaware of the transformation from scene referred to display referred values conflate the two as a single continuum, and end up mangling their rendered scenes to fit under this completely arbitrary, and extremely important, transformation.

Almost At the Solution

Before we "solve" the above two issues of desaturation and latitude, it is worth revisiting how a clear division between scene referred data and display referred data can greatly elevate an imager's ability to craft work.

How is Scene Referred Data Different?

1 is white and, of course, there is nothing whiter than white

  • Scene referred data has no notion of white nor black. Those concepts do not exist until the display referred transformation. "Whiter than white" is an anachronistic term that typically referred to safe video encodes.
  • Scene referred data, much like a true scene in reality, can represent a colossal, infinitely large range of data. Imagine a planet with one sun. Now imagine one with two suns. Three? The only limit on scene referred data is the bit depth of the actual architecture, and even then, it is constantly evolving.
  • Scene referred data is rendered from architecture such as Cycles that seeks to model a version of reality. If an imager is unaware of the transformation from scene referred to the display referred domain, they may end up artificially mangling their lighting, data, textures, etc. to fit in under an arbitrary view transform.
  • Scene referred data is stored linearly, or more specifically, in a radiometrically linear fashion. That means the ratios of light emulate a physical model of light, and respond accordingly.
  • Very few formats store scene referred data effectively. EXR is the most robust format for such storage.

How is Display Referred Data Different?

  • Display referred data has a minimum and maximum creative point, typically zero and one respectively.
  • Display referred data is most typically stored nonlinearly, with the arbitrary middle grey point mapped to a particular middling value in the display referred encoding.
  • Only at the display referred transform do values end up mapped from a given high and low point to white and black respectively. Speaking in terms of white or black prior to this transform is utterly meaningless. Only terms like achromatic, or without colour, apply.
  • The display referred transformation is handled via OpenColorIO in Blender. This transformation is arbitrary and a creative tool for imagers.
  • Display referred encodings will almost always be discarding information when stored on disk, and as such imagers should be well aware of formats used to store their data. This extends to alpha storage concerns, as some formats such as PNG mangle alpha.

The Long Path to the Solution

In summary, we are faced with two unique problems posed in the original question by @cegaton.

  1. Latitude or dynamic range of the encoded image.
  2. Unique colour characteristics such as desaturation to emulate the photographic.

We know that OpenColorIO controls the transformation from the scene referred domain to the display referred domain, and as such, the solution will revolve around our manipulation of the OpenColorIO configuration.

What Might a Solution Look Like?

Dealing with Latitude / Dynamic Range

With regard to capturing the latitude range, we need to consider what a more optimal solution than the default sRGB display referred transform would look like. We could suggest something like the following:

A "Better" Display Referred View Transform

The above image maps approximately six and a half stops above middle grey to our display referred notion of white. It also maps a scene referred value of 0.2 (again, in scene linear) to our middle grey value. This roughly matches where many display referred images place middle grey when converted to display linear. The above ignores a more complex display referred black range mapping, and simply maps zero to zero.

This can be accomplished via a 1D display referred viewing transform in OpenColorIO. An imager can use a spreadsheet or other tools to generate such a LUT. More information on this can be provided if someone chooses to ask the question.
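
As a rough illustration, here is a minimal Python sketch that writes such a 1D LUT in OCIO's spi1d format. The header fields follow the spi1d format, but the curve shape, domain, and file name are assumptions for illustration, not the exact transform shipped in the configuration:

```python
import numpy as np

MIDDLE_GREY = 0.18
LOW, HIGH = -10.0, 6.5  # stops below / above middle grey
N = 4096

# spi1d inputs are sampled uniformly between the "From" bounds, so a plain
# linear sampling is crude in the shadows; production configs lean on a
# shaper space instead (see the 3D LUT discussion below)
scene = np.linspace(0.0, MIDDLE_GREY * 2.0 ** HIGH, N)  # 0.0 .. ~16.29
stops = np.log2(np.maximum(scene, 1e-10) / MIDDLE_GREY)
display = np.clip((stops - LOW) / (HIGH - LOW), 0.0, 1.0)

with open("log_base.spi1d", "w") as lut:
    lut.write("Version 1\n")
    lut.write(f"From 0.000000 {scene[-1]:.6f}\n")
    lut.write(f"Length {N}\n")
    lut.write("Components 1\n{\n")
    lut.writelines(f" {v:.9f}\n" for v in display)
    lut.write("}\n")
```

The 0.18 × 2^6.5 ceiling works out to roughly 16.29, the maximum scene referred value mentioned for the configuration below.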

Dealing with Desaturation or Crosstalk

Dealing with the desaturation or crosstalk issue is much more subtly complicated. In the case of desaturation, we expect that as, say, the blue primary pushes up toward the display referred maximum, the other channels move up as well. This is impossible to achieve using any number of per-channel curves.

It should be noted that no matter how hard one tries with the default sRGB display referred viewing transform, colours will always be mangled as they near the ceiling of the viewing transform. Why is this? Because the 1D LUT simply hard cuts the scene referred data within the transform. This yields colours that, while it is possible to force them to clip to white, do so in a manner entirely unlike any sort of desaturation known in a photographic medium.

The technique to achieve this is typically a 3D LUT. A 3D LUT differs from a 1D LUT in the sense of input and output influence. While a 1D LUT takes an input value and converts it to an output value, a 3D LUT takes a single input value and adjusts, in addition to that value, the other channels as well. This provides us the magic tool to simulate not only desaturation but also the complexities of filmic crosstalk.

3D LUTs have a unique problem, however, in that the input range must be very well defined, as their size and resolution grow exponentially. To accomplish a suitable 3D LUT, it is prudent to first convert the scene referred linearized data to a display referred perceptual domain. This allows the 3D LUT to be applied in a perceptually uniform manner to the data, increasing the quality of the transform.
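
To make the idea concrete, here is a minimal sketch of the sort of crosstalk-style desaturation a 3D LUT can bake. The roll-off threshold, exponent, and the use of the maximum channel are illustrative assumptions; Filmic's own transform weights the roll-off using the sRGB / 709 luminance of the primaries, as described below:

```python
import numpy as np

def desaturate_highlights(rgb, start=0.75, power=2.0):
    # rgb is display referred, in [0, 1]; as any channel nears the display
    # maximum, pull all channels toward an achromatic value, converging on
    # white at the ceiling
    rgb = np.asarray(rgb, dtype=float)
    peak = rgb.max(axis=-1, keepdims=True)
    t = np.clip((peak - start) / (1.0 - start), 0.0, 1.0) ** power
    return rgb + t * (peak - rgb)  # crosstalk: the other channels rise too

# a pure blue near the ceiling now blooms toward white instead of clipping
print(desaturate_highlights([0.0, 0.0, 0.999]))  # ~[0.991, 0.991, 0.999]
```

Because the red and green outputs depend on the blue input, no stack of per-channel 1D curves can reproduce this behaviour; that interdependence is exactly what a 3D LUT encodes.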

One Possible Solution

While the above hopefully highlights how much creative control an imager has, it should also shed light on how not to deal with the complexities of image based lighting or high dynamic range lighting. Instead of mangling and crunching scene values to fit the display referred transform, it is much more prudent for an imager to create a clear division between the scene referred data and the display referred encoding. Doing so will not only elevate the imager's work, but also her creative control when grading at a later step.

While the actual generation of 1D and 3D LUTs, as well as the OpenColorIO configuration details, is beyond the scope of the original question, the following is left here for any imager to experiment and light with. It is a fully compatible OpenColorIO configuration, ready for immediate use.

What it does:

  • The "-10-+6.5" represents a viewing transform that grabs approximately 6.5 stops over middle grey and maps it to the display referred encode.
  • In the Looks you will find a number of different looks. While they are well documented in the README, the "Basic" look adds onto the above transform a desaturation / convergence toward display referred white. This is the exact shaper transformation, further transformed into a 3D LUT that deals with the desaturation component explained above. The LUT was generated on the idea that the luminance of the primaries would be a decent entry point for emulating the desaturation of the layers / photosites. As such, it uses the sRGB / 709 primaries' weights to more accurately desaturate as the intensity values near the display referred maximum point, beginning at approximately two and a half stops below the maximum scene referred value of 16.291, or roughly 3.0 scene referred linear.

Other useful Looks:

  • A False Color look that offers a visually shifted "heatmap" of exposure useful for lighting.
  • Several Sharp variants which interpret the data under a power curve that increases contrast. Useful as a rough approximation of a grade.
  • A Scaled set that maps middle grey from the view's 0.6 to sRGB's 0.466. This can be considered training wheels for those not used to grading footage.
  • A Greyscale Look on both the desaturated and standard views for evaluating contrast. It uses 709 primaries as weights.

Updated LUTs are located at this GitHub link. Please read the README to spot specific issues with some of Blender's yet-to-be-addressed shortcomings when operating on scene referred imagery. Sadly, many of the problems present in Blender exist simply because very few imagers realize the extent to which the default view impacts their view of the scene referred data. In this regard, the default view makes the scene referred data appear as though it were display referred, simply because imagers are rarely aware of the data in their scene.

The more imagers give the LUT pack a spin, the more likely they are to help Blender evolve as a tool. That is, of course, in addition to almost magically transforming their imagery.

To use it, an imager merely needs to:

  • Backup / copy / move their [BLENDER DIR]/bin/[VERSION NUMBER]/datafiles/colormanagement directory to a different directory.
  • Link or copy the files into a fresh colormanagement directory in the datafiles directory.
  • Change between the rendered views using the Scene's Color Management properties panel.


  • Further reading / related links: blender.stackexchange.com/questions/53155/… and blender.stackexchange.com/questions/55859/… and blender.stackexchange.com/questions/55231/…
    – user1853
    Commented Jun 29, 2016 at 17:32
  • Is the first image in the Google Docs PDF Andrei Tarkovsky? The PDF was interesting reading, but the font displayed incorrectly on some pages with large type, just wanted to let you know. Thanks for the amazing answer! Commented Feb 10, 2017 at 22:00
  • @MicroMachine Good spot. ;) Yes. Typeface rendering should work out assuming the connection is decent.
    – troy_s
    Commented Feb 10, 2017 at 22:19
  • This article: blender3darchitect.com/blender-3d/…. Says that Blender 2.79 didn't properly implement the filmic system you created. How is it different? Thanks for this incredible research you've been doing! Commented Feb 10, 2018 at 4:25
  • @AnsonSavage you can probably deduce this on your own. The way it is implemented in 2.79 will reveal no difference between the Filmic Log with no look and the Base Contrast. This is because the base log encoding is missing. Ironic, given that the Agent film was graded off of the base log look with a Resolve adapted Base Contrast. It stems from a misconception as to how OCIO was designed, and what the concepts behind Displays and Views are. You can read through the developer site and figure out which side you stand on. :)
    – troy_s
    Commented Feb 11, 2018 at 2:42

Part of the issue is confusing the rendered result with the displayed result. Blender draws the image in the UV/Image editor using the exact image values. When drawing to screen, an RGB value of 0.0 is black and 1.0 is white; anything over 1.0 is clipped to 1.0, so it looks the same.

To test this, render your test scene and click on the white sections; the footer will show the true colour values under the cursor, and the white parts will range from 0.1 to 819.2 as you expect. It's just that every value over 1.0 looks the same on the screen.

image colour info

When saving these images, you will want to use OpenEXR with Float (Full) enabled to keep the full colour information. Float (Half) will work, but you will see small value drops due to the reduced precision. Even a 16-bit PNG gets clipped to the 0.0 to 1.0 range.
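
For scripted workflows, the same settings can be applied in Python (standard bpy properties; the output path is an arbitrary example):

```python
import bpy

settings = bpy.context.scene.render.image_settings
settings.file_format = 'OPEN_EXR'
settings.color_depth = '32'   # Float (Full); use '16' for Float (Half)
settings.exr_codec = 'ZIP'    # lossless compression

bpy.context.scene.render.filepath = '//render_scene_referred'
bpy.ops.render.render(write_still=True)
```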

The first step to getting the visual result you are after could be to map the wide colour range down to a 0.0 to 1.0 range. A simple version can be done with:

Plain map range nodes

so that the input values ranging between 0.0 and 820.0 are scaled down to the 0.0 to 1.0 displayed range. This linear map range gives the following waveform:

waveform after map to range

This shows that the scopes work with the same 0.0 to 1.0 range that is shown on screen, not the true range of the image data.
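
If you prefer to build the node tree from a script, a sketch along these lines should reproduce the setup (node type and socket names follow the 2.7x Python API; newer builds replace Separate/Combine RGBA with the Separate/Combine Color nodes):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')
sep = tree.nodes.new('CompositorNodeSepRGBA')
comb = tree.nodes.new('CompositorNodeCombRGBA')
comp = tree.nodes.new('CompositorNodeComposite')

tree.links.new(rl.outputs['Image'], sep.inputs['Image'])
for channel in 'RGB':
    # one Map Range per channel: scale 0.0-820.0 down to 0.0-1.0
    mr = tree.nodes.new('CompositorNodeMapRange')
    mr.inputs['From Min'].default_value = 0.0
    mr.inputs['From Max'].default_value = 820.0
    mr.inputs['To Min'].default_value = 0.0
    mr.inputs['To Max'].default_value = 1.0
    tree.links.new(sep.outputs[channel], mr.inputs['Value'])
    tree.links.new(mr.outputs['Value'], comb.inputs[channel])
tree.links.new(comb.outputs['Image'], comp.inputs['Image'])
```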

To deal with the over-saturation, you could use the Value from HSV to drive the brightness.

oversaturate using brightness

  • 2
    There are several problems with this approach, not the least of which is that there is a clear division between Scene Referred and Display Referred, which is lost in the node chain. Further, the arbitrary middle grey point (typically mapped to around 0.18-0.2) ends up compressed to a non-middle grey Scene Referred value here.
    – troy_s
    Commented Feb 14, 2016 at 1:54
  • 4
    I don't think it's confusing - far from it, the original post is very much about the difference between scene and display colors; the main functional difference (aside from exact numbers) is using ocio configs to get a wider range, vs. using the compositor - one issue with the latter is you don't see composited colors in live lighting preview, so you have no idea what you're doing while lighting Commented May 14, 2016 at 22:51
  • 2
    Coming from a background in film I really can appreciate this answer. However "Photorealistic" isn't about film anymore (sadly)...
    – Dontwalk
    Commented May 19, 2016 at 18:39
  • 1
    I am trying to save my rendering results in 16-bit PNG files. After loading the PNG files in Python I noticed that the highest number for all pixels is never more than 255 (8-bits). I posted a question here. I would appreciate if you can take a look and tell me if I'm doing anything wrong. I may note that I wanted to initially save my rendering results in OpenEXR but due to this issue, I cannot do this as of now. So I decided to continue with PNG to get some preliminary results until Blender people fix the OpenEXR bug.
    – Amir
    Commented Mar 31, 2018 at 1:47
