\$\begingroup\$

I’ve always wondered what benefit a 14-stop acquisition has when the exhibition will be in SDR, which has around 6 stops of dynamic range (according to Wikipedia). How does the 14-stop image fit into those 6 stops? Are the luminance values between the stops in the acquired image simply not represented faithfully?

I also ask because I often hear that the benefit of such a large dynamic range has to do with highlight roll-off, but I don’t quite understand why.

I should note that this question is perhaps catered more specifically towards cinema. However, I’m assuming the concept applies to both photography and cinematography.

Thank you for your time

\$\endgroup\$
  • \$\begingroup\$ Does this answer your question? What's the point of capturing 14 bit images and editing on 8 bit monitors? \$\endgroup\$
    – Michael C
    Commented Feb 8 at 12:49
  • \$\begingroup\$ Thank you for the link, it was definitely very insightful. However, I am also still wondering now that (in relation to the same example from the link), if I capture a scene with greater bit depth, which also allows for greater dynamic range (without posterization and what not) and I am editing/exhibiting this scene on an 8 bit monitor, how does the appearance change? Will the greater dynamic range be clipped, or will the steps between the stops not be represented faithfully or something else? \$\endgroup\$
    – vannira
    Commented Feb 14 at 23:11
  • \$\begingroup\$ I think, I understand the benefit of having greater flexibility in post but I am very curious about the exhibition of an image captured in HDR, displayed on an SDR device? Does it clip and I have to choose what to display or does it display all highlights and shadows, just not faithfully (apologies for practically repeating the above). \$\endgroup\$
    – vannira
    Commented Feb 14 at 23:15
  • \$\begingroup\$ The whole point of "HDR" (high dynamic range imaging), from the very beginning in the 19th century, was to capture and display scenes with a much greater dynamic range than the display medium is capable of showing. "High Dynamic Range" has been used for decades as a description of the scene captured, not the display medium. The recent terminology of calling certain display monitors 'HDR monitors' muddies the waters, but that has not been the historical meaning of the term. \$\endgroup\$
    – Michael C
    Commented Feb 15 at 23:42
  • \$\begingroup\$ How the result will be displayed depends on the choices made when processing the captured data into a finished image. Those choices may be made by the developer of the software used, or by the user in selecting various settings when converting the raw data or when combining data from multiple images into a single image that can be displayed on an 8-bit, or even 10-bit, "HDR" monitor. It can be any of the possibilities you've listed, plus a few others. \$\endgroup\$
    – Michael C
    Commented Feb 15 at 23:45

4 Answers

\$\begingroup\$

A contrast ratio between 1000:1 and 4000:1 can be expected from an LCD screen; that's roughly 10-12 stops. With something more exotic like OLED, it goes higher still. 8-bit color has at least 8 stops of theoretical (low-precision) range if interpreted linearly, and more in practice thanks to gamma curves that mimic the nonlinear response of human brightness perception. Displays with more than 8 bits per channel have been a thing for decades. So it is already possible to present more than 6 stops with relatively inexpensive hardware.
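
As a back-of-the-envelope sketch (mine, not the answerer's): converting contrast ratios and bit-depth/gamma combinations into stops is just base-2 logarithms. The sRGB figure assumes code value 1 lands on the standard's linear toe (divide by 12.92), which is what limits it below a pure 2.2 power law:

```python
import math

def stops_from_contrast(ratio):
    """Dynamic range in photographic stops for a given contrast ratio."""
    return math.log2(ratio)

def stops_8bit_linear():
    """8-bit codes read linearly: brightest/darkest nonzero = 255/1."""
    return math.log2(255 / 1)

def stops_8bit_gamma(gamma=2.2):
    """A pure power-law gamma spreads the same 255 codes over gamma times as many stops."""
    return gamma * math.log2(255 / 1)

def stops_8bit_srgb():
    """sRGB's linear segment near black limits the range below pure gamma 2.2."""
    darkest = (1 / 255) / 12.92  # sRGB EOTF linear segment for code value 1
    return math.log2(1.0 / darkest)

print(round(stops_from_contrast(1000), 1))  # 10.0 (the 1000:1 LCD)
print(round(stops_from_contrast(4000), 1))  # 12.0 (the 4000:1 LCD)
print(round(stops_8bit_linear(), 1))        # 8.0
print(round(stops_8bit_gamma(), 1))         # 17.6
print(round(stops_8bit_srgb(), 1))          # 11.7, matching the ~12 stops often quoted
```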

Also, applying tone corrections to an image will leave posterization unless you have extra precision, so capture is often done at a higher bit depth, which is also more than capable of storing a greater dynamic range.
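
To make the posterization point concrete, here is a hypothetical sketch (a square-root shadow-lifting curve chosen purely for illustration): apply the same correction to 8-bit and 14-bit input, then count how many of the 256 output levels actually get used:

```python
def lift_shadows(code, in_levels):
    """Normalize, apply a square-root (shadow-lifting) curve, requantize to 8-bit."""
    x = code / (in_levels - 1)
    return round(255 * x ** 0.5)

out_from_8bit = {lift_shadows(v, 256) for v in range(256)}
out_from_14bit = {lift_shadows(v, 16384) for v in range(16384)}

# 8-bit input leaves gaps in the output levels (visible banding);
# 14-bit input populates nearly all 256 of them.
print(len(out_from_8bit) < len(out_from_14bit))  # True
```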

Even without HDR standards photographers can use HDR information, if only to choose which stops to display.

\$\endgroup\$
  • \$\begingroup\$ 8 bit color with a gamma curve applied (i.e. jpeg sRGB 2.2) can display ~12 stops of DR. 8 bit linear would be 8 stops (7.99); and 8 bit 2.2 without the sRGB limitation can do over 17 stops (8 x 2.2)... and you could use a 2.4 gamma curve instead for even more stops. \$\endgroup\$ Commented Feb 3 at 13:45
  • \$\begingroup\$ 8 bit color is going to have roughly 8 stops of theoretical (low precision) range thanks to gamma curves mimicking the exponential curve of color perception. - that's not how gamma encoding works, you just ignored it. \$\endgroup\$ Commented Feb 4 at 20:33
  • \$\begingroup\$ Yep, I wasn't thinking when I wrote that. \$\endgroup\$
    – davolfman
    Commented Feb 5 at 17:18
\$\begingroup\$

One word: Editing.

One different word: Choosing.

Having a raw file that contains a wide range in itself gives you the ability to choose how you want to "develop" the image, whether you need more detail in the shadows or in the highlights. It is not that everything goes straight from the camera to the display.

Even an experienced photographer benefits from a file that does not lock in an exact exposure midpoint at the time of shooting.
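
A toy illustration of that choice (hypothetical numbers, just to sketch the idea): one wide-range linear "raw" capture can be "developed" in different ways after the fact by shifting exposure before clipping to the display range:

```python
raw = [2 ** s for s in range(-13, 1)]  # one linear capture spanning 14 stops

def develop(values, ev):
    """Apply exposure compensation in post, then clip to a 0..1 display range."""
    return [min(v * 2 ** ev, 1.0) for v in values]

render_base = develop(raw, 0)   # exposed for the highlights; deep shadows stay crushed
render_plus3 = develop(raw, 3)  # shadows lifted 3 stops; the top 3 stops now clip to 1.0
```

Both renders come from the same file; the decision about which stops to sacrifice is deferred to editing.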

\$\endgroup\$
  • \$\begingroup\$ Bingo. If one shoots a scene with very harsh lighting and shadows using a camera with limited dynamic range, one has to choose at the time of shooting which parts of the scene to capture faithfully. If one renders a photo for display on an LCD and thinks the contents of an unlit tent would be interesting, except that even the brightest parts of the interior are almost black, and the scene was originally captured with 14 f-stops of dynamic range, one can in a sense "go back in time" and show how the scene would have looked with brighter exposure settings. \$\endgroup\$
    – supercat
    Commented Feb 4 at 19:39
  • \$\begingroup\$ Also s-curves are possible that retain contrast and detail in shadows and highlights. \$\endgroup\$
    – davolfman
    Commented Feb 6 at 22:27
  • \$\begingroup\$ ^^^^^THIS!^^^^^ \$\endgroup\$
    – Michael C
    Commented Feb 15 at 23:52
\$\begingroup\$

Digital camera sensors have an almost perfectly linear response, and there is no gradual loss of quality towards the saturation point: as soon as one of the channels reaches saturation, the colour can no longer be recorded accurately. Definitions of "dynamic range" vary quite a lot, but the most generic one is "the maximum ratio, within a single frame, between a tone that just saturates at least one colour channel and a tone not totally ruined by noise", and even that is incomplete. To complete it you also need to define:

  • whether you measure it at the pixel level or take resolution into account: the same image is perceived as less noisy when viewed from further away, and vice versa
  • what exactly the criterion for "totally ruined" is

A recorded raw photo is almost never the final product; editing involves many steps which can make noise more apparent:

  • tonal curves / tone mapping, in case you want to preserve highlight colour while making the image less representative of reality: you cannot bring lost highlights back, but you can brighten everything else, and that increases perceived noise
  • contrast increase
  • saturation increase and colour profiling (different cameras need different amounts of colour adjustment; colour conversion can be included in the definition, as DXOMark does, for example)
  • cropping
  • sharpening

And of course:

  • there are different viewing scenarios, and any scenario where a viewer can zoom in or step closer to the photo makes better DR an advantage
  • there are a lot of scenes where you can't decide the best balance between lost highlights and shadows on the spot, so having wider DR is a definite advantage: you can adjust exposure later in the editor
  • you can't get the same DR for saturated colours as for gray tones; their noise is higher because the channel values are lower than for gray tones
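
The "brighten everything else and noise becomes more apparent" point can be sketched numerically (made-up pixel values, assuming a simple +3 EV push in the editor):

```python
import statistics

# Hypothetical linear shadow pixel values with some capture noise baked in.
shadow = [0.010, 0.012, 0.009, 0.011, 0.010, 0.013, 0.008, 0.011]

# A +3-stop push in post multiplies signal and noise alike.
pushed = [v * 2 ** 3 for v in shadow]

print(statistics.stdev(shadow))  # small absolute wobble, hidden near black
print(statistics.stdev(pushed))  # 8x larger, now sitting in visible midtones
```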
\$\endgroup\$
\$\begingroup\$

First off, DR is not "steps"; it is only the difference between minimum and maximum. The same dynamic range can be encoded/displayed with 16 bits or with 1 bit, i.e. pure black and pure white. And you need a larger DR capability if you want to record a scene of greater contrast, i.e. to control which scene luminances record as black/white.

If you record more stops (steps) than you can display, then some are lost in the process, typically in the shadows. If the data are converted to SDR, then where and how the losses occur is determined by the color grading applied.
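
As a hypothetical sketch of that grading choice (not any particular workflow): hard clipping discards highlight separation entirely, while even a simple global tone curve such as Reinhard's x/(1+x) keeps the highlights distinct, just compressed:

```python
scene = [0.01, 0.1, 0.5, 1.0, 4.0, 16.0]  # linear luminances; highlights exceed 1.0

clipped = [min(x, 1.0) for x in scene]     # hard clip to the SDR range
reinhard = [x / (1.0 + x) for x in scene]  # simple global tone curve

print(clipped[-2], clipped[-1])    # 1.0 1.0: the two highlight tones merge
print(reinhard[-2], reinhard[-1])  # 0.8 and ~0.94: compressed but still distinct
```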

However, I do not understand why SDR is the primary consideration... modern video is digital, as is modern display. Modern recording reaches up to 17 stops of DR, and Rec.2020 encoding can deliver about 14 stops.

\$\endgroup\$
