
Someone said that 16 bit photos can sell for more money than 8 bit photos. But doesn't that depend on the camera used: Anyone can convert an 8 bit file to have 16 bit, which isn't authentic processing.

Which is the industry standard? What are the advantages and practicalities of choosing between exporting a PNG photo as either 8-bit or 16-bit? Can a consumer discern the difference by looking at the image alone, regardless of what the file information reads?

  • Take a look: photo.stackexchange.com/questions/72116/…
    – Rafael
    Commented Feb 1, 2022 at 15:16
  • @Rafael Yeah, I included a link to that question in my answer below, along with several other existing related questions.
    – Michael C
    Commented Feb 2, 2022 at 7:08

3 Answers


The advantage of 16 bits over 8 bits shows up when you edit the image. You have a bigger "playground" in which to apply different edits without ending up with negative effects such as banding.

And yes, it depends on the camera. Cameras can produce 10-, 12-, 14-, or 16-bit RAW images, which can be converted into 16-bit image files. Converting an 8-bit image to 16 bits will not provide more information, because that information simply isn't there; you only have 8 bits.
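
To make the banding point concrete, here is a minimal sketch (assuming NumPy; the 3-stop push/pull edit is just an illustration I chose) that quantizes a smooth gradient at each bit depth and applies the same aggressive edit:

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 1024)  # a smooth tonal ramp

def push_pull(values, bits):
    """Quantize to the given bit depth, darken 3 stops, then brighten 3 stops."""
    levels = 2**bits
    q = np.round(values * (levels - 1))     # quantize to integer code values
    q = np.round(q / 8.0)                   # darken by 3 stops (divide by 2^3)
    return np.clip(q * 8.0, 0, levels - 1)  # brighten back by 3 stops

print(len(np.unique(push_pull(gradient, 8))))   # ~33 distinct tones: visible bands
print(len(np.unique(push_pull(gradient, 16))))  # ~1024 distinct tones: still smooth
```

The 8-bit version throws away most of its distinct tones, which is exactly what shows up as banding in smooth skies and gradients.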

P.S. As I said in another answer, you can sell a 16-bit image, but only if the buyer needs those bits and wants to pay for them.

P.P.S. I am also not aware of any monitor that can display 16 bits per colour channel. Be aware that JPG is by definition an 8-bit format, so you can't store more information there.

  • How can you tell if a camera is natively 16-bit? Windows 10 file information only designates files, not cameras, as being 8-bit. So far, all photos exported here are automatically 8-bit, so does that mean all the cameras used have been 8-bit?
    – user610620
    Commented Jan 31, 2022 at 16:23
  • @user610620, from the camera specs. Cameras can produce 8-bit photos (JPG); I am talking about RAW files. To export in 16 bits you need to select an appropriate format (PNG, TIFF, PSD). Commented Jan 31, 2022 at 16:29
  • Implicit if perhaps not explicit in this is that you can't save as JPG from your camera and get more than 8-bit. If your camera has a RAW option, then that will have a larger bit depth, often selectable in the camera prefs. Mine will save RAW as 12- or 14-bit, but not the full 16. JPGs only have 8 bits, no more. It possibly also needs saying that your computer monitor cannot display 16 bits; most can do 8, so the benefits you gain are 'virtual'.
    – Tetsujin
    Commented Jan 31, 2022 at 17:14
  • The 12- or 14-bit monochrome luminance values in a raw file are not remotely the same as the 16-bit per color channel values in a TIFF, PSD, or PNG. The former includes only a single brightness value per photosite (pixel well, if you will). The latter includes color information derived from demosaicing. While it is true that 16-bit color files allow more editing latitude than 8-bit files, it is also true that black point, white point, and, to a large degree, white balance have been "baked in" by that point. The biggest advantage of 16-bit color files is when converting to other color spaces.
    – Michael C
    Commented Feb 1, 2022 at 0:15

Someone said that 16 bit photos can sell for more money than 8 bit photos.

Someone was either making up nonsense, or the information was taken out of context.

Common file formats are either 8-bit (JPEG/PNG) or 16-bit (raw/TIFF/PNG); but that says nothing about the information actually stored within that format.

Similarly, camera ADCs typically have 12-bit or 14-bit accuracy (16-bit is very rare); but again, that says nothing about the information they write into those 8-bit or 16-bit file formats. I am not aware of a camera that currently reaches even 8 bits of color or 14 bits of dynamic range... but some get really close. At this level, bit depth is primarily about recordable dynamic range, where one bit is required per stop of light (in a linear raw file). And cameras only reach those levels at native/base ISO; at higher ISOs they deliver less accuracy.

In this sequence, whichever is less is what you get. E.g., if the sensor is recording/generating 13 bits of DR at a low ISO, that needs to be processed by a 14-bit ADC, because a 12-bit ADC would reduce the accuracy; and it needs to be put into a 16-bit file format, because 8 bits is not nearly enough. Or, at a very high ISO where the sensor is not recording/generating more than 8 bits of data, that signal can be processed by the 12-bit ADC and saved in an 8-bit file (JPEG) without penalty.
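
As a toy illustration of that "whichever is less" chain (the function and its names are mine, and the one-bit-per-stop premise is this answer's own, disputed in the comments below):

```python
def effective_bits(sensor_dr_stops: int, adc_bits: int, file_bits: int) -> int:
    # Under the answer's one-bit-per-stop assumption for a linear raw file,
    # the weakest link in the chain caps what actually gets recorded.
    return min(sensor_dr_stops, adc_bits, file_bits)

print(effective_bits(13, 14, 16))  # low ISO: sensor-limited -> 13
print(effective_bits(8, 12, 8))    # high ISO: the 8-bit JPEG costs nothing -> 8
```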

Now, the real/primary benefit of 16-bit is in the accuracy of the math when editing. You can think of bit depth as decimal places or ruler increments: if my ruler only has 8 increments between the main numbers, I can only measure to the 1/8th" (0.125), but if my ruler has 16 increments, I can measure to the 1/16th" (0.0625). In this sense it is beneficial to convert your lower-accuracy data into 16-bit even though that doesn't generate anything new, because the mathematics (edits) will be more accurate, with fewer rounding errors. E.g., 9÷3 doesn't require any decimal places, but 10÷3 sure does.
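
Here is a small sketch of that rounding effect (assuming NumPy; the pixel value and the divide-by-3 edit are arbitrary choices of mine), editing the same 8-bit value directly versus after converting it to 16-bit:

```python
import numpy as np

value_8 = np.uint8(100)

# Edit directly in 8-bit: every step rounds to the nearest of 256 levels.
darker_8 = np.uint8(np.round(value_8 / 3))                    # 100/3 -> 33
back_8 = np.uint8(np.clip(np.round(darker_8 * 3.0), 0, 255))  # 33*3 -> 99, not 100

# Convert to 16-bit first, then apply the identical edit.
value_16 = np.uint16(value_8) * np.uint16(257)  # 8-bit 100 maps to 25700
darker_16 = np.uint16(np.round(value_16 / 3))   # -> 8567, far finer granularity
back_16 = np.round(darker_16 * 3.0)             # -> 25701

print(back_8)                             # 99: one level of error already
print(np.uint8(np.round(back_16 / 257)))  # 100: the error vanishes on the way back
```

One round trip only loses a single level in 8-bit, but dozens of sequential edits compound exactly this kind of rounding loss.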

Then you have to decide what you are going to do with the post-edit 16-bit data. Are you going to be sending it to a print shop that accepts sRGB JPEGs? Then an 8-bit JPEG is perfect. Or are you going to be sending it to a printer that can accept 16-bit files (e.g. TIFF), where the image will benefit from it (color/DR/etc.)?

So, a process limited only by 16 bits from start to finish can generate the most information/accuracy... that could produce an image of discernibly higher quality, and that image might therefore be worth more to someone. But most of the time something is limiting it to far less than 16 bits regardless of the format, and buying/selling based on the file format makes no sense at all. I would say that 8-bit JPEG as the final output/use is the most common application, bordering on being the industry standard.

  • One bit per stop is not required. You can have a 1-bit file that shows everything as either pure white or pure black. Take for example line pairs in a test chart. The measured difference using a light meter at the surface of the test chart may be seven or eight stops, but the chart can be faithfully reproduced in a 1-bit file. Bit depth is only the size of each step, not the height of the staircase. The steps can be larger than one stop, or they can be smaller. The analog information from a sensor has no steps at all; it is a continuous ramp.
    – Michael C
    Commented Feb 1, 2022 at 0:20
  • That only works in a scenario where the second value can be assigned to anything that is different, but that is not how a linear raw file works. The numbers written into a raw file are not randomly assigned; each has a specific meaning/exposure value. Even if the entire scene is monotone/near clipping, if you record it at/near FWC you will need the highest bit-depth capability of the ADC to convert/write it... that's why they choose a given bit depth for a camera's ADC to begin with. Commented Feb 1, 2022 at 16:15
  • But that bit depth does not, by definition, as you keep stating in numerous answers, HAVE to equal one stop of analog sensor dynamic range per bit. It can be more, or it can be less. It's still linear if the analog amplification factor is constant. That's one reason, among several, that Sony sensors seem to be noiseless: because they clip the noise floor out at the ADC, and the lowest digital value does not correspond to sensels with no light falling on them, but to sensels with enough light falling on them to be above the noise floor.
    – Michael C
    Commented Feb 2, 2022 at 7:15
  • The big advantage of 16-bit over 8-bit is the shadow detail: 8 bits per channel may permit 16 million colors, but the bottom half of the brightness range only gets 4096 of those colors.
    – Mark
    Commented Feb 3, 2022 at 2:06
  • @Mark, actually, when you reduce bit depth you reduce highlights/DR... the minimum (darks) doesn't change; the max (highlights) does. Commented Feb 3, 2022 at 4:00

First things first:

All "bits" are not equal.

Even if the 12- or 14-bit monochrome luminance values in a raw file were 16-bit monochrome luminance values, they would still not be the same as the 16-bit per color channel values in a TIFF, PSD, or PNG. Raw image files do not store the same information as color raster image files do. Obviously, color image files derived from raw files cannot contain information that was not contained in the original raw file, but they typically contain much less information, even though the way they store that information makes them much larger than the raw file from which they were derived.

The 14-bit values in a raw file are monochromatic luminance values for each photosite on the sensor. These values, which describe only the total brightness of all wavelengths of light detected by the sensel, are not equivalent to a 14-bit color channel value that would be directly comparable to an 8-bit or 16-bit value for each of three color channels per pixel. When converted to RGB via demosaicing, each pixel is assigned an 8-bit or 16-bit value for each of the three color channels. This means that each pixel requires 24 bits or 48 bits to express the combined color of that pixel.
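
As a highly simplified sketch of that step (assuming NumPy and an RGGB Bayer layout; real converters interpolate neighboring sites rather than collapsing each 2×2 block, as I do here for brevity):

```python
import numpy as np

# Fake 14-bit raw data: one monochrome luminance value per photosite.
rng = np.random.default_rng(0)
raw = rng.integers(0, 2**14, size=(4, 4), dtype=np.uint16)

# Color exists only because we know which filter sat above each photosite (RGGB).
red   = raw[0::2, 0::2].astype(np.uint32)
green = (raw[0::2, 1::2].astype(np.uint32) + raw[1::2, 0::2]) // 2  # average the two G sites
blue  = raw[1::2, 1::2].astype(np.uint32)

# Collapse each 2x2 block into one RGB pixel and rescale 14-bit -> 16-bit range.
rgb16 = (np.stack([red, green, blue], axis=-1) * 65535 // (2**14 - 1)).astype(np.uint16)
print(rgb16.shape)  # (2, 2, 3): three 16-bit values, i.e. 48 bits, per output pixel
```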

Raw files, whether 12-bit, 14-bit, or any other bit depth, include only a single brightness value per photosite (a/k/a sensel, for sensor+pixel, a/k/a pixel well). All light that makes it into that sensel gets counted as light energy. Some light from all wavelengths of the visible spectrum will make it past each of the three color filters. More blue light than red or green light will make it past the blue filter, but some of all three make it through the blue filter. The same is true of the other two colors used in Bayer masks. Some of all of the visible spectrum will make it through each of the three differently colored filters. This imitates the way our retinal cones are sensitive to various wavelengths of light in an overlapping manner. These overlapping responses of our retinal cones are what allow our brains to create the perception of color. There are no colors intrinsic to a particular wavelength of light; there are only the colors our eye-brain system perceives when stimulated by specific wavelengths or specific combinations of wavelengths of light.

It's no different than when we used red filters with B&W film to make the sky look darker and more dramatic. Our red filters did not make the blue sky totally black, as would have been the case if the red filter blocked all blue light. Instead, the red filter made the bright blue sky a darker shade of gray in our B&W photo than it otherwise would have been so that the darker green and yellow forest or field, or the red brick buildings beneath the bright blue sky could be a lighter shade of gray in our B&W photo, relative to the brightness of the blue sky.

RGB color image files with 16 bits per color channel, on the other hand, contain three values per pixel: one 16-bit value for red, one 16-bit value for green, and one 16-bit value for blue. These color values are interpolated, via the process of demosaicing, from the monochrome luminance values of the raw image file by comparing the relative brightness of adjacent photosites filtered by differently colored filters. Creating these R, G, and B values for each pixel in the image also involves setting a black point (defining the highest luminance value in the raw file that will still be depicted as pure black, with no brightness), a white point (defining the lowest luminance value in the raw file that will be depicted at full brightness), and white balance (defining the color multipliers that will be used when converting linear monochrome luminance values to gamma-encoded color values, to account for the spectral content of the light illuminating the scene).
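
A toy sketch of those three "baked in" steps for a single channel (assuming NumPy; every number here is invented for illustration, and real converters apply these steps in more sophisticated ways):

```python
import numpy as np

linear = np.array([300.0, 2000.0, 9000.0, 15500.0])  # linear 14-bit raw luminances

black_point = 512.0    # raw values at/below this become pure black
white_point = 15000.0  # raw values at/above this become full brightness
wb_red_gain = 2.1      # per-channel multiplier chosen for the scene's light

scaled = np.clip((linear - black_point) / (white_point - black_point), 0.0, 1.0)
red = np.clip(scaled * wb_red_gain, 0.0, 1.0)                 # white balance, red channel
red16 = np.round(red ** (1 / 2.2) * 65535).astype(np.uint16)  # gamma-encode to 16-bit
print(red16)  # darkest value is crushed to 0, the two brightest clip to 65535
```

After this point, no amount of bit depth can recover what the chosen black point, white point, and multipliers discarded.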

In a typical color camera, the photosites are covered by a Bayer mask, which is a filter array of three differently colored filters. We often call the colors of the respective filters over each photosite "red", "green", and "blue", but they are really closer to "blue-violet", "slightly yellow green", and "yellow-orange". They are not the same colors as our emissive displays that do emit colors very close to what we mean when we say "red", "green", or "blue". For "red", it's not even remotely close. The colors of the Bayer mask emulate, to a degree, the three colors to which the three types of retinal cones in the human vision system are each most sensitive. Using one set of colors for sensing, and another set of colors for displaying is perfectly fine. It is the trichromatic nature of human vision that makes it work.

Not all reproduction systems use the same "primary" colors

Different color reproduction systems use different "primary" colors to reproduce a similar retinal response in the human eye to what the original scene or image looked like. There are two basic types of color reproduction systems: additive and subtractive.

Our emissive displays, such as computer screens and televisions, tend to use red, green, and blue - or RGB - as the three colors used to stimulate retinal responses. Because such displays emit light and various levels of the three primary colors mix together to make other colors, we call them additive systems. If all three colors at full brightness are added together, we get pure white.

But our printed display systems tend to use CMY - cyan, magenta, and yellow - mixed together in varying combinations to make colors. When two or three of the primary colors in our printing systems are mixed together to get a non-primary color, they reduce the amount of light being reflected from the surface to which they have been applied. If we use large amounts of C, M, and Y ink we get a result close to black.¹ Thus we call such color systems subtractive color systems. Printing systems that use more than three inks typically still use only three primary colors: CMY. Because real inks mix to an imperfect black, they typically add a fourth ink that is as close to pure black as a printing system can reproduce, thus CMYK, with "K" being an ink as close to black as we can make. Printing systems that use even more than the four basic CMYK inks typically add lighter shades of "K" (grey), lighter shades of CMY, or lighter shades of all four. There are also other subtractive color printing systems that use additional colors besides CMY, but they're not nearly as widespread.

¹ We've yet to find the perfect combination of materials for three subtractive primary colors that can absorb all light without reflecting any. Thus, we can't get true black from our CMY inks. That's why we often use a fourth "black" ink that is about as close as we can find to absorbing all light that falls on it. It's usually called "K" because it can also be used in small amounts to reduce the brightness of various combinations of CMY inks to lower the "key", or brightness, of a color created with a combination of CMY inks.

The biggest advantage of 16-bit color files is when converting to other color spaces.

The biggest advantage of image files with 16-bit depth per color channel is that it allows those files to be translated from one color space to another with higher fidelity. For instance, if we take a 16-bit TIFF file that is saved in RGB and translate it to CMYK, we can print that image using a subtractive color system with less chance of color banding or color blocking than if we had used an 8-bit RGB file.

A 16-bit TIFF version of an RGB photo can be printed using a CMYK printing process with a better result than an 8-bit TIFF of the same photo², even though both will look the same when viewed on our 8-bit monitors. This is because when we view the 16-bit TIFF on our monitors, it is being translated from 16-bit to 8-bit before being sent to the monitor for display.

² Note that this is the case when printing with CMYK or similar inks. It is not so much the case when printing digital RGB images onto photosensitive paper using lasers to expose the paper which must then be developed chemically.
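
As a sketch of why the extra bits help there (assuming NumPy; this is the naive textbook RGB-to-CMYK formula, not a profiled print-shop conversion):

```python
import numpy as np

def rgb_to_cmyk(rgb, bits):
    """rgb: float array in [0, 1]; quantize the channels to the given bit depth first."""
    levels = 2**bits
    rgb_q = np.round(rgb * (levels - 1)) / (levels - 1)
    k = 1.0 - rgb_q.max(axis=-1, keepdims=True)          # black ink amount
    cmy = (1.0 - rgb_q - k) / np.maximum(1.0 - k, 1e-9)  # remaining color inks
    return np.concatenate([cmy, k], axis=-1)

ramp = np.linspace(0.2, 0.3, 512)[:, None].repeat(3, axis=-1)  # a subtle gray ramp

print(len(np.unique(rgb_to_cmyk(ramp, 8)[..., 3])))   # ~26 distinct K levels: banding risk
print(len(np.unique(rgb_to_cmyk(ramp, 16)[..., 3])))  # 512 distinct K levels: smooth
```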

While it is true that 16-bit RGB files allow more editing latitude than 8-bit files when we want to slightly change the color balance or contrast of an image before we display it on an emissive display, it is also true that black point, white point, and, to a large degree, white balance have been "baked in" at the point that we used demosaicing to generate RGB values from the monochromatic raw values and then applied tone curves to the linear brightness values.

In the case of PNG files, this is far more likely to apply to files created using graphic design applications than to photographic images created using a camera. PNG itself is a raster format, but the vector artwork it is typically exported from doesn't really have "pixels" per se, only instructions for drawing a line from one point to another, along with color and brightness gradients that are defined only by their starting and ending values, with no specific color values at defined points in between. It is up to the application reading those instructions and generating the PNG file to translate them into a raster image with defined color values at defined pixel positions.

Someone said that 16 bit photos can sell for more money than 8 bit photos...

It seems to me that the difference between image files and photos was lost in translation somewhere along the way. All digital photos are image files. But not all image files are photos. Some are graphics created on a computer without any use of a camera. A blue background with text in yellow letters, for instance, can be saved as a PNG image file. But it is not a photograph.

In the world of graphic design, higher-bit-depth PNGs or PSDs can sometimes be desirable, especially if the graphics contain color or brightness gradients, or if part of the graphic includes output from a photo. For instance, a poster advertising a concert that has a photo of the band as one part of the total poster, which also includes a colored background with text information about date & time, location, ticket information, etc., could be a mix of non-photographic and photographic content. This allows the same file to be distributed both via digital devices that use emissive displays and via printed materials that use subtractive color systems. Higher bit depth means that the graphic content can be reproduced more nearly the same way by both display mediums, without banding or blocking becoming visible when converted from RGB to CMYK.

In the area of printed books with high-quality images, such as large art books with either original photos or photo reproductions of artistic works such as paintings, tapestries, jewelry, etc., 16-bit TIFF versions of photos can also be more desirable than 8-bit raster images, whether 8-bit TIFFs, PNGs, or minimally compressed JPEGs.

For further reading here at Photography SE:

What's the point of capturing 14 bit images and editing on 8 bit monitors?
Why can software correct white balance more accurately for RAW files than it can with JPEGs?
RAW to TIFF or PSD 16bit loses color depth
Why are Red, Green, and Blue the primary colors of light? (Hint: They're not)

