Assume you take a picture with 3 different exposures: balanced, under-exposed by 3EV, and over-exposed by 3EV. In an 8-bit image, you can only have 8EV of range (*). So:
- The "balanced" image contains the "middle gray" in the middle, and everything -4EV to +4EV. Pixels darker that -4EV are blacked out, pixels lighter than 4EV are whitened out.
- In the under-exposed image, average gray is 1EV, anything more than 1EV darker that average gray is blacked out, but at the other end you have pixels that are +7EV from the average gray.
- In the over-exposed image, it is the opposite, the average gray pixels are are at +7EV, and everything light than 1EV from that is whitened out. But in the shadows, you have the data for pixel that are up to -7EV from middle gray.
So between the three pictures you cover everything from 7EV below middle gray to 7EV above it: 14EV of range in total.
```
                <----------- 14 EV ----------->
                <--- captured --->
Over-exposed    [++++++++++------]
Balanced              [----++++++++----]
Under-exposed               [------++++++++++]
                ------------------------------
Extracted       [++++++++++++++++++++++++++++]

  + = range used in the merged result, - = captured but not used
```
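If it helps to see the arithmetic, here is a minimal sketch (plain Python, no image data; the `coverage` helper is just for illustration) that reproduces the windows in the diagram:

```python
# Each 8-bit image spans 8EV, 4EV either side of where the exposure
# places middle gray.  Over-exposing shifts the captured window *down*
# relative to scene middle gray: the shot records darker scene tones.

def coverage(ev_offset, span_ev=8):
    """EV window captured by a shot exposed `ev_offset` EV away from balanced."""
    low = -span_ev / 2 - ev_offset
    return (low, low + span_ev)

for name, offset in [("over-exposed", +3), ("balanced", 0), ("under-exposed", -3)]:
    lo, hi = coverage(offset)
    print(f"{name:>13}: {lo:+.0f}EV .. {hi:+.0f}EV")
#  over-exposed: -7EV .. +1EV
#      balanced: -4EV .. +4EV
# under-exposed: -1EV .. +7EV   -> the union spans 14EV
```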
The computer that does the processing isn't itself limited to 8-bit integer values; in practice the luminosity of pixels is encoded as a decimal value between 0.0 (black) and 1.0 (white), so middle gray is 0.5 and the darkest non-black pixel of a picture with 14EV of dynamic range is `1/2^14 = 0.000061035`. Values can be extracted from the images above like this:
- `(value/255)/8` (because 8 = 2³) for over-exposed pixels in the dark parts (**)
- `value/255` for balanced pixels in the mid-tones
- `(value/255)*8` for under-exposed pixels in the highlights
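As a rough illustration, here is what that extraction could look like in Python with numpy. It assumes the three images are already *linear* 8-bit arrays (see (*) about gamma); the array names and the hard thresholds for "dark parts" and "highlights" are arbitrary choices for the sketch, not from any particular library:

```python
import numpy as np

def to_linear_hdr(over, balanced, under):
    """Merge three exposures bracketed +/-3EV into one linear float array.

    over / balanced / under: uint8 arrays of the same shape, linear values.
    """
    over = over.astype(np.float64) / 255.0
    bal = balanced.astype(np.float64) / 255.0
    under = under.astype(np.float64) / 255.0

    hdr = bal.copy()                            # mid-tones: value/255
    shadows = bal < 0.25                        # crude "dark parts" threshold
    highlights = bal > 0.75                     # crude "highlights" threshold
    hdr[shadows] = over[shadows] / 8.0          # (value/255)/8, since 2^3 = 8
    hdr[highlights] = under[highlights] * 8.0   # (value/255)*8
    return hdr
```

Note the result can exceed 1.0 (up to 8.0 in the highlights); the tone-mapping step below brings it back into display range.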
In practice this would be a weighted mix of the images, to get a smooth transition between the three.
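One possible way to do that weighting, sketched with a triangular "hat" function that favours well-exposed pixels. The weight function is my choice here; real HDR mergers (e.g. Debevec & Malik's) use similar hat-shaped weights:

```python
import numpy as np

def hat_weight(v):
    """Weight in [0,1]: highest at middle gray (0.5), zero at clipped black/white."""
    return 1.0 - np.abs(2.0 * v - 1.0)

def weighted_merge(images_0_1, ev_offsets):
    """images_0_1: list of linear float arrays in [0,1]; ev_offsets e.g. [+3, 0, -3]."""
    num = np.zeros_like(images_0_1[0])
    den = np.zeros_like(images_0_1[0])
    for img, ev in zip(images_0_1, ev_offsets):
        w = hat_weight(img)
        num += w * img / (2.0 ** ev)     # undo the exposure shift (over-exposed: /8)
        den += w
    return num / np.maximum(den, 1e-6)   # avoid division by zero where all clipped
```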
You can of course take even more shots (-6, -3, 0, +3, +6) to extend the range: the captured windows then span from -10EV to +10EV, i.e. 20EV in this case.
The final step is transforming this 14EV image into something encoded in 8 bits (so with much lower dynamic range), which is usually done with tone-mapping.
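For example, a global tone-mapping pass could look like this; the Reinhard curve `L/(1+L)` used below is one classic choice among many:

```python
import numpy as np

def tone_map(hdr_linear):
    """hdr_linear: float array, linear light, possibly much greater than 1.0."""
    compressed = hdr_linear / (1.0 + hdr_linear)   # Reinhard: maps [0, inf) -> [0, 1)
    encoded = compressed ** (1.0 / 2.2)            # gamma-encode for display
    return (encoded * 255.0 + 0.5).astype(np.uint8)
```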
(*) This is a bit simplified (at least if working from JPEG) because the pixel values are gamma-corrected, and in that case:
- The processing must first convert the 8-bit values to "linear" luminosity.
- The EVs aren't linearly mapped to the recorded values: the gamma correction gives more importance (more code values) to the darker parts of the image.
(**) If the image is gamma-corrected, `value/255` is actually `(value/255)^2.2`: the stored value must be raised to the power 2.2 (undoing the 1/2.2 encoding gamma) to get back to linear luminosity.
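For completeness, the gamma round-trip from (**) in code, using the simple gamma-2.2 model (real sRGB adds a small linear segment near black):

```python
def decode_gamma(value_8bit):
    """8-bit gamma-encoded value -> linear luminosity in [0,1]."""
    return (value_8bit / 255.0) ** 2.2

def encode_gamma(linear):
    """Linear luminosity in [0,1] -> 8-bit gamma-encoded value."""
    return round((linear ** (1.0 / 2.2)) * 255.0)

# Middle gray: linear 0.5 encodes to ~186, not 128 - the encoding
# spends more of the 8-bit codes on the darker tones.
```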