
With the release of the Canon EOS R5/R6, many people have said they prefer the 20MP R6 over the 45MP R5, not only because of its lower price but also because of "better low-light performance."

On camera comparison sites, I've also seen the sentiment that bigger pixels (lower resolution) = better low-light performance. Is this a myth, though?

Here is my reasoning. Say there are two full-frame sensors, one 20MP and one 80MP. According to many people, the 80MP sensor would be terrible for low-light photography because its pixels are smaller. However, each 2×2 block of pixels on the 80MP sensor could be averaged in post into a single pixel, effectively turning the result into a 20MP image. This could also be offered as an in-camera option. Would that image contain just as much usable information as the image from the 20MP sensor?
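As a quick sanity check of this reasoning, here is a minimal NumPy sketch; the photon counts and array sizes are made up, and it models shot noise only (no Bayer mosaic, no read noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uniform grey scene: one "large" 20MP-style pixel collects an
# average of 400 photons, so each of the four "small" 80MP-style pixels
# covering the same area collects ~100 photons on average.
mean_large = 400
low_res = rng.poisson(mean_large, size=(1000, 1000))        # stand-in for the 20MP sensor
high_res = rng.poisson(mean_large / 4, size=(2000, 2000))   # stand-in for the 80MP sensor

# Bin the high-res capture 2x2 by summing each block
# (averaging would just rescale the same numbers).
binned = high_res.reshape(1000, 2, 1000, 2).sum(axis=(1, 3))

def snr(img):
    return img.mean() / img.std()

print(f"SNR of a native low-res pixel : {snr(low_res):.1f}")
print(f"SNR of a binned high-res pixel: {snr(binned):.1f}")
# Both come out near 20 (= sqrt(400)): with shot noise alone, summing four
# quarter-size photosites matches one photosite of four times the area.
```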

In other words, since both sensors are 35mm sensors (i.e. equal area), the amount of light falling onto the sensor is the same. So, at the end of the day, there should be no difference between a native 20MP image and a 20MP image downsampled from 80MP, right? Their low-light performance would be the same, and you even gain the option of 80MP photos when you want them.
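For the shot-noise-limited case (ignoring read noise and the Bayer mosaic, which come up in the answers below), the arithmetic behind this intuition is: if each small photosite collects an average of $N$ photons, its shot noise is $\sqrt{N}$, and noise adds in quadrature when four photosites are summed, so

$$\mathrm{SNR}_{\text{large}} = \frac{4N}{\sqrt{4N}} = 2\sqrt{N} \qquad\text{and}\qquad \mathrm{SNR}_{2\times 2\ \text{sum}} = \frac{4N}{\sqrt{4}\,\sqrt{N}} = 2\sqrt{N}.$$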

Why do people say that a lower-resolution camera has better low-light performance, when it should theoretically be the same as one with higher resolution (all other factors equal)?

Note: this question is about sensors of the same size but with different pixel densities, so it is different from this question, which asks about different sensor sizes. I am aware that a full-frame camera will have a better low-light signal than an APS-C camera because of its larger sensor area, but what about sensors of the same size?

  • Does this answer your question? what's the relation between sensor size and image quality (noise, dynamic range)?
    – scottbb
    Commented Jul 10, 2020 at 20:49
  • (Correct me if I'm wrong) I believe your assumption is correct; however, the 80MP image usually simply is not downsampled.
    – jng224
    Commented Jul 12, 2020 at 8:21
  • That's what I was thinking too. Well, it can be downsampled in post, so it still contains the same total signal; the noise per pixel is higher, but the signal adds up across pixels, and in the end there's at least as much signal on the 80MP sensor as on the 20MP sensor. If I wanted to go out one night and pretend that I was shooting with bigger pixels, I could just take the photos and downsample from 80 to 20MP. Commented Jul 12, 2020 at 23:24
  • @SkeletonBow Not exactly, because the pattern of your Bayer mask would be different for the 20MP sensor and the 80MP sensor. To get the exact equivalent of a 20MP sensor from an 80MP sensor, the Bayer mask would need each color filter to cover a 2×2 block of sensels, in other words GGRR-GGRR-BBGG-BBGG in the space that would normally be GRGR-BGBG-GRGR-BGBG. And you'd still be dealing with the differences in full well capacity in some shooting situations.
    – Michael C
    Commented Jul 13, 2020 at 23:42
  • @MichaelC Oh, I see. Thanks for the info! Do you know how noticeable this difference would be, though? Maybe it turns out that it doesn't really matter? Commented Jul 14, 2020 at 2:19

3 Answers


It depends.

Assuming both sensors have the same linear dimensions:

If you are viewing the images from both sensors at the same display size, then the low-light performance of both will be similar, assuming they use the same generation of technology. There are other advantages, unrelated to low-light S/N performance, that make shooting with a higher-resolution sensor and then downsizing the result slightly better for reproducing fine detail when images (and video) are shot under better light.

If you are viewing the images from each sensor at 100% magnification (1 image pixel = 1 screen pixel), then the image from the higher resolution sensor is being enlarged more and will have poorer low light performance, all other things being equal (which they never are).

There are also some scenarios involving very small, very bright specular highlights, as in astrophotography, where the sensor with larger photosites can perform better because the smaller photosites (a.k.a. sensels or pixel wells) of the higher-resolution sensor have a lower full well capacity than the larger photosites of the lower-resolution sensor. If the scene contains bright specular points, a larger sensel with only one specular point illuminating its entire surface will allow a brighter exposure before full saturation than a smaller sensel illuminated by the same specular highlight.
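A rough numeric illustration of that full-well argument; the 50,000 e⁻ and 15,000 e⁻ capacities below are made-up figures rather than the specs of any particular sensor, and the sketch assumes the specular point's image is smaller than even the small photosite:

```python
# Hypothetical full-well capacities in electrons; real values vary by sensor.
fwc_large = 50_000   # one large photosite on the lower-resolution sensor
fwc_small = 15_000   # one of the four small photosites covering the same area

# A specular point whose image is smaller than a single small photosite dumps
# all of its photo-electrons into one well in either layout.
for electrons in (10_000, 20_000, 60_000):
    large = "clips" if electrons > fwc_large else "ok"
    small = "clips" if electrons > fwc_small else "ok"
    print(f"{electrons:>6} e- from the point -> large well: {large:5} small well: {small}")
# At 20,000 e- the small well has already clipped while the large one still has
# headroom, so the lower-resolution sensor tolerates a brighter exposure before
# that highlight saturates.
```

Whether this scenario applies depends on how small the highlight's image really is; see the comment discussion below.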

  • Right, so wouldn't it be technically incorrect to say that a lower-resolution sensor has better low-light capability? Given the choice between low res and high res, of course I'd choose high res, since it gives me high resolution (when I need it) or SNR comparable to the low-res sensor (also when I need it), right? Commented Jul 11, 2020 at 21:04
  • @SkeletonBow it all depends on what your purpose is and what your planned viewing habits are. If you plan to do a lot of pixel peeping, then the sensor with larger sensels will have better low light performance at the expense of magnification. There's always a tradeoff. The difference in full well capacity can also make a real difference in other shooting scenarios, such as shooting in bright light with dark shadows in the scene. You'll be able to push the shadows harder with the sensor having the larger photosites.
    – Michael C
    Commented Jul 12, 2020 at 3:15
  • I basically agree and upvoted. But the smaller FWC of smaller photosites is not a major factor. When the same image is recorded the smaller photosites also receive less light. I.e. both sensors will clip at the same exposure per image area. Commented Jul 12, 2020 at 15:29
  • @StevenKersting That assumes the amplification factor is the same for both sensors at a given ISO setting. That is not always the case with all camera manufacturers. It also assumes a uniform field density of light. If the scene contains bright specular points, a larger sensel with only one specular point illuminating all of its surface will allow a brighter exposure before full saturation than a smaller sensel with the same specular highlight illuminating it. But you are correct that it is somewhat of an edge case for some uses. I'll remove the bold lettering for that part.
    – Michael C
    Commented Jul 12, 2020 at 22:38
  • @MichaelC, If a single point fell within an 8µm photosite, that same point would cover four 4µm photosites with an equivalent light density per area in both cases (assuming equivalent fill factors). If instead you compare a 4µm point on a sensor with 4µm photosites against an 8µm point on 8µm photosites, then the same thing happens if the points are of equivalent brightness: same density/exposure per illuminated area. It's the inverse square law in effect, and it's the reason zooming in with a constant aperture doesn't cause changes in exposure/clipping. Commented Jul 13, 2020 at 12:32

Sensors have antialiasing filters that block higher-frequency image content in order to avoid moiré patterns. Averaging pixels will also average (and thus reduce) noise, but it is a comparatively poor low-pass filter and thus will not suppress moiré patterns as well as an optical antialiasing filter made to size would. You can try interpolation functions other than a plain average, but while they do better at removing high-frequency content, they do worse at noise suppression.
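To see how blunt a plain two-sample average is as a low-pass filter, here is a minimal NumPy sketch of its frequency response (a 1-D analogue of 2×2 binning; frequencies are in cycles per original pixel, so the Nyquist frequency after 2× downsampling is 0.25):

```python
import numpy as np

# Magnitude response of a 2-tap box average, h = [0.5, 0.5].
f = np.linspace(0.0, 0.5, 11)                      # 0 ... original Nyquist
H = np.abs(0.5 + 0.5 * np.exp(-2j * np.pi * f))    # DTFT of the filter

for fi, Hi in zip(f, H):
    print(f"f = {fi:.2f} cycles/px   |H| = {Hi:.2f}")
# |H| is still about 0.71 at f = 0.25 (the post-binning Nyquist) and only
# reaches zero at f = 0.50, so a lot of energy above the new Nyquist survives
# the averaging and can fold back as aliasing/moire after downsampling.
```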

Also, averaging light on a larger pixel means that the averaging of noise happens right at the pixel itself. That makes it considerably less likely that noise will push a single pixel beyond its dynamic range than averaging at the digital stage would. And digitisation noise, unlike the optical quantum noise, will not decrease along with pixel size: there is a cost to having to deal with more, smaller sites.
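A rough way to put a number on that cost, assuming (as a simplification) the same read/digitisation noise of $\sigma_r$ electrons per readout for both pixel sizes: summing a 2×2 block combines four independent readouts, so

$$\sigma_{\text{read},\,2\times 2\ \text{sum}} = \sqrt{4\,\sigma_r^{2}} = 2\,\sigma_r \qquad\text{vs.}\qquad \sigma_{\text{read, large pixel}} = \sigma_r,$$

while the shot-noise term is the same in both cases. Per-readout noise also differs between real pixel designs, so this only indicates the direction of the effect.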

Smaller pixel sites also tend to be more susceptible to charge leakage in the form of "hot pixels": outliers that tend to self-saturate under longer exposure times without any actual optical excitation. Once a pixel saturates, it is no longer useful for averaging purposes. If the same underlying defect affects a larger charge well, its effect is smaller and the well takes longer to saturate.

  • Most of the newer very high resolution cameras no longer have anti-aliasing filters.
    – Eric S
    Commented Jul 11, 2020 at 13:42
  • @Eric Shain: but we are not talking about very-high-resolution results; we are talking about lower resolution, obtained either by digital reduction or from a lower-resolution sensor, which will typically be accompanied by an optical anti-aliasing filter.
    – user92986
    Commented Jul 11, 2020 at 13:52
  • An 80 MP sensor is unlikely to have an anti-aliasing filter, which counters your first argument.
    – Eric S
    Commented Jul 11, 2020 at 14:02
  • @Eric Shain: no, it doesn't "counter my first argument", since the question and answer are about comparing digital reduction in resolution with working at a lower sensor resolution. So we are talking about 20MP sensors in comparison with a digital reduction from 80MP, and 20MP full-frame sensors will have optical antialiasing filters for moiré suppression.
    – user92986
    Commented Jul 11, 2020 at 14:17
  • @user92986 But the 80MP camera probably won't, and will likely show more moire in the typical shooting conditions for which aliasing is a problem, even after rescaling to match the 20 MP sensor.
    – Michael C
    Commented Jul 12, 2020 at 3:20

The SNR originates with the scene that generates it: if nothing in the camera's signal chain reduces that SNR, then all sensors receive the same SNR when used with the same lens and the same aperture/shutter-speed settings.

The reduced pixel-level SNR associated with a higher-resolution sensor is simply because the scene-generated light is divided among more photosites. To a small degree it may also be due to a slightly lower fill factor (micro-lens gaps), but these days that is essentially negligible.

Conversely, smaller photosites perform better in lower light, as they have a lower gain (fill) requirement. This is why many modern sensors incorporate dual-gain photosites, which have a smaller capacitor (photodiode) with a lower capacity (FWC) for low-light situations and a second capacitor (second gain stage) for brighter situations.

What really matters is light/SNR per image area. And larger sensors receive more light, with a higher SNR, when the same image is recorded with the same shutter-speed/aperture settings (i.e. from closer, or with a longer lens with a larger entrance pupil).
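As a back-of-the-envelope illustration of that last point (shot noise only, same framing, f-number and shutter speed; real sensors also differ in read noise and efficiency): a full-frame sensor (about 864 mm²) has roughly 2.3× the area of a 1.5×-crop APS-C sensor (about 368 mm²), so

$$\frac{\mathrm{SNR}_{\text{FF}}}{\mathrm{SNR}_{\text{APS-C}}} \approx \sqrt{\frac{864\ \text{mm}^2}{368\ \text{mm}^2}} \approx 1.5,$$

which corresponds to a bit over one stop of extra light gathered.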
