81

In doing research on vision, I have learned that "20/20" vision corresponds to a visual acuity of being able to resolve details 1 arcminute in size, that most people have around 20/15 vision, and that, due to the limits of physiology, basically nobody has vision better than 20/10. That upper limit corresponds to resolving details about 0.5 arcminutes in size.

According to Wikipedia the Moon is around 30 arcminutes wide when seen by the naked eye.

Put these together, and it seems to say that when looking at the Moon with the naked eye, nobody can see more detail than would be visible in a 60×60 image of the Moon:
[Image: the Moon on a black background, 60×60 pixels]
and that the average person can't see any more detail than in a 40×40 version:
[Image: the Moon on a black background, 40×40 pixels]

Those seem so small on my monitor. Can that really be all the detail that I can see on the Moon with the naked eye?
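
For reference, the arithmetic behind those pixel counts is just the ratio of the Moon's apparent diameter to the acuity limit. A quick sketch, using only the figures quoted above:

```python
# Pixel-equivalent width of the Moon for a few visual acuities.
# The acuity values (1.0, 0.75, 0.5 arcmin) are the ones quoted above.
MOON_ARCMIN = 30  # apparent diameter of the Moon in arcminutes

for label, acuity_arcmin in [("20/20", 1.0), ("20/15", 0.75), ("20/10", 0.5)]:
    px = MOON_ARCMIN / acuity_arcmin
    print(f"{label} vision (~{acuity_arcmin} arcmin): Moon spans about {px:.0f} 'pixels'")
```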

1
  • 2
    $\begingroup$ Given that the lower image looks about the same size as the moon appears to be in the sky, that seems about right. $\endgroup$
    – Vikki
    Commented Feb 11, 2021 at 22:58

6 Answers

82

Yes and no.

Yes, it's true that the apparent size of the Moon is 30 arcmin. It's true that the visual acuity of most people is 1 arcmin. So it's true that if you take the angular size of the smallest detail you can see on the Moon, and you put a bunch of those lined up straight in a row, you could span a Moon diameter with only a few dozen of them. In that sense, you are correct.

However, when you try to reproduce the situation on a computer screen, the comparison breaks down. First off, the eye doesn't see in "pixels". Like most optical systems, it has a point-spread function that takes very tiny details and smears them out into a larger spot. The resolution of the eye is set not by a pixel size but by the width of that bell-shaped blur, which has soft edges, is round, applies everywhere in the image, and is not fixed.

In your comparison, you equate the size of that blur spot with the size of a pixel on a digital screen, but the two are not the same. The pixel grid in those thumbnails is fixed, so whatever falls between pixels is lost forever. Aliasing creates artifacts that are not there in the original image. The dynamic range of the monitor is not the same as the dynamic range of the eye (the eye's is much better). Color and brightness levels on the monitor are discrete, whereas the eye sees them as a continuum. Finally, the visual center in your brain is like a powerful computer that applies intelligent correction algorithms to the live image.

The list goes on. The bottom line is that all these effects combine to let you perceive a live image that is slightly richer than those dead, frozen thumbnails you posted. Not a whole lot better, but a little bit better. It's not that the eye can "work around" its limitations; it's that too much is lost when a large image is shrunk onto a tiny, fixed pixel grid on a computer screen.

It's very hard to reproduce reality on a computer screen. A much better way would be to take a 2000×2000 pixel image of the Moon, put it on a big high-resolution monitor, and step back to the point where the apparent size of that image is 30 arcmin. I know that doesn't sound satisfactory in the context of your original query, but it's a much better simulation.
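
To make the "step back" suggestion concrete, here is a rough sketch of the geometry. The pixel pitch is an example value; substitute your own display's:

```python
# At what viewing distance does a 2000-pixel-wide Moon image subtend 30 arcminutes?
import math

image_px    = 2000
pixel_mm    = 0.18   # example pixel pitch, roughly a 140 ppi monitor
moon_arcmin = 30

image_mm    = image_px * pixel_mm
half_angle  = math.radians(moon_arcmin / 60 / 2)
distance_mm = (image_mm / 2) / math.tan(half_angle)
print(f"A {image_mm/1000:.2f} m wide image subtends 30' from ~{distance_mm/1000:.0f} m away")
```

With these example numbers that works out to roughly 40 m: at the distance where the image subtends the same angle as the real Moon, your eye rather than the monitor becomes the limiting factor.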


Similar problems appear whenever you try to map the resolution of any continuous optical system (like a telescope) to a fixed digital grid (like a camera).

Let's say you're using a sensor with a pixel size of 4 microns. Let's say your telescope has a linear resolution in prime focus equal to 4 microns. You might be tempted to say - great, the sensor matches the telescope, right?

Well, not really. When that happens, you actually lose a bit of resolution. The image is good, but it's a little bit softer than it really should be. Below is an image of the Moon I took a while ago with a system having exactly the parameters indicated above.

You can tell it's a bit soft; it's not really sharp down to the pixel level. Turbulence also plays a role, but part of the problem is that the linear resolution is equal to the pixel size.

Click the image and open it in a new tab; if your browser shrinks it to fit the window, click the enlarged image again to expand it to full size. You must view it at full resolution to see the effects I'm talking about; the fuzziness is not visible at the small size shown here:

[Image: the Moon at prime focus, full resolution]

One way around that phenomenon is to blow up the image with a Barlow lens until the linear resolution at prime focus is much greater than the camera's pixel size, maybe 4x bigger. You do all your processing, then shrink the result back down if you like, and you'll get a sharper image. Combine that with stacking multiple frames, and the overall quality can get pretty close to the theoretical performance of the telescope.
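
To put numbers on the sampling argument, here is a small sketch. The aperture, focal length, wavelength and pixel size are example values chosen so the resolved detail comes out near 4 microns, matching the scenario described above; they are not the author's actual equipment.

```python
# Diffraction-limited detail size at prime focus vs. camera pixel size.
import math

aperture_mm   = 200
focal_mm      = 1200     # f/6, an example
pixel_um      = 4.0
wavelength_nm = 550

theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)  # Rayleigh criterion
detail_um = theta_rad * (focal_mm * 1e-3) * 1e6                   # linear size at focus

print(f"resolved detail ≈ {detail_um:.1f} µm ≈ {detail_um / pixel_um:.1f} pixels")
# Nyquist-style sampling wants at least ~2 pixels per resolved detail; a Barlow
# raises the effective focal length and therefore the pixels-per-detail ratio.
```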


TL;DR: Continuous optical systems and discrete grids of pixels are very different things and cannot be easily compared.

9
  • $\begingroup$ Very nice and clear explanation. Wish I could give you more than +1. $\endgroup$
    – Tonny
    Commented May 9, 2014 at 20:26
  • 4
    $\begingroup$ Why stop at a 2000×2000 image? Why not make a 4k×4k image and move that farther away? At some point the extra pixels are adding zero perceived detail to the observer. While 120×120 may add subtle details over 60×60, does 240×240 add actual observable details beyond 120×120? I'm guessing not. You're right that the eye is not a digital system, but there are discrete cones gathering light, and Nyquist does have a say in how much information they can actually pull in at some point. $\endgroup$
    – Phrogz
    Commented May 10, 2014 at 14:22
  • 1
    $\begingroup$ This is wrong. According to the Nyquist sampling theorem, to model a waveform with a frequency cutoff, you should sample at twice the cutoff and then low-pass filter the reconstructed result. In other words, it's a 120x120 image blurred according to the ideal point spread function in the illustration. $\endgroup$ Commented May 11, 2014 at 15:02
  • $\begingroup$ @BlackbodyBlacklight Thank you for the details. It's been so long since Nyquist and I shook hands that I had forgotten about the "twice the frequency" bit. (Though, that may be what takes the upper limit from 1 arcminute to 0.5 arcminutes.) Anyhow, my point in invoking Nyquist's name wasn't that 60 pixels is the correct limit, but rather that there is some limit (presumably lower than 2000). $\endgroup$
    – Phrogz
    Commented May 12, 2014 at 2:58
  • 1
    $\begingroup$ Let's be explicit: Nyquist tells us that if you want to sample a wave with 60 equally-spaced datapoints worth of information, but you can't line up the phase exactly, you should take 120 samples. Then if you convolve those using the sinc kernel, you get the original 60 samples back. Despite sinc having negative values, the reconstruction is identical to the original. $\endgroup$ Commented Feb 15, 2021 at 20:48
60

It doesn't seem so far-fetched to me. Sure, you might be off by a few pixels, due to differences between the human eye and a computer monitor, but the order of magnitude seems about right — the detail in your images, viewed closely, more or less matches what I see when I look at the full moon.

Of course, you could fairly easily test it yourself: go outside on a dark night, when the moon is full, and see if you can spot with your naked eye any details that are not visible (even under magnification) in the image scaled to match your eyesight. I suspect you might be able to see some extra detail (especially near the terminator, if the moon is not perfectly full), but not very much.


For a more objective test, we could try to look for early maps or sketches of the moon made by astronomers before the invention of the telescope, which should presumably represent the limit of what the naked human eye could resolve. (You needed to have good eyesight to be an astronomer in those days.)

Alas, it turns out that, while the invention of the telescope in the early 1600s brought on a veritable flood of lunar drawings, with every astronomer starting with Galileo himself rushing to look at the moon through a telescope and sketch what they saw, very few astronomical (as opposed to purely artistic) drawings of the moon are known from before that period. Apparently, while those early astronomers were busy compiling remarkably accurate star charts and tracking planetary motions with the naked eye, nobody really thought it important to draw an accurate picture of the moon — after all, if you wanted to know what the moon looked like, all you had to do was look at it yourself.

This behavior may perhaps be partly explained by the prevailing philosophical opinions of the time, which, influenced by Aristotle, held the heavens to be the realm of order and perfection, as opposed to earthly corruption and imperfection. The clearly visible "spots" on the face of the moon were therefore mainly regarded as something of a philosophical embarrassment — not something to be studied or catalogued, but merely something to be explained away.

In fact, the first and last known "map of the moon" based purely on naked-eye observations was drawn by William Gilbert (1544–1603) and included in his posthumously published work De Mundo Nostro Sublunari. It is quite remarkable how little detail his map actually includes, even compared to a tiny 40 by 40 pixel image like the one shown above:

[Left image: William Gilbert's map of the moon. Right image: the Moon, scaled down to 40 px across and back up to 320 px.]
Left: William Gilbert's map of the moon, from The Galileo Project; Right: a photograph of the full moon, scaled down to 40 pixels across and back up to 320 px.

Indeed, even the sketches of the moon published by Galileo Galilei in his famous Sidereus Nuncius in 1610, notable for being based on his telescopic observations, are not much better; they show little detail except near the terminator, and the few details there are appear to be inaccurate bordering on fanciful. They are, perhaps, better regarded as "artist's impressions" than as accurate astronomical depictions:

[Image: Galileo's sketches of the moon from Sidereus Nuncius (1610)]
Galileo's sketches of the moon, based on early telescopic observations, from Sidereus Nuncius (1610), via Wikimedia Commons. Few, if any, of the depicted details can be confidently matched to actual lunar features.

Much more accurate drawings of the moon, also based on early telescopic observations, were produced around the same time by Thomas Harriot (1560–1621), but his work remained unpublished until long after his death. Harriot's map actually starts to approach, and in some respects exceeds, the detail level of even the 60 pixel photograph above, showing e.g. the shapes of the maria relatively accurately. It should be noted, however, that it is presumably based on extensive observations using a telescope, over several lunar cycles (allowing e.g. craters to be seen more clearly when they're close to the terminator):

[Left image: Thomas Harriot's lunar map, c. 1609. Right image: the Moon, scaled down to 60 px across and back up to 320 px.]
Left: Thomas Harriot's lunar map, undated but probably drawn c. 1610–1613, based on early telescopic observations, quoted from Chapman, A., "A new perceived reality: Thomas Harriot's Moon maps", Astronomy & Geophysics 50(1), 2009; Right: the same photograph of the full moon as above, scaled down to 60 pixels across and back up to 320 px.
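
Incidentally, scaled comparison images like the ones above are easy to reproduce. A minimal sketch using Pillow; "moon.jpg" is a placeholder filename for any photograph of the full Moon, and the interpolation choices are mine, not necessarily what was used for the images shown here:

```python
from PIL import Image

moon = Image.open("moon.jpg").convert("L")          # any full-Moon photograph
for size in (40, 60):
    small = moon.resize((size, size), Image.LANCZOS)             # discard fine detail
    small.resize((320, 320), Image.NEAREST).save(f"moon_{size}px.png")  # enlarge for viewing
```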

Based on this historical digression, we may conclude that the 40 pixel image of the moon shown in the question does indeed fairly accurately represent the level of detail visible to an unaided observer, while the 60 pixel image even matches the detail level visible to an observer using a primitive telescope from the early 1600s.

2
  • 1
    $\begingroup$ An excellent answer to the original question and very convincing comparisons, thanks. $\endgroup$
    – Patru
    Commented May 12, 2014 at 7:58
  • $\begingroup$ After going through a size calibration; he's off, but not by much. Not taking into account the eye's telescopic ability (which isn't even 2:1), I'd say about 90x90. At 60x60 I can see the pixellation artifacts. $\endgroup$
    – Joshua
    Commented Sep 14, 2019 at 14:15
27

When you gaze at the Moon "live", you are not seeing a still image. You're seeing a "video": your retina is gathering multiple images over time. Those additional samples have to be taken into account; they effectively amount to extra pixels.

Suppose that 60x60 pixel images are taken of a scene using a tripod-mounted camera which slightly jitters. From the multiple images, a higher-resolution image could be reconstructed.
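
Here is a toy "shift-and-add" sketch of that idea: several coarse, jittered frames of the same scene are combined on a finer grid. It is purely illustrative (known whole-subpixel shifts, no noise, a synthetic scene), not a real super-resolution pipeline:

```python
import numpy as np

UP = 4                                   # fine-grid pixels per coarse pixel
yy, xx = np.mgrid[0:240, 0:240]
fine = np.sin(2 * np.pi * xx / 40.0) * np.cos(2 * np.pi * yy / 40.0)  # "true" scene

def downsample(img, up):
    """Average up x up blocks into one coarse pixel (a crude sensor model)."""
    h, w = img.shape
    return img.reshape(h // up, up, w // up, up).mean(axis=(1, 3))

def rms(a):
    return float(np.sqrt(np.mean(a ** 2)))

# Simulate jittered 60x60 frames: shift the scene by a known amount, then bin it.
offsets = [(dy, dx) for dy in range(UP) for dx in range(UP)]
frames = [downsample(np.roll(fine, (dy, dx), axis=(0, 1)), UP) for dy, dx in offsets]

# Shift-and-add: paste each coarse frame onto the fine grid at its known offset
# and average all the contributions.
acc = np.zeros_like(fine)
for (dy, dx), frame in zip(offsets, frames):
    big = np.kron(frame, np.ones((UP, UP)))        # nearest-neighbour upsample
    acc += np.roll(big, (-dy, -dx), axis=(0, 1))   # undo the known shift
recon = acc / len(offsets)

single = np.kron(frames[0], np.ones((UP, UP)))     # one frame, naively upsampled
print("RMS error, single 60x60 frame    :", round(rms(single - fine), 3))
print("RMS error, 16-frame shift-and-add:", round(rms(recon - fine), 3))
```

With these toy settings the multi-frame reconstruction should come out noticeably closer to the original scene than any single frame, which is the sense in which the extra frames "amount to extra pixels".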

Have you ever noticed how a sharp-looking video can appear blurry when paused or stepped frame by frame?

As an aside, another thing to remember is that a pixel is not a unit of information, not unless you specify how many bits encode it. Suppose you sample 60x60 points, but with continuous amplitude resolution and zero noise: that 60x60 image then contains, in principle, an infinite amount of information (though, of course, its ability to resolve adjacent details is still limited).
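
To make that aside concrete: the raw information content of a sampled image depends on how finely each sample is quantized, not only on how many samples there are. The bit depths below are arbitrary examples:

```python
width = height = 60
for bits_per_pixel in (1, 8, 16):
    total_bits = width * height * bits_per_pixel
    print(f"{width}x{height} at {bits_per_pixel:>2} bits/pixel -> {total_bits:,} bits")
```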

6
  • $\begingroup$ This is an excellent point. Even if your eyes aren't moving, the atmospheric shifts are certainly lensing in different details. $\endgroup$
    – Phrogz
    Commented May 10, 2014 at 14:18
  • 1
    $\begingroup$ The "gathering of multiple images" are saccades. Each is a single high resolution snapshot that the brain composites into a single image. For every perceived instant of image, you take over a dozen snapshots. $\endgroup$
    – TechZen
    Commented May 10, 2014 at 17:00
  • 3
    $\begingroup$ Pausing a video will reveal either VHS or digital compression artifacts. "Sub-pixel" eye vibrations would already be accounted for in any visual acuity test. Taking advantage of atmospheric lensing, or moments of good seeing, is the domain of adaptive optics and I wouldn't assume the brain is capable of that sort of processing. $\endgroup$ Commented May 11, 2014 at 14:59
  • $\begingroup$ @Phrogz - the "atmospheric shifts" are called seeing. Seeing is never a limiting (or enhancing) factor for naked eye observations. The only visible effect that way is the twinkling of the stars, but that's it. $\endgroup$ Commented May 13, 2014 at 0:12
  • 1
    $\begingroup$ @Phrogz due to its stochastic nature, turbulence can only make things worse. But it's never so bad as to make naked eye observations worse. The only kind of remote lensing effects that can enhance the view are those that are orderly - think gravitational lensing around distant galaxies. $\endgroup$ Commented Oct 23, 2021 at 19:06
5

After all these astronomy-focused answers, I will add a computing-focused one.

Pixels are not the same size on all monitors. Take a 1990s monitor and the latest smartphone screen: 60 pixels will not cover the same physical size, nor the same visual angle.

How did you calculate the pixel size corresponding to that visual acuity?
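
For anyone who wants to check their own setup, here is a sketch of that calculation. The monitor size, resolution and viewing distance are example values, not a recommendation:

```python
# Pixels per degree of visual angle for a given display and viewing distance,
# and hence how many of its pixels the Moon's half-degree disc would cover.
import math

diag_in      = 27
res_x, res_y = 1920, 1080
distance_in  = 72            # about 6 feet

width_in      = diag_in * res_x / math.hypot(res_x, res_y)   # physical panel width
px_per_inch   = res_x / width_in
px_per_degree = px_per_inch * distance_in * math.tan(math.radians(1))  # small angles

moon_deg = 0.5
print(f"{px_per_degree:.0f} px/degree -> the Moon would span ~{px_per_degree * moon_deg:.0f} px here")
```

With those example numbers you get on the order of 100 pixels per degree, i.e. a roughly 60-pixel image viewed on that setup covers about the same visual angle as the real Moon.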

1
  • 1
    $\begingroup$ You are right, how you view those pixels matters if you want it to look roughly the same as the moon. You would need to see the 60 pixels on a screen at around 100-120ppd, for example a 27" monitor seen from 6 feet away, or a 50" HDTV seen from 12 feet away. Try my calculator. (Note: does not work in IE, and the SVG diagram currently looks bad in Firefox. Use Chrome or Safari for best results.) The question was, though, not how to make it look just like the moon, but how much detail there is when you see the moon with a naked eye. $\endgroup$
    – Phrogz
    Commented May 13, 2014 at 12:48
2

No, it isn't.

Our senses, including vision, do not operate in the way artificial digital devices do. Our eyes do not contain "pixels", nor is the image we perceive through them composed of "pixels". Our nervous system also does so much post-processing of the image that such comparisons are meaningless.

A "pixel" is an element of image in raster graphics. However, there is also vector graphics, in which there is no notion of "pixels"; it would be meaningless to talk about an object pictured in vector graphics in terms of "pixels". Our vision is neither raster nor vector graphics.

The retina's photosensitive layer is made of photosensitive cells (rods and cones), but they cannot be reasonably compared to "pixels". While there are some distant and deceptive similarities -- like the fact that the size, number and density of rods and cones correspond to the degree of visual acuity -- rods and cones do not correspond to discrete elements of our vision in the same way that individual pixels are discrete elements of a digital image.

The retina is actually an extension of the brain, and a lot of image preprocessing goes on in the retina before the signal is even sent down the optic nerve. This preprocessing is absolutely necessary because there are about 100 times more photosensitive cells than there are retinal ganglion cells (the cells whose fibers form the optic nerve). Basically, there are not enough optic nerve fibers for each of the roughly 100 million photoreceptors to connect directly to the visual cortex. Retinal ganglion cells use sophisticated mechanisms like edge detection to "compress" the signals from the photoreceptors, and this is just the beginning. Our brain does a lot more, including filling in missing image elements without our being aware of it. In fact, it is our brain, not our eyes, that creates our cognitive experience of visual perception.
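
As a very crude, toy illustration of the "centre minus surround" idea behind that kind of compression (a difference-of-Gaussians filter, often used as a first-order caricature of ganglion-cell receptive fields, and emphatically not a model of the retina), consider:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A synthetic "scene": a bright square on a dark background.
scene = np.zeros((64, 64))
scene[20:44, 20:44] = 1.0

center   = gaussian_filter(scene, sigma=1.0)   # narrow "center" response
surround = gaussian_filter(scene, sigma=3.0)   # broader "surround" response
dog      = center - surround                   # large only where intensity changes

print("response in the flat interior:", round(float(dog[32, 32]), 4))
print("response at the square's edge:", round(float(dog[32, 20]), 4))
```

The output is near zero over uniform regions and large at the edges, which is the sense in which such processing "compresses" the image before it is sent down the optic nerve.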

What is more, our eyes are constantly making tiny microsaccadic movements -- because, greatly simplifying, we cannot really see static images: if a person's eyes are experimentally immobilized, their perceived image slowly fades and completely disappears until the eyes are allowed to move again. Here is a static image that exploits our microsaccadic movements to create an illusion of apparent motion (warning -- it can be a strong nausea trigger):

[Image: a static pattern that appears to move because of microsaccades]

In conclusion, despite the oversimplified non-comparisons promoted by pseudo-intellectual sources like xkcd and many others, the reality is much more complicated, and details of complex biological systems cannot be reasonably compared to details of artificial systems.

It is correct to say that the Moon's size is 30 arc-minutes, but not that its size is A×B pixels.

1
  • $\begingroup$ i.sstatic.net/sWjfi.png That's not how to use Stack Exchange; proposed edit comments are not for addressing users individually, especially in this way. Rather than escalate to an edit war, "I'm right, you're wrong" options include shrugging and leaving it alone, leaving a comment (which in this case you can't do until you get 50 reputation points and some experience with SE) flagging for moderator assistance, or going to Astronomy meta and posting a question of the form "what to do next if my seemingly helpful edits are rejected?" $\endgroup$
    – uhoh
    Commented Oct 23, 2021 at 0:09
1

Answer: No, the moon is not 60x60 pixels wide (as seen by the human eye).

Please read Bad Chad's answer. It is factually correct and makes many salient points overlooked in other answers. I can only enlarge on the excellent points he makes:

The human visual system cannot be modeled as a camera or video system any more than the brain can be modeled as a computer or the liver as a chemical factory. Biological systems are organized fundamentally differently from human-designed artifacts. For instance, the retina is not a sensor full of pixels: it has multiple layers of neurons that process image information before passing it on to the visual cortex. By some measures of computation speed, the retina outperforms some supercomputers (https://www.videofoundry.co.nz/ianman/laboratory/research/retina). When a ganglion cell fires, it is communicating higher-order information than a "pixel".

The proposition that human vision can be characterized by a pixel of a certain size is an artifact of using optotypes (high-contrast letters) to determine eyeglass prescriptions. There are a multitude of other ways of measuring human visual performance, many of which give higher measured acuity than optotypes:

Vernier acuity is the ability to detect misalignment between straight lines. You may be familiar with vernier calipers, which allow measurements accurate to a thousandth of an inch with the unaided eye. Vernier acuity far exceeds optotype acuity (and so is termed hyperacuity) and can be improved further, up to six-fold, by training.

[Image: a vernier scale] https://en.wikipedia.org/wiki/Vernier_scale

Another example of hyperacuity is stereoacuity (depth perception) https://en.wikipedia.org/wiki/Stereoscopic_acuity

Another example of hyperacuity is diopter sights on competition precision rifles.

[Image: a diopter sight on a competition rifle]
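
A small toy illustration of why hyperacuity is possible at all: the position of a blurred feature can be estimated far more finely than the spacing of the detectors sampling it. Everything here (the blur width, the detector spacing, the true offset) is an arbitrary example, not a model of the eye:

```python
import numpy as np

spacing  = 1.0                      # detector spacing, arbitrary units
x        = np.arange(-10, 11) * spacing
true_pos = 0.137 * spacing          # a sub-detector-spacing offset to recover
sigma    = 1.5 * spacing            # blur of a thin bright line across the detectors

response = np.exp(-0.5 * ((x - true_pos) / sigma) ** 2)   # what each detector reports
estimate = np.sum(x * response) / np.sum(response)        # centroid estimate

print(f"true offset {true_pos:.3f}, estimate {estimate:.3f}, "
      f"error {abs(estimate - true_pos):.5f} (detector spacing = {spacing})")
```

The localization error comes out far smaller than the detector spacing, which is the same kind of trick a vernier scale, stereopsis, and aperture sights exploit.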

So Bad Chad is right: "It is correct to say that the Moon's size is 30 arc-minutes, but not that its size is A×B pixels."

