
It has always been my understanding that humans have two eyes so that we can have 3D vision: the left eye sees more of the left side of an object than the right eye and vice versa. This helps us to estimate depth among other things.

Now, when I close one eye, I am still able to perceive depth. I assume this is because my brain fills in the blanks. For how long does this work? Do people eventually lose depth perception (or does it at least diminish significantly) after losing an eye?

If so, how long does it take? If not, then clearly we are capable of perceiving everything with one eye. Why, then, do we have two (besides redundancy and a larger field of view)? From an evolutionary standpoint, what makes two eyes better than one, or three, or four, ...?

5 Comments
  • Here's an open-access article on monocular stereopsis that might be interesting: researchgate.net/publication/… . That said, this may very well be better suited to Cognitive Sciences SE.
    – CKM
    Commented Jan 14, 2016 at 23:03
  • @Kendall +1 for the neat article, but I disagree about migrating the question.
    Commented Jan 14, 2016 at 23:11
  • I guess distance can be perceived with information from the lens's focal length, but that is not sufficient for 3D vision.
    – busukxuan
    Commented Jan 16, 2016 at 2:35
  • Try getting a friend to throw a ball back and forth with you, then close one eye while you keep tossing it. If your results are like mine, you will be surprised: in my experience it becomes extremely difficult to make the judgements the task requires with only one eye.
    – user21343
    Commented Jan 22, 2016 at 0:51
  • I can accurately understand the 3-D structure of an object held in my hand, from various perspectives, using one eye. Isn't that normal?
    – user25568
    Commented Oct 14, 2016 at 14:41

3 Answers

Answer (score 7)

It seems you are working from a misconception: "the left eye sees more of the left side of an object..." is not how distance perception works. Otherwise we would not be able to estimate the distance to flat objects such as traffic signs and shooting targets.

The actual mechanism is parallax estimation, or binocular disparity. In a nutshell, the closer an object is to your eyes, the larger the difference between its positions on the left and right retinas.
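For a rough quantitative feel (a minimal sketch of my own, not part of the original answer): in a simplified pinhole model of the two eyes, a point at distance Z produces a disparity of roughly f·B/Z between the left and right images (measured relative to a point at infinity), where B is the distance between the eyes and f the eye's focal length. The 6.5 cm baseline and 17 mm focal length below are assumed textbook-typical values, not figures from this thread.

```python
# Minimal sketch of the pinhole-model relation between depth and binocular
# disparity: disparity ~ focal_length * baseline / depth (relative to a
# point at infinity).  Baseline and focal length are assumed typical values.

BASELINE_M = 0.065      # ~6.5 cm between the eyes (assumed)
FOCAL_LENGTH_M = 0.017  # ~17 mm effective focal length of the eye (assumed)

def disparity_from_depth(depth_m: float) -> float:
    """Retinal disparity (metres on the retina) for a point at depth_m."""
    return FOCAL_LENGTH_M * BASELINE_M / depth_m

def depth_from_disparity(disparity_m: float) -> float:
    """Invert the relation: recover depth from a measured disparity."""
    return FOCAL_LENGTH_M * BASELINE_M / disparity_m

if __name__ == "__main__":
    for depth in (0.5, 1.0, 2.0, 10.0, 100.0):
        d = disparity_from_depth(depth)
        print(f"depth {depth:6.1f} m -> disparity {d * 1e6:8.2f} micrometres")
```

Because the disparity shrinks as 1/Z, a fixed error in measuring it translates into a depth error that grows rapidly with distance, which is why stereopsis is most useful close up.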

You can perform a simple experiment: find a place where several parallel wires hang in the air, such as a railway line or overhead phone/power lines. Look at the wires normally and they appear as mere black lines against the sky, with no sense of distance. Now tilt your head to the side, and you will instantly get a feeling for which wire is closer and which is further away. Why the difference? Because your eyes are offset horizontally: shifting a horizontal wire along that axis leaves both retinal images the same, so there is no disparity to measure, whereas once the wires run across the inter-eye axis the disparity between the two images becomes obvious.

When you close one eye, your brain loses the ability to estimate parallax. However, it is left with several options:

  1. Visual cues. If one object overlaps another, it is obviously closer. If two objects look the same but one is smaller, it is probably further away (or a child). If you know the expected size of an object (a mountain, a horse, a fly), you also get a sense of its distance (a rough sketch of this familiar-size cue follows at the end of this answer).
  2. Focusing distance. Focusing the eye on a near object feels different from focusing on a far one, because the lens must accommodate more strongly for near objects; this gives a coarse distance signal.
  3. Memories. You remember what you have seen with two eyes.

Of these, only (3) depends on previous binocular vision. (1) and (2) are available even to those who were born with one eye. However, parallax estimation is much faster and more precise. With a single eye you will be able to hit a fly on the wall, but catching it in the air will be extremely difficult.
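To make cue (1) concrete, here is a hedged sketch of my own (the 1.8 m person and the 2° visual angle are assumed example values, not figures from the answer): if you know an object's true size and can measure the visual angle it subtends, a single eye is enough to estimate its distance.

```python
import math

# 'Familiar size' monocular cue: known physical size plus measured visual
# angle gives an estimate of distance.  Example values are assumptions.

def distance_from_familiar_size(true_size_m: float, visual_angle_deg: float) -> float:
    """Estimate the distance to an object of known size from the visual
    angle it subtends."""
    half_angle = math.radians(visual_angle_deg) / 2.0
    return true_size_m / (2.0 * math.tan(half_angle))

if __name__ == "__main__":
    # A person ~1.8 m tall subtending ~2 degrees of visual angle:
    print(f"estimated distance: {distance_from_familiar_size(1.8, 2.0):.1f} m")  # ~51.6 m
```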

2 Comments
  • Actually, there is a (4): moving your head (or your whole body) in order to get different viewpoints. The human brain is perfectly adapted to this approach, and computer graphics frequently "abuse" it, because a 2D screen cancels any binocular or focusing-distance cues. That's why 3D objects are frequently presented slowly rotating.
    – fraxinus
    Commented Dec 18, 2020 at 0:40
  • As someone who is blind in one eye, I can confirm that option 1 works pretty well under most circumstances and option 2 covers a lot of the rest. They don't work well on quickly moving objects, though.
    – John
    Commented Dec 30, 2020 at 4:49
Answer (score -2)

Edit01 (here and scattered throughout)

[Not meaning to be antagonistic… but admittedly somewhat defensive.]
Insofar as the following cites an existing work, that work is the Martin and Foley item described below, pp165-185 (with one noted exception). I read it years ago and did not consult it (or anything else) when writing this answer. Although this topic can be studied as a science, most of the root knowledge is innate and implicit, and any adult who did not already hold it explicitly could render it explicit (as in the following examples). It is of course possible that I have read other sources on the same material. As far as I am concerned, most modern adults would consider almost everything here neither insightful nor controversial, with the exceptions of brightness (for which I have no source), height cues (which I did not mention) and the kinetic depth effect (which I did not mention, and which is not obviously relevant to the question).
Atmospheric colouring is an exception in that one can use it to work out that air is blue (as opposed to needing to know that air is blue to make sense of an apparent phenomenon).
Brightness is an exception for which I should indeed have supplied a reference… except that, as noted, I cannot find it in the noted textbook, and thus have no idea where I heard it (though I do remember the excitement of learning it, or possibly working it out).
Further citation anomalies and disparities are noted inline.

End of Edit01.

Original, with references in “[]” added.  

Going from memory, since no one else has covered this…

This is more a psychology question (as below).

3D vision utilises about seven different mechanisms. Binocular vision accounts for only two of these.
• Occlusion — each eye sees slightly different areas of a partly occluded object. [(“Occlusion” suggests also the monocular topic “interposition”, below.) As a binocular phenomenon, Martin and Foley (p183) label this “binocular disparity”, which is, from a quick look, actually about the image of the same object falling on “different” — I would say “non-corresponding” — areas of the retina (as a function of being closer to, on, or further from, the focal distance). From a quick look, they do not seem to cover the fact that one eye will actually see more of a partly occluded object, which means I have no idea where I heard it, and I suspect I may well have worked it out myself.]
• Focus — you have to focus both of your eyes on an object to see it sharply; your visual system knows about the (relative) eye positions and the subject distance. This includes both focusing the lens in the eye [“accommodation” (p167) is lens shape] and pointing both eyes at the object [“convergence” (p182) is the eyes turning towards each other].
Binocular occlusion information only works out to about (from memory) 2 m [“10 feet”, Martin and Foley p183]. This range would be increased by having the eyes further apart. (Focus works out to a much greater distance; I am not sure which of the two aspects above is more useful, but I am guessing it is lens shape, and that they are of roughly similar usefulness.) [Martin and Foley mention “10 feet” (p167), citing Hochberg (1971), for accommodation (lens shape)… but appear to have chosen this figure somewhat arbitrarily. Their point is that it yields “rather weak” distance information.]
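As a hedged aside on why these binocular signals fade so quickly (my own back-of-the-envelope, not taken from Martin and Foley): the convergence angle between the two lines of sight shrinks roughly in proportion to 1/distance, so beyond a few metres it barely changes at all. The 6.5 cm interocular baseline below is an assumed typical value.

```python
import math

# Convergence angle of the two eyes as a function of fixation distance.
# Beyond a few metres the angle changes so little that vergence carries
# almost no distance information.  Baseline is an assumed typical value.

BASELINE_M = 0.065  # ~6.5 cm between the eyes (assumed)

def convergence_angle_deg(distance_m: float) -> float:
    """Angle between the two lines of sight when fixating at distance_m."""
    return math.degrees(2.0 * math.atan(BASELINE_M / (2.0 * distance_m)))

if __name__ == "__main__":
    for d in (0.3, 1.0, 3.0, 10.0, 30.0):
        print(f"fixating at {d:5.1f} m -> convergence angle {convergence_angle_deg(d):6.3f} deg")
```

Going from 10 m to 30 m changes the convergence angle by only about a quarter of a degree, consistent with the “rather weak” distance information mentioned above.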

Other (monocular) 3D vision cues include the following.
• Object size — many objects have a standard size [“familiar size” p167], and almost all objects have different proportions (such as leg thickness) according to how big and heavy they are [which Martin and Foley do not cover; the source is me]… and an object that is further away will have a smaller image size. [pp167-169 “size cues”, “relative size”.]
• Brightness — any given object will reflect less light into the eye from further away (because the eye is a smaller target further away). (This is more useful and important than one realises.) [From a quick check, this one does not appear to be in the noted text. I do not think (particularly) that I worked this out for myself, but if it is not in that text, then I have no idea where I heard it.]
• Perspective — many types of object (e.g. a road, path, wall or river) have a stable width, or something similar or analogous, and this will have a progressively smaller image size with greater distance. [“linear perspective” p170.]
• Texture — many objects have a known texture (or (assumptively) a regular texture), and the image size of the detail of the texture will decrease with distance. This works for groups of animals, and leaves, and the like, as well. [pp169-170.]
• Air tinting — the further away an object is, the more it is tinted blue by the intervening air. [“atmospheric perspective” p170. Martin and Foley also say that more distant objects appear “blurr/y/”. I would say that that is largely because we have less visual acuity for an object that is further away (because the image size is smaller); Martin and Foley put it down to interference by air particles.]
• The physics of movement — speed and acceleration can be informative (a rough numerical sketch follows this list). [Martin and Foley treat this area on pp174-176. I was thinking of… what I said. M&F mention “motion cues” (the class), “motion parallax” (which involves the subject moving laterally, and multiple stationary objects), “motion perspective” (which involves the subject moving towards or away, in any environment) and “kinetic depth effect” (which is about apparent 3D-ness in rotating objects, and is ostensibly not about distance).]
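To put a rough number on the motion cues (my own sketch; the 1.5 m/s walking speed is an assumption, not a figure from the text): for an observer translating sideways, a stationary object at distance z drifts across the visual field at roughly v/z radians per second, so nearer objects sweep past far faster than distant ones.

```python
import math

# Motion parallax sketch: angular drift rate of a stationary object seen
# directly to the side while the observer walks past it.  Walking speed
# is an assumed value.

OBSERVER_SPEED_M_S = 1.5  # assumed walking speed

def angular_speed_deg_s(distance_m: float, speed_m_s: float = OBSERVER_SPEED_M_S) -> float:
    """Approximate angular speed (deg/s) of a stationary point abeam of the observer."""
    return math.degrees(speed_m_s / distance_m)

if __name__ == "__main__":
    for z in (1.0, 5.0, 20.0, 100.0):
        print(f"object at {z:6.1f} m drifts at {angular_speed_deg_s(z):7.2f} deg/s")
```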

[Other monocular items that Martin and Foley mention… . • Shading [p170] — objects can cast shadows on other objects. I would say that this is primarily not about distance perception. • Interposition [p167] — the nearer object obscures part of the further object (or not). • Height cues [pp173-174] — my take on this is that, on flat ground (and because the viewer is above the ground), an object that is further away will have its base closer to the horizon and thus visually higher… and, for objects in the air, I would say here only that it is more complicated.]
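One more hedged illustration, this time of the height cue just mentioned (my own sketch; the 1.6 m standing eye height is an assumed value): on flat ground, an object whose base appears α degrees below the horizon lies at roughly eye height divided by tan α, which is why more distant objects sit visually closer to the horizon.

```python
import math

# 'Height in the visual field' cue: on flat ground, distance follows from
# eye height and the angle of the object's base below the horizon.
# Eye height is an assumed value.

EYE_HEIGHT_M = 1.6  # assumed standing eye height

def distance_from_declination(angle_below_horizon_deg: float,
                              eye_height_m: float = EYE_HEIGHT_M) -> float:
    """Distance along flat ground to a point seen the given angle below the horizon."""
    return eye_height_m / math.tan(math.radians(angle_below_horizon_deg))

if __name__ == "__main__":
    for alpha in (45.0, 10.0, 2.0, 0.5):
        print(f"{alpha:5.1f} deg below horizon -> about {distance_from_declination(alpha):7.1f} m away")
```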

Focus is very valuable for (for instance) catching a ball, whereas a task like driving [with one eye] has much more 3D information available. [Source: I worked this out using thinking. …So it’s me.]

As for having more than 2 eyes… having eyes spaced apart vertically (as well as having the existing horizontal spacing) would give additional occlusion information, but the gain would be minimal. [Source: I worked this out using thinking. …So it’s me.]

Source  

The source is a custom book — that is, “Custom Book”, which is, “a compilation of chapters from… [existing] Pearson Education Australia titles.” The publisher logo “Prentice Hall” appears, but only as a logo, and not on the cover.
The Custom Book is “PSYC236 Cognition and Perception”, ©2001, Pearson Education Australia Pty Ltd. It is “Sourced from: Sensation and Perception 4th Edition”, ©2001, Martin and Foley.
The cited material is from a chapter numbered 6, which appears to be its number in the original/source book; ditto for the page numbers.

7 Comments
  • But the question is based on a misconception, or a reversal of causality. "Humans have 2 eyes so that we can have 3D vision" is backwards: we have 3D vision because we have two eyes. We have two eyes because, like all vertebrates (and insects &c), we are bilaterally symmetric and have two (or pairs) of just about everything. Starfish, for instance, with 5-fold symmetry, have 5 eyes.
    – jamesqf
    Commented Dec 18, 2020 at 18:36
  • Welcome to Biology.SE! While this could be part of a good answer, answers are much more likely to receive a favorable response if you include supporting references (primary literature is best). Without that support, your answer is indistinguishable from opinion and thus not appropriate for this site. You may also want to take the tour and consult the help center pages for advice on how to answer effectively on this site. Thank you!
    – tyersome
    Commented Dec 18, 2020 at 19:39
  • @jamesqf I do think the OP has a bit of confusion about that, but their question is more along the lines of "why can I still perceive depth with only one eye?". "We have 3D vision because we have two eyes" isn't quite true either: there are several monocular depth cues that give us 3D vision as well. People physically missing an eye, or with amblyopia, still see in 3D. Additionally, many other mammals have poor stereoscopic vision despite having two eyes; our quality of stereo vision is an adaptive trait shared with animals such as other primates and carnivores (and, among non-mammals, raptors).
    – Bryan Krause
    Commented Dec 18, 2020 at 23:04
  • In summary, I think this answer answers the question that was asked perfectly; it is just missing references, as pointed out by tyersome.
    – Bryan Krause
    Commented Dec 18, 2020 at 23:07
  • @Bryan Krause: True, and binocular stereopsis also only works for relatively close objects. For instance, have you ever noticed the exaggerated 3D effect of viewing a fairly close subject through good binoculars?
    – jamesqf
    Commented Dec 19, 2020 at 2:38
Answer (score -2)

The ability to see in 3-D is an emergent quality of the brain. Yes: at a certain level, all perception is based on emergent qualities of our brains.

That being said, when I was sleep-deprived for almost three full days, I began to see in 3-D using only one eye. Somehow, my brain ‘unlocked’ an ability that had previously been limited to seeing with both eyes. Even more astonishing, 2-D printed photos and 2-D computer images appeared in 3-D as well.

Anecdotal? Yes. But as genuine and real to my perception as anything else I experience. Go figure!

See the definition of ‘neural reality’. I believe this allows a sneak peek behind the curtain of human perception.
