
Wouldn't it be great to have dynamic image resolution, to better resemble how humans perceive a scene? The photo would be taken at several zoom levels, and the images would then be stitched together into a single composite that can be zoomed far into the middle with retained detail.

Often when I take a photo, it feels like I have to choose between a good overview of the scene with poor resolution on the object of importance, or zooming in on only the object of interest and losing the overview...

See the example vector image in the link: "Dynamic image resolution" vector file.
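A minimal sketch of the idea described above: treat each zoom level as a layer covering a shrinking fraction of the scene, and serve whichever layer gives the most pixels over a requested crop. All names (`ZoomLayer`, `best_layer`) and the sample numbers are hypothetical illustrations, not an existing format or API.

```python
# Hypothetical sketch: store the same scene at several zoom levels and pick
# the sharpest source that still covers a requested crop.
from dataclasses import dataclass

@dataclass
class ZoomLayer:
    focal_mm: int          # focal length the frame was shot at
    fov_fraction: float    # fraction of the widest frame this layer covers
    px_width: int          # sensor pixels across that fraction

    def px_per_scene_width(self):
        # Effective resolution measured over the full wide-angle scene width.
        return self.px_width / self.fov_fraction

def best_layer(layers, crop_fraction):
    """Pick the highest-resolution layer that still covers the crop."""
    usable = [l for l in layers if l.fov_fraction >= crop_fraction]
    return max(usable, key=ZoomLayer.px_per_scene_width)

layers = [
    ZoomLayer(focal_mm=24,  fov_fraction=1.0,  px_width=6000),  # wide shot
    ZoomLayer(focal_mm=70,  fov_fraction=0.34, px_width=6000),  # mid zoom
    ZoomLayer(focal_mm=200, fov_fraction=0.12, px_width=6000),  # telephoto
]

# Zooming into the central 10% of the scene can use the telephoto layer,
# while a crop of half the scene is only covered by the wide shot.
print(best_layer(layers, 0.10).focal_mm)
print(best_layer(layers, 0.50).focal_mm)
```

With these made-up numbers, the telephoto layer gives 6000 / 0.12 = 50,000 effective pixels across the scene, versus 6000 for the wide shot alone, which is exactly the "retained detail in the middle" the question asks for.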

  • I think the file format is the least of the problems you've got to solve here. How are you proposing to simultaneously capture both the wide and telephoto versions of the image? – Philip Kendall, Jul 10, 2023 at 7:28
  • Why the middle? – xenoid, Jul 10, 2023 at 8:02
  • This question doesn't really seem like you have a problem to solve. "Wouldn't it be great if...?" Well, maybe yes. But in concrete terms, what are you trying to achieve? What are you taking photos of? I feel like photos are supposed to tell a little story, and that idea isn't necessarily related to being able to zoom into every last part of an image. Do you think you might be overly focused on so-called "pixel peeping"? – osullic, Jul 10, 2023 at 9:23
  • What about using a zoom lens and just taking 2 photos? – osullic, Jul 10, 2023 at 9:25
  • Is there a question here? – Rafael, Jul 10, 2023 at 11:48

3 Answers


I cannot think of any aspect of human vision that would be replicated by a variable resolution image as you describe.

If you want to replicate the fact that humans have only a very narrow field of sharp, high-resolution vision in the middle of the visual field, with acuity decreasing toward the periphery, then all you need to do is produce a very large image and display it so that it occupies a greater field of view (physically or virtually).

If you want to simulate that aspect in a smaller field of view, all you need is a lens with poor resolution away from the center... a "normal" lens (~45° horizontal FOV) with some kind of diffusion filter toward the edges would probably work best. However, while that would replicate/simulate what a human sees when staring at a single point, it would look odd, because a human normally scans a scene and pays little attention to what is out of focus.

Similarly, if you are trying to replicate the aspect of human perception where the mind focuses only on what it perceives as being important (and in focus)... then you just need a large enough image with a strong point of focus (to the viewer).

There is no aspect of human vision which "zooms" resolution... and certainly not where resolution increases at greater distances (ignoring uncorrected farsightedness).


If you stand behind a glass window, look out at a vista, and trace the outlines of objects on the glass with a wax pencil, the image you have created is the "human perspective".

You can take a picture of this vista with your camera. The film / digital sensor size / focal length of the lens used, is moot. Once the picture is taken, it must be displayed to observe. This picture will replicate the “human perspective” if it is displayed the same size as the native camera format (zero magnification – contact print) and viewed from a distance equal to the focal length of the taking lens.

Such a viewing distance is likely impractical given today's miniature cameras. Thus we enlarge the image to display it, say on a computer or TV screen, a paper print, or a projection screen. In other words, the displayed image will be an enlargement. If the camera is full-frame 35mm size, 24mm by 36mm, and the displayed image is 8 x 12 inches, we have applied about 8 X magnification. If the camera is APS-C (16mm by 24mm), we must apply about 12 X magnification.

Now the viewing distance that presents the "human perspective" is focal length times magnification. If full frame with a 50mm lens is used, the viewing distance for an 8 x 12 inch display is 8 x 50 = 400mm, about 16 inches. For the APS-C it's 12 x 50 = 600mm, about 24 inches.

In other words, to view an image so that it represents the "human perspective", it's the viewing distance that counts. This distance is the focal length of the taking lens multiplied by the magnification applied to make the displayed image.

Let me add, most images we take need not be viewed in this manner. However, portraits are an exception. I learned in photo school to mount a lens about 2.5 X the normal focal length for the format (roughly its diagonal measure). For full frame that's about 50 mm X 2.5 = 125mm. This assumes an 8 x 12 inch print on the mantel or wall, viewed from 125 x 8 = 1000mm (about a yard). Such a lash-up makes pictures that sell best, because the resulting portrait subject will not appear distorted (nose too big, ears too small). P.S. I learned this at the Professional Photographers of America School for Continuing Education. I was an instructor; my subject was color print and process.
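The rule above reduces to one multiplication; here is a tiny sketch that reproduces this answer's own two examples (the helper name is illustrative):

```python
# "Correct" viewing distance per this answer: focal length of the taking
# lens times the enlargement magnification applied for display.

def viewing_distance_mm(focal_length_mm, magnification):
    return focal_length_mm * magnification

# The answer's examples: 50 mm lens, 8 x 12 inch print.
print(viewing_distance_mm(50, 8))   # full frame, ~8x enlargement -> 400 mm
print(viewing_distance_mm(50, 12))  # APS-C, ~12x enlargement -> 600 mm
```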


Images might be consumed in different ways:

  1. static images on a display: even though you might want to portray only the main object in good quality, the quality of the image will probably be judged by the whole image rather than by its main object alone
  2. AR goggles: this might be a good use case for your idea, if you can somehow be sure the viewer won't look anywhere in the image except at its main object

and many others, most of which assume static display of the image.

The obvious problems with this idea:

  1. resolution cannot be decreased by arbitrary amounts without significant loss of quality. Any resize other than a simple integer fraction (1/2, 1/3, etc.) requires image resampling (guessing pixel values at intermediate positions). While scaling works very well in general (because most images are highly redundant), you cannot assume it will work well for highly detailed images. So, rather than reducing the resolution of the outer region, you would probably need to compress it at a lower quality instead
  2. storage prices are decreasing constantly
  3. there are already several very good image codecs which yield much better image quality than what's generally used but are not widely adopted
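The resampling point in item 1 can be shown with a toy one-dimensional "scanline": halving resolution maps every output sample exactly onto an input sample, while an arbitrary factor forces interpolation between pixels. This is a pure-Python illustration, not how a real image codec resizes.

```python
# Toy 1-D illustration: resample a scanline by `factor` using linear
# interpolation between the two nearest input samples.

def resample(signal, factor):
    n_out = int(len(signal) * factor)
    out = []
    for i in range(n_out):
        pos = i / factor                     # position in input coordinates
        lo = int(pos)
        hi = min(lo + 1, len(signal) - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

line = [0, 100, 0, 100, 0, 100, 0, 100]      # fine alternating detail

half = resample(line, 0.5)       # 1/2: every output lands on an input sample
blended = resample(line, 0.6)    # 0.6: most outputs are blends (guesses)
print(half)
print(blended)
```

The 1/2 result contains only original pixel values, while the 0.6 result contains invented in-between values; on fine detail those guesses are exactly the quality loss the answer warns about.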

Also, if you are talking about smartphones with multiple cameras, it would be reasonable to expect to be able to save an image from each camera simultaneously. But since the technology is proprietary, you would need to convince your smartphone maker to make it possible to save such a pack of images. In general, it should be possible on Android:
https://source.android.com/docs/core/camera/multi-camera
https://developer.android.com/training/camera2/multi-camera

  • There is also the Gigapan format, which is not what the OP seems to want but is a variable-resolution format. A robotic tripod mount aims the camera in a grid of overlapping images. The software stitches it all together and creates a very large image which can be hosted on the Gigapan website for viewing at multiple resolutions. https://gigapan.com – user106382, Jul 11, 2023 at 1:44
