I hope this is the correct place to ask this. If not, please refer me to a better place, thanks. Now to my question,

I am learning about digital image processing, and the book I am reading discusses spatial resolution in terms of line pairs. It describes constructing a test chart of alternating dark and light lines, where the width of a line pair is the combined width of one dark line and one light line. That is all the information the book gives about line pairs, and I don't really understand what role they play in digital image formation and resolution.

What does a line pair actually correspond to or represent in the context of image formation, specifically in a camera? There is a question in my book about how many line pairs/mm a camera will be able to resolve when taking an image of a subject. It gives the size of the CCD in the camera, as well as how many elements the CCD has vertically and horizontally. What does line pairs/mm of the CCD mean? Is it something like the number of pixels that will be resolved per CCD element? It just isn't clear to me how line pairs are realized in the camera.

I hope this question makes sense. I am a CS student so don't do much of anything with signal processing, but I would like to have a better understanding of how images are formed with respect to their spatial resolution capabilities.

  • Yes, this is the right place to ask that... – Fat32 (Sep 24, 2019)

2 Answers

The spatial resolving capability of an optical system refers to its ability to distinguish between closely spaced (and possibly very small) details. The spacing of a line pair gives a measure of how small the distance between a dark and a light line can be while they are still perceived as separate. Resolution also depends on several factors such as brightness level, color, and viewing environment.

Those resolution charts are designed to test optical equipment and systems. Consider two lines viewed from a fixed distance, separated by $d$ from each other. As $d$ is made smaller and smaller, at some threshold you can no longer tell that there are two lines: they look like a single one. That line-separation distance $d_{min}$ is the limit of your optical resolution under that viewing condition. To remove the dependence on viewing distance, resolution is often stated in angular units.
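
As a rough illustration of the angular-units point, here is a small sketch with made-up numbers (two lines a fixed distance apart, viewed from a given distance, using the small-angle approximation):

```python
import math

# Hypothetical numbers for illustration only.
d = 0.5e-3   # separation between the two lines, in metres (0.5 mm)
D = 1.0      # viewing distance, in metres

# Small-angle approximation: the angular separation is roughly d / D radians.
theta_rad = d / D
theta_arcmin = math.degrees(theta_rad) * 60.0

print(f"Angular separation: {theta_rad:.2e} rad = {theta_arcmin:.2f} arcminutes")
```

Stated this way, the same chart can be used at different distances and the result still characterizes the optics (or the eye) rather than the particular viewing setup.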

That's also true for a camera system. Its spatial resolution is limited by two factors: first, the resolution limit of the lens (its sharpness), and second, the resolution limit of the CMOS/CCD sensor grid (its sampling density).
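
For the sampling-density part, here is a minimal back-of-the-envelope sketch. The CCD width and element count below are made up; substitute the numbers from your book's exercise. The key idea is that one line pair needs at least two samples, one for the dark line and one for the light line:

```python
# Hypothetical CCD numbers for illustration only.
ccd_width_mm = 14.0          # assumed sensor width
elements_horizontal = 2048   # assumed number of photosites across that width

pixel_pitch_mm = ccd_width_mm / elements_horizontal

# One line pair needs at least two samples (dark + light),
# so the sensor-limited resolution is roughly 1 / (2 * pixel pitch).
lp_per_mm_on_sensor = 1.0 / (2.0 * pixel_pitch_mm)

print(f"Pixel pitch: {pixel_pitch_mm * 1000:.2f} micrometres")
print(f"Sensor-limited resolution: {lp_per_mm_on_sensor:.1f} lp/mm on the sensor")
```

This gives line pairs per mm on the sensor; to express it as line pairs per mm on the subject, you would also scale by the lens magnification, which the geometry point in the other answer covers.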

  • Ok, you've cleared up the concept of line pairs for me. However, I am still a bit unclear about how this works in a CCD. Right now it seems to me that essentially a higher ratio of line pairs/mm, all else constant, results in more line pairs being seen by each element of the sensor? I don't think I am right. – Blake F. (Sep 25, 2019)

This answer is to be read along with Fat32's answer.

The whole electro-optical imaging business is complicated, and there are always more details to stumble over. In this case, I think the following three points will clear things up for you:

Given an optical system with perfect resolution, an image will be projected onto the face of the sensor chip. If you know the lens characteristics (focal length, f number, distortion) and the distance to the target, then you can calculate exactly what this image should be just using geometry. At this point, there's no loss of signal because we're in Plato's land of perfect things. I mention it because you're trying to think in mm in object space, but ultimately you need to translate that to $\mu\mathrm m$ on the sensor.
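
To make that object-space-to-sensor translation concrete, here is a hypothetical thin-lens sketch. The focal length and object distance are made-up numbers, and a real lens has distortion and other effects this ignores:

```python
# Hypothetical thin-lens geometry, for illustration only.
focal_length_mm = 50.0        # assumed lens focal length
object_distance_mm = 2000.0   # assumed distance to the target

# Thin-lens magnification for an in-focus object at that distance.
magnification = focal_length_mm / (object_distance_mm - focal_length_mm)

# A line pair that is 1 mm wide on the subject...
line_pair_width_object_mm = 1.0
line_pair_width_sensor_um = line_pair_width_object_mm * magnification * 1000.0

print(f"Magnification: {magnification:.4f}")
print(f"1 mm on the subject is about {line_pair_width_sensor_um:.1f} micrometres on the sensor")
```

So a pattern quoted in lp/mm in object space turns into a much finer pattern (in lp/mm, or cycles per micrometre) at the sensor, and that is the number you compare against the pixel pitch.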

The optical system (and to some extent the sensor) will blur the image. This is usually what people think about when they think of optical resolution: imagine looking at a set of white and black lines and telling them apart depending on whether they're crisp or blurred. The spot that's formed when you image a point source of light is called the "blur spot", and the size of the blur spot is an important system parameter.

If the blurring from the optical system and sensor is significantly smaller than a pixel, then you can get spatial aliasing*. For instance, if you've got a perfectly crisp pair of lines that are projected onto one pixel column, you'll just see a 50% gray vertical line. Many inexpensive optical systems these days tend to have pixel counts high enough that the pixels are smaller than the blur spot. This means that the optical blur forms an anti-aliasing filter of sorts, and you don't need to worry about the sensor geometry. Really good cameras and esoteric sensors with low pixel counts still tend to be limited by the sensor's pixel size.
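
To see that "50% gray" effect numerically, here is a toy sketch, pure sampling arithmetic with no real optics, that averages a perfectly crisp line-pair pattern over pixels of two different sizes:

```python
import numpy as np

# A crisp line-pair pattern: alternating dark (0) and light (1) stripes,
# finely sampled at 10 samples per line pair, 100 line pairs total.
fine = np.tile([0.0] * 5 + [1.0] * 5, 100)

def sample_with_pixels(signal, pixel_width):
    """Average the fine signal over non-overlapping 'pixels' of the given width."""
    n = len(signal) // pixel_width * pixel_width
    return signal[:n].reshape(-1, pixel_width).mean(axis=1)

# Pixels much smaller than a line pair: the stripes survive with full contrast.
small_pixels = sample_with_pixels(fine, 2)
# Pixels exactly one line pair wide: each pixel averages a dark and a light
# stripe, so the output is a flat 50% gray.
big_pixels = sample_with_pixels(fine, 10)

print("small pixels min/max:", small_pixels.min(), small_pixels.max())   # 0.0 1.0
print("coarse pixels min/max:", big_pixels.min(), big_pixels.max())      # 0.5 0.5
```

With in-between pixel sizes (or a phase offset between stripes and pixels) you get the stranger-looking moiré-style artifacts that spatial aliasing is known for.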

So, bottom line:

  • The lens's geometric properties and what you're looking at determine the density of the lines on the sensor.
  • The optical and electro-optical blur determines how much the lines are smeared out before they hit the pixels.
  • The density of the pixels themselves, the line density, and the amount of blur determine whether optical aliasing is an issue, and if so, how it'll behave.

* Spatial aliasing is a long subject and there's information out there, so Google for it. Or look in your book -- it really ought to be there.
