9
$\begingroup$

I’ve been going back and forth with peers on why image processing is done the way it is on the Webb Telescope. Based on an article I read, my understanding is that they have various filters that represent different IR bands; these are assigned visible-spectrum colors based on the wavelength of each IR filter, and finally some weighting scheme combines these assigned colors to form a picture.

The thing I don’t understand though is why we wouldn’t just take the IR data and ‘unshift’ the light since we know the distance of the objects we are observing? Is the process described above in some convoluted way equivalent to this idea of unshifting the colors?

$\endgroup$
2
  • 22
    $\begingroup$ I think there is a misunderstanding here that objects emit at visible wavelengths and we only observe them in IR because of redshift? In reality, emission in the rest frame of the object can be in parts of the spectrum outside visible light. $\endgroup$
    – lucas
    Commented Feb 23 at 17:29
    $\begingroup$ @lucas yes, that is a misunderstanding on my part; thanks for pointing it out. Using the scheme I described, one would not be able to tell whether the IR from the object is due to the object's natural properties or to redshift. However, if you assume the object doesn't naturally emit IR and only appears in the IR because of redshift, the question still holds. $\endgroup$
    – lumenhippo
    Commented Feb 26 at 18:10

6 Answers

26
$\begingroup$

The infrared data cannot be unshifted to produce visually pleasing images simply by linearly scaling observed wavelengths, because JWST does not work like the human eye or the camera in your phone. One important difference is that the range of wavelengths JWST covers spans more than five octaves, while the spectrum humans can see spans less than one, as John Doty commented. If you just adjusted for redshift, most of the "colors" would remain invisible to you.
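
To see the scale of that mismatch, here is a quick back-of-the-envelope check (the wavelength ranges are approximate, not instrument specifications):

```python
# Rough comparison of JWST's wavelength coverage with human vision,
# measured in octaves (factors of two in wavelength). Ranges are approximate.
import math

jwst_octaves = math.log2(28.0 / 0.6)        # NIRCam + MIRI together, ~0.6-28 microns
visible_octaves = math.log2(750.0 / 380.0)  # human vision, ~380-750 nm

print(f"JWST: ~{jwst_octaves:.1f} octaves, human eye: ~{visible_octaves:.2f} octaves")
# -> JWST: ~5.5 octaves, human eye: ~0.98 octaves
```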

The article linked in the question explains that JWST's instruments do not record colors but the total intensity of electromagnetic radiation within a particular band. They use a variety of filters, each passing specific wavelengths chosen to detect particular elements or molecules such as hydrogen, water, or methane.

When several filtered captures are available, false-color images can be created by combining these separate monochrome images. This is a very helpful visualization technique, but the results are different from what is produced by a digital camera that is optimized for reproducing images similar to what is seen by the human eye.
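
As an illustration only (not the actual JWST pipeline), combining three monochrome filter images into a false-color picture can look roughly like the sketch below, with the longest-wavelength filter assigned to red and the shortest to blue; the percentile stretch and the synthetic arrays are made up for the example:

```python
import numpy as np

def stretch(img):
    """Normalize a monochrome exposure to 0..1 with a simple percentile stretch."""
    lo, hi = np.percentile(img, [1, 99])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def false_color(f_long, f_mid, f_short):
    """Assign the longest-wavelength filter to red and the shortest to blue."""
    return np.dstack([stretch(f_long), stretch(f_mid), stretch(f_short)])

# Synthetic data standing in for three filtered exposures of the same field:
rng = np.random.default_rng(0)
rgb = false_color(rng.random((256, 256)), rng.random((256, 256)), rng.random((256, 256)))
```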

It is also noteworthy that colors are not physical attributes but names we use to describe the visual perception of light. People distinguish colors differently because the response of human cone cells to a particular wavelength varies between individuals.

See Monochrome astrophotography techniques, Color vision, and Color at Wikipedia.

$\endgroup$
8
  • 12
    $\begingroup$ JWST covers over five octaves of spectrum, while the visible spectrum is less than an octave. So, even after any shift you might apply, most of the information in the data would be invisible to human eyes. $\endgroup$
    – John Doty
    Commented Feb 23 at 21:18
  • $\begingroup$ @JohnDoty this could be a good answer itself $\endgroup$
    – fraxinus
    Commented Feb 24 at 6:27
  • 2
    $\begingroup$ Excellent answer. Not only would it be inaccurate to map it back to visible by sliding and shrinking the scale of wavelengths, it would also be unhelpful. The shortest emitted wavelengths that JWST can see (for the most distant object yet observed, z = 13.2) are ~40nm ultraviolet. That short UV (almost x-ray) light interacts with matter very differently as compared to the longest wavelength light that JWST can observe from the same object, ~2000nm infrared when it was emitted. Pretending that one is truly blue and the other truly red would give us a distorted mental image. $\endgroup$ Commented Feb 24 at 14:12
  • 3
    $\begingroup$ "Different individuals perceive various wavelengths and thus colors differently." Perceive is a complex word here, with some meanings more appropriate on philosophy.stackexchange than here. There exist color-blind people, and people with mutated cones who see more different colors. But the usefulness of color photography and especially digital photography and displays shows that most people map wavelengths to RGB colors in a consistent way, so a digital photo of an object stored with three samples per pixel matches what that object looks like in real life to most people. $\endgroup$
    – prosfilaes
    Commented Feb 24 at 21:30
  • 1
    $\begingroup$ @doubleunary,@JohnDoty thanks for your response! I can finally stop thinking about this :P $\endgroup$
    – lumenhippo
    Commented Feb 26 at 17:48
10
$\begingroup$

You can't "unshift" images in the way you suggest because they often/usually do not contain objects that all have the same redshift.

If they did, then you probably could, since all wavelengths would be divided by the same $(1+z)$ factor. It is doubtful this would produce visually pleasing or useful results, since some of the JWST data would necessarily still lie outside our visual wavelength range. This is because the full span of wavelength sensitivity of the JWST cameras covers about a factor of 10 (for NIRCam, 0.6-5 $\mu$m) and 6 (for MIRI, 5-28 $\mu$m) in wavelength, whereas the human eye covers only about a factor of 2. It is possible, for example in MIRI images of objects with relatively low redshift ($z<7$), that none of the "unshifted" data would lie in the visible wavelength range.
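
For a concrete sense of where the "unshifted" wavelengths land, here is a small illustrative check (band edges are approximate):

```python
# Divide the observed band edges by (1 + z) and compare with the visible range.
VISIBLE_NM = (380.0, 750.0)  # approximate human visual range

def rest_frame_nm(observed_um, z):
    """Rest-frame wavelength in nm for an observed wavelength in microns."""
    return observed_um * 1000.0 / (1.0 + z)

for name, band_um in [("NIRCam", (0.6, 5.0)), ("MIRI", (5.0, 28.0))]:
    for z in (1, 7, 13):
        lo, hi = (rest_frame_nm(w, z) for w in band_um)
        overlaps = lo < VISIBLE_NM[1] and hi > VISIBLE_NM[0]
        print(f"{name}, z={z}: rest frame {lo:.0f}-{hi:.0f} nm "
              f"({'overlaps' if overlaps else 'misses'} the visible range)")
```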

$\endgroup$
1
  • $\begingroup$ I’m not sure “visually pleasing” is a convincing standard to people who question the value of false-colour images. If the process already sounds arbitrary to them, aesthetics seem unlikely to justify it. It should be noted that often specific colours in the resulting images are chosen to represent the presence of important elements which emit in the corresponding parts of the IR spectrum, allowing easy visual inspection of the distribution of different types of matter. $\endgroup$
    – Seb
    Commented Feb 24 at 11:33
9
$\begingroup$

I know this doesn't answer your question, but I thought I would mention it since it's mildly relevant.

Here is a far-ultraviolet (139.4 nm) image of the Sun that has been shifted into the human visible color range in a way similar to what you proposed in your question.

false-color solar image

This image was originally a three-dimensional cube (x, y, and wavelength) captured by the Interface Region Imaging Spectrograph (IRIS), a NASA satellite that has been continuously observing the Sun since 2013. To create the colors, I shifted and stretched the wavelengths into the visible spectrum and then mapped them to RGB using the CIE 1931 color space.
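
A much-simplified sketch of that shift-and-stretch step is shown below. It is not the actual processing used for the image above (which used the CIE 1931 color-matching functions); the cube here is synthetic and the Gaussian responses are a crude stand-in:

```python
import numpy as np

def remap_to_visible(wavelengths_nm, vis_min=380.0, vis_max=700.0):
    """Linearly shift and stretch a wavelength axis onto the visible range (nm)."""
    w = np.asarray(wavelengths_nm, dtype=float)
    return vis_min + (w - w.min()) / (w.max() - w.min()) * (vis_max - vis_min)

def cube_to_rgb(cube, wavelengths_nm):
    """Collapse an (ny, nx, nw) spectral cube into an RGB image using crude
    Gaussian responses in place of the CIE color-matching functions."""
    vis = remap_to_visible(wavelengths_nm)
    rgb = np.zeros(cube.shape[:2] + (3,))
    for i, center in enumerate((610.0, 550.0, 465.0)):  # rough R, G, B centers
        weights = np.exp(-0.5 * ((vis - center) / 40.0) ** 2)
        rgb[..., i] = np.tensordot(cube, weights, axes=([2], [0]))
    return rgb / rgb.max()

# Synthetic stand-in for a narrow spectral window around Si IV 139.4 nm:
wl = np.linspace(139.2, 139.6, 32)
cube = np.random.default_rng(1).random((64, 64, wl.size))
image = cube_to_rgb(cube, wl)
```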

I don't have any experience with Webb observations, but I think you could do a similar exercise with the NIRSpec spectrograph on JWST (at least in principle, although it might not be very visually pleasing).

If anyone is interested in creating their own false-color images, I've published my code as a Python package called colorsynth.

$\endgroup$
4
  • 1
    $\begingroup$ This is much closer to what I'd ask this question for. +1 $\endgroup$ Commented Feb 25 at 14:21
  • $\begingroup$ I don't understand the colorbar. Why are the units "km/s"? $\endgroup$ Commented Feb 26 at 14:36
  • $\begingroup$ This is really cool @RoySmart! Thanks for taking the time to show a visual representation $\endgroup$
    – lumenhippo
    Commented Feb 26 at 17:52
  • 1
    $\begingroup$ @WaterMolecule, good question, I should've explained that. The units are km/s since that corresponds to the Doppler shift of the spectral line (Si IV 139.4 nm) observed by IRIS. Astronomers sometimes express wavelength as a Doppler shift for convenience, since it makes the data easier to interpret. $\endgroup$
    – Roy Smart
    Commented Feb 27 at 2:06
4
$\begingroup$

The thing I don’t understand though is why we wouldn’t just take the IR data and ‘unshift’ the light since we know the distance of the objects we are observing? Is the process described above in some convoluted way equivalent to this idea of unshifting the colors?

I think this is a great question, but some readers may not understand it.

Suppose we have a series of images of the same object taken through several different fairly narrow band (say 10%) filters. Each image is named after the central wavelength of its filter.

Then suppose through spectroscopic data the redshift $z$ is determined.

We can "re-name" each image with a new central wavelength that would be red-shifted by $z$ to the old one.

Those won't necessarily be red/green/blue, say 650, 550, 450 nm, nor will they be good approximations to the spectral sensitivity of our eye, and depending on the data, they may often no longer represent visible wavelengths at all.

In rare cases there might be combinations of images and redshifts where two or potentially three visible colors can be recovered, and some semblance of a roughly realistic rendering could be reconstructed.

But it would probably be very boring to look at!
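
For a rough feel for how often the re-named wavelengths land in the visible range, here is a toy check using the approximate central wavelengths of four NIRCam wide filters and an assumed redshift of $z=3$:

```python
# Divide each filter's observed central wavelength by (1 + z) and see
# which rest-frame values happen to fall in the visible band.
z = 3.0
observed_nm = {"F150W": 1500, "F200W": 2000, "F277W": 2770, "F444W": 4440}

for name, obs in observed_nm.items():
    rest = obs / (1 + z)
    visible = 380 <= rest <= 750
    print(f"{name}: observed {obs} nm -> rest frame {rest:.0f} nm "
          f"({'visible' if visible else 'not visible'})")
```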

As @JohnDoty points out:

JWST covers over five octaves of spectrum, while the visible spectrum is less than an octave. So, even after any shift you might apply, most of the information in the data would be invisible to human eyes.

Images taken through many filters from many telescopes are available online. The ones from JWST may not be available for us to play with immediately after they are taken, but in the meantime there are plenty of multi-filter image sets from Hubble one can use to practice de-shifting.

You can try this yourself, and if you need help, just ask a new "how to de-shift multi-filter images" question. I predict that the result will look boring and not be very informative, but give it a try!

$\endgroup$
3
  • $\begingroup$ "won't necessarily be red/green/blue"... and /that/ is the real problem. While the casual tinkerer might assume that he could map one of the JWST filter passbands to purple, one then has to ask "what wavelength is purple?" and then when one realises what a messy question /that/ is one has to ask "what RGB mix will approximate purple best for publication when perceived by the widest segment of potential readers?". $\endgroup$ Commented Feb 24 at 8:24
    $\begingroup$ @MarkMorganLloyd Nobody said color was simple! onlinelibrary.wiley.com/doi/book/10.1002/9781119367314 But that won't stop people from playing around with it. $\endgroup$
    – uhoh
    Commented Feb 24 at 8:53
  • $\begingroup$ Unfortunately, despite the ready availability of things like the AS7341 sensor (11 bands, not just RGB) a lot of people assume that all they have to do is connect a camera to "AI de jour" and all their problems are solved. $\endgroup$ Commented Feb 24 at 9:20
2
$\begingroup$

The thing I don’t understand though is why we wouldn’t just take the IR data and ‘unshift’ the light

That presumes a few things:

  1. The interesting emissions of the observed object(s) are in visible light in the object's frame of reference.

    No, a lot of interesting light is invisible to the naked eye even before redshift affected it, so "unshifting" won't make it visible.

    What we consider "visible" light spectrum is an evolutionary adaptation of species on Earth to the light of our Sun, within the confines of the biochemistry available to us. In the Universe, "visible" light does not hold such special status. Due to how our technology evolved, it is particularly easy for us to capture visible light images, but that, again, is a confluence of our biological and technological evolutionary path.

  2. That all the objects in the field of view have the same redshift (or even a known redshift).

    This is not generally true and varies a lot with what is observed, how wide the field of view is, and so on. It also assumes that the redshift of everything in the image is known. Knowing the redshift requires taking spectra, and we don't have redshift data for every little thing in every JWST or Hubble field of view. I'd imagine that an average JWST field of view has many pixels with either an unknown redshift or with multiple light sources (nearer and farther away) contributing to the pixel's "color", so for some pixels there is no single redshift to apply. Take, for example, a foreground nebula with background stars and galaxies.

  3. That the human perception of the images is an important factor in the analysis.

    Ultimately, no deep analysis is done on the visible light visualizations of infrared or UV data. The data is operated on in its original form, without regard for how a human might see it. The pretty pictures synthesized in visible light are helpful, and provide qualitative information to astronomers and to the public. But that's about it. Their interpretation does not depend on any particular visible light translation.

$\endgroup$
0
$\begingroup$

Sensors like the CCDs in telescopes or the CMOS sensors used in most commercial photography, such as in your phone, do not record wavelengths; they record the intensity of photons striking each pixel and rely on filters to limit the light to certain wavelengths. For example, in your phone's camera each photosite sits behind a single red, green, or blue filter arranged in a mosaic, and the full-color image is interpolated from neighboring pixels.

Hubble can use red, green, and blue filters to take three separate exposures, which can be combined into an image that closely resembles what we would see with our eyes.

James Webb's two main imaging instruments have 29 filters for NIRCam and 10 filters for MIRI. For each recorded image we know the intensity of the photons that made it past the filter being used, but to make a true-color image one would need filters built for a specific redshift. When processing images, the filters can be matched as closely as possible to red, green, and blue while accounting for redshift, as with the first deep field; even there you can see that more distant galaxies appear redder, because they do not all have the same redshift. Other times, as with an image of Jupiter, it makes more sense to map particular wavelength bands to red, green, and blue so that our eyes and minds can make sense of interesting features our eyes wouldn't otherwise see.
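
As a rough sketch of what "matching filters to red, green, and blue while accounting for redshift" could mean in practice (this is not the actual processing pipeline, and the filter list is an approximate subset of NIRCam's central wavelengths):

```python
# For each target visible color, pick the filter whose observed central
# wavelength is closest to that color redshifted by (1 + z).
z = 2.0
filters_nm = [700, 900, 1150, 1400, 1500, 1820, 2000, 2100, 2500, 2770, 3560, 4440]
targets_nm = {"red": 650, "green": 550, "blue": 450}

for color, rest in targets_nm.items():
    shifted = rest * (1 + z)  # where that visible color lands after redshift
    best = min(filters_nm, key=lambda f: abs(f - shifted))
    print(f"{color}: {rest} nm redshifts to {shifted:.0f} nm -> nearest filter ~{best} nm")
```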

You can explore the data yourself and process it however you want, whether just to analyze it or to create your own images.

More on color processing

$\endgroup$
