
Quanta Magazine's "Two Weeks In, the Webb Space Telescope Is Reshaping Astronomy" highlights two submissions to arXiv soon after the first images were released: "Three days later, just minutes before the daily deadline on arxiv.org..." It certainly sounds exciting! The two abstracts are reproduced below.

The first deep field images from the James Webb Space Telescope (JWST) of the galaxy cluster SMACS J0723.3-7327 reveal a wealth of new lensed images at uncharted infrared wavelengths, with unprecedented depth and resolution. Here we securely identify 14 new sets of multiply imaged galaxies totalling 42 images, adding to the five sets of bright and multiply-imaged galaxies already known from Hubble data. We find examples of arcs crossing critical curves with magnification factors of at least 150, allowing detailed community follow-up, including JWST spectroscopy for precise redshift determinations, chemical abundances and detailed internal gas dynamics of very distant, young galaxies. We also detect an Einstein cross candidate only visible thanks to JWST's superb resolution. Our parametric lens model is available at this https URL, and will be regularly updated using additional spectroscopic redshifts. The model reproduces very well the multiple images, and allows for accurate magnification estimates of high-redshift galaxies. This work represents a first taste of the enhanced power JWST will have for lensing-related science.

Exploiting the fundamentally achromatic nature of gravitational lensing, we present a lens model for the massive galaxy cluster SMACSJ0723.3-7323 (SMACS J0723, z=0.388) that significantly improves upon earlier work. Building on strong-lensing constraints identified in prior Hubble Space Telescope (HST) observations, the mass model utilizes 21 multiple-image systems, 16 of which were newly discovered in Early Release Observation (ERO) data from the James Webb Space Telescope (JWST). The resulting lens model maps the cluster mass distribution to an RMS spatial precision of 1.08'' and is publicly available at this https URL. Consistent with previous analyses, our study shows SMACSJ0723.3-7323 to be well described by a single large-scale component centered on the location of the brightest cluster galaxy, however JWST data point to the need for two additional diffuse components west of the cluster, which in turn accounts for all the currently identified multiply imaged systems. A comparison of the galaxy distribution, the mass distribution, and gas distribution in the core of SMACS0723 based on HST, JWST, and Chandra data reveals a fairly concentrated regular elliptical profile along with tell-tale signs of recent merger activity, possibly proceeding aligned closely to our line of sight. The exquisite sensitivity of JWST's NIRCAM reveals in spectacular fashion both the extended intra-cluster-light distribution and numerous star-forming clumps in magnified background galaxies. The high-precision lens model derived here for SMACSJ0723-7323 demonstrates impressively the power of combining HST and JWST data for unprecedented studies of structure formation and evolution in the distant Universe.

While the groups chose different algorithms for title selection ("unscrambling" vs. "precision modeling"), I wonder if they used similar variations on the same technique for the lensing analysis itself?

Question: How do they do this? How do astronomers unscramble or precision-model the undistorted image from an observed image gravitationally lensed by a complex (or at least lumpy) gravitational field? Is there an easy way to explain it as a straightforward algorithm, or is it more like solving a jigsaw puzzle - a long series of guesses and decisions and tests?

  • Maybe you could un-scramble an egg with a computer? :) – Commented Jul 26, 2022 at 0:32
  • Basically, when you scramble an egg, you also change its chemistry. The energy you have input into it by the whisking action, and the fact that parts were in contact which would not have been in contact normally, all that changes its nature. When light goes through a lens, gravitational or otherwise, its nature is not changed. It’s twisted, but it still remains light. – Commented Jul 26, 2022 at 4:14
  • It's a bit of a chicken-and-egg situation. ;) The lensing mass distribution determines the distortion, but we measure that mass distribution from the distortion. So you start with approximate models of the masses and the undistorted field, ray-trace, and gradually refine the models, possibly using a process similar to a multidimensional version of the Remez algorithm. en.wikipedia.org/wiki/Remez_algorithm Maybe some Fourier magic is used, too. – PM 2Ring, Commented Jul 26, 2022 at 17:05
  • It helps that the mass distribution of a cluster of galaxies is mostly from the intracluster medium (gas and dark matter between the galaxies), and its lensing effects are well approximated by a simple spheroid. The individual galaxies are almost insignificant perturbations to that unless the image lies right on a galaxy. With this JWST data one can apparently make out some additional low mass subclumps of galaxies. – eshaya, Commented Jul 26, 2022 at 18:24
  • We can't unscramble an egg, but 7 years ago someone figured out how to uncook egg whites. chemistry-europe.onlinelibrary.wiley.com/doi/abs/10.1002/… – Commented Aug 6, 2022 at 20:39

1 Answer


I work in gravitational lensing, so maybe I can give you an idea.

In those JWST images you have a massive galaxy cluster that is bending the light from objects behind it and acting as a true lens. If you have a tentative model of the mass distribution, including the dark matter halo of the cluster and the cluster galaxies, you can find the best mass distribution model by fitting the model to some observables. These observables are objects with multiple images.

Some objects with multiple images are obvious; just take a detailed look at the images of the cluster yourself. Once you identify these multiple images, you can use them to find the best mass model that reproduces those images. Then you can use this model to predict new images, and see if you find something that you previously missed, so you can improve the lens model again. It is an iterative process. The key here is that if you have an object with multiple images, you know it is the same object: if the galaxy cluster were not there, you would only see one source, and the lens model MUST map the multiple images back to the same source behind the lens.
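To make that fitting loop concrete, here is a minimal Python sketch under toy assumptions: a single singular-isothermal-sphere (SIS) halo and made-up image positions, rather than the papers' real multi-component cluster models. The idea is the same, though: adjust the free parameters until the lens equation sends all images of one galaxy back to (nearly) the same source position.

```python
# Minimal sketch (not the papers' actual pipeline): fit a toy parametric
# lens model so that all observed multiple images of one background galaxy
# map back to the same source position. Real cluster models use many halo
# components and MCMC sampling, but the logic of the fit is the same.
import numpy as np
from scipy.optimize import minimize

def sis_deflection(theta, center, theta_E):
    """Deflection (arcsec) of a singular isothermal sphere at positions theta."""
    d = theta - center
    r = np.hypot(d[..., 0], d[..., 1]) + 1e-12    # avoid division by zero at the centre
    return theta_E * d / r[..., None]             # alpha points radially, magnitude theta_E

# Hypothetical image-plane positions (arcsec) of one multiply imaged galaxy,
# made up purely for illustration.
images = np.array([[ 12.1,   3.4],
                   [ -9.8,   5.1],
                   [  1.3, -13.0]])

def source_plane_scatter(params):
    """Lens equation beta = theta - alpha(theta): a good model sends all
    images of the same galaxy to (nearly) the same source position beta."""
    cx, cy, theta_E = params
    beta = images - sis_deflection(images, np.array([cx, cy]), theta_E)
    return np.sum((beta - beta.mean(axis=0)) ** 2)

best = minimize(source_plane_scatter, x0=[0.0, 0.0, 10.0], method="Nelder-Mead")
print("Best-fit centre (arcsec) and Einstein radius:", best.x)
```

The real models have a cluster-scale halo plus a small halo for each member galaxy, so many more free parameters, but the quantity being minimized is still this kind of source-plane (or equivalently image-plane) scatter of the multiple-image systems.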

Once you have a precise enough model, you basically know how the mass distribution of the cluster bends the light. Take any light ray coming to you through the cluster and you can know very precisely where it came from behind the lens. A "distorted" source looks distorted just because different regions of this source are in slightly different positions and pass through different regions of the "lens". But now you know the mass distribution, so you can use it to "undistort" the source and see how it really looks behind the cluster.
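As an illustration of that "undistorting" step, the sketch below reuses the toy sis_deflection() model and the hypothetical best-fit parameters from the previous snippet (again, not the actual JWST pipeline): every pixel of an observed cutout is shot back through the lens equation, and its flux is accumulated at the predicted source-plane position.

```python
# Sketch of "undistorting" once the mass model is fixed: shoot every pixel of
# an observed image-plane cutout back through the lens equation and accumulate
# its brightness at the predicted source-plane position. Reuses the toy
# sis_deflection() and best-fit parameters from the previous snippet; `observed`
# stands in for a small 2-D image cutout (a numpy array of pixel fluxes).
import numpy as np

def delens(observed, pix_scale, center, theta_E, src_shape, src_scale):
    ny, nx = observed.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    # angular position of every image-plane pixel, relative to the cutout centre
    theta = np.stack([(xs - nx / 2) * pix_scale,
                      (ys - ny / 2) * pix_scale], axis=-1)
    beta = theta - sis_deflection(theta, center, theta_E)   # lens equation
    # bin the observed flux onto a source-plane grid
    src = np.zeros(src_shape)
    ix = np.round(beta[..., 0] / src_scale + src_shape[1] / 2).astype(int)
    iy = np.round(beta[..., 1] / src_scale + src_shape[0] / 2).astype(int)
    ok = (ix >= 0) & (ix < src_shape[1]) & (iy >= 0) & (iy < src_shape[0])
    np.add.at(src, (iy[ok], ix[ok]), observed[ok])
    return src

# Hypothetical usage with the earlier fit:
# src = delens(cutout, 0.03, best.x[:2], best.x[2], (200, 200), 0.01)
```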

Gravitational lensing in galaxy clusters works almost exactly the same way as conventional optics; that's why it is relatively easy to make lens models. If you have more questions, I am happy to answer them.

Edit added from comments

Both papers use a parametric approach (using physical models for the mass distributions) and the same software (LENSTOOL). Now, talking about multiple images: these are first found just (believe it or not) by eye. Then you iteratively use the lens model to find new images (complemented with spectroscopy if available). But basically, both papers do the same thing; they just use the new JWST data to find new multiple-image systems and improve the lens models. (More multiple images -> a better model.) The ability to find new systems depends just on eye sharpness at the moment.
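For a rough feel of what "parametric model plus optimization" means in practice, here is a toy Metropolis MCMC over the same SIS parameters as above. This is only an illustration of the idea; it is not LENSTOOL's actual implementation, which handles many halo components, ellipticities, redshift scaling, and much more.

```python
# Toy Metropolis MCMC over the same SIS parameters, to illustrate the kind of
# parametric optimization that codes like LENSTOOL perform (this is NOT
# LENSTOOL itself): propose new halo parameters, score them by how tightly
# every multiple-image system collapses onto a single source position, and
# keep a chain of accepted models.
import numpy as np

rng = np.random.default_rng(0)

def chi2(params, image_systems, sigma=0.1):
    """Sum of source-plane scatters over all multiple-image systems."""
    cx, cy, theta_E = params
    total = 0.0
    for imgs in image_systems:                        # each imgs: (N_i, 2) array
        beta = imgs - sis_deflection(imgs, np.array([cx, cy]), theta_E)
        total += np.sum((beta - beta.mean(axis=0)) ** 2) / sigma ** 2
    return total

def metropolis(image_systems, start, n_steps=5000, step=0.2):
    chain = [np.asarray(start, dtype=float)]
    current = chi2(chain[-1], image_systems)
    for _ in range(n_steps):
        trial = chain[-1] + rng.normal(scale=step, size=3)
        c = chi2(trial, image_systems)
        # accept if better, or with probability exp(-delta_chi2 / 2) otherwise
        if c < current or rng.random() < np.exp((current - c) / 2):
            chain.append(trial)
            current = c
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)    # samples of (cx, cy, theta_E)

# Hypothetical usage, reusing `images` and sis_deflection() from the first sketch:
# chain = metropolis([images], start=[0.0, 0.0, 10.0])
```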

  • Certainly the ray-tracing calculations through a given model of mass distribution are not hard (something a little bit like this), but so far all you've said is start with a guess based on what you are sure of, then refine it, which is how I learned to implement least-squares fitting in high school. Can you add to this a few specifics? Which objects were obvious at first? How did these two groups find such different ways to solve the same problem? Thanks, and Welcome to Stack Exchange! – uhoh, Commented Dec 1, 2022 at 22:12
  • They both use a parametric approach (using physical models for the mass distributions) and the same software (LENSTOOL). Now, talking about multiple images: these are first found just (believe it or not) by eye. Then you iteratively use the lens model to find new images (complemented with spectroscopy if available). But basically, both papers do the same thing; they just use the new JWST data to find new multiple-image systems and improve the lens models. (More multiple images -> a better model.) The ability to find new systems depends just on eye sharpness at the moment. – Commented Dec 5, 2022 at 14:44
  • I cannot explain all the details because it would be too many characters. But if you read the sections where they explain what they do with the lens models, you may find your answer. They say that they use the previous lens model (or add a new mass component) and they just found new multiple images in JWST data. HOW multiple images are found is never fully explained. People just learn how to do it from other expert people and they almost never explicitly explain the procedures in the papers. – Commented Dec 5, 2022 at 14:51
  • Okay I've got the picture (so to speak) thanks! You might consider moving this insight back into the answer post to ensure it remains a permanent part of the answer (some future readers will skip comments, and comments are considered temporary and subject to possible future deletion without warning). Certainly the bit about both using LENSTOOL and a link to it would be nice. – uhoh, Commented Dec 5, 2022 at 20:44
  • Sure I will. LENSTOOL is the code used to produce the lens models. You basically tell it where and what mass distributions to put in your model, and it optimizes the best parameters of these models using an MCMC algorithm. Once you have a model you are happy with, LENSTOOL allows you to produce "images" of the mass distributions, calculate the deflection angles produced by the lens, estimate light arrival times, calculate magnifications, and a lot of other things. projets.lam.fr/projects/lenstool/wiki This is the project's main page. It is not that good, but this is it. – Commented Dec 6, 2022 at 20:33

