James Webb Telescope

diabol1k

Ars Scholae Palatinae
1,441
Moderator
Some micrometeoroid updates -- https://www.space.com/james-webb-space- ... oid-damage
  • six impacts since launch, about the expected cadence
  • one impact (in May, reported in June) was more damaging than expected (either b/c it was larger/more energetic, or JWST is more susceptible to micrometeoroid damage than modeled)
  • net net, the May impact has been worked around via realigning the other 17 segments

One of the mitigations mentioned is restricting how often the telescope points in the direction of its orbital motion -- would that limit the amount of sky visible over the long term? Or would it just shift when in an orbit (of the sun) the telescope could point at a part of the sky?
 

halse

Ars Praefectus
3,746
Subscriptor
Oldest galaxies, about 300-400 million years after Big Bang, observed by JWST:
The first few hundred Myrs at z > 10 mark the last major uncharted epoch in the history of the Universe, where only a single galaxy (GNz11 at z ≈ 11) is currently spectroscopically confirmed. Here we present a search for luminous z > 10 galaxies with JWST/NIRCam photometry spanning ≈ 1−5 μm and covering 49 arcmin² from the public JWST Early Release Science programs (CEERS and GLASS). Our most secure candidates are two M_UV ≈ −21 systems: GLASS-z13 and GLASS-z11. These galaxies display abrupt ≳ 2.5 mag breaks in their spectral energy distributions, consistent with complete absorption of flux bluewards of Lyman-α that is redshifted to z ≈ 13 and z ≈ 11. Lower redshift interlopers such as dusty quiescent galaxies with strong Balmer breaks would be comfortably detected at > 5σ in multiple bands where instead we find no flux. From SED modeling we infer that these galaxies have already built up ∼ 10⁹ solar masses in stars over the ∼ 300−400 Myr after the Big Bang. The brightness of these sources enables morphological constraints. Tantalizingly, GLASS-z11 shows a clearly extended exponential light profile, potentially consistent with a disk galaxy of r_50 ≈ 0.7 kpc. These sources, if confirmed, join GNz11 in defying number density forecasts for luminous galaxies based on Schechter UV luminosity functions, which require a survey area > 10× larger than we have studied here to find such luminous sources at such high redshifts. They extend evidence from lower redshifts for little or no evolution in the bright end of the UV luminosity function into the cosmic dawn.
https://arxiv.org/pdf/2207.09434.pdf
 
So if I understood that last part correctly, not only has JWST found some of the earliest galaxies detected (which was expected), but it’s already found some outside expected constraints (too many bright stars in them)? So unless it just _happened_ to be pointed at a spot extra dense in bright-ass galaxies (more imaging time needed to confirm?), we have some more modeling to do to figure out the very early universe?
 

parejkoj

Ars Centurion
389
Subscriptor
That's quite a paper (some friends of mine on it, too!). Finding two such UV-bright galaxies at z>10 in this field means there must be a lot more such objects to be found by JWST. Their constraints seem pretty conservative: unless there's something much worse about the JWST backgrounds than expected, this feels pretty robust.

@smartalco: Whether these galaxies are bright due to many moderately UV-bright stars, or a few *very* bright stars (we don't know if these would host the elusive pop-III stars, which some models suggest could be hundreds of solar masses), we don't know. We would expect galaxies at that era to be UV-bright in general, because they'd contain a lot of young, hot, blue stars that produce a lot of UV. However, we don't have a lot of constraints on how exactly star formation worked at such high redshifts: there are plenty of models, but a lot of unexplored physics (e.g. there are no heavy elements to help cool gas clouds, so the physics of such clouds collapsing into stars can be quite different from what it is today). These two make for three total z>10 galaxies found so far, and they're all quite bright, and you expect bright galaxies to be the least common (the number of galaxies at a given luminosity is a steep power-law function, with many more fainter galaxies than brighter ones), so this helps us put constraints on how star and galaxy formation happens in the very early universe.
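If it helps picture that "steep power law" bit, here's a toy sketch of a Schechter luminosity function in Python. The phi*, M*, and alpha values are made-up placeholders just to show the shape, not fits to any real z>10 data.

```python
# Toy Schechter UV luminosity function: power-law faint end, exponential
# cutoff at the bright end. All parameter values below are illustrative only.
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Number density per magnitude, dN/dM/dV, for the Schechter function."""
    x = 10 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

M = np.arange(-23, -17, 0.5)
phi = schechter_mag(M, phi_star=1e-4, M_star=-20.0, alpha=-2.0)
for m, p in zip(M, phi):
    print(f"M_UV = {m:5.1f}  phi = {p:.3e} per mag per Mpc^3")
# Bright galaxies (M_UV ~ -21 and brighter) come out exponentially rarer,
# which is why finding two of them in ~49 arcmin^2 is surprising.
```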

I'd be very curious if anyone has done a similar analysis of the strongly lensed objects in the SMACS field. Lensed sources are also magnified, so you can find fainter/further objects than you could in a non-lensed field like the one in this paper. I would expect that SMACS field has several detectable z~>10 sources due to lensing, and possibly fainter ones which can definitely help with population statistics.
 

parejkoj

Ars Centurion
389
Subscriptor
More on JWST high-z galaxies, from the front page:

https://arstechnica.com/science/2022/07 ... k-in-time/

The preprint is pretty neat. They found 44 new candidate galaxies at z>8, and 6 candidates at z>12, including the two described above. Plus one more at z~16 with what looks like a fairly robust photometric redshift (the chi2 distribution is very narrow, unlike many of the other candidates). Unfortunately, most of these will be a challenge to get spectra of, being fainter than 26th magnitude in most bands, so confirmation of their redshifts may be a long time coming. That's the problem of having imaging from a space-based 6m telescope: you really need a ~20m class space-based telescope to do spectroscopic followup!

In table two, the preprint also answers a question I had upthread: this JWST observation of the SMACS cluster field gets to magnitude ~28.4 (5 sigma point source depth) in most JWST bands, which is fainter than the Hubble Deep Field (reddest band only ~800nm), not quite as faint as the HUDF (28.8 in near IR), and not as faint as the XDF reprocessing (~29.4 in near IR). That's quite impressive for a few hours of observing; truly deep JWST fields (>days of observing) are planned, and should definitely get fainter than 30th magnitude in IR, which will be fascinating.
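For a sense of scale on those depths, magnitudes are logarithmic, so each magnitude is a factor of ~2.5 in flux. A quick bit of arithmetic (nothing JWST-specific here):

```python
# Convert magnitude differences between the survey depths above into flux ratios.
def flux_ratio(mag_bright, mag_faint):
    return 10 ** (0.4 * (mag_faint - mag_bright))

print(flux_ratio(28.4, 29.4))  # ~2.5x: the XDF reaches ~2.5x fainter sources
print(flux_ratio(28.4, 30.0))  # ~4.4x: a >30th-magnitude deep field vs. this one
```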

http://arxiv.org/abs/2207.12356
 

parejkoj

Ars Centurion
389
Subscriptor
(Would cropping them remove important information from the image?)

What do you mean by "cropping"? There are ways to deconvolve the PSF from the images, but they absolutely do degrade the data. One could use deconvolved images for public releases, but that would remove the obvious "this image came from this specific telescope" features (you can identify the telescope that took an image by the number and shape of the diffraction spikes), and would result in a blurrier image that doesn't showcase the telescope's full capabilities.
 

Shavano

Ars Legatus Legionis
59,866
Subscriptor
I hesitate to speculate on whether it would make the images blurrier or noisier. I don't see why either would be the case. Mathematically, there must exist a perfect inverse filter that would correct the image without any distortion at all, right? Or is that wrong? (Set aside for the moment the inevitable fact that we could at best only approximate it, and that it might be extremely compute-intensive.) Wouldn't you be able to produce images without visible spikes that are very nearly as good as, if not better than, the raw images?

It might require sending precise photon counts for each pixel to Earth though. I expect any compression algorithm is going to cement the distortion in a way that's mathematically unrecoverable.
 

Dmytry

Ars Legatus Legionis
10,309
I hesitate to speculate on whether it would make the images blurrier or noisier. I don't see why either would be the case. Mathematically, there must exist a perfect inverse filter that would correct the image without any distortion at all, right? Or is that wrong? (Set aside for the moment the inevitable fact that we could at best only approximate it, and that it might be extremely compute-intensive.) Wouldn't you be able to produce images without visible spikes that are very nearly as good as, if not better than, the raw images?

It might require sending precise photon counts for each pixel to Earth though. I expect any compression algorithm is going to cement the distortion in a way that's mathematically unrecoverable.
Well, the photon counts themselves are noisy, Poisson-distributed. When you apply a deconvolution filter (to undo any kind of blurriness, i.e. light not ending up where it should but near where it should), in the frequency domain that's a high-pass filter: it amplifies high frequencies, i.e. shot noise. This limits how much you can "enhance" out-of-focus images, for example.

As far as computational issues go, light is linear and the sensor should be calibrated to be linear, so assuming the sensors are not saturating or the like, the effect of the aperture is a convolution (assuming a single spectral band for simplicity). In the frequency domain, that is a multiplication. So you convert the image to the frequency domain, multiply by the inverse of the PSF's transform, and convert back to the spatial domain. That is your basic deconvolution, and it costs little more than two fast Fourier transforms, so it's not very computationally expensive. It doesn't work right if the sensor is saturating, though, because that breaks linearity.
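For anyone curious what that looks like in practice, here's a minimal numpy sketch. The Gaussian PSF, the toy star field, and the regularization epsilon are all illustrative placeholders, not anything JWST-specific; the epsilon is there precisely because the naive inverse filter blows up the high-frequency shot noise I mentioned.

```python
# Minimal frequency-domain deconvolution sketch. Everything here is a toy
# assumption (Gaussian PSF, random point sources), not a real instrument model.
import numpy as np

def gaussian_psf(shape, sigma):
    """Toy PSF: a normalized 2D Gaussian centered in the frame."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def deconvolve(blurred, psf, eps=1e-3):
    """Divide by the PSF's transform (the 'inverse filter').

    eps keeps us from dividing by near-zero frequencies, where the inverse
    filter would amplify shot noise without bound -- the high-pass problem
    described above.
    """
    img_f = np.fft.fft2(blurred)
    psf_f = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    restored_f = img_f * np.conj(psf_f) / (np.abs(psf_f) ** 2 + eps)
    return np.real(np.fft.ifft2(restored_f))

# Usage: blur a toy "star field", add shot noise, then try to undo the blur.
rng = np.random.default_rng(0)
truth = np.zeros((256, 256))
truth[rng.integers(0, 256, 20), rng.integers(0, 256, 20)] = 1000.0
psf = gaussian_psf(truth.shape, sigma=3.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = rng.poisson(np.clip(blurred, 0, None)).astype(float)  # Poisson shot noise
restored = deconvolve(noisy, psf)
```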

If the sensor is saturating, I guess the solution would be to treat it as a nonlinear optimization problem: solve for the "true" image given a model of the diffraction spikes and the sensor clamping. That also shouldn't be too computationally expensive, probably no more than minutes per image on a GPU. You could just use (slightly nonlinear) least squares. I think it would still increase noise, because it's fundamentally a high-pass filter.

There are more clever things you could do to account for the noise: you could e.g. compute the most probable input image given some prior probability distribution over input images and the photon noise. That tends to be done with neural networks these days, because neural networks are able to represent a prior probability distribution over images. However, that would not be very useful scientifically, because it could outright hallucinate details that aren't there.

edit: I don't expect the results to be good, though. The parts of the image obscured by a diffraction spike have the shot noise from the light of the obscured object plus the shot noise from the diffraction spike. That intrinsically results in a loss of information.

The standard deviation of a Poisson distribution is the square root of the mean. Let's suppose a pixel gets a mean of 100 photons from the distant object, plus a mean of 800 photons from the diffraction spike. The total mean is 9x greater, so the noise is 3x greater, and that noise stays there even after you subtract the mean value of the spike. In this example, instead of a standard deviation of about 10% of the signal, as it would be without the spike, you get about 30% where the spike was subtracted. It looks noisier visually.
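A toy check of that arithmetic (numbers straight from the example above, nothing JWST-specific):

```python
# A pixel seeing 100 photons from the target plus 800 from a diffraction spike,
# versus 100 photons alone, over many trials.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

clean = rng.poisson(100, n)                # target only
spiked = rng.poisson(100 + 800, n) - 800   # target + spike, mean spike subtracted

print(clean.std())    # ~10: about 10% of the 100-photon signal
print(spiked.std())   # ~30: about 30% of the signal, even after subtraction
```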
 

MilleniX

Ars Tribunus Angusticlavius
6,865
Subscriptor++
For actual scientific imaging purposes, they've said that they can use longer-duration pointing sessions at a target object/field to offset and/or rotate the telescope slightly, enough that they get more/all of the field not covered by the spikes in at least some read-outs from the sensor (i.e. 'exposures'), which lets them solve the inverse problem much more cleanly. Keep in mind that almost everything we've seen so far has been from very short sessions, while some of the scheduled observations have it focusing on particular targets for hours to days.
 

Ecmaster76

Ars Tribunus Angusticlavius
14,926
Subscriptor
Physicist trolls James Webb Space Telescope fans with a photo of a chorizo sausage
On July 31st, Étienne Klein, the director of France’s Alternative Energies and Atomic Energy Commission, shared an image he claimed the JWST captured of Proxima Centauri, the nearest-known star to the sun.

"It was taken by the James Webb Space Telescope,” Klein told his more than 91,000 Twitter followers. “This level of detail... A new world is unveiled every day." Thousands of people took the post at face value and retweeted it without comment.

It's a decent fake picture and an amusing troll.

I wonder how well the real deal would compare in detail
 

Dmytry

Ars Legatus Legionis
10,309
For actual scientific imaging purposes, they've said that they can use longer-duration pointing sessions at a target object/field to offset and/or rotate the telescope slightly, enough that they get more/all of the field not covered by the spikes in at least some read-outs from the sensor (i.e. 'exposures'), which lets them solve the inverse problem much more cleanly. Keep in mind that almost everything we've seen so far has been from very short sessions, while some of the scheduled observations have it focusing on particular targets for hours to days.
Yeah, if you can rotate it a bit, that would do the trick: you can use pixel values from the exposures where they aren't noisified by the spike.
 

Shavano

Ars Legatus Legionis
59,866
Subscriptor
Physicist trolls James Webb Space Telescope fans with a photo of a chorizo sausage
On July 31st, Étienne Klein, the director of France’s Alternative Energies and Atomic Energy Commission, shared an image he claimed the JWST captured of Proxima Centauri, the nearest-known star to the sun.

"It was taken by the James Webb Space Telescope,” Klein told his more than 91,000 Twitter followers. “This level of detail... A new world is unveiled every day." Thousands of people took the post at face value and retweeted it without comment.

It's a decent fake picture and an amusing troll.

I wonder how well the real deal would compare in detail

How far away could you spot the chorizo though? That's the true measure of Webb's power.
 

parejkoj

Ars Centurion
389
Subscriptor
I wonder how well the real deal would compare in detail

Not even remotely. Proxima Centauri is about 1 milliarcsecond (mas) in size from Earth. JWST's resolution is at best ~40mas in a 1 micron band (1.22 * lambda / diameter), so it absolutely cannot resolve Proxima Cen. NIRCam has 31mas pixels in the short channel (undersampled; that is not a regime I'm comfortable working in: I mostly work with ground based data, where you try very hard to be Nyquist sampled so you can reconstruct the PSF), so even the stars with the largest apparent size (Mira, Betelgeuse, R Doradus are all about 50mas) would still cover less than 2 pixels.
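If anyone wants to check the arithmetic, the diffraction limit is just 1.22 * lambda / D converted to an angle. A quick sketch using JWST's nominal 6.5 m aperture:

```python
# Diffraction-limited resolution theta ~ 1.22 * lambda / D, in milliarcseconds.
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds

def diffraction_limit_mas(wavelength_m, aperture_m):
    return 1.22 * wavelength_m / aperture_m * RAD_TO_MAS

print(diffraction_limit_mas(1.0e-6, 6.5))   # ~39 mas at 1 micron
print(diffraction_limit_mas(2.0e-6, 6.5))   # ~77 mas at 2 microns
```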
 

Ecmaster76

Ars Tribunus Angusticlavius
14,926
Subscriptor
I wonder how well the real deal would compare in detail

Not even remotely. Proxima Centauri is about 1 milliarcsecond (mas) in size from Earth. JWST's resolution is at best ~40mas in a 1 micron band (1.22 * lambda / diameter), so it absolutely cannot resolve Proxima Cen. NIRCam has 31mas pixels in the short channel (undersampled; that is not a regime I'm comfortable working in: I mostly work with ground based data, where you try very hard to be Nyquist sampled so you can reconstruct the PSF), so even the stars with the largest apparent size (Mira, Betelgeuse, R Doradus are all about 50mas) would still cover less than 2 pixels.
Thank you for the excellent explanation
 

parejkoj

Ars Centurion
389
Subscriptor
A few highlights from a presentation by Marcia Rieke on JWST commissioning I'm watching now (as part of the Rubin Observatory Project & Community Workshop this week):

  1. "Galaxies photobomb every image." How can you tell? People in the community knew the "chorizo" image was fake because there weren't galaxies in the background.
  2. Something that looks like a scratch on the Southern Ring Nebula image is an edge on galaxy. "A new class of vermin on the sky: galaxies!"
  3. A jump in the middle of a transit observation was caused by one mirror segment moving during the observation, probably due to a sudden release of stress from cooldown. They expect this to continue to happen for a while, as the whole observatory settles into a stable state.
  4. >20 preprints so far on high redshift sources from the early release images.
  5. 7-8 papers in arXiv on the spectra from the lensing field, including redshift ~8 galaxies.
  6. JWST is ready and able to observe moving targets, including near-Earth asteroids, but can't do rapid followup: only 40% of the sky is visible at any given time, and there are only two opportunities per day to upload observation sequences.
    • NIRSpec slit size (1.5") means you can easily follow up transient sources that are only marginally well localized (e.g. something like GW170817), and near/mid-IR spectra of gravitational wave electromagnetic counterparts (one of the current hot topics in transient astronomy) provide really useful data on source composition.
    • Limited to 8 "disruptive" Target of Opportunity (ToO) observations per year, and such ToO observations have to pay a 30 minute "penalty" (i.e., they have to be "worth" more time than they actually take to observe, because of the disruption they cause to regular observing).
  7. Image quality is good enough that NGC 7320 (from Stephan's Quintet) is resolved into individual stars. This was not expected, and would not have happened if they'd only met the minimum spec.
  8. Something I didn't know re: the naming controversy: when the name was originally changed to "James Webb Space Telescope" circa 2002, the international contributors (and there are many) were not consulted and didn't have any input on the name.
 

davidtheweb

Ars Scholae Palatinae
1,213
I wonder how well the real deal would compare in detail

Not even remotely. Proxima Centauri is about 1 milliarcsecond (mas) in size from Earth. JWST's resolution is at best ~40mas in a 1 micron band (1.22 * lambda / diameter), so it absolutely cannot resolve Proxima Cen. NIRCam has 31mas pixels in the short channel (undersampled; that is not a regime I'm comfortable working in: I mostly work with ground based data, where you try very hard to be Nyquist sampled so you can reconstruct the PSF), so even the stars with the largest apparent size (Mira, Betelgeuse, R Doradus are all about 50mas) would still cover less than 2 pixels.


I suppose that my question about "what if we put up 10 more JWSTs and linked them all in an interferometry setup?" has been mostly answered here.

But perhaps I'm being too defeatist on this. Interferometry undoubtedly has benefits for astronomy, so what would they be in my question's scenario, and could any of those benefits apply to observing Proxima Centauri?
 

parejkoj

Ars Centurion
389
Subscriptor
But perhaps I'm being too defeatist on this. Interferometry undoubtedly has benefits for astronomy, so what would they be in my question's scenario, and could any of those benefits apply to observing Proxima Centauri?

Optical/IR interferometry is really hard.

CHARA is probably the most productive optical interferometric observatory right now, and it achieves 0.2-0.5 mas resolution for bright targets (magnitude ~6, which is much brighter than Proxima Cen). Note that that's not the resolution of an image in the way you conventionally think of it; interferometry gives you an image in Fourier space, so how that translates to a real-space image is complicated and doesn't generally result in the kind of pretty picture people want from astronomy.
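Rough sanity check on those numbers: an interferometer's angular resolution goes roughly like lambda / baseline. CHARA's longest baseline is about 330 m if I remember right (treat that as approximate):

```python
# Interferometer resolution ~ lambda / B, converted to milliarcseconds.
# The 330 m baseline is my recollection for CHARA, not a quoted spec.
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds

def interferometer_resolution_mas(wavelength_m, baseline_m):
    return wavelength_m / baseline_m * RAD_TO_MAS

print(interferometer_resolution_mas(0.7e-6, 330.0))   # ~0.4 mas in the optical
print(interferometer_resolution_mas(1.6e-6, 330.0))   # ~1 mas in the near-IR (H band)
```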

Proxima Cen was resolved by ESO's VLTI in the early 2000s, with a measured disk size of 1.02 ± 0.08 mas, so it has been done. But that's a "how much larger than a point source is this" type of measurement, not a "what does the surface look like", which I think is what you're referring to. As I understand it, the VLTI has had a lot of problems since then, with not a lot of useful data produced. The Keck interferometer was shut down years ago due to cost. Wikipedia has a decent list of optical/IR interferometers; you can see that most of them have closed. I was so hyped about the LBTI when I was in grad school, and I'm not aware of any significant results from it.

The above is just one of the reasons why people who say "why don't they just launch a bunch of small telescopes to do the work of JWST/Rubin/whatever" (e.g. to use smaller, cheaper launch platforms, or to avoid the impact of satellite swarms on ground-based observatories) clearly don't understand the engineering difficulties of that kind of proposal. There are serious proposals for space-based optical interferometers, but they're not cheap and there are a lot of reasons why we don't have any yet.
 

Dr Nno

Ars Praefectus
4,526
Subscriptor++
https://www.sciencenews.org/article/james-webb-space-telescope-first-exoplanet-image

Just directly imaged an exoplanet. I know it's been done before, but still pretty amazing.

How far is its host-star from earth?

The star is 385 LY away. The exoplanet itself is more than 100 AU from it.
 

halse

Ars Praefectus
3,746
Subscriptor
For the first time, astronomers have used NASA’s James Webb Space Telescope to take a direct image of a planet outside our solar system. The exoplanet is a gas giant, meaning it has no rocky surface and could not be habitable.
The image, as seen through four different light filters, shows how Webb’s powerful infrared gaze can easily capture worlds beyond our solar system, pointing the way to future observations that will reveal more information than ever before about exoplanets.

pictures and more at
https://blogs.nasa.gov/webb/
 

Frennzy

Ars Legatus Legionis
85,829
Subscriptor++
For the first time, astronomers have used NASA’s James Webb Space Telescope to take a direct image of a planet outside our solar system. The exoplanet is a gas giant, meaning it has no rocky surface and could not be habitable.
The image, as seen through four different light filters, shows how Webb’s powerful infrared gaze can easily capture worlds beyond our solar system, pointing the way to future observations that will reveal more information than ever before about exoplanets.

pictures and more at
https://blogs.nasa.gov/webb/

Might want to read several posts directly above yours. :D
 

parejkoj

Ars Centurion
389
Subscriptor
https://www.sciencenews.org/article/james-webb-space-telescope-first-exoplanet-image

Just directly imaged an exoplanet. I know it's been done before, but still pretty amazing.

How far is its host-star from earth?

The star is 385 LY away. The exoplanet itself is more than 100 AU from it.

:eek:

How do you even tell it's orbiting that far away from the star?

You compare the proper motion of the star and the planet: if they're moving through the sky along the same path, they're very likely a bound system. From the original VLT-SPHERE measurements, they only needed ~6 months of data to distinguish it from a background source, which would have moved measurably differently on the sky. The orbital period at ~100 AU is around 600 years in that system, so we wouldn't be watching the orbit evolve. The new JWST observations help constrain the orbital parameters a bit, but from the JWST paper (Carter et al., 2022), there's still quite a range of possible orbits.
 

parejkoj

Ars Centurion
389
Subscriptor
what does proper motion have to do with milliarcseconds? They're orthogonal!

I'm not sure what you mean by "orthogonal" in this context. Proper motion is measured in milliarcseconds per year (mas/yr), with HIP 65426 having ~30mas/yr per Gaia EDR3. Given position uncertainties of a few mas for the companion source, you can rule out it being a background source with a few months of data. Figure 2 of Chauvin et al. (2017) illustrates this nicely.

If you're thinking of parallax, that's given in milliarcseconds on the Earth orbit baseline. HIP 65426's parallax of 9.3mas (also from Gaia) makes it only a few times larger than the astrometric uncertainty in Chauvin et al. (2017), so a 6 month parallax comparison is not quite enough to rule out the source being a background object (and thus having a smaller parallax). The joint relative motion of the star and companion is what clinches the case here, and that is dominated by the proper motion of the system.
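To put rough numbers on it (the 2 mas error below is just my stand-in for "a few mas"):

```python
# Back-of-the-envelope version of the comparison above. The HIP 65426 numbers
# are the ones quoted in this thread; the per-epoch error is an assumption.
proper_motion_mas_per_yr = 30.0   # Gaia EDR3, per the post
parallax_mas = 9.3                # Gaia, per the post
error_mas = 2.0                   # assumed per-epoch astrometric uncertainty

pm_shift_6_months = proper_motion_mas_per_yr * 0.5   # ~15 mas of sky motion
print(pm_shift_6_months / error_mas)   # ~7.5x the error: easy to detect
print(parallax_mas / error_mas)        # ~4.7x the error, and a background star
                                       # has some parallax too, so less decisive
```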