7
$\begingroup$

Interferometry is among the best ways (if not the best way!) to form an image of a very distant object.

Recently a picture of the black hole at the center of M87 was released. It is the result of data collected by the Event Horizon Telescope, a network of radio telescopes spread across the world, working together like a single Earth-sized telescope. The picture is not exactly cutting-edge high definition, but it is still very surprising and, in a way, detailed enough, considering that M87 is about 53.5 million light years away... and this is where I arrive at Pluto, which is just around 6 to 7 billion kilometers from us.

If we used an interferometer, perhaps the same size as the EHT (or even a smaller one, the size of a continent), and pointed all the telescopes at Pluto, then we should get a picture with a resolution at least as high as the Hubble Space Telescope's, though probably not as high as the pictures from the New Horizons spacecraft, which made a direct flyby of Pluto... right?

If so then why don't we use interferometry to take pictures of Pluto from Earth?
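To put rough numbers on the comparison in my question, here is a back-of-envelope sketch using the Rayleigh diffraction limit, with round values I am assuming (Pluto's diameter ~2,377 km, distance ~5.9 billion km, Hubble's 2.4 m mirror at 550 nm, Earth's diameter ~12,700 km):

```python
import math

def resolution_rad(wavelength_m, aperture_m):
    """Rayleigh diffraction limit: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

RAD_TO_ARCSEC = 180 / math.pi * 3600

# Pluto's angular size as seen from Earth (diameter ~2377 km, distance ~5.9e9 km)
pluto_angle = 2377e3 / 5.9e12            # radians
print(f"Pluto subtends        {pluto_angle * RAD_TO_ARCSEC:.3f} arcsec")

# Hubble at 550 nm with its 2.4 m mirror
hst = resolution_rad(550e-9, 2.4)
print(f"HST limit             {hst * RAD_TO_ARCSEC:.3f} arcsec")

# Hypothetical optical interferometer with an Earth-sized (~12,700 km) baseline
earth = resolution_rad(550e-9, 1.27e7)
print(f"Earth-baseline limit  {earth * RAD_TO_ARCSEC:.2e} arcsec")
```

So Pluto (~0.08 arcsec) spans only a couple of Hubble resolution elements, while an Earth-sized optical baseline would in principle resolve it enormously finer, which is what prompts the question.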

$\endgroup$
2
  • $\begingroup$ Optical interferometry actually exists, by the way. It is very widely used. But, in interferometry, you are using only collimated beams, which are very narrow. So there is no "picture" in those. You can use it for measuring distances and so on. Quite useful. $\endgroup$
    – sanaris
    Commented Feb 26, 2020 at 2:19
  • $\begingroup$ But in EHT, they are not just using interferometry. A lot of other techniques are used. $\endgroup$
    – sanaris
    Commented Feb 26, 2020 at 2:20

2 Answers

7
$\begingroup$

Radio interferometry can combine observations over very large baselines. But optical interferometry cannot. According to a list of interferometry instruments on wikipedia, the largest baseline for optical measurements is less than a kilometer. We can't take optical measurements with continent-sized instruments.

Then if you drop down to radio, where the instruments do have that capability, I think you'll find Pluto is quite dim (it's not a radio source, and there is no strong radio emission it could reflect to us). There's no radio signal from Pluto that can be imaged.
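To get a feel for how faint Pluto's own thermal radio emission is, here is a rough Rayleigh-Jeans estimate, under assumptions of my own (a ~40 K surface, the EHT's 1.3 mm observing wavelength, and the same size/distance figures as in the question); treat it as an order-of-magnitude sketch only:

```python
import math

# Rayleigh-Jeans estimate of Pluto's thermal flux density:
#     S = (2 k T / lambda^2) * Omega
# where Omega is the solid angle Pluto subtends from Earth.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 40.0                    # assumed surface temperature, K
wavelength = 1.3e-3         # EHT observing wavelength, m

theta = 2377e3 / 5.9e12                 # angular diameter, radians
omega = math.pi * (theta / 2) ** 2      # solid angle, steradians

flux_jy = 2 * k_B * T / wavelength**2 * omega / 1e-26   # 1 Jy = 1e-26 W/m^2/Hz
print(f"~{flux_jy * 1e3:.0f} mJy at 1.3 mm")
```

A few millijanskys at millimetre wavelengths, and since the Rayleigh-Jeans brightness falls as $1/\lambda^2$, far less still at the longer wavelengths where continent-scale interferometry is routine.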

From a page on optical interferometry:

Interferometers are seen by most astronomers as very specialized instruments, as they are capable of a very limited range of observations. It is often said that an interferometer achieves the effect of a telescope the size of the distance between the apertures; this is only true in the limited sense of angular resolution. The combined effects of limited aperture area and atmospheric turbulence generally limit interferometers to observations of comparatively bright stars and active galactic nuclei.

$\endgroup$
4
  • 1
    $\begingroup$ Might want to state Pluto is "dim" because it doesn't have many radio emissions (else one might infer you meant light). $\endgroup$ Commented Apr 10, 2019 at 20:19
  • 1
    $\begingroup$ "cannot" → "cannot yet" or "cannot currently" $\endgroup$
    – uhoh
    Commented Apr 11, 2019 at 7:06
  • 1
    $\begingroup$ Second to what @uhoh comment: can you clarify why it cannot? Is this a fundamental physical limitation, or is it an engineering limitation? $\endgroup$
    – gerrit
    Commented Apr 11, 2019 at 11:38
  • 1
    $\begingroup$ astronomy.stackexchange.com/questions/29082/… $\endgroup$
    – BowlOfRed
    Commented Apr 11, 2019 at 20:26
5
$\begingroup$

Interferometry at infrared and shorter wavelengths is more difficult than at microwave/radio wavelengths for a number of reasons. Radio signals can basically be recorded on tape (or rather hard drives these days) at different sites and then recombined (or correlated) "off-line" at another location. This won't work at optical wavelengths because of the higher frequencies. Data cannot yet be recorded at this rate and the storage problems for such data would be enormous. Instead, optical interferometers implement a hardware solution - they recombine the signals from different telescopes by sending the light along optical delay lines that compensate for the separations of the telescopes, before bringing the signals together directly to form the interference patterns. There are multiple hard problems to be solved with this approach.
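The size of the path compensation those delay lines must supply is easy to estimate. A minimal sketch, assuming a 130 m baseline (roughly a long VLTI baseline) and a source at various angles from the zenith:

```python
import math

# Geometric path difference between two telescopes separated by baseline B,
# observing a source at angle theta from the zenith (in the baseline plane):
#     d = B * sin(theta)
# The delay line must add this much optical path for the "closer" telescope.
def geometric_delay_m(baseline_m, zenith_angle_deg):
    return baseline_m * math.sin(math.radians(zenith_angle_deg))

B = 130.0   # metres, roughly a long VLTI baseline
for z in (10, 30, 60):
    d = geometric_delay_m(B, z)
    print(f"zenith angle {z:2d} deg -> {d:6.1f} m of path to compensate")
```

Tens of metres of physical optical path, which a radio correlator handles as a simple time offset on recorded data but an optical interferometer must realise with actual mirrors in vacuum.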

Visible light is badly affected by the atmosphere. This introduces phase errors for telescopes situated in different places. The phase error is not the same for objects that differ in position by only a few arcseconds (the so-called "isoplanatic patch"), so large scale imaging is not possible. This latter point also prevents the "phase referencing" technique used in radio interferometers where any phase noise is calibrated out by periodically looking at another nearby (bright) reference source that is within the isoplanatic patch.

It is possible to use the "closure phase" technique in optical interferometry. The phase errors introduced on one baseline $e_1 - e_2$ can be eliminated by combining the signals within a triplet of baselines: $(e_1 - e_2) + (e_2 - e_3) + (e_3 - e_1) = 0$. This works if the sources are bright enough that sufficient signal can be obtained in the time it takes the pathlength introduced by the atmosphere to change. At optical wavelengths this can be as short as 10-20 ms, much shorter than the radio coherence time. You might think then that just using large telescopes would help boost the signal strength, but unfortunately the coherence length of atmospheric turbulence (the transverse distance over which significant pathlength differences are expected) means that apertures greater than about 10 cm don't really improve matters unless adaptive optics systems are also used.

A further problem is that in order to observe faint objects you would like to observe over a broad wavelength bandwidth. But unless one restricts the bandwidth to a small fraction of the observation wavelength, then a different delay line length is required for sources viewed at slightly different angles. Imagine trying to do Young's double slit experiment using light that isn't monochromatic - the fringes will appear in slightly different places according to their wavelength. This amounts to a chromatic aberration. You basically have a trade-off between a very narrow field of view and bandwidth.
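The bandwidth trade-off can be quantified with the coherence length $L_c = \lambda^2/\Delta\lambda$: fringes are only visible while the path mismatch stays below $L_c$. A short sketch at an assumed 550 nm observing wavelength:

```python
# Fringes from a non-monochromatic source wash out once the path-length
# mismatch exceeds the coherence length  L_c = lambda^2 / delta_lambda.
def coherence_length_um(wavelength_nm, bandwidth_nm):
    return wavelength_nm**2 / bandwidth_nm * 1e-3   # nm -> micrometres

for bw in (100, 10, 1):
    lc = coherence_length_um(550, bw)
    print(f"bandwidth {bw:3d} nm -> coherence length {lc:8.1f} um")
```

With a broad 100 nm band you must match paths to a few microns; narrowing the band relaxes that tolerance (and widens the usable field) but throws away precious photons, which is exactly the trade-off described above.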

This puts severe constraints on the optical pathlengths used in the array of telescopes - basically you end up requiring the various pathlengths between the telescopes and the point where the signals are recombined to be equal to within a wavelength of light, and this precision is difficult to achieve over longer baselines. For instance you need very precisely controlled delay lines running in precisely measured vacuum tubes. What's more, because of the Earth's rotation, keeping the pathlengths matched as an object moves with respect to the telescope array requires delay lines with fast, but accurate, moving components.
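How fast must the delay line move? Differentiating the geometric delay $d = B\sin\theta$ with respect to time gives a rate set by Earth's rotation. A sketch, again assuming a 130 m baseline:

```python
import math

# Rate of change of the geometric delay d = B*sin(theta) as the Earth rotates:
#     d(d)/dt = B * cos(theta) * omega_earth
OMEGA_EARTH = 7.2921e-5     # sidereal rotation rate, rad/s

def opd_rate_mm_per_s(baseline_m, zenith_angle_deg):
    return baseline_m * math.cos(math.radians(zenith_angle_deg)) * OMEGA_EARTH * 1e3

B = 130.0
print(f"max OPD rate for a {B:.0f} m baseline: {opd_rate_mm_per_s(B, 0):.1f} mm/s")
```

So the carriage must sweep through millimetres of path per second continuously, while simultaneously holding the path correct to a small fraction of a micron - a moving part with roughly a part-in-$10^7$ positioning requirement.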

In other words, it is not just that you have to make the various pathlengths equal to within a wavelength of light; you have to keep them that way with moving parts, mirrors, etc. The picture below shows the "Paranal Express", an optical platform that moves (at up to 50 cm/s) to compensate for the sidereal optical path difference at the VLT Interferometer in Chile.

The Paranal Express

Essential reading: Monnier (2003); Jackson (2008)

$\endgroup$
