
Now that LIGO has finally measured gravitational waves using a huge laser interferometer, the question remains for me: why was this possible at all? As explained in many news articles, gravitational waves are similar to water waves or electromagnetic waves, except that they do not travel through a medium such as water; space-time itself is the transport medium. But if space-time itself is contracted and expanded by a gravitational wave, then so is any means of measurement. The ruler you use for the measurement (the laser beam) is deformed while the wave travels through the measuring device. Otherwise the "ruler" would have to live outside of space-time, and there is no outside.

If space-time were a cup filled with pudding on which we had painted a straight line with 10 marks, pushing into the pudding slightly with a thumb would bend the line, but for us there would still be 10 marks on the line; to measure the extension, we would need a ruler outside of our space-time (the pudding), which might then read, say, 11 marks. But again, there is no outside. I assume the same happens not only to the 3 spatial dimensions but also to the time dimension. Since the detection clearly succeeded, what am I missing?


1 Answer


The short answer is that the waves that are "in the apparatus" are indeed stretched. However, the "fresh" waves being produced by the laser are not. So long as the "new" waves spend much less time in the interferometer than it takes to stretch them (roughly 1/(gravitational wave frequency)), the effect you are talking about can be neglected.

Details:

There is an apparent paradox: you can think about the detection in two ways. On the one hand, you can imagine that the lengths of the detector arms change, so that the round-trip travel time of a light beam changes, and the difference in the time of arrival of wavecrests translates into a phase difference that is detected in the interferometer. On the other hand, there is the analogy with the expansion of the universe: if the arm length is changed, isn't the wavelength of the light changed by exactly the same factor, so that there can be no change in the phase difference? I guess the latter is your question.

Well, clearly the detector works, so there must be a problem with the second interpretation. There is an excellent discussion of this by Saulson (1997), from which I give a summary.

Interpretation 1:

If the two arms are along the $x$ and $y$ directions and the incoming wave travels along the $z$ direction, then the metric due to the wave can be written $$ds^2 = -c^2\,dt^2 + (1+ h(t))\,dx^2 + (1-h(t))\,dy^2,$$ where $h(t)$ is the strain of the gravitational wave.

Light travels on null geodesic paths, for which the metric interval $ds^2=0$. Considering for the moment only the arm aligned along the $x$-axis, this means that $$c\,dt = \sqrt{1 + h(t)}\,dx \simeq \left(1 + \frac{1}{2}h(t)\right)dx.$$ The time taken to travel the path is therefore increased to $$\tau_+ = \int dt = \frac{1}{c}\int \left(1 + \frac{1}{2}h(t)\right)dx.$$

If the unperturbed arm is of length $L$ and the perturbed arm length is $L(1+h/2)$, then the difference in the time taken for a photon to make the round trip along each arm is $$ \Delta \tau = \tau_+ - \tau_- \simeq \frac{2L}{c}h,$$ leading to a phase difference in the signals of $$\Delta \phi = \frac{4\pi L}{\lambda} h.$$ This assumes that $h(t)$ can be treated as constant for the time the laser light is in the apparatus.
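
To get a feel for the size of the effect, here is a minimal numerical sketch of the two formulas above. The numbers are illustrative assumptions of mine rather than anything stated in this answer: a 4 km physical arm, a laser wavelength of roughly 1 μm, and a strain of order $10^{-21}$.

```python
import math

# Illustrative (assumed) values -- none of these numbers come from the answer above
c = 3.0e8      # speed of light, m/s
L = 4.0e3      # physical arm length, m (the folded optical path in the real instrument is far longer)
lam = 1.0e-6   # laser wavelength, m (roughly 1 micron)
h = 1.0e-21    # gravitational-wave strain

# Round-trip time difference between the two arms: Delta tau = 2 L h / c
delta_tau = 2 * L * h / c

# Corresponding phase difference: Delta phi = 4 pi L h / lambda
delta_phi = 4 * math.pi * L * h / lam

print(f"Delta tau ~ {delta_tau:.1e} s")    # ~ 2.7e-26 s
print(f"Delta phi ~ {delta_phi:.1e} rad")  # ~ 5.0e-11 rad
```

Both results scale linearly with $L$, so folding the light to obtain the much longer effective path mentioned further down boosts them by the corresponding factor.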

Interpretation 2:

In analogy with the expansion of the universe, the gravitational wave does change the wavelength of light in each arm of the experiment. However, only the waves that are in the apparatus as the gravitational wave passes through can be affected.

Suppose that $h(t)$ is a step function, so that the arm changes length from $L$ to $L(1+h(0)/2)$ instantaneously. The waves that are just arriving back at the detector will be unaffected by this change, but subsequent wavecrests will have had successively further to travel, and so a phase lag builds up gradually to the value derived above in interpretation 1. The time taken for the phase lag to build up will be $2L/c$.
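
As a toy illustration of this build-up (my own sketch, not taken from Saulson's paper), one can model the phase lag after the step as ramping linearly from zero to the interpretation-1 value over one round-trip light time $2L/c$; the path length, wavelength, and step amplitude below are assumed values:

```python
import math

# Assumed illustrative values -- not from the answer above
c = 3.0e8       # speed of light, m/s
L = 5.0e5       # effective one-way path length, m (so that 2L/c is a few milliseconds)
lam = 1.0e-6    # laser wavelength, m
h0 = 1.0e-21    # amplitude of the step in h

t_build = 2 * L / c                      # round-trip light time, ~ 3.3e-3 s
phi_final = 4 * math.pi * L * h0 / lam   # interpretation-1 phase difference

def phase_lag(t):
    """Toy model: the lag ramps up linearly over one round trip, then saturates."""
    if t <= 0:
        return 0.0
    return phi_final * min(t / t_build, 1.0)

for t in (0.0, 0.5 * t_build, t_build, 2 * t_build):
    print(f"t = {t:.1e} s  ->  phase lag ~ {phase_lag(t):.2e} rad")
```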

But what about the waves that enter the apparatus later? For those, the laser frequency is unchanged, and since the speed of light is constant, the wavelength is unchanged too. These waves travel in a lengthened arm and therefore experience a phase lag exactly equivalent to that of interpretation 1.

In practice, the build-up time for the phase lag is short compared with the reciprocal of the frequency of the gravitational waves. For example, the effective LIGO path length is about 1,000 km, so the build-up time would be about 0.003 s, compared with 0.01 s for the reciprocal of the $\sim 100$ Hz signal, and so it is relatively unimportant when interpreting the signal (the detection sensitivity of the interferometer is indeed compromised at higher frequencies because of this effect).
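
For completeness, a quick arithmetic check of those numbers, assuming the quoted $\sim 1{,}000$ km is the total optical path the light traverses:

```python
# Sanity check of the numbers quoted above, assuming the ~1,000 km figure
# is the total optical path the light traverses in the interferometer.
c = 3.0e8            # speed of light, m/s
path = 1.0e6         # effective optical path length, m (~1,000 km)
f_gw = 100.0         # gravitational-wave frequency, Hz

t_build = path / c   # phase-lag build-up time
t_gw = 1.0 / f_gw    # reciprocal of the gravitational-wave frequency

print(f"build-up time ~ {t_build:.1e} s")  # ~ 3.3e-3 s, i.e. about 0.003 s
print(f"1/f_gw       = {t_gw:.1e} s")      # 0.01 s
```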

• This is a great explanation. For the full, less qualitative calculation (not so difficult), see the nice article by Valerio Faraoni, arxiv.org/pdf/gr-qc/0702079v1.pdf, in which the above argument is presented and, in addition, the effect of the gravitational wave on the light travel time is explicitly calculated. – Commented Feb 21, 2016 at 11:01
