
From what I remember in my undergraduate quantum mechanics class, we treated scattering of non-relativistic particles from a static potential like this:

  1. Solve the time-independent Schrödinger equation to find the energy eigenstates. There will be a continuous spectrum of energy eigenvalues.
  2. In the region to the left of the potential, identify a piece of the wavefunction that looks like $Ae^{i(kx - \omega t)}$ as the incoming wave.
  3. Ensure that to the right of the potential, there is no piece of the wavefunction that looks like $Be^{-i(kx + \omega t)}$, because we only want a wave coming in from the left.
  4. Identify a piece of the wavefunction to the left of the potential that looks like $R e^{-i(kx + \omega t)}$ as a reflected wave.
  5. Identify a piece of the wavefunction to the right of the potential that looks like $T e^{i(kx - \omega t)}$ as a transmitted wave.
  6. Show that $|R|^2 + |T|^2 = |A|^2$. Interpret $\frac{|R|^2}{|A|^2}$ as the probability of reflection and $\frac{|T|^2}{|A|^2}$ as the probability of transmission.
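For concreteness, steps 1–6 can be carried out numerically for the simplest textbook case. The sketch below is an illustrative addition (not part of the original question): with $\hbar = m = 1$ and sample values of $E$, $V_0$, $L$ of my own choosing, it solves the matching conditions for a rectangular barrier and checks step 6, $|R|^2 + |T|^2 = |A|^2$ with $A = 1$:

```python
import numpy as np

# Illustrative example of steps 1-6 (hbar = m = 1): a rectangular barrier of
# height V0 on [0, L], with a unit-amplitude plane wave incoming from the left.
E, V0, L = 2.0, 1.0, 1.0
k = np.sqrt(2*E)              # wavenumber outside the barrier
q = np.sqrt(2*(E - V0) + 0j)  # wavenumber inside (imaginary if E < V0)

# Ansatz:  psi = e^{ikx} + R e^{-ikx}    for x < 0
#          psi = a e^{iqx} + b e^{-iqx}  for 0 < x < L
#          psi = T e^{ikx}               for x > L
# Continuity of psi and psi' at x = 0 and x = L gives four linear
# equations for the unknowns [R, a, b, T].
M = np.array([
    [1,     -1,                  -1,                   0],
    [-1j*k, -1j*q,               1j*q,                 0],
    [0,     np.exp(1j*q*L),      np.exp(-1j*q*L),      -np.exp(1j*k*L)],
    [0,     1j*q*np.exp(1j*q*L), -1j*q*np.exp(-1j*q*L), -1j*k*np.exp(1j*k*L)],
], dtype=complex)
rhs = np.array([-1, -1j*k, 0, 0], dtype=complex)
R_amp, a_amp, b_amp, T_amp = np.linalg.solve(M, rhs)

R_prob, T_prob = abs(R_amp)**2, abs(T_amp)**2
print(R_prob, T_prob, R_prob + T_prob)  # probabilities should sum to 1
```

Because $q$ is kept complex, the same four matching equations also cover tunneling ($E < V_0$) without modification.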

This entire process doesn't seem to have anything to do with a real scattering event, in which a real particle is scattered by a scattering potential: we do all our analysis on stationary waves. Why should such a naive procedure produce reasonable results for something like Rutherford's foil experiment, in which alpha particles are in motion as they collide with nuclei, and in which the wavefunction of the alpha particle is typically localized in a (moving) volume much smaller than the scattering region?

  • Essentially because the dynamical problem only interests you in the limit where $T_i \to -\infty$, $T_f \to \infty$, and by the Lippmann-Schwinger equation it can be shown that all you need to do is to match the asymptotic states of the time-independent Hamiltonian (which is precisely what you describe, although nobody will tell you this in the undergraduate class). This can be developed more fully into $S$-matrix theory, fundamental to all scattering problems. I'll see if I can get to a more complete answer later.
    – Marek
    Commented Jul 22, 2011 at 11:50
  • This really bothered me too when I first took quantum mechanics.
    – Ted Bunn
    Commented Jul 22, 2011 at 14:11
  • Allow me to say that this is a very interesting question with even more interesting answers. Commented Feb 23, 2016 at 10:37

6 Answers


This is fundamentally no more difficult than understanding how quantum mechanics describes particle motion using plane waves. If you have a delocalized wavefunction $\exp(ipx)$, it describes a particle moving to the right with velocity $p/m$. But such a particle is already everywhere at once, and only superpositions of such states actually move in time.

Consider

$$\int \psi_k(p) e^{ipx - iE(p) t} dp$$

where $\psi_k(p)$ is a sharp bump at $p=k$: not a delta function, but narrow. The superposition using this bump gives a wide spatial waveform centered at $x=0$ at $t=0$. At large negative times, the fast phase oscillation kills the bump at $x=0$, but it creates a new bump at those $x$ where the phase is stationary, that is, where

$${\partial\over\partial p}( p x - E(p)t ) = 0$$

or, since the superposition is sharp near k, where

$$ x = E'(k)t$$

which means that the bump moves with the steady speed determined by Hamilton's equations. The total probability is conserved, so the integral of $|\psi|^2$ over the bump is conserved.
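This stationary-phase argument can be checked directly. The following sketch (illustrative parameters of my own choosing, $\hbar = m = 1$, free particle with $E(p) = p^2/2$) builds the superposition at a few times and locates the peak of $|\psi|$, which should sit at $x = E'(k)t = kt$:

```python
import numpy as np

# hbar = m = 1, free particle: E(p) = p^2/2, so the stationary-phase
# point should travel at the group velocity E'(k0) = k0.
k0, sigma = 5.0, 0.3                       # center and width of the bump in p
p = np.linspace(k0 - 5*sigma, k0 + 5*sigma, 400)
dp = p[1] - p[0]
bump = np.exp(-(p - k0)**2/(2*sigma**2))   # psi_k(p): narrow, not a delta

x = np.linspace(-40, 40, 4001)

def packet(t):
    # psi(x,t) = integral of psi_k(p) exp(ipx - iE(p)t) dp, as a direct sum
    phases = np.exp(1j*(x[:, None]*p[None, :] - 0.5*p[None, :]**2*t))
    return phases @ bump * dp

peaks = {}
for t in (-4.0, 0.0, 4.0):
    psi = packet(t)
    peaks[t] = x[np.argmax(np.abs(psi))]
print(peaks)  # the peak should sit near x = k0 * t at each time
```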

The actual time-dependent scattering event is a superposition of stationary states in the same way. Each stationary state describes a completely coherent process, where a particle in a perfect sinusoidal wave hits the target, and scatters outward, but because it is an energy eigenstate, the scattering is completely delocalized in time.

If you want a collision which is localized, you need to superpose, and the superposition produces a natural scattering event, where a wave-packet comes in, reflects and transmits, and goes out again. If the incoming wavepacket has an energy which is relatively sharply defined, all the properties of the scattering process can be extracted from the corresponding energy eigenstate.

Given the solutions to the stationary eigenstate problem $\psi_p(x)$ for each incoming momentum $p$, so that $\psi_p(x) = \exp(ipx) + A \exp(-ipx)$ at large negative $x$ and $\psi_p(x) = B\exp(ipx)$ at large positive $x$, superpose these waves in the same way as for a free particle

$$\int dp \psi_k(p) \psi_p(x) e^{-iE(p)t}$$

At large negative times, the phase is stationary only for the incoming part, not for the outgoing or reflected parts. This is because each of the three parts describes free-particle motion, so if you know where a free particle with that momentum would classically be at that time, that is where the corresponding wavepacket is nonzero. So at negative times, the wavepacket is centered at

$$ x = E'(k)t$$

For large positive $t$, there are two places where the phase is stationary: those $x$ where

$$ x = - E'(k) t$$

$$ x = E_2'(k) t$$

where $E_2'(k)$ is the rate of change of the phase of the transmitted $k$-wave in time (it can differ from $E'(k)$ if the potential has an asymptotically different value at $+\infty$ than at $-\infty$). These two stationary-phase regions are where the reflected and transmitted packets are located. The coefficients of the reflected and transmitted packets are $A$ and $B$. If $A$ and $B$ were of unit magnitude, the superposition would conserve probability. So the actual reflection and transmission probabilities for a wavepacket are the squared magnitudes of $A$ and $B$, as expected.
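To see a localized scattering event emerge, one can also propagate a narrow-band wavepacket directly. The sketch below (my own illustrative parameters, $\hbar = m = 1$, a rectangular barrier, and a split-step Fourier integrator rather than the analytic superposition above) checks that the packet splits into reflected and transmitted lumps whose probabilities match the plane-wave values:

```python
import numpy as np

# Split-step (Strang) propagation of a narrow-band Gaussian packet hitting a
# rectangular barrier (hbar = m = 1; all parameters are illustrative).
N, box = 4096, 400.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
dx = x[1] - x[0]
kgrid = 2*np.pi*np.fft.fftfreq(N, d=dx)

V0, a = 1.0, 1.0
V = np.where((x >= 0) & (x <= a), V0, 0.0)

k0, x0, w = 2.0, -30.0, 10.0           # mean momentum, initial center, width
psi = np.exp(-(x - x0)**2/(2*w**2) + 1j*k0*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

dt, nsteps = 0.05, 800                 # evolve to t = 40
half_v = np.exp(-0.5j*V*dt)            # half-step in the potential
kin = np.exp(-0.5j*kgrid**2*dt)        # full kinetic step in k-space
for _ in range(nsteps):
    psi = half_v*psi
    psi = np.fft.ifft(kin*np.fft.fft(psi))
    psi = half_v*psi

prob = np.abs(psi)**2*dx
R_num = prob[x < 0].sum()              # reflected probability
T_num = prob[x > a].sum()              # transmitted probability

# plane-wave transmission at the packet's mean energy, for comparison
E = 0.5*k0**2
q = np.sqrt(2*(E - V0))
T_plane = 1.0/(1.0 + V0**2*np.sin(q*a)**2/(4*E*(E - V0)))
print(R_num, T_num, T_plane)
```

The transmitted fraction agrees with the eigenstate value to within the packet's momentum spread, and total probability is conserved to machine precision because every split-step factor is unitary.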


Here I would like to expand some of the arguments given in Ron Maimon's nice answer.

i) Setting. Let us divide the 1D $x$-axis into three regions $I$, $II$, and $III$, with a localized potential $V(x)$ of compact support in the middle region $II$. (Clearly, there are physically relevant potentials that do not have compact support, e.g. the Coulomb potential, but this assumption simplifies the following discussion concerning the notion of asymptotic states.)

ii) Time-independent and monochromatic. The particle is free in the regions $I$ and $III$, so we can solve the time-independent Schrödinger equation

$$\begin{align}\hat{H}\psi(x) ~=~&E \psi(x), \cr \hat{H}~=~& \frac{\hat{p}^2}{2m}+V(x),\qquad E> 0,\end{align} \tag{1}$$

exactly there. We know that the 2nd order linear ODE has two linearly independent solutions, which in the free regions $I$ and $III$ are plane waves

$$ \begin{align} \psi_{I}(x) ~=~& \underbrace{a^{+}_{I}(k)e^{ikx}}_{\text{incoming right-mover}} + \underbrace{a^{-}_{I}(k)e^{-ikx}}_{\text{outgoing left-mover}}, \qquad k> 0, \tag{2} \cr \psi_{III}(x) ~=~& \underbrace{a^{+}_{III}(k)e^{ikx}}_{\text{outgoing right-mover}} + \underbrace{a^{-}_{III}(k)e^{-ikx}}_{\text{incoming left-mover}}. \tag{3} \end{align} $$

Just from linearity of the Schrödinger equation, even without solving the middle region $II$, we know that the four coefficients $a^{\pm}_{I/III}(k)$ are constrained by two linear conditions. This observation leads, by the way, to the time-independent notion of the scattering $S$-matrix and the transfer $M$-matrix

$$ \begin{pmatrix} a^{-}_{I}(k) \\ a^{+}_{III}(k) \end{pmatrix}~=~ S(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{III}(k) \end{pmatrix}, \tag{4} $$

$$ \begin{pmatrix} a^{+}_{III}(k) \\ a^{-}_{III}(k) \end{pmatrix}~=~ M(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{I}(k) \end{pmatrix}, \tag{5} $$

see e.g. my Phys.SE answer here.
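As an illustration of eqs. $(4)$-$(5)$, the sketch below (a rectangular-barrier example of my own choosing, $\hbar = m = 1$) assembles $S(k)$ and $M(k)$ from two scattering solutions and verifies that $S$ is unitary and $\det M = 1$ (the latter holds here because the asymptotic wavenumbers on the two sides are equal):

```python
import numpy as np

# Illustrative potential: square barrier of height V0 on [0, L]; hbar = m = 1.
E, V0, L = 2.0, 1.0, 1.0
k = np.sqrt(2*E)
q = np.sqrt(2*(E - V0) + 0j)

def outgoing(c_L, c_R):
    """Match plane waves across the barrier for incoming amplitudes
    a_I^+ = c_L (from the left) and a_III^- = c_R (from the right);
    return the outgoing amplitudes (a_I^-, a_III^+)."""
    # unknowns: [a_I^-, b_+, b_-, a_III^+], b_pm = amplitudes inside the barrier;
    # rows: continuity of psi and psi' at x = 0 and at x = L
    A = np.array([
        [1,     -1,                  -1,                   0],
        [-1j*k, -1j*q,               1j*q,                 0],
        [0,     np.exp(1j*q*L),      np.exp(-1j*q*L),      -np.exp(1j*k*L)],
        [0,     1j*q*np.exp(1j*q*L), -1j*q*np.exp(-1j*q*L), -1j*k*np.exp(1j*k*L)],
    ], dtype=complex)
    rhs = np.array([
        -c_L,
        -1j*k*c_L,
        c_R*np.exp(-1j*k*L),
        -1j*k*c_R*np.exp(-1j*k*L),
    ], dtype=complex)
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[3]

# columns of S: response to a unit wave incoming from the left / from the right
r, t = outgoing(1, 0)
tp, rp = outgoing(0, 1)
S = np.array([[r, tp], [t, rp]])

# transfer matrix M from the same data: M @ (a_I^+, a_I^-) = (a_III^+, a_III^-)
A_in = np.array([[1, 0], [r, tp]])
A_out = np.array([[t, rp], [0, 1]])
Mmat = A_out @ np.linalg.inv(A_in)

print(np.allclose(S.conj().T @ S, np.eye(2)), np.linalg.det(Mmat))
```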

iii) Time-dependence of monochromatic wave. The dispersion relation reads

$$ \frac{E(k)}{\hbar} ~\equiv~\omega(k)~=~\frac{\hbar k^2}{2m}. \tag{6} $$

The specific form on the right-hand side of the dispersion relation $(6)$ will not matter in what follows (although we will assume for simplicity that it is the same for right- and left-movers). The full time-dependent monochromatic solution in the free regions I and III becomes $$\begin{align} \Psi_r(x,t) ~=~& \sum_{\sigma=\pm}a^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t}\cr ~=~&\underbrace{e^{-i\omega(k)t}}_{\text{phase factor}} \Psi_r(x,0), \qquad r ~\in~ \{I, III\}.\end{align} \tag{7} $$

The solution $(7)$ is a sum of a right-mover ($\sigma=+$) and a left-mover ($\sigma=-$). For now the words right- and left-mover may be taken as semantic names without physical content. The solution $(7)$ is fully delocalized in the free regions I and III, with the probability density $|\Psi_r(x,t)|^2$ independent of time $t$, so naively it does not make sense to say that the waves are right- or left-moving, or even that they scatter! However, it turns out that we may view the monochromatic wave $(7)$ as a limit of a wave packet, and obtain a physical interpretation in that way, see the next section.

iv) Wave packet. We now take a wave packet

$$\begin{align} A^{\sigma}_r(k)~=~&0 \qquad \text{for} \qquad |k-k_0| ~\geq~ K, \cr \sigma~\in~&\{\pm\}, \qquad r ~\in~ \{I, III\},\end{align}\tag{8} $$

narrowly peaked around some particular value $k_0$ in $k$-space,

$$|k-k_0| ~\leq~ K, \tag{9}$$

where $K$ is some wave number scale, so that we may Taylor expand the dispersion relation

$$\omega(k)~=~ \omega(k_0) + v_g(k_0)(k-k_0) + {\cal O}\left((k-k_0)^2\right), \tag{10} $$

and drop higher-order terms ${\cal O}\left((k-k_0)^2\right)$. Here

$$v_g(k)~:=~\frac{d\omega(k)}{dk}\tag{11}$$

is the group velocity. The wave packet (in the free regions I and III) is a sum of a right- and a left-mover,

$$ \Psi_r(x,t)~=~ \Psi^{+}_r(x,t)+\Psi^{-}_r(x,t), \qquad r ~\in~ \{I, III\},\tag{12} $$

where

$$\begin{align} \Psi^{\sigma}_r(x,t)~:=~& \int dk~A^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t}\cr ~\approx~& e^{i(k_0 v_g(k_0)-\omega(k_0))t} \int dk~A^{\sigma}_r(k)e^{ ik(\sigma x- v_g(k_0)t)}\cr ~=~&\underbrace{e^{i(k_0 v_g(k_0)-\omega(k_0))t}}_{\text{phase factor}} ~\Psi^{\sigma}_r\left(x-\sigma v_g(k_0)t,0\right), \cr \qquad\sigma~\in~&\{\pm\}, \qquad r ~\in~ \{I, III\}.\end{align} \tag{13}$$

The right- and left-movers $\Psi^{\sigma}$ will be very long spread-out wave trains of sizes $\geq \frac{1}{K}$ in $x$-space, but we are still able to identify via eq. $(13)$ their time evolution as just

  1. a collective motion with group velocity $\sigma v_g(k_0)$, and

  2. an overall time-dependent phase factor of modulus $1$ (which is the same for the right- and the left-mover).

In the limit $K \to 0$, with $K >0$, the approximation $(10)$ becomes better and better, and we recover the time-independent monochromatic wave,

$$ A^{\sigma}_r(k) ~\longrightarrow ~a^{\sigma}_r(k_0)~\delta(k-k_0)\qquad \text{for} \qquad K\to 0. \tag{14}$$

It thus makes sense to assign a group velocity to each of the $\pm$ parts of the monochromatic wave $(7)$, because it can be understood as an appropriate limit of the wave packet $(13)$. The previous sentence is, in a nutshell, the answer to OP's title question (v3).
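Eq. $(13)$ is easy to verify numerically for the right-mover $\sigma = +$. The sketch below (illustrative numbers of my own choosing, $\hbar = m = 1$, $\omega(k) = k^2/2$) compares the exact packet $\Psi^{+}(x,t)$ with the rigidly translated, phase-multiplied profile on the right-hand side of eq. $(13)$:

```python
import numpy as np

# hbar = m = 1, omega(k) = k^2/2, group velocity v_g(k0) = k0
k0, K, t = 1.0, 0.05, 10.0
kk = np.linspace(k0 - K, k0 + K, 201)
dk = kk[1] - kk[0]
A = np.cos(np.pi*(kk - k0)/(2*K))**2   # smooth bump supported on |k-k0| < K

x = np.linspace(-100, 100, 2001)
omega = 0.5*kk**2
vg = k0

def Psi(xx, tt):
    # direct evaluation of the k-integral defining the right-mover
    return np.exp(1j*(xx[:, None]*kk[None, :] - omega[None, :]*tt)) @ A * dk

exact = Psi(x, t)
# eq. (13): overall phase factor times the profile translated by v_g * t
approx = np.exp(1j*(k0*vg - 0.5*k0**2)*t) * Psi(x - vg*t, 0.0)
rel_err = np.max(np.abs(exact - approx)) / np.max(np.abs(exact))
print(rel_err)  # small: the dropped O((k-k0)^2) phase is at most K^2 t / 2
```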

  • $a^{\pm}(k)={}_{\infty}\langle \pm k| \psi \rangle$. Notes for later: D. Tong, Lectures on Topics in QM; sections 6.2.2 + 6.2.3 + 6.3. (Unnecessary to assume rotationally symmetric potential.) 1st quantized operator formalism. Elastic scattering: $|\vec{k}|=|\vec{k}^{\prime}|$. TISE: $(E-\hat{H}_0-\hat{V})|\psi\rangle=0$; $(E-\hat{H}_0)|\psi_0\rangle=0$; $\hat{H}_0=-\frac{\hbar^2}{2m}\nabla^2$; $E=\hbar\omega=\frac{\hbar^2k^2}{2m}$. Lippmann-Schwinger: $|\psi\rangle=|\psi_0\rangle+\frac{1}{E-\hat{H}_0+i\epsilon}\hat{V}|\psi\rangle$.
    – Qmechanic
    Commented Sep 18, 2021 at 18:52
  • Normalization: $\phi=\sqrt{\frac{\hbar^2}{2m}}\psi$. Effectively 2nd quantized although we will not use it. Green's function: $\frac{\hbar^2}{2m}G_0(\vec{r}\!-\!\vec{r}^{\prime})=g_0(\vec{r}\!-\!\vec{r}^{\prime})=\int \frac{d^3q}{(2\pi)^3}e^{i\vec{q}\cdot(\vec{r}-\vec{r}^{\prime})} \widetilde{g}_0(\vec{q})=-\frac{e^{ik|\vec{r}-\vec{r}^{\prime}|}}{4\pi|\vec{r}-\vec{r}^{\prime}|}$ where $(E-\hat{H}_0)G_0(\vec{r}\!-\!\vec{r}^{\prime})=(k^2+\nabla^2)g_0(\vec{r}\!-\!\vec{r}^{\prime})=\delta^3(\vec{r}\!-\!\vec{r}^{\prime})$. Fourier transform: $\widetilde{g}_0(\vec{q})=-\frac{1}{q^2-k^2-i\epsilon}$.
    – Qmechanic
    Commented Sep 18, 2021 at 19:57
  • Normalization of potential: $v=\frac{2m}{\hbar^2}V$. Fourier transform: $\widetilde{V}(\vec{q})=\int d^3r~e^{-i\vec{q}\cdot\vec{r}}V(\vec{r})$. Sign mistake in eq. (6.64)? No, not really. Just weird def.
    – Qmechanic
    Commented Sep 18, 2021 at 20:20
  • Path integral: $Z_0[J,J^{\ast}] =\int\!{\cal D}\phi~\exp\left\{\frac{i}{\hbar}\int\!\frac{d\omega}{2\pi}\int\!d^3r\left(k^2|\phi|^2-|\nabla\phi|^2+J^{\ast}\phi+\phi^{\ast}J\right)\right\}$ $\sim \exp\left\{-\frac{i}{\hbar} \int\!\frac{d\omega}{2\pi} \int\!d^3r\int \!d^3r^{\prime}J^{\ast}(\vec{r})g_0(\vec{r}\!-\!\vec{r}^{\prime})J(\vec{r}^{\prime}) \right\}$ $\sim\exp\left\{-\frac{i}{\hbar}\int\!\frac{d\omega}{2\pi}\int\!\frac{d^3k}{(2\pi)^3}\int \!\frac{d^3k^{\prime}}{(2\pi)^3}\widetilde{J}^{\ast}(\vec{k})\widetilde{g}_0(\vec{k}\!-\!\vec{k}^{\prime})\widetilde{J}(\vec{k}^{\prime})\right\}$.
    – Qmechanic
    Commented Sep 22, 2021 at 8:43
  • Path integral: $Z[J,J^{\ast}] =\exp\left\{-\frac{\hbar}{i} \int\!\frac{d\omega}{2\pi} \int\!d^3r~v\frac{\delta}{\delta J} \frac{\delta}{\delta J^{\ast}}\right\}Z_0[J,J^{\ast}]$ $=\exp\left\{-\frac{\hbar}{i} \int\!\frac{d\omega}{2\pi} \int \!\frac{d^3k}{(2\pi)^3}\int \!\frac{d^3k^{\prime}}{(2\pi)^3} ~\widetilde{v}(\vec{k}\!-\!\vec{k}^{\prime})\frac{\delta}{\delta \widetilde{J}(\vec{k})} \frac{\delta}{\delta \widetilde{J}^{\ast}(\vec{k}^{\prime})}\right\}Z_0[J,J^{\ast}]$. Differential scattering cross-section: $\frac{d\sigma}{d\Omega}=|f|^2$ outgoing flux relative to incoming flux.
    – Qmechanic
    Commented Sep 23, 2021 at 15:19

First suppose that the Hamiltonian $H(t) = H_0 + H_I(t)$ can be decomposed into free and interaction parts. It can be shown (I won't derive this equation here) that the retarded Green function for $H(t)$ obeys the equation $$G^{(+)}(t, t_0) = G_0^{(+)}(t, t_0) - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t' G_0^{(+)}(t,t') H_I(t') G^{(+)}(t', t_0)$$ where $G_0^{(+)}$ is the retarded Green function for $H_0$. Letting this equation act on a state $\left| \psi(t_0) \right>$, this becomes $$\left| \psi(t) \right> = \left| \varphi(t) \right> - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t' G_0^{(+)}(t,t')H_I(t')\left| \psi(t') \right> $$ where $\left| \varphi(t) \right> = G_0^{(+)}(t,t_0) \left| \psi(t_0) \right>$.

Now, we suppose that until $t_0$ there is no interaction, so we can write $\left |\psi(t_0) \right>$ as a superposition of momentum eigenstates $$\left| \psi(t_0) \right> = \int {\rm d}^3 \mathbf p \, a(\mathbf p) e^{-{i \over \hbar} E t_0} \left| \mathbf p \right>.$$ A similar decomposition also holds for $\left| \varphi(t) \right>$. This should inspire us to write $\left| \psi(t) \right >$ as $$\left| \psi(t) \right> = \int {\rm d}^3 \mathbf p \, a(\mathbf p) e^{-{i \over \hbar} E t} \left| \psi^{(+)}_{\mathbf p} \right>$$ where the states $\left| \psi^{(+)}_{\mathbf p} \right>$ are to be determined from the equation for $\left|\psi(t) \right>$.

Now, the amazing thing (which I again won't derive for lack of space) is that these states are actually eigenstates of $H$: $$H \left| \psi^{(+)}_{\mathbf p} \right> = E \left| \psi^{(+)}_{\mathbf p} \right>$$ for $E = {\mathbf p^2 \over 2m}$ (here we assumed that the free part is simply $H_0 = {{\mathbf p}^2 \over 2m}$ and that $H_I(t)$ is independent of time).

Similarly, one can derive advanced eigenstates from advanced Green function $$H \left| \psi^{(-)}_{\mathbf p} \right> = E \left| \psi^{(-)}_{\mathbf p} \right>.$$

Now, in one dimension and for an interaction Hamiltonian of the form $\left< \mathbf x \right| H_I \left| \mathbf x' \right> = \delta(\mathbf x - \mathbf x') U(\mathbf x)$ it can be further shown that $$\psi^{(+)}_p \sim \begin{cases} e^{{i \over \hbar}px} + A(p) e^{-{i \over \hbar}px} \quad x< -a \cr B(p)e^{{i \over \hbar}px} \quad x> a \end{cases}$$ where $a$ is such that the potential vanishes for $|x| > a$ and $A(p)$ and $B(p)$ are coefficients fully determined by the potential $U(x)$. A similar discussion again applies for the wavefunctions $\psi^{(-)}_p$. Thus we have succeeded in reducing the dynamical problem to a stationary one by writing the non-stationary states $\psi(t, x)$ in terms of the stationary $\psi^{(+)}_p(x)$.
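The one-dimensional asymptotics quoted above can be checked numerically from the Lippmann-Schwinger equation itself. With $\hbar = m = 1$, the free retarded Green function at energy $E = k^2/2$ is $G_0(x,x') = e^{ik|x-x'|}/(ik)$, so discretizing $\psi = \psi_0 + G_0 U \psi$ on a grid covering the support of $U$ gives a linear system. The sketch below (an illustrative square-barrier choice of $U$, not from the text above) solves it and reads off $A(p)$ and $B(p)$ from the asymptotic form:

```python
import numpy as np

# hbar = m = 1; illustrative potential U = U0 on [0, L], zero elsewhere
k, U0, L = 2.0, 1.0, 1.0
E = 0.5*k**2

n = 1500
xg = (np.arange(n) + 0.5)*(L/n)    # midpoint grid on the support of U
dx = L/n
U = np.full(n, U0)

# free retarded Green function of (E - H0) in 1D: G0(x,x') = exp(ik|x-x'|)/(ik)
G0 = np.exp(1j*k*np.abs(xg[:, None] - xg[None, :]))/(1j*k)

# Lippmann-Schwinger: psi = psi0 + G0 U psi  =>  (I - G0 U dx) psi = psi0
psi0 = np.exp(1j*k*xg)
psi = np.linalg.solve(np.eye(n) - G0*(U*dx)[None, :], psi0)

# asymptotics: psi -> exp(ikx) + A exp(-ikx) (x < 0), B exp(ikx) (x > L)
A_amp = np.sum(np.exp(1j*k*xg)*U*psi)*dx/(1j*k)
B_amp = 1 + np.sum(np.exp(-1j*k*xg)*U*psi)*dx/(1j*k)
print(abs(A_amp)**2 + abs(B_amp)**2)  # flux conservation: should be ~1
```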

  • -1 This answer is no good. You are turning off the scattering potential at $t=-\infty$ for no reason, the Hamiltonian in a scattering problem of the sort the OP is asking about is time independent. The answer is ridiculously formal, and all the interesting things are in the "it can be shown...".
    – Ron Maimon
    Commented Aug 17, 2011 at 2:32
  • @Ron: I don't quite understand your objection. Physically, the $t = -\infty$ part of the potential never matters in a scattering problem since particles are infinitely far away from the potential (that is usually generated by their being close anyway). So this is only a technicality that I prefer to work with that doesn't change anything (rather, it's very convenient in more general situations). As for the "it can be shown" parts... well, I can show them but the answer would be twice as long. Will you remove the downvote if I include the derivations? And as for being formal... so what?
    – Marek
    Commented Aug 17, 2011 at 6:26

I also struggled to understand this myself. I think it confuses many people because they try to interpret the time-independent scattering wavefunction as describing one single collision of a particle with the target, and it is this interpretation which is not correct and leads to the confusion!

I think that the easiest way of seeing why the time-independent approach works lies in the definition of the scattering process which the wavefunction describes.

The time-independent scattering solution describes the situation in which the target is being continuously bombarded by a flux of non-interacting projectiles approaching with different impact parameters (this is how most scattering experiments work). Therefore the process you are trying to describe is stationary. This is the actual reason why the time-independent formulation works. You can see this from e.g. the classical book on scattering (Taylor, Scattering Theory), where the scattering process is defined (Chapter 3, section d) very clearly in terms of a continuous flux of incoming particles.

You can convince yourself that this interpretation of the time-independent scattering solution is indeed correct by simply noting that the probability flux (either incoming or outgoing) that you can calculate from the scattering wavefunction has the units of probability per unit time per unit area, i.e. it describes a stationary scattering process.


The answer to this is the same as the answer to why you solve the time-independent Schrödinger equation (TISE) to find the time evolution of a bound particle. First you solve the TISE to find the stationary states $\psi_n$, then you write the particle's wavefunction $\Psi(t=0)$ as a superposition of the $\psi_n$. Since you know how the stationary states evolve in time, you now know (at least in principle) how ANY wavefunction evolves in time.

It's the same thing for scattering. You figure out what happens for the energy eigenstates, and then you know what will happen for any wavepacket (which you would write as a superposition of energy eigenstates, of course). And here it's even easier than for bound states: if all you care about is $R$ and $T$, and your wavepacket has a narrow range of energies (for which $T$ is nearly constant), then the value of $T$ for your wavepacket is the same as what you just calculated for the energy eigenstate. Huzzah!

If your wavepacket involves a superposition of a wide range of energies, with a wide range of $T$'s, then your life will be more complicated, of course. But in scattering experiments, folks usually try to employ nearly monoenergetic beams.
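"Nearly constant $T$" is easy to quantify. The sketch below (an illustrative rectangular-barrier example of my own choosing, $\hbar = m = 1$, using the standard plane-wave transmission formula) averages $T(k)$ over a narrow Gaussian momentum band and compares with the single-eigenstate value:

```python
import numpy as np

# hbar = m = 1; rectangular barrier of height V0 on [0, L] (illustrative)
V0, L = 1.0, 1.0

def T_plane(k):
    """Plane-wave transmission probability for E = k^2/2 > V0."""
    E = 0.5*k**2
    q = np.sqrt(2*(E - V0))
    return 1.0/(1.0 + V0**2*np.sin(q*L)**2/(4*E*(E - V0)))

k0, sigma_k = 2.0, 0.05                      # narrow momentum band around k0
ks = np.linspace(k0 - 3*sigma_k, k0 + 3*sigma_k, 241)
w = np.exp(-(ks - k0)**2/(2*sigma_k**2))     # Gaussian weights of the packet
T_packet = np.sum(w*T_plane(ks))/np.sum(w)   # band-averaged transmission
print(T_plane(k0), T_packet)  # nearly identical for a narrow band
```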

Because quantum mechanics classes spend so much time mired in the details of solving the TISE (either for scattering or bound states), they often lose sight of one of the motivations for solving the TISE: it's a tool for finding the time behavior of any initial condition.

  • I'm baffled by @Marek's statement that the Hamiltonian is explicitly time-dependent. It certainly doesn't need to be and often isn't. For instance, Rutherford scattering: $H=p^2/(2m)+q_1q_2/(4\pi\epsilon_0r)$. Note the absence of time dependence. In a scattering situation, the wavefunction is time-dependent, not generally the Hamiltonian. In any situation in which the Hamiltonian is explicitly time-dependent, the procedure described in the original question wouldn't work, so in the context of this question we're certainly assuming time-independent Hamiltonians.
    – Ted Bunn
    Commented Jul 22, 2011 at 18:49
  • @Ted: also note that the process Mark describes is not what AC describes in his answer. We don't evolve solutions in time at all. To give complete justification one needs to proceed as in the usual scattering theory (which is best dealt with in the Dirac picture and not the Schrödinger picture). This is a huge subject and it certainly is not about simply solving the TISE (even though it can be reduced to this sometimes)...
    – Marek
    Commented Jul 22, 2011 at 19:36
  • I don't dispute any of this, but I don't think any of it is relevant to the question at hand. Note that it's explicitly about scattering from a static potential. One should be able to understand why the "usual" undergraduate quantum mechanics procedure for treating, e.g., Rutherford scattering, or scattering from a delta-function potential, or a square barrier gives the right answer. (Continued ...)
    – Ted Bunn
    Commented Jul 22, 2011 at 19:47
  • There's no need to introduce time-dependence in any of those cases: you could solve the time-dependent equation numerically for a wave packet, or you can solve the time-independent Schrödinger equation analytically. As I understand it, Mark's question is why those two ways of treating the problem give the same answer.
    – Ted Bunn
    Commented Jul 22, 2011 at 19:49
  • @Ted: well, I was just trying to describe why the problem is about something other than simply solving the TISE. As for the real justification, I hinted at it in my comment under the question: it follows from the L-S equation. What AC describes is either another way of solving the scattering problem (and so irrelevant to the question) or a (wrong) justification of why the "usual" way works. Either way, I find this answer unsatisfactory.
    – Marek
    Commented Jul 22, 2011 at 20:12

There is already a detailed and correct derivation, so in my answer I will try to address the qualitative side of "why". In a scattering problem, there is always a hierarchy of well-separated scales. In your example of an alpha particle in the Rutherford experiment, you refer to localization in space, which implies a certain spread in momentum/energy. However, as long as this spread is smaller than the characteristic energy scale on which the scattering amplitude changes, the time-independent treatment at well-defined energy should give correct results.

In terms of lengths, the scale separation required for the time-independent picture to work is that the wave-packet of the alpha particle should be larger than the neighbourhood of the nucleus where the scattering happens. Typically this is the case; if it is not, the alpha particle is likely to have a very uncertain (in the Heisenberg sense) energy/momentum.

