I am in the beginning of my first Quantum Mechanics class, and I just learned about state vectors.

From my understanding, the representation in a basis is related to the probability distribution of the object being in one of the basis states when measured, i.e. writing a state vector in the position basis would tell you about what positions are likely. What I am wondering is: can you take knowledge of the probability distribution of positions, change basis, and get information about, say, the probability distribution of energy? Or any other observable?

I know that there is a unique representation of a state vector in a given basis, but is there a unique state vector corresponding to each representation (representation as in set of coefficients to multiply basis vectors by)?

In other words, are there multiple state vectors that could be written the same in one basis, but differently in another?

(The way I am thinking about it, these three questions are all equivalent)

4 Answers

---

In quantum mechanics, kets reside in a vector space, and, as in any vector space, a vector can be uniquely expressed in ANY basis, whether or not that basis has physically relevant content. A common change of basis is the one from the position eigenstate basis to the momentum eigenstate basis: rewriting a ket that is initially expressed in terms of position eigenstates in terms of momentum eigenstates instead.

This process can be illustrated using two standard formulas. Suppose we have a ket $|\psi\rangle$ with position-space wave function $\psi(x)=\langle x|\psi\rangle$, and we want its representation in the momentum eigenstate basis $|p\rangle$. The two bases are related through the Fourier transform:

$$ \tilde{\psi}(p) = \langle p|\psi\rangle = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} e^{-\frac{i}{\hbar}px}\,\psi(x) \,dx. $$

Here, $\tilde{\psi}(p)$ is the representation of the ket in the momentum basis. The integral collects the contribution of each position $x$ to the momentum $p$.

This transition will, according to the postulates of quantum mechanics, provide relevant information about the probabilities of finding $|\psi\rangle$ in each of the new elements of the momentum basis (meaning the probability of having one momentum or another).
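As a numerical sketch of this change of basis (my own illustration, assuming $\hbar=1$ and a hypothetical Gaussian wave packet), one can discretize the Fourier transform on a grid and check that the resulting momentum-space amplitudes describe the same normalized state:

```python
import numpy as np

# Hypothetical example: a Gaussian wave packet on a grid, with hbar = 1.
hbar = 1.0
N = 1024
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]

psi_x = np.exp(-x**2 / 2) / np.pi**0.25   # real Gaussian, normalized in position space
norm_x = np.sum(np.abs(psi_x)**2) * dx    # ∫ |psi(x)|^2 dx ≈ 1

# Discretized Fourier transform to the momentum basis:
# psi(p) = (2*pi*hbar)^(-1/2) * ∫ exp(-i p x / hbar) psi(x) dx
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
psi_p = np.array([np.sum(np.exp(-1j * pk * x / hbar) * psi_x) * dx for pk in p])
psi_p /= np.sqrt(2 * np.pi * hbar)

dp = 2 * np.pi * hbar / (N * dx)          # spacing of the momentum grid
norm_p = np.sum(np.abs(psi_p)**2) * dp    # ∫ |psi(p)|^2 dp ≈ 1: same state, new basis
print(norm_x, norm_p)
```

The probabilities $|\tilde\psi(p)|^2\,dp$ sum to one just as the $|\psi(x)|^2\,dx$ do: the change of basis redistributes amplitudes but describes the same state.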

---

I will probably get dinged for this general comment packaged in an answer, but I am going to make it for your sake anyway because I still hope that it will clarify some of the intuition about quantum mechanics.

Be careful with what you mean by "probability distribution of the object". The way we teach quantum mechanics is a bit deceptive in its language, but that is historical and therefore unavoidable. You have to learn to translate the old language in a way that actually works without introducing conceptual problems.

A probability distribution automatically implies that we are talking about a statistical ensemble of systems (it's better not to think too much about objects in this context). The "state vector" describes an infinite collection of systems that were prepared with the same initial conditions. So if we prepare two ensembles corresponding to the same state vector and perform a position measurement on one, we get a distribution of positions; if we perform a momentum measurement on the other, we get a distribution of momenta. The individual measurements are not being taken "on the same system", however. They are being taken on two separate (and, in the limit of the law of large numbers, infinite) sets of systems.

The second point concerns the meaning of the wave function vs. the actual measurement (given by the Born rule).

The actual physical position measurement alone can certainly not give you energy information. Consider two plane waves, one with quanta of energy 1 eV and one with quanta of energy 10 eV. The spatial distribution of position measurements on both ensembles is the same: it will be a constant density of events. If we want the energy information, then we have to run those plane waves through a spectrometer, e.g. a grating. The grating will deflect each wave by a different angle, depending on the energy of its quanta. The energy information is therefore in the oscillating term of the wave function, but it is not in the actually measured spatial distribution, unless we use a specific physical transformation mechanism like a grating before we perform a spatial measurement.
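A minimal numerical illustration of this point (my own sketch, with $\hbar=1$ and two arbitrary wavenumbers standing in for the 1 eV and 10 eV quanta):

```python
import numpy as np

# Two plane waves with different wavenumbers (stand-ins for the two energies).
x = np.linspace(0.0, 10.0, 1000)
k1, k2 = 1.0, 10.0                     # hypothetical wavenumbers, hbar = 1
wave1 = np.exp(1j * k1 * x)
wave2 = np.exp(1j * k2 * x)

# The measured spatial density |psi|^2 is constant and identical for both:
same_density = np.allclose(np.abs(wave1)**2, np.abs(wave2)**2)
# The energy information lives only in the oscillating phase:
same_phase = np.allclose(np.angle(wave1), np.angle(wave2))
print(same_density, same_phase)        # True False
```

No spatial histogram, however finely sampled, can tell the two ensembles apart; only the phases differ.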

The mathematical genius of the formalism lies in the fact that it can describe a wide range of physical interactions that lead to specific measurements just by knowing the spectra of their corresponding operators without having to deal with details like "gratings". That's also where the problem with it lies: it is not obvious how we can translate the measurement operator spectra back into actual physical measurements (and for most operators like the unity "spatial measurement operator" there are no perfect physical implementations anyway).

I would suggest that it is instructive to read Heisenberg's matrix mechanics papers as part of an undergrad quantum mechanics course (if you have the bandwidth and interest, that is). You can still find a trace of physics in them that connects actual spectral energy measurements to a formalism involving linear algebra. By the time von Neumann had reduced quantum mechanics to a linear operator formalism (which is what you are being taught in your undergrad course), that connection to actual physical measurements was lost, and the theory seems to drop out of nowhere. Nothing could be further from the truth historically... it's just easier to teach the answer to "How do I calculate something?" than to "How in the world did they come up with this?" in a one-semester introductory class that is already too short to even touch the basics as it is.

---

Can you take knowledge of the probability distribution of positions, change basis, and get information about, say, the probability distribution of energy? Or any other observable?

If you only know the probability distribution relative to one basis (the set of probabilities that the system will be observed in each particular eigenstate within that basis), then the answer is no. If you know the components that make up the representation of a state relative to a given basis, then yes: you can transform these components to get the representation of the state in any other basis - this is just linear algebra. But the components are a set of complex numbers, whereas the probabilities are real numbers given by the squared magnitudes of these (normalised) complex components. By going from components to probabilities you lose the phase information.
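A two-dimensional toy example of this loss of information (my own illustration, not from the question): two states with identical probabilities in one basis can have different components, and a change of basis exposes the difference.

```python
import numpy as np

# Two states whose components differ only by a sign (a relative phase):
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

def probs(v):
    return np.abs(v)**2                   # Born rule: squared magnitudes

# In the original basis both give probabilities (1/2, 1/2) -- indistinguishable:
print(probs(plus), probs(minus))

# A unitary change of basis (rotation by 45 degrees) tells them apart:
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(probs(U @ plus))                    # [1, 0]
print(probs(U @ minus))                   # [0, 1]
```

Knowing only the (1/2, 1/2) probabilities, you could not predict either transformed distribution; the complex components carry the extra phase information.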

---

A simple example:

Take a real wave function $\psi(x)$ in position space, normalized as $$\int\limits_{-\infty}^{+\infty} |\psi(x)|^2\,dx=1.$$ $\psi(x)$ represents a certain (pure) state of the system, with the probability distribution $|\psi(x)|^2=\psi(x)^2$ for measurements of the observable "position of the particle" in this state.

Compare this with the state represented by the wave function $\chi(x)=\psi(x)e^{ip_0x/\hbar}$, where $p_0$ is a real constant (with the dimension of a momentum). As $|e^{i p_0 x/\hbar}|=1$, the state $\chi$ has the same probability distribution $|\chi(x)|^2=|\psi(x)|^2$ as the state $\psi$ for position measurements. In other words, the states $\psi$ and $\chi$ cannot be distinguished by measurements of the observable "position".

But what happens, if we measure the observable "momentum", represented by the differential operator $P=-i \hbar d/dx$, in these two states? Let us simply have a look at the expectation value of the momentum (the mean value of the momentum) in these two states. According to the rules of quantum mechanics, the expectation value of the momentum in the state $\psi$ (remember, $\psi(x)$ was assumed to be real) is given by $$\begin{align} \langle \psi|P \psi\rangle &= \int\limits_{-\infty}^{+\infty} \!\!\!dx\, \psi(x)\left(-i \hbar \frac{d}{dx}\psi(x) \right) \\ &=\frac{-i \hbar}{2}\int\limits_{-\infty}^{+\infty}\!\!\! dx \, \frac{d}{dx} \psi(x)^2 \\ &= \frac{-i \hbar}{2}\left[\psi(+\infty)^2-\psi(-\infty)^2 \right]=0, \end{align}$$ as $\lim\limits_{x\to \pm \infty}\psi(x)=0.$

The physical interpretation of this result is that measurements of the momentum with outcomes $p_1, p_2,\ldots,p_N$, performed on a large number $N$ of copies of the system prepared in the state $\psi$, will have the property $$\lim\limits_{N\to\infty} \frac{1}{N}\sum\limits_{k=1}^N p_k =0.$$

What happens in the case of a momentum measurement if the system is prepared in the state $\chi$? The expectation value of the momentum is now given by $$\begin{align}\langle \chi |P \chi\rangle &= \int\limits_{-\infty}^{+\infty} \!\!\! dx \,\chi^\ast(x) \left(-i \hbar\frac{d}{dx} \right) \chi(x) \\ &= \int\limits_{-\infty}^{+\infty} \!\!\! dx \, e^{-ip_0x/\hbar} \psi(x) \left( -i \hbar \frac{d}{dx} \right) \left(\psi(x) e^{ip_0x/\hbar} \right) \\ &=-i\hbar \int\limits_{-\infty}^{+\infty} \!\!\! dx \, \psi(x) \left( \psi^\prime(x)+\frac{i p_0}{\hbar} \psi(x) \right)\\ &=p_0, \end{align}$$ showing that the experimentally measured values $p_1, p_2,\ldots, p_N$ of the momentum are now distributed in such a way that $$\lim\limits_{N\to \infty} \frac{1}{N}\sum\limits_{k=1}^N p_k = p_0.$$

The probability distribution of the momentum of a particle in an arbitrary state $\phi$ can be studied in full detail by taking advantage of the fact that the momentum-space wave function $\tilde{\phi}(p)$ is related to the associated wave function $\phi(x)$ in position space by the Fourier transform $$\tilde{\phi}(p)=\int\limits_{-\infty}^{+\infty}\!\!\! dx \, \frac{e^{-ipx/\hbar}}{\sqrt{2\pi \hbar}} \phi(x),$$ where the probability distribution for momentum measurements is given by $|\tilde{\phi}(p)|^2$. As the position-space wave function $\phi(x)$ can be recovered from the momentum-space wave function $\tilde{\phi}(p)$ by the inverse Fourier transform $$\phi(x) = \int\limits_{-\infty}^{+\infty} \!\!\! dp \, \frac{e^{ipx/\hbar}}{\sqrt{2 \pi \hbar}} \tilde{\phi}(p),$$ both $\phi(x)$ and $\tilde{\phi}(p)$ contain the same information about the state $\phi$.

Returning to our previous example, the momentum-space wave functions of the states $\psi$ and $\chi$ are thus related by $$\tilde{\chi}(p) =\int\limits_{-\infty}^{+\infty} \!\!\! dx \, \frac{e^{-ipx/\hbar}}{\sqrt{2\pi \hbar}} \psi(x) e^{ip_0x/\hbar} = \int\limits_{-\infty}^{+\infty} \!\! \! dx \, \frac{e^{-i(p-p_0)x/\hbar}}{\sqrt{2 \pi \hbar}} \psi(x) = \tilde{\psi}(p-p_0),$$ showing that the probability distribution $|\tilde{\chi}(p)|^2=|\tilde{\psi}(p-p_0)|^2$ of the momenta in the state $\chi$ is related to the probability distribution $|\tilde{\psi}(p)|^2=|\tilde{\psi}(-p)|^2$ in the state $\psi$ by a translation in momentum space, explaining the above results for the expectation values of the momentum.

In conclusion, the states $\psi$ and $\chi$ have the same probability distribution for the observable "position", but their probability distributions for the momentum differ.
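This conclusion can be checked numerically (a sketch under my own assumptions: $\hbar=1$, a Gaussian for $\psi$, and a finite-difference derivative standing in for the operator $P$):

```python
import numpy as np

hbar = 1.0
N = 2048
x = np.linspace(-30.0, 30.0, N)
dx = x[1] - x[0]
p0 = 3.0                                  # hypothetical momentum shift

psi = np.exp(-x**2 / 2) / np.pi**0.25     # real Gaussian: expect <P> = 0
chi = psi * np.exp(1j * p0 * x / hbar)    # same |chi|^2, extra phase factor

def expect_p(phi):
    # <P> = ∫ phi*(x) (-i hbar d/dx) phi(x) dx, derivative by central differences
    dphi = np.gradient(phi, dx)
    return np.real(np.sum(np.conj(phi) * (-1j * hbar) * dphi) * dx)

print(np.allclose(np.abs(psi)**2, np.abs(chi)**2))  # True: same position distribution
print(expect_p(psi))                                # ~0
print(expect_p(chi))                                # ~p0 = 3.0
```

The position densities agree to machine precision, while the momentum expectation values differ by $p_0$, exactly as the analytic calculation above predicts.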

Note that $\psi(x) \to e^{i \alpha} \psi(x)$ with a constant phase angle $\alpha \in \mathbb{R}$ does not change the physics. Pure states are described by "rays" in Hilbert space.

Finally, a short remark on the general case. Assume you decompose a state vector $|\psi\rangle$ with respect to the complete orthonormal system $\{|\phi_n\rangle\}_{n=1}^\infty$ of eigenvectors of some observable $A=\sum\limits_{n=1}^\infty |\phi_n \rangle a_n \langle \phi_n|$ with nondegenerate real eigenvalues $a_1, a_2, \ldots$ as $$|\psi\rangle = \sum\limits_{n=1}^\infty c_n |\phi_n\rangle, \qquad \sum\limits_{n=1}^\infty |c_n|^2 =1,$$ where $|c_n|^2$ is the probability to measure the eigenvalue $a_n$ of $A$ in this state. A change of the expansion coefficients $c_n \to c_n e^{i \alpha_n} $ with $\alpha_n \in \mathbb{R}$ will not change the probability distribution for the eigenvalues $a_n$ of the observable $A$ in the new state $$|\chi\rangle =\sum\limits_{n=1}^{\infty} c_n e^{i \alpha_n} |\phi_n\rangle,$$ but the probability distribution for the eigenvalues $b_k$ of some observable $B$ with $[A,B]\ne 0$ will in general change (unless $\alpha_n =\alpha \, \forall \, n$).
