
In the beginning, quantum mechanics is introduced by representing states as cute little complex vectors, for example: $$|a\rangle=a_+|a_+\rangle+a_-|a_-\rangle$$ This is a complex vector representing a state that can collapse into two possible states, with corresponding probabilities $|a_+|^2,|a_-|^2$. Observables, on the other hand, are represented by Hermitian operators: the eigenvalues of those operators are the possible outcomes of a measurement, and the corresponding eigenvectors are the states of the system after the measurement.

The problem is that we often deal with observables with an infinite number of possible measurement outcomes (one classical example of this is a measurement of position), so we need to work with a complex vector space of infinite dimension. (Incidentally, functions with real argument and complex value can be thought of as forming an infinite-dimensional vector space; this will become important later, I think.) So now, after a bit of work to define the specifics of this infinite-dimensional vector space, we can define the position and momentum operators ($\hat{x},\hat{p}$). Here comes the problem for me: I have found two different definitions of these two operators. The first one comes from Leonard Susskind's lectures: $$\hat{x}\psi(x)=x\psi(x)$$ $$\hat{p}\psi(x)=-i\hbar\frac{\partial}{\partial x}\psi(x)$$ where $\psi(x)$ is any function $\psi : \mathbb{R} \to \mathbb{C}$.
The second definition comes from Stefano Forte - Fisica Quantistica and it's the following: $$\langle x|\hat{x}|\psi\rangle=x\psi(x)$$ $$\langle x |\hat{p}|\psi\rangle=-i\hbar \frac{\partial}{\partial x}\psi(x)$$ where $|x\rangle$ is an eigenvector of the position operator and $\psi(x)$ is the wave function, defined as (with $|\psi\rangle$ an arbitrary state) $$\psi(x)=\langle x|\psi\rangle$$ The first definition has the operators acting on functions, while the second has them acting on vectors. This confounds me quite a bit. In the continuous case, are the states represented by functions or by vectors? Does this distinction even make sense since functions form a vector space? We also like to talk about eigenfunctions and eigenvectors somewhat interchangeably. But I don't see why we can talk about them interchangeably: for example, what does it mean to differentiate a vector with respect to $x$, as the momentum operator does?
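To make the first definition concrete for myself, here is a toy numerical sketch (my own construction with an assumed grid, not taken from either source), where the "function" is literally a finite vector of samples and the two operators are matrices:

```python
import numpy as np

# Toy discretization (my own assumption): sample psi on a finite grid so the
# "function" becomes a finite vector, x-hat a diagonal matrix, and p-hat a
# central-difference matrix approximating -i*hbar*d/dx.
hbar = 1.0
n = 400
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
psi = np.exp(-x**2)                      # the components psi(x_i) of the state

X = np.diag(x)                           # x-hat: multiply each component by x_i
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
P = -1j * hbar * D                       # p-hat (boundary rows are only approximate)

x_psi = X @ psi                          # same numbers as x * psi
p_psi = P @ psi                          # approximates -i*hbar * d(psi)/dx
```

In this discretized picture both operators just act on a plain column of numbers, which is part of why I don't see where the distinction between "functions" and "vectors" is supposed to live.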

  • "one classical example of this is a measurement of position" Very nice pun.
    – DanielSank
    Commented Aug 31, 2020 at 18:56
  • Glad you like it, at least I'm funny.
    – Noumeno
    Commented Aug 31, 2020 at 18:58
  • On your point about derivatives of vectors, do you have the same issue when asking about gradients/curls/divergences? Taking derivatives of vectors is basic vector calculus.
    – Triatticus
    Commented Aug 31, 2020 at 19:03
  • Yes, but in this case the vectors that represent states seem to have fixed components, not components that we can interpret as functions.
    – Noumeno
    Commented Aug 31, 2020 at 19:17
  • @Triatticus Curl and divergence only make sense for vector fields, i.e. a set of vectors defined over e.g. $\mathbb{R}^n$.
    – DanielSank
    Commented Aug 31, 2020 at 21:20

1 Answer


It's good that you're confused, because Susskind's notation is ridiculous. $\psi(x)$ is a number, so you cannot conceivably apply the $\hat x$ operator to it. This is an example of a typical misuse of notation by physicists, who like to denote a function $f$ by its value at a particular point, $f(x)$. This abuse of notation is responsible for so much confusion that it breaks the heart.

In the continuous case, are the states represented by functions or by vectors?

I would say that, in the continuous case, the vectors are represented by functions.

Remember that a vector $\left \lvert v \right \rangle$ can be expressed in many different bases. In one basis, this vector may have components $(0, 1)$, while in another basis it may have components $(1 / \sqrt{2})(1, 1)$. Similarly, the vector $\left \lvert \psi \right \rangle$ may have different components in infinite dimensions... and those components are expressed as a function $\psi: \mathbb{R} \rightarrow \mathbb{C}$.
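For instance, here is a minimal finite-dimensional sketch (my own numbers, assuming the second basis is just the first one rotated by 45 degrees, to reproduce the $(0,1)$ versus $(1/\sqrt{2})(1,1)$ example above):

```python
import numpy as np

# Minimal finite-dimensional sketch: the same abstract vector has components
# (0, 1) in one orthonormal basis and (1/sqrt(2))*(1, 1) in a rotated basis.
v = np.array([0.0, 1.0])                 # components of |v> in the first basis

# the second basis vectors, written out in the first basis
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)

# components in the second basis are the overlaps <e_i|v>
v_rotated = np.array([e1 @ v, e2 @ v])
print(v_rotated)                         # [0.70710678 0.70710678] = (1/sqrt(2))*(1, 1)
```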

For example, the notation $\psi(x)$ usually means "The components of the vector $\left \lvert \psi \right \rangle$ in the $x$ basis", where by "$x$ basis" we mean the set of vectors $\left \lvert x \right \rangle$ with the property $$ \hat X \left \lvert x \right \rangle = x \left \lvert x \right \rangle $$ i.e. the set of vectors that are eigenvectors of the $\hat X$ operator.
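To make the analogy with components explicit (this is just the standard resolution of the identity in the $x$ basis, nothing beyond what you already wrote): just as a finite-dimensional vector can be written as $\lvert v \rangle = \sum_i v_i \lvert e_i \rangle$, we can expand $$ \lvert \psi \rangle = \int dx \, \lvert x \rangle \langle x \vert \psi \rangle = \int dx \, \psi(x) \, \lvert x \rangle \, , $$ so $\psi(x)$ plays exactly the role of the component $v_i$, with the discrete index $i$ replaced by the continuous label $x$.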

See, when you wrote $$ \langle x | \hat X | \psi \rangle = x \psi(x) $$ you can think of it like this $$ \langle x | \hat X | \psi \rangle = \left( \langle x | \hat X \right) \lvert \psi \rangle $$ and as $\hat X$ is hermitian it can act to the left producing $$ x \langle x \lvert \psi \rangle = x \, \psi(x) $$ where we used the definition $\psi(x) \equiv \langle x | \psi \rangle$.
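Equivalently (assuming the usual delta-function normalization $\langle x \vert x' \rangle = \delta(x - x')$), you can insert a resolution of the identity and use the matrix elements of $\hat X$ in its own eigenbasis: $$ \langle x \rvert \hat X \lvert \psi \rangle = \int dx' \, \langle x \rvert \hat X \lvert x' \rangle \langle x' \vert \psi \rangle = \int dx' \, x' \, \delta(x - x') \, \psi(x') = x \, \psi(x) \, . $$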

This is all in agreement with what you already wrote. So now let's get to the questions.

In the continuous case, are the states represented by functions or by vectors?

Either way, but note that the functions are representations of the vectors in a particular basis.

Does this distinction even make sense since functions form a vector space?

This is quite deep. The representations of vectors in a particular basis are themselves vector spaces. This is true even in finite dimensions. Consider the set of arrows in two dimensions. Those arrows can be summed and multiplied by scalars, so they form a vector space. However, if we choose a basis, we can express those arrows as pairs of real numbers $(x, y)$, and those pairs are themselves a vector space, as they too can be summed and multiplied by scalars. One can say that the vector space of arrows in two dimensions is isomorphic to the vector space of pairs of real numbers, and so the space of pairs of real numbers can be used to represent the space of arrows.
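If it helps, here is a tiny sketch of that isomorphism in code (my own trivial example): the map "arrow $\mapsto$ pair of components in a chosen basis" sends sums to sums and scalar multiples to scalar multiples, which is all the isomorphism claim says.

```python
import numpy as np

# Trivial illustration: component pairs add and scale exactly the way the
# arrows they represent do, so "take components in a fixed basis" is a
# vector-space isomorphism between arrows and pairs of numbers.
a = np.array([1.0, 2.0])    # components of arrow A in some chosen basis
b = np.array([3.0, -1.0])   # components of arrow B in the same basis

print(a + b)      # components of the arrow A + B  -> [4. 1.]
print(2.5 * a)    # components of the arrow 2.5*A  -> [2.5 5. ]
```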

We also like to talk about eigenfunctions and eigenvectors somewhat interchangeably.

Yes, this is typical loosey-goosey physicist talk.

But I don't see why we can talk about them interchangeably

Good, that's a good instinct.

for example, what does it mean to differentiate a vector with respect to $x$, as the momentum operator does?

So first of all, as we said above, Susskind's notation $\hat x \psi(x)$ is unclear and bad for two reasons:

  1. It makes no sense to apply the $\hat x$ operator to the number $\psi(x)$.
  2. $\hat x$ exists independently of any choice of basis, but $\psi(x)$ is implied to mean "the components of $\lvert \psi \rangle$ in the $x$ basis". The $\hat x$ is basis independent, but the $\psi(x)$ is not, so he's mixing notations, which is confusing (see the sketch just after this list).
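To see the basis dependence concretely (assuming the standard plane-wave convention $\langle x \vert p \rangle = e^{ipx/\hbar} / \sqrt{2\pi\hbar}$, which neither of us wrote down above), the same basis-independent operator $\hat x$ is represented by multiplication in the $x$ basis but by a derivative in the $p$ basis: $$ \langle x \rvert \hat x \lvert \psi \rangle = x \, \psi(x) \, , \qquad \langle p \rvert \hat x \lvert \psi \rangle = i\hbar \frac{\partial}{\partial p} \langle p \vert \psi \rangle \, . $$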

As for the momentum operator, note that it is only a derivative when expressed in the $x$ basis! If we work in the $p$ basis, then we'd have e.g. $$ \langle p | \hat P | \psi \rangle = p \psi(p) $$ where here $\psi(p)$ is implied to mean "the components of $\lvert \psi \rangle$ in the $p$ basis". The function $\psi(p)$ is also a wave function -- it's just the wave function for momentum instead of for position.
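One way to see where the derivative comes from (again assuming $\langle x \vert p \rangle = e^{ipx/\hbar}/\sqrt{2\pi\hbar}$) is to insert a complete set of momentum eigenstates and use $p \, \langle x \vert p \rangle = -i\hbar \, \partial_x \langle x \vert p \rangle$: $$ \langle x \rvert \hat P \lvert \psi \rangle = \int dp \, p \, \langle x \vert p \rangle \langle p \vert \psi \rangle = -i\hbar \frac{\partial}{\partial x} \int dp \, \langle x \vert p \rangle \langle p \vert \psi \rangle = -i\hbar \frac{\partial}{\partial x} \psi(x) \, . $$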

Now note that I'm using awful notation myself here because $\psi(x)$ and $\psi(p)$ look like the same function evaluated at two different points whereas really they are completely different functions [1]. Really we should distinguish the position and momentum wave functions by using different symbols:

\begin{align} \langle p | \psi \rangle &= \psi_\text{momentum}(p) \\ \langle x | \psi \rangle &= \psi_\text{position}(x) \, . \end{align} Please let me know if this answers all your questions.

[1]: They are actually related by Fourier transform.
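If you want a quick numerical sanity check of that footnote, here is a sketch (my own grids and conventions; it just verifies that multiplying by $p$ in the momentum representation matches applying $-i\hbar \, \partial_x$ in the position representation for a Gaussian packet):

```python
import numpy as np

# Sanity check: applying p-hat as multiplication by p in the momentum
# representation should agree with applying -i*hbar*d/dx in the position
# representation, for a Gaussian wave packet.
hbar = 1.0
x = np.linspace(-20.0, 20.0, 1001)
p = np.linspace(-10.0, 10.0, 1001)
dx, dp = x[1] - x[0], p[1] - p[0]
psi_x = np.exp(-x**2 / 2 + 1j * 2.0 * x)      # Gaussian with mean momentum ~ 2*hbar

# psi_momentum(p) = (2*pi*hbar)^(-1/2) * Integral dx e^{-i p x / hbar} psi_position(x)
forward = np.exp(-1j * np.outer(p, x) / hbar) / np.sqrt(2 * np.pi * hbar)
psi_p = forward @ psi_x * dx

# Route 1: multiply by p in the p basis, then transform back to the x basis
backward = np.exp(1j * np.outer(x, p) / hbar) / np.sqrt(2 * np.pi * hbar)
p_psi_from_p = backward @ (p * psi_p) * dp

# Route 2: apply -i*hbar*d/dx directly in the x basis
p_psi_direct = -1j * hbar * np.gradient(psi_x, x)

print(np.max(np.abs(p_psi_from_p - p_psi_direct)))   # small, up to grid error
```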

