In inflationary cosmology, primordial quantum fluctuations during inflation are considered responsible for the asymmetry and lumpiness of the universe that subsequently formed. However, according to the Copenhagen interpretation, a random quantum phenomenon only occurs when the system is observed; before observation, the quantum state is symmetric. So the question is: who observed the universe while it was inflating? Obviously, there was no conscious creature around at that time.

This problem is actually discussed in the paper "The Bohmian Approach to the Problems of Cosmological Quantum Fluctuations" (Goldstein, Struyve and Tumulka; arXiv:1508.01017), and the proposed solution is said to be an observer-independent interpretation (the pilot-wave theory).

14 Answers

Answer (score 73)

“Observe” oftentimes causes a lot of confusion for this exact reason. It doesn’t actually refer to some conscious entity making an observation.

Rather, think about how we actually make an observation about something. You have to interact with the system in some way. This can be through the exchange of photons, for example. This interaction is what constitutes an observation having taken place.

Obviously, particles can undergo their fundamental interactions without a nearby sentient entity.

For the sake of analogy, consider measuring the air pressure in a tire: to do so, you have to let out some air, which changes the very pressure you set out to measure.

Comments:
  • According to the Copenhagen interpretation, the wavefunction collapse is "subjective": it DOES depend on the act of observation and has nothing to do with physical or experimental imperfections like the exchange of a photon. If you say that the wavefunction can collapse automatically, you are advocating "objective-collapse" interpretations, which are proposed by some people such as Roger Penrose but are not mainstream. – Alex L (Jan 10, 2019)
  • You have a misunderstanding. Collapse of the wave function in the Copenhagen interpretation is caused by a thermodynamically irreversible interaction with a classical environment. I agree that it depends on the act of observation; observation just doesn't mean what you think it does. (Jan 10, 2019)
  • @AliLavasani The point is that the exchange of a photon is not an experimental imperfection; observation without it is impossible, so it is a fundamental part of the observation. I believe I have been told that the realization that you can't observe something without interacting with it, and hence changing it, is what led Bohr to the Copenhagen interpretation, but maybe I misremember. Others here are more knowledgeable on this history. – Vincent (Jan 11, 2019)
  • Aren't observations interactions only between a "classical system" and a quantum system? If we don't divide the Universe into different systems, but describe it by a single state, there wouldn't be any observations. – jinawee (Jan 11, 2019)
  • The von Neumann interpretation requires consciousness; the Copenhagen interpretation only requires a measurement. But what constitutes such a measurement remains unclear, and opinions on it are divided. en.wikipedia.org/wiki/… (Jan 11, 2019)

Answer (score 31)

The Copenhagen interpretation isn't an essential part of quantum mechanics. It isn't required in order to make physical processes happen. It's just a way of describing what seems to happen when an observer makes a measurement. It's not even the only way of describing what it seems like to the observer.

However, according to the Copenhagen interpretation, any random quantum phenomenon only occurs when the system is observed; [...]

If you don't use the Copenhagen interpretation, quantum mechanics still works fine. In your example of the early universe, all the quantum-mechanical processes work in the same way. E.g., a hydrogen atom in an $n=3$ state will radiate light, and at a later time it will be in a superposition of $n=2$ and $n=1$. No randomness, just a superposition.
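
As a minimal illustration of this point, here is a numerical sketch (my illustration, not part of the original answer) of collapse-free evolution for a toy two-level system with an arbitrary stand-in Hamiltonian: the dynamics is deterministic, and the superposition evolves exactly as the sum of its separately evolved parts.

    import numpy as np
    from scipy.linalg import expm

    H = np.diag([1.0, 2.0])                                    # toy Hamiltonian: two energy levels (hbar = 1)
    psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)    # equal superposition of the two levels

    def evolve(psi, t):
        """Deterministic unitary evolution U(t) = exp(-i H t) applied to psi."""
        return expm(-1j * H * t) @ psi

    psi_t = evolve(psi0, t=3.7)
    print(np.abs(psi_t) ** 2)          # stays [0.5, 0.5]: no randomness, still a superposition

    # Linearity: evolving the superposition equals superposing the evolved eigenstates.
    e1, e2 = np.eye(2, dtype=complex)
    assert np.allclose(psi_t, (evolve(e1, 3.7) + evolve(e2, 3.7)) / np.sqrt(2))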

[...] before observation, the quantum state is symmetric.

I'm not sure what you mean by symmetric here. This seems like a nonstandard description.

Comments:
  • You say there was a "superposition" of all possible outcomes during inflation, so what destroyed the superposition? In Copenhagen, ONLY observation can collapse the superposition. If you believe it collapsed automatically, you are defending "objective collapse" interpretations, and another option is that the universe is still in superposition (the many-worlds interpretation). Either way you are implying one of these two kinds of interpretations, aren't you? – Alex L (Jan 11, 2019)
  • @Wolphram Yes, any interpretation other than Copenhagen has no problem. Copenhagen shouldn't fail either, so my question is how the observation can have been done at the beginning of the universe. I don't know, maybe observation is done NOW when we look at the universe!! – Alex L (Jan 11, 2019)
  • @Wolphram Notice that Copenhagen works perfectly. Interpretations like Bohmian mechanics have more serious problems (nonlocality, or retrocausality in the transactional interpretation, etc.). – Alex L (Jan 11, 2019)
  • @AliLavasani How about the good old many-worlds interpretation? No wavefunction collapse, no issue choosing when it happens. The only thing you lose is the notion that only the universe you see is what exists. It's not that far-fetched either. An electron created by the collision of two gamma photons can only "see" a positron that flies away in the exact opposite direction, which is just a layman's way of saying that a superposition of states evolves the same as if you evolve each state separately and only then sum up the states. (Jan 11, 2019)
  • @John In many worlds, you have the problem that you cannot interpret probability for your "worlds". For example, suppose the probability of quantum event A is 0.7 and the probability of quantum event B is 0.3. What does this mean? Does it mean you have 7 universes in which A happens and 3 in which B happens, or what? – Alex L (Jan 11, 2019)

Answer (score 18)

"Observation" does not refer to a human actually viewing and consciously perceiving a system. If one system is capable of affecting another, then the latter is said to be measuring, or observing, the former. The reason conscious observation also constitutes measurement is simply that interaction with the environment is fundamentally necessary for our eyes to perceive an event.

Answer (score 12)

The Copenhagen interpretation is nothing but an impediment to understanding quantum mechanics. There is no such thing as "wave function collapse" within the system described by QM, nor in any falsifiable physical sense outside of the theory. At best it's an artificial glue for sticking quantum and classical models together; less flatteringly it's a mental crutch for people who don't want to accept that the best model of physical reality we can hope for describes not the evolution of a single deterministic state, but rather the deterministic evolution of a probability model of possible observed states.

Ultimately what's attributed to "wave function collapse" from an act of observation is just conditional probabilities, or if you want to go even more basic, correlations between random variables. I like to explain this via analogies with other applications of conditional probability, and usually end up picking something morbid like cause of death. As a random member of a general population, you have some $X$ percent chance of dying of a particular disease. If you get DNA tests done, you might find out that you instead have a $Y$ percent chance of dying from it, where $Y$ is greater or less than $X$. No physical change took place when you had the test done to change the likelihood of dying from that particular disease. Rather, you're just able to make better predictions based on correlations.
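
As a toy illustration of the same point, here is a small conditional-probability sketch (the numbers are invented for the example): conditioning on the test result changes the probability you assign, with no physical change to the person being described.

    # Hypothetical numbers, chosen only to illustrate conditioning on new information.
    p_gene = 0.01                    # prior probability of carrying the risk gene
    p_death_given_gene = 0.30        # chance of dying of the disease if a carrier
    p_death_given_no_gene = 0.02     # chance if not a carrier

    # X: the unconditional probability, before any test
    p_death = p_gene * p_death_given_gene + (1 - p_gene) * p_death_given_no_gene

    # Y: the probability conditioned on a (hypothetical) positive DNA test
    p_death_after_test = p_death_given_gene

    print(f"before the test: {p_death:.3f}, after a positive test: {p_death_after_test:.3f}")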

Now, neither QM nor any other physical theory is going to tell us much about what fine-grained observations could have been made in the very early universe, because the correlations to anything we can observe are going to be too small. But that doesn't mean the probability model didn't evolve the same way then as it does now, with all the consequences that entails.

Comments:
  • It is true that observing a particle's position can be thought of as just acquiring information about its position. However, unlike the case with the disease, you can't take this to mean that the particle really had a definite, though unknown, position the entire time; such a hidden-variable theory just doesn't work. That's the real subtlety of quantum mechanics, the puzzle that led people to adopt the Copenhagen interpretation in the first place. If you simply ignore it, you're not really talking about quantum mechanics at all. – knzhou (Jan 11, 2019)
  • Instead, you're just repeating what you already know about classical probability and hoping it all transfers effortlessly to quantum mechanics. The problem is, experiment tells us it doesn't. Quantum mechanics and classical mechanics are different things. – knzhou (Jan 11, 2019)
  • @R.. In most QM classes you're told that after a measurement associated with an operator $A$ that yields result $a$, the state "collapses" to $|a\rangle$. I'd like to see the same formulation with classical correlations. – jinawee (Jan 11, 2019)
  • @knzhou: I didn't assert that there is any possible hidden-variable theory; quite the opposite, that QM only describes the evolution of a probability model, not of some "real state" among the elements of the probability space. Everything I've said above is roughly equivalent to how you view QM through the MWI, but without its ontological baggage, which is problematic just like the CI, though for different reasons. (Jan 11, 2019)
  • @Menno: Nothing requires "probabilities cancelling each other out". The fact that QM describes the evolution of a probability model (determined by the wave function) is not at all controversial. Interference is one consequence of the rules for how the probability model evolves. I'm not clear what you're actually objecting to. (Jan 13, 2019)

Answer (score 6)

For an interpretation of quantum mechanics that requires "conscious observers", you can assign our present-day astronomers that role. Certainly their observations are not made at the time of the early universe itself. That's just fine: there is no problem with observing some 14 billion years after the fact.

The problem only exists if you insist that observations must be made simultaneously with the observed phenomenon. But simultaneity has no place in physics; such a requirement would be at variance with basic physics (relativity). Quantum mechanics does not use simultaneity, and does not prescribe when observations must be made.

Comments:
  • This is, I think, the correct answer! Whether observer-dependent or not, these interpretations can all account for the observed facts as long as they don't rely on some "time of observation" or "time of collapse". This is the case for both the Copenhagen and the Bohmian interpretations, so they give the same results. – user140255 (Jan 14, 2019)
  • The logical outcome of this form of absurdity is that an observation made by an astronomer on Earth, today, can have an effect on an event which occurred billions of years ago in the early history of the universe. If we accept such stupidity we may as well throw away science and the scientific method, and go back to believing in magic. An observation made today cannot travel in time and thereby affect events which have already occurred. – Ed999 (Sep 6, 2020)

Answer (score 6)

If the Copenhagen interpretation is correct (unknown), and if it requires conscious observers (also unknown), then our observations of the universe could retroactively collapse the superpositions; see https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser.

Comments:
  • This seems to mesh with some work by Hawking I was reading a while back that suggested that the present influences the early parameters of the universe. – Michael (Jan 11, 2019)

Answer (score 5)

Good question! I've been thinking about this myself too. Here's what I think.

If a wavefunction could be collapsed only by an act of observation by a conscious creature (be it a flea, an elephant, or a human being; I can see no reason why people should be preferable, the only difference distinguishing us from them being that we possess in our minds a theory of QM, and I'm pretty sure that can't cause a wavefunction to collapse), then it would have been impossible in the first place for conscious creatures to develop in the course of history, because the entire Universe would have remained in a continuously developing superposition of states without any collapse taking place (and collapse is a necessary condition for conscious creatures to develop). This means that conscious creatures making observations are not the cause of the collapse (nor can conscious creatures now cause the collapse at the beginning of the Universe retroactively, because conscious creatures couldn't have developed if the collapse were caused by them; more circular one cannot get). So when inflation took place, no conscious creatures were needed to make a wavefunction collapse, and as you state in your question, there obviously were none. (If the collapse is instead caused by "a thermodynamically irreversible interaction with a classical environment", then by the same token no classical environment could have developed either.)

This means, for example, that the pattern of lines (resulting from the collapse of a whole lot of wavefunctions corresponding to photons) appearing on the screen in the double-slit experiment will develop independently of some conscious creature observing the setup.

This doesn't necessarily mean, though, that an observer(creature)-independent interpretation must be one that postulates a pilot wave (or hidden variables). An "inherently probabilistic" interpretation will do as well; both can make a wavefunction collapse without an observer. I think which interpretation corresponds to reality will remain unknown (unless someone comes up with an experiment to decide between them, which I find hard to imagine) and will remain a question of "taste". Einstein advocated a theory underlying the apparently probabilistic behavior of matter ("Gott würfelt nicht", that is, "God doesn't play dice"), but many others (like Bohr in the famous Bohr-Einstein debate) took the opposite stand. The hidden-variables theory does, however, give an explanation for the probability interpretation (as proposed by Born), which in my eyes is an advantage: it answers the question of how something can appear to be probabilistic.

Answer (score 4)

Observation does not mean "by a human". Observation is any action on the system from outside the system: photons interacting, the confines of the system being changed, and so on.

Your comment above about superpositions "automatically collapsing in the early universe" is wrong. A hydrogen atom in a superposition of energy levels will collapse when the value of its energy level is needed (e.g. in a physical collision), and that counts as an observation. The main takeaway is that when we say observation, we mean an interaction with a clearly defined outcome.

Answer (score 4)

The problem with this question is that it assumes there is some metaphysical interpretation that we can be sure is true. While we have excellent equations that work incredibly precisely, we are not sure which qualitative interpretation of these equations is real.

There are now countless interpretations, each with its own sub-interpretations. Alexander R. Pruss splits these interpretations into two main groups: no-collapse theories, with a deterministically evolving wavefunction, and wavefunction-collapse theories.

Among the collapse theories we have the Copenhagen interpretation, where the wavefunction collapse is triggered by a measurement; definitions of what constitutes a measurement differ a lot from one physicist or philosopher to another. The Ghirardi-Rimini-Weber theory is another collapse theory, in which collapse is triggered spontaneously at some particular rate over time. The trouble with this theory is that no spontaneous collapse has ever been observed, and an additional parameter, the rate of collapse, has to be introduced and explained in some way.
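
For a rough sense of scale, here is a back-of-the-envelope sketch using the ballpark localization rate usually quoted for GRW, about $10^{-16}$ spontaneous hits per particle per second; treat the exact numbers as assumptions for illustration only.

    import numpy as np

    lam = 1e-16                      # assumed spontaneous localization rate per particle (1/s)
    year = 3.15e7                    # seconds in a year

    # A single isolated particle: chance of even one hit over a century of watching it.
    p_single = 1 - np.exp(-lam * 100 * year)
    print(p_single)                  # ~3e-7, one reason no spontaneous collapse has been seen

    # A dust grain with ~1e18 constituent particles: expected time until the first hit.
    print(1 / (lam * 1e18))          # ~0.01 s, so macroscopic superpositions die quickly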

There are also many no-collapse theories, such as Bohmian mechanics, the Many Worlds interpretation, the Many Minds interpretation and the Traveling Forms interpretation. In these, the universe continues to develop deterministically, but each has its own reasons as to why we can only get stochastic results from the deterministic system upon measurement. Each of these interpretations also has its own problems. Bohmian mechanics has the problem of nonlocality. The Many Worlds interpretation is unclear as to how splits occur and is a bit bizarre to try to reconcile with, for example, the conservation of energy. The Many Minds interpretation leads to bizarre absurdities such as Boltzmann minds and universes where there is just one mind surrounded by zombies. I don't think Traveling Forms is well enough known to have its own critique, but I expect someone will come up with one at some point.

I found an excellent study of this topic in this book: http://www.michalpaszkiewicz.co.uk/blog/reviewnapocs/index.html

Comments:
  • I don't find it problematic to ask how a popular interpretation is compatible with a specific phenomenon. Your overview of other interpretations is actually nice, but that's not what is asked here. – M. Stern (Jan 11, 2019)
  • Thanks for the comment. The OP mentioned two different interpretations (Copenhagen and Bohmian) and didn't specify a particular desire for an answer about the Copenhagen interpretation, so I wasn't sure how many interpretations the OP was aware of and thought it needed a more generalised answer. (Jan 12, 2019)
  • "is a bit bizarre to try to reconcile with, for example, the conservation of energy" There is no conflict whatsoever between MWI and energy conservation. If you think unitary evolution doesn't violate energy conservation, neither does MWI, since the latter is just the former. – user76284 (Oct 8, 2019)
  • Do you not see a problem with whole additional universes (energy and all) being added on a whim because a particle could be in one of two spin states? (Oct 9, 2019)

Answer (score 2)

As others have mentioned, your definition of "observer" seems to have misled you.

Take the double slit experiment for instance. In this case, the observer which forces the wave function to collapse is the screen, not the person looking at the screen. The results would be the same without a person looking at the screen.

Comments:
  • So how large does this screen need to be in order to count as an "observer"? What if you isolate the screen very well, would you still call it an observation? This approach has some problems if you think about it. There are similar problems with the highly upvoted answers, to be fair... – M. Stern (Jan 15, 2019)

Answer (score 2)

It's an interesting question, with no answer.
You're asking about quantum effects in the pre-inflation universe, which could have been as small as $10^{-26}\ \mathrm{m}$. We are talking about a very massive and extremely small system, which would have to be described by a theory that unifies general relativity and quantum mechanics. As of now we just don't have such a theory, so anything might have happened; at the very least, quantum theory as we know it probably does not apply.

Answer (score 0)

The interpretations of QM, such as the Copenhagen interpretation, are just that: interpretations. The actual behavior of the universe that QM predicts is defined using just a wave function. However, there's a philosophical issue with this. We as humans don't see wave-function-like behavior on a day-to-day basis; we see what we think of as concrete objects, governed by classical mechanics. The interpretations are ways that such a classical object, were it to exist, could interact with the quantum world in a manner consistent with QM's predictions.

No observer or observation is needed for the world to evolve in the ways QM predicts. However, should any part of the universe begin to act in a way similar to a classical object (which parts do), QM should predict behaviors which, in the limiting case, coincide with the interpretations.

In the particular case of the Copenhagen interpretation, it does suggest that if a truly metaphysical being were to observe a quantum system in the way one observes a classical system, it would have to do something akin to wavefunction collapse. However, a more useful takeaway might be that if you have an entity whose properties lead it to interact rather classically (such as your hand), you should expect the result of that entity's interactions to be similar to wavefunction collapse.

If you are 100% certain that you are a 100% classical being with 0% quantum behavior, then you will need an interpretation to explain how you interact with a world that is governed by quantum mechanics (read: everything). However, if you are merely 99.9999999% certain that you are a 99.999999% classical being with 0.000001% quantum behavior, then you can view yourself as part of the quantum system, while still finding it very convenient to make predictions based on classical physics. Since your interactions typically involve trillions of individual interactions or more, classical physics does a very good job of making predictions. It's only when the number of interactions gets small that the quirks of this classical approach start to show, and we have to think of things in QM terms.

Answer (score 0)

This is, I believe, the result of a lot of poor use of language and of unfortunate pop-sci explanations.

Quantum mechanics does not say that things "need an observer" to "exist", any more than classical mechanics does. One can say that it is philosophically debatable, of course, whether things "exist" when we aren't looking, but I'd say that quantum mechanics sheds no more or less light on this question than we already had.

You have no doubt heard things like "the object doesn't 'exist' or 'doesn't have properties'" prior to being "measured" or "observed". This is not right. A better way to put it is that the object has fuzzy properties, and it has them at all times. Even when a measurement or observation is made, at best all this does is sharpen one property at the expense of another. That is the tradeoff encapsulated in Heisenberg's uncertainty principle, which, by the way, is perhaps better translated from the German as the "fuzziness" or "blurriness" principle: the word translated as "fuzzy" or "blurry" here is the same one that would be used to describe a blurry photograph, i.e. one taken with the camera lens defocused.

What "fuzziness" means here is that there is a restriction on the level of information which defines the properties of the particle. In Newtonian mechanics, properties of a particle such as its position are defined "with an infinite amount of information": the variable $x$ representing position, at least mathematically, is an infinite-precision real number. It gives us perfect information, singling out one point in space with absolute certainty. Moreover, in general it would require an infinite amount of paper to write it all down.

The notion of "limiting" the amount of information is what leads us to use probability distributions. Probability distributions are, mathematically, how we represent a situation where information is missing and, indeed, one should not find this concept too unfamiliar. If someone tells you they're "only 85% sure" about something, it means the information they have about it isn't really as good as someone who can be 100% sure or 0% sure (i.e. 100% sure of its falsity). The overall degree of privation of information for a particular distribution can be quantified by the Shannon entropy, which for a finite or countable set of outcomes with probabilities $P_i$ is defined by

$$H := -\sum_i P_i \lg(P_i)$$

where $\lg$ is the base-two (binary) logarithm, which we customarily use if we want to measure the entropy in bits; for a different unit, use a different base (base $e$ gives "nats", base 10 gives "hartleys"). The higher $H$ is, the more information we are missing as a result of nontriviality in the probability distribution. The "shape" of the distribution represents in what way we are missing information, and the wide range of possible shapes corresponds to the great variety of possible ways we could be lacking it.

We can lack information about a situation in many ways. Consider, fittingly since it concerns location, a pet cat (not originally inspired by a more famous one, but on second look rather apropos for this discussion) left at home while we go to run an errand (and, unlike in the more famous thought experiment, no one is gassing this one to sate their own curiosity). We expect the cat to wander, so when we come back we will not necessarily know in which room of the house we will find it. Alternatively, we may know that, if it's the right kind of cat, it will mostly stay in one place, but that a friend we entrusted with access to the house may come over during our outing, which would cause it to move; we don't know that for sure. We then end up with two different probability distributions for the location of the cat as we open the door, that is, two different ways in which we are missing information. In both cases, one way we can describe them mathematically is as a probability distribution $P(\lambda, \phi)$ of the cat's geographic position on the Earth (measured to suitably fine resolution), given here as latitude $\phi$ and longitude $\lambda$.
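
As a small worked example (mine, not part of the original argument), here are the entropies, in bits, of two such cat-location distributions over five rooms of the house:

    import numpy as np

    def shannon_entropy_bits(p):
        """H = -sum_i P_i lg(P_i) in bits; 0 * lg(0) is taken to be 0."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    wandering_cat = [0.2, 0.2, 0.2, 0.2, 0.2]            # could be in any room: maximal ignorance
    lazy_cat      = [0.9, 0.025, 0.025, 0.025, 0.025]    # almost certainly on the sofa

    print(shannon_entropy_bits(wandering_cat))   # ~2.32 bits missing
    print(shannon_entropy_bits(lazy_cat))        # ~0.67 bits missing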

As a note, in physics, for a continuous variable like the position of a particle, this sum won't do, because there are an uncountably infinite number of possible points. Instead, we must take an analogous integral over the continuous probability distribution:

$$H := -\int_S P(x)\ \lg(P(x))\ dx$$

Unlike the discrete case, thanks to the scale invariance of the continuum this entropy effectively has a built-in "reference level": a value of $H = 0$ generally does not correspond to having complete information, and in fact the entropy can go all the way down to negative infinity. Roughly, $H = 0$ means we know the position to within the equivalent of one unit of the scale chosen for $x$, which may be set to meters, millimeters, nanometers, picometers, etc., and $H$ will change accordingly. A value of $H$ below 0 corresponds to how much information we have in refining the position below the level of our measuring-unit scale, and a value above 0 means we cannot even narrow it down to that. The negative of $H$ can be considered the degree of information presence, $I$.
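
A quick numerical sketch of that unit dependence (my own example): for a uniform distribution the differential entropy is $\lg$ of its width, so the value, and even its sign, shifts by $\lg(1000) \approx 9.97$ bits when the unit for $x$ is changed from meters to millimeters.

    import numpy as np

    def uniform_differential_entropy_bits(width):
        # P(x) = 1/width on the interval, so H = -∫ (1/w) lg(1/w) dx = lg(w)
        return np.log2(width)

    print(uniform_differential_entropy_bits(0.001))   # a 1 mm wide interval, measured in meters: ~ -9.97 bits
    print(uniform_differential_entropy_bits(1.0))     # the same interval, measured in millimeters: 0 bits
    print(np.log2(1000))                              # the shift between the two conventions: ~ 9.97 bits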

So where does the "observer" enter into all this? If we are going to talk about information, we also need an information bearer, and while physical systems are of course information bearers, in the conceptual background of information theory and of probability-as-information, a probability distribution is taken as representing the (incomplete) knowledge, or information, possessed by one information-processing system about some entity external to itself. The key point is that this information-processing system need not be a human: information theory is used all the time to describe, say, communication between computers even without any humans interacting with them in the process. Ultimately, since our theories are supposed to be used by us to explain the world we see around us, the "ultimate" one to be informed is typically a human, but there is no reason we cannot narrate a story about a Universe without humans, or narrate it from a non-human's point of view, or, more humbly, from "our imagination of what a non-human expositor would say" (Karen Barad's theories are one place to look for a philosophical grounding of this, though her writing style is admittedly difficult; in my opinion feminist philosophy of science handles this better than the philosophies usually considered respectable here, but what matters is what actually illuminates), or even simply attribute a viewpoint, and better, agency, to such a system.

Indeed, given this, I'd suggest a good replacement for the term "observer" here is "agent". Putting this together, in quantum mechanics we have an agent: basically any system that can store, retrieve, and process information and, through interaction with the outside world, acquire information about it. To avoid taking the various terms in quantum mechanics too literally, e.g. that the "wave function is a physical object" or that "probabilities disappear from some regions of space" as though they were a 'substance', or any number of other such things, we should add the qualification that what the theory constitutes is a mathematical model of the knowledge/information possessed by that agent, of what information it can deduce therefrom, and of how new information can be acquired from the outside world; and it is from the perspective of such an agent, whether human or non-human, that the theory describes the world. The probabilities, etc. are all just language that we use to describe this. The agent may encode its actual information store very differently.

I'd also want to point out that even in classical mechanics we can't be entirely devoid of an "observer" because ultimately we need a reference frame, and yet for some reason this does not seem to cause as many quibbles, but I suspect that is because quantum theory is painted as being far more magic and "indecipherable" than I think it really deserves to be.

In that model, the information the agent has about a specific physical parameter of a system, say the position $\mathbf{x}$, is modeled by a wave function $\psi(\mathbf{x})$. More generally, it can be modeled by a Hilbert-space vector $|\psi\rangle$, which can be projected to get the information for many different parameters. This is just standard quantum theory, more or less. The squared magnitude of the function's values gives the probability distribution representing the information possessed by the agent about where the particle is, while the values themselves carry an interesting phase factor that, while not corresponding to anything directly observable, is indispensable in describing the dynamics, as it gives rise both to interference patterns and to the "quantization" from which quantum mechanics takes its name.
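
A tiny numerical sketch of that last point (an idealized two-path setup of my own, not anything from the text above): the squared magnitude $|\psi|^2$ turns amplitudes into probabilities, and the relative phase, invisible in a single amplitude's magnitude, shows up as interference when two amplitudes for the same outcome are added.

    import numpy as np

    amp1 = 0.5                                   # amplitude to arrive at a screen point via path 1
    for phase in (0.0, np.pi / 2, np.pi):        # relative phase picked up along path 2
        amp2 = 0.5 * np.exp(1j * phase)
        p = np.abs(amp1 + amp2) ** 2             # Born rule applied to the summed amplitude
        print(f"relative phase {phase:.2f} rad -> relative detection probability {p:.2f}")
    # prints 1.00, 0.50 and 0.00: same magnitudes, very different outcomes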

Now you may object that $\psi(\mathbf{x})$ also takes "an infinite amount of information" to write down, and perhaps even more, being now a function over space; how, then, can we say the agent "has limited information"? That's a valid point, but a subtle one: keep in mind the earlier discussion. The information describing $\psi(\mathbf{x})$ is information describing in what way our agent does and does not have information. The amount of information the agent has about the position is then the negative of the entropy of the corresponding distribution.

But as a scientific theory, quantum mechanics should of course allow us to predict observations not yet made, and for that we have to add some more elements. In addition to the basic fact of the wave function, there are two things that the theory's user (not necessarily the same as the "agent" in the theory to whom $\psi$ is ascribed as a model of its knowledge) can do with it:

  • There is an operation which we may call EVOLVE, which basically tells us, "given a wave function $\psi(\mathbf{x})$ describing what information our agent has at a present time $t$, what is the best information that the agent can possibly have at a future time $t + \Delta t$?" This is basically the Schrodinger equation.

  • There is another operation which, and this is what causes all the hoopla, we may call QUERY. As the name suggests, it consists of "asking a question" of the external system, e.g. "Is the particle located between 50 and 100 pm from the atomic nucleus?" or "What is the current energy level of the particle?" or any of an (infinite) number of other possible questions. In this operation the agent acts upon the external system so as to retrieve that information. When the information is received by the agent, it updates the information it has, which means the probability distribution encoded in $\psi(\mathbf{x})$ changes. The rules for changing $\psi(\mathbf{x})$ depend on the precise nature of the query and on how we model the querying process and mechanism. (A toy numerical sketch of EVOLVE and QUERY follows right after this list.)
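
Here is the promised toy sketch of the two operations for a two-level system. It is my illustration, not the answer's formalism: the Hamiltonian and the "which energy eigenstate?" query are arbitrary stand-ins, and the update rule used is the textbook projective (Born-rule) one, which is only one way of modeling a query.

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.3],
                  [0.3, 2.0]])                    # toy Hamiltonian (hbar = 1)

    def EVOLVE(psi, dt):
        """Best information at time t + dt, given psi at time t (Schrodinger equation)."""
        return expm(-1j * H * dt) @ psi

    def QUERY(psi, rng):
        """Ask 'which energy eigenstate?'; return the outcome and the updated psi."""
        energies, eigvecs = np.linalg.eigh(H)
        probs = np.abs(eigvecs.conj().T @ psi) ** 2       # Born rule
        k = rng.choice(len(energies), p=probs)
        return energies[k], eigvecs[:, k].astype(complex) # updated ("collapsed") description

    rng = np.random.default_rng(0)
    psi = np.array([1.0, 0.0], dtype=complex)
    psi = EVOLVE(psi, dt=2.0)
    outcome, psi = QUERY(psi, rng)
    print("measured energy:", outcome)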

Now that last part should be highly emphasized. In this second operation, the change here does not represent anything physical in the system: it is simply an update of the knowledge that our agent has. If a broad probability distribution in space becomes a narrow one after this, that doesn't correspond to some physical "instantaneous disappearance" of matter or some kind of "energy" or something else from some parts of space. It just means the new information eliminates those regions and in fact, the actual effect is to increase the information content - lower the entropy $H$.

That said, however, there is a real physical difference at work here, and in fact it has nothing much to do directly with the fact that we are modeling things in terms of agents and their acquisition of knowledge. You can model classical mechanics using probability distributions and incompletely informed agents in just the same way: that's how one would describe the cat scenario I gave earlier, and the cat is far removed from the realm of quantum mechanics!

Instead, the actual difference, and the real physical content of quantum mechanics as a theory, is the uncertainty relations or, in more general terms, the non-commuting nature of the operators representing certain physical parameters, namely those that are canonically conjugate in classical Hamiltonian mechanics. What these do is produce situations in which, if the agent becomes more informed about one parameter, it becomes at the same time less informed about another, at least when the information requested in a QUERY operation is sufficiently sharp; and "sufficiently sharp" is set by the constant of non-commutativity, $\hbar$, Planck's reduced constant.
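
A concrete check of this non-commutativity, in the smallest possible example I can think of (a spin-1/2 system, with $\hbar = 1$): the spin-component operators do not commute, and the Robertson inequality $\Delta A\, \Delta B \ge \tfrac{1}{2}\,|\langle[A,B]\rangle|$ bounds how well informed the agent can simultaneously be about both.

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2      # spin operators (hbar = 1)
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
    commutator = sx @ sz - sz @ sx                           # equals -i*sy: not zero

    psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])   # an arbitrary normalized state

    def spread(op, state):
        """Standard deviation of the observable `op` in the given state."""
        mean = np.real(state.conj() @ op @ state)
        mean_sq = np.real(state.conj() @ (op @ op) @ state)
        return np.sqrt(mean_sq - mean ** 2)

    lhs = spread(sx, psi) * spread(sz, psi)
    rhs = 0.5 * np.abs(psi.conj() @ commutator @ psi)
    print(commutator)          # nonzero matrix, so both spin components cannot be sharp at once
    print(lhs, ">=", rhs)      # the uncertainty product respects the bound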

And it turns out this also necessitates that any suitably informative query will have a physical effect upon the system queried, and moreover that if we want to reduce the physical effect to zero, we can only do so at the cost of also providing zero information to the agent. This is the real content of the "observer effect" as a profundity of quantum mechanics. In fact, observer effects, the general idea that the process of observation changes what is observed, are far from limited to, or some mysterious component of, quantum mechanics. They are found throughout many areas of science. There is an observer effect in computer programming, especially when doing timing or memory debugging. Sociology and psychology are notorious for them. Even in classical mechanics there is technically an observer effect; it's just left out of the usual presentations: to find out where something is located, I have to bump it with something, and you may also have heard this offered as an "explanation" of the Heisenberg principle in quantum mechanics. However, in classical mechanics, at least, you can make the "bumping" object arbitrarily light and non-intrusive as to the energy and momentum it imparts. In fact, you can do that in quantum mechanics too! What the quantum laws, and the true content of the HUP, say is that when you do this, you also get *arbitrarily little information* about what you're observing.

Going a step further, this reveals that what the theory is really saying about the Universe is that it contains an information content limit, in the same way that Einstein's relativity tells you the Universe contains an information propagation limit. The limit on information content is set by the physical constant $\hbar$, just as the limit on information propagation is set by the physical constant $c$ (that $\hbar$ doesn't directly carry units of information owes more to the subtleties of measuring such information than to it not actually being such a limit). The introduction of agents, probabilities, etc. is just what our language needs in order to describe that situation, convoluted as the description may be, accurately enough to yield a very accurate theory of the Universe. How the information is "really" stored in the Universe is something we can't know, but this, again, is a philosophical problem not limited to quantum mechanics; we could have asked the same of classical mechanics, were our universe classical, with "how does it store infinite-precision reals?" and so forth.

Now finally, to wrap up: "how can effects occur without an observer?" Just because the description requires an observer, or here an "agent", doesn't mean we are unable to offer a narrative of the Universe before we humans existed. Classical mechanics had already, since the time of Galileo and his famous thought experiment of the ship on a perfectly calm sea, effectively done away with the notion of a completely observer-independent description of reality, by discarding the notions of absolute motion and absolute rest; yet in discussions of quantum theory this often gets forgotten or simply ignored. Modern developments in physics only push things further in this relative direction, and quantum mechanics is part of that trend, not something totally different from it. The big step it makes over classical mechanics is the aforementioned upgrading of the passive "observer" to an active "agent" whose "observations" not only affect the Universe but must affect it to at least some degree if they are to be informative. This upgrade follows of necessity from the need to talk about limits on information content, which requires measures of information, and then probabilities, to be introduced; and by bringing information in in a central way, the transaction of information between the external object and the observing agent becomes important.

How, then, do we make the description? Simple: just as when making a classical model of the Universe we pick some reference frame, we likewise simply pick, by fiat, a fictitious "agent" who tells the story from the beginning, even if such an agent could not exist physically. The only catch is that, due to this agent's interactions, it will of necessity introduce some possible deviation into the evolution, and thus could at least in theory be considered to throw off the theory's predictions, given its fictitious nature; however, we can remedy that by simply taking the level at which it extracts information to be suitably coarse, and/or its queries to be very infrequent.

(One may also wonder what role "decoherence" plays in all this and how it supposedly "obviates" the need for the QUERY operation above. Decoherence is really just an effect describing how the wave function behaves when we consider a wave-function description of another agent. The final wave function is one in which that agent is superposed between the multiple outcomes of its query, while in each such outcome the queried object has been "reduced" to a single result. To understand the superposition, remember that since every wave function must be attributed to an agent, in this description we are implicitly presuming a second agent is in play. The superposition means that this second agent doesn't know, due to the nondeterminism that manifests the Universe's relative lack of information, how the first agent has been updated by its query. It is resolved when the second agent asks the first for its result: literally asking if it's a human, sending a request for data if it's a computer, and so on, just to avoid anthropocentrism. The real, physical "observer effect", in which the query/observation changes the reality, and which is erroneously advanced to "explain" the HUP in the naive "balls bouncing off each other" picture, is embodied in the fact that during the decoherence event the queried system also undergoes a change to the given outcome. Confusion results from not cleanly separating, conceptually, the subjective knowledge update in the QUERY operation from the similar-looking physical "system collapse" in the decoherence process, which could actually be interpreted as increasing the information content of the queried system. Similar-looking does not imply identical! One is a real physical effect; the other happens within the agent (according to our model thereof). The two are not completely unrelated, either: when one agent goes to quiz the other on what it saw, the result of that quiz had better be something sensible like "I saw the particle at X" or "I saw a yes/no", and not some weirdly surreal mash of the various options. Decoherence is thus simply the theory telling us that it is internally self-consistent - hooray!)

Answer (score -1)

Quantum mechanics theory as a whole suffers from being entirely devoid of real facts, being accompanied instead by a bunch of theories: the so-called interpretations.

Schroedinger developed a perfectly valid and hugely successful equation, which accurately handles all the practical aspects of quantum mechanics. Then a whole lot of other people tried to theorise about why the equation was so successful.

All the theories violently disagree with each other.

Einstein never agreed with any of these theories, and was particularly scathing about the so-called Copenhagen interpretation, which he viewed as a load of rubbish. And he was a lot smarter than everyone else working in this field - then and now.

So good luck with trying to second-guess Einstein.

Schroedinger realised that at the heart of quantum mechanics there is a random factor, which can't be precisely quantified, but which must be handled statistically: that is, it can be assigned a probability. The implication of this is that what is being measured is not a single event, but many events: so many, that even given a certain amount of freedom (i.e. randomness) within the system being measured, when viewing a sufficiently large sample - presumably millions of events - it is possible to measure the average response of the system with an impressive degree of certainty.

At the heart of statistics lies a grain of truth: that what to us, here at the macroscopic level, appears to be a single event (we call it, out of ignorance, a particle), is really many events. Statistics give us a picture of a quark, or an electron, or a neutrino: we assume, on no evidence, that it is a single spacetime event; but Schroedinger assures us that it is not, and that what we are seeing is merely the tip of the iceberg: an iceberg built out of the statistics of thousands, perhaps millions, of underlying events.

Schroedinger's work is the only solid piece in the quagmire termed quantum mechanics. What one ought to do in this field is pay more attention to him, because the rest is all theory, and largely based purely on speculation.

If a particle is not a statistical illusion, why does its behaviour conform so closely with Schroedinger's equation, an equation which requires one to accept - in its math - that the behaviour it is modelling is based on a series of statistical probabilities?

Certainly one can understand why a particle might not be capable of being assigned a precise spacetime location, if what one is "observing" is not a single spacetime event but is, rather, the statistical outcome of a million underlying events.

Even if (which seems unlikely) there are only a dozen underlying events, it is still a case of the "particle" having a "position" which is derived from averaging the positions of those 12 actual events. How much less precise does its position become if the "position" is averaged from the locations of a million actual events? Which of those million is its "real" location? Are they not all equally valid?

When we measure a property, we are measuring the average of a large number of events, not, as we have previously supposed, a single event. Classical physics believed that a particle is a single spacetime event, whereas quantum mechanics is trying to tell us that a particle is the average value of many separate events.

Quantum interpretations tell us nothing: we simply do not have the technology capable of magnifying the events at the sub-atomic level to see what is really occurring there. But Schroedinger has already given us the clearest road-map: we must expect to see a large number of individual events, which are to some degree chaotic, but which are predictable when treated in groups, using statistics, and which when so treated will obey the probabilities he sets down.

His math gives the clearest possible explanation of what is occurring, and all the theorists do is ignore him. They persist in claiming that a particle is a single event, and thereby they mislead themselves into ignoring the statistical nature of Schroedinger's work.

Accordingly, the answer to the OP's question is that none of the so-called interpretations is valid, and that a true understanding of quantum events must wait on the development of techniques for magnifying the quantum level, so that we can study what is actually occurring there instead of theorising about what might be.
