$\begingroup$

The Everett interpretation has memory robots. Copenhagen requires observer memory states. Consistent histories has its IGUSes. Decoherence has its existential interpretation. All of them refer to memory states of observers. What counts as an observer, and which parts of the observer count as memory states? Why isn't there a precise answer to this question? Does a camera count as an observer and the pixels on a photo as memory states? This is a serious question.

$\endgroup$

4 Answers

1
$\begingroup$

The question is somewhat philosophical--- it is related to how the program of physics is conceived.

The goal of physics is to give a model that describes a chunk of nature exactly and precisely. The issue is that there is no reason to suppose that the mathematical model involves concepts which have an immediately clear interpretation. To give an example, suppose you say the world is a Newtonian universe, with Newtonian particles obeying a Newtonian force law. How am I supposed to make sense of this?

In order to do this, I have to match the stuff in the Newtonian world to observations that I make in the real world. This requires a map between the mathematical objects of the theory and the physical stuff in a laboratory. In Newton's world, we usually take the position coordinates to be the positions of visible things, but suppose this isn't the case. Suppose I have described the Newtonian physics in some crazy way--- say by interleaving all the digits of all the x, y, z positions of all the particles into one super-position variable X which contains all the information of the position variables. To time-step, I disentangle the digits, do a Newtonian time evolution, and interleave them again.
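As a concrete toy version of this "crazy" encoding (my own sketch, with made-up fixed-precision integer coordinates standing in for particle positions, not anything from the original answer), the interleave/de-interleave step looks like:

```python
# Toy sketch: pack two fixed-precision coordinates into one
# "super-position" variable X by alternating their decimal digits,
# then unpack them again.

def interleave(x: int, y: int, digits: int = 6) -> int:
    """Pack two `digits`-digit integers into one by alternating digits."""
    xs = f"{x:0{digits}d}"
    ys = f"{y:0{digits}d}"
    return int("".join(a + b for a, b in zip(xs, ys)))

def deinterleave(X: int, digits: int = 6) -> tuple[int, int]:
    """Unpack the interleaved variable back into the two coordinates."""
    s = f"{X:0{2 * digits}d}"
    return int(s[0::2]), int(s[1::2])

X = interleave(123456, 789012)
assert deinterleave(X) == (123456, 789012)

# One "time step" in the disguised description: de-interleave, evolve the
# ordinary coordinates (here a trivial drift stands in for a Newton step),
# and interleave again.
x, y = deinterleave(X)
X_next = interleave(x + 1, y + 2)
```

The point of the toy is that X evolves by a perfectly definite rule, yet staring at X alone gives no hint of what it means until you supply the de-interleaving map back to positions.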

How am I supposed to know what this mess of numbers is supposed to mean?

One reasonable way to do this is to identify something inside the mathematical model which corresponds to your own experiences, that is, a computing entity with distinct memories that change with a perceptual time. Your perceptions are some classical data, and you can say that this data is present in the simulation as a certain truncation of the disentangled positions of all the particles, namely the ones in your brain. Then you can check that the time-evolution of the system reproduces the computation in your brain.

Suppose instead, your mathematical model is of the linear evolution of a probability distribution on the positions. You can write this as a probability distribution $\rho(X)$ for the interleaved position variable, and now this probability distribution obeys an equation which is completely linear.

Now suppose you ask, "what is the proper interpretation of this mess of numbers?" The way to proceed is still to identify your experience with certain X's: the ones which de-interleave into positions of atoms in your brain encoding the same memory state. To check that the theory reproduces your experience, you verify that a probability distribution starting on a set of X's consistent with this internal state evolves into a distribution peaked on X's which are a consistent forward evolution of that internal state, consistent with the computation the state is doing.
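A tiny numeric illustration of this kind of linear probability evolution (my own toy, with a made-up three-state transition matrix, not anything from the physics): the distribution over discrete "X" values evolves by a linear map, and the linearity is easy to check directly.

```python
import numpy as np

# Toy sketch: a probability distribution rho over three discrete "X"
# states, evolved by a linear map.  T is column-stochastic, so
# rho_next = T @ rho is completely linear in rho.

T = np.array([
    [0.9, 0.0, 0.1],
    [0.1, 0.8, 0.0],
    [0.0, 0.2, 0.9],
])

rho = np.array([1.0, 0.0, 0.0])   # start peaked on one state
for _ in range(3):
    rho = T @ rho                 # total probability stays 1

# Linearity check: evolving a mixture equals the mixture of evolutions.
rho_a = np.array([1.0, 0.0, 0.0])
rho_b = np.array([0.0, 1.0, 0.0])
mix = 0.5 * rho_a + 0.5 * rho_b
assert np.allclose(T @ mix, 0.5 * (T @ rho_a) + 0.5 * (T @ rho_b))
```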

The same thing happens in quantum mechanics. To have an interpretation, you give an embedding of the observer's experience in the theory. As in the probabilistic case, you embed the experience in the X's (in orthogonal states), but you now evolve wavefunctions according to quantum mechanics. The truncation to a given experience state is predicated on the fact that the quantum mechanics will only evolve you into superpositions of reasonable future states consistent with the computation you are doing.
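The quantum version of the same toy (again my own illustration, with an arbitrary rotation angle): the state is now a vector of complex amplitudes evolved by a unitary matrix rather than a probability vector evolved by a stochastic one, and probabilities come from squared amplitudes.

```python
import numpy as np

# Toy sketch: a two-state system whose "memory states" are the two
# orthogonal basis vectors.  Evolution is by a unitary matrix U, and
# probabilities are |amplitude|^2.

theta = 0.3
U = np.array([
    [np.cos(theta), -np.sin(theta)],
    [np.sin(theta),  np.cos(theta)],
])

psi = np.array([1.0 + 0j, 0.0 + 0j])   # start in one basis "memory state"
psi = U @ psi                           # now a superposition of both

probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)     # unitarity preserves normalization
```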

The embedding of experience inside the theory is something that is always necessary in order to map mathematics to the world. It's just kind of trivial in classical mechanics. To see cases where it is nontrivial in classical mechanics, imagine duplicating observers atom-by-atom and doing different things to the copies. This thought experiment shows you that the map is just as nontrivial in principle in a classical mechanical world.

$\endgroup$
0
$\begingroup$

I'm not a QM expert in any way; I'm a layman. But I'll answer with an insight I got from the Feynman Lectures on Physics, which I've already used to answer another question.

You do add the amplitudes for the different indistinguishable alternatives inside the experiment, before the complete process is finished. At the end of the process you may say that you "don't want to look at the photon". That's your business, but you still do not add the amplitudes. Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.

and later on:

If you could, in principle, distinguish the alternative final states (even though you do not bother to do so), the total, final probability is obtained by calculating the probability for each state (not the amplitude) and then adding them together. If you cannot distinguish the final states even in principle, then the probability amplitudes must be summed before taking the absolute square to find the actual probability.

And @terry-bollinger added:

Vol. III, Sec. 3-3 Scattering from a crystal, p.3-9, first full paragraph, talking about whether a neutron will interact with a crystal as a wave or as a particle:

"You may argue, 'I don't care which atom is up.' Perhaps you don't, but nature knows; and the probability is, in fact, what we gave above -- there is no interference."

Vol. III, Sec. 3-4, Identical particles, audio version only:

"... if there is a physical situation in which it is impossible to tell which way it happened, it always interferes; it never fails."

I think Feynman is teaching you to stop trying to pin down what an observer is.

All that matters is whether any kind of distinguishability exists, even in principle.
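Feynman's rule can be made concrete with a numeric sketch (my own made-up amplitudes, purely for illustration): with two alternative paths, indistinguishability means you add amplitudes before squaring, which produces a cross term; distinguishability means you square first and then add, which kills it.

```python
# Two alternative paths with amplitudes a1 and a2 (made-up numbers,
# not normalized; only the cross term matters here).

a1 = 0.6 + 0.0j
a2 = 0.8 + 0.0j

# Indistinguishable in principle: sum amplitudes, then square.
p_interfering = abs(a1 + a2) ** 2                # 1.96, cross term 2*a1*a2 included

# Distinguishable in principle (even if nobody looks): square, then sum.
p_no_interference = abs(a1) ** 2 + abs(a2) ** 2  # 1.00, no cross term
```

The difference between the two numbers is exactly the interference term Feynman is talking about; nature picks one rule or the other depending only on in-principle distinguishability.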

$\endgroup$
1
  • $\begingroup$ This is true, but it isn't asking what counts as a measurement, it is asking about the extra concept of a "memory" or a "memory state" which pops up in all Everett style interpretations, and why it is necessary and what it means, really. $\endgroup$
    – Ron Maimon
    Commented Jul 27, 2012 at 23:29
0
$\begingroup$

"Attitude problem"'s answer is fascinating, but we can't restrict ourselves to words alone. What really counts is discrete information encoded as a string of symbols from an alphabet. So we have to include computer files encoding videos, graphics and sound in binary, and also compiled binary code. Quantum mechanics is about discrete INFORMATION. LOGOS is INFORMATION: the communication of information across channels, and the processing of information. As Maimon noted, the processing of information is computation; brains and computers are computational processors of information. Why discreteness? The answer lies in the error-correction properties of discrete information, even when it is encoded over an analog substrate: error correction gives a relative insensitivity to noise and decoherence. Words, equations and uncompiled computer code are particular forms of discrete information which are especially well suited for telling stories.

Welcome to quantum information!

$\endgroup$
1
  • $\begingroup$ This is true of classical information, but quantum information is different. The basic interpretation issue is how do you embed classical information in a quantum universe, which is Bohr's split. It's just the observation that we are classical information, and the quantum mechanics is describing quantum information. It isn't fair to say discrete information is equivalent to quantum information, because quantum information has entanglement. $\endgroup$
    – Ron Maimon
    Commented Aug 3, 2012 at 18:31
-2
$\begingroup$

Looking at the human brain as the observer, and neurological configurations as memory states, is a red herring that leads you down a blind alley.

The actual memory state configurations are — guess what — words. Yes, language words, plus possibly some other symbols like mathematical notation, and even diagrams. There is nothing special about humans per se; humans only matter as automatons that parse and produce words. The words may be encoded phonetically as sound waves, as written symbols on paper, as ASCII in a computer file, phonetically in a sound file on a computer, in sign language gestures, or in Morse code taps. The form of the encoding doesn't matter much; it's the abstract string of symbols the words form which matters. This is what philosophical functionalism is all about. Words are the observer. As William James said, "the thought is the thinker".

As Daniel Dennett said, we are all zombies, and there are no qualia. All that matters are the verbal reports in words that are produced by humans. This is what heterophenomenology is all about.

The ancient Greeks knew this very well, which was why they enshrined LOGOS in their innermost mysteries.

This is why analytic philosophers spend so much time analyzing language games, and why Wittgenstein emphasized word games so much. It's all about words.

Even Niels Bohr knew that so well. That's why he wrote

What is it that we humans depend on? We depend on our words... Our task is to communicate experience and ideas to others. We must strive continually to extend the scope of our description, but in such a way that our messages do not thereby lose their objective or unambiguous character ... We are suspended in language in such a way that we cannot say what is up and what is down. The word "reality" is also a word, a word which we must learn to use correctly.

and

There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about Nature.

The interpretation of words, their hermeneutics, is nothing more than even more words about words. This is what poststructuralism is all about.

PS: Preverbal infants can't communicate verbally, so they can't be analyzed heterophenomenologically. Of course, when they grow up a bit, they might still keep some of their nonverbal memories of infancy, which they can then translate into words and use to report what they remember happening. But really, it's all confabulation years later, as shown by Elizabeth Loftus.

PPS: Of course, we can ask why there is no effective superposition of words, but any answer to this question can only be in the form of even more words.

$\endgroup$
2
  • 2
    $\begingroup$ So how do you explain memory in infants? $\endgroup$ Commented Aug 1, 2012 at 10:23
  • $\begingroup$ The answer is just vaguely asserting that the observer can be modelled as a classical computer, with classical information. That's true. But the embedding of this stuff in a quantum mechanical description is not natural--- the computational states are embedded all weird. This is OP's question. Saying it's "all words", even with caveats, is not very helpful, because you just mean "LOGOS" which is really "classical computational data" to a modern person. $\endgroup$
    – Ron Maimon
    Commented Aug 2, 2012 at 3:32
