$\begingroup$

I'm a high school student, so my (mis)understanding here is not very rigorous or precise, but I will write it below so as to concretely frame my question:

Bell's theorem makes three assumptions beyond the less refutable mathematical and logical assumptions any proof must make. These three assumptions are as follows:

  1. Reality abides by the principle of locality; only events within the light cone of event A can affect event A.
  2. There are hidden variables that completely determine which state an object in a superposition of states will collapse into after measurement.
  3. The aforementioned hidden variables are, in general, causally unrelated to the measurement device's settings; this is called the principle of statistical independence.

From these assumptions, Bell (and others after him) derive rules that particles must obey; these rules are called Bell-like inequalities. It happens that experiment shows these rules are broken: the Bell-like inequalities are violated. Thus, by empirical reductio ad absurdum, the conjunction of assumptions 1, 2, and 3 is false. There is no longer any way around this, since loophole-free experiments have been performed.
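
To make this concrete for myself, I put together a small toy simulation of one local hidden-variable model (the particular outcome rule is just an arbitrary choice on my part, not Bell's construction): each pair carries a shared hidden angle $\lambda$, each outcome depends only on the local setting and $\lambda$, and the settings are chosen independently of $\lambda$. The CHSH combination it produces never exceeds 2, whereas quantum mechanics predicts $2\sqrt{2}$ for suitable settings.

```python
# A toy local-hidden-variable model (my own illustrative choice of outcome rule,
# not Bell's construction): each pair carries a shared hidden angle lambda, each
# outcome depends only on the local setting and lambda, and the settings are
# chosen independently of lambda (statistical independence).
import numpy as np

lam = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)   # shared hidden variable values

def outcome(setting, hidden):
    """Deterministic +/-1 outcome depending only on the local setting and the hidden variable."""
    return np.where(np.cos(setting - hidden) >= 0, 1, -1)

def correlation(theta_a, theta_b):
    """Average of the product of Alice's and Bob's outcomes over the hidden variable."""
    return np.mean(outcome(theta_a, lam) * outcome(theta_b, lam))

a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4      # CHSH measurement settings
S = (correlation(a, b) - correlation(a, b2)
     + correlation(a2, b) + correlation(a2, b2))
print(abs(S))   # ~2 (up to discretization): such models obey |S| <= 2,
                # while quantum mechanics predicts 2*sqrt(2) ~ 2.83 for these settings
```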

But all that does not rule out that one or two of the assumptions may be true. There are 3 ways to pick one of the assumptions and 3 ways to pick two of the assumptions; thus, there are 3 + 3 = 6 theories that can be true despite the violation of Bell's inequality. I want to investigate the validity and soundness of these theories, some of which can be disregarded rather trivially.

Combination A: Only assumption 1 is true. This would lead to experimental contradiction: since there are no hidden variables, events outside of other events' light cones can affect them. With no hidden variables, the cause is confined to the actual wave function collapse, which is not guaranteed to happen within the light cone of the collapse of the entangled particle's wave function.

Combination B: Only assumption 2 is true, which means part of the point of postulating hidden variables is gone. It would allow for determinism, however, though in cases of wave function collapse outside of the intersection of the entangled particles' light cones, these hidden variables are functions of the collapsed state that non-locally determine their entangled partner's future collapsed state. Also, statistical independence is oddly not true in this hypothetical theory, which I don't know how to factor in.

Combination C: Only assumption 3 is true. This theory is mentioned mostly for the sake of exhausting the logical possibilities (prima facie possible, of course); I don't see why anyone would postulate theory C, and I reckon it is demonstrably false.

Combination D: Only assumptions 1 and 2 are true. These kinds of theories are called superdeterministic, if I understand correctly.

Combination E: Only assumptions 2 and 3 are true. This would correspond to normal, non-local hidden variable theories, like Bohmian Pilot-Wave Theory?

Combination F: Only assumptions 1 and 3 are true. This combination is nonsensical, since assumption 3 refers to the hidden variables that are non-existent due to the rejection of assumption 2.

So, here's my question: what are the possible ways to get around Bell's theorem? I want an answer that takes the form I used above:

  1. First, the answer explains the non-logical, non-mathematical assumptions that Bell & co. make when proving Bell-like inequalities.

  2. Then the answer lists the combinations of assumptions that exclude at least one of the aforementioned assumptions.

  3. Then, the answer explains the immediate consequences of each combination: the kinds of theories it would give rise to.

I understand if this is asking way too much. However, I am not looking for an extremely in-depth explanation. My own explanation above is at a level of detail satisfactory for me, which is why I included it. Any level of detail beyond that is only appreciated, however.

EDIT:

During further research, I have learned that another assumption of Bell-type theorems is that a measurement has only one outcome. Rejecting this assumption leads to the Many-Worlds interpretation, which allows for local hidden-variable theories. I don't know what combining this rejection with all combinations of the other assumptions would look like.

I believe the explanation provided by MW theories is that both results for both measurements are achieved, thus splitting reality into four branches; then, as future light cones intersect, the branches whose results are aligned with the hidden variables are joined into one branch. I believe I am missing a lot in this explanation, however.

$\endgroup$
  • $\begingroup$ Regardless of locality, all hidden variables (and also many worlds) are simply a way to rename indeterminism and push it aside, pretending that everything is now determined without actually determining anything. More importantly, one can construct hidden variable theories and many worlds theories even if indeterminism is true, so there is something meaningless in such theories. $\endgroup$
    – Nikos M.
    Commented Apr 20 at 17:38
  • $\begingroup$ As for MW interpretations, I can believe that it doesn't change anything in practice, since there is no way to know which universe one will fall into (am I correct, or am I misunderstanding it?). As for hidden variable theories: they at least allow for determinism in practice, assuming that one is eventually able to figure out how to find and measure the hidden variables, no? Hidden variable theories are, without their corresponding extension to QM, indeterminate due to lack of knowledge, but not indeterminate due to ontology; or have I misunderstood it? $\endgroup$
    – user110391
    Commented Apr 20 at 17:43
  • $\begingroup$ You have misunderstood it. Bell's theorem means that even if there are hidden variables that are real, but we do not know the value of, they cannot reproduce what quantum mechanics predicts and what we actually measure. It is not something that can be fixed by gaining knowledge of the hidden variables. However, you are right that MW is indeterminate, because the world you end up in is random. $\endgroup$
    – KDP
    Commented Apr 20 at 17:54
  • $\begingroup$ @KDP "Bell's theorem means that even if there are hidden variables that are real, but we we do not know the value of, they can not reproduce what quantum mechanics predicts and what we actually measure." Are you saying that all hidden variable theories contradict experimental data? Or, are you saying that even theories that do fit the data would not allow us to know the result of a measurement ahead of time, because the hidden variables are somehow determining the outcome, but knowing them would nonetheless not allow us to predict the outcome? I can't see how that is possible [1/2]. $\endgroup$
    – user110391
    Commented Apr 20 at 18:04
  • $\begingroup$ Yes, I am saying (and Bell has proved) that all HVTs contradict experimental data. For an explanation in greater detail see my answers to some related questions: physics.stackexchange.com/questions/810744/… and physics.stackexchange.com/questions/114218/… $\endgroup$
    – KDP
    Commented Apr 20 at 18:12

4 Answers

$\begingroup$

The assumptions you listed are not the assumptions required for the Bell inequalities.

  1. Reality abides by the principle of locality; only events within the light cone of event A can affect event A.
  2. Measurable quantities are represented by stochastic variables: quantities that have a single value at the time of measurement and whose value is picked with some set of probabilities from a set of possible values.
  3. The aforementioned stochastic variables are, in general, causally unrelated to the measurement device's measurement settings; this is called the principle of statistical independence.

Any theory respecting those three assumptions satisfies the Bell inequalities. The predictions of quantum theory violate Bell's inequalities.
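
As a concrete illustration (a sketch added here, not taken from any of the references below): the quantum prediction for the CHSH combination of correlations can be computed directly from the singlet state, and with a standard choice of measurement directions it comes out to $2\sqrt{2} \approx 2.83$, above the value of 2 that any theory satisfying the three assumptions can reach.

```python
# Direct computation of the CHSH value predicted by quantum theory for the
# singlet state, using a standard choice of measurement directions in the
# z-x plane (illustrative sketch only).
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)        # (|01> - |10>)/sqrt(2)

def E(theta_a, theta_b):
    """Correlation <A(theta_a) x B(theta_b)> for spin measurements in the z-x plane."""
    A = np.cos(theta_a) * Z + np.sin(theta_a) * X
    B = np.cos(theta_b) * Z + np.sin(theta_b) * X
    return singlet @ np.kron(A, B) @ singlet

a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), above the bound of 2 implied by the assumptions
```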

Quantum mechanical equations of motion such as the Schrödinger equation or the Dirac equation don't include collapse. As such, a theory that does include collapse is not quantum theory, since it must have different equations of motion. Such theories exist; they are non-local and they make different predictions than quantum theory:

https://arxiv.org/abs/2310.14969

Pilot wave theories also respect the restrictions imposed by Bell's inequalities. Those theories are also non-local and make different predictions than quantum theory:

https://arxiv.org/abs/1906.10761

Superdeterminism violates assumption 3 but nobody has actually come up with such a theory, even according to the advocates of such a theory:

https://arxiv.org/abs/2010.01324

You will notice that this list so far has not included any theory that actually takes quantum mechanical equations of motion seriously as a description of reality. In quantum theory, the evolution of measurable quantities is not described by stochastic variables. It is described by Heisenberg picture quantum observables. These evolving observables have multiple values that can interfere with one another. When information is copied out of an observable this interference is suppressed. This is called decoherence:

https://arxiv.org/abs/2208.09019

As a result of decoherence, on the scales of time and space of everyday life, the world looks approximately like a collection of parallel universes:

https://arxiv.org/abs/1111.2189

https://arxiv.org/abs/quant-ph/0104033

https://arxiv.org/abs/2008.02328

That is why taking quantum theory seriously as a description of reality is often called the many worlds interpretation (MWI).

The quantum theory explanation of the Bell correlations is that when the measurements of the two entangled systems are performed, quantum information is copied from those systems into decoherent channels in the form of locally inaccessible information that doesn't affect the expectation values of the observables. Since both of the possible outcomes happen, there is no need to match them up until they are compared, and when they are compared, those correlations arise as a result of locally inaccessible quantum information:

https://arxiv.org/abs/quant-ph/9906007

https://arxiv.org/abs/1109.6223

$\endgroup$
$\begingroup$

As you mentioned in your edit, you missed the single-outcome assumption, whose rejection leads to the many-worlds interpretation (if that interpretation even makes sense). I would assume that this assumption holds.

Now I will comment on your assumptions:

  1. Your locality assumption needs more work: what is an event? Is a measurement an event? The problem is that you, the other users, and I can talk past each other because we are not using math. The same goes for the rest of this answer.
  2. The hidden variable assumption is the heart of the discord. Some say it is about determinism; some say it is about realism (having observables be defined prior to measurement). Bell's original paper has something like determinism, but he later came up with his theory of local beables, in which he replaces this assumption and assumption 1 with "local causality". So it is not clear how important this assumption is.
  3. You can formulate assumption 3 without hidden variables. What matters is the correlations between the two measurement devices. What you require is that you can reduce the correlations between the two devices as much as you want. For example, you can do the experiment with non-coherent light and see that the correlations come out to a very small value $\epsilon$, and you can decrease $\epsilon$ to near 0, for example by placing the measurement devices as far apart as you want. Those who do not like statistical independence say that these correlations cannot always be reduced.

Now onto the combinations:

Combinations that throw away 3: it does not matter whether you also throw away 1 and 2 with it. This is superdeterminism and, according to many, anti-scientific. Any theory without statistical independence can violate 1 and 2 without a problem, because any experiment is flawed (a universal conspiracy). So does it matter to discuss these combinations?

Combinations that throw away 1: yes, these are the interpretations that include Bohmian mechanics. They allow variables to be updated non-locally, but they also allow for retrocausality, as in the transactional interpretation.

Combination F (keep 1 and 3, throw away 2): this is Bohr's view of quantum mechanics (the anti-realist view). Bell said that this was nonsense. There are many papers that try to argue for or against it, and it is a matter of controversy even here on Physics SE. I recommend reading about Bell's local causality; it is a strong argument for why this option does not exist. But then you have to agree on how to derive local causality. The problem of the interpretation of quantum mechanics is still open.

$\endgroup$
  • $\begingroup$ Thank you for this answer! I do agree my formulation of Bell's assumptions needs work; they felt iffy when I wrote them, but due to my lack of knowledge regarding the mathematics involved, it is hard to truly understand what Bell was assuming for his theorem (not to mention that some assumptions may be implicit and thus even harder to spot). I am inclined to agree that combination F is false even without my overly specific formulation of assumption 3. How could the correlation be local if it wasn't determined by something shared in the intersection of their light cones, à la hidden variables? $\endgroup$
    – user110391
    Commented Apr 21 at 10:01
  • $\begingroup$ @user110391 particles interacted with each other (entangled) and the system of two particles interacts with the detectors. Where is the light-cone issue? People that defend F say that locality as in the sense of quantum field theory is not violated. Also you can see this as information communication faster than light is not allowed. This is partly because of the indeterminacy of the state. So it feels like we can say that no determination should lead to Bell inequality violation without the need of 1. But nobody has come up with a convincing version of F. $\endgroup$
    – Mauricio
    Commented Apr 21 at 10:10
  • $\begingroup$ It is not true that violating 3 is necessarily superdeterministic. It's far more natural to see that option as retrocausal, or future-input-dependent. The settings are the cause, and the past hidden variables are the effect. The usual paradoxes are avoided because they are hidden variables. Sure, superdeterminism is unscientific, but not future constraints. Physics often uses future boundary conditions to constrain the past (most notably in general relativity, but also in action principles of all sorts). Those certainly aren't unscientific, and neither, therefore, is giving up on #3. $\endgroup$ Commented Apr 22 at 0:39
$\begingroup$

Let's try and analyse Everett's Many Worlds Interpretation (MWI) a bit more and see if it circumvents Bell's theorem without invoking non-local interactions or requiring non-deterministic processes.

In the MWI, if something can occur, it will occur in at least one parallel world, and there are as many new parallel worlds created as there are permutations of possibilities.

If Alice and Bob have their detectors 60 degrees apart, then after a detection of a single pair, and using only locally available information, 33.33% of the parallel worlds have a correlated result (assuming the members of an entangled pair have the same orientation). If one of these worlds is picked at random, the scheme fails, because to agree with QM the correlation should be 25% for these angles.

There is a way to fix this problem. Each time Alice and Bob carry out a measurement, they write down the time, the position of their analyser, and what their detector reading was (1 or 0) in their individual ledgers. In the parallel worlds that are created each time a measurement is made, there are copies of Alice and Bob getting different results and writing different things in their respective ledgers. After doing several thousand measurements, Alice and Bob travel to meet each other and check how the results in their ledgers correlate. When they meet, the chances are that in their current world the correlations do not agree with QM. However, there are millions of copies of Alice and Bob, all with different ledgers, in the parallel worlds, and in at least one of those worlds there is, by pure random chance, a version of Alice and Bob who get results that agree with the predictions of QM when they compare ledgers. It is at this point that the world switches to a parallel world where the correct outcome was achieved. Ta dah!

What if Alice and Bob have good memories and notice that what is in the ledgers does not agree with what they recall writing in them? That is not a problem. When they switched worlds they were replaced by copies that have different memories which do match what is in their ledgers. The old versions of Alice and Bob are erased right back to the time they started the experiment, and anyone they interacted with while carrying out the experiments, and to whom they told the results, is also erased back to the start of the experiment; their replacement copy from the parallel world also has the correct memories. This is unfortunate for the individual concerned if they recently won the lottery and are then switched to a parallel world where they did not win the lottery.

What if by chance there is no suitable parallel world where the correlations of the results in the ledgers agree with QM? Remember, there must have been a time when Alice and Bob decided to do the experiment, and when they made that decision, a parallel world was created where Alice and Bob decided NOT to carry out the experiment. The 'universal algorithm' (UA) switches to the parallel world where they never did the experiment in the first place. In this way the UA ensures that the rules of QM are always complied with.

Another method occurred to me for how to duplicate the correlations of quantum mechanics, and that is to have some sort of supercomputer that can track the position of everything in the universe, so it knows the positions of both Alice's and Bob's detectors at any given time and superluminal communication is not required. Let's call this the hidden computer model (HCM). It simply refers to the records in its memory of the analyser positions and computes the required correlation of 25%. It then sets the outputs of the detectors to matching values (1:1 or 0:0) 25% of the time and to opposite values (1:0 or 0:1) 75% of the time to produce what we expect, and updates its records accordingly. Distance is no object when you know the exact location and velocity of everything in the universe(s) at any given time. Unlike the super-deterministic model (SDM), the HCM does not need to predict the future in order to decide what photons to emit. It just lets the emitter emit random orientations, and sorts it all out on reception. This model would probably need a computer system that has more particles than there are particles and subatomic particles in our universe, in order to have enough memory to store all the required information about the location, velocity, spin/polarisation, etc. of all those particles.
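
As a toy caricature of that idea (the function name and the use of the 25% matching figure from the paragraph above are just illustrative choices), the hidden computer could be sketched like this: it sees both analyser settings at once, so it can assign the two readings jointly, with no signal passing between the detectors.

```python
# Toy sketch of the hidden computer model: a single agent that knows both
# analyser settings simply draws matching or opposite readings with whatever
# probability the quantum prediction requires (25% matching in the example above).
import random

def hidden_computer(p_match=0.25):
    """Return (alice_reading, bob_reading) for one detected pair."""
    r = random.choice([0, 1])
    if random.random() < p_match:      # matching readings with the target probability
        return r, r
    return r, 1 - r                    # otherwise opposite readings

pairs = [hidden_computer() for _ in range(100_000)]
match_rate = sum(a == b for a, b in pairs) / len(pairs)
print(match_rate)                      # ~0.25, the correlation the model is told to produce
```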

Looking at the SDM, one problem it has to overcome is how to simulate the apparently random decay of radioactive particles. One way it could do this is to give each particle a seed number (n), where this number refers to the nth digit of Pi. After each interval of time the next digit of Pi is obtained, and when a certain sequence of digits is found the particle decays. This would give the appearance of random decay. Now, when determining the next iteration of the universe, the system looks at each particle, looks at its seed number together with the current iteration number of the universe (the time), and can compute exactly when it will decay. At each iteration, if an entangled pair is about to be created, it can compute forwards to predict the exact location and actions of observers like Alice and Bob and then determine what orientation the entangled pair should be created with. Like the HCM, enormous hidden computing power would be required.
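
A toy version of this pi-digit bookkeeping could look like the following sketch (the target pattern "999", the seed values, and the use of the mpmath library to obtain digits of Pi are illustrative choices, not part of the model itself):

```python
# Toy version of the seeded pi-digit decay idea: each particle's decay time is
# fixed in advance by where its seed starts reading the digits of pi.
from mpmath import mp, nstr

mp.dps = 10_000
PI_DIGITS = nstr(mp.pi, 10_000).replace(".", "")      # "31415926535..."

def decay_time(seed, pattern="999"):
    """Time step at which a particle with this seed 'decays': the first step at
    which the digits read so far, starting at offset `seed`, end in `pattern`."""
    for t in range(len(pattern), len(PI_DIGITS) - seed):
        if PI_DIGITS[seed + t - len(pattern): seed + t] == pattern:
            return t
    return None                                        # no decay within the digits used

# Different seeds give apparently random, but fully predetermined, decay times,
# which the "system" can compute in advance for any particle.
for seed in (0, 1, 17, 123, 4567):
    print(seed, decay_time(seed))
```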

Neither the HCM nor the SDM is satisfying, because they are orders of magnitude more complex (and require hidden computing power) than a system that allows non-local interaction between entangled particles, which is self-evolving and does not require hidden computing power or continuous external interference from some external controlling entity or 'puppet master'.

The Bohmian Mechanics Model (BMM) requires a pilot wave to guide the particle to its destination. This pilot wave is definitely not local in behaviour. It has infinite extent at the time of emission, its shape is determined by everything in the universe at that instant, and its shape instantly adapts to any changes; so if, for example, Bob changes his analyser orientation at any time, the wave shape is instantly updated and guides the particle accordingly. The only deterministic thing about the BMM is that the trajectory of the particle is determined by initial conditions (which we can't measure precisely) and by local interactions between objects in its path and local interactions with the pilot wave. There are some similarities between the pilot wave and the curved spacetime of a gravitational field, in that they both provide a terrain that guides the particle, but changes in the GR field can only occur at the speed of light. The BMM seems to be better at explaining the two-slit interference experiment than the correlations of entangled particles and has not been widely accepted in the scientific community.

$\endgroup$
  • $\begingroup$ Your derivation seems to be wrong, for detectors 60 degrees apart we get the Schmidt decomposition: $$ |\psi\rangle = \tfrac14\sqrt6 \,|0\rangle_A|0\rangle_B + \tfrac14\sqrt2 \, |0\rangle_A|1\rangle_B + \tfrac14\sqrt2 \, |1\rangle_A|0\rangle_B -\tfrac14\sqrt6 \,|1\rangle_A|1\rangle_B $$ and then you find that MWI exactly matches the Born rule (the 0.25 correlation). I added the mathematics of your example at the end of my answer physics.stackexchange.com/questions/811135#811150 . It's unclear where you get your claimed factor 0.33 from. $\endgroup$ Commented Apr 22 at 3:42
$\begingroup$

$\color{red}{\large \text{(For the mathematics & physics: see example at end!)}}$

In your analysis you frequently mention "wave function collapse". But there is no reason to assume such a thing exists. Instead we can assume there is only subjective collapse. By insisting that it occurs you are limiting yourself to "objective collapse theories". You are ruling out the many-worlds theory and probably some similar ones that go by other names.

When you describe assumption 2 (about the hidden variables) you introduce this new, hidden assumption: that wave functions can collapse!! So you should add this as assumption 4 (or perhaps better assumption 0?). You then of course do end up with about twice as many cases for selectively dropping some assumptions...

I also think the concept of "hidden" variables is not clear, since every theory has variables describing the state, and you can always call them hidden as long as you haven't measured anything. So what does a variable have to do to be called hidden, or not hidden?

Actually, for Bell's result, I do not think you really need all these assumptions, since it is essentially just an upper bound on the correlation that is possible in joint probability distributions. Of course, only if you sample them fairly, which looks like your assumption 3, and of course there must be some information present, which you can always call "hidden" so we can drag in assumption 2; but assumption 1 certainly is not needed. Probability distributions are pure mathematical concepts; they do not need space and time to exist.
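
To make that bound explicit (this is just the standard CHSH bookkeeping, added here for completeness): for any four predetermined values $a, a', b, b' \in \{-1,+1\}$ we have $$ ab + ab' + a'b - a'b' = a(b+b') + a'(b-b') = \pm 2, $$ because one of $b+b'$ and $b-b'$ is $0$ and the other is $\pm 2$. Averaging over any joint probability distribution for $(a, a', b, b')$ then gives $$ \big|\langle ab\rangle + \langle ab'\rangle + \langle a'b\rangle - \langle a'b'\rangle\big| \le 2, $$ which is the CHSH form of the inequality; locality and statistical independence only enter when arguing that such a single joint distribution for all four quantities exists and is sampled fairly.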

Now to your question: ways to get around Bell's theorem. I'll mention just one, using quantum mechanics without collapse!

So we drop assumption 0 (the collapse); we can keep assumption 1 (local QFT is our best theory!); we can forget assumption 2 since it assumes collapse to happen and we ruled that out. We can keep assumption 3, but reformulate it to: "the variables of the measured state are unrelated to the measurement device's measurement settings" (to stress that we do fair measurements but avoid mentioning the hidden variables of assumption 2, which we dropped).

Sticking to QM without collapse, you will then get correlations that do violate the limit of Bell's theorem, but by at most a factor $\sqrt{2}$, because they still obey the Tsirelson bound. To get even larger correlations you could look at Popescu and Rohrlich's PR box, but then we're going even further than what you ask, beyond quantum mechanics.
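
For completeness, here is a small check (the defining rule $a \oplus b = x \cdot y$ is the standard Popescu-Rohrlich definition; the code itself is just an illustration) that the PR box reaches the algebraic maximum of 4 for the CHSH combination, beyond the quantum value $2\sqrt{2}$:

```python
# The PR box: outputs a, b in {0,1} satisfy a XOR b = x AND y, with the two
# allowed output pairs each having probability 1/2. Its CHSH value is 4,
# above the Tsirelson bound 2*sqrt(2) of quantum mechanics.
def pr_correlation(x, y):
    """E(x, y) for the PR box, with outputs mapped from {0,1} to {+1,-1}."""
    total = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p = 0.5 if (a ^ b) == (x & y) else 0.0
            total += p * (1 - 2 * a) * (1 - 2 * b)
    return total

S = (pr_correlation(0, 0) + pr_correlation(0, 1)
     + pr_correlation(1, 0) - pr_correlation(1, 1))
print(S, "vs quantum maximum", 2 * 2 ** 0.5)          # 4.0 vs 2.828...
```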

(EDIT) PS: I see that you now edited the question to address this fourth assumption, about the collapse. When you write "thus splitting reality into four branches", that may be a bit suggestive. The situation is a superposition, which is a normal concept in QM, we never call that a "splitting of reality" in other cases. As for your question about circumventing Bell, we now have, for the 4 branches after the measurements:

  1. The QM state of the two maximally entangled particles clearly contained all necessary information to create the 4 complex amplitudes after the measurements, for every possible choice of the detector axes of A and B. (See below why this does not require infinite information!)
  2. This information was in their combined entangled state long before any measurement was done and before the particles reached their (possible very distant) positions. So if the detector axes were changed at the last moment this did not lead to a special need for (superfast) communication, the necessary information was already there.
  3. Although this QM state is nonlocal (describing joint properties of distant particles) it does not need interaction, or action, at a distance. So in the sense of your assumption 1), it is local.
  4. Such a QM state does not contain enough information to select one of the 4 branches to be the "chosen one" after the measurements, only enough to give them the 4 complex amplitudes. And complex amplitudes can give quantum correlations that are higher than correlation between real positive probabilities which are restricted by Bell's theorem. So that is the answer to your question: describe things with amplitudes, not probabilities!

As for the precise information in the state: If Alice uses a basis $|0\rangle_A, |1\rangle_A$ and Bob $|0\rangle_B, |1\rangle_B$ then the Schmidt decomposition of a maximally entangled state (like the singlet state or one of the other Bell states, or certain linear combinations of them) can always be written as: $$ \begin{align} |\psi\rangle = \alpha\, |0\rangle_A|0\rangle_B + \beta\, |0\rangle_A|1\rangle_B + e^{i\phi}\beta^* |1\rangle_A|0\rangle_B -e^{i\phi}\alpha^* |1\rangle_A|1\rangle_B \end{align}. $$ Let's now assume that for whatever reason, Alice instead uses basis $|0\rangle_{A'}, |1\rangle_{A'}$ and Bob also changes his basis, we can still write $$ \begin{align} |\psi\rangle = \alpha'\, |0\rangle_{A'}|0\rangle_{B'} + \beta'\, |0\rangle_{A'}|1\rangle_{B'} + e^{i\phi'}\beta'^* |1\rangle_{A'}|0\rangle_{B'} -e^{i\phi'}\alpha'^* |1\rangle_{A'}|1\rangle_{B'} \end{align} $$ only it now has other coefficients based on altered values $\alpha', \beta',$ and $\phi'$. The new set of 4 amplitudes in the superposition can be obtained by just doing the appropriate (double) rotation in the space of states, so it requires no addition of infinite information for infinitely many choices of the detector axes that A and B might use.

$\color{red}{\large \bf PPS:}$ As suggested in one of the other answers, let's assume the Bell state $|0\rangle_A|0\rangle_B+|1\rangle_A|1\rangle_B$ is the input entangled state and that A and B then measure with 60 degrees axis difference, rotated in the $z-x$ plane. For spin-$\tfrac12$ states this gives the Schmidt decomposition: $$ |\psi\rangle = \tfrac14\sqrt6 \,|0\rangle_A|0\rangle_B + \tfrac14\sqrt2 \, |0\rangle_A|1\rangle_B + \tfrac14\sqrt2 \, |1\rangle_A|0\rangle_B -\tfrac14\sqrt6 \,|1\rangle_A|1\rangle_B $$ If A and B repeat the process $N$ times, we end up with $$ \begin{align} |\psi\rangle^{\otimes\,N} &= \Big(\tfrac12\sqrt3 \ \frac{|0\rangle_A|0\rangle_B-|1\rangle_A|1\rangle_B}{\sqrt2} +\tfrac12 \,\frac{|0\rangle_A|1\rangle_B+|1\rangle_A|0\rangle_B}{\sqrt2}\Big)^{\otimes\,N} \\ &= \big(\tfrac12\sqrt3 \ |C_+\rangle + \tfrac12 \,|C_-\rangle\big)^{\otimes\,N} \end{align} $$ where the normalized $|C_+\rangle$ and $|C_-\rangle$ denote the combined terms with matching and non-matching readings in one iteration. After $N$ measurements, creating $2^N$ branches in the superposition, we can then evaluate this tensor product and find those superposition terms where exactly $n$ of the $N$ results have matching outcomes (and $N-n$ have opposite outcomes). There are $\tbinom{N}{n}$ of those terms, and each has amplitude $(\tfrac12\sqrt3)^n\ (\tfrac12)^{N-n}$, and they are all orthogonal in the tensor product space. So if we add them up and call the combined sum term $|\Psi_n\rangle$, then it has amplitude: $$ || \Psi_n || = \binom{N}{n}^{1/2} (\tfrac12\sqrt3)^n\ (\tfrac12)^{N-n} \tag{a} $$ The total superposition can then be written as: $$ |\psi_{\,\text{Tot}}\rangle = |\psi\rangle^{\otimes\,N} = \sum_{n=0}^N \ |\Psi_n\rangle, \tag{b} $$ nicely splitting it up into terms where A and B have matching results in exactly $n$ of the $N$ iterations. Now let's observe that the amplitudes $(a)$ are precisely the square roots of the binomial expansion terms in: $$ \big(\tfrac34 + \tfrac14\big)^N = \sum_{n=0}^N \ \binom{N}{n}\ (\tfrac34)^n\ (\tfrac14)^{N-n} \tag{c} $$ and we know that those terms get a very sharp peaking around $n=\tfrac34 N$, so their square roots will also have a peaking around that $n$ value, somewhat less pronounced by the square root, but in the large-$N$ limit still very strong.

So if (finally!) we want to see this as a many-worlds state (as was the intention in the other answer I referred to), then all terms in the tensor product state are branches, and we see that there is a very high peaking in amplitude for the subset of worlds where Alice and Bob have their number of matching results, $n$, very close to $\tfrac34 N$. As is to be expected in QM (the Born rule rederived! QED).
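
For readers who want to check these numbers without redoing the algebra, here is a short numerical sketch (the rotation convention for the measurement bases is my own choice, so individual signs may differ from the decomposition above, but the magnitudes and the peak position are the point): it reproduces the amplitude magnitudes $\tfrac14\sqrt6$ and $\tfrac14\sqrt2$ for a 60 degree axis difference and shows the peaking of $||\Psi_n||$ near $n=\tfrac34 N$.

```python
# Numerical check of the 60-degree Schmidt decomposition amplitudes and of the
# peaking of ||Psi_n|| near n = 3N/4 (illustrative sketch).
import numpy as np
from math import comb, sqrt

def rotated_basis(theta):
    """Spin-1/2 basis rotated by angle theta about the y axis (z-x plane)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([c, s]), np.array([-s, c])         # |0'>, |1'>

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)              # (|00> + |11>)/sqrt(2)
a0, a1 = rotated_basis(0.0)                            # Alice along z
b0, b1 = rotated_basis(np.pi / 3)                      # Bob 60 degrees away

for name, (ka, kb) in {"00": (a0, b0), "01": (a0, b1),
                       "10": (a1, b0), "11": (a1, b1)}.items():
    amp = np.kron(ka, kb) @ psi
    print(name, round(abs(amp), 4))                    # sqrt(6)/4 ~ 0.6124, sqrt(2)/4 ~ 0.3536

# Amplitude of the branch with exactly n matching results out of N, eq. (a);
# it peaks sharply near n = 3N/4 for large N.
N = 400
norms = [sqrt(comb(N, n)) * (sqrt(3) / 2) ** n * (1 / 2) ** (N - n)
         for n in range(N + 1)]
print("peak at n =", int(np.argmax(norms)), "vs 3N/4 =", 3 * N / 4)
```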

$\endgroup$
  • $\begingroup$ "I also think the concept of "hidden" variables is not clear, since every theory has variables describing the state, and you can always call them hidden as long as you haven't measured anything." AFAIK, hidden variables are the extra and necessary info that along with the wave function, fully determines the future state; they are "hidden" because (then-)current QM cannot describe them. However, Many-Worlds is deterministic and not a hidden-variable theory? If so, I am lacking some important knowledge regarding Many-Worlds theories. $\endgroup$
    – user110391
    Commented Apr 20 at 17:27
  • $\begingroup$ "So we drop assumption 0 (the collapse), we can keep assumption 1 (local QFT is our best theory!)" Dropping assumption 0 is tantamount to adopting a subjective wave function collapse, correct? However, will that necessarily entail a Many-Worlds interpretation? I cannot see any other way. $\endgroup$
    – user110391
    Commented Apr 20 at 17:29
  • $\begingroup$ If for variables, "hidden" means QM cannot describe them, then the real meaning of assumption 2 becomes: "QM is incomplete." That's saying clearer what you mean and it avoids the problem of defining hidden. And "subjective" wave function collapse means that we accept that the full superposition still exists, but that some parts of the wave function have so little influence on each other (by decoherence) that we can leave them out of the computation. It is just decoherence happening after entanglement. The "popular press" version of "many-worlds", OTOH, may make lots of other claims... $\endgroup$ Commented Apr 20 at 17:41
  • $\begingroup$ "(...) then the real meaning of assumption 2 becomes: "QM is incomplete."" Yes, but with the addition that it is incomplete because there is something that absolutely determines the future state of the quantum object. AFAIK, the purpose of assumption 2 is to say that, despite current appearances, determinism is upheld. However, it seems there are more ways to arrive at determinism (MW interpretation), but AFAIK, the MW intepretation only allows for ontic determinism, but not epistemic determinism, as opposed to hidden-variable theories that say the determining info is accessible to us. $\endgroup$
    – user110391
    Commented Apr 20 at 17:50
  • $\begingroup$ No, if the state doesn't collapse you need no determinism to determine in which part it collapses, neither ontic determinism, nor determinibility, etc. The wave functions of QM are then all you need to absolutely determine the future state of everything. Probably what we should say is that with subjective collapse, in the collapsed part of the state not all information about the whole state is available. But that should not be confused with the theory as a whole. $\endgroup$ Commented Apr 20 at 18:23
