
[Note: After I made this post, the title and the post have been criticised as badly phrased and/or opinionated. I partially agreed with that and made some initial modifications. However, after others started writing answers, I decided to leave the post as is, so that I would not, as it were, pull the rug out from under those answers. See the chat for details. If you also think that that criticism is warranted, then please interpret the question as "What makes this problem so controversial?" or try to read the post in the light of that question.]

The Newcomb problem is a conceptual puzzle that became widely known after Nozick first presented it (in 1969) and Martin Gardner wrote about it in his popular Scientific American column (1973). The problem is fascinating, intriguing, and confusing to many people.

Some might see it as a stupid, confusing little mind-twister, but philosophers (who have given it some thought) suggest that it may teach us something about human decision making. Nozick writes that it confronts us with a paradox of rationality: a conflict between two principles of rational decision making that in the Newcomb scenario seem to lead to different choices. The problem seems to be a neat, small thought experiment, easily repeatable as an actual experiment, and you might guess that after half a century of study and debate there would be a broad consensus about what to do in this scenario (and why!), but ... it turns out that there is still just as much controversy as 50 years ago!

According to the Wikipedia entry:

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." The problem continues to divide philosophers today. In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%).

If you wonder why the percentages don't sum to 100%: the rest either thought the problem was unclear, didn't know how to answer, or had other reservations. (See the 2020 PhilPapers Survey.)

I am amazed by the controversy. If Nozick's quote is correct, then people who take a stand in this debate may be thinking: "How is it possible that about half of the other respondents - academic philosophers - don't see the simple, correct solution? Why are about half of them still confused?" My question then is: "Why is there no broad consensus about what to do (and for what reasons) in the Newcomb scenario?"

Here we have an apparently simple conceptual riddle, and yet about half of the respondents are not able to see the solution (if there is one). Why?

That is my first, main question: Why is this problem so confusing? (I'm assuming it is also confusing to most non-philosophers, but I have only anecdotal evidence. I actually also believe there is a simple, correct solution, but which side I happen to be on in this debate doesn't seem relevant for this question. Or is it?)

It may be impossible to answer that question without directly engaging with the problem and actually proposing a solution. After all, we can only show the fly the way out of the fly-bottle once we have clearly identified the fly's blind spots and have found a way out for ourselves. But is it perhaps possible to point at some aspects of the presentations that either (potentially) add to the confusion or do the opposite and make it less murky? Are there particular aspects in general (or in this debate) that keep it murky, keep the fly trapped? Have any studies been done of these kinds of meta-aspects (of debates or lines of argumentation) that could explain why the debates (in general or in this particular case) are endless? -- This is my second question. I hope it's sufficiently clear.

I actually have more follow-up questions, but I'll leave it at this :)

References

  • The Wikipedia entry about the problem is fine as far as it goes, but is perhaps not completely unbiased. Also, it mentions several opinions, but the listing appears ad hoc; it's not representative of the rainbow of actual opinions.
  • Unfortunately the SEP does not have a separate entry devoted to the problem. It is mentioned in several other entries, though. Practical Reason and the Structure of Actions mentions it in passing. A short (imo extremely flawed) analysis from the standpoint of "causal decision theory" is given in Normative Theories of Rational Choice: Expected Utility. A longer analysis can be found in a link mentioned in a comment below this post.
  • The Wolpert and Benford presentation (2011) is sometimes cited as "the" solution, but this is controversial. Their argument is strong, but it stands or falls with their conclusion that the problem is "ill-posed" -- basically, that the problem itself allows/requires two different interpretations. But this can be challenged.
  • A very readable overview of an approach from a computational point of view can be found in a series of blog posts An Introduction to Newcomb-like problems by Nate Soares.

7 Answers

11

The confusing part is that we don't know how that predictor works, because it defies our expectations of how reality works.

For one thing, it's a fact that the truth value of the prediction is determined by the player's action of choosing A, B, or A+B. (Yes, the player could also choose A alone, just out of spite, to show the predictor is wrong, at the price of certainly giving up any chance at the $1,000,000.)

It's also a fact that a cause must precede its effect. Now, the action of the player precedes the result, while the prediction precedes the player's action. So if the prediction IS also the result, the prediction would have to happen both before and after the player's choice, which is impossible.

OR

The prediction, though unknown, would need to be the determining cause of the player's choice, which would interfere with the free choice the player is offered, but which might still work, given that B is technically the only viable option if you accept the setup.

So we're either in the realm of fantasy ("pigs can fly? Sure, I'll take it"), alternative physics ("Sure, I get $1,000,000, but what does that even mean in a universe where cause and effect are weird?"), or simply a scam ("Thanks for the money, oh, did you see the giant over there...").

If we assume we're in a fantasy, and that no matter how ridiculous the rules are, that is just what the rules are, then it's, as said, fairly simple: pick B and take your $1,000,000.

You know that by this twisted logic the prediction will show whatever you've actually picked, and so, when the prediction matches your pick, B is simply the best option, as B nets a million while A or A+B only nets $1,000.

The same goes for alternative physics, though it's not clear whether $1,000,000 is even greater than $1,000 in that upside-down reality.

So realistically you'd have to assume it's a scam, meaning you have to work out whether you're the one getting scammed or whether you're a tool to scam someone else. So what does the person offering this game value more: their reputation or $1,000,000?

These are variables completely outside our ability to know or even guess, so the only thing you can expect with certainty is that your upfront money to play the game is gone. But jokes aside: if we don't consider it to be that scammy, you could expect a safe $1,000 by picking A and B, with a fractional chance of a million on top, or gamble on $0 vs $1,000,000 by picking just B.

You could even be lucky: maybe they offered this as a $10,000,000 bet to a third party and rigged A with a trapdoor that makes the million drop away if A is opened. Then picking B would actually yield you a million and lose that third party $10,000,000.

Or they charged you $100 to play the game, and once it comes to the reveal they just run, and you find a $5 cardboard set and a badly faked $1,000 under the transparent glass.

In the scenario as given, though, A+B would be the conservative pick in case of a scam.

As other answers have pointed out, the psychology of orders of magnitude might mess with us as well. Because of the three-orders-of-magnitude difference between $1,000 and $1,000,000 (a ratio of 0.1%), we round that ratio down to basically zero, so the difference from the actual zero ratio (for $0 vs $1,000) appears to be of the same order of magnitude. The felt risk of losing the $1,000 therefore seems low compared to the reward, while in reality you have nothing, and after being scammed you will likely still have nothing, or less. So the $1,000 is a hell of a lot more than nothing, while the $1,000,000 is likely just bait.

So it's essentially the choice between 0 --- a whole lot --- infinity.

Now, infinity is a lot bigger than a lot, and a lot is a lot bigger than 0, so depending on which side you're initially drawn to (high reward or loss avoidance), you might prefer the middle option as a high reward or reject it as basically a loss.

10
  • 8
    "we don't know how that predictor works, because that defies our expectation as to how reality works." There is a lot of truth in that sentence. People aren't rejecting the logical answer, they are rejecting the premise!
    – Michael W.
    Commented Jun 20 at 19:30
  • 2
    +1 A little long but I think you have it more or less worked out. This makes me think of 'mentalist' tricks. I get how you can use suggestions to get people to do what you want while they think they are in control, but I don't see how that can work all the time. It seems to me that if they chose a person who knows what they are doing, they are in big trouble. I think this also plays into a lot of why people are so impressed with LLM chatbots.
    – JimmyJames
    Commented Jun 20 at 21:53
  • 4
    The problem with this argument is that the predictor does not require retrocausation in order to exist. In some presentations, the predictor is "merely" using advanced psychology to determine what the chooser is more likely to choose, which is hardly implausible. In the extreme case, you could just directly ask the chooser which box they would hypothetically choose, while under the influence of some drug that suppresses short-term memory so they don't remember the question or their answer. There is no "fantasy" requirement to do this experiment in real life.
    – Kevin
    Commented Jun 20 at 21:58
  • 1
    @Kevin - I agree with those points. There is also a very simple way to implement a near-perfect predictor: simply ask the player in advance what they are going to choose. The problem presentation doesn't rule this "implementation" of a predictor out, and it obviously doesn't involve any time-travel or retro-causation. This reduces the problem to: How truthful is the player when they say they will one-box? (Basically brings it closer to Parfit's Hitchhiker.) Of course, still not an easy practical problem - for the predictor - but no longer a weird exotic one, at least.
    – mudskipper
    Commented Jun 21 at 0:20
  • 3
    @Kevin One way to create a 'perfect' predictor is to cheat: create a device that changes the result as the player is making their choice. That aside, there's a whole aspect of magic/mentalism about 'forcing' people to make a specific choice. I was recently playing a game with my kid where we were making up 'backronyms' for words. One of the words was 'penguin' and we both chose 'igloo' for the 'I' and I think that was maybe more than just coincidence.
    – JimmyJames
    Commented Jun 21 at 15:40
8

Newcomb's paradox is confusing because different answers to it highlight different ways of understanding what it means to make a rational decision.

  1. Evidential decision theory does not distinguish between an event and an action and it assesses a rational decision as one that maximises expected utility. It yields the one-box solution. But evidential decision theory fails to deliver good decisions in lots of cases.

  2. Causal decision theory distinguishes events from actions and assesses a rational decision as one that maximises the expected utility of the causal consequences of an action. An action is treated as an intervention and, when conditionalising on it, one screens off the probabilities of events that are causally independent of the action. This appears to yield the two-box solution, at least on a simple account (see the numerical sketch after this list). Causal decision theory usually gives much more plausible results than evidential decision theory, but it might be argued that the reflexive nature of Newcomb's problem renders it inapplicable.

  3. It might be possible to find a hybrid form of rational decision based both on a consideration of causal consequence and the reflexive nature of the prediction. Some writers have claimed that the puzzle can be understood as having two stages: a pre-prediction stage at which the agent is rationally motivated to decide to take one box, and a post-prediction, post-box-filling stage at which the agent is rationally motivated to take both boxes on the basis that nothing they do now can causally affect the contents of the closed box. This understanding of the problem devolves into another interesting puzzle which asks whether it is rationally possible to make a decision now knowing that one will have no rational reason in future to follow through with it.
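
To make the contrast between the first two theories concrete, here is a small Python sketch. The 0.99 predictor reliability and the 50/50 prior used in the causal calculation are illustrative assumptions on my part, not part of the problem statement; the payoffs are the standard ones.

    # Sketch of how evidential and causal decision theory diverge on Newcomb's problem.
    ACCURACY = 0.99      # assumed reliability of the predictor
    PRIOR_FILLED = 0.5   # assumed prior probability that box B contains the million

    def payoff(action, box_b_filled):
        """Total money received for a given action and state of box B."""
        total = 1_000_000 if box_b_filled else 0
        if action == "two-box":
            total += 1_000
        return total

    def edt_value(action):
        # Evidential reasoning: the action is evidence about the prediction,
        # so condition the state of box B on the action taken.
        p_filled = ACCURACY if action == "one-box" else 1 - ACCURACY
        return p_filled * payoff(action, True) + (1 - p_filled) * payoff(action, False)

    def cdt_value(action):
        # Causal reasoning: the action cannot affect the already-fixed contents,
        # so the same unconditional probability is used whichever action is taken.
        return PRIOR_FILLED * payoff(action, True) + (1 - PRIOR_FILLED) * payoff(action, False)

    for action in ("one-box", "two-box"):
        print(action, edt_value(action), cdt_value(action))
    # one-box: EDT 990,000   CDT 500,000
    # two-box: EDT  11,000   CDT 501,000  -> EDT recommends one box, CDT recommends both.

For any reliability appreciably better than chance, the two conditionalisations pull in opposite directions, which is the conflict described in points 1 and 2.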

Newcomb's paradox has also attracted some analyses based on game theory, and these exhibit similar problems. It is often assumed in non-cooperative game theory that if a player has a dominant strategy then it is rational to adopt it (though arguably this fails in prisoner's dilemma situations); this assumption would yield the two-box solution. But it is also possible to think of the predictor as an antagonist who rewards those who don't follow dominant strategies. Some even claim that the predictor is rewarding irrational decision making, in which case the player is better off going along with the expected pay-off and taking the one box.

So, Newcomb's paradox is capable of telling us interesting things about rational choice and game theory.

12
  • 2
    From the perspective of policy optimization: adopt the policy that yields maximum reward. This would be a one-box policy.
    – causative
    Commented Jun 20 at 4:44
  • 3
    There seems to be an implication in this problem that the goal of the predictor is accuracy, it's not competing for the money. Consistently choosing B only makes its job easier and nets you $1M per round. The closer it is to infallible, the better for you. Am I missing something? Commented Jun 20 at 14:24
  • @CristobolPolychronopolis - If most players just choose one-boxing, then that does make it easier for the predictor: simply adopt a probabilistic strategy, without even inspecting the player in the current game. When playing against multiple players, the pred. can also modify that strategy as they go along. The only required assumption then is that most (or just a particular % of) people will select this strategy or that strategy. Based on the empirical evidence, about 22% of philosophers don't know what to do, about 39% two-box, and 31% one-box. But that means about 50% two-box ... Ay :/
    – mudskipper
    Commented Jun 21 at 0:40
  • @causative: The two boxers also contend that they are yielding the maximum reward - more specifically, they contend that the one boxers are willfully leaving $1000 (the amount in the lesser box) on the table after the contents of both boxes have already been determined, which means it cannot possibly be the maximum reward. Talk of "policy optimization" sounds profound, but both sides can argue that they are producing the maximum reward, so it does not help us to distinguish which side is correct.
    – Kevin
    Commented Jun 21 at 21:33
  • 1
    @Kevin The predictor's choice depends on whether you're a "one-boxer" or a "two-boxer." Arguably we do have at least some ability to choose, right now as we are discussing it, whether we are a "one-boxer" or "two-boxer"; i.e. that our decision, now, of what it is best to do, would constrain or influence what we would actually do if we were in the situation. And that means we have some ability to choose, right now, whether the predictor would put the $1 million in the box.
    – causative
    Commented Jun 22 at 0:54
6

In addition to Bumble's answer, I'd posit the following two reasons for confusion:

  • As explained on the Wikipedia page, it makes a difference whether the "reliable predictor" is 100% infallible and incapable of making an error, or whether it is only correct in the overwhelming majority of cases. The first case in particular opens the door to circular causation (where the future decision of the player influences the past prediction, and vice versa).
  • The second is that the two camps both make "local" sense: choosing A+B always nets the 1000 from A (on top of what B gives), and thus the expected value is always 1000 "more" (the problem obviously being that this is only in relative terms). Choosing B always nets 1000 less (relatively), although of course "0 + 1000" is smaller than "1000000 - 1000". This kind of confusion happens all the time (for example with marginal values in stepped tax systems), and while it may seem trivial to you and me, it obviously is not to other people (see the small numeric sketch after this list).
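
A tiny numeric sketch (in Python) of that second point. It assumes, purely for simplicity, a predictor that is always right; the contrast between the relative and the absolute framing is the same with any reliable predictor.

    # Relative framing: whatever box B already holds, adding box A is worth +$1,000.
    for b in (0, 1_000_000):
        print((b + 1_000) - b)        # always 1000

    # Absolute framing: if the prediction tracks the choice (perfect predictor
    # assumed here), the totals each choice actually ends up with are:
    print("B only:", 1_000_000)       # one-boxing was predicted, so B was filled
    print("A + B :", 0 + 1_000)       # two-boxing was predicted, so B was empty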

As to why our brain suffers from these issues, there is simply no reasonable evolutionary pressure for our brains to develop in a way that would make them go straight like an arrow to a commonly accepted "trivial" answer. These problems do not come up in everyday life.

The conundrum of course has also been formulated with this effect in mind - it is supposed to be confusing, just like many other problems of game theory (e.g., the Prisoner's Dilemma, the Trolley Problem, Monty Hall, etc.). It is designed to show the limitations of our brain's intuitive solutions for very complicated, non-everyday problems. And it gives us fodder to sharpen our minds in the art of discussing and dissecting these (at least intuitively) almost unsolvable problems.

4
  • 1
    @AnoE - "These problems do not come up in everyday life." That is something I've also been thinking about - it's surely relevant. The problem(s) seem contrived, highly unrealistic. Parfit's Hitchhiker (which is a variant) for instance can even be perceived as distasteful since it starts out with two people who are totally selfish - is that really how normal people would behave in a situation as described? But I would ask: "The problem as presented may be contrived, but can the formal problem (underneath it) perhaps model real-life situations? Can we find a non-contrived variant?"
    – mudskipper
    Commented Jun 20 at 11:36
  • 2
    @mudskipper, sure, Game Theory in itself is very valuable and can help with many practical decisions. Considering these paradoxa is like a weight lifter training in the gym - in itself totally useless, but when you have to move house next time, you will be happy that you have all those muscles!
    – AnoE
    Commented Jun 20 at 12:07
  • 1
    @AnoE - I like that metaphor! :)
    – mudskipper
    Commented Jun 20 at 12:16
  • I think variants of the prisoner's dilemma occur in everyday life. For example, if you have a job where you have to contact clients, you may talk to a client without knowing what the (other) manager has told the client before.
    – rus9384
    Commented Jun 21 at 22:38
4

I think there is an important psychological issue here. While expected gain/loss is certainly a rational criterion, there are some who would be distraught if they found out that their choice had meant they had lost out on $1,000,000. In other words, your actual subjective gain/loss/regret may not be linear in the amount of money - you may prefer to almost certainly lose $1,000 rather than have a much smaller possibility of having lost $1,000,000. On the other hand, you might be more conservative and risk-averse and prefer a high chance of a modest gain to a small chance of a big gain. In the former case you might prefer "B only"; in the second case, you might prefer "A+B". This is true whether we perform a detailed analysis or not, because the decision is dominated by our subjective regret preferences rather than logic.

If this SE has LaTeX support, I might be tempted to have a go at an objectivist Bayesian analysis, but without that it would be like kicking a dead whale down a beach (although that isn't the worst way of dealing with that particular disposal problem).

Unless I have missed something (which is entirely possible), the question isn't that confusing; the main problem is that we don't know how reliable the predictor is. If they are perfect, then there is no adversarial nature and we can pick B only as our optimal strategy, knowing we will get the $1,000,000. If we know they are anti-perfect, i.e. guaranteed to get it wrong, then choosing A+B ensures there will be $1,000,000 in box B. There is likely to be some cut-off point between the two where the optimal strategy switches.
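
A rough sketch (in Python) of where that cut-off sits, treating money as linear utility (exactly the simplification criticised above) and taking the expected value of each choice conditional on making it, with p the predictor's reliability. A causal decision theorist would dispute the relevance of this particular calculation, and subjective regret preferences would shift the switching point further.

    # Where does the optimal strategy switch? Let p be the predictor's reliability
    # (p = 1: perfect, p = 0: "anti-perfect").

    def ev_one_box(p):
        return p * 1_000_000                  # the million is there only if predicted

    def ev_two_box(p):
        return (1 - p) * 1_000_000 + 1_000    # the million is there only if mispredicted

    # Setting ev_one_box(p) == ev_two_box(p) gives p = 1,001,000 / 2,000,000 = 0.5005.
    break_even = (1_000_000 + 1_000) / (2 * 1_000_000)
    print(break_even)                          # 0.5005
    for p in (0.0, 0.4, break_even, 0.6, 1.0):
        print(p, ev_one_box(p), ev_two_box(p))
    # Above roughly 50.05% reliability "B only" has the higher expected value;
    # below it, "A+B" does.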

5
  • 1
    "There are those that look at things the way they are, and ask 'why?' I dream of things that never were, and ask 'why not?'." Another one: "There are two kinds of people: those that divide people in to two kinds, and those that do not."
    – Scott Rowe
    Commented Jun 20 at 11:24
  • @ScottRowe I vaguely recall something in "Thinking fast and slow" along those lines, but it is a fair while since I read it. Commented Jun 20 at 11:29
  • 2
    There are those that think fast and slow, and those that do not? I like the additional info you put in.
    – Scott Rowe
    Commented Jun 20 at 12:33
  • 2
    @ScottRowe "There are 10 kinds of people: those who know binary numbers and does who don't."
    – mudskipper
    Commented Jun 20 at 13:49
  • 2
    @scottRowe I think we all can think fast and slow, but thinking slow requires a deliberate choice to do so, and we don't make the effort as often as we should. Commented Jun 20 at 13:51
2

Why is the Newcomb problem confusing?

It isn't confusing. It asserts a contradiction: that future information can reach the past. But it can't. The reason involves 4-dimensional geometry.

But if you pretend it can, then your make-believe world is inconsistent.

It's like asserting that 2+2=5. If you do that, you can prove that any number equals any other number. Arithmetic becomes inconsistent and uninteresting.

Harry Potter books are great. I read them all. But they don't pose philosophical problems.

17
  • 1
    Some academic philosophers have claimed this too. But in this case, why do you think the other side doesn't see the point you make? Why does it remain so controversial if it is really that simple?
    – mudskipper
    Commented Jun 21 at 0:01
  • 2
    The paradox does not require that the predictor is infallible. A predictor who is, say, 99% reliable and derives their predictions from a scientific study of the subject would still result in a conflict between the evidential and causal approaches to decision theory. Problems of this kind are not entirely theoretical: they can arise when information is available that is evidence for a behavioural disposition.
    – Bumble
    Commented Jun 21 at 2:56
  • 1
    @Bumble Miss_Understands's counterstatement doesn't rely on the predictor being infallible either.
    – Rosie F
    Commented Jun 21 at 7:20
  • 2
    @RosieF Yes it does. There is nothing contradictory about being able to make highly reliable predictions about what choice a person will make in some specific situation. In fact, it is not uncommon. What might contradict our best understanding of the universe is the ability to make absolutely infallible predictions, since this would in effect be a form of backward causation.
    – Bumble
    Commented Jun 21 at 8:48
  • 3
    @RosieF We make highly reliable predictions all the time. This does not give rise to a paradox. I predict that if someone today offered you a gift of $1000 with no strings attached you would take it. That prediction will work extremely reliably across a very wide range of people. All that Newcomb requires is that there is some combination of scientific tests that will allow a highly reliable prediction of whether a person will choose one or two boxes. There is nothing absurd about that assumption and it does not conflict with any laws of nature.
    – Bumble
    Commented Jun 21 at 12:53
2

One important driver of controversy in Newcomb's Problem is the concept of causation. Intuitively, causation has many features. It's perfectly objective, not a matter of perspective. It's an antisymmetric relation. An effect never precedes its cause. Causation is tied to fundamental physical laws. Causation underlies effective means-ends relationships. All these features apply in our routine encounters with causality, so it is no wonder that we roll them all into a package deal. Newcomb challenges us because the paradox threatens to tear the package apart.

Unfortunately, nothing in the real world meets all our intuitive desiderata for causation. I'll confine myself to two accounts of causation that bring out important issues. The Bayes nets discussed by Wolpert and Benford are appropriate for assessing means-ends relationships. But their directed acyclic graphs, like the interventionist approach of Pearl, depend on dividing system from environment, endogenous from exogenous variables.

The scientist carves a piece from the universe and proclaims that piece in ... The rest of the universe is then considered out ... This choice of ins and outs creates asymmetry in the way we look at things and it is this asymmetry that permits us to talk about ‘outside intervention’ and hence about causality and cause-effect directionality (Pearl, 2nd ed., p. 420).

On the other hand, the laws of physics are arguably perspective-independent. In Causation and its Basis in Fundamental Physics, Douglas N. Kutach proposes a definition of causation in terms of fundamental laws: see this excellent review for a summary. Since Einstein's equations and Schrödinger's equation specify the time-evolution of events equally well backward or forward in time, Kutach's definition suggests that present events routinely causally contribute to past events as well as future ones. But Kutach provides a partial explanation, appealing to entropy, of why we (generally) can't strategically influence the past. The past effects (when characterized in past-centric terms) of a present action are unpredictable. See Albert and Loewer for related arguments about entropy and the time-asymmetry of strategic influence.

Newcomb's Problem pits the "causation can't go back in time" principle against the "causation is tied to fundamental physical laws" principle. But I would argue (if I had more space) that it is the tie to physical laws, not the direction in time, that provides the means-ends leverage. The fact that an effect is later than our action is only required in order to know what we are doing (under the assumption that we care about the effect as it would be described contemporarily, rather than e.g. "what you get by evolving the Schrödinger equation backward from ...").

Arif Ahmed has a thought experiment called Betting On The Past which brings out the conflict even more strongly:

In my pocket (says Bob) I have a slip of paper on which is written a proposition P. You must choose between two bets. Bet 1 is a bet on P at 10:1 for a stake of one dollar. Bet 2 is a bet on P at 1:10 for a stake of ten dollars. So your pay-offs are as in [Figure 1]. Before you choose whether to take Bet 1 or Bet 2 I should tell you what P is. It is the proposition that the past state of the world was such as to cause you now to take Bet 2. (p. 120)

Betting On The Past bypasses the knowledge problem by specifying the desired past state in relational terms: via your present selection, and the laws of physics. Newcomb's Problem does a similar thing, only implicitly, and in Nozick's version, not necessarily in such a perfectly correlated manner, since all we know is that the Predictor has a long record of successes. Nevertheless, we can use objective relationships (the laws of physics) to increase the probability of a favorable prediction, by casting aside the overgeneralization that nothing we do contributes (in one very useful sense of "contributes": Kutach's) to the past. It should go without saying that nothing depends on using the word "causality" for Kutach's concept, and I would actually recommend against it. The point, rather, is to tease apart the package deal of "causality", and apply each element carefully, and if need be, separately.
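
Since the quotation's [Figure 1] is not reproduced above, here is my reconstruction of the pay-offs from the stated odds (a sketch in Python; treat the exact dollar figures as my reading of "10:1 for a stake of one dollar" and "1:10 for a stake of ten dollars"). It makes the clash between dominance reasoning and the deterministic specification of P explicit.

    # Reconstructed pay-offs for Betting On The Past, read off the stated odds.
    PAYOFF = {                           # (bet, P is true?) -> dollars won or lost
        ("Bet 1", True): 10,  ("Bet 1", False): -1,
        ("Bet 2", True): 1,   ("Bet 2", False): -10,
    }

    # Dominance: for either fixed truth value of P, Bet 1 does strictly better.
    for p_true in (True, False):
        print("P =", p_true, "| Bet 1:", PAYOFF[("Bet 1", p_true)],
              "| Bet 2:", PAYOFF[("Bet 2", p_true)])

    # But P is the proposition that the past was such as to cause you to take Bet 2,
    # so, assuming determinism, your choice settles P's truth value:
    for choice in ("Bet 1", "Bet 2"):
        print(choice, "->", PAYOFF[(choice, choice == "Bet 2")])
    # The dominant Bet 1 guarantees -$1; the dominated Bet 2 guarantees +$1.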

3
  • 1
    Thanks! This is very insightful! I believe this clarifies one major point of contention/confusion in the reasoning about (and in) the Newcomb game: the entanglement of the prediction - plus the knowledge about the past prediction - with causality and choice. I didn't know about the Betting On The Past game; pointing out the partial similarity with Newcomb is brilliant. - Ok, now I just have to read Arif Ahmed's book. Your reply has just made me positively determined to do so :)
    – mudskipper
    Commented Jun 22 at 12:47
  • This also directly implies that it doesn't (shouldn't) really matter if we assume the predictor is fallible or infallible. (In fact, if you agree with that -- which I do -- then you could guess that the assumption of fallibility subtly misguides people in their attempt to analyze the problem.)
    – mudskipper
    Commented Jun 22 at 13:31
  • Very nice combination of Newcomb and Doctor Who's Don't Blink. I really liked the smug AI of your extinct filasofs. I assume you would or could predict that I would enjoy it, if I would be curious enough to keep browsing and stumble across your story.
    – mudskipper
    Commented Jun 22 at 18:30
2

The one-boxer position is intuitively straightforward and has been explained in other answers in some depth (but in short: the one boxer asserts that the outcome in which you choose both boxes, but box B still contains $1M is either impossible or improbable enough that one-boxing is a superior overall strategy). So this answer will specifically focus on the two-boxer position, which I think is the more confusing one.

Let's imagine a simpler version of the problem, where the only information we are given is the following:

  1. Box A definitely contains $1000.
  2. Box B might contain $1 million, or it might not.
  3. The contents of box B have been determined (by some unknown-to-us process) and cannot change now.

We are then asked to choose between both boxes and only B. The obvious answer is to take both boxes, because the process by which B's contents are determined is irrelevant - either it contains $1 million or not, but no matter what is in B, we get an extra $1000 if we take both.

The operator of the game then reveals to us that box B's contents were in fact determined by the predictor as in the original problem, and asks if we wish to change to only taking box B, or stick with our original choice of both boxes. The one boxer position is that we ought to change. But a paragraph ago, we concluded that any information about the contents of B is irrelevant, so it should not affect our choice of boxes. Either the previous paragraph contains a logical error, or the one boxer position is in error. Since there is no obvious error in the previous paragraph, the two boxer position is that the one boxer position must be in error, and so we should stick with two boxes.

The difficulty is that there is also no obvious error in the one boxer position, which is why Newcomb's problem is confusing.

I should also note that several answers have claimed the problem is a "fantasy" or otherwise impossible, but as the original formulation explicitly notes, the predictor need not be infallible, just moderately better than chance. Humans are more predictable than a fair coin, and so such a predictor is obviously possible. An infallible predictor does not materially change the problem; it merely casts it in a starker light... unless you go to the point of claiming that an outcome where the predictor is wrong is somehow physically impossible, in the sense that it is physically prevented from happening by any means, which does indeed take you well beyond the realm of normal physics and into the world of fantasy. But the one boxer position merely requires a straightforward application of expected value to the fallible predictor, and so I refuse to further address a physically infallible predictor, as it is beyond the scope of the original problem as stated by Newcomb (and also, in my opinion, not very interesting, since it is unphysical, whereas the original problem is something you could actually do in real life).
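
To spell out that "straightforward application of expected value" in a short Python sketch: the 75% reliability used here is an illustrative assumption on my part, and any reliability appreciably better than chance gives the same qualitative answer.

    # Expected value of each choice given a fallible predictor.
    ACCURACY = 0.75   # assumed probability that the predictor foresaw the actual choice

    def expected_value(choice):
        # Box B holds $1,000,000 exactly when one-boxing was predicted.
        p_b_filled = ACCURACY if choice == "one-box" else 1 - ACCURACY
        return p_b_filled * 1_000_000 + (1_000 if choice == "two-box" else 0)

    print(expected_value("one-box"))   # 750000.0
    print(expected_value("two-box"))   # 251000.0
    # Note that this calculation leaves the dominance reasoning of the simplified
    # version above untouched; the clash between the two is exactly the puzzle.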

