It was in August–September 2025 that I first got curious enough to really think about Newcomb’s problem, and I nerded out thoroughly. I went through a lot of the relevant literature. I extensively revised the Wikipedia article “Newcomb’s problem” (which had been titled “Newcomb’s paradox” before I retitled it) and added several illustrations, of which the main one seems to have been shared widely on social media and become instantly canonical. I was already convinced that the problem can’t happen in real life, and I wrote a blog post insisting on this at length while citing several academics (in retrospect, not very clearly for all that).
In conversation with a British girl named Rowan, I thought a little harder about how to explain why I think Newcomb’s problem can’t happen as a decision problem (regardless of whether predictors are possible in general). The things I said in the conversation were also somewhat confused, but here is the best explanation I have as of now.
Let’s say choices are sets of boxes, where $A$ is the transparent box with the small fixed sum and $B$ is the opaque box, so the possible choices are $\{\}, \{A\}, \{B\}, \{A, B\}$. Your choice is called $C$, the optimal choice is $O$, and the predictor is a function $\text{predicted}(C)$.
The problem turns on the joint coherence of these four assumptions:
- We know that the predictor is accurate: $\text{predicted}(C) = C$
- We know that we will make whatever choice we think is best: $C = O$
- We know what choice $O$ we think is best.
- We don’t know what the predicted choice, $\text{predicted}(C)$, is.
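For concreteness, the choices and the payoff structure can be sketched in Python. The dollar amounts ($1{,}000 in box $A$, $1{,}000{,}000 conditionally in box $B$) are the standard ones from the literature, assumed here rather than stated above:

```python
A, B = "A", "B"

# The four possible choices: every subset of {A, B}.
CHOICES = [frozenset(s) for s in (set(), {A}, {B}, {A, B})]

def payoff(choice, prediction):
    """Box A always holds $1,000; box B holds $1,000,000 iff the
    predictor predicted one-boxing (i.e., that A would not be taken)."""
    total = 1_000 if A in choice else 0
    if B in choice and A not in prediction:
        total += 1_000_000
    return total

# Assumption 1 (accuracy) says predicted(C) = C, so the payoff an
# accurate predictor actually delivers for a choice c is payoff(c, c):
for c in CHOICES:
    print(sorted(c), payoff(c, c))
```

Under accuracy, one-boxing nets $1{,}000{,}000 and two-boxing only $1{,}000, which is why one-boxing looks optimal at the start of the reasoning that follows.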
Suppose our reasoning leads us to think that one-boxing is optimal, i.e., $A \notin O$, and we know this and also know that we will do the optimal thing, i.e., $O = C$. Then, by substituting equals for equals, we know that $A \notin C$. But if we know that $A \notin C$ and also know that $\text{predicted}(C) = C$, then we know that $A \notin \text{predicted}(C)$. That is, we know that the predictor has predicted one-boxing.
If we know this, then two-boxing becomes optimal after all, i.e., $A \in O$, contradicting our starting point. And if we had instead started out thinking that two-boxing is optimal, then by parallel reasoning we know that $A \in \text{predicted}(C)$, i.e., we know the prediction (so it’s no wonder we think two-boxing is optimal). It seems, then, that we can’t be in a situation where all four assumptions hold. To spell this out:
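This back-and-forth can be checked mechanically. A sketch in Python, restricted to the two canonical choices and using the standard payoff amounts (my assumption, as they are not stated above): for each candidate optimal choice, accuracy plus $C = O$ plus knowledge of $O$ lets us infer the prediction, and we can then ask whether the candidate is still a best response to that inferred prediction.

```python
A, B = "A", "B"
ONE_BOX = frozenset({B})      # take only the opaque box
TWO_BOX = frozenset({A, B})   # take both boxes

def payoff(choice, prediction):
    # Standard amounts, assumed: $1,000 in A; $1,000,000 in B
    # iff one-boxing was predicted.
    total = 1_000 if A in choice else 0
    if B in choice and A not in prediction:
        total += 1_000_000
    return total

def best_response(prediction):
    """The choice we'd think best once the prediction is known."""
    return max((ONE_BOX, TWO_BOX), key=lambda c: payoff(c, prediction))

for candidate in (ONE_BOX, TWO_BOX):
    prediction = candidate  # inferred from assumptions 1-3
    revised = best_response(prediction)
    print(sorted(candidate), "->", sorted(revised),
          "stable" if revised == candidate else "flips")
```

One-boxing flips to two-boxing, as in the argument above; two-boxing is stable, but only because the prediction is then known to us, which is exactly what the fourth assumption denies.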
- If we know that the predictor is accurate, then either we can’t know that we will do what we think is best, or at any rate we can’t know what it is that we think is best, or we can know what the predictor predicted and can act accordingly.
- If we know that we will do what we think is best, then either we can’t know what it is that we think is best (and will do), or we know that the predictor isn’t accurate, or we can know what it predicted and can act accordingly.
- If we know what choice it is that we think is best, then either we can’t know that we will actually carry it out, or we know that the predictor isn’t accurate, or we can know what it predicted and can act accordingly.
- If we can’t know what the predictor predicted, then either we know that the predictor isn’t accurate, or we can’t know that we will do what we think is best, or we can’t know what it is that we think is best.
I personally think that none of these four coherent situations is truly Newcomb’s problem. So it seems that a situation, to truly be Newcomb’s problem, must be incoherent: it has to realize conditions that cannot all be realized at the same time.
