This blog post will review “A Philosophical Guide to Chance” by Toby Handfield. Throughout, I talk about its implications for the relevance of probability theory in its usual applications to the broad category of chance, risk, uncertainty, etc. (which is, ultimately, what all “interpretations of probability” attempt to capture), without of course intending to convey that this has any spillover effects on the application of the abstract mathematical theory to nonstandard entities such as tables, chairs, and mugs of beer (and therefore intending to convey no implications either for the unapplied, “pure” mathematical probability theory in itself), but nevertheless referring to it simply as “probability theory” (or “probability”) rather than “applied probability theory” or some other such bracketing locution.
Why I am reviewing this here even though, from some points of view, this is a bad idea
There are two good reasons for me not to review this book here.
- It goes against my usual policy: I don’t usually review books on my blog, since I don’t usually have much to say about them besides giving a summary and saying “I recommend this” or “I don’t recommend this”. I have usually relegated such brief comments, sans summary, to my X/Twitter. Summarizing is dull, usually rather uncreative work, which I would only undertake for books that I am especially interested in disseminating, such as when I summarized Agnes Callard’s Aspiration and, back in the day, various books that I tried to summarize for Wikipedia (most interestingly The Open Society and Its Enemies and A Preface to Paradise Lost). Although my verdict on Handfield’s book is that it is good, I am not especially interested in disseminating it, especially since I believe much of its value is in its survey of other works, so that much of a summary of the book would consist in a second-hand summary of those other works, which feels like overdiluting things. There will be a summary of Handfield’s book here, but only a rather perfunctory one, to give a feel for what I’m talking about, not a detailed one like my summary of Aspiration, which was intended to help the reader follow Agnes’s argument. (I had found Agnes’s argument hard to follow because I thought she wrote paragraphs that were too long, which is ironic, given the length of the paragraph I’m writing now.)
- I haven’t read this book with as much attention as I’d like. There was some skimming. When I talk about this book, I don’t fully know what I’m talking about. However, given how little detail this review will have, I will certainly not say anything confidently about any part of the book that I did not pay attention to, so this is not a big problem. The fact that I don’t have detailed comments on each microsection of the book is part of why I have kept the book summary separate from my broad, sweeping, ‘reviewing’ remarks.
The reason I’m reviewing the book here, which overrides the two reasons above, is that the book is very relevant to some of my past projects and will be relevant to some of my future projects. Probability theory is important in science and philosophy; it is a going concern. Handfield’s book establishes an important thesis about probability theory, one with implications for all of my past and future work involving it. So I need at least some sort of note, uploaded somewhere, signposting that this book exists and roughly what its implications are. This post is that note. Also, reviewing the book on X/Twitter would not allow me to hyperlink other things from the review very nicely, which I want to do. So much, then, for the justification of the review.
Summary of Toby Handfield, “A Philosophical Guide to Chance”
Toby Handfield’s A Philosophical Guide to Chance is a guided tour of a single, stubborn problem: we treat “chance” as both (i) something objective in the world and (ii) something that tells us what it’s rational to believe and how to act—yet chance-guided belief can still “fail” in a particular case, and it’s surprisingly hard to say what makes a chance fact true.
The book begins by fixing the target. “Chance” isn’t (just) ignorance or a personal hunch; it plays a distinctive role in thought and science: it’s supposed to be a physical probability that normatively constrains credence (your confidence should match the chance you take there to be), without guaranteeing success in any one trial. That normative role is sharpened via Lewis’s Principal Principle (credence should track known chance, absent inadmissible information), but the early chapters also foreshadow the central sticking point: what counts as admissible, and—more deeply—what could ground chance so that it earns this authority over rational belief.
From there Handfield builds the most tempting “scientific” staging ground: the classical picture of a deterministic world of particles, positions, and velocities evolving under time-reversal-invariant laws. To talk sensibly about uncertainty in such a world, we represent “ways the world might be” as points and trajectories in phase space, and we represent ordinary propositions (like “there’s an elephant in the room”) as sets of microstates—regions in that space. This apparatus works beautifully for physics’ own categories, especially macrostates (temperature/pressure/etc.) as large phase-space “blobs,” and it sets up statistical mechanics’ key move: explain thermodynamic regularities by saying anti-thermodynamic behavior is not impossible but overwhelmingly improbable relative to a natural measure (“volume”) on phase space.
That statistical-mechanical picture is the launching pad for the first major metaphysical proposal: possibilist theories, which try to ground chance in “how the actual world sits among possibilities,” typically via relative modal volume. Handfield treats this as initially attractive—almost the default temptation once you’ve absorbed phase space and measures—but then systematically presses why it doesn’t deliver what chance is supposed to be. Volume-based chances struggle with conditioning on measure-zero events, with the vagueness and open-texture of ordinary propositions, and—most importantly—with the justificatory demand: why should that measure have any normative authority over rational credence? Attempts to vindicate the privileged measure by appeal to learning, frequencies, or “how well peers do” run into circularity: you end up using the very notion of “most” or “typical” that the measure was meant to explain. Even more sophisticated “washout” ideas (microconstancy/typicality) capture real robustness in practice, but still appear to smuggle in an ungrounded measure over possibilities.
The next major strategy is actualism: keep chance objective and real, but reduce it to purely actual, this-worldly facts—paradigmatically, frequencies or (more subtly) the Lewisian “Best System” account where probabilistic laws and chances are whatever best balance simplicity, strength, and fit to the actual history. Handfield grants the sophistication and influence of this approach, but argues it distorts how chance is meant to work. Crude frequentism fails because chance and finite frequency can come apart (and single-case chances collapse to 0/1), while Best-System actualism threatens counterfactual weirdness (chances depend too heavily on what actually happens), reverses the explanatory direction we ordinarily use (outcomes/frequencies explained by chances, not vice versa), and risks making chance’s normative force depend on an anthropocentric modeling compromise (tailored to limited creatures like us).
At that point the book opens the anti-realist landscape. If the realist reduction programs don’t ground chance, perhaps chance talk is (in one way or another) not tracking mind-independent chance properties. Handfield distinguishes: error theory (chance discourse aims at objective facts but none exist), subjectivism (chance claims depend on agents’ credences), and non-cognitivism (chance talk functions more like a guiding or expressive tool than a straightforward description). Subjectivism, he argues, collapses genuine disagreement and wrecks chance’s explanatory role in science; error theory and non-cognitivism remain live but owe us a story about why chance-talk is so successful and entrenched if it doesn’t describe objective chance facts.
Quantum mechanics then becomes the stress test: if anything forces objective chance on us, surely it’s QM. But Handfield’s survey of the main interpretive families—collapse, Bohm, Everett—aims to show that QM doesn’t straightforwardly rescue chance realism. Collapse interpretations can take chance as primitive (which doesn’t illuminate chance), Bohmian mechanics is deterministic and pushes probabilities toward typicality/ignorance-style stories, and Everett replaces “one outcome happens with probability p” with “all outcomes happen,” creating a new problem: reconstruct genuine uncertainty and justify the Born rule as uniquely rational. The many-worlds chapters push hard on the idea that self-locating uncertainty and decision-theoretic derivations can simulate the role of probability, but struggle to secure the robust, explanatory, uniquely action-guiding “objective chance” we started with—especially around death and selection/weighting worries.
Late in the book, Handfield turns to time. We experience chance as future-directed, yet many fundamental laws are time-symmetric. His proposed reconciliation is evidence-first: chance is tied to what credence is recommended by available evidence, and evidence is radically time-asymmetric because the past leaves records while the future does not. Statistical mechanics plus the Past Hypothesis (a low-entropy boundary condition) is brought in as the deeper physical explanation of why records and traces overwhelmingly point one way in time—why we get “footprints from the past” but not “portents from the future.” This supports an overall picture where “chance” varies with evidential situation and context, rather than being a fixed world-feature readable off the total microstate.
The final chapter makes the book’s argumentative posture explicit by analogy to moral debunking: chance concepts carry norms (coordinate credence with chance; use likelihood-style updating), and we can give a compelling “natural history” of why creatures like us adopt and rely on chance-thinking—because it’s practically indispensable for decision and for restraining our pattern-hungry tendency to invent causal stories. But that practical vindication doesn’t automatically yield existential vindication of irreducible, objective, physical chance properties. Handfield’s overall drift is therefore deflationary: keep the tool (probabilistic reasoning, Bayesian modeling, statistical mechanics, quantum predictions), but be suspicious of the heavyweight metaphysics—especially the idea that there must be mind-independent “chance facts” that both ground probabilities and uniquely dictate rational credence in the strong way many philosophers hoped.
In short: the book starts from chance’s everyday-and-scientific authority over rational belief, tries the two dominant realist grounding strategies (possibility-structure and actual-history reduction), finds both wanting (especially on normativity and explanation), tests the hope that quantum theory might force realism, and ends by recommending a broadly anti-realist / debunking-friendly stance: chance-talk is an extraordinarily useful practice for limited agents embedded in an entropic, record-filled world, but its success may not require—and may not support—robust metaphysical “chance-makers.”
Remarks on the book
Handfield is an invaluable guide to all previous philosophical proposals in probability semantics. To make a pun with his name, he really puts the whole field in your hands; he possesses a rare talent for pedagogical clarity, managing to render even the most formidable proposals accessible. A prime example is his treatment of John Bigelow’s proposal that “probabilities are ratios of volumes of possibilities.” If one attempts to read the original papers where Bigelow advances this view, one is immediately thrown into the deep end of a difficult formalism that can obscure the philosophical intuition. Handfield, by contrast, reconstructs the argument with the ease of an introductory textbook, without sacrificing the necessary rigor. I am grateful for this; it is representative of the service Handfield performs in making all philosophical proposals in the semantics of chance legible.
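To give a feel for the modal-volume idea that Bigelow advances (this is my own toy sketch, not Bigelow’s formalism, and the unit-square possibility space and the particular propositions are illustrative assumptions): represent possibilities as points in a space, propositions as regions, and conditional probability as a ratio of volumes, here estimated by Monte Carlo sampling.

```python
import random

def volume_ratio_probability(in_a, in_b, n=100_000, seed=0):
    """Estimate P(A | B) as volume(A and B) / volume(B), where the
    possibility space is the unit square and propositions A, B are
    regions given as membership predicates on points (x, y)."""
    rng = random.Random(seed)
    hits_b = hits_ab = 0
    for _ in range(n):
        point = (rng.random(), rng.random())
        if in_b(point):
            hits_b += 1
            if in_a(point):
                hits_ab += 1
    return hits_ab / hits_b

# Toy propositions: A = "x > 0.5", B = "x + y > 0.5".
# area(A and B) = 0.5 and area(B) = 0.875, so the exact ratio is 4/7.
p = volume_ratio_probability(lambda pt: pt[0] > 0.5,
                             lambda pt: pt[0] + pt[1] > 0.5)
```

The philosophical difficulty Handfield presses is visible even here: the estimate presupposes the uniform (Lebesgue) measure on the square, and nothing in the formalism says why that measure, rather than some other, should govern rational credence.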
However, Handfield applies the same expository clarity to illuminating the flaws in each of the proposals, and the book leads the reader to the conclusion that there is, in fact, no good probability semantics. I find myself in full agreement with his verdict on the existing literature: every major attempt to ground probability—whether in modal volumes, frequencies, or best-system analyses—is fatally flawed. Interestingly, Handfield identifies deep problems even within anti-realist theories, despite ultimately siding with an anti-realist stance himself. The result is that the book offers no semantic theory at all, realist or not, on which to rest. You put it down thoroughly briefed, but metaphysically homeless.
When I reflect on the dim prospects for probability semantics, I feel some relief that at least I’m not a LessWrong user. LessWrong is a website whose philosophy is founded on, among other things, overusing probability theory, applying it to all of life and reasoning. (Mostly its users are “objective Bayesians” in epistemology, although of course the broader attitude of probability overuse is not at all part of objective Bayesianism.) As Handfield’s book demonstrates by systematically dismantling the semantic grounds of chance, this is a precarious position. If probability lacks a coherent semantic footing, then building an entire identity or epistemic system upon it is the height of irrationality. To overuse a tool that we cannot fundamentally define is to actively work toward making language meaningless, communication impossible, and inquiry fruitless. LessWrong users are happy with this, of course, since without exception they hate all knowledge and always seek to destroy any possibility of anyone understanding any truth. (Since they are often autistic, I should note that this paragraph is largely hyperbole.)
Yet, even for those of us who have not staked our identity on Bayesianism, the drift toward anti-realism is uncomfortable. I do not particularly enjoy being forced into a position of philosophical scepticism regarding such a widely accepted field of research. Probability theory is everywhere: it appears in scientific explanation, in everyday deliberation, in statistics, and in decision theory. And it even appears in many proposals in philosophical semantics, such as the following three:
- Adams’s Thesis, which analyzes natural-language conditional statements (“If A, then B”) by equating their assertability with the conditional probability P(B|A). This is extremely famous and you can read about it at the SEP, for instance, so there is really no reason for me to say more about it, even though it is more important than the other two.
- Hannes Leitgeb’s rehabilitation of verificationism, which proposes that a sentence A is meaningful if, and only if, there exists some evidence B such that P(B|A)≠P(B). (I am very sympathetic to verificationism, but I have mostly been following a form of Gordian Haas’s version. Haas’s Minimal Verificationism is a great book. Notice how I don’t have much to say about most books, beyond recommending them. Besides its original proposal, which fully rehabilitates the verificationist criterion of meaning with only some small holes to patch regarding counterfactuals, Haas’s book serves as a great survey of historical verificationism, and I have used its information to make some improvements on the Wikipedia article about verificationism.)
- My own previous work on concessive statements. In a post that I decided was not good enough for this blog, and therefore posted to my X/Twitter instead, I modeled concessives like “even if p, [still] q” using a probabilistic threshold. If we let τ be a robustness threshold (e.g., 90%) and δ be a measure of how much p is implied to hinder q, the concessive assertion can be formalized as: P(q|¬p) −δ ≥ P(q|p) ≥ τ. (In plain English: q remains likely above threshold τ even given p, while p is implied to lower the probability of q by at least the “hindrance factor” δ compared to when p is false.) This work was inspired by Crupi and Iacona’s earlier, much better work, which modeled concessives using a conditional logic, as well as by the axiomatic (“Hilbert-style”) proof system that they built for this logic together with Raidl. Still, it further shows that, if probability theory could be understood, it would help us understand some parts of language.
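The concessive condition in the last bullet is mechanical to check once a joint distribution over p and q is given. A minimal sketch (the threshold values and the example distribution are my own illustrative choices):

```python
def is_assertable_concessive(joint, tau=0.9, delta=0.05):
    """Check 'even if p, [still] q' against a joint distribution given as
    {(p_value, q_value): probability}. The two conditions are:
      (1) P(q | p) >= tau                — q stays likely even given p
      (2) P(q | not-p) - delta >= P(q | p) — p hinders q by at least delta
    """
    def prob_q_given(p_value):
        total = sum(pr for (pv, qv), pr in joint.items() if pv == p_value)
        q_mass = sum(pr for (pv, qv), pr in joint.items()
                     if pv == p_value and qv)
        return q_mass / total

    q_given_p = prob_q_given(True)
    q_given_not_p = prob_q_given(False)
    return q_given_p >= tau and q_given_not_p - delta >= q_given_p

# "Even if it rains (p), the picnic will be fun (q)":
joint = {(True, True): 0.28, (True, False): 0.02,    # P(q|p)  ~ 0.933
         (False, True): 0.69, (False, False): 0.01}  # P(q|~p) ~ 0.986
ok = is_assertable_concessive(joint)  # True for this distribution
```

Note that the check inherits the semantic problem under discussion: it tells us when the formula is satisfied, but not what the probabilities in `joint` are probabilities *of*.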
It is disappointing that these proposals, which had felt like they were clarifying vague linguistic phenomena, might simply be translating one thing we don’t understand (conditionals, meaning, concessives) into another thing we don’t understand (probability).
Granted, perhaps this translation is not entirely in vain. Even if probability lacks a fundamental “truth-maker” in the physical world, treating these problems probabilistically is helpful because it imposes structural coherence; we may not know what a “chance fact” is, but we know exactly how probabilities must relate to one another mathematically to avoid Dutch books or incoherence. By translating a linguistic problem into probability calculus, we trade vague, shifting linguistic intuitions for a rigid, checkable structure, albeit one which “hadeth no semantics”. We may not have found the ground, but we have at least found a rigorous syntax for our uncertainty. Handfield leaves us with the tool, but takes away the comforting illusion that the tool describes the furniture of the universe.
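The Dutch-book point above can be made concrete. The standard illustration (my own numerical sketch, not from Handfield’s book): an agent whose credences in A and ¬A sum to more than 1 will overpay for a pair of bets of which exactly one must win, so a bookie pockets the difference no matter what happens.

```python
def dutch_book_profit(credence_a, credence_not_a, stake=1.0):
    """If the agent prices a $1-payout bet on X at credence(X) dollars,
    the bookie sells the agent bets on both A and not-A. Exactly one bet
    pays out, so the bookie's sure profit is the total price paid minus
    the single $1 payout (i.e., (credence_a + credence_not_a - 1) * stake)."""
    paid = stake * (credence_a + credence_not_a)  # agent buys both bets
    payout = stake * 1.0                          # exactly one bet wins
    return paid - payout

# Incoherent credences: P(A) = 0.7 and P(not-A) = 0.5 sum to 1.2,
# so the bookie is guaranteed about 0.20 per dollar staked.
profit = dutch_book_profit(0.7, 0.5)
```

This is exactly the sense in which the calculus supplies a “rigid, checkable structure”: incoherence is detectable, and punishable, without anyone ever saying what a chance fact is.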