Monday, February 2, 2026

Simplified modal ontological argument

This blog post defends S5 modal logic by arguing that atheists should not reject its axioms merely to avoid the theistic conclusion of Plantinga-style modal ontological arguments. It does so by emphasizing that a modal ontological argument is, after all, an ontological argument: it moves from a certain substantive definitional claim about God to an affirmative existential claim about God. The real work in the argument is done by the substantive definitional claim, and the logic, regardless of how strong a modal logic it is, is just scaffolding. Or so I claim, and I defend the claim by recasting the modal ontological argument in a weaker modal logic with stronger premises, showing that the argument remains valid and can be equally well motivated by informal conceptions of theism, and only afterwards comparing the argument with something that you would need S5 to show. S5 is then seen to be a perfectly innocent, and delightfully accurate, model of how alethic modalities work in natural language. Non-theists who would like to reject modal ontological arguments are therefore best served either by sticking to an agnostic or sceptical claim about theism, or by adopting some theory that denies the very possibility of the truth of theism, rather than by making such strange, hard-to-interpret claims as that “God’s existence is contingent since it isn’t actual, but it could have been actual, in which case it would be necessary”.

I first came up with the central argument of this blog post on 2025-08-03, but the blog post came largely from a discussion on 2026-02-02 in which I used the argument to defend S5, much the same as in this blog post (in fact, a few words in this blog post were reused from the discussion).

1. Introduction

Alvin Plantinga famously considered his own modal ontological argument unconvincing. Maybe his pessimism about natural theology was warranted in some way, but it wasn’t helped by his needlessly overcomplicated presentation of the argument.

Plantinga’s argument, like all ontological arguments, moves from a substantive, definitional claim about God to an affirmative existential claim. But Plantinga took it for granted that he would be able to use the strongest modal logic, S5, and hence, like a logician seeking to use only the minimal necessary axioms, pared down his substantive understanding of God to a minimal claim that allowed S5 to do as much of the work of the proof as possible. This is perfectly fine if the argument is to remain academic, but since it is a proof of God, it naturally made its way to apologists, who merely repeated Plantinga’s argument without deeper reflection on what was being said in it. This was the first time many atheists had heard of modal logic, and many of them reflexively denied the S5 axioms instead of the premises, which is precisely what the argument’s phrasing pressured them to do. But while modal logicians can laugh at atheists being made to look foolish by restricting themselves to a uselessly weak version of alethic modal logic, laypersons don’t really see what’s so funny about this. And I think it’s a dismal state of affairs, because I don’t want to be unable to use the nice, perfect, beautiful S5 modal logic when modelling alethic modalities in my conversations with atheists, just because they’re afraid of Plantinga. Enough is enough.

In this blog post, I restate the modal ontological argument using a weaker modal logic combined with a stronger substantive definitional claim about God, so as to make the background understanding of God explicit and to show that nothing unusual is going on in it. With this, I aim to show that the substantive definitional claim about God, not anything about S5, is the real core of the ontological argument. I review the options for non-theists thoroughly, to make it clear that they don’t need to deny S5 and have no reason to do so. The main goal of this blog post, in summary, is to display the advantages and innocence of S5: by itself, S5 says nothing more than what natural language intuitively says about alethic modalities, so the core of any modal ontological argument must lie elsewhere.

2. The argument restated

The argument uses classical propositional modal logic with axiom T, and uses g as a constant that means “God exists”. (Axiom T is the same as what is called Axiom M by the SEP, which seems to be an outlier on this.) The exposition uses the definition ◇p ≝ ¬□¬p freely, and also freely speaks of “theists” as believers in g and “atheists” as believers in ¬g, with “agnostics” being those who suspend judgment, and “sceptics” being those who deny that it is possible to know whether g is the case (more on this in section 3); “non-theists” are whichever persons aren’t theists, of course.

The argument is stated as follows:

  1. □g ∨ □¬g (premise)
  2. ¬(□¬g) (premise)
  3. □g (1,2 DS)
  4. g (3 via axiom T)

This all went by very fast, so I am going to explain it line-by-line.

2.1. Premise 1: God’s existence is not contingent

Premise 1 states that God’s existence is either necessary or impossible. This is to deny that God is a contingent being, in the sense of one that might either have been or not have been. The negation of Premise 1 is ◇g ∧ ◇¬g, i.e., possibly God exists AND possibly God doesn’t exist. Theists who accept Premise 1 deny that it’s possible that God doesn’t exist, and atheists who accept Premise 1 deny that it’s possible that God exists. Hence, neither theism nor atheism is forced by Premise 1 by itself.

The motivation of Premise 1 for theists is that they accept one of the disjuncts. The motivation of Premise 1 for atheists is that, if they affirm ◇g ∧ ◇¬g, then it’s unclear that their statement ¬g is really denying the same thing that theists affirm. After all, theists conceive of God as a necessary being, not as a contingent being. If the atheist denies Premise 1, the theist is free to say that the atheist hasn’t really denied what he affirms.

The very use of “contingent” as standard modal terminology hints at this way of conceiving of necessary truths, since it comes from Leibnizian metaphysics. In Leibnizian metaphysics, whenever you said something was “contingent”, you also said it was “contingent on” something else: the “things that might have been and might not have been” were coextensive with the “things that depend on something else for their existence”. Leibniz took this simply as a linguistic datum, and it went unquestioned for a long time (I’m still not sure who exactly ever questioned it), because it works with natural language so perfectly.

But affirming Premise 1 does not require a commitment to this thesis of Leibnizian metaphysics. Using “fundamental being” for beings that aren’t ontologically dependent, we might say that, regardless of whether Leibniz is right in saying that all-and-only fundamental beings are necessary beings, it remains that theists conceive of God as a necessary being, quite apart from whether theists conceive of God as a fundamental being (which I think they also do, but which the argument does not claim). If the atheist conceives of God as a contingent being, which might either be or not be, then the atheist does not mean the same thing by “God” that the theist means by it.

I say that the theist conceives of God as a necessary being. There is, of course, no universal agreement among theists about anything, so which theist do I mean? Well, Alvin Plantinga, for one, certainly conceives of God as a necessary being, and hence accepts Premise 1. Leibniz, of course, also does. More generally, I can’t think of any theist in the history of philosophy or of religion who ever explicitly denied Premise 1, at least about the greatest god, although not all of them explicitly affirmed it. The atheist who denies Premise 1 is, historically speaking, being very weird, and hence I think it is very plausible to say that he is not denying anything that theists affirm. So if the atheist wants to deny what theists affirm, he must accept Premise 1, and hence (if he wants to remain an atheist in the present sense, rather than an agnostic or sceptic) claim that God’s existence is impossible, not just “possible but not actual”. More on options for atheists in section 3.

2.2. Premise 2 (God’s existence is possible) and the derivation

Premise 2 states that God’s existence is not impossible. Together with Premise 1, this easily implies theism, of course. Plantinga also uses Premise 2, so this argument isn’t an improvement on Plantinga in this respect. The atheist is free to deny Premise 2 if he wants; this is discussed in section 3.

To be fully explicit, in case someone can’t read the notation: since Premise 1 says that God’s existence is either necessary or impossible, and Premise 2 denies that it’s impossible, Disjunctive Syllogism yields that God’s existence is necessary, which is proposition 3. Axiom T says that necessary propositions are actually true; hence from proposition 3, via Axiom T, you get proposition 4. I don’t expect any of this derivation to be controversial; it’s the premises that are.
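
For readers who would like to see the validity mechanically, here is a small Python sketch of my own (an illustration, not part of the argument): it brute-forces every three-world Kripke model whose accessibility relation is reflexive (reflexivity being the frame condition corresponding to Axiom T) and confirms that the premises entail g at every world where they hold.

  from itertools import product

  WORLDS = range(3)

  def box(R, V, w):
      # "necessarily p" at w: p holds at every world accessible from w
      return all(V[v] for v in WORLDS if R[w][v])

  # enumerate every reflexive three-world model and check the argument
  for bits in product([False, True], repeat=9):
      R = [list(bits[3 * i : 3 * i + 3]) for i in range(3)]
      if not all(R[w][w] for w in WORLDS):
          continue  # keep only reflexive relations (the Axiom T frames)
      for vals in product([False, True], repeat=3):
          V = list(vals)                   # worlds where g is true
          not_g = [not x for x in V]       # worlds where ¬g is true
          for w in WORLDS:
              premise1 = box(R, V, w) or box(R, not_g, w)  # □g ∨ □¬g
              premise2 = not box(R, not_g, w)              # ¬(□¬g)
              if premise1 and premise2:
                  assert V[w]              # the conclusion g holds at w

  print("No countermodel: the premises entail g on reflexive frames.")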

3. Options for non-theists, and why rejecting S5 isn’t one

The argument did not use the full strength of the modal logic system S5; it relied more or less entirely on the fact that Premise 1 is required to capture the theist’s informal understanding of God.

Granted, it also used Axiom T. But Axiom T, unlike the more bespoke S5 axioms that involve iterated modalities, is obviously true of natural language. When would you ever say that something is “necessarily true but not actually true”? This seems simply ungrammatical. Hence, Axiom T is an analytic truth.

3.1. How S5 enters into the argument, and why rejecting it is a bad idea

How does S5 even come into it, then? The answer may be surprising: S5 only comes into it if the atheist wants to accept Premise 2 (i.e., deny that God’s existence is impossible; together with atheism and Axiom T, this already implies the negation of Premise 1, but never mind this for now) and furthermore to affirm that, “if God had existed, then God would exist necessarily”, in the subjunctive mood, interpreted under a standard modal reading of subjunctives. S5 is then required to show that such an atheist is inconsistent, in that the subjunctive statement combined with Premise 2 would force theism. I will explain.

If the atheist merely wants to say, in the indicative mood, that “if God exists, then God exists necessarily”, then this can be interpreted as a mere material conditional, g → □g. Even for the atheist, this can coexist in a classical S5 model with ◇g (God’s existence is not impossible): since the atheist accepts ¬g, the conditional g → □g is vacuously true, its antecedent being false. (Equivalently, in classical logic, “from a contradiction, anything follows”: supposing g on top of ¬g yields a contradiction, from which □g follows, giving g → □g.)
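
To make this concrete, here is a toy two-world S5 model of my own construction in which the atheist’s three indicative commitments hold together at the actual world:

  # worlds 0 (actual) and 1, each accessible from both: an S5 frame
  worlds = [0, 1]
  access = {(u, v) for u in worlds for v in worlds}
  g = {0: False, 1: True}  # g ("God exists") is true at world 1 only

  def box_g(w):
      return all(g[v] for v in worlds if (w, v) in access)

  def dia_g(w):
      return any(g[v] for v in worlds if (w, v) in access)

  assert not g[0]                # ¬g: atheism holds at the actual world
  assert dia_g(0)                # ◇g: Premise 2 holds
  assert (not g[0]) or box_g(0)  # g → □g: vacuously true at world 0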

But if the atheist wants to say, in the subjunctive mood, “if God had existed, then God would exist necessarily”, then under a standard modal reading of subjunctive conditionals, S5 can show that this atheist cannot, as an atheist, grant that it is possible that God exists. This standard interpretation, seen for instance here and here, takes a subjunctive statement “if A had been the case, then B would have been the case” to materially imply, at least, that ◇A → ◇B. So for the atheist who says, “if God had existed, then God would exist necessarily”, this amounts to ◇g → ◇□g. Combined with his commitment to ◇g from his acceptance of Premise 2, the atheist becomes committed to ◇□g. This is where the characteristic axiom of S5 (called either Axiom 5 or Axiom E, depending on the source) comes into play: it says that ◇□p → □p. Instantiating the scheme with g, you have ◇□g → □g. By modus ponens, you have □g, and then Axiom T yields the theistic conclusion, as before. So such an atheist would, under S5, be inconsistent: committed to both theism and atheism.
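
Spelled out in the same style as the argument of section 2, this subjunctive atheist’s predicament runs:

  1. ◇g → ◇□g (modal reading of the subjunctive claim)
  2. ◇g (Premise 2, via the definition of ◇)
  3. ◇□g (1,2 MP)
  4. ◇□g → □g (Axiom 5, instantiated with g)
  5. □g (3,4 MP)
  6. g (5 via axiom T)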

However, such an atheist is saying something defective in a more ordinary way, not just given S5. If he claims that God’s existence is possible (given his acceptance of Premise 2) but not actual (given his acceptance of atheism), then he claims that God’s existence is contingent. (That is, atheism and Premise 2, together with Axiom T, commit him to the negation of Premise 1.) So it is simply very strange, in natural language, for him to claim, about God’s existence, that “if this proposition were to be true, it would be a necessary truth”. If you ever said the just-quoted sentence about any other proposition in your life, you would surely be saying it about something that is either a tautology or a contradiction, such as a mathematical conjecture, not about something contingent. It is very weird, in natural language, to say that a contingent proposition would be necessary if it were true. The modal translation, and the S5 axiom, merely capture this reality about how language is used; they introduce nothing out of the ordinary. The dispute is about the premises, not about S5.

S5 is the best formal model of natural alethic modal language. It corresponds to frames where the accessibility relation is reflexive, symmetric, and transitive, or equivalently, reflexive and Euclidean. This is simply how alethic modalities work in natural language, and there is really nothing else to use if you want a formal model of them. There is no reason to weaken it. I encourage you to look further into modal logic with an open mind, without thinking so much about theistic arguments, and you will see what I am talking about. (Do not confuse alethic modalities with other modalities, of course, such as epistemic, deontic, or temporal modalities, which I do not address here.)
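
For the curious, here is a companion sketch to the one in section 2.2 (again my own illustration): it verifies by brute force that Axiom 5 is valid on every three-world frame whose accessibility relation is an equivalence relation, and exhibits a merely reflexive frame where the axiom fails.

  from itertools import product

  WORLDS = range(3)

  def box(R, V, w):
      return all(V[v] for v in WORLDS if R[w][v])

  def dia(R, V, w):
      return any(V[v] for v in WORLDS if R[w][v])

  def axiom5(R, V, w):
      # ◇□p → □p at w, where V says at which worlds p is true
      box_p = [box(R, V, v) for v in WORLDS]
      return (not dia(R, box_p, w)) or box(R, V, w)

  for bits in product([False, True], repeat=9):
      R = [list(bits[3 * i : 3 * i + 3]) for i in range(3)]
      reflexive = all(R[w][w] for w in WORLDS)
      symmetric = all(R[u][v] == R[v][u] for u in WORLDS for v in WORLDS)
      transitive = all(R[u][w] or not (R[u][v] and R[v][w])
                       for u in WORLDS for v in WORLDS for w in WORLDS)
      if reflexive and symmetric and transitive:
          for vals in product([False, True], repeat=3):
              assert all(axiom5(R, list(vals), w) for w in WORLDS)

  # On a merely reflexive frame, Axiom 5 can fail: world 0 sees world 1
  # but not vice versa, and p is true at world 1 only; then ◇□p holds
  # at world 0 while □p does not.
  R = [[True, True, False], [False, True, False], [False, False, True]]
  V = [False, True, False]
  assert not axiom5(R, V, 0)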

3.2. The real live options for non-theists

If you don’t want to accept theism, your options are the following:

3.2.1. Agnosticism or scepticism

This is to suspend judgment on whether God exists (agnosticism), or alternatively, to claim that it is impossible to know whether God exists (scepticism about theism). These positions can be challenged, but I don’t care to do so here.

3.2.2. Believe that God’s existence is impossible, and hence reject Premise 2

Since the ontological argument merely proves that there is a necessary being, the most direct way to do this is to deny that there are any necessary beings at all, which is certainly something some people have believed, such as David Hume; if there are no necessary beings, then any purported necessary being is impossible.

If the atheist wants to make a more modest claim, he can use the typical arguments that some particular version of God’s purported attributes leads to inconsistency, such as arguments from evil, or from the omnipotence paradox. Those arguments are, of course, tied to a specific notion of what God’s attributes are, which may be disputed by the theist. Alternatively, the atheist can restrict necessary beings in a more local way, such as by claiming that the only necessary beings are (for instance) mathematical objects, and God isn’t (for instance) a mathematical object. You need to motivate the restriction on necessary beings somehow, and the more kinds of necessary beings you allow, of course, the more room there is for theists to argue that God fits into one of your allowed kinds.

3.2.3. Dispute the theist’s definition

I am including this option for logical completeness: technically, you may reject Premise 1 and try, somehow, to answer the theist’s challenge that this does not capture his conception of God. You may claim that the theist does not really believe that it is impossible for God to fail to exist, despite his protestations to the contrary; you may try to interpret the religious and theological tradition as somehow best explained by a contingent view of God. I do not think this makes any sense, but I can’t let my blog post be anything but exhaustive.

Illustration for this blog post, drawn by Nano Banana.

Friday, January 30, 2026

Mathematics is a model of language

This blog post outlines a philosophy of mathematics centered on representational structures, rejecting both the view that math is merely a tool for physics and the view that it is solely a game of analytic derivations in abstract logic. I draw on Robert Stalnaker’s pragmatics and his solution to the “problem of equivalence”, the puzzle that, since all mathematical truths are necessary, they should logically all have the same content. The solution is that mathematics is not just about the truths themselves, but about how we represent them: it is the study of the specific connections between formulas, proof patterns, and axiomatic systems. Therefore, to learn mathematics is to master these symbolic manipulations, and to explore the capabilities of different representational systems.

Under this view, mathematics acts as a formal model of natural language. Mathematical concepts (like numbers or triangles) originate from ordinary language practices (counting or measuring) and are then “reified” into rigorous systems to allow for consistent operations. This perspective explains why mathematics is universally applicable (because these structural patterns are portable across domains) and why “auxiliary” concepts like hyperreals are useful (they serve as temporary representational bridges). It also explains why reducing mathematical objects to set theory often feels wrong (producing “junk theorems”): such reductions might work formally, but they sever the connection to the original representational roles that the concepts play in our natural language.

We cannot deny the axioms of mathematics; for they exhibit nothing more than a consistent use of words, and affirm of some idea that it is itself and not something else. —William Godwin, Political Justice, book 1, chapter 4

Philosophies of mathematics

This blog post is about philosophy of mathematics. As I’ve said before, there are roughly three types of philosophy of mathematics:

  1. an excuse to do more mathematics: here we have intuitionism, which created intuitionist mathematics, or the foundational movements that try to recast all of mathematics in terms of type theory or category theory.
  2. “math is only real if it’s applied”: here we have so-called “mathematical naturalism”, which gives us the famous arguments that mathematical objects must be real because, and only insofar as, they are indispensable to physics.
  3. you can accept all of current math if you just accept this very unclear idea: the idea here is that you can give an illuminating philosophical interpretation to all of current mathematics—not just the parts that get applied practically—without even needing to reinterpret it in any restricted logic (as in intuitionism) or foundational theory (such as category theory or type theory). This is an ambitious goal, and the catch is that every proposal claiming to achieve it seems to do its work by providing a very vague and unsatisfying interpretive scheme, such as saying that mathematics is a meaningless game of symbols, or that it consists of tautologies, etc., where it isn’t clear what makes it true that all-and-only mathematics fits the interpretive scheme, or how the scheme fits the various appearances of mathematical practice.

I don’t think any idea about philosophy of mathematics fully escapes being one of these types, and this blog post belongs to type 3, but I hope I have mitigated the vagueness enough that it says something interesting.

The fact that I’m vaguely gesturing at something that isn’t very precise makes it all the more important to give the reader a feel for the kind of thing said in the literature that forms the broad background I’m coming from, without making the reader go read all of those books in full; this, I hope, explains why I’ve thrown all these quotations in there, including the famous Kant quotation that everyone has seen before (but who knows if someone hasn’t?).

What mathematics is about

Now if mathematical truths are all necessary, then on the possible worlds analysis there is no room for doubt about the truth of the propositions themselves. There are, it seems, only two mathematical propositions, the necessarily true one and the necessarily false one, and we all know that the first is true and the second false. But the functions that determine which of the two propositions is expressed by a given mathematical statement are just the kind that are sufficiently complex to give rise to reasonable doubt about which proposition is expressed by a statement. Hence it seems reasonable to take the objects of belief and doubt in mathematics to be propositions about the relation between statements and what they say. —Robert Stalnaker, Inquiry, p. 73

Stalnaker’s pragmatics, as explained in his books Inquiry (1984; especially chapter 4) and Context and Content (1999; especially chapter 12), begins with what he calls a “coarse-grained” semantic picture. Propositions, at the most basic level, are sets of possible worlds (or functions from worlds to truth-values). With that in hand, you can model assertion as eliminating worlds from the conversational context, presupposition as fixing a common-ground set, and inquiry as progressively narrowing what remains live. As spare as it is, that model earns its keep: Stalnaker can explain a lot of linguistic practice while keeping the semantic core simple and truth-conditional.

But the same simplicity creates what Stalnaker calls the problem of equivalence. If propositions are just sets of worlds, then necessarily equivalent statements—especially in mathematics—collapse to the same content. If every mathematical truth is necessary, it can look as if there are, informationally, only two mathematical propositions: the necessary truth and the necessary falsehood. And yet we plainly distinguish “17 is prime” from “the angles of a Euclidean triangle sum to 180°,” can believe one without believing the other, and can learn one without learning the other.

Stalnaker’s motivating response is to treat this not as a reason to abandon the possible-worlds model of informational content, but as a reason to notice something special about mathematical inquiry: much of what we learn in mathematics is about representational structures and their relations to that coarse-grained content. We do not just “latch onto” a necessary truth; we learn (and can be ignorant of) whether a given formula, proof pattern, or axiom-system representation determines that truth. Mathematics, on this view, is not merely a repository of necessary contents; it is, centrally, a disciplined study of structures of representation—structures that can be instantiated in different languages and notations, and manipulated by calculation and proof.

How mathematics is learned

Suppose there were a community of English speakers that grew up doing its arithmetic in a base eight notation. The words “eight” and “nine” don’t exist in its dialect; the words “ten” and “eleven,” like the numerals “10” and “11” denote the numbers eight and nine. Now suppose that some child in this community has a belief that he would express by saying “twenty-six times one hundred equals twenty-six hundred.” Would it be correct to say that this child believes that twenty-two times sixty-four equals fourteen hundred eight? This does not seem to capture accurately his cognitive state. His belief, like our simple arithmetical beliefs, is not really a belief about the numbers themselves, independently of how they are represented. —Robert Stalnaker, Context and Content, p. 237

The representational-structure picture fits the phenomenology of learning mathematics unusually well. Mathematical competence is inseparable from activities like: transforming inscriptions, applying rules of inference, calculating, constructing proofs, translating between equivalent forms.

Those are not optional “presentational” extras. They are how mathematical information is acquired and used. If learning that 689×43=29627 were merely ruling out worlds where a necessary truth fails, it would be mysterious how computation helps—since computation doesn’t teach us which worlds are actual. But if what you are learning is (roughly) that this representational procedure connects these numerals and operations to this result, then calculation is exactly the right kind of epistemic act: it is an exploration of the representational system’s structure.
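
Stalnaker’s base-eight community, quoted above, makes this point executable. A quick check of my own (the numerals are exactly those in the quotation):

  # in the octal dialect, the numerals "26", "100", and "2600" name the
  # numbers that our decimal numerals call 22, 64, and 1408
  assert int("26", 8) == 22
  assert int("100", 8) == 64
  assert int("26", 8) * int("100", 8) == int("2600", 8) == 1408

  # the child's belief is correct as a fact about the numbers, but what
  # he has mastered is a calculating procedure over octal numerals; our
  # long multiplication is the corresponding procedure over decimal ones
  assert 689 * 43 == 29627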

So the “object” of mathematical belief is often best seen as involving claims like: a representation with this structure has this content; a derivation of this form preserves truth relative to these axioms; this symbolic construction yields a representation equivalent to that one. Those are propositions we can genuinely be ignorant of and come to know by manipulating the system.

How mathematics is applied

An initial hint that numbers are magnitudes comes from their algebra. The natural numbers, the positive real numbers and the ordinal numbers each have an associative operation of addition which defines a linear order. I call a system with that precise algebraic structure a positive semigroup. We find positive semigroups cropping up not only in mathematics but in the physical sciences as well. The fundamental physical magnitudes have this same algebraic structure: for example, mass is a positive semigroup because addition of masses is associative, and the addition defines a linear order. The same is true for length, area, volume, angle size, time and electric charge: these and other physical magnitudes all have the structure of a positive semigroup. —Keith Hossack, Knowledge and the Philosophy of Number, p. 1

Our picture also helps with a classic puzzle: mathematics is generally applicable—it shows up everywhere—yet it also seems to be about very particular objects (the number 2, the empty set, π, triangles, and so on).

If mathematics is fundamentally about representational structures, its generality is no surprise. Representational structures are general: they are patterns of form and transformation that can be implemented in many domains. The same algebraic structure can organize bookkeeping, physics, logic, probability, and geometry because those practices can be represented in ways that share the relevant structure.

At the same time, it feels like mathematics is about particular objects because our linguistic practice encourages reification. We talk as if “2” names an object; we quantify over numbers; we introduce constants (“0”, “∅”); we build theories that treat these as a stable domain. That practice is extremely useful: it streamlines inference and stabilizes coordination. But on the representational view, the “particularity” of mathematical objects is tightly connected to how our public language fixes and reuses representational roles.

How mathematics is motivated

The trouble with this objection is that it completely ignores history: the theory of real numbers, and the theory of differentiation etc. of functions of real numbers, was developed precisely in order to deal with physical space and physical time and various theories in which space and/or time play an important role, such as Newtonian mechanics. Indeed, the reason that the real number system and the associated theory of differentiation etc. is so important mathematically is precisely that so many of the problems to which we want to apply mathematics involve space and/or time. It is hardly surprising that mathematical theories developed in order to apply to space and time should postulate mathematical structures with some strong structural similarities to the physical structures of space and time. It is a clear case of putting the cart before the horse to conclude from this that what I’ve called the physical structure of space and time is really mathematical structure in disguise. —Hartry Field, Science Without Numbers

Mathematics developed, initially, from ordinary language, and then grew more complex to account for more technical, scientific language. The natural numbers are a model of the counting numbers we deploy in ordinary language: “two apples,” “three steps,” “four chairs.” We naturally used them for measurement of continuous quantities as well, since measurement tools (such as rulers) require counting (of markings on the rulers, for instance) to be used.

Extensions beyond the naturals—negative numbers, rationals, reals, complex numbers—are not forced on us by everyday counting talk, but they can be similarly motivated by ordinary counting contexts themselves, insofar as there is a practical need to perform operations (subtraction, division, solving equations) in a way that preserves and extends successful inferential patterns. We introduce new entities so that the representational system remains closed under operations we already treat as coherent.

On this view, the payoff of “new numbers” is that they yield correct, systematic consequences about the original practice. We extend the representational framework, do the work there, and recover truths that constrain ordinary counting-number claims.

Nonstandard analysis is a good illustration, since it introduces the hyperreal numbers, but only does so in order to formulate theorems of calculus, whose interest lies in what they say about real numbers. Hyperreal numbers function as an auxiliary representational domain: you begin with real-valued problems, temporarily move into the hyperreals to make certain reasoning patterns (infinitesimals, transfer principles) tractable, and then return with a theorem stated purely in terms of the reals. The hyperreals are not forced on us as “the real subject matter”; they are a representational device that systematizes certain operations and delivers results about the original target domain.

More exotic mathematical theories, which are more removed from ordinary practices, must be seen to correspond to more unusual, specialized, or tightly constrained contexts of representation—ways of carving up possibilities that aren’t our everyday default, but that could in principle become useful for unusual particular tasks.

How mathematics is interpreted

Give a philosopher the concept of a triangle, and let him try to find out in his way how the sum of its angles might be related to a right angle. He has nothing but the concept of a figure enclosed by three straight lines, and in it the concept of equally many angles. Now he may reflect on this concept as long as he wants, yet he will never produce anything new. He can analyze and make distinct the concept of a straight line, or of an angle, or of the number three, but he will not come upon any other properties that do not already lie in these concepts. But now let the geometer take up this question. He begins at once to construct a triangle. Since he knows that two right angles together are exactly equal to all of the adjacent angles that can be drawn at one point on a straight line, he extends one side of his triangle, and obtains two adjacent angles that together are equal to two right ones. Now he divides the external one of these angles by drawing a line parallel to the opposite side of the triangle, and sees that here there arises an external adjacent angle which is equal to an internal one, etc. In such a way, through a chain of inferences that is always guided by intuition, he arrives at a fully illuminating and at the same time general solution of the question. —Immanuel Kant, Critique of Pure Reason (Cambridge ed.), A716/B744

Kant famously treated mathematics as synthetic a priori: not merely unpacking meanings, but extending knowledge while remaining non-empirical. Others—especially in the Fregean and later formalist traditions—press the opposite thought: mathematics is analytic, derivable by logic and definitions.

The representational-structure view suggests a reconciliation: Within an axiom system, many results are “analytic” in a formal sense: they follow by rule-governed transformations from the axioms. But choosing the axiom system is not itself analytic. It is a modeling decision about which representational structure we are using to regiment some practice (counting, measuring, spatial reasoning, etc.). So mathematics is “analytic” only conditional on the axioms and rules—on the representational framework you adopt. As a model of natural-language practices (counting, describing space, tracking quantities), it is not analytic in any simple way, because natural-language meanings are not fixed enough to determine a unique axiom system. We can adopt different axioms and get different theorems—precisely because we are building different precise models of an imprecise practice.

Geometries as precisifications of spatial intuitions

In Kant’s time, only Euclidean geometry existed. Kant claimed that geometry provided synthetic a priori knowledge about all possible sense experience, because it rested, according to him, on the spatial structure inherent to our sensory perception. The later development of non-Euclidean geometries has been thought to undermine Kant’s privileging of Euclidean geometry in this way; but as I see it, the very fact that it’s possible to see Euclidean geometry and the alternative geometries as somehow in conflict, rather than as simply being about entirely different things, is a powerful illustration of the point I just made.

In both Euclidean and hyperbolic geometry, despite their different axioms and theorems, we really mean the same thing by the word “triangle”: a triangle is a figure determined by three non-collinear points joined pairwise by straight lines (geodesics). What changes across Euclidean and hyperbolic geometry is not that we suddenly mean something different by “triangle,” but that we add different background modeling assumptions about the ambient space—assumptions that determine which geometrical inferences are licensed. Once those modeling assumptions are in place, predictions diverge: Euclidean triangles have angle-sum 180°, while hyperbolic triangles have angle-sum less than 180°.
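
This divergence can even be made quantitative. By the Gauss–Bonnet theorem (a standard fact, cited here only for orientation), a geodesic triangle of area A on a surface of constant curvature K has angle sum α + β + γ = π + K·A; the Euclidean case K = 0 gives exactly 180°, while the hyperbolic case K = −1 gives 180° minus an amount proportional to the triangle’s area.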

The key thought is that these are two precise mathematical models of the flexible, somewhat indeterminate natural-language apparatus of “straight,” “triangle,” and “space.” Different contexts of use—surveying small regions vs. reasoning about curved or non-Euclidean spaces—pull us toward different regimentations.

Reduction and junk theorems

I suggest that both of the above difficulties with the set-theoretical foundational consensus arise from the same source – namely, its strongly reductionistic tendency. Most mathematical objects, as they originally present themselves to us, are not sets. A natural number is not a transitive set linearly ordered by the membership relation. An ordered pair is not a doubleton of a singleton and a doubleton. A function is not a set of ordered pairs. A real number is not an equivalence class of Cauchy sequences of rational numbers. Points in space are not ordered triples of real numbers, and lines and planes are not sets of ordered triples of real numbers. Probability is not a normalized countably additive set function. A sentence is not a natural number. A proof is not a sequence of finite strings of symbols formed in accordance with the rules of some formal system. Each reader can supply his own examples of cases in which mathematical objects have been replaced in our thought and in our teaching by other, purely conceptual, objects. These conceptual objects may form structures which are isomorphic to relevant aspects of the structures formed by the objects we were originally interested in. They are, however, distinct from the objects we were originally interested in. Moreover, they do not fit smoothly into the larger structures to which the original mathematical objects belong. —Nicolas D. Goodman, The Knowing Mathematician

The representational view sheds light on how reductions of one mathematical theory to another always produce “junk theorems”, like “2 ∈ 3” or “1 is the powerset of 0.” (J.D. Hamkins, in his Lectures on the Philosophy of Mathematics, adds the example of John Conway’s account of numbers as games, which produces all familiar theorems about numbers, but where one may ask, “who wins 17?”) Inside a particular coding scheme (say, identifying numbers with particular sets), such statements can come out true. But we recoil because these are artifacts of the encoding, not stable generalizations that track our ordinary linguistic practice with numerals and membership talk.
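
A short sketch of my own, using the standard von Neumann coding of the naturals as sets, shows how such junk theorems come out true inside the encoding:

  from itertools import combinations

  def succ(n):
      # von Neumann successor: n + 1 is n together with {n}
      return n | frozenset([n])

  ZERO = frozenset()            # 0 = ∅
  ONE = succ(ZERO)              # 1 = {∅}
  TWO = succ(ONE)               # 2 = {∅, {∅}}
  THREE = succ(TWO)             # 3 = {∅, {∅}, {∅, {∅}}}

  def powerset(s):
      return frozenset(frozenset(c) for r in range(len(s) + 1)
                       for c in combinations(s, r))

  assert TWO in THREE           # the junk theorem "2 ∈ 3"
  assert ONE == powerset(ZERO)  # and "1 is the powerset of 0"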

In other words: reductions can preserve formal structure while scrambling representational roles that matter for interpretation. The resulting theorems may be harmless as bookkeeping inside the reduction, yet they fail as a model of how we actually use “2,” “element of,” or “powerset” in natural language. Our discomfort is pragmatic and semantic at once: the reduction has shifted us into a representational regime where the sentences no longer line up with the inferential and explanatory roles those expressions play in ordinary discourse.

Illustration for this blog post, drawn by Nano Banana.

Postscript: I never watch videos, but the day after I posted this, I watched this video about Euclid by Ben Syversen, which I think is related to this post somehow.

Wednesday, January 28, 2026

Distinguo Maiorem: On Predicate Relativization in Critical Inquiry

This blog post is organized around establishing two propositions and deriving two conclusions from them. The two propositions concern the logical ubiquity and necessity of predicate relativization, defined as the act of subdividing a subject term to challenge universal claims. Drawing on the Scholastic tradition of distinguo maiorem (“I distinguish the major premise”), I argue that it is always linguistically possible to distinguish between types or senses of a term—for instance, conceding that “volant birds” fly while denying that “biological birds” (like penguins) do. Furthermore, I contend that such relativization is always permissible in critical inquiry; the alternative, “simpliciter-insistence” (demanding that terms be accepted simply as stated), would block the way of inquiry in too many cases, and hence the right to distinguish is essential for resolving conceptual disagreements and navigating edge cases where definitions break down.

From these premises, I derive two conclusions regarding the indeterminacy of rules and the interpretation of texts. First, I give an analysis of Kripke’s “plus/quus” paradox, which shows that rules cannot be defined tightly enough to prevent future distinctions; this analysis refutes Edward Feser’s argument for the immateriality of the human mind. Second, I argue that interpretive systems relying solely on fixed texts without a living authority—specifically, legal Originalism and theological Protestantism (Sola Scriptura)—are functionally impossible. Because a “dead” or unavailable author cannot settle distinctions regarding ambiguous predicates (e.g., “cruel” punishment or “killing”), these systems cannot yield definitive rules for a community, necessitating a living authority (such as a Supreme Court or Magisterium) to ratify interpretations and halt the infinite regress of relativization.

(1) It is always possible to relativize a predicate. (This is an observation about language.)

It is a fundamental property of language and logic that it is always possible to relativize a predicate. In the course of any argument, when a speaker makes a universal claim—for example, “All Fs are Gs”—an interlocutor can always complicate the matter by subdividing the subject term F.

By “relativizing a predicate,” I mean the act of answering a universal claim by distinguishing between different senses or types of the subject. If the claim is “All Fs are G,” the relativizer counters by separating F into “A-type Fs” (meaning F understood in sense A) and “B-type Fs” (meaning F understood in sense B). The counter-claim then becomes: “All A-type Fs are Gs, but B-type Fs are not G.”
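
In first-order notation (one natural rendering; the informal gloss is what matters), the original claim is ∀x(Fx → Gx), while the relativizer’s counter-claim is ∀x((Fx ∧ Ax) → Gx) ∧ ¬∀x((Fx ∧ Bx) → Gx): the universal claim is conceded under the A-type reading and denied under the B-type reading.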

It is possible to relativize a predicate’s definition rather than the predicate. Even if we agree on the surface statement “All Fs are Gs,” we might disagree on what constitutes an F. Suppose we define F as “all-and-only the Hs that are J.” For example, let us define “bachelor” (F) as “an unmarried man” (H that is J). The relativizer might argue, “I grant that bachelors are unmarried men. However, I distinguish the term ‘man’ (H). All-and-only eligible, adult unmarried men (A-type Hs) are bachelors. But Pope Leo is an unmarried man (B-type H), yet we do not call him a bachelor.”

Predicate relativization has a rich history, most notably in the Scholastic tradition of medieval disputation. In these rigorous debates, the “proponent” would advance a syllogism, and the “opponent” would be tasked with finding the flaw. Suppose the proponent argued:

  1. All birds fly. (major premise)
  2. The penguin is a bird. (minor premise)
  3. Therefore, the penguin flies. (conclusion)

The opponent could employ the distinguo maiorem (“I distinguish the major premise”). They might argue: “That all volant-type birds (A-type) fly, I concede; but that all biological birds (B-type) fly, I deny.” By relativizing the predicate “bird,” the opponent dismantles the universality of the major premise and blocks the conclusion.

Analysis confirms that this move is always logically available. No matter how precise a term appears, language is sufficiently fluid that a distinction can always be introduced, carving the concept into sub-types for which the claim is conceded and sub-types for which it is denied. Hence, (1) is true.

(2) It is always permissible, in critical discussion, to relativize any predicate. (This is a normative claim about critical inquiry.)

Not only is predicate relativization always possible, it is always allowed in critical discussion, or inquiry. By critical discussion, I mean a truth-seeking discussion along the lines of what Frans van Eemeren and Rob Grootendorst meant when they defined their normative pragma-dialectical model of critical discussion. Van Eemeren and Grootendorst correctly point out that certain norms are required by the practice of a dispute-resolution process designed to ensure rationality; a core tenet, the Freedom Rule, is that parties must be free to advance any standpoint or to cast doubt on any standpoint. I am arguing, then, that blocking the ability to make distinctions stifles this process and prevents the resolution of the underlying conceptual disagreement.

To see the necessity of relativization, consider the alternative, which we might call “simpliciter-insistence.” This is the stance of a debater who refuses to accept distinctions, demanding that terms be taken simply (simpliciter) as stated. Such an opponent of relativization would argue something like the following:

I am not talking about A-type Fs and B-type Fs. I reject the relevance of your distinction. The question was about whether Fs, taken simpliciter and without any extra predicates, are Gs. You may grant that the object of our discussion is a B-type F, and deny that it is an A-type F; but in doing this, you change the subject. You must commit to an answer on whether the object of our discussion is an F, simply speaking, just an F, without any extra predicates.

For instance, in a normative discussion about freedom: “I reject your distinction between positive and negative freedom. I am talking about freedom, period. By distinguishing, you are changing the subject. You must answer whether the object of our discussion is in accordance with freedom, simply speaking.”

The fatal flaw of simpliciter-insistence is that it collapses when faced with “odd cases out-of-domain”: edge cases where a predicate is applied to a subject outside its usual context. For instance, suppose we know what “addition”, or summing, means for numbers. We may also have an intuition that addition is possible with concepts, not only with numbers. But then there are conflicting, equally plausible answers as to what adding concepts means. What, for instance, is the sum of the concepts “rational” and “animal”? If adding the concepts means adding their intensions, then their sum is “rational and animal”, i.e., human. But if adding the concepts means adding their extensions, then their sum is “rational or animal”, a concept that applies to humans, dogs, and, if there are any, rational non-animals such as angels. The two readings yield opposite logical operators (AND vs. OR). If one insists on “addition simpliciter” in the domain of concepts, the question is unanswerable.
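
The two rival readings are easy to exhibit as predicates. A toy sketch of my own, with made-up extensions for the concepts:

  # made-up extensions, purely for illustration
  def rational(x): return x in {"Socrates", "Gabriel"}  # Gabriel: an angel
  def animal(x):   return x in {"Socrates", "Fido"}     # Fido: a dog

  def intension_sum(p, q):
      # adding intensions conjoins the defining conditions: AND
      return lambda x: p(x) and q(x)

  def extension_sum(p, q):
      # adding extensions unions what falls under the concepts: OR
      return lambda x: p(x) or q(x)

  human = intension_sum(rational, animal)
  rational_or_animal = extension_sum(rational, animal)

  assert human("Socrates") and not human("Fido") and not human("Gabriel")
  assert all(rational_or_animal(x) for x in ("Socrates", "Fido", "Gabriel"))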

The defender of simpliciter-insistence might reply that we can solve this by restricting a concept’s domain of application. As the philosopher Alvin Plantinga said, after all, “No prime minister is a prime number.” The opponent of relativization can argue, then, that we can simply restrict the domain to clear cases, e.g., addition is only defined for numbers, not for concepts. Very well. But here we have a regress, since the domain itself can be relativized: do only natural numbers count as numbers, or do complex numbers, and even more foreign algebraic systems, count as well? If we restrict the domain to algebraic systems with a well-defined addition, this just leaves it to a disputant’s linguistic intuition whether some particular case counts as addition, which was the original problem. And of any purported particular application of addition, we may deny that it involves numbers in the sense required by arithmetic, and hence that 2+2=4 is relevant to the issue. This can always be done fully in good faith, in a truth-seeking way.

If the simpliciter-insistence defender restricts the domain, the relativizer can always relativize the domain restriction. Because critical discussants must be able to navigate these boundary cases to reach truth, the right to distinguish (distinguo maiorem) must be preserved, and hence, (2) is true.

(3) Conclusions and implications.

If we accept that predicates can always be relativized (1) and that we must allow this in discourse (2), two profound implications follow regarding rule-following and textual interpretation.

(3.1) An analysis of the plus/quus problem.

The inevitability of relativization sheds light on the famous “plus/quus” paradox presented by Saul Kripke in his reading of Wittgenstein.

Kripke challenges us to prove that when we used the “plus” function in the past, we didn’t actually mean “quus,” where x quus y equals x + y if x, y < 57, but equals 5 otherwise. If we have never performed a calculation with numbers larger than 57, all our past behavior is consistent with both “plus” and “quus.”
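
Kripke’s deviant function is easy to write down explicitly; here is a direct transcription of the definition just quoted, with Kripke’s own test case of 68 + 57:

  def plus(x, y):
      return x + y

  def quus(x, y):
      # agrees with plus whenever both arguments are below 57
      return x + y if x < 57 and y < 57 else 5

  # every calculation ever performed with arguments below 57 is
  # consistent with both hypotheses about what "plus" meant...
  assert all(plus(x, y) == quus(x, y)
             for x in range(57) for y in range(57))

  # ...while the two hypotheses come apart on the never-tried case
  assert plus(68, 57) == 125 and quus(68, 57) == 5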

Since we ordinarily talk about very universal and general laws of arithmetic, such as the axiom of induction, it is not possible that, when we used “plus” in the past, we meant “quus” as specifically defined by Kripke. Nevertheless, although “plus” is indeed defined well outside the range of 57, it is not well-defined for every arbitrary argument. There are always borderline cases (e.g., adding transfinite numbers, or adding symbols) where the definition breaks down. The Kripkean “quus” proponent is essentially exploiting the infinite potential for predicate relativization. We cannot define a rule so tightly that it pre-empts every possible future distinction or out-of-domain application.

Edward Feser’s case for the immateriality of the human mind rests largely on the plus/quus problem. Feser says that the neural and behavioral features of humans are consistent with multiple interpretations of our words, which suggests that our words should be indeterminate; but in fact, when we use words, we mean an absolutely definite concept, not something indeterminate between all the concepts consistent with our past behavioral and neural features; hence, our minds must be immaterial, wholly independent of our behavioral and neural features. I argue that human concepts are not, after all, all that determinate, since infinitely many questions can be raised about them which the concepts themselves do not settle; hence, Feser has not shown that the human mind is immaterial.

(3.2) The impossibility of Protestantism and Originalism.

Finally, the universal possibility and permissibility of predicate relativization poses a devastating challenge to systems of interpretation that rely solely on a fixed written text, without a living authority that can answer questions about it: specifically, legal Originalism about the U.S. Constitution, and theological Protestantism, resting on the Sola Scriptura principle. The challenge is that it is impossible to use these systems consistently to determine rules of belief or practice for a collectivity, such as the community of believers or the community of U.S. citizens.

This is because, if we have a written text by an author who is dead or otherwise unavailable to answer questions (either in person or through a representative), then it is not always possible to apply the text to issues of theory or practice and get a definite result that is acceptable to all reasonable parties to a discussion about the text. In particular, we cannot always apply the Bible to get definite answers on theology and on moral law; and we cannot always apply the U.S. Constitution to get definite answers on how to judge a federal issue. Some examples of relativization:

  • Constitutional law: If the Constitution forbids “cruel and unusual punishment,” and we ask, “Is the death penalty cruel?”, we are asking for a judgment on a predicate. An opponent can always distinguish: “It is cruel in the modern sense (A-type), but not in the 18th-century sense (B-type).”

  • Theology: If the Bible says “Thou shalt not kill,” does this apply to war? One can distinguish: “It forbids murder (A-type killing) but allows martial combat (B-type killing).”

Originalism and Protestantism both rely on a “dead author” in our current, Barthes-inspired sense, i.e., the Founding Fathers or the author(s) of the biblical texts, respectively. Protestants may argue that the true author of the Bible is God, who is indeed living rather than dead, but this does not affect my point here, which is that the author is unavailable to answer clarifying questions. Some Protestants may press onward, arguing that God is available to the individual believer via the “inner witness of the Holy Ghost”; but this does not change the case I was arguing: that it is impossible for Sola Scriptura to yield a definite result acceptable to all reasonable parties to a discussion about the text, and hence to determine rules for the community of Christian believers. This is, of course, obvious now, in light of the manifold splintering of the Protestant community since Luther’s time.

If the author is available to answer questions, either in person or through a representative, then he can settle the matter: “I meant A-type, not B-type.” But if the author is unavailable to discussants, and we do not agree that anyone on earth is his representative, then we are left with the text “simpliciter.” Because the text cannot distinguish its own predicates when faced with new contexts or edge cases, and because the author cannot respond to a distinguo maiorem, the reader is forced to supply the distinction, leading to a fork in how the predicate is understood.

Thus, without a living authority (such as a Supreme Court or a Magisterium) to ratify which distinction is authoritative, the text alone cannot yield definite answers to theoretical or practical problems. It remains forever open to the infinite regress of relativization.

Illustration for this blog post, drawn by Nano Banana.

Sunday, January 18, 2026

Cultures, communities, and atomization

This blog post is about The Ideology Is Not the Movement, a 2016-04-04 blog post by Scott Alexander. I read it only recently because it was linked from Scott Alexander’s viral 2026-01-16 eulogy of Dilbert cartoonist Scott Adams (who died 2026-01-13), on which I have nothing to say. Shortly after writing the blog post, I read an essay by G.A. Cohen that turned out to be relevant, and added a postscript about it.

The Ideology Is Not the Movement, or I≠M for short, develops a concept of “tribe” and “tribalism” in order to explain various phenomena of human social groups. I≠M opens by questioning whether the Sunni/Shia divide can really be about whether a given Muslim supports Abu Bakr or Ali for caliph, given that that dispute “was fourteen hundred years ago, both candidates are long dead, and there’s no more caliphate. You’d think maybe they’d let the matter rest.” Scott Alexander proposes that it’s not really about that at all by comparing it to the Robbers’ Cave experiment, where two nearly identical groups of boys, naming themselves the Rattlers and the Eagles, became bitter rivals almost immediately. It is clear that those two groups of boys, although they developed different customs and ideas, were not divided fundamentally by their having competing ideologies of “Rattlerism” and “Eagleism”. Scott Alexander’s idea, then, is that Sunnis and Shias are something more like that, and he uses this contrast as the motivation to build up his concept of “tribe”.

Culture

You might expect that I’m about to explain Scott Alexander’s concept of “tribe”, since I just summarized the first section of I≠M. Well, you can read I≠M yourself to see how it develops the concept of “tribe” from then on, but since I have a problem with that development, I will make that problem clear before continuing, so as not to confuse you. The problem is as follows. Scott Alexander confuses two concepts, which I will call culture and community.

  • A culture is basically what Scott Alexander calls a “tribe” when he is trying to define it in broad and abstract terms: it is a group of people who share “pre-existing differences” (traits, styles, dispositions, and “undefinable habits of thought”), who are gathered by a particular belief/event/activity which serves as a “rallying flag” for those types of people, who then undergo cultural “development” (of symbols, myths, heroes/villains, jargon, grievances, norms), and who, finally, may experience “dissolution” if the rallying flag somehow stops serving its purpose. Due to the pre-existing differences marked out by cultures, people get along better within-culture than between-cultures, and members of the same culture tend to enjoy the same kinds of cultural products, regardless of whether they share any single core set of beliefs or values.
  • A community is what is really behind some, but not all, of Scott Alexander’s purported examples of tribes, which therefore do not wholly fit his abstract theory. A community is defined by lack of alternatives, a lack often caused by poverty or disaster, but not necessarily; a community must stick together in order to face its harsh conditions, and this mutual dependency for survival makes it much more important to “fit in” with a community than with a culture. Members of a community will often hide parts of themselves or suppress preferences, because individuality and authenticity are less valuable than continued membership in good standing.

In this blog post, I will consistently use the terms “culture” and “community” as just defined, regardless of what word was used for a particular group in other sources. I claim, then, that Scott Alexander’s examples of the gamer culture, the atheist culture, and the LessWrong rationalist culture are examples of cultures rather than communities. I claim, more broadly, that most cultures are not communities, although some communities have cultures; this will be explained further in the next sections.

By contrast with communities, there is not as much of a clear benefit from “fitting in” with cultures. If you’re an outcast from the Communist Party (in most cases today), you may feel bad about yourself if you respected the other members of the Party, but you can still make it well in broader society; regarding anything that matters for your survival, you have alternatives.

In my reflections leading up to this post, I had been wondering whether I had simply never experienced tribalism in the sense Scott Alexander mentions. It turned out that, while I have been in many cultures, none of them had the traits of communities, and hence my wonderment was due to the confusion inherent in the tribe concept.

Community

I will now point out which of Scott Alexander’s examples of tribes were clearly communities rather than cultures.

The Robbers’ Cave boys were a community. They were not divided by ideologies, but what united them wasn’t ideology either: they were united by depending on each other for various survival tasks, or rather, mock-survival tasks at their summer camp. Very likely, if you gave each boy plenty of resources so that each could have a nice time on his own, they’d stop caring about each other as much.

Different examples of community in I≠M are the disability communities, such as the community of deaf people. Deaf people are as diverse as humanity at large; they were not brought together because they had common likes and dislikes, or shared “undefinable habits of thought”. They were brought together because they depended on each other to have access to communication, employment, mutual aid, and social life in a world built for hearing people. They lacked alternatives, and this is what defines a community.

As I said, lack of alternatives is very often produced by harsh conditions such as poverty, disaster, geography, or discrimination, but this is not always the case; for instance, the Amish have voluntarily made some alternatives unavailable to themselves, secluding themselves from the outside world.

Illustration for this blog post, drawn by Nano Banana.

Community cultures

Communities often have cultures. The Eagles and the Rattlers developed rudimentary cultures, where “the Eagles developed an image of themselves as proper-and-moral”, while “the Rattlers developed an image of themselves as rough-and-tough”. Although deaf people didn’t rally around anything cultural, it wouldn’t surprise me if they have a unique culture of their own, and likewise for other disability communities; and of course, the Amish are a community built from a culture, rather than the other way around. (We may call a community built from a culture a “culture community”. I think most so-called “intentional communities” are culture communities.)

That said, I think it’s not true that every community has a unique “community culture”. For instance, the mafia may have many unique values regarding honor and retribution, but I don’t think its members necessarily feel that they are better than people outside the mafia, or share a lot of cultural products, or enjoy each other’s company more than outsiders’, or anything of the sort; they probably aren’t very different from other Italians regarding what sorts of media they like. Maybe there are flaws with this example of the mafia, but it at least seems plausible to me that we may find many communities that don’t have a unique culture. If families are communities, and I think they often are, then I would also count most of them among the communities that have no unique culture.

Scott Alexander’s example of Sunni and Shia Islam is trickier, since each may have started out as a community with a culture and later become merely a culture; I am not sure. I do not believe that such a transition necessarily weakens the culture; many strong cultures were never communities.

Atomization

Insofar as there is a process which may be called “atomization” in modern society, I think it consists in the fact that people are getting richer and, as a result, no longer need communities as much in order to survive. Since communities are defined by lack of alternatives, they are dissolved when alternatives develop.

Aside from the cases of culture communities such as the Amish and intentional communities, the sacrifice of authenticity and individuality is, usually, a compromise made in order to survive, not something valued for its own sake; most people prefer not to hide parts of their personality in order to better fit in with a group.

So I consider that atomization is a good thing, insofar as it consists in people getting what they prefer.

Many persons are appalled when they hear me say that atomization is a good thing, and I think the reason is that they are, like Scott Alexander, confusing cultures with communities: they assume that what atomization erodes includes cultures, and hence that atomization must be bad, because being part of a culture is enjoyable.

However, I have just explained atomization as a process that erodes communities, not cultures; there are many cultures today, and more cultures than ever thanks to the Internet, such as the rationalist culture that Scott Alexander mentions, which formed entirely online. Atomization is the erosion of communities, whereas cultures are not harmed by it, except for the few cultures that depend on communities.

While I think it’s good that communities are on the way out, I have nothing against cultures, and I have generally enjoyed being part of cultures, although I do not fit in very much with any culture these days. Notably, this is perfectly fine, since failing to fit in with a culture, unlike failing to fit in with a community, is not a survival risk.

So the fact that Scott Alexander blurred these two concepts together into his concept of “tribe” has, ironically, really helped me clarify them for myself and notice the difficulties with terminology around this topic.

Postscript: Refuting G.A. Cohen’s case for socialism

Very shortly after writing this post, due to unrelated circumstances, I read G.A. Cohen’s short 2009 essay, Why Not Socialism? It turns out to be related to this post, since Cohen rests his case for socialism largely on the value which he calls “community”, a value which is a kind of brotherhood or general friendliness, which he believes is impaired by large inequalities.

Cohen does not give a necessary-and-sufficient definition of his view of community, but given how similar it is to the kinds of ingroup-directed friendliness that exist in cultures and communities, I claim that Cohen’s value of community can only exist toward persons whom you already, in some way, get along with, either because you share a culture with them or because you share a community.

I believe inequality is rarely a problem within cultures, since cultures tend to form between people of the same class, but it does introduce friction within communities, since the mutual dependency for survival becomes asymmetrical, which may lead to resentment. I have already argued that most communities should, ideally, not exist, and since inequality is not a problem within cultures, Cohen’s case for socialism is therefore toppled, insofar as it rested on setting up the value of community as an ideal to strive for. Insofar as it rested on considerations of justice, I have refuted it in the last blog post, which supported libertarian private property on an indisputable foundation.

Someone could reply that, yes, cultures tend to form between people of the same class, but this is only more reason to desire that more persons belong to the same class as ourselves, so that more persons could share in our culture. To such a someone, I say that it is unlikely that it ever happens that persons of different classes are otherwise entirely matching in “pre-existing differences”, that is, in traits, styles, dispositions, and “undefinable habits of thought”, and are only prevented from being part of the same culture by a wealth disparity. Personality traits, and general background, are among the causes of social classes, after all. And besides, if someone happens to be much richer or much poorer than other persons in the same culture, but is otherwise a member in good standing within the culture, allowances and accommodations tend to be spontaneously made for this sort of thing by other members of the culture. You could claim that this is itself a form of socialism, but it is voluntary and, at any rate, no society-wide socialism is required for it.

Thursday, January 8, 2026

Ethics

This blog post explains and defends an Aristotelian-Thomistic, Austro-libertarian ethics, grounded on the metaethics from the previous blog post.

The core argument is that if we are ethically bound to preserve the social conditions for truth-seeking (SPT), we must also preserve the objects of that search (the Intelligibility Principle, IP). By combining this with a specific “modern-scientific” Aristotelian hierarchy of being (HMSAT), I defend human rights and reject animal rights. Then, by combining that with a theory of agency, I argue that respecting private property preserves the intelligibility of the world, while aggression causes an “ontological crash,” degrading complex human plans into mere physical collisions; under the adopted metaethics, this amounts to defending a specific, limited form of the libertarian nonaggression principle (NAP). Although this ethic basically vindicates the laissez-faire result of Murray Rothbard’s demonstrated-preference theory of welfare, it does not vindicate all of his Ethics of Liberty, since we lack a suitable theory of punishment and damages, as I explain in the third part of the post, where some limitations of the theory are highlighted.

Saturday, December 27, 2025

Dialogical metaethics

This blog post explains my view on metaethics, which, as I’ve mentioned before, is similar to “discourse ethics” but is different (and it is based on a conception of argumentative dialogue, but the name “argumentation ethics” is taken), and hence I call it “dialogue ethics” (an aptly similar-but-different name) or “dialogical metaethics” (a name that more nicely indicates that it is, after all, a view on metaethics rather than first-order ethics, but loses the parallel with discourse/argumentation ethics).

It is somewhat longwinded because I think if I do not spell things out in a lot of detail, people don’t get it, and misinterpret it. Hence there are four sections of background, to try to teach you some academic theories in case you’ve never heard of them (just skip through them if you have), and then I develop the ideas themselves in the latter four sections.

Basically, I argue that moral and ethical concepts are best understood as dynamic components of a language game. Building on a Wittgensteinian “Use Thesis” as motivation and David Lewis’s “conversational scoreboard” as formal background, I develop a model where ethics is treated as a conduct-defense dialogue (CDD). The ultimate conclusion attempts to ground a narrow, absolute set of ethical norms (the social-pragmatic truth-norm, SPT) in the necessity of preserving the social conditions required for philosophical inquiry itself.

The theory of argumentation will help to develop what a logic of value judgments has tried in vain to provide, namely the justification of the possibility of a human community in the sphere of action when this justification cannot be based on a reality or objective truth. And its starting point, in making this contribution, is an analysis of those forms of reasoning which, though they are indispensable in practice, have from the time of Descartes been neglected by logicians and theoreticians of knowledge. —Perelman & Olbrechts-Tyteca, The New Rhetoric, conclusion

Wednesday, December 17, 2025

Review of Toby Handfield’s “A Philosophical Guide to Chance”

This blog post will review “A Philosophical Guide to Chance” by Toby Handfield. Throughout, I talk about its implications for the relevance of probability theory in its usual applications to the broad category of chance, risk, uncertainty, etc. (which is, ultimately, what all “interpretations of probability” attempt to capture), without of course intending to convey that this has any spillover effects on the application of the abstract mathematical theory to nonstandard entities such as tables, chairs, and mugs of beer (and therefore intending to convey no implications either for the unapplied, “pure” mathematical probability theory in itself), but nevertheless referring to it simply as “probability theory” (or “probability”) rather than “applied probability theory” or some other such bracketing locution.

Allegorical representation, drawn by Nano Banana, of Toby Handfield’s “A Philosophical Guide to Chance” destroying probability theory, with various themes from the book seen surrounding it. I had it drawn like this because my main takeaway from the book, as you will see, is that there is no good semantics for probability theory, meaning that, in an important sense, probability theory does not contribute anything to anyone’s understanding of anything.


Why I am reviewing this here even though, from some points of view, this is a bad idea

There are two good reasons for me not to review this book here.

  1. It goes against my usual policy: I don’t usually review books on my blog, since I don’t usually have much to say about them besides giving a summary and saying “I recommend this” or “I don’t recommend this”. I have usually relegated such brief comments, sans summary, to my X/Twitter. Summarizing is dull, usually rather uncreative work, which I would only undertake for books that I am especially interested in disseminating, such as when I summarized Agnes Callard’s Aspiration and, back in the day, various books that I tried to summarize for Wikipedia (most interestingly The Open Society and Its Enemies and A Preface to Paradise Lost). Although my verdict on Handfield’s book is that it is good, I am not especially interested in disseminating it, especially since I believe much of its value is in its survey of other works, so that much of a summary of the book would consist in a second-hand summary of those other works, which feels like overdiluting things. There will be a summary of Handfield’s book here, but only a rather perfunctory one, to give a feel for what I’m talking about, not a detailed one like my summary of Aspiration, which was intended to help the reader follow Agnes’s argument. (I had found Agnes’s argument hard to follow because I thought she wrote paragraphs that were too long, which is ironic, given the length of the paragraph I’m writing now.)
  2. I haven’t read this book with as much attention as I’d like. There was some skimming. When I talk about this book, I don’t fully know what I’m talking about. However, given how little detail this review will have, I will certainly not say anything confidently about any part of the book that I did not pay attention to, so this is not a big problem. The fact that I don’t have detailed comments on each microsection of the book is part of why I have kept the book summary separate from my broad, sweeping, ‘reviewing’ remarks.

The reason I’m reviewing the book here, which overrides the two reasons above, is that the book is very relevant to some of my past projects and will be relevant to some of my future projects. Probability theory is important in science and philosophy; it is a going concern. Handfield’s book establishes an important thesis about probability theory, and it has implications for all of my past and future work involving probability theory. So I need to at least have some sort of note, uploaded somewhere, signposting that this book exists and roughly what its implications are. This post is that note. Also, reviewing the book on X/Twitter would not allow me to hyperlink other things from the review very nicely, which I want to do. So much, then, for the justification of the review.

Summary of Toby Handfield, “A Philosophical Guide to Chance”

Toby Handfield’s A Philosophical Guide to Chance is a guided tour of a single, stubborn problem: we treat “chance” as both (i) something objective in the world and (ii) something that tells us what it’s rational to believe and how to act—yet chance-guided belief can still “fail” in a particular case, and it’s surprisingly hard to say what makes a chance fact true.

The book begins by fixing the target. “Chance” isn’t (just) ignorance or a personal hunch; it plays a distinctive role in thought and science: it’s supposed to be a physical probability that normatively constrains credence (your confidence in an outcome should match what you take its chance to be), without guaranteeing success in any one trial. That normative role is sharpened via Lewis’s Principal Principle (credence should track known chance, absent inadmissible information), but the early chapters also foreshadow the central sticking point: what counts as admissible, and—more deeply—what could ground chance so that it earns this authority over rational belief.
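
For reference, here is the standard schematic form of the Principal Principle (the textbook rendering, not a quotation from Handfield): where Cr is a reasonable initial credence function, E is any admissible evidence, and ⟨ch(A) = x⟩ is the proposition that the objective chance of A is x,

    Cr(A | ⟨ch(A) = x⟩ ∧ E) = x.

In words: once the chance of A is known, admissible evidence is screened off, and credence should simply equal the chance.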

From there Handfield builds the most tempting “scientific” staging ground: the classical picture of a deterministic world of particles, positions, and velocities evolving under time-reversal-invariant laws. To talk sensibly about uncertainty in such a world, we represent “ways the world might be” as points and trajectories in phase space, and we represent ordinary propositions (like “there’s an elephant in the room”) as sets of microstates—regions in that space. This apparatus works beautifully for physics’ own categories, especially macrostates (temperature/pressure/etc.) as large phase-space “blobs,” and it sets up statistical mechanics’ key move: explain thermodynamic regularities by saying anti-thermodynamic behavior is not impossible but overwhelmingly improbable relative to a natural measure (“volume”) on phase space.

That statistical-mechanical picture is the launching pad for the first major metaphysical proposal: possibilist theories, which try to ground chance in “how the actual world sits among possibilities,” typically via relative modal volume. Handfield treats this as initially attractive—almost the default temptation once you’ve absorbed phase space and measures—but then systematically presses why it doesn’t deliver what chance is supposed to be. Volume-based chances struggle with conditioning on measure-zero events, with the vagueness and open-texture of ordinary propositions, and—most importantly—with the justificatory demand: why should that measure have any normative authority over rational credence? Attempts to vindicate the privileged measure by appeal to learning, frequencies, or “how well peers do” run into circularity: you end up using the very notion of “most” or “typical” that the measure was meant to explain. Even more sophisticated “washout” ideas (microconstancy/typicality) capture real robustness in practice, but still appear to smuggle in an ungrounded measure over possibilities.
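
To make the justificatory demand concrete, here is a toy sketch of my own, in Python; nothing like it appears in Handfield’s book, and the rival measure is invented purely for illustration. It estimates the “chance” of a proposition as the relative volume of its region in a one-dimensional state space, under two different measures:

    import random

    random.seed(0)
    N = 100_000

    # Toy state space: one coordinate x in [0, 1].
    # Proposition A is the region "x < 0.5".
    def in_A(x):
        return x < 0.5

    # "Chance" of A under the uniform measure, estimated by Monte Carlo:
    uniform_samples = [random.random() for _ in range(N)]
    chance_uniform = sum(in_A(x) for x in uniform_samples) / N   # ~0.50

    # "Chance" of A under a rival measure with density 2x on [0, 1],
    # sampled by the inverse-CDF trick x = sqrt(u):
    skewed_samples = [random.random() ** 0.5 for _ in range(N)]
    chance_skewed = sum(in_A(x) for x in skewed_samples) / N     # ~0.25

    print(chance_uniform, chance_skewed)
    # Same region, same space, two "volumes": nothing in the space
    # itself says which measure deserves authority over rational credence.

Both measures are mathematically unimpeachable; the possibilist owes us a non-circular reason why one of them is the measure that rational credence must track.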

The next major strategy is actualism: keep chance objective and real, but reduce it to purely actual, this-worldly facts—paradigmatically, frequencies or (more subtly) the Lewisian “Best System” account where probabilistic laws and chances are whatever best balance simplicity, strength, and fit to the actual history. Handfield grants the sophistication and influence of this approach, but argues it distorts how chance is meant to work. Crude frequentism fails because chance and finite frequency can come apart (and single-case chances collapse to 0/1), while Best-System actualism threatens counterfactual weirdness (chances depend too heavily on what actually happens), reverses the explanatory direction we ordinarily use (outcomes/frequencies explained by chances, not vice versa), and risks making chance’s normative force depend on an anthropocentric modeling compromise (tailored to limited creatures like us).

At that point the book opens the anti-realist landscape. If the realist reduction programs don’t ground chance, perhaps chance talk is (in one way or another) not tracking mind-independent chance properties. Handfield distinguishes: error theory (chance discourse aims at objective facts but none exist), subjectivism (chance claims depend on agents’ credences), and non-cognitivism (chance talk functions more like a guiding or expressive tool than a straightforward description). Subjectivism, he argues, collapses genuine disagreement and wrecks chance’s explanatory role in science; error theory and non-cognitivism remain live but owe us a story about why chance-talk is so successful and entrenched if it doesn’t describe objective chance facts.

Quantum mechanics then becomes the stress test: if anything forces objective chance on us, surely it’s QM. But Handfield’s survey of the main interpretive families—collapse, Bohm, Everett—aims to show that QM doesn’t straightforwardly rescue chance realism. Collapse interpretations can take chance as primitive (which doesn’t illuminate chance), Bohmian mechanics is deterministic and pushes probabilities toward typicality/ignorance-style stories, and Everett replaces “one outcome happens with probability p” with “all outcomes happen,” creating a new problem: reconstruct genuine uncertainty and justify the Born rule as uniquely rational. The many-worlds chapters push hard on the idea that self-locating uncertainty and decision-theoretic derivations can simulate the role of probability, but struggle to secure the robust, explanatory, uniquely action-guiding “objective chance” we started with—especially around death and selection/weighting worries.
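
(For readers who want the Born rule spelled out, here it is in its standard textbook form, which is not specific to Handfield: for a system in state ψ, the probability of obtaining measurement outcome i is |⟨i|ψ⟩|², the squared magnitude of the corresponding amplitude. The Everettian’s burden is to explain why a rational agent should spread credence across branches in proportion to these squared amplitudes rather than, say, by simply counting branches.)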

Late in the book, Handfield turns to time. We experience chance as future-directed, yet many fundamental laws are time-symmetric. His proposed reconciliation is evidence-first: chance is tied to what credence is recommended by available evidence, and evidence is radically time-asymmetric because the past leaves records while the future does not. Statistical mechanics plus the Past Hypothesis (a low-entropy boundary condition) is brought in as the deeper physical explanation of why records and traces overwhelmingly point one way in time—why we get “footprints from the past” but not “portents from the future.” This supports an overall picture where “chance” varies with evidential situation and context, rather than being a fixed world-feature readable off the total microstate.

The final chapter makes the book’s argumentative posture explicit by analogy to moral debunking: chance concepts carry norms (coordinate credence with chance; use likelihood-style updating), and we can give a compelling “natural history” of why creatures like us adopt and rely on chance-thinking—because it’s practically indispensable for decision and for restraining our pattern-hungry tendency to invent causal stories. But that practical vindication doesn’t automatically yield existential vindication of irreducible, objective, physical chance properties. Handfield’s overall drift is therefore deflationary: keep the tool (probabilistic reasoning, Bayesian modeling, statistical mechanics, quantum predictions), but be suspicious of the heavyweight metaphysics—especially the idea that there must be mind-independent “chance facts” that both ground probabilities and uniquely dictate rational credence in the strong way many philosophers hoped.

In short: the book starts from chance’s everyday-and-scientific authority over rational belief, tries the two dominant realist grounding strategies (possibility-structure and actual-history reduction), finds both wanting (especially on normativity and explanation), tests the hope that quantum theory might force realism, and ends by recommending a broadly anti-realist / debunking-friendly stance: chance-talk is an extraordinarily useful practice for limited agents embedded in an entropic, record-filled world, but its success may not require—and may not support—robust metaphysical “chance-makers.”

Remarks on the book

Handfield is an invaluable guide to all previous philosophical proposals in probability semantics. To make a pun with his name, he really puts the whole field in your hands; he possesses a rare talent for pedagogical clarity, managing to render even the most formidable proposals accessible. A prime example is his treatment of John Bigelow’s proposal that “probabilities are ratios of volumes of possibilities.” If one attempts to read the original papers where Bigelow advances this view, one is immediately thrown into the deep end of a difficult formalism that can obscure the philosophical intuition. Handfield, by contrast, reconstructs the argument with the ease of an introductory textbook, without sacrificing the necessary rigor. I am grateful for this; it is representative of the service Handfield performs in making all philosophical proposals in the semantics of chance legible.

However, Handfield’s bright clarity in exposition is equally applied to illuminating the flaws in each of the proposals, and the book leads the reader to the conclusion that there is, in fact, no good probability semantics. I find myself in full agreement with his verdict on the existing literature: every major attempt to ground probability—whether in modal volumes, frequencies, or best-system analyses—is fatally flawed. Interestingly, Handfield identifies deep problems even within anti-realist theories, despite ultimately siding with an anti-realist stance himself. The result is that the book offers no semantic theory at all on which to rest, realist or not. You put it down feeling logically thoroughly briefed, but metaphysically homeless.

When I reflect on the dim prospects for probability semantics, I feel some relief that at least I’m not a LessWrong user. LessWrong is a website whose philosophy is founded on, among other things, overusing probability theory, applying it to all of life and reasoning. (Mostly its users are “objective Bayesians” in epistemology, although of course, the broader attitude of probability overuse is not at all part of objective Bayesianism.) As Handfield’s book demonstrates by systematically dismantling the semantic grounds of chance, this is a precarious position. If probability lacks a coherent semantic footing, then building an entire identity or epistemic system upon it is the height of irrationality. To overuse a tool that we cannot fundamentally define is to actively work toward making language meaningless, communication impossible, and inquiry fruitless. LessWrong users are happy with this, of course, since without exception they hate all knowledge and always seek to destroy any possibility of anyone understanding any truth. (Since they are often autistic, I should note that this paragraph is largely hyperbole.)

Yet, even for those of us who have not staked our identity on Bayesianism, the drift toward anti-realism is uncomfortable. I do not particularly enjoy being forced into a position of philosophical scepticism regarding such a widely accepted field of research. Probability theory is everywhere: it appears in scientific explanation, in everyday deliberation, in statistics, and in decision theory. And it even appears in many proposals in philosophical semantics, such as the following three:

  1. Adams’s Thesis, which analyzes natural-language conditional statements (“If A, then B”) by equating their assertability with the conditional probability P(B|A). This is extremely famous and you can read about it at the SEP, for instance, so there is really no reason for me to say more about it, even though it is more important than the other two.
  2. Hannes Leitgeb’s rehabilitation of verificationism, which proposes that a sentence A is meaningful if, and only if, there exists some evidence B such that P(B|A)≠P(B). (I am very sympathetic to verificationism, but I have mostly been following a form of Gordian Haas’s version. Haas’s Minimal Verificationism is a great book. Notice how I don’t have much to say about most books, beyond recommending them. Besides its original proposal, which fully rehabilitates the verificationist criterion of meaning with only some small holes to patch regarding counterfactuals, Haas’s book serves as a great survey of historical verificationism, and I have used its information to make some improvements on the Wikipedia article about verificationism.)
  3. My own previous work on concessive statements. In a post that I decided was not good enough for this blog, so that I instead posted it to my X/Twitter, I modeled concessives like “even if p, [still] q” using a probabilistic threshold. If we let τ be a robustness threshold (e.g., 90%) and δ be a measure of how much p is implied to hinder q, the concessive assertion can be formalized as: P(q|¬p) − δ ≥ P(q|p) ≥ τ. (In plain English: q remains likely, above the threshold τ, even given p, while p is implied to lower the probability of q by at least the “hindrance factor” δ compared to when p is false.) This work was inspired by Crupi and Iacona’s earlier, much better work which modeled concessives using a conditional logic, as well as the axiomatic (“Hilbert-style”) proof system that they built for this logic together with Raidl. Still, it further shows that, if probability theory could be understood, it would help understand some parts of language. (A toy computation of all three proposals follows this list.)
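
To fix ideas, here is a minimal sketch, in Python, computing each of the three proposals above over a finite space of “worlds”; the toy distribution and all function names are my own inventions for this post, not anything from Adams, Leitgeb, or my earlier thread:

    # Worlds are (p, q) truth-value pairs; the probabilities are invented
    # purely for illustration.
    dist = {(True, True): 0.27, (True, False): 0.03,
            (False, True): 0.665, (False, False): 0.035}

    def prob(pred):
        """Total probability of the worlds satisfying pred."""
        return sum(pr for w, pr in dist.items() if pred(w))

    def condprob(pred, given):
        """Conditional probability P(pred | given)."""
        return prob(lambda w: pred(w) and given(w)) / prob(given)

    p = lambda w: w[0]
    q = lambda w: w[1]
    not_p = lambda w: not w[0]

    # 1. Adams's Thesis: assertability of "if p then q" is P(q|p).
    assertability = condprob(q, p)                # 0.27/0.30 = 0.9

    # 2. Leitgeb-style relevance: p is meaningful iff some evidence q has
    #    P(q|p) != P(q).  Here P(q|p) = 0.9 while P(q) = 0.935.
    meaningful = abs(condprob(q, p) - prob(q)) > 1e-9

    # 3. The concessive check from item 3, with slack in the numbers to
    #    stay clear of floating-point edge cases:
    tau, delta = 0.85, 0.02
    concessive_ok = (condprob(q, p) >= tau and
                     condprob(q, not_p) - delta >= condprob(q, p))

    print(assertability, meaningful, concessive_ok)   # ~0.9 True True

This only demonstrates that the three translations are mechanically checkable once a distribution is given; as I am about to say, it does nothing to tell us what the distribution itself means.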

It is disappointing that these proposals, which had felt like they were clarifying vague linguistic phenomena, might simply be translating one thing we don’t understand (conditionals, meaning, concessives) into another thing we don’t understand (probability).

Granted, perhaps this translation is not entirely in vain. Even if probability lacks a fundamental “truth-maker” in the physical world, treating these problems probabilistically is helpful because it imposes structural coherence; we may not know what a “chance fact” is, but we know exactly how probabilities must relate to one another mathematically to avoid Dutch books or incoherence. By translating a linguistic problem into probability calculus, we trade vague, shifting linguistic intuitions for a rigid, checkable structure, albeit one which “hadeth no semantics”. We may not have found the ground, but we have at least found a rigorous syntax for our uncertainty. Handfield leaves us with the tool, but takes away the comforting illusion that the tool describes the furniture of the universe.