Friday, January 30, 2026

Mathematics is a model of language

This blog post outlines a philosophy of mathematics centered on representational structures, rejecting both the view that math is merely a tool for physics and the view that it is solely a game of analytic derivations in abstract logic. I draw on Robert Stalnaker’s pragmatics and his solution to the “problem of equivalence”: the puzzle that, since all mathematical truths are necessary, they should logically all have the same content. The solution is that mathematics is not just about the truths themselves, but about how we represent them: it is the study of the specific connections between formulas, proof patterns, and axiomatic systems. To learn mathematics, therefore, is to master these symbolic manipulations and to explore the capabilities of different representational systems.

Under this view, mathematics acts as a formal model of natural language. Mathematical concepts (like numbers or triangles) originate from ordinary language practices (counting or measuring) and are then “reified” into rigorous systems to allow for consistent operations. This perspective explains why mathematics is universally applicable (because these structural patterns are portable across domains) and why “auxiliary” concepts like hyperreals are useful (they serve as temporary representational bridges). It also explains why reducing mathematical objects to set theory often feels wrong (producing “junk theorems”): such reductions might work formally, but they sever the connection to the original representational roles that the concepts play in our natural language.

We cannot deny the axioms of mathematics; for they exhibit nothing more than a consistent use of words, and affirm of some idea that it is itself and not something else. —William Godwin, Political Justice, book 1, chapter 4

Philosophies of mathematics

This blog post is about philosophy of mathematics. As I’ve said before, there are roughly three types of philosophy of mathematics:

  1. an excuse to do more mathematics: here we have intuitionism, which created intuitionist mathematics, or the foundational movements that try to recast all of mathematics in terms of type theory or category theory.
  2. “math is only real if it’s applied”: here we have so-called “mathematical naturalism”, which gives us the famous arguments that mathematical objects must be real because, and only insofar as, they are indispensable to physics.
  3. you can accept all of current math if you just accept this very unclear idea: the idea here is that you can give an illuminating philosophical interpretation to all of current mathematics (not just the parts that get applied practically), without even needing to reinterpret it in a restricted logic (as in intuitionism) or a foundational theory (such as category theory or type theory). This is an ambitious goal, and the catch is that every proposal claiming to achieve it does its work by providing a very vague and unsatisfying interpretive scheme, such as saying that mathematics is a meaningless game of symbols, or that it consists of tautologies, where it isn’t clear what makes it true that all-and-only mathematics fits the scheme, or how the scheme fits the appearances of mathematical practice.

I don’t think any idea about philosophy of mathematics fully escapes being one of these types, and this blog post belongs to type 3, but I hope I have mitigated the vagueness enough that it says something interesting.

The fact that I’m vaguely gesturing at something that isn’t very precise makes it all the more important to give the reader a feel for the kind of thing said in the literature that forms the broad background I’m coming from, without making the reader go read all of those books in full. I hope this explains why I’ve thrown in all these quotations, including the famous Kant quotation that everyone has seen before (but who knows, perhaps someone hasn’t?).

What mathematics is about

Now if mathematical truths are all necessary, then on the possible worlds analysis there is no room for doubt about the truth of the propositions themselves. There are, it seems, only two mathematical propositions, the necessarily true one and the necessarily false one, and we all know that the first is true and the second false. But the functions that determine which of the two propositions is expressed by a given mathematical statement are just the kind that are sufficiently complex to give rise to reasonable doubt about which proposition is expressed by a statement. Hence it seems reasonable to take the objects of belief and doubt in mathematics to be propositions about the relation between statements and what they say. —Robert Stalnaker, Inquiry, p. 73

Stalnaker’s pragmatics, as explained in his books Inquiry (1984; especially chapter 4) and Context and Content (1999; especially chapter 12), begins with what he calls a “coarse-grained” semantic picture. Propositions, at the most basic level, are sets of possible worlds (or functions from worlds to truth-values). With that in hand, you can model assertion as eliminating worlds from the conversational context, presupposition as fixing a common-ground set, and inquiry as progressively narrowing what remains live. As spare as it is, that model earns its keep: Stalnaker can explain a lot of linguistic practice while keeping the semantic core simple and truth-conditional.

But the same simplicity creates what Stalnaker calls the problem of equivalence. If propositions are just sets of worlds, then necessarily equivalent statements—especially in mathematics—collapse to the same content. If every mathematical truth is necessary, it can look as if there are, informationally, only two mathematical propositions: the necessary truth and the necessary falsehood. And yet we plainly distinguish “17 is prime” from “the angles of a Euclidean triangle sum to 180°,” can believe one without believing the other, and can learn one without learning the other.
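To see the collapse starkly, here is a minimal formalization (prop(·) is my notation, not Stalnaker’s):

$$
\mathrm{prop}(\phi) \;=\; \{\, w \in W : \phi \text{ is true at } w \,\}, \qquad
\phi \text{ necessary} \;\Longleftrightarrow\; \mathrm{prop}(\phi) = W.
$$

Any two necessary truths φ and ψ thus have prop(φ) = prop(ψ) = W: on the coarse-grained picture they express one and the same proposition.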

Stalnaker’s motivating response is to treat this not as a reason to abandon the possible-worlds model of informational content, but as a reason to notice something special about mathematical inquiry: much of what we learn in mathematics is about representational structures and their relations to that coarse-grained content. We do not just “latch onto” a necessary truth; we learn (and can be ignorant of) whether a given formula, proof pattern, or axiom-system representation determines that truth. Mathematics, on this view, is not merely a repository of necessary contents; it is, centrally, a disciplined study of structures of representation—structures that can be instantiated in different languages and notations, and manipulated by calculation and proof.

How mathematics is learned

Suppose there were a community of English speakers that grew up doing its arithmetic in a base eight notation. The words “eight” and “nine” don’t exist in its dialect; the words “ten” and “eleven,” like the numerals “10” and “11” denote the numbers eight and nine. Now suppose that some child in this community has a belief that he would express by saying “twenty-six times one hundred equals twenty-six hundred.” Would it be correct to say that this child believes that twenty-two times sixty-four equals fourteen hundred eight? This does not seem to capture accurately his cognitive state. His belief, like our simple arithmetical beliefs, is not really a belief about the numbers themselves, independently of how they are represented. —Robert Stalnaker, Context and Content, p. 237

The representational-structure picture fits the phenomenology of learning mathematics unusually well. Mathematical competence is inseparable from activities like transforming inscriptions, applying rules of inference, calculating, constructing proofs, and translating between equivalent forms.

Those are not optional “presentational” extras. They are how mathematical information is acquired and used. If learning that 689×43=29627 were merely ruling out worlds where a necessary truth fails, it would be mysterious how computation helps—since computation doesn’t teach us which worlds are actual. But if what you are learning is (roughly) that this representational procedure connects these numerals and operations to this result, then calculation is exactly the right kind of epistemic act: it is an exploration of the representational system’s structure.
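For concreteness, here is that calculation written out as the rule-governed transformation it is:

$$
689 \times 43 \;=\; 689 \times 40 + 689 \times 3 \;=\; 27560 + 2067 \;=\; 29627.
$$

Each step applies a notation-level rule to decimal numerals; performing the steps teaches you which numeral the procedure connects to “689×43”.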

So the “object” of mathematical belief is often best seen as involving claims like the following: a representation with this structure has this content; a derivation of this form preserves truth relative to these axioms; this symbolic construction yields a representation equivalent to that one. Those are propositions we can genuinely be ignorant of and come to know by manipulating the system.

How mathematics is applied

An initial hint that numbers are magnitudes comes from their algebra. The natural numbers, the positive real numbers and the ordinal numbers each have an associative operation of addition which defines a linear order. I call a system with that precise algebraic structure a positive semigroup. We find positive semigroups cropping up not only in mathematics but in the physical sciences as well. The fundamental physical magnitudes have this same algebraic structure: for example, mass is a positive semigroup because addition of masses is associative, and the addition defines a linear order. The same is true for length, area, volume, angle size, time and electric charge: these and other physical magnitudes all have the structure of a positive semigroup. —Keith Hossack, Knowledge and the Philosophy of Number, p. 1

Our picture also helps with a classic puzzle: mathematics is generally applicable—it shows up everywhere—yet it also seems to be about very particular objects (the number 2, the empty set, π, triangles, and so on).

If mathematics is fundamentally about representational structures, its generality is no surprise. Representational structures are general: they are patterns of form and transformation that can be implemented in many domains. The same algebraic structure can organize bookkeeping, physics, logic, probability, and geometry because those practices can be represented in ways that share the relevant structure.
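A small sketch can make this portability vivid (the code and class names below are mine, purely illustrative): one generic folding procedure serves any domain that implements the shared additive structure.

```python
# A minimal sketch of structural portability (illustrative, assumed names):
# one generic procedure, many additive domains.
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Mass:
    grams: float
    def __add__(self, other: "Mass") -> "Mass":
        return Mass(self.grams + other.grams)

@dataclass(frozen=True, order=True)
class Length:
    meters: float
    def __add__(self, other: "Length") -> "Length":
        return Length(self.meters + other.meters)

def total(items):
    """Fold a nonempty sequence using only the shared '+' structure."""
    it = iter(items)
    acc = next(it)
    for x in it:
        acc = acc + x
    return acc

print(total([Mass(2.0), Mass(3.0)]))       # Mass(grams=5.0)
print(total([Length(1.5), Length(2.5)]))   # Length(meters=4.0)
```

Nothing in `total` cares whether it is summing masses or lengths; it uses only the associative operation the two domains share, which is the sense in which the structure is portable.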

At the same time, it feels like mathematics is about particular objects because our linguistic practice encourages reification. We talk as if “2” names an object; we quantify over numbers; we introduce constants (“0”, “∅”); we build theories that treat these as a stable domain. That practice is extremely useful: it streamlines inference and stabilizes coordination. But on the representational view, the “particularity” of mathematical objects is tightly connected to how our public language fixes and reuses representational roles.

How mathematics is motivated

The trouble with this objection is that it completely ignores history: the theory of real numbers, and the theory of differentiation etc. of functions of real numbers, was developed precisely in order to deal with physical space and physical time and various theories in which space and/or time play an important role, such as Newtonian mechanics. Indeed, the reason that the real number system and the associated theory of differentiation etc. is so important mathematically is precisely that so many of the problems to which we want to apply mathematics involve space and/or time. It is hardly surprising that mathematical theories developed in order to apply to space and time should postulate mathematical structures with some strong structural similarities to the physical structures of space and time. It is a clear case of putting the cart before the horse to conclude from this that what I’ve called the physical structure of space and time is really mathematical structure in disguise. —Hartry Field, Science Without Numbers

Mathematics developed, initially, from ordinary language, and then grew more complex to account for more technical, scientific language. The natural numbers are a model of the counting numbers we deploy in ordinary language: “two apples,” “three steps,” “four chairs.” We naturally used them for measuring continuous quantities as well, since using measurement tools (such as rulers) requires counting (of the markings on the ruler, for instance).

Extensions beyond the naturals—negative numbers, rationals, reals, complex numbers—are not forced on us by everyday counting talk, but they can be similarly motivated by ordinary counting contexts themselves, insofar as there is a practical need to perform operations (subtraction, division, solving equations) in a way that preserves and extends successful inferential patterns. We introduce new entities so that the representational system remains closed under operations we already treat as coherent.

On this view, the payoff of “new numbers” is that they yield correct, systematic consequences about the original practice. We extend the representational framework, do the work there, and recover truths that constrain ordinary counting-number claims.

Nonstandard analysis is a good illustration, since it introduces the hyperreal numbers, but only does so in order to formulate theorems of calculus, whose interest lies in what they say about real numbers. Hyperreal numbers function as an auxiliary representational domain: you begin with real-valued problems, temporarily move into the hyperreals to make certain reasoning patterns (infinitesimals, transfer principles) tractable, and then return with a theorem stated purely in terms of the reals. The hyperreals are not forced on us as “the real subject matter”; they are a representational device that systematizes certain operations and delivers results about the original target domain.
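A standard illustration of this round trip, for f(x) = x² (st is the standard-part function, ε a nonzero infinitesimal):

$$
f'(x) \;=\; \mathrm{st}\!\left(\frac{(x+\varepsilon)^2 - x^2}{\varepsilon}\right)
\;=\; \mathrm{st}(2x + \varepsilon) \;=\; 2x.
$$

The infinitesimal appears only in the middle of the calculation; the recovered theorem, f′(x) = 2x, is purely about the reals.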

More exotic mathematical theories, which are more removed from ordinary practices, must be seen to correspond to more unusual, specialized, or tightly constrained contexts of representation—ways of carving up possibilities that aren’t our everyday default, but that could in principle become useful for unusual particular tasks.

How mathematics is interpreted

Give a philosopher the concept of a triangle, and let him try to find out in his way how the sum of its angles might be related to a right angle. He has nothing but the concept of a figure enclosed by three straight lines, and in it the concept of equally many angles. Now he may reflect on this concept as long as he wants, yet he will never produce anything new. He can analyze and make distinct the concept of a straight line, or of an angle, or of the number three, but he will not come upon any other properties that do not already lie in these concepts. But now let the geometer take up this question. He begins at once to construct a triangle. Since he knows that two right angles together are exactly equal to all of the adjacent angles that can be drawn at one point on a straight line, he extends one side of his triangle, and obtains two adjacent angles that together are equal to two right ones. Now he divides the external one of these angles by drawing a line parallel to the opposite side of the triangle, and sees that here there arises an external adjacent angle which is equal to an internal one, etc. In such a way, through a chain of inferences that is always guided by intuition, he arrives at a fully illuminating and at the same time general solution of the question. —Immanuel Kant, Critique of Pure Reason (Cambridge ed.), A716/B744

Kant famously treated mathematics as synthetic a priori: not merely unpacking meanings, but extending knowledge while remaining non-empirical. Others—especially in the Fregean and later formalist traditions—press the opposite thought: mathematics is analytic, derivable by logic and definitions.

The representational-structure view suggests a reconciliation. Within an axiom system, many results are “analytic” in a formal sense: they follow by rule-governed transformations from the axioms. But choosing the axiom system is not itself analytic. It is a modeling decision about which representational structure we are using to regiment some practice (counting, measuring, spatial reasoning, etc.). So mathematics is “analytic” only conditional on the axioms and rules—on the representational framework you adopt. As a model of natural-language practices (counting, describing space, tracking quantities), it is not analytic in any simple way, because natural-language meanings are not fixed enough to determine a unique axiom system. We can adopt different axioms and get different theorems—precisely because we are building different precise models of an imprecise practice.

Geometries as precisifications of spatial intuitions

In Kant’s time, only Euclidean geometry existed. Kant claimed geometry provided synthetic a priori knowledge about all possible sense experience, because it rested, according to him, on the spatial structure inherent in our sensory perception. The later development of non-Euclidean geometries has been thought to undermine Kant’s privileging of Euclidean geometry in this way; but as I see it, the fact that it’s even possible to see Euclidean geometry and alternative geometries as somehow in conflict, rather than as simply talking about entirely different things, is a powerful illustration of the point I just made.

In both Euclidean and hyperbolic geometry, despite their different axioms and theorems, we really mean the same thing by the word “triangle”: a triangle is a figure determined by three non-collinear points joined pairwise by straight lines (geodesics). What changes across Euclidean and hyperbolic geometry is not that we suddenly mean something different by “triangle,” but that we add different background modeling assumptions about the ambient space—assumptions that determine which geometrical inferences are licensed. Once those modeling assumptions are in place, predictions diverge: Euclidean triangles have angle-sum 180°, while hyperbolic triangles have angle-sum less than 180°.
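The divergence can even be made quantitative. In a hyperbolic plane of constant curvature −1, the classical angle-defect formula (a consequence of the Gauss–Bonnet theorem) says that a triangle’s angle sum falls short of π by exactly its area:

$$
\alpha + \beta + \gamma \;=\; \pi - \mathrm{Area}(\triangle).
$$

Small triangles are thus nearly Euclidean, which is part of why surveying small regions never forces the choice of regimentation on us.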

The key thought is that these are two precise mathematical models of the flexible, somewhat indeterminate natural-language apparatus of “straight,” “triangle,” and “space.” Different contexts of use—surveying small regions vs. reasoning about curved or non-Euclidean spaces—pull us toward different regimentations.

Reduction and junk theorems

I suggest that both of the above difficulties with the set-theoretical foundational consensus arise from the same source – namely, its strongly reductionistic tendency. Most mathematical objects, as they originally present themselves to us, are not sets. A natural number is not a transitive set linearly ordered by the membership relation. An ordered pair is not a doubleton of a singleton and a doubleton. A function is not a set of ordered pairs. A real number is not an equivalence class of Cauchy sequences of rational numbers. Points in space are not ordered triples of real numbers, and lines and planes are not sets of ordered triples of real numbers. Probability is not a normalized countably additive set function. A sentence is not a natural number. A proof is not a sequence of finite strings of symbols formed in accordance with the rules of some formal system. Each reader can supply his own examples of cases in which mathematical objects have been replaced in our thought and in our teaching by other, purely conceptual, objects. These conceptual objects may form structures which are isomorphic to relevant aspects of the structures formed by the objects we were originally interested in. They are, however, distinct from the objects we were originally interested in. Moreover, they do not fit smoothly into the larger structures to which the original mathematical objects belong. —Nicolas D. Goodman, The Knowing Mathematician

The representational view sheds light on how reductions of one mathematical theory to another always produce “junk theorems”, like “2 ∈ 3” or “1 is the powerset of 0.” (J.D. Hamkins, in his Lectures on the Philosophy of Mathematics, adds the example of John Conway’s account of numbers as games, which produces all familiar theorems about numbers, but where one may ask, “who wins 17?”) Inside a particular coding scheme (say, identifying numbers with particular sets), such statements can come out true. But we recoil because these are artifacts of the encoding, not stable generalizations that track our ordinary linguistic practice with numerals and membership talk.
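The point is easy to check mechanically. Here is a minimal sketch (mine, not Hamkins’s or Goodman’s) of the von Neumann coding, on which each natural number n is the set {0, 1, …, n−1}:

```python
# Sketch of the von Neumann coding of naturals as sets, under which the
# "junk theorem" 2 ∈ 3 comes out true. The coding is one convention among many.
def von_neumann(n: int) -> frozenset:
    """Return the von Neumann ordinal for n, i.e. n = {0, 1, ..., n-1}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset(s | {s})  # successor step: s := s ∪ {s}
    return s

two, three = von_neumann(2), von_neumann(3)
print(two in three)  # True -- an artifact of the coding, not of arithmetic
```

The printed True is a fact about the coding, not about our ordinary use of “2” and “3”.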

In other words: reductions can preserve formal structure while scrambling representational roles that matter for interpretation. The resulting theorems may be harmless as bookkeeping inside the reduction, yet they fail as a model of how we actually use “2,” “element of,” or “powerset” in natural language. Our discomfort is pragmatic and semantic at once: the reduction has shifted us into a representational regime where the sentences no longer line up with the inferential and explanatory roles those expressions play in ordinary discourse.

Illustration for this blog post, drawn by Nano Banana.

Postscript: I never watch videos, but the day after I posted this, I watched this video about Euclid by Ben Syversen, which I think is related to this post somehow.

Wednesday, January 28, 2026

Distinguo Maiorem: On Predicate Relativization in Critical Inquiry

This blog post aims to establish two propositions and to derive two conclusions from them. The two propositions concern the logical ubiquity and necessity of predicate relativization, defined as the act of subdividing a subject term to challenge universal claims. Drawing on the Scholastic tradition of distinguo maiorem (“I distinguish the major premise”), I argue that it is always linguistically possible to distinguish between types or senses of a term—for instance, conceding that “volant birds” fly while denying that “biological birds” (like penguins) do. Furthermore, I contend that such relativization is always permissible in critical inquiry; the alternative, “simpliciter-insistence” (demanding that terms be accepted simply as stated), would block the way of inquiry in too many cases, and hence the right to distinguish is essential for resolving conceptual disagreements and navigating edge cases where definitions break down.

From these premises, I derive two conclusions regarding the indeterminacy of rules and the interpretation of texts. First, I give an analysis of Kripke’s “plus/quus” paradox, which shows that rules cannot be defined tightly enough to prevent future distinctions; this analysis refutes Edward Feser’s argument for the immateriality of the human mind. Second, I argue that interpretive systems relying solely on fixed texts without a living authority—specifically, legal Originalism and theological Protestantism (Sola Scriptura)—are functionally impossible. Because a “dead” or unavailable author cannot settle distinctions regarding ambiguous predicates (e.g., “cruel” punishment or “killing”), these systems cannot yield definitive rules for a community, necessitating a living authority (such as a Supreme Court or Magisterium) to ratify interpretations and halt the infinite regress of relativization.

(1) It is always possible to relativize a predicate. (This is an observation about language.)

It is a fundamental property of language and logic that it is always possible to relativize a predicate. In the course of any argument, when a speaker makes a universal claim—for example, “All Fs are Gs”—an interlocutor can always complicate the matter by subdividing the subject term F.

By “relativizing a predicate,” I mean the act of answering a universal claim by distinguishing between different senses or types of the subject. If the claim is “All Fs are G,” the relativizer counters by separating F into “A-type Fs” (meaning F understood in sense A) and “B-type Fs” (meaning F understood in sense B). The counter-claim then becomes: “All A-type Fs are Gs, but B-type Fs are not G.”
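In first-order notation (my regimentation, nothing canonical), the move looks like this:

$$
\text{Claim: } \forall x\,(Fx \rightarrow Gx) \qquad
\text{Counter: } \forall x\,((Fx \wedge Ax) \rightarrow Gx) \;\wedge\; \exists x\,(Fx \wedge Bx \wedge \neg Gx).
$$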

It is possible to relativize a predicate’s definition rather than the predicate. Even if we agree on the surface statement “All Fs are Gs,” we might disagree on what constitutes an F. Suppose we define F as “all-and-only the Hs that are J.” For example, let us define “bachelor” (F) as “an unmarried man” (H that is J). The relativizer might argue, “I grant that bachelors are unmarried men. However, I distinguish the term ‘man’ (H). All-and-only eligible, adult unmarried men (A-type Hs) are bachelors. But Pope Leo is an unmarried man (B-type H), yet we do not call him a bachelor.”

Predicate relativization has a rich history, most notably in the Scholastic tradition of medieval disputation. In these rigorous debates, the “proponent” would advance a syllogism, and the “opponent” was tasked with finding the flaw. Suppose the proponent argued:

  1. All birds fly. (major premise)
  2. The penguin is a bird. (minor premise)
  3. Therefore, the penguin flies. (conclusion)

The opponent could employ the distinguo maiorem (“I distinguish the major premise”). They might argue: “That all volant-type birds (A-type) fly, I concede; but that all biological birds (B-type) fly, I deny.” By relativizing the predicate “bird,” the opponent dismantles the universality of the major premise and blocks the conclusion.

Analysis confirms that this move is always logically available. No matter how precise a term appears, language is sufficiently fluid that a distinction can always be introduced to carve the concept into accepting and rejecting sub-types. Hence, (1) is true.

(2) It is always permissible, in critical discussion, to relativize any predicate. (This is a normative claim about critical inquiry.)

Not only is predicate relativization always possible, it is always allowed in critical discussion, or inquiry. By critical discussion, I mean a truth-seeking discussion along the lines of Frans van Eemeren and Rob Grootendorst’s normative pragma-dialectical model of critical discussion. Van Eemeren and Grootendorst correctly point out that certain norms are required by the practice of a dispute-resolution process designed to ensure rationality; a core tenet, the Freedom Rule, is that parties must be free to advance any standpoint or cast doubt on any standpoint. I am arguing, then, that blocking the ability to make distinctions stifles this process and prevents the resolution of the underlying conceptual disagreement.

To see the necessity of relativization, consider the alternative, which we might call “simpliciter-insistence.” This is the stance of a debater who refuses to accept distinctions, demanding that terms be taken simply (simpliciter) as stated. Such an opponent of relativization would argue something like the following:

I am not talking about A-type Fs and B-type Fs. I reject the relevance of your distinction. The question was about whether Fs, taken simpliciter and without any extra predicates, are Gs. You may grant that the object of our discussion is a B-type F, and deny that it is an A-type F; but in doing this, you change the subject. You must commit to an answer on whether the object of our discussion is an F, simply speaking, just an F, without any extra predicates.

For instance, in a normative discussion about freedom: “I reject your distinction between positive and negative freedom. I am talking about freedom, period. By distinguishing, you are changing the subject. You must answer whether the object of our discussion is in accordance with freedom, simply speaking.”

The fatal flaw of simpliciter-insistence is that it collapses when faced with “odd cases out-of-domain.” These are edge cases where a predicate is applied to a subject outside its usual context. For instance, suppose we know what “addition”, or summing, means for numbers. We may also have an intuition that “addition”, or summing, is possible with concepts, not only with numbers. But then there are conflicting, equally plausible answers as to what adding concepts means. For instance, what is the sum of the concepts “rational” and “animal”? If we demand that adding the concepts means adding their intensions, we know that the sum of the concepts is “rational and animal”, i.e., a human. But if we demand that adding the concepts means adding their extensions, we know then that the sum of the concepts is “rational or animal”, a concept that applies to humans, dogs, and, if there are any, rational non-animals such as angels. These result in opposite logical operators (AND vs. OR). If one insists on “addition simpliciter” in the domain of concepts, the question is unanswerable.
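In extensional terms (the notation ext(·) is mine): writing ext(C) for the set of things the concept C applies to, the two candidate sums come apart as intersection versus union:

$$
\mathrm{ext}(F +_{\mathrm{int}} G) \;=\; \mathrm{ext}(F) \cap \mathrm{ext}(G),
\qquad
\mathrm{ext}(F +_{\mathrm{ext}} G) \;=\; \mathrm{ext}(F) \cup \mathrm{ext}(G).
$$

With F = “rational” and G = “animal”, the first sum picks out the humans, while the second picks out the humans together with the dogs and any rational non-animals.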

The defender of simpliciter-insistence might reply that we can solve this by restricting a concept’s domain of application. As the philosopher Alvin Plantinga said, after all, “No prime minister is a prime number.” The opponent of relativization can argue, then, that we can simply restrict the domain to clear cases, e.g., addition is only defined for numbers, not for concepts. Very well. But here we have a regress, since the domain itself can be relativized: do only natural numbers count as numbers, or also complex numbers, or even more foreign algebraic systems? If we restrict the domain to algebraic systems with a well-defined addition, this is the same as leaving it to a disputant’s linguistic intuition whether some particular case counts as addition, which was the original problem. And in any purported particular application of addition, we may deny that this particular application involves numbers in the sense required by arithmetic, and hence that 2+2=4 is relevant to the issue at hand. This is always doable fully in good faith, in a truth-seeking way.

If the simpliciter-insistence defender restricts the domain, the relativizer can always relativize the domain restriction. Because critical discussants must be able to navigate these boundary cases to reach truth, the right to distinguish (distinguo maiorem) must be preserved, and hence, (2) is true.

(3) Conclusions and implications.

If we accept that predicates can always be relativized (1) and that we must allow this in discourse (2), two profound implications follow regarding rule-following and textual interpretation.

(3.1) An analysis of the plus/quus problem.

The inevitability of relativization sheds light on the famous “plus/quus” paradox presented by Saul Kripke in his reading of Wittgenstein.

Kripke challenges us to prove that when we used the “plus” function in the past, we didn’t actually mean “quus,” where x quus y equals x + y if x, y < 57, but equals 5 otherwise. If we have never performed a calculation with numbers as large as 57, all our past behavior is consistent with both “plus” and “quus.”
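Written out (with ⊕ as the symbol for quus, following Kripke’s exposition):

$$
x \oplus y \;=\;
\begin{cases}
x + y & \text{if } x, y < 57,\\
5 & \text{otherwise.}
\end{cases}
$$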

Since we ordinarily talk about very universal and general laws of arithmetic, such as the axiom of induction, it is not possible that, when we used “plus” in the past, we meant “quus” as specifically defined by Kripke. Nevertheless, although “plus” is well-defined far beyond 57, it is not well-defined for every arbitrary argument. There are always borderline cases (e.g., adding transfinite numbers, or adding symbols) where the definition breaks down. The Kripkean “quus” proponent is essentially exploiting the infinite potential for predicate relativization. We cannot define a rule so tightly that it pre-empts every possible future distinction or out-of-domain application.

Edward Feser’s case for the immateriality of the human mind rests largely on the plus/quus problem. Feser says that the neural and behavioral features of humans are consistent with multiple interpretations of our words, which suggests our words should be indeterminate; but in fact, when we use words, we mean an absolutely definite concept, one not indeterminate between all the concepts consistent with our past behavioral and neural features; hence, our minds must be immaterial, wholly independent of our behavioral and neural features. I argue that human concepts are not, after all, all that determinate, since infinitely many questions can be raised about them which the concepts themselves do not settle; hence, Feser has not shown that the human mind is immaterial.

(3.2) The impossibility of Protestantism and Originalism.

Finally, the universal possibility and permissibility of predicate relativization poses a devastating challenge to systems of interpretation that rely solely on a fixed written text, without a living authority that can answer questions about it: specifically, legal Originalism about the U.S. Constitution, and theological Protestantism resting on the Sola Scriptura principle. The challenge is that it is impossible to use these systems consistently to determine rules of belief or practice for a collectivity, such as the community of believers or the community of U.S. citizens.

This is because, if we have a written text by an author who is dead or otherwise unavailable to answer questions (either in person or through a representative), then it is not always possible to apply the text to issues of theory or practice and get a definite result that is acceptable to all reasonable parties to a discussion about the text. In particular, we cannot always apply the Bible to get definite answers on theology and on moral law; and we cannot always apply the U.S. Constitution to get definite answers on how to judge a federal issue. Some examples of relativization:

  • Constitutional law: If the Constitution forbids “cruel and unusual punishment,” and we ask, “Is the death penalty cruel?”, we are asking for a judgment on a predicate. An opponent can always distinguish: “It is cruel in the modern sense (A-type), but not in the 18th-century sense (B-type).”

  • Theology: If the Bible says “Thou shalt not kill,” does this apply to war? One can distinguish: “It forbids murder (A-type killing) but allows martial combat (B-type killing).”

Originalism and Protestantism both rely on a “dead author” in our current, Barthes-inspired sense, i.e., the Founding Fathers or the author(s) of the biblical texts, respectively. Protestants may argue that the true author of the Bible is God, who is indeed living rather than dead, but this does not affect my point here, which is that the author is unavailable to answer clarifying questions. Some Protestants may press onward, arguing that God is available to the individual believer via the “inner witness of the Holy Ghost”; but this does not change the case I am arguing: that it is impossible for Sola Scriptura to yield a definite result acceptable to all reasonable parties to a discussion about the text, and hence to determine rules for the community of Christian believers. This is, of course, obvious now, in light of the manifold splinters of the Protestant community since Luther’s time.

If the author is available to answer questions, either in person or through a representative, then he can settle the matter: “I meant A-type, not B-type.” But if the author is unavailable to discussants, and we do not agree that anyone on earth is his representative, then we are left with the text “simpliciter.” Because the text cannot distinguish its own predicates when faced with new contexts or edge cases, and because the author cannot respond to a distinguo maiorem, the reader is forced to supply the distinction, leading to a fork in how the predicate is understood.

Thus, without a living authority (such as a Supreme Court or a Magisterium) to ratify which distinction is authoritative, the text alone cannot yield definite answers to theoretical or practical problems. It remains forever open to the infinite regress of relativization.

Illustration for this blog post, drawn by Nano Banana.

Sunday, January 18, 2026

Cultures, communities, and atomization

This blog post is about The Ideology Is Not the Movement, a 2016-04-04 blog post by Scott Alexander. I read it only recently because it was linked from Scott Alexander’s viral 2026-01-16 eulogy of Dilbert cartoonist Scott Adams (who died 2026-01-13), on which I have nothing to say. Shortly after writing the blog post, I read an essay by G.A. Cohen that turned out to be relevant, and added a postscript about it.

The Ideology Is Not the Movement, or I≠M for short, develops a concept of “tribe” and “tribalism” in order to explain various phenomena of human social groups. I≠M opens by questioning whether the Sunni/Shia divide can really be about whether a given Muslim supports Abu Bakr or Ali for caliph, given that that dispute “was fourteen hundred years ago, both candidates are long dead, and there’s no more caliphate. You’d think maybe they’d let the matter rest.” Scott Alexander proposes that it’s not really about that at all by comparing it to the Robbers’ Cave experiment, where two nearly identical groups of boys, naming themselves the Rattlers and the Eagles, became bitter rivals almost immediately. It is clear that those two groups of boys, although they developed different customs and ideas, were not divided fundamentally by their having competing ideologies of “Rattlerism” and “Eagleism”. Scott Alexander’s idea, then, is that Sunnis and Shias are something more like that, and he uses this contrast as the motivation to build up his concept of “tribe”.

Culture

You might expect that I’m about to explain Scott Alexander’s concept of “tribe”, since I just summarized the first section of I≠M. Well, you can read I≠M yourself to see how it develops the concept of “tribe” from then on, but since I have a problem with that development, I will make that problem clear before continuing, so as not to confuse you. The problem is as follows. Scott Alexander confuses two concepts, which I will call culture and community.

  • A culture is basically what Scott Alexander calls a “tribe” when he is trying to define it in broad and abstract terms: it is a group of people who share “pre-existing differences” (traits, styles, dispositions, and “undefinable habits of thought”), who are gathered by a particular belief/event/activity which serves as a “rallying flag” for those types of people, who then undergo cultural “development” (of symbols, myths, heroes/villains, jargon, grievances, norms), and who, finally, may experience “dissolution” if the rallying flag somehow stops serving its purpose. Due to the pre-existing differences marked out by cultures, people get along better within-culture than between-cultures, and members of the same culture tend to enjoy the same kinds of cultural products, regardless of whether they share any single core set of beliefs or values.
  • A community is what is really behind some, but not all, of Scott Alexander’s purported examples of tribes, which therefore do not wholly fit his abstract theory. A community is defined by lack of alternatives, a lack often (but not necessarily) caused by poverty or disaster; a community must stick together in order to face its harsh conditions, and this mutual dependency for survival makes it much more important to “fit in” with a community than with a culture. Members of a community will often hide parts of themselves or suppress preferences, because individuality and authenticity are less valuable than continued membership in good standing.

In this blog post, I will consistently use the terms “culture” and “community” as just defined, regardless of what word is used for a particular group in other sources. I claim, then, that Scott Alexander’s examples of the gamer culture, the atheist culture, and the LessWrong rationalist culture are examples of cultures rather than communities. I claim, more broadly, that most cultures are not communities, although some communities have cultures; this will be explained further in the next sections.

By contrast with communities, there is not as much of a clear benefit from “fitting in” with cultures. If you’re an outcast from the Communist Party (in most cases today), you may feel bad about yourself if you respected the other members of the Party, but you can still make it well in broader society; regarding anything that matters for your survival, you have alternatives.

In my reflections leading up to this post, I had been wondering whether I had perhaps never experienced tribalism in the sense Scott Alexander mentions. It turned out that, while I have been in many cultures, none of the cultures I have been in had the traits of communities; hence my wonderment was due to the confusion inherent in the tribe concept.

Community

I will now point out which of Scott Alexander’s examples of tribes were clearly communities rather than cultures.

The Robbers’ Cave boys were a community. They were not divided by ideologies, but what united them wasn’t ideology either: they were united by depending on each other for various survival tasks, or rather, mock-survival tasks at their summer camp. Very likely, if you gave each boy plenty of resources so that each could have a nice time on his own, they’d stop caring about each other as much.

Different examples of community in I≠M are the disability communities, such as the community of deaf people. Deaf people are as diverse as humanity at large; they were not brought together because they had common likes and dislikes, or shared “undefinable habits of thought”. They were brought together because they depended on each other for access to communication, employment, mutual aid, and social life in a world built for hearing people. They lacked alternatives, and this is what defines a community.

As I said, lack of alternatives is very often produced by harsh conditions such as poverty, disaster, geography, or discrimination, but this is not always the case; for instance, the Amish have voluntarily made some alternatives unavailable to themselves, secluding themselves from the outside world.

Illustration for this blog post, drawn by Nano Banana.

Community cultures

Communities often have cultures. The Eagles and the Rattlers developed rudimentary cultures, where “the Eagles developed an image of themselves as proper-and-moral”, while “the Rattlers developed an image of themselves as rough-and-tough”. Although they didn’t rally around anything cultural, it wouldn’t surprise me if deaf people, as well as other disability communities, have a unique culture of their own; and of course, the Amish are a community built from a culture, rather than the other way around. (We may call a community built from a culture a “culture community”. I think most so-called “intentional communities” are culture communities.)

That said, I think it’s not true that every community has a unique “community culture”. For instance, the mafia may have many unique values regarding honor and retribution, but I don’t think its members necessarily feel superior to people outside the mafia, or share a lot of cultural products, or enjoy each other’s company more than outsiders’, or anything of the sort; they probably aren’t very different from other Italians regarding what sorts of media they like. Maybe there are flaws with this example of the mafia, but it at least seems plausible to me that we may find many communities that don’t have a unique culture. If families are communities, and I think they often are, then I would also count most of them among the communities that have no unique culture.

Scott Alexander’s example of Sunni and Shia Islam is trickier, since they may have started out as a community with a culture and later become merely a culture; I am not sure. In any case, I do not believe that this necessarily weakens the culture; many strong cultures were never communities.

Atomization

Insofar as there is a process which may be called “atomization” in modern society, I think it consists in the fact that people are getting richer and, as a result, do not need communities to survive as much. Since communities are defined by lack of alternatives, they are dissolved when alternatives develop.

Aside from the cases of culture communities such as the Amish and intentional communities, the sacrifice of authenticity and individuality is, usually, a compromise made in order to survive, not something valued for its own sake; most people prefer not to hide parts of their personality in order to better fit in with a group.

So I consider that atomization is a good thing, insofar as it consists in people getting what they prefer.

Many persons are appalled when they hear me say that atomization is a good thing, and I think the reason is that they are, like Scott Alexander, confusing cultures with communities. Hence, they think atomization is bad because being part of a culture is enjoyable.

However, I have just explained atomization as a process that affects communities, not cultures; there are many cultures today, and more cultures than ever thanks to the Internet, such as the rationalist culture that Scott Alexander mentions, which formed entirely online. Atomization is the erosion of communities, whereas cultures are not harmed by it, except the few cultures that depend on communities.

While I think it’s good that communities are on the way out, I have nothing against cultures, and I have generally enjoyed being part of cultures, although these days I do not fit in very much with any culture – and notably, this is perfectly fine, since failing to fit in with a culture, unlike failing to fit in with a community, is not a survival risk.

So the fact that Scott Alexander blurred these two concepts together into his concept of “tribe” has really helped me clarify them for myself and notice the difficulties with terminology around this topic.

Postscript: Refuting G.A. Cohen’s case for socialism

Very shortly after writing this post, due to unrelated circumstances, I read G.A. Cohen’s short 2009 essay, Why Not Socialism? It turns out to be related to this post, since Cohen rests his case for socialism largely on the value he calls “community”: a kind of brotherhood or general friendliness, which he believes is impaired by large inequalities.

Cohen does not give a necessary-and-sufficient definition of his view of community, but given how similar it is to the kinds of ingroup-directed friendliness that exist in cultures and communities, I claim that Cohen’s value of community can only exist toward persons that you already, in some way, get along with, either because you have a culture in common with them, or a community.

I believe inequality is rarely a problem within cultures, since cultures tend to form between people of the same class, but it does introduce friction within communities, since the mutual dependency for survival becomes asymmetrical, which may lead to resentment. I have already argued that most communities should, ideally, not exist; and since inequality is not a problem within cultures, Cohen’s case for socialism is toppled, insofar as it rested on setting up the value of community as an ideal to strive for. Insofar as it rested on considerations of justice, I refuted it in the last blog post, which grounded libertarian private property on an indisputable foundation.

Someone could reply that, yes, cultures tend to form between people of the same class, but this is only more reason to desire that more persons belong to the same class as ourselves, so that more persons could share in our culture. To such a someone, I say that it is unlikely that persons of different classes are ever otherwise entirely matching in “pre-existing differences”, that is, in traits, styles, dispositions, and “undefinable habits of thought”, and are only prevented from being part of the same culture by a wealth disparity. Personality traits, and general background, are among the causes of social classes, after all. And besides, if someone happens to be much richer or much poorer than other persons in the same culture, but is otherwise a member in good standing within the culture, allowances and accommodations tend to be spontaneously made for this sort of thing by other members of the culture. You could claim that this is itself a form of socialism, but it is voluntary and, at any rate, no society-wide socialism is required for it.

Thursday, January 8, 2026

Ethics

This blog post explains and defends an Aristotelian-Thomistic, Austro-libertarian ethics, grounded on the metaethics from the previous blog post.

The core argument is that if we are ethically bound to preserve the social conditions for truth-seeking (SPT), we must also preserve the objects of that search (the Intelligibility Principle, IP). By combining this with a specific “modern-scientific” Aristotelian hierarchy of being (HMSAT), I defend human rights and reject animal rights. Then, by combining that with a theory of agency, I argue that respecting private property preserves the intelligibility of the world, while aggression causes an “ontological crash,” degrading complex human plans into mere physical collisions; under the adopted metaethics, this amounts to defending a specific, limited form of the libertarian nonaggression principle (NAP). Although this ethic basically vindicates the laissez-faire result of Murray Rothbard’s demonstrated-preference theory of welfare, it does not vindicate all of his Ethics of Liberty, since we lack a suitable theory of punishment and damages, as I explain in the third part of the post, where some limitations of the theory are highlighted.