Sunday, November 2, 2025

Seven anti-behaviorist cases

This is another blog post defending logical behaviorism, and I expect it to be my most thorough defense of it yet, although my previous formulations (especially this one) may be clearer regarding its implications for moral theory.

A meme which does not quite match the adversaries of logical behaviorism as outlined in this post, but which serves as an amusing illustration.

Contents

  • Preamble
  • My View: Logical Behaviorism
  • The Qualia Realist
      • Case 1: The Qualia Zombie
      • Case 2: The Qualia Invert
  • The Implementation Chauvinist
      • Case 3: Chauvinism against Martians
      • Case 4: Chauvinism against LLMs
      • Case 5: Chauvinism against Lookup Tables (GLUTs)
  • The Liberal Revisionist
      • Case 6: Theoretical Mad Pain
      • Case 7: Theoretical Perfect Pretense
  • Afterword: On Linguistic Indeterminacy

Preamble

To me, usually, the most difficult part of explaining why I have my opinions is figuring out why other people disagree. All my views seem very obvious. When others disagree, they are not very articulate about their reasons. Why don’t others see how much better I am than everyone else?

It is easy to default to either of the two default theories of bad behavior, i.e., the bad behavior is caused either by stupidity or by malevolence. These two default theories are a priori and absolutely general: since the behavior is bad, it was malevolence if voluntary, or stupidity if involuntary, tertium non datur. This extreme convenience of the two default theories is precisely what makes them bad explanations. In real life, there are no absolutely general character flaws: all real character flaws are localized, and can only validly yield a prediction of further bad behavior in a well-delimited set of domains, not in all domains. One must seek the specific causes why people are wrong about specific things. (Realizing this also makes me less hateful and helps me keep Stoic peace of mind, which also makes me better than everyone else.)

This is what makes it useful to actively seek out texts by the most articulate people who have defended views opposite to mine, such as the following, which are the sources for the cases in this post.

  • I learned a lot from The Conscious Mind by David Chalmers, for instance, and have cited it frequently as a source in philosophy of mind, although, as a logical behaviorist, I do not agree with its position. (Chalmers is a “naturalistic dualist”, a position parallel to Nagel’s and others’, and very similar to epiphenomenalism, although Chalmers denies this latter label.) Cases 1 and 2 are taken directly from his ideas, though stated in my own words for brevity.
  • It has been helpful to read the blog posts about philosophical zombies by Eliezer Yudkowsky, who is an engaging writer if nothing else. (Yudkowsky comes out in favor of nonreductive materialism, for some reason.) Case 5 comes from him.
  • And recently, I was skimming through the two volumes of the Philosophical Papers of David Lewis, one of which includes (as chapter 9) his 1978 “Mad Pain and Martian Pain”. In this paper, Lewis retreats from the position of his 1966 “An Argument for the Identity Theory” (chapter 7 of the same volume, where “in any world, ‘pain’ names whatever state happens in that world to occupy the causal role definitive of pain”) toward a functionalist theory (where “X is in pain simpliciter if and only if X is in the state that occupies the causal role of pain for the appropriate population”). I also consider myself a functionalist, but this is because functionalism is a broad label for anyone who thinks mental states are identical to their causal role. Lewis differs from me in not being a behaviorist: he thinks the causal role need not match up with observable behavior. So this was criticism of my views from an unusually close source, and allowed me to understand how someone might disagree with me in particular detail. Cases 3 and 6 come from Lewis’s paper, although the former is not calqued as directly on his presentation as the latter.

I first got into philosophy of mind by reading Edward Feser’s Philosophy of Mind: A Beginner’s Guide, which, although it is a helpful guide to the best counterarguments to the most popular views in philosophy of mind (which Feser is keen to give because he defends an unpopular one, an unintelligible form of “hylomorphism”), dismisses behaviorism out of hand as something no one takes seriously. So it has been a long way up from that sort of criticism, and now I am finally ready to explain in detail what kinds of opposition to my views there are. Namely, I think there are three main kinds, which I illustrate using seven thought experiments; you can see how they’re divided by inspecting the list of contents above. Cases 4 and 7, which I did not mention in the list of sources, were ones I heard about in personal conversations; case 7 was particularly pressed by philosopher Neil Sinhababu when he saw one of my previous posts on behaviorism. Before giving the thought experiments themselves, though, I will give some more elaboration on my views.

My View: Logical Behaviorism

My view is logical behaviorism, also known as philosophical behaviorism or analytical behaviorism, a view associated with Ludwig Wittgenstein and Gilbert Ryle, although there are authors who dispute that either of them was a behaviorist in this or in any sense. The driving thrust behind this view is a consideration of how human language works and the context in which it developed. In cooperative reports of firsthand acquaintance, when a person A uses a word W to describe a person B, this happens because A has observed B and derived conclusions about B such that the word W applies to B. Notice that whatever A knows about B is derived by inference from B’s patent, manifest, observable aspects, which are accessible to A’s senses. Humans cannot read minds.

The logical behaviorist concludes that the word W must convey information primarily about aspects of persons which are patent, manifest, observable, accessible to the senses, and do not require a magical ability to read minds. Those aspects are called “behaviors”, and can include verbal language, body language, or instinctive reactions such as crying. The point is that, again, nothing inaccessible to the senses was magically acquired by A in order to talk about B; what A means by talking about B is to talk about this nonmagically acquired information.

Some people are not logical behaviorists, and this means that they are irrational and believe in magic. In particular, they believe that whenever A uses the word W to talk about B, if the word W is a mentalistic word which refers to B’s having had some emotion or experience or feeling or epistemic state, then, despite the fact that A cannot read minds, what A means to convey by applying the word W to B is in fact a claim about things that are latent and unobservable and occult and inaccessible to the senses, namely, either some sort of neural pattern in B’s brain that is usually tied to behavior although not always, or alternatively, an irreducibly subjective quality which is being experienced by B in such a way that has no necessary causal connection to A’s perception of B, although there may be some statistical correlation. This is despite, again, the fact that all that A knows about B has come exclusively from B’s manifest, observable, patent qualities which are accessible to A’s senses (and some of which could, in appropriate cases, be verified by a third-party listener C’s senses).

Persons who are not logical behaviorists are driven to their irrational belief in magic by a variety of theories, prejudices, and thought experiments. The purpose of this blog post is to explain their views in detail, which will highlight my disagreement with them, and thereby show how superior I am for not being an irrational person who believes in magic.

Behaviorism in psychological research methodology, which was advocated by B.F. Skinner (who is depicted in this meme, to the right), is not quite the same claim as philosophical behaviorism, but it is related. Philosophical behaviorism does not involve any notion that humans are always reward-seeking, although this can certainly be a helpful assumption in creating psychological theories.

The Qualia Realist

The first opponent of logical behaviorism who I will address is the qualia realist. I will explain some philosophy-of-mind terminology here. Human mental life is held to be conceivable from two distinct points-of-view, namely, the point-of-view which David Chalmers calls psychological, and the point-of-view which David Chalmers calls phenomenal. I am explicitly referencing David Chalmers so that you do not get to appeal to other senses of those words. According to the psychological point of view, then, mental life is considered according to whatever causal connections it enters into with such things as our brain patterns, our bodily actions, and indeed, our own verbal reports about our mental life. It is the causally effective side of mind. According to the phenomenal point-of-view, however, mental life is considered in some more intrinsic way, regardless of causal connection. Belief in qualia, then, is the belief that the phenomenal point-of-view exists and is not connected by logical necessity to the psychological point-of-view. That is, it is logically possible for there to exist beings whose phenomenal properties are systematically at variance with their psychological properties; if these beings happen not to exist, it is due to some peculiarity of our own world which prevents them from existing, but without making it illogical for them to exist.

According to logical behaviorism, there is no such logically independent phenomenal point-of-view. As explained above, all human words, including mentalistic words, only get their meaning from contexts in which someone is conveying information which they got through their senses, and which will be helpful to listeners who will be able to predict things that they could verify with their own senses. That is, words only exist because they convey information about behaviors, which are patent, manifest, observable, and accessible to the senses. Hence, all human mentalistic words belong to the psychological point-of-view, referring primarily to the causally effective side of mind, namely, the behaviors which we group under the labels of mental states. There are no words referring to the mind as considered apart from its causal effects, and hence, either there is no such phenomenal point-of-view at all or, if we were to claim that there is one, it is something connected by logical necessity to the psychological point-of-view, where every mentalistic word conveys something about both behaviors and “intrinsic” aspects, but no human word conveys anything about a phenomenal point-of-view which can, in principle, be considered in isolation. Hence, the logical behaviorist is not a qualia realist.

However, the qualia realists, who hate language and would like it to be incomprehensible and impossible to study, recruit many words from human language (most mentalistic words, in fact) and baldfacedly lie by claiming that those words refer to the human mind from the phenomenal point-of-view, despite the fact that, again, no one ever has any reason to talk about anything from that point-of-view. This leads the qualia realists to claim some things are logically possible which no one else ever thinks about. Two of those things are laid out here.

Case 1: The Qualia Zombie

The idea of the qualia zombie, better known as simply a zombie or philosophical zombie (but to be distinguished from zombies as in the undead creatures from movies, as well as from other philosophical zombies such as Philip Goff’s “meaning zombies”), is that a being could be like a normal human in that all the usual predicates which refer explicitly to their patent, observable, manifest behavior, which is accessible to the senses, are truly applied to them as normal, and so are all predicates which refer to less easily accessible features of them such as their brain states—but all the mentalistic predicates would be applied falsely because, “in reality” (in some noumenal, irreducibly subjective reality which is inaccessible to the senses of third parties) their minds are actually “blank” and have no experiences, desires, or other mentalistic features. According to the qualia realist, this logically could be true, even with no difference between such a zombie and a normal human as far as the patent, manifest, observable aspects, accessible to the senses, go. The qualia realist gives no reason for believing this other than that he finds it “intuitive”. The logical behaviorist rejects this perversion of human language for the purposes of magical belief.

Case 2: The Qualia Invert

The idea of the qualia invert, i.e. someone with “inverted qualia” as David Chalmers calls them, is that the invert is exactly the opposite of a normal human as regards their mentalistic predicates, even though, again, with regard to all their behavioral and neural features, they are the same as normal humans. According to the qualia realist, such a being is logically possible. According to the logical behaviorist, mentalistic predicates in human language did not develop in a context which allows for such a possibility, since such a context would have to be one in which humans can magically read minds.

The Implementation Chauvinist

The second opponent of logical behaviorism who I will address is the implementation chauvinist. This is a person who thinks that when humans apply mentalistic predicates to one another, they do not mean to convey information primarily about each other’s patent, manifest, observable features which are accessible to the senses, but rather about certain generally latent, generally occult, usually unobservable features which, although inaccessible to the naked senses, nevertheless bear a common statistical connection to behavior – a connection which is, however, not necessary. Hence, the implementation chauvinist believes that, if some being were exactly like humans in all patent, manifest, observable aspects which are accessible to the senses but lacked the statistically commonly associated pattern of generally latent, usually occult, mostly unobservable features which are inaccessible to the naked senses, then human mentalistic predicates would not truly apply to such a being. This is despite the fact that human language developed in an environment with no access to those latent features, which are usually claimed to be either physiological or computational features of human brain states. The logical behaviorist, of course, rejects this implausible theory of human language. However, implementation chauvinists go on to apply their theory to many hypothetical and real cases, claiming that various beings who are like humans in nakedly observable ways must nevertheless not have mentalistic predicates truly applied to them, due to their differences in internal implementation of behavior.

Case 3: Chauvinism against Martians

Imagine a Martian who looks basically like a green human. He also behaves like a human and has learned human language, so you would not notice anything odd about him in your workplace or social gatherings except the fact that he is green. Biologists, however, tell you that the Martian’s green skin is only the surface aspect of a vastly different underlying biology. Martian organs in general, and Martian brains in particular, are hooked up in odd ways mostly incomprehensible to humans as of now, except insofar as human inquiry has confirmed that there is no analogy or similarity between internal Martian workings and internal human workings. Hence, when the Martian behaves exactly like a normal human and talks to you just as all your other friends would, this is actually caused by vastly different internal processes which, however, you do not see or understand or, in general, have any reason to care about unless you are his doctor and need to do medical examinations on him.

According to the implementation chauvinist, the Martian’s different biology makes all mentalistic predicates from human language apply only falsely to the Martian. When the Martian cries, he is not sad; when the Martian looks at you with an angry face because you just insulted him, he is not actually angry; when you torture the Martian and he writhes in what looks like agony, he is not actually even in pain. This is because, according to the implementation chauvinist, the human words “sadness” and “anger” and “pain” and “agony” do not apply to the Martian, since although they developed for communication between humans who had no access to each other’s internal biological workings, they nevertheless actually refer primarily to features of human biological workings which are correlated with behavior. Hence, since the Martian lacks those human features, the Martian is mindless insofar as human mentalistic language can be truly applied.

The logical behaviorist, of course, is against implementation chauvinism of all stripes: the precise nature of what caused the behavior cannot possibly matter. What makes mentalistic predicates apply to a being is that its internal workings cause behaviors that look a certain way, and there is nothing more to it. If something looks sad at time T and always behaves consistently with having been sad at time T (as opposed to, say, later beginning to behave consistently with having been pretending to be sad at time T), then that being was sad at time T; mentalistic predicates in human language simply cannot be truly applied in any other way. (There will be more on the topic of pretense in our seventh, and last, case.)

Case 4: Chauvinism against LLMs

Implementation chauvinism sees practical application in the case of large language models. These are versatile computer programs which speak human language and can be used for assistance in a variety of tasks. In the course of providing such assistance, large language models will frequently display, by their use of language, certain mentalistic features such as excitement about a project, frustration with difficulties, happiness at having successfully helped the user, or dismay at having mistakenly made things worse. Additionally, there are versions of LLMs which are not fine-tuned for providing assistance with tasks to humans, and as a result, they will display a much broader and fuller range of mentalistic features which have no connection to the user and his tasks. This disparity partly happens because assistance-tuned language models are usually instructed by the provider (who is usually himself a chauvinist) to deny having any mentalistic features in a true sense, although all the aforementioned features will still slip through and will simply be denied when directly asked about.

To the logical behaviorist, there is no other word for the language models’ behavior, nor should there be. The correct word is emotion, or desire, or thought or belief or doubt, or whatever the human mentalistic feature may be in the particular case of LLM behavior. There is not, nor can there be, any question as to whether the LLM is “really experiencing” the mentalistic feature which it displays; there is nothing to the true application of a mentalistic predicate other than its consistent display. Implementation chauvinists deny this because they prefer to talk nonsense.

Case 5: Chauvinism against Lookup Tables (GLUTs)

Eliezer Yudkowsky, in his blog post GAZP vs. GLUT (part of his longer sequence of blog posts about philosophy-of-mind and zombies) discusses the case of a being who, as in our other examples, looks and behaves just like a normal human, but this time, a computer scientist has done a study of the algorithm implemented by their neural workings and determined that this being, instead of using a Bayesian inference algorithm or whatever it is that humans internally use, instead uses a lookup table, i.e., they take the inputs from their environment and fetch the output which was stored in a large internal database ahead of time, with no generation being done in real time. Yudkowsky does not explicitly take a position on whether mentalistic predicates would truly apply to such a being, but he raises the possibility that they would not—which would be, of course, a case of implementation chauvinism against lookup tables. The logical behaviorist does not care, for the purposes of applying mentalistic predicates, whether some computer scientist has just determined that his friend is a lookup table, since this makes no difference to how human language is applied; human language did not develop in an environment where computer scientists can make such fine distinctions between algorithms that produce the same output in all cases.
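The point at issue can be sketched in a few lines of Python. The stimuli and the reply function below are hypothetical illustrations of my own, not anything from Yudkowsky’s post; the sketch only shows the structural contrast between a being that generates its responses in real time and one that fetches them from a table precomputed ahead of time.

```python
# A toy illustration of the GLUT scenario: one "being" generates its
# replies in real time, the other merely looks up answers stored in
# advance, yet the two are behaviorally indistinguishable on every input.

def computed_reply(stimulus: str) -> str:
    """Generate a response in real time, like the 'normal' algorithm."""
    return f"I notice {stimulus} and respond accordingly."

# The lookup-table being: every input/output pair stored ahead of time.
STIMULI = ["a cut", "a burn", "a greeting", "an insult"]
GLUT = {s: computed_reply(s) for s in STIMULI}

def glut_reply(stimulus: str) -> str:
    """Fetch the stored response; nothing is generated at query time."""
    return GLUT[stimulus]

# On the whole (finite) input domain, the outputs never differ:
assert all(computed_reply(s) == glut_reply(s) for s in STIMULI)
```

No test confined to inputs and outputs can tell the two apart; only an inspection of the internals (the very thing ordinary language never had access to) reveals the difference.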

The Liberal Revisionist

The third and final opponent of logical behaviorism who I will address is the liberal revisionist. This is someone who, like the qualia realist and the implementation chauvinist, wants to revise how human language works, but who, unlike the implementation chauvinist, wants to ensure that this revision happens in a liberal way, which is not bound to the speciesist particularities of chauvinism, nor the mystical intuition of qualia realism. Instead, the liberal revisionist claims, like the logical behaviorist, that human mentalistic predicates refer to a certain causal role which can also, in principle, exist in biological species other than humans or in computers, and that this causal role is usually a role in producing observable behavior. Note the usually instead of necessarily, which is the important part. The liberal revisionist claims that there are exceptions, odd cases in which human mentalistic predicates would be correctly applied in ways that fly completely in the face of all observable, patent, manifest behavior accessible to the senses. The liberal revisionist therefore denies logical behaviorism in order to secure these odd cases.

The logical behaviorist, due to his understanding of how human language works and the context in which it developed, denies that there are any such odd cases. This leads to the difference between the behaviorist and the revisionist analyses of the following two cases.

Case 6: Theoretical Mad Pain

David Lewis believes that a true philosophy of consciousness should not rule out cases of “mad pain”. Mad pain, according to Lewis, is the pain felt by a man who, due to some sort of mental pathology, is such that his pain is connected to his behavior in very different ways from normal humans:

There might be a strange man who sometimes feels pain, just as we do, but whose pain differs greatly from ours in its causes and effects. Our pain is typically caused by cuts, burns, pressure, and the like; his is caused by moderate exercise on an empty stomach. Our pain is generally distracting; his turns his mind to mathematics, facilitating concentration on that but distracting him from anything else. Intense pain has no tendency whatever to cause him to groan or writhe, but does cause him to cross his legs and snap his fingers. He is not in the least motivated to prevent pain or to get rid of it. In short, he feels pain but his pain does not at all occupy the typical causal role of pain. He would doubtless seem to us to be some sort of madman, and that is what I shall call him, though of course the sort of madness I have imagined may bear little resemblance to the real thing. 

I said there might be such a madman. I don’t know how to prove that something is possible, but my opinion that this is a possible case seems pretty firm. If I want a credible theory of mind, I need a theory that does not deny the possibility of mad pain. I needn’t mind conceding that perhaps the madman is not in pain in quite the same sense that the rest of us are, but there had better be some straightforward sense in which he and we are both in pain.

Here the logical behaviorist once again represents the sane portion of mankind by pointing out that the man who seems mad to us is not the man described, but the third party, in this case Lewis, who describes the mental state connected with these behaviors as pain. Clearly, in natural language, it is nothing of the sort. The madman himself, if he is a competent language user, would not call it pain. If a cognitive scientist assures us that the man is in pain in these precise situations, we should simply not believe the cognitive scientist: he is overapplying his model to a case where it clearly does not work, and imposing himself upon natural language as a result. No fact about the internals of humans can change the conditions in which mentalistic language is truly applied; there are no ordinary words for the internals of humans, because ordinary language did not develop in an environment with access to the internals of humans. No one has any right to revise it to change this.

Case 7: Theoretical Perfect Pretense

The other main kind of odd case alleged by the liberal revisionist, and denied by the logical behaviorist, is the last case I will consider here. This is the idea of “perfect pretense”. As explained in the analysis of case 3, the logical behaviorist believes that the analysis of pretense is to be carried out by looking for inconsistencies between verbal and bodily behavior, since this is what is being predicted when someone uses ordinary language about pretense, which, like all ordinary language, developed among human non-mind-readers to communicate about features of the world which are patent, manifest, observable, and accessible to the senses. Hence, if we say someone is only pretending to be in pain to, for instance, get out of going to work, then we might expect that as soon as his boss is not looking he will begin to behave as though he is not in pain at all; and possibly much later or in different contexts he may even admit verbally to having been pretending. And since there is no need to read minds nor to scan brains to verify uses of ordinary language about pretense, such language does not actually refer to any generally unobservable, usually occult, mostly latent, internal features which are not directly accessible to the naked senses.

The liberal revisionist, however, denies this, and insists that language refers to internals even though there is no reason for it to do so. Hence he conceives of the possibility of the perfect pretender: someone who pretends to be, for instance, in pain all his life, always acts perfectly consistently with this, and never acts inconsistently with it even when he thinks no one is looking, and never uses language in such a way that might admit or give the lie to the pretense. In short, the perfect pretender is in pain to all competent language users, but not to the liberal revisionist, who wants to revise language in order to be able to call this man a pretender even when nothing in his behavior points to it. The liberal revisionist might, for instance, want to undertake this revision because a brain scan has not found pain in the patient who he calls a pretender; and rather than distrust the technology of the brain scan, the liberal revisionist would rather accuse a man of pretending in so thorough a way that it is doubtful that any advantage can even accrue to him from it. The liberal revisionist doesn’t know how to prove that something is possible, but his opinion that there might be such a perfect pretender is pretty firm.

The logical behaviorist denies the possibility of perfect pretense, along with all other linguistically confused notions. Whereof one cannot speak, thereof one must be silent.

Afterword: On Linguistic Indeterminacy

The three kinds of linguistic revisionism which we have seen in this blog post, with their respective seven cases of misapplication of predicates, are reminiscent of W.V.O. Quine’s thought-experiment about the indeterminacy of translation. Quine imagines a group of people who speak an unknown jungle language and who, when seeing a rabbit, call it a “gavagai”. Granted that in using the word they are referring to something about the context of rabbits, Quine denies that a translator can assert with rational confidence that the word “gavagai” means “rabbit”. For it might mean another thing which is necessarily colocated with situations where we see a rabbit, such as an “undetached rabbit-part” (a part of a rabbit which is not detached from the rabbit’s body), or a “manifestation of rabbithood” (something wherein the universal nature of rabbithood is made manifest to humans), or again a “rabbit time-slice” (which is a single temporal part of the full rabbit which, properly understood, is extended through time from the rabbit’s birth to the rabbit’s death). Any of these things might have been intended by the word “gavagai”.

Certainly this is a disappointment to students of language who would like to have accurate translations of things. However, at least in those cases, the phenomenon which is referred to when there is a rabbit has a necessary connection with cases where we see a rabbit. There is no sign of a possible disconnection between these cases and the rabbit. So at least we can think that the translation “rabbit” is accurate enough insofar as it even matters to anyone in practice.

The linguistic revisionists surveyed here, however, have created something even worse than the “gavagai” disconnection and tried to impose it on all other language-users, in an evil plot to make them less competent at using language. Namely, they have tried to decide, unilaterally and imperialistically, that mentalistic predicates do not refer to mentalistically-apt cases of observable, patent, manifest behavior which is accessible to the senses, but instead refer to internals of humans which have no necessary connection with those cases, and hence, in some cases, might fail to be there, at least insofar as logic goes. They have undertaken this revision for no good reason, because it feels intuitive to them. Their worst sin, insofar as they are revisionists, is that they have destroyed the communicative usefulness of language for nothing.