That human beings can be mistaken in anything they think or do is a proposition known as fallibilism. Stated abstractly like that, it is seldom contradicted. Yet few people have ever seriously believed it, either.
That our senses often fail us is a truism; and our self-critical culture has long ago made us familiar with the fact that we can make mistakes of reasoning too. But the type of fallibility that I want to discuss here would be all-pervasive even if our senses were as sharp as the Hubble Telescope and our minds were as logical as a computer. It arises from the way in which our ideas about reality connect with reality itself—how, in other words, we can create knowledge, and how we can fail to.
The trouble is that error is a subject where issues such as logical paradox, self-reference, and the inherent limits of reason rear their ugly heads in practical situations, and bite.
Paradoxes seem to appear when one considers the implications of one’s own fallibility: A fallibilist cannot claim to be infallible even about fallibilism itself. And so, one is forced to doubt that fallibilism is universally true. Which is the same as wondering whether one might be somehow infallible—at least about some things. For instance, can it be true that absolutely anything that you think is true, no matter how certain you are, might be false?
What? How might we be mistaken that two plus two is four? Or about other matters of pure logic? That stubbing one’s toe hurts? That there is a force of gravity pulling us to earth? Or that, as the philosopher René Descartes argued, “I think, therefore I am”?
When fallibilism starts to seem paradoxical, the mistakes begin. We are inclined to seek foundations—solid ground in the vast quicksand of human opinion—on which to try to base everything else. Throughout the ages, the false authority of experience and the false reassurance of probability have been mistaken for such foundations: “No, we’re not always right,” your parents tell you, “just usually.” They have been on earth longer and think they have seen this situation before. But since that is an argument for “therefore you should always do as we say,” it is functionally a claim of infallibility after all. Moreover, look more closely: It claims literal infallibility too. Can anyone be infallibly right about the probability that they are right?
But wait. Have we now gotten lost in the paradox of being wrong about being wrong? There really is such a thing as existing knowledge—including a vast amount of useful, salutary truth. Parents really do know more than children about everyday dangers; your physician does know more than a passing hobo about your illness. Although, certainly, all concerned are fallible—they could be wrong—isn’t it rational to play the odds? To defer to the opinion of the experts who have a lot more knowledge about the matter? In other words, isn’t it better to act as if one considered them infallible about it, even though they are not?
No. That is not only an irrational answer but, catastrophically, the wrong question. I’ll return to it below. But first, consider infallibility itself.
Ascribing a sphere of infallibility to a parent or expert has the same logic as the Roman Catholic Church’s doctrine about the pope: It likewise considers him infallible only under certain narrowly defined circumstances, called ex cathedra (metaphorically “from the throne”). So, consider this thought experiment: You seriously believe in papal infallibility. One day, an atheist friend gleefully tells you that the pope has said something which, after due consideration, you decide must be false: “There is no force of gravity.” Immediately, it becomes vital for you to know whether the pope declared this ex cathedra. For if he did, you would have to accept that you are mistaken about gravity, and act accordingly, even if you never managed to understand the mechanics of how that might be so. Because for you, ideas are about something—important precisely because they have consequences for how you think, feel, and act. And so you would have to drop some assumptions that you hitherto considered true incontrovertibly—or even infallibly.
Furthermore, one cannot seriously believe that the pope is infallible while also believing any rival religion, or atheism. So the implications of papal infallibility, even more than parental infallibility, are sweeping. Despite its narrow nominal scope, it is functionally equivalent to the entire gamut of Roman Catholic doctrine. But there is another class of implications—even more sweeping—in the opposite direction.
Consider the steps you are obliged to follow, from hearing of an ex cathedra declaration to believing its content.
A passing hobo tells you that he saw the pope making the declaration ex cathedra. Do you therefore accept that there is no force of gravity? Obviously not: That would involve assuming that the hobo was infallible—which would contradict the church’s teachings. And the same would hold even if an archbishop were to visit you and swear that he had witnessed it too, and stated his expert opinion that it met the requirements for being ex cathedra. Since the doctrine does not ascribe infallibility to archbishops, you would still not be required to accept the claim about gravity. Thus the doctrine of infallibility has made you take the fallibility of archbishops more seriously than you otherwise might. Even if the pope himself were to swear that his claim about gravity was strictly ex cathedra, you would not be forced, by your faith, to believe it. The doctrine of papal infallibility does not say that the reminiscences of a pope are infallible—unless they are ex cathedra reminiscences.
So your very faith in papal infallibility has led you to within touching distance of one of the cornerstones of scientific rationality: nullius in verba—“take no one’s word for it”—the motto of the Royal Society.
But now, what if you personally witnessed the ex cathedra statement?
So, there you were, visiting the Vatican, when you took a wrong turn and found yourself witnessing the pope as he solemnly declared that there is no force of gravity. You happened to have purchased, from the souvenir shop, a checklist of the official requirements for a declaration to count as ex cathedra, and you took the trouble to verify that each one was met. Yet none of this constitutes direct observation of what you need to know. Did you observe infallibly that it was the pope? Did you do a DNA test? Can you be certain that souvenir checklists never contain typos? And how is your church Latin? Was your translation of the crucial phrase “no force of gravity” infallible? Have you never mistranslated anything?
The fact is, there’s nothing infallible about “direct experience” either. Indeed, experience is never direct. It is a sort of virtual reality, created by our brains using sketchy and flawed sensory clues, given substance only by fallible expectations, explanations, and interpretations. Those can easily be more mistaken than the testimony of the passing hobo. If you doubt this, look at the work of psychologists Christopher Chabris and Daniel Simons, and verify by direct experience the fallibility of your own direct experience. Furthermore, the idea that your reminiscences are infallible is also heresy by the very doctrine that you are faithful to.
I’ll tell you what really happened. You witnessed a dress rehearsal. The real ex cathedra ceremony was on the following day. In order not to make the declaration a day early, they substituted for the real text (which was about some arcane theological issue, not gravity) a lorem-ipsum-type placeholder that they deemed so absurd that any serious listener would immediately realize that that’s what it was.
And indeed, you did realize this; and as a result, you reinterpreted your “direct experience,” which was identical to that of witnessing an ex cathedra declaration, as not being one. Precisely by reasoning that the content of the declaration was absurd, you concluded that you didn’t have to believe it. Which is also what you would have done if you hadn’t believed the infallibility doctrine.
You remain a believer, serious about giving your faith absolute priority over your own “unaided” reason (as reason is called in these contexts). But that very seriousness has forced you to decide first on the substance of the issue, using reason, and only then whether to defer to the infallible authority. This is neither fluke nor paradox. It is simply that if you take ideas seriously, there is no escape, even in dogma and faith, from the obligation to use reason and to give it priority over dogma, faith, and obedience.
The real pope is unlikely to make an ex cathedra statement about gravity, and therefore you may be lucky enough never to encounter this particular case of the dilemma. Also, the real pope doesn’t just pull ex cathedra statements out of a hat. They’re hammered out by a team of expert advisors trying their best to weed out mistakes, a process structurally not unlike peer review. But if your faith in papal infallibility depends on reassuring yourself of things like that, then that just goes to show that for you, reason takes priority over faith.
It is hard to contain reason within bounds. If you take your faith sufficiently seriously you may realize that it is not only the printers who are fallible in stating the rules for ex cathedra, but also the committee that wrote down those rules. And then that nothing can infallibly tell you what is infallible, nor what is probable. It is precisely because you are fallible, with no infallible access to the infallible authority, no infallible way of interpreting what the authority means, and no infallible means of identifying an infallible authority in the first place, that infallibility cannot help you before reason has had its say.
A related useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith, in fact believe falsehoods. Hence, faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: “the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”
You know that Medawar’s advice holds for all ideas, not just scientific ones, and, by the same argument, for all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love.
How should this no-exceptions fallibilism play out when the physician suggests a treatment? The right question is not “who is more likely to be right, the physician or I?” but “has this idea been judged rationally, by its content?” Which means, in particular, has it been subjected to sufficiently severe attempts to detect and eliminate errors—both by explanatory argument and by rigorous experiment? If you think it has, then your opinion and the physician’s should become the same, and the issue of deference should not arise, nor should the need for anyone to claim effective infallibility.
On the other hand, if you suspect that the physician has not given enough thought to some feature that makes your case unusual, it would be irrational to defer. The physician’s greater knowledge is irrelevant until you are satisfied with the way that idea has been taken into account. And whether the idea was originally suggested to you by a passing hobo or a physicist makes no difference, either.
This logic of fallibility, discovered and rediscovered from time to time, has had profound salutary effects in the history of ideas. Whenever anything demands blind obedience, its ideology contains a claim of infallibility somewhere; but wherever someone believes seriously enough in that infallibility, they rediscover the need for reason to identify and correctly interpret the infallible source. Thus the sages of ancient Judaism were led, by the assumption of the Bible’s infallibility, to develop their tradition of critical discussion. And in an apparently remote application of the same logic, the British constitutional doctrine of “parliamentary sovereignty” was used by 20th-century judges such as Lord Denning to develop an institution of judicial review similar to that which, in the United States, had grown out of the opposite doctrine of “separation of powers.”
Fallibilism has practical consequences for the methodology and administration of science, and for government, law, education, and every aspect of public life. The philosopher Karl Popper elaborated on many of these. He wrote:
The question about the sources of our knowledge . . . has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’
It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place.
Popper’s answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is another apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas, by eliminating errors from existing ones.
“Our whole problem,” said the physicist John Wheeler, “is to make the mistakes as fast as possible.” This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should “let our theories die in our place,” as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike “botched”; courts should not call every medical tragedy malpractice, even if it’s true that they “shouldn’t have happened” in the sense that lessons can be learned to prevent them from happening again. “We are all alike,” as Popper remarked, “in our infinite ignorance.” And this is a good and hopeful thing, for it allows for a future of unbounded improvement.
Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond “oh dear, I’ll just have to die, then.”
The theory of knowledge is a tightrope that is the only path from A to B, with a long, hard drop for anyone who steps off on one side into “knowledge is impossible, progress is an illusion” or on the other side into “I must be right, or at least probably right.” Indeed, infallibilism and nihilism are twins. Both fail to understand that mistakes are not only inevitable, they are correctable (fallibly). Which is why they both abhor institutions of substantive criticism and error correction, and denigrate rational thought as useless or fraudulent. They both justify the same tyrannies. They both justify each other.
I must now apologize for trying to trick you earlier: All the ideas that I suggested we might know infallibly are in fact falsehoods. “Two plus two” of course isn’t “four,” as you’d discover if you wrote “2+2” in an arithmetic test when asked to add two and two. If we were infallible about matters of pure logic, no one would ever fail a logic test either. Stubbing your toe does not always hurt if you are focused on some overriding priority like rescuing a comrade in battle. And as for knowing that “I” exist because I think—note that your knowledge that you think is only a memory of what you did think, a second or so ago, and that can easily be a false memory. (For discussions of some fascinating experiments demonstrating this, see Daniel Dennett’s book Brainstorms.) Moreover, if you think you are Napoleon, then the person you think must exist because you think doesn’t exist.
And the general theory of relativity denies that gravity exerts a force on falling objects. The pope would actually be on firm ground if he were to concur with that ex cathedra. Now, are you going to defer to my authority as a physicist about that? Or decide that modern physics is a sham? Or are you going to decide according to whether that claim really has survived all rational attempts to refute it?
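(For readers who want the claim made concrete, here is a sketch of the standard way general relativity expresses it. A freely falling object follows a geodesic of curved spacetime:

```latex
\frac{d^{2}x^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}_{\alpha\beta}\,
    \frac{dx^{\alpha}}{d\tau}\,
    \frac{dx^{\beta}}{d\tau} = 0
```

There is no force term on the right-hand side: what Newton modeled as a gravitational force pulling the object appears here as spacetime curvature, encoded in the connection coefficients \(\Gamma^{\mu}_{\alpha\beta}\). The falling object is simply following the straightest available path.)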
David Deutsch, internationally acclaimed for his seminal publications on quantum computation, is a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University.