On Rationality, Correctness, and Expertise

The confusion between rational justification and truth or correctness is common enough in public discourse that it bears noting: beliefs that are rationally justified sometimes turn out to be untrue, and beliefs without rational justification occasionally turn out to be true. Similarly, the most rational decision is not guaranteed to be the correct one, and not all decisions that proved correct were rational. This confusion becomes most pernicious when members of the public run afoul of it in determining what to believe and in whom to place their trust. Take a case in which a preponderance of scientific opinion favors the truth of a certain claim X, but some outliers take the heterodox view that X is false. In the fullness of time, X turns out to be false. In such situations, many people are apt, with the bias of hindsight, to imagine that this shows the heterodox opinion was the more rational, and that those who held it were the better judges of truth. In both respects, this line of argument is mistaken.

To determine the rationality of a claim, one must examine the quality and weight of the arguments and evidence for and against it in order to reach some estimate of the probability of its being true. Likewise, to determine the quality of the judgement of those who make a given claim, one must examine the evidence and arguments they proffer in support of it. Suppose that, according to sound reasoning on the basis of the best evidence available at the time, the probability of X being true was somewhere in the neighborhood of 80%. There is still a roughly 20% chance that X will turn out to be false. One can hold the most probable belief on the basis of the soundest reasoning and the highest quality evidence and still turn out to be wrong. Conversely, one can hold the least probable belief on the basis of poor reasoning from slim or no evidence and nonetheless turn out to be right. That is how probability, as opposed to certainty, works. Scientific findings come with calculations of error and degrees of confidence, and they are always subject to potential falsification by more or better data. The rational belief to hold is the one that is, at the time, most probable given the best available evidence. That people who bet on the improbable will sometimes turn out, in hindsight, to have been right is no vindication of their rationality or judgement.
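The arithmetic behind this can be made concrete with a small simulation. Only the 80% figure comes from the example above; everything else (the trial count, the random seed, the variable names) is an illustrative assumption, not part of the argument:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

TRIALS = 10_000  # hypothetical repetitions of an "80% probable" situation
P_TRUE = 0.80    # probability the best available evidence assigns to claim X

orthodox_correct = 0   # a believer who always follows the evidence
heterodox_correct = 0  # a believer who always bets against it

for _ in range(TRIALS):
    x_is_true = random.random() < P_TRUE
    if x_is_true:
        orthodox_correct += 1
    else:
        heterodox_correct += 1

print(f"follows the evidence: right {orthodox_correct / TRIALS:.1%} of the time")
print(f"bets against it:      right {heterodox_correct / TRIALS:.1%} of the time")
```

The contrarian is right roughly two thousand times out of ten thousand, which is exactly the point: being right in particular instances is entirely compatible with being the far less reliable believer overall.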


It is often said that even a broken clock is right twice a day, but that does not mean one should trust it as a generally reliable timepiece. The judgement of those whose beliefs or decisions turned out to be correct, even in salient cases where much was at stake, should not be held in esteem merely on the basis of their correctness in a particular instance. Rather, their judgement should be evaluated according to the rationality of the decisions or beliefs in question. By this metric alone can we establish the likelihood of sound judgement going forward.

One problem we face, when it comes to claims of public interest that reside within the special sciences or other highly technical fields, is that members of the general public, and even experts in unrelated fields, are often not equipped to evaluate such specialized evidence and argumentation. Even if, given enough time and effort, a person could amass the proficiency and knowledge needed to evaluate claims in one specialized field, doing so for every domain that bears on personal and political decisions would be prohibitively time-consuming. Where one cannot evaluate claims directly, then, one must ultimately rely upon trust in the testimony of others who are experts in the relevant specialty. But how does one rationally determine in whom to place this trust?

The methods of evaluating in which expert opinions to place one's trust are somewhat less well defined than those for evaluating claims directly, but there are analogous features, and they likewise yield an assessment of probability. One might start by noting that an opinion held by an expert in the relevant domain is generally more reliable than one held by a non-expert, an opinion held by multiple experts more reliable still, and an opinion held by a majority of experts the most reliable of all. Other factors also bear upon the reliability of expert opinion. Is the opinion held by a body or organization of experts with objective standards, methods, and procedures that serve both as a rational process for arriving at truth and as checks and balances to keep the experts honest? Such features include methodological standards of scientific inquiry, judicial standards of evidence, competition between opinions or adversarial processes, peer review, codes of ethics, and the like. These and similar factors jointly bolster the reliability of an expert opinion.

Despite this, just as the most probable claims can nonetheless turn out to be false, the most reliable expert opinions can sometimes turn out to be wrong; indeed, they sometimes turn out to be spectacularly wrong on issues of great importance and consequence. Does this mean these experts should not be trusted, or that other experts who lack the hallmarks of reliability mentioned above, but who happened to be right on some salient issue, should be trusted over those who possess them? No, for the same reason advanced in the case of the probability of claims themselves. That the more reliable expert opinion sometimes turns out to be wrong, and the less reliable opinion sometimes turns out to be right, is not an argument for the rationality of choosing the less reliable one. The important question is not whether the best available expert opinion will ever turn out to be wrong, but whether it is the most generally reliable source of truth among all necessarily imperfect sources. In consistently accepting the most reliable expert opinions, one will not always be correct, but one will be correct more often than if one does not consistently accept them.
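A toy model makes the same point for expert testimony. Suppose, with purely hypothetical track records, that one source is right 90% of the time and another 55% of the time, independently. The less reliable source will accumulate salient "vindications", questions where it was right and the reliable source was wrong, yet trusting it consistently remains the worse policy:

```python
import random

random.seed(0)  # fixed seed for reproducibility

QUESTIONS = 10_000   # hypothetical stream of specialized questions
RELIABLE_P = 0.90    # assumed accuracy of the opinion with the hallmarks of reliability
UNRELIABLE_P = 0.55  # assumed accuracy of the outlier opinion

reliable_right = 0
unreliable_right = 0
salient_vindications = 0  # questions where the outlier was right and the reliable source wrong

for _ in range(QUESTIONS):
    r_ok = random.random() < RELIABLE_P
    u_ok = random.random() < UNRELIABLE_P
    reliable_right += r_ok
    unreliable_right += u_ok
    if u_ok and not r_ok:
        salient_vindications += 1

print(f"reliable source right on   {reliable_right} of {QUESTIONS} questions")
print(f"unreliable source right on {unreliable_right} of {QUESTIONS} questions")
print(f"outlier 'vindications':    {salient_vindications}")
```

Hindsight fixes our attention on the vindication cases; the totals, which only a consistent policy of trust can exploit, tell the real story.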

The search for truth and the process of critical decision making are both inherently imperfect tasks that involve imperfect reasoners working from imperfect and incomplete experimental and observational data. Given this, the best that can be done to navigate these sometimes dangerous waters is to hold to what is most probable given sound reasoning from the best available evidence, or in cases where specialized expertise is needed, hold to what is likely the most reliable expert opinion given the sorts of objective considerations previously mentioned. The more this is put into practice, the more rational our worldview will be, and the better our decisions, both individually and collectively.
