Wednesday, August 25, 2010

Why I'm Not A Bayesian

Every once in a while, I'll get into a bullshitting session with a grad school friend and they'll ask me what my Bayesian probability estimate would be of such-and-such claim turning out to be true. What do I think the probability of God existing is? Of the Singularity happening? Of alien life existing somewhere?

I always have to explain, in buzz-kill-ish fashion almost worthy of Buzz Killington, that I don't think there are probabilities in the sense assumed by the question. To be more precise: I think the probability calculus is a marvelous mathematical tool for juggling frequencies, but I strongly reject the claim that it can be used to model "degrees of belief" or justification or confirmation or rational belief revision or anything of the kind. So when one person says that the probability that God exists is .01% and another says, "really, you think it's that high? I'd say .0001%," and so on, I think they're playing a deeply confused and silly game that doesn't really shed light on anything, and I decline to play. I'm happy to say things like "given the overall evidence, it's irrational to think that God exists," or "given the overall evidence, my best guess is that the Singularity will not happen," or whatever, but that's where I'll leave it. There may be things such that I'd be much more surprised if they turned out to be true than if other things did, and some things I'm more likely than others to constantly scrutinize new evidence and new arguments about to make sure I'm right. But--and it's good to repeat this, because I find that many people whose philosophical training has simply assumed Bayesianism are so shocked when you say that you reject it that they assume you must be saying something else--I don't think anything like the probability calculus is particularly relevant to the regulation of rational belief formation or rational belief revision.

So, why do I think this strange thing?

I've covered a lot of this ground here before, so most of this will be linking and summing up.

For one thing, I think that one of the most obvious claims in all of epistemology is that if you're rationally entitled to believe all of the premises of a valid deductive argument, and you know that the argument is valid, you're rationally entitled to believe the conclusion on that basis. Of course, it could be that you *were* rationally entitled to believe all of the premises until you reached the conclusion, but that the absurd conclusion makes continued acceptance of the premises irrational. That's fine. The relationship works in both directions. As we constantly teach our introductory logic students, if you're confronted with an apparently valid deductive argument connecting premises you accept with a conclusion you reject, you can't just say, "oh, well, it doesn't matter. Even if my beliefs do entail that other thing, I still believe what I believe and not that." You only have three choices: you can re-examine and ultimately reject a premise, you can find a flaw in the reasoning connecting the premises to the conclusion, or you can go ahead and accept the conclusion after all. There's a lot more to rational belief revision than that--a lot more--but that's the core.
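In bare-bones symbols (my own shorthand, nothing official), write $J(\phi)$ for "one is rationally entitled to believe $\phi$" and $K(\phi)$ for "one knows that $\phi$." The principle is then roughly:

$$J(P_1) \wedge \cdots \wedge J(P_n) \wedge K(P_1, \ldots, P_n \vDash C) \rightarrow J(C)$$

The "both directions" point is just that when $C$ is absurd, the rational response may be to give up one of the $J(P_i)$ rather than to accept $J(C)$.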

Not obvious enough for you? OK, how about the following, which I think is an even more basic and obvious epistemic principle. If you know that something absolutely can't be true, you shouldn't believe it.

Epistemic principles really don't get a lot more intuitively compelling than that, do they? Well, cases like the Lottery Paradox and the Preface Paradox show that the two principles just laid out are in direct conflict with Bayesianism. See here for a more detailed explanation.
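To see the conflict with some toy numbers (this is the standard way of setting up the Lottery Paradox, nothing original to me): take a fair lottery with 1,000 tickets and exactly one guaranteed winner, and suppose, as threshold versions of Bayesianism have it, that you rationally believe whatever you assign a probability above some cutoff--say 0.99. Then for each ticket $i$:

$$P(\text{ticket } i \text{ loses}) = 1 - \frac{1}{1000} = 0.999 > 0.99$$

So you rationally believe, of each ticket, that it will lose. Those 1,000 beliefs are the premises of a trivially valid argument whose conclusion is that every ticket loses. But

$$P(\text{every ticket loses}) = 0$$

since exactly one ticket must win. So the threshold Bayesian either abandons the principle that you may believe the conclusion of a valid argument whose premises you rationally believe, or else ends up believing something known to be impossible--violating one of the two principles above either way.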

Another problem is that it seems deeply, crazily irrational to me to think that we can be absolutely certain that our initial best guesses about logic must be right. One might think (as I do) that logical laws are a matter of universal truth preservation, so that logical truth supervenes on all other kinds of truth (facts about protons and electrons, tables and chairs, dogs and cats) and is thus vulnerable to possible revision in light of new developments elsewhere in our overall theory of the world. Or one might think (as many others do) that logical laws encode certain "rules of use" implicit in our "language" or some such thing. Neither of those stories (nor any other remotely plausible view about logic) gives us any reason to think that we can be absolutely sure about logic. What about linguistic or psychological "rules of use" makes you think that we're infallible in our epistemic access to them, that we're incapable of making mistakes about them? Still more so, if our current beliefs about which logical laws there are encode our best theory--relative to the level of generality and abstraction at which formal systems operate--of how Absolutely Everything is, it seems beyond foolish to think that we can be utterly and infallibly certain about *that*.

When Frege and Russell, a bit over a century ago, codified the system we now think of as "classical logic," they were doing exactly what their non-classical opponents have done since: attempting to capture a bunch of intuitions. Since then, "classical logic" has been challenged on the basis of a bunch of other intuitions--about referring to non-existents (free logics), about what it takes for one claim to really "follow" from another (relevance logics), and so on--and the challengers have tried to capture these intuitions in formal systems of their own. Some reasons to doubt very central assumptions built into not only classical logic but also the older syllogistic logic have been around since ancient Greece--the "sea battle" problem about future contingents, the Liar Paradox, problems about vagueness, etc.--and there still isn't any clear consensus about what to make of them.

Now, I find myself in the orthodox camp here--I think Frege and Russell's best guesses are still pretty much our best guesses--but the idea that we're rationally entitled to be absolutely certain, that there's no room for doubt, that various objections to the classical view don't deserve at least some serious epistemic weight and consideration, seems utterly indefensible to me.

Why do I stress this so much?

Well, once again, this obvious-seeming view is utterly incompatible with Bayesianism.
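To spell out why (granting the standard axioms of the probability calculus): those axioms require that every classical tautology receive probability 1, and that probability never decrease across entailment:

$$P(\top) = 1, \qquad A \vDash B \;\Rightarrow\; P(A) \le P(B)$$

So a coherent Bayesian must assign, for instance, $P(\phi \vee \neg\phi) = 1$ to every instance of Excluded Middle--absolute certainty that conditionalization can never revise downward, in exactly the place where I've just argued certainty is indefensible.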

4 comments:

gwern said...

I've read both this and the other, and I don't understand what your objection is. Classical logic seems entirely compatible with Bayesianism - plug probabilities of 1 and 0 into Bayes's theorem and out fall your old classical results.

Ben said...

Of course classical logic is compatible with Bayesianism. Did I say anything that suggested that I thought otherwise?
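(To spell out the point, which I happily grant: plug $P(H) = 1$ into Bayes's theorem, $P(H \mid E) = P(E \mid H)\,P(H)/P(E)$, and you get $P(H \mid E) = 1$ for any evidence $E$ with $P(E) > 0$. Extreme probabilities behave just like classical truth values--and, note, no evidence can ever dislodge them, which is a preview of my first objection below.)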

My objections are (1) that Bayesianism is incompatible with logical fallibilism, and (2) that I think (a) one ought never believe something that one knows can't be true (e.g. a contradiction), and (b) that if one is rationally entitled to believe all of the premises of a valid deductive argument, and one knows that the conclusion follows from those premises, then one is rationally entitled to believe the conclusion of the argument on that basis--and the Lottery and Preface Paradoxes demonstrate that (a) and (b) are jointly incompatible with Bayesianism.
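(To put toy numbers on the Preface case: a careful author is rationally entitled to believe each of the, say, 300 individual claims in her book. But if her credence in each is 0.99 and the claims are roughly independent, her credence in their conjunction is about $0.99^{300} \approx 0.05$, so she's also rationally entitled to believe--as prefaces standardly concede--that the book contains at least one error. She thus rationally believes a set of claims that she knows can't all be true together, which is just what (a) plus (b) rule out.)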

gwern said...

Well, I'm not entirely sure what you are saying. You seem to know the subject much better than I do, and are very free with the offhand allusions and comments, so that I quickly get lost.

For example, when you say "(a) one ought never believe something that one knows can't be true," I'm not sure whether or how I would put this in Bayesian terms. It must be obvious to you, since you conclude that it and (b) are jointly incompatible with Bayesianism.

(I'm not even certain what 'logical fallibilism' is. Googling, one of the first hits is http://blogandnot-blog.blogspot.com/2010/05/few-thoughts-on-logical-fallibilism.html , which never seems to define it.)

Ben said...

Fallibilism means always keeping the door open to the possibility that your views are wrong, being willing to consider and weigh new arguments and new evidence for contrary views, never being absolutely certain about anything, etc. I guarantee that if you google "fallibilism" on its own, my old posts won't be among the first results.

"Logical fallibilism" is just fallibilism about one's belief in basic logical principles--e.g. even if you accept the Law of the Excluded Middle, you shouldn't be absolutely dogmatically certain about it, and you should carefully weigh and consider new arguments against it that people might bring up based on, say, problems about vague predicates, or quantum physics, or whatever other arguments people might bring up to argue that there are cases in which Excluded Middle breaks down.