Every once in a while, I'll get into a bullshitting session with a grad school friend and they'll ask me what my Bayesian probability estimate would be of such-and-such claim turning out to be true. What do I think the probability of God existing is? Of the Singularity happening? Of alien life existing somewhere?
I always have to explain, in buzz-kill-ish fashion almost worthy of Buzz Killington, that I don't think there are probabilities in the sense assumed by the question. To be more precise: I think the probability calculus is a marvelous mathematical tool for juggling frequencies, but I strongly reject the claim that it can be used to model "degrees of belief" or justification or confirmation or rational belief revision or anything of the kind. The game people are playing when one says that the probability that God exists is .01% and the other says, "really, you think it's that high? I'd say .0001%," and so on is a deeply confused and silly game that doesn't really shed light on anything, so I decline to play. I'm happy to say things like "given the overall evidence, it's irrational to think that God exists," or "given the overall evidence, my best guess is that the Singularity will not happen," or whatever, but that's where I'll leave it. There may be things such that I'd be much more surprised if they turned out to be true than I would be if other things did, and some things I'm more likely than others to constantly scrutinize new evidence and new arguments about to make sure I'm right, etc. But--and it's good to repeat this, because I find that many people whose philosophical training has simply assumed Bayesianism end up being so shocked when you say that you reject it that they assume you must be saying something else--I don't think anything like the probability calculus is particularly relevant to the regulation of rational belief formation or rational belief revision.
So, why do I think this strange thing?
I've covered a lot of this ground here before, so most of this will be linking and summing up.
For one thing, I think that one of the most obvious claims in all of epistemology is that if you're rationally entitled to believe all of the premises of a valid deductive argument, and you know that the argument is valid, you're rationally entitled to believe the conclusion on that basis. Of course, it could be that you *were* rationally entitled to believe all of the premises until you reached the conclusion, but that the absurd conclusion makes continued acceptance of the premises irrational. That's fine. The relationship works in both directions. As we constantly teach our introductory logic students, if you're confronted with an apparently valid deductive argument connecting premises you accept with a conclusion you reject, you can't just say, "oh, well, it doesn't matter. Even if my beliefs do entail that other thing, I still believe what I believe and not that." You have only three choices: you can re-examine and ultimately reject a premise, you can find a flaw in the reasoning connecting the premises to the conclusion, or you can go ahead and accept the conclusion after all. There's a lot more to rational belief-revision than that--a lot more--but that's the core.
Not obvious enough for you? OK, how about the following, which I think is an even more basic and obvious epistemic principle. If you know that something absolutely can't be true, you shouldn't believe it.
Epistemic principles really don't get a lot more intuitively compelling than that, do they? Well, cases like the Lottery Paradox and the Preface Paradox show that the two principles just laid out are in direct conflict with Bayesianism. See here for a more detailed explanation.
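The Lottery Paradox version of the conflict can be made concrete with a small numeric sketch. On a threshold ("Lockean") reading of Bayesian belief, you believe whatever crosses some high probability bar; the lottery size and threshold below are illustrative assumptions of mine, not anything the argument depends on:

```python
# Lottery Paradox sketch under a threshold view of belief:
# believe any claim whose probability exceeds THRESHOLD.
# N and THRESHOLD are illustrative values, not from the text.

N = 1_000_000      # tickets in a fair lottery; exactly one will win
THRESHOLD = 0.99   # the (assumed) belief threshold

# For each individual ticket, the probability that it loses:
p_ticket_loses = (N - 1) / N   # 0.999999

# Threshold belief licenses believing, of each ticket, "this one loses".
believes_each_ticket_loses = p_ticket_loses > THRESHOLD

# But the conjunction of all those beliefs -- "every ticket loses" --
# is guaranteed false, since exactly one ticket wins:
p_all_lose = 0.0
believes_all_tickets_lose = p_all_lose > THRESHOLD

print(believes_each_ticket_loses)  # each premise is believed
print(believes_all_tickets_lose)   # their deductive consequence is not
```

So deductive closure (believe what your beliefs jointly entail) and the principle that you shouldn't believe what you know can't be true cannot both be squared with threshold-style degrees of belief; the Preface Paradox makes the same structural point with a book's worth of individually well-supported claims and the modest admission that at least one of them is false.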
Another problem is that it seems deeply, crazily irrational to me to think that we can be absolutely certain that our initial best guesses about logic must be right. Whether one thinks (as I do) that logical laws are a matter of universal truth preservation, and thus that logical truth supervenes on all other kinds of truth (facts about protons and electrons, tables and chairs, dogs and cats) and is thus vulnerable to possible revision in light of new developments elsewhere in overall theory of the world, or one thinks (as many others do) that logical laws encode certain "rules of use" implicit in our "language" or some such thing, neither of those stories (nor any other remotely plausible view about logic) gives us any reason to think that we can be absolutely sure about it. What about linguistic or psychological "rules of use" makes you think that we can be absolutely infallible in our epistemic access to them, that we're incapable of making mistakes about them? Still more so, if our current beliefs about which logical laws there are encode our best theory--relative to the level of generality and abstraction at which formal systems operate--of how Absolutely Everything is, it seems beyond foolish to think that we can be utterly and infallibly certain about *that.*
When Frege and Russell, a bit over a century ago, codified the system we now think of as "classical logic", they were doing exactly what their non-classical opponents have done since then: attempting to capture a bunch of intuitions. Since then, "classical logic" has been challenged on the basis of a bunch of other intuitions--about referring to non-existents (free logics), about what it takes for one claim to really "follow" from another (relevance logics) and so on--and its challengers have tried to capture those intuitions in formal systems of their own. Some reasons to doubt very central assumptions built into not only classical logic but also the older syllogistic logic inherited from the ancient Greeks have been around since antiquity--the "sea battle" problem about future contingents, the Liar Paradox, problems about vagueness, etc.--and there still isn't any clear consensus about what to make of them.
Now, I find myself in the orthodox camp here--I think Frege and Russell's best guesses are still pretty much our best guesses--but the idea of thinking that we're rationally entitled to be absolutely certain, that there's no room for doubt, that various objections to the classical view don't deserve at least some serious epistemic weight and consideration, seems utterly indefensible to me.
Why do I stress this so much?
Well, once again, this obvious-seeming view is utterly incompatible with Bayesianism.