The following assumptions all seem extremely plausible:
(1) If it is highly probable that P is true, then we are justified in believing P.
(2) If we are justified in believing P, and Q follows from P (i.e. there is no way for P to be true without Q being true as well), we are also justified in believing Q (at least if we believe it on the basis of this inference).
(3) We are never justified in believing things that we know to be false.
So, in a familiar puzzle, there's a lottery with a thousand tickets. One of them is the winner, and the other 999 are the losers. Thus, the probability of any individual ticket losing is 99.9%. By (1), we're justified in believing of each individual ticket that that ticket will lose. (There's no use saying that .999 isn't highly probable enough, since we can construct a Lottery case with an arbitrarily high number of tickets.) By (2), we're justified in believing that *all* of the tickets will lose, because if Ticket 1 loses, Ticket 2 loses, Ticket 3 loses, and so on all the way to Ticket 1000, it follows from all of that that none of them wins.
...but now, of course, we've reasoned our way to a conclusion that conflicts with (3). We know perfectly well that one ticket *will* win. That bit of background information is how we assigned the probabilities of each ticket winning in the first place.
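The arithmetic driving the puzzle can be made concrete with a quick sketch (purely illustrative; the numbers and the independence comparison at the end are my own, not part of the original presentation):

```python
# Illustrative arithmetic for the Lottery Paradox: each ticket is very
# likely to lose, yet the conjunction "every ticket loses" is certain
# to be false, since exactly one ticket wins by stipulation.

n = 1000  # number of tickets; exactly one wins

# Probability that any single ticket loses.
p_single_loses = (n - 1) / n  # 0.999

# Probability that *all* tickets lose. The events are not independent:
# one ticket wins by stipulation, so this is exactly 0.
p_all_lose = 0.0

print(p_single_loses)  # 0.999
print(p_all_lose)      # 0.0

# Even if we (wrongly) treated the tickets as independent, the
# conjunction would not be highly probable: 0.999 ** 1000 is only
# about 0.37, so (2) is doing real work in the paradox.
approx_if_independent = 0.999 ** 1000
print(round(approx_if_independent, 2))  # 0.37
```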
In more usual presentations, (3) might be "we are never justified in believing contradictions," but I'm deliberately *not* putting it that way, because I think the issue runs deeper than that. The Lottery Paradox looks to me like just as much of a problem for the dialetheist, who believes that some (but not all) contradictions are true, as it is for the rest of us. To make it clearer that the dialetheist isn't at any advantage here, we can re-phrase (3) to:
(3*) We are never justified in believing things that we know to be (just) false.
After all, no dialetheist believes that it is both true and false that lotteries have winning tickets. I suppose it's just barely possible that some radical dialetheist might say that we're both sometimes justified and never justified in believing things that we know to be (just) false, but if there are other available options, it certainly sounds like a violation of Priest's rule about not multiplying contradictions beyond necessity, and in any case, the radical dialetheist who picked this option would be conceding something important, since they'd be giving up on the extremely useful and intuitive principle that:
(3**) It is (just) true that we are never justified in believing things that we know to be (just) false.
Put bluntly, a hypothetical dialetheist who denies (3**), claiming that there are true contradictions about whether we're rationally entitled to believe things we know to be (just) false, starts to sound like he's advocating the sort of dialetheism that Nester advocates in this comic, and we can start to suspect his dialetheism is similarly motivated.
So, in any case, the real issue seems to me to be the rationality of knowingly believing falsehoods, not just knowingly believing contradictions. Of course, given orthodox assumptions about the philosophy of logic, the latter is just a particularly severe case of the former, since contradictions are the only sorts of claims whose falsehood we can be sure of based on nothing more than their logical form.
Some theorists take the Lottery Paradox to be evidence against (2).
Similarly, some people take Moore's "hands argument" against skepticism to be a reductio proof against the universal reasonableness of (2). Moore proves that material objects exist by looking down at his hands and saying "yep, here's one material object and here's another one." One might think that Moore is justified in believing that his hands exist, but not that global skepticism is wrong or that the external world exists or anything of the sort. This line of thought has always seemed extremely unconvincing to me. If his hands exist, so does the external world. If you don't think he'd be justified in believing the latter, then it seems like the rational thing to do would be to apply Modus Tollens and conclude that he's not really justified in believing the former either.
Regardless of how one feels about the Moore-type cases, however, in the particular case of the Lottery Paradox, rejecting (2) does nothing to get us around the conflict between (1) and (3). This is another reason (in fact, a much more important reason than demonstrating that the dialetheist is in the same boat as the rest of us here) for expressing (3) in terms of *things we know to be false* in general, not *contradictions* in particular. Rejecting (2) does get us out of the inference to the explicit contradiction (P&~P), where P is "one of the tickets will win," but it doesn't get us out of believing something we know to be false. We're still in a position of believing *of each ticket* that it will lose. Given that we know that one of the tickets will win, we know that one of our beliefs about individual tickets must be false, and we're still in flagrant violation of (3).
Of course, one could reject (3), but out of the three obviously available options, rejecting (3) seems like the most bitter pill to swallow. If we read J(P) as something like "given the available evidence, we're entitled to think P is true," then we seem to be putting ourselves in a decidedly strange position if we say that J(P) could be true even if we already know perfectly well that P is false.
Given this, it looks to me like by far the most plausible option is to reject (1), and to take the Lottery Paradox to be a nice proof that, at least sometimes, something can be extremely probable, but it can still be the case that we aren't justified in believing it. (Moreover, I doubt that disambiguating different senses of probability will help here, because the 99.9% probability of each ticket losing sounds to me like an *epistemic* probability.) High probability may often, perhaps even usually or almost always, be sufficient for justified belief, but it isn't always sufficient for it. (Granted, there's obviously a large and worrying open question here about how to decide which cases are which.)
Of course, the conclusion that the most reasonable reaction to the Lottery Paradox is to reject (1) isn't original to me. Simon Evnine, for instance, argues for the same point in his extremely interesting book "Epistemic Dimensions of Personhood," although he presents the argument there in a substantially different way than I do here.
...and, of course, he also talks about the Preface Paradox, a related puzzle about (1)-(3) that is likely to be brought up in the same breath as the Lottery by anyone (like, e.g., Penelope Maddy in her otherwise excellent book "Second Philosophy") who takes the Lottery Paradox to demonstrate that, although no contradictions are true, we're sometimes justified in having inconsistent beliefs. In some ways, for the point that I'm building to, the Preface Paradox is even more interesting than the Lottery Paradox.
Before we get to it, it's worth briefly thinking about the consequences of rejecting (1) in the lottery case. After all, one might think that we're losing something important by reacting to it that way. Don't we want to be able to assert, e.g. in talking a dim-witted friend out of wasting his money on a lottery ticket, that we're overwhelmingly rationally justified in thinking that his ticket will lose? After all, as a professor of mathematics who I'm very fond of used to tell me, the lottery is in its essence a tax on people who are bad at math. It *is* irrational of your friend to buy a lottery ticket, and that fact might seem to be a consequence of the fact that we're rationally entitled to believe that it will lose.
This worry is groundless. If we reject (1), the obvious thing to say about the claim that your friend's ticket will win is not that we should reserve judgment about it, *but* that the probability is extremely low, and this last fact is sufficient to motivate the claim that it's irrational of your friend to throw his money away on a lottery ticket, and that he'd be better advised to spend it on something he has a better than .1% chance of getting something out of.
So, that preliminary out of the way, let's think about the Paradox of the Preface. The basic issue is the same as in the Lottery Paradox, since it seems to be nicely thought of as a puzzle about (1)-(3). You write a book where you carefully research every claim, carefully considering the evidence, alternate interpretations, objections, etc. It is, however, a very long book in which you make a great many claims, and experience has taught you that with so many claims, no matter how careful and rigorous your research, it is extremely probable that you made at least one subtle, undetected mistake somewhere along the line and that as such at least one of your carefully documented, well-thought-out claims will later turn out to be false. Are you doing something irrational if you say in the preface that at least one of the claims in your book is false?
After all, by (1), you are justified in believing that at least one of the claims in your book is false, and by (2) you are justified in believing that they are all true (since you are justified on the basis of the evidence in believing of each individual claim that it is true), but, once again, this leads to a contradiction that not even a dialetheist could love, and thus belief in it severely violates (3). Once again, rejecting (2) doesn't seem to help much, because even if you don't believe the conjunction of all of your claims, but just believe each of them individually, you still have a total set of beliefs that you know perfectly well can't *all* be true. Given the severe implausibility of rejecting (3), again, we seem to have another nice little proof of the falsity of (1). So far, so good.
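The probabilistic setup here can also be sketched quickly. The specific numbers, and the simplifying assumption that the claims are independent, are mine, not part of the original paradox:

```python
# Back-of-the-envelope arithmetic for the Preface Paradox: even if each
# individual claim in a long book is very well supported, it is still
# highly probable that at least one claim is false.
# (Independence across claims is an idealization for illustration.)

def prob_at_least_one_false(n_claims: int, p_each_true: float) -> float:
    """Probability that at least one of n independent claims is false."""
    return 1 - p_each_true ** n_claims

# A book making 2000 claims, each 99.9% likely to be true:
p = prob_at_least_one_false(2000, 0.999)
print(round(p, 2))  # 0.86 -- very probably, something in the book is wrong
```

So the author is simultaneously well justified in each claim and nearly certain, in aggregate, of having erred somewhere.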
But notice that we're in a slightly different epistemic situation than we were in with regard to the lottery case. With any individual lottery ticket, the rational thing is to *reserve judgment* about whether it will win, while advising against acting as if it were the winner, given the high probability that it won't be. With any individual carefully-researched claim in the book, despite the fact that it is highly probable that at least one of them will be false, the rational thing to do is to believe all of them, and (since denying (2) is counter-intuitive and accomplishes nothing) to believe the conjunction while we're at it, and to *disbelieve* the highly probable claim that one of them is false. Despite the high probability that one of them will be false, we shouldn't believe the negation of the conjunction of all of them.
Thinking hard about the Preface Paradox might shed light on a problem in the philosophy of science. Scientific realists believe that we should believe our best current scientific theories are true. (Of course, in practice many formulations of scientific realism are considerably weaker than this, but for our purposes here, it's useful to consider the strongest formulation and see how well we can defend *that.*) One of the best arguments *against* scientific realism comes from the Pessimistic Induction. In the past, many theories that seemed to be well-supported by the evidence have turned out to be false. Putting a little rhetorical flourish on this as Laudan does, we can say that the history of science is a "graveyard" of such theories. Reflecting on the history of scientific revolutions, and the high incidence of well-supported scientific theories turning out to be false in the past, how can we be sure that our best current theories won't meet the same fate? In fact, it seems highly probable that many of our best current theories will meet the same fate. As such, scientific anti-realists argue, we're not justified in believing them to be true.
Now, this is a quick and rough sketch that can't be expected to do justice to a complicated and subtle debate, but for my present purposes, it should be good enough. It's no doubt possible to advance the Pessimistic Induction without talking about probability at all, but familiar formulations of it tend to be expressed that way. Some of the best and the most sophisticated defenses of realism against the Pessimistic Induction are focused on denying the premise that there is a high probability that many of our best current theories will turn out to be false, like Peter Lewis' argument that the Pessimistic Induction commits the base rate fallacy. Other standard realist defenses turn on attempts to deny or blunt the edge of the historical narrative on which that probabilistic assessment is based. "Oh, it's not that our best theories in the past were shown to be *false,* it's that they were shown to be somewhat false, and throughout the history of science our theories have approximated the truth more and more closely, so we can be confident that by now we're approximating the truth *really* closely...."
At the moment, I don't want to comment on any of that one way or the other. I do think, however, that reflection on what the Lottery Paradox (and, even more so, the Preface Paradox) shows us about the relationship between probability and justification points the way to a very different defense of realism against the Pessimistic Induction. This solution in no way contradicts any of the other defenses just mentioned...someone could reasonably think that the more optimistic reading of the history of science is the right one, or that the probabilistic inference commits the base rate fallacy, or both, but that *even if* they were shown to be wrong about those points, the following defense is still sufficient to save scientific realism:
We can just grant that the anti-realist is completely right that, given the history of science and its "graveyard" of theories once well-supported on the basis of evidence and later shown to be false, there is a high probability, perhaps even an *extremely* high probability that many of our best current theories will turn out to be false.
But it doesn't matter.
The Lottery Paradox shows that sometimes P can have a high probability of being true, and we can still fail to be justified in believing it. The Preface Paradox shows that sometimes P can have a high probability of being false, and we can still be justified in actually believing it to be true.
In the case of our best current science, (1) fails, for precisely the same reason that it fails in the case of the Preface Paradox. We have excellent evidence that our best current theories are true, and on the basis of that, we are rationally justified in believing them, *even though* there is a high probability that many of them will end up in Laudan's "graveyard."
So...any thoughts? Have I lost my mind?
Am I just showing my ignorance of current work in the philosophy of science here? Maybe this is a thought that's been advanced many times before in the literature and decisively shown to be ridiculous. Or maybe no one has advanced it for the simple reason that any half-way intelligent person whose mind it momentarily crosses can immediately see deep flaws in the reasoning that I can't.
Let me know.