The following assumptions all seem extremely plausible:
(1) If it is highly probable that P is true, then we are justified in believing P.
(2) If we are justified in believing P, and Q follows from P (i.e. there is no way for P to be true without Q being true as well), we are also justified in believing Q (at least if we believe it on the basis of this inference).
(3) We are never justified in believing things that we know to be false.
So, in a familiar puzzle, there's a lottery with a thousand tickets. One of them is the winner, and the other 999 are the losers. Thus, the probability of any individual ticket losing is 99.9%. By (1), we're justified in believing of each individual ticket that that ticket will lose. (There's no use saying that .999 isn't highly probable enough, since we can construct a Lottery case with an arbitrarily high number of tickets.) By (2), we're justified in believing that *all* of the tickets will lose, because if Ticket 1 loses, Ticket 2 loses, Ticket 3 loses, and so on all the way to Ticket 1000, it follows from all of that that none of them win.
...but now, of course, we've reasoned our way to a conclusion that conflicts with (3). We know perfectly well that one ticket *will* win. That bit of background information is how we assigned the probabilities of each ticket winning in the first place.
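For concreteness, the arithmetic of the setup can be sketched in a few lines (Python here purely as a scratchpad; the threshold value is an arbitrary stand-in for "highly probable"):

```python
n_tickets = 1000

# Probability that any individual ticket loses: 999 of the 1000
# equally likely outcomes have that ticket losing.
p_individual_loss = (n_tickets - 1) / n_tickets  # 0.999

# By (1), each "ticket i will lose" claim clears any plausible
# threshold for "highly probable" -- and raising the threshold just
# means imagining a bigger lottery.
threshold = 0.99
per_ticket_claims_justified = p_individual_loss > threshold

# But the probability of the conjunction "every ticket loses" is
# fixed by the setup at 0: exactly one ticket wins.
p_all_lose = 0.0

print(per_ticket_claims_justified, p_all_lose)
```

The point of the sketch is just that the per-ticket probabilities and the probability of the conjunction come apart completely, no matter how high the threshold is set.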
In more usual presentations, (3) might be "we are never justified in believing contradictions," but I'm deliberately *not* putting it that way, because I think the issue runs deeper than that. The Lottery Paradox looks to me like just as much of a problem for the dialetheist, who believes that some (but not all) contradictions are true, as it is for the rest of us. To make it clearer that the dialetheist isn't at any advantage here, we can re-phrase (3) to:
(3*) We are never justified in believing things that we know to be (just) false.
After all, no dialetheist believes that it is both true and false that lotteries have winning tickets. I suppose it's just barely possible that some radical dialetheist might say that we're both sometimes justified and never justified in believing things that we know to be (just) false, but if there are other available options, it certainly sounds like a violation of Priest's rule about not multiplying contradictions beyond necessity, and in any case, the radical dialetheist who picked this option would be conceding something important, since they'd be giving up on the extremely useful and intuitive principle that:
(3**) It is (just) true that we are never justified in believing things that we know to be (just) false.
Put bluntly, a hypothetical dialetheist who denies (3**), claiming that there are true contradictions about whether we're rationally entitled to believe things we know to be (just) false, starts to sound like he's advocating the sort of dialetheism that Nester advocates in this comic, and we can start to suspect his dialetheism is similarly motivated.
So, in any case, the real issue seems to me to be the rationality of knowingly believing falsehoods, not just knowingly believing contradictions. Of course, given orthodox assumptions about the philosophy of logic, the latter is just a particularly severe case of the former, since contradictions are the only sorts of claims whose falsehood we can be sure of based on nothing more than their logical form.
Some theorists take the Lottery Paradox to be evidence against (2).
Similarly, some people take Moore's "hands argument" against skepticism to be a reductio proof against the universal reasonableness of (2). Moore proves that material objects exist by looking down at his hands and saying "yep, here's one material object and here's another one." One might think that Moore is justified in believing that his hands exist, but not that global skepticism is wrong or that the external world exists or anything of the sort. This line of thought has always seemed extremely unconvincing to me. If his hands exist, so does the external world. If you don't think he'd be justified in believing the latter, then it seems like the rational thing to do would be to apply Modus Tollens and conclude that he's not really justified in believing the former either.
Regardless of how one feels about the Moore-type cases, however, in the particular case of the Lottery Paradox, rejecting (2) does nothing to get us around the conflict between (1) and (3). This is another reason (in fact, a much more important reason than demonstrating that the dialetheist is in the same boat as the rest of us here) for expressing (3) in terms of *things we know to be false* in general, not *contradictions* in particular. Rejecting (2) does get us out of the inference to the explicit contradiction (P&~P), where P is "one of the tickets will win," but it doesn't get us out of believing something we know to be false. We're still in a position of believing *of each ticket* that it will lose. Given that we know that one of the tickets will win, we know that one of our beliefs about individual tickets must be false, and we're still in flagrant violation of (3).
Of course, one could reject (3), but out of the three obviously available options, rejecting (3) seems like the most bitter pill to swallow. If we read J(P) as something like "given the available evidence, we're entitled to think P is true," then we put ourselves in a decidedly strange position if we say that J(P) could be true even when we already know perfectly well that P is false.
Given this, it looks to me like by far the most plausible option is to reject (1), and to take the Lottery Paradox to be a nice proof that, at least sometimes, something can be extremely probable and we can still fail to be justified in believing it. (Moreover, I doubt that disambiguating different senses of probability will help here, because the 99.9% probability of each ticket losing sounds to me like an *epistemic* probability.) High probability may often, perhaps even usually or almost always, be sufficient for justified belief, but it isn't always sufficient for it. (Granted, there's obviously a large and worrying open question here about how to decide which cases are which.)
Of course, the conclusion that the most reasonable reaction to the Lottery Paradox is to reject (1) isn't original to me. Simon Evnine, for instance, argues for the same point in his extremely interesting book "Epistemic Dimensions of Personhood," although he presents the argument there in a substantially different way than I do here.
...and, of course, he also talks about the Preface Paradox, a related puzzle about (1)-(3) that is likely to be brought up in the same breath as the Lottery by anyone (like, e.g., Penelope Maddy in her otherwise excellent book "Second Philosophy") who takes the Lottery Paradox to demonstrate that, although no contradictions are true, we're sometimes justified in having inconsistent beliefs. In some ways, for the point that I'm building to, the Preface Paradox is even more interesting than the Lottery Paradox.
Before we get to it, it's worth briefly thinking about the consequences of rejecting (1) in the lottery case. After all, one might think that we're losing something important by reacting to it that way. Don't we want to be able to assert, e.g. in talking a dim-witted friend out of wasting his money on a lottery ticket, that we're overwhelmingly rationally justified in thinking that his ticket will lose? After all, as a professor of mathematics of whom I'm very fond used to tell me, the lottery is in its essence a tax on people who are bad at math. It *is* irrational of your friend to buy a lottery ticket, and that fact might seem to be a consequence of the fact that we're rationally entitled to believe that his ticket will lose.
This worry is groundless. If we reject (1), the obvious thing to say about the claim that your friend's ticket will win is not that we should reserve judgment about it, but rather that the probability is extremely low, and this last fact is sufficient to motivate the claim that it's irrational of your friend to throw his money away on a lottery ticket, and that he'd be better advised to spend it on something he has a better than .1% chance of getting something out of.
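To see that the low probability alone does the needed work here, a back-of-the-envelope expected-value calculation suffices. (The ticket price and prize below are made-up figures for illustration, not anything from the post.)

```python
# Hypothetical figures for the 1000-ticket lottery.
ticket_price = 1.00   # dollars
prize = 500.00        # dollars
p_win = 1 / 1000

# Expected value of buying one ticket: chance of winning times the
# prize, minus the cost of the ticket.
expected_value = p_win * prize - ticket_price  # about 0.50 - 1.00

# A negative expected value is enough to call the purchase a bad bet;
# no claim about being *justified in believing* "this ticket will
# lose" appears anywhere in the calculation.
print(expected_value)
```

This is the sense in which the "tax on people who are bad at math" advice survives the rejection of (1): the irrationality of buying the ticket falls out of the probabilities and payoffs directly.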
So, that preliminary out of the way, let's think about the Paradox of the Preface. The basic issue is the same as the Lottery Paradox, since it seems to be nicely thought of as a puzzle about (1)-(3). You write a book where you carefully research every claim, carefully considering the evidence, alternate interpretations, objections, etc. It is, however, a very long book in which you make a great many claims, and experience has taught you that with so many claims, no matter how careful and rigorous your research, it is extremely probable that you made at least one subtle, undetected mistake somewhere along the line and that as such at least one of your carefully documented, well-thought-out claims will later turn out to be false. Are you doing something irrational if you say in the preface that at least one of the claims in your book is false?
After all, by (1), you are justified in believing that at least one of the claims in your book is false, and by (2) you are justified in believing that they are all true (since you are justified on the basis of the evidence in believing of each individual claim that it is true), but, once again, this leads to a contradiction that not even a dialetheist could love, and believing it thus severely violates (3). Once again, rejecting (2) doesn't seem to help much, because even if you don't believe the conjunction of all of your claims, but just believe each of them individually, you still have a total set of beliefs that you know perfectly well can't *all* be true. Given the severe implausibility of rejecting (3), again, we seem to have another nice little proof of the falsity of (1). So far, so good.
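The "extremely probable that you made at least one mistake" claim is just the complement of a product, which is easy to make concrete. (The number of claims and the per-claim reliability below are invented for illustration, and treating errors as independent is an idealization.)

```python
n_claims = 1000       # hypothetical length of the book, in claims
p_claim_true = 0.999  # hypothetical per-claim reliability

# Idealizing errors as independent, the probability that every single
# claim is true shrinks geometrically with the length of the book...
p_all_true = p_claim_true ** n_claims  # roughly 0.37

# ...so "at least one claim is false" is already more likely than
# not, and it approaches certainty as the book grows.
p_at_least_one_false = 1 - p_all_true
print(round(p_at_least_one_false, 2))
```

So even with each individual claim extremely well supported, a modest book length makes the preface sentence highly probable, which is exactly the structure the paradox exploits.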
But notice that we're in a slightly different epistemic situation than we were in with regard to the lottery case. With any individual lottery ticket, the rational thing is to *reserve judgment* about whether it will win, while advising against acting as if it were the winner, given the high probability that it won't be. With any individual carefully-researched claim in the book, despite the fact that it is highly probable that at least one of them will be false, the rational thing to do is to believe all of them, and (since denying (2) is counter-intuitive and accomplishes nothing) to believe the conjunction while we're at it, and to *disbelieve* the highly probable claim that one of them is false. Despite the high probability that one of them will be false, we shouldn't believe the negation of the conjunction of all of them.
Thinking hard about the Preface Paradox might shed light on a problem in the philosophy of science. Scientific realists believe that we should believe our best current scientific theories are true. (Of course, in practice many formulations of scientific realism are considerably weaker than this, but for our purposes here, it's useful to consider the strongest formulation and see how well we can defend *that.*) One of the best arguments *against* scientific realism comes from the Pessimistic Induction. In the past, many theories that seemed to be well-supported by the evidence have turned out to be false. Putting a little rhetorical flourish on this as Laudan does, we can say that the history of science is a "graveyard" of such theories. Reflecting on the history of scientific revolutions, and the high incidence of well-supported scientific theories turning out to be false in the past, how can we be sure that our best current theories won't meet the same fate? In fact, it seems highly probable that many of our best current theories will meet the same fate. As such, scientific anti-realists argue, we're not justified in believing them to be true.
Now, this is a quick and rough sketch that can't be expected to do justice to a complicated and subtle debate, but for my present purposes, it should be good enough. It's no doubt possible to advance the Pessimistic Induction without talking about probability at all, but familiar formulations of it tend to be expressed that way. Some of the best and the most sophisticated defenses of realism against the Pessimistic Induction are focused on denying the premise that there is a high probability that many of our best current theories will turn out to be false, like Peter Lewis' argument that the Pessimistic Induction commits the base rate fallacy. Other standard realist defenses turn on attempts to deny or blunt the edge of the historical narrative on which that probabilistic assessment is based. "Oh, it's not that our best theories in the past were shown to be *false,* it's that they were shown to be somewhat false, and throughout the history of science our theories have approximated the truth more and more closely, so we can be confident that by now we're approximating the truth *really* closely...."
At the moment, I don't want to comment on any of that one way or the other. I do think, however, that reflection on what the Lottery Paradox (and, even more so, the Preface Paradox) shows us about the relationship between probability and justification points the way to a very different defense of realism against the Pessimistic Induction. This solution in no way contradicts any of the other defenses just mentioned...someone could reasonably think that the more optimistic reading of the history of science is the right one, or that the probabilistic inference commits the base rate fallacy, or both, but that *even if* they were shown to be wrong about them, the following defense is still sufficient to save scientific realism:
We can just grant that the anti-realist is completely right that, given the history of science and its "graveyard" of theories once well-supported on the basis of evidence and later shown to be false, there is a high probability, perhaps even an *extremely* high probability that many of our best current theories will turn out to be false.
But it doesn't matter.
The Lottery Paradox shows that sometimes P can have a high probability of being true, and we can still fail to be justified in believing it. The Preface Paradox shows that sometimes P can have a high probability of being false, and we can still be justified in actually believing it to be true.
In the case of our best current science, (1) fails, for precisely the same reason that it fails in the case of the Preface Paradox. We have excellent evidence that our best current theories are true, and on the basis of that, we are rationally justified in believing them, *even though* there is a high probability that many of them will end up in Laudan's "graveyard."
So...any thoughts? Have I lost my mind?
Am I just showing my ignorance of current work in the philosophy of science here? Maybe this is a thought that's been advanced many times before in the literature and decisively shown to be ridiculous. Or maybe no one has advanced it for the simple reason that any half-way intelligent person whose mind it momentarily crosses can immediately see deep flaws in the reasoning that I can't.
Let me know.
The Lottery Paradox shows that sometimes P can have a high probability of being false, and we can still fail to be justified in believing it. The Preface Paradox shows that sometimes P can have a high probability of being false, and we can still be justified in actually believing it to be true.
If you accept both:
(1) Sometimes P can have a high probability of being false, and we can fail to be justified in believing it, and
(2) Sometimes P can have a high probability of being false, and we can be justified in believing it.
Then it looks like you have undermined anything but an accidental connection between one's justification in P and the Pr(P). If you accept this, then you will have to tell another story about justification. But I don't know that many would be so quick to give up the connection between J(P) and Pr(P).
"Sometimes P can have a high probability of being false, and we can fail to be justified in believing it."
That should be "Sometimes P can have a high probability of being *true,* and we can fail to be justified in believing it."
After all, assuming that the "it" is P, the first version isn't very controversial.
Also, more substantively, I'm not sure it's quite true that given (1) and (2), there can't be more than an accidental connection between there being a high probability of P being true and us being justified in believing it. I'm not sure I have anything too substantive to say about it, and I think there's an open and tricky question about just what the relationship is between high probability and justification once we reject the claim that the former is always sufficient for the latter.
But, for example, one could claim that *all else being equal*, P having a high probability of being true makes it more reasonable than not to believe in P. In cases like the Lottery, however, all else is not equal because we can be sure that one of the improbable ticket-winning claims is in fact true. In the case of the Preface, all else is not equal because the author has carefully and rigorously researched each of these claims and is thus justified in believing them.
I'm not saying that the "all else being equal" suggestion is particularly well-developed, and I'm not sure how promising it is, but I (tentatively) think that it demonstrates that there are more options than high probability always being sufficient for the justification of a claim (or even always being sufficient to rule out the justification of the negation of that claim), and the connection being a purely accidental one.
I was not as clear as I could have been originally:
The lottery shows that sometimes Pr(P) can be very high, and we can fail to be justified in believing it (Pr(the ticket will win)). If we accept the result of the lottery, it undermines that a high Pr(P) is sufficient for J(P).
You reject that we should believe that the negation of the conjunction of the propositions that constitute the book in the preface is true: "Despite the high probability that one of them will be false, we shouldn't believe the negation of the conjunction of all of them." So it would seem that a low probability in the conjunction of the propositions in the book is now sufficient for J(P). Now a high Pr(P) is no longer necessary for J(P).
If at times a high Pr(P) is not sufficient for J(P) [by the lottery], and at others a high Pr(P) is not necessary for J(P) [by the preface], it looks, again, like there is nothing more than an accidental connection between Pr(P) and J(P). Richard Foley makes this point in Working Without A Net.
One prevalent solution is to reject conjunctive closure, so that you could withhold judgment on the conjunction of propositions in the book, and then Pr(P) is necessary but not sufficient for J(P), but this does not resolve the inconsistent set of beliefs problem with the lottery. Additionally, I am not sure how this result would square with your phil. science argument.
If all you mean by "accidental" is "neither necessary nor sufficient," then you're right that if I'm right about the Lottery and the Preface, the connection is (in that sense) just accidental. That said, it seems to me that high probability could contribute something important to justification, such that in many, maybe even most circumstances, highly probable claims are justified, even if there are circumstances where you can have one without the other.
It could be that in most circumstances highly probable claims are justified. I am not disputing this. The problem is that if a high Pr(P) has no logical connection to J(P), then you have to tell a different story about justification, as having a high Pr(P) isn't what it's about.
Here is an argument by analogy:
Suppose when I am playing poker I tend to get really good hands when my collar is popped. Sometimes I get bad hands when my collar is popped, and sometimes I get good hands when my collar is not popped. But I still think that popping my collar has some important work to do in regards to my getting good hands at cards.
In this case popping of my collar is neither necessary nor sufficient for getting good hands at cards, and I need to tell a different story about why I get good hands (the table is hot, etc.).
Probability isn't logic, either. I tend to agree with the earlier writers who denied the Lottery Paradox was a paradox. It's more like a decision matrix regarding strategy (as are most games of chance). And as stated, not really accurate.
Say each ticket was a dollar, and you had a thousand to spend (and also say the prize was $10,000 or something). You buy tickets. Each losing ticket you buy increases the odds of winning with the next ticket: and they use this tactic in the 'hood, obviously, when people pool together a few thousand and buy a hundred lottos, etc. Obviously you would eventually win, so the problem then is really whether it would be cost-effective to buy: for a rich man, yes: sort of why the winning dudes in Vegas all happen to be like Jerry Buss, starting with deep pockets. In poker the guy who buys in the most usually wins (because of leverage, raises, and so forth).
But make it one in 10 million (about lotto odds), and obviously not very cost effective, even with cheapo tickets. 1 in a 1000 are sweet odds. So it's more like a gambling strategy issue, not really a paradox.
I have absolutely no idea *what* you think isn't accurate.
...and, sure, the lottery-related problem you seem to be talking about (whether it's rational to play) doesn't involve any sort of interesting paradox, but that has absolutely nothing to do with the problem about justification the rest of us are talking about.
High probability may often, perhaps even usually or almost always, be sufficient for justified belief, but it isn't always sufficient for it.
That's one of your problems: you're taking this to be about justifying belief, when the LotPar. concerns something like rational expectation. With no gain/loss scenario, no winnings specified, it's, like, moot, not to say dull (and again not really a contradiction, even assuming it's irrational to bet on a 99% chance of losing with each draw. If you're Larry Ellison, it's not really irrational--more like a lark. If it's some street freak in LA living on food stamps, it would be stupid. And can one person buy all the tickets? Depending on the price of a ticket it might be rational to buy, say, 500 tickets if the winnings were sufficient. The odds change depending on how many tickets you buy--a point the filosophastical types overlook).
Well, for whatever it's worth, the knowledge gods at Wikipedia claim that it's always been about (among other things) the relationship between the rationality of holding beliefs and the probability of beliefs being true....
...or at least they agree with me for now, this being Wikipedia after all, but in any case, the Lottery Paradox as it comes up in current epistemology is precisely about the relationships between probability, justification and inconsistency. So, fine, for the sake of argument, let's say that you're right that this Lottery Paradox of contemporary epistemology (let's call it LP-B) is a puzzle about an entirely separate and unrelated set of issues from anything like what Kyburg was concerned with when formulating LP-A.
Very good. I don't particularly care. Granting that would shed no particular light on what the right solution is to LP-B, and it certainly doesn't demonstrate that LP-B isn't interesting, or that it's a pseudo-problem, or whatever.
Note last paragraph from Guru Wiki, Doc:
"Finally, philosophers of science, decision scientists, and statisticians are inclined to see the lottery paradox as an early example of the complications one faces in constructing principled methods for aggregating uncertain information, which is now a thriving discipline of its own, with a dedicated journal, Information Fusion, in addition to continuous contributions to general area journals."
Aggregation, like in economics. OR gambling strategy.
For that matter it may be rational to believe that each ticket will lose, near certainty, but that isn't necessary: merely very high probability. But that one ticket winning is certain. So there is if you will a difference in the type of belief: one being based on very high probability that one will lose at 99% odds (that's what one might call the gambler's expectation), and the other based on a mere fact (a lottery has one winner). I still don't see the contradiction but will try again
The paradox comes only if you accept certain logical closure principles regarding belief formation and aggregation. For example, if you think that belief is closed under conjunction--that is to say, if I believe 'A' and I believe 'B', then I believe 'A & B'.
Given this assumption, if you are justified in believing that ticket A will lose, ticket B will lose, ..., as the credence in each belief will be 0.999, then you can iterate through the entire set of tickets. Given this, you have a belief in the conjunction 'A will lose & B will lose & ... & N will lose', but it is a fact that one ticket will win. Thus the justified belief that some ticket will win contradicts the justified belief in the conjunction previously mentioned.
I hope this clears things up.
Additionally, J., I think the source of your confusion might be the conflation of a first-order probabilistic relation, that between the credence one assigns a proposition which goes into the formation of a belief and the attribution of 'justified' or 'rational' to said belief, and the second-order logical relation of closure which holds over one's rational/justified beliefs (or even beliefs simpliciter).
The contradiction comes at the second order, not the first. Whatever one's betting strategy regarding whether one would buy a lottery ticket has nothing to do with whether a closure relation holds over their beliefs generally.
Right, although you certainly don't need to go so far as conjunctive closure for belief--if you believe 'A' and you believe 'B,' you believe 'A & B'--since all we need to get to the contradiction is conjunctive closure for justification, such that if you're justified in believing 'A,' and you're justified in believing 'B,' you're justified in believing 'A & B.' Even while granting this, we could still grant it to be quite psychologically possible for people to (irrationally) believe 'A' and believe 'B' without believing 'A & B.' In fact, all we need is the even weaker version whereby we just say that if you are justified in believing 'A' and you are justified in believing 'B,' then if you come to believe 'A & B' on the basis of the logical inference, you are justified in believing it.
...and, of course, even that very weak principle is only necessary to get to the explicit contradiction (P & ~P), but even if we deny that principle, we're still left with a jointly inconsistent collection of beliefs.
Well, we might call beliefs contradictory in a colloquial sense, but it's obviously not the same as saying (A and not A), or A is a square and A is not a square. Beliefs are not discrete. Numbers are, geometric figures are, and truth and falsity are (presumably). But the mere thought isn't. Moreover the aggregation issue (really sort of a sum) implies that one says (1000 will lose) = (999 will lose)--1000 minus the winning ticket--so not equivalent to start with.
I hope that clears things up.
The paradox comes only if you accept certain logical closure PowerBall principles regarding belief formation and aggregation.
That would be Option (2) in the post, and it avoids the derivation of the outright contradiction, but not the inconsistency among your beliefs. Seems like a pretty bad option.
"Don't we want to be able to assert, e.g. in talking a dim-witted friend out of wasting his money on a lottery ticket, that we're overwhelmingly rationally justified in thinking that their ticket will win"
don't you mean "won't win"?
I do indeed! Corrected.