Wednesday, October 28, 2009

Standards For Claims Of Retroactive Implicit Paraconsistency

In his article "Who's Afraid of Inconsistent Mathematics?", Mark Colyvan starts things off with a snarky "five line proof" of Fermat's Last Theorem.

1. The Russell Set is both a member of itself and not a member of itself.
2. (From 1, and Conjunction-Elimination): The Russell Set is a member of itself.
3. (From 2, and Disjunction-Addition): Either the Russell Set is a member of itself or Fermat's Last Theorem is true.
4. (From 1, and Conjunction-Elimination): The Russell Set is not a member of itself.
5. (From 3, 4 and Disjunctive Syllogism): Fermat's Last Theorem is true.

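For readers who want to see the trick spelled out formally, here is one way the five steps might be rendered in Lean 4 (a sketch of my own; `R` and `FLT` are just placeholder propositions standing in for "the Russell Set is a member of itself" and Fermat's Last Theorem):

```lean
-- A classical rendering of the five-line "proof": from a contradiction
-- about the Russell Set (R), any proposition whatsoever (FLT) follows.
theorem five_line_proof (R FLT : Prop) (h : R ∧ ¬R) : FLT :=
  have h2 : R := h.left                 -- 2. conjunction elimination
  have h3 : R ∨ FLT := Or.inl h2        -- 3. disjunction introduction
  have h4 : ¬R := h.right               -- 4. conjunction elimination
  h3.elim (fun hr => absurd hr h4) id   -- 5. disjunctive syllogism
```

Nothing about `FLT` is used anywhere in the derivation, which is exactly the point: classically, a contradiction entails everything.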
Now, he argues, the first premise is easily demonstrable given naive set theory. Why is it, then, that no mathematician ever tried to prove FLT in this way, and that FLT was considered unproved until Andrew Wiles came along with a proof running over a hundred pages and employing all kinds of sophisticated, recently developed mathematical machinery?

Part of the explanation is that the first premise relies on naive set theory, and mathematical orthodoxy has, in no small part *because* of Russell's Paradox, abandoned the naive conception of sets in favor of the hierarchical conception of ZFC set theory. Fair enough, but what about the three decades between the discovery of Russell's Paradox and the consistent re-formulation of set theory in terms of ZFC's cumulative hierarchy? Why didn't anyone try to prove FLT this way during the lag period?

The lesson Colyvan draws is more or less standard for historical examples used in apologias for paraconsistency:

Mathematicians during the lag period implicitly treated the contradiction exposed by Russell's Paradox as entailing some conclusions but not others. In other words, the standards of reasoning in play in the mathematical community of the time were best captured by paraconsistent logic rather than classical logic. Moreover, so goes the story, they were not being irrational in implicitly employing these standards of reasoning. Therefore, paraconsistent logic represents the appropriate logic for at least some domains of inquiry, at least sometimes.

I'm picking on Colyvan's paper because it's an extraordinarily clear, clean, chemically pure example of its type, but this sort of thing is a very common maneuver in the literature on paraconsistency. We can call these Retroactive Implicit Paraconsistency (RIP) claims. To make an RIP claim is to say that, in the past, some investigator or community of investigators reasoned about some topic in a way best codified by paraconsistent logic. (Not, of course, that they actually reasoned using formally explicit deductive arguments of any kind, or that they were aware that their practices were, in some sense, in conflict with the norms of classical reasoning--generally speaking, RIP claims are made about people who predate the explicit formulation of paraconsistent logics--but that the implicit standards of reasoning their practices seemed to conform to were ones that fit better with a paraconsistent consequence relation than with one on which anything can be derived from any contradiction.) In most cases, a normative element is at least implicit--do we really want to say that these people *should* have concluded everything from the contradictions inherent in their theories?--but for the sake of simplicity, let's put that aside and just deal with RIP claims on a purely descriptive level.

I should have more to say about this soon, but for now, I just want to note that the standards we use to evaluate RIP claims should be a bit more rigorous than those typically appealed to when such claims are presented.

Theorists often take it to be sufficient for an RIP claim about Person X that:

(a) Person X accepted some overall package of theories whose elements were inconsistent with one another, or from which some sort of contradiction could be derived, or even just used some theoretical tools that they knew *could* be used to generate contradictions, but:
(b) They didn't use the relevant contradiction to prove random arbitrary conclusions.

I think this is grossly insufficient. As a first stab at something a bit more substantive, I'd argue that we also need:

(c) That they were aware of the entailment of inconsistency (note that this is a standard that the Colyvan example meets, but which other historical examples used in this literature may not) and, crucially, that
(d) That they explicitly believed the contradiction, and also crucially, that
(e) They used both halves of the contradiction (conjunctively, or even one half at a time) as a premise in at least some of their reasoning about the subject.

Also, on an even more basic level, and as an absolutely minimal standard, I'd suggest:

(f) That, in general, or at *least* when dealing with the relevant subject matter (e.g. in other set-theoretic reasoning during the lag period Colyvan talks about), they did *not* implicitly reason according to classical rules of derivation that are paraconsistently invalid, like Disjunctive Syllogism and Reductio Ad Absurdum.
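To make concrete why Disjunctive Syllogism is paraconsistently invalid, here is a quick brute-force check in one standard paraconsistent system, Priest's three-valued Logic of Paradox (LP). This is a sketch of my own; the encoding of the truth values and all the variable names are illustrative choices, not anything from Colyvan's paper:

```python
# LP has three values: T (true), B (both true and false), F (false).
# T and B are "designated," i.e., they count as at-least-true, and a
# valid inference must preserve designation from premises to conclusion.
T, B, F = "T", "B", "F"
VALUES = [T, B, F]
DESIGNATED = {T, B}

def neg(v):
    # Negation flips T and F and leaves the glutty value B fixed.
    return {T: F, B: B, F: T}[v]

def disj(a, b):
    # Disjunction takes the "truer" value under the order F < B < T.
    order = {F: 0, B: 1, T: 2}
    return a if order[a] >= order[b] else b

# Disjunctive Syllogism: from (P or Q) and (not P), infer Q.
# Search for valuations where both premises are designated but the
# conclusion is not.
counterexamples = [
    (p, q)
    for p in VALUES
    for q in VALUES
    if disj(p, q) in DESIGNATED
    and neg(p) in DESIGNATED
    and q not in DESIGNATED
]
print(counterexamples)  # [('B', 'F')]
```

The single counterexample is exactly the Russell's Paradox situation: P is a dialetheia (value B), so both "P or Q" and "not P" come out at-least-true, while Q is plain false. That is why step 5 of Colyvan's five-line proof is the step a paraconsistent logician rejects.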